| problem_id (string, len 18-22) | source (string, 1 class) | task_type (string, 1 class) | in_source_id (string, len 13-58) | prompt (string, len 1.1k-10.2k) | golden_diff (string, len 151-4.94k) | verification_info (string, len 582-21k) | num_tokens (int64, 271-2.05k) | num_tokens_diff (int64, 47-1.02k) |
---|---|---|---|---|---|---|---|---|
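Rather than scanning the raw rows below, the dump can be loaded programmatically. A minimal sketch follows; the dataset id `rasdani/github-patches` comes from the `source` column, while the default config and the `train` split name are assumptions about the published dataset.

```python
# Minimal sketch: load the rows of this dump as a dataset and inspect one.
# Split name and config are assumptions; adjust to match the hub page.
from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")
row = ds[0]
print(row["problem_id"], row["in_source_id"], row["num_tokens"])
print(row["golden_diff"][:200])  # the reference patch, git-diff formatted
```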
gh_patches_debug_171 | rasdani/github-patches | git_diff | HypothesisWorks__hypothesis-563 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
External pull requests currently fail the deploy task
The build on #536 is currently failing because the decryption is trying to run and it doesn't have access to the decryption environment variables because it comes from @Zac-HD's fork rather than the main repo.
The solution is just to have that task skip for external pull requests I think.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/deploy.py`
Content:
```
1 #!/usr/bin/env python
2
3 # coding=utf-8
4 #
5 # This file is part of Hypothesis, which may be found at
6 # https://github.com/HypothesisWorks/hypothesis-python
7 #
8 # Most of this work is copyright (C) 2013-2017 David R. MacIver
9 # ([email protected]), but it contains contributions by others. See
10 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
11 # consult the git log if you need to determine who owns an individual
12 # contribution.
13 #
14 # This Source Code Form is subject to the terms of the Mozilla Public License,
15 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
16 # obtain one at http://mozilla.org/MPL/2.0/.
17 #
18 # END HEADER
19
20 from __future__ import division, print_function, absolute_import
21
22 import os
23 import sys
24 import random
25 import shutil
26 import subprocess
27 from time import time, sleep
28
29 import hypothesistooling as tools
30
31 sys.path.append(os.path.dirname(__file__)) # noqa
32
33
34 DIST = os.path.join(tools.ROOT, 'dist')
35
36
37 PENDING_STATUS = ('started', 'created')
38
39
40 if __name__ == '__main__':
41
42 print('Decrypting secrets')
43
44 # We'd normally avoid the use of shell=True, but this is more or less
45 # intended as an opaque string that was given to us by Travis that happens
46 # to be a shell command that we run, and there are a number of good reasons
47 # this particular instance is harmless and would be high effort to
48 # convert (principally: Lack of programmatic generation of the string and
49 # extensive use of environment variables in it), so we're making an
50 # exception here.
51 subprocess.check_call(
52 'openssl aes-256-cbc -K $encrypted_39cb4cc39a80_key '
53 '-iv $encrypted_39cb4cc39a80_iv -in secrets.tar.enc '
54 '-out secrets.tar -d',
55 shell=True
56 )
57
58 subprocess.check_call([
59 'tar', '-xvf', 'secrets.tar',
60 ])
61
62 last_release = tools.latest_version()
63
64 print('Current version: %s. Latest released version: %s' % (
65 tools.__version__, last_release
66 ))
67
68 print('Building an sdist...')
69
70 if os.path.exists(DIST):
71 shutil.rmtree(DIST)
72
73 subprocess.check_output([
74 sys.executable, 'setup.py', 'sdist', '--dist-dir', DIST,
75 ])
76
77 if not tools.on_master():
78 print('Not deploying due to not being on master')
79 sys.exit(0)
80
81 if not tools.has_source_changes(last_release):
82 print('Not deploying due to no source changes')
83 sys.exit(0)
84
85 start_time = time()
86
87 prev_pending = None
88
89 # We time out after an hour, which is a stupidly long time and it should
90 # never actually take that long: A full Travis run only takes about 20-30
91 # minutes! This is really just here as a guard in case something goes
92 # wrong and we're not paying attention so as to not be too mean to Travis..
93 while time() <= start_time + 60 * 60:
94 jobs = tools.build_jobs()
95
96 failed_jobs = [
97 (k, v)
98 for k, vs in jobs.items()
99 if k not in PENDING_STATUS + ('passed',)
100 for v in vs
101 ]
102
103 if failed_jobs:
104 print('Failing this due to failure of jobs %s' % (
105 ', '.join('%s(%s)' % (s, j) for j, s in failed_jobs),
106 ))
107 sys.exit(1)
108 else:
109 pending = [j for s in PENDING_STATUS for j in jobs.get(s, ())]
110 try:
111 # This allows us to test the deploy job for a build locally.
112 pending.remove('deploy')
113 except ValueError:
114 pass
115 if pending:
116 still_pending = set(pending)
117 if prev_pending is None:
118 print('Waiting for the following jobs to complete:')
119 for p in sorted(still_pending):
120 print(' * %s' % (p,))
121 print()
122 else:
123 completed = prev_pending - still_pending
124 if completed:
125 print('%s completed since last check.' % (
126 ', '.join(sorted(completed)),))
127 prev_pending = still_pending
128 naptime = 10.0 * (2 + random.random())
129 print('Waiting %.2fs for %d more job%s to complete' % (
130 naptime, len(pending), 's' if len(pending) > 1 else '',))
131 sleep(naptime)
132 else:
133 break
134 else:
135 print("We've been waiting for an hour. That seems bad. Failing now.")
136 sys.exit(1)
137
138 print('Looks good to release!')
139 print('Now uploading to pypi.')
140
141 subprocess.check_output([
142 sys.executable, '-m', 'twine', 'upload',
143 '--config-file', './.pypirc',
144 os.path.join(DIST, '*'),
145 ])
146
147 print('Release seems good. Pushing the tag now.')
148
149 tools.create_tag()
150 sys.exit(0)
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/deploy.py b/scripts/deploy.py
--- a/scripts/deploy.py
+++ b/scripts/deploy.py
@@ -38,6 +38,8 @@
if __name__ == '__main__':
+ if os.environ.get('TRAVIS_SECURE_ENV_VARS', None) != 'true':
+ sys.exit(0)
print('Decrypting secrets')
| {"golden_diff": "diff --git a/scripts/deploy.py b/scripts/deploy.py\n--- a/scripts/deploy.py\n+++ b/scripts/deploy.py\n@@ -38,6 +38,8 @@\n \n \n if __name__ == '__main__':\n+ if os.environ.get('TRAVIS_SECURE_ENV_VARS', None) != 'true':\n+ sys.exit(0)\n \n print('Decrypting secrets')\n", "issue": "External pull requests currently fail the deploy task\nThe build on #536 is currently failing because the decryption is trying to run and it doesn't have access to the decryption environment variables because it comes from @Zac-HD's fork rather than the main repo.\r\n\r\nThe solution is just to have that task skip for external pull requests I think.\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2017 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport sys\nimport random\nimport shutil\nimport subprocess\nfrom time import time, sleep\n\nimport hypothesistooling as tools\n\nsys.path.append(os.path.dirname(__file__)) # noqa\n\n\nDIST = os.path.join(tools.ROOT, 'dist')\n\n\nPENDING_STATUS = ('started', 'created')\n\n\nif __name__ == '__main__':\n\n print('Decrypting secrets')\n\n # We'd normally avoid the use of shell=True, but this is more or less\n # intended as an opaque string that was given to us by Travis that happens\n # to be a shell command that we run, and there are a number of good reasons\n # this particular instance is harmless and would be high effort to\n # convert (principally: Lack of programmatic generation of the string and\n # extensive use of environment variables in it), so we're making an\n # exception here.\n subprocess.check_call(\n 'openssl aes-256-cbc -K $encrypted_39cb4cc39a80_key '\n '-iv $encrypted_39cb4cc39a80_iv -in secrets.tar.enc '\n '-out secrets.tar -d',\n shell=True\n )\n\n subprocess.check_call([\n 'tar', '-xvf', 'secrets.tar',\n ])\n\n last_release = tools.latest_version()\n\n print('Current version: %s. Latest released version: %s' % (\n tools.__version__, last_release\n ))\n\n print('Building an sdist...')\n\n if os.path.exists(DIST):\n shutil.rmtree(DIST)\n\n subprocess.check_output([\n sys.executable, 'setup.py', 'sdist', '--dist-dir', DIST,\n ])\n\n if not tools.on_master():\n print('Not deploying due to not being on master')\n sys.exit(0)\n\n if not tools.has_source_changes(last_release):\n print('Not deploying due to no source changes')\n sys.exit(0)\n\n start_time = time()\n\n prev_pending = None\n\n # We time out after an hour, which is a stupidly long time and it should\n # never actually take that long: A full Travis run only takes about 20-30\n # minutes! 
This is really just here as a guard in case something goes\n # wrong and we're not paying attention so as to not be too mean to Travis..\n while time() <= start_time + 60 * 60:\n jobs = tools.build_jobs()\n\n failed_jobs = [\n (k, v)\n for k, vs in jobs.items()\n if k not in PENDING_STATUS + ('passed',)\n for v in vs\n ]\n\n if failed_jobs:\n print('Failing this due to failure of jobs %s' % (\n ', '.join('%s(%s)' % (s, j) for j, s in failed_jobs),\n ))\n sys.exit(1)\n else:\n pending = [j for s in PENDING_STATUS for j in jobs.get(s, ())]\n try:\n # This allows us to test the deploy job for a build locally.\n pending.remove('deploy')\n except ValueError:\n pass\n if pending:\n still_pending = set(pending)\n if prev_pending is None:\n print('Waiting for the following jobs to complete:')\n for p in sorted(still_pending):\n print(' * %s' % (p,))\n print()\n else:\n completed = prev_pending - still_pending\n if completed:\n print('%s completed since last check.' % (\n ', '.join(sorted(completed)),))\n prev_pending = still_pending\n naptime = 10.0 * (2 + random.random())\n print('Waiting %.2fs for %d more job%s to complete' % (\n naptime, len(pending), 's' if len(pending) > 1 else '',))\n sleep(naptime)\n else:\n break\n else:\n print(\"We've been waiting for an hour. That seems bad. Failing now.\")\n sys.exit(1)\n\n print('Looks good to release!')\n print('Now uploading to pypi.')\n\n subprocess.check_output([\n sys.executable, '-m', 'twine', 'upload',\n '--config-file', './.pypirc',\n os.path.join(DIST, '*'),\n ])\n\n print('Release seems good. Pushing the tag now.')\n\n tools.create_tag()\n sys.exit(0)\n", "path": "scripts/deploy.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2017 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport sys\nimport random\nimport shutil\nimport subprocess\nfrom time import time, sleep\n\nimport hypothesistooling as tools\n\nsys.path.append(os.path.dirname(__file__)) # noqa\n\n\nDIST = os.path.join(tools.ROOT, 'dist')\n\n\nPENDING_STATUS = ('started', 'created')\n\n\nif __name__ == '__main__':\n if os.environ.get('TRAVIS_SECURE_ENV_VARS', None) != 'true':\n sys.exit(0)\n\n print('Decrypting secrets')\n\n # We'd normally avoid the use of shell=True, but this is more or less\n # intended as an opaque string that was given to us by Travis that happens\n # to be a shell command that we run, and there are a number of good reasons\n # this particular instance is harmless and would be high effort to\n # convert (principally: Lack of programmatic generation of the string and\n # extensive use of environment variables in it), so we're making an\n # exception here.\n subprocess.check_call(\n 'openssl aes-256-cbc -K $encrypted_39cb4cc39a80_key '\n '-iv $encrypted_39cb4cc39a80_iv -in secrets.tar.enc '\n '-out secrets.tar -d',\n shell=True\n )\n\n subprocess.check_call([\n 'tar', '-xvf', 'secrets.tar',\n ])\n\n last_release = tools.latest_version()\n\n print('Current version: %s. Latest released version: %s' % (\n tools.__version__, last_release\n ))\n\n print('Building an sdist...')\n\n if os.path.exists(DIST):\n shutil.rmtree(DIST)\n\n subprocess.check_output([\n sys.executable, 'setup.py', 'sdist', '--dist-dir', DIST,\n ])\n\n if not tools.on_master():\n print('Not deploying due to not being on master')\n sys.exit(0)\n\n if not tools.has_source_changes(last_release):\n print('Not deploying due to no source changes')\n sys.exit(0)\n\n start_time = time()\n\n prev_pending = None\n\n # We time out after an hour, which is a stupidly long time and it should\n # never actually take that long: A full Travis run only takes about 20-30\n # minutes! This is really just here as a guard in case something goes\n # wrong and we're not paying attention so as to not be too mean to Travis..\n while time() <= start_time + 60 * 60:\n jobs = tools.build_jobs()\n\n failed_jobs = [\n (k, v)\n for k, vs in jobs.items()\n if k not in PENDING_STATUS + ('passed',)\n for v in vs\n ]\n\n if failed_jobs:\n print('Failing this due to failure of jobs %s' % (\n ', '.join('%s(%s)' % (s, j) for j, s in failed_jobs),\n ))\n sys.exit(1)\n else:\n pending = [j for s in PENDING_STATUS for j in jobs.get(s, ())]\n try:\n # This allows us to test the deploy job for a build locally.\n pending.remove('deploy')\n except ValueError:\n pass\n if pending:\n still_pending = set(pending)\n if prev_pending is None:\n print('Waiting for the following jobs to complete:')\n for p in sorted(still_pending):\n print(' * %s' % (p,))\n print()\n else:\n completed = prev_pending - still_pending\n if completed:\n print('%s completed since last check.' % (\n ', '.join(sorted(completed)),))\n prev_pending = still_pending\n naptime = 10.0 * (2 + random.random())\n print('Waiting %.2fs for %d more job%s to complete' % (\n naptime, len(pending), 's' if len(pending) > 1 else '',))\n sleep(naptime)\n else:\n break\n else:\n print(\"We've been waiting for an hour. That seems bad. 
Failing now.\")\n sys.exit(1)\n\n print('Looks good to release!')\n print('Now uploading to pypi.')\n\n subprocess.check_output([\n sys.executable, '-m', 'twine', 'upload',\n '--config-file', './.pypirc',\n os.path.join(DIST, '*'),\n ])\n\n print('Release seems good. Pushing the tag now.')\n\n tools.create_tag()\n sys.exit(0)\n", "path": "scripts/deploy.py"}]} | 1,853 | 83 |
gh_patches_debug_5678 | rasdani/github-patches | git_diff | stephenmcd__mezzanine-1954 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Small typo in mezzanine/core/auth_backends.py
Should read verification rather than verficiation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mezzanine/core/auth_backends.py`
Content:
```
1 from __future__ import unicode_literals
2
3 from django.contrib.auth import get_user_model
4 from django.contrib.auth.backends import ModelBackend
5 from django.contrib.auth.tokens import default_token_generator
6 from django.db.models import Q
7 from django.utils.http import base36_to_int
8
9
10 User = get_user_model()
11
12
13 class MezzanineBackend(ModelBackend):
14 """
15 Extends Django's ``ModelBackend`` to allow login via username,
16 email, or verification token.
17
18 Args are either ``username`` and ``password``, or ``uidb36``
19 and ``token``. In either case, ``is_active`` can also be given.
20
21 For login, is_active is not given, so that the login form can
22 raise a specific error for inactive users.
23 For password reset, True is given for is_active.
24 For signup verficiation, False is given for is_active.
25 """
26
27 def authenticate(self, *args, **kwargs):
28 if kwargs:
29 username = kwargs.pop("username", None)
30 if username:
31 username_or_email = Q(username=username) | Q(email=username)
32 password = kwargs.pop("password", None)
33 try:
34 user = User.objects.get(username_or_email, **kwargs)
35 except User.DoesNotExist:
36 pass
37 else:
38 if user.check_password(password):
39 return user
40 else:
41 if 'uidb36' not in kwargs:
42 return
43 kwargs["id"] = base36_to_int(kwargs.pop("uidb36"))
44 token = kwargs.pop("token")
45 try:
46 user = User.objects.get(**kwargs)
47 except User.DoesNotExist:
48 pass
49 else:
50 if default_token_generator.check_token(user, token):
51 return user
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mezzanine/core/auth_backends.py b/mezzanine/core/auth_backends.py
--- a/mezzanine/core/auth_backends.py
+++ b/mezzanine/core/auth_backends.py
@@ -21,7 +21,7 @@
For login, is_active is not given, so that the login form can
raise a specific error for inactive users.
For password reset, True is given for is_active.
- For signup verficiation, False is given for is_active.
+ For signup verification, False is given for is_active.
"""
def authenticate(self, *args, **kwargs):
| {"golden_diff": "diff --git a/mezzanine/core/auth_backends.py b/mezzanine/core/auth_backends.py\n--- a/mezzanine/core/auth_backends.py\n+++ b/mezzanine/core/auth_backends.py\n@@ -21,7 +21,7 @@\n For login, is_active is not given, so that the login form can\n raise a specific error for inactive users.\n For password reset, True is given for is_active.\n- For signup verficiation, False is given for is_active.\n+ For signup verification, False is given for is_active.\n \"\"\"\n \n def authenticate(self, *args, **kwargs):\n", "issue": "Small typo in mezzanine/core/auth_backends.py\nShould read verification rather than verficiation.\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.backends import ModelBackend\nfrom django.contrib.auth.tokens import default_token_generator\nfrom django.db.models import Q\nfrom django.utils.http import base36_to_int\n\n\nUser = get_user_model()\n\n\nclass MezzanineBackend(ModelBackend):\n \"\"\"\n Extends Django's ``ModelBackend`` to allow login via username,\n email, or verification token.\n\n Args are either ``username`` and ``password``, or ``uidb36``\n and ``token``. In either case, ``is_active`` can also be given.\n\n For login, is_active is not given, so that the login form can\n raise a specific error for inactive users.\n For password reset, True is given for is_active.\n For signup verficiation, False is given for is_active.\n \"\"\"\n\n def authenticate(self, *args, **kwargs):\n if kwargs:\n username = kwargs.pop(\"username\", None)\n if username:\n username_or_email = Q(username=username) | Q(email=username)\n password = kwargs.pop(\"password\", None)\n try:\n user = User.objects.get(username_or_email, **kwargs)\n except User.DoesNotExist:\n pass\n else:\n if user.check_password(password):\n return user\n else:\n if 'uidb36' not in kwargs:\n return\n kwargs[\"id\"] = base36_to_int(kwargs.pop(\"uidb36\"))\n token = kwargs.pop(\"token\")\n try:\n user = User.objects.get(**kwargs)\n except User.DoesNotExist:\n pass\n else:\n if default_token_generator.check_token(user, token):\n return user\n", "path": "mezzanine/core/auth_backends.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.backends import ModelBackend\nfrom django.contrib.auth.tokens import default_token_generator\nfrom django.db.models import Q\nfrom django.utils.http import base36_to_int\n\n\nUser = get_user_model()\n\n\nclass MezzanineBackend(ModelBackend):\n \"\"\"\n Extends Django's ``ModelBackend`` to allow login via username,\n email, or verification token.\n\n Args are either ``username`` and ``password``, or ``uidb36``\n and ``token``. 
In either case, ``is_active`` can also be given.\n\n For login, is_active is not given, so that the login form can\n raise a specific error for inactive users.\n For password reset, True is given for is_active.\n For signup verification, False is given for is_active.\n \"\"\"\n\n def authenticate(self, *args, **kwargs):\n if kwargs:\n username = kwargs.pop(\"username\", None)\n if username:\n username_or_email = Q(username=username) | Q(email=username)\n password = kwargs.pop(\"password\", None)\n try:\n user = User.objects.get(username_or_email, **kwargs)\n except User.DoesNotExist:\n pass\n else:\n if user.check_password(password):\n return user\n else:\n if 'uidb36' not in kwargs:\n return\n kwargs[\"id\"] = base36_to_int(kwargs.pop(\"uidb36\"))\n token = kwargs.pop(\"token\")\n try:\n user = User.objects.get(**kwargs)\n except User.DoesNotExist:\n pass\n else:\n if default_token_generator.check_token(user, token):\n return user\n", "path": "mezzanine/core/auth_backends.py"}]} | 745 | 137 |
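Beyond the one-word docstring fix, the reusable mechanic in this row is the `Q(username=...) | Q(email=...)` lookup that lets a single credential field match either column. A minimal sketch of that pattern outside Mezzanine is below; the helper name is ours and error handling is trimmed to the essentials.

```python
# Minimal sketch of the username-or-email lookup used by MezzanineBackend.
# Assumes a configured Django project; the helper name is hypothetical.
from django.contrib.auth import get_user_model
from django.db.models import Q


def find_user(identifier):
    User = get_user_model()
    try:
        # One query that matches either the username or the email column.
        return User.objects.get(Q(username=identifier) | Q(email=identifier))
    except User.DoesNotExist:
        return None
```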
gh_patches_debug_794 | rasdani/github-patches | git_diff | scikit-image__scikit-image-3650 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tifffile: try to use the one in the user's install first
Should we try importing tifffile before using the one we versionned it?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/io/_plugins/tifffile_plugin.py`
Content:
```
1 from ...external.tifffile import TiffFile, imsave, parse_kwargs
2
3
4 def imread(fname, dtype=None, **kwargs):
5 """Load a tiff image from file.
6
7 Parameters
8 ----------
9 fname : str or file
10 File name or file-like-object.
11 dtype : numpy dtype object or string specifier
12 Specifies data type of array elements (Not currently used).
13 kwargs : keyword pairs, optional
14 Additional keyword arguments to pass through (see ``tifffile``'s
15 ``imread`` function).
16
17 Notes
18 -----
19 Provided by Christophe Golhke's tifffile.py [1]_, and supports many
20 advanced image types including multi-page and floating point.
21
22 References
23 ----------
24 .. [1] http://www.lfd.uci.edu/~gohlke/code/tifffile.py
25
26 """
27
28 if 'img_num' in kwargs:
29 kwargs['key'] = kwargs.pop('img_num')
30
31 # parse_kwargs will extract keyword arguments intended for the TiffFile
32 # class and remove them from the kwargs dictionary in-place
33 tiff_keys = ['multifile', 'multifile_close', 'pages', 'fastij', 'is_ome']
34 kwargs_tiff = parse_kwargs(kwargs, *tiff_keys)
35
36 # read and return tiff as numpy array
37 with TiffFile(fname, **kwargs_tiff) as tif:
38 return tif.asarray(**kwargs)
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/skimage/io/_plugins/tifffile_plugin.py b/skimage/io/_plugins/tifffile_plugin.py
--- a/skimage/io/_plugins/tifffile_plugin.py
+++ b/skimage/io/_plugins/tifffile_plugin.py
@@ -1,4 +1,7 @@
-from ...external.tifffile import TiffFile, imsave, parse_kwargs
+try:
+ from tifffile import TiffFile, imsave, parse_kwargs
+except ImportError:
+ from ...external.tifffile import TiffFile, imsave, parse_kwargs
def imread(fname, dtype=None, **kwargs):
| {"golden_diff": "diff --git a/skimage/io/_plugins/tifffile_plugin.py b/skimage/io/_plugins/tifffile_plugin.py\n--- a/skimage/io/_plugins/tifffile_plugin.py\n+++ b/skimage/io/_plugins/tifffile_plugin.py\n@@ -1,4 +1,7 @@\n-from ...external.tifffile import TiffFile, imsave, parse_kwargs\n+try:\n+ from tifffile import TiffFile, imsave, parse_kwargs\n+except ImportError:\n+ from ...external.tifffile import TiffFile, imsave, parse_kwargs\n \n \n def imread(fname, dtype=None, **kwargs):\n", "issue": "tifffile: try to use the one in the user's install first\nShould we try importing tifffile before using the one we versionned it?\n", "before_files": [{"content": "from ...external.tifffile import TiffFile, imsave, parse_kwargs\n\n\ndef imread(fname, dtype=None, **kwargs):\n \"\"\"Load a tiff image from file.\n\n Parameters\n ----------\n fname : str or file\n File name or file-like-object.\n dtype : numpy dtype object or string specifier\n Specifies data type of array elements (Not currently used).\n kwargs : keyword pairs, optional\n Additional keyword arguments to pass through (see ``tifffile``'s\n ``imread`` function).\n\n Notes\n -----\n Provided by Christophe Golhke's tifffile.py [1]_, and supports many\n advanced image types including multi-page and floating point.\n\n References\n ----------\n .. [1] http://www.lfd.uci.edu/~gohlke/code/tifffile.py\n\n \"\"\"\n\n if 'img_num' in kwargs:\n kwargs['key'] = kwargs.pop('img_num')\n\n # parse_kwargs will extract keyword arguments intended for the TiffFile \n # class and remove them from the kwargs dictionary in-place\n tiff_keys = ['multifile', 'multifile_close', 'pages', 'fastij', 'is_ome']\n kwargs_tiff = parse_kwargs(kwargs, *tiff_keys)\n\n # read and return tiff as numpy array\n with TiffFile(fname, **kwargs_tiff) as tif:\n return tif.asarray(**kwargs)\n", "path": "skimage/io/_plugins/tifffile_plugin.py"}], "after_files": [{"content": "try:\n from tifffile import TiffFile, imsave, parse_kwargs\nexcept ImportError:\n from ...external.tifffile import TiffFile, imsave, parse_kwargs\n\n\ndef imread(fname, dtype=None, **kwargs):\n \"\"\"Load a tiff image from file.\n\n Parameters\n ----------\n fname : str or file\n File name or file-like-object.\n dtype : numpy dtype object or string specifier\n Specifies data type of array elements (Not currently used).\n kwargs : keyword pairs, optional\n Additional keyword arguments to pass through (see ``tifffile``'s\n ``imread`` function).\n\n Notes\n -----\n Provided by Christophe Golhke's tifffile.py [1]_, and supports many\n advanced image types including multi-page and floating point.\n\n References\n ----------\n .. [1] http://www.lfd.uci.edu/~gohlke/code/tifffile.py\n\n \"\"\"\n\n if 'img_num' in kwargs:\n kwargs['key'] = kwargs.pop('img_num')\n\n # parse_kwargs will extract keyword arguments intended for the TiffFile \n # class and remove them from the kwargs dictionary in-place\n tiff_keys = ['multifile', 'multifile_close', 'pages', 'fastij', 'is_ome']\n kwargs_tiff = parse_kwargs(kwargs, *tiff_keys)\n\n # read and return tiff as numpy array\n with TiffFile(fname, **kwargs_tiff) as tif:\n return tif.asarray(**kwargs)\n", "path": "skimage/io/_plugins/tifffile_plugin.py"}]} | 682 | 142 |
gh_patches_debug_1461 | rasdani/github-patches | git_diff | kartoza__prj.app-346 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Display thumbnails in a modal window when we click on fullscreen
We can see a lot of GIF in the QGIS changelog. These thumbnails are too small to see so I have to click on the button to see it fullscreen. For now, it redirects to the GIF url like http://changelog.qgis.org/media/images/entries/53f72a9cf1bf32d73eb5174c37e54c60002b9707.gif
The user needs to use the "previous" button in the web browser to come back to the changelog.
It would be better to implement a javascript modal window to show the GIF and to stay on the URL http://changelog.qgis.org/en/qgis/version/2.16.0/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django_project/core/settings/project.py`
Content:
```
1 # coding=utf-8
2
3 """Project level settings.
4
5 Adjust these values as needed but don't commit passwords etc. to any public
6 repository!
7 """
8
9 import os # noqa
10 from django.utils.translation import ugettext_lazy as _
11 from .utils import absolute_path
12 from .contrib import * # noqa
13
14 # Project apps
15 INSTALLED_APPS += (
16 'base',
17 'changes',
18 'github_issue',
19 'vota',
20 )
21
22 # Due to profile page does not available,
23 # this will redirect to home page after login
24 LOGIN_REDIRECT_URL = '/'
25
26 # How many versions to list in each project box
27 PROJECT_VERSION_LIST_SIZE = 10
28
29 # Set debug to false for production
30 DEBUG = TEMPLATE_DEBUG = False
31
32 SOUTH_TESTS_MIGRATE = False
33
34
35 # Set languages which want to be translated
36 LANGUAGES = (
37 ('en', _('English')),
38 ('af', _('Afrikaans')),
39 ('id', _('Indonesian')),
40 ('ko', _('Korean')),
41 )
42
43 # Set storage path for the translation files
44 LOCALE_PATHS = (absolute_path('locale'),)
45
46
47 MIDDLEWARE_CLASSES = (
48 # For nav bar generation
49 'core.custom_middleware.NavContextMiddleware',
50 ) + MIDDLEWARE_CLASSES
51
52 # Project specific javascript files to be pipelined
53 # For third party libs like jquery should go in contrib.py
54 PIPELINE_JS['project'] = {
55 'source_filenames': (
56 'js/csrf-ajax.js',
57 'js/changelog.js',
58 'js/github-issue.js'
59 ),
60 'output_filename': 'js/project.js',
61 }
62
63 # Project specific css files to be pipelined
64 # For third party libs like bootstrap should go in contrib.py
65 PIPELINE_CSS['project'] = {
66 'source_filenames': (
67 'css/changelog.css',
68 'css/form.css',
69 'css/fonts.css'
70 ),
71 'output_filename': 'css/project.css',
72 'extra_context': {
73 'media': 'screen, projection',
74 },
75 }
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django_project/core/settings/project.py b/django_project/core/settings/project.py
--- a/django_project/core/settings/project.py
+++ b/django_project/core/settings/project.py
@@ -55,7 +55,8 @@
'source_filenames': (
'js/csrf-ajax.js',
'js/changelog.js',
- 'js/github-issue.js'
+ 'js/github-issue.js',
+ 'js/entry.js',
),
'output_filename': 'js/project.js',
}
| {"golden_diff": "diff --git a/django_project/core/settings/project.py b/django_project/core/settings/project.py\n--- a/django_project/core/settings/project.py\n+++ b/django_project/core/settings/project.py\n@@ -55,7 +55,8 @@\n 'source_filenames': (\n 'js/csrf-ajax.js',\n 'js/changelog.js',\n- 'js/github-issue.js'\n+ 'js/github-issue.js',\n+ 'js/entry.js',\n ),\n 'output_filename': 'js/project.js',\n }\n", "issue": "Display thumbnails in a modal window when we click on fullscreen\nWe can see a lot of GIF in the QGIS changelog. These thumbnails are too small to see so I have to click on the button to see it fullscreen. For now, it redirects to the GIF url like http://changelog.qgis.org/media/images/entries/53f72a9cf1bf32d73eb5174c37e54c60002b9707.gif\nThe user needs to use the \"previous\" button in the web browser to come back to the changelog.\n\nIt would be better to implement a javascript modal window to show the GIF and to stay on the URL http://changelog.qgis.org/en/qgis/version/2.16.0/\n\n", "before_files": [{"content": "# coding=utf-8\n\n\"\"\"Project level settings.\n\nAdjust these values as needed but don't commit passwords etc. to any public\nrepository!\n\"\"\"\n\nimport os # noqa\nfrom django.utils.translation import ugettext_lazy as _\nfrom .utils import absolute_path\nfrom .contrib import * # noqa\n\n# Project apps\nINSTALLED_APPS += (\n 'base',\n 'changes',\n 'github_issue',\n 'vota',\n)\n\n# Due to profile page does not available,\n# this will redirect to home page after login\nLOGIN_REDIRECT_URL = '/'\n\n# How many versions to list in each project box\nPROJECT_VERSION_LIST_SIZE = 10\n\n# Set debug to false for production\nDEBUG = TEMPLATE_DEBUG = False\n\nSOUTH_TESTS_MIGRATE = False\n\n\n# Set languages which want to be translated\nLANGUAGES = (\n ('en', _('English')),\n ('af', _('Afrikaans')),\n ('id', _('Indonesian')),\n ('ko', _('Korean')),\n)\n\n# Set storage path for the translation files\nLOCALE_PATHS = (absolute_path('locale'),)\n\n\nMIDDLEWARE_CLASSES = (\n # For nav bar generation\n 'core.custom_middleware.NavContextMiddleware',\n) + MIDDLEWARE_CLASSES\n\n# Project specific javascript files to be pipelined\n# For third party libs like jquery should go in contrib.py\nPIPELINE_JS['project'] = {\n 'source_filenames': (\n 'js/csrf-ajax.js',\n 'js/changelog.js',\n 'js/github-issue.js'\n ),\n 'output_filename': 'js/project.js',\n}\n\n# Project specific css files to be pipelined\n# For third party libs like bootstrap should go in contrib.py\nPIPELINE_CSS['project'] = {\n 'source_filenames': (\n 'css/changelog.css',\n 'css/form.css',\n 'css/fonts.css'\n ),\n 'output_filename': 'css/project.css',\n 'extra_context': {\n 'media': 'screen, projection',\n },\n}\n", "path": "django_project/core/settings/project.py"}], "after_files": [{"content": "# coding=utf-8\n\n\"\"\"Project level settings.\n\nAdjust these values as needed but don't commit passwords etc. 
to any public\nrepository!\n\"\"\"\n\nimport os # noqa\nfrom django.utils.translation import ugettext_lazy as _\nfrom .utils import absolute_path\nfrom .contrib import * # noqa\n\n# Project apps\nINSTALLED_APPS += (\n 'base',\n 'changes',\n 'github_issue',\n 'vota',\n)\n\n# Due to profile page does not available,\n# this will redirect to home page after login\nLOGIN_REDIRECT_URL = '/'\n\n# How many versions to list in each project box\nPROJECT_VERSION_LIST_SIZE = 10\n\n# Set debug to false for production\nDEBUG = TEMPLATE_DEBUG = False\n\nSOUTH_TESTS_MIGRATE = False\n\n\n# Set languages which want to be translated\nLANGUAGES = (\n ('en', _('English')),\n ('af', _('Afrikaans')),\n ('id', _('Indonesian')),\n ('ko', _('Korean')),\n)\n\n# Set storage path for the translation files\nLOCALE_PATHS = (absolute_path('locale'),)\n\n\nMIDDLEWARE_CLASSES = (\n # For nav bar generation\n 'core.custom_middleware.NavContextMiddleware',\n) + MIDDLEWARE_CLASSES\n\n# Project specific javascript files to be pipelined\n# For third party libs like jquery should go in contrib.py\nPIPELINE_JS['project'] = {\n 'source_filenames': (\n 'js/csrf-ajax.js',\n 'js/changelog.js',\n 'js/github-issue.js',\n 'js/entry.js',\n ),\n 'output_filename': 'js/project.js',\n}\n\n# Project specific css files to be pipelined\n# For third party libs like bootstrap should go in contrib.py\nPIPELINE_CSS['project'] = {\n 'source_filenames': (\n 'css/changelog.css',\n 'css/form.css',\n 'css/fonts.css'\n ),\n 'output_filename': 'css/project.css',\n 'extra_context': {\n 'media': 'screen, projection',\n },\n}\n", "path": "django_project/core/settings/project.py"}]} | 1,002 | 110 |
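Note that the fix only registers a new asset, `js/entry.js`, in the `project` bundle; the modal behaviour itself lives in that JavaScript file, which the row does not show. A sketch of how django-pipeline consumes such a bundle entry follows; only `'js/entry.js'` comes from the diff, and the template usage is illustrative.

```python
# Sketch: django-pipeline concatenates every file in source_filenames into
# the single bundle named by output_filename.
PIPELINE_JS = {
    'project': {
        'source_filenames': (
            'js/csrf-ajax.js',
            'js/changelog.js',
            'js/github-issue.js',
            'js/entry.js',  # the modal-window behaviour added by the fix
        ),
        'output_filename': 'js/project.js',
    },
}
# In a Django template: {% load pipeline %} then {% javascript 'project' %}
```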
gh_patches_debug_50751 | rasdani/github-patches | git_diff | pantsbuild__pants-16113 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pants poetry-based lockfiles fail to include hashes.
This was detected in a unit test in the Pants repo, but is a wider problem for all versions of Pants that support generating lockfiles using Poetry.
The proximal cause is this announcement from PyPI:
https://discuss.python.org/t/backwards-incompatible-change-to-pypi-json-api/17154
And the root cause is this Poetry code:
https://github.com/python-poetry/poetry/blob/bce13c14f73060b3abbb791dea585d8fde26eaef/poetry/repositories/pypi_repository.py#L272-L283
There was a Poetry fix released and backported to the 1.1. branch here:
https://github.com/python-poetry/poetry/pull/5973
Users can fix with 2 steps:
1. Update Pants config
```toml
[poetry]
# N.B.: Works around issue described at https://github.com/pantsbuild/pants/issues/16111
# Undo once on a Pants with this version or greater as the default.
version = "poetry==1.1.14"
```
2. Clear Poetry caches with `rm -rf ~/.cache/pypoetry` on Linux and `rm -rf ~/Library/Caches/pypoetry` on Mac.
This issue tracks bumping Pants default to this fixed Poetry version.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/backend/python/subsystems/poetry.py`
Content:
```
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 from collections import defaultdict
7 from dataclasses import dataclass
8 from textwrap import dedent
9 from typing import Any, Iterable, Sequence
10
11 import toml
12 from pkg_resources import Requirement
13
14 from pants.backend.python.subsystems.python_tool_base import PythonToolRequirementsBase
15 from pants.backend.python.util_rules.interpreter_constraints import InterpreterConstraints
16 from pants.engine.fs import FileContent
17
18 # ----------------------------------------------------------------------------------------
19 # Subsystem
20 # ----------------------------------------------------------------------------------------
21
22
23 class PoetrySubsystem(PythonToolRequirementsBase):
24 options_scope = "poetry"
25 help = "Used to generate lockfiles for third-party Python dependencies."
26
27 default_version = "poetry==1.1.8"
28
29 register_interpreter_constraints = True
30 default_interpreter_constraints = ["CPython>=3.7,<4"]
31
32
33 # We must monkeypatch Poetry to include `setuptools` and `wheel` in the lockfile. This was fixed
34 # in Poetry 1.2. See https://github.com/python-poetry/poetry/issues/1584.
35 # WONTFIX(#12314): only use this custom launcher if using Poetry 1.1..
36 POETRY_LAUNCHER = FileContent(
37 "__pants_poetry_launcher.py",
38 dedent(
39 """\
40 from poetry.console import main
41 from poetry.puzzle.provider import Provider
42
43 Provider.UNSAFE_PACKAGES = set()
44 main()
45 """
46 ).encode(),
47 )
48
49
50 # ----------------------------------------------------------------------------------------
51 # Parsing
52 # ----------------------------------------------------------------------------------------
53
54 _HEADER = {
55 "name": "pants-lockfile-generation",
56 "version": "0.1.0",
57 "description": "",
58 "authors": ["pantsbuild"],
59 }
60
61
62 def create_pyproject_toml(
63 requirements: Iterable[str], interpreter_constraints: InterpreterConstraints
64 ) -> str:
65 return toml.dumps(create_pyproject_toml_as_dict(requirements, interpreter_constraints))
66
67
68 def create_pyproject_toml_as_dict(
69 raw_requirements: Iterable[str], interpreter_constraints: InterpreterConstraints
70 ) -> dict:
71 python_constraint = {"python": interpreter_constraints.to_poetry_constraint()}
72 project_name_to_poetry_deps = defaultdict(list)
73 for raw_req in raw_requirements:
74 # WONTFIX(#12314): add error handling.
75 req = Requirement.parse(raw_req)
76 poetry_dep = PoetryDependency.from_requirement(req)
77 project_name_to_poetry_deps[req.project_name].append(poetry_dep)
78
79 deps = {
80 project_name: PoetryDependency.to_pyproject_toml_metadata(poetry_deps)
81 for project_name, poetry_deps in project_name_to_poetry_deps.items()
82 }
83 return {"tool": {"poetry": {**_HEADER, "dependencies": {**python_constraint, **deps}}}}
84
85
86 @dataclass(frozen=True)
87 class PoetryDependency:
88 name: str
89 version: str | None
90 extras: tuple[str, ...] = ()
91 markers: str | None = None
92
93 @classmethod
94 def from_requirement(cls, requirement: Requirement) -> PoetryDependency:
95 return PoetryDependency(
96 requirement.project_name,
97 version=str(requirement.specifier) or None, # type: ignore[attr-defined]
98 extras=tuple(sorted(requirement.extras)),
99 markers=str(requirement.marker) if requirement.marker else None,
100 )
101
102 @classmethod
103 def to_pyproject_toml_metadata(
104 cls, deps: Sequence[PoetryDependency]
105 ) -> dict[str, Any] | list[dict[str, Any]]:
106 def convert_dep(dep: PoetryDependency) -> dict[str, Any]:
107 metadata: dict[str, Any] = {"version": dep.version or "*"}
108 if dep.extras:
109 metadata["extras"] = dep.extras
110 if dep.markers:
111 metadata["markers"] = dep.markers
112 return metadata
113
114 if not deps:
115 raise AssertionError("Must have at least one element!")
116 if len(deps) == 1:
117 return convert_dep(deps[0])
118
119 entries = []
120 name = deps[0].name
121 for dep in deps:
122 if dep.name != name:
123 raise AssertionError(f"All elements must have the same project name. Given: {deps}")
124 entries.append(convert_dep(dep))
125 return entries
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/python/pants/backend/python/subsystems/poetry.py b/src/python/pants/backend/python/subsystems/poetry.py
--- a/src/python/pants/backend/python/subsystems/poetry.py
+++ b/src/python/pants/backend/python/subsystems/poetry.py
@@ -24,7 +24,7 @@
options_scope = "poetry"
help = "Used to generate lockfiles for third-party Python dependencies."
- default_version = "poetry==1.1.8"
+ default_version = "poetry==1.1.14"
register_interpreter_constraints = True
default_interpreter_constraints = ["CPython>=3.7,<4"]
| {"golden_diff": "diff --git a/src/python/pants/backend/python/subsystems/poetry.py b/src/python/pants/backend/python/subsystems/poetry.py\n--- a/src/python/pants/backend/python/subsystems/poetry.py\n+++ b/src/python/pants/backend/python/subsystems/poetry.py\n@@ -24,7 +24,7 @@\n options_scope = \"poetry\"\n help = \"Used to generate lockfiles for third-party Python dependencies.\"\n \n- default_version = \"poetry==1.1.8\"\n+ default_version = \"poetry==1.1.14\"\n \n register_interpreter_constraints = True\n default_interpreter_constraints = [\"CPython>=3.7,<4\"]\n", "issue": "Pants poetry-based lockfiles fail to include hashes.\nThis was detected in a unit test in the Pants repo, but is a wider problem for all versions of Pants that support generating lockfiles using Poetry.\r\n\r\nThe proximal cause is this announcement from PyPI:\r\n https://discuss.python.org/t/backwards-incompatible-change-to-pypi-json-api/17154\r\n\r\nAnd the root cause is this Poetry code:\r\n https://github.com/python-poetry/poetry/blob/bce13c14f73060b3abbb791dea585d8fde26eaef/poetry/repositories/pypi_repository.py#L272-L283\r\n\r\nThere was a Poetry fix released and backported to the 1.1. branch here:\r\n https://github.com/python-poetry/poetry/pull/5973\r\n\r\nUsers can fix with 2 steps:\r\n1. Update Pants config\r\n```toml\r\n[poetry]\r\n# N.B.: Works around issue described at https://github.com/pantsbuild/pants/issues/16111\r\n# Undo once on a Pants with this version or greater as the default.\r\nversion = \"poetry==1.1.14\"\r\n```\r\n2. Clear Poetry caches with `rm -rf ~/.cache/pypoetry` on Linux and `rm -rf ~/Library/Caches/pypoetry` on Mac.\r\n\r\nThis issue tracks bumping Pants default to this fixed Poetry version.\r\n\n", "before_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nfrom collections import defaultdict\nfrom dataclasses import dataclass\nfrom textwrap import dedent\nfrom typing import Any, Iterable, Sequence\n\nimport toml\nfrom pkg_resources import Requirement\n\nfrom pants.backend.python.subsystems.python_tool_base import PythonToolRequirementsBase\nfrom pants.backend.python.util_rules.interpreter_constraints import InterpreterConstraints\nfrom pants.engine.fs import FileContent\n\n# ----------------------------------------------------------------------------------------\n# Subsystem\n# ----------------------------------------------------------------------------------------\n\n\nclass PoetrySubsystem(PythonToolRequirementsBase):\n options_scope = \"poetry\"\n help = \"Used to generate lockfiles for third-party Python dependencies.\"\n\n default_version = \"poetry==1.1.8\"\n\n register_interpreter_constraints = True\n default_interpreter_constraints = [\"CPython>=3.7,<4\"]\n\n\n# We must monkeypatch Poetry to include `setuptools` and `wheel` in the lockfile. This was fixed\n# in Poetry 1.2. 
See https://github.com/python-poetry/poetry/issues/1584.\n# WONTFIX(#12314): only use this custom launcher if using Poetry 1.1..\nPOETRY_LAUNCHER = FileContent(\n \"__pants_poetry_launcher.py\",\n dedent(\n \"\"\"\\\n from poetry.console import main\n from poetry.puzzle.provider import Provider\n\n Provider.UNSAFE_PACKAGES = set()\n main()\n \"\"\"\n ).encode(),\n)\n\n\n# ----------------------------------------------------------------------------------------\n# Parsing\n# ----------------------------------------------------------------------------------------\n\n_HEADER = {\n \"name\": \"pants-lockfile-generation\",\n \"version\": \"0.1.0\",\n \"description\": \"\",\n \"authors\": [\"pantsbuild\"],\n}\n\n\ndef create_pyproject_toml(\n requirements: Iterable[str], interpreter_constraints: InterpreterConstraints\n) -> str:\n return toml.dumps(create_pyproject_toml_as_dict(requirements, interpreter_constraints))\n\n\ndef create_pyproject_toml_as_dict(\n raw_requirements: Iterable[str], interpreter_constraints: InterpreterConstraints\n) -> dict:\n python_constraint = {\"python\": interpreter_constraints.to_poetry_constraint()}\n project_name_to_poetry_deps = defaultdict(list)\n for raw_req in raw_requirements:\n # WONTFIX(#12314): add error handling.\n req = Requirement.parse(raw_req)\n poetry_dep = PoetryDependency.from_requirement(req)\n project_name_to_poetry_deps[req.project_name].append(poetry_dep)\n\n deps = {\n project_name: PoetryDependency.to_pyproject_toml_metadata(poetry_deps)\n for project_name, poetry_deps in project_name_to_poetry_deps.items()\n }\n return {\"tool\": {\"poetry\": {**_HEADER, \"dependencies\": {**python_constraint, **deps}}}}\n\n\n@dataclass(frozen=True)\nclass PoetryDependency:\n name: str\n version: str | None\n extras: tuple[str, ...] = ()\n markers: str | None = None\n\n @classmethod\n def from_requirement(cls, requirement: Requirement) -> PoetryDependency:\n return PoetryDependency(\n requirement.project_name,\n version=str(requirement.specifier) or None, # type: ignore[attr-defined]\n extras=tuple(sorted(requirement.extras)),\n markers=str(requirement.marker) if requirement.marker else None,\n )\n\n @classmethod\n def to_pyproject_toml_metadata(\n cls, deps: Sequence[PoetryDependency]\n ) -> dict[str, Any] | list[dict[str, Any]]:\n def convert_dep(dep: PoetryDependency) -> dict[str, Any]:\n metadata: dict[str, Any] = {\"version\": dep.version or \"*\"}\n if dep.extras:\n metadata[\"extras\"] = dep.extras\n if dep.markers:\n metadata[\"markers\"] = dep.markers\n return metadata\n\n if not deps:\n raise AssertionError(\"Must have at least one element!\")\n if len(deps) == 1:\n return convert_dep(deps[0])\n\n entries = []\n name = deps[0].name\n for dep in deps:\n if dep.name != name:\n raise AssertionError(f\"All elements must have the same project name. 
Given: {deps}\")\n entries.append(convert_dep(dep))\n return entries\n", "path": "src/python/pants/backend/python/subsystems/poetry.py"}], "after_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nfrom collections import defaultdict\nfrom dataclasses import dataclass\nfrom textwrap import dedent\nfrom typing import Any, Iterable, Sequence\n\nimport toml\nfrom pkg_resources import Requirement\n\nfrom pants.backend.python.subsystems.python_tool_base import PythonToolRequirementsBase\nfrom pants.backend.python.util_rules.interpreter_constraints import InterpreterConstraints\nfrom pants.engine.fs import FileContent\n\n# ----------------------------------------------------------------------------------------\n# Subsystem\n# ----------------------------------------------------------------------------------------\n\n\nclass PoetrySubsystem(PythonToolRequirementsBase):\n options_scope = \"poetry\"\n help = \"Used to generate lockfiles for third-party Python dependencies.\"\n\n default_version = \"poetry==1.1.14\"\n\n register_interpreter_constraints = True\n default_interpreter_constraints = [\"CPython>=3.7,<4\"]\n\n\n# We must monkeypatch Poetry to include `setuptools` and `wheel` in the lockfile. This was fixed\n# in Poetry 1.2. See https://github.com/python-poetry/poetry/issues/1584.\n# WONTFIX(#12314): only use this custom launcher if using Poetry 1.1..\nPOETRY_LAUNCHER = FileContent(\n \"__pants_poetry_launcher.py\",\n dedent(\n \"\"\"\\\n from poetry.console import main\n from poetry.puzzle.provider import Provider\n\n Provider.UNSAFE_PACKAGES = set()\n main()\n \"\"\"\n ).encode(),\n)\n\n\n# ----------------------------------------------------------------------------------------\n# Parsing\n# ----------------------------------------------------------------------------------------\n\n_HEADER = {\n \"name\": \"pants-lockfile-generation\",\n \"version\": \"0.1.0\",\n \"description\": \"\",\n \"authors\": [\"pantsbuild\"],\n}\n\n\ndef create_pyproject_toml(\n requirements: Iterable[str], interpreter_constraints: InterpreterConstraints\n) -> str:\n return toml.dumps(create_pyproject_toml_as_dict(requirements, interpreter_constraints))\n\n\ndef create_pyproject_toml_as_dict(\n raw_requirements: Iterable[str], interpreter_constraints: InterpreterConstraints\n) -> dict:\n python_constraint = {\"python\": interpreter_constraints.to_poetry_constraint()}\n project_name_to_poetry_deps = defaultdict(list)\n for raw_req in raw_requirements:\n # WONTFIX(#12314): add error handling.\n req = Requirement.parse(raw_req)\n poetry_dep = PoetryDependency.from_requirement(req)\n project_name_to_poetry_deps[req.project_name].append(poetry_dep)\n\n deps = {\n project_name: PoetryDependency.to_pyproject_toml_metadata(poetry_deps)\n for project_name, poetry_deps in project_name_to_poetry_deps.items()\n }\n return {\"tool\": {\"poetry\": {**_HEADER, \"dependencies\": {**python_constraint, **deps}}}}\n\n\n@dataclass(frozen=True)\nclass PoetryDependency:\n name: str\n version: str | None\n extras: tuple[str, ...] 
= ()\n markers: str | None = None\n\n @classmethod\n def from_requirement(cls, requirement: Requirement) -> PoetryDependency:\n return PoetryDependency(\n requirement.project_name,\n version=str(requirement.specifier) or None, # type: ignore[attr-defined]\n extras=tuple(sorted(requirement.extras)),\n markers=str(requirement.marker) if requirement.marker else None,\n )\n\n @classmethod\n def to_pyproject_toml_metadata(\n cls, deps: Sequence[PoetryDependency]\n ) -> dict[str, Any] | list[dict[str, Any]]:\n def convert_dep(dep: PoetryDependency) -> dict[str, Any]:\n metadata: dict[str, Any] = {\"version\": dep.version or \"*\"}\n if dep.extras:\n metadata[\"extras\"] = dep.extras\n if dep.markers:\n metadata[\"markers\"] = dep.markers\n return metadata\n\n if not deps:\n raise AssertionError(\"Must have at least one element!\")\n if len(deps) == 1:\n return convert_dep(deps[0])\n\n entries = []\n name = deps[0].name\n for dep in deps:\n if dep.name != name:\n raise AssertionError(f\"All elements must have the same project name. Given: {deps}\")\n entries.append(convert_dep(dep))\n return entries\n", "path": "src/python/pants/backend/python/subsystems/poetry.py"}]} | 1,785 | 152 |
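After regenerating a lockfile with the bumped Poetry version, it is worth confirming that hashes actually made it in. The sketch below is only a smoke test; the file name and the pip-style `--hash` format are assumptions, and it checks for the presence of hash entries rather than one hash per artifact.

```python
# Smoke test: verify a pip-style lockfile pins hashes for its requirements.
# File name and format are assumptions about the generated lockfile.
import re
import sys


def lockfile_has_hashes(path):
    text = open(path).read()
    # Any pinned requirement lines at all, plus at least one sha256 hash.
    pinned = re.findall(r'^\S+==\S+', text, flags=re.MULTILINE)
    return bool(pinned) and '--hash=sha256:' in text


if __name__ == '__main__':
    path = sys.argv[1] if len(sys.argv) > 1 else 'lockfile.txt'
    sys.exit(0 if lockfile_has_hashes(path) else 1)
```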
gh_patches_debug_13638 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-2503 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Current `tox` configuration ends up testing old code
For example:
``` bash
$ rm -r .tox/
$ tox -e system-tests --notest
GLOB sdist-make: /home/tseaver/projects/agendaless/Google/src/google-cloud-python/setup.py
system-tests create: /home/tseaver/projects/agendaless/Google/src/google-cloud-python/.tox/system-tests
system-tests inst: /home/tseaver/projects/agendaless/Google/src/google-cloud-python/.tox/dist/google-cloud-0.20.0.zip
system-tests installed: -f file:///home/tseaver/.pip/wheels,enum34==1.1.6,future==0.15.2,futures==3.0.5,gapic-google-logging-v2==0.10.1,gapic-google-pubsub-v1==0.10.1,google-cloud==0.20.0,google-cloud-bigquery==0.20.0,google-cloud-bigtable==0.20.0,google-cloud-core==0.20.0,google-cloud-datastore==0.20.0,google-cloud-dns==0.20.0,google-cloud-error-reporting==0.20.0,google-cloud-happybase==0.20.0,google-cloud-language==0.20.0,google-cloud-logging==0.20.0,google-cloud-monitoring==0.20.0,google-cloud-pubsub==0.20.0,google-cloud-resource-manager==0.20.0,google-cloud-storage==0.20.0,google-cloud-translate==0.20.0,google-cloud-vision==0.20.0,google-gax==0.14.1,googleapis-common-protos==1.3.5,grpc-google-iam-v1==0.10.1,grpc-google-logging-v2==0.10.1,grpc-google-pubsub-v1==0.10.1,grpcio==1.0.0,httplib2==0.9.2,oauth2client==3.0.0,ply==3.8,protobuf==3.1.0.post1,pyasn1==0.1.9,pyasn1-modules==0.0.8,rsa==3.4.2,six==1.10.0
___________________________________ summary ____________________________________
system-tests: skipped tests
congratulations :)
$ diff -ru .tox/system-tests/lib/python2.7/site-packages/google/cloud/storage/ storage/google/cloud/storage/ --exclude="*.pyc"
diff -ru '--exclude=*.pyc' .tox/system-tests/lib/python2.7/site-packages/google/cloud/storage/blob.py storage/google/cloud/storage/blob.py
--- .tox/system-tests/lib/python2.7/site-packages/google/cloud/storage/blob.py 2016-10-05 18:15:48.724796000 -0400
+++ storage/google/cloud/storage/blob.py 2016-10-05 18:02:55.872830411 -0400
@@ -655,6 +655,32 @@
self.acl.all().grant_read()
self.acl.save(client=client)
+ def compose(self, sources, client=None):
+ """Concatenate source blobs into this one.
+
+ :type sources: list of :class:`Blob`
+ :param sources: blobs whose contents will be composed into this blob.
+
+ :type client: :class:`~google.cloud.storage.client.Client` or
+ ``NoneType``
+ :param client: Optional. The client to use. If not passed, falls back
+ to the ``client`` stored on the blob's bucket.
+
+ :raises: :exc:`ValueError` if this blob does not have its
+ :attr:`content_type` set.
+ """
+ if self.content_type is None:
+ raise ValueError("Destination 'content_type' not set.")
+ client = self._require_client(client)
+ request = {
+ 'sourceObjects': [{'name': source.name} for source in sources],
+ 'destination': self._properties.copy(),
+ }
+ api_response = client.connection.api_request(
+ method='POST', path=self.path + '/compose', data=request,
+ _target_object=self)
+ self._set_properties(api_response)
+
cache_control = _scalar_property('cacheControl')
"""HTTP 'Cache-Control' header for this object.
```
Somehow, the tarball / wheel is being cached.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/pycodestyle_on_repo.py`
Content:
```
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Custom script to run pycodestyle on google-cloud codebase.
16
17 This runs pycodestyle as a script via subprocess but only runs it on the
18 .py files that are checked in to the repository.
19 """
20
21
22 import os
23 import subprocess
24 import sys
25
26 from script_utils import get_affected_files
27
28
29 def main():
30 """Run pycodestyle on all Python files in the repository."""
31 git_root = subprocess.check_output(
32 ['git', 'rev-parse', '--show-toplevel']).strip()
33 os.chdir(git_root)
34 candidates, _ = get_affected_files()
35 python_files = [
36 candidate for candidate in candidates if candidate.endswith('.py')]
37
38 pycodestyle_command = ['pycodestyle'] + python_files
39 status_code = subprocess.call(pycodestyle_command)
40 sys.exit(status_code)
41
42
43 if __name__ == '__main__':
44 main()
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/pycodestyle_on_repo.py b/scripts/pycodestyle_on_repo.py
--- a/scripts/pycodestyle_on_repo.py
+++ b/scripts/pycodestyle_on_repo.py
@@ -19,6 +19,8 @@
"""
+from __future__ import print_function
+
import os
import subprocess
import sys
@@ -35,9 +37,12 @@
python_files = [
candidate for candidate in candidates if candidate.endswith('.py')]
- pycodestyle_command = ['pycodestyle'] + python_files
- status_code = subprocess.call(pycodestyle_command)
- sys.exit(status_code)
+ if not python_files:
+ print('No Python files to lint, exiting.')
+ else:
+ pycodestyle_command = ['pycodestyle'] + python_files
+ status_code = subprocess.call(pycodestyle_command)
+ sys.exit(status_code)
if __name__ == '__main__':
| {"golden_diff": "diff --git a/scripts/pycodestyle_on_repo.py b/scripts/pycodestyle_on_repo.py\n--- a/scripts/pycodestyle_on_repo.py\n+++ b/scripts/pycodestyle_on_repo.py\n@@ -19,6 +19,8 @@\n \"\"\"\n \n \n+from __future__ import print_function\n+\n import os\n import subprocess\n import sys\n@@ -35,9 +37,12 @@\n python_files = [\n candidate for candidate in candidates if candidate.endswith('.py')]\n \n- pycodestyle_command = ['pycodestyle'] + python_files\n- status_code = subprocess.call(pycodestyle_command)\n- sys.exit(status_code)\n+ if not python_files:\n+ print('No Python files to lint, exiting.')\n+ else:\n+ pycodestyle_command = ['pycodestyle'] + python_files\n+ status_code = subprocess.call(pycodestyle_command)\n+ sys.exit(status_code)\n \n \n if __name__ == '__main__':\n", "issue": "Current `tox` configuration ends up testing old code\nFor example:\n\n``` bash\n$ rm -r .tox/\n$ tox -e system-tests --notest\nGLOB sdist-make: /home/tseaver/projects/agendaless/Google/src/google-cloud-python/setup.py\nsystem-tests create: /home/tseaver/projects/agendaless/Google/src/google-cloud-python/.tox/system-tests\nsystem-tests inst: /home/tseaver/projects/agendaless/Google/src/google-cloud-python/.tox/dist/google-cloud-0.20.0.zip\nsystem-tests installed: -f file:///home/tseaver/.pip/wheels,enum34==1.1.6,future==0.15.2,futures==3.0.5,gapic-google-logging-v2==0.10.1,gapic-google-pubsub-v1==0.10.1,google-cloud==0.20.0,google-cloud-bigquery==0.20.0,google-cloud-bigtable==0.20.0,google-cloud-core==0.20.0,google-cloud-datastore==0.20.0,google-cloud-dns==0.20.0,google-cloud-error-reporting==0.20.0,google-cloud-happybase==0.20.0,google-cloud-language==0.20.0,google-cloud-logging==0.20.0,google-cloud-monitoring==0.20.0,google-cloud-pubsub==0.20.0,google-cloud-resource-manager==0.20.0,google-cloud-storage==0.20.0,google-cloud-translate==0.20.0,google-cloud-vision==0.20.0,google-gax==0.14.1,googleapis-common-protos==1.3.5,grpc-google-iam-v1==0.10.1,grpc-google-logging-v2==0.10.1,grpc-google-pubsub-v1==0.10.1,grpcio==1.0.0,httplib2==0.9.2,oauth2client==3.0.0,ply==3.8,protobuf==3.1.0.post1,pyasn1==0.1.9,pyasn1-modules==0.0.8,rsa==3.4.2,six==1.10.0\n___________________________________ summary ____________________________________\n system-tests: skipped tests\n congratulations :)\n$ diff -ru .tox/system-tests/lib/python2.7/site-packages/google/cloud/storage/ storage/google/cloud/storage/ --exclude=\"*.pyc\"\ndiff -ru '--exclude=*.pyc' .tox/system-tests/lib/python2.7/site-packages/google/cloud/storage/blob.py storage/google/cloud/storage/blob.py\n--- .tox/system-tests/lib/python2.7/site-packages/google/cloud/storage/blob.py 2016-10-05 18:15:48.724796000 -0400\n+++ storage/google/cloud/storage/blob.py 2016-10-05 18:02:55.872830411 -0400\n@@ -655,6 +655,32 @@\n self.acl.all().grant_read()\n self.acl.save(client=client)\n\n+ def compose(self, sources, client=None):\n+ \"\"\"Concatenate source blobs into this one.\n+\n+ :type sources: list of :class:`Blob`\n+ :param sources: blobs whose contents will be composed into this blob.\n+\n+ :type client: :class:`~google.cloud.storage.client.Client` or\n+ ``NoneType``\n+ :param client: Optional. The client to use. 
If not passed, falls back\n+ to the ``client`` stored on the blob's bucket.\n+\n+ :raises: :exc:`ValueError` if this blob does not have its\n+ :attr:`content_type` set.\n+ \"\"\"\n+ if self.content_type is None:\n+ raise ValueError(\"Destination 'content_type' not set.\")\n+ client = self._require_client(client)\n+ request = {\n+ 'sourceObjects': [{'name': source.name} for source in sources],\n+ 'destination': self._properties.copy(),\n+ }\n+ api_response = client.connection.api_request(\n+ method='POST', path=self.path + '/compose', data=request,\n+ _target_object=self)\n+ self._set_properties(api_response)\n+\n cache_control = _scalar_property('cacheControl')\n \"\"\"HTTP 'Cache-Control' header for this object.\n```\n\nSomehow, the tarball / wheel is being cached.\n\n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Custom script to run pycodestyle on google-cloud codebase.\n\nThis runs pycodestyle as a script via subprocess but only runs it on the\n.py files that are checked in to the repository.\n\"\"\"\n\n\nimport os\nimport subprocess\nimport sys\n\nfrom script_utils import get_affected_files\n\n\ndef main():\n \"\"\"Run pycodestyle on all Python files in the repository.\"\"\"\n git_root = subprocess.check_output(\n ['git', 'rev-parse', '--show-toplevel']).strip()\n os.chdir(git_root)\n candidates, _ = get_affected_files()\n python_files = [\n candidate for candidate in candidates if candidate.endswith('.py')]\n\n pycodestyle_command = ['pycodestyle'] + python_files\n status_code = subprocess.call(pycodestyle_command)\n sys.exit(status_code)\n\n\nif __name__ == '__main__':\n main()\n", "path": "scripts/pycodestyle_on_repo.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Custom script to run pycodestyle on google-cloud codebase.\n\nThis runs pycodestyle as a script via subprocess but only runs it on the\n.py files that are checked in to the repository.\n\"\"\"\n\n\nfrom __future__ import print_function\n\nimport os\nimport subprocess\nimport sys\n\nfrom script_utils import get_affected_files\n\n\ndef main():\n \"\"\"Run pycodestyle on all Python files in the repository.\"\"\"\n git_root = subprocess.check_output(\n ['git', 'rev-parse', '--show-toplevel']).strip()\n os.chdir(git_root)\n candidates, _ = get_affected_files()\n python_files = [\n candidate for candidate in candidates if candidate.endswith('.py')]\n\n if not python_files:\n print('No 
Python files to lint, exiting.')\n else:\n pycodestyle_command = ['pycodestyle'] + python_files\n status_code = subprocess.call(pycodestyle_command)\n sys.exit(status_code)\n\n\nif __name__ == '__main__':\n main()\n", "path": "scripts/pycodestyle_on_repo.py"}]} | 1,702 | 210 |
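The heart of the patch in this record is a guard that avoids shelling out to the linter with an empty argument list when no checked-in Python files were collected. A standalone sketch of the same pattern; the command name and inputs are placeholders rather than anything taken from the dataset row:

```python
# Minimal sketch of the guard applied in the patch: only invoke the linter
# when there is something to lint. Inputs here come from argv for demo purposes.
import subprocess
import sys


def lint(python_files):
    if not python_files:
        print('No Python files to lint, exiting.')
        return 0
    # Builds e.g. ['pycodestyle', 'a.py', 'b.py'] and returns its exit status.
    return subprocess.call(['pycodestyle'] + python_files)


if __name__ == '__main__':
    sys.exit(lint(sys.argv[1:]))
```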
gh_patches_debug_8101 | rasdani/github-patches | git_diff | scrapy__scrapy-1983 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
empty WARNING message in scrapy.core.downloader.tls (1.1.0rc4/master)
Sometimes I'm getting empty warnings now, on 1.1.0rc4 and master branch.
(at least on rc3 as well)
```
2016-05-07 00:33:46 [scrapy.core.downloader.tls] WARNING:
2016-05-07 00:33:47 [scrapy.core.downloader.tls] WARNING:
2016-05-07 00:33:48 [scrapy.core.downloader.tls] WARNING:
```
It happens in a broad linkcheck crawl; so I couldn't pinpoint what URLs might be responsible for that, at this time. The only other observation so far is, that it doesn't happen on a cache-replayed run (which might be obvious, as there is no TLS there).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/core/downloader/tls.py`
Content:
```
1 import logging
2 from OpenSSL import SSL
3
4
5 logger = logging.getLogger(__name__)
6
7 METHOD_SSLv3 = 'SSLv3'
8 METHOD_TLS = 'TLS'
9 METHOD_TLSv10 = 'TLSv1.0'
10 METHOD_TLSv11 = 'TLSv1.1'
11 METHOD_TLSv12 = 'TLSv1.2'
12
13 openssl_methods = {
14 METHOD_TLS: SSL.SSLv23_METHOD, # protocol negotiation (recommended)
15 METHOD_SSLv3: SSL.SSLv3_METHOD, # SSL 3 (NOT recommended)
16 METHOD_TLSv10: SSL.TLSv1_METHOD, # TLS 1.0 only
17 METHOD_TLSv11: getattr(SSL, 'TLSv1_1_METHOD', 5), # TLS 1.1 only
18 METHOD_TLSv12: getattr(SSL, 'TLSv1_2_METHOD', 6), # TLS 1.2 only
19 }
20
21 # ClientTLSOptions requires a recent-enough version of Twisted
22 try:
23
24 # taken from twisted/twisted/internet/_sslverify.py
25 try:
26 from OpenSSL.SSL import SSL_CB_HANDSHAKE_DONE, SSL_CB_HANDSHAKE_START
27 except ImportError:
28 SSL_CB_HANDSHAKE_START = 0x10
29 SSL_CB_HANDSHAKE_DONE = 0x20
30
31 from twisted.internet._sslverify import (ClientTLSOptions,
32 _maybeSetHostNameIndication,
33 verifyHostname,
34 VerificationError)
35
36 class ScrapyClientTLSOptions(ClientTLSOptions):
37 # same as Twisted's ClientTLSOptions,
38 # except that VerificationError is caught
39 # and doesn't close the connection
40 def _identityVerifyingInfoCallback(self, connection, where, ret):
41 if where & SSL_CB_HANDSHAKE_START:
42 _maybeSetHostNameIndication(connection, self._hostnameBytes)
43 elif where & SSL_CB_HANDSHAKE_DONE:
44 try:
45 verifyHostname(connection, self._hostnameASCII)
46 except VerificationError as e:
47 logger.warning(e)
48
49 except ImportError:
50 # ImportError should not matter for older Twisted versions
51 # as the above is not used in the fallback ScrapyClientContextFactory
52 pass
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/core/downloader/tls.py b/scrapy/core/downloader/tls.py
--- a/scrapy/core/downloader/tls.py
+++ b/scrapy/core/downloader/tls.py
@@ -44,7 +44,9 @@
try:
verifyHostname(connection, self._hostnameASCII)
except VerificationError as e:
- logger.warning(e)
+ logger.warning(
+ 'Remote certificate is not valid for hostname "{}"; {}'.format(
+ self._hostnameASCII, e))
except ImportError:
# ImportError should not matter for older Twisted versions
| {"golden_diff": "diff --git a/scrapy/core/downloader/tls.py b/scrapy/core/downloader/tls.py\n--- a/scrapy/core/downloader/tls.py\n+++ b/scrapy/core/downloader/tls.py\n@@ -44,7 +44,9 @@\n try:\n verifyHostname(connection, self._hostnameASCII)\n except VerificationError as e:\n- logger.warning(e)\n+ logger.warning(\n+ 'Remote certificate is not valid for hostname \"{}\"; {}'.format(\n+ self._hostnameASCII, e))\n \n except ImportError:\n # ImportError should not matter for older Twisted versions\n", "issue": "empty WARNING message in scrapy.core.downloader.tls (1.1.0rc4/master)\nSometimes I'm getting empty warnings now, on 1.1.0rc4 and master branch.\n(at least on rc3 as well)\n\n```\n2016-05-07 00:33:46 [scrapy.core.downloader.tls] WARNING: \n2016-05-07 00:33:47 [scrapy.core.downloader.tls] WARNING: \n2016-05-07 00:33:48 [scrapy.core.downloader.tls] WARNING: \n```\n\nIt happens in a broad linkcheck crawl; so I couldn't pinpoint what URLs might be responsible for that, at this time. The only other observation so far is, that it doesn't happen on a cache-replayed run (which might be obvious, as there is no TLS there).\n\n", "before_files": [{"content": "import logging\nfrom OpenSSL import SSL\n\n\nlogger = logging.getLogger(__name__)\n\nMETHOD_SSLv3 = 'SSLv3'\nMETHOD_TLS = 'TLS'\nMETHOD_TLSv10 = 'TLSv1.0'\nMETHOD_TLSv11 = 'TLSv1.1'\nMETHOD_TLSv12 = 'TLSv1.2'\n\nopenssl_methods = {\n METHOD_TLS: SSL.SSLv23_METHOD, # protocol negotiation (recommended)\n METHOD_SSLv3: SSL.SSLv3_METHOD, # SSL 3 (NOT recommended)\n METHOD_TLSv10: SSL.TLSv1_METHOD, # TLS 1.0 only\n METHOD_TLSv11: getattr(SSL, 'TLSv1_1_METHOD', 5), # TLS 1.1 only\n METHOD_TLSv12: getattr(SSL, 'TLSv1_2_METHOD', 6), # TLS 1.2 only\n}\n\n# ClientTLSOptions requires a recent-enough version of Twisted\ntry:\n\n # taken from twisted/twisted/internet/_sslverify.py\n try:\n from OpenSSL.SSL import SSL_CB_HANDSHAKE_DONE, SSL_CB_HANDSHAKE_START\n except ImportError:\n SSL_CB_HANDSHAKE_START = 0x10\n SSL_CB_HANDSHAKE_DONE = 0x20\n\n from twisted.internet._sslverify import (ClientTLSOptions,\n _maybeSetHostNameIndication,\n verifyHostname,\n VerificationError)\n\n class ScrapyClientTLSOptions(ClientTLSOptions):\n # same as Twisted's ClientTLSOptions,\n # except that VerificationError is caught\n # and doesn't close the connection\n def _identityVerifyingInfoCallback(self, connection, where, ret):\n if where & SSL_CB_HANDSHAKE_START:\n _maybeSetHostNameIndication(connection, self._hostnameBytes)\n elif where & SSL_CB_HANDSHAKE_DONE:\n try:\n verifyHostname(connection, self._hostnameASCII)\n except VerificationError as e:\n logger.warning(e)\n\nexcept ImportError:\n # ImportError should not matter for older Twisted versions\n # as the above is not used in the fallback ScrapyClientContextFactory\n pass\n", "path": "scrapy/core/downloader/tls.py"}], "after_files": [{"content": "import logging\nfrom OpenSSL import SSL\n\n\nlogger = logging.getLogger(__name__)\n\nMETHOD_SSLv3 = 'SSLv3'\nMETHOD_TLS = 'TLS'\nMETHOD_TLSv10 = 'TLSv1.0'\nMETHOD_TLSv11 = 'TLSv1.1'\nMETHOD_TLSv12 = 'TLSv1.2'\n\nopenssl_methods = {\n METHOD_TLS: SSL.SSLv23_METHOD, # protocol negotiation (recommended)\n METHOD_SSLv3: SSL.SSLv3_METHOD, # SSL 3 (NOT recommended)\n METHOD_TLSv10: SSL.TLSv1_METHOD, # TLS 1.0 only\n METHOD_TLSv11: getattr(SSL, 'TLSv1_1_METHOD', 5), # TLS 1.1 only\n METHOD_TLSv12: getattr(SSL, 'TLSv1_2_METHOD', 6), # TLS 1.2 only\n}\n\n# ClientTLSOptions requires a recent-enough version of Twisted\ntry:\n\n # taken from twisted/twisted/internet/_sslverify.py\n 
try:\n from OpenSSL.SSL import SSL_CB_HANDSHAKE_DONE, SSL_CB_HANDSHAKE_START\n except ImportError:\n SSL_CB_HANDSHAKE_START = 0x10\n SSL_CB_HANDSHAKE_DONE = 0x20\n\n from twisted.internet._sslverify import (ClientTLSOptions,\n _maybeSetHostNameIndication,\n verifyHostname,\n VerificationError)\n\n class ScrapyClientTLSOptions(ClientTLSOptions):\n # same as Twisted's ClientTLSOptions,\n # except that VerificationError is caught\n # and doesn't close the connection\n def _identityVerifyingInfoCallback(self, connection, where, ret):\n if where & SSL_CB_HANDSHAKE_START:\n _maybeSetHostNameIndication(connection, self._hostnameBytes)\n elif where & SSL_CB_HANDSHAKE_DONE:\n try:\n verifyHostname(connection, self._hostnameASCII)\n except VerificationError as e:\n logger.warning(\n 'Remote certificate is not valid for hostname \"{}\"; {}'.format(\n self._hostnameASCII, e))\n\nexcept ImportError:\n # ImportError should not matter for older Twisted versions\n # as the above is not used in the fallback ScrapyClientContextFactory\n pass\n", "path": "scrapy/core/downloader/tls.py"}]} | 1,042 | 127 |
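The one-line Scrapy fix works because the `logging` module renders the exception argument via `str()`, and `str()` of an exception raised with no arguments is the empty string, hence the blank WARNING lines in the issue. A short reproduction of the before/after behavior; the `VerificationError` class below is a local stand-in for Twisted's, not an import of it:

```python
# Demonstrates why logger.warning(e) can emit an empty message and how
# adding hostname context fixes it. VerificationError is a stand-in class.
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger('scrapy.core.downloader.tls')


class VerificationError(Exception):
    pass


hostname = 'example.com'
try:
    raise VerificationError()  # str() of this instance is ''
except VerificationError as e:
    logger.warning(e)  # before the patch: message body is empty
    logger.warning(
        'Remote certificate is not valid for hostname "%s"; %s', hostname, e)
```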
gh_patches_debug_7730 | rasdani/github-patches | git_diff | freedomofpress__securedrop-3737 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SecureDrop backups from previous versions don't work if database migration has occurred
## Description
Backup restore (https://github.com/freedomofpress/securedrop/blob/develop/install_files/ansible-base/roles/restore/files/restore.py) script does not apply database migrations, and as such breaks the application upon backup restore. Manual workaround that appears to have no side-effects is to run `sudo dpkg-reconfigure securedrop-app-code` on the app server.
## Steps to Reproduce
1. Install SecureDrop 0.8.0
2. `securedrop-admin backup`
3. Upgrade to 0.9.0 (or higher)
4. `securedrop-admin restore`
5. Observe source and journalist interface return error 500
## Expected Behavior
The application should be operational.
## Actual Behavior
The source and journalist interfaces return error 500s.
## Comments
Running `sudo dpkg-reconfigure securedrop-app-code` calls the postinst script which will apply migration. Based on my testing, this seems to work reliably.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `install_files/ansible-base/roles/restore/files/restore.py`
Content:
```
1 #!/usr/bin/python2.7
2 """
3 This script and backup archive should be copied to the App server and run by
4 the Ansible playbook. When run (as root), it restores the contents of the 0.3
5 backup file to the machine it's run on.
6
7 python restore.py sd-backup-TIMESTAMP.tar.gz
8 """
9
10 import os
11 import subprocess
12 import sys
13 import tarfile
14
15
16 def verify_args():
17 usage = """
18 Usage: restore.py <backup file>
19
20 <backup file> Path to a SecureDrop 0.3 backup created by backup.py"
21 """
22 if len(sys.argv) != 2:
23 print(usage)
24 sys.exit(1)
25
26 if not os.path.exists(sys.argv[1]):
27 print("<backup file> '{}' not found".format(sys.argv[1]))
28 sys.exit(1)
29
30 if os.geteuid() != 0:
31 print("This program must be run as root!")
32 sys.exit(1)
33
34
35 def main():
36 verify_args()
37
38 with tarfile.open(sys.argv[1], 'r:*') as backup:
39 # This assumes that both the old installation (source of the backup)
40 # and the new installation (destination of the restore) used the
41 # default paths for various locations.
42 backup.extractall(path='/')
43
44 # Reload Tor and the web server so they pick up the new configuration
45 # If the process exits with a non-zero return code, raises an exception.
46 subprocess.check_call(['service', 'apache2', 'restart'])
47 subprocess.check_call(['service', 'tor', 'reload'])
48
49
50 if __name__ == "__main__":
51 main()
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/install_files/ansible-base/roles/restore/files/restore.py b/install_files/ansible-base/roles/restore/files/restore.py
--- a/install_files/ansible-base/roles/restore/files/restore.py
+++ b/install_files/ansible-base/roles/restore/files/restore.py
@@ -45,6 +45,8 @@
# If the process exits with a non-zero return code, raises an exception.
subprocess.check_call(['service', 'apache2', 'restart'])
subprocess.check_call(['service', 'tor', 'reload'])
+ # Apply database migrations (if backed-up version < version to restore)
+ subprocess.check_call(['dpkg-reconfigure', 'securedrop-app-code'])
if __name__ == "__main__":
| {"golden_diff": "diff --git a/install_files/ansible-base/roles/restore/files/restore.py b/install_files/ansible-base/roles/restore/files/restore.py\n--- a/install_files/ansible-base/roles/restore/files/restore.py\n+++ b/install_files/ansible-base/roles/restore/files/restore.py\n@@ -45,6 +45,8 @@\n # If the process exits with a non-zero return code, raises an exception.\n subprocess.check_call(['service', 'apache2', 'restart'])\n subprocess.check_call(['service', 'tor', 'reload'])\n+ # Apply database migrations (if backed-up version < version to restore)\n+ subprocess.check_call(['dpkg-reconfigure', 'securedrop-app-code'])\n \n \n if __name__ == \"__main__\":\n", "issue": "SecureDrop backups from previous versions don't work if database migration has occurred\n## Description\r\n\r\nBackup restore (https://github.com/freedomofpress/securedrop/blob/develop/install_files/ansible-base/roles/restore/files/restore.py) script does not apply database migrations, and as such breaks the application upon backup restore. Manual workaround that appears to have no side-effects is to run `sudo dpkg-reconfigure securedrop-app-code` on the app server.\r\n\r\n## Steps to Reproduce\r\n\r\n1. Install SecureDrop 0.8.0\r\n2. `securedrop-admin backup`\r\n3. Upgrade to 0.9.0 (or higher)\r\n4. `securedrop-admin restore`\r\n5. Observe source and journalist interface return error 500\r\n\r\n## Expected Behavior\r\n\r\nThe application should be operational.\r\n\r\n## Actual Behavior\r\n\r\nThe source and journalist interfaces return error 500s.\r\n\r\n## Comments\r\n\r\nRunning `sudo dpkg-reconfigure securedrop-app-code` calls the postinst script which will apply migration. Based on my testing, this seems to work reliably.\n", "before_files": [{"content": "#!/usr/bin/python2.7\n\"\"\"\nThis script and backup archive should be copied to the App server and run by\nthe Ansible playbook. When run (as root), it restores the contents of the 0.3\nbackup file to the machine it's run on.\n\npython restore.py sd-backup-TIMESTAMP.tar.gz\n\"\"\"\n\nimport os\nimport subprocess\nimport sys\nimport tarfile\n\n\ndef verify_args():\n usage = \"\"\"\nUsage: restore.py <backup file>\n\n <backup file> Path to a SecureDrop 0.3 backup created by backup.py\"\n \"\"\"\n if len(sys.argv) != 2:\n print(usage)\n sys.exit(1)\n\n if not os.path.exists(sys.argv[1]):\n print(\"<backup file> '{}' not found\".format(sys.argv[1]))\n sys.exit(1)\n\n if os.geteuid() != 0:\n print(\"This program must be run as root!\")\n sys.exit(1)\n\n\ndef main():\n verify_args()\n\n with tarfile.open(sys.argv[1], 'r:*') as backup:\n # This assumes that both the old installation (source of the backup)\n # and the new installation (destination of the restore) used the\n # default paths for various locations.\n backup.extractall(path='/')\n\n # Reload Tor and the web server so they pick up the new configuration\n # If the process exits with a non-zero return code, raises an exception.\n subprocess.check_call(['service', 'apache2', 'restart'])\n subprocess.check_call(['service', 'tor', 'reload'])\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "install_files/ansible-base/roles/restore/files/restore.py"}], "after_files": [{"content": "#!/usr/bin/python2.7\n\"\"\"\nThis script and backup archive should be copied to the App server and run by\nthe Ansible playbook. 
When run (as root), it restores the contents of the 0.3\nbackup file to the machine it's run on.\n\npython restore.py sd-backup-TIMESTAMP.tar.gz\n\"\"\"\n\nimport os\nimport subprocess\nimport sys\nimport tarfile\n\n\ndef verify_args():\n usage = \"\"\"\nUsage: restore.py <backup file>\n\n <backup file> Path to a SecureDrop 0.3 backup created by backup.py\"\n \"\"\"\n if len(sys.argv) != 2:\n print(usage)\n sys.exit(1)\n\n if not os.path.exists(sys.argv[1]):\n print(\"<backup file> '{}' not found\".format(sys.argv[1]))\n sys.exit(1)\n\n if os.geteuid() != 0:\n print(\"This program must be run as root!\")\n sys.exit(1)\n\n\ndef main():\n verify_args()\n\n with tarfile.open(sys.argv[1], 'r:*') as backup:\n # This assumes that both the old installation (source of the backup)\n # and the new installation (destination of the restore) used the\n # default paths for various locations.\n backup.extractall(path='/')\n\n # Reload Tor and the web server so they pick up the new configuration\n # If the process exits with a non-zero return code, raises an exception.\n subprocess.check_call(['service', 'apache2', 'restart'])\n subprocess.check_call(['service', 'tor', 'reload'])\n # Apply database migrations (if backed-up version < version to restore)\n subprocess.check_call(['dpkg-reconfigure', 'securedrop-app-code'])\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "install_files/ansible-base/roles/restore/files/restore.py"}]} | 941 | 162 |
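The SecureDrop fix relies on `dpkg-reconfigure` re-running the package's postinst script, which is where the database migrations are applied; appending it to the restore keeps an older restored database in sync with the installed code. A sketch of the patched tail of the restore flow, assumed to run as root on the app server:

```python
# Sketch of the restore tail after the patch: reload services, then re-run
# the package postinst so any pending database migrations are applied.
import subprocess


def finalize_restore():
    subprocess.check_call(['service', 'apache2', 'restart'])
    subprocess.check_call(['service', 'tor', 'reload'])
    # Re-runs the securedrop-app-code postinst; effectively a no-op when the
    # restored backup already matches the installed schema version.
    subprocess.check_call(['dpkg-reconfigure', 'securedrop-app-code'])
```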
gh_patches_debug_5171 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-2581 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing type annotation in `strawberry.fastapi.BaseContext` causes mypy to trip
## Describe the Bug
I built a custom context based on [the guide in the docs](https://strawberry.rocks/docs/guides/authentication):
```python
from strawberry.fastapi import BaseContext
class CustomContext(BaseContext):
@property
def user(self) -> User:
...
async def get_context() -> CustomContext:
return CustomContext()
```
With that I receive the following mypy error:
```shell
error: Call to untyped function "CustomContext" in typed context [no-untyped-call]
```
For now, I added the following workaround to my code:
```python
class CustomContext(BaseContext):
if typing.TYPE_CHECKING:
def __init__(self) -> None:
pass
...
```
## System Information
- Operating system: macOS Monterey
- Strawberry version (if applicable): `0.158.1`
- FastAPI version (if applicable): `0.92.0`
- mypy version (if applicable): `0.991` (also tested with `1.0.1`)
## Additional Context
I'm happy to provide a PR to address the issue myself.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/fastapi/context.py`
Content:
```
1 from typing import Any, Dict, Optional, Union
2
3 from starlette.background import BackgroundTasks
4 from starlette.requests import Request
5 from starlette.responses import Response
6 from starlette.websockets import WebSocket
7
8 CustomContext = Union["BaseContext", Dict[str, Any]]
9 MergedContext = Union[
10 "BaseContext", Dict[str, Union[Any, BackgroundTasks, Request, Response, WebSocket]]
11 ]
12
13
14 class BaseContext:
15 connection_params: Optional[Any] = None
16
17 def __init__(self):
18 self.request: Optional[Union[Request, WebSocket]] = None
19 self.background_tasks: Optional[BackgroundTasks] = None
20 self.response: Optional[Response] = None
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/strawberry/fastapi/context.py b/strawberry/fastapi/context.py
--- a/strawberry/fastapi/context.py
+++ b/strawberry/fastapi/context.py
@@ -14,7 +14,7 @@
class BaseContext:
connection_params: Optional[Any] = None
- def __init__(self):
+ def __init__(self) -> None:
self.request: Optional[Union[Request, WebSocket]] = None
self.background_tasks: Optional[BackgroundTasks] = None
self.response: Optional[Response] = None
| {"golden_diff": "diff --git a/strawberry/fastapi/context.py b/strawberry/fastapi/context.py\n--- a/strawberry/fastapi/context.py\n+++ b/strawberry/fastapi/context.py\n@@ -14,7 +14,7 @@\n class BaseContext:\n connection_params: Optional[Any] = None\n \n- def __init__(self):\n+ def __init__(self) -> None:\n self.request: Optional[Union[Request, WebSocket]] = None\n self.background_tasks: Optional[BackgroundTasks] = None\n self.response: Optional[Response] = None\n", "issue": "Missing type annotation in `strawberry.fastapi.BaseContext` causes mypy to trip\n## Describe the Bug\r\n\r\nI built a custom context based on [the guide in the docs](https://strawberry.rocks/docs/guides/authentication):\r\n\r\n```python\r\nfrom strawberry.fastapi import BaseContext\r\n\r\nclass CustomContext(BaseContext):\r\n @property\r\n def user(self) -> User:\r\n ...\r\n\r\nasync def get_context() -> CustomContext:\r\n return CustomContext()\r\n```\r\n\r\nWith that I receive the following mypy error:\r\n```shell\r\n error: Call to untyped function \"CustomContext\" in typed context [no-untyped-call]\r\n```\r\n\r\nFor now, I added the following workaround to my code:\r\n```python\r\nclass CustomContext(BaseContext):\r\n if typing.TYPE_CHECKING:\r\n def __init__(self) -> None:\r\n pass\r\n \r\n ...\r\n```\r\n\r\n## System Information\r\n\r\n - Operating system: macOS Monterey\r\n - Strawberry version (if applicable): `0.158.1`\r\n - FastAPI version (if applicable): `0.92.0`\r\n - mypy version (if applicable): `0.991` (also tested with `1.0.1`)\r\n\r\n## Additional Context\r\n\r\nI'm happy to provide a PR to address the issue myself.\r\n\n", "before_files": [{"content": "from typing import Any, Dict, Optional, Union\n\nfrom starlette.background import BackgroundTasks\nfrom starlette.requests import Request\nfrom starlette.responses import Response\nfrom starlette.websockets import WebSocket\n\nCustomContext = Union[\"BaseContext\", Dict[str, Any]]\nMergedContext = Union[\n \"BaseContext\", Dict[str, Union[Any, BackgroundTasks, Request, Response, WebSocket]]\n]\n\n\nclass BaseContext:\n connection_params: Optional[Any] = None\n\n def __init__(self):\n self.request: Optional[Union[Request, WebSocket]] = None\n self.background_tasks: Optional[BackgroundTasks] = None\n self.response: Optional[Response] = None\n", "path": "strawberry/fastapi/context.py"}], "after_files": [{"content": "from typing import Any, Dict, Optional, Union\n\nfrom starlette.background import BackgroundTasks\nfrom starlette.requests import Request\nfrom starlette.responses import Response\nfrom starlette.websockets import WebSocket\n\nCustomContext = Union[\"BaseContext\", Dict[str, Any]]\nMergedContext = Union[\n \"BaseContext\", Dict[str, Union[Any, BackgroundTasks, Request, Response, WebSocket]]\n]\n\n\nclass BaseContext:\n connection_params: Optional[Any] = None\n\n def __init__(self) -> None:\n self.request: Optional[Union[Request, WebSocket]] = None\n self.background_tasks: Optional[BackgroundTasks] = None\n self.response: Optional[Response] = None\n", "path": "strawberry/fastapi/context.py"}]} | 707 | 128 |
gh_patches_debug_18850 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-1066 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
opentelemetry-instrument command can cause recursive creation of subprocesses
**Describe your environment**
Python3.9, linux.
**Steps to reproduce**
Using `opentelemetry-instrument` with any exporter or instrumentation which invokes a python subprocess **during initialization**. For example, the `opentelemetry-exporter-gcp-trace` exporter may invoke the `gcloud` (written in python) command in a subprocess to get project information and authentication tokens. The subprocess will then try to autoinstrument, creating a recursive loop of subprocesses being created.
**What is the expected behavior?**
Auto-instrumentation should not apply to subprocesses created in the `initialize()` phase of auto-instrumentation. The `PYTHONPATH` environment variable should have the `sitecustomize.py` dirname stripped out at the beginning of `sitecustomize.py`. This would prevent subprocesses from being autoinstrumented during setup, which can cause a loop.
**What is the actual behavior?**
`PYTHONPATH` is correctly stripped later on to avoid this https://github.com/open-telemetry/opentelemetry-python-contrib/blob/e9f83e1292b0fe5f3478c9b23f3b5a5508481e68/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py#L120-L125
However, any subprocesses created in [these lines](https://github.com/open-telemetry/opentelemetry-python-contrib/blob/e9f83e1292b0fe5f3478c9b23f3b5a5508481e68/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py#L114-L117) will cause a loop.
**Additional context**
I can write a repro if necessary.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from logging import getLogger
16 from os import environ
17 from os.path import abspath, dirname, pathsep
18 from re import sub
19
20 from pkg_resources import iter_entry_points
21
22 from opentelemetry.instrumentation.dependencies import (
23 get_dist_dependency_conflicts,
24 )
25 from opentelemetry.instrumentation.distro import BaseDistro, DefaultDistro
26 from opentelemetry.instrumentation.environment_variables import (
27 OTEL_PYTHON_DISABLED_INSTRUMENTATIONS,
28 )
29 from opentelemetry.instrumentation.version import __version__
30
31 logger = getLogger(__name__)
32
33
34 def _load_distros() -> BaseDistro:
35 for entry_point in iter_entry_points("opentelemetry_distro"):
36 try:
37 distro = entry_point.load()()
38 if not isinstance(distro, BaseDistro):
39 logger.debug(
40 "%s is not an OpenTelemetry Distro. Skipping",
41 entry_point.name,
42 )
43 continue
44 logger.debug(
45 "Distribution %s will be configured", entry_point.name
46 )
47 return distro
48 except Exception as exc: # pylint: disable=broad-except
49 logger.exception(
50 "Distribution %s configuration failed", entry_point.name
51 )
52 raise exc
53 return DefaultDistro()
54
55
56 def _load_instrumentors(distro):
57 package_to_exclude = environ.get(OTEL_PYTHON_DISABLED_INSTRUMENTATIONS, [])
58 if isinstance(package_to_exclude, str):
59 package_to_exclude = package_to_exclude.split(",")
60 # to handle users entering "requests , flask" or "requests, flask" with spaces
61 package_to_exclude = [x.strip() for x in package_to_exclude]
62
63 for entry_point in iter_entry_points("opentelemetry_pre_instrument"):
64 entry_point.load()()
65
66 for entry_point in iter_entry_points("opentelemetry_instrumentor"):
67 if entry_point.name in package_to_exclude:
68 logger.debug(
69 "Instrumentation skipped for library %s", entry_point.name
70 )
71 continue
72
73 try:
74 conflict = get_dist_dependency_conflicts(entry_point.dist)
75 if conflict:
76 logger.debug(
77 "Skipping instrumentation %s: %s",
78 entry_point.name,
79 conflict,
80 )
81 continue
82
83 # tell instrumentation to not run dep checks again as we already did it above
84 distro.load_instrumentor(entry_point, skip_dep_check=True)
85 logger.debug("Instrumented %s", entry_point.name)
86 except Exception as exc: # pylint: disable=broad-except
87 logger.exception("Instrumenting of %s failed", entry_point.name)
88 raise exc
89
90 for entry_point in iter_entry_points("opentelemetry_post_instrument"):
91 entry_point.load()()
92
93
94 def _load_configurators():
95 configured = None
96 for entry_point in iter_entry_points("opentelemetry_configurator"):
97 if configured is not None:
98 logger.warning(
99 "Configuration of %s not loaded, %s already loaded",
100 entry_point.name,
101 configured,
102 )
103 continue
104 try:
105 entry_point.load()().configure(auto_instrumentation_version=__version__) # type: ignore
106 configured = entry_point.name
107 except Exception as exc: # pylint: disable=broad-except
108 logger.exception("Configuration of %s failed", entry_point.name)
109 raise exc
110
111
112 def initialize():
113 try:
114 distro = _load_distros()
115 distro.configure()
116 _load_configurators()
117 _load_instrumentors(distro)
118 except Exception: # pylint: disable=broad-except
119 logger.exception("Failed to auto initialize opentelemetry")
120 finally:
121 environ["PYTHONPATH"] = sub(
122 rf"{dirname(abspath(__file__))}{pathsep}?",
123 "",
124 environ["PYTHONPATH"],
125 )
126
127
128 initialize()
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py
--- a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py
+++ b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py
@@ -110,6 +110,13 @@
def initialize():
+ # prevents auto-instrumentation of subprocesses if code execs another python process
+ environ["PYTHONPATH"] = sub(
+ rf"{dirname(abspath(__file__))}{pathsep}?",
+ "",
+ environ["PYTHONPATH"],
+ )
+
try:
distro = _load_distros()
distro.configure()
@@ -117,12 +124,6 @@
_load_instrumentors(distro)
except Exception: # pylint: disable=broad-except
logger.exception("Failed to auto initialize opentelemetry")
- finally:
- environ["PYTHONPATH"] = sub(
- rf"{dirname(abspath(__file__))}{pathsep}?",
- "",
- environ["PYTHONPATH"],
- )
initialize()
| {"golden_diff": "diff --git a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py\n--- a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py\n+++ b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py\n@@ -110,6 +110,13 @@\n \n \n def initialize():\n+ # prevents auto-instrumentation of subprocesses if code execs another python process\n+ environ[\"PYTHONPATH\"] = sub(\n+ rf\"{dirname(abspath(__file__))}{pathsep}?\",\n+ \"\",\n+ environ[\"PYTHONPATH\"],\n+ )\n+\n try:\n distro = _load_distros()\n distro.configure()\n@@ -117,12 +124,6 @@\n _load_instrumentors(distro)\n except Exception: # pylint: disable=broad-except\n logger.exception(\"Failed to auto initialize opentelemetry\")\n- finally:\n- environ[\"PYTHONPATH\"] = sub(\n- rf\"{dirname(abspath(__file__))}{pathsep}?\",\n- \"\",\n- environ[\"PYTHONPATH\"],\n- )\n \n \n initialize()\n", "issue": "opentelemetry-instrument command can cause recursive creation of subprocesses\n**Describe your environment**\r\n\r\nPython3.9, linux.\r\n\r\n**Steps to reproduce**\r\nUsing `opentelemetry-instrument` with any exporter or instrumentation which invokes a python subprocess **during initialization**. For example, the `opentelemetry-exporter-gcp-trace` exporter may invoke the `gcloud` (written in python) command in a subprocess to get project information and authentication tokens. The subprocess will then try to autoinstrument, creating a recursive loop of subprocesses being created.\r\n\r\n**What is the expected behavior?**\r\nAuto-instrumentation should not apply to subprocesses created in the `initialize()` phase of auto-instrumentation. The `PYTHONPATH` environment variable should have the `sitecustomize.py` dirname stripped out at the beginning of `sitecustomize.py`. 
This would prevent subprocesses from being autoinstrumented during setup, which can cause a loop.\r\n\r\n**What is the actual behavior?**\r\n`PYTHONPATH` is correctly stripped later on to avoid this https://github.com/open-telemetry/opentelemetry-python-contrib/blob/e9f83e1292b0fe5f3478c9b23f3b5a5508481e68/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py#L120-L125\r\n\r\nHowever, any subprocesses created in [these lines](https://github.com/open-telemetry/opentelemetry-python-contrib/blob/e9f83e1292b0fe5f3478c9b23f3b5a5508481e68/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py#L114-L117) will cause a loop.\r\n\r\n**Additional context**\r\nI can write a repro if necessary.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom logging import getLogger\nfrom os import environ\nfrom os.path import abspath, dirname, pathsep\nfrom re import sub\n\nfrom pkg_resources import iter_entry_points\n\nfrom opentelemetry.instrumentation.dependencies import (\n get_dist_dependency_conflicts,\n)\nfrom opentelemetry.instrumentation.distro import BaseDistro, DefaultDistro\nfrom opentelemetry.instrumentation.environment_variables import (\n OTEL_PYTHON_DISABLED_INSTRUMENTATIONS,\n)\nfrom opentelemetry.instrumentation.version import __version__\n\nlogger = getLogger(__name__)\n\n\ndef _load_distros() -> BaseDistro:\n for entry_point in iter_entry_points(\"opentelemetry_distro\"):\n try:\n distro = entry_point.load()()\n if not isinstance(distro, BaseDistro):\n logger.debug(\n \"%s is not an OpenTelemetry Distro. 
Skipping\",\n entry_point.name,\n )\n continue\n logger.debug(\n \"Distribution %s will be configured\", entry_point.name\n )\n return distro\n except Exception as exc: # pylint: disable=broad-except\n logger.exception(\n \"Distribution %s configuration failed\", entry_point.name\n )\n raise exc\n return DefaultDistro()\n\n\ndef _load_instrumentors(distro):\n package_to_exclude = environ.get(OTEL_PYTHON_DISABLED_INSTRUMENTATIONS, [])\n if isinstance(package_to_exclude, str):\n package_to_exclude = package_to_exclude.split(\",\")\n # to handle users entering \"requests , flask\" or \"requests, flask\" with spaces\n package_to_exclude = [x.strip() for x in package_to_exclude]\n\n for entry_point in iter_entry_points(\"opentelemetry_pre_instrument\"):\n entry_point.load()()\n\n for entry_point in iter_entry_points(\"opentelemetry_instrumentor\"):\n if entry_point.name in package_to_exclude:\n logger.debug(\n \"Instrumentation skipped for library %s\", entry_point.name\n )\n continue\n\n try:\n conflict = get_dist_dependency_conflicts(entry_point.dist)\n if conflict:\n logger.debug(\n \"Skipping instrumentation %s: %s\",\n entry_point.name,\n conflict,\n )\n continue\n\n # tell instrumentation to not run dep checks again as we already did it above\n distro.load_instrumentor(entry_point, skip_dep_check=True)\n logger.debug(\"Instrumented %s\", entry_point.name)\n except Exception as exc: # pylint: disable=broad-except\n logger.exception(\"Instrumenting of %s failed\", entry_point.name)\n raise exc\n\n for entry_point in iter_entry_points(\"opentelemetry_post_instrument\"):\n entry_point.load()()\n\n\ndef _load_configurators():\n configured = None\n for entry_point in iter_entry_points(\"opentelemetry_configurator\"):\n if configured is not None:\n logger.warning(\n \"Configuration of %s not loaded, %s already loaded\",\n entry_point.name,\n configured,\n )\n continue\n try:\n entry_point.load()().configure(auto_instrumentation_version=__version__) # type: ignore\n configured = entry_point.name\n except Exception as exc: # pylint: disable=broad-except\n logger.exception(\"Configuration of %s failed\", entry_point.name)\n raise exc\n\n\ndef initialize():\n try:\n distro = _load_distros()\n distro.configure()\n _load_configurators()\n _load_instrumentors(distro)\n except Exception: # pylint: disable=broad-except\n logger.exception(\"Failed to auto initialize opentelemetry\")\n finally:\n environ[\"PYTHONPATH\"] = sub(\n rf\"{dirname(abspath(__file__))}{pathsep}?\",\n \"\",\n environ[\"PYTHONPATH\"],\n )\n\n\ninitialize()\n", "path": "opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom logging import getLogger\nfrom os import environ\nfrom os.path import abspath, dirname, pathsep\nfrom re import sub\n\nfrom pkg_resources import iter_entry_points\n\nfrom opentelemetry.instrumentation.dependencies import (\n 
get_dist_dependency_conflicts,\n)\nfrom opentelemetry.instrumentation.distro import BaseDistro, DefaultDistro\nfrom opentelemetry.instrumentation.environment_variables import (\n OTEL_PYTHON_DISABLED_INSTRUMENTATIONS,\n)\nfrom opentelemetry.instrumentation.version import __version__\n\nlogger = getLogger(__name__)\n\n\ndef _load_distros() -> BaseDistro:\n for entry_point in iter_entry_points(\"opentelemetry_distro\"):\n try:\n distro = entry_point.load()()\n if not isinstance(distro, BaseDistro):\n logger.debug(\n \"%s is not an OpenTelemetry Distro. Skipping\",\n entry_point.name,\n )\n continue\n logger.debug(\n \"Distribution %s will be configured\", entry_point.name\n )\n return distro\n except Exception as exc: # pylint: disable=broad-except\n logger.exception(\n \"Distribution %s configuration failed\", entry_point.name\n )\n raise exc\n return DefaultDistro()\n\n\ndef _load_instrumentors(distro):\n package_to_exclude = environ.get(OTEL_PYTHON_DISABLED_INSTRUMENTATIONS, [])\n if isinstance(package_to_exclude, str):\n package_to_exclude = package_to_exclude.split(\",\")\n # to handle users entering \"requests , flask\" or \"requests, flask\" with spaces\n package_to_exclude = [x.strip() for x in package_to_exclude]\n\n for entry_point in iter_entry_points(\"opentelemetry_pre_instrument\"):\n entry_point.load()()\n\n for entry_point in iter_entry_points(\"opentelemetry_instrumentor\"):\n if entry_point.name in package_to_exclude:\n logger.debug(\n \"Instrumentation skipped for library %s\", entry_point.name\n )\n continue\n\n try:\n conflict = get_dist_dependency_conflicts(entry_point.dist)\n if conflict:\n logger.debug(\n \"Skipping instrumentation %s: %s\",\n entry_point.name,\n conflict,\n )\n continue\n\n # tell instrumentation to not run dep checks again as we already did it above\n distro.load_instrumentor(entry_point, skip_dep_check=True)\n logger.debug(\"Instrumented %s\", entry_point.name)\n except Exception as exc: # pylint: disable=broad-except\n logger.exception(\"Instrumenting of %s failed\", entry_point.name)\n raise exc\n\n for entry_point in iter_entry_points(\"opentelemetry_post_instrument\"):\n entry_point.load()()\n\n\ndef _load_configurators():\n configured = None\n for entry_point in iter_entry_points(\"opentelemetry_configurator\"):\n if configured is not None:\n logger.warning(\n \"Configuration of %s not loaded, %s already loaded\",\n entry_point.name,\n configured,\n )\n continue\n try:\n entry_point.load()().configure(auto_instrumentation_version=__version__) # type: ignore\n configured = entry_point.name\n except Exception as exc: # pylint: disable=broad-except\n logger.exception(\"Configuration of %s failed\", entry_point.name)\n raise exc\n\n\ndef initialize():\n # prevents auto-instrumentation of subprocesses if code execs another python process\n environ[\"PYTHONPATH\"] = sub(\n rf\"{dirname(abspath(__file__))}{pathsep}?\",\n \"\",\n environ[\"PYTHONPATH\"],\n )\n\n try:\n distro = _load_distros()\n distro.configure()\n _load_configurators()\n _load_instrumentors(distro)\n except Exception: # pylint: disable=broad-except\n logger.exception(\"Failed to auto initialize opentelemetry\")\n\n\ninitialize()\n", "path": "opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py"}]} | 1,888 | 281 |
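The OpenTelemetry patch is an ordering fix: the `PYTHONPATH` entry that makes `sitecustomize.py` load must be removed before any distro, configurator, or exporter code runs, because that code may spawn Python subprocesses that would otherwise re-enter auto-instrumentation. A standalone sketch of the move; the paths and the initialization body here are illustrative, not the library's actual internals:

```python
# Strip this file's directory from PYTHONPATH first, so any Python
# subprocess spawned during initialization does not recurse into
# sitecustomize-based auto-instrumentation.
from os import environ
from os.path import abspath, dirname, pathsep
from re import sub


def _strip_self_from_pythonpath() -> None:
    here = dirname(abspath(__file__))
    environ["PYTHONPATH"] = sub(
        rf"{here}{pathsep}?", "", environ.get("PYTHONPATH", ""))


def initialize() -> None:
    _strip_self_from_pythonpath()  # must happen before any side effects
    # ... loading of distros, configurators and instrumentors would follow ...
```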
gh_patches_debug_19558 | rasdani/github-patches | git_diff | open-mmlab__mmcv-97 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mmcv error
My environment is macOS Mojave 10.14.4, Anaconda 4.4.0, Python 3.6.1.
I directly ran "pip install mmcv" and got:
"Running setup.py clean for mmcv
Failed to build mmcv
Installing collected packages: mmcv
Running setup.py install for mmcv ... error" and :
"In file included from ./mmcv/video/optflow_warp/flow_warp.cpp:1:
./mmcv/video/optflow_warp/flow_warp.hpp:3:10: fatal error: 'iostream' file not found
#include <iostream>"
Anybody help? Thank you very much.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import sys
2 from io import open # for Python 2 (identical to builtin in Python 3)
3
4 from setuptools import Extension, find_packages, setup
5
6 import numpy
7 from Cython.Distutils import build_ext
8
9 install_requires = [
10 'numpy>=1.11.1', 'pyyaml', 'six', 'addict', 'requests', 'opencv-python',
11 'Cython'
12 ]
13 if sys.version_info < (3, 3):
14 install_requires.append('backports.shutil_get_terminal_size')
15 if sys.version_info < (3, 4):
16 install_requires.extend(['enum34', 'pathlib'])
17
18
19 def readme():
20 with open('README.rst', encoding='utf-8') as f:
21 content = f.read()
22 return content
23
24
25 def get_version():
26 version_file = 'mmcv/version.py'
27 with open(version_file, 'r', encoding='utf-8') as f:
28 exec(compile(f.read(), version_file, 'exec'))
29 return locals()['__version__']
30
31
32 EXT_MODULES = [
33 Extension(
34 name='mmcv._ext',
35 sources=[
36 './mmcv/video/optflow_warp/flow_warp.cpp',
37 './mmcv/video/optflow_warp/flow_warp_module.pyx'
38 ],
39 include_dirs=[numpy.get_include()],
40 language="c++",
41 ),
42 ]
43
44 setup(
45 name='mmcv',
46 version=get_version(),
47 description='Open MMLab Computer Vision Foundation',
48 long_description=readme(),
49 keywords='computer vision',
50 packages=find_packages(),
51 classifiers=[
52 'Development Status :: 4 - Beta',
53 'License :: OSI Approved :: Apache Software License',
54 'Operating System :: OS Independent',
55 'Programming Language :: Python :: 2',
56 'Programming Language :: Python :: 2.7',
57 'Programming Language :: Python :: 3',
58 'Programming Language :: Python :: 3.4',
59 'Programming Language :: Python :: 3.5',
60 'Programming Language :: Python :: 3.6',
61 'Programming Language :: Python :: 3.7',
62 'Topic :: Utilities',
63 ],
64 url='https://github.com/open-mmlab/mmcv',
65 author='Kai Chen',
66 author_email='[email protected]',
67 setup_requires=['pytest-runner'],
68 tests_require=['pytest'],
69 install_requires=install_requires,
70 ext_modules=EXT_MODULES,
71 cmdclass={'build_ext': build_ext},
72 zip_safe=False)
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,6 +1,6 @@
+import platform
import sys
from io import open # for Python 2 (identical to builtin in Python 3)
-
from setuptools import Extension, find_packages, setup
import numpy
@@ -29,6 +29,13 @@
 return locals()['__version__']
 
 
+if platform.system() == 'Darwin':
+    extra_compile_args = ['-stdlib=libc++']
+    extra_link_args = ['-stdlib=libc++']
+else:
+    extra_compile_args = []
+    extra_link_args = []
+
 EXT_MODULES = [
     Extension(
         name='mmcv._ext',
@@ -37,7 +44,9 @@
             './mmcv/video/optflow_warp/flow_warp_module.pyx'
         ],
         include_dirs=[numpy.get_include()],
-        language="c++",
+        language='c++',
+        extra_compile_args=extra_compile_args,
+        extra_link_args=extra_link_args,
     ),
 ]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,6 +1,6 @@\n+import platform\n import sys\n from io import open # for Python 2 (identical to builtin in Python 3)\n-\n from setuptools import Extension, find_packages, setup\n \n import numpy\n@@ -29,6 +29,13 @@\n return locals()['__version__']\n \n \n+if platform.system() == 'Darwin':\n+ extra_compile_args = ['-stdlib=libc++']\n+ extra_link_args = ['-stdlib=libc++']\n+else:\n+ extra_compile_args = []\n+ extra_link_args = []\n+\n EXT_MODULES = [\n Extension(\n name='mmcv._ext',\n@@ -37,7 +44,9 @@\n './mmcv/video/optflow_warp/flow_warp_module.pyx'\n ],\n include_dirs=[numpy.get_include()],\n- language=\"c++\",\n+ language='c++',\n+ extra_compile_args=extra_compile_args,\n+ extra_link_args=extra_link_args,\n ),\n ]\n", "issue": "mmcv error\nMy environment is macOS Mojave 10.14.4, Anaconda 4.4.0,Python 3.6.1.\r\n I directly use \"pip install mmcv and got:\r\n\"Running setup.py clean for mmcv\r\nFailed to build mmcv\r\nInstalling collected packages: mmcv\r\nRunning setup.py install for mmcv ... error\" and :\r\n\"In file included from ./mmcv/video/optflow_warp/flow_warp.cpp:1:\r\n./mmcv/video/optflow_warp/flow_warp.hpp:3:10: fatal error: 'iostream' file not found\r\n#include <iostream>\"\r\nAnybody help? Thank you very much.\n", "before_files": [{"content": "import sys\nfrom io import open # for Python 2 (identical to builtin in Python 3)\n\nfrom setuptools import Extension, find_packages, setup\n\nimport numpy\nfrom Cython.Distutils import build_ext\n\ninstall_requires = [\n 'numpy>=1.11.1', 'pyyaml', 'six', 'addict', 'requests', 'opencv-python',\n 'Cython'\n]\nif sys.version_info < (3, 3):\n install_requires.append('backports.shutil_get_terminal_size')\nif sys.version_info < (3, 4):\n install_requires.extend(['enum34', 'pathlib'])\n\n\ndef readme():\n with open('README.rst', encoding='utf-8') as f:\n content = f.read()\n return content\n\n\ndef get_version():\n version_file = 'mmcv/version.py'\n with open(version_file, 'r', encoding='utf-8') as f:\n exec(compile(f.read(), version_file, 'exec'))\n return locals()['__version__']\n\n\nEXT_MODULES = [\n Extension(\n name='mmcv._ext',\n sources=[\n './mmcv/video/optflow_warp/flow_warp.cpp',\n './mmcv/video/optflow_warp/flow_warp_module.pyx'\n ],\n include_dirs=[numpy.get_include()],\n language=\"c++\",\n ),\n]\n\nsetup(\n name='mmcv',\n version=get_version(),\n description='Open MMLab Computer Vision Foundation',\n long_description=readme(),\n keywords='computer vision',\n packages=find_packages(),\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Utilities',\n ],\n url='https://github.com/open-mmlab/mmcv',\n author='Kai Chen',\n author_email='[email protected]',\n setup_requires=['pytest-runner'],\n tests_require=['pytest'],\n install_requires=install_requires,\n ext_modules=EXT_MODULES,\n cmdclass={'build_ext': build_ext},\n zip_safe=False)\n", "path": "setup.py"}], "after_files": [{"content": "import platform\nimport sys\nfrom io import open # for Python 2 (identical to builtin in Python 3)\nfrom setuptools import Extension, find_packages, setup\n\nimport 
numpy\nfrom Cython.Distutils import build_ext\n\ninstall_requires = [\n 'numpy>=1.11.1', 'pyyaml', 'six', 'addict', 'requests', 'opencv-python',\n 'Cython'\n]\nif sys.version_info < (3, 3):\n install_requires.append('backports.shutil_get_terminal_size')\nif sys.version_info < (3, 4):\n install_requires.extend(['enum34', 'pathlib'])\n\n\ndef readme():\n with open('README.rst', encoding='utf-8') as f:\n content = f.read()\n return content\n\n\ndef get_version():\n version_file = 'mmcv/version.py'\n with open(version_file, 'r', encoding='utf-8') as f:\n exec(compile(f.read(), version_file, 'exec'))\n return locals()['__version__']\n\n\nif platform.system() == 'Darwin':\n extra_compile_args = ['-stdlib=libc++']\n extra_link_args = ['-stdlib=libc++']\nelse:\n extra_compile_args = []\n extra_link_args = []\n\nEXT_MODULES = [\n Extension(\n name='mmcv._ext',\n sources=[\n './mmcv/video/optflow_warp/flow_warp.cpp',\n './mmcv/video/optflow_warp/flow_warp_module.pyx'\n ],\n include_dirs=[numpy.get_include()],\n language='c++',\n extra_compile_args=extra_compile_args,\n extra_link_args=extra_link_args,\n ),\n]\n\nsetup(\n name='mmcv',\n version=get_version(),\n description='Open MMLab Computer Vision Foundation',\n long_description=readme(),\n keywords='computer vision',\n packages=find_packages(),\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Utilities',\n ],\n url='https://github.com/open-mmlab/mmcv',\n author='Kai Chen',\n author_email='[email protected]',\n setup_requires=['pytest-runner'],\n tests_require=['pytest'],\n install_requires=install_requires,\n ext_modules=EXT_MODULES,\n cmdclass={'build_ext': build_ext},\n zip_safe=False)\n", "path": "setup.py"}]} | 1,067 | 236 |
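
The golden diff above fixes the macOS build by telling clang which C++ standard library to use: on recent macOS, `-stdlib=libc++` is needed for headers like `<iostream>` to resolve. A minimal sketch of the platform-conditional flags in isolation; the module name and source path are illustrative, only the flag logic mirrors the patch:

```python
# Sketch: choose C++ flags per platform, as in the patched setup.py.
import platform
from setuptools import Extension

if platform.system() == 'Darwin':
    extra_args = ['-stdlib=libc++']  # required by clang on recent macOS
else:
    extra_args = []

ext = Extension(
    name='example._ext',            # illustrative name
    sources=['src/flow_warp.cpp'],  # illustrative path
    language='c++',
    extra_compile_args=extra_args,
    extra_link_args=extra_args,
)
```
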
gh_patches_debug_19740 | rasdani/github-patches | git_diff | tough-dev-school__education-backend-180 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Custom tags in Mailchimp
Allow a lead magnet to specify tags that are passed through to the Mailchimp audience
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/magnets/creator.py`
Content:
```
1 from magnets.models import EmailLeadMagnetCampaign, LeadCampaignLogEntry
2 from users.creator import UserCreator
3 from users.models import User
4
5
6 class LeadCreator:
7 def __init__(self, campaign: EmailLeadMagnetCampaign, email: str, name: str = None):
8 self.data = {
9 'name': name,
10 'email': email,
11 }
12
13 self.campaign = campaign
14
15 def __call__(self):
16 self.user = self._create_user()
17 self._create_log_entry()
18
19 self.campaign.execute(self.user)
20
21 def _create_user(self) -> User:
22 return UserCreator(
23 name=self.data['name'],
24 email=self.data['email'],
25 subscribe=True,
26 )()
27
28 def _create_log_entry(self):
29 LeadCampaignLogEntry.objects.create(
30 user=self.user,
31 campaign=self.campaign,
32 )
33
```
Path: `src/shipping/shipments/course.py`
Content:
```
1 from typing import Optional
2
3 from app.tasks import invite_to_clickmeeting, invite_to_zoomus, send_mail, subscribe_to_mailchimp
4 from products.models import Course
5 from shipping import factory
6 from shipping.shipments.base import BaseShipment
7
8
9 @factory.register(Course)
10 class CourseShipment(BaseShipment):
11 @property
12 def course(self):
13 return self.stuff_to_ship
14
15 def ship(self):
16 self.invite_to_clickmeeting()
17 self.invite_to_zoomus()
18 self.subscribe_to_mailchimp()
19
20 self.send_welcome_letter()
21
22 def subscribe_to_mailchimp(self):
23 if self.course.mailchimp_list_id is not None:
24 subscribe_to_mailchimp.delay(
25 list_id=self.course.mailchimp_list_id,
26 user_id=self.user.pk,
27 tags=[self.course.slug],
28 )
29
30 def invite_to_clickmeeting(self):
31 if self.course.clickmeeting_room_url is not None:
32 invite_to_clickmeeting.delay(
33 room_url=self.course.clickmeeting_room_url,
34 email=self.user.email,
35 )
36
37 def invite_to_zoomus(self):
38 if self.course.zoomus_webinar_id is not None and len(self.course.zoomus_webinar_id):
39 invite_to_zoomus.delay(
40 webinar_id=self.course.zoomus_webinar_id,
41 user_id=self.user.id,
42 )
43
44 def send_welcome_letter(self):
45 if self.welcome_letter_template_id is not None:
46 send_mail.delay(
47 to=self.user.email,
48 template_id=self.welcome_letter_template_id,
49 ctx=self.get_template_context(),
50 disable_antispam=True,
51 )
52
53 def get_template_context(self) -> dict:
54 return {
55 'name': self.course.name,
56 'slug': self.course.slug,
57 'name_genitive': self.course.name_genitive,
58 **self.get_gift_template_context(),
59 }
60
61 @property
62 def welcome_letter_template_id(self) -> Optional[str]:
63 """Get special gift template letter id if order is a gift and it is present"""
64 template_id = self.course.welcome_letter_template_id
65
66 if self.order is not None and self.order.giver is not None: # this is a gift
67 template_id = self.course.gift_welcome_letter_template_id or self.course.welcome_letter_template_id
68
69 if template_id is None or not len(template_id): # fuck this null=True in CharFields
70 return None
71
72 return template_id
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/magnets/creator.py b/src/magnets/creator.py
--- a/src/magnets/creator.py
+++ b/src/magnets/creator.py
@@ -23,6 +23,7 @@
             name=self.data['name'],
             email=self.data['email'],
             subscribe=True,
+            tags=self.tags,
         )()
 
     def _create_log_entry(self):
@@ -30,3 +31,7 @@
             user=self.user,
             campaign=self.campaign,
         )
+
+    @property
+    def tags(self):
+        return [f'{self.campaign.slug}-lead-magnet']
diff --git a/src/shipping/shipments/course.py b/src/shipping/shipments/course.py
--- a/src/shipping/shipments/course.py
+++ b/src/shipping/shipments/course.py
@@ -24,7 +24,7 @@
             subscribe_to_mailchimp.delay(
                 list_id=self.course.mailchimp_list_id,
                 user_id=self.user.pk,
-                tags=[self.course.slug],
+                tags=[self.course.slug, f'{self.course.slug}-purchased'],
             )
def invite_to_clickmeeting(self):
| {"golden_diff": "diff --git a/src/magnets/creator.py b/src/magnets/creator.py\n--- a/src/magnets/creator.py\n+++ b/src/magnets/creator.py\n@@ -23,6 +23,7 @@\n name=self.data['name'],\n email=self.data['email'],\n subscribe=True,\n+ tags=self.tags,\n )()\n \n def _create_log_entry(self):\n@@ -30,3 +31,7 @@\n user=self.user,\n campaign=self.campaign,\n )\n+\n+ @property\n+ def tags(self):\n+ return [f'{self.campaign.slug}-lead-magnet']\ndiff --git a/src/shipping/shipments/course.py b/src/shipping/shipments/course.py\n--- a/src/shipping/shipments/course.py\n+++ b/src/shipping/shipments/course.py\n@@ -24,7 +24,7 @@\n subscribe_to_mailchimp.delay(\n list_id=self.course.mailchimp_list_id,\n user_id=self.user.pk,\n- tags=[self.course.slug],\n+ tags=[self.course.slug, f'{self.course.slug}-purchased'],\n )\n \n def invite_to_clickmeeting(self):\n", "issue": "\u041a\u0430\u0441\u0442\u043e\u043c\u043d\u044b\u0435 \u0442\u0435\u0433\u0438 \u0432 \u043c\u0435\u0439\u043b\u0447\u0438\u043c\u043f\u0435\n\u0427\u0442\u043e\u0431\u044b \u043c\u043e\u0436\u043d\u043e \u0431\u044b\u043b\u043e \u0432 \u043b\u0438\u0434-\u043c\u0430\u0433\u043d\u0438\u0442\u0435 \u0443\u043a\u0430\u0437\u0430\u0442\u044c \u0442\u0435\u0433\u0438, \u043a\u043e\u0442\u043e\u0440\u044b\u0435 \u043f\u0440\u043e\u0431\u0440\u0430\u0441\u044b\u0432\u0430\u044e\u0442\u0441\u044f \u0432 \u0430\u0443\u0434\u0438\u0442\u043e\u0440\u0438\u044e \u043c\u0435\u0439\u043b\u0447\u0438\u043c\u043f\u0430\n", "before_files": [{"content": "from magnets.models import EmailLeadMagnetCampaign, LeadCampaignLogEntry\nfrom users.creator import UserCreator\nfrom users.models import User\n\n\nclass LeadCreator:\n def __init__(self, campaign: EmailLeadMagnetCampaign, email: str, name: str = None):\n self.data = {\n 'name': name,\n 'email': email,\n }\n\n self.campaign = campaign\n\n def __call__(self):\n self.user = self._create_user()\n self._create_log_entry()\n\n self.campaign.execute(self.user)\n\n def _create_user(self) -> User:\n return UserCreator(\n name=self.data['name'],\n email=self.data['email'],\n subscribe=True,\n )()\n\n def _create_log_entry(self):\n LeadCampaignLogEntry.objects.create(\n user=self.user,\n campaign=self.campaign,\n )\n", "path": "src/magnets/creator.py"}, {"content": "from typing import Optional\n\nfrom app.tasks import invite_to_clickmeeting, invite_to_zoomus, send_mail, subscribe_to_mailchimp\nfrom products.models import Course\nfrom shipping import factory\nfrom shipping.shipments.base import BaseShipment\n\n\[email protected](Course)\nclass CourseShipment(BaseShipment):\n @property\n def course(self):\n return self.stuff_to_ship\n\n def ship(self):\n self.invite_to_clickmeeting()\n self.invite_to_zoomus()\n self.subscribe_to_mailchimp()\n\n self.send_welcome_letter()\n\n def subscribe_to_mailchimp(self):\n if self.course.mailchimp_list_id is not None:\n subscribe_to_mailchimp.delay(\n list_id=self.course.mailchimp_list_id,\n user_id=self.user.pk,\n tags=[self.course.slug],\n )\n\n def invite_to_clickmeeting(self):\n if self.course.clickmeeting_room_url is not None:\n invite_to_clickmeeting.delay(\n room_url=self.course.clickmeeting_room_url,\n email=self.user.email,\n )\n\n def invite_to_zoomus(self):\n if self.course.zoomus_webinar_id is not None and len(self.course.zoomus_webinar_id):\n invite_to_zoomus.delay(\n webinar_id=self.course.zoomus_webinar_id,\n user_id=self.user.id,\n )\n\n def send_welcome_letter(self):\n if self.welcome_letter_template_id is not None:\n send_mail.delay(\n to=self.user.email,\n 
template_id=self.welcome_letter_template_id,\n ctx=self.get_template_context(),\n disable_antispam=True,\n )\n\n def get_template_context(self) -> dict:\n return {\n 'name': self.course.name,\n 'slug': self.course.slug,\n 'name_genitive': self.course.name_genitive,\n **self.get_gift_template_context(),\n }\n\n @property\n def welcome_letter_template_id(self) -> Optional[str]:\n \"\"\"Get special gift template letter id if order is a gift and it is present\"\"\"\n template_id = self.course.welcome_letter_template_id\n\n if self.order is not None and self.order.giver is not None: # this is a gift\n template_id = self.course.gift_welcome_letter_template_id or self.course.welcome_letter_template_id\n\n if template_id is None or not len(template_id): # fuck this null=True in CharFields\n return None\n\n return template_id\n", "path": "src/shipping/shipments/course.py"}], "after_files": [{"content": "from magnets.models import EmailLeadMagnetCampaign, LeadCampaignLogEntry\nfrom users.creator import UserCreator\nfrom users.models import User\n\n\nclass LeadCreator:\n def __init__(self, campaign: EmailLeadMagnetCampaign, email: str, name: str = None):\n self.data = {\n 'name': name,\n 'email': email,\n }\n\n self.campaign = campaign\n\n def __call__(self):\n self.user = self._create_user()\n self._create_log_entry()\n\n self.campaign.execute(self.user)\n\n def _create_user(self) -> User:\n return UserCreator(\n name=self.data['name'],\n email=self.data['email'],\n subscribe=True,\n tags=self.tags,\n )()\n\n def _create_log_entry(self):\n LeadCampaignLogEntry.objects.create(\n user=self.user,\n campaign=self.campaign,\n )\n\n @property\n def tags(self):\n return [f'{self.campaign.slug}-lead-magnet']\n", "path": "src/magnets/creator.py"}, {"content": "from typing import Optional\n\nfrom app.tasks import invite_to_clickmeeting, invite_to_zoomus, send_mail, subscribe_to_mailchimp\nfrom products.models import Course\nfrom shipping import factory\nfrom shipping.shipments.base import BaseShipment\n\n\[email protected](Course)\nclass CourseShipment(BaseShipment):\n @property\n def course(self):\n return self.stuff_to_ship\n\n def ship(self):\n self.invite_to_clickmeeting()\n self.invite_to_zoomus()\n self.subscribe_to_mailchimp()\n\n self.send_welcome_letter()\n\n def subscribe_to_mailchimp(self):\n if self.course.mailchimp_list_id is not None:\n subscribe_to_mailchimp.delay(\n list_id=self.course.mailchimp_list_id,\n user_id=self.user.pk,\n tags=[self.course.slug, f'{self.course.slug}-purchased'],\n )\n\n def invite_to_clickmeeting(self):\n if self.course.clickmeeting_room_url is not None:\n invite_to_clickmeeting.delay(\n room_url=self.course.clickmeeting_room_url,\n email=self.user.email,\n )\n\n def invite_to_zoomus(self):\n if self.course.zoomus_webinar_id is not None and len(self.course.zoomus_webinar_id):\n invite_to_zoomus.delay(\n webinar_id=self.course.zoomus_webinar_id,\n user_id=self.user.id,\n )\n\n def send_welcome_letter(self):\n if self.welcome_letter_template_id is not None:\n send_mail.delay(\n to=self.user.email,\n template_id=self.welcome_letter_template_id,\n ctx=self.get_template_context(),\n disable_antispam=True,\n )\n\n def get_template_context(self) -> dict:\n return {\n 'name': self.course.name,\n 'slug': self.course.slug,\n 'name_genitive': self.course.name_genitive,\n **self.get_gift_template_context(),\n }\n\n @property\n def welcome_letter_template_id(self) -> Optional[str]:\n \"\"\"Get special gift template letter id if order is a gift and it is present\"\"\"\n template_id 
= self.course.welcome_letter_template_id\n\n if self.order is not None and self.order.giver is not None: # this is a gift\n template_id = self.course.gift_welcome_letter_template_id or self.course.welcome_letter_template_id\n\n if template_id is None or not len(template_id): # fuck this null=True in CharFields\n return None\n\n return template_id\n", "path": "src/shipping/shipments/course.py"}]} | 1,223 | 256 |
gh_patches_debug_4029 | rasdani/github-patches | git_diff | saleor__saleor-723 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add checkout steps navigation
Now there's no explicit navigation. Using the browser back button can be dangerous in some cases, and it's not common behavior in e-commerce
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/order/views.py`
Content:
```
1 import logging
2
3 from django.conf import settings
4 from django.contrib import messages, auth
5 from django.db import transaction
6 from django.http import Http404, HttpResponseForbidden
7 from django.shortcuts import get_object_or_404, redirect
8 from django.utils.translation import ugettext as _
9 from django.template.response import TemplateResponse
10 from payments import RedirectNeeded
11
12 from .forms import PaymentDeleteForm, PaymentMethodsForm, PasswordForm
13 from .models import Order, Payment
14 from ..core.utils import get_client_ip
15 from .utils import check_order_status
16
17 logger = logging.getLogger(__name__)
18
19
20 def details(request, token):
21 orders = Order.objects.prefetch_related('groups__items')
22 order = get_object_or_404(orders, token=token)
23 groups = order.groups.all()
24 return TemplateResponse(request, 'order/details.html',
25 {'order': order, 'groups': groups})
26
27
28 def payment(request, token):
29 orders = Order.objects.prefetch_related('groups__items')
30 order = get_object_or_404(orders, token=token)
31 groups = order.groups.all()
32 payments = order.payments.all()
33 form_data = request.POST or None
34 try:
35 waiting_payment = order.payments.get(status='waiting')
36 except Payment.DoesNotExist:
37 waiting_payment = None
38 waiting_payment_form = None
39 else:
40 form_data = None
41 waiting_payment_form = PaymentDeleteForm(
42 None, order=order, initial={'payment_id': waiting_payment.id})
43 if order.is_fully_paid():
44 form_data = None
45 payment_form = None
46 if not order.is_pre_authorized():
47 payment_form = PaymentMethodsForm(form_data)
48 # FIXME: redirect if there is only one payment method
49 if payment_form.is_valid():
50 payment_method = payment_form.cleaned_data['method']
51 return redirect('order:payment', token=order.token,
52 variant=payment_method)
53 return TemplateResponse(request, 'order/payment.html',
54 {'order': order, 'groups': groups,
55 'payment_form': payment_form,
56 'waiting_payment': waiting_payment,
57 'waiting_payment_form': waiting_payment_form,
58 'payments': payments})
59
60
61 @check_order_status
62 def start_payment(request, order, variant):
63 waiting_payments = order.payments.filter(status='waiting').exists()
64 if waiting_payments:
65 return redirect('order:payment', token=order.token)
66 billing = order.billing_address
67 total = order.get_total()
68 defaults = {'total': total.gross,
69 'tax': total.tax, 'currency': total.currency,
70 'delivery': order.get_delivery_total().gross,
71 'billing_first_name': billing.first_name,
72 'billing_last_name': billing.last_name,
73 'billing_address_1': billing.street_address_1,
74 'billing_address_2': billing.street_address_2,
75 'billing_city': billing.city,
76 'billing_postcode': billing.postal_code,
77 'billing_country_code': billing.country,
78 'billing_email': order.user_email,
79 'description': _('Order %(order_number)s') % {
80 'order_number': order},
81 'billing_country_area': billing.country_area,
82 'customer_ip_address': get_client_ip(request)}
83 variant_choices = settings.CHECKOUT_PAYMENT_CHOICES
84 if variant not in [code for code, dummy_name in variant_choices]:
85 raise Http404('%r is not a valid payment variant' % (variant,))
86 with transaction.atomic():
87 order.change_status('payment-pending')
88 payment, dummy_created = Payment.objects.get_or_create(
89 variant=variant, status='waiting', order=order, defaults=defaults)
90 try:
91 form = payment.get_form(data=request.POST or None)
92 except RedirectNeeded as redirect_to:
93 return redirect(str(redirect_to))
94 except Exception:
95 logger.exception('Error communicating with the payment gateway')
96 messages.error(
97 request,
98 _('Oops, it looks like we were unable to contact the selected'
99 ' payment service'))
100 payment.change_status('error')
101 return redirect('order:payment', token=order.token)
102 template = 'order/payment/%s.html' % variant
103 return TemplateResponse(request, [template, 'order/payment/default.html'],
104 {'form': form, 'payment': payment})
105
106
107 @check_order_status
108 def cancel_payment(request, order):
109 form = PaymentDeleteForm(request.POST or None, order=order)
110 if form.is_valid():
111 with transaction.atomic():
112 form.save()
113 return redirect('order:payment', token=order.token)
114 return HttpResponseForbidden()
115
116
117 def create_password(request, token):
118 if request.user.is_authenticated():
119 return redirect('order:details', token=token)
120 order = get_object_or_404(Order, token=token)
121 email = order.user_email
122 form_data = request.POST.copy()
123 if form_data:
124 form_data.update({'email': email})
125 form = PasswordForm(form_data or None)
126
127 if form.is_valid():
128 user = form.save(request)
129 order.user = user
130 order.save(update_fields=['user'])
131 password = form_data.get('password1')
132 auth_user = auth.authenticate(email=email, password=password)
133 if auth_user is not None:
134 auth.login(request, auth_user)
135 return redirect('order:details', token=token)
136 ctx = {'form': form, 'email': email}
137 return TemplateResponse(request, 'order/create_password.html', ctx)
138
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/order/views.py b/saleor/order/views.py
--- a/saleor/order/views.py
+++ b/saleor/order/views.py
@@ -133,5 +133,5 @@
         if auth_user is not None:
             auth.login(request, auth_user)
         return redirect('order:details', token=token)
-    ctx = {'form': form, 'email': email}
+    ctx = {'form': form, 'email': email, 'order': order}
     return TemplateResponse(request, 'order/create_password.html', ctx)
| {"golden_diff": "diff --git a/saleor/order/views.py b/saleor/order/views.py\n--- a/saleor/order/views.py\n+++ b/saleor/order/views.py\n@@ -133,5 +133,5 @@\n if auth_user is not None:\n auth.login(request, auth_user)\n return redirect('order:details', token=token)\n- ctx = {'form': form, 'email': email}\n+ ctx = {'form': form, 'email': email, 'order': order}\n return TemplateResponse(request, 'order/create_password.html', ctx)\n", "issue": "Add checkout steps navigation\nNow there's no explicit navigation. Using browser back button can be dangerous in some cases and it's not a common behavior in ecommerce\n", "before_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.contrib import messages, auth\nfrom django.db import transaction\nfrom django.http import Http404, HttpResponseForbidden\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils.translation import ugettext as _\nfrom django.template.response import TemplateResponse\nfrom payments import RedirectNeeded\n\nfrom .forms import PaymentDeleteForm, PaymentMethodsForm, PasswordForm\nfrom .models import Order, Payment\nfrom ..core.utils import get_client_ip\nfrom .utils import check_order_status\n\nlogger = logging.getLogger(__name__)\n\n\ndef details(request, token):\n orders = Order.objects.prefetch_related('groups__items')\n order = get_object_or_404(orders, token=token)\n groups = order.groups.all()\n return TemplateResponse(request, 'order/details.html',\n {'order': order, 'groups': groups})\n\n\ndef payment(request, token):\n orders = Order.objects.prefetch_related('groups__items')\n order = get_object_or_404(orders, token=token)\n groups = order.groups.all()\n payments = order.payments.all()\n form_data = request.POST or None\n try:\n waiting_payment = order.payments.get(status='waiting')\n except Payment.DoesNotExist:\n waiting_payment = None\n waiting_payment_form = None\n else:\n form_data = None\n waiting_payment_form = PaymentDeleteForm(\n None, order=order, initial={'payment_id': waiting_payment.id})\n if order.is_fully_paid():\n form_data = None\n payment_form = None\n if not order.is_pre_authorized():\n payment_form = PaymentMethodsForm(form_data)\n # FIXME: redirect if there is only one payment method\n if payment_form.is_valid():\n payment_method = payment_form.cleaned_data['method']\n return redirect('order:payment', token=order.token,\n variant=payment_method)\n return TemplateResponse(request, 'order/payment.html',\n {'order': order, 'groups': groups,\n 'payment_form': payment_form,\n 'waiting_payment': waiting_payment,\n 'waiting_payment_form': waiting_payment_form,\n 'payments': payments})\n\n\n@check_order_status\ndef start_payment(request, order, variant):\n waiting_payments = order.payments.filter(status='waiting').exists()\n if waiting_payments:\n return redirect('order:payment', token=order.token)\n billing = order.billing_address\n total = order.get_total()\n defaults = {'total': total.gross,\n 'tax': total.tax, 'currency': total.currency,\n 'delivery': order.get_delivery_total().gross,\n 'billing_first_name': billing.first_name,\n 'billing_last_name': billing.last_name,\n 'billing_address_1': billing.street_address_1,\n 'billing_address_2': billing.street_address_2,\n 'billing_city': billing.city,\n 'billing_postcode': billing.postal_code,\n 'billing_country_code': billing.country,\n 'billing_email': order.user_email,\n 'description': _('Order %(order_number)s') % {\n 'order_number': order},\n 'billing_country_area': billing.country_area,\n 
'customer_ip_address': get_client_ip(request)}\n variant_choices = settings.CHECKOUT_PAYMENT_CHOICES\n if variant not in [code for code, dummy_name in variant_choices]:\n raise Http404('%r is not a valid payment variant' % (variant,))\n with transaction.atomic():\n order.change_status('payment-pending')\n payment, dummy_created = Payment.objects.get_or_create(\n variant=variant, status='waiting', order=order, defaults=defaults)\n try:\n form = payment.get_form(data=request.POST or None)\n except RedirectNeeded as redirect_to:\n return redirect(str(redirect_to))\n except Exception:\n logger.exception('Error communicating with the payment gateway')\n messages.error(\n request,\n _('Oops, it looks like we were unable to contact the selected'\n ' payment service'))\n payment.change_status('error')\n return redirect('order:payment', token=order.token)\n template = 'order/payment/%s.html' % variant\n return TemplateResponse(request, [template, 'order/payment/default.html'],\n {'form': form, 'payment': payment})\n\n\n@check_order_status\ndef cancel_payment(request, order):\n form = PaymentDeleteForm(request.POST or None, order=order)\n if form.is_valid():\n with transaction.atomic():\n form.save()\n return redirect('order:payment', token=order.token)\n return HttpResponseForbidden()\n\n\ndef create_password(request, token):\n if request.user.is_authenticated():\n return redirect('order:details', token=token)\n order = get_object_or_404(Order, token=token)\n email = order.user_email\n form_data = request.POST.copy()\n if form_data:\n form_data.update({'email': email})\n form = PasswordForm(form_data or None)\n\n if form.is_valid():\n user = form.save(request)\n order.user = user\n order.save(update_fields=['user'])\n password = form_data.get('password1')\n auth_user = auth.authenticate(email=email, password=password)\n if auth_user is not None:\n auth.login(request, auth_user)\n return redirect('order:details', token=token)\n ctx = {'form': form, 'email': email}\n return TemplateResponse(request, 'order/create_password.html', ctx)\n", "path": "saleor/order/views.py"}], "after_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.contrib import messages, auth\nfrom django.db import transaction\nfrom django.http import Http404, HttpResponseForbidden\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils.translation import ugettext as _\nfrom django.template.response import TemplateResponse\nfrom payments import RedirectNeeded\n\nfrom .forms import PaymentDeleteForm, PaymentMethodsForm, PasswordForm\nfrom .models import Order, Payment\nfrom ..core.utils import get_client_ip\nfrom .utils import check_order_status\n\nlogger = logging.getLogger(__name__)\n\n\ndef details(request, token):\n orders = Order.objects.prefetch_related('groups__items')\n order = get_object_or_404(orders, token=token)\n groups = order.groups.all()\n return TemplateResponse(request, 'order/details.html',\n {'order': order, 'groups': groups})\n\n\ndef payment(request, token):\n orders = Order.objects.prefetch_related('groups__items')\n order = get_object_or_404(orders, token=token)\n groups = order.groups.all()\n payments = order.payments.all()\n form_data = request.POST or None\n try:\n waiting_payment = order.payments.get(status='waiting')\n except Payment.DoesNotExist:\n waiting_payment = None\n waiting_payment_form = None\n else:\n form_data = None\n waiting_payment_form = PaymentDeleteForm(\n None, order=order, initial={'payment_id': waiting_payment.id})\n if 
order.is_fully_paid():\n form_data = None\n payment_form = None\n if not order.is_pre_authorized():\n payment_form = PaymentMethodsForm(form_data)\n # FIXME: redirect if there is only one payment method\n if payment_form.is_valid():\n payment_method = payment_form.cleaned_data['method']\n return redirect('order:payment', token=order.token,\n variant=payment_method)\n return TemplateResponse(request, 'order/payment.html',\n {'order': order, 'groups': groups,\n 'payment_form': payment_form,\n 'waiting_payment': waiting_payment,\n 'waiting_payment_form': waiting_payment_form,\n 'payments': payments})\n\n\n@check_order_status\ndef start_payment(request, order, variant):\n waiting_payments = order.payments.filter(status='waiting').exists()\n if waiting_payments:\n return redirect('order:payment', token=order.token)\n billing = order.billing_address\n total = order.get_total()\n defaults = {'total': total.gross,\n 'tax': total.tax, 'currency': total.currency,\n 'delivery': order.get_delivery_total().gross,\n 'billing_first_name': billing.first_name,\n 'billing_last_name': billing.last_name,\n 'billing_address_1': billing.street_address_1,\n 'billing_address_2': billing.street_address_2,\n 'billing_city': billing.city,\n 'billing_postcode': billing.postal_code,\n 'billing_country_code': billing.country,\n 'billing_email': order.user_email,\n 'description': _('Order %(order_number)s') % {\n 'order_number': order},\n 'billing_country_area': billing.country_area,\n 'customer_ip_address': get_client_ip(request)}\n variant_choices = settings.CHECKOUT_PAYMENT_CHOICES\n if variant not in [code for code, dummy_name in variant_choices]:\n raise Http404('%r is not a valid payment variant' % (variant,))\n with transaction.atomic():\n order.change_status('payment-pending')\n payment, dummy_created = Payment.objects.get_or_create(\n variant=variant, status='waiting', order=order, defaults=defaults)\n try:\n form = payment.get_form(data=request.POST or None)\n except RedirectNeeded as redirect_to:\n return redirect(str(redirect_to))\n except Exception:\n logger.exception('Error communicating with the payment gateway')\n messages.error(\n request,\n _('Oops, it looks like we were unable to contact the selected'\n ' payment service'))\n payment.change_status('error')\n return redirect('order:payment', token=order.token)\n template = 'order/payment/%s.html' % variant\n return TemplateResponse(request, [template, 'order/payment/default.html'],\n {'form': form, 'payment': payment})\n\n\n@check_order_status\ndef cancel_payment(request, order):\n form = PaymentDeleteForm(request.POST or None, order=order)\n if form.is_valid():\n with transaction.atomic():\n form.save()\n return redirect('order:payment', token=order.token)\n return HttpResponseForbidden()\n\n\ndef create_password(request, token):\n if request.user.is_authenticated():\n return redirect('order:details', token=token)\n order = get_object_or_404(Order, token=token)\n email = order.user_email\n form_data = request.POST.copy()\n if form_data:\n form_data.update({'email': email})\n form = PasswordForm(form_data or None)\n\n if form.is_valid():\n user = form.save(request)\n order.user = user\n order.save(update_fields=['user'])\n password = form_data.get('password1')\n auth_user = auth.authenticate(email=email, password=password)\n if auth_user is not None:\n auth.login(request, auth_user)\n return redirect('order:details', token=token)\n ctx = {'form': form, 'email': email, 'order': order}\n return TemplateResponse(request, 'order/create_password.html', 
ctx)\n", "path": "saleor/order/views.py"}]} | 1,750 | 125 |
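
The one-line fix passes `order` into the `create_password` template context so the page can render checkout-steps navigation like the other order views. One way such navigation data might be derived is sketched below; the step names are assumptions, not taken from the repo:

```python
# Illustrative sketch: build step-navigation entries a template could render,
# marking the current step. Step names are assumed, not saleor's own.
CHECKOUT_STEPS = ('details', 'payment', 'create-password')


def checkout_steps(current: str) -> list:
    return [{'name': step, 'active': step == current} for step in CHECKOUT_STEPS]


assert checkout_steps('payment')[1] == {'name': 'payment', 'active': True}
```
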
gh_patches_debug_15822 | rasdani/github-patches | git_diff | goauthentik__authentik-7315 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Login fails at GET /api/v3/flows/executor/default-authentication-flow/
**Describe the bug**
Authentication fails. I've tried my own auth flow, which includes a passwordless option. This log is from switching back to the default auth flow.
**To Reproduce**
Steps to reproduce the behavior:
1. Enter Username
2. Enter Password
3. Press Enter
4. See error
**Expected behavior**
This stage would normally be to select the authenticator to use. (Key or TOTP, in my case)
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Logs**
<details>
<summary>Stacktrace from authentik</summary>
```
Traceback (most recent call last):
File "/authentik/flows/views/executor.py", line 287, in get
stage_response = self.current_stage_view.dispatch(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/ak-root/venv/lib/python3.11/site-packages/django/views/generic/base.py", line 143, in dispatch
return handler(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/authentik/stages/authenticator_validate/stage.py", line 222, in get
challenges = self.get_device_challenges()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/authentik/stages/authenticator_validate/stage.py", line 157, in get_device_challenges
user_devices = list(devices_for_user(self.get_pending_user()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/authentik/stages/authenticator/__init__.py", line 93, in devices_for_user
yield from device_set
File "/ak-root/venv/lib/python3.11/site-packages/django/db/models/query.py", line 398, in __iter__
self._fetch_all()
File "/ak-root/venv/lib/python3.11/site-packages/django/db/models/query.py", line 1881, in _fetch_all
self._result_cache = list(self._iterable_class(self))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/ak-root/venv/lib/python3.11/site-packages/django/db/models/query.py", line 91, in __iter__
results = compiler.execute_sql(
^^^^^^^^^^^^^^^^^^^^^
File "/ak-root/venv/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1562, in execute_sql
cursor.execute(sql, params)
File "/ak-root/venv/lib/python3.11/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/ak-root/venv/lib/python3.11/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/ak-root/venv/lib/python3.11/site-packages/django/db/backends/utils.py", line 84, in _execute
with self.db.wrap_database_errors:
File "/ak-root/venv/lib/python3.11/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/ak-root/venv/lib/python3.11/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/ak-root/venv/lib/python3.11/site-packages/django_prometheus/db/common.py", line 69, in execute
return super().execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/ak-root/venv/lib/python3.11/site-packages/psycopg/cursor.py", line 737, in execute
raise ex.with_traceback(None)
django.db.utils.ProgrammingError: relation "authentik_stages_authenticator_static_staticdevice" does not exist
LINE 1: ...tic_staticdevice"."throttling_failure_count" FROM "authentik...
^
```
</details>
**Version and Deployment (please complete the following information):**
- authentik version: gh-next as of 10/21/2023
- Deployment: docker-compose
**Additional context**
Add any other context about the problem here.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lifecycle/system_migrations/otp_merge.py`
Content:
```
1 # flake8: noqa
2 from lifecycle.migrate import BaseMigration
3
4 SQL_STATEMENT = """
5 DELETE FROM django_migrations WHERE app = 'otp_static';
6 DELETE FROM django_migrations WHERE app = 'otp_totp';
7 -- Rename tables (static)
8 ALTER TABLE otp_static_staticdevice RENAME TO authentik_stages_authenticator_static_staticdevice;
9 ALTER TABLE otp_static_statictoken RENAME TO authentik_stages_authenticator_static_statictoken;
10 ALTER SEQUENCE otp_static_statictoken_id_seq RENAME TO authentik_stages_authenticator_static_statictoken_id_seq;
11 ALTER SEQUENCE otp_static_staticdevice_id_seq RENAME TO authentik_stages_authenticator_static_staticdevice_id_seq;
12 -- Rename tables (totp)
13 ALTER TABLE otp_totp_totpdevice RENAME TO authentik_stages_authenticator_totp_totpdevice;
14 ALTER SEQUENCE otp_totp_totpdevice_id_seq RENAME TO authentik_stages_authenticator_totp_totpdevice_id_seq;
15 """
16
17
18 class Migration(BaseMigration):
19 def needs_migration(self) -> bool:
20 self.cur.execute(
21 "select * from information_schema.tables WHERE table_name='otp_static_staticdevice'"
22 )
23 return bool(self.cur.rowcount)
24
25 def run(self):
26 self.cur.execute(SQL_STATEMENT)
27 self.fake_migration(
28 (
29 "authentik_stages_authenticator_static",
30 "0008_initial",
31 ),
32 (
33 "authentik_stages_authenticator_static",
34 "0009_throttling",
35 ),
36 (
37 "authentik_stages_authenticator_totp",
38 "0008_initial",
39 ),
40 (
41 "authentik_stages_authenticator_totp",
42 "0009_auto_20190420_0723",
43 ),
44 )
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lifecycle/system_migrations/otp_merge.py b/lifecycle/system_migrations/otp_merge.py
--- a/lifecycle/system_migrations/otp_merge.py
+++ b/lifecycle/system_migrations/otp_merge.py
@@ -2,6 +2,7 @@
from lifecycle.migrate import BaseMigration
SQL_STATEMENT = """
+BEGIN TRANSACTION;
DELETE FROM django_migrations WHERE app = 'otp_static';
DELETE FROM django_migrations WHERE app = 'otp_totp';
-- Rename tables (static)
@@ -12,6 +13,7 @@
-- Rename tables (totp)
ALTER TABLE otp_totp_totpdevice RENAME TO authentik_stages_authenticator_totp_totpdevice;
ALTER SEQUENCE otp_totp_totpdevice_id_seq RENAME TO authentik_stages_authenticator_totp_totpdevice_id_seq;
+COMMIT;
"""
| {"golden_diff": "diff --git a/lifecycle/system_migrations/otp_merge.py b/lifecycle/system_migrations/otp_merge.py\n--- a/lifecycle/system_migrations/otp_merge.py\n+++ b/lifecycle/system_migrations/otp_merge.py\n@@ -2,6 +2,7 @@\n from lifecycle.migrate import BaseMigration\n \n SQL_STATEMENT = \"\"\"\n+BEGIN TRANSACTION;\n DELETE FROM django_migrations WHERE app = 'otp_static';\n DELETE FROM django_migrations WHERE app = 'otp_totp';\n -- Rename tables (static)\n@@ -12,6 +13,7 @@\n -- Rename tables (totp)\n ALTER TABLE otp_totp_totpdevice RENAME TO authentik_stages_authenticator_totp_totpdevice;\n ALTER SEQUENCE otp_totp_totpdevice_id_seq RENAME TO authentik_stages_authenticator_totp_totpdevice_id_seq;\n+COMMIT;\n \"\"\"\n", "issue": "Login fails at GET /api/v3/flows/executor/default-authentication-flow/ \n**Describe the bug**\r\nAuthentication fails. I've tried my own auth flow, which includes a passwordless option. This log is from switching back to the default auth flow.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Enter Username\r\n2. Enter Password\r\n3. Press Enter\r\n4. See error\r\n\r\n**Expected behavior**\r\nThis stage would normally be to select the authenticator to use. (Key or TOTP, in my case)\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**Logs**\r\n<details>\r\n <summary>Stacktrace from authentik</summary>\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/authentik/flows/views/executor.py\", line 287, in get\r\n stage_response = self.current_stage_view.dispatch(request)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/ak-root/venv/lib/python3.11/site-packages/django/views/generic/base.py\", line 143, in dispatch\r\n return handler(request, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/authentik/stages/authenticator_validate/stage.py\", line 222, in get\r\n challenges = self.get_device_challenges()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/authentik/stages/authenticator_validate/stage.py\", line 157, in get_device_challenges\r\n user_devices = list(devices_for_user(self.get_pending_user()))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/authentik/stages/authenticator/__init__.py\", line 93, in devices_for_user\r\n yield from device_set\r\n File \"/ak-root/venv/lib/python3.11/site-packages/django/db/models/query.py\", line 398, in __iter__\r\n self._fetch_all()\r\n File \"/ak-root/venv/lib/python3.11/site-packages/django/db/models/query.py\", line 1881, in _fetch_all\r\n self._result_cache = list(self._iterable_class(self))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/ak-root/venv/lib/python3.11/site-packages/django/db/models/query.py\", line 91, in __iter__\r\n results = compiler.execute_sql(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/ak-root/venv/lib/python3.11/site-packages/django/db/models/sql/compiler.py\", line 1562, in execute_sql\r\n cursor.execute(sql, params)\r\n File \"/ak-root/venv/lib/python3.11/site-packages/django/db/backends/utils.py\", line 67, in execute\r\n return self._execute_with_wrappers(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/ak-root/venv/lib/python3.11/site-packages/django/db/backends/utils.py\", line 80, in _execute_with_wrappers\r\n return executor(sql, params, many, context)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/ak-root/venv/lib/python3.11/site-packages/django/db/backends/utils.py\", line 84, in _execute\r\n with self.db.wrap_database_errors:\r\n File 
\"/ak-root/venv/lib/python3.11/site-packages/django/db/utils.py\", line 91, in __exit__\r\n raise dj_exc_value.with_traceback(traceback) from exc_value\r\n File \"/ak-root/venv/lib/python3.11/site-packages/django/db/backends/utils.py\", line 89, in _execute\r\n return self.cursor.execute(sql, params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/ak-root/venv/lib/python3.11/site-packages/django_prometheus/db/common.py\", line 69, in execute\r\n return super().execute(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/ak-root/venv/lib/python3.11/site-packages/psycopg/cursor.py\", line 737, in execute\r\n raise ex.with_traceback(None)\r\ndjango.db.utils.ProgrammingError: relation \"authentik_stages_authenticator_static_staticdevice\" does not exist\r\nLINE 1: ...tic_staticdevice\".\"throttling_failure_count\" FROM \"authentik...\r\n ^\r\n```\r\n</details>\r\n\r\n\r\n**Version and Deployment (please complete the following information):**\r\n- authentik version: gh-next as of 10/21/2023\r\n- Deployment: docker-compose\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n \n", "before_files": [{"content": "# flake8: noqa\nfrom lifecycle.migrate import BaseMigration\n\nSQL_STATEMENT = \"\"\"\nDELETE FROM django_migrations WHERE app = 'otp_static';\nDELETE FROM django_migrations WHERE app = 'otp_totp';\n-- Rename tables (static)\nALTER TABLE otp_static_staticdevice RENAME TO authentik_stages_authenticator_static_staticdevice;\nALTER TABLE otp_static_statictoken RENAME TO authentik_stages_authenticator_static_statictoken;\nALTER SEQUENCE otp_static_statictoken_id_seq RENAME TO authentik_stages_authenticator_static_statictoken_id_seq;\nALTER SEQUENCE otp_static_staticdevice_id_seq RENAME TO authentik_stages_authenticator_static_staticdevice_id_seq;\n-- Rename tables (totp)\nALTER TABLE otp_totp_totpdevice RENAME TO authentik_stages_authenticator_totp_totpdevice;\nALTER SEQUENCE otp_totp_totpdevice_id_seq RENAME TO authentik_stages_authenticator_totp_totpdevice_id_seq;\n\"\"\"\n\n\nclass Migration(BaseMigration):\n def needs_migration(self) -> bool:\n self.cur.execute(\n \"select * from information_schema.tables WHERE table_name='otp_static_staticdevice'\"\n )\n return bool(self.cur.rowcount)\n\n def run(self):\n self.cur.execute(SQL_STATEMENT)\n self.fake_migration(\n (\n \"authentik_stages_authenticator_static\",\n \"0008_initial\",\n ),\n (\n \"authentik_stages_authenticator_static\",\n \"0009_throttling\",\n ),\n (\n \"authentik_stages_authenticator_totp\",\n \"0008_initial\",\n ),\n (\n \"authentik_stages_authenticator_totp\",\n \"0009_auto_20190420_0723\",\n ),\n )\n", "path": "lifecycle/system_migrations/otp_merge.py"}], "after_files": [{"content": "# flake8: noqa\nfrom lifecycle.migrate import BaseMigration\n\nSQL_STATEMENT = \"\"\"\nBEGIN TRANSACTION;\nDELETE FROM django_migrations WHERE app = 'otp_static';\nDELETE FROM django_migrations WHERE app = 'otp_totp';\n-- Rename tables (static)\nALTER TABLE otp_static_staticdevice RENAME TO authentik_stages_authenticator_static_staticdevice;\nALTER TABLE otp_static_statictoken RENAME TO authentik_stages_authenticator_static_statictoken;\nALTER SEQUENCE otp_static_statictoken_id_seq RENAME TO authentik_stages_authenticator_static_statictoken_id_seq;\nALTER SEQUENCE otp_static_staticdevice_id_seq RENAME TO authentik_stages_authenticator_static_staticdevice_id_seq;\n-- Rename tables (totp)\nALTER TABLE otp_totp_totpdevice RENAME TO authentik_stages_authenticator_totp_totpdevice;\nALTER SEQUENCE 
otp_totp_totpdevice_id_seq RENAME TO authentik_stages_authenticator_totp_totpdevice_id_seq;\nCOMMIT;\n\"\"\"\n\n\nclass Migration(BaseMigration):\n def needs_migration(self) -> bool:\n self.cur.execute(\n \"select * from information_schema.tables WHERE table_name='otp_static_staticdevice'\"\n )\n return bool(self.cur.rowcount)\n\n def run(self):\n self.cur.execute(SQL_STATEMENT)\n self.fake_migration(\n (\n \"authentik_stages_authenticator_static\",\n \"0008_initial\",\n ),\n (\n \"authentik_stages_authenticator_static\",\n \"0009_throttling\",\n ),\n (\n \"authentik_stages_authenticator_totp\",\n \"0008_initial\",\n ),\n (\n \"authentik_stages_authenticator_totp\",\n \"0009_auto_20190420_0723\",\n ),\n )\n", "path": "lifecycle/system_migrations/otp_merge.py"}]} | 1,760 | 183 |
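
The fix wraps the multi-statement rename script in `BEGIN TRANSACTION; ... COMMIT;`, so a failure partway through cannot leave only some tables renamed, which is exactly the half-migrated state the traceback shows. The same atomicity can be expressed from the driver side; a sketch with psycopg 3 (the DSN and the statement list are illustrative):

```python
# Sketch: run rename statements atomically with psycopg 3 so they all
# commit together or roll back together.
import psycopg

STATEMENTS = [  # illustrative subset of the migration
    "ALTER TABLE otp_static_staticdevice"
    " RENAME TO authentik_stages_authenticator_static_staticdevice",
    "ALTER TABLE otp_totp_totpdevice"
    " RENAME TO authentik_stages_authenticator_totp_totpdevice",
]


def run_migration(dsn: str) -> None:
    with psycopg.connect(dsn) as conn:  # e.g. "dbname=authentik" (assumed)
        with conn.transaction():        # all statements commit or none do
            for stmt in STATEMENTS:
                conn.execute(stmt)
```
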
gh_patches_debug_35030 | rasdani/github-patches | git_diff | pfnet__pytorch-pfn-extras-582 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Align terminology (`option` vs. `config`)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytorch_pfn_extras/runtime/_to.py`
Content:
```
1 from typing import Any, Dict, Optional, Type, TypeVar
2
3 import torch
4
5 import pytorch_pfn_extras as ppe
6 from pytorch_pfn_extras.runtime._runtime import DeviceLike, BaseRuntime
7
8
9 ModuleOrTensor = TypeVar('ModuleOrTensor', torch.nn.Module, torch.Tensor)
10
11
12 def to(
13 module_or_tensor: ModuleOrTensor,
14 device: DeviceLike,
15 *,
16 config: Optional[Dict[str, Any]] = None,
17 runtime_class: Optional[Type[BaseRuntime]] = None,
18 ) -> ModuleOrTensor:
19 """A function to transfer the given object to the given device.
20
21 If PyTorch's device type is given as the ``device`` argument,
22 the behavior of this function is equivalent to
23 ``module_or_tensor.to(module_or_tensor, device)``.
24
25 Otherwise, this function uses the **Runtime** mechanism.
26 This function looks for the Runtime for the device from the RuntimeRegistry
27 and delegates the actual transfer operation to it.
28
29 See also the documentation of ``ppe.runtime.BaseRuntime`` for details.
30
31 Args:
32 module_or_tensor (torch.nn.Module or torch.Tensor):
33 An object to be transferred.
34 device (torch.device or str):
35 The device that the input object is transferred to.
36 config (dict, optional):
37 A config of dictionary type that is passed to
38 ``runtime_class.__init__`` as an argument.
39 runtime_class:
40 A runtime class inherited from `BaseRuntime` class.
41 If ``None``, a runtime class is automatically selected
42 based on the ``device`` argument from the runtime registry.
43
44 Returns:
45 A `torch.Tensor` with the specified device.
46 """
47 if config is None:
48 config = {}
49 if runtime_class is None:
50 registry = ppe.runtime.runtime_registry
51 runtime_class = registry.get_runtime_class_for_device_spec(device)
52 runtime = runtime_class(device, config)
53 obj = module_or_tensor
54 if isinstance(obj, torch.nn.Module):
55 ppe.runtime._runtime._set_module_runtime_tag(obj, runtime)
56 return runtime.move_module(obj)
57 elif isinstance(obj, torch.Tensor):
58 return runtime.move_tensor(obj)
59 else:
60 raise ValueError('Unsupported type for module_or_tensor')
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pytorch_pfn_extras/runtime/_to.py b/pytorch_pfn_extras/runtime/_to.py
--- a/pytorch_pfn_extras/runtime/_to.py
+++ b/pytorch_pfn_extras/runtime/_to.py
@@ -13,8 +13,9 @@
module_or_tensor: ModuleOrTensor,
device: DeviceLike,
*,
- config: Optional[Dict[str, Any]] = None,
+ options: Optional[Dict[str, Any]] = None,
runtime_class: Optional[Type[BaseRuntime]] = None,
+ config: Optional[Dict[str, Any]] = None,
) -> ModuleOrTensor:
"""A function to transfer the given object to the given device.
@@ -33,23 +34,30 @@
An object to be transferred.
device (torch.device or str):
The device that the input object is transferred to.
- config (dict, optional):
- A config of dictionary type that is passed to
+ options (dict, optional):
+ An options of dictionary type that is passed to
``runtime_class.__init__`` as an argument.
runtime_class:
A runtime class inherited from `BaseRuntime` class.
If ``None``, a runtime class is automatically selected
based on the ``device`` argument from the runtime registry.
+ config (dict, optional):
+ DEPRECATED. Use `options`.
Returns:
A `torch.Tensor` with the specified device.
"""
-    if config is None:
-        config = {}
+    if options is None:
+        options = {}
+        if config is not None:
+            options = config
+    elif config is not None:
+        raise ValueError('options and config cannot be specified together')
+
if runtime_class is None:
registry = ppe.runtime.runtime_registry
runtime_class = registry.get_runtime_class_for_device_spec(device)
- runtime = runtime_class(device, config)
+ runtime = runtime_class(device, options)
obj = module_or_tensor
if isinstance(obj, torch.nn.Module):
ppe.runtime._runtime._set_module_runtime_tag(obj, runtime)
| {"golden_diff": "diff --git a/pytorch_pfn_extras/runtime/_to.py b/pytorch_pfn_extras/runtime/_to.py\n--- a/pytorch_pfn_extras/runtime/_to.py\n+++ b/pytorch_pfn_extras/runtime/_to.py\n@@ -13,8 +13,9 @@\n module_or_tensor: ModuleOrTensor,\n device: DeviceLike,\n *,\n- config: Optional[Dict[str, Any]] = None,\n+ options: Optional[Dict[str, Any]] = None,\n runtime_class: Optional[Type[BaseRuntime]] = None,\n+ config: Optional[Dict[str, Any]] = None,\n ) -> ModuleOrTensor:\n \"\"\"A function to transfer the given object to the given device.\n \n@@ -33,23 +34,30 @@\n An object to be transferred.\n device (torch.device or str):\n The device that the input object is transferred to.\n- config (dict, optional):\n- A config of dictionary type that is passed to\n+ options (dict, optional):\n+ An options of dictionary type that is passed to\n ``runtime_class.__init__`` as an argument.\n runtime_class:\n A runtime class inherited from `BaseRuntime` class.\n If ``None``, a runtime class is automatically selected\n based on the ``device`` argument from the runtime registry.\n+ config (dict, optional):\n+ DEPRECATED. Use `options`.\n \n Returns:\n A `torch.Tensor` with the specified device.\n \"\"\"\n- if config is None:\n- config = {}\n+ if options is None:\n+ options = {}\n+ if config is not None:\n+ options = config\n+ elif config is not None:\n+ raise ValueError('options and config cannot be specified together')\n+\n if runtime_class is None:\n registry = ppe.runtime.runtime_registry\n runtime_class = registry.get_runtime_class_for_device_spec(device)\n- runtime = runtime_class(device, config)\n+ runtime = runtime_class(device, options)\n obj = module_or_tensor\n if isinstance(obj, torch.nn.Module):\n ppe.runtime._runtime._set_module_runtime_tag(obj, runtime)\n", "issue": "Align terminology (`option` v.s. 
`config`)\n\n", "before_files": [{"content": "from typing import Any, Dict, Optional, Type, TypeVar\n\nimport torch\n\nimport pytorch_pfn_extras as ppe\nfrom pytorch_pfn_extras.runtime._runtime import DeviceLike, BaseRuntime\n\n\nModuleOrTensor = TypeVar('ModuleOrTensor', torch.nn.Module, torch.Tensor)\n\n\ndef to(\n module_or_tensor: ModuleOrTensor,\n device: DeviceLike,\n *,\n config: Optional[Dict[str, Any]] = None,\n runtime_class: Optional[Type[BaseRuntime]] = None,\n) -> ModuleOrTensor:\n \"\"\"A function to transfer the given object to the given device.\n\n If PyTorch's device type is given as the ``device`` argument,\n the behavior of this function is equivalent to\n ``module_or_tensor.to(module_or_tensor, device)``.\n\n Otherwise, this function uses the **Runtime** mechanism.\n This function looks for the Runtime for the device from the RuntimeRegistry\n and delegates the actual transfer operation to it.\n\n See also the documentation of ``ppe.runtime.BaseRuntime`` for details.\n\n Args:\n module_or_tensor (torch.nn.Module or torch.Tensor):\n An object to be transferred.\n device (torch.device or str):\n The device that the input object is transferred to.\n config (dict, optional):\n A config of dictionary type that is passed to\n ``runtime_class.__init__`` as an argument.\n runtime_class:\n A runtime class inherited from `BaseRuntime` class.\n If ``None``, a runtime class is automatically selected\n based on the ``device`` argument from the runtime registry.\n\n Returns:\n A `torch.Tensor` with the specified device.\n \"\"\"\n if config is None:\n config = {}\n if runtime_class is None:\n registry = ppe.runtime.runtime_registry\n runtime_class = registry.get_runtime_class_for_device_spec(device)\n runtime = runtime_class(device, config)\n obj = module_or_tensor\n if isinstance(obj, torch.nn.Module):\n ppe.runtime._runtime._set_module_runtime_tag(obj, runtime)\n return runtime.move_module(obj)\n elif isinstance(obj, torch.Tensor):\n return runtime.move_tensor(obj)\n else:\n raise ValueError('Unsupported type for module_or_tensor')\n", "path": "pytorch_pfn_extras/runtime/_to.py"}], "after_files": [{"content": "from typing import Any, Dict, Optional, Type, TypeVar\n\nimport torch\n\nimport pytorch_pfn_extras as ppe\nfrom pytorch_pfn_extras.runtime._runtime import DeviceLike, BaseRuntime\n\n\nModuleOrTensor = TypeVar('ModuleOrTensor', torch.nn.Module, torch.Tensor)\n\n\ndef to(\n module_or_tensor: ModuleOrTensor,\n device: DeviceLike,\n *,\n options: Optional[Dict[str, Any]] = None,\n runtime_class: Optional[Type[BaseRuntime]] = None,\n config: Optional[Dict[str, Any]] = None,\n) -> ModuleOrTensor:\n \"\"\"A function to transfer the given object to the given device.\n\n If PyTorch's device type is given as the ``device`` argument,\n the behavior of this function is equivalent to\n ``module_or_tensor.to(module_or_tensor, device)``.\n\n Otherwise, this function uses the **Runtime** mechanism.\n This function looks for the Runtime for the device from the RuntimeRegistry\n and delegates the actual transfer operation to it.\n\n See also the documentation of ``ppe.runtime.BaseRuntime`` for details.\n\n Args:\n module_or_tensor (torch.nn.Module or torch.Tensor):\n An object to be transferred.\n device (torch.device or str):\n The device that the input object is transferred to.\n options (dict, optional):\n An options of dictionary type that is passed to\n ``runtime_class.__init__`` as an argument.\n runtime_class:\n A runtime class inherited from `BaseRuntime` class.\n If ``None``, a 
runtime class is automatically selected\n based on the ``device`` argument from the runtime registry.\n config (dict, optional):\n DEPRECATED. Use `options`.\n\n Returns:\n A `torch.Tensor` with the specified device.\n \"\"\"\n if options is None:\n options = {}\n if config is not None:\n options = config\n elif config is not None:\n raise ValueError('options and config cannot be specified together')\n\n if runtime_class is None:\n registry = ppe.runtime.runtime_registry\n runtime_class = registry.get_runtime_class_for_device_spec(device)\n runtime = runtime_class(device, options)\n obj = module_or_tensor\n if isinstance(obj, torch.nn.Module):\n ppe.runtime._runtime._set_module_runtime_tag(obj, runtime)\n return runtime.move_module(obj)\n elif isinstance(obj, torch.Tensor):\n return runtime.move_tensor(obj)\n else:\n raise ValueError('Unsupported type for module_or_tensor')\n", "path": "pytorch_pfn_extras/runtime/_to.py"}]} | 867 | 465 |
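A minimal sketch of the deprecation-shim pattern the patch above applies: accepting a new keyword while keeping the old one as a deprecated alias. The function and names below are illustrative, not pytorch-pfn-extras code. Note that in the diff as recorded, the `elif` guard can never fire (it shares its condition with the `if` directly above it); the sketch orders the checks so the ambiguous call is actually caught.

```python
from typing import Any, Dict, Optional


def to_device(obj: Any, device: str, *,
              options: Optional[Dict[str, Any]] = None,
              config: Optional[Dict[str, Any]] = None) -> Any:
    """Illustrative shim: `config` survives as a deprecated alias of `options`."""
    # Check for the ambiguous call first, while both keywords are still distinguishable.
    if options is not None and config is not None:
        raise ValueError('options and config cannot be specified together')
    if config is not None:         # deprecated spelling still accepted
        options = config
    if options is None:
        options = {}
    return (obj, device, options)  # stand-in for the real transfer logic


assert to_device('x', 'cpu', config={'a': 1})[2] == {'a': 1}
assert to_device('x', 'cpu')[2] == {}
```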
gh_patches_debug_7472 | rasdani/github-patches | git_diff | getnikola__nikola-3211 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot use "ignored_assets" properly on Windows with custom themes
<!--
Before creating an issue:
* make sure you are using an up-to-date version of Nikola
* search for existing issues that might be related
Make sure to:
* provide information about your environment (below)
* include all the output you get, and any other information related to your problem
Nikola v7.6.4, as provided by Ubuntu, is NOT SUPPORTED.
If you are using this version, you should upgrade: https://getnikola.com/getting-started.html
-->
### Environment
**Python Version:** Python 3.7.1
**Nikola Version:** Nikola v8.0.1
**Operating System:** Windows 10
### Description:
The option "ignored_assets" used in theme meta files does not work properly on Windows: the relative path is only recognized when using a backslash, i.e. css\theme.css instead of css/theme.css ([see doc](https://getnikola.com/theming.html#theme-meta-files)).
### Solution:
Normalize the path to the convention of the operating system when parsing the theme meta file.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/task/copy_assets.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2019 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Copy theme assets into output."""
28
29
30 import io
31 import os
32
33 from nikola.plugin_categories import Task
34 from nikola import utils
35
36
37 class CopyAssets(Task):
38 """Copy theme assets into output."""
39
40 name = "copy_assets"
41
42 def gen_tasks(self):
43 """Create tasks to copy the assets of the whole theme chain.
44
45 If a file is present on two themes, use the version
46 from the "youngest" theme.
47 """
48 kw = {
49 "themes": self.site.THEMES,
50 "translations": self.site.translations,
51 "files_folders": self.site.config['FILES_FOLDERS'],
52 "output_folder": self.site.config['OUTPUT_FOLDER'],
53 "filters": self.site.config['FILTERS'],
54 "code_color_scheme": self.site.config['CODE_COLOR_SCHEME'],
55 "code.css_selectors": ['pre.code', '.highlight pre'],
56 "code.css_head": '/* code.css file generated by Nikola */\n',
57 "code.css_close": "\ntable.codetable { width: 100%;} td.linenos {text-align: right; width: 4em;}\n",
58 }
59 tasks = {}
60 code_css_path = os.path.join(kw['output_folder'], 'assets', 'css', 'code.css')
61 code_css_input = utils.get_asset_path('assets/css/code.css',
62 themes=kw['themes'],
63 files_folders=kw['files_folders'], output_dir=None)
64 yield self.group_task()
65
66 main_theme = utils.get_theme_path(kw['themes'][0])
67 theme_ini = utils.parse_theme_meta(main_theme)
68 if theme_ini:
69 ignored_assets = theme_ini.get("Nikola", "ignored_assets", fallback='').split(',')
70 ignored_assets = [asset_name.strip() for asset_name in ignored_assets]
71 else:
72 ignored_assets = []
73
74 for theme_name in kw['themes']:
75 src = os.path.join(utils.get_theme_path(theme_name), 'assets')
76 dst = os.path.join(kw['output_folder'], 'assets')
77 for task in utils.copy_tree(src, dst):
78 asset_name = os.path.relpath(task['name'], dst)
79 if task['name'] in tasks or asset_name in ignored_assets:
80 continue
81 tasks[task['name']] = task
82 task['uptodate'] = [utils.config_changed(kw, 'nikola.plugins.task.copy_assets')]
83 task['basename'] = self.name
84 if code_css_input:
85 if 'file_dep' not in task:
86 task['file_dep'] = []
87 task['file_dep'].append(code_css_input)
88 yield utils.apply_filters(task, kw['filters'])
89
90 # Check whether or not there is a code.css file around.
91 if not code_css_input and kw['code_color_scheme']:
92 def create_code_css():
93 from pygments.formatters import get_formatter_by_name
94 formatter = get_formatter_by_name('html', style=kw["code_color_scheme"])
95 utils.makedirs(os.path.dirname(code_css_path))
96 with io.open(code_css_path, 'w+', encoding='utf8') as outf:
97 outf.write(kw["code.css_head"])
98 outf.write(formatter.get_style_defs(kw["code.css_selectors"]))
99 outf.write(kw["code.css_close"])
100
101 if os.path.exists(code_css_path):
102 with io.open(code_css_path, 'r', encoding='utf-8') as fh:
103 testcontents = fh.read(len(kw["code.css_head"])) == kw["code.css_head"]
104 else:
105 testcontents = False
106
107 task = {
108 'basename': self.name,
109 'name': code_css_path,
110 'targets': [code_css_path],
111 'uptodate': [utils.config_changed(kw, 'nikola.plugins.task.copy_assets'), testcontents],
112 'actions': [(create_code_css, [])],
113 'clean': True,
114 }
115 yield utils.apply_filters(task, kw['filters'])
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nikola/plugins/task/copy_assets.py b/nikola/plugins/task/copy_assets.py
--- a/nikola/plugins/task/copy_assets.py
+++ b/nikola/plugins/task/copy_assets.py
@@ -67,7 +67,7 @@
theme_ini = utils.parse_theme_meta(main_theme)
if theme_ini:
ignored_assets = theme_ini.get("Nikola", "ignored_assets", fallback='').split(',')
- ignored_assets = [asset_name.strip() for asset_name in ignored_assets]
+ ignored_assets = [os.path.normpath(asset_name.strip()) for asset_name in ignored_assets]
else:
ignored_assets = []
| {"golden_diff": "diff --git a/nikola/plugins/task/copy_assets.py b/nikola/plugins/task/copy_assets.py\n--- a/nikola/plugins/task/copy_assets.py\n+++ b/nikola/plugins/task/copy_assets.py\n@@ -67,7 +67,7 @@\n theme_ini = utils.parse_theme_meta(main_theme)\n if theme_ini:\n ignored_assets = theme_ini.get(\"Nikola\", \"ignored_assets\", fallback='').split(',')\n- ignored_assets = [asset_name.strip() for asset_name in ignored_assets]\n+ ignored_assets = [os.path.normpath(asset_name.strip()) for asset_name in ignored_assets]\n else:\n ignored_assets = []\n", "issue": "Cannot use \"ignored_assets\" properly on Windows with custom themes\n<!--\r\nBefore creating an issue:\r\n* make sure you are using an up-to-date version of Nikola\r\n* search for existing issues that might be related\r\n\r\nMake sure to:\r\n* provide information about your environment (below)\r\n* include all the output you get, and any other information related to your problem\r\n\r\nNikola v7.6.4, as provided by Ubuntu, is NOT SUPPORTED.\r\nIf you are using this version, you should upgrade: https://getnikola.com/getting-started.html\r\n-->\r\n\r\n### Environment\r\n\r\n**Python Version:** Python 3.7.1\r\n\r\n**Nikola Version:** Nikola v8.0.1\r\n\r\n**Operating System:** Windows 10\r\n\r\n### Description:\r\n\r\nThe option \"ignored_assets\" used in theme meta files does not work properly on Windows: the relative path is only recognized when using a backslash, i.e. css\\theme.css instead of css/theme.css ([see doc](https://getnikola.com/theming.html#theme-meta-files)).\r\n\r\n### Solution:\r\n\r\nNormalize the path to the convention of the operating system when parsing the theme meta file.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2019 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Copy theme assets into output.\"\"\"\n\n\nimport io\nimport os\n\nfrom nikola.plugin_categories import Task\nfrom nikola import utils\n\n\nclass CopyAssets(Task):\n \"\"\"Copy theme assets into output.\"\"\"\n\n name = \"copy_assets\"\n\n def gen_tasks(self):\n \"\"\"Create tasks to copy the assets of the whole theme chain.\n\n If a file is present on two themes, use the version\n from the \"youngest\" theme.\n \"\"\"\n kw = {\n \"themes\": self.site.THEMES,\n \"translations\": self.site.translations,\n \"files_folders\": self.site.config['FILES_FOLDERS'],\n \"output_folder\": self.site.config['OUTPUT_FOLDER'],\n \"filters\": self.site.config['FILTERS'],\n \"code_color_scheme\": self.site.config['CODE_COLOR_SCHEME'],\n \"code.css_selectors\": ['pre.code', '.highlight pre'],\n \"code.css_head\": '/* code.css file generated by Nikola */\\n',\n \"code.css_close\": \"\\ntable.codetable { width: 100%;} td.linenos {text-align: right; width: 4em;}\\n\",\n }\n tasks = {}\n code_css_path = os.path.join(kw['output_folder'], 'assets', 'css', 'code.css')\n code_css_input = utils.get_asset_path('assets/css/code.css',\n themes=kw['themes'],\n files_folders=kw['files_folders'], output_dir=None)\n yield self.group_task()\n\n main_theme = utils.get_theme_path(kw['themes'][0])\n theme_ini = utils.parse_theme_meta(main_theme)\n if theme_ini:\n ignored_assets = theme_ini.get(\"Nikola\", \"ignored_assets\", fallback='').split(',')\n ignored_assets = [asset_name.strip() for asset_name in ignored_assets]\n else:\n ignored_assets = []\n\n for theme_name in kw['themes']:\n src = os.path.join(utils.get_theme_path(theme_name), 'assets')\n dst = os.path.join(kw['output_folder'], 'assets')\n for task in utils.copy_tree(src, dst):\n asset_name = os.path.relpath(task['name'], dst)\n if task['name'] in tasks or asset_name in ignored_assets:\n continue\n tasks[task['name']] = task\n task['uptodate'] = [utils.config_changed(kw, 'nikola.plugins.task.copy_assets')]\n task['basename'] = self.name\n if code_css_input:\n if 'file_dep' not in task:\n task['file_dep'] = []\n task['file_dep'].append(code_css_input)\n yield utils.apply_filters(task, kw['filters'])\n\n # Check whether or not there is a code.css file around.\n if not code_css_input and kw['code_color_scheme']:\n def create_code_css():\n from pygments.formatters import get_formatter_by_name\n formatter = get_formatter_by_name('html', style=kw[\"code_color_scheme\"])\n utils.makedirs(os.path.dirname(code_css_path))\n with io.open(code_css_path, 'w+', encoding='utf8') as outf:\n outf.write(kw[\"code.css_head\"])\n outf.write(formatter.get_style_defs(kw[\"code.css_selectors\"]))\n outf.write(kw[\"code.css_close\"])\n\n if os.path.exists(code_css_path):\n with io.open(code_css_path, 'r', encoding='utf-8') as fh:\n testcontents = fh.read(len(kw[\"code.css_head\"])) == kw[\"code.css_head\"]\n else:\n testcontents = False\n\n task = {\n 'basename': self.name,\n 'name': code_css_path,\n 'targets': [code_css_path],\n 'uptodate': [utils.config_changed(kw, 'nikola.plugins.task.copy_assets'), testcontents],\n 'actions': [(create_code_css, [])],\n 'clean': True,\n }\n yield utils.apply_filters(task, kw['filters'])\n", "path": "nikola/plugins/task/copy_assets.py"}], "after_files": [{"content": "# -*- 
coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2019 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Copy theme assets into output.\"\"\"\n\n\nimport io\nimport os\n\nfrom nikola.plugin_categories import Task\nfrom nikola import utils\n\n\nclass CopyAssets(Task):\n \"\"\"Copy theme assets into output.\"\"\"\n\n name = \"copy_assets\"\n\n def gen_tasks(self):\n \"\"\"Create tasks to copy the assets of the whole theme chain.\n\n If a file is present on two themes, use the version\n from the \"youngest\" theme.\n \"\"\"\n kw = {\n \"themes\": self.site.THEMES,\n \"translations\": self.site.translations,\n \"files_folders\": self.site.config['FILES_FOLDERS'],\n \"output_folder\": self.site.config['OUTPUT_FOLDER'],\n \"filters\": self.site.config['FILTERS'],\n \"code_color_scheme\": self.site.config['CODE_COLOR_SCHEME'],\n \"code.css_selectors\": ['pre.code', '.highlight pre'],\n \"code.css_head\": '/* code.css file generated by Nikola */\\n',\n \"code.css_close\": \"\\ntable.codetable { width: 100%;} td.linenos {text-align: right; width: 4em;}\\n\",\n }\n tasks = {}\n code_css_path = os.path.join(kw['output_folder'], 'assets', 'css', 'code.css')\n code_css_input = utils.get_asset_path('assets/css/code.css',\n themes=kw['themes'],\n files_folders=kw['files_folders'], output_dir=None)\n yield self.group_task()\n\n main_theme = utils.get_theme_path(kw['themes'][0])\n theme_ini = utils.parse_theme_meta(main_theme)\n if theme_ini:\n ignored_assets = theme_ini.get(\"Nikola\", \"ignored_assets\", fallback='').split(',')\n ignored_assets = [os.path.normpath(asset_name.strip()) for asset_name in ignored_assets]\n else:\n ignored_assets = []\n\n for theme_name in kw['themes']:\n src = os.path.join(utils.get_theme_path(theme_name), 'assets')\n dst = os.path.join(kw['output_folder'], 'assets')\n for task in utils.copy_tree(src, dst):\n asset_name = os.path.relpath(task['name'], dst)\n if task['name'] in tasks or asset_name in ignored_assets:\n continue\n tasks[task['name']] = task\n task['uptodate'] = [utils.config_changed(kw, 'nikola.plugins.task.copy_assets')]\n task['basename'] = self.name\n if code_css_input:\n if 'file_dep' not in task:\n task['file_dep'] = []\n task['file_dep'].append(code_css_input)\n yield utils.apply_filters(task, kw['filters'])\n\n # Check whether or not there is a code.css file around.\n if not code_css_input and kw['code_color_scheme']:\n def create_code_css():\n from pygments.formatters import get_formatter_by_name\n 
formatter = get_formatter_by_name('html', style=kw[\"code_color_scheme\"])\n utils.makedirs(os.path.dirname(code_css_path))\n with io.open(code_css_path, 'w+', encoding='utf8') as outf:\n outf.write(kw[\"code.css_head\"])\n outf.write(formatter.get_style_defs(kw[\"code.css_selectors\"]))\n outf.write(kw[\"code.css_close\"])\n\n if os.path.exists(code_css_path):\n with io.open(code_css_path, 'r', encoding='utf-8') as fh:\n testcontents = fh.read(len(kw[\"code.css_head\"])) == kw[\"code.css_head\"]\n else:\n testcontents = False\n\n task = {\n 'basename': self.name,\n 'name': code_css_path,\n 'targets': [code_css_path],\n 'uptodate': [utils.config_changed(kw, 'nikola.plugins.task.copy_assets'), testcontents],\n 'actions': [(create_code_css, [])],\n 'clean': True,\n }\n yield utils.apply_filters(task, kw['filters'])\n", "path": "nikola/plugins/task/copy_assets.py"}]} | 1,837 | 143 |
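A quick illustration of why `os.path.normpath` closes the Windows gap: `os.path.relpath` returns backslash-separated paths there, so forward-slash entries from the theme meta file never compare equal until both sides use the platform convention. The values below are illustrative.

```python
import os

meta_value = 'css/theme.css, js/moo.js'   # as written in the theme .meta file
ignored = [os.path.normpath(name.strip()) for name in meta_value.split(',')]

# On POSIX this is a no-op: ['css/theme.css', 'js/moo.js'].
# On Windows normpath flips the separators: ['css\\theme.css', 'js\\moo.js'],
# matching what os.path.relpath(task['name'], dst) produces there.
print(ignored)
```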
gh_patches_debug_353 | rasdani/github-patches | git_diff | sopel-irc__sopel-1044 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[announce] Send confirmation to caller after all channels announced
When Sopel is in many channels, announces are likely to be rate-limited. This makes it hard to know, for example, when it's safe to shut down the bot if announce is being used to broadcast an upgrade notice.
It's an easy fix, and I'll open a PR for it tomorrow if there are no objections.
I am as-yet undecided whether it's best to use `bot.reply()` or `bot.notice()` for this (or even `bot.msg()` via PM to the caller), but I'll think about it between now and when I open the PR, and it can always be changed before merging.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/announce.py`
Content:
```
1 # coding=utf-8
2 """
3 announce.py - Send a message to all channels
4 Copyright © 2013, Elad Alfassa, <[email protected]>
5 Licensed under the Eiffel Forum License 2.
6
7 """
8 from __future__ import unicode_literals, absolute_import, print_function, division
9
10 from sopel.module import commands, example
11
12
13 @commands('announce')
14 @example('.announce Some important message here')
15 def announce(bot, trigger):
16 """
17 Send an announcement to all channels the bot is in
18 """
19 if not trigger.admin:
20 bot.reply('Sorry, I can\'t let you do that')
21 return
22 for channel in bot.channels:
23 bot.msg(channel, '[ANNOUNCEMENT] %s' % trigger.group(2))
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sopel/modules/announce.py b/sopel/modules/announce.py
--- a/sopel/modules/announce.py
+++ b/sopel/modules/announce.py
@@ -21,3 +21,4 @@
return
for channel in bot.channels:
bot.msg(channel, '[ANNOUNCEMENT] %s' % trigger.group(2))
+ bot.reply('Announce complete.')
| {"golden_diff": "diff --git a/sopel/modules/announce.py b/sopel/modules/announce.py\n--- a/sopel/modules/announce.py\n+++ b/sopel/modules/announce.py\n@@ -21,3 +21,4 @@\n return\n for channel in bot.channels:\n bot.msg(channel, '[ANNOUNCEMENT] %s' % trigger.group(2))\n+ bot.reply('Announce complete.')\n", "issue": "[announce] Send confirmation to caller after all channels announced\nWhen Sopel is in many channels, announces are likely to be rate-limited. This makes it hard to know, for example, when it's safe to shut down the bot if announce is being used to broadcast an upgrade notice.\n\nIt's an easy fix, and I'll open a PR for it tomorrow if there are no objections.\n\nI am as-yet undecided whether it's best to use `bot.reply()` or `bot.notice()` for this (or even `bot.msg()` via PM to the caller), but I'll think about it between now and when I open the PR, and it can always be changed before merging.\n\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"\nannounce.py - Send a message to all channels\nCopyright \u00a9 2013, Elad Alfassa, <[email protected]>\nLicensed under the Eiffel Forum License 2.\n\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nfrom sopel.module import commands, example\n\n\n@commands('announce')\n@example('.announce Some important message here')\ndef announce(bot, trigger):\n \"\"\"\n Send an announcement to all channels the bot is in\n \"\"\"\n if not trigger.admin:\n bot.reply('Sorry, I can\\'t let you do that')\n return\n for channel in bot.channels:\n bot.msg(channel, '[ANNOUNCEMENT] %s' % trigger.group(2))\n", "path": "sopel/modules/announce.py"}], "after_files": [{"content": "# coding=utf-8\n\"\"\"\nannounce.py - Send a message to all channels\nCopyright \u00a9 2013, Elad Alfassa, <[email protected]>\nLicensed under the Eiffel Forum License 2.\n\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nfrom sopel.module import commands, example\n\n\n@commands('announce')\n@example('.announce Some important message here')\ndef announce(bot, trigger):\n \"\"\"\n Send an announcement to all channels the bot is in\n \"\"\"\n if not trigger.admin:\n bot.reply('Sorry, I can\\'t let you do that')\n return\n for channel in bot.channels:\n bot.msg(channel, '[ANNOUNCEMENT] %s' % trigger.group(2))\n bot.reply('Announce complete.')\n", "path": "sopel/modules/announce.py"}]} | 608 | 90 |
gh_patches_debug_60583 | rasdani/github-patches | git_diff | fonttools__fonttools-1715 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ascender and ascent
The [opentype spec ](https://docs.microsoft.com/en-gb/typography/opentype/spec/hhea) calls the first two substantive entries in the `hhea` table "`ascender`" and "`descender`". fonttools calls them "`ascent`" and "`descent`".
This was surprising! Maybe it's too late to change them, but can we at least have an alias?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Lib/fontTools/ttLib/tables/_h_h_e_a.py`
Content:
```
1 from fontTools.misc.py23 import *
2 from fontTools.misc import sstruct
3 from fontTools.misc.textTools import safeEval
4 from fontTools.misc.fixedTools import (
5 ensureVersionIsLong as fi2ve, versionToFixed as ve2fi)
6 from . import DefaultTable
7 import math
8
9
10 hheaFormat = """
11 > # big endian
12 tableVersion: L
13 ascent: h
14 descent: h
15 lineGap: h
16 advanceWidthMax: H
17 minLeftSideBearing: h
18 minRightSideBearing: h
19 xMaxExtent: h
20 caretSlopeRise: h
21 caretSlopeRun: h
22 caretOffset: h
23 reserved0: h
24 reserved1: h
25 reserved2: h
26 reserved3: h
27 metricDataFormat: h
28 numberOfHMetrics: H
29 """
30
31
32 class table__h_h_e_a(DefaultTable.DefaultTable):
33
34 # Note: Keep in sync with table__v_h_e_a
35
36 dependencies = ['hmtx', 'glyf', 'CFF ']
37
38 def decompile(self, data, ttFont):
39 sstruct.unpack(hheaFormat, data, self)
40
41 def compile(self, ttFont):
42 if ttFont.recalcBBoxes and (ttFont.isLoaded('glyf') or ttFont.isLoaded('CFF ')):
43 self.recalc(ttFont)
44 self.tableVersion = fi2ve(self.tableVersion)
45 return sstruct.pack(hheaFormat, self)
46
47 def recalc(self, ttFont):
48 if 'hmtx' in ttFont:
49 hmtxTable = ttFont['hmtx']
50 self.advanceWidthMax = max(adv for adv, _ in hmtxTable.metrics.values())
51
52 boundsWidthDict = {}
53 if 'glyf' in ttFont:
54 glyfTable = ttFont['glyf']
55 for name in ttFont.getGlyphOrder():
56 g = glyfTable[name]
57 if g.numberOfContours == 0:
58 continue
59 if g.numberOfContours < 0 and not hasattr(g, "xMax"):
60 # Composite glyph without extents set.
61 # Calculate those.
62 g.recalcBounds(glyfTable)
63 boundsWidthDict[name] = g.xMax - g.xMin
64 elif 'CFF ' in ttFont:
65 topDict = ttFont['CFF '].cff.topDictIndex[0]
66 charStrings = topDict.CharStrings
67 for name in ttFont.getGlyphOrder():
68 cs = charStrings[name]
69 bounds = cs.calcBounds(charStrings)
70 if bounds is not None:
71 boundsWidthDict[name] = int(
72 math.ceil(bounds[2]) - math.floor(bounds[0]))
73
74 if boundsWidthDict:
75 minLeftSideBearing = float('inf')
76 minRightSideBearing = float('inf')
77 xMaxExtent = -float('inf')
78 for name, boundsWidth in boundsWidthDict.items():
79 advanceWidth, lsb = hmtxTable[name]
80 rsb = advanceWidth - lsb - boundsWidth
81 extent = lsb + boundsWidth
82 minLeftSideBearing = min(minLeftSideBearing, lsb)
83 minRightSideBearing = min(minRightSideBearing, rsb)
84 xMaxExtent = max(xMaxExtent, extent)
85 self.minLeftSideBearing = minLeftSideBearing
86 self.minRightSideBearing = minRightSideBearing
87 self.xMaxExtent = xMaxExtent
88
89 else: # No glyph has outlines.
90 self.minLeftSideBearing = 0
91 self.minRightSideBearing = 0
92 self.xMaxExtent = 0
93
94 def toXML(self, writer, ttFont):
95 formatstring, names, fixes = sstruct.getformat(hheaFormat)
96 for name in names:
97 value = getattr(self, name)
98 if name == "tableVersion":
99 value = fi2ve(value)
100 value = "0x%08x" % value
101 writer.simpletag(name, value=value)
102 writer.newline()
103
104 def fromXML(self, name, attrs, content, ttFont):
105 if name == "tableVersion":
106 setattr(self, name, ve2fi(attrs["value"]))
107 return
108 setattr(self, name, safeEval(attrs["value"]))
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Lib/fontTools/ttLib/tables/_h_h_e_a.py b/Lib/fontTools/ttLib/tables/_h_h_e_a.py
--- a/Lib/fontTools/ttLib/tables/_h_h_e_a.py
+++ b/Lib/fontTools/ttLib/tables/_h_h_e_a.py
@@ -35,6 +35,19 @@
dependencies = ['hmtx', 'glyf', 'CFF ']
+ # OpenType spec renamed these, add aliases for compatibility
+ @property
+ def ascender(self): return self.ascent
+
+ @ascender.setter
+ def ascender(self,value): self.ascent = value
+
+ @property
+ def descender(self): return self.descent
+
+ @descender.setter
+ def descender(self,value): self.descent = value
+
def decompile(self, data, ttFont):
sstruct.unpack(hheaFormat, data, self)
| {"golden_diff": "diff --git a/Lib/fontTools/ttLib/tables/_h_h_e_a.py b/Lib/fontTools/ttLib/tables/_h_h_e_a.py\n--- a/Lib/fontTools/ttLib/tables/_h_h_e_a.py\n+++ b/Lib/fontTools/ttLib/tables/_h_h_e_a.py\n@@ -35,6 +35,19 @@\n \n \tdependencies = ['hmtx', 'glyf', 'CFF ']\n \n+\t# OpenType spec renamed these, add aliases for compatibility\n+\t@property\n+\tdef ascender(self): return self.ascent\n+\n+\[email protected]\n+\tdef ascender(self,value): self.ascent = value\n+\n+\t@property\n+\tdef descender(self): return self.descent\n+\n+\[email protected]\n+\tdef descender(self,value): self.descent = value\n+\n \tdef decompile(self, data, ttFont):\n \t\tsstruct.unpack(hheaFormat, data, self)\n", "issue": "ascender and ascent\nThe [opentype spec ](https://docs.microsoft.com/en-gb/typography/opentype/spec/hhea) calls the first two substantive entries in the `hhea` table \"`ascender`\" and \"`descender`\". fonttools calls them \"`ascent`\" and \"`descent`\".\r\n\r\nThis was surprising! Maybe it's too late to change then but can we at least have an alias?\n", "before_files": [{"content": "from fontTools.misc.py23 import *\nfrom fontTools.misc import sstruct\nfrom fontTools.misc.textTools import safeEval\nfrom fontTools.misc.fixedTools import (\n\tensureVersionIsLong as fi2ve, versionToFixed as ve2fi)\nfrom . import DefaultTable\nimport math\n\n\nhheaFormat = \"\"\"\n\t\t> # big endian\n\t\ttableVersion: L\n\t\tascent: h\n\t\tdescent: h\n\t\tlineGap: h\n\t\tadvanceWidthMax: H\n\t\tminLeftSideBearing: h\n\t\tminRightSideBearing: h\n\t\txMaxExtent: h\n\t\tcaretSlopeRise: h\n\t\tcaretSlopeRun: h\n\t\tcaretOffset: h\n\t\treserved0: h\n\t\treserved1: h\n\t\treserved2: h\n\t\treserved3: h\n\t\tmetricDataFormat: h\n\t\tnumberOfHMetrics: H\n\"\"\"\n\n\nclass table__h_h_e_a(DefaultTable.DefaultTable):\n\n\t# Note: Keep in sync with table__v_h_e_a\n\n\tdependencies = ['hmtx', 'glyf', 'CFF ']\n\n\tdef decompile(self, data, ttFont):\n\t\tsstruct.unpack(hheaFormat, data, self)\n\n\tdef compile(self, ttFont):\n\t\tif ttFont.recalcBBoxes and (ttFont.isLoaded('glyf') or ttFont.isLoaded('CFF ')):\n\t\t\tself.recalc(ttFont)\n\t\tself.tableVersion = fi2ve(self.tableVersion)\n\t\treturn sstruct.pack(hheaFormat, self)\n\n\tdef recalc(self, ttFont):\n\t\tif 'hmtx' in ttFont:\n\t\t\thmtxTable = ttFont['hmtx']\n\t\t\tself.advanceWidthMax = max(adv for adv, _ in hmtxTable.metrics.values())\n\n\t\tboundsWidthDict = {}\n\t\tif 'glyf' in ttFont:\n\t\t\tglyfTable = ttFont['glyf']\n\t\t\tfor name in ttFont.getGlyphOrder():\n\t\t\t\tg = glyfTable[name]\n\t\t\t\tif g.numberOfContours == 0:\n\t\t\t\t\tcontinue\n\t\t\t\tif g.numberOfContours < 0 and not hasattr(g, \"xMax\"):\n\t\t\t\t\t# Composite glyph without extents set.\n\t\t\t\t\t# Calculate those.\n\t\t\t\t\tg.recalcBounds(glyfTable)\n\t\t\t\tboundsWidthDict[name] = g.xMax - g.xMin\n\t\telif 'CFF ' in ttFont:\n\t\t\ttopDict = ttFont['CFF '].cff.topDictIndex[0]\n\t\t\tcharStrings = topDict.CharStrings\n\t\t\tfor name in ttFont.getGlyphOrder():\n\t\t\t\tcs = charStrings[name]\n\t\t\t\tbounds = cs.calcBounds(charStrings)\n\t\t\t\tif bounds is not None:\n\t\t\t\t\tboundsWidthDict[name] = int(\n\t\t\t\t\t\tmath.ceil(bounds[2]) - math.floor(bounds[0]))\n\n\t\tif boundsWidthDict:\n\t\t\tminLeftSideBearing = float('inf')\n\t\t\tminRightSideBearing = float('inf')\n\t\t\txMaxExtent = -float('inf')\n\t\t\tfor name, boundsWidth in boundsWidthDict.items():\n\t\t\t\tadvanceWidth, lsb = hmtxTable[name]\n\t\t\t\trsb = advanceWidth - lsb - boundsWidth\n\t\t\t\textent = lsb + 
boundsWidth\n\t\t\t\tminLeftSideBearing = min(minLeftSideBearing, lsb)\n\t\t\t\tminRightSideBearing = min(minRightSideBearing, rsb)\n\t\t\t\txMaxExtent = max(xMaxExtent, extent)\n\t\t\tself.minLeftSideBearing = minLeftSideBearing\n\t\t\tself.minRightSideBearing = minRightSideBearing\n\t\t\tself.xMaxExtent = xMaxExtent\n\n\t\telse: # No glyph has outlines.\n\t\t\tself.minLeftSideBearing = 0\n\t\t\tself.minRightSideBearing = 0\n\t\t\tself.xMaxExtent = 0\n\n\tdef toXML(self, writer, ttFont):\n\t\tformatstring, names, fixes = sstruct.getformat(hheaFormat)\n\t\tfor name in names:\n\t\t\tvalue = getattr(self, name)\n\t\t\tif name == \"tableVersion\":\n\t\t\t\tvalue = fi2ve(value)\n\t\t\t\tvalue = \"0x%08x\" % value\n\t\t\twriter.simpletag(name, value=value)\n\t\t\twriter.newline()\n\n\tdef fromXML(self, name, attrs, content, ttFont):\n\t\tif name == \"tableVersion\":\n\t\t\tsetattr(self, name, ve2fi(attrs[\"value\"]))\n\t\t\treturn\n\t\tsetattr(self, name, safeEval(attrs[\"value\"]))\n", "path": "Lib/fontTools/ttLib/tables/_h_h_e_a.py"}], "after_files": [{"content": "from fontTools.misc.py23 import *\nfrom fontTools.misc import sstruct\nfrom fontTools.misc.textTools import safeEval\nfrom fontTools.misc.fixedTools import (\n\tensureVersionIsLong as fi2ve, versionToFixed as ve2fi)\nfrom . import DefaultTable\nimport math\n\n\nhheaFormat = \"\"\"\n\t\t> # big endian\n\t\ttableVersion: L\n\t\tascent: h\n\t\tdescent: h\n\t\tlineGap: h\n\t\tadvanceWidthMax: H\n\t\tminLeftSideBearing: h\n\t\tminRightSideBearing: h\n\t\txMaxExtent: h\n\t\tcaretSlopeRise: h\n\t\tcaretSlopeRun: h\n\t\tcaretOffset: h\n\t\treserved0: h\n\t\treserved1: h\n\t\treserved2: h\n\t\treserved3: h\n\t\tmetricDataFormat: h\n\t\tnumberOfHMetrics: H\n\"\"\"\n\n\nclass table__h_h_e_a(DefaultTable.DefaultTable):\n\n\t# Note: Keep in sync with table__v_h_e_a\n\n\tdependencies = ['hmtx', 'glyf', 'CFF ']\n\n\t# OpenType spec renamed these, add aliases for compatibility\n\t@property\n\tdef ascender(self): return self.ascent\n\n\[email protected]\n\tdef ascender(self,value): self.ascent = value\n\n\t@property\n\tdef descender(self): return self.descent\n\n\[email protected]\n\tdef descender(self,value): self.descent = value\n\n\tdef decompile(self, data, ttFont):\n\t\tsstruct.unpack(hheaFormat, data, self)\n\n\tdef compile(self, ttFont):\n\t\tif ttFont.recalcBBoxes and (ttFont.isLoaded('glyf') or ttFont.isLoaded('CFF ')):\n\t\t\tself.recalc(ttFont)\n\t\tself.tableVersion = fi2ve(self.tableVersion)\n\t\treturn sstruct.pack(hheaFormat, self)\n\n\tdef recalc(self, ttFont):\n\t\tif 'hmtx' in ttFont:\n\t\t\thmtxTable = ttFont['hmtx']\n\t\t\tself.advanceWidthMax = max(adv for adv, _ in hmtxTable.metrics.values())\n\n\t\tboundsWidthDict = {}\n\t\tif 'glyf' in ttFont:\n\t\t\tglyfTable = ttFont['glyf']\n\t\t\tfor name in ttFont.getGlyphOrder():\n\t\t\t\tg = glyfTable[name]\n\t\t\t\tif g.numberOfContours == 0:\n\t\t\t\t\tcontinue\n\t\t\t\tif g.numberOfContours < 0 and not hasattr(g, \"xMax\"):\n\t\t\t\t\t# Composite glyph without extents set.\n\t\t\t\t\t# Calculate those.\n\t\t\t\t\tg.recalcBounds(glyfTable)\n\t\t\t\tboundsWidthDict[name] = g.xMax - g.xMin\n\t\telif 'CFF ' in ttFont:\n\t\t\ttopDict = ttFont['CFF '].cff.topDictIndex[0]\n\t\t\tcharStrings = topDict.CharStrings\n\t\t\tfor name in ttFont.getGlyphOrder():\n\t\t\t\tcs = charStrings[name]\n\t\t\t\tbounds = cs.calcBounds(charStrings)\n\t\t\t\tif bounds is not None:\n\t\t\t\t\tboundsWidthDict[name] = int(\n\t\t\t\t\t\tmath.ceil(bounds[2]) - math.floor(bounds[0]))\n\n\t\tif 
boundsWidthDict:\n\t\t\tminLeftSideBearing = float('inf')\n\t\t\tminRightSideBearing = float('inf')\n\t\t\txMaxExtent = -float('inf')\n\t\t\tfor name, boundsWidth in boundsWidthDict.items():\n\t\t\t\tadvanceWidth, lsb = hmtxTable[name]\n\t\t\t\trsb = advanceWidth - lsb - boundsWidth\n\t\t\t\textent = lsb + boundsWidth\n\t\t\t\tminLeftSideBearing = min(minLeftSideBearing, lsb)\n\t\t\t\tminRightSideBearing = min(minRightSideBearing, rsb)\n\t\t\t\txMaxExtent = max(xMaxExtent, extent)\n\t\t\tself.minLeftSideBearing = minLeftSideBearing\n\t\t\tself.minRightSideBearing = minRightSideBearing\n\t\t\tself.xMaxExtent = xMaxExtent\n\n\t\telse: # No glyph has outlines.\n\t\t\tself.minLeftSideBearing = 0\n\t\t\tself.minRightSideBearing = 0\n\t\t\tself.xMaxExtent = 0\n\n\tdef toXML(self, writer, ttFont):\n\t\tformatstring, names, fixes = sstruct.getformat(hheaFormat)\n\t\tfor name in names:\n\t\t\tvalue = getattr(self, name)\n\t\t\tif name == \"tableVersion\":\n\t\t\t\tvalue = fi2ve(value)\n\t\t\t\tvalue = \"0x%08x\" % value\n\t\t\twriter.simpletag(name, value=value)\n\t\t\twriter.newline()\n\n\tdef fromXML(self, name, attrs, content, ttFont):\n\t\tif name == \"tableVersion\":\n\t\t\tsetattr(self, name, ve2fi(attrs[\"value\"]))\n\t\t\treturn\n\t\tsetattr(self, name, safeEval(attrs[\"value\"]))\n", "path": "Lib/fontTools/ttLib/tables/_h_h_e_a.py"}]} | 1,560 | 215 |
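The alias in the patch is ordinary Python property plumbing; stripped of the table machinery it reduces to the stand-alone class below (an illustration, not fontTools code).

```python
class HheaLike:
    """Keeps the historical `ascent`/`descent` attribute names while exposing
    the OpenType spec's `ascender`/`descender` as live aliases."""

    def __init__(self, ascent=0, descent=0):
        self.ascent = ascent
        self.descent = descent

    @property
    def ascender(self):
        return self.ascent

    @ascender.setter
    def ascender(self, value):
        self.ascent = value

    @property
    def descender(self):
        return self.descent

    @descender.setter
    def descender(self, value):
        self.descent = value


hhea = HheaLike(ascent=800, descent=-200)
hhea.ascender += 10                # writes through to .ascent
assert (hhea.ascent, hhea.descender) == (810, -200)
```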
gh_patches_debug_8925 | rasdani/github-patches | git_diff | freqtrade__freqtrade-3200 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docker image making logfiles in user_data
The Docker image places error logfiles in user_data by default. (Apparently it should not be doing that.)
Maybe we could have it put them in a log dir?
docker-compose.yml
command: >
trade
--logfile /freqtrade/user_data/freqtrade.log
can maybe be changed to
--logfile /freqtrade/user_data/log/freqtrade.log
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `freqtrade/configuration/directory_operations.py`
Content:
```
1 import logging
2 import shutil
3 from pathlib import Path
4 from typing import Any, Dict, Optional
5
6 from freqtrade.exceptions import OperationalException
7 from freqtrade.constants import USER_DATA_FILES
8
9 logger = logging.getLogger(__name__)
10
11
12 def create_datadir(config: Dict[str, Any], datadir: Optional[str] = None) -> Path:
13
14 folder = Path(datadir) if datadir else Path(f"{config['user_data_dir']}/data")
15 if not datadir:
16 # set datadir
17 exchange_name = config.get('exchange', {}).get('name').lower()
18 folder = folder.joinpath(exchange_name)
19
20 if not folder.is_dir():
21 folder.mkdir(parents=True)
22 logger.info(f'Created data directory: {datadir}')
23 return folder
24
25
26 def create_userdata_dir(directory: str, create_dir: bool = False) -> Path:
27 """
28 Create userdata directory structure.
29 if create_dir is True, then the parent-directory will be created if it does not exist.
30 Sub-directories will always be created if the parent directory exists.
31 Raises OperationalException if given a non-existing directory.
32 :param directory: Directory to check
33 :param create_dir: Create directory if it does not exist.
34 :return: Path object containing the directory
35 """
36 sub_dirs = ["backtest_results", "data", "hyperopts", "hyperopt_results", "notebooks",
37 "plot", "strategies", ]
38 folder = Path(directory)
39 if not folder.is_dir():
40 if create_dir:
41 folder.mkdir(parents=True)
42 logger.info(f'Created user-data directory: {folder}')
43 else:
44 raise OperationalException(
45 f"Directory `{folder}` does not exist. "
46 "Please use `freqtrade create-userdir` to create a user directory")
47
48 # Create required subdirectories
49 for f in sub_dirs:
50 subfolder = folder / f
51 if not subfolder.is_dir():
52 subfolder.mkdir(parents=False)
53 return folder
54
55
56 def copy_sample_files(directory: Path, overwrite: bool = False) -> None:
57 """
58 Copy files from templates to User data directory.
59 :param directory: Directory to copy data to
60 :param overwrite: Overwrite existing sample files
61 """
62 if not directory.is_dir():
63 raise OperationalException(f"Directory `{directory}` does not exist.")
64 sourcedir = Path(__file__).parents[1] / "templates"
65 for source, target in USER_DATA_FILES.items():
66 targetdir = directory / target
67 if not targetdir.is_dir():
68 raise OperationalException(f"Directory `{targetdir}` does not exist.")
69 targetfile = targetdir / source
70 if targetfile.exists():
71 if not overwrite:
72 logger.warning(f"File `{targetfile}` exists already, not deploying sample file.")
73 continue
74 else:
75 logger.warning(f"File `{targetfile}` exists already, overwriting.")
76 shutil.copy(str(sourcedir / source), str(targetfile))
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/freqtrade/configuration/directory_operations.py b/freqtrade/configuration/directory_operations.py
--- a/freqtrade/configuration/directory_operations.py
+++ b/freqtrade/configuration/directory_operations.py
@@ -33,8 +33,8 @@
:param create_dir: Create directory if it does not exist.
:return: Path object containing the directory
"""
- sub_dirs = ["backtest_results", "data", "hyperopts", "hyperopt_results", "notebooks",
- "plot", "strategies", ]
+ sub_dirs = ["backtest_results", "data", "hyperopts", "hyperopt_results", "logs",
+ "notebooks", "plot", "strategies", ]
folder = Path(directory)
if not folder.is_dir():
if create_dir:
| {"golden_diff": "diff --git a/freqtrade/configuration/directory_operations.py b/freqtrade/configuration/directory_operations.py\n--- a/freqtrade/configuration/directory_operations.py\n+++ b/freqtrade/configuration/directory_operations.py\n@@ -33,8 +33,8 @@\n :param create_dir: Create directory if it does not exist.\n :return: Path object containing the directory\n \"\"\"\n- sub_dirs = [\"backtest_results\", \"data\", \"hyperopts\", \"hyperopt_results\", \"notebooks\",\n- \"plot\", \"strategies\", ]\n+ sub_dirs = [\"backtest_results\", \"data\", \"hyperopts\", \"hyperopt_results\", \"logs\",\n+ \"notebooks\", \"plot\", \"strategies\", ]\n folder = Path(directory)\n if not folder.is_dir():\n if create_dir:\n", "issue": " Docker image making logfiles in user_data\ndocker image places error logfiles in user_data by default. (apparently it should not be doing that)\r\nMaybe cud have it put them in a log dir?\r\n\r\n\r\ndocker-compose.yml\r\n\r\ncommand: >\r\n trade\r\n --logfile /freqtrade/user_data/freqtrade.log\r\n\r\ncan maybe be changed to \r\n --logfile /freqtrade/user_data/log/freqtrade.log\r\n\n", "before_files": [{"content": "import logging\nimport shutil\nfrom pathlib import Path\nfrom typing import Any, Dict, Optional\n\nfrom freqtrade.exceptions import OperationalException\nfrom freqtrade.constants import USER_DATA_FILES\n\nlogger = logging.getLogger(__name__)\n\n\ndef create_datadir(config: Dict[str, Any], datadir: Optional[str] = None) -> Path:\n\n folder = Path(datadir) if datadir else Path(f\"{config['user_data_dir']}/data\")\n if not datadir:\n # set datadir\n exchange_name = config.get('exchange', {}).get('name').lower()\n folder = folder.joinpath(exchange_name)\n\n if not folder.is_dir():\n folder.mkdir(parents=True)\n logger.info(f'Created data directory: {datadir}')\n return folder\n\n\ndef create_userdata_dir(directory: str, create_dir: bool = False) -> Path:\n \"\"\"\n Create userdata directory structure.\n if create_dir is True, then the parent-directory will be created if it does not exist.\n Sub-directories will always be created if the parent directory exists.\n Raises OperationalException if given a non-existing directory.\n :param directory: Directory to check\n :param create_dir: Create directory if it does not exist.\n :return: Path object containing the directory\n \"\"\"\n sub_dirs = [\"backtest_results\", \"data\", \"hyperopts\", \"hyperopt_results\", \"notebooks\",\n \"plot\", \"strategies\", ]\n folder = Path(directory)\n if not folder.is_dir():\n if create_dir:\n folder.mkdir(parents=True)\n logger.info(f'Created user-data directory: {folder}')\n else:\n raise OperationalException(\n f\"Directory `{folder}` does not exist. 
\"\n \"Please use `freqtrade create-userdir` to create a user directory\")\n\n # Create required subdirectories\n for f in sub_dirs:\n subfolder = folder / f\n if not subfolder.is_dir():\n subfolder.mkdir(parents=False)\n return folder\n\n\ndef copy_sample_files(directory: Path, overwrite: bool = False) -> None:\n \"\"\"\n Copy files from templates to User data directory.\n :param directory: Directory to copy data to\n :param overwrite: Overwrite existing sample files\n \"\"\"\n if not directory.is_dir():\n raise OperationalException(f\"Directory `{directory}` does not exist.\")\n sourcedir = Path(__file__).parents[1] / \"templates\"\n for source, target in USER_DATA_FILES.items():\n targetdir = directory / target\n if not targetdir.is_dir():\n raise OperationalException(f\"Directory `{targetdir}` does not exist.\")\n targetfile = targetdir / source\n if targetfile.exists():\n if not overwrite:\n logger.warning(f\"File `{targetfile}` exists already, not deploying sample file.\")\n continue\n else:\n logger.warning(f\"File `{targetfile}` exists already, overwriting.\")\n shutil.copy(str(sourcedir / source), str(targetfile))\n", "path": "freqtrade/configuration/directory_operations.py"}], "after_files": [{"content": "import logging\nimport shutil\nfrom pathlib import Path\nfrom typing import Any, Dict, Optional\n\nfrom freqtrade.exceptions import OperationalException\nfrom freqtrade.constants import USER_DATA_FILES\n\nlogger = logging.getLogger(__name__)\n\n\ndef create_datadir(config: Dict[str, Any], datadir: Optional[str] = None) -> Path:\n\n folder = Path(datadir) if datadir else Path(f\"{config['user_data_dir']}/data\")\n if not datadir:\n # set datadir\n exchange_name = config.get('exchange', {}).get('name').lower()\n folder = folder.joinpath(exchange_name)\n\n if not folder.is_dir():\n folder.mkdir(parents=True)\n logger.info(f'Created data directory: {datadir}')\n return folder\n\n\ndef create_userdata_dir(directory: str, create_dir: bool = False) -> Path:\n \"\"\"\n Create userdata directory structure.\n if create_dir is True, then the parent-directory will be created if it does not exist.\n Sub-directories will always be created if the parent directory exists.\n Raises OperationalException if given a non-existing directory.\n :param directory: Directory to check\n :param create_dir: Create directory if it does not exist.\n :return: Path object containing the directory\n \"\"\"\n sub_dirs = [\"backtest_results\", \"data\", \"hyperopts\", \"hyperopt_results\", \"logs\",\n \"notebooks\", \"plot\", \"strategies\", ]\n folder = Path(directory)\n if not folder.is_dir():\n if create_dir:\n folder.mkdir(parents=True)\n logger.info(f'Created user-data directory: {folder}')\n else:\n raise OperationalException(\n f\"Directory `{folder}` does not exist. 
\"\n \"Please use `freqtrade create-userdir` to create a user directory\")\n\n # Create required subdirectories\n for f in sub_dirs:\n subfolder = folder / f\n if not subfolder.is_dir():\n subfolder.mkdir(parents=False)\n return folder\n\n\ndef copy_sample_files(directory: Path, overwrite: bool = False) -> None:\n \"\"\"\n Copy files from templates to User data directory.\n :param directory: Directory to copy data to\n :param overwrite: Overwrite existing sample files\n \"\"\"\n if not directory.is_dir():\n raise OperationalException(f\"Directory `{directory}` does not exist.\")\n sourcedir = Path(__file__).parents[1] / \"templates\"\n for source, target in USER_DATA_FILES.items():\n targetdir = directory / target\n if not targetdir.is_dir():\n raise OperationalException(f\"Directory `{targetdir}` does not exist.\")\n targetfile = targetdir / source\n if targetfile.exists():\n if not overwrite:\n logger.warning(f\"File `{targetfile}` exists already, not deploying sample file.\")\n continue\n else:\n logger.warning(f\"File `{targetfile}` exists already, overwriting.\")\n shutil.copy(str(sourcedir / source), str(targetfile))\n", "path": "freqtrade/configuration/directory_operations.py"}]} | 1,126 | 175 |
gh_patches_debug_20487 | rasdani/github-patches | git_diff | AUTOMATIC1111__stable-diffusion-webui-8118 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: GitPython breaking API change in 3.1.30, breaks extension updates
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
[Per this PR](https://github.com/gitpython-developers/GitPython/pull/1518) and [the changelog](https://github.com/gitpython-developers/GitPython/pull/1518), you can no longer feed arbitrary positional arguments: they are now rejected to prevent remote code execution.
Easy fix, just use the built-in kwarg that's already there for it.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blame/0cc0ee1bcb4c24a8c9715f66cede06601bfc00c8/modules/extensions.py#LL69C28-L69C28
There may be other places in the code as well; I'll take a peek.
### Steps to reproduce the problem
1. `pip install 'gitpython>=3.1.30'`
2. run the web-ui and try to install/check for updates while watching terminal
### What should have happened?
Should have successfully run the git commands and updated the git repos
### Commit where the problem happens
0cc0ee1b
### What platforms do you use to access the UI ?
Windows
### What browsers do you use to access the UI ?
Google Chrome
### Command Line Arguments
```Shell
No
```
### List of extensions
No
### Console logs
```Shell
Traceback (most recent call last):
File "/mnt/d/stable-diffusion/stable-diffusion-webui/modules/ui_extensions.py", line 66, in check_updates
ext.check_updates()
File "/mnt/d/stable-diffusion/stable-diffusion-webui/modules/extensions.py", line 69, in check_updates
for fetch in repo.remote().fetch("--dry-run"):
File "/home/adam/.cache/pypoetry/virtualenvs/sd-deps-z4SYejYZ-py3.10/lib/python3.10/site-packages/git/remote.py", line 1007, in fetch
res = self._get_fetch_info_from_stderr(proc, progress, kill_after_timeout=kill_after_timeout)
File "/home/adam/.cache/pypoetry/virtualenvs/sd-deps-z4SYejYZ-py3.10/lib/python3.10/site-packages/git/remote.py", line 848, in _get_fetch_info_from_stderr
proc.wait(stderr=stderr_text)
File "/home/adam/.cache/pypoetry/virtualenvs/sd-deps-z4SYejYZ-py3.10/lib/python3.10/site-packages/git/cmd.py", line 604, in wait
raise GitCommandError(remove_password_if_present(self.args), status, errstr)
git.exc.GitCommandError: Cmd('git') failed due to: exit code(128)
cmdline: git fetch -v -- origin --dry-run
stderr: 'fatal: couldn't find remote ref --dry-run'
```
### Additional information
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `modules/extensions.py`
Content:
```
1 import os
2 import sys
3 import traceback
4
5 import time
6 import git
7
8 from modules import paths, shared
9
10 extensions = []
11 extensions_dir = os.path.join(paths.data_path, "extensions")
12 extensions_builtin_dir = os.path.join(paths.script_path, "extensions-builtin")
13
14 if not os.path.exists(extensions_dir):
15 os.makedirs(extensions_dir)
16
17 def active():
18 return [x for x in extensions if x.enabled]
19
20
21 class Extension:
22 def __init__(self, name, path, enabled=True, is_builtin=False):
23 self.name = name
24 self.path = path
25 self.enabled = enabled
26 self.status = ''
27 self.can_update = False
28 self.is_builtin = is_builtin
29 self.version = ''
30
31 repo = None
32 try:
33 if os.path.exists(os.path.join(path, ".git")):
34 repo = git.Repo(path)
35 except Exception:
36 print(f"Error reading github repository info from {path}:", file=sys.stderr)
37 print(traceback.format_exc(), file=sys.stderr)
38
39 if repo is None or repo.bare:
40 self.remote = None
41 else:
42 try:
43 self.remote = next(repo.remote().urls, None)
44 self.status = 'unknown'
45 head = repo.head.commit
46 ts = time.asctime(time.gmtime(repo.head.commit.committed_date))
47 self.version = f'{head.hexsha[:8]} ({ts})'
48
49 except Exception:
50 self.remote = None
51
52 def list_files(self, subdir, extension):
53 from modules import scripts
54
55 dirpath = os.path.join(self.path, subdir)
56 if not os.path.isdir(dirpath):
57 return []
58
59 res = []
60 for filename in sorted(os.listdir(dirpath)):
61 res.append(scripts.ScriptFile(self.path, filename, os.path.join(dirpath, filename)))
62
63 res = [x for x in res if os.path.splitext(x.path)[1].lower() == extension and os.path.isfile(x.path)]
64
65 return res
66
67 def check_updates(self):
68 repo = git.Repo(self.path)
69 for fetch in repo.remote().fetch("--dry-run"):
70 if fetch.flags != fetch.HEAD_UPTODATE:
71 self.can_update = True
72 self.status = "behind"
73 return
74
75 self.can_update = False
76 self.status = "latest"
77
78 def fetch_and_reset_hard(self):
79 repo = git.Repo(self.path)
80 # Fix: `error: Your local changes to the following files would be overwritten by merge`,
81 # because WSL2 Docker set 755 file permissions instead of 644, this results to the error.
82 repo.git.fetch('--all')
83 repo.git.reset('--hard', 'origin')
84
85
86 def list_extensions():
87 extensions.clear()
88
89 if not os.path.isdir(extensions_dir):
90 return
91
92 paths = []
93 for dirname in [extensions_dir, extensions_builtin_dir]:
94 if not os.path.isdir(dirname):
95 return
96
97 for extension_dirname in sorted(os.listdir(dirname)):
98 path = os.path.join(dirname, extension_dirname)
99 if not os.path.isdir(path):
100 continue
101
102 paths.append((extension_dirname, path, dirname == extensions_builtin_dir))
103
104 for dirname, path, is_builtin in paths:
105 extension = Extension(name=dirname, path=path, enabled=dirname not in shared.opts.disabled_extensions, is_builtin=is_builtin)
106 extensions.append(extension)
107
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/modules/extensions.py b/modules/extensions.py
--- a/modules/extensions.py
+++ b/modules/extensions.py
@@ -66,7 +66,7 @@
def check_updates(self):
repo = git.Repo(self.path)
- for fetch in repo.remote().fetch("--dry-run"):
+ for fetch in repo.remote().fetch(dry_run=True):
if fetch.flags != fetch.HEAD_UPTODATE:
self.can_update = True
self.status = "behind"
@@ -79,8 +79,8 @@
repo = git.Repo(self.path)
# Fix: `error: Your local changes to the following files would be overwritten by merge`,
# because WSL2 Docker set 755 file permissions instead of 644, this results to the error.
- repo.git.fetch('--all')
- repo.git.reset('--hard', 'origin')
+ repo.git.fetch(all=True)
+ repo.git.reset('origin', hard=True)
def list_extensions():
| {"golden_diff": "diff --git a/modules/extensions.py b/modules/extensions.py\n--- a/modules/extensions.py\n+++ b/modules/extensions.py\n@@ -66,7 +66,7 @@\n \r\n def check_updates(self):\r\n repo = git.Repo(self.path)\r\n- for fetch in repo.remote().fetch(\"--dry-run\"):\r\n+ for fetch in repo.remote().fetch(dry_run=True):\r\n if fetch.flags != fetch.HEAD_UPTODATE:\r\n self.can_update = True\r\n self.status = \"behind\"\r\n@@ -79,8 +79,8 @@\n repo = git.Repo(self.path)\r\n # Fix: `error: Your local changes to the following files would be overwritten by merge`,\r\n # because WSL2 Docker set 755 file permissions instead of 644, this results to the error.\r\n- repo.git.fetch('--all')\r\n- repo.git.reset('--hard', 'origin')\r\n+ repo.git.fetch(all=True)\r\n+ repo.git.reset('origin', hard=True)\r\n \r\n \r\n def list_extensions():\n", "issue": "[Bug]: GitPython breaking API change in 3.1.30, breaks extension updates\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\n[Per this PR](https://github.com/gitpython-developers/GitPython/pull/1518) and [the changelog](https://github.com/gitpython-developers/GitPython/pull/1518) you can no longer feed arbitrary arguments to prevent remote code execution.\r\n\r\nEasy fix, just use the built kwarg that's already there for it.\r\nhttps://github.com/AUTOMATIC1111/stable-diffusion-webui/blame/0cc0ee1bcb4c24a8c9715f66cede06601bfc00c8/modules/extensions.py#LL69C28-L69C28\r\n\r\nthere may be other places in the code as well, I'll take a peek\n\n### Steps to reproduce the problem\n\n1. `pip install 'gitpython>=3.1.30'\r\n2. run the web-ui and try to install/check for updates while watching terminal\n\n### What should have happened?\n\nShould have successfully run the git commands and updated the git repos\n\n### Commit where the problem happens\n\n0cc0ee1b\n\n### What platforms do you use to access the UI ?\n\nWindows\n\n### What browsers do you use to access the UI ?\n\nGoogle Chrome\n\n### Command Line Arguments\n\n```Shell\nNo\n```\n\n\n### List of extensions\n\nNo\n\n### Console logs\n\n```Shell\nTraceback (most recent call last):\r\n File \"/mnt/d/stable-diffusion/stable-diffusion-webui/modules/ui_extensions.py\", line 66, in check_updates\r\n ext.check_updates()\r\n File \"/mnt/d/stable-diffusion/stable-diffusion-webui/modules/extensions.py\", line 69, in check_updates\r\n for fetch in repo.remote().fetch(\"--dry-run\"):\r\n File \"/home/adam/.cache/pypoetry/virtualenvs/sd-deps-z4SYejYZ-py3.10/lib/python3.10/site-packages/git/remote.py\", line 1007, in fetch\r\n res = self._get_fetch_info_from_stderr(proc, progress, kill_after_timeout=kill_after_timeout)\r\n File \"/home/adam/.cache/pypoetry/virtualenvs/sd-deps-z4SYejYZ-py3.10/lib/python3.10/site-packages/git/remote.py\", line 848, in _get_fetch_info_from_stderr\r\n proc.wait(stderr=stderr_text)\r\n File \"/home/adam/.cache/pypoetry/virtualenvs/sd-deps-z4SYejYZ-py3.10/lib/python3.10/site-packages/git/cmd.py\", line 604, in wait\r\n raise GitCommandError(remove_password_if_present(self.args), status, errstr)\r\ngit.exc.GitCommandError: Cmd('git') failed due to: exit code(128)\r\n cmdline: git fetch -v -- origin --dry-run\r\n stderr: 'fatal: couldn't find remote ref --dry-run'\n```\n\n\n### Additional information\n\n_No response_\n", "before_files": [{"content": "import os\r\nimport sys\r\nimport traceback\r\n\r\nimport time\r\nimport git\r\n\r\nfrom modules import paths, shared\r\n\r\nextensions = 
[]\r\nextensions_dir = os.path.join(paths.data_path, \"extensions\")\r\nextensions_builtin_dir = os.path.join(paths.script_path, \"extensions-builtin\")\r\n\r\nif not os.path.exists(extensions_dir):\r\n os.makedirs(extensions_dir)\r\n\r\ndef active():\r\n return [x for x in extensions if x.enabled]\r\n\r\n\r\nclass Extension:\r\n def __init__(self, name, path, enabled=True, is_builtin=False):\r\n self.name = name\r\n self.path = path\r\n self.enabled = enabled\r\n self.status = ''\r\n self.can_update = False\r\n self.is_builtin = is_builtin\r\n self.version = ''\r\n\r\n repo = None\r\n try:\r\n if os.path.exists(os.path.join(path, \".git\")):\r\n repo = git.Repo(path)\r\n except Exception:\r\n print(f\"Error reading github repository info from {path}:\", file=sys.stderr)\r\n print(traceback.format_exc(), file=sys.stderr)\r\n\r\n if repo is None or repo.bare:\r\n self.remote = None\r\n else:\r\n try:\r\n self.remote = next(repo.remote().urls, None)\r\n self.status = 'unknown'\r\n head = repo.head.commit\r\n ts = time.asctime(time.gmtime(repo.head.commit.committed_date))\r\n self.version = f'{head.hexsha[:8]} ({ts})'\r\n\r\n except Exception:\r\n self.remote = None\r\n\r\n def list_files(self, subdir, extension):\r\n from modules import scripts\r\n\r\n dirpath = os.path.join(self.path, subdir)\r\n if not os.path.isdir(dirpath):\r\n return []\r\n\r\n res = []\r\n for filename in sorted(os.listdir(dirpath)):\r\n res.append(scripts.ScriptFile(self.path, filename, os.path.join(dirpath, filename)))\r\n\r\n res = [x for x in res if os.path.splitext(x.path)[1].lower() == extension and os.path.isfile(x.path)]\r\n\r\n return res\r\n\r\n def check_updates(self):\r\n repo = git.Repo(self.path)\r\n for fetch in repo.remote().fetch(\"--dry-run\"):\r\n if fetch.flags != fetch.HEAD_UPTODATE:\r\n self.can_update = True\r\n self.status = \"behind\"\r\n return\r\n\r\n self.can_update = False\r\n self.status = \"latest\"\r\n\r\n def fetch_and_reset_hard(self):\r\n repo = git.Repo(self.path)\r\n # Fix: `error: Your local changes to the following files would be overwritten by merge`,\r\n # because WSL2 Docker set 755 file permissions instead of 644, this results to the error.\r\n repo.git.fetch('--all')\r\n repo.git.reset('--hard', 'origin')\r\n\r\n\r\ndef list_extensions():\r\n extensions.clear()\r\n\r\n if not os.path.isdir(extensions_dir):\r\n return\r\n\r\n paths = []\r\n for dirname in [extensions_dir, extensions_builtin_dir]:\r\n if not os.path.isdir(dirname):\r\n return\r\n\r\n for extension_dirname in sorted(os.listdir(dirname)):\r\n path = os.path.join(dirname, extension_dirname)\r\n if not os.path.isdir(path):\r\n continue\r\n\r\n paths.append((extension_dirname, path, dirname == extensions_builtin_dir))\r\n\r\n for dirname, path, is_builtin in paths:\r\n extension = Extension(name=dirname, path=path, enabled=dirname not in shared.opts.disabled_extensions, is_builtin=is_builtin)\r\n extensions.append(extension)\r\n\r\n", "path": "modules/extensions.py"}], "after_files": [{"content": "import os\r\nimport sys\r\nimport traceback\r\n\r\nimport time\r\nimport git\r\n\r\nfrom modules import paths, shared\r\n\r\nextensions = []\r\nextensions_dir = os.path.join(paths.data_path, \"extensions\")\r\nextensions_builtin_dir = os.path.join(paths.script_path, \"extensions-builtin\")\r\n\r\nif not os.path.exists(extensions_dir):\r\n os.makedirs(extensions_dir)\r\n\r\ndef active():\r\n return [x for x in extensions if x.enabled]\r\n\r\n\r\nclass Extension:\r\n def __init__(self, name, path, enabled=True, 
is_builtin=False):\r\n self.name = name\r\n self.path = path\r\n self.enabled = enabled\r\n self.status = ''\r\n self.can_update = False\r\n self.is_builtin = is_builtin\r\n self.version = ''\r\n\r\n repo = None\r\n try:\r\n if os.path.exists(os.path.join(path, \".git\")):\r\n repo = git.Repo(path)\r\n except Exception:\r\n print(f\"Error reading github repository info from {path}:\", file=sys.stderr)\r\n print(traceback.format_exc(), file=sys.stderr)\r\n\r\n if repo is None or repo.bare:\r\n self.remote = None\r\n else:\r\n try:\r\n self.remote = next(repo.remote().urls, None)\r\n self.status = 'unknown'\r\n head = repo.head.commit\r\n ts = time.asctime(time.gmtime(repo.head.commit.committed_date))\r\n self.version = f'{head.hexsha[:8]} ({ts})'\r\n\r\n except Exception:\r\n self.remote = None\r\n\r\n def list_files(self, subdir, extension):\r\n from modules import scripts\r\n\r\n dirpath = os.path.join(self.path, subdir)\r\n if not os.path.isdir(dirpath):\r\n return []\r\n\r\n res = []\r\n for filename in sorted(os.listdir(dirpath)):\r\n res.append(scripts.ScriptFile(self.path, filename, os.path.join(dirpath, filename)))\r\n\r\n res = [x for x in res if os.path.splitext(x.path)[1].lower() == extension and os.path.isfile(x.path)]\r\n\r\n return res\r\n\r\n def check_updates(self):\r\n repo = git.Repo(self.path)\r\n for fetch in repo.remote().fetch(dry_run=True):\r\n if fetch.flags != fetch.HEAD_UPTODATE:\r\n self.can_update = True\r\n self.status = \"behind\"\r\n return\r\n\r\n self.can_update = False\r\n self.status = \"latest\"\r\n\r\n def fetch_and_reset_hard(self):\r\n repo = git.Repo(self.path)\r\n # Fix: `error: Your local changes to the following files would be overwritten by merge`,\r\n # because WSL2 Docker set 755 file permissions instead of 644, this results to the error.\r\n repo.git.fetch(all=True)\r\n repo.git.reset('origin', hard=True)\r\n\r\n\r\ndef list_extensions():\r\n extensions.clear()\r\n\r\n if not os.path.isdir(extensions_dir):\r\n return\r\n\r\n paths = []\r\n for dirname in [extensions_dir, extensions_builtin_dir]:\r\n if not os.path.isdir(dirname):\r\n return\r\n\r\n for extension_dirname in sorted(os.listdir(dirname)):\r\n path = os.path.join(dirname, extension_dirname)\r\n if not os.path.isdir(path):\r\n continue\r\n\r\n paths.append((extension_dirname, path, dirname == extensions_builtin_dir))\r\n\r\n for dirname, path, is_builtin in paths:\r\n extension = Extension(name=dirname, path=path, enabled=dirname not in shared.opts.disabled_extensions, is_builtin=is_builtin)\r\n extensions.append(extension)\r\n\r\n", "path": "modules/extensions.py"}]} | 1,901 | 219 |
gh_patches_debug_34955 | rasdani/github-patches | git_diff | elastic__apm-agent-python-881 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'FragmentDefinition' object has no attribute 'operation'
**Describe the bug**:
I'm using elastic APM with Django 3.1.2 and graphql.
On every GraphQL query, I'm now seeing this error: `AttributeError: 'FragmentDefinition' object has no attribute 'operation'`
The relevant location is `elasticapm/instrumentation/packages/graphql.py`, in `get_graphql_tx_name` at line 99.
**To Reproduce**
I'm not sure yet why the error is occurring, and I'm just getting started with the service. If you can point me in the right direction, I can create a reproducible example.
**Environment (please complete the following information)**
- OS: Linux Docker container
- Python version:
- Framework and version : Django 3.1.2
- APM Server version:
- Agent version: 5.9.0
**Additional context**
Add any other context about the problem here.
- Agent config options <!-- be careful not to post sensitive information -->
<details>
<summary>Click to expand</summary>
```
replace this line with your agent config options
remember to mask any sensitive fields like tokens
```
</details>
- `requirements.txt`:
<details>
<summary>Click to expand</summary>
```
replace this line with your `requirements.txt`
```
</details>
--- END ISSUE ---
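To make the failure mode concrete: a GraphQL document may list a fragment before its operation, in which case `definitions[0]` is a `FragmentDefinition` and has no `operation` attribute. The sketch below (using graphql-core 2.x type names) filters for the operation definition instead; the fallback string is a placeholder assumption, not taken from the agent:

```python
# With this document, definitions[0] is a FragmentDefinition, so
# definitions[0].operation raises the AttributeError from the issue.
QUERY = """
fragment userFields on User { id name }
query GetUser { user { ...userFields } }
"""

def get_graphql_tx_name(graphql_doc):
    # Select the OperationDefinition regardless of where it appears.
    op_defs = [d for d in graphql_doc.definitions
               if type(d).__name__ == "OperationDefinition"]
    if not op_defs:
        return "GraphQL unknown operation"  # placeholder fallback
    op_def = op_defs[0]
    fields = op_def.selection_set.selections
    return "GraphQL %s %s" % (op_def.operation.upper(),
                              "+".join(f.name.value for f in fields))
```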
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticapm/instrumentation/packages/graphql.py`
Content:
```
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 from elasticapm import set_transaction_name
32 from elasticapm.instrumentation.packages.base import AbstractInstrumentedModule
33 from elasticapm.traces import capture_span
34
35
36 class GraphQLExecutorInstrumentation(AbstractInstrumentedModule):
37 name = "graphql"
38
39 instrument_list = [
40 ("graphql.execution.executors.sync", "SyncExecutor.execute"),
41 ("graphql.execution.executors.gevent", "GeventExecutor.execute"),
42 ("graphql.execution.executors.asyncio", "AsyncioExecutor.execute"),
43 ("graphql.execution.executors.process", "ProcessExecutor.execute"),
44 ("graphql.execution.executors.thread", "ThreadExecutor.execute_in_thread"),
45 ("graphql.execution.executors.thread", "ThreadExecutor.execute_in_pool"),
46 ]
47
48 def call(self, module, method, wrapped, instance, args, kwargs):
49 name = "GraphQL"
50
51 info = ""
52 query = args[2]
53
54 if "ResolveInfo" == type(query).__name__:
55 if str(query.return_type) in [
56 'Boolean',
57 'Context',
58 'Date',
59 'DateTime',
60 'Decimal',
61 'Dynamic',
62 'Float',
63 'ID',
64 'Int',
65 'String',
66 'Time',
67 'UUID',
68 'Boolean',
69 'String'
70 ]:
71 return wrapped(*args, **kwargs)
72
73 op = query.operation.operation
74 field = query.field_name
75 info = "%s %s" % (op, field)
76 elif "RequestParams" == type(query).__name__:
77 info = "%s %s" % ("request", query.query)
78 else:
79 info = str(query)
80
81 with capture_span(
82 "%s.%s" % (name, info),
83 span_type="external",
84 span_subtype="graphql",
85 span_action="query"
86 ):
87 return wrapped(*args, **kwargs)
88
89
90 class GraphQLBackendInstrumentation(AbstractInstrumentedModule):
91 name = "graphql"
92
93 instrument_list = [
94 ("graphql.backend.core", "GraphQLCoreBackend.document_from_string"),
95 ("graphql.backend.cache", "GraphQLCachedBackend.document_from_string"),
96 ]
97
98 def get_graphql_tx_name(self, graphql_doc):
99 op = graphql_doc.definitions[0].operation
100 fields = graphql_doc.definitions[0].selection_set.selections
101 return "GraphQL %s %s" % (op.upper(), "+".join([f.name.value for f in fields]))
102
103 def call(self, module, method, wrapped, instance, args, kwargs):
104 graphql_document = wrapped(*args, **kwargs)
105 transaction_name = self.get_graphql_tx_name(graphql_document.document_ast)
106 set_transaction_name(transaction_name)
107 return graphql_document
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticapm/instrumentation/packages/graphql.py b/elasticapm/instrumentation/packages/graphql.py
--- a/elasticapm/instrumentation/packages/graphql.py
+++ b/elasticapm/instrumentation/packages/graphql.py
@@ -53,20 +53,20 @@
if "ResolveInfo" == type(query).__name__:
if str(query.return_type) in [
- 'Boolean',
- 'Context',
- 'Date',
- 'DateTime',
- 'Decimal',
- 'Dynamic',
- 'Float',
- 'ID',
- 'Int',
- 'String',
- 'Time',
- 'UUID',
- 'Boolean',
- 'String'
+ "Boolean",
+ "Context",
+ "Date",
+ "DateTime",
+ "Decimal",
+ "Dynamic",
+ "Float",
+ "ID",
+ "Int",
+ "String",
+ "Time",
+ "UUID",
+ "Boolean",
+ "String",
]:
return wrapped(*args, **kwargs)
@@ -78,12 +78,7 @@
else:
info = str(query)
- with capture_span(
- "%s.%s" % (name, info),
- span_type="external",
- span_subtype="graphql",
- span_action="query"
- ):
+ with capture_span("%s.%s" % (name, info), span_type="external", span_subtype="graphql", span_action="query"):
return wrapped(*args, **kwargs)
@@ -96,9 +91,15 @@
]
def get_graphql_tx_name(self, graphql_doc):
- op = graphql_doc.definitions[0].operation
- fields = graphql_doc.definitions[0].selection_set.selections
- return "GraphQL %s %s" % (op.upper(), "+".join([f.name.value for f in fields]))
+ try:
+ op_def = [i for i in graphql_doc.definitions if type(i).__name__ == "OperationDefinition"][0]
+ except KeyError:
+ return "GraphQL unknown operation"
+
+ op = op_def.operation
+ name = op_def.name
+ fields = op_def.selection_set.selections
+ return "GraphQL %s %s" % (op.upper(), name if name else "+".join([f.name.value for f in fields]))
def call(self, module, method, wrapped, instance, args, kwargs):
graphql_document = wrapped(*args, **kwargs)
| {"golden_diff": "diff --git a/elasticapm/instrumentation/packages/graphql.py b/elasticapm/instrumentation/packages/graphql.py\n--- a/elasticapm/instrumentation/packages/graphql.py\n+++ b/elasticapm/instrumentation/packages/graphql.py\n@@ -53,20 +53,20 @@\n \n if \"ResolveInfo\" == type(query).__name__:\n if str(query.return_type) in [\n- 'Boolean',\n- 'Context',\n- 'Date',\n- 'DateTime',\n- 'Decimal',\n- 'Dynamic',\n- 'Float',\n- 'ID',\n- 'Int',\n- 'String',\n- 'Time',\n- 'UUID',\n- 'Boolean',\n- 'String'\n+ \"Boolean\",\n+ \"Context\",\n+ \"Date\",\n+ \"DateTime\",\n+ \"Decimal\",\n+ \"Dynamic\",\n+ \"Float\",\n+ \"ID\",\n+ \"Int\",\n+ \"String\",\n+ \"Time\",\n+ \"UUID\",\n+ \"Boolean\",\n+ \"String\",\n ]:\n return wrapped(*args, **kwargs)\n \n@@ -78,12 +78,7 @@\n else:\n info = str(query)\n \n- with capture_span(\n- \"%s.%s\" % (name, info),\n- span_type=\"external\",\n- span_subtype=\"graphql\",\n- span_action=\"query\"\n- ):\n+ with capture_span(\"%s.%s\" % (name, info), span_type=\"external\", span_subtype=\"graphql\", span_action=\"query\"):\n return wrapped(*args, **kwargs)\n \n \n@@ -96,9 +91,15 @@\n ]\n \n def get_graphql_tx_name(self, graphql_doc):\n- op = graphql_doc.definitions[0].operation\n- fields = graphql_doc.definitions[0].selection_set.selections\n- return \"GraphQL %s %s\" % (op.upper(), \"+\".join([f.name.value for f in fields]))\n+ try:\n+ op_def = [i for i in graphql_doc.definitions if type(i).__name__ == \"OperationDefinition\"][0]\n+ except KeyError:\n+ return \"GraphQL unknown operation\"\n+\n+ op = op_def.operation\n+ name = op_def.name\n+ fields = op_def.selection_set.selections\n+ return \"GraphQL %s %s\" % (op.upper(), name if name else \"+\".join([f.name.value for f in fields]))\n \n def call(self, module, method, wrapped, instance, args, kwargs):\n graphql_document = wrapped(*args, **kwargs)\n", "issue": "'FragmentDefinition' object has no attribute 'operation'\"\n**Describe the bug**: \r\nI'm using elastic APM with Django 3.1.2 and graphql.\r\nOn every GraphQL Query, I'm seeing now this error: `AttributeError: 'FragmentDefinition' object has no attribute 'operation'`\r\n\r\nThe relevant file is: `elasticapm/instrumentation/packages/graphql.py in get_graphql_tx_name at line 99`\r\n\r\n**To Reproduce**\r\nI'm not sure yet, why the error is occurring and I'm just getting started with the service. 
If you can guide me to the right direction, I can create a reproducible example.\r\n\r\n**Environment (please complete the following information)**\r\n- OS: Linux docker Container\r\n- Python version:\r\n- Framework and version : Django 3.1.2\r\n- APM Server version: \r\n- Agent version: 5.9.0\r\n\r\n\r\n**Additional context**\r\n\r\nAdd any other context about the problem here.\r\n\r\n- Agent config options <!-- be careful not to post sensitive information -->\r\n <details>\r\n <summary>Click to expand</summary>\r\n\r\n ```\r\n replace this line with your agent config options\r\n remember to mask any sensitive fields like tokens\r\n ```\r\n </details>\r\n- `requirements.txt`:\r\n <details>\r\n <summary>Click to expand</summary>\r\n\r\n ```\r\n replace this line with your `requirements.txt`\r\n ```\r\n </details>\r\n\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom elasticapm import set_transaction_name\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.traces import capture_span\n\n\nclass GraphQLExecutorInstrumentation(AbstractInstrumentedModule):\n name = \"graphql\"\n\n instrument_list = [\n (\"graphql.execution.executors.sync\", \"SyncExecutor.execute\"),\n (\"graphql.execution.executors.gevent\", \"GeventExecutor.execute\"),\n (\"graphql.execution.executors.asyncio\", \"AsyncioExecutor.execute\"),\n (\"graphql.execution.executors.process\", \"ProcessExecutor.execute\"),\n (\"graphql.execution.executors.thread\", \"ThreadExecutor.execute_in_thread\"),\n (\"graphql.execution.executors.thread\", \"ThreadExecutor.execute_in_pool\"),\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n name = \"GraphQL\"\n\n info = \"\"\n query = args[2]\n\n if \"ResolveInfo\" == type(query).__name__:\n if str(query.return_type) in [\n 'Boolean',\n 'Context',\n 'Date',\n 'DateTime',\n 'Decimal',\n 'Dynamic',\n 'Float',\n 'ID',\n 'Int',\n 'String',\n 'Time',\n 'UUID',\n 'Boolean',\n 'String'\n ]:\n return wrapped(*args, **kwargs)\n\n op = query.operation.operation\n field = query.field_name\n info = \"%s %s\" % (op, field)\n elif \"RequestParams\" == type(query).__name__:\n info = \"%s %s\" % (\"request\", query.query)\n else:\n info = str(query)\n\n with capture_span(\n \"%s.%s\" % (name, info),\n span_type=\"external\",\n span_subtype=\"graphql\",\n span_action=\"query\"\n ):\n return wrapped(*args, **kwargs)\n\n\nclass GraphQLBackendInstrumentation(AbstractInstrumentedModule):\n name = \"graphql\"\n\n instrument_list = [\n (\"graphql.backend.core\", \"GraphQLCoreBackend.document_from_string\"),\n (\"graphql.backend.cache\", \"GraphQLCachedBackend.document_from_string\"),\n ]\n\n def get_graphql_tx_name(self, graphql_doc):\n op = graphql_doc.definitions[0].operation\n fields = graphql_doc.definitions[0].selection_set.selections\n return \"GraphQL %s %s\" % (op.upper(), \"+\".join([f.name.value for f in fields]))\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n graphql_document = wrapped(*args, **kwargs)\n transaction_name = self.get_graphql_tx_name(graphql_document.document_ast)\n set_transaction_name(transaction_name)\n return graphql_document\n", "path": "elasticapm/instrumentation/packages/graphql.py"}], "after_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name 
of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom elasticapm import set_transaction_name\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.traces import capture_span\n\n\nclass GraphQLExecutorInstrumentation(AbstractInstrumentedModule):\n name = \"graphql\"\n\n instrument_list = [\n (\"graphql.execution.executors.sync\", \"SyncExecutor.execute\"),\n (\"graphql.execution.executors.gevent\", \"GeventExecutor.execute\"),\n (\"graphql.execution.executors.asyncio\", \"AsyncioExecutor.execute\"),\n (\"graphql.execution.executors.process\", \"ProcessExecutor.execute\"),\n (\"graphql.execution.executors.thread\", \"ThreadExecutor.execute_in_thread\"),\n (\"graphql.execution.executors.thread\", \"ThreadExecutor.execute_in_pool\"),\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n name = \"GraphQL\"\n\n info = \"\"\n query = args[2]\n\n if \"ResolveInfo\" == type(query).__name__:\n if str(query.return_type) in [\n \"Boolean\",\n \"Context\",\n \"Date\",\n \"DateTime\",\n \"Decimal\",\n \"Dynamic\",\n \"Float\",\n \"ID\",\n \"Int\",\n \"String\",\n \"Time\",\n \"UUID\",\n \"Boolean\",\n \"String\",\n ]:\n return wrapped(*args, **kwargs)\n\n op = query.operation.operation\n field = query.field_name\n info = \"%s %s\" % (op, field)\n elif \"RequestParams\" == type(query).__name__:\n info = \"%s %s\" % (\"request\", query.query)\n else:\n info = str(query)\n\n with capture_span(\"%s.%s\" % (name, info), span_type=\"external\", span_subtype=\"graphql\", span_action=\"query\"):\n return wrapped(*args, **kwargs)\n\n\nclass GraphQLBackendInstrumentation(AbstractInstrumentedModule):\n name = \"graphql\"\n\n instrument_list = [\n (\"graphql.backend.core\", \"GraphQLCoreBackend.document_from_string\"),\n (\"graphql.backend.cache\", \"GraphQLCachedBackend.document_from_string\"),\n ]\n\n def get_graphql_tx_name(self, graphql_doc):\n try:\n op_def = [i for i in graphql_doc.definitions if type(i).__name__ == \"OperationDefinition\"][0]\n except KeyError:\n return \"GraphQL unknown operation\"\n\n op = op_def.operation\n name = op_def.name\n fields = op_def.selection_set.selections\n return \"GraphQL %s %s\" % (op.upper(), name if name else \"+\".join([f.name.value for f in fields]))\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n graphql_document = wrapped(*args, **kwargs)\n transaction_name = self.get_graphql_tx_name(graphql_document.document_ast)\n set_transaction_name(transaction_name)\n return graphql_document\n", "path": "elasticapm/instrumentation/packages/graphql.py"}]} | 1,681 | 565 |
gh_patches_debug_37682 | rasdani/github-patches | git_diff | apluslms__a-plus-1005 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
A+ Security logs, CEF format
After a security audit in spring 2021, some new security-related log events were added using the SecurityLog class. The log output format should be converted to the CEF format so that it can be exported to Aalto ITS logging systems. The current log events should also be reviewed: do they contain sufficient information, and should some additional events be added? Note that the security log should contain only events that are relevant and justifiable from a security point of view.
--- END ISSUE ---
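For orientation, a CEF event is a single log line of the form `CEF:Version|DeviceVendor|DeviceProduct|DeviceVersion|SignatureID|Name|Severity|Extensions`. The hand-rolled formatter below is only a sketch (the device version is a placeholder); a real implementation also needs CEF's escaping rules for `|`, `=`, and newlines, which dedicated libraries such as `format_cef` take care of:

```python
def to_cef(signature: str, name: str, severity: int, extensions: dict) -> str:
    # Header fields are pipe-separated; extensions are space-separated
    # key=value pairs. Escaping is omitted here for brevity.
    ext = " ".join(f"{key}={value}" for key, value in extensions.items())
    return f"CEF:0|Aalto|aplus|1.0|{signature}|{name}|{severity}|{ext}"

print(to_cef("login-failed", "failed login", 5, {"sourceAddress": "10.0.0.1"}))
# CEF:0|Aalto|aplus|1.0|login-failed|failed login|5|sourceAddress=10.0.0.1
```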
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/logging.py`
Content:
```
1 from django.http import UnreadablePostError
2 import logging
3 from django.contrib.auth.signals import user_logged_in, user_logged_out, user_login_failed
4 from django.dispatch import receiver
5 from django.http.request import HttpRequest
6
7 def skip_unreadable_post(record):
8 """Skips log records of unfinished post requests."""
9 return not record.exc_info or not issubclass(record.exc_info[0], UnreadablePostError)
10
11 class SecurityLog:
12 """
13 Static singleton class used for A+ - wide security logging.
14 Django signals are used to track login/logout events.
15 """
16
17 seclogger = logging.getLogger('aplus.security')
18
19 @staticmethod
20 def logger() -> logging.Logger:
21 return SecurityLog.seclogger
22
23 @staticmethod
24 def logevent(request: HttpRequest, type: str, message: str) -> None:
25 # Unit tests do not have user defined in request object.
26 if request and hasattr(request, 'user'):
27 user=request.user
28 ip=request.META.get('REMOTE_ADDR')
29 else:
30 user='?'
31 ip='?'
32 SecurityLog.logger().info("({}/{}): {}: {}".format(
33 user,
34 ip,
35 type,
36 message
37 ))
38
39
40 # This example was used as an inspiration for the following functions:
41 # https://stackoverflow.com/questions/37618473/how-can-i-log-both-successful-and-failed-login-and-logout-attempts-in-django
42 @receiver(user_logged_in)
43 def user_logged_in_callback(sender, request, user, **kwargs):
44 SecurityLog.logevent(request, "login", "")
45
46 @receiver(user_logged_out)
47 def user_logged_out_callback(sender, request, user, **kwargs):
48 SecurityLog.logevent(request, "logout", "")
49
50 # Unfortunately the request object is not passed with this signal,
51 # so we cannot get the IP.
52 @receiver(user_login_failed)
53 def user_login_failed_callback(sender, credentials, **kwargs):
54 SecurityLog.logevent(None, "login-failed","{credentials}".format(
55 credentials=credentials)
56 )
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/logging.py b/lib/logging.py
--- a/lib/logging.py
+++ b/lib/logging.py
@@ -3,6 +3,12 @@
from django.contrib.auth.signals import user_logged_in, user_logged_out, user_login_failed
from django.dispatch import receiver
from django.http.request import HttpRequest
+from format_cef import format_cef
+from aplus import VERSION
+
+
+CEF_VENDOR = 'Aalto'
+CEF_PRODUCT = 'aplus'
def skip_unreadable_post(record):
"""Skips log records of unfinished post requests."""
@@ -10,7 +16,8 @@
class SecurityLog:
"""
- Static singleton class used for A+ - wide security logging.
+ Static singleton class used for A+ - wide security logging,
+ to produce ArcSight Common Event Format (CEF) log.
Django signals are used to track login/logout events.
"""
@@ -21,20 +28,32 @@
return SecurityLog.seclogger
@staticmethod
- def logevent(request: HttpRequest, type: str, message: str) -> None:
- # Unit tests do not have user defined in request object.
- if request and hasattr(request, 'user'):
- user=request.user
- ip=request.META.get('REMOTE_ADDR')
- else:
- user='?'
- ip='?'
- SecurityLog.logger().info("({}/{}): {}: {}".format(
- user,
- ip,
- type,
- message
- ))
+ def logevent(
+ request: HttpRequest,
+ type: str,
+ message: str,
+ severity: int = 5,
+ ) -> None:
+ extensions = {}
+ # Unit tests may not have user or source address defined.
+ if request:
+ if hasattr(request, 'user'):
+ extensions['sourceUserName'] = str(request.user)
+ extensions['sourceUserId'] = str(request.user.id)
+ if (addr := request.META.get('REMOTE_ADDR')):
+ extensions['sourceAddress'] = addr
+
+ SecurityLog.logger().info(
+ format_cef(
+ CEF_VENDOR,
+ CEF_PRODUCT,
+ VERSION,
+ type,
+ message,
+ severity,
+ extensions,
+ ).decode("utf-8")
+ )
# This example was used as an inspiration for the following functions:
@@ -51,6 +70,8 @@
# so we cannot get the IP.
@receiver(user_login_failed)
def user_login_failed_callback(sender, credentials, **kwargs):
- SecurityLog.logevent(None, "login-failed","{credentials}".format(
- credentials=credentials)
- )
+ try:
+ SecurityLog.logevent(None, "login-failed", f"username: {credentials['username']}")
+ except KeyError:
+ # Unit tests do not have 'username' in credentials, let's not fail them for that
+ pass
| {"golden_diff": "diff --git a/lib/logging.py b/lib/logging.py\n--- a/lib/logging.py\n+++ b/lib/logging.py\n@@ -3,6 +3,12 @@\n from django.contrib.auth.signals import user_logged_in, user_logged_out, user_login_failed\n from django.dispatch import receiver\n from django.http.request import HttpRequest\n+from format_cef import format_cef\n+from aplus import VERSION\n+\n+\n+CEF_VENDOR = 'Aalto'\n+CEF_PRODUCT = 'aplus'\n \n def skip_unreadable_post(record):\n \"\"\"Skips log records of unfinished post requests.\"\"\"\n@@ -10,7 +16,8 @@\n \n class SecurityLog:\n \"\"\"\n- Static singleton class used for A+ - wide security logging.\n+ Static singleton class used for A+ - wide security logging,\n+ to produce ArcSight Common Event Format (CEF) log.\n Django signals are used to track login/logout events.\n \"\"\"\n \n@@ -21,20 +28,32 @@\n return SecurityLog.seclogger\n \n @staticmethod\n- def logevent(request: HttpRequest, type: str, message: str) -> None:\n- # Unit tests do not have user defined in request object.\n- if request and hasattr(request, 'user'):\n- user=request.user\n- ip=request.META.get('REMOTE_ADDR')\n- else:\n- user='?'\n- ip='?'\n- SecurityLog.logger().info(\"({}/{}): {}: {}\".format(\n- user,\n- ip,\n- type,\n- message\n- ))\n+ def logevent(\n+ request: HttpRequest,\n+ type: str,\n+ message: str,\n+ severity: int = 5,\n+ ) -> None:\n+ extensions = {}\n+ # Unit tests may not have user or source address defined.\n+ if request:\n+ if hasattr(request, 'user'):\n+ extensions['sourceUserName'] = str(request.user)\n+ extensions['sourceUserId'] = str(request.user.id)\n+ if (addr := request.META.get('REMOTE_ADDR')):\n+ extensions['sourceAddress'] = addr\n+\n+ SecurityLog.logger().info(\n+ format_cef(\n+ CEF_VENDOR,\n+ CEF_PRODUCT,\n+ VERSION,\n+ type,\n+ message,\n+ severity,\n+ extensions,\n+ ).decode(\"utf-8\")\n+ )\n \n \n # This example was used as an inspiration for the following functions:\n@@ -51,6 +70,8 @@\n # so we cannot get the IP.\n @receiver(user_login_failed)\n def user_login_failed_callback(sender, credentials, **kwargs):\n- SecurityLog.logevent(None, \"login-failed\",\"{credentials}\".format(\n- credentials=credentials)\n- )\n+ try:\n+ SecurityLog.logevent(None, \"login-failed\", f\"username: {credentials['username']}\")\n+ except KeyError:\n+ # Unit tests do not have 'username' in credentials, let's not fail them for that\n+ pass\n", "issue": "A+ Security logs, CEF format\nAfter security audit in spring 2021, some new security-related log events were added, using SecurityLog class. The log output format should be converted to CEF format that can be exported to Aalto ITS logging systems. Also, the current log events should be reviewed: do they contain sufficient information, and should some additional events be added. 
Note that security log should contain only relevant events, that can be justified from security point of view.\n", "before_files": [{"content": "from django.http import UnreadablePostError\nimport logging\nfrom django.contrib.auth.signals import user_logged_in, user_logged_out, user_login_failed\nfrom django.dispatch import receiver\nfrom django.http.request import HttpRequest\n\ndef skip_unreadable_post(record):\n \"\"\"Skips log records of unfinished post requests.\"\"\"\n return not record.exc_info or not issubclass(record.exc_info[0], UnreadablePostError)\n\nclass SecurityLog:\n \"\"\"\n Static singleton class used for A+ - wide security logging.\n Django signals are used to track login/logout events.\n \"\"\"\n\n seclogger = logging.getLogger('aplus.security')\n\n @staticmethod\n def logger() -> logging.Logger:\n return SecurityLog.seclogger\n\n @staticmethod\n def logevent(request: HttpRequest, type: str, message: str) -> None:\n # Unit tests do not have user defined in request object.\n if request and hasattr(request, 'user'):\n user=request.user\n ip=request.META.get('REMOTE_ADDR')\n else:\n user='?'\n ip='?'\n SecurityLog.logger().info(\"({}/{}): {}: {}\".format(\n user,\n ip,\n type,\n message\n ))\n\n\n# This example was used as an inspiration for the following functions:\n# https://stackoverflow.com/questions/37618473/how-can-i-log-both-successful-and-failed-login-and-logout-attempts-in-django\n@receiver(user_logged_in)\ndef user_logged_in_callback(sender, request, user, **kwargs):\n SecurityLog.logevent(request, \"login\", \"\")\n\n@receiver(user_logged_out)\ndef user_logged_out_callback(sender, request, user, **kwargs):\n SecurityLog.logevent(request, \"logout\", \"\")\n\n# Unfortunately the request object is not passed with this signal,\n# so we cannot get the IP.\n@receiver(user_login_failed)\ndef user_login_failed_callback(sender, credentials, **kwargs):\n SecurityLog.logevent(None, \"login-failed\",\"{credentials}\".format(\n credentials=credentials)\n )\n", "path": "lib/logging.py"}], "after_files": [{"content": "from django.http import UnreadablePostError\nimport logging\nfrom django.contrib.auth.signals import user_logged_in, user_logged_out, user_login_failed\nfrom django.dispatch import receiver\nfrom django.http.request import HttpRequest\nfrom format_cef import format_cef\nfrom aplus import VERSION\n\n\nCEF_VENDOR = 'Aalto'\nCEF_PRODUCT = 'aplus'\n\ndef skip_unreadable_post(record):\n \"\"\"Skips log records of unfinished post requests.\"\"\"\n return not record.exc_info or not issubclass(record.exc_info[0], UnreadablePostError)\n\nclass SecurityLog:\n \"\"\"\n Static singleton class used for A+ - wide security logging,\n to produce ArcSight Common Event Format (CEF) log.\n Django signals are used to track login/logout events.\n \"\"\"\n\n seclogger = logging.getLogger('aplus.security')\n\n @staticmethod\n def logger() -> logging.Logger:\n return SecurityLog.seclogger\n\n @staticmethod\n def logevent(\n request: HttpRequest,\n type: str,\n message: str,\n severity: int = 5,\n ) -> None:\n extensions = {}\n # Unit tests may not have user or source address defined.\n if request:\n if hasattr(request, 'user'):\n extensions['sourceUserName'] = str(request.user)\n extensions['sourceUserId'] = str(request.user.id)\n if (addr := request.META.get('REMOTE_ADDR')):\n extensions['sourceAddress'] = addr\n\n SecurityLog.logger().info(\n format_cef(\n CEF_VENDOR,\n CEF_PRODUCT,\n VERSION,\n type,\n message,\n severity,\n extensions,\n ).decode(\"utf-8\")\n )\n\n\n# This example 
was used as an inspiration for the following functions:\n# https://stackoverflow.com/questions/37618473/how-can-i-log-both-successful-and-failed-login-and-logout-attempts-in-django\n@receiver(user_logged_in)\ndef user_logged_in_callback(sender, request, user, **kwargs):\n SecurityLog.logevent(request, \"login\", \"\")\n\n@receiver(user_logged_out)\ndef user_logged_out_callback(sender, request, user, **kwargs):\n SecurityLog.logevent(request, \"logout\", \"\")\n\n# Unfortunately the request object is not passed with this signal,\n# so we cannot get the IP.\n@receiver(user_login_failed)\ndef user_login_failed_callback(sender, credentials, **kwargs):\n try:\n SecurityLog.logevent(None, \"login-failed\", f\"username: {credentials['username']}\")\n except KeyError:\n # Unit tests do not have 'username' in credentials, let's not fail them for that\n pass\n", "path": "lib/logging.py"}]} | 897 | 654 |
gh_patches_debug_22756 | rasdani/github-patches | git_diff | streamlit__streamlit-929 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make it easy to screencast a Streamlit app
See spec here: https://docs.google.com/presentation/d/18bNul9a6rjScGhxRmGbZbCcU3uYp_b3ckxA9DVFJlKM/edit
And see a crappy demo I wrote with some code you can steal:
https://gist.github.com/tvst/c114620cf36b77732d5d67f411c55f12
Questions:
* What browsers support this?
* Can we record as mp4 / h264?
* Can we record in a format that works on both Windows and Mac without extra installs? (Linux is not a problem -- users know how to open video in different formats)
--- END ISSUE ---
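As a rough sketch of the browser mechanics involved (not Streamlit's eventual implementation): Chromium- and Firefox-based browsers expose `navigator.mediaDevices.getDisplayMedia` together with `MediaRecorder`, which records to WebM rather than MP4/H.264. One assumption-laden way to prototype this from Python is to embed the JavaScript through the components API, which may postdate this issue:

```python
import streamlit.components.v1 as components  # assumes a release that ships components

components.html("""
<button onclick="record()">Record screencast</button>
<script>
async function record() {
  // Asks the user to pick a screen or window to capture.
  const stream = await navigator.mediaDevices.getDisplayMedia({video: true});
  // WebM is the broadly supported MediaRecorder container; MP4/H.264 varies.
  const recorder = new MediaRecorder(stream, {mimeType: "video/webm"});
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    const url = URL.createObjectURL(new Blob(chunks, {type: "video/webm"}));
    const a = document.createElement("a");
    a.href = url;
    a.download = "screencast.webm";
    a.click();
  };
  recorder.start();
}
</script>
""", height=60)
```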
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `e2e/scripts/st_latex.py`
Content:
```
1 import streamlit as st
2
3 st.latex(r"\LaTeX")
4
5 try:
6 import sympy
7
8 a, b = sympy.symbols("a b")
9 out = a + b
10 except:
11 out = "a + b"
12
13 st.latex(out)
14
```
Path: `e2e/scripts/st_chart_utc_time.py`
Content:
```
1 from datetime import date
2
3 import pandas as pd
4 import streamlit as st
5
6 df = pd.DataFrame(
7 {
8 "index": [
9 date(2019, 8, 9),
10 date(2019, 8, 10),
11 date(2019, 8, 11),
12 date(2019, 8, 12),
13 ],
14 "numbers": [10, 50, 30, 40],
15 }
16 )
17
18 df.set_index("index", inplace=True)
19
20 # st.area/bar/line_chart all use Altair/Vega-Lite under the hood.
21 # By default, Vega-Lite displays time values in the browser's local
22 # time zone. In `altair.generate_chart`, we explicitly set the time
23 # display to UTC, so that our results are consistent. This test verifies
24 # that change!
25 st.area_chart(df)
26 st.bar_chart(df)
27 st.line_chart(df)
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/e2e/scripts/st_chart_utc_time.py b/e2e/scripts/st_chart_utc_time.py
--- a/e2e/scripts/st_chart_utc_time.py
+++ b/e2e/scripts/st_chart_utc_time.py
@@ -1,3 +1,18 @@
+# -*- coding: utf-8 -*-
+# Copyright 2018-2020 Streamlit Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
from datetime import date
import pandas as pd
diff --git a/e2e/scripts/st_latex.py b/e2e/scripts/st_latex.py
--- a/e2e/scripts/st_latex.py
+++ b/e2e/scripts/st_latex.py
@@ -1,3 +1,18 @@
+# -*- coding: utf-8 -*-
+# Copyright 2018-2020 Streamlit Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
import streamlit as st
st.latex(r"\LaTeX")
| {"golden_diff": "diff --git a/e2e/scripts/st_chart_utc_time.py b/e2e/scripts/st_chart_utc_time.py\n--- a/e2e/scripts/st_chart_utc_time.py\n+++ b/e2e/scripts/st_chart_utc_time.py\n@@ -1,3 +1,18 @@\n+# -*- coding: utf-8 -*-\n+# Copyright 2018-2020 Streamlit Inc.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+\n from datetime import date\n \n import pandas as pd\ndiff --git a/e2e/scripts/st_latex.py b/e2e/scripts/st_latex.py\n--- a/e2e/scripts/st_latex.py\n+++ b/e2e/scripts/st_latex.py\n@@ -1,3 +1,18 @@\n+# -*- coding: utf-8 -*-\n+# Copyright 2018-2020 Streamlit Inc.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+\n import streamlit as st\n \n st.latex(r\"\\LaTeX\")\n", "issue": "Make it easy to screencast a Streamlit app\nSee spec here: https://docs.google.com/presentation/d/18bNul9a6rjScGhxRmGbZbCcU3uYp_b3ckxA9DVFJlKM/edit\r\n\r\nAnd see a crappy demo I wrote with some code you can steal:\r\nhttps://gist.github.com/tvst/c114620cf36b77732d5d67f411c55f12\r\n\r\nQuestions:\r\n* What browsers support this?\r\n* Can we record as mp4 / h264?\r\n* Can we record in a format that works in both Windows and Mac without extra installs? (Linux is not a problem -- users know how to open video in different formats)\n", "before_files": [{"content": "import streamlit as st\n\nst.latex(r\"\\LaTeX\")\n\ntry:\n import sympy\n\n a, b = sympy.symbols(\"a b\")\n out = a + b\nexcept:\n out = \"a + b\"\n\nst.latex(out)\n", "path": "e2e/scripts/st_latex.py"}, {"content": "from datetime import date\n\nimport pandas as pd\nimport streamlit as st\n\ndf = pd.DataFrame(\n {\n \"index\": [\n date(2019, 8, 9),\n date(2019, 8, 10),\n date(2019, 8, 11),\n date(2019, 8, 12),\n ],\n \"numbers\": [10, 50, 30, 40],\n }\n)\n\ndf.set_index(\"index\", inplace=True)\n\n# st.area/bar/line_chart all use Altair/Vega-Lite under the hood.\n# By default, Vega-Lite displays time values in the browser's local\n# time zone. In `altair.generate_chart`, we explicitly set the time\n# display to UTC, so that our results are consistent. 
This test verifies\n# that change!\nst.area_chart(df)\nst.bar_chart(df)\nst.line_chart(df)\n", "path": "e2e/scripts/st_chart_utc_time.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2018-2020 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nst.latex(r\"\\LaTeX\")\n\ntry:\n import sympy\n\n a, b = sympy.symbols(\"a b\")\n out = a + b\nexcept:\n out = \"a + b\"\n\nst.latex(out)\n", "path": "e2e/scripts/st_latex.py"}, {"content": "# -*- coding: utf-8 -*-\n# Copyright 2018-2020 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom datetime import date\n\nimport pandas as pd\nimport streamlit as st\n\ndf = pd.DataFrame(\n {\n \"index\": [\n date(2019, 8, 9),\n date(2019, 8, 10),\n date(2019, 8, 11),\n date(2019, 8, 12),\n ],\n \"numbers\": [10, 50, 30, 40],\n }\n)\n\ndf.set_index(\"index\", inplace=True)\n\n# st.area/bar/line_chart all use Altair/Vega-Lite under the hood.\n# By default, Vega-Lite displays time values in the browser's local\n# time zone. In `altair.generate_chart`, we explicitly set the time\n# display to UTC, so that our results are consistent. This test verifies\n# that change!\nst.area_chart(df)\nst.bar_chart(df)\nst.line_chart(df)\n", "path": "e2e/scripts/st_chart_utc_time.py"}]} | 791 | 434 |
gh_patches_debug_8419 | rasdani/github-patches | git_diff | searxng__searxng-2830 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: internetarchivescholar engine
**Version of SearXNG, the commit number if you are using the master branch, and whether you forked SearXNG**
Repository: https://github.com/searxng/searxng
Branch: master
Version: 2023.9.19+3ac7c40b6
<!-- Check if these values are correct -->
**How did you install SearXNG?**
<!-- Did you install SearXNG using the official wiki or using searxng-docker
or manually by executing the searx/webapp.py file? -->
**What happened?**
<!-- A clear and concise description of what the bug is. -->
**How To Reproduce**
<!-- How can we reproduce this issue? (as minimally and as precisely as possible) -->
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Screenshots & Logs**
<!-- If applicable, add screenshots, logs to help explain your problem. -->
**Additional context**
<!-- Add any other context about the problem here. -->
**Technical report**
Error
* Error: KeyError
* Percentage: 25
* Parameters: `()`
* File name: `searx/engines/internet_archive_scholar.py:59`
* Function: `response`
* Code: `'title': result['biblio']['title'],`
--- END ISSUE ---
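The traceback shows an unconditional `result['biblio']['title']` lookup failing on records without a title. Below is a hedged sketch of the defensive pattern; whether the real fix falls back to `container_name` or drops the record is an assumption here:

```python
for result in json["results"]:
    biblio = result["biblio"]
    # .get() avoids the KeyError; falling back to the container name is
    # one plausible choice when a record carries no title of its own.
    title = biblio.get("title") or biblio.get("container_name")
    if not title:
        continue  # or substitute a placeholder, depending on desired UX
```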
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/internet_archive_scholar.py`
Content:
```
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 """Internet Archive scholar(science)
4 """
5
6 from datetime import datetime
7 from urllib.parse import urlencode
8 from searx.utils import html_to_text
9
10 about = {
11 "website": "https://scholar.archive.org/",
12 "wikidata_id": "Q115667709",
13 "official_api_documentation": "https://scholar.archive.org/api/redoc",
14 "use_official_api": True,
15 "require_api_key": False,
16 "results": "JSON",
17 }
18 categories = ['science', 'scientific publications']
19 paging = True
20
21 base_url = "https://scholar.archive.org"
22 results_per_page = 15
23
24
25 def request(query, params):
26 args = {
27 "q": query,
28 "limit": results_per_page,
29 "offset": (params["pageno"] - 1) * results_per_page,
30 }
31 params["url"] = f"{base_url}/search?{urlencode(args)}"
32 params["headers"]["Accept"] = "application/json"
33 return params
34
35
36 def response(resp):
37 results = []
38
39 json = resp.json()
40
41 for result in json["results"]:
42 publishedDate, content, doi = None, '', None
43
44 if result['biblio'].get('release_date'):
45 publishedDate = datetime.strptime(result['biblio']['release_date'], "%Y-%m-%d")
46
47 if len(result['abstracts']) > 0:
48 content = result['abstracts'][0].get('body')
49 elif len(result['_highlights']) > 0:
50 content = result['_highlights'][0]
51
52 if len(result['releases']) > 0:
53 doi = result['releases'][0].get('doi')
54
55 results.append(
56 {
57 'template': 'paper.html',
58 'url': result['fulltext']['access_url'],
59 'title': result['biblio']['title'],
60 'content': html_to_text(content),
61 'publisher': result['biblio'].get('publisher'),
62 'doi': doi,
63 'journal': result['biblio'].get('container_name'),
64 'authors': result['biblio'].get('contrib_names'),
65 'tags': result['tags'],
66 'publishedDate': publishedDate,
67 'issns': result['biblio'].get('issns'),
68 'pdf_url': result['fulltext'].get('access_url'),
69 }
70 )
71
72 return results
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/searx/engines/internet_archive_scholar.py b/searx/engines/internet_archive_scholar.py
--- a/searx/engines/internet_archive_scholar.py
+++ b/searx/engines/internet_archive_scholar.py
@@ -56,7 +56,7 @@
{
'template': 'paper.html',
'url': result['fulltext']['access_url'],
- 'title': result['biblio']['title'],
+ 'title': result['biblio'].get('title') or result['biblio'].get('container_name'),
'content': html_to_text(content),
'publisher': result['biblio'].get('publisher'),
'doi': doi,
| {"golden_diff": "diff --git a/searx/engines/internet_archive_scholar.py b/searx/engines/internet_archive_scholar.py\n--- a/searx/engines/internet_archive_scholar.py\n+++ b/searx/engines/internet_archive_scholar.py\n@@ -56,7 +56,7 @@\n {\n 'template': 'paper.html',\n 'url': result['fulltext']['access_url'],\n- 'title': result['biblio']['title'],\n+ 'title': result['biblio'].get('title') or result['biblio'].get('container_name'),\n 'content': html_to_text(content),\n 'publisher': result['biblio'].get('publisher'),\n 'doi': doi,\n", "issue": "Bug: internetarchivescholar engine\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\nRepository: https://github.com/searxng/searxng\r\nBranch: master\r\nVersion: 2023.9.19+3ac7c40b6\r\n<!-- Check if these values are correct -->\r\n\r\n**How did you install SearXNG?**\r\n<!-- Did you install SearXNG using the official wiki or using searxng-docker\r\nor manually by executing the searx/webapp.py file? -->\r\n**What happened?**\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n**How To Reproduce**\r\n<!-- How can we reproduce this issue? (as minimally and as precisely as possible) -->\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n**Screenshots & Logs**\r\n<!-- If applicable, add screenshots, logs to help explain your problem. -->\r\n\r\n**Additional context**\r\n<!-- Add any other context about the problem here. -->\r\n\r\n**Technical report**\r\n\r\nError\r\n * Error: KeyError\r\n * Percentage: 25\r\n * Parameters: `()`\r\n * File name: `searx/engines/internet_archive_scholar.py:59`\r\n * Function: `response`\r\n * Code: `'title': result['biblio']['title'],`\r\n\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Internet Archive scholar(science)\n\"\"\"\n\nfrom datetime import datetime\nfrom urllib.parse import urlencode\nfrom searx.utils import html_to_text\n\nabout = {\n \"website\": \"https://scholar.archive.org/\",\n \"wikidata_id\": \"Q115667709\",\n \"official_api_documentation\": \"https://scholar.archive.org/api/redoc\",\n \"use_official_api\": True,\n \"require_api_key\": False,\n \"results\": \"JSON\",\n}\ncategories = ['science', 'scientific publications']\npaging = True\n\nbase_url = \"https://scholar.archive.org\"\nresults_per_page = 15\n\n\ndef request(query, params):\n args = {\n \"q\": query,\n \"limit\": results_per_page,\n \"offset\": (params[\"pageno\"] - 1) * results_per_page,\n }\n params[\"url\"] = f\"{base_url}/search?{urlencode(args)}\"\n params[\"headers\"][\"Accept\"] = \"application/json\"\n return params\n\n\ndef response(resp):\n results = []\n\n json = resp.json()\n\n for result in json[\"results\"]:\n publishedDate, content, doi = None, '', None\n\n if result['biblio'].get('release_date'):\n publishedDate = datetime.strptime(result['biblio']['release_date'], \"%Y-%m-%d\")\n\n if len(result['abstracts']) > 0:\n content = result['abstracts'][0].get('body')\n elif len(result['_highlights']) > 0:\n content = result['_highlights'][0]\n\n if len(result['releases']) > 0:\n doi = result['releases'][0].get('doi')\n\n results.append(\n {\n 'template': 'paper.html',\n 'url': result['fulltext']['access_url'],\n 'title': result['biblio']['title'],\n 'content': html_to_text(content),\n 'publisher': result['biblio'].get('publisher'),\n 'doi': doi,\n 'journal': result['biblio'].get('container_name'),\n 'authors': 
result['biblio'].get('contrib_names'),\n 'tags': result['tags'],\n 'publishedDate': publishedDate,\n 'issns': result['biblio'].get('issns'),\n 'pdf_url': result['fulltext'].get('access_url'),\n }\n )\n\n return results\n", "path": "searx/engines/internet_archive_scholar.py"}], "after_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Internet Archive scholar(science)\n\"\"\"\n\nfrom datetime import datetime\nfrom urllib.parse import urlencode\nfrom searx.utils import html_to_text\n\nabout = {\n \"website\": \"https://scholar.archive.org/\",\n \"wikidata_id\": \"Q115667709\",\n \"official_api_documentation\": \"https://scholar.archive.org/api/redoc\",\n \"use_official_api\": True,\n \"require_api_key\": False,\n \"results\": \"JSON\",\n}\ncategories = ['science', 'scientific publications']\npaging = True\n\nbase_url = \"https://scholar.archive.org\"\nresults_per_page = 15\n\n\ndef request(query, params):\n args = {\n \"q\": query,\n \"limit\": results_per_page,\n \"offset\": (params[\"pageno\"] - 1) * results_per_page,\n }\n params[\"url\"] = f\"{base_url}/search?{urlencode(args)}\"\n params[\"headers\"][\"Accept\"] = \"application/json\"\n return params\n\n\ndef response(resp):\n results = []\n\n json = resp.json()\n\n for result in json[\"results\"]:\n publishedDate, content, doi = None, '', None\n\n if result['biblio'].get('release_date'):\n publishedDate = datetime.strptime(result['biblio']['release_date'], \"%Y-%m-%d\")\n\n if len(result['abstracts']) > 0:\n content = result['abstracts'][0].get('body')\n elif len(result['_highlights']) > 0:\n content = result['_highlights'][0]\n\n if len(result['releases']) > 0:\n doi = result['releases'][0].get('doi')\n\n results.append(\n {\n 'template': 'paper.html',\n 'url': result['fulltext']['access_url'],\n 'title': result['biblio'].get('title') or result['biblio'].get('container_name'),\n 'content': html_to_text(content),\n 'publisher': result['biblio'].get('publisher'),\n 'doi': doi,\n 'journal': result['biblio'].get('container_name'),\n 'authors': result['biblio'].get('contrib_names'),\n 'tags': result['tags'],\n 'publishedDate': publishedDate,\n 'issns': result['biblio'].get('issns'),\n 'pdf_url': result['fulltext'].get('access_url'),\n }\n )\n\n return results\n", "path": "searx/engines/internet_archive_scholar.py"}]} | 1,246 | 163 |
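The one-line fix in the record above replaces a hard `['title']` lookup with chained `dict.get()` calls. A self-contained sketch of that defensive-access pattern; the sample payload below is hypothetical, not a real scholar.archive.org response:

```python
# Defensive access for optional JSON keys, as in the patch above.
result = {
    "biblio": {"container_name": "Example Journal"},           # no 'title' key
    "fulltext": {"access_url": "https://example.org/x.pdf"},
}

# result["biblio"]["title"] would raise KeyError here.
# dict.get() returns None instead, and `or` falls through to the
# next candidate, so a missing title degrades gracefully.
title = result["biblio"].get("title") or result["biblio"].get("container_name")
assert title == "Example Journal"
```

One caveat of the `or` chain: an empty-string title also falls through to the container name, which is usually what you want for display but is worth keeping in mind.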
gh_patches_debug_21665 | rasdani/github-patches | git_diff | qtile__qtile-1241 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fake screens
[This part of the documentation](https://github.com/qtile/qtile/blob/ed7198a5fb5438110f81a8c6ecc0e6289676c057/libqtile/config.py#L231-L232) mentions "fake screens", and the term is also found [in the code](https://github.com/qtile/qtile/blob/7c2a88fba68bdcf6f25dfb5494a74afc475d674e/libqtile/manager.py#L357-L373).
What are they? How to use them?
We need to document answers to those questions, and then make sure they work correctly.
See #1192 for this last point.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libqtile/confreader.py`
Content:
```
1 # coding: utf-8
2 #
3 # Copyright (c) 2008, Aldo Cortesi <[email protected]>
4 # Copyright (c) 2011, Andrew Grigorev <[email protected]>
5 #
6 # All rights reserved.
7 #
8 # Permission is hereby granted, free of charge, to any person obtaining a copy
9 # of this software and associated documentation files (the "Software"), to deal
10 # in the Software without restriction, including without limitation the rights
11 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
12 # copies of the Software, and to permit persons to whom the Software is
13 # furnished to do so, subject to the following conditions:
14 #
15 # The above copyright notice and this permission notice shall be included in
16 # all copies or substantial portions of the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
19 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
20 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
21 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
22 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
23 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
24 # SOFTWARE.
25 import os
26 import sys
27
28
29 class ConfigError(Exception):
30 pass
31
32
33 class Config(object):
34 settings_keys = [
35 "keys",
36 "mouse",
37 "groups",
38 "dgroups_key_binder",
39 "dgroups_app_rules",
40 "follow_mouse_focus",
41 "focus_on_window_activation",
42 "cursor_warp",
43 "layouts",
44 "floating_layout",
45 "screens",
46 "main",
47 "auto_fullscreen",
48 "widget_defaults",
49 "extension_defaults",
50 "bring_front_click",
51 "wmname",
52 ]
53
54 def __init__(self, **settings):
55 """Create a Config() object from settings
56
57 Only attributes found in Config.settings_keys will be added to object.
58 config attribute precedence is 1.) **settings 2.) self 3.) default_config
59 """
60 from .resources import default_config
61 default = vars(default_config)
62 for key in self.settings_keys:
63 try:
64 value = settings[key]
65 except KeyError:
66 value = getattr(self, key, default[key])
67 setattr(self, key, value)
68 self._init_deprecated(**settings)
69
70 def _init_deprecated(self, extensions=None, **settings):
71 "Initialize deprecated settings."
72 if extensions: # Deprecated in v0.10.7
73 import warnings
74 warnings.warn("'extentions' is deprecated, use "
75 "'extension_defaults'", DeprecationWarning)
76 self.extension_defaults.update(extensions.get('dmenu', {}))
77
78 @classmethod
79 def from_file(cls, path):
80 "Create a Config() object from the python file located at path."
81 try:
82 sys.path.insert(0, os.path.dirname(path))
83 config = __import__(os.path.basename(path)[:-3])
84 except Exception:
85 import traceback
86 from .log_utils import logger
87 logger.exception('Could not import config file %r', path)
88 tb = traceback.format_exc()
89 raise ConfigError(tb)
90 return cls(**vars(config))
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libqtile/confreader.py b/libqtile/confreader.py
--- a/libqtile/confreader.py
+++ b/libqtile/confreader.py
@@ -66,6 +66,7 @@
value = getattr(self, key, default[key])
setattr(self, key, value)
self._init_deprecated(**settings)
+ self._init_fake_screens(**settings)
def _init_deprecated(self, extensions=None, **settings):
"Initialize deprecated settings."
@@ -75,6 +76,14 @@
"'extension_defaults'", DeprecationWarning)
self.extension_defaults.update(extensions.get('dmenu', {}))
+ def _init_fake_screens(self, **settings):
+ " Initiaize fake_screens if they are set."
+ try:
+ value = settings['fake_screens']
+ setattr(self, 'fake_screens', value)
+ except KeyError:
+ pass
+
@classmethod
def from_file(cls, path):
"Create a Config() object from the python file located at path."
| {"golden_diff": "diff --git a/libqtile/confreader.py b/libqtile/confreader.py\n--- a/libqtile/confreader.py\n+++ b/libqtile/confreader.py\n@@ -66,6 +66,7 @@\n value = getattr(self, key, default[key])\n setattr(self, key, value)\n self._init_deprecated(**settings)\n+ self._init_fake_screens(**settings)\n \n def _init_deprecated(self, extensions=None, **settings):\n \"Initialize deprecated settings.\"\n@@ -75,6 +76,14 @@\n \"'extension_defaults'\", DeprecationWarning)\n self.extension_defaults.update(extensions.get('dmenu', {}))\n \n+ def _init_fake_screens(self, **settings):\n+ \" Initiaize fake_screens if they are set.\"\n+ try:\n+ value = settings['fake_screens']\n+ setattr(self, 'fake_screens', value)\n+ except KeyError:\n+ pass\n+\n @classmethod\n def from_file(cls, path):\n \"Create a Config() object from the python file located at path.\"\n", "issue": "Fake screens\n[This part of the documentation](https://github.com/qtile/qtile/blob/ed7198a5fb5438110f81a8c6ecc0e6289676c057/libqtile/config.py#L231-L232) mentions \"fake screens\", and the term is also found [in the code](https://github.com/qtile/qtile/blob/7c2a88fba68bdcf6f25dfb5494a74afc475d674e/libqtile/manager.py#L357-L373).\r\n\r\nWhat are they? How to use them?\r\n\r\nWe need to document answers to those questions, and then make sure they work correctly.\r\n\r\nSee #1192 for this last point.\n", "before_files": [{"content": "# coding: utf-8\n#\n# Copyright (c) 2008, Aldo Cortesi <[email protected]>\n# Copyright (c) 2011, Andrew Grigorev <[email protected]>\n#\n# All rights reserved.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\nimport os\nimport sys\n\n\nclass ConfigError(Exception):\n pass\n\n\nclass Config(object):\n settings_keys = [\n \"keys\",\n \"mouse\",\n \"groups\",\n \"dgroups_key_binder\",\n \"dgroups_app_rules\",\n \"follow_mouse_focus\",\n \"focus_on_window_activation\",\n \"cursor_warp\",\n \"layouts\",\n \"floating_layout\",\n \"screens\",\n \"main\",\n \"auto_fullscreen\",\n \"widget_defaults\",\n \"extension_defaults\",\n \"bring_front_click\",\n \"wmname\",\n ]\n\n def __init__(self, **settings):\n \"\"\"Create a Config() object from settings\n\n Only attributes found in Config.settings_keys will be added to object.\n config attribute precedence is 1.) **settings 2.) self 3.) 
default_config\n \"\"\"\n from .resources import default_config\n default = vars(default_config)\n for key in self.settings_keys:\n try:\n value = settings[key]\n except KeyError:\n value = getattr(self, key, default[key])\n setattr(self, key, value)\n self._init_deprecated(**settings)\n\n def _init_deprecated(self, extensions=None, **settings):\n \"Initialize deprecated settings.\"\n if extensions: # Deprecated in v0.10.7\n import warnings\n warnings.warn(\"'extentions' is deprecated, use \"\n \"'extension_defaults'\", DeprecationWarning)\n self.extension_defaults.update(extensions.get('dmenu', {}))\n\n @classmethod\n def from_file(cls, path):\n \"Create a Config() object from the python file located at path.\"\n try:\n sys.path.insert(0, os.path.dirname(path))\n config = __import__(os.path.basename(path)[:-3])\n except Exception:\n import traceback\n from .log_utils import logger\n logger.exception('Could not import config file %r', path)\n tb = traceback.format_exc()\n raise ConfigError(tb)\n return cls(**vars(config))\n", "path": "libqtile/confreader.py"}], "after_files": [{"content": "# coding: utf-8\n#\n# Copyright (c) 2008, Aldo Cortesi <[email protected]>\n# Copyright (c) 2011, Andrew Grigorev <[email protected]>\n#\n# All rights reserved.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\nimport os\nimport sys\n\n\nclass ConfigError(Exception):\n pass\n\n\nclass Config(object):\n settings_keys = [\n \"keys\",\n \"mouse\",\n \"groups\",\n \"dgroups_key_binder\",\n \"dgroups_app_rules\",\n \"follow_mouse_focus\",\n \"focus_on_window_activation\",\n \"cursor_warp\",\n \"layouts\",\n \"floating_layout\",\n \"screens\",\n \"main\",\n \"auto_fullscreen\",\n \"widget_defaults\",\n \"extension_defaults\",\n \"bring_front_click\",\n \"wmname\",\n ]\n\n def __init__(self, **settings):\n \"\"\"Create a Config() object from settings\n\n Only attributes found in Config.settings_keys will be added to object.\n config attribute precedence is 1.) **settings 2.) self 3.) 
default_config\n \"\"\"\n from .resources import default_config\n default = vars(default_config)\n for key in self.settings_keys:\n try:\n value = settings[key]\n except KeyError:\n value = getattr(self, key, default[key])\n setattr(self, key, value)\n self._init_deprecated(**settings)\n self._init_fake_screens(**settings)\n\n def _init_deprecated(self, extensions=None, **settings):\n \"Initialize deprecated settings.\"\n if extensions: # Deprecated in v0.10.7\n import warnings\n warnings.warn(\"'extentions' is deprecated, use \"\n \"'extension_defaults'\", DeprecationWarning)\n self.extension_defaults.update(extensions.get('dmenu', {}))\n\n def _init_fake_screens(self, **settings):\n \" Initiaize fake_screens if they are set.\"\n try:\n value = settings['fake_screens']\n setattr(self, 'fake_screens', value)\n except KeyError:\n pass\n\n @classmethod\n def from_file(cls, path):\n \"Create a Config() object from the python file located at path.\"\n try:\n sys.path.insert(0, os.path.dirname(path))\n config = __import__(os.path.basename(path)[:-3])\n except Exception:\n import traceback\n from .log_utils import logger\n logger.exception('Could not import config file %r', path)\n tb = traceback.format_exc()\n raise ConfigError(tb)\n return cls(**vars(config))\n", "path": "libqtile/confreader.py"}]} | 1,325 | 236 |
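To make the patch above concrete: once `_init_fake_screens` exists, a `fake_screens` list in the user's config is copied onto the `Config` object, while configs without it are untouched. A hedged sketch follows; the `Screen(x=, y=, width=, height=)` parameters reflect qtile's documented fake-screens usage around this era, the geometry values are invented, and the assertions assume an installed qtile that can load its bundled defaults:

```python
# Hypothetical config.py fragment splitting one 1920x1080 monitor
# into two side-by-side fake screens.
from libqtile.config import Screen
from libqtile.confreader import Config

fake_screens = [
    Screen(x=0, y=0, width=960, height=1080),     # left half
    Screen(x=960, y=0, width=960, height=1080),   # right half
]

# Behaviour added by the patched Config (sketch):
with_fakes = Config(fake_screens=fake_screens)
plain = Config()
assert hasattr(with_fakes, "fake_screens")
assert not hasattr(plain, "fake_screens")  # attribute only set when provided
```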
gh_patches_debug_795 | rasdani/github-patches | git_diff | Parsl__parsl-140 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Do not import `parsl` before requirements are set up
```
[annawoodard@midway001 parsl]$ python setup.py install
Traceback (most recent call last):
File "setup.py", line 2, in <module>
from parsl.version import VERSION
File "/home/annawoodard/parsl/parsl/__init__.py", line 35, in <module>
from parsl.executors.ipp import IPyParallelExecutor
File "/home/annawoodard/parsl/parsl/executors/ipp.py", line 4, in <module>
from ipyparallel import Client
ModuleNotFoundError: No module named 'ipyparallel'
```
Setuptools is supposed to take care of dependencies for us, but importing parsl in `setup.py` breaks that: reading `VERSION` from `version.py` via an import first runs `parsl/__init__.py`, which imports the third-party dependencies before setuptools has had a chance to install them. We should avoid this.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_packages
2 from parsl.version import VERSION
3
4 with open('requirements.txt') as f:
5 install_requires = f.readlines()
6
7 # tests_require = parse_requirements('test-requirements.txt')
8
9 setup(
10 name='parsl',
11 version=VERSION,
12 description='Simple data dependent workflows in Python',
13 long_description='Simple and easy parallel workflows system for Python',
14 url='https://github.com/Parsl/parsl',
15 author='Yadu Nand Babuji',
16 author_email='[email protected]',
17 license='Apache 2.0',
18 download_url='https://github.com/Parsl/parsl/archive/{}.tar.gz'.format(VERSION),
19 package_data={'': ['LICENSE']},
20 packages=find_packages(),
21 install_requires=install_requires,
22 classifiers=[
23 # Maturity
24 'Development Status :: 3 - Alpha',
25 # Intended audience
26 'Intended Audience :: Developers',
27 # Licence, must match with licence above
28 'License :: OSI Approved :: Apache Software License',
29 # Python versions supported
30 'Programming Language :: Python :: 3.5',
31 'Programming Language :: Python :: 3.6',
32 ],
33 keywords=['Workflows', 'Scientific computing'],
34 )
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,5 +1,7 @@
from setuptools import setup, find_packages
-from parsl.version import VERSION
+
+with open('parsl/version.py') as f:
+ exec(f.read())
with open('requirements.txt') as f:
install_requires = f.readlines()
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,5 +1,7 @@\n from setuptools import setup, find_packages\n-from parsl.version import VERSION\n+\n+with open('parsl/version.py') as f:\n+ exec(f.read())\n \n with open('requirements.txt') as f:\n install_requires = f.readlines()\n", "issue": "Do not import `parsl` before requirements are setup\n```\r\n[annawoodard@midway001 parsl]$ python setup.py install\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 2, in <module>\r\n from parsl.version import VERSION\r\n File \"/home/annawoodard/parsl/parsl/__init__.py\", line 35, in <module>\r\n from parsl.executors.ipp import IPyParallelExecutor\r\n File \"/home/annawoodard/parsl/parsl/executors/ipp.py\", line 4, in <module>\r\n from ipyparallel import Client\r\nModuleNotFoundError: No module named 'ipyparallel'\r\n```\r\n\r\nSetuptools is supposed to take care of dependencies for us, but importing parsl in `setup.py` breaks that (because we require the dependencies by importing the parsl version from `version.py` before they can be installed). We should avoid this.\n", "before_files": [{"content": "from setuptools import setup, find_packages\nfrom parsl.version import VERSION\n\nwith open('requirements.txt') as f:\n install_requires = f.readlines()\n\n# tests_require = parse_requirements('test-requirements.txt')\n\nsetup(\n name='parsl',\n version=VERSION,\n description='Simple data dependent workflows in Python',\n long_description='Simple and easy parallel workflows system for Python',\n url='https://github.com/Parsl/parsl',\n author='Yadu Nand Babuji',\n author_email='[email protected]',\n license='Apache 2.0',\n download_url='https://github.com/Parsl/parsl/archive/{}.tar.gz'.format(VERSION),\n package_data={'': ['LICENSE']},\n packages=find_packages(),\n install_requires=install_requires,\n classifiers=[\n # Maturity\n 'Development Status :: 3 - Alpha',\n # Intended audience\n 'Intended Audience :: Developers',\n # Licence, must match with licence above\n 'License :: OSI Approved :: Apache Software License',\n # Python versions supported\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n keywords=['Workflows', 'Scientific computing'],\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\n\nwith open('parsl/version.py') as f:\n exec(f.read())\n\nwith open('requirements.txt') as f:\n install_requires = f.readlines()\n\n# tests_require = parse_requirements('test-requirements.txt')\n\nsetup(\n name='parsl',\n version=VERSION,\n description='Simple data dependent workflows in Python',\n long_description='Simple and easy parallel workflows system for Python',\n url='https://github.com/Parsl/parsl',\n author='Yadu Nand Babuji',\n author_email='[email protected]',\n license='Apache 2.0',\n download_url='https://github.com/Parsl/parsl/archive/{}.tar.gz'.format(VERSION),\n package_data={'': ['LICENSE']},\n packages=find_packages(),\n install_requires=install_requires,\n classifiers=[\n # Maturity\n 'Development Status :: 3 - Alpha',\n # Intended audience\n 'Intended Audience :: Developers',\n # Licence, must match with licence above\n 'License :: OSI Approved :: Apache Software License',\n # Python versions supported\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n keywords=['Workflows', 'Scientific computing'],\n)\n", "path": "setup.py"}]} | 796 | 81 |
gh_patches_debug_29007 | rasdani/github-patches | git_diff | vega__altair-2642 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dots aren't showing up in ranged dot plot

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `altair/examples/ranged_dot_plot.py`
Content:
```
1 """
2 Ranged Dot Plot
3 -----------------
4 This example shows a ranged dot plot that uses 'layer' to convey changing life expectancy for the five most populous countries (between 1955 and 2000).
5 """
6 # category: other charts
7 import altair as alt
8 from vega_datasets import data
9
10 source = data.countries.url
11
12 chart = alt.layer(
13 data=source
14 ).transform_filter(
15 filter={"field": 'country',
16 "oneOf": ["China", "India", "United States", "Indonesia", "Brazil"]}
17 ).transform_filter(
18 filter={'field': 'year',
19 "oneOf": [1955, 2000]}
20 )
21
22 chart += alt.Chart().mark_line(color='#db646f').encode(
23 x='life_expect:Q',
24 y='country:N',
25 detail='country:N'
26 )
27 # Add points for life expectancy in 1955 & 2000
28 chart += alt.Chart().mark_point(
29 size=100,
30 opacity=1,
31 filled=True
32 ).encode(
33 x='life_expect:Q',
34 y='country:N',
35 color=alt.Color('year:O',
36 scale=alt.Scale(
37 domain=['1955', '2000'],
38 range=['#e6959c', '#911a24']
39 )
40 )
41 ).interactive()
42
43 chart
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/altair/examples/ranged_dot_plot.py b/altair/examples/ranged_dot_plot.py
--- a/altair/examples/ranged_dot_plot.py
+++ b/altair/examples/ranged_dot_plot.py
@@ -1,7 +1,7 @@
"""
Ranged Dot Plot
------------------
-This example shows a ranged dot plot that uses 'layer' to convey changing life expectancy for the five most populous countries (between 1955 and 2000).
+---------------
+This example shows a ranged dot plot to convey changing life expectancy for the five most populous countries (between 1955 and 2000).
"""
# category: other charts
import altair as alt
@@ -9,7 +9,7 @@
source = data.countries.url
-chart = alt.layer(
+chart = alt.Chart(
data=source
).transform_filter(
filter={"field": 'country',
@@ -19,13 +19,13 @@
"oneOf": [1955, 2000]}
)
-chart += alt.Chart().mark_line(color='#db646f').encode(
+line = chart.mark_line(color='#db646f').encode(
x='life_expect:Q',
y='country:N',
detail='country:N'
)
# Add points for life expectancy in 1955 & 2000
-chart += alt.Chart().mark_point(
+points = chart.mark_point(
size=100,
opacity=1,
filled=True
@@ -34,10 +34,10 @@
y='country:N',
color=alt.Color('year:O',
scale=alt.Scale(
- domain=['1955', '2000'],
+ domain=[1955, 2000],
range=['#e6959c', '#911a24']
)
)
).interactive()
-chart
+(line + points)
| {"golden_diff": "diff --git a/altair/examples/ranged_dot_plot.py b/altair/examples/ranged_dot_plot.py\n--- a/altair/examples/ranged_dot_plot.py\n+++ b/altair/examples/ranged_dot_plot.py\n@@ -1,7 +1,7 @@\n \"\"\"\n Ranged Dot Plot\n------------------\n-This example shows a ranged dot plot that uses 'layer' to convey changing life expectancy for the five most populous countries (between 1955 and 2000).\n+---------------\n+This example shows a ranged dot plot to convey changing life expectancy for the five most populous countries (between 1955 and 2000).\n \"\"\"\n # category: other charts\n import altair as alt\n@@ -9,7 +9,7 @@\n \n source = data.countries.url\n \n-chart = alt.layer(\n+chart = alt.Chart(\n data=source\n ).transform_filter(\n filter={\"field\": 'country',\n@@ -19,13 +19,13 @@\n \"oneOf\": [1955, 2000]}\n )\n \n-chart += alt.Chart().mark_line(color='#db646f').encode(\n+line = chart.mark_line(color='#db646f').encode(\n x='life_expect:Q',\n y='country:N',\n detail='country:N'\n )\n # Add points for life expectancy in 1955 & 2000\n-chart += alt.Chart().mark_point(\n+points = chart.mark_point(\n size=100,\n opacity=1,\n filled=True\n@@ -34,10 +34,10 @@\n y='country:N',\n color=alt.Color('year:O',\n scale=alt.Scale(\n- domain=['1955', '2000'],\n+ domain=[1955, 2000],\n range=['#e6959c', '#911a24']\n )\n )\n ).interactive()\n \n-chart\n+(line + points)\n", "issue": "Dots aren't showing up in ranged dot plot\n\r\n\n", "before_files": [{"content": "\"\"\"\nRanged Dot Plot\n-----------------\nThis example shows a ranged dot plot that uses 'layer' to convey changing life expectancy for the five most populous countries (between 1955 and 2000).\n\"\"\"\n# category: other charts\nimport altair as alt\nfrom vega_datasets import data\n\nsource = data.countries.url\n\nchart = alt.layer(\n data=source\n).transform_filter(\n filter={\"field\": 'country',\n \"oneOf\": [\"China\", \"India\", \"United States\", \"Indonesia\", \"Brazil\"]}\n).transform_filter(\n filter={'field': 'year',\n \"oneOf\": [1955, 2000]}\n)\n\nchart += alt.Chart().mark_line(color='#db646f').encode(\n x='life_expect:Q',\n y='country:N',\n detail='country:N'\n)\n# Add points for life expectancy in 1955 & 2000\nchart += alt.Chart().mark_point(\n size=100,\n opacity=1,\n filled=True\n).encode(\n x='life_expect:Q',\n y='country:N',\n color=alt.Color('year:O',\n scale=alt.Scale(\n domain=['1955', '2000'],\n range=['#e6959c', '#911a24']\n )\n )\n).interactive()\n\nchart\n", "path": "altair/examples/ranged_dot_plot.py"}], "after_files": [{"content": "\"\"\"\nRanged Dot Plot\n---------------\nThis example shows a ranged dot plot to convey changing life expectancy for the five most populous countries (between 1955 and 2000).\n\"\"\"\n# category: other charts\nimport altair as alt\nfrom vega_datasets import data\n\nsource = data.countries.url\n\nchart = alt.Chart(\n data=source\n).transform_filter(\n filter={\"field\": 'country',\n \"oneOf\": [\"China\", \"India\", \"United States\", \"Indonesia\", \"Brazil\"]}\n).transform_filter(\n filter={'field': 'year',\n \"oneOf\": [1955, 2000]}\n)\n\nline = chart.mark_line(color='#db646f').encode(\n x='life_expect:Q',\n y='country:N',\n detail='country:N'\n)\n# Add points for life expectancy in 1955 & 2000\npoints = chart.mark_point(\n size=100,\n opacity=1,\n filled=True\n).encode(\n x='life_expect:Q',\n y='country:N',\n color=alt.Color('year:O',\n scale=alt.Scale(\n domain=[1955, 2000],\n range=['#e6959c', '#911a24']\n )\n )\n).interactive()\n\n(line + points)\n", "path": 
"altair/examples/ranged_dot_plot.py"}]} | 740 | 435 |
gh_patches_debug_7421 | rasdani/github-patches | git_diff | safe-global__safe-config-service-65 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Format JSON keys in camel case
Endpoints consumed by the clients should have the JSON keys in camel case. Having them in camel case follows the formatting that we use in other services.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/config/settings.py`
Content:
```
1 """
2 Django settings for safe_client_config_service project.
3
4 Generated by 'django-admin startproject' using Django 3.2.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/3.2/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/3.2/ref/settings/
11 """
12 import os
13 from distutils.util import strtobool
14 from pathlib import Path
15
16 # Build paths inside the project like this: BASE_DIR / 'subdir'.
17 BASE_DIR = Path(__file__).resolve().parent.parent
18
19 # Quick-start development settings - unsuitable for production
20 # See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/
21
22 # SECURITY WARNING: keep the secret key used in production secret!
23 SECRET_KEY = os.getenv("SECRET_KEY", None)
24
25 # SECURITY WARNING: don't run with debug turned on in production!
26 DEBUG = bool(strtobool(os.getenv("DEBUG", "false")))
27
28 # https://docs.djangoproject.com/en/3.2/ref/settings/#std:setting-ALLOWED_HOSTS
29 allowed_hosts = os.getenv("DJANGO_ALLOWED_HOSTS", ".localhost,127.0.0.1,[::1]")
30 ALLOWED_HOSTS = [allowed_host.strip() for allowed_host in allowed_hosts.split(",")]
31
32 # Application definition
33
34 default_renderer_classes = os.getenv(
35 "REST_DEFAULT_RENDERER_CLASSES", "rest_framework.renderers.JSONRenderer"
36 )
37 REST_FRAMEWORK = {
38 # https://www.django-rest-framework.org/api-guide/renderers/
39 "DEFAULT_RENDERER_CLASSES": [
40 default_renderer_class.strip()
41 for default_renderer_class in default_renderer_classes.split(",")
42 ]
43 }
44
45 INSTALLED_APPS = [
46 "safe_apps.apps.AppsConfig",
47 "django.contrib.admin",
48 "django.contrib.auth",
49 "django.contrib.contenttypes",
50 "django.contrib.sessions",
51 "django.contrib.messages",
52 "django.contrib.staticfiles",
53 "rest_framework",
54 ]
55
56 MIDDLEWARE = [
57 "config.middleware.LoggingMiddleware",
58 "django.middleware.security.SecurityMiddleware",
59 "django.contrib.sessions.middleware.SessionMiddleware",
60 "django.middleware.common.CommonMiddleware",
61 "django.middleware.csrf.CsrfViewMiddleware",
62 "django.contrib.auth.middleware.AuthenticationMiddleware",
63 "django.contrib.messages.middleware.MessageMiddleware",
64 "django.middleware.clickjacking.XFrameOptionsMiddleware",
65 ]
66
67 CACHES = {
68 "default": {
69 "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
70 },
71 "safe-apps": {
72 "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
73 },
74 }
75
76 LOGGING = {
77 "version": 1,
78 "disable_existing_loggers": False,
79 "formatters": {
80 "short": {"format": "%(asctime)s %(message)s"},
81 "verbose": {
82 "format": "%(asctime)s [%(levelname)s] [%(processName)s] %(message)s"
83 },
84 },
85 "handlers": {
86 "console": {
87 "class": "logging.StreamHandler",
88 "formatter": "verbose",
89 },
90 "console_short": {
91 "class": "logging.StreamHandler",
92 "formatter": "short",
93 },
94 },
95 "root": {
96 "handlers": ["console"],
97 "level": os.getenv("ROOT_LOG_LEVEL", "INFO"),
98 },
99 "loggers": {
100 "LoggingMiddleware": {
101 "handlers": ["console_short"],
102 "level": "INFO",
103 "propagate": False,
104 },
105 },
106 }
107
108 ROOT_URLCONF = "config.urls"
109
110 TEMPLATES = [
111 {
112 "BACKEND": "django.template.backends.django.DjangoTemplates",
113 "DIRS": [],
114 "APP_DIRS": True,
115 "OPTIONS": {
116 "context_processors": [
117 "django.template.context_processors.debug",
118 "django.template.context_processors.request",
119 "django.contrib.auth.context_processors.auth",
120 "django.contrib.messages.context_processors.messages",
121 ],
122 },
123 },
124 ]
125
126 WSGI_APPLICATION = "config.wsgi.application"
127
128 # Database
129 # https://docs.djangoproject.com/en/3.2/ref/settings/#databases
130
131 DATABASES = {
132 "default": {
133 "ENGINE": "django.db.backends.postgresql",
134 "NAME": os.getenv("POSTGRES_NAME", "postgres"),
135 "USER": os.getenv("POSTGRES_USER", "postgres"),
136 "PASSWORD": os.getenv("POSTGRES_PASSWORD", "postgres"),
137 "HOST": os.getenv("POSTGRES_HOST", "db"),
138 "PORT": os.getenv("POSTGRES_PORT", "5432"),
139 }
140 }
141
142 # Password validation
143 # https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators
144
145 AUTH_PASSWORD_VALIDATORS = [
146 {
147 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
148 },
149 {
150 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
151 },
152 {
153 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
154 },
155 {
156 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
157 },
158 ]
159
160 # Internationalization
161 # https://docs.djangoproject.com/en/3.2/topics/i18n/
162
163 LANGUAGE_CODE = "en-us"
164
165 TIME_ZONE = "UTC"
166
167 USE_I18N = True
168
169 USE_L10N = True
170
171 USE_TZ = True
172
173 # Static files (CSS, JavaScript, Images)
174 # https://docs.djangoproject.com/en/3.2/howto/static-files/
175
176 STATIC_URL = "/static/"
177
178 # Default primary key field type
179 # https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field
180
181 DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/config/settings.py b/src/config/settings.py
--- a/src/config/settings.py
+++ b/src/config/settings.py
@@ -31,14 +31,10 @@
# Application definition
-default_renderer_classes = os.getenv(
- "REST_DEFAULT_RENDERER_CLASSES", "rest_framework.renderers.JSONRenderer"
-)
REST_FRAMEWORK = {
# https://www.django-rest-framework.org/api-guide/renderers/
"DEFAULT_RENDERER_CLASSES": [
- default_renderer_class.strip()
- for default_renderer_class in default_renderer_classes.split(",")
+ "djangorestframework_camel_case.render.CamelCaseJSONRenderer",
]
}
| {"golden_diff": "diff --git a/src/config/settings.py b/src/config/settings.py\n--- a/src/config/settings.py\n+++ b/src/config/settings.py\n@@ -31,14 +31,10 @@\n \n # Application definition\n \n-default_renderer_classes = os.getenv(\n- \"REST_DEFAULT_RENDERER_CLASSES\", \"rest_framework.renderers.JSONRenderer\"\n-)\n REST_FRAMEWORK = {\n # https://www.django-rest-framework.org/api-guide/renderers/\n \"DEFAULT_RENDERER_CLASSES\": [\n- default_renderer_class.strip()\n- for default_renderer_class in default_renderer_classes.split(\",\")\n+ \"djangorestframework_camel_case.render.CamelCaseJSONRenderer\",\n ]\n }\n", "issue": "Format JSON keys in camel case\nEndpoints consumed by the clients should have the JSON keys in camel case. By having them camel case, it follows the formatting that we have in other services. \n", "before_files": [{"content": "\"\"\"\nDjango settings for safe_client_config_service project.\n\nGenerated by 'django-admin startproject' using Django 3.2.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.2/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.2/ref/settings/\n\"\"\"\nimport os\nfrom distutils.util import strtobool\nfrom pathlib import Path\n\n# Build paths inside the project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.getenv(\"SECRET_KEY\", None)\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = bool(strtobool(os.getenv(\"DEBUG\", \"false\")))\n\n# https://docs.djangoproject.com/en/3.2/ref/settings/#std:setting-ALLOWED_HOSTS\nallowed_hosts = os.getenv(\"DJANGO_ALLOWED_HOSTS\", \".localhost,127.0.0.1,[::1]\")\nALLOWED_HOSTS = [allowed_host.strip() for allowed_host in allowed_hosts.split(\",\")]\n\n# Application definition\n\ndefault_renderer_classes = os.getenv(\n \"REST_DEFAULT_RENDERER_CLASSES\", \"rest_framework.renderers.JSONRenderer\"\n)\nREST_FRAMEWORK = {\n # https://www.django-rest-framework.org/api-guide/renderers/\n \"DEFAULT_RENDERER_CLASSES\": [\n default_renderer_class.strip()\n for default_renderer_class in default_renderer_classes.split(\",\")\n ]\n}\n\nINSTALLED_APPS = [\n \"safe_apps.apps.AppsConfig\",\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"rest_framework\",\n]\n\nMIDDLEWARE = [\n \"config.middleware.LoggingMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nCACHES = {\n \"default\": {\n \"BACKEND\": \"django.core.cache.backends.locmem.LocMemCache\",\n },\n \"safe-apps\": {\n \"BACKEND\": \"django.core.cache.backends.locmem.LocMemCache\",\n },\n}\n\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"short\": {\"format\": \"%(asctime)s %(message)s\"},\n \"verbose\": {\n \"format\": \"%(asctime)s [%(levelname)s] [%(processName)s] 
%(message)s\"\n },\n },\n \"handlers\": {\n \"console\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"verbose\",\n },\n \"console_short\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"short\",\n },\n },\n \"root\": {\n \"handlers\": [\"console\"],\n \"level\": os.getenv(\"ROOT_LOG_LEVEL\", \"INFO\"),\n },\n \"loggers\": {\n \"LoggingMiddleware\": {\n \"handlers\": [\"console_short\"],\n \"level\": \"INFO\",\n \"propagate\": False,\n },\n },\n}\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.2/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"NAME\": os.getenv(\"POSTGRES_NAME\", \"postgres\"),\n \"USER\": os.getenv(\"POSTGRES_USER\", \"postgres\"),\n \"PASSWORD\": os.getenv(\"POSTGRES_PASSWORD\", \"postgres\"),\n \"HOST\": os.getenv(\"POSTGRES_HOST\", \"db\"),\n \"PORT\": os.getenv(\"POSTGRES_PORT\", \"5432\"),\n }\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.2/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.2/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\n# Default primary key field type\n# https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field\n\nDEFAULT_AUTO_FIELD = \"django.db.models.BigAutoField\"\n", "path": "src/config/settings.py"}], "after_files": [{"content": "\"\"\"\nDjango settings for safe_client_config_service project.\n\nGenerated by 'django-admin startproject' using Django 3.2.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.2/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.2/ref/settings/\n\"\"\"\nimport os\nfrom distutils.util import strtobool\nfrom pathlib import Path\n\n# Build paths inside the project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.getenv(\"SECRET_KEY\", None)\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = bool(strtobool(os.getenv(\"DEBUG\", \"false\")))\n\n# https://docs.djangoproject.com/en/3.2/ref/settings/#std:setting-ALLOWED_HOSTS\nallowed_hosts = os.getenv(\"DJANGO_ALLOWED_HOSTS\", 
\".localhost,127.0.0.1,[::1]\")\nALLOWED_HOSTS = [allowed_host.strip() for allowed_host in allowed_hosts.split(\",\")]\n\n# Application definition\n\nREST_FRAMEWORK = {\n # https://www.django-rest-framework.org/api-guide/renderers/\n \"DEFAULT_RENDERER_CLASSES\": [\n \"djangorestframework_camel_case.render.CamelCaseJSONRenderer\",\n ]\n}\n\nINSTALLED_APPS = [\n \"safe_apps.apps.AppsConfig\",\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"rest_framework\",\n]\n\nMIDDLEWARE = [\n \"config.middleware.LoggingMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nCACHES = {\n \"default\": {\n \"BACKEND\": \"django.core.cache.backends.locmem.LocMemCache\",\n },\n \"safe-apps\": {\n \"BACKEND\": \"django.core.cache.backends.locmem.LocMemCache\",\n },\n}\n\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"short\": {\"format\": \"%(asctime)s %(message)s\"},\n \"verbose\": {\n \"format\": \"%(asctime)s [%(levelname)s] [%(processName)s] %(message)s\"\n },\n },\n \"handlers\": {\n \"console\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"verbose\",\n },\n \"console_short\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"short\",\n },\n },\n \"root\": {\n \"handlers\": [\"console\"],\n \"level\": os.getenv(\"ROOT_LOG_LEVEL\", \"INFO\"),\n },\n \"loggers\": {\n \"LoggingMiddleware\": {\n \"handlers\": [\"console_short\"],\n \"level\": \"INFO\",\n \"propagate\": False,\n },\n },\n}\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.2/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"NAME\": os.getenv(\"POSTGRES_NAME\", \"postgres\"),\n \"USER\": os.getenv(\"POSTGRES_USER\", \"postgres\"),\n \"PASSWORD\": os.getenv(\"POSTGRES_PASSWORD\", \"postgres\"),\n \"HOST\": os.getenv(\"POSTGRES_HOST\", \"db\"),\n \"PORT\": os.getenv(\"POSTGRES_PORT\", \"5432\"),\n }\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.2/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = 
True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.2/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\n# Default primary key field type\n# https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field\n\nDEFAULT_AUTO_FIELD = \"django.db.models.BigAutoField\"\n", "path": "src/config/settings.py"}]} | 1,950 | 138 |
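For context on the renderer the patch above wires in: `CamelCaseJSONRenderer`, from the `djangorestframework-camel-case` package (which must also be added to the service's requirements), recursively converts snake_case keys on the way out. A sketch of the effect; the payload is invented and the exact byte output may vary slightly by version:

```python
from djangorestframework_camel_case.render import CamelCaseJSONRenderer

payload = {
    "safe_app_id": 1,
    "chain_ids": [1, 4],
    "provider": {"external_url": "https://example.org"},
}
body = CamelCaseJSONRenderer().render(payload)
# approximately:
# b'{"safeAppId":1,"chainIds":[1,4],"provider":{"externalUrl":"https://example.org"}}'
```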
gh_patches_debug_19361 | rasdani/github-patches | git_diff | docker__docker-py-1248 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
create a sans-io docker client class and implement Client with blocking requests
use https://github.com/mikeal/deferred to create a sans-io version of Client (SansIOClient) that requires something like:
``` python
class SimpleStream(object):
def next(self) -> Deferred:
...
class IOAdapter(object):
def request(self, **kwargs) -> Deferred:
...
def stream(self, **kwargs) -> SimpleStream:
...
def unwrap_deferred(self, deferred: Deferred) -> Any:
...
```
and then implement it with something like:
``` python
class BlockingSimpleStream(SimpleStream):
def __init__(self, stream):
self.generator = _stream_helper(stream)
def next(self) -> Deferred:
return deferred.succeeded(next(self.generator))
class BlockingIOAdapter(IOAdapter):
def __init__(self, session: requests.Session):
self.session = session
def request(self, **kwargs) -> Deferred:
return deferred.execute(self.session.request, **kwargs)
def stream(self, **kwargs) -> BlockingSimpleStream:
return BlockingSimpleStream(self.session.request(**kwargs, stream=True))
def unwrap_deferred(self, d: Deferred):
return deferred.waitForDeferred(d).getResult()
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 from setuptools import setup
6
7
8 ROOT_DIR = os.path.dirname(__file__)
9 SOURCE_DIR = os.path.join(ROOT_DIR)
10
11 requirements = [
12 'requests >= 2.5.2, < 2.11',
13 'six >= 1.4.0',
14 'websocket-client >= 0.32.0',
15 'docker-pycreds >= 0.2.1'
16 ]
17
18 if sys.platform == 'win32':
19 requirements.append('pypiwin32 >= 219')
20
21 extras_require = {
22 ':python_version < "3.5"': 'backports.ssl_match_hostname >= 3.5',
23 # While not imported explicitly, the ipaddress module is required for
24 # ssl_match_hostname to verify hosts match with certificates via
25 # ServerAltname: https://pypi.python.org/pypi/backports.ssl_match_hostname
26 ':python_version < "3.3"': 'ipaddress >= 1.0.16',
27 }
28
29 version = None
30 exec(open('docker/version.py').read())
31
32 with open('./test-requirements.txt') as test_reqs_txt:
33 test_requirements = [line for line in test_reqs_txt]
34
35
36 setup(
37 name="docker-py",
38 version=version,
39 description="Python client for Docker.",
40 url='https://github.com/docker/docker-py/',
41 packages=[
42 'docker', 'docker.api', 'docker.auth', 'docker.transport',
43 'docker.utils', 'docker.utils.ports', 'docker.ssladapter',
44 'docker.types',
45 ],
46 install_requires=requirements,
47 tests_require=test_requirements,
48 extras_require=extras_require,
49 zip_safe=False,
50 test_suite='tests',
51 classifiers=[
52 'Development Status :: 4 - Beta',
53 'Environment :: Other Environment',
54 'Intended Audience :: Developers',
55 'Operating System :: OS Independent',
56 'Programming Language :: Python',
57 'Programming Language :: Python :: 2',
58 'Programming Language :: Python :: 2.6',
59 'Programming Language :: Python :: 2.7',
60 'Programming Language :: Python :: 3',
61 'Programming Language :: Python :: 3.3',
62 'Programming Language :: Python :: 3.4',
63 'Programming Language :: Python :: 3.5',
64 'Topic :: Utilities',
65 'License :: OSI Approved :: Apache Software License',
66 ],
67 )
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -33,10 +33,20 @@
test_requirements = [line for line in test_reqs_txt]
+long_description = ''
+try:
+ with open('./README.rst') as readme_rst:
+ long_description = readme_rst.read()
+except IOError:
+ # README.rst is only generated on release. Its absence should not prevent
+ # setup.py from working properly.
+ pass
+
setup(
name="docker-py",
version=version,
description="Python client for Docker.",
+ long_description=long_description,
url='https://github.com/docker/docker-py/',
packages=[
'docker', 'docker.api', 'docker.auth', 'docker.transport',
@@ -64,4 +74,6 @@
'Topic :: Utilities',
'License :: OSI Approved :: Apache Software License',
],
+ maintainer='Joffrey F',
+ maintainer_email='[email protected]',
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -33,10 +33,20 @@\n test_requirements = [line for line in test_reqs_txt]\n \n \n+long_description = ''\n+try:\n+ with open('./README.rst') as readme_rst:\n+ long_description = readme_rst.read()\n+except IOError:\n+ # README.rst is only generated on release. Its absence should not prevent\n+ # setup.py from working properly.\n+ pass\n+\n setup(\n name=\"docker-py\",\n version=version,\n description=\"Python client for Docker.\",\n+ long_description=long_description,\n url='https://github.com/docker/docker-py/',\n packages=[\n 'docker', 'docker.api', 'docker.auth', 'docker.transport',\n@@ -64,4 +74,6 @@\n 'Topic :: Utilities',\n 'License :: OSI Approved :: Apache Software License',\n ],\n+ maintainer='Joffrey F',\n+ maintainer_email='[email protected]',\n )\n", "issue": "create a sans-io docker client class and impliment Client with blocking requests\nuse https://github.com/mikeal/deferred to create a sans-io version of Client (SansIOClient) that requires something like:\n\n``` python\nclass SimpleStream(object):\n def next(self) -> Deferred:\n ...\n\nclass IOAdapter(object):\n def request(self, **kwargs) -> Deferred:\n ...\n\n def stream(self, **kwargs) -> SimpleStream:\n ...\n\n def unwrap_deferred(self, deferred: Deferred) -> Any:\n ...\n```\n\nand then implement it with something like:\n\n``` python\nclass BlockingSimpleStream(SimpleStream):\n def __init__(self, stream):\n self.generator = _stream_helper(stream):\n def next(self) -> Deferred:\n return deferred.succeeded(next(self.generator))\n\nclass BlockingIOAdapter(IOAdapter):\n def __init__(session: requests.Session):\n self.session = session\n\n def request(self, **kwargs) -> Deferred:\n return deferred.execute(self.session.request, **kwargs)\n\n def stream(self, **kwargs) -> BlockingSimpleStream:\n return BlockingSimpleStream(self.session.request(**kwargs, stream=True))\n\n def unwrap_deferred(self, d: Deferred):\n return deferred.waitForDeferred(d).getResult()\n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\nROOT_DIR = os.path.dirname(__file__)\nSOURCE_DIR = os.path.join(ROOT_DIR)\n\nrequirements = [\n 'requests >= 2.5.2, < 2.11',\n 'six >= 1.4.0',\n 'websocket-client >= 0.32.0',\n 'docker-pycreds >= 0.2.1'\n]\n\nif sys.platform == 'win32':\n requirements.append('pypiwin32 >= 219')\n\nextras_require = {\n ':python_version < \"3.5\"': 'backports.ssl_match_hostname >= 3.5',\n # While not imported explicitly, the ipaddress module is required for\n # ssl_match_hostname to verify hosts match with certificates via\n # ServerAltname: https://pypi.python.org/pypi/backports.ssl_match_hostname\n ':python_version < \"3.3\"': 'ipaddress >= 1.0.16',\n}\n\nversion = None\nexec(open('docker/version.py').read())\n\nwith open('./test-requirements.txt') as test_reqs_txt:\n test_requirements = [line for line in test_reqs_txt]\n\n\nsetup(\n name=\"docker-py\",\n version=version,\n description=\"Python client for Docker.\",\n url='https://github.com/docker/docker-py/',\n packages=[\n 'docker', 'docker.api', 'docker.auth', 'docker.transport',\n 'docker.utils', 'docker.utils.ports', 'docker.ssladapter',\n 'docker.types',\n ],\n install_requires=requirements,\n tests_require=test_requirements,\n extras_require=extras_require,\n zip_safe=False,\n test_suite='tests',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 
'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: Apache Software License',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\nROOT_DIR = os.path.dirname(__file__)\nSOURCE_DIR = os.path.join(ROOT_DIR)\n\nrequirements = [\n 'requests >= 2.5.2, < 2.11',\n 'six >= 1.4.0',\n 'websocket-client >= 0.32.0',\n 'docker-pycreds >= 0.2.1'\n]\n\nif sys.platform == 'win32':\n requirements.append('pypiwin32 >= 219')\n\nextras_require = {\n ':python_version < \"3.5\"': 'backports.ssl_match_hostname >= 3.5',\n # While not imported explicitly, the ipaddress module is required for\n # ssl_match_hostname to verify hosts match with certificates via\n # ServerAltname: https://pypi.python.org/pypi/backports.ssl_match_hostname\n ':python_version < \"3.3\"': 'ipaddress >= 1.0.16',\n}\n\nversion = None\nexec(open('docker/version.py').read())\n\nwith open('./test-requirements.txt') as test_reqs_txt:\n test_requirements = [line for line in test_reqs_txt]\n\n\nlong_description = ''\ntry:\n with open('./README.rst') as readme_rst:\n long_description = readme_rst.read()\nexcept IOError:\n # README.rst is only generated on release. Its absence should not prevent\n # setup.py from working properly.\n pass\n\nsetup(\n name=\"docker-py\",\n version=version,\n description=\"Python client for Docker.\",\n long_description=long_description,\n url='https://github.com/docker/docker-py/',\n packages=[\n 'docker', 'docker.api', 'docker.auth', 'docker.transport',\n 'docker.utils', 'docker.utils.ports', 'docker.ssladapter',\n 'docker.types',\n ],\n install_requires=requirements,\n tests_require=test_requirements,\n extras_require=extras_require,\n zip_safe=False,\n test_suite='tests',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: Apache Software License',\n ],\n maintainer='Joffrey F',\n maintainer_email='[email protected]',\n)\n", "path": "setup.py"}]} | 1,164 | 233 |
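
The core of this patch is reading README.rst defensively, since the file is only generated on release. A minimal standalone sketch of that pattern — the helper name and the explicit encoding are assumptions for illustration, not part of the original setup.py:

```python
import io

def read_long_description(path='README.rst'):
    """Return the README contents, or '' if it has not been generated yet."""
    try:
        # io.open accepts an encoding argument on both Python 2 and 3,
        # matching the 2/3 support matrix this setup.py declares.
        with io.open(path, encoding='utf-8') as readme_rst:
            return readme_rst.read()
    except IOError:
        # README.rst is only generated on release; its absence should not
        # prevent setup.py from working.
        return ''

print(repr(read_long_description('no-such-file.rst')))  # '' on a plain checkout
```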
gh_patches_debug_49088 | rasdani/github-patches | git_diff | fossasia__open-event-server-5232 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
is_stripe_linked property does not set on connecting stripe account
**Describe the bug**
The is_stripe_linked property is not set on connecting a Stripe account; instead, an error is thrown.
**Expected behavior**
The property should be set appropriately.
**Error**
```
Class 'sqlalchemy.orm.query.Query' is not mapped Traceback (most recent call last):
File "/home/rs/Pradeep/github/open-event-server/env/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 1722, in add
state = attributes.instance_state(instance)
AttributeError: 'Query' object has no attribute '_sa_instance_state'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/rs/Pradeep/github/open-event-server/app/api/helpers/db.py", line 22, in save_to_db
db.session.add(item)
File "/home/rs/Pradeep/github/open-event-server/env/lib/python3.5/site-packages/sqlalchemy/orm/scoping.py", line 157, in do
return getattr(self.registry(), name)(*args, **kwargs)
File "/home/rs/Pradeep/github/open-event-server/env/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 1724, in add
raise exc.UnmappedInstanceError(instance)
sqlalchemy.orm.exc.UnmappedInstanceError: Class 'sqlalchemy.orm.query.Query' is not mapped
ERROR:root:DB Exception! Class 'sqlalchemy.orm.query.Query' is not mapped
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/stripe_authorization.py`
Content:
```
1 from flask_rest_jsonapi import ResourceDetail, ResourceList
2 from sqlalchemy.orm.exc import NoResultFound
3
4 from app.api.bootstrap import api
5 from app.api.helpers.db import safe_query, get_count, save_to_db
6 from app.api.helpers.exceptions import ForbiddenException, ConflictException, UnprocessableEntity
7 from app.api.helpers.payment import StripePaymentsManager
8 from app.api.helpers.permission_manager import has_access
9 from app.api.helpers.permissions import jwt_required
10 from app.api.helpers.utilities import require_relationship
11 from app.api.schema.stripe_authorization import StripeAuthorizationSchema
12 from app.models import db
13 from app.models.event import Event
14 from app.models.stripe_authorization import StripeAuthorization
15
16
17 class StripeAuthorizationListPost(ResourceList):
18 """
19 List and Create Stripe Authorization
20 """
21 def before_post(self, args, kwargs, data):
22 """
23 before post method to check for required relationship and proper permission
24 :param args:
25 :param kwargs:
26 :param data:
27 :return:
28 """
29 require_relationship(['event'], data)
30 if not has_access('is_organizer', event_id=data['event']):
31 raise ForbiddenException({'source': ''}, "Minimum Organizer access required")
32 if get_count(db.session.query(Event).filter_by(id=int(data['event']), can_pay_by_stripe=False)) > 0:
33 raise ForbiddenException({'pointer': ''}, "Stripe payment is disabled for this Event")
34
35 def before_create_object(self, data, view_kwargs):
36 """
37 method to check if stripe authorization object already exists for an event.
38 Raises ConflictException if it already exists.
39 If it doesn't, then uses the StripePaymentManager to get the other credentials from Stripe.
40 :param data:
41 :param view_kwargs:
42 :return:
43 """
44 try:
45 self.session.query(StripeAuthorization).filter_by(event_id=data['event'], deleted_at=None).one()
46 except NoResultFound:
47 credentials = StripePaymentsManager\
48 .get_event_organizer_credentials_from_stripe(data['stripe_auth_code'])
49 if 'error' in credentials:
50 raise UnprocessableEntity({'pointer': '/data/stripe_auth_code'}, credentials['error_description'])
51 data['stripe_secret_key'] = credentials['access_token']
52 data['stripe_refresh_token'] = credentials['refresh_token']
53 data['stripe_publishable_key'] = credentials['stripe_publishable_key']
54 data['stripe_user_id'] = credentials['stripe_user_id']
55 else:
56 raise ConflictException({'pointer': '/data/relationships/event'},
57 "Stripe Authorization already exists for this event")
58
59 def after_create_object(self, stripe_authorization, data, view_kwargs):
60 """
61 after create object method for StripeAuthorizationListPost Class
62 :param stripe_authorization: Stripe authorization created from mashmallow_jsonapi
63 :param data:
64 :param view_kwargs:
65 :return:
66 """
67 event = db.session.query(Event).filter_by(id=int(data['event']))
68 event.is_stripe_linked = True
69 save_to_db(event)
70
71 schema = StripeAuthorizationSchema
72 decorators = (jwt_required, )
73 methods = ['POST']
74 data_layer = {'session': db.session,
75 'model': StripeAuthorization,
76 'methods': {
77 'before_create_object': before_create_object,
78 'after_create_object': after_create_object
79 }}
80
81
82 class StripeAuthorizationDetail(ResourceDetail):
83 """
84 Stripe Authorization Detail Resource by ID
85 """
86 def before_get_object(self, view_kwargs):
87 """
88 method to get id of stripe authorization related to an event
89 :param view_kwargs:
90 :return:
91 """
92 if view_kwargs.get('event_identifier'):
93 event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
94 view_kwargs['event_id'] = event.id
95
96 if view_kwargs.get('event_id'):
97 stripe_authorization = \
98 safe_query(self, StripeAuthorization, 'event_id', view_kwargs['event_id'], 'event_id')
99 view_kwargs['id'] = stripe_authorization.id
100
101 def after_delete_object(self, stripe_authorization, view_kwargs):
102 """Make work after delete object
103 :param stripe_authorization: stripe authorization.
104 :param dict view_kwargs: kwargs from the resource view
105 """
106 event = stripe_authorization.event
107 event.is_stripe_linked = False
108 save_to_db(event)
109
110 decorators = (api.has_permission('is_coorganizer', fetch="event_id",
111 fetch_as="event_id", model=StripeAuthorization),)
112 schema = StripeAuthorizationSchema
113 data_layer = {'session': db.session,
114 'model': StripeAuthorization,
115 'methods': {
116 'before_get_object': before_get_object,
117 'after_delete_object': after_delete_object
118 }}
119
120
121 class StripeAuthorizationRelationship(ResourceDetail):
122 """
123 Stripe Authorization Relationship
124 """
125
126 decorators = (api.has_permission('is_coorganizer', fetch="event_id",
127 fetch_as="event_id", model=StripeAuthorization),)
128 schema = StripeAuthorizationSchema
129 data_layer = {'session': db.session,
130 'model': StripeAuthorization}
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/api/stripe_authorization.py b/app/api/stripe_authorization.py
--- a/app/api/stripe_authorization.py
+++ b/app/api/stripe_authorization.py
@@ -64,7 +64,7 @@
:param view_kwargs:
:return:
"""
- event = db.session.query(Event).filter_by(id=int(data['event']))
+ event = db.session.query(Event).filter_by(id=int(data['event'])).one()
event.is_stripe_linked = True
save_to_db(event)
| {"golden_diff": "diff --git a/app/api/stripe_authorization.py b/app/api/stripe_authorization.py\n--- a/app/api/stripe_authorization.py\n+++ b/app/api/stripe_authorization.py\n@@ -64,7 +64,7 @@\n :param view_kwargs:\n :return:\n \"\"\"\n- event = db.session.query(Event).filter_by(id=int(data['event']))\n+ event = db.session.query(Event).filter_by(id=int(data['event'])).one()\n event.is_stripe_linked = True\n save_to_db(event)\n", "issue": "is_stripe_linked property does not set on connecting stripe account\n**Describe the bug**\r\nis_stripe_linked property does not set on connecting stripe account. It throws error.\r\n\r\n**Expected behavior**\r\nProperty should set appropriately.\r\n\r\n**Error**\r\n```\r\nClass 'sqlalchemy.orm.query.Query' is not mapped Traceback (most recent call last):\r\n File \"/home/rs/Pradeep/github/open-event-server/env/lib/python3.5/site-packages/sqlalchemy/orm/session.py\", line 1722, in add\r\n state = attributes.instance_state(instance)\r\nAttributeError: 'Query' object has no attribute '_sa_instance_state'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/rs/Pradeep/github/open-event-server/app/api/helpers/db.py\", line 22, in save_to_db\r\n db.session.add(item)\r\n File \"/home/rs/Pradeep/github/open-event-server/env/lib/python3.5/site-packages/sqlalchemy/orm/scoping.py\", line 157, in do\r\n return getattr(self.registry(), name)(*args, **kwargs)\r\n File \"/home/rs/Pradeep/github/open-event-server/env/lib/python3.5/site-packages/sqlalchemy/orm/session.py\", line 1724, in add\r\n raise exc.UnmappedInstanceError(instance)\r\nsqlalchemy.orm.exc.UnmappedInstanceError: Class 'sqlalchemy.orm.query.Query' is not mapped\r\nERROR:root:DB Exception! 
Class 'sqlalchemy.orm.query.Query' is not mapped\r\n```\n", "before_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query, get_count, save_to_db\nfrom app.api.helpers.exceptions import ForbiddenException, ConflictException, UnprocessableEntity\nfrom app.api.helpers.payment import StripePaymentsManager\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.stripe_authorization import StripeAuthorizationSchema\nfrom app.models import db\nfrom app.models.event import Event\nfrom app.models.stripe_authorization import StripeAuthorization\n\n\nclass StripeAuthorizationListPost(ResourceList):\n \"\"\"\n List and Create Stripe Authorization\n \"\"\"\n def before_post(self, args, kwargs, data):\n \"\"\"\n before post method to check for required relationship and proper permission\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_organizer', event_id=data['event']):\n raise ForbiddenException({'source': ''}, \"Minimum Organizer access required\")\n if get_count(db.session.query(Event).filter_by(id=int(data['event']), can_pay_by_stripe=False)) > 0:\n raise ForbiddenException({'pointer': ''}, \"Stripe payment is disabled for this Event\")\n\n def before_create_object(self, data, view_kwargs):\n \"\"\"\n method to check if stripe authorization object already exists for an event.\n Raises ConflictException if it already exists.\n If it doesn't, then uses the StripePaymentManager to get the other credentials from Stripe.\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n try:\n self.session.query(StripeAuthorization).filter_by(event_id=data['event'], deleted_at=None).one()\n except NoResultFound:\n credentials = StripePaymentsManager\\\n .get_event_organizer_credentials_from_stripe(data['stripe_auth_code'])\n if 'error' in credentials:\n raise UnprocessableEntity({'pointer': '/data/stripe_auth_code'}, credentials['error_description'])\n data['stripe_secret_key'] = credentials['access_token']\n data['stripe_refresh_token'] = credentials['refresh_token']\n data['stripe_publishable_key'] = credentials['stripe_publishable_key']\n data['stripe_user_id'] = credentials['stripe_user_id']\n else:\n raise ConflictException({'pointer': '/data/relationships/event'},\n \"Stripe Authorization already exists for this event\")\n\n def after_create_object(self, stripe_authorization, data, view_kwargs):\n \"\"\"\n after create object method for StripeAuthorizationListPost Class\n :param stripe_authorization: Stripe authorization created from mashmallow_jsonapi\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n event = db.session.query(Event).filter_by(id=int(data['event']))\n event.is_stripe_linked = True\n save_to_db(event)\n\n schema = StripeAuthorizationSchema\n decorators = (jwt_required, )\n methods = ['POST']\n data_layer = {'session': db.session,\n 'model': StripeAuthorization,\n 'methods': {\n 'before_create_object': before_create_object,\n 'after_create_object': after_create_object\n }}\n\n\nclass StripeAuthorizationDetail(ResourceDetail):\n \"\"\"\n Stripe Authorization Detail Resource by ID\n \"\"\"\n def before_get_object(self, view_kwargs):\n \"\"\"\n method to get id of stripe authorization related to an event\n :param 
view_kwargs:\n :return:\n \"\"\"\n if view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n view_kwargs['event_id'] = event.id\n\n if view_kwargs.get('event_id'):\n stripe_authorization = \\\n safe_query(self, StripeAuthorization, 'event_id', view_kwargs['event_id'], 'event_id')\n view_kwargs['id'] = stripe_authorization.id\n\n def after_delete_object(self, stripe_authorization, view_kwargs):\n \"\"\"Make work after delete object\n :param stripe_authorization: stripe authorization.\n :param dict view_kwargs: kwargs from the resource view\n \"\"\"\n event = stripe_authorization.event\n event.is_stripe_linked = False\n save_to_db(event)\n\n decorators = (api.has_permission('is_coorganizer', fetch=\"event_id\",\n fetch_as=\"event_id\", model=StripeAuthorization),)\n schema = StripeAuthorizationSchema\n data_layer = {'session': db.session,\n 'model': StripeAuthorization,\n 'methods': {\n 'before_get_object': before_get_object,\n 'after_delete_object': after_delete_object\n }}\n\n\nclass StripeAuthorizationRelationship(ResourceDetail):\n \"\"\"\n Stripe Authorization Relationship\n \"\"\"\n\n decorators = (api.has_permission('is_coorganizer', fetch=\"event_id\",\n fetch_as=\"event_id\", model=StripeAuthorization),)\n schema = StripeAuthorizationSchema\n data_layer = {'session': db.session,\n 'model': StripeAuthorization}\n", "path": "app/api/stripe_authorization.py"}], "after_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query, get_count, save_to_db\nfrom app.api.helpers.exceptions import ForbiddenException, ConflictException, UnprocessableEntity\nfrom app.api.helpers.payment import StripePaymentsManager\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.stripe_authorization import StripeAuthorizationSchema\nfrom app.models import db\nfrom app.models.event import Event\nfrom app.models.stripe_authorization import StripeAuthorization\n\n\nclass StripeAuthorizationListPost(ResourceList):\n \"\"\"\n List and Create Stripe Authorization\n \"\"\"\n def before_post(self, args, kwargs, data):\n \"\"\"\n before post method to check for required relationship and proper permission\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_organizer', event_id=data['event']):\n raise ForbiddenException({'source': ''}, \"Minimum Organizer access required\")\n if get_count(db.session.query(Event).filter_by(id=int(data['event']), can_pay_by_stripe=False)) > 0:\n raise ForbiddenException({'pointer': ''}, \"Stripe payment is disabled for this Event\")\n\n def before_create_object(self, data, view_kwargs):\n \"\"\"\n method to check if stripe authorization object already exists for an event.\n Raises ConflictException if it already exists.\n If it doesn't, then uses the StripePaymentManager to get the other credentials from Stripe.\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n try:\n self.session.query(StripeAuthorization).filter_by(event_id=data['event'], deleted_at=None).one()\n except NoResultFound:\n credentials = StripePaymentsManager\\\n .get_event_organizer_credentials_from_stripe(data['stripe_auth_code'])\n if 'error' in credentials:\n raise 
UnprocessableEntity({'pointer': '/data/stripe_auth_code'}, credentials['error_description'])\n data['stripe_secret_key'] = credentials['access_token']\n data['stripe_refresh_token'] = credentials['refresh_token']\n data['stripe_publishable_key'] = credentials['stripe_publishable_key']\n data['stripe_user_id'] = credentials['stripe_user_id']\n else:\n raise ConflictException({'pointer': '/data/relationships/event'},\n \"Stripe Authorization already exists for this event\")\n\n def after_create_object(self, stripe_authorization, data, view_kwargs):\n \"\"\"\n after create object method for StripeAuthorizationListPost Class\n :param stripe_authorization: Stripe authorization created from mashmallow_jsonapi\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n event = db.session.query(Event).filter_by(id=int(data['event'])).one()\n event.is_stripe_linked = True\n save_to_db(event)\n\n schema = StripeAuthorizationSchema\n decorators = (jwt_required, )\n methods = ['POST']\n data_layer = {'session': db.session,\n 'model': StripeAuthorization,\n 'methods': {\n 'before_create_object': before_create_object,\n 'after_create_object': after_create_object\n }}\n\n\nclass StripeAuthorizationDetail(ResourceDetail):\n \"\"\"\n Stripe Authorization Detail Resource by ID\n \"\"\"\n def before_get_object(self, view_kwargs):\n \"\"\"\n method to get id of stripe authorization related to an event\n :param view_kwargs:\n :return:\n \"\"\"\n if view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n view_kwargs['event_id'] = event.id\n\n if view_kwargs.get('event_id'):\n stripe_authorization = \\\n safe_query(self, StripeAuthorization, 'event_id', view_kwargs['event_id'], 'event_id')\n view_kwargs['id'] = stripe_authorization.id\n\n def after_delete_object(self, stripe_authorization, view_kwargs):\n \"\"\"Make work after delete object\n :param stripe_authorization: stripe authorization.\n :param dict view_kwargs: kwargs from the resource view\n \"\"\"\n event = stripe_authorization.event\n event.is_stripe_linked = False\n save_to_db(event)\n\n decorators = (api.has_permission('is_coorganizer', fetch=\"event_id\",\n fetch_as=\"event_id\", model=StripeAuthorization),)\n schema = StripeAuthorizationSchema\n data_layer = {'session': db.session,\n 'model': StripeAuthorization,\n 'methods': {\n 'before_get_object': before_get_object,\n 'after_delete_object': after_delete_object\n }}\n\n\nclass StripeAuthorizationRelationship(ResourceDetail):\n \"\"\"\n Stripe Authorization Relationship\n \"\"\"\n\n decorators = (api.has_permission('is_coorganizer', fetch=\"event_id\",\n fetch_as=\"event_id\", model=StripeAuthorization),)\n schema = StripeAuthorizationSchema\n data_layer = {'session': db.session,\n 'model': StripeAuthorization}\n", "path": "app/api/stripe_authorization.py"}]} | 1,939 | 116 |
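
The root cause here is that `filter_by()` returns a `Query`, not a mapped instance, so the attribute write lands on the query object and `session.add()` raises `UnmappedInstanceError`. A minimal reproduction with a stand-in model — this assumes SQLAlchemy 1.4+ and an in-memory SQLite database, not the project's real `Event` model:

```python
from sqlalchemy import Boolean, Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Event(Base):
    __tablename__ = 'events'
    id = Column(Integer, primary_key=True)
    is_stripe_linked = Column(Boolean, default=False)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Event(id=1))
    session.commit()

    buggy = session.query(Event).filter_by(id=1)  # a Query, not an Event
    buggy.is_stripe_linked = True                 # silently sets an attribute on the Query
    # session.add(buggy) would raise UnmappedInstanceError here, as in the traceback.

    fixed = session.query(Event).filter_by(id=1).one()  # .one() returns the instance
    fixed.is_stripe_linked = True
    session.commit()
    print(session.get(Event, 1).is_stripe_linked)  # True
```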
gh_patches_debug_32168 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-4544 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
azure - event hub resources
Add event hub resource & implement firewall filter
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/c7n_azure/c7n_azure/resources/event_hub.py`
Content:
```
1 # Copyright 2019 Microsoft Corporation
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from c7n_azure.provider import resources
16 from c7n_azure.resources.arm import ArmResourceManager
17
18
19 @resources.register('eventhub')
20 class EventHub(ArmResourceManager):
21 """Event Hub Resource
22
23 :example:
24
25 Finds all Event Hub resources in the subscription.
26
27 .. code-block:: yaml
28
29 policies:
30 - name: find-all-eventhubs
31 resource: azure.eventhub
32
33 """
34
35 class resource_type(ArmResourceManager.resource_type):
36 doc_groups = ['Events']
37
38 service = 'azure.mgmt.eventhub'
39 client = 'EventHubManagementClient'
40 enum_spec = ('namespaces', 'list', None)
41 default_report_fields = (
42 'name',
43 'location',
44 'resourceGroup',
45 'sku.name',
46 'properties.isAutoInflateEnabled'
47 )
48 resource_type = 'Microsoft.EventHub/namespaces'
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/c7n_azure/c7n_azure/resources/event_hub.py b/tools/c7n_azure/c7n_azure/resources/event_hub.py
--- a/tools/c7n_azure/c7n_azure/resources/event_hub.py
+++ b/tools/c7n_azure/c7n_azure/resources/event_hub.py
@@ -12,8 +12,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import logging
+
+from c7n_azure.filters import FirewallRulesFilter
from c7n_azure.provider import resources
from c7n_azure.resources.arm import ArmResourceManager
+from netaddr import IPSet
@resources.register('eventhub')
@@ -22,13 +26,17 @@
:example:
- Finds all Event Hub resources in the subscription.
+ This policy will find all Event Hubs allowing traffic from 1.2.2.128/25 CIDR.
.. code-block:: yaml
policies:
- - name: find-all-eventhubs
- resource: azure.eventhub
+ - name: find-event-hub-allowing-subnet
+ resource: azure.eventhub
+ filters:
+ - type: firewall-rules
+ include:
+ - '1.2.2.128/25'
"""
@@ -46,3 +54,29 @@
'properties.isAutoInflateEnabled'
)
resource_type = 'Microsoft.EventHub/namespaces'
+
+
[email protected]_registry.register('firewall-rules')
+class EventHubFirewallRulesFilter(FirewallRulesFilter):
+
+ def __init__(self, data, manager=None):
+ super(EventHubFirewallRulesFilter, self).__init__(data, manager)
+ self._log = logging.getLogger('custodian.azure.eventhub')
+ self.client = None
+
+ @property
+ def log(self):
+ return self._log
+
+ def process(self, resources, event=None):
+ self.client = self.manager.get_client()
+ return super(EventHubFirewallRulesFilter, self).process(resources, event)
+
+ def _query_rules(self, resource):
+ query = self.client.namespaces.get_network_rule_set(
+ resource['resourceGroup'],
+ resource['name'])
+
+ resource_rules = IPSet([r.ip_mask for r in query.ip_rules])
+
+ return resource_rules
| {"golden_diff": "diff --git a/tools/c7n_azure/c7n_azure/resources/event_hub.py b/tools/c7n_azure/c7n_azure/resources/event_hub.py\n--- a/tools/c7n_azure/c7n_azure/resources/event_hub.py\n+++ b/tools/c7n_azure/c7n_azure/resources/event_hub.py\n@@ -12,8 +12,12 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+import logging\n+\n+from c7n_azure.filters import FirewallRulesFilter\n from c7n_azure.provider import resources\n from c7n_azure.resources.arm import ArmResourceManager\n+from netaddr import IPSet\n \n \n @resources.register('eventhub')\n@@ -22,13 +26,17 @@\n \n :example:\n \n- Finds all Event Hub resources in the subscription.\n+ This policy will find all Event Hubs allowing traffic from 1.2.2.128/25 CIDR.\n \n .. code-block:: yaml\n \n policies:\n- - name: find-all-eventhubs\n- resource: azure.eventhub\n+ - name: find-event-hub-allowing-subnet\n+ resource: azure.eventhub\n+ filters:\n+ - type: firewall-rules\n+ include:\n+ - '1.2.2.128/25'\n \n \"\"\"\n \n@@ -46,3 +54,29 @@\n 'properties.isAutoInflateEnabled'\n )\n resource_type = 'Microsoft.EventHub/namespaces'\n+\n+\[email protected]_registry.register('firewall-rules')\n+class EventHubFirewallRulesFilter(FirewallRulesFilter):\n+\n+ def __init__(self, data, manager=None):\n+ super(EventHubFirewallRulesFilter, self).__init__(data, manager)\n+ self._log = logging.getLogger('custodian.azure.eventhub')\n+ self.client = None\n+\n+ @property\n+ def log(self):\n+ return self._log\n+\n+ def process(self, resources, event=None):\n+ self.client = self.manager.get_client()\n+ return super(EventHubFirewallRulesFilter, self).process(resources, event)\n+\n+ def _query_rules(self, resource):\n+ query = self.client.namespaces.get_network_rule_set(\n+ resource['resourceGroup'],\n+ resource['name'])\n+\n+ resource_rules = IPSet([r.ip_mask for r in query.ip_rules])\n+\n+ return resource_rules\n", "issue": "azure - event hub resources\nAdd event hub resource & implement firewall filter\n", "before_files": [{"content": "# Copyright 2019 Microsoft Corporation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom c7n_azure.provider import resources\nfrom c7n_azure.resources.arm import ArmResourceManager\n\n\[email protected]('eventhub')\nclass EventHub(ArmResourceManager):\n \"\"\"Event Hub Resource\n\n :example:\n\n Finds all Event Hub resources in the subscription.\n\n .. 
code-block:: yaml\n\n policies:\n - name: find-all-eventhubs\n resource: azure.eventhub\n\n \"\"\"\n\n class resource_type(ArmResourceManager.resource_type):\n doc_groups = ['Events']\n\n service = 'azure.mgmt.eventhub'\n client = 'EventHubManagementClient'\n enum_spec = ('namespaces', 'list', None)\n default_report_fields = (\n 'name',\n 'location',\n 'resourceGroup',\n 'sku.name',\n 'properties.isAutoInflateEnabled'\n )\n resource_type = 'Microsoft.EventHub/namespaces'\n", "path": "tools/c7n_azure/c7n_azure/resources/event_hub.py"}], "after_files": [{"content": "# Copyright 2019 Microsoft Corporation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\n\nfrom c7n_azure.filters import FirewallRulesFilter\nfrom c7n_azure.provider import resources\nfrom c7n_azure.resources.arm import ArmResourceManager\nfrom netaddr import IPSet\n\n\[email protected]('eventhub')\nclass EventHub(ArmResourceManager):\n \"\"\"Event Hub Resource\n\n :example:\n\n This policy will find all Event Hubs allowing traffic from 1.2.2.128/25 CIDR.\n\n .. code-block:: yaml\n\n policies:\n - name: find-event-hub-allowing-subnet\n resource: azure.eventhub\n filters:\n - type: firewall-rules\n include:\n - '1.2.2.128/25'\n\n \"\"\"\n\n class resource_type(ArmResourceManager.resource_type):\n doc_groups = ['Events']\n\n service = 'azure.mgmt.eventhub'\n client = 'EventHubManagementClient'\n enum_spec = ('namespaces', 'list', None)\n default_report_fields = (\n 'name',\n 'location',\n 'resourceGroup',\n 'sku.name',\n 'properties.isAutoInflateEnabled'\n )\n resource_type = 'Microsoft.EventHub/namespaces'\n\n\[email protected]_registry.register('firewall-rules')\nclass EventHubFirewallRulesFilter(FirewallRulesFilter):\n\n def __init__(self, data, manager=None):\n super(EventHubFirewallRulesFilter, self).__init__(data, manager)\n self._log = logging.getLogger('custodian.azure.eventhub')\n self.client = None\n\n @property\n def log(self):\n return self._log\n\n def process(self, resources, event=None):\n self.client = self.manager.get_client()\n return super(EventHubFirewallRulesFilter, self).process(resources, event)\n\n def _query_rules(self, resource):\n query = self.client.namespaces.get_network_rule_set(\n resource['resourceGroup'],\n resource['name'])\n\n resource_rules = IPSet([r.ip_mask for r in query.ip_rules])\n\n return resource_rules\n", "path": "tools/c7n_azure/c7n_azure/resources/event_hub.py"}]} | 693 | 552 |
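
The `firewall-rules` filter added above reduces the namespace's IP rules to a `netaddr.IPSet` and compares it against the CIDRs named in the policy. A standalone sketch of that set logic — the rule values below are made up for illustration:

```python
from netaddr import IPSet

# As if collected from client.namespaces.get_network_rule_set(...).ip_rules
resource_rules = IPSet(['1.2.2.0/24', '10.0.0.5/32'])

# The `include` list from the policy YAML
policy_include = IPSet(['1.2.2.128/25'])

# Match: every address the policy names is allowed by the resource rules.
print(policy_include.issubset(resource_rules))  # True -- 1.2.2.128/25 sits inside 1.2.2.0/24

# No match: the sets do not even overlap.
print(bool(IPSet(['192.168.0.0/16']) & resource_rules))  # False
```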
gh_patches_debug_20098 | rasdani/github-patches | git_diff | akvo__akvo-rsr-3495 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Project list widget fails due to a date vs. datetime formatting error
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rsr/templatetags/rsr_filters.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 Akvo RSR is covered by the GNU Affero General Public License.
4
5 See more details in the license.txt file located at the root folder of the Akvo RSR module.
6 For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
7 """
8
9 import datetime
10 import time
11
12 from django import template
13 from django.conf import settings
14 from decimal import Decimal, ROUND_HALF_UP
15
16 register = template.Library()
17
18 DECIMAL_PLACES = getattr(settings, 'DECIMALS_DECIMAL_PLACES', 2)
19
20
21 @register.filter
22 def get_item(dictionary, key):
23 """Enable lookup in dicts."""
24 return dictionary.get(key)
25
26
27 @register.filter
28 def string_to_date(value):
29 try:
30 time_format = "%Y-%m-%d %H:%M:%S"
31 fmt_time = time.strptime(value, time_format)
32 return datetime.datetime(*fmt_time[:6])
33 except:
34 return value
35
36 # http://stackoverflow.com/questions/250357/smart-truncate-in-python
37
38
39 @register.filter("smart_truncate")
40 def smart_truncate(content, length=100, suffix='...'):
41 if len(content) <= length:
42 return content
43 else:
44 return content[:length].rsplit(' ', 1)[0] + suffix
45
46
47 @register.filter
48 def round(value, decimal_places=DECIMAL_PLACES):
49 try:
50 value = Decimal(str(value))
51 except:
52 return u''
53 if settings.DECIMALS_DEBUG:
54 decimal_result = value.quantize(Decimal(10) ** -decimal_places)
55 return decimal_result
56 else:
57 decimal_result = value.quantize(Decimal(10), ROUND_HALF_UP)
58 return 0 if decimal_result <= 0 else decimal_result
59 round.is_safe = True
60
61
62 @register.filter
63 def countries_list(obj):
64 """ return a list of the countries of all locations of an object.
65 currently works for Project and Organisation """
66 return obj.locations.values_list('country__name', flat=True)
67
68
69 @register.filter
70 def continents_list(obj):
71 """return a list of the continents of all locations of an object"
72 currently works for Project and Organisation """
73 return obj.locations.values_list('country__continent', flat=True)
74
75
76 @register.filter
77 def rsr_sorted_set(iterable):
78 """ create a set of the iterable to eliminate duplicates
79 then make a list of the set and sort it
80 used with countries_list and continents_list
81 """
82 set_list = list(frozenset(iterable))
83 set_list.sort()
84 return set_list
85
86
87 @register.filter
88 def load_partnerships_and_orgs(project):
89 return project.partnerships.prefetch_related('organisation').all()
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/akvo/rsr/templatetags/rsr_filters.py b/akvo/rsr/templatetags/rsr_filters.py
--- a/akvo/rsr/templatetags/rsr_filters.py
+++ b/akvo/rsr/templatetags/rsr_filters.py
@@ -6,9 +6,6 @@
For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
"""
-import datetime
-import time
-
from django import template
from django.conf import settings
from decimal import Decimal, ROUND_HALF_UP
@@ -24,18 +21,7 @@
return dictionary.get(key)
[email protected]
-def string_to_date(value):
- try:
- time_format = "%Y-%m-%d %H:%M:%S"
- fmt_time = time.strptime(value, time_format)
- return datetime.datetime(*fmt_time[:6])
- except:
- return value
-
# http://stackoverflow.com/questions/250357/smart-truncate-in-python
-
-
@register.filter("smart_truncate")
def smart_truncate(content, length=100, suffix='...'):
if len(content) <= length:
| {"golden_diff": "diff --git a/akvo/rsr/templatetags/rsr_filters.py b/akvo/rsr/templatetags/rsr_filters.py\n--- a/akvo/rsr/templatetags/rsr_filters.py\n+++ b/akvo/rsr/templatetags/rsr_filters.py\n@@ -6,9 +6,6 @@\n For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n \"\"\"\n \n-import datetime\n-import time\n-\n from django import template\n from django.conf import settings\n from decimal import Decimal, ROUND_HALF_UP\n@@ -24,18 +21,7 @@\n return dictionary.get(key)\n \n \[email protected]\n-def string_to_date(value):\n- try:\n- time_format = \"%Y-%m-%d %H:%M:%S\"\n- fmt_time = time.strptime(value, time_format)\n- return datetime.datetime(*fmt_time[:6])\n- except:\n- return value\n-\n # http://stackoverflow.com/questions/250357/smart-truncate-in-python\n-\n-\n @register.filter(\"smart_truncate\")\n def smart_truncate(content, length=100, suffix='...'):\n if len(content) <= length:\n", "issue": "Project list widget fails due to a date vs. datetime formatting error\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nAkvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nimport datetime\nimport time\n\nfrom django import template\nfrom django.conf import settings\nfrom decimal import Decimal, ROUND_HALF_UP\n\nregister = template.Library()\n\nDECIMAL_PLACES = getattr(settings, 'DECIMALS_DECIMAL_PLACES', 2)\n\n\[email protected]\ndef get_item(dictionary, key):\n \"\"\"Enable lookup in dicts.\"\"\"\n return dictionary.get(key)\n\n\[email protected]\ndef string_to_date(value):\n try:\n time_format = \"%Y-%m-%d %H:%M:%S\"\n fmt_time = time.strptime(value, time_format)\n return datetime.datetime(*fmt_time[:6])\n except:\n return value\n\n# http://stackoverflow.com/questions/250357/smart-truncate-in-python\n\n\[email protected](\"smart_truncate\")\ndef smart_truncate(content, length=100, suffix='...'):\n if len(content) <= length:\n return content\n else:\n return content[:length].rsplit(' ', 1)[0] + suffix\n\n\[email protected]\ndef round(value, decimal_places=DECIMAL_PLACES):\n try:\n value = Decimal(str(value))\n except:\n return u''\n if settings.DECIMALS_DEBUG:\n decimal_result = value.quantize(Decimal(10) ** -decimal_places)\n return decimal_result\n else:\n decimal_result = value.quantize(Decimal(10), ROUND_HALF_UP)\n return 0 if decimal_result <= 0 else decimal_result\nround.is_safe = True\n\n\[email protected]\ndef countries_list(obj):\n \"\"\" return a list of the countries of all locations of an object.\n currently works for Project and Organisation \"\"\"\n return obj.locations.values_list('country__name', flat=True)\n\n\[email protected]\ndef continents_list(obj):\n \"\"\"return a list of the continents of all locations of an object\"\n currently works for Project and Organisation \"\"\"\n return obj.locations.values_list('country__continent', flat=True)\n\n\[email protected]\ndef rsr_sorted_set(iterable):\n \"\"\" create a set of the iterable to eliminate duplicates\n then make a list of the set and sort it\n used with countries_list and continents_list\n \"\"\"\n set_list = list(frozenset(iterable))\n set_list.sort()\n return set_list\n\n\[email protected]\ndef load_partnerships_and_orgs(project):\n return project.partnerships.prefetch_related('organisation').all()\n", "path": 
"akvo/rsr/templatetags/rsr_filters.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nAkvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom django import template\nfrom django.conf import settings\nfrom decimal import Decimal, ROUND_HALF_UP\n\nregister = template.Library()\n\nDECIMAL_PLACES = getattr(settings, 'DECIMALS_DECIMAL_PLACES', 2)\n\n\[email protected]\ndef get_item(dictionary, key):\n \"\"\"Enable lookup in dicts.\"\"\"\n return dictionary.get(key)\n\n\n# http://stackoverflow.com/questions/250357/smart-truncate-in-python\[email protected](\"smart_truncate\")\ndef smart_truncate(content, length=100, suffix='...'):\n if len(content) <= length:\n return content\n else:\n return content[:length].rsplit(' ', 1)[0] + suffix\n\n\[email protected]\ndef round(value, decimal_places=DECIMAL_PLACES):\n try:\n value = Decimal(str(value))\n except:\n return u''\n if settings.DECIMALS_DEBUG:\n decimal_result = value.quantize(Decimal(10) ** -decimal_places)\n return decimal_result\n else:\n decimal_result = value.quantize(Decimal(10), ROUND_HALF_UP)\n return 0 if decimal_result <= 0 else decimal_result\nround.is_safe = True\n\n\[email protected]\ndef countries_list(obj):\n \"\"\" return a list of the countries of all locations of an object.\n currently works for Project and Organisation \"\"\"\n return obj.locations.values_list('country__name', flat=True)\n\n\[email protected]\ndef continents_list(obj):\n \"\"\"return a list of the continents of all locations of an object\"\n currently works for Project and Organisation \"\"\"\n return obj.locations.values_list('country__continent', flat=True)\n\n\[email protected]\ndef rsr_sorted_set(iterable):\n \"\"\" create a set of the iterable to eliminate duplicates\n then make a list of the set and sort it\n used with countries_list and continents_list\n \"\"\"\n set_list = list(frozenset(iterable))\n set_list.sort()\n return set_list\n\n\[email protected]\ndef load_partnerships_and_orgs(project):\n return project.partnerships.prefetch_related('organisation').all()\n", "path": "akvo/rsr/templatetags/rsr_filters.py"}]} | 1,044 | 268 |
gh_patches_debug_3430 | rasdani/github-patches | git_diff | open-mmlab__mmdetection-1781 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot reproduce the results of SSD-300 on WIDER_FACE
Hi @sovrasov,
Recently, I have been trying to reproduce the results of SSD-300 on WIDER_FACE.
Using the provided mmdetection config, I get Recall 36.9, Precision 0.023 and AP 27.3.
These results are worse than the reported Recall 43.4, Precision 0.029 and AP 34.7 (shown in https://github.com/open-mmlab/mmdetection/pull/765#issuecomment-502579220).
Generally, the models provided by mmdetection are trained on 8 GPUs. Was the SSD-300 on WIDER_FACE also trained on 8 GPUs? If so, I may need to change the learning rate, because I trained the model on 4 GPUs.
Moreover, is there any other advice?
Thanks a lot.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `configs/wider_face/ssd300_wider_face.py`
Content:
```
1 # model settings
2 input_size = 300
3 model = dict(
4 type='SingleStageDetector',
5 pretrained='open-mmlab://vgg16_caffe',
6 backbone=dict(
7 type='SSDVGG',
8 input_size=input_size,
9 depth=16,
10 with_last_pool=False,
11 ceil_mode=True,
12 out_indices=(3, 4),
13 out_feature_indices=(22, 34),
14 l2_norm_scale=20),
15 neck=None,
16 bbox_head=dict(
17 type='SSDHead',
18 input_size=input_size,
19 in_channels=(512, 1024, 512, 256, 256, 256),
20 num_classes=2,
21 anchor_strides=(8, 16, 32, 64, 100, 300),
22 basesize_ratio_range=(0.15, 0.9),
23 anchor_ratios=([2], [2, 3], [2, 3], [2, 3], [2], [2]),
24 target_means=(.0, .0, .0, .0),
25 target_stds=(0.1, 0.1, 0.2, 0.2)))
26 # model training and testing settings
27 cudnn_benchmark = True
28 train_cfg = dict(
29 assigner=dict(
30 type='MaxIoUAssigner',
31 pos_iou_thr=0.5,
32 neg_iou_thr=0.5,
33 min_pos_iou=0.,
34 ignore_iof_thr=-1,
35 gt_max_assign_all=False),
36 smoothl1_beta=1.,
37 allowed_border=-1,
38 pos_weight=-1,
39 neg_pos_ratio=3,
40 debug=False)
41 test_cfg = dict(
42 nms=dict(type='nms', iou_thr=0.45),
43 min_bbox_size=0,
44 score_thr=0.02,
45 max_per_img=200)
46 # dataset settings
47 dataset_type = 'WIDERFaceDataset'
48 data_root = 'data/WIDERFace/'
49 img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True)
50 train_pipeline = [
51 dict(type='LoadImageFromFile', to_float32=True),
52 dict(type='LoadAnnotations', with_bbox=True),
53 dict(
54 type='PhotoMetricDistortion',
55 brightness_delta=32,
56 contrast_range=(0.5, 1.5),
57 saturation_range=(0.5, 1.5),
58 hue_delta=18),
59 dict(
60 type='Expand',
61 mean=img_norm_cfg['mean'],
62 to_rgb=img_norm_cfg['to_rgb'],
63 ratio_range=(1, 4)),
64 dict(
65 type='MinIoURandomCrop',
66 min_ious=(0.1, 0.3, 0.5, 0.7, 0.9),
67 min_crop_size=0.3),
68 dict(type='Resize', img_scale=(300, 300), keep_ratio=False),
69 dict(type='Normalize', **img_norm_cfg),
70 dict(type='RandomFlip', flip_ratio=0.5),
71 dict(type='DefaultFormatBundle'),
72 dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
73 ]
74 test_pipeline = [
75 dict(type='LoadImageFromFile'),
76 dict(
77 type='MultiScaleFlipAug',
78 img_scale=(300, 300),
79 flip=False,
80 transforms=[
81 dict(type='Resize', keep_ratio=False),
82 dict(type='Normalize', **img_norm_cfg),
83 dict(type='ImageToTensor', keys=['img']),
84 dict(type='Collect', keys=['img']),
85 ])
86 ]
87 data = dict(
88 imgs_per_gpu=60,
89 workers_per_gpu=2,
90 train=dict(
91 type='RepeatDataset',
92 times=2,
93 dataset=dict(
94 type=dataset_type,
95 ann_file=data_root + 'train.txt',
96 img_prefix=data_root + 'WIDER_train/',
97 min_size=17,
98 pipeline=train_pipeline)),
99 val=dict(
100 type=dataset_type,
101 ann_file=data_root + 'val.txt',
102 img_prefix=data_root + 'WIDER_val/',
103 pipeline=test_pipeline),
104 test=dict(
105 type=dataset_type,
106 ann_file=data_root + 'val.txt',
107 img_prefix=data_root + 'WIDER_val/',
108 pipeline=test_pipeline))
109 # optimizer
110 optimizer = dict(type='SGD', lr=1e-3, momentum=0.9, weight_decay=5e-4)
111 optimizer_config = dict()
112 # learning policy
113 lr_config = dict(
114 policy='step',
115 warmup='linear',
116 warmup_iters=1000,
117 warmup_ratio=1.0 / 3,
118 step=[16, 20])
119 checkpoint_config = dict(interval=1)
120 # yapf:disable
121 log_config = dict(
122 interval=1,
123 hooks=[
124 dict(type='TextLoggerHook'),
125 # dict(type='TensorboardLoggerHook')
126 ])
127 # yapf:enable
128 # runtime settings
129 total_epochs = 24
130 dist_params = dict(backend='nccl')
131 log_level = 'INFO'
132 work_dir = './work_dirs/ssd300_wider'
133 load_from = None
134 resume_from = None
135 workflow = [('train', 1)]
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/configs/wider_face/ssd300_wider_face.py b/configs/wider_face/ssd300_wider_face.py
--- a/configs/wider_face/ssd300_wider_face.py
+++ b/configs/wider_face/ssd300_wider_face.py
@@ -107,7 +107,7 @@
img_prefix=data_root + 'WIDER_val/',
pipeline=test_pipeline))
# optimizer
-optimizer = dict(type='SGD', lr=1e-3, momentum=0.9, weight_decay=5e-4)
+optimizer = dict(type='SGD', lr=0.012, momentum=0.9, weight_decay=5e-4)
optimizer_config = dict()
# learning policy
lr_config = dict(
| {"golden_diff": "diff --git a/configs/wider_face/ssd300_wider_face.py b/configs/wider_face/ssd300_wider_face.py\n--- a/configs/wider_face/ssd300_wider_face.py\n+++ b/configs/wider_face/ssd300_wider_face.py\n@@ -107,7 +107,7 @@\n img_prefix=data_root + 'WIDER_val/',\n pipeline=test_pipeline))\n # optimizer\n-optimizer = dict(type='SGD', lr=1e-3, momentum=0.9, weight_decay=5e-4)\n+optimizer = dict(type='SGD', lr=0.012, momentum=0.9, weight_decay=5e-4)\n optimizer_config = dict()\n # learning policy\n lr_config = dict(\n", "issue": "Cannot reproduce the results of SSD-300 on WIDER_FACE\nHi @sovrasov,\r\nRecently, I'm reproducing the results of SSD-300 on WIDER_FACE.\r\nI use the provided config based on mmdetection and get Recall 36.9, Precision 0.023 and AP 27.3.\r\nThe results are worse than Recall 43.4, Precision 0.029 and AP 34.7 (shown in https://github.com/open-mmlab/mmdetection/pull/765#issuecomment-502579220).\r\n\r\nGenerally, the models provided by mmdetection are trained on 8 gpus. So, is the SSD-300 on WIDER_FACE also trained on 8 gpus? If so, maybe I will change the learning rate because I trained the model on 4 gpus.\r\n\r\nMoreover, any other advice?\r\n\r\nThanks a lot.\r\n\n", "before_files": [{"content": "# model settings\ninput_size = 300\nmodel = dict(\n type='SingleStageDetector',\n pretrained='open-mmlab://vgg16_caffe',\n backbone=dict(\n type='SSDVGG',\n input_size=input_size,\n depth=16,\n with_last_pool=False,\n ceil_mode=True,\n out_indices=(3, 4),\n out_feature_indices=(22, 34),\n l2_norm_scale=20),\n neck=None,\n bbox_head=dict(\n type='SSDHead',\n input_size=input_size,\n in_channels=(512, 1024, 512, 256, 256, 256),\n num_classes=2,\n anchor_strides=(8, 16, 32, 64, 100, 300),\n basesize_ratio_range=(0.15, 0.9),\n anchor_ratios=([2], [2, 3], [2, 3], [2, 3], [2], [2]),\n target_means=(.0, .0, .0, .0),\n target_stds=(0.1, 0.1, 0.2, 0.2)))\n# model training and testing settings\ncudnn_benchmark = True\ntrain_cfg = dict(\n assigner=dict(\n type='MaxIoUAssigner',\n pos_iou_thr=0.5,\n neg_iou_thr=0.5,\n min_pos_iou=0.,\n ignore_iof_thr=-1,\n gt_max_assign_all=False),\n smoothl1_beta=1.,\n allowed_border=-1,\n pos_weight=-1,\n neg_pos_ratio=3,\n debug=False)\ntest_cfg = dict(\n nms=dict(type='nms', iou_thr=0.45),\n min_bbox_size=0,\n score_thr=0.02,\n max_per_img=200)\n# dataset settings\ndataset_type = 'WIDERFaceDataset'\ndata_root = 'data/WIDERFace/'\nimg_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True)\ntrain_pipeline = [\n dict(type='LoadImageFromFile', to_float32=True),\n dict(type='LoadAnnotations', with_bbox=True),\n dict(\n type='PhotoMetricDistortion',\n brightness_delta=32,\n contrast_range=(0.5, 1.5),\n saturation_range=(0.5, 1.5),\n hue_delta=18),\n dict(\n type='Expand',\n mean=img_norm_cfg['mean'],\n to_rgb=img_norm_cfg['to_rgb'],\n ratio_range=(1, 4)),\n dict(\n type='MinIoURandomCrop',\n min_ious=(0.1, 0.3, 0.5, 0.7, 0.9),\n min_crop_size=0.3),\n dict(type='Resize', img_scale=(300, 300), keep_ratio=False),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(300, 300),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=False),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img']),\n ])\n]\ndata = dict(\n 
imgs_per_gpu=60,\n workers_per_gpu=2,\n train=dict(\n type='RepeatDataset',\n times=2,\n dataset=dict(\n type=dataset_type,\n ann_file=data_root + 'train.txt',\n img_prefix=data_root + 'WIDER_train/',\n min_size=17,\n pipeline=train_pipeline)),\n val=dict(\n type=dataset_type,\n ann_file=data_root + 'val.txt',\n img_prefix=data_root + 'WIDER_val/',\n pipeline=test_pipeline),\n test=dict(\n type=dataset_type,\n ann_file=data_root + 'val.txt',\n img_prefix=data_root + 'WIDER_val/',\n pipeline=test_pipeline))\n# optimizer\noptimizer = dict(type='SGD', lr=1e-3, momentum=0.9, weight_decay=5e-4)\noptimizer_config = dict()\n# learning policy\nlr_config = dict(\n policy='step',\n warmup='linear',\n warmup_iters=1000,\n warmup_ratio=1.0 / 3,\n step=[16, 20])\ncheckpoint_config = dict(interval=1)\n# yapf:disable\nlog_config = dict(\n interval=1,\n hooks=[\n dict(type='TextLoggerHook'),\n # dict(type='TensorboardLoggerHook')\n ])\n# yapf:enable\n# runtime settings\ntotal_epochs = 24\ndist_params = dict(backend='nccl')\nlog_level = 'INFO'\nwork_dir = './work_dirs/ssd300_wider'\nload_from = None\nresume_from = None\nworkflow = [('train', 1)]\n", "path": "configs/wider_face/ssd300_wider_face.py"}], "after_files": [{"content": "# model settings\ninput_size = 300\nmodel = dict(\n type='SingleStageDetector',\n pretrained='open-mmlab://vgg16_caffe',\n backbone=dict(\n type='SSDVGG',\n input_size=input_size,\n depth=16,\n with_last_pool=False,\n ceil_mode=True,\n out_indices=(3, 4),\n out_feature_indices=(22, 34),\n l2_norm_scale=20),\n neck=None,\n bbox_head=dict(\n type='SSDHead',\n input_size=input_size,\n in_channels=(512, 1024, 512, 256, 256, 256),\n num_classes=2,\n anchor_strides=(8, 16, 32, 64, 100, 300),\n basesize_ratio_range=(0.15, 0.9),\n anchor_ratios=([2], [2, 3], [2, 3], [2, 3], [2], [2]),\n target_means=(.0, .0, .0, .0),\n target_stds=(0.1, 0.1, 0.2, 0.2)))\n# model training and testing settings\ncudnn_benchmark = True\ntrain_cfg = dict(\n assigner=dict(\n type='MaxIoUAssigner',\n pos_iou_thr=0.5,\n neg_iou_thr=0.5,\n min_pos_iou=0.,\n ignore_iof_thr=-1,\n gt_max_assign_all=False),\n smoothl1_beta=1.,\n allowed_border=-1,\n pos_weight=-1,\n neg_pos_ratio=3,\n debug=False)\ntest_cfg = dict(\n nms=dict(type='nms', iou_thr=0.45),\n min_bbox_size=0,\n score_thr=0.02,\n max_per_img=200)\n# dataset settings\ndataset_type = 'WIDERFaceDataset'\ndata_root = 'data/WIDERFace/'\nimg_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True)\ntrain_pipeline = [\n dict(type='LoadImageFromFile', to_float32=True),\n dict(type='LoadAnnotations', with_bbox=True),\n dict(\n type='PhotoMetricDistortion',\n brightness_delta=32,\n contrast_range=(0.5, 1.5),\n saturation_range=(0.5, 1.5),\n hue_delta=18),\n dict(\n type='Expand',\n mean=img_norm_cfg['mean'],\n to_rgb=img_norm_cfg['to_rgb'],\n ratio_range=(1, 4)),\n dict(\n type='MinIoURandomCrop',\n min_ious=(0.1, 0.3, 0.5, 0.7, 0.9),\n min_crop_size=0.3),\n dict(type='Resize', img_scale=(300, 300), keep_ratio=False),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(300, 300),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=False),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img']),\n ])\n]\ndata = dict(\n imgs_per_gpu=60,\n 
workers_per_gpu=2,\n train=dict(\n type='RepeatDataset',\n times=2,\n dataset=dict(\n type=dataset_type,\n ann_file=data_root + 'train.txt',\n img_prefix=data_root + 'WIDER_train/',\n min_size=17,\n pipeline=train_pipeline)),\n val=dict(\n type=dataset_type,\n ann_file=data_root + 'val.txt',\n img_prefix=data_root + 'WIDER_val/',\n pipeline=test_pipeline),\n test=dict(\n type=dataset_type,\n ann_file=data_root + 'val.txt',\n img_prefix=data_root + 'WIDER_val/',\n pipeline=test_pipeline))\n# optimizer\noptimizer = dict(type='SGD', lr=0.012, momentum=0.9, weight_decay=5e-4)\noptimizer_config = dict()\n# learning policy\nlr_config = dict(\n policy='step',\n warmup='linear',\n warmup_iters=1000,\n warmup_ratio=1.0 / 3,\n step=[16, 20])\ncheckpoint_config = dict(interval=1)\n# yapf:disable\nlog_config = dict(\n interval=1,\n hooks=[\n dict(type='TextLoggerHook'),\n # dict(type='TensorboardLoggerHook')\n ])\n# yapf:enable\n# runtime settings\ntotal_epochs = 24\ndist_params = dict(backend='nccl')\nlog_level = 'INFO'\nwork_dir = './work_dirs/ssd300_wider'\nload_from = None\nresume_from = None\nworkflow = [('train', 1)]\n", "path": "configs/wider_face/ssd300_wider_face.py"}]} | 1,955 | 176 |
gh_patches_debug_60370 | rasdani/github-patches | git_diff | Lightning-Universe__lightning-flash-597 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Confusing KeyError message for flash registry
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
### To Reproduce
Steps to reproduce the behavior:
```
from flash.image import ImageClassificationData, ImageClassifier
print(ImageClassifier.backbones.get('abcd'))
```
#### Code sample
<!-- Ideally attach a minimal code sample to reproduce the described issue.
Minimal means having the shortest code but still preserving the bug. -->
### Expected behavior
It should throw a `KeyError` with a readable message.
### Environment
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
### Additional context
Sending in PR.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flash/core/registry.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from functools import partial
15 from types import FunctionType
16 from typing import Any, Callable, Dict, List, Optional, Union
17
18 from pytorch_lightning.utilities import rank_zero_info
19 from pytorch_lightning.utilities.exceptions import MisconfigurationException
20
21 _REGISTERED_FUNCTION = Dict[str, Any]
22
23
24 class FlashRegistry:
25 """This class is used to register function or :class:`functools.partial` class to a registry."""
26
27 def __init__(self, name: str, verbose: bool = False) -> None:
28 self.name = name
29 self.functions: List[_REGISTERED_FUNCTION] = []
30 self._verbose = verbose
31
32 def __len__(self) -> int:
33 return len(self.functions)
34
35 def __contains__(self, key) -> bool:
36 return any(key == e["name"] for e in self.functions)
37
38 def __repr__(self) -> str:
39 return f'{self.__class__.__name__}(name={self.name}, functions={self.functions})'
40
41 def get(
42 self,
43 key: str,
44 with_metadata: bool = False,
45 strict: bool = True,
46 **metadata,
47 ) -> Union[Callable, _REGISTERED_FUNCTION, List[_REGISTERED_FUNCTION], List[Callable]]:
48 """
49 This function is used to gather matches from the registry:
50
51 Args:
52 key: Name of the registered function.
53 with_metadata: Whether to include the associated metadata in the return value.
54 strict: Whether to return all matches or just one.
55 metadata: Metadata used to filter against existing registry item's metadata.
56 """
57 matches = [e for e in self.functions if key == e["name"]]
58 if not matches:
59 raise KeyError(f"Key: {key} is not in {repr(self)}")
60
61 if metadata:
62 matches = [m for m in matches if metadata.items() <= m["metadata"].items()]
63 if not matches:
64 raise KeyError("Found no matches that fit your metadata criteria. Try removing some metadata")
65
66 matches = [e if with_metadata else e["fn"] for e in matches]
67 return matches[0] if strict else matches
68
69 def remove(self, key: str) -> None:
70 self.functions = [f for f in self.functions if f["name"] != key]
71
72 def _register_function(
73 self,
74 fn: Callable,
75 name: Optional[str] = None,
76 override: bool = False,
77 metadata: Optional[Dict[str, Any]] = None
78 ):
79 if not isinstance(fn, FunctionType) and not isinstance(fn, partial):
80 raise MisconfigurationException(f"You can only register a function, found: {fn}")
81
82 name = name or fn.__name__
83
84 if self._verbose:
85 rank_zero_info(f"Registering: {fn.__name__} function with name: {name} and metadata: {metadata}")
86
87 item = {"fn": fn, "name": name, "metadata": metadata or {}}
88
89 matching_index = self._find_matching_index(item)
90 if override and matching_index is not None:
91 self.functions[matching_index] = item
92 else:
93 if matching_index is not None:
94 raise MisconfigurationException(
95 f"Function with name: {name} and metadata: {metadata} is already present within {self}."
96 " HINT: Use `override=True`."
97 )
98 self.functions.append(item)
99
100 def _find_matching_index(self, item: _REGISTERED_FUNCTION) -> Optional[int]:
101 for idx, fn in enumerate(self.functions):
102 if all(fn[k] == item[k] for k in ("fn", "name", "metadata")):
103 return idx
104
105 def __call__(
106 self,
107 fn: Optional[Callable[..., Any]] = None,
108 name: Optional[str] = None,
109 override: bool = False,
110 **metadata
111 ) -> Callable:
112 """
113 This function is used to register new functions to the registry along their metadata.
114
115 Functions can be filtered using metadata using the ``get`` function.
116
117 """
118 if fn is not None:
119 self._register_function(fn=fn, name=name, override=override, metadata=metadata)
120 return fn
121
122 # raise the error ahead of time
123 if not (name is None or isinstance(name, str)):
124 raise TypeError(f'`name` must be a str, found {name}')
125
126 def _register(cls):
127 self._register_function(fn=cls, name=name, override=override, metadata=metadata)
128 return cls
129
130 return _register
131
132 def available_keys(self) -> List[str]:
133 return sorted(v["name"] for v in self.functions)
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flash/core/registry.py b/flash/core/registry.py
--- a/flash/core/registry.py
+++ b/flash/core/registry.py
@@ -56,7 +56,7 @@
"""
matches = [e for e in self.functions if key == e["name"]]
if not matches:
- raise KeyError(f"Key: {key} is not in {repr(self)}")
+ raise KeyError(f"Key: {key} is not in {type(self).__name__}")
if metadata:
matches = [m for m in matches if metadata.items() <= m["metadata"].items()]
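A quick sketch of the behavioral change, assuming `FlashRegistry` is importable exactly as in the file listing above:

```python
from flash.core.registry import FlashRegistry

backbones = FlashRegistry("backbones")  # empty registry, just for the message

try:
    backbones.get("abcd")
except KeyError as err:
    print(err)

# Before the patch the message embeds repr(self) -- the registry name plus
# every registered function -- e.g.:
#   'Key: abcd is not in FlashRegistry(name=backbones, functions=[])'
# After the patch only the class name is shown:
#   'Key: abcd is not in FlashRegistry'
```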
| {"golden_diff": "diff --git a/flash/core/registry.py b/flash/core/registry.py\n--- a/flash/core/registry.py\n+++ b/flash/core/registry.py\n@@ -56,7 +56,7 @@\n \"\"\"\n matches = [e for e in self.functions if key == e[\"name\"]]\n if not matches:\n- raise KeyError(f\"Key: {key} is not in {repr(self)}\")\n+ raise KeyError(f\"Key: {key} is not in {type(self).__name__}\")\n \n if metadata:\n matches = [m for m in matches if metadata.items() <= m[\"metadata\"].items()]\n", "issue": "Confusing KerError message for flash registry\n## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n### To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n```\r\nfrom flash.image import ImageClassificationData, ImageClassifier\r\n\r\nprint(ImageClassifier.backbones.get('abcd'))\r\n```\r\n\r\n#### Code sample\r\n<!-- Ideally attach a minimal code sample to reproduce the decried issue.\r\nMinimal means having the shortest code but still preserving the bug. -->\r\n\r\n### Expected behavior\r\n\r\nIt should throw a keyerror.\r\n\r\n### Environment\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, source):\r\n - Build command you used (if compiling from source):\r\n - Python version:\r\n - CUDA/cuDNN version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n### Additional context\r\n\r\nSending in PR.\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom functools import partial\nfrom types import FunctionType\nfrom typing import Any, Callable, Dict, List, Optional, Union\n\nfrom pytorch_lightning.utilities import rank_zero_info\nfrom pytorch_lightning.utilities.exceptions import MisconfigurationException\n\n_REGISTERED_FUNCTION = Dict[str, Any]\n\n\nclass FlashRegistry:\n \"\"\"This class is used to register function or :class:`functools.partial` class to a registry.\"\"\"\n\n def __init__(self, name: str, verbose: bool = False) -> None:\n self.name = name\n self.functions: List[_REGISTERED_FUNCTION] = []\n self._verbose = verbose\n\n def __len__(self) -> int:\n return len(self.functions)\n\n def __contains__(self, key) -> bool:\n return any(key == e[\"name\"] for e in self.functions)\n\n def __repr__(self) -> str:\n return f'{self.__class__.__name__}(name={self.name}, functions={self.functions})'\n\n def get(\n self,\n key: str,\n with_metadata: bool = False,\n strict: bool = True,\n **metadata,\n ) -> Union[Callable, _REGISTERED_FUNCTION, List[_REGISTERED_FUNCTION], List[Callable]]:\n \"\"\"\n This function is used to gather matches from the registry:\n\n Args:\n key: Name of the registered function.\n with_metadata: Whether to include the associated metadata in the return value.\n strict: Whether to return all matches or just one.\n metadata: Metadata used to filter against existing registry item's metadata.\n \"\"\"\n matches = [e for e in self.functions if key == e[\"name\"]]\n if not 
matches:\n raise KeyError(f\"Key: {key} is not in {repr(self)}\")\n\n if metadata:\n matches = [m for m in matches if metadata.items() <= m[\"metadata\"].items()]\n if not matches:\n raise KeyError(\"Found no matches that fit your metadata criteria. Try removing some metadata\")\n\n matches = [e if with_metadata else e[\"fn\"] for e in matches]\n return matches[0] if strict else matches\n\n def remove(self, key: str) -> None:\n self.functions = [f for f in self.functions if f[\"name\"] != key]\n\n def _register_function(\n self,\n fn: Callable,\n name: Optional[str] = None,\n override: bool = False,\n metadata: Optional[Dict[str, Any]] = None\n ):\n if not isinstance(fn, FunctionType) and not isinstance(fn, partial):\n raise MisconfigurationException(f\"You can only register a function, found: {fn}\")\n\n name = name or fn.__name__\n\n if self._verbose:\n rank_zero_info(f\"Registering: {fn.__name__} function with name: {name} and metadata: {metadata}\")\n\n item = {\"fn\": fn, \"name\": name, \"metadata\": metadata or {}}\n\n matching_index = self._find_matching_index(item)\n if override and matching_index is not None:\n self.functions[matching_index] = item\n else:\n if matching_index is not None:\n raise MisconfigurationException(\n f\"Function with name: {name} and metadata: {metadata} is already present within {self}.\"\n \" HINT: Use `override=True`.\"\n )\n self.functions.append(item)\n\n def _find_matching_index(self, item: _REGISTERED_FUNCTION) -> Optional[int]:\n for idx, fn in enumerate(self.functions):\n if all(fn[k] == item[k] for k in (\"fn\", \"name\", \"metadata\")):\n return idx\n\n def __call__(\n self,\n fn: Optional[Callable[..., Any]] = None,\n name: Optional[str] = None,\n override: bool = False,\n **metadata\n ) -> Callable:\n \"\"\"\n This function is used to register new functions to the registry along their metadata.\n\n Functions can be filtered using metadata using the ``get`` function.\n\n \"\"\"\n if fn is not None:\n self._register_function(fn=fn, name=name, override=override, metadata=metadata)\n return fn\n\n # raise the error ahead of time\n if not (name is None or isinstance(name, str)):\n raise TypeError(f'`name` must be a str, found {name}')\n\n def _register(cls):\n self._register_function(fn=cls, name=name, override=override, metadata=metadata)\n return cls\n\n return _register\n\n def available_keys(self) -> List[str]:\n return sorted(v[\"name\"] for v in self.functions)\n", "path": "flash/core/registry.py"}], "after_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom functools import partial\nfrom types import FunctionType\nfrom typing import Any, Callable, Dict, List, Optional, Union\n\nfrom pytorch_lightning.utilities import rank_zero_info\nfrom pytorch_lightning.utilities.exceptions import MisconfigurationException\n\n_REGISTERED_FUNCTION = Dict[str, Any]\n\n\nclass FlashRegistry:\n \"\"\"This class is used to register function or :class:`functools.partial` class to a 
registry.\"\"\"\n\n def __init__(self, name: str, verbose: bool = False) -> None:\n self.name = name\n self.functions: List[_REGISTERED_FUNCTION] = []\n self._verbose = verbose\n\n def __len__(self) -> int:\n return len(self.functions)\n\n def __contains__(self, key) -> bool:\n return any(key == e[\"name\"] for e in self.functions)\n\n def __repr__(self) -> str:\n return f'{self.__class__.__name__}(name={self.name}, functions={self.functions})'\n\n def get(\n self,\n key: str,\n with_metadata: bool = False,\n strict: bool = True,\n **metadata,\n ) -> Union[Callable, _REGISTERED_FUNCTION, List[_REGISTERED_FUNCTION], List[Callable]]:\n \"\"\"\n This function is used to gather matches from the registry:\n\n Args:\n key: Name of the registered function.\n with_metadata: Whether to include the associated metadata in the return value.\n strict: Whether to return all matches or just one.\n metadata: Metadata used to filter against existing registry item's metadata.\n \"\"\"\n matches = [e for e in self.functions if key == e[\"name\"]]\n if not matches:\n raise KeyError(f\"Key: {key} is not in {type(self).__name__}\")\n\n if metadata:\n matches = [m for m in matches if metadata.items() <= m[\"metadata\"].items()]\n if not matches:\n raise KeyError(\"Found no matches that fit your metadata criteria. Try removing some metadata\")\n\n matches = [e if with_metadata else e[\"fn\"] for e in matches]\n return matches[0] if strict else matches\n\n def remove(self, key: str) -> None:\n self.functions = [f for f in self.functions if f[\"name\"] != key]\n\n def _register_function(\n self,\n fn: Callable,\n name: Optional[str] = None,\n override: bool = False,\n metadata: Optional[Dict[str, Any]] = None\n ):\n if not isinstance(fn, FunctionType) and not isinstance(fn, partial):\n raise MisconfigurationException(f\"You can only register a function, found: {fn}\")\n\n name = name or fn.__name__\n\n if self._verbose:\n rank_zero_info(f\"Registering: {fn.__name__} function with name: {name} and metadata: {metadata}\")\n\n item = {\"fn\": fn, \"name\": name, \"metadata\": metadata or {}}\n\n matching_index = self._find_matching_index(item)\n if override and matching_index is not None:\n self.functions[matching_index] = item\n else:\n if matching_index is not None:\n raise MisconfigurationException(\n f\"Function with name: {name} and metadata: {metadata} is already present within {self}.\"\n \" HINT: Use `override=True`.\"\n )\n self.functions.append(item)\n\n def _find_matching_index(self, item: _REGISTERED_FUNCTION) -> Optional[int]:\n for idx, fn in enumerate(self.functions):\n if all(fn[k] == item[k] for k in (\"fn\", \"name\", \"metadata\")):\n return idx\n\n def __call__(\n self,\n fn: Optional[Callable[..., Any]] = None,\n name: Optional[str] = None,\n override: bool = False,\n **metadata\n ) -> Callable:\n \"\"\"\n This function is used to register new functions to the registry along their metadata.\n\n Functions can be filtered using metadata using the ``get`` function.\n\n \"\"\"\n if fn is not None:\n self._register_function(fn=fn, name=name, override=override, metadata=metadata)\n return fn\n\n # raise the error ahead of time\n if not (name is None or isinstance(name, str)):\n raise TypeError(f'`name` must be a str, found {name}')\n\n def _register(cls):\n self._register_function(fn=cls, name=name, override=override, metadata=metadata)\n return cls\n\n return _register\n\n def available_keys(self) -> List[str]:\n return sorted(v[\"name\"] for v in self.functions)\n", "path": "flash/core/registry.py"}]} 
| 1,869 | 137 |
gh_patches_debug_16275 | rasdani/github-patches | git_diff | Kinto__kinto-1256 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Accounts post API crashes with unset id (if basicauth is enabled too)
```
gsurita-30820:~ gsurita$ echo '{"data": {"password": "me"}}' | http post localhost:8888/v1/accounts -a foo:bar
HTTP/1.1 500 Internal Server Error
(...)
```
```
Traceback (most recent call last):
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/tweens.py", line 22, in excview_tween
response = handler(request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/__init__.py", line 119, in tm_tween
reraise(*exc_info)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/compat.py", line 15, in reraise
raise value
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/__init__.py", line 98, in tm_tween
response = handler(request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/router.py", line 155, in handle_request
view_name
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/view.py", line 612, in _call_view
response = view_callable(context, request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/config/views.py", line 181, in __call__
return view(context, request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py", line 389, in attr_view
return view(context, request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py", line 367, in predicate_wrapper
return view(context, request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py", line 300, in secured_view
return view(context, request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py", line 438, in rendered_view
result = view(context, request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py", line 147, in _requestonly_view
response = view(request)
File "/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/cornice/service.py", line 491, in wrapper
response = view_()
File "/Users/gsurita/kinto/kinto/kinto/plugins/accounts/views.py", line 81, in collection_post
result = super(Account, self).collection_post()
File "/Users/gsurita/kinto/kinto/kinto/core/resource/__init__.py", line 341, in collection_post
new_record = self.process_record(new_record)
File "/Users/gsurita/kinto/kinto/kinto/plugins/accounts/views.py", line 102, in process_record
if new[self.model.id_field] != self.request.selected_userid:
KeyError: 'id'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/plugins/accounts/__init__.py`
Content:
```
1 from kinto.authorization import PERMISSIONS_INHERITANCE_TREE
2
3
4 def includeme(config):
5 config.add_api_capability(
6 'accounts',
7 description='Manage user accounts.',
8 url='https://kinto.readthedocs.io/en/latest/api/1.x/accounts.html')
9
10 config.scan('kinto.plugins.accounts.views')
11
12 PERMISSIONS_INHERITANCE_TREE[''].update({
13 'account:create': {}
14 })
15 PERMISSIONS_INHERITANCE_TREE['account'] = {
16 'write': {'account': ['write']},
17 'read': {'account': ['write', 'read']}
18 }
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kinto/plugins/accounts/__init__.py b/kinto/plugins/accounts/__init__.py
--- a/kinto/plugins/accounts/__init__.py
+++ b/kinto/plugins/accounts/__init__.py
@@ -1,4 +1,5 @@
from kinto.authorization import PERMISSIONS_INHERITANCE_TREE
+from pyramid.exceptions import ConfigurationError
def includeme(config):
@@ -16,3 +17,12 @@
'write': {'account': ['write']},
'read': {'account': ['write', 'read']}
}
+
+ # Add some safety to avoid weird behaviour with basicauth default policy.
+ settings = config.get_settings()
+ auth_policies = settings['multiauth.policies']
+ if 'basicauth' in auth_policies and 'account' in auth_policies:
+ if auth_policies.index('basicauth') < auth_policies.index('account'):
+ error_msg = ("'basicauth' should not be mentioned before 'account' "
+ "in 'multiauth.policies' setting.")
+ raise ConfigurationError(error_msg)
| {"golden_diff": "diff --git a/kinto/plugins/accounts/__init__.py b/kinto/plugins/accounts/__init__.py\n--- a/kinto/plugins/accounts/__init__.py\n+++ b/kinto/plugins/accounts/__init__.py\n@@ -1,4 +1,5 @@\n from kinto.authorization import PERMISSIONS_INHERITANCE_TREE\n+from pyramid.exceptions import ConfigurationError\n \n \n def includeme(config):\n@@ -16,3 +17,12 @@\n 'write': {'account': ['write']},\n 'read': {'account': ['write', 'read']}\n }\n+\n+ # Add some safety to avoid weird behaviour with basicauth default policy.\n+ settings = config.get_settings()\n+ auth_policies = settings['multiauth.policies']\n+ if 'basicauth' in auth_policies and 'account' in auth_policies:\n+ if auth_policies.index('basicauth') < auth_policies.index('account'):\n+ error_msg = (\"'basicauth' should not be mentioned before 'account' \"\n+ \"in 'multiauth.policies' setting.\")\n+ raise ConfigurationError(error_msg)\n", "issue": "Accounts post API crashes with unset id (if basicauth is enabled too)\n```\r\ngsurita-30820:~ gsurita$ echo '{\"data\": {\"password\": \"me\"}}' | http post localhost:8888/v1/accounts -a foo:bar\r\nHTTP/1.1 500 Internal Server Error\r\n(...)\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/tweens.py\", line 22, in excview_tween\r\n response = handler(request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/__init__.py\", line 119, in tm_tween\r\n reraise(*exc_info)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/compat.py\", line 15, in reraise\r\n raise value\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/__init__.py\", line 98, in tm_tween\r\n response = handler(request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/router.py\", line 155, in handle_request\r\n view_name\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/view.py\", line 612, in _call_view\r\n response = view_callable(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/config/views.py\", line 181, in __call__\r\n return view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 389, in attr_view\r\n return view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 367, in predicate_wrapper\r\n return view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 300, in secured_view\r\n return view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 438, in rendered_view\r\n result = view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 147, in _requestonly_view\r\n response = view(request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/cornice/service.py\", line 491, in wrapper\r\n response = view_()\r\n File \"/Users/gsurita/kinto/kinto/kinto/plugins/accounts/views.py\", line 81, in collection_post\r\n result = super(Account, self).collection_post()\r\n File \"/Users/gsurita/kinto/kinto/kinto/core/resource/__init__.py\", line 341, in collection_post\r\n new_record = self.process_record(new_record)\r\n File 
\"/Users/gsurita/kinto/kinto/kinto/plugins/accounts/views.py\", line 102, in process_record\r\n if new[self.model.id_field] != self.request.selected_userid:\r\nKeyError: 'id'\r\n```\nAccounts post API crashes with unset id (if basicauth is enabled too)\n```\r\ngsurita-30820:~ gsurita$ echo '{\"data\": {\"password\": \"me\"}}' | http post localhost:8888/v1/accounts -a foo:bar\r\nHTTP/1.1 500 Internal Server Error\r\n(...)\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/tweens.py\", line 22, in excview_tween\r\n response = handler(request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/__init__.py\", line 119, in tm_tween\r\n reraise(*exc_info)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/compat.py\", line 15, in reraise\r\n raise value\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid_tm/__init__.py\", line 98, in tm_tween\r\n response = handler(request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/router.py\", line 155, in handle_request\r\n view_name\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/view.py\", line 612, in _call_view\r\n response = view_callable(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/config/views.py\", line 181, in __call__\r\n return view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 389, in attr_view\r\n return view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 367, in predicate_wrapper\r\n return view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 300, in secured_view\r\n return view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 438, in rendered_view\r\n result = view(context, request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/pyramid/viewderivers.py\", line 147, in _requestonly_view\r\n response = view(request)\r\n File \"/Users/gsurita/kinto/kinto/.venv/lib/python3.6/site-packages/cornice/service.py\", line 491, in wrapper\r\n response = view_()\r\n File \"/Users/gsurita/kinto/kinto/kinto/plugins/accounts/views.py\", line 81, in collection_post\r\n result = super(Account, self).collection_post()\r\n File \"/Users/gsurita/kinto/kinto/kinto/core/resource/__init__.py\", line 341, in collection_post\r\n new_record = self.process_record(new_record)\r\n File \"/Users/gsurita/kinto/kinto/kinto/plugins/accounts/views.py\", line 102, in process_record\r\n if new[self.model.id_field] != self.request.selected_userid:\r\nKeyError: 'id'\r\n```\n", "before_files": [{"content": "from kinto.authorization import PERMISSIONS_INHERITANCE_TREE\n\n\ndef includeme(config):\n config.add_api_capability(\n 'accounts',\n description='Manage user accounts.',\n url='https://kinto.readthedocs.io/en/latest/api/1.x/accounts.html')\n\n config.scan('kinto.plugins.accounts.views')\n\n PERMISSIONS_INHERITANCE_TREE[''].update({\n 'account:create': {}\n })\n PERMISSIONS_INHERITANCE_TREE['account'] = {\n 'write': {'account': ['write']},\n 'read': {'account': ['write', 'read']}\n }\n", "path": "kinto/plugins/accounts/__init__.py"}], "after_files": [{"content": "from 
kinto.authorization import PERMISSIONS_INHERITANCE_TREE\nfrom pyramid.exceptions import ConfigurationError\n\n\ndef includeme(config):\n config.add_api_capability(\n 'accounts',\n description='Manage user accounts.',\n url='https://kinto.readthedocs.io/en/latest/api/1.x/accounts.html')\n\n config.scan('kinto.plugins.accounts.views')\n\n PERMISSIONS_INHERITANCE_TREE[''].update({\n 'account:create': {}\n })\n PERMISSIONS_INHERITANCE_TREE['account'] = {\n 'write': {'account': ['write']},\n 'read': {'account': ['write', 'read']}\n }\n\n # Add some safety to avoid weird behaviour with basicauth default policy.\n settings = config.get_settings()\n auth_policies = settings['multiauth.policies']\n if 'basicauth' in auth_policies and 'account' in auth_policies:\n if auth_policies.index('basicauth') < auth_policies.index('account'):\n error_msg = (\"'basicauth' should not be mentioned before 'account' \"\n \"in 'multiauth.policies' setting.\")\n raise ConfigurationError(error_msg)\n", "path": "kinto/plugins/accounts/__init__.py"}]} | 2,010 | 241 |
gh_patches_debug_26382 | rasdani/github-patches | git_diff | beetbox__beets-1675 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
EmbedArt Plugin: remove_art_file doesn't seem to work
I'm running beets version 1.15. The EmbedArt plugin isn't removing the art file from the file system.
Logfile: http://pastebin.com/n10bbdpS
Config: http://pastebin.com/ztrjd16C
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `beetsplug/embedart.py`
Content:
```
1 # This file is part of beets.
2 # Copyright 2015, Adrian Sampson.
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining
5 # a copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish,
8 # distribute, sublicense, and/or sell copies of the Software, and to
9 # permit persons to whom the Software is furnished to do so, subject to
10 # the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be
13 # included in all copies or substantial portions of the Software.
14
15 """Allows beets to embed album art into file metadata."""
16 from __future__ import (division, absolute_import, print_function,
17 unicode_literals)
18
19 import os.path
20
21 from beets.plugins import BeetsPlugin
22 from beets import ui
23 from beets.ui import decargs
24 from beets.util import syspath, normpath, displayable_path, bytestring_path
25 from beets.util.artresizer import ArtResizer
26 from beets import config
27 from beets import art
28
29
30 class EmbedCoverArtPlugin(BeetsPlugin):
31 """Allows albumart to be embedded into the actual files.
32 """
33 def __init__(self):
34 super(EmbedCoverArtPlugin, self).__init__()
35 self.config.add({
36 'maxwidth': 0,
37 'auto': True,
38 'compare_threshold': 0,
39 'ifempty': False,
40 'remove_art_file': False
41 })
42
43 if self.config['maxwidth'].get(int) and not ArtResizer.shared.local:
44 self.config['maxwidth'] = 0
45 self._log.warning(u"ImageMagick or PIL not found; "
46 u"'maxwidth' option ignored")
47 if self.config['compare_threshold'].get(int) and not \
48 ArtResizer.shared.can_compare:
49 self.config['compare_threshold'] = 0
50 self._log.warning(u"ImageMagick 6.8.7 or higher not installed; "
51 u"'compare_threshold' option ignored")
52
53 self.register_listener('art_set', self.process_album)
54
55 def commands(self):
56 # Embed command.
57 embed_cmd = ui.Subcommand(
58 'embedart', help='embed image files into file metadata'
59 )
60 embed_cmd.parser.add_option(
61 '-f', '--file', metavar='PATH', help='the image file to embed'
62 )
63 maxwidth = self.config['maxwidth'].get(int)
64 compare_threshold = self.config['compare_threshold'].get(int)
65 ifempty = self.config['ifempty'].get(bool)
66 remove_art_file = self.config['remove_art_file'].get(bool)
67
68 def embed_func(lib, opts, args):
69 if opts.file:
70 imagepath = normpath(opts.file)
71 if not os.path.isfile(syspath(imagepath)):
72 raise ui.UserError(u'image file {0} not found'.format(
73 displayable_path(imagepath)
74 ))
75 for item in lib.items(decargs(args)):
76 art.embed_item(self._log, item, imagepath, maxwidth, None,
77 compare_threshold, ifempty)
78 else:
79 for album in lib.albums(decargs(args)):
80 art.embed_album(self._log, album, maxwidth, False,
81 compare_threshold, ifempty)
82
83 if remove_art_file and album.artpath is not None:
84 if os.path.isfile(album.artpath):
85 self._log.debug(u'Removing album art file '
86 u'for {0}', album)
87 os.remove(album.artpath)
88 album.artpath = None
89 album.store()
90
91 embed_cmd.func = embed_func
92
93 # Extract command.
94 extract_cmd = ui.Subcommand('extractart',
95 help='extract an image from file metadata')
96 extract_cmd.parser.add_option('-o', dest='outpath',
97 help='image output file')
98 extract_cmd.parser.add_option('-n', dest='filename',
99 help='image filename to create for all '
100 'matched albums')
101 extract_cmd.parser.add_option('-a', dest='associate',
102 action='store_true',
103 help='associate the extracted images '
104 'with the album')
105
106 def extract_func(lib, opts, args):
107 if opts.outpath:
108 art.extract_first(self._log, normpath(opts.outpath),
109 lib.items(decargs(args)))
110 else:
111 filename = bytestring_path(opts.filename or
112 config['art_filename'].get())
113 if os.path.dirname(filename) != '':
114 self._log.error(u"Only specify a name rather than a path "
115 u"for -n")
116 return
117 for album in lib.albums(decargs(args)):
118 artpath = normpath(os.path.join(album.path, filename))
119 artpath = art.extract_first(self._log, artpath,
120 album.items())
121 if artpath and opts.associate:
122 album.set_art(artpath)
123 album.store()
124 extract_cmd.func = extract_func
125
126 # Clear command.
127 clear_cmd = ui.Subcommand('clearart',
128 help='remove images from file metadata')
129
130 def clear_func(lib, opts, args):
131 art.clear(self._log, lib, decargs(args))
132 clear_cmd.func = clear_func
133
134 return [embed_cmd, extract_cmd, clear_cmd]
135
136 def process_album(self, album):
137 """Automatically embed art after art has been set
138 """
139 if self.config['auto'] and config['import']['write']:
140 max_width = self.config['maxwidth'].get(int)
141 art.embed_album(self._log, album, max_width, True,
142 self.config['compare_threshold'].get(int),
143 self.config['ifempty'].get(bool))
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/beetsplug/embedart.py b/beetsplug/embedart.py
--- a/beetsplug/embedart.py
+++ b/beetsplug/embedart.py
@@ -79,14 +79,7 @@
for album in lib.albums(decargs(args)):
art.embed_album(self._log, album, maxwidth, False,
compare_threshold, ifempty)
-
- if remove_art_file and album.artpath is not None:
- if os.path.isfile(album.artpath):
- self._log.debug(u'Removing album art file '
- u'for {0}', album)
- os.remove(album.artpath)
- album.artpath = None
- album.store()
+ self.remove_artfile(album)
embed_cmd.func = embed_func
@@ -141,3 +134,12 @@
art.embed_album(self._log, album, max_width, True,
self.config['compare_threshold'].get(int),
self.config['ifempty'].get(bool))
+ self.remove_artfile(album)
+ def remove_artfile(self, album):
+ if self.config['remove_art_file'] and album.artpath:
+ if os.path.isfile(album.artpath):
+ self._log.debug(u'Removing album art file '
+ u'for {0}', album)
+ os.remove(album.artpath)
+ album.artpath = None
+ album.store()
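As rendered here the diff's Python indentation is flattened; reconstructed as valid code, the added helper reads roughly as follows (a sketch — the nesting of the final two statements is inferred, and the surrounding `EmbedCoverArtPlugin` class and `os` import from the file listing are assumed):

```python
def remove_artfile(self, album):
    if self.config['remove_art_file'] and album.artpath:
        if os.path.isfile(album.artpath):
            self._log.debug(u'Removing album art file '
                            u'for {0}', album)
            os.remove(album.artpath)
            album.artpath = None
            album.store()
```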
| {"golden_diff": "diff --git a/beetsplug/embedart.py b/beetsplug/embedart.py\n--- a/beetsplug/embedart.py\n+++ b/beetsplug/embedart.py\n@@ -79,14 +79,7 @@\n for album in lib.albums(decargs(args)):\n art.embed_album(self._log, album, maxwidth, False,\n compare_threshold, ifempty)\n-\n- if remove_art_file and album.artpath is not None:\n- if os.path.isfile(album.artpath):\n- self._log.debug(u'Removing album art file '\n- u'for {0}', album)\n- os.remove(album.artpath)\n- album.artpath = None\n- album.store()\n+ self.remove_artfile(album)\n \n embed_cmd.func = embed_func\n \n@@ -141,3 +134,12 @@\n art.embed_album(self._log, album, max_width, True,\n self.config['compare_threshold'].get(int),\n self.config['ifempty'].get(bool))\n+ self.remove_artfile(album)\n+ def remove_artfile(self, album)\n+ if self.config['remove_art_file'] and album.artpath:\n+ if os.path.isfile(album.artpath):\n+ self._log.debug(u'Removing album art file '\n+ u'for {0}', album)\n+ os.remove(album.artpath)\n+ album.artpath = None\n+ album.store()\n", "issue": "EmbedArt Plugin: remove_art_file doesn't seem to work\nI'm running beets version 1.15. The EmbedArt plugin isn't removing the art file from the file system. \nLogfile: http://pastebin.com/n10bbdpS\nConfig: http://pastebin.com/ztrjd16C\n\n", "before_files": [{"content": "# This file is part of beets.\n# Copyright 2015, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Allows beets to embed album art into file metadata.\"\"\"\nfrom __future__ import (division, absolute_import, print_function,\n unicode_literals)\n\nimport os.path\n\nfrom beets.plugins import BeetsPlugin\nfrom beets import ui\nfrom beets.ui import decargs\nfrom beets.util import syspath, normpath, displayable_path, bytestring_path\nfrom beets.util.artresizer import ArtResizer\nfrom beets import config\nfrom beets import art\n\n\nclass EmbedCoverArtPlugin(BeetsPlugin):\n \"\"\"Allows albumart to be embedded into the actual files.\n \"\"\"\n def __init__(self):\n super(EmbedCoverArtPlugin, self).__init__()\n self.config.add({\n 'maxwidth': 0,\n 'auto': True,\n 'compare_threshold': 0,\n 'ifempty': False,\n 'remove_art_file': False\n })\n\n if self.config['maxwidth'].get(int) and not ArtResizer.shared.local:\n self.config['maxwidth'] = 0\n self._log.warning(u\"ImageMagick or PIL not found; \"\n u\"'maxwidth' option ignored\")\n if self.config['compare_threshold'].get(int) and not \\\n ArtResizer.shared.can_compare:\n self.config['compare_threshold'] = 0\n self._log.warning(u\"ImageMagick 6.8.7 or higher not installed; \"\n u\"'compare_threshold' option ignored\")\n\n self.register_listener('art_set', self.process_album)\n\n def commands(self):\n # Embed command.\n embed_cmd = ui.Subcommand(\n 'embedart', help='embed image files into file metadata'\n )\n embed_cmd.parser.add_option(\n '-f', '--file', metavar='PATH', help='the image file to embed'\n )\n maxwidth = self.config['maxwidth'].get(int)\n compare_threshold = 
self.config['compare_threshold'].get(int)\n ifempty = self.config['ifempty'].get(bool)\n remove_art_file = self.config['remove_art_file'].get(bool)\n\n def embed_func(lib, opts, args):\n if opts.file:\n imagepath = normpath(opts.file)\n if not os.path.isfile(syspath(imagepath)):\n raise ui.UserError(u'image file {0} not found'.format(\n displayable_path(imagepath)\n ))\n for item in lib.items(decargs(args)):\n art.embed_item(self._log, item, imagepath, maxwidth, None,\n compare_threshold, ifempty)\n else:\n for album in lib.albums(decargs(args)):\n art.embed_album(self._log, album, maxwidth, False,\n compare_threshold, ifempty)\n\n if remove_art_file and album.artpath is not None:\n if os.path.isfile(album.artpath):\n self._log.debug(u'Removing album art file '\n u'for {0}', album)\n os.remove(album.artpath)\n album.artpath = None\n album.store()\n\n embed_cmd.func = embed_func\n\n # Extract command.\n extract_cmd = ui.Subcommand('extractart',\n help='extract an image from file metadata')\n extract_cmd.parser.add_option('-o', dest='outpath',\n help='image output file')\n extract_cmd.parser.add_option('-n', dest='filename',\n help='image filename to create for all '\n 'matched albums')\n extract_cmd.parser.add_option('-a', dest='associate',\n action='store_true',\n help='associate the extracted images '\n 'with the album')\n\n def extract_func(lib, opts, args):\n if opts.outpath:\n art.extract_first(self._log, normpath(opts.outpath),\n lib.items(decargs(args)))\n else:\n filename = bytestring_path(opts.filename or\n config['art_filename'].get())\n if os.path.dirname(filename) != '':\n self._log.error(u\"Only specify a name rather than a path \"\n u\"for -n\")\n return\n for album in lib.albums(decargs(args)):\n artpath = normpath(os.path.join(album.path, filename))\n artpath = art.extract_first(self._log, artpath,\n album.items())\n if artpath and opts.associate:\n album.set_art(artpath)\n album.store()\n extract_cmd.func = extract_func\n\n # Clear command.\n clear_cmd = ui.Subcommand('clearart',\n help='remove images from file metadata')\n\n def clear_func(lib, opts, args):\n art.clear(self._log, lib, decargs(args))\n clear_cmd.func = clear_func\n\n return [embed_cmd, extract_cmd, clear_cmd]\n\n def process_album(self, album):\n \"\"\"Automatically embed art after art has been set\n \"\"\"\n if self.config['auto'] and config['import']['write']:\n max_width = self.config['maxwidth'].get(int)\n art.embed_album(self._log, album, max_width, True,\n self.config['compare_threshold'].get(int),\n self.config['ifempty'].get(bool))\n", "path": "beetsplug/embedart.py"}], "after_files": [{"content": "# This file is part of beets.\n# Copyright 2015, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Allows beets to embed album art into file metadata.\"\"\"\nfrom __future__ import (division, absolute_import, print_function,\n unicode_literals)\n\nimport os.path\n\nfrom beets.plugins import BeetsPlugin\nfrom beets import ui\nfrom beets.ui import 
decargs\nfrom beets.util import syspath, normpath, displayable_path, bytestring_path\nfrom beets.util.artresizer import ArtResizer\nfrom beets import config\nfrom beets import art\n\n\nclass EmbedCoverArtPlugin(BeetsPlugin):\n \"\"\"Allows albumart to be embedded into the actual files.\n \"\"\"\n def __init__(self):\n super(EmbedCoverArtPlugin, self).__init__()\n self.config.add({\n 'maxwidth': 0,\n 'auto': True,\n 'compare_threshold': 0,\n 'ifempty': False,\n 'remove_art_file': False\n })\n\n if self.config['maxwidth'].get(int) and not ArtResizer.shared.local:\n self.config['maxwidth'] = 0\n self._log.warning(u\"ImageMagick or PIL not found; \"\n u\"'maxwidth' option ignored\")\n if self.config['compare_threshold'].get(int) and not \\\n ArtResizer.shared.can_compare:\n self.config['compare_threshold'] = 0\n self._log.warning(u\"ImageMagick 6.8.7 or higher not installed; \"\n u\"'compare_threshold' option ignored\")\n\n self.register_listener('art_set', self.process_album)\n\n def commands(self):\n # Embed command.\n embed_cmd = ui.Subcommand(\n 'embedart', help='embed image files into file metadata'\n )\n embed_cmd.parser.add_option(\n '-f', '--file', metavar='PATH', help='the image file to embed'\n )\n maxwidth = self.config['maxwidth'].get(int)\n compare_threshold = self.config['compare_threshold'].get(int)\n ifempty = self.config['ifempty'].get(bool)\n remove_art_file = self.config['remove_art_file'].get(bool)\n\n def embed_func(lib, opts, args):\n if opts.file:\n imagepath = normpath(opts.file)\n if not os.path.isfile(syspath(imagepath)):\n raise ui.UserError(u'image file {0} not found'.format(\n displayable_path(imagepath)\n ))\n for item in lib.items(decargs(args)):\n art.embed_item(self._log, item, imagepath, maxwidth, None,\n compare_threshold, ifempty)\n else:\n for album in lib.albums(decargs(args)):\n art.embed_album(self._log, album, maxwidth, False,\n compare_threshold, ifempty)\n self.remove_artfile(album)\n\n embed_cmd.func = embed_func\n\n # Extract command.\n extract_cmd = ui.Subcommand('extractart',\n help='extract an image from file metadata')\n extract_cmd.parser.add_option('-o', dest='outpath',\n help='image output file')\n extract_cmd.parser.add_option('-n', dest='filename',\n help='image filename to create for all '\n 'matched albums')\n extract_cmd.parser.add_option('-a', dest='associate',\n action='store_true',\n help='associate the extracted images '\n 'with the album')\n\n def extract_func(lib, opts, args):\n if opts.outpath:\n art.extract_first(self._log, normpath(opts.outpath),\n lib.items(decargs(args)))\n else:\n filename = bytestring_path(opts.filename or\n config['art_filename'].get())\n if os.path.dirname(filename) != '':\n self._log.error(u\"Only specify a name rather than a path \"\n u\"for -n\")\n return\n for album in lib.albums(decargs(args)):\n artpath = normpath(os.path.join(album.path, filename))\n artpath = art.extract_first(self._log, artpath,\n album.items())\n if artpath and opts.associate:\n album.set_art(artpath)\n album.store()\n extract_cmd.func = extract_func\n\n # Clear command.\n clear_cmd = ui.Subcommand('clearart',\n help='remove images from file metadata')\n\n def clear_func(lib, opts, args):\n art.clear(self._log, lib, decargs(args))\n clear_cmd.func = clear_func\n\n return [embed_cmd, extract_cmd, clear_cmd]\n\n def process_album(self, album):\n \"\"\"Automatically embed art after art has been set\n \"\"\"\n if self.config['auto'] and config['import']['write']:\n max_width = self.config['maxwidth'].get(int)\n 
art.embed_album(self._log, album, max_width, True,\n self.config['compare_threshold'].get(int),\n self.config['ifempty'].get(bool))\n self.remove_artfile(album)\n def remove_artfile(self, album)\n if self.config['remove_art_file'] and album.artpath:\n if os.path.isfile(album.artpath):\n self._log.debug(u'Removing album art file '\n u'for {0}', album)\n os.remove(album.artpath)\n album.artpath = None\n album.store()\n", "path": "beetsplug/embedart.py"}]} | 1,878 | 305 |
gh_patches_debug_13392 | rasdani/github-patches | git_diff | pulp__pulpcore-4311 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Import-check may return an error saying "directory does not exist" when the real problem is missing permissions
Endpoint /pulp/api/v3/importers/core/pulp/import-check/ returns an error saying "Directory does not exist" when the pulp user lacks permission to read said directory.
**To Reproduce**
Try importing content from a directory where pulp user doesn't have read access.
**Expected behavior**
Error returned should indicate the permission error.
**Additional context**
Pulp uses the os.path.exists() method to verify that the directory exists: https://github.com/pulp/pulpcore/blob/main/pulpcore/app/views/importer.py#L44-L45
However, that method can return False when permission to access the directory is not granted, even if the directory exists:
~~~
os.path.exists(path)
Return True if path refers to an existing path or an open file descriptor. Returns False for broken symbolic links. On some platforms, this function may return False if permission is not granted to execute os.stat() on the requested file, even if the path physically exists.
~~~
os.path method documentation -> https://docs.python.org/3/library/os.path.html
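
As a quick editorial illustration (the path below is hypothetical): `os.path.exists()` swallows the `PermissionError` that `os.stat()` raises, which is why the endpoint misreports a permission failure as a missing directory.

```python
import os

# Hypothetical directory that exists but is not accessible to the current user.
locked_dir = "/var/lib/pulp/imports/locked"

# os.path.exists() swallows the underlying os.stat() failure and returns False,
# so callers cannot tell "missing" apart from "permission denied".
print(os.path.exists(locked_dir))  # False, even though the path exists

# Calling os.stat() directly surfaces the real error instead.
try:
    os.stat(locked_dir)
except PermissionError as exc:  # PermissionError is a subclass of OSError
    print(exc)  # e.g. [Errno 13] Permission denied: '/var/lib/pulp/imports/locked'
```

Whether the traversal actually raises `PermissionError` depends on the platform and on which permission bit is missing, but the contrast above is the behaviour the report describes.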
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/views/importer.py`
Content:
```
1 from gettext import gettext as _
2 import json
3 import os
4 from drf_spectacular.utils import extend_schema
5 from rest_framework.views import APIView
6 from rest_framework.response import Response
7
8 from pulpcore.app import settings
9 from pulpcore.app.serializers import PulpImportCheckResponseSerializer, PulpImportCheckSerializer
10
11
12 def _check_allowed_import_path(a_path):
13 user_provided_realpath = os.path.realpath(a_path)
14 for allowed_path in settings.ALLOWED_IMPORT_PATHS:
15 if user_provided_realpath.startswith(allowed_path):
16 return True, None
17 return False, _(
18 "{} is not an allowed import path".format(os.path.dirname(os.path.realpath(a_path)))
19 )
20
21
22 def _validate_file(in_param, data):
23 """
24 Returns a (is-valid, msgs[]) tuple describing all problems found with data[in_param]
25
26 We check for a number of things, attempting to return all the errors we can find. We don't want
27 to give out information for files in arbitrary locations on the filesystem; if the check
28 for ALLOWED_IMPORT_PATHS fails, we report that and ignore any other problems.
29
30 If the directory containing the base-file doesn't exist, or isn't readable, or the specified
31 file doesn't exist, report and return.
32
33 Error-messages for all other checks are additive.
34 """
35 # check allowed, leave if failed
36 file = data[in_param]
37 real_file = os.path.realpath(file)
38 rc, msg = _check_allowed_import_path(real_file)
39 if not rc:
40 return rc, [msg]
41
42 # check directory-sanity, leave if failed
43 owning_dir = os.path.dirname(real_file)
44 if not os.path.exists(owning_dir):
45 return False, [_("directory {} does not exist").format(owning_dir)]
46 if not os.access(owning_dir, os.R_OK):
47 return False, [_("directory {} does not allow read-access").format(owning_dir)]
48
49 # check file-exists, leave if failed
50 if not os.path.exists(real_file):
51 return False, [_("file {} does not exist").format(real_file)]
52
53 # check file-sanity
54 msgs = []
55 isfile = os.path.isfile(real_file)
56 readable = os.access(real_file, os.R_OK)
57
58 rc = isfile and readable
59 if not isfile:
60 msgs.append(_("{} is not a file".format(real_file)))
61 if not readable:
62 msgs.append(_("{} exists but cannot be read".format(real_file)))
63
64 # extra check for toc-dir-write
65 if in_param == "toc":
66 if not os.access(owning_dir, os.W_OK):
67 rc = False
68 msgs.append(_("directory {} must allow pulp write-access".format(owning_dir)))
69
70 return rc, msgs
71
72
73 class PulpImporterImportCheckView(APIView):
74 """
75 Returns validity of proposed parameters for a PulpImport call.
76 """
77
78 @extend_schema(
79 summary="Validate the parameters to be used for a PulpImport call",
80 operation_id="pulp_import_check_post",
81 request=PulpImportCheckSerializer,
82 responses={200: PulpImportCheckResponseSerializer},
83 )
84 def post(self, request, format=None):
85 """
86 Evaluates validity of proposed PulpImport parameters 'toc', 'path', and 'repo_mapping'.
87
88 * Checks that toc, path are in ALLOWED_IMPORT_PATHS
89 * if ALLOWED:
90 * Checks that toc, path exist and are readable
91 * If toc specified, checks that containing dir is writeable
92 * Checks that repo_mapping is valid JSON
93 """
94 serializer = PulpImportCheckSerializer(data=request.data)
95 if serializer.is_valid():
96 data = {}
97 if "toc" in serializer.data:
98 data["toc"] = {}
99 data["toc"]["context"] = serializer.data["toc"]
100 data["toc"]["is_valid"], data["toc"]["messages"] = _validate_file(
101 "toc", serializer.data
102 )
103
104 if "path" in serializer.data:
105 data["path"] = {}
106 data["path"]["context"] = serializer.data["path"]
107 data["path"]["is_valid"], data["path"]["messages"] = _validate_file(
108 "path", serializer.data
109 )
110
111 if "repo_mapping" in serializer.data:
112 data["repo_mapping"] = {}
113 data["repo_mapping"]["context"] = serializer.data["repo_mapping"]
114 try:
115 json.loads(serializer.data["repo_mapping"])
116 data["repo_mapping"]["is_valid"] = True
117 data["repo_mapping"]["messages"] = []
118 except json.JSONDecodeError:
119 data["repo_mapping"]["is_valid"] = False
120 data["repo_mapping"]["messages"] = [_("invalid JSON")]
121
122 crs = PulpImportCheckResponseSerializer(data, context={"request": request})
123 return Response(crs.data)
124 return Response(serializer.errors, status=400)
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pulpcore/app/views/importer.py b/pulpcore/app/views/importer.py
--- a/pulpcore/app/views/importer.py
+++ b/pulpcore/app/views/importer.py
@@ -40,11 +40,14 @@
return rc, [msg]
# check directory-sanity, leave if failed
+ # use os.stat to ensure directory exists and pulp has read-access
+ # return any errors received from os.stat to the user
+
owning_dir = os.path.dirname(real_file)
- if not os.path.exists(owning_dir):
- return False, [_("directory {} does not exist").format(owning_dir)]
- if not os.access(owning_dir, os.R_OK):
- return False, [_("directory {} does not allow read-access").format(owning_dir)]
+ try:
+ os.stat(owning_dir)
+ except OSError as e:
+ return False, [_("{}").format(e)]
# check file-exists, leave if failed
if not os.path.exists(real_file):
| {"golden_diff": "diff --git a/pulpcore/app/views/importer.py b/pulpcore/app/views/importer.py\n--- a/pulpcore/app/views/importer.py\n+++ b/pulpcore/app/views/importer.py\n@@ -40,11 +40,14 @@\n return rc, [msg]\n \n # check directory-sanity, leave if failed\n+ # use os.stat to ensure directory exists and pulp has read-access\n+ # return any errors received from os.stat to the user\n+\n owning_dir = os.path.dirname(real_file)\n- if not os.path.exists(owning_dir):\n- return False, [_(\"directory {} does not exist\").format(owning_dir)]\n- if not os.access(owning_dir, os.R_OK):\n- return False, [_(\"directory {} does not allow read-access\").format(owning_dir)]\n+ try:\n+ os.stat(owning_dir)\n+ except OSError as e:\n+ return False, [_(\"{}\").format(e)]\n \n # check file-exists, leave if failed\n if not os.path.exists(real_file):\n", "issue": "Import-check may return error saying \"director does not exist\" when it actually just lack permissions\nEndpoint /pulp/api/v3/importers/core/pulp/import-check/ returns error saying \"Directory does not exist\" when pulp user lack permissions to read said directory.\r\n\r\n**To Reproduce**\r\n\r\nTry importing content from a directory where pulp user doesn't have read access.\r\n\r\n**Expected behavior**\r\nError returned should indicate the permission error.\r\n\r\n**Additional context**\r\n\r\nPulp is using os.path.exists() method to verify if the directory exists: https://github.com/pulp/pulpcore/blob/main/pulpcore/app/views/importer.py#L44-L45\r\n\r\nHowever, the method can return false if permission is not granted to access the directory even if the directory exists\r\n\r\n~~~\r\nos.path.exists(path)\r\nReturn True if path refers to an existing path or an open file descriptor. Returns False for broken symbolic links. On some platforms, this function may return False if permission is not granted to execute os.stat() on the requested file, even if the path physically exists.\r\n~~~\r\n\r\nos.path method documentation -> https://docs.python.org/3/library/os.path.html\r\n\n", "before_files": [{"content": "from gettext import gettext as _\nimport json\nimport os\nfrom drf_spectacular.utils import extend_schema\nfrom rest_framework.views import APIView\nfrom rest_framework.response import Response\n\nfrom pulpcore.app import settings\nfrom pulpcore.app.serializers import PulpImportCheckResponseSerializer, PulpImportCheckSerializer\n\n\ndef _check_allowed_import_path(a_path):\n user_provided_realpath = os.path.realpath(a_path)\n for allowed_path in settings.ALLOWED_IMPORT_PATHS:\n if user_provided_realpath.startswith(allowed_path):\n return True, None\n return False, _(\n \"{} is not an allowed import path\".format(os.path.dirname(os.path.realpath(a_path)))\n )\n\n\ndef _validate_file(in_param, data):\n \"\"\"\n Returns a (is-valid, msgs[]) tuple describing all problems found with data[in_param]\n\n We check for a number of things, attempting to return all the errors we can find. 
We don't want\n to give out information for files in arbitrary locations on the filesystem; if the check\n for ALLOWED_IMPORT_PATHS fails, we report that and ignore any other problems.\n\n If the directory containing the base-file doesn't exist, or isn't readable, or the specified\n file doesn't exist, report and return.\n\n Error-messages for all other checks are additive.\n \"\"\"\n # check allowed, leave if failed\n file = data[in_param]\n real_file = os.path.realpath(file)\n rc, msg = _check_allowed_import_path(real_file)\n if not rc:\n return rc, [msg]\n\n # check directory-sanity, leave if failed\n owning_dir = os.path.dirname(real_file)\n if not os.path.exists(owning_dir):\n return False, [_(\"directory {} does not exist\").format(owning_dir)]\n if not os.access(owning_dir, os.R_OK):\n return False, [_(\"directory {} does not allow read-access\").format(owning_dir)]\n\n # check file-exists, leave if failed\n if not os.path.exists(real_file):\n return False, [_(\"file {} does not exist\").format(real_file)]\n\n # check file-sanity\n msgs = []\n isfile = os.path.isfile(real_file)\n readable = os.access(real_file, os.R_OK)\n\n rc = isfile and readable\n if not isfile:\n msgs.append(_(\"{} is not a file\".format(real_file)))\n if not readable:\n msgs.append(_(\"{} exists but cannot be read\".format(real_file)))\n\n # extra check for toc-dir-write\n if in_param == \"toc\":\n if not os.access(owning_dir, os.W_OK):\n rc = False\n msgs.append(_(\"directory {} must allow pulp write-access\".format(owning_dir)))\n\n return rc, msgs\n\n\nclass PulpImporterImportCheckView(APIView):\n \"\"\"\n Returns validity of proposed parameters for a PulpImport call.\n \"\"\"\n\n @extend_schema(\n summary=\"Validate the parameters to be used for a PulpImport call\",\n operation_id=\"pulp_import_check_post\",\n request=PulpImportCheckSerializer,\n responses={200: PulpImportCheckResponseSerializer},\n )\n def post(self, request, format=None):\n \"\"\"\n Evaluates validity of proposed PulpImport parameters 'toc', 'path', and 'repo_mapping'.\n\n * Checks that toc, path are in ALLOWED_IMPORT_PATHS\n * if ALLOWED:\n * Checks that toc, path exist and are readable\n * If toc specified, checks that containing dir is writeable\n * Checks that repo_mapping is valid JSON\n \"\"\"\n serializer = PulpImportCheckSerializer(data=request.data)\n if serializer.is_valid():\n data = {}\n if \"toc\" in serializer.data:\n data[\"toc\"] = {}\n data[\"toc\"][\"context\"] = serializer.data[\"toc\"]\n data[\"toc\"][\"is_valid\"], data[\"toc\"][\"messages\"] = _validate_file(\n \"toc\", serializer.data\n )\n\n if \"path\" in serializer.data:\n data[\"path\"] = {}\n data[\"path\"][\"context\"] = serializer.data[\"path\"]\n data[\"path\"][\"is_valid\"], data[\"path\"][\"messages\"] = _validate_file(\n \"path\", serializer.data\n )\n\n if \"repo_mapping\" in serializer.data:\n data[\"repo_mapping\"] = {}\n data[\"repo_mapping\"][\"context\"] = serializer.data[\"repo_mapping\"]\n try:\n json.loads(serializer.data[\"repo_mapping\"])\n data[\"repo_mapping\"][\"is_valid\"] = True\n data[\"repo_mapping\"][\"messages\"] = []\n except json.JSONDecodeError:\n data[\"repo_mapping\"][\"is_valid\"] = False\n data[\"repo_mapping\"][\"messages\"] = [_(\"invalid JSON\")]\n\n crs = PulpImportCheckResponseSerializer(data, context={\"request\": request})\n return Response(crs.data)\n return Response(serializer.errors, status=400)\n", "path": "pulpcore/app/views/importer.py"}], "after_files": [{"content": "from gettext import gettext as _\nimport 
json\nimport os\nfrom drf_spectacular.utils import extend_schema\nfrom rest_framework.views import APIView\nfrom rest_framework.response import Response\n\nfrom pulpcore.app import settings\nfrom pulpcore.app.serializers import PulpImportCheckResponseSerializer, PulpImportCheckSerializer\n\n\ndef _check_allowed_import_path(a_path):\n user_provided_realpath = os.path.realpath(a_path)\n for allowed_path in settings.ALLOWED_IMPORT_PATHS:\n if user_provided_realpath.startswith(allowed_path):\n return True, None\n return False, _(\n \"{} is not an allowed import path\".format(os.path.dirname(os.path.realpath(a_path)))\n )\n\n\ndef _validate_file(in_param, data):\n \"\"\"\n Returns a (is-valid, msgs[]) tuple describing all problems found with data[in_param]\n\n We check for a number of things, attempting to return all the errors we can find. We don't want\n to give out information for files in arbitrary locations on the filesystem; if the check\n for ALLOWED_IMPORT_PATHS fails, we report that and ignore any other problems.\n\n If the directory containing the base-file doesn't exist, or isn't readable, or the specified\n file doesn't exist, report and return.\n\n Error-messages for all other checks are additive.\n \"\"\"\n # check allowed, leave if failed\n file = data[in_param]\n real_file = os.path.realpath(file)\n rc, msg = _check_allowed_import_path(real_file)\n if not rc:\n return rc, [msg]\n\n # check directory-sanity, leave if failed\n # use os.stat to ensure directory exists and pulp has read-access\n # return any errors received from os.stat to the user\n\n owning_dir = os.path.dirname(real_file)\n try:\n os.stat(owning_dir)\n except OSError as e:\n return False, [_(\"{}\").format(e)]\n\n # check file-exists, leave if failed\n if not os.path.exists(real_file):\n return False, [_(\"file {} does not exist\").format(real_file)]\n\n # check file-sanity\n msgs = []\n isfile = os.path.isfile(real_file)\n readable = os.access(real_file, os.R_OK)\n\n rc = isfile and readable\n if not isfile:\n msgs.append(_(\"{} is not a file\".format(real_file)))\n if not readable:\n msgs.append(_(\"{} exists but cannot be read\".format(real_file)))\n\n # extra check for toc-dir-write\n if in_param == \"toc\":\n if not os.access(owning_dir, os.W_OK):\n rc = False\n msgs.append(_(\"directory {} must allow pulp write-access\".format(owning_dir)))\n\n return rc, msgs\n\n\nclass PulpImporterImportCheckView(APIView):\n \"\"\"\n Returns validity of proposed parameters for a PulpImport call.\n \"\"\"\n\n @extend_schema(\n summary=\"Validate the parameters to be used for a PulpImport call\",\n operation_id=\"pulp_import_check_post\",\n request=PulpImportCheckSerializer,\n responses={200: PulpImportCheckResponseSerializer},\n )\n def post(self, request, format=None):\n \"\"\"\n Evaluates validity of proposed PulpImport parameters 'toc', 'path', and 'repo_mapping'.\n\n * Checks that toc, path are in ALLOWED_IMPORT_PATHS\n * if ALLOWED:\n * Checks that toc, path exist and are readable\n * If toc specified, checks that containing dir is writeable\n * Checks that repo_mapping is valid JSON\n \"\"\"\n serializer = PulpImportCheckSerializer(data=request.data)\n if serializer.is_valid():\n data = {}\n if \"toc\" in serializer.data:\n data[\"toc\"] = {}\n data[\"toc\"][\"context\"] = serializer.data[\"toc\"]\n data[\"toc\"][\"is_valid\"], data[\"toc\"][\"messages\"] = _validate_file(\n \"toc\", serializer.data\n )\n\n if \"path\" in serializer.data:\n data[\"path\"] = {}\n data[\"path\"][\"context\"] = 
serializer.data[\"path\"]\n data[\"path\"][\"is_valid\"], data[\"path\"][\"messages\"] = _validate_file(\n \"path\", serializer.data\n )\n\n if \"repo_mapping\" in serializer.data:\n data[\"repo_mapping\"] = {}\n data[\"repo_mapping\"][\"context\"] = serializer.data[\"repo_mapping\"]\n try:\n json.loads(serializer.data[\"repo_mapping\"])\n data[\"repo_mapping\"][\"is_valid\"] = True\n data[\"repo_mapping\"][\"messages\"] = []\n except json.JSONDecodeError:\n data[\"repo_mapping\"][\"is_valid\"] = False\n data[\"repo_mapping\"][\"messages\"] = [_(\"invalid JSON\")]\n\n crs = PulpImportCheckResponseSerializer(data, context={\"request\": request})\n return Response(crs.data)\n return Response(serializer.errors, status=400)\n", "path": "pulpcore/app/views/importer.py"}]} | 1,824 | 237 |
gh_patches_debug_14994 | rasdani/github-patches | git_diff | rootpy__rootpy-773 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exception on import when not forwarding X11
Dear developers,
I believe I'm experiencing a bug when trying to use rootpy over SSH. Simply importing
```Python
from rootpy.plotting import Hist
```
results in an exception:
```Python
WARNING:ROOT.TUnixSystem.SetDisplay] DISPLAY not set, setting it to :pts/0:S.8
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/__init__.py", line 12, in <module>
from .legend import Legend
File "/gridsoft/ipnls/root/v6.10.04/lib/ROOT.py", line 318, in _importhook
return _orig_ihook( name, *args, **kwds )
File "/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/legend.py", line 8, in <module>
from .box import _Positionable
File "/gridsoft/ipnls/root/v6.10.04/lib/ROOT.py", line 318, in _importhook
return _orig_ihook( name, *args, **kwds )
File "/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/box.py", line 5, in <module>
from .utils import canvases_with
File "/gridsoft/ipnls/root/v6.10.04/lib/ROOT.py", line 318, in _importhook
return _orig_ihook( name, *args, **kwds )
File "/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/utils.py", line 7, in <module>
from .canvas import _PadBase
File "/gridsoft/ipnls/root/v6.10.04/lib/ROOT.py", line 318, in _importhook
return _orig_ihook( name, *args, **kwds )
File "/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/canvas.py", line 186, in <module>
class Pad(_PadBase, QROOT.TPad):
File "/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/utils/module_facade.py", line 84, in __getattr__
result = sup.__getattr__(key)
File "/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/utils/quickroot.py", line 71, in __getattr__
libname, symbol))
RuntimeError: Unable to load libGui (required by TPad)
```
The problem does not occur if I connect with `ssh -Y`, but I would expect rootpy to be usable without a GUI as well.
I'm using rootpy 1.0.0 installed with pip, Python 3.5.3, ROOT 6.10.04 with Scientific Linux 6.5.
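
As an editorial sketch of the failure mode (not part of the original report): ROOT's `gSystem.Load()` returns 0 on success and a negative value when a library cannot be loaded, so the `libGui` failure can be caught and turned into batch mode instead of a `RuntimeError`, which is essentially what the fix in this record does.

```python
import ROOT

# gSystem.Load() returns 0 on success, 1 if the library was already loaded,
# and a negative value on failure, e.g. libGui without a usable DISPLAY.
rc = ROOT.gSystem.Load("libGui")
if rc < 0:
    # No GUI available: fall back to batch mode so plotting modules can
    # still be imported over a plain SSH session without X11 forwarding.
    ROOT.gROOT.SetBatch(True)
```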
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rootpy/utils/quickroot.py`
Content:
```
1 """
2 Quickly load ROOT symbols without triggering PyROOT's finalSetup().
3 The main principle is that appropriate dictionaries first need to be loaded.
4 """
5 from __future__ import absolute_import
6
7 import ROOT
8
9 from .. import log; log = log[__name__]
10 from .module_facade import Facade
11
12 __all__ = []
13
14
15 root_module = ROOT.module._root
16 if hasattr(root_module, 'LookupCppEntity'): # pragma: no cover
17 lookup_func = 'LookupCppEntity'
18 else: # pragma: no cover
19 lookup_func = 'LookupRootEntity'
20
21 # Quick's __name__ needs to be the ROOT module for this to be transparent.
22 # The below is one way of obtaining such a function
23 # First determine the ROOT version without triggering PyROOT's finalSetup()
24 Quick = eval('lambda symbol: module._root.{0}(symbol)'.format(lookup_func),
25 ROOT.__dict__)
26
27 _gSystem = Quick("gSystem")
28 Load = _gSystem.Load
29
30 # It is not vital to list _all_ symbols in here, just enough that a library
31 # will be loaded by the time it is needed.
32 SYMBOLS = dict(
33 Hist='TH1 TGraph TGraphAsymmErrors',
34 Tree='TCut TTree',
35 Gui='TPad TCanvas',
36 Graf='TLegend TLine TEllipse',
37 Physics='TVector2 TVector3 TLorentzVector TRotation TLorentzRotation',
38 Matrix='TMatrixT',
39 RooStats='RooStats RooMsgService',
40 RooFit='RooFit RooWorkspace',
41 )
42
43 # Mapping of symbols to libraries which need to be loaded
44 SYMBOLS_TO_LIB = dict(
45 (sym, lib) for lib, syms in SYMBOLS.items() for sym in syms.split())
46
47 # If you encounter problems with particular symbols, add them to this set.
48 SLOW = set("".split())
49
50
51 @Facade(__name__, expose_internal=False)
52 class QuickROOT(object):
53 def __getattr__(self, symbol):
54 if symbol in SLOW: # pragma: no cover
55 log.warning(
56 "Tried to quickly load {0} which is always slow".format(symbol))
57
58 lib = SYMBOLS_TO_LIB.get(symbol, None)
59 if lib:
60 # Load() doesn't cost anything if the library is already loaded
61 libname = "lib{0}".format(lib)
62 if libname not in _gSystem.GetLibraries():
63 regex = "^duplicate entry .* for level 0; ignored$"
64 with log["/ROOT.TEnvRec.ChangeValue"].ignore(regex):
65 if Load(libname) == 0:
66 log.debug("Loaded {0} (required by {1})".format(
67 libname, symbol))
68 else: # pragma: no cover
69 raise RuntimeError(
70 "Unable to load {0} (required by {1})".format(
71 libname, symbol))
72
73 try:
74 thing = Quick(symbol)
75 except NameError: # pragma: no cover
76 # NameError: global name 'module' is not defined
77 # Python must be exiting...
78 return None
79 if isinstance(thing, root_module.PropertyProxy): # descriptor
80 setattr(self.__class__, symbol, thing)
81 return getattr(self, symbol)
82 # normal member
83 return thing
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rootpy/utils/quickroot.py b/rootpy/utils/quickroot.py
--- a/rootpy/utils/quickroot.py
+++ b/rootpy/utils/quickroot.py
@@ -65,6 +65,12 @@
if Load(libname) == 0:
log.debug("Loaded {0} (required by {1})".format(
libname, symbol))
+ elif lib == 'Gui':
+ # Possibly no X11 forwarding
+ log.debug("Unable to load {0} (required by {1}). "
+ "Putting ROOT in batch mode.".format(
+ libname, symbol))
+ ROOT.gROOT.SetBatch(True)
else: # pragma: no cover
raise RuntimeError(
"Unable to load {0} (required by {1})".format(
| {"golden_diff": "diff --git a/rootpy/utils/quickroot.py b/rootpy/utils/quickroot.py\n--- a/rootpy/utils/quickroot.py\n+++ b/rootpy/utils/quickroot.py\n@@ -65,6 +65,12 @@\n if Load(libname) == 0:\n log.debug(\"Loaded {0} (required by {1})\".format(\n libname, symbol))\n+ elif lib == 'Gui':\n+ # Possibly no X11 forwarding\n+ log.debug(\"Unable to load {0} (required by {1}). \"\n+ \"Putting ROOT in batch mode.\".format(\n+ libname, symbol))\n+ ROOT.gROOT.SetBatch(True)\n else: # pragma: no cover\n raise RuntimeError(\n \"Unable to load {0} (required by {1})\".format(\n", "issue": "Exception on import when not forwarding X11\nDear developers,\r\n\r\nI believe I'm experiencing a bug when trying to use rootpy over SSH. Simply importing\r\n```Python\r\nfrom rootpy.plotting import Hist\r\n```\r\nresults in an exception:\r\n```Python\r\nWARNING:ROOT.TUnixSystem.SetDisplay] DISPLAY not set, setting it to :pts/0:S.8\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/__init__.py\", line 12, in <module>\r\n from .legend import Legend\r\n File \"/gridsoft/ipnls/root/v6.10.04/lib/ROOT.py\", line 318, in _importhook\r\n return _orig_ihook( name, *args, **kwds )\r\n File \"/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/legend.py\", line 8, in <module>\r\n from .box import _Positionable\r\n File \"/gridsoft/ipnls/root/v6.10.04/lib/ROOT.py\", line 318, in _importhook\r\n return _orig_ihook( name, *args, **kwds )\r\n File \"/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/box.py\", line 5, in <module>\r\n from .utils import canvases_with\r\n File \"/gridsoft/ipnls/root/v6.10.04/lib/ROOT.py\", line 318, in _importhook\r\n return _orig_ihook( name, *args, **kwds )\r\n File \"/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/utils.py\", line 7, in <module>\r\n from .canvas import _PadBase\r\n File \"/gridsoft/ipnls/root/v6.10.04/lib/ROOT.py\", line 318, in _importhook\r\n return _orig_ihook( name, *args, **kwds )\r\n File \"/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/plotting/canvas.py\", line 186, in <module>\r\n class Pad(_PadBase, QROOT.TPad):\r\n File \"/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/utils/module_facade.py\", line 84, in __getattr__\r\n result = sup.__getattr__(key)\r\n File \"/home/cms/popov/.local/lib/python3.5/site-packages/rootpy/utils/quickroot.py\", line 71, in __getattr__\r\n libname, symbol))\r\nRuntimeError: Unable to load libGui (required by TPad)\r\n```\r\nThe problem does not occur if I connect with `ssh -Y`, but I would expect rootpy be usable also without GUI.\r\n\r\nI'm using rootpy 1.0.0 installed with pip, Python 3.5.3, ROOT 6.10.04 with Scientific Linux 6.5.\n", "before_files": [{"content": "\"\"\"\nQuickly load ROOT symbols without triggering PyROOT's finalSetup().\nThe main principle is that appropriate dictionaries first need to be loaded.\n\"\"\"\nfrom __future__ import absolute_import\n\nimport ROOT\n\nfrom .. 
import log; log = log[__name__]\nfrom .module_facade import Facade\n\n__all__ = []\n\n\nroot_module = ROOT.module._root\nif hasattr(root_module, 'LookupCppEntity'): # pragma: no cover\n lookup_func = 'LookupCppEntity'\nelse: # pragma: no cover\n lookup_func = 'LookupRootEntity'\n\n# Quick's __name__ needs to be the ROOT module for this to be transparent.\n# The below is one way of obtaining such a function\n# First determine the ROOT version without triggering PyROOT's finalSetup()\nQuick = eval('lambda symbol: module._root.{0}(symbol)'.format(lookup_func),\n ROOT.__dict__)\n\n_gSystem = Quick(\"gSystem\")\nLoad = _gSystem.Load\n\n# It is not vital to list _all_ symbols in here, just enough that a library\n# will be loaded by the time it is needed.\nSYMBOLS = dict(\n Hist='TH1 TGraph TGraphAsymmErrors',\n Tree='TCut TTree',\n Gui='TPad TCanvas',\n Graf='TLegend TLine TEllipse',\n Physics='TVector2 TVector3 TLorentzVector TRotation TLorentzRotation',\n Matrix='TMatrixT',\n RooStats='RooStats RooMsgService',\n RooFit='RooFit RooWorkspace',\n)\n\n# Mapping of symbols to libraries which need to be loaded\nSYMBOLS_TO_LIB = dict(\n (sym, lib) for lib, syms in SYMBOLS.items() for sym in syms.split())\n\n# If you encounter problems with particular symbols, add them to this set.\nSLOW = set(\"\".split())\n\n\n@Facade(__name__, expose_internal=False)\nclass QuickROOT(object):\n def __getattr__(self, symbol):\n if symbol in SLOW: # pragma: no cover\n log.warning(\n \"Tried to quickly load {0} which is always slow\".format(symbol))\n\n lib = SYMBOLS_TO_LIB.get(symbol, None)\n if lib:\n # Load() doesn't cost anything if the library is already loaded\n libname = \"lib{0}\".format(lib)\n if libname not in _gSystem.GetLibraries():\n regex = \"^duplicate entry .* for level 0; ignored$\"\n with log[\"/ROOT.TEnvRec.ChangeValue\"].ignore(regex):\n if Load(libname) == 0:\n log.debug(\"Loaded {0} (required by {1})\".format(\n libname, symbol))\n else: # pragma: no cover\n raise RuntimeError(\n \"Unable to load {0} (required by {1})\".format(\n libname, symbol))\n\n try:\n thing = Quick(symbol)\n except NameError: # pragma: no cover\n # NameError: global name 'module' is not defined\n # Python must be exiting...\n return None\n if isinstance(thing, root_module.PropertyProxy): # descriptor\n setattr(self.__class__, symbol, thing)\n return getattr(self, symbol)\n # normal member\n return thing\n", "path": "rootpy/utils/quickroot.py"}], "after_files": [{"content": "\"\"\"\nQuickly load ROOT symbols without triggering PyROOT's finalSetup().\nThe main principle is that appropriate dictionaries first need to be loaded.\n\"\"\"\nfrom __future__ import absolute_import\n\nimport ROOT\n\nfrom .. 
import log; log = log[__name__]\nfrom .module_facade import Facade\n\n__all__ = []\n\n\nroot_module = ROOT.module._root\nif hasattr(root_module, 'LookupCppEntity'): # pragma: no cover\n lookup_func = 'LookupCppEntity'\nelse: # pragma: no cover\n lookup_func = 'LookupRootEntity'\n\n# Quick's __name__ needs to be the ROOT module for this to be transparent.\n# The below is one way of obtaining such a function\n# First determine the ROOT version without triggering PyROOT's finalSetup()\nQuick = eval('lambda symbol: module._root.{0}(symbol)'.format(lookup_func),\n ROOT.__dict__)\n\n_gSystem = Quick(\"gSystem\")\nLoad = _gSystem.Load\n\n# It is not vital to list _all_ symbols in here, just enough that a library\n# will be loaded by the time it is needed.\nSYMBOLS = dict(\n Hist='TH1 TGraph TGraphAsymmErrors',\n Tree='TCut TTree',\n Gui='TPad TCanvas',\n Graf='TLegend TLine TEllipse',\n Physics='TVector2 TVector3 TLorentzVector TRotation TLorentzRotation',\n Matrix='TMatrixT',\n RooStats='RooStats RooMsgService',\n RooFit='RooFit RooWorkspace',\n)\n\n# Mapping of symbols to libraries which need to be loaded\nSYMBOLS_TO_LIB = dict(\n (sym, lib) for lib, syms in SYMBOLS.items() for sym in syms.split())\n\n# If you encounter problems with particular symbols, add them to this set.\nSLOW = set(\"\".split())\n\n\n@Facade(__name__, expose_internal=False)\nclass QuickROOT(object):\n def __getattr__(self, symbol):\n if symbol in SLOW: # pragma: no cover\n log.warning(\n \"Tried to quickly load {0} which is always slow\".format(symbol))\n\n lib = SYMBOLS_TO_LIB.get(symbol, None)\n if lib:\n # Load() doesn't cost anything if the library is already loaded\n libname = \"lib{0}\".format(lib)\n if libname not in _gSystem.GetLibraries():\n regex = \"^duplicate entry .* for level 0; ignored$\"\n with log[\"/ROOT.TEnvRec.ChangeValue\"].ignore(regex):\n if Load(libname) == 0:\n log.debug(\"Loaded {0} (required by {1})\".format(\n libname, symbol))\n elif lib == 'Gui':\n # Possibly no X11 forwarding\n log.debug(\"Unable to load {0} (required by {1}). \"\n \"Putting ROOT in batch mode.\".format(\n libname, symbol))\n ROOT.gROOT.SetBatch(True)\n else: # pragma: no cover\n raise RuntimeError(\n \"Unable to load {0} (required by {1})\".format(\n libname, symbol))\n\n try:\n thing = Quick(symbol)\n except NameError: # pragma: no cover\n # NameError: global name 'module' is not defined\n # Python must be exiting...\n return None\n if isinstance(thing, root_module.PropertyProxy): # descriptor\n setattr(self.__class__, symbol, thing)\n return getattr(self, symbol)\n # normal member\n return thing\n", "path": "rootpy/utils/quickroot.py"}]} | 1,801 | 179 |
gh_patches_debug_21555 | rasdani/github-patches | git_diff | getpelican__pelican-845 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Conflicts rendering Category pages when the category is not defined in a consistent case
I was testing a jinja macro that dealt with creating links for categories.
I noted that if you define a category in one article as `Category: Something` and in another article as `Category: something`, these are treated as separate categories; however, when the category page is rendered, only the lowercase URL exists, e.g. `category/something.html`. That page will only be associated with the articles whose metadata says `Category: something`, not with those using the uppercase spelling, since there is no `category/Something.html`.
I am not sure if making this case-insensitive would break code. Admittedly, it would be unclear which case to use when printing the category name. In a careful template you would set the case you want with a CSS style attribute anyway, so categories could always be rendered in lower case.
Otherwise, it might just be sufficient to put this into the documentation. I always tend to capitalize my categories, but some people might not notice and wonder why some articles are missing. I have not yet tested this, but I would imagine the same issue exists for tags.
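
As an editorial aside, the fix shown later in this record hinges on Pelican's `slugify` helper collapsing both spellings to the same slug; here is a minimal sketch, assuming `slugify` behaves as in mainline Pelican of this era:

```python
from pelican.utils import slugify

# Both spellings collapse to the same slug, so hashing and comparing
# URLWrapper objects by slug (as the patch does) makes
# Category("Something") and Category("something") compare equal.
assert slugify("Something") == slugify("something") == "something"
```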
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pelican/urlwrappers.py`
Content:
```
1 import os
2 import functools
3 import logging
4
5 import six
6
7 from pelican.utils import (slugify, python_2_unicode_compatible)
8
9 logger = logging.getLogger(__name__)
10
11
12 @python_2_unicode_compatible
13 @functools.total_ordering
14 class URLWrapper(object):
15 def __init__(self, name, settings):
16 self.name = name
17 self.slug = slugify(self.name)
18 self.settings = settings
19
20 def as_dict(self):
21 return self.__dict__
22
23 def __hash__(self):
24 return hash(self.name)
25
26 def _key(self):
27 return self.name
28
29 def _normalize_key(self, key):
30 return six.text_type(key)
31
32 def __eq__(self, other):
33 return self._key() == self._normalize_key(other)
34
35 def __ne__(self, other):
36 return self._key() != self._normalize_key(other)
37
38 def __lt__(self, other):
39 return self._key() < self._normalize_key(other)
40
41 def __str__(self):
42 return self.name
43
44 def __repr__(self):
45 return '<{} {}>'.format(type(self).__name__, str(self))
46
47 def _from_settings(self, key, get_page_name=False):
48 """Returns URL information as defined in settings.
49
50 When get_page_name=True returns URL without anything after {slug} e.g.
51 if in settings: CATEGORY_URL="cat/{slug}.html" this returns
52 "cat/{slug}" Useful for pagination.
53
54 """
55 setting = "%s_%s" % (self.__class__.__name__.upper(), key)
56 value = self.settings[setting]
57 if not isinstance(value, six.string_types):
58 logger.warning('%s is set to %s' % (setting, value))
59 return value
60 else:
61 if get_page_name:
62 return os.path.splitext(value)[0].format(**self.as_dict())
63 else:
64 return value.format(**self.as_dict())
65
66 page_name = property(functools.partial(_from_settings, key='URL',
67 get_page_name=True))
68 url = property(functools.partial(_from_settings, key='URL'))
69 save_as = property(functools.partial(_from_settings, key='SAVE_AS'))
70
71
72 class Category(URLWrapper):
73 pass
74
75
76 class Tag(URLWrapper):
77 def __init__(self, name, *args, **kwargs):
78 super(Tag, self).__init__(name.strip(), *args, **kwargs)
79
80
81 class Author(URLWrapper):
82 pass
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pelican/urlwrappers.py b/pelican/urlwrappers.py
--- a/pelican/urlwrappers.py
+++ b/pelican/urlwrappers.py
@@ -13,21 +13,35 @@
@functools.total_ordering
class URLWrapper(object):
def __init__(self, name, settings):
+ # next 2 lines are redundant with the setter of the name property
+ # but are here for clarity
+ self._name = name
+ self.slug = slugify(name)
self.name = name
- self.slug = slugify(self.name)
self.settings = settings
+ @property
+ def name(self):
+ return self._name
+
+ @name.setter
+ def name(self, name):
+ self._name = name
+ self.slug = slugify(name)
+
def as_dict(self):
- return self.__dict__
+ d = self.__dict__
+ d['name'] = self.name
+ return d
def __hash__(self):
- return hash(self.name)
+ return hash(self.slug)
def _key(self):
- return self.name
+ return self.slug
def _normalize_key(self, key):
- return six.text_type(key)
+ return six.text_type(slugify(key))
def __eq__(self, other):
return self._key() == self._normalize_key(other)
| {"golden_diff": "diff --git a/pelican/urlwrappers.py b/pelican/urlwrappers.py\n--- a/pelican/urlwrappers.py\n+++ b/pelican/urlwrappers.py\n@@ -13,21 +13,35 @@\n @functools.total_ordering\n class URLWrapper(object):\n def __init__(self, name, settings):\n+ # next 2 lines are redundant with the setter of the name property\n+ # but are here for clarity\n+ self._name = name\n+ self.slug = slugify(name)\n self.name = name\n- self.slug = slugify(self.name)\n self.settings = settings\n \n+ @property\n+ def name(self):\n+ return self._name\n+\n+ @name.setter\n+ def name(self, name):\n+ self._name = name\n+ self.slug = slugify(name)\n+\n def as_dict(self):\n- return self.__dict__\n+ d = self.__dict__\n+ d['name'] = self.name\n+ return d\n \n def __hash__(self):\n- return hash(self.name)\n+ return hash(self.slug)\n \n def _key(self):\n- return self.name\n+ return self.slug\n \n def _normalize_key(self, key):\n- return six.text_type(key)\n+ return six.text_type(slugify(key))\n \n def __eq__(self, other):\n return self._key() == self._normalize_key(other)\n", "issue": "Conflicts rendering Category pages when category is not defined in consistent case\nI was testing a jinja macro that dealt with creating links for categories.\n\nI noted that if you define a category in one article as `Category: Something` and in another article as `Category: something` that these are treated as separate categories, however, when your category page is rendered, there is only the lowecase url, e.g. `category/something.html`. This will only associate with the articles with meta data defined as `Category: something` and not anywhere where it is defined with uppercase since there is no `category/Something.html`.\n\nI am not sure if making this case insensitive would break code. Certainly, it would be unclear when printing the category name which case to use. From an intelligent template process, you would set you case using CSS style attribute to be sure it was the way you want, and it could always render categories in lower case.\n\nOtherwise, it might just be sufficient to put this into the documentation. I always tend to capitalize by categories, but some people might not notice and wonder why some articles are missing. 
I have not yet tested this, but I would imagine the same issue exists for tags.\n\n", "before_files": [{"content": "import os\nimport functools\nimport logging\n\nimport six\n\nfrom pelican.utils import (slugify, python_2_unicode_compatible)\n\nlogger = logging.getLogger(__name__)\n\n\n@python_2_unicode_compatible\[email protected]_ordering\nclass URLWrapper(object):\n def __init__(self, name, settings):\n self.name = name\n self.slug = slugify(self.name)\n self.settings = settings\n\n def as_dict(self):\n return self.__dict__\n\n def __hash__(self):\n return hash(self.name)\n\n def _key(self):\n return self.name\n\n def _normalize_key(self, key):\n return six.text_type(key)\n\n def __eq__(self, other):\n return self._key() == self._normalize_key(other)\n\n def __ne__(self, other):\n return self._key() != self._normalize_key(other)\n\n def __lt__(self, other):\n return self._key() < self._normalize_key(other)\n\n def __str__(self):\n return self.name\n\n def __repr__(self):\n return '<{} {}>'.format(type(self).__name__, str(self))\n\n def _from_settings(self, key, get_page_name=False):\n \"\"\"Returns URL information as defined in settings.\n\n When get_page_name=True returns URL without anything after {slug} e.g.\n if in settings: CATEGORY_URL=\"cat/{slug}.html\" this returns\n \"cat/{slug}\" Useful for pagination.\n\n \"\"\"\n setting = \"%s_%s\" % (self.__class__.__name__.upper(), key)\n value = self.settings[setting]\n if not isinstance(value, six.string_types):\n logger.warning('%s is set to %s' % (setting, value))\n return value\n else:\n if get_page_name:\n return os.path.splitext(value)[0].format(**self.as_dict())\n else:\n return value.format(**self.as_dict())\n\n page_name = property(functools.partial(_from_settings, key='URL',\n get_page_name=True))\n url = property(functools.partial(_from_settings, key='URL'))\n save_as = property(functools.partial(_from_settings, key='SAVE_AS'))\n\n\nclass Category(URLWrapper):\n pass\n\n\nclass Tag(URLWrapper):\n def __init__(self, name, *args, **kwargs):\n super(Tag, self).__init__(name.strip(), *args, **kwargs)\n\n\nclass Author(URLWrapper):\n pass\n", "path": "pelican/urlwrappers.py"}], "after_files": [{"content": "import os\nimport functools\nimport logging\n\nimport six\n\nfrom pelican.utils import (slugify, python_2_unicode_compatible)\n\nlogger = logging.getLogger(__name__)\n\n\n@python_2_unicode_compatible\[email protected]_ordering\nclass URLWrapper(object):\n def __init__(self, name, settings):\n # next 2 lines are redundant with the setter of the name property\n # but are here for clarity\n self._name = name\n self.slug = slugify(name)\n self.name = name\n self.settings = settings\n\n @property\n def name(self):\n return self._name\n\n @name.setter\n def name(self, name):\n self._name = name\n self.slug = slugify(name)\n\n def as_dict(self):\n d = self.__dict__\n d['name'] = self.name\n return d\n\n def __hash__(self):\n return hash(self.slug)\n\n def _key(self):\n return self.slug\n\n def _normalize_key(self, key):\n return six.text_type(slugify(key))\n\n def __eq__(self, other):\n return self._key() == self._normalize_key(other)\n\n def __ne__(self, other):\n return self._key() != self._normalize_key(other)\n\n def __lt__(self, other):\n return self._key() < self._normalize_key(other)\n\n def __str__(self):\n return self.name\n\n def __repr__(self):\n return '<{} {}>'.format(type(self).__name__, str(self))\n\n def _from_settings(self, key, get_page_name=False):\n \"\"\"Returns URL information as defined in settings.\n\n When 
get_page_name=True returns URL without anything after {slug} e.g.\n if in settings: CATEGORY_URL=\"cat/{slug}.html\" this returns\n \"cat/{slug}\" Useful for pagination.\n\n \"\"\"\n setting = \"%s_%s\" % (self.__class__.__name__.upper(), key)\n value = self.settings[setting]\n if not isinstance(value, six.string_types):\n logger.warning('%s is set to %s' % (setting, value))\n return value\n else:\n if get_page_name:\n return os.path.splitext(value)[0].format(**self.as_dict())\n else:\n return value.format(**self.as_dict())\n\n page_name = property(functools.partial(_from_settings, key='URL',\n get_page_name=True))\n url = property(functools.partial(_from_settings, key='URL'))\n save_as = property(functools.partial(_from_settings, key='SAVE_AS'))\n\n\nclass Category(URLWrapper):\n pass\n\n\nclass Tag(URLWrapper):\n def __init__(self, name, *args, **kwargs):\n super(Tag, self).__init__(name.strip(), *args, **kwargs)\n\n\nclass Author(URLWrapper):\n pass\n", "path": "pelican/urlwrappers.py"}]} | 1,201 | 320 |
gh_patches_debug_38335 | rasdani/github-patches | git_diff | ethereum__consensus-specs-863 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Rename `Transactions` back to `Operations`
A few of us implementers have been talking about the naming of `Transactions` and believe it is best renamed back to `Operations` to reduce confusion and avoid mistaking `Transactions` for transactions in the classical sense. The only thing that should be known as a `Transaction` is a `Transfer`.
If not, it would be great to know what the reason behind the rename was.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utils/phase0/state_transition.py`
Content:
```
1 from . import spec
2
3
4 from typing import ( # noqa: F401
5 Any,
6 Callable,
7 List,
8 NewType,
9 Tuple,
10 )
11
12 from .spec import (
13 BeaconState,
14 BeaconBlock,
15 )
16
17
18 def expected_deposit_count(state: BeaconState) -> int:
19 return min(
20 spec.MAX_DEPOSITS,
21 state.latest_eth1_data.deposit_count - state.deposit_index
22 )
23
24
25 def process_transaction_type(state: BeaconState,
26 transactions: List[Any],
27 max_transactions: int,
28 tx_fn: Callable[[BeaconState, Any], None]) -> None:
29 assert len(transactions) <= max_transactions
30 for transaction in transactions:
31 tx_fn(state, transaction)
32
33
34 def process_transactions(state: BeaconState, block: BeaconBlock) -> None:
35 process_transaction_type(
36 state,
37 block.body.proposer_slashings,
38 spec.MAX_PROPOSER_SLASHINGS,
39 spec.process_proposer_slashing,
40 )
41
42 process_transaction_type(
43 state,
44 block.body.attester_slashings,
45 spec.MAX_ATTESTER_SLASHINGS,
46 spec.process_attester_slashing,
47 )
48
49 process_transaction_type(
50 state,
51 block.body.attestations,
52 spec.MAX_ATTESTATIONS,
53 spec.process_attestation,
54 )
55
56 assert len(block.body.deposits) == expected_deposit_count(state)
57 process_transaction_type(
58 state,
59 block.body.deposits,
60 spec.MAX_DEPOSITS,
61 spec.process_deposit,
62 )
63
64 process_transaction_type(
65 state,
66 block.body.voluntary_exits,
67 spec.MAX_VOLUNTARY_EXITS,
68 spec.process_voluntary_exit,
69 )
70
71 assert len(block.body.transfers) == len(set(block.body.transfers))
72 process_transaction_type(
73 state,
74 block.body.transfers,
75 spec.MAX_TRANSFERS,
76 spec.process_transfer,
77 )
78
79
80 def process_block(state: BeaconState,
81 block: BeaconBlock,
82 verify_state_root: bool=False) -> None:
83 spec.process_block_header(state, block)
84 spec.process_randao(state, block)
85 spec.process_eth1_data(state, block)
86
87 process_transactions(state, block)
88 if verify_state_root:
89 spec.verify_block_state_root(state, block)
90
91
92 def process_epoch_transition(state: BeaconState) -> None:
93 spec.update_justification_and_finalization(state)
94 spec.process_crosslinks(state)
95 spec.maybe_reset_eth1_period(state)
96 spec.apply_rewards(state)
97 spec.process_ejections(state)
98 spec.update_registry(state)
99 spec.process_slashings(state)
100 spec.process_exit_queue(state)
101 spec.finish_epoch_update(state)
102
103
104 def state_transition(state: BeaconState,
105 block: BeaconBlock,
106 verify_state_root: bool=False) -> BeaconState:
107 while state.slot < block.slot:
108 spec.cache_state(state)
109 if (state.slot + 1) % spec.SLOTS_PER_EPOCH == 0:
110 process_epoch_transition(state)
111 spec.advance_slot(state)
112 if block.slot == state.slot:
113 process_block(state, block, verify_state_root)
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/utils/phase0/state_transition.py b/utils/phase0/state_transition.py
--- a/utils/phase0/state_transition.py
+++ b/utils/phase0/state_transition.py
@@ -22,31 +22,31 @@
)
-def process_transaction_type(state: BeaconState,
- transactions: List[Any],
- max_transactions: int,
- tx_fn: Callable[[BeaconState, Any], None]) -> None:
- assert len(transactions) <= max_transactions
- for transaction in transactions:
- tx_fn(state, transaction)
+def process_operation_type(state: BeaconState,
+ operations: List[Any],
+ max_operations: int,
+ tx_fn: Callable[[BeaconState, Any], None]) -> None:
+ assert len(operations) <= max_operations
+ for operation in operations:
+ tx_fn(state, operation)
-def process_transactions(state: BeaconState, block: BeaconBlock) -> None:
- process_transaction_type(
+def process_operations(state: BeaconState, block: BeaconBlock) -> None:
+ process_operation_type(
state,
block.body.proposer_slashings,
spec.MAX_PROPOSER_SLASHINGS,
spec.process_proposer_slashing,
)
- process_transaction_type(
+ process_operation_type(
state,
block.body.attester_slashings,
spec.MAX_ATTESTER_SLASHINGS,
spec.process_attester_slashing,
)
- process_transaction_type(
+ process_operation_type(
state,
block.body.attestations,
spec.MAX_ATTESTATIONS,
@@ -54,14 +54,14 @@
)
assert len(block.body.deposits) == expected_deposit_count(state)
- process_transaction_type(
+ process_operation_type(
state,
block.body.deposits,
spec.MAX_DEPOSITS,
spec.process_deposit,
)
- process_transaction_type(
+ process_operation_type(
state,
block.body.voluntary_exits,
spec.MAX_VOLUNTARY_EXITS,
@@ -69,7 +69,7 @@
)
assert len(block.body.transfers) == len(set(block.body.transfers))
- process_transaction_type(
+ process_operation_type(
state,
block.body.transfers,
spec.MAX_TRANSFERS,
@@ -84,7 +84,7 @@
spec.process_randao(state, block)
spec.process_eth1_data(state, block)
- process_transactions(state, block)
+ process_operations(state, block)
if verify_state_root:
spec.verify_block_state_root(state, block)
| {"golden_diff": "diff --git a/utils/phase0/state_transition.py b/utils/phase0/state_transition.py\n--- a/utils/phase0/state_transition.py\n+++ b/utils/phase0/state_transition.py\n@@ -22,31 +22,31 @@\n )\n \n \n-def process_transaction_type(state: BeaconState,\n- transactions: List[Any],\n- max_transactions: int,\n- tx_fn: Callable[[BeaconState, Any], None]) -> None:\n- assert len(transactions) <= max_transactions\n- for transaction in transactions:\n- tx_fn(state, transaction)\n+def process_operation_type(state: BeaconState,\n+ operations: List[Any],\n+ max_operations: int,\n+ tx_fn: Callable[[BeaconState, Any], None]) -> None:\n+ assert len(operations) <= max_operations\n+ for operation in operations:\n+ tx_fn(state, operation)\n \n \n-def process_transactions(state: BeaconState, block: BeaconBlock) -> None:\n- process_transaction_type(\n+def process_operations(state: BeaconState, block: BeaconBlock) -> None:\n+ process_operation_type(\n state,\n block.body.proposer_slashings,\n spec.MAX_PROPOSER_SLASHINGS,\n spec.process_proposer_slashing,\n )\n \n- process_transaction_type(\n+ process_operation_type(\n state,\n block.body.attester_slashings,\n spec.MAX_ATTESTER_SLASHINGS,\n spec.process_attester_slashing,\n )\n \n- process_transaction_type(\n+ process_operation_type(\n state,\n block.body.attestations,\n spec.MAX_ATTESTATIONS,\n@@ -54,14 +54,14 @@\n )\n \n assert len(block.body.deposits) == expected_deposit_count(state)\n- process_transaction_type(\n+ process_operation_type(\n state,\n block.body.deposits,\n spec.MAX_DEPOSITS,\n spec.process_deposit,\n )\n \n- process_transaction_type(\n+ process_operation_type(\n state,\n block.body.voluntary_exits,\n spec.MAX_VOLUNTARY_EXITS,\n@@ -69,7 +69,7 @@\n )\n \n assert len(block.body.transfers) == len(set(block.body.transfers))\n- process_transaction_type(\n+ process_operation_type(\n state,\n block.body.transfers,\n spec.MAX_TRANSFERS,\n@@ -84,7 +84,7 @@\n spec.process_randao(state, block)\n spec.process_eth1_data(state, block)\n \n- process_transactions(state, block)\n+ process_operations(state, block)\n if verify_state_root:\n spec.verify_block_state_root(state, block)\n", "issue": "Rename `Transactions` back to `Operations`\nA few of us implementers have been talking about the naming of `Transactions` and believe it is best renamed back to `Operations` to lower confusion and potentially mistaking `Transactions` with transactions in the classical sense. The only thing that should be known as a `Transaction` is a `Transfer`.\r\n\r\nIf not, it would be great to know what the reason behind the rename was.\r\n\n", "before_files": [{"content": "from . 
import spec\n\n\nfrom typing import ( # noqa: F401\n Any,\n Callable,\n List,\n NewType,\n Tuple,\n)\n\nfrom .spec import (\n BeaconState,\n BeaconBlock,\n)\n\n\ndef expected_deposit_count(state: BeaconState) -> int:\n return min(\n spec.MAX_DEPOSITS,\n state.latest_eth1_data.deposit_count - state.deposit_index\n )\n\n\ndef process_transaction_type(state: BeaconState,\n transactions: List[Any],\n max_transactions: int,\n tx_fn: Callable[[BeaconState, Any], None]) -> None:\n assert len(transactions) <= max_transactions\n for transaction in transactions:\n tx_fn(state, transaction)\n\n\ndef process_transactions(state: BeaconState, block: BeaconBlock) -> None:\n process_transaction_type(\n state,\n block.body.proposer_slashings,\n spec.MAX_PROPOSER_SLASHINGS,\n spec.process_proposer_slashing,\n )\n\n process_transaction_type(\n state,\n block.body.attester_slashings,\n spec.MAX_ATTESTER_SLASHINGS,\n spec.process_attester_slashing,\n )\n\n process_transaction_type(\n state,\n block.body.attestations,\n spec.MAX_ATTESTATIONS,\n spec.process_attestation,\n )\n\n assert len(block.body.deposits) == expected_deposit_count(state)\n process_transaction_type(\n state,\n block.body.deposits,\n spec.MAX_DEPOSITS,\n spec.process_deposit,\n )\n\n process_transaction_type(\n state,\n block.body.voluntary_exits,\n spec.MAX_VOLUNTARY_EXITS,\n spec.process_voluntary_exit,\n )\n\n assert len(block.body.transfers) == len(set(block.body.transfers))\n process_transaction_type(\n state,\n block.body.transfers,\n spec.MAX_TRANSFERS,\n spec.process_transfer,\n )\n\n\ndef process_block(state: BeaconState,\n block: BeaconBlock,\n verify_state_root: bool=False) -> None:\n spec.process_block_header(state, block)\n spec.process_randao(state, block)\n spec.process_eth1_data(state, block)\n\n process_transactions(state, block)\n if verify_state_root:\n spec.verify_block_state_root(state, block)\n\n\ndef process_epoch_transition(state: BeaconState) -> None:\n spec.update_justification_and_finalization(state)\n spec.process_crosslinks(state)\n spec.maybe_reset_eth1_period(state)\n spec.apply_rewards(state)\n spec.process_ejections(state)\n spec.update_registry(state)\n spec.process_slashings(state)\n spec.process_exit_queue(state)\n spec.finish_epoch_update(state)\n\n\ndef state_transition(state: BeaconState,\n block: BeaconBlock,\n verify_state_root: bool=False) -> BeaconState:\n while state.slot < block.slot:\n spec.cache_state(state)\n if (state.slot + 1) % spec.SLOTS_PER_EPOCH == 0:\n process_epoch_transition(state)\n spec.advance_slot(state)\n if block.slot == state.slot:\n process_block(state, block, verify_state_root)\n", "path": "utils/phase0/state_transition.py"}], "after_files": [{"content": "from . 
import spec\n\n\nfrom typing import ( # noqa: F401\n Any,\n Callable,\n List,\n NewType,\n Tuple,\n)\n\nfrom .spec import (\n BeaconState,\n BeaconBlock,\n)\n\n\ndef expected_deposit_count(state: BeaconState) -> int:\n return min(\n spec.MAX_DEPOSITS,\n state.latest_eth1_data.deposit_count - state.deposit_index\n )\n\n\ndef process_operation_type(state: BeaconState,\n operations: List[Any],\n max_operations: int,\n tx_fn: Callable[[BeaconState, Any], None]) -> None:\n assert len(operations) <= max_operations\n for operation in operations:\n tx_fn(state, operation)\n\n\ndef process_operations(state: BeaconState, block: BeaconBlock) -> None:\n process_operation_type(\n state,\n block.body.proposer_slashings,\n spec.MAX_PROPOSER_SLASHINGS,\n spec.process_proposer_slashing,\n )\n\n process_operation_type(\n state,\n block.body.attester_slashings,\n spec.MAX_ATTESTER_SLASHINGS,\n spec.process_attester_slashing,\n )\n\n process_operation_type(\n state,\n block.body.attestations,\n spec.MAX_ATTESTATIONS,\n spec.process_attestation,\n )\n\n assert len(block.body.deposits) == expected_deposit_count(state)\n process_operation_type(\n state,\n block.body.deposits,\n spec.MAX_DEPOSITS,\n spec.process_deposit,\n )\n\n process_operation_type(\n state,\n block.body.voluntary_exits,\n spec.MAX_VOLUNTARY_EXITS,\n spec.process_voluntary_exit,\n )\n\n assert len(block.body.transfers) == len(set(block.body.transfers))\n process_operation_type(\n state,\n block.body.transfers,\n spec.MAX_TRANSFERS,\n spec.process_transfer,\n )\n\n\ndef process_block(state: BeaconState,\n block: BeaconBlock,\n verify_state_root: bool=False) -> None:\n spec.process_block_header(state, block)\n spec.process_randao(state, block)\n spec.process_eth1_data(state, block)\n\n process_operations(state, block)\n if verify_state_root:\n spec.verify_block_state_root(state, block)\n\n\ndef process_epoch_transition(state: BeaconState) -> None:\n spec.update_justification_and_finalization(state)\n spec.process_crosslinks(state)\n spec.maybe_reset_eth1_period(state)\n spec.apply_rewards(state)\n spec.process_ejections(state)\n spec.update_registry(state)\n spec.process_slashings(state)\n spec.process_exit_queue(state)\n spec.finish_epoch_update(state)\n\n\ndef state_transition(state: BeaconState,\n block: BeaconBlock,\n verify_state_root: bool=False) -> BeaconState:\n while state.slot < block.slot:\n spec.cache_state(state)\n if (state.slot + 1) % spec.SLOTS_PER_EPOCH == 0:\n process_epoch_transition(state)\n spec.advance_slot(state)\n if block.slot == state.slot:\n process_block(state, block, verify_state_root)\n", "path": "utils/phase0/state_transition.py"}]} | 1,246 | 569 |
gh_patches_debug_30729 | rasdani/github-patches | git_diff | wearepal__EthicML-337 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SVM Kernel name
Clearly [this](https://github.com/predictive-analytics-lab/EthicML/blob/f7fcf435b5807ef9931f3ff3b259fc7cc4b38da8/ethicml/algorithms/inprocess/svm.py#L20) is not right
--- END ISSUE ---
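For illustration, the f-string at the linked line is missing braces around `kernel`, so it always renders the literal word `(kernel)`; a standalone sketch of the bug and its fix (not EthicML code):

```python
kernel = "rbf"

buggy = f" (kernel)"    # no braces: "kernel" is literal text -> " (kernel)"
fixed = f" ({kernel})"  # braces interpolate the variable   -> " (rbf)"

print("SVM" + buggy)    # SVM (kernel)
print("SVM" + fixed)    # SVM (rbf)
```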
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ethicml/algorithms/inprocess/svm.py`
Content:
```
1 """Wrapper for SKLearn implementation of SVM."""
2 from typing import Optional, Union
3
4 import pandas as pd
5 from sklearn.svm import SVC, LinearSVC
6
7 from ethicml.common import implements
8 from ethicml.utility import DataTuple, Prediction, TestTuple
9
10 from .in_algorithm import InAlgorithm
11
12 __all__ = ["SVM"]
13
14
15 class SVM(InAlgorithm):
16 """Support Vector Machine."""
17
18 def __init__(self, C: Optional[float] = None, kernel: Optional[str] = None):
19 """Init SVM."""
20 kernel_name = f" (kernel)" if kernel is not None else ""
21 super().__init__(name="SVM" + kernel_name, is_fairness_algo=False)
22 self.C = SVC().C if C is None else C
23 self.kernel = SVC().kernel if kernel is None else kernel
24
25 @implements(InAlgorithm)
26 def run(self, train: DataTuple, test: Union[DataTuple, TestTuple]) -> Prediction:
27 clf = select_svm(self.C, self.kernel)
28 clf.fit(train.x, train.y.to_numpy().ravel())
29 return Prediction(hard=pd.Series(clf.predict(test.x)))
30
31
32 def select_svm(C: float, kernel: str) -> SVC:
33 """Select the appropriate SVM model for the given parameters."""
34 if kernel == "linear":
35 return LinearSVC(C=C, dual=False, tol=1e-12, random_state=888)
36 return SVC(C=C, kernel=kernel, gamma="auto", random_state=888)
37
```
Path: `ethicml/algorithms/inprocess/logistic_regression.py`
Content:
```
1 """Wrapper around Sci-Kit Learn Logistic Regression."""
2 from typing import Optional
3
4 import pandas as pd
5 from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
6 from sklearn.model_selection import KFold
7
8 from ethicml.common import implements
9 from ethicml.utility import DataTuple, Prediction, SoftPrediction, TestTuple
10
11 from .in_algorithm import InAlgorithm
12
13 __all__ = ["LR", "LRCV", "LRProb"]
14
15
16 class LR(InAlgorithm):
17 """Logistic regression with hard predictions."""
18
19 def __init__(self, C: Optional[float] = None):
20 """Init LR."""
21 self.C = LogisticRegression().C if C is None else C
22 super().__init__(name=f"Logistic Regression, C={self.C}", is_fairness_algo=False)
23
24 @implements(InAlgorithm)
25 def run(self, train: DataTuple, test: TestTuple) -> Prediction:
26 clf = LogisticRegression(solver="liblinear", random_state=888, C=self.C, multi_class="auto")
27 clf.fit(train.x, train.y.to_numpy().ravel())
28 return Prediction(hard=pd.Series(clf.predict(test.x)))
29
30
31 class LRProb(InAlgorithm):
32 """Logistic regression with soft output."""
33
34 def __init__(self, C: Optional[int] = None):
35 """Init LRProb."""
36 self.C = LogisticRegression().C if C is None else C
37 super().__init__(name=f"Logistic Regression Prob, C={self.C}", is_fairness_algo=False)
38
39 @implements(InAlgorithm)
40 def run(self, train: DataTuple, test: TestTuple) -> SoftPrediction:
41 clf = LogisticRegression(solver="liblinear", random_state=888, C=self.C, multi_class="auto")
42 clf.fit(train.x, train.y.to_numpy().ravel())
43 return SoftPrediction(soft=pd.Series(clf.predict_proba(test.x)[:, 1]))
44
45
46 class LRCV(InAlgorithm):
47 """Kind of a cheap hack for now, but gives a proper cross-valudeted LR."""
48
49 def __init__(self) -> None:
50 """Init LRCV."""
51 super().__init__(name="LRCV", is_fairness_algo=False)
52
53 @implements(InAlgorithm)
54 def run(self, train: DataTuple, test: TestTuple) -> Prediction:
55 folder = KFold(n_splits=3, shuffle=False)
56 clf = LogisticRegressionCV(
57 cv=folder, n_jobs=-1, random_state=888, solver="liblinear", multi_class="auto"
58 )
59 clf.fit(train.x, train.y.to_numpy().ravel())
60 return Prediction(hard=pd.Series(clf.predict(test.x)), info=dict(C=clf.C_[0]))
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ethicml/algorithms/inprocess/logistic_regression.py b/ethicml/algorithms/inprocess/logistic_regression.py
--- a/ethicml/algorithms/inprocess/logistic_regression.py
+++ b/ethicml/algorithms/inprocess/logistic_regression.py
@@ -19,7 +19,7 @@
def __init__(self, C: Optional[float] = None):
"""Init LR."""
self.C = LogisticRegression().C if C is None else C
- super().__init__(name=f"Logistic Regression, C={self.C}", is_fairness_algo=False)
+ super().__init__(name=f"Logistic Regression (C={self.C})", is_fairness_algo=False)
@implements(InAlgorithm)
def run(self, train: DataTuple, test: TestTuple) -> Prediction:
@@ -34,7 +34,7 @@
def __init__(self, C: Optional[int] = None):
"""Init LRProb."""
self.C = LogisticRegression().C if C is None else C
- super().__init__(name=f"Logistic Regression Prob, C={self.C}", is_fairness_algo=False)
+ super().__init__(name=f"Logistic Regression Prob (C={self.C})", is_fairness_algo=False)
@implements(InAlgorithm)
def run(self, train: DataTuple, test: TestTuple) -> SoftPrediction:
diff --git a/ethicml/algorithms/inprocess/svm.py b/ethicml/algorithms/inprocess/svm.py
--- a/ethicml/algorithms/inprocess/svm.py
+++ b/ethicml/algorithms/inprocess/svm.py
@@ -17,7 +17,7 @@
def __init__(self, C: Optional[float] = None, kernel: Optional[str] = None):
"""Init SVM."""
- kernel_name = f" (kernel)" if kernel is not None else ""
+ kernel_name = f" ({kernel})" if kernel is not None else ""
super().__init__(name="SVM" + kernel_name, is_fairness_algo=False)
self.C = SVC().C if C is None else C
self.kernel = SVC().kernel if kernel is None else kernel
| {"golden_diff": "diff --git a/ethicml/algorithms/inprocess/logistic_regression.py b/ethicml/algorithms/inprocess/logistic_regression.py\n--- a/ethicml/algorithms/inprocess/logistic_regression.py\n+++ b/ethicml/algorithms/inprocess/logistic_regression.py\n@@ -19,7 +19,7 @@\n def __init__(self, C: Optional[float] = None):\n \"\"\"Init LR.\"\"\"\n self.C = LogisticRegression().C if C is None else C\n- super().__init__(name=f\"Logistic Regression, C={self.C}\", is_fairness_algo=False)\n+ super().__init__(name=f\"Logistic Regression (C={self.C})\", is_fairness_algo=False)\n \n @implements(InAlgorithm)\n def run(self, train: DataTuple, test: TestTuple) -> Prediction:\n@@ -34,7 +34,7 @@\n def __init__(self, C: Optional[int] = None):\n \"\"\"Init LRProb.\"\"\"\n self.C = LogisticRegression().C if C is None else C\n- super().__init__(name=f\"Logistic Regression Prob, C={self.C}\", is_fairness_algo=False)\n+ super().__init__(name=f\"Logistic Regression Prob (C={self.C})\", is_fairness_algo=False)\n \n @implements(InAlgorithm)\n def run(self, train: DataTuple, test: TestTuple) -> SoftPrediction:\ndiff --git a/ethicml/algorithms/inprocess/svm.py b/ethicml/algorithms/inprocess/svm.py\n--- a/ethicml/algorithms/inprocess/svm.py\n+++ b/ethicml/algorithms/inprocess/svm.py\n@@ -17,7 +17,7 @@\n \n def __init__(self, C: Optional[float] = None, kernel: Optional[str] = None):\n \"\"\"Init SVM.\"\"\"\n- kernel_name = f\" (kernel)\" if kernel is not None else \"\"\n+ kernel_name = f\" ({kernel})\" if kernel is not None else \"\"\n super().__init__(name=\"SVM\" + kernel_name, is_fairness_algo=False)\n self.C = SVC().C if C is None else C\n self.kernel = SVC().kernel if kernel is None else kernel\n", "issue": "SVM Kernel name\nClearly [this](https://github.com/predictive-analytics-lab/EthicML/blob/f7fcf435b5807ef9931f3ff3b259fc7cc4b38da8/ethicml/algorithms/inprocess/svm.py#L20) is not right \n", "before_files": [{"content": "\"\"\"Wrapper for SKLearn implementation of SVM.\"\"\"\nfrom typing import Optional, Union\n\nimport pandas as pd\nfrom sklearn.svm import SVC, LinearSVC\n\nfrom ethicml.common import implements\nfrom ethicml.utility import DataTuple, Prediction, TestTuple\n\nfrom .in_algorithm import InAlgorithm\n\n__all__ = [\"SVM\"]\n\n\nclass SVM(InAlgorithm):\n \"\"\"Support Vector Machine.\"\"\"\n\n def __init__(self, C: Optional[float] = None, kernel: Optional[str] = None):\n \"\"\"Init SVM.\"\"\"\n kernel_name = f\" (kernel)\" if kernel is not None else \"\"\n super().__init__(name=\"SVM\" + kernel_name, is_fairness_algo=False)\n self.C = SVC().C if C is None else C\n self.kernel = SVC().kernel if kernel is None else kernel\n\n @implements(InAlgorithm)\n def run(self, train: DataTuple, test: Union[DataTuple, TestTuple]) -> Prediction:\n clf = select_svm(self.C, self.kernel)\n clf.fit(train.x, train.y.to_numpy().ravel())\n return Prediction(hard=pd.Series(clf.predict(test.x)))\n\n\ndef select_svm(C: float, kernel: str) -> SVC:\n \"\"\"Select the appropriate SVM model for the given parameters.\"\"\"\n if kernel == \"linear\":\n return LinearSVC(C=C, dual=False, tol=1e-12, random_state=888)\n return SVC(C=C, kernel=kernel, gamma=\"auto\", random_state=888)\n", "path": "ethicml/algorithms/inprocess/svm.py"}, {"content": "\"\"\"Wrapper around Sci-Kit Learn Logistic Regression.\"\"\"\nfrom typing import Optional\n\nimport pandas as pd\nfrom sklearn.linear_model import LogisticRegression, LogisticRegressionCV\nfrom sklearn.model_selection import KFold\n\nfrom ethicml.common import implements\nfrom 
ethicml.utility import DataTuple, Prediction, SoftPrediction, TestTuple\n\nfrom .in_algorithm import InAlgorithm\n\n__all__ = [\"LR\", \"LRCV\", \"LRProb\"]\n\n\nclass LR(InAlgorithm):\n \"\"\"Logistic regression with hard predictions.\"\"\"\n\n def __init__(self, C: Optional[float] = None):\n \"\"\"Init LR.\"\"\"\n self.C = LogisticRegression().C if C is None else C\n super().__init__(name=f\"Logistic Regression, C={self.C}\", is_fairness_algo=False)\n\n @implements(InAlgorithm)\n def run(self, train: DataTuple, test: TestTuple) -> Prediction:\n clf = LogisticRegression(solver=\"liblinear\", random_state=888, C=self.C, multi_class=\"auto\")\n clf.fit(train.x, train.y.to_numpy().ravel())\n return Prediction(hard=pd.Series(clf.predict(test.x)))\n\n\nclass LRProb(InAlgorithm):\n \"\"\"Logistic regression with soft output.\"\"\"\n\n def __init__(self, C: Optional[int] = None):\n \"\"\"Init LRProb.\"\"\"\n self.C = LogisticRegression().C if C is None else C\n super().__init__(name=f\"Logistic Regression Prob, C={self.C}\", is_fairness_algo=False)\n\n @implements(InAlgorithm)\n def run(self, train: DataTuple, test: TestTuple) -> SoftPrediction:\n clf = LogisticRegression(solver=\"liblinear\", random_state=888, C=self.C, multi_class=\"auto\")\n clf.fit(train.x, train.y.to_numpy().ravel())\n return SoftPrediction(soft=pd.Series(clf.predict_proba(test.x)[:, 1]))\n\n\nclass LRCV(InAlgorithm):\n \"\"\"Kind of a cheap hack for now, but gives a proper cross-valudeted LR.\"\"\"\n\n def __init__(self) -> None:\n \"\"\"Init LRCV.\"\"\"\n super().__init__(name=\"LRCV\", is_fairness_algo=False)\n\n @implements(InAlgorithm)\n def run(self, train: DataTuple, test: TestTuple) -> Prediction:\n folder = KFold(n_splits=3, shuffle=False)\n clf = LogisticRegressionCV(\n cv=folder, n_jobs=-1, random_state=888, solver=\"liblinear\", multi_class=\"auto\"\n )\n clf.fit(train.x, train.y.to_numpy().ravel())\n return Prediction(hard=pd.Series(clf.predict(test.x)), info=dict(C=clf.C_[0]))\n", "path": "ethicml/algorithms/inprocess/logistic_regression.py"}], "after_files": [{"content": "\"\"\"Wrapper for SKLearn implementation of SVM.\"\"\"\nfrom typing import Optional, Union\n\nimport pandas as pd\nfrom sklearn.svm import SVC, LinearSVC\n\nfrom ethicml.common import implements\nfrom ethicml.utility import DataTuple, Prediction, TestTuple\n\nfrom .in_algorithm import InAlgorithm\n\n__all__ = [\"SVM\"]\n\n\nclass SVM(InAlgorithm):\n \"\"\"Support Vector Machine.\"\"\"\n\n def __init__(self, C: Optional[float] = None, kernel: Optional[str] = None):\n \"\"\"Init SVM.\"\"\"\n kernel_name = f\" ({kernel})\" if kernel is not None else \"\"\n super().__init__(name=\"SVM\" + kernel_name, is_fairness_algo=False)\n self.C = SVC().C if C is None else C\n self.kernel = SVC().kernel if kernel is None else kernel\n\n @implements(InAlgorithm)\n def run(self, train: DataTuple, test: Union[DataTuple, TestTuple]) -> Prediction:\n clf = select_svm(self.C, self.kernel)\n clf.fit(train.x, train.y.to_numpy().ravel())\n return Prediction(hard=pd.Series(clf.predict(test.x)))\n\n\ndef select_svm(C: float, kernel: str) -> SVC:\n \"\"\"Select the appropriate SVM model for the given parameters.\"\"\"\n if kernel == \"linear\":\n return LinearSVC(C=C, dual=False, tol=1e-12, random_state=888)\n return SVC(C=C, kernel=kernel, gamma=\"auto\", random_state=888)\n", "path": "ethicml/algorithms/inprocess/svm.py"}, {"content": "\"\"\"Wrapper around Sci-Kit Learn Logistic Regression.\"\"\"\nfrom typing import Optional\n\nimport pandas as pd\nfrom 
sklearn.linear_model import LogisticRegression, LogisticRegressionCV\nfrom sklearn.model_selection import KFold\n\nfrom ethicml.common import implements\nfrom ethicml.utility import DataTuple, Prediction, SoftPrediction, TestTuple\n\nfrom .in_algorithm import InAlgorithm\n\n__all__ = [\"LR\", \"LRCV\", \"LRProb\"]\n\n\nclass LR(InAlgorithm):\n \"\"\"Logistic regression with hard predictions.\"\"\"\n\n def __init__(self, C: Optional[float] = None):\n \"\"\"Init LR.\"\"\"\n self.C = LogisticRegression().C if C is None else C\n super().__init__(name=f\"Logistic Regression (C={self.C})\", is_fairness_algo=False)\n\n @implements(InAlgorithm)\n def run(self, train: DataTuple, test: TestTuple) -> Prediction:\n clf = LogisticRegression(solver=\"liblinear\", random_state=888, C=self.C, multi_class=\"auto\")\n clf.fit(train.x, train.y.to_numpy().ravel())\n return Prediction(hard=pd.Series(clf.predict(test.x)))\n\n\nclass LRProb(InAlgorithm):\n \"\"\"Logistic regression with soft output.\"\"\"\n\n def __init__(self, C: Optional[int] = None):\n \"\"\"Init LRProb.\"\"\"\n self.C = LogisticRegression().C if C is None else C\n super().__init__(name=f\"Logistic Regression Prob (C={self.C})\", is_fairness_algo=False)\n\n @implements(InAlgorithm)\n def run(self, train: DataTuple, test: TestTuple) -> SoftPrediction:\n clf = LogisticRegression(solver=\"liblinear\", random_state=888, C=self.C, multi_class=\"auto\")\n clf.fit(train.x, train.y.to_numpy().ravel())\n return SoftPrediction(soft=pd.Series(clf.predict_proba(test.x)[:, 1]))\n\n\nclass LRCV(InAlgorithm):\n \"\"\"Kind of a cheap hack for now, but gives a proper cross-valudeted LR.\"\"\"\n\n def __init__(self) -> None:\n \"\"\"Init LRCV.\"\"\"\n super().__init__(name=\"LRCV\", is_fairness_algo=False)\n\n @implements(InAlgorithm)\n def run(self, train: DataTuple, test: TestTuple) -> Prediction:\n folder = KFold(n_splits=3, shuffle=False)\n clf = LogisticRegressionCV(\n cv=folder, n_jobs=-1, random_state=888, solver=\"liblinear\", multi_class=\"auto\"\n )\n clf.fit(train.x, train.y.to_numpy().ravel())\n return Prediction(hard=pd.Series(clf.predict(test.x)), info=dict(C=clf.C_[0]))\n", "path": "ethicml/algorithms/inprocess/logistic_regression.py"}]} | 1,458 | 489 |
gh_patches_debug_28181 | rasdani/github-patches | git_diff | carpentries__amy-622 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
List of people who taught at events of a specific type
Use case: Tracy wants to grab a list of people who taught at DC workshops, so that she knows who is an experienced DC instructor.
--- END ISSUE ---
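A minimal sketch of the query this use case calls for, under the assumption that the `Person`/`Tag` models referenced in the file below carry the usual `task__role`/`task__event__tags` relations:

```python
from workshops.models import Person, Tag  # assumed import path, as in the file below

def instructors_for_tags(tag_names):
    """People who held the 'instructor' role at events carrying any of the tags."""
    tags = Tag.objects.filter(name__in=tag_names)
    return (
        Person.objects
        .filter(task__role__name='instructor')  # taught, not merely attended
        .filter(task__event__tags__in=tags)     # events of the requested type
        .distinct()                             # the joins can duplicate people
    )

# instructors_for_tags(['DC']) -> experienced Data Carpentry instructors
```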
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `workshops/filters.py`
Content:
```
1 from distutils.util import strtobool
2
3 import django.forms.widgets
4
5 import django_filters
6 from django_countries import Countries
7
8 from workshops.models import Event, Host, Person, Task, Airport, EventRequest
9
10 EMPTY_SELECTION = (None, '---------')
11
12
13 class AllCountriesFilter(django_filters.ChoiceFilter):
14 @property
15 def field(self):
16 qs = self.model._default_manager.distinct()
17 qs = qs.order_by(self.name).values_list(self.name, flat=True)
18
19 choices = [o for o in qs if o]
20 countries = Countries()
21 countries.only = choices
22
23 self.extra['choices'] = list(countries)
24 self.extra['choices'].insert(0, EMPTY_SELECTION)
25 return super().field
26
27
28 class ForeignKeyAllValuesFilter(django_filters.ChoiceFilter):
29 def __init__(self, model, *args, **kwargs):
30 self.lookup_model = model
31 super().__init__(*args, **kwargs)
32
33 @property
34 def field(self):
35 name = self.name
36 model = self.lookup_model
37
38 qs1 = self.model._default_manager.distinct()
39 qs1 = qs1.order_by(name).values_list(name, flat=True)
40 qs2 = model.objects.filter(pk__in=qs1)
41 self.extra['choices'] = [(o.pk, str(o)) for o in qs2]
42 self.extra['choices'].insert(0, EMPTY_SELECTION)
43 return super().field
44
45
46 class EventStateFilter(django_filters.ChoiceFilter):
47 def filter(self, qs, value):
48 if isinstance(value, django_filters.fields.Lookup):
49 value = value.value
50
51 # no filtering
52 if value in ([], (), {}, None, '', 'all'):
53 return qs
54
55 # no need to check if value exists in self.extra['choices'] because
56 # validation is done by django_filters
57 try:
58 return getattr(qs, "{}_events".format(value))()
59 except AttributeError:
60 return qs
61
62
63 class EventFilter(django_filters.FilterSet):
64 assigned_to = ForeignKeyAllValuesFilter(Person)
65 host = ForeignKeyAllValuesFilter(Host)
66 administrator = ForeignKeyAllValuesFilter(Host)
67
68 STATUS_CHOICES = [
69 ('', 'All'),
70 ('past', 'Past'),
71 ('ongoing', 'Ongoing'),
72 ('upcoming', 'Upcoming'),
73 ('unpublished', 'Unpublished'),
74 ('uninvoiced', 'Uninvoiced'),
75 ]
76 status = EventStateFilter(choices=STATUS_CHOICES)
77
78 invoice_status = django_filters.ChoiceFilter(
79 choices=(EMPTY_SELECTION, ) + Event.INVOICED_CHOICES,
80 )
81
82 class Meta:
83 model = Event
84 fields = [
85 'assigned_to',
86 'tags',
87 'host',
88 'administrator',
89 'invoice_status',
90 'completed',
91 ]
92 order_by = ['-slug', 'slug', 'start', '-start', 'end', '-end']
93
94
95 class EventRequestFilter(django_filters.FilterSet):
96 assigned_to = ForeignKeyAllValuesFilter(Person)
97 country = AllCountriesFilter()
98 active = django_filters.TypedChoiceFilter(
99 choices=(('true', 'Open'), ('false', 'Closed')),
100 coerce=strtobool,
101 label='Status',
102 widget=django.forms.widgets.RadioSelect,
103 )
104
105 class Meta:
106 model = EventRequest
107 fields = [
108 'assigned_to',
109 'workshop_type',
110 'active',
111 'country',
112 ]
113 order_by = ['-created_at', 'created_at']
114
115
116 class HostFilter(django_filters.FilterSet):
117 country = AllCountriesFilter()
118
119 class Meta:
120 model = Host
121 fields = [
122 'country',
123 ]
124 order_by = ['fullname', '-fullname', 'domain', '-domain', ]
125
126
127 class PersonFilter(django_filters.FilterSet):
128 class Meta:
129 model = Person
130 fields = [
131 'badges',
132 ]
133 order_by = ["lastname", "-lastname", "firstname", "-firstname",
134 "email", "-email"]
135
136 def get_order_by(self, order_value):
137 if order_value == 'firstname':
138 return ['personal', 'middle', 'family']
139 elif order_value == '-firstname':
140 return ['-personal', '-middle', '-family']
141 elif order_value == 'lastname':
142 return ['family', 'middle', 'personal']
143 elif order_value == '-lastname':
144 return ['-family', '-middle', '-personal']
145 return super().get_order_by(order_value)
146
147
148 class TaskFilter(django_filters.FilterSet):
149 class Meta:
150 model = Task
151 fields = [
152 'event',
153 # can't filter on person because person's name contains 3 fields:
154 # person.personal, person.middle, person.family
155 # 'person',
156 'role',
157 ]
158 order_by = [
159 ['event__slug', 'Event'],
160 ['-event__slug', 'Event (descending)'],
161 ['person__family', 'Person'],
162 ['-person__family', 'Person (descending)'],
163 ['role', 'Role'],
164 ['-role', 'Role (descending)'],
165 ]
166
167
168 class AirportFilter(django_filters.FilterSet):
169 fullname = django_filters.CharFilter(lookup_type='icontains')
170
171 class Meta:
172 model = Airport
173 fields = [
174 'fullname',
175 ]
176 order_by = ["iata", "-iata", "fullname", "-fullname"]
177
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/workshops/filters.py b/workshops/filters.py
--- a/workshops/filters.py
+++ b/workshops/filters.py
@@ -5,7 +5,17 @@
import django_filters
from django_countries import Countries
-from workshops.models import Event, Host, Person, Task, Airport, EventRequest
+from workshops.models import (
+ Event,
+ Host,
+ Person,
+ Task,
+ Airport,
+ EventRequest,
+ Tag,
+ Role,
+ Task,
+)
EMPTY_SELECTION = (None, '---------')
@@ -124,11 +134,31 @@
order_by = ['fullname', '-fullname', 'domain', '-domain', ]
+def filter_taught_workshops(queryset, values):
+ """Limit Persons to only instructors from events with specific tags.
+
+ This needs to be in a separate function because django-filters doesn't
+ support `action` parameter as supposed, ie. with
+ `action='filter_taught_workshops'` it doesn't call the method; instead it
+ tries calling a string, which results in error."""
+ if not values:
+ return queryset
+
+ return queryset.filter(task__role__name='instructor') \
+ .filter(task__event__tags__in=values) \
+ .distinct()
+
+
class PersonFilter(django_filters.FilterSet):
+ taught_workshops = django_filters.ModelMultipleChoiceFilter(
+ queryset=Tag.objects.all(), label='Taught at workshops of type',
+ action=filter_taught_workshops,
+ )
+
class Meta:
model = Person
fields = [
- 'badges',
+ 'badges', 'taught_workshops',
]
order_by = ["lastname", "-lastname", "firstname", "-firstname",
"email", "-email"]
| {"golden_diff": "diff --git a/workshops/filters.py b/workshops/filters.py\n--- a/workshops/filters.py\n+++ b/workshops/filters.py\n@@ -5,7 +5,17 @@\n import django_filters\n from django_countries import Countries\n \n-from workshops.models import Event, Host, Person, Task, Airport, EventRequest\n+from workshops.models import (\n+ Event,\n+ Host,\n+ Person,\n+ Task,\n+ Airport,\n+ EventRequest,\n+ Tag,\n+ Role,\n+ Task,\n+)\n \n EMPTY_SELECTION = (None, '---------')\n \n@@ -124,11 +134,31 @@\n order_by = ['fullname', '-fullname', 'domain', '-domain', ]\n \n \n+def filter_taught_workshops(queryset, values):\n+ \"\"\"Limit Persons to only instructors from events with specific tags.\n+\n+ This needs to be in a separate function because django-filters doesn't\n+ support `action` parameter as supposed, ie. with\n+ `action='filter_taught_workshops'` it doesn't call the method; instead it\n+ tries calling a string, which results in error.\"\"\"\n+ if not values:\n+ return queryset\n+\n+ return queryset.filter(task__role__name='instructor') \\\n+ .filter(task__event__tags__in=values) \\\n+ .distinct()\n+\n+\n class PersonFilter(django_filters.FilterSet):\n+ taught_workshops = django_filters.ModelMultipleChoiceFilter(\n+ queryset=Tag.objects.all(), label='Taught at workshops of type',\n+ action=filter_taught_workshops,\n+ )\n+\n class Meta:\n model = Person\n fields = [\n- 'badges',\n+ 'badges', 'taught_workshops',\n ]\n order_by = [\"lastname\", \"-lastname\", \"firstname\", \"-firstname\",\n \"email\", \"-email\"]\n", "issue": "List of people who taught at events of specific type\nUsecase: Tracy wants to grab list of people who taught at DC workshops, so that she knows who is experienced DC instructor.\n\n", "before_files": [{"content": "from distutils.util import strtobool\n\nimport django.forms.widgets\n\nimport django_filters\nfrom django_countries import Countries\n\nfrom workshops.models import Event, Host, Person, Task, Airport, EventRequest\n\nEMPTY_SELECTION = (None, '---------')\n\n\nclass AllCountriesFilter(django_filters.ChoiceFilter):\n @property\n def field(self):\n qs = self.model._default_manager.distinct()\n qs = qs.order_by(self.name).values_list(self.name, flat=True)\n\n choices = [o for o in qs if o]\n countries = Countries()\n countries.only = choices\n\n self.extra['choices'] = list(countries)\n self.extra['choices'].insert(0, EMPTY_SELECTION)\n return super().field\n\n\nclass ForeignKeyAllValuesFilter(django_filters.ChoiceFilter):\n def __init__(self, model, *args, **kwargs):\n self.lookup_model = model\n super().__init__(*args, **kwargs)\n\n @property\n def field(self):\n name = self.name\n model = self.lookup_model\n\n qs1 = self.model._default_manager.distinct()\n qs1 = qs1.order_by(name).values_list(name, flat=True)\n qs2 = model.objects.filter(pk__in=qs1)\n self.extra['choices'] = [(o.pk, str(o)) for o in qs2]\n self.extra['choices'].insert(0, EMPTY_SELECTION)\n return super().field\n\n\nclass EventStateFilter(django_filters.ChoiceFilter):\n def filter(self, qs, value):\n if isinstance(value, django_filters.fields.Lookup):\n value = value.value\n\n # no filtering\n if value in ([], (), {}, None, '', 'all'):\n return qs\n\n # no need to check if value exists in self.extra['choices'] because\n # validation is done by django_filters\n try:\n return getattr(qs, \"{}_events\".format(value))()\n except AttributeError:\n return qs\n\n\nclass EventFilter(django_filters.FilterSet):\n assigned_to = ForeignKeyAllValuesFilter(Person)\n host = ForeignKeyAllValuesFilter(Host)\n 
administrator = ForeignKeyAllValuesFilter(Host)\n\n STATUS_CHOICES = [\n ('', 'All'),\n ('past', 'Past'),\n ('ongoing', 'Ongoing'),\n ('upcoming', 'Upcoming'),\n ('unpublished', 'Unpublished'),\n ('uninvoiced', 'Uninvoiced'),\n ]\n status = EventStateFilter(choices=STATUS_CHOICES)\n\n invoice_status = django_filters.ChoiceFilter(\n choices=(EMPTY_SELECTION, ) + Event.INVOICED_CHOICES,\n )\n\n class Meta:\n model = Event\n fields = [\n 'assigned_to',\n 'tags',\n 'host',\n 'administrator',\n 'invoice_status',\n 'completed',\n ]\n order_by = ['-slug', 'slug', 'start', '-start', 'end', '-end']\n\n\nclass EventRequestFilter(django_filters.FilterSet):\n assigned_to = ForeignKeyAllValuesFilter(Person)\n country = AllCountriesFilter()\n active = django_filters.TypedChoiceFilter(\n choices=(('true', 'Open'), ('false', 'Closed')),\n coerce=strtobool,\n label='Status',\n widget=django.forms.widgets.RadioSelect,\n )\n\n class Meta:\n model = EventRequest\n fields = [\n 'assigned_to',\n 'workshop_type',\n 'active',\n 'country',\n ]\n order_by = ['-created_at', 'created_at']\n\n\nclass HostFilter(django_filters.FilterSet):\n country = AllCountriesFilter()\n\n class Meta:\n model = Host\n fields = [\n 'country',\n ]\n order_by = ['fullname', '-fullname', 'domain', '-domain', ]\n\n\nclass PersonFilter(django_filters.FilterSet):\n class Meta:\n model = Person\n fields = [\n 'badges',\n ]\n order_by = [\"lastname\", \"-lastname\", \"firstname\", \"-firstname\",\n \"email\", \"-email\"]\n\n def get_order_by(self, order_value):\n if order_value == 'firstname':\n return ['personal', 'middle', 'family']\n elif order_value == '-firstname':\n return ['-personal', '-middle', '-family']\n elif order_value == 'lastname':\n return ['family', 'middle', 'personal']\n elif order_value == '-lastname':\n return ['-family', '-middle', '-personal']\n return super().get_order_by(order_value)\n\n\nclass TaskFilter(django_filters.FilterSet):\n class Meta:\n model = Task\n fields = [\n 'event',\n # can't filter on person because person's name contains 3 fields:\n # person.personal, person.middle, person.family\n # 'person',\n 'role',\n ]\n order_by = [\n ['event__slug', 'Event'],\n ['-event__slug', 'Event (descending)'],\n ['person__family', 'Person'],\n ['-person__family', 'Person (descending)'],\n ['role', 'Role'],\n ['-role', 'Role (descending)'],\n ]\n\n\nclass AirportFilter(django_filters.FilterSet):\n fullname = django_filters.CharFilter(lookup_type='icontains')\n\n class Meta:\n model = Airport\n fields = [\n 'fullname',\n ]\n order_by = [\"iata\", \"-iata\", \"fullname\", \"-fullname\"]\n", "path": "workshops/filters.py"}], "after_files": [{"content": "from distutils.util import strtobool\n\nimport django.forms.widgets\n\nimport django_filters\nfrom django_countries import Countries\n\nfrom workshops.models import (\n Event,\n Host,\n Person,\n Task,\n Airport,\n EventRequest,\n Tag,\n Role,\n Task,\n)\n\nEMPTY_SELECTION = (None, '---------')\n\n\nclass AllCountriesFilter(django_filters.ChoiceFilter):\n @property\n def field(self):\n qs = self.model._default_manager.distinct()\n qs = qs.order_by(self.name).values_list(self.name, flat=True)\n\n choices = [o for o in qs if o]\n countries = Countries()\n countries.only = choices\n\n self.extra['choices'] = list(countries)\n self.extra['choices'].insert(0, EMPTY_SELECTION)\n return super().field\n\n\nclass ForeignKeyAllValuesFilter(django_filters.ChoiceFilter):\n def __init__(self, model, *args, **kwargs):\n self.lookup_model = model\n super().__init__(*args, **kwargs)\n\n 
@property\n def field(self):\n name = self.name\n model = self.lookup_model\n\n qs1 = self.model._default_manager.distinct()\n qs1 = qs1.order_by(name).values_list(name, flat=True)\n qs2 = model.objects.filter(pk__in=qs1)\n self.extra['choices'] = [(o.pk, str(o)) for o in qs2]\n self.extra['choices'].insert(0, EMPTY_SELECTION)\n return super().field\n\n\nclass EventStateFilter(django_filters.ChoiceFilter):\n def filter(self, qs, value):\n if isinstance(value, django_filters.fields.Lookup):\n value = value.value\n\n # no filtering\n if value in ([], (), {}, None, '', 'all'):\n return qs\n\n # no need to check if value exists in self.extra['choices'] because\n # validation is done by django_filters\n try:\n return getattr(qs, \"{}_events\".format(value))()\n except AttributeError:\n return qs\n\n\nclass EventFilter(django_filters.FilterSet):\n assigned_to = ForeignKeyAllValuesFilter(Person)\n host = ForeignKeyAllValuesFilter(Host)\n administrator = ForeignKeyAllValuesFilter(Host)\n\n STATUS_CHOICES = [\n ('', 'All'),\n ('past', 'Past'),\n ('ongoing', 'Ongoing'),\n ('upcoming', 'Upcoming'),\n ('unpublished', 'Unpublished'),\n ('uninvoiced', 'Uninvoiced'),\n ]\n status = EventStateFilter(choices=STATUS_CHOICES)\n\n invoice_status = django_filters.ChoiceFilter(\n choices=(EMPTY_SELECTION, ) + Event.INVOICED_CHOICES,\n )\n\n class Meta:\n model = Event\n fields = [\n 'assigned_to',\n 'tags',\n 'host',\n 'administrator',\n 'invoice_status',\n 'completed',\n ]\n order_by = ['-slug', 'slug', 'start', '-start', 'end', '-end']\n\n\nclass EventRequestFilter(django_filters.FilterSet):\n assigned_to = ForeignKeyAllValuesFilter(Person)\n country = AllCountriesFilter()\n active = django_filters.TypedChoiceFilter(\n choices=(('true', 'Open'), ('false', 'Closed')),\n coerce=strtobool,\n label='Status',\n widget=django.forms.widgets.RadioSelect,\n )\n\n class Meta:\n model = EventRequest\n fields = [\n 'assigned_to',\n 'workshop_type',\n 'active',\n 'country',\n ]\n order_by = ['-created_at', 'created_at']\n\n\nclass HostFilter(django_filters.FilterSet):\n country = AllCountriesFilter()\n\n class Meta:\n model = Host\n fields = [\n 'country',\n ]\n order_by = ['fullname', '-fullname', 'domain', '-domain', ]\n\n\ndef filter_taught_workshops(queryset, values):\n \"\"\"Limit Persons to only instructors from events with specific tags.\n\n This needs to be in a separate function because django-filters doesn't\n support `action` parameter as supposed, ie. 
with\n `action='filter_taught_workshops'` it doesn't call the method; instead it\n tries calling a string, which results in error.\"\"\"\n if not values:\n return queryset\n\n return queryset.filter(task__role__name='instructor') \\\n .filter(task__event__tags__in=values) \\\n .distinct()\n\n\nclass PersonFilter(django_filters.FilterSet):\n taught_workshops = django_filters.ModelMultipleChoiceFilter(\n queryset=Tag.objects.all(), label='Taught at workshops of type',\n action=filter_taught_workshops,\n )\n\n class Meta:\n model = Person\n fields = [\n 'badges', 'taught_workshops',\n ]\n order_by = [\"lastname\", \"-lastname\", \"firstname\", \"-firstname\",\n \"email\", \"-email\"]\n\n def get_order_by(self, order_value):\n if order_value == 'firstname':\n return ['personal', 'middle', 'family']\n elif order_value == '-firstname':\n return ['-personal', '-middle', '-family']\n elif order_value == 'lastname':\n return ['family', 'middle', 'personal']\n elif order_value == '-lastname':\n return ['-family', '-middle', '-personal']\n return super().get_order_by(order_value)\n\n\nclass TaskFilter(django_filters.FilterSet):\n class Meta:\n model = Task\n fields = [\n 'event',\n # can't filter on person because person's name contains 3 fields:\n # person.personal, person.middle, person.family\n # 'person',\n 'role',\n ]\n order_by = [\n ['event__slug', 'Event'],\n ['-event__slug', 'Event (descending)'],\n ['person__family', 'Person'],\n ['-person__family', 'Person (descending)'],\n ['role', 'Role'],\n ['-role', 'Role (descending)'],\n ]\n\n\nclass AirportFilter(django_filters.FilterSet):\n fullname = django_filters.CharFilter(lookup_type='icontains')\n\n class Meta:\n model = Airport\n fields = [\n 'fullname',\n ]\n order_by = [\"iata\", \"-iata\", \"fullname\", \"-fullname\"]\n", "path": "workshops/filters.py"}]} | 1,873 | 410 |
gh_patches_debug_35133 | rasdani/github-patches | git_diff | cowrie__cowrie-1472 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MalShare uploader not working
**Describe the bug**
In my config I have
```
[output_malshare]
enabled = true
```
and in my logs I have
```
[stdout#info] Sending file to MalShare
[stdout#info] Submited to MalShare
```
but when I check on MalShare I can't find any of the binaries that have been caught in my honeypot.
**To Reproduce**
Steps to reproduce the behavior:
1. Enable MalShare submission in your config
2. Wait for a bot to drop a binary in your honeypot
3. Try to find the binary on MalShare (search by MD5)
4. Observe that the binary is not there
**Expected behavior**
The binary should be uploaded successfully to MalShare
**Server (please complete the following information):**
- OS: [e.g. RedHat Linux 7.1, output of uname -a] Ubuntu 20.04, Linux 5.4.0
- Python: 3.8.5
**Additional context**
Based on [MalShare's API docs](https://malshare.com/doc.php), it seems that uploading files now requires an API key and a slightly different POST path than the one [defined in cowrie](https://github.com/cowrie/cowrie/blob/b848ec261554ee9128640601eb9a6734b2bffefe/src/cowrie/output/malshare.py#L90). Adding an API key option to the config and updating the uploader to use the new path and the API key should solve this.
--- END ISSUE ---
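A minimal sketch of the keyed upload the MalShare docs describe; the endpoint shape matches their documented `action=upload` call, while the config plumbing for the key is an assumption:

```python
import requests

def upload_to_malshare(path, api_key):
    """POST a sample to MalShare's authenticated upload endpoint (sketch)."""
    url = "https://malshare.com/api.php?api_key={}&action=upload".format(api_key)
    with open(path, "rb") as f:
        # The documented upload action takes the file under the "upload" field.
        res = requests.post(url, files={"upload": f})
    res.raise_for_status()
    return res.text
```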
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cowrie/output/malshare.py`
Content:
```
1 # Copyright (c) 2015 Michel Oosterhof <[email protected]>
2 # All rights reserved.
3 #
4 # Redistribution and use in source and binary forms, with or without
5 # modification, are permitted provided that the following conditions
6 # are met:
7 #
8 # 1. Redistributions of source code must retain the above copyright
9 # notice, this list of conditions and the following disclaimer.
10 # 2. Redistributions in binary form must reproduce the above copyright
11 # notice, this list of conditions and the following disclaimer in the
12 # documentation and/or other materials provided with the distribution.
13 # 3. The names of the author(s) may not be used to endorse or promote
14 # products derived from this software without specific prior written
15 # permission.
16 #
17 # THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS`` AND ANY EXPRESS OR
18 # IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
19 # OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
20 # IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY DIRECT, INDIRECT,
21 # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
22 # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
23 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
24 # AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
25 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
26 # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
27 # SUCH DAMAGE.
28
29 """
30 Send files to https://malshare.com/
31 More info https://malshare.com/doc.php
32 """
33
34 from __future__ import absolute_import, division
35
36 import os
37
38 try:
39 from urllib.parse import urlparse
40 except ImportError:
41 from urlparse import urlparse
42 import requests
43
44 import cowrie.core.output
45
46
47 class Output(cowrie.core.output.Output):
48 """
49 malshare output
50
51 TODO: use `treq`
52 """
53 def start(self):
54 """
55 Start output plugin
56 """
57 pass
58
59 def stop(self):
60 """
61 Stop output plugin
62 """
63 pass
64
65 def write(self, entry):
66 if entry["eventid"] == "cowrie.session.file_download":
67 print("Sending file to MalShare")
68 p = urlparse(entry["url"]).path
69 if p == "":
70 fileName = entry["shasum"]
71 else:
72 b = os.path.basename(p)
73 if b == "":
74 fileName = entry["shasum"]
75 else:
76 fileName = b
77
78 self.postfile(entry["outfile"], fileName)
79
80 elif entry["eventid"] == "cowrie.session.file_upload":
81 print("Sending file to MalShare")
82 self.postfile(entry["outfile"], entry["filename"])
83
84 def postfile(self, artifact, fileName):
85 """
86 Send a file to MalShare
87 """
88 try:
89 res = requests.post(
90 "https://malshare.com/api.php?mode=cli",
91 files={fileName: open(artifact, "rb")}
92 )
93 if res and res.ok:
94 print("Submited to MalShare")
95 else:
96 print("MalShare Request failed: {}".format(res.status_code))
97 except Exception as e:
98 print("MalShare Request failed: {}".format(e))
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cowrie/output/malshare.py b/src/cowrie/output/malshare.py
--- a/src/cowrie/output/malshare.py
+++ b/src/cowrie/output/malshare.py
@@ -41,7 +41,10 @@
from urlparse import urlparse
import requests
+from twisted.python import log
+
import cowrie.core.output
+from cowrie.core.config import CowrieConfig
class Output(cowrie.core.output.Output):
@@ -54,7 +57,7 @@
"""
Start output plugin
"""
- pass
+ self.apiKey = CowrieConfig().get('output_malshare', 'api_key')
def stop(self):
"""
@@ -64,7 +67,6 @@
def write(self, entry):
if entry["eventid"] == "cowrie.session.file_download":
- print("Sending file to MalShare")
p = urlparse(entry["url"]).path
if p == "":
fileName = entry["shasum"]
@@ -78,7 +80,6 @@
self.postfile(entry["outfile"], fileName)
elif entry["eventid"] == "cowrie.session.file_upload":
- print("Sending file to MalShare")
self.postfile(entry["outfile"], entry["filename"])
def postfile(self, artifact, fileName):
@@ -87,12 +88,12 @@
"""
try:
res = requests.post(
- "https://malshare.com/api.php?mode=cli",
- files={fileName: open(artifact, "rb")}
+ "https://malshare.com/api.php?api_key="+self.apiKey+"&action=upload",
+ files={"upload": open(artifact, "rb")}
)
if res and res.ok:
- print("Submited to MalShare")
+ log.msg("Submitted to MalShare")
else:
- print("MalShare Request failed: {}".format(res.status_code))
+ log.msg("MalShare Request failed: {}".format(res.status_code))
except Exception as e:
- print("MalShare Request failed: {}".format(e))
+ log.msg("MalShare Request failed: {}".format(e))
| {"golden_diff": "diff --git a/src/cowrie/output/malshare.py b/src/cowrie/output/malshare.py\n--- a/src/cowrie/output/malshare.py\n+++ b/src/cowrie/output/malshare.py\n@@ -41,7 +41,10 @@\n from urlparse import urlparse\n import requests\n \n+from twisted.python import log\n+\n import cowrie.core.output\n+from cowrie.core.config import CowrieConfig\n \n \n class Output(cowrie.core.output.Output):\n@@ -54,7 +57,7 @@\n \"\"\"\n Start output plugin\n \"\"\"\n- pass\n+ self.apiKey = CowrieConfig().get('output_malshare', 'api_key')\n \n def stop(self):\n \"\"\"\n@@ -64,7 +67,6 @@\n \n def write(self, entry):\n if entry[\"eventid\"] == \"cowrie.session.file_download\":\n- print(\"Sending file to MalShare\")\n p = urlparse(entry[\"url\"]).path\n if p == \"\":\n fileName = entry[\"shasum\"]\n@@ -78,7 +80,6 @@\n self.postfile(entry[\"outfile\"], fileName)\n \n elif entry[\"eventid\"] == \"cowrie.session.file_upload\":\n- print(\"Sending file to MalShare\")\n self.postfile(entry[\"outfile\"], entry[\"filename\"])\n \n def postfile(self, artifact, fileName):\n@@ -87,12 +88,12 @@\n \"\"\"\n try:\n res = requests.post(\n- \"https://malshare.com/api.php?mode=cli\",\n- files={fileName: open(artifact, \"rb\")}\n+ \"https://malshare.com/api.php?api_key=\"+self.apiKey+\"&action=upload\",\n+ files={\"upload\": open(artifact, \"rb\")}\n )\n if res and res.ok:\n- print(\"Submited to MalShare\")\n+ log.msg(\"Submitted to MalShare\")\n else:\n- print(\"MalShare Request failed: {}\".format(res.status_code))\n+ log.msg(\"MalShare Request failed: {}\".format(res.status_code))\n except Exception as e:\n- print(\"MalShare Request failed: {}\".format(e))\n+ log.msg(\"MalShare Request failed: {}\".format(e))\n", "issue": "MalShare uploader not working\n**Describe the bug**\r\nIn my config I have\r\n```\r\n[output_malshare]\r\nenabled = true\r\n```\r\n\r\nand in my logs I have\r\n```\r\n[stdout#info] Sending file to MalShare\r\n[stdout#info] Submited to MalShare\r\n```\r\n\r\nbut when I check on MalShare I can't find any the binaries that have been caught in my honeypot.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Enable MalShare submission in your config\r\n2. Wait for a bot to drop a binary in your honeypot\r\n3. Try to find the binary on malshare (search by md5)\r\n4. Observe that the binary is not there\r\n\r\n**Expected behavior**\r\nThe binary should be uploaded successfully to MalShare\r\n\r\n**Server (please complete the following information):**\r\n - OS: [e.g. RedHat Linux 7.1, output of uname -a] Ubuntu 20.04, Linux 5.4.0\r\n - Python: 3.8.5\r\n\r\n**Additional context**\r\nBased on [MalShare's API docs](https://malshare.com/doc.php) it seems that uploading files now requires an API key and a slightly different POST path than the one [defined in cowrie](https://github.com/cowrie/cowrie/blob/b848ec261554ee9128640601eb9a6734b2bffefe/src/cowrie/output/malshare.py#L90). Probably adding an API key option to the config and updating the uploader with the new path and to use the API key will solve this.\r\n\n", "before_files": [{"content": "# Copyright (c) 2015 Michel Oosterhof <[email protected]>\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# 1. Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# 2. 
Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# 3. The names of the author(s) may not be used to endorse or promote\n# products derived from this software without specific prior written\n# permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS`` AND ANY EXPRESS OR\n# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES\n# OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.\n# IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED\n# AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n# SUCH DAMAGE.\n\n\"\"\"\nSend files to https://malshare.com/\nMore info https://malshare.com/doc.php\n\"\"\"\n\nfrom __future__ import absolute_import, division\n\nimport os\n\ntry:\n from urllib.parse import urlparse\nexcept ImportError:\n from urlparse import urlparse\nimport requests\n\nimport cowrie.core.output\n\n\nclass Output(cowrie.core.output.Output):\n \"\"\"\n malshare output\n\n TODO: use `treq`\n \"\"\"\n def start(self):\n \"\"\"\n Start output plugin\n \"\"\"\n pass\n\n def stop(self):\n \"\"\"\n Stop output plugin\n \"\"\"\n pass\n\n def write(self, entry):\n if entry[\"eventid\"] == \"cowrie.session.file_download\":\n print(\"Sending file to MalShare\")\n p = urlparse(entry[\"url\"]).path\n if p == \"\":\n fileName = entry[\"shasum\"]\n else:\n b = os.path.basename(p)\n if b == \"\":\n fileName = entry[\"shasum\"]\n else:\n fileName = b\n\n self.postfile(entry[\"outfile\"], fileName)\n\n elif entry[\"eventid\"] == \"cowrie.session.file_upload\":\n print(\"Sending file to MalShare\")\n self.postfile(entry[\"outfile\"], entry[\"filename\"])\n\n def postfile(self, artifact, fileName):\n \"\"\"\n Send a file to MalShare\n \"\"\"\n try:\n res = requests.post(\n \"https://malshare.com/api.php?mode=cli\",\n files={fileName: open(artifact, \"rb\")}\n )\n if res and res.ok:\n print(\"Submited to MalShare\")\n else:\n print(\"MalShare Request failed: {}\".format(res.status_code))\n except Exception as e:\n print(\"MalShare Request failed: {}\".format(e))\n", "path": "src/cowrie/output/malshare.py"}], "after_files": [{"content": "# Copyright (c) 2015 Michel Oosterhof <[email protected]>\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# 1. Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# 2. Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# 3. 
The names of the author(s) may not be used to endorse or promote\n# products derived from this software without specific prior written\n# permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS`` AND ANY EXPRESS OR\n# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES\n# OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.\n# IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED\n# AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n# SUCH DAMAGE.\n\n\"\"\"\nSend files to https://malshare.com/\nMore info https://malshare.com/doc.php\n\"\"\"\n\nfrom __future__ import absolute_import, division\n\nimport os\n\ntry:\n from urllib.parse import urlparse\nexcept ImportError:\n from urlparse import urlparse\nimport requests\n\nfrom twisted.python import log\n\nimport cowrie.core.output\nfrom cowrie.core.config import CowrieConfig\n\n\nclass Output(cowrie.core.output.Output):\n \"\"\"\n malshare output\n\n TODO: use `treq`\n \"\"\"\n def start(self):\n \"\"\"\n Start output plugin\n \"\"\"\n self.apiKey = CowrieConfig().get('output_malshare', 'api_key')\n\n def stop(self):\n \"\"\"\n Stop output plugin\n \"\"\"\n pass\n\n def write(self, entry):\n if entry[\"eventid\"] == \"cowrie.session.file_download\":\n p = urlparse(entry[\"url\"]).path\n if p == \"\":\n fileName = entry[\"shasum\"]\n else:\n b = os.path.basename(p)\n if b == \"\":\n fileName = entry[\"shasum\"]\n else:\n fileName = b\n\n self.postfile(entry[\"outfile\"], fileName)\n\n elif entry[\"eventid\"] == \"cowrie.session.file_upload\":\n self.postfile(entry[\"outfile\"], entry[\"filename\"])\n\n def postfile(self, artifact, fileName):\n \"\"\"\n Send a file to MalShare\n \"\"\"\n try:\n res = requests.post(\n \"https://malshare.com/api.php?api_key=\"+self.apiKey+\"&action=upload\",\n files={\"upload\": open(artifact, \"rb\")}\n )\n if res and res.ok:\n log.msg(\"Submitted to MalShare\")\n else:\n log.msg(\"MalShare Request failed: {}\".format(res.status_code))\n except Exception as e:\n log.msg(\"MalShare Request failed: {}\".format(e))\n", "path": "src/cowrie/output/malshare.py"}]} | 1,504 | 484 |
gh_patches_debug_5923 | rasdani/github-patches | git_diff | dotkom__onlineweb4-488 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Featured career opportunities are not featured
The featured opportunities are not prioritized over other opportunities.
--- END ISSUE ---
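For context: Django sorts booleans ascending by default, so ordering by `'featured'` puts `False` (0) before `True` (1) and buries featured entries; a minimal sketch of the descending variant, using the queryset from the file below:

```python
from django.utils import timezone
from apps.careeropportunity.models import CareerOpportunity

# '-featured' floats featured rows to the top (True sorts before False),
# then '-start' orders newest first within each group.
opportunities = CareerOpportunity.objects.filter(
    start__lte=timezone.now(), end__gte=timezone.now()
).order_by('-featured', '-start')
```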
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/careeropportunity/views.py`
Content:
```
1 #-*- coding: utf-8 -*-
2 from django.utils import timezone
3
4 from datetime import datetime
5
6 from django.conf import settings
7 from django.shortcuts import render_to_response
8 from django.shortcuts import get_object_or_404
9 from django.template import RequestContext
10
11 from apps.careeropportunity.models import CareerOpportunity
12
13
14 def index(request):
15 opportunities = CareerOpportunity.objects.filter(
16 start__lte=timezone.now(), end__gte=timezone.now()).order_by('featured', '-start')
17
18 return render_to_response('careeropportunity/index.html', \
19 {'opportunities': opportunities}, \
20 context_instance=RequestContext(request))
21
22
23 def details(request, opportunity_id):
24 opportunity = get_object_or_404(CareerOpportunity, pk=opportunity_id)
25
26 return render_to_response('careeropportunity/details.html', \
27 {'opportunity': opportunity}, \
28 context_instance=RequestContext(request))
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/careeropportunity/views.py b/apps/careeropportunity/views.py
--- a/apps/careeropportunity/views.py
+++ b/apps/careeropportunity/views.py
@@ -13,7 +13,7 @@
def index(request):
opportunities = CareerOpportunity.objects.filter(
- start__lte=timezone.now(), end__gte=timezone.now()).order_by('featured', '-start')
+ start__lte=timezone.now(), end__gte=timezone.now()).order_by('-featured', '-start')
return render_to_response('careeropportunity/index.html', \
{'opportunities': opportunities}, \
| {"golden_diff": "diff --git a/apps/careeropportunity/views.py b/apps/careeropportunity/views.py\n--- a/apps/careeropportunity/views.py\n+++ b/apps/careeropportunity/views.py\n@@ -13,7 +13,7 @@\n \n def index(request):\n opportunities = CareerOpportunity.objects.filter(\n- \tstart__lte=timezone.now(), end__gte=timezone.now()).order_by('featured', '-start')\n+ \tstart__lte=timezone.now(), end__gte=timezone.now()).order_by('-featured', '-start')\n \n return render_to_response('careeropportunity/index.html', \\\n {'opportunities': opportunities}, \\\n", "issue": "Featured careeropprotunities are not featured\nThe featured opportunities are not prioritized over other opportunities. \n\n", "before_files": [{"content": "#-*- coding: utf-8 -*-\nfrom django.utils import timezone\n\nfrom datetime import datetime\n\nfrom django.conf import settings\nfrom django.shortcuts import render_to_response\nfrom django.shortcuts import get_object_or_404\nfrom django.template import RequestContext\n\nfrom apps.careeropportunity.models import CareerOpportunity\n\n\ndef index(request):\n opportunities = CareerOpportunity.objects.filter(\n \tstart__lte=timezone.now(), end__gte=timezone.now()).order_by('featured', '-start')\n \n return render_to_response('careeropportunity/index.html', \\\n {'opportunities': opportunities}, \\\n context_instance=RequestContext(request))\n\n\ndef details(request, opportunity_id):\n opportunity = get_object_or_404(CareerOpportunity, pk=opportunity_id)\n\n return render_to_response('careeropportunity/details.html', \\\n {'opportunity': opportunity}, \\\n context_instance=RequestContext(request))\n", "path": "apps/careeropportunity/views.py"}], "after_files": [{"content": "#-*- coding: utf-8 -*-\nfrom django.utils import timezone\n\nfrom datetime import datetime\n\nfrom django.conf import settings\nfrom django.shortcuts import render_to_response\nfrom django.shortcuts import get_object_or_404\nfrom django.template import RequestContext\n\nfrom apps.careeropportunity.models import CareerOpportunity\n\n\ndef index(request):\n opportunities = CareerOpportunity.objects.filter(\n \tstart__lte=timezone.now(), end__gte=timezone.now()).order_by('-featured', '-start')\n \n return render_to_response('careeropportunity/index.html', \\\n {'opportunities': opportunities}, \\\n context_instance=RequestContext(request))\n\n\ndef details(request, opportunity_id):\n opportunity = get_object_or_404(CareerOpportunity, pk=opportunity_id)\n\n return render_to_response('careeropportunity/details.html', \\\n {'opportunity': opportunity}, \\\n context_instance=RequestContext(request))\n", "path": "apps/careeropportunity/views.py"}]} | 528 | 140 |
gh_patches_debug_11224 | rasdani/github-patches | git_diff | ethereum__web3.py-3248 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Web3.isAddress doesn't work for non-prefixed checksummed values
* Version: 4.0.0b11
* Python: 3.6
* OS: linux
### What was wrong?
As stated in the docs (http://web3py.readthedocs.io/en/latest/overview.html#Web3.isAddress), the function Web3.isAddress(value) should **allow both 0x-prefixed and non-prefixed values**.
If the address is not checksummed, it's OK not to have the **0x**:
```
>>> Web3.isAddress('d3cda913deb6f67967b99d67acdfa1712c293601')
>>> True
```
But if it's checksummed:
```
>>> Web3.isAddress('d3CdA913deB6f67967B99D67aCDFa1712C293601')
>>> False
```
No problem if we add the **0x**:
```
>>> Web3.isAddress('0xd3CdA913deB6f67967B99D67aCDFa1712C293601')
>>> True
```
### How can it be fixed?
Changing the documentation to state that checksumed addresses must have 0x or changing the function to accept checksumed addresses with 0x. I would just remove 0x at the beginning of the function (if found) and work with the address as that.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 from setuptools import (
3 find_packages,
4 setup,
5 )
6
7 extras_require = {
8 "tester": [
9 "eth-tester[py-evm]==v0.10.0-b.1",
10 "py-geth>=4.1.0",
11 ],
12 "linter": [
13 "black>=22.1.0",
14 "flake8==3.8.3",
15 "isort>=5.11.0",
16 "mypy==1.4.1",
17 "types-setuptools>=57.4.4",
18 "types-requests>=2.26.1",
19 "types-protobuf==3.19.13",
20 ],
21 "docs": [
22 "sphinx>=5.3.0",
23 "sphinx_rtd_theme>=1.0.0",
24 "towncrier>=21,<22",
25 ],
26 "dev": [
27 "bumpversion",
28 "flaky>=3.7.0",
29 "hypothesis>=3.31.2",
30 "importlib-metadata<5.0;python_version<'3.8'",
31 "pytest>=7.0.0",
32 "pytest-asyncio>=0.18.1,<0.23",
33 "pytest-mock>=1.10",
34 "pytest-watch>=4.2",
35 "pytest-xdist>=1.29",
36 "setuptools>=38.6.0",
37 "tox>=3.18.0",
38 "tqdm>4.32",
39 "twine>=1.13",
40 "when-changed>=0.3.0",
41 "build>=0.9.0",
42 ],
43 "ipfs": [
44 "ipfshttpclient==0.8.0a2",
45 ],
46 }
47
48 extras_require["dev"] = (
49 extras_require["tester"]
50 + extras_require["linter"]
51 + extras_require["docs"]
52 + extras_require["ipfs"]
53 + extras_require["dev"]
54 )
55
56 with open("./README.md") as readme:
57 long_description = readme.read()
58
59 setup(
60 name="web3",
61 # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.
62 version="6.14.0",
63 description="""web3.py""",
64 long_description_content_type="text/markdown",
65 long_description=long_description,
66 author="The Ethereum Foundation",
67 author_email="[email protected]",
68 url="https://github.com/ethereum/web3.py",
69 include_package_data=True,
70 install_requires=[
71 "aiohttp>=3.7.4.post0",
72 "eth-abi>=4.0.0",
73 "eth-account>=0.8.0",
74 "eth-hash[pycryptodome]>=0.5.1",
75 "eth-typing>=3.0.0",
76 "eth-utils>=2.1.0",
77 "hexbytes>=0.1.0,<0.4.0",
78 "jsonschema>=4.0.0",
79 "protobuf>=4.21.6",
80 "pydantic>=2.4.0",
81 "pywin32>=223;platform_system=='Windows'",
82 "requests>=2.16.0",
83 "typing-extensions>=4.0.1",
84 "websockets>=10.0.0",
85 "pyunormalize>=15.0.0",
86 ],
87 python_requires=">=3.8",
88 extras_require=extras_require,
89 py_modules=["web3", "ens", "ethpm"],
90 entry_points={"pytest11": ["pytest_ethereum = web3.tools.pytest_ethereum.plugins"]},
91 license="MIT",
92 zip_safe=False,
93 keywords="ethereum",
94 packages=find_packages(exclude=["tests", "tests.*"]),
95 package_data={"web3": ["py.typed"]},
96 classifiers=[
97 "Development Status :: 5 - Production/Stable",
98 "Intended Audience :: Developers",
99 "License :: OSI Approved :: MIT License",
100 "Natural Language :: English",
101 "Programming Language :: Python :: 3",
102 "Programming Language :: Python :: 3.8",
103 "Programming Language :: Python :: 3.9",
104 "Programming Language :: Python :: 3.10",
105 "Programming Language :: Python :: 3.11",
106 ],
107 )
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -6,7 +6,7 @@
extras_require = {
"tester": [
- "eth-tester[py-evm]==v0.10.0-b.1",
+ "eth-tester[py-evm]==v0.10.0-b.3",
"py-geth>=4.1.0",
],
"linter": [
@@ -73,7 +73,7 @@
"eth-account>=0.8.0",
"eth-hash[pycryptodome]>=0.5.1",
"eth-typing>=3.0.0",
- "eth-utils>=2.1.0",
+ "eth-utils>=4.0.0",
"hexbytes>=0.1.0,<0.4.0",
"jsonschema>=4.0.0",
"protobuf>=4.21.6",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -6,7 +6,7 @@\n \n extras_require = {\n \"tester\": [\n- \"eth-tester[py-evm]==v0.10.0-b.1\",\n+ \"eth-tester[py-evm]==v0.10.0-b.3\",\n \"py-geth>=4.1.0\",\n ],\n \"linter\": [\n@@ -73,7 +73,7 @@\n \"eth-account>=0.8.0\",\n \"eth-hash[pycryptodome]>=0.5.1\",\n \"eth-typing>=3.0.0\",\n- \"eth-utils>=2.1.0\",\n+ \"eth-utils>=4.0.0\",\n \"hexbytes>=0.1.0,<0.4.0\",\n \"jsonschema>=4.0.0\",\n \"protobuf>=4.21.6\",\n", "issue": "Web3.isAddress doesn't work for non prefixed checksumed values\n* Version: 4.0.0b11\r\n* Python: 3.6\r\n* OS: linux\r\n\r\n### What was wrong?\r\n\r\nAs stated in the docs http://web3py.readthedocs.io/en/latest/overview.html#Web3.isAddress the function Web3.isAddress(value) should **allow both 0x prefixed and non prefixed values**.\r\n\r\nIf the address is not checksumed, it's ok not to have the **0x**:\r\n\r\n```\r\n>>> Web3.isAddress('d3cda913deb6f67967b99d67acdfa1712c293601')\r\n>>> True\r\n```\r\n\r\nBut if it's checksumed\r\n\r\n```\r\n>>> Web3.isAddress('d3CdA913deB6f67967B99D67aCDFa1712C293601')\r\n>>> False\r\n```\r\n\r\nNo problem if we add the **0x**:\r\n\r\n```\r\n>>> Web3.isAddress('0xd3CdA913deB6f67967B99D67aCDFa1712C293601')\r\n>>> True\r\n```\r\n\r\n### How can it be fixed?\r\n\r\nChanging the documentation to state that checksumed addresses must have 0x or changing the function to accept checksumed addresses with 0x. I would just remove 0x at the beginning of the function (if found) and work with the address as that. \r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n \"tester\": [\n \"eth-tester[py-evm]==v0.10.0-b.1\",\n \"py-geth>=4.1.0\",\n ],\n \"linter\": [\n \"black>=22.1.0\",\n \"flake8==3.8.3\",\n \"isort>=5.11.0\",\n \"mypy==1.4.1\",\n \"types-setuptools>=57.4.4\",\n \"types-requests>=2.26.1\",\n \"types-protobuf==3.19.13\",\n ],\n \"docs\": [\n \"sphinx>=5.3.0\",\n \"sphinx_rtd_theme>=1.0.0\",\n \"towncrier>=21,<22\",\n ],\n \"dev\": [\n \"bumpversion\",\n \"flaky>=3.7.0\",\n \"hypothesis>=3.31.2\",\n \"importlib-metadata<5.0;python_version<'3.8'\",\n \"pytest>=7.0.0\",\n \"pytest-asyncio>=0.18.1,<0.23\",\n \"pytest-mock>=1.10\",\n \"pytest-watch>=4.2\",\n \"pytest-xdist>=1.29\",\n \"setuptools>=38.6.0\",\n \"tox>=3.18.0\",\n \"tqdm>4.32\",\n \"twine>=1.13\",\n \"when-changed>=0.3.0\",\n \"build>=0.9.0\",\n ],\n \"ipfs\": [\n \"ipfshttpclient==0.8.0a2\",\n ],\n}\n\nextras_require[\"dev\"] = (\n extras_require[\"tester\"]\n + extras_require[\"linter\"]\n + extras_require[\"docs\"]\n + extras_require[\"ipfs\"]\n + extras_require[\"dev\"]\n)\n\nwith open(\"./README.md\") as readme:\n long_description = readme.read()\n\nsetup(\n name=\"web3\",\n # *IMPORTANT*: Don't manually change the version here. 
Use the 'bumpversion' utility.\n version=\"6.14.0\",\n description=\"\"\"web3.py\"\"\",\n long_description_content_type=\"text/markdown\",\n long_description=long_description,\n author=\"The Ethereum Foundation\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ethereum/web3.py\",\n include_package_data=True,\n install_requires=[\n \"aiohttp>=3.7.4.post0\",\n \"eth-abi>=4.0.0\",\n \"eth-account>=0.8.0\",\n \"eth-hash[pycryptodome]>=0.5.1\",\n \"eth-typing>=3.0.0\",\n \"eth-utils>=2.1.0\",\n \"hexbytes>=0.1.0,<0.4.0\",\n \"jsonschema>=4.0.0\",\n \"protobuf>=4.21.6\",\n \"pydantic>=2.4.0\",\n \"pywin32>=223;platform_system=='Windows'\",\n \"requests>=2.16.0\",\n \"typing-extensions>=4.0.1\",\n \"websockets>=10.0.0\",\n \"pyunormalize>=15.0.0\",\n ],\n python_requires=\">=3.8\",\n extras_require=extras_require,\n py_modules=[\"web3\", \"ens\", \"ethpm\"],\n entry_points={\"pytest11\": [\"pytest_ethereum = web3.tools.pytest_ethereum.plugins\"]},\n license=\"MIT\",\n zip_safe=False,\n keywords=\"ethereum\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n package_data={\"web3\": [\"py.typed\"]},\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n \"tester\": [\n \"eth-tester[py-evm]==v0.10.0-b.3\",\n \"py-geth>=4.1.0\",\n ],\n \"linter\": [\n \"black>=22.1.0\",\n \"flake8==3.8.3\",\n \"isort>=5.11.0\",\n \"mypy==1.4.1\",\n \"types-setuptools>=57.4.4\",\n \"types-requests>=2.26.1\",\n \"types-protobuf==3.19.13\",\n ],\n \"docs\": [\n \"sphinx>=5.3.0\",\n \"sphinx_rtd_theme>=1.0.0\",\n \"towncrier>=21,<22\",\n ],\n \"dev\": [\n \"bumpversion\",\n \"flaky>=3.7.0\",\n \"hypothesis>=3.31.2\",\n \"importlib-metadata<5.0;python_version<'3.8'\",\n \"pytest>=7.0.0\",\n \"pytest-asyncio>=0.18.1,<0.23\",\n \"pytest-mock>=1.10\",\n \"pytest-watch>=4.2\",\n \"pytest-xdist>=1.29\",\n \"setuptools>=38.6.0\",\n \"tox>=3.18.0\",\n \"tqdm>4.32\",\n \"twine>=1.13\",\n \"when-changed>=0.3.0\",\n \"build>=0.9.0\",\n ],\n \"ipfs\": [\n \"ipfshttpclient==0.8.0a2\",\n ],\n}\n\nextras_require[\"dev\"] = (\n extras_require[\"tester\"]\n + extras_require[\"linter\"]\n + extras_require[\"docs\"]\n + extras_require[\"ipfs\"]\n + extras_require[\"dev\"]\n)\n\nwith open(\"./README.md\") as readme:\n long_description = readme.read()\n\nsetup(\n name=\"web3\",\n # *IMPORTANT*: Don't manually change the version here. 
Use the 'bumpversion' utility.\n version=\"6.14.0\",\n description=\"\"\"web3.py\"\"\",\n long_description_content_type=\"text/markdown\",\n long_description=long_description,\n author=\"The Ethereum Foundation\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ethereum/web3.py\",\n include_package_data=True,\n install_requires=[\n \"aiohttp>=3.7.4.post0\",\n \"eth-abi>=4.0.0\",\n \"eth-account>=0.8.0\",\n \"eth-hash[pycryptodome]>=0.5.1\",\n \"eth-typing>=3.0.0\",\n \"eth-utils>=4.0.0\",\n \"hexbytes>=0.1.0,<0.4.0\",\n \"jsonschema>=4.0.0\",\n \"protobuf>=4.21.6\",\n \"pydantic>=2.4.0\",\n \"pywin32>=223;platform_system=='Windows'\",\n \"requests>=2.16.0\",\n \"typing-extensions>=4.0.1\",\n \"websockets>=10.0.0\",\n \"pyunormalize>=15.0.0\",\n ],\n python_requires=\">=3.8\",\n extras_require=extras_require,\n py_modules=[\"web3\", \"ens\", \"ethpm\"],\n entry_points={\"pytest11\": [\"pytest_ethereum = web3.tools.pytest_ethereum.plugins\"]},\n license=\"MIT\",\n zip_safe=False,\n keywords=\"ethereum\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n package_data={\"web3\": [\"py.typed\"]},\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n ],\n)\n", "path": "setup.py"}]} | 1,781 | 219 |
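For context on the row above: although the issue targets `Web3.isAddress`, the golden diff resolves it indirectly by bumping `eth-utils` to `>=4.0.0`, which is where the address validation actually lives. The reporter's suggested normalization — tolerate a missing `0x` prefix before validating — could be sketched as below; `is_address_lenient` is a hypothetical helper, not part of web3.py, and it assumes `eth_utils.is_address` is importable (eth-utils ships as a web3.py dependency).

```python
from eth_utils import is_address  # assumed available alongside web3.py

def is_address_lenient(value) -> bool:
    """Accept both '0x'-prefixed and bare (possibly checksummed) addresses."""
    if isinstance(value, str) and not value.lower().startswith("0x"):
        value = "0x" + value  # re-add the prefix before delegating validation
    return is_address(value)

# Both spellings of the same checksummed address from the issue now validate.
assert is_address_lenient("0xd3CdA913deB6f67967B99D67aCDFa1712C293601")
assert is_address_lenient("d3CdA913deB6f67967B99D67aCDFa1712C293601")
```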
gh_patches_debug_17300 | rasdani/github-patches | git_diff | techmatters__terraso-backend-889 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
RuntimeWarning: DateTimeField Log.client_timestamp received a naive datetime
## Description
When running `make test`, many warnings of this form are observed:
```
/home/terraso/.local/lib/python3.11/site-packages/django/db/models/fields/__init__.py:1595: RuntimeWarning: DateTimeField Log.client_timestamp received a naive datetime (2023-07-11 22:39:48.700825) while time zone support is active.
warnings.warn(
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `terraso_backend/apps/audit_logs/services.py`
Content:
```
1 import typing
2 from datetime import datetime
3 from enum import Enum
4
5 from django.contrib.contenttypes.models import ContentType
6 from django.core.paginator import Paginator
7 from django.db import transaction
8 from django.db.models.query import QuerySet
9
10 from apps.core.models import User
11
12 from . import api, models
13
14 TEMPLATE = "{client_time} - {user} {action} {resource}"
15
16
17 class _AuditLogService:
18 """
19 AuditLogService implements the AuditLog protocol
20 """
21
22 def log(
23 self,
24 user: User,
25 action: api.ACTIONS,
26 resource: object,
27 metadata: typing.Optional[dict[str, any]] = None,
28 client_time: typing.Optional[datetime] = None,
29 ) -> None:
30 """
31 log logs an action performed by a user on a resource
32 example:
33 log(user, "create", resource, client_time=1234567890)
34 :param client_time:
35 :param metadata:
36 :param action:
37 :param user:
38 :type resource: object
39
40 """
41 if not hasattr(user, "id"):
42 raise ValueError("Invalid user")
43
44 get_user_readable = getattr(user, "human_readable", None)
45 user_readable = get_user_readable() if callable(get_user_readable) else user.full_name()
46
47 if not isinstance(action, Enum) or not hasattr(models.Events, action.value):
48 raise ValueError("Invalid action")
49
50 resource_id = resource.id if hasattr(resource, "id") else None
51 if resource_id is None:
52 raise ValueError("Invalid resource")
53
54 get_resource_human_readable = getattr(resource, "human_readable", None)
55 if callable(get_resource_human_readable):
56 resource_human_readable = get_resource_human_readable()
57 else:
58 resource_human_readable = resource_id
59
60 content_type = ContentType.objects.get_for_model(resource)
61 resource_obj = resource
62
63 resource_repr = resource.__dict__.__str__()
64
65 if metadata is None:
66 metadata = {}
67
68 with transaction.atomic():
69 log = models.Log(
70 user=user,
71 event=action.value,
72 resource_id=resource_id,
73 resource_content_type=content_type,
74 resource_object=resource_obj,
75 resource_json_repr=resource_repr,
76 resource_human_readable=str(resource_human_readable),
77 user_human_readable=str(user_readable),
78 )
79
80 if client_time is None:
81 client_time = datetime.now()
82 log.client_timestamp = client_time
83
84 log.metadata = metadata
85 log.save()
86
87
88 class LogData:
89 """
90 LazyPaginator implements the Paginator protocol
91 """
92
93 def __init__(self, data: QuerySet):
94 self.data = data
95
96 def get_paginator(self, page_size: int = 10):
97 return Paginator(self.data, page_size)
98
99 def __len__(self):
100 return len(self.data)
101
102 def __iter__(self):
103 return iter(self.data)
104
105
106 def new_audit_logger() -> api.AuditLog:
107 """
108 new_audit_logger creates a new audit log
109 """
110 return _AuditLogService()
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/terraso_backend/apps/audit_logs/services.py b/terraso_backend/apps/audit_logs/services.py
--- a/terraso_backend/apps/audit_logs/services.py
+++ b/terraso_backend/apps/audit_logs/services.py
@@ -2,6 +2,7 @@
from datetime import datetime
from enum import Enum
+from django.conf import settings
from django.contrib.contenttypes.models import ContentType
from django.core.paginator import Paginator
from django.db import transaction
@@ -79,7 +80,12 @@
if client_time is None:
client_time = datetime.now()
- log.client_timestamp = client_time
+ if settings.USE_TZ:
+ from django.utils.timezone import make_aware
+
+ log.client_timestamp = make_aware(client_time)
+ else:
+ log.client_timestamp = client_time
log.metadata = metadata
log.save()
| {"golden_diff": "diff --git a/terraso_backend/apps/audit_logs/services.py b/terraso_backend/apps/audit_logs/services.py\n--- a/terraso_backend/apps/audit_logs/services.py\n+++ b/terraso_backend/apps/audit_logs/services.py\n@@ -2,6 +2,7 @@\n from datetime import datetime\n from enum import Enum\n \n+from django.conf import settings\n from django.contrib.contenttypes.models import ContentType\n from django.core.paginator import Paginator\n from django.db import transaction\n@@ -79,7 +80,12 @@\n \n if client_time is None:\n client_time = datetime.now()\n- log.client_timestamp = client_time\n+ if settings.USE_TZ:\n+ from django.utils.timezone import make_aware\n+\n+ log.client_timestamp = make_aware(client_time)\n+ else:\n+ log.client_timestamp = client_time\n \n log.metadata = metadata\n log.save()\n", "issue": "RuntimeWarning: DateTimeField Log.client_timestamp received a naive datetime\n## Description\r\nWhen running `make test`, many warnings of this form are observed:\r\n```\r\n /home/terraso/.local/lib/python3.11/site-packages/django/db/models/fields/__init__.py:1595: RuntimeWarning: DateTimeField Log.client_timestamp received a naive datetime (2023-07-11 22:39:48.700825) while time zone support is active.\r\n warnings.warn(\r\n```\n", "before_files": [{"content": "import typing\nfrom datetime import datetime\nfrom enum import Enum\n\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.paginator import Paginator\nfrom django.db import transaction\nfrom django.db.models.query import QuerySet\n\nfrom apps.core.models import User\n\nfrom . import api, models\n\nTEMPLATE = \"{client_time} - {user} {action} {resource}\"\n\n\nclass _AuditLogService:\n \"\"\"\n AuditLogService implements the AuditLog protocol\n \"\"\"\n\n def log(\n self,\n user: User,\n action: api.ACTIONS,\n resource: object,\n metadata: typing.Optional[dict[str, any]] = None,\n client_time: typing.Optional[datetime] = None,\n ) -> None:\n \"\"\"\n log logs an action performed by a user on a resource\n example:\n log(user, \"create\", resource, client_time=1234567890)\n :param client_time:\n :param metadata:\n :param action:\n :param user:\n :type resource: object\n\n \"\"\"\n if not hasattr(user, \"id\"):\n raise ValueError(\"Invalid user\")\n\n get_user_readable = getattr(user, \"human_readable\", None)\n user_readable = get_user_readable() if callable(get_user_readable) else user.full_name()\n\n if not isinstance(action, Enum) or not hasattr(models.Events, action.value):\n raise ValueError(\"Invalid action\")\n\n resource_id = resource.id if hasattr(resource, \"id\") else None\n if resource_id is None:\n raise ValueError(\"Invalid resource\")\n\n get_resource_human_readable = getattr(resource, \"human_readable\", None)\n if callable(get_resource_human_readable):\n resource_human_readable = get_resource_human_readable()\n else:\n resource_human_readable = resource_id\n\n content_type = ContentType.objects.get_for_model(resource)\n resource_obj = resource\n\n resource_repr = resource.__dict__.__str__()\n\n if metadata is None:\n metadata = {}\n\n with transaction.atomic():\n log = models.Log(\n user=user,\n event=action.value,\n resource_id=resource_id,\n resource_content_type=content_type,\n resource_object=resource_obj,\n resource_json_repr=resource_repr,\n resource_human_readable=str(resource_human_readable),\n user_human_readable=str(user_readable),\n )\n\n if client_time is None:\n client_time = datetime.now()\n log.client_timestamp = client_time\n\n log.metadata = metadata\n 
log.save()\n\n\nclass LogData:\n \"\"\"\n LazyPaginator implements the Paginator protocol\n \"\"\"\n\n def __init__(self, data: QuerySet):\n self.data = data\n\n def get_paginator(self, page_size: int = 10):\n return Paginator(self.data, page_size)\n\n def __len__(self):\n return len(self.data)\n\n def __iter__(self):\n return iter(self.data)\n\n\ndef new_audit_logger() -> api.AuditLog:\n \"\"\"\n new_audit_logger creates a new audit log\n \"\"\"\n return _AuditLogService()\n", "path": "terraso_backend/apps/audit_logs/services.py"}], "after_files": [{"content": "import typing\nfrom datetime import datetime\nfrom enum import Enum\n\nfrom django.conf import settings\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.paginator import Paginator\nfrom django.db import transaction\nfrom django.db.models.query import QuerySet\n\nfrom apps.core.models import User\n\nfrom . import api, models\n\nTEMPLATE = \"{client_time} - {user} {action} {resource}\"\n\n\nclass _AuditLogService:\n \"\"\"\n AuditLogService implements the AuditLog protocol\n \"\"\"\n\n def log(\n self,\n user: User,\n action: api.ACTIONS,\n resource: object,\n metadata: typing.Optional[dict[str, any]] = None,\n client_time: typing.Optional[datetime] = None,\n ) -> None:\n \"\"\"\n log logs an action performed by a user on a resource\n example:\n log(user, \"create\", resource, client_time=1234567890)\n :param client_time:\n :param metadata:\n :param action:\n :param user:\n :type resource: object\n\n \"\"\"\n if not hasattr(user, \"id\"):\n raise ValueError(\"Invalid user\")\n\n get_user_readable = getattr(user, \"human_readable\", None)\n user_readable = get_user_readable() if callable(get_user_readable) else user.full_name()\n\n if not isinstance(action, Enum) or not hasattr(models.Events, action.value):\n raise ValueError(\"Invalid action\")\n\n resource_id = resource.id if hasattr(resource, \"id\") else None\n if resource_id is None:\n raise ValueError(\"Invalid resource\")\n\n get_resource_human_readable = getattr(resource, \"human_readable\", None)\n if callable(get_resource_human_readable):\n resource_human_readable = get_resource_human_readable()\n else:\n resource_human_readable = resource_id\n\n content_type = ContentType.objects.get_for_model(resource)\n resource_obj = resource\n\n resource_repr = resource.__dict__.__str__()\n\n if metadata is None:\n metadata = {}\n\n with transaction.atomic():\n log = models.Log(\n user=user,\n event=action.value,\n resource_id=resource_id,\n resource_content_type=content_type,\n resource_object=resource_obj,\n resource_json_repr=resource_repr,\n resource_human_readable=str(resource_human_readable),\n user_human_readable=str(user_readable),\n )\n\n if client_time is None:\n client_time = datetime.now()\n if settings.USE_TZ:\n from django.utils.timezone import make_aware\n\n log.client_timestamp = make_aware(client_time)\n else:\n log.client_timestamp = client_time\n\n log.metadata = metadata\n log.save()\n\n\nclass LogData:\n \"\"\"\n LazyPaginator implements the Paginator protocol\n \"\"\"\n\n def __init__(self, data: QuerySet):\n self.data = data\n\n def get_paginator(self, page_size: int = 10):\n return Paginator(self.data, page_size)\n\n def __len__(self):\n return len(self.data)\n\n def __iter__(self):\n return iter(self.data)\n\n\ndef new_audit_logger() -> api.AuditLog:\n \"\"\"\n new_audit_logger creates a new audit log\n \"\"\"\n return _AuditLogService()\n", "path": "terraso_backend/apps/audit_logs/services.py"}]} | 1,256 | 198 |
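For context on the row above: the warning fires whenever a naive `datetime` is assigned to a `DateTimeField` while `USE_TZ` is active, and the patch wraps the value with `make_aware` in exactly that case. Below is a standalone sketch of the same pattern; the `coerce_client_time` helper is hypothetical (the repository inlines this logic inside `log`) and assumes a configured Django settings module.

```python
import typing
from datetime import datetime

from django.conf import settings
from django.utils.timezone import make_aware

def coerce_client_time(client_time: typing.Optional[datetime]) -> datetime:
    """Return a timestamp that is safe to assign to a DateTimeField."""
    if client_time is None:
        client_time = datetime.now()
    if settings.USE_TZ and client_time.tzinfo is None:
        # Attach the current time zone so Django stops warning about
        # naive datetimes while time zone support is enabled.
        client_time = make_aware(client_time)
    return client_time
```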
gh_patches_debug_10615 | rasdani/github-patches | git_diff | pandas-dev__pandas-14007 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DEPR: deprecate SparseList
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pandas/sparse/list.py`
Content:
```
1 import numpy as np
2 from pandas.core.base import PandasObject
3 from pandas.formats.printing import pprint_thing
4
5 from pandas.types.common import is_scalar
6 from pandas.sparse.array import SparseArray
7 import pandas._sparse as splib
8
9
10 class SparseList(PandasObject):
11
12 """
13 Data structure for accumulating data to be converted into a
14 SparseArray. Has similar API to the standard Python list
15
16 Parameters
17 ----------
18 data : scalar or array-like
19 fill_value : scalar, default NaN
20 """
21
22 def __init__(self, data=None, fill_value=np.nan):
23 self.fill_value = fill_value
24 self._chunks = []
25
26 if data is not None:
27 self.append(data)
28
29 def __unicode__(self):
30 contents = '\n'.join(repr(c) for c in self._chunks)
31 return '%s\n%s' % (object.__repr__(self), pprint_thing(contents))
32
33 def __len__(self):
34 return sum(len(c) for c in self._chunks)
35
36 def __getitem__(self, i):
37 if i < 0:
38 if i + len(self) < 0: # pragma: no cover
39 raise ValueError('%d out of range' % i)
40 i += len(self)
41
42 passed = 0
43 j = 0
44 while i >= passed + len(self._chunks[j]):
45 passed += len(self._chunks[j])
46 j += 1
47 return self._chunks[j][i - passed]
48
49 def __setitem__(self, i, value):
50 raise NotImplementedError
51
52 @property
53 def nchunks(self):
54 return len(self._chunks)
55
56 @property
57 def is_consolidated(self):
58 return self.nchunks == 1
59
60 def consolidate(self, inplace=True):
61 """
62 Internally consolidate chunks of data
63
64 Parameters
65 ----------
66 inplace : boolean, default True
67 Modify the calling object instead of constructing a new one
68
69 Returns
70 -------
71 splist : SparseList
72 If inplace=False, new object, otherwise reference to existing
73 object
74 """
75 if not inplace:
76 result = self.copy()
77 else:
78 result = self
79
80 if result.is_consolidated:
81 return result
82
83 result._consolidate_inplace()
84 return result
85
86 def _consolidate_inplace(self):
87 new_values = np.concatenate([c.sp_values for c in self._chunks])
88 new_index = _concat_sparse_indexes([c.sp_index for c in self._chunks])
89 new_arr = SparseArray(new_values, sparse_index=new_index,
90 fill_value=self.fill_value)
91 self._chunks = [new_arr]
92
93 def copy(self):
94 """
95 Return copy of the list
96
97 Returns
98 -------
99 new_list : SparseList
100 """
101 new_splist = SparseList(fill_value=self.fill_value)
102 new_splist._chunks = list(self._chunks)
103 return new_splist
104
105 def to_array(self):
106 """
107 Return SparseArray from data stored in the SparseList
108
109 Returns
110 -------
111 sparr : SparseArray
112 """
113 self.consolidate(inplace=True)
114 return self._chunks[0]
115
116 def append(self, value):
117 """
118 Append element or array-like chunk of data to the SparseList
119
120 Parameters
121 ----------
122 value: scalar or array-like
123 """
124 if is_scalar(value):
125 value = [value]
126
127 sparr = SparseArray(value, fill_value=self.fill_value)
128 self._chunks.append(sparr)
129 self._consolidated = False
130
131
132 def _concat_sparse_indexes(indexes):
133 all_indices = []
134 total_length = 0
135
136 for index in indexes:
137 # increment by offset
138 inds = index.to_int_index().indices + total_length
139
140 all_indices.append(inds)
141 total_length += index.length
142
143 return splib.IntIndex(total_length, np.concatenate(all_indices))
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pandas/sparse/list.py b/pandas/sparse/list.py
--- a/pandas/sparse/list.py
+++ b/pandas/sparse/list.py
@@ -1,3 +1,4 @@
+import warnings
import numpy as np
from pandas.core.base import PandasObject
from pandas.formats.printing import pprint_thing
@@ -20,6 +21,11 @@
"""
def __init__(self, data=None, fill_value=np.nan):
+
+ # see gh-13784
+ warnings.warn("SparseList is deprecated and will be removed "
+ "in a future version", FutureWarning, stacklevel=2)
+
self.fill_value = fill_value
self._chunks = []
| {"golden_diff": "diff --git a/pandas/sparse/list.py b/pandas/sparse/list.py\n--- a/pandas/sparse/list.py\n+++ b/pandas/sparse/list.py\n@@ -1,3 +1,4 @@\n+import warnings\n import numpy as np\n from pandas.core.base import PandasObject\n from pandas.formats.printing import pprint_thing\n@@ -20,6 +21,11 @@\n \"\"\"\n \n def __init__(self, data=None, fill_value=np.nan):\n+\n+ # see gh-13784\n+ warnings.warn(\"SparseList is deprecated and will be removed \"\n+ \"in a future version\", FutureWarning, stacklevel=2)\n+\n self.fill_value = fill_value\n self._chunks = []\n", "issue": "DEPR: deprecate SparseList\n\n", "before_files": [{"content": "import numpy as np\nfrom pandas.core.base import PandasObject\nfrom pandas.formats.printing import pprint_thing\n\nfrom pandas.types.common import is_scalar\nfrom pandas.sparse.array import SparseArray\nimport pandas._sparse as splib\n\n\nclass SparseList(PandasObject):\n\n \"\"\"\n Data structure for accumulating data to be converted into a\n SparseArray. Has similar API to the standard Python list\n\n Parameters\n ----------\n data : scalar or array-like\n fill_value : scalar, default NaN\n \"\"\"\n\n def __init__(self, data=None, fill_value=np.nan):\n self.fill_value = fill_value\n self._chunks = []\n\n if data is not None:\n self.append(data)\n\n def __unicode__(self):\n contents = '\\n'.join(repr(c) for c in self._chunks)\n return '%s\\n%s' % (object.__repr__(self), pprint_thing(contents))\n\n def __len__(self):\n return sum(len(c) for c in self._chunks)\n\n def __getitem__(self, i):\n if i < 0:\n if i + len(self) < 0: # pragma: no cover\n raise ValueError('%d out of range' % i)\n i += len(self)\n\n passed = 0\n j = 0\n while i >= passed + len(self._chunks[j]):\n passed += len(self._chunks[j])\n j += 1\n return self._chunks[j][i - passed]\n\n def __setitem__(self, i, value):\n raise NotImplementedError\n\n @property\n def nchunks(self):\n return len(self._chunks)\n\n @property\n def is_consolidated(self):\n return self.nchunks == 1\n\n def consolidate(self, inplace=True):\n \"\"\"\n Internally consolidate chunks of data\n\n Parameters\n ----------\n inplace : boolean, default True\n Modify the calling object instead of constructing a new one\n\n Returns\n -------\n splist : SparseList\n If inplace=False, new object, otherwise reference to existing\n object\n \"\"\"\n if not inplace:\n result = self.copy()\n else:\n result = self\n\n if result.is_consolidated:\n return result\n\n result._consolidate_inplace()\n return result\n\n def _consolidate_inplace(self):\n new_values = np.concatenate([c.sp_values for c in self._chunks])\n new_index = _concat_sparse_indexes([c.sp_index for c in self._chunks])\n new_arr = SparseArray(new_values, sparse_index=new_index,\n fill_value=self.fill_value)\n self._chunks = [new_arr]\n\n def copy(self):\n \"\"\"\n Return copy of the list\n\n Returns\n -------\n new_list : SparseList\n \"\"\"\n new_splist = SparseList(fill_value=self.fill_value)\n new_splist._chunks = list(self._chunks)\n return new_splist\n\n def to_array(self):\n \"\"\"\n Return SparseArray from data stored in the SparseList\n\n Returns\n -------\n sparr : SparseArray\n \"\"\"\n self.consolidate(inplace=True)\n return self._chunks[0]\n\n def append(self, value):\n \"\"\"\n Append element or array-like chunk of data to the SparseList\n\n Parameters\n ----------\n value: scalar or array-like\n \"\"\"\n if is_scalar(value):\n value = [value]\n\n sparr = SparseArray(value, fill_value=self.fill_value)\n self._chunks.append(sparr)\n self._consolidated = 
False\n\n\ndef _concat_sparse_indexes(indexes):\n all_indices = []\n total_length = 0\n\n for index in indexes:\n # increment by offset\n inds = index.to_int_index().indices + total_length\n\n all_indices.append(inds)\n total_length += index.length\n\n return splib.IntIndex(total_length, np.concatenate(all_indices))\n", "path": "pandas/sparse/list.py"}], "after_files": [{"content": "import warnings\nimport numpy as np\nfrom pandas.core.base import PandasObject\nfrom pandas.formats.printing import pprint_thing\n\nfrom pandas.types.common import is_scalar\nfrom pandas.sparse.array import SparseArray\nimport pandas._sparse as splib\n\n\nclass SparseList(PandasObject):\n\n \"\"\"\n Data structure for accumulating data to be converted into a\n SparseArray. Has similar API to the standard Python list\n\n Parameters\n ----------\n data : scalar or array-like\n fill_value : scalar, default NaN\n \"\"\"\n\n def __init__(self, data=None, fill_value=np.nan):\n\n # see gh-13784\n warnings.warn(\"SparseList is deprecated and will be removed \"\n \"in a future version\", FutureWarning, stacklevel=2)\n\n self.fill_value = fill_value\n self._chunks = []\n\n if data is not None:\n self.append(data)\n\n def __unicode__(self):\n contents = '\\n'.join(repr(c) for c in self._chunks)\n return '%s\\n%s' % (object.__repr__(self), pprint_thing(contents))\n\n def __len__(self):\n return sum(len(c) for c in self._chunks)\n\n def __getitem__(self, i):\n if i < 0:\n if i + len(self) < 0: # pragma: no cover\n raise ValueError('%d out of range' % i)\n i += len(self)\n\n passed = 0\n j = 0\n while i >= passed + len(self._chunks[j]):\n passed += len(self._chunks[j])\n j += 1\n return self._chunks[j][i - passed]\n\n def __setitem__(self, i, value):\n raise NotImplementedError\n\n @property\n def nchunks(self):\n return len(self._chunks)\n\n @property\n def is_consolidated(self):\n return self.nchunks == 1\n\n def consolidate(self, inplace=True):\n \"\"\"\n Internally consolidate chunks of data\n\n Parameters\n ----------\n inplace : boolean, default True\n Modify the calling object instead of constructing a new one\n\n Returns\n -------\n splist : SparseList\n If inplace=False, new object, otherwise reference to existing\n object\n \"\"\"\n if not inplace:\n result = self.copy()\n else:\n result = self\n\n if result.is_consolidated:\n return result\n\n result._consolidate_inplace()\n return result\n\n def _consolidate_inplace(self):\n new_values = np.concatenate([c.sp_values for c in self._chunks])\n new_index = _concat_sparse_indexes([c.sp_index for c in self._chunks])\n new_arr = SparseArray(new_values, sparse_index=new_index,\n fill_value=self.fill_value)\n self._chunks = [new_arr]\n\n def copy(self):\n \"\"\"\n Return copy of the list\n\n Returns\n -------\n new_list : SparseList\n \"\"\"\n new_splist = SparseList(fill_value=self.fill_value)\n new_splist._chunks = list(self._chunks)\n return new_splist\n\n def to_array(self):\n \"\"\"\n Return SparseArray from data stored in the SparseList\n\n Returns\n -------\n sparr : SparseArray\n \"\"\"\n self.consolidate(inplace=True)\n return self._chunks[0]\n\n def append(self, value):\n \"\"\"\n Append element or array-like chunk of data to the SparseList\n\n Parameters\n ----------\n value: scalar or array-like\n \"\"\"\n if is_scalar(value):\n value = [value]\n\n sparr = SparseArray(value, fill_value=self.fill_value)\n self._chunks.append(sparr)\n self._consolidated = False\n\n\ndef _concat_sparse_indexes(indexes):\n all_indices = []\n total_length = 0\n\n for index in 
indexes:\n # increment by offset\n inds = index.to_int_index().indices + total_length\n\n all_indices.append(inds)\n total_length += index.length\n\n return splib.IntIndex(total_length, np.concatenate(all_indices))\n", "path": "pandas/sparse/list.py"}]} | 1,450 | 164 |
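For context on the row above: the patch follows the usual pandas deprecation recipe — emit a `FutureWarning` from `__init__` with `stacklevel=2` so the message points at the caller's line rather than at the constructor itself. A self-contained sketch of that recipe (the `Deprecated` class is illustrative only):

```python
import warnings

class Deprecated:
    def __init__(self):
        warnings.warn(
            "Deprecated is deprecated and will be removed in a future version",
            FutureWarning,
            stacklevel=2,  # attribute the warning to the calling frame
        )

if __name__ == "__main__":
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        Deprecated()
    assert caught and issubclass(caught[0].category, FutureWarning)
```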
gh_patches_debug_17785 | rasdani/github-patches | git_diff | cisagov__manage.get.gov-1717 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Clean up test noise (that includes EPP and migration scripts)
### Issue description
Right now if you run the test suite locally or see the output from GitHub, there are a lot of added prints and logs that make it hard to troubleshoot where your particular error is coming from. This ticket is to clean up test noise in general, including EPP and migration scripts.
### Acceptance criteria
- [ ] unnecessary prints/logs on tests are removed
### Additional context
_No response_
### Links to other issues
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/epplibwrapper/utility/pool.py`
Content:
```
1 import logging
2 from typing import List
3 import gevent
4 from geventconnpool import ConnectionPool
5 from epplibwrapper.socket import Socket
6 from epplibwrapper.utility.pool_error import PoolError, PoolErrorCodes
7
8 try:
9 from epplib.commands import Hello
10 from epplib.exceptions import TransportError
11 except ImportError:
12 pass
13
14 from gevent.lock import BoundedSemaphore
15 from collections import deque
16
17 logger = logging.getLogger(__name__)
18
19
20 class EPPConnectionPool(ConnectionPool):
21 """A connection pool for EPPLib.
22
23 Args:
24 client (Client): The client
25 login (commands.Login): Login creds
26 options (dict): Options for the ConnectionPool
27 base class
28 """
29
30 def __init__(self, client, login, options: dict):
31 # For storing shared credentials
32 self._client = client
33 self._login = login
34
35 # Keep track of each greenlet
36 self.greenlets: List[gevent.Greenlet] = []
37
38 # Define optional pool settings.
39 # Kept in a dict so that the parent class,
40 # client.py, can maintain seperation/expandability
41 self.size = 1
42 if "size" in options:
43 self.size = options["size"]
44
45 self.exc_classes = tuple((TransportError,))
46 if "exc_classes" in options:
47 self.exc_classes = options["exc_classes"]
48
49 self.keepalive = None
50 if "keepalive" in options:
51 self.keepalive = options["keepalive"]
52
53 # Determines the period in which new
54 # gevent threads are spun up.
55 # This time period is in seconds. So for instance, .1 would be .1 seconds.
56 self.spawn_frequency = 0.1
57 if "spawn_frequency" in options:
58 self.spawn_frequency = options["spawn_frequency"]
59
60 self.conn: deque = deque()
61 self.lock = BoundedSemaphore(self.size)
62
63 self.populate_all_connections()
64
65 def _new_connection(self):
66 socket = self._create_socket(self._client, self._login)
67 try:
68 connection = socket.connect()
69 return connection
70 except Exception as err:
71 message = f"Failed to execute due to a registry error: {err}"
72 logger.error(message, exc_info=True)
73 # We want to raise a pool error rather than a LoginError here
74 # because if this occurs internally, we should handle this
75 # differently than we otherwise would for LoginError.
76 raise PoolError(code=PoolErrorCodes.NEW_CONNECTION_FAILED) from err
77
78 def _keepalive(self, c):
79 """Sends a command to the server to keep the connection alive."""
80 try:
81 # Sends a ping to the registry via EPPLib
82 c.send(Hello())
83 except Exception as err:
84 message = "Failed to keep the connection alive."
85 logger.error(message, exc_info=True)
86 raise PoolError(code=PoolErrorCodes.KEEP_ALIVE_FAILED) from err
87
88 def _create_socket(self, client, login) -> Socket:
89 """Creates and returns a socket instance"""
90 socket = Socket(client, login)
91 return socket
92
93 def get_connections(self):
94 """Returns the connection queue"""
95 return self.conn
96
97 def kill_all_connections(self):
98 """Kills all active connections in the pool."""
99 try:
100 if len(self.conn) > 0 or len(self.greenlets) > 0:
101 logger.info("Attempting to kill connections")
102 gevent.killall(self.greenlets)
103
104 self.greenlets.clear()
105 for connection in self.conn:
106 connection.disconnect()
107 self.conn.clear()
108
109 # Clear the semaphore
110 self.lock = BoundedSemaphore(self.size)
111 logger.info("Finished killing connections")
112 else:
113 logger.info("No connections to kill.")
114 except Exception as err:
115 logger.error("Could not kill all connections.")
116 raise PoolError(code=PoolErrorCodes.KILL_ALL_FAILED) from err
117
118 def populate_all_connections(self):
119 """Generates the connection pool.
120 If any connections exist, kill them first.
121 Based off of the __init__ definition for geventconnpool.
122 """
123 if len(self.conn) > 0 or len(self.greenlets) > 0:
124 self.kill_all_connections()
125
126 # Setup the lock
127 for i in range(self.size):
128 self.lock.acquire()
129
130 # Open multiple connections
131 for i in range(self.size):
132 self.greenlets.append(gevent.spawn_later(self.spawn_frequency * i, self._addOne))
133
134 # Open a "keepalive" thread if we want to ping open connections
135 if self.keepalive:
136 self.greenlets.append(gevent.spawn(self._keepalive_periodic))
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/epplibwrapper/utility/pool.py b/src/epplibwrapper/utility/pool.py
--- a/src/epplibwrapper/utility/pool.py
+++ b/src/epplibwrapper/utility/pool.py
@@ -85,6 +85,21 @@
logger.error(message, exc_info=True)
raise PoolError(code=PoolErrorCodes.KEEP_ALIVE_FAILED) from err
+ def _keepalive_periodic(self):
+ """Overriding _keepalive_periodic from geventconnpool so that PoolErrors
+ are properly handled, as opposed to printing to stdout"""
+ delay = float(self.keepalive) / self.size
+ while 1:
+ try:
+ with self.get() as c:
+ self._keepalive(c)
+ except PoolError as err:
+ logger.error(err.message, exc_info=True)
+ except self.exc_classes:
+ # Nothing to do, the pool will generate a new connection later
+ pass
+ gevent.sleep(delay)
+
def _create_socket(self, client, login) -> Socket:
"""Creates and returns a socket instance"""
socket = Socket(client, login)
| {"golden_diff": "diff --git a/src/epplibwrapper/utility/pool.py b/src/epplibwrapper/utility/pool.py\n--- a/src/epplibwrapper/utility/pool.py\n+++ b/src/epplibwrapper/utility/pool.py\n@@ -85,6 +85,21 @@\n logger.error(message, exc_info=True)\n raise PoolError(code=PoolErrorCodes.KEEP_ALIVE_FAILED) from err\n \n+ def _keepalive_periodic(self):\n+ \"\"\"Overriding _keepalive_periodic from geventconnpool so that PoolErrors\n+ are properly handled, as opposed to printing to stdout\"\"\"\n+ delay = float(self.keepalive) / self.size\n+ while 1:\n+ try:\n+ with self.get() as c:\n+ self._keepalive(c)\n+ except PoolError as err:\n+ logger.error(err.message, exc_info=True)\n+ except self.exc_classes:\n+ # Nothing to do, the pool will generate a new connection later\n+ pass\n+ gevent.sleep(delay)\n+\n def _create_socket(self, client, login) -> Socket:\n \"\"\"Creates and returns a socket instance\"\"\"\n socket = Socket(client, login)\n", "issue": "Clean up test noise (that includes EPP and migration scripts)\n### Issue description\r\n\r\nRight now if you run the test suite locally or see the output from github, there is a lot of added prints and logs that make it hard to troubleshoot where your particular error is coming from. This ticket is clean up test noise in general including EPP and migration scripts. \r\n\r\n\r\n\r\n### Acceptance criteria\r\n\r\n- [ ] unnecessary prints/logs on tests are removed\r\n\r\n### Additional context\r\n\r\n_No response_\r\n\r\n### Links to other issues\r\n\r\n_No response_\n", "before_files": [{"content": "import logging\nfrom typing import List\nimport gevent\nfrom geventconnpool import ConnectionPool\nfrom epplibwrapper.socket import Socket\nfrom epplibwrapper.utility.pool_error import PoolError, PoolErrorCodes\n\ntry:\n from epplib.commands import Hello\n from epplib.exceptions import TransportError\nexcept ImportError:\n pass\n\nfrom gevent.lock import BoundedSemaphore\nfrom collections import deque\n\nlogger = logging.getLogger(__name__)\n\n\nclass EPPConnectionPool(ConnectionPool):\n \"\"\"A connection pool for EPPLib.\n\n Args:\n client (Client): The client\n login (commands.Login): Login creds\n options (dict): Options for the ConnectionPool\n base class\n \"\"\"\n\n def __init__(self, client, login, options: dict):\n # For storing shared credentials\n self._client = client\n self._login = login\n\n # Keep track of each greenlet\n self.greenlets: List[gevent.Greenlet] = []\n\n # Define optional pool settings.\n # Kept in a dict so that the parent class,\n # client.py, can maintain seperation/expandability\n self.size = 1\n if \"size\" in options:\n self.size = options[\"size\"]\n\n self.exc_classes = tuple((TransportError,))\n if \"exc_classes\" in options:\n self.exc_classes = options[\"exc_classes\"]\n\n self.keepalive = None\n if \"keepalive\" in options:\n self.keepalive = options[\"keepalive\"]\n\n # Determines the period in which new\n # gevent threads are spun up.\n # This time period is in seconds. 
So for instance, .1 would be .1 seconds.\n self.spawn_frequency = 0.1\n if \"spawn_frequency\" in options:\n self.spawn_frequency = options[\"spawn_frequency\"]\n\n self.conn: deque = deque()\n self.lock = BoundedSemaphore(self.size)\n\n self.populate_all_connections()\n\n def _new_connection(self):\n socket = self._create_socket(self._client, self._login)\n try:\n connection = socket.connect()\n return connection\n except Exception as err:\n message = f\"Failed to execute due to a registry error: {err}\"\n logger.error(message, exc_info=True)\n # We want to raise a pool error rather than a LoginError here\n # because if this occurs internally, we should handle this\n # differently than we otherwise would for LoginError.\n raise PoolError(code=PoolErrorCodes.NEW_CONNECTION_FAILED) from err\n\n def _keepalive(self, c):\n \"\"\"Sends a command to the server to keep the connection alive.\"\"\"\n try:\n # Sends a ping to the registry via EPPLib\n c.send(Hello())\n except Exception as err:\n message = \"Failed to keep the connection alive.\"\n logger.error(message, exc_info=True)\n raise PoolError(code=PoolErrorCodes.KEEP_ALIVE_FAILED) from err\n\n def _create_socket(self, client, login) -> Socket:\n \"\"\"Creates and returns a socket instance\"\"\"\n socket = Socket(client, login)\n return socket\n\n def get_connections(self):\n \"\"\"Returns the connection queue\"\"\"\n return self.conn\n\n def kill_all_connections(self):\n \"\"\"Kills all active connections in the pool.\"\"\"\n try:\n if len(self.conn) > 0 or len(self.greenlets) > 0:\n logger.info(\"Attempting to kill connections\")\n gevent.killall(self.greenlets)\n\n self.greenlets.clear()\n for connection in self.conn:\n connection.disconnect()\n self.conn.clear()\n\n # Clear the semaphore\n self.lock = BoundedSemaphore(self.size)\n logger.info(\"Finished killing connections\")\n else:\n logger.info(\"No connections to kill.\")\n except Exception as err:\n logger.error(\"Could not kill all connections.\")\n raise PoolError(code=PoolErrorCodes.KILL_ALL_FAILED) from err\n\n def populate_all_connections(self):\n \"\"\"Generates the connection pool.\n If any connections exist, kill them first.\n Based off of the __init__ definition for geventconnpool.\n \"\"\"\n if len(self.conn) > 0 or len(self.greenlets) > 0:\n self.kill_all_connections()\n\n # Setup the lock\n for i in range(self.size):\n self.lock.acquire()\n\n # Open multiple connections\n for i in range(self.size):\n self.greenlets.append(gevent.spawn_later(self.spawn_frequency * i, self._addOne))\n\n # Open a \"keepalive\" thread if we want to ping open connections\n if self.keepalive:\n self.greenlets.append(gevent.spawn(self._keepalive_periodic))\n", "path": "src/epplibwrapper/utility/pool.py"}], "after_files": [{"content": "import logging\nfrom typing import List\nimport gevent\nfrom geventconnpool import ConnectionPool\nfrom epplibwrapper.socket import Socket\nfrom epplibwrapper.utility.pool_error import PoolError, PoolErrorCodes\n\ntry:\n from epplib.commands import Hello\n from epplib.exceptions import TransportError\nexcept ImportError:\n pass\n\nfrom gevent.lock import BoundedSemaphore\nfrom collections import deque\n\nlogger = logging.getLogger(__name__)\n\n\nclass EPPConnectionPool(ConnectionPool):\n \"\"\"A connection pool for EPPLib.\n\n Args:\n client (Client): The client\n login (commands.Login): Login creds\n options (dict): Options for the ConnectionPool\n base class\n \"\"\"\n\n def __init__(self, client, login, options: dict):\n # For storing shared credentials\n 
self._client = client\n self._login = login\n\n # Keep track of each greenlet\n self.greenlets: List[gevent.Greenlet] = []\n\n # Define optional pool settings.\n # Kept in a dict so that the parent class,\n # client.py, can maintain seperation/expandability\n self.size = 1\n if \"size\" in options:\n self.size = options[\"size\"]\n\n self.exc_classes = tuple((TransportError,))\n if \"exc_classes\" in options:\n self.exc_classes = options[\"exc_classes\"]\n\n self.keepalive = None\n if \"keepalive\" in options:\n self.keepalive = options[\"keepalive\"]\n\n # Determines the period in which new\n # gevent threads are spun up.\n # This time period is in seconds. So for instance, .1 would be .1 seconds.\n self.spawn_frequency = 0.1\n if \"spawn_frequency\" in options:\n self.spawn_frequency = options[\"spawn_frequency\"]\n\n self.conn: deque = deque()\n self.lock = BoundedSemaphore(self.size)\n\n self.populate_all_connections()\n\n def _new_connection(self):\n socket = self._create_socket(self._client, self._login)\n try:\n connection = socket.connect()\n return connection\n except Exception as err:\n message = f\"Failed to execute due to a registry error: {err}\"\n logger.error(message, exc_info=True)\n # We want to raise a pool error rather than a LoginError here\n # because if this occurs internally, we should handle this\n # differently than we otherwise would for LoginError.\n raise PoolError(code=PoolErrorCodes.NEW_CONNECTION_FAILED) from err\n\n def _keepalive(self, c):\n \"\"\"Sends a command to the server to keep the connection alive.\"\"\"\n try:\n # Sends a ping to the registry via EPPLib\n c.send(Hello())\n except Exception as err:\n message = \"Failed to keep the connection alive.\"\n logger.error(message, exc_info=True)\n raise PoolError(code=PoolErrorCodes.KEEP_ALIVE_FAILED) from err\n\n def _keepalive_periodic(self):\n \"\"\"Overriding _keepalive_periodic from geventconnpool so that PoolErrors\n are properly handled, as opposed to printing to stdout\"\"\"\n delay = float(self.keepalive) / self.size\n while 1:\n try:\n with self.get() as c:\n self._keepalive(c)\n except PoolError as err:\n logger.error(err.message, exc_info=True)\n except self.exc_classes:\n # Nothing to do, the pool will generate a new connection later\n pass\n gevent.sleep(delay)\n\n def _create_socket(self, client, login) -> Socket:\n \"\"\"Creates and returns a socket instance\"\"\"\n socket = Socket(client, login)\n return socket\n\n def get_connections(self):\n \"\"\"Returns the connection queue\"\"\"\n return self.conn\n\n def kill_all_connections(self):\n \"\"\"Kills all active connections in the pool.\"\"\"\n try:\n if len(self.conn) > 0 or len(self.greenlets) > 0:\n logger.info(\"Attempting to kill connections\")\n gevent.killall(self.greenlets)\n\n self.greenlets.clear()\n for connection in self.conn:\n connection.disconnect()\n self.conn.clear()\n\n # Clear the semaphore\n self.lock = BoundedSemaphore(self.size)\n logger.info(\"Finished killing connections\")\n else:\n logger.info(\"No connections to kill.\")\n except Exception as err:\n logger.error(\"Could not kill all connections.\")\n raise PoolError(code=PoolErrorCodes.KILL_ALL_FAILED) from err\n\n def populate_all_connections(self):\n \"\"\"Generates the connection pool.\n If any connections exist, kill them first.\n Based off of the __init__ definition for geventconnpool.\n \"\"\"\n if len(self.conn) > 0 or len(self.greenlets) > 0:\n self.kill_all_connections()\n\n # Setup the lock\n for i in range(self.size):\n self.lock.acquire()\n\n # Open 
multiple connections\n for i in range(self.size):\n self.greenlets.append(gevent.spawn_later(self.spawn_frequency * i, self._addOne))\n\n # Open a \"keepalive\" thread if we want to ping open connections\n if self.keepalive:\n self.greenlets.append(gevent.spawn(self._keepalive_periodic))\n", "path": "src/epplibwrapper/utility/pool.py"}]} | 1,681 | 256 |
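For context on the row above: the test noise came from geventconnpool's stock `_keepalive_periodic`, which prints exceptions to stdout, so the patch overrides it to route `PoolError` through the logger instead. A framework-free sketch of that control flow — `get_connection` and `ping` are placeholder callables, and the bounded loop stands in for the library's infinite one:

```python
import logging
import time

logger = logging.getLogger(__name__)

class PoolError(Exception):
    """Stand-in for the EPP wrapper's PoolError."""

def keepalive_loop(get_connection, ping, delay: float, iterations: int = 3):
    """Ping a pooled connection periodically, logging instead of printing."""
    for _ in range(iterations):  # the real override loops forever
        try:
            ping(get_connection())
        except PoolError:
            logger.error("keepalive failed", exc_info=True)  # logged, not printed
        time.sleep(delay)
```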
gh_patches_debug_10114 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-982 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Mockserver server.py does not work in Python 3
```mockserver_1 | Traceback (most recent call last):
mockserver_1 | File "../server.py", line 5, in <module>
mockserver_1 | from SimpleHTTPServer import SimpleHTTPRequestHandler, BaseHTTPServer
mockserver_1 | ModuleNotFoundError: No module named 'SimpleHTTPServer'
```
Looks like some modules have been reorganized in Python 3. Hopefully this is just a matter of updating the imports.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mockserver/server.py`
Content:
```
1 #! /usr/bin/env python
2
3 # Usage: python __file__.py <port>
4
5 from SimpleHTTPServer import SimpleHTTPRequestHandler, BaseHTTPServer
6
7 class CORSRequestHandler(SimpleHTTPRequestHandler):
8 def do_OPTIONS(self):
9 self.send_response(200, 'OK')
10 self.end_headers()
11
12 def end_headers(self):
13 self.send_header('Access-Control-Allow-Origin', '*')
14 self.send_header('Access-Control-Allow-Headers', 'x-request-timestamp, x-signature, electricitymap-token')
15 SimpleHTTPRequestHandler.end_headers(self)
16
17 if __name__ == '__main__':
18 BaseHTTPServer.test(CORSRequestHandler, BaseHTTPServer.HTTPServer)
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mockserver/server.py b/mockserver/server.py
--- a/mockserver/server.py
+++ b/mockserver/server.py
@@ -2,7 +2,7 @@
# Usage: python __file__.py <port>
-from SimpleHTTPServer import SimpleHTTPRequestHandler, BaseHTTPServer
+from http.server import SimpleHTTPRequestHandler, HTTPServer, test
class CORSRequestHandler(SimpleHTTPRequestHandler):
def do_OPTIONS(self):
@@ -15,4 +15,4 @@
SimpleHTTPRequestHandler.end_headers(self)
if __name__ == '__main__':
- BaseHTTPServer.test(CORSRequestHandler, BaseHTTPServer.HTTPServer)
+ test(CORSRequestHandler, HTTPServer)
| {"golden_diff": "diff --git a/mockserver/server.py b/mockserver/server.py\n--- a/mockserver/server.py\n+++ b/mockserver/server.py\n@@ -2,7 +2,7 @@\n \n # Usage: python __file__.py <port>\n \n-from SimpleHTTPServer import SimpleHTTPRequestHandler, BaseHTTPServer\n+from http.server import SimpleHTTPRequestHandler, HTTPServer, test\n \n class CORSRequestHandler(SimpleHTTPRequestHandler):\n def do_OPTIONS(self):\n@@ -15,4 +15,4 @@\n SimpleHTTPRequestHandler.end_headers(self)\n \n if __name__ == '__main__':\n- BaseHTTPServer.test(CORSRequestHandler, BaseHTTPServer.HTTPServer)\n+ test(CORSRequestHandler, HTTPServer)\n", "issue": "Mockserver server.py does not work in Python 3\n```mockserver_1 | Traceback (most recent call last):\r\nmockserver_1 | File \"../server.py\", line 5, in <module>\r\nmockserver_1 | from SimpleHTTPServer import SimpleHTTPRequestHandler, BaseHTTPServer\r\nmockserver_1 | ModuleNotFoundError: No module named 'SimpleHTTPServer'\r\n```\r\nLooks like some modules have been reorganized in Python 3. Hopefully this is just a matter of updating the imports.\n", "before_files": [{"content": "#! /usr/bin/env python\n\n# Usage: python __file__.py <port>\n\nfrom SimpleHTTPServer import SimpleHTTPRequestHandler, BaseHTTPServer\n\nclass CORSRequestHandler(SimpleHTTPRequestHandler):\n def do_OPTIONS(self):\n self.send_response(200, 'OK')\n self.end_headers()\n\n def end_headers(self):\n self.send_header('Access-Control-Allow-Origin', '*')\n self.send_header('Access-Control-Allow-Headers', 'x-request-timestamp, x-signature, electricitymap-token')\n SimpleHTTPRequestHandler.end_headers(self)\n\nif __name__ == '__main__':\n BaseHTTPServer.test(CORSRequestHandler, BaseHTTPServer.HTTPServer)\n", "path": "mockserver/server.py"}], "after_files": [{"content": "#! /usr/bin/env python\n\n# Usage: python __file__.py <port>\n\nfrom http.server import SimpleHTTPRequestHandler, HTTPServer, test\n\nclass CORSRequestHandler(SimpleHTTPRequestHandler):\n def do_OPTIONS(self):\n self.send_response(200, 'OK')\n self.end_headers()\n\n def end_headers(self):\n self.send_header('Access-Control-Allow-Origin', '*')\n self.send_header('Access-Control-Allow-Headers', 'x-request-timestamp, x-signature, electricitymap-token')\n SimpleHTTPRequestHandler.end_headers(self)\n\nif __name__ == '__main__':\n test(CORSRequestHandler, HTTPServer)\n", "path": "mockserver/server.py"}]} | 541 | 150 |
gh_patches_debug_148 | rasdani/github-patches | git_diff | AUTOMATIC1111__stable-diffusion-webui-7583 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: VAE does not appear when clicking refresh button in models/VAE
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
Pressing the button to update the VAE list does not update the VAE list.
### Steps to reproduce the problem
1. Insert new VAE file to models/VAE
2. Press the Refresh VAE list button
### What should have happened?
The new VAE file should appear in the list
### Commit where the problem happens
Latest
### What platforms do you use to access the UI ?
_No response_
### What browsers do you use to access the UI ?
_No response_
### Command Line Arguments
```Shell
No
```
### List of extensions
No
### Console logs
```Shell
Nothing
```
### Additional information
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `modules/shared_items.py`
Content:
```
1
2
3 def realesrgan_models_names():
4 import modules.realesrgan_model
5 return [x.name for x in modules.realesrgan_model.get_realesrgan_models(None)]
6
7
8 def postprocessing_scripts():
9 import modules.scripts
10
11 return modules.scripts.scripts_postproc.scripts
12
13
14 def sd_vae_items():
15 import modules.sd_vae
16
17 return ["Automatic", "None"] + list(modules.sd_vae.vae_dict)
18
19
20 def refresh_vae_list():
21 import modules.sd_vae
22
23 return modules.sd_vae.refresh_vae_list
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/modules/shared_items.py b/modules/shared_items.py
--- a/modules/shared_items.py
+++ b/modules/shared_items.py
@@ -20,4 +20,4 @@
def refresh_vae_list():
import modules.sd_vae
- return modules.sd_vae.refresh_vae_list
+ return modules.sd_vae.refresh_vae_list()
| {"golden_diff": "diff --git a/modules/shared_items.py b/modules/shared_items.py\n--- a/modules/shared_items.py\n+++ b/modules/shared_items.py\n@@ -20,4 +20,4 @@\n def refresh_vae_list():\r\n import modules.sd_vae\r\n \r\n- return modules.sd_vae.refresh_vae_list\r\n+ return modules.sd_vae.refresh_vae_list()\n", "issue": "[Bug]: vae does not appear when clicking refresh button in models/VAE\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nPressing the button to update the VAE list does not update the VAE list.\n\n### Steps to reproduce the problem\n\n1. Insert new VAE file to models/VAE\r\n2. Press buttion Refresh VAE list \n\n### What should have happened?\n\nApprear new VAE file in list\n\n### Commit where the problem happens\n\nLastest\n\n### What platforms do you use to access the UI ?\n\n_No response_\n\n### What browsers do you use to access the UI ?\n\n_No response_\n\n### Command Line Arguments\n\n```Shell\nNo\n```\n\n\n### List of extensions\n\nNo\n\n### Console logs\n\n```Shell\nNothing\n```\n\n\n### Additional information\n\n_No response_\n", "before_files": [{"content": "\r\n\r\ndef realesrgan_models_names():\r\n import modules.realesrgan_model\r\n return [x.name for x in modules.realesrgan_model.get_realesrgan_models(None)]\r\n\r\n\r\ndef postprocessing_scripts():\r\n import modules.scripts\r\n\r\n return modules.scripts.scripts_postproc.scripts\r\n\r\n\r\ndef sd_vae_items():\r\n import modules.sd_vae\r\n\r\n return [\"Automatic\", \"None\"] + list(modules.sd_vae.vae_dict)\r\n\r\n\r\ndef refresh_vae_list():\r\n import modules.sd_vae\r\n\r\n return modules.sd_vae.refresh_vae_list\r\n", "path": "modules/shared_items.py"}], "after_files": [{"content": "\r\n\r\ndef realesrgan_models_names():\r\n import modules.realesrgan_model\r\n return [x.name for x in modules.realesrgan_model.get_realesrgan_models(None)]\r\n\r\n\r\ndef postprocessing_scripts():\r\n import modules.scripts\r\n\r\n return modules.scripts.scripts_postproc.scripts\r\n\r\n\r\ndef sd_vae_items():\r\n import modules.sd_vae\r\n\r\n return [\"Automatic\", \"None\"] + list(modules.sd_vae.vae_dict)\r\n\r\n\r\ndef refresh_vae_list():\r\n import modules.sd_vae\r\n\r\n return modules.sd_vae.refresh_vae_list()\r\n", "path": "modules/shared_items.py"}]} | 610 | 78 |
gh_patches_debug_23129 | rasdani/github-patches | git_diff | saleor__saleor-1567 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove or block impersonate's "list" and "search" urls
Saleor uses [django-impersonate](https://bitbucket.org/petersanchez/django-impersonate/overview) for its client impersonation feature. While working on #1549 I found that, in addition to the two views we are using (start and stop impersonating a user), the library brings two additional views that we don't really want to support:
https://demo.getsaleor.com/impersonate/list/
https://demo.getsaleor.com/impersonate/search/?q=admin (note: this one 500's on link)
Ideally, library would've provided us with a settings to disable those views, but this isn't the case.
So its worth asking ourselves what harm is there in keeping those views around, and if we really want to get rid of those two views, how would we go about it?
Looking at [impersonate.urls](https://bitbucket.org/petersanchez/django-impersonate/src/f898c697b2bd9945187f8667d680e6d10d06dc33/impersonate/urls.py?at=default&fileviewer=file-view-default), it may be as simple as updating our `urls.py` to explicitly define `impersonate-start` and `impersonate-stop`, or perhaps we should open the issue upstream and see what the library's author thinks about it?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/urls.py`
Content:
```
1 from django.conf import settings
2 from django.conf.urls import url, include
3 from django.conf.urls.static import static
4 from django.contrib.sitemaps.views import sitemap
5 from django.contrib.staticfiles.views import serve
6 from django.views.i18n import JavaScriptCatalog
7 from graphene_django.views import GraphQLView
8
9 from .cart.urls import urlpatterns as cart_urls
10 from .checkout.urls import urlpatterns as checkout_urls
11 from .core.sitemaps import sitemaps
12 from .core.urls import urlpatterns as core_urls
13 from .dashboard.urls import urlpatterns as dashboard_urls
14 from .data_feeds.urls import urlpatterns as feed_urls
15 from .order.urls import urlpatterns as order_urls
16 from .product.urls import urlpatterns as product_urls
17 from .registration.urls import urlpatterns as registration_urls
18 from .search.urls import urlpatterns as search_urls
19 from .userprofile.urls import urlpatterns as userprofile_urls
20
21 urlpatterns = [
22 url(r'^', include(core_urls)),
23 url(r'^account/', include(registration_urls)),
24 url(r'^cart/', include((cart_urls, 'cart'), namespace='cart')),
25 url(r'^checkout/',
26 include((checkout_urls, 'checkout'), namespace='checkout')),
27 url(r'^dashboard/',
28 include((dashboard_urls, 'dashboard'), namespace='dashboard')),
29 url(r'^graphql', GraphQLView.as_view(graphiql=settings.DEBUG)),
30 url(r'^impersonate/', include('impersonate.urls')),
31 url(r'^jsi18n/$', JavaScriptCatalog.as_view(), name='javascript-catalog'),
32 url(r'^order/', include((order_urls, 'order'), namespace='order')),
33 url(r'^products/',
34 include((product_urls, 'product'), namespace='product')),
35 url(r'^profile/',
36 include((userprofile_urls, 'profile'), namespace='profile')),
37 url(r'^feeds/',
38 include((feed_urls, 'data_feeds'), namespace='data_feeds')),
39 url(r'^search/', include((search_urls, 'search'), namespace='search')),
40 url(r'^sitemap\.xml$', sitemap, {'sitemaps': sitemaps},
41 name='django.contrib.sitemaps.views.sitemap'),
42 url(r'', include('payments.urls')),
43 url('', include('social_django.urls', namespace='social')),
44 ]
45
46 if settings.DEBUG:
47 # static files (images, css, javascript, etc.)
48 urlpatterns += [
49 url(r'^static/(?P<path>.*)$', serve)
50 ] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/urls.py b/saleor/urls.py
--- a/saleor/urls.py
+++ b/saleor/urls.py
@@ -5,6 +5,7 @@
from django.contrib.staticfiles.views import serve
from django.views.i18n import JavaScriptCatalog
from graphene_django.views import GraphQLView
+from impersonate.views import impersonate, stop_impersonate
from .cart.urls import urlpatterns as cart_urls
from .checkout.urls import urlpatterns as checkout_urls
@@ -27,7 +28,8 @@
url(r'^dashboard/',
include((dashboard_urls, 'dashboard'), namespace='dashboard')),
url(r'^graphql', GraphQLView.as_view(graphiql=settings.DEBUG)),
- url(r'^impersonate/', include('impersonate.urls')),
+ url(r'^impersonate/stop/$', stop_impersonate, name='impersonate-stop'),
+ url(r'^impersonate/(?P<uid>\d+)/$', impersonate, name='impersonate-start'),
url(r'^jsi18n/$', JavaScriptCatalog.as_view(), name='javascript-catalog'),
url(r'^order/', include((order_urls, 'order'), namespace='order')),
url(r'^products/',
| {"golden_diff": "diff --git a/saleor/urls.py b/saleor/urls.py\n--- a/saleor/urls.py\n+++ b/saleor/urls.py\n@@ -5,6 +5,7 @@\n from django.contrib.staticfiles.views import serve\n from django.views.i18n import JavaScriptCatalog\n from graphene_django.views import GraphQLView\n+from impersonate.views import impersonate, stop_impersonate\n \n from .cart.urls import urlpatterns as cart_urls\n from .checkout.urls import urlpatterns as checkout_urls\n@@ -27,7 +28,8 @@\n url(r'^dashboard/',\n include((dashboard_urls, 'dashboard'), namespace='dashboard')),\n url(r'^graphql', GraphQLView.as_view(graphiql=settings.DEBUG)),\n- url(r'^impersonate/', include('impersonate.urls')),\n+ url(r'^impersonate/stop/$', stop_impersonate, name='impersonate-stop'),\n+ url(r'^impersonate/(?P<uid>\\d+)/$', impersonate, name='impersonate-start'),\n url(r'^jsi18n/$', JavaScriptCatalog.as_view(), name='javascript-catalog'),\n url(r'^order/', include((order_urls, 'order'), namespace='order')),\n url(r'^products/',\n", "issue": "Remove or block impersonate's \"list\" and \"search\" urls\nSaleor uses the [django-impersonate](https://bitbucket.org/petersanchez/django-impersonate/overview) for client impersonation feature. While working on #1549 I've found out that in addition to two views that we are using (start and stop impersonating the user), the library brings additional two views that we don't really want to support:\r\n\r\nhttps://demo.getsaleor.com/impersonate/list/\r\nhttps://demo.getsaleor.com/impersonate/search/?q=admin (note: this one 500's on link)\r\n\r\nIdeally, library would've provided us with a settings to disable those views, but this isn't the case.\r\n\r\nSo its worth asking ourselves what harm is there in keeping those views around, and if we really want to get rid of those two views, how would we go about it?\r\n\r\nLooking at the [imersonate.urls](https://bitbucket.org/petersanchez/django-impersonate/src/f898c697b2bd9945187f8667d680e6d10d06dc33/impersonate/urls.py?at=default&fileviewer=file-view-default), it may be as simple as updating our `urls.py` to explictly define `impersonate-start` and `impersonate-stop`, or perhaps we should open the issue upstream and see what library's author thinks about it?\r\n \n", "before_files": [{"content": "from django.conf import settings\nfrom django.conf.urls import url, include\nfrom django.conf.urls.static import static\nfrom django.contrib.sitemaps.views import sitemap\nfrom django.contrib.staticfiles.views import serve\nfrom django.views.i18n import JavaScriptCatalog\nfrom graphene_django.views import GraphQLView\n\nfrom .cart.urls import urlpatterns as cart_urls\nfrom .checkout.urls import urlpatterns as checkout_urls\nfrom .core.sitemaps import sitemaps\nfrom .core.urls import urlpatterns as core_urls\nfrom .dashboard.urls import urlpatterns as dashboard_urls\nfrom .data_feeds.urls import urlpatterns as feed_urls\nfrom .order.urls import urlpatterns as order_urls\nfrom .product.urls import urlpatterns as product_urls\nfrom .registration.urls import urlpatterns as registration_urls\nfrom .search.urls import urlpatterns as search_urls\nfrom .userprofile.urls import urlpatterns as userprofile_urls\n\nurlpatterns = [\n url(r'^', include(core_urls)),\n url(r'^account/', include(registration_urls)),\n url(r'^cart/', include((cart_urls, 'cart'), namespace='cart')),\n url(r'^checkout/',\n include((checkout_urls, 'checkout'), namespace='checkout')),\n url(r'^dashboard/',\n include((dashboard_urls, 'dashboard'), namespace='dashboard')),\n url(r'^graphql', 
GraphQLView.as_view(graphiql=settings.DEBUG)),\n url(r'^impersonate/', include('impersonate.urls')),\n url(r'^jsi18n/$', JavaScriptCatalog.as_view(), name='javascript-catalog'),\n url(r'^order/', include((order_urls, 'order'), namespace='order')),\n url(r'^products/',\n include((product_urls, 'product'), namespace='product')),\n url(r'^profile/',\n include((userprofile_urls, 'profile'), namespace='profile')),\n url(r'^feeds/',\n include((feed_urls, 'data_feeds'), namespace='data_feeds')),\n url(r'^search/', include((search_urls, 'search'), namespace='search')),\n url(r'^sitemap\\.xml$', sitemap, {'sitemaps': sitemaps},\n name='django.contrib.sitemaps.views.sitemap'),\n url(r'', include('payments.urls')),\n url('', include('social_django.urls', namespace='social')),\n]\n\nif settings.DEBUG:\n # static files (images, css, javascript, etc.)\n urlpatterns += [\n url(r'^static/(?P<path>.*)$', serve)\n ] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)\n", "path": "saleor/urls.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.conf.urls import url, include\nfrom django.conf.urls.static import static\nfrom django.contrib.sitemaps.views import sitemap\nfrom django.contrib.staticfiles.views import serve\nfrom django.views.i18n import JavaScriptCatalog\nfrom graphene_django.views import GraphQLView\nfrom impersonate.views import impersonate, stop_impersonate\n\nfrom .cart.urls import urlpatterns as cart_urls\nfrom .checkout.urls import urlpatterns as checkout_urls\nfrom .core.sitemaps import sitemaps\nfrom .core.urls import urlpatterns as core_urls\nfrom .dashboard.urls import urlpatterns as dashboard_urls\nfrom .data_feeds.urls import urlpatterns as feed_urls\nfrom .order.urls import urlpatterns as order_urls\nfrom .product.urls import urlpatterns as product_urls\nfrom .registration.urls import urlpatterns as registration_urls\nfrom .search.urls import urlpatterns as search_urls\nfrom .userprofile.urls import urlpatterns as userprofile_urls\n\nurlpatterns = [\n url(r'^', include(core_urls)),\n url(r'^account/', include(registration_urls)),\n url(r'^cart/', include((cart_urls, 'cart'), namespace='cart')),\n url(r'^checkout/',\n include((checkout_urls, 'checkout'), namespace='checkout')),\n url(r'^dashboard/',\n include((dashboard_urls, 'dashboard'), namespace='dashboard')),\n url(r'^graphql', GraphQLView.as_view(graphiql=settings.DEBUG)),\n url(r'^impersonate/stop/$', stop_impersonate, name='impersonate-stop'),\n url(r'^impersonate/(?P<uid>\\d+)/$', impersonate, name='impersonate-start'),\n url(r'^jsi18n/$', JavaScriptCatalog.as_view(), name='javascript-catalog'),\n url(r'^order/', include((order_urls, 'order'), namespace='order')),\n url(r'^products/',\n include((product_urls, 'product'), namespace='product')),\n url(r'^profile/',\n include((userprofile_urls, 'profile'), namespace='profile')),\n url(r'^feeds/',\n include((feed_urls, 'data_feeds'), namespace='data_feeds')),\n url(r'^search/', include((search_urls, 'search'), namespace='search')),\n url(r'^sitemap\\.xml$', sitemap, {'sitemaps': sitemaps},\n name='django.contrib.sitemaps.views.sitemap'),\n url(r'', include('payments.urls')),\n url('', include('social_django.urls', namespace='social')),\n]\n\nif settings.DEBUG:\n # static files (images, css, javascript, etc.)\n urlpatterns += [\n url(r'^static/(?P<path>.*)$', serve)\n ] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)\n", "path": "saleor/urls.py"}]} | 1,191 | 266 |
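The patch above sidesteps `include('impersonate.urls')` entirely and wires up only the two views Saleor supports; the library's list and search routes are simply never registered, so those paths fall through to normal 404 handling. A minimal sketch of the same idea (the view names follow django-impersonate's public API, as used in the golden diff):

```python
from django.conf.urls import url
from impersonate.views import impersonate, stop_impersonate

urlpatterns = [
    # Only the start/stop views are routed; /impersonate/list/ and
    # /impersonate/search/ no longer resolve.
    url(r'^impersonate/stop/$', stop_impersonate, name='impersonate-stop'),
    url(r'^impersonate/(?P<uid>\d+)/$', impersonate, name='impersonate-start'),
]
```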
gh_patches_debug_16174 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-6912 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Firestore] WriteBatch doesn't return instance so you cannot chain.
The WriteBatch methods don’t return the WriteBatch instances for chaining.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `firestore/google/cloud/firestore_v1beta1/batch.py`
Content:
```
1 # Copyright 2017 Google LLC All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Helpers for batch requests to the Google Cloud Firestore API."""
16
17
18 from google.cloud.firestore_v1beta1 import _helpers
19
20
21 class WriteBatch(object):
22 """Accumulate write operations to be sent in a batch.
23
24 This has the same set of methods for write operations that
25 :class:`~.firestore_v1beta1.document.DocumentReference` does,
26 e.g. :meth:`~.firestore_v1beta1.document.DocumentReference.create`.
27
28 Args:
29 client (~.firestore_v1beta1.client.Client): The client that
30 created this batch.
31 """
32
33 def __init__(self, client):
34 self._client = client
35 self._write_pbs = []
36
37 def _add_write_pbs(self, write_pbs):
38 """Add `Write`` protobufs to this transaction.
39
40 This method intended to be over-ridden by subclasses.
41
42 Args:
43 write_pbs (List[google.cloud.proto.firestore.v1beta1.\
44 write_pb2.Write]): A list of write protobufs to be added.
45 """
46 self._write_pbs.extend(write_pbs)
47
48 def create(self, reference, document_data):
49 """Add a "change" to this batch to create a document.
50
51 If the document given by ``reference`` already exists, then this
52 batch will fail when :meth:`commit`-ed.
53
54 Args:
55 reference (~.firestore_v1beta1.document.DocumentReference): A
56 document reference to be created in this batch.
57 document_data (dict): Property names and values to use for
58 creating a document.
59 """
60 write_pbs = _helpers.pbs_for_create(reference._document_path, document_data)
61 self._add_write_pbs(write_pbs)
62
63 def set(self, reference, document_data, merge=False):
64 """Add a "change" to replace a document.
65
66 See
67 :meth:`~.firestore_v1beta1.document.DocumentReference.set` for
68 more information on how ``option`` determines how the change is
69 applied.
70
71 Args:
72 reference (~.firestore_v1beta1.document.DocumentReference):
73 A document reference that will have values set in this batch.
74 document_data (dict):
75 Property names and values to use for replacing a document.
76 merge (Optional[bool] or Optional[List<apispec>]):
77 If True, apply merging instead of overwriting the state
78 of the document.
79 """
80 if merge is not False:
81 write_pbs = _helpers.pbs_for_set_with_merge(
82 reference._document_path, document_data, merge
83 )
84 else:
85 write_pbs = _helpers.pbs_for_set_no_merge(
86 reference._document_path, document_data
87 )
88
89 self._add_write_pbs(write_pbs)
90
91 def update(self, reference, field_updates, option=None):
92 """Add a "change" to update a document.
93
94 See
95 :meth:`~.firestore_v1beta1.document.DocumentReference.update` for
96 more information on ``field_updates`` and ``option``.
97
98 Args:
99 reference (~.firestore_v1beta1.document.DocumentReference): A
100 document reference that will be deleted in this batch.
101 field_updates (dict): Field names or paths to update and values
102 to update with.
103 option (Optional[~.firestore_v1beta1.client.WriteOption]): A
104 write option to make assertions / preconditions on the server
105 state of the document before applying changes.
106 """
107 if option.__class__.__name__ == "ExistsOption":
108 raise ValueError("you must not pass an explicit write option to " "update.")
109 write_pbs = _helpers.pbs_for_update(
110 reference._document_path, field_updates, option
111 )
112 self._add_write_pbs(write_pbs)
113
114 def delete(self, reference, option=None):
115 """Add a "change" to delete a document.
116
117 See
118 :meth:`~.firestore_v1beta1.document.DocumentReference.delete` for
119 more information on how ``option`` determines how the change is
120 applied.
121
122 Args:
123 reference (~.firestore_v1beta1.document.DocumentReference): A
124 document reference that will be deleted in this batch.
125 option (Optional[~.firestore_v1beta1.client.WriteOption]): A
126 write option to make assertions / preconditions on the server
127 state of the document before applying changes.
128 """
129 write_pb = _helpers.pb_for_delete(reference._document_path, option)
130 self._add_write_pbs([write_pb])
131
132 def commit(self):
133 """Commit the changes accumulated in this batch.
134
135 Returns:
136 List[google.cloud.proto.firestore.v1beta1.\
137 write_pb2.WriteResult, ...]: The write results corresponding
138 to the changes committed, returned in the same order as the
139 changes were applied to this batch. A write result contains an
140 ``update_time`` field.
141 """
142 commit_response = self._client._firestore_api.commit(
143 self._client._database_string,
144 self._write_pbs,
145 transaction=None,
146 metadata=self._client._rpc_metadata,
147 )
148
149 self._write_pbs = []
150 return list(commit_response.write_results)
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/firestore/google/cloud/firestore_v1beta1/batch.py b/firestore/google/cloud/firestore_v1beta1/batch.py
--- a/firestore/google/cloud/firestore_v1beta1/batch.py
+++ b/firestore/google/cloud/firestore_v1beta1/batch.py
@@ -33,6 +33,8 @@
def __init__(self, client):
self._client = client
self._write_pbs = []
+ self.write_results = None
+ self.commit_time = None
def _add_write_pbs(self, write_pbs):
"""Add `Write`` protobufs to this transaction.
@@ -147,4 +149,13 @@
)
self._write_pbs = []
- return list(commit_response.write_results)
+ self.write_results = results = list(commit_response.write_results)
+ self.commit_time = commit_response.commit_time
+ return results
+
+ def __enter__(self):
+ return self
+
+ def __exit__(self, exc_type, exc_value, traceback):
+ if exc_type is None:
+ self.commit()
| {"golden_diff": "diff --git a/firestore/google/cloud/firestore_v1beta1/batch.py b/firestore/google/cloud/firestore_v1beta1/batch.py\n--- a/firestore/google/cloud/firestore_v1beta1/batch.py\n+++ b/firestore/google/cloud/firestore_v1beta1/batch.py\n@@ -33,6 +33,8 @@\n def __init__(self, client):\n self._client = client\n self._write_pbs = []\n+ self.write_results = None\n+ self.commit_time = None\n \n def _add_write_pbs(self, write_pbs):\n \"\"\"Add `Write`` protobufs to this transaction.\n@@ -147,4 +149,13 @@\n )\n \n self._write_pbs = []\n- return list(commit_response.write_results)\n+ self.write_results = results = list(commit_response.write_results)\n+ self.commit_time = commit_response.commit_time\n+ return results\n+\n+ def __enter__(self):\n+ return self\n+\n+ def __exit__(self, exc_type, exc_value, traceback):\n+ if exc_type is None:\n+ self.commit()\n", "issue": "[Firestore] WriteBatch doesn't return instance so you cannot chain.\nThe WriteBatch methods don\u2019t return the WriteBatch instances for chaining.\r\n\n", "before_files": [{"content": "# Copyright 2017 Google LLC All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Helpers for batch requests to the Google Cloud Firestore API.\"\"\"\n\n\nfrom google.cloud.firestore_v1beta1 import _helpers\n\n\nclass WriteBatch(object):\n \"\"\"Accumulate write operations to be sent in a batch.\n\n This has the same set of methods for write operations that\n :class:`~.firestore_v1beta1.document.DocumentReference` does,\n e.g. 
:meth:`~.firestore_v1beta1.document.DocumentReference.create`.\n\n Args:\n client (~.firestore_v1beta1.client.Client): The client that\n created this batch.\n \"\"\"\n\n def __init__(self, client):\n self._client = client\n self._write_pbs = []\n\n def _add_write_pbs(self, write_pbs):\n \"\"\"Add `Write`` protobufs to this transaction.\n\n This method intended to be over-ridden by subclasses.\n\n Args:\n write_pbs (List[google.cloud.proto.firestore.v1beta1.\\\n write_pb2.Write]): A list of write protobufs to be added.\n \"\"\"\n self._write_pbs.extend(write_pbs)\n\n def create(self, reference, document_data):\n \"\"\"Add a \"change\" to this batch to create a document.\n\n If the document given by ``reference`` already exists, then this\n batch will fail when :meth:`commit`-ed.\n\n Args:\n reference (~.firestore_v1beta1.document.DocumentReference): A\n document reference to be created in this batch.\n document_data (dict): Property names and values to use for\n creating a document.\n \"\"\"\n write_pbs = _helpers.pbs_for_create(reference._document_path, document_data)\n self._add_write_pbs(write_pbs)\n\n def set(self, reference, document_data, merge=False):\n \"\"\"Add a \"change\" to replace a document.\n\n See\n :meth:`~.firestore_v1beta1.document.DocumentReference.set` for\n more information on how ``option`` determines how the change is\n applied.\n\n Args:\n reference (~.firestore_v1beta1.document.DocumentReference):\n A document reference that will have values set in this batch.\n document_data (dict):\n Property names and values to use for replacing a document.\n merge (Optional[bool] or Optional[List<apispec>]):\n If True, apply merging instead of overwriting the state\n of the document.\n \"\"\"\n if merge is not False:\n write_pbs = _helpers.pbs_for_set_with_merge(\n reference._document_path, document_data, merge\n )\n else:\n write_pbs = _helpers.pbs_for_set_no_merge(\n reference._document_path, document_data\n )\n\n self._add_write_pbs(write_pbs)\n\n def update(self, reference, field_updates, option=None):\n \"\"\"Add a \"change\" to update a document.\n\n See\n :meth:`~.firestore_v1beta1.document.DocumentReference.update` for\n more information on ``field_updates`` and ``option``.\n\n Args:\n reference (~.firestore_v1beta1.document.DocumentReference): A\n document reference that will be deleted in this batch.\n field_updates (dict): Field names or paths to update and values\n to update with.\n option (Optional[~.firestore_v1beta1.client.WriteOption]): A\n write option to make assertions / preconditions on the server\n state of the document before applying changes.\n \"\"\"\n if option.__class__.__name__ == \"ExistsOption\":\n raise ValueError(\"you must not pass an explicit write option to \" \"update.\")\n write_pbs = _helpers.pbs_for_update(\n reference._document_path, field_updates, option\n )\n self._add_write_pbs(write_pbs)\n\n def delete(self, reference, option=None):\n \"\"\"Add a \"change\" to delete a document.\n\n See\n :meth:`~.firestore_v1beta1.document.DocumentReference.delete` for\n more information on how ``option`` determines how the change is\n applied.\n\n Args:\n reference (~.firestore_v1beta1.document.DocumentReference): A\n document reference that will be deleted in this batch.\n option (Optional[~.firestore_v1beta1.client.WriteOption]): A\n write option to make assertions / preconditions on the server\n state of the document before applying changes.\n \"\"\"\n write_pb = _helpers.pb_for_delete(reference._document_path, option)\n 
self._add_write_pbs([write_pb])\n\n def commit(self):\n \"\"\"Commit the changes accumulated in this batch.\n\n Returns:\n List[google.cloud.proto.firestore.v1beta1.\\\n write_pb2.WriteResult, ...]: The write results corresponding\n to the changes committed, returned in the same order as the\n changes were applied to this batch. A write result contains an\n ``update_time`` field.\n \"\"\"\n commit_response = self._client._firestore_api.commit(\n self._client._database_string,\n self._write_pbs,\n transaction=None,\n metadata=self._client._rpc_metadata,\n )\n\n self._write_pbs = []\n return list(commit_response.write_results)\n", "path": "firestore/google/cloud/firestore_v1beta1/batch.py"}], "after_files": [{"content": "# Copyright 2017 Google LLC All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Helpers for batch requests to the Google Cloud Firestore API.\"\"\"\n\n\nfrom google.cloud.firestore_v1beta1 import _helpers\n\n\nclass WriteBatch(object):\n \"\"\"Accumulate write operations to be sent in a batch.\n\n This has the same set of methods for write operations that\n :class:`~.firestore_v1beta1.document.DocumentReference` does,\n e.g. :meth:`~.firestore_v1beta1.document.DocumentReference.create`.\n\n Args:\n client (~.firestore_v1beta1.client.Client): The client that\n created this batch.\n \"\"\"\n\n def __init__(self, client):\n self._client = client\n self._write_pbs = []\n self.write_results = None\n self.commit_time = None\n\n def _add_write_pbs(self, write_pbs):\n \"\"\"Add `Write`` protobufs to this transaction.\n\n This method intended to be over-ridden by subclasses.\n\n Args:\n write_pbs (List[google.cloud.proto.firestore.v1beta1.\\\n write_pb2.Write]): A list of write protobufs to be added.\n \"\"\"\n self._write_pbs.extend(write_pbs)\n\n def create(self, reference, document_data):\n \"\"\"Add a \"change\" to this batch to create a document.\n\n If the document given by ``reference`` already exists, then this\n batch will fail when :meth:`commit`-ed.\n\n Args:\n reference (~.firestore_v1beta1.document.DocumentReference): A\n document reference to be created in this batch.\n document_data (dict): Property names and values to use for\n creating a document.\n \"\"\"\n write_pbs = _helpers.pbs_for_create(reference._document_path, document_data)\n self._add_write_pbs(write_pbs)\n\n def set(self, reference, document_data, merge=False):\n \"\"\"Add a \"change\" to replace a document.\n\n See\n :meth:`~.firestore_v1beta1.document.DocumentReference.set` for\n more information on how ``option`` determines how the change is\n applied.\n\n Args:\n reference (~.firestore_v1beta1.document.DocumentReference):\n A document reference that will have values set in this batch.\n document_data (dict):\n Property names and values to use for replacing a document.\n merge (Optional[bool] or Optional[List<apispec>]):\n If True, apply merging instead of overwriting the state\n of the document.\n \"\"\"\n if merge is not False:\n write_pbs = 
_helpers.pbs_for_set_with_merge(\n reference._document_path, document_data, merge\n )\n else:\n write_pbs = _helpers.pbs_for_set_no_merge(\n reference._document_path, document_data\n )\n\n self._add_write_pbs(write_pbs)\n\n def update(self, reference, field_updates, option=None):\n \"\"\"Add a \"change\" to update a document.\n\n See\n :meth:`~.firestore_v1beta1.document.DocumentReference.update` for\n more information on ``field_updates`` and ``option``.\n\n Args:\n reference (~.firestore_v1beta1.document.DocumentReference): A\n document reference that will be deleted in this batch.\n field_updates (dict): Field names or paths to update and values\n to update with.\n option (Optional[~.firestore_v1beta1.client.WriteOption]): A\n write option to make assertions / preconditions on the server\n state of the document before applying changes.\n \"\"\"\n if option.__class__.__name__ == \"ExistsOption\":\n raise ValueError(\"you must not pass an explicit write option to \" \"update.\")\n write_pbs = _helpers.pbs_for_update(\n reference._document_path, field_updates, option\n )\n self._add_write_pbs(write_pbs)\n\n def delete(self, reference, option=None):\n \"\"\"Add a \"change\" to delete a document.\n\n See\n :meth:`~.firestore_v1beta1.document.DocumentReference.delete` for\n more information on how ``option`` determines how the change is\n applied.\n\n Args:\n reference (~.firestore_v1beta1.document.DocumentReference): A\n document reference that will be deleted in this batch.\n option (Optional[~.firestore_v1beta1.client.WriteOption]): A\n write option to make assertions / preconditions on the server\n state of the document before applying changes.\n \"\"\"\n write_pb = _helpers.pb_for_delete(reference._document_path, option)\n self._add_write_pbs([write_pb])\n\n def commit(self):\n \"\"\"Commit the changes accumulated in this batch.\n\n Returns:\n List[google.cloud.proto.firestore.v1beta1.\\\n write_pb2.WriteResult, ...]: The write results corresponding\n to the changes committed, returned in the same order as the\n changes were applied to this batch. A write result contains an\n ``update_time`` field.\n \"\"\"\n commit_response = self._client._firestore_api.commit(\n self._client._database_string,\n self._write_pbs,\n transaction=None,\n metadata=self._client._rpc_metadata,\n )\n\n self._write_pbs = []\n self.write_results = results = list(commit_response.write_results)\n self.commit_time = commit_response.commit_time\n return results\n\n def __enter__(self):\n return self\n\n def __exit__(self, exc_type, exc_value, traceback):\n if exc_type is None:\n self.commit()\n", "path": "firestore/google/cloud/firestore_v1beta1/batch.py"}]} | 1,902 | 250 |
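Note that the patch above answers the chaining request with a context manager rather than `return self`: `commit()` now stashes its results on the instance, and `__exit__` commits automatically on a clean exit. A hypothetical usage sketch (the client setup and document references are assumptions, not from the issue):

```python
from google.cloud import firestore_v1beta1 as firestore

client = firestore.Client()  # assumes default application credentials
users = client.collection('users')

batch = client.batch()
with batch:
    batch.create(users.document('ada'), {'name': 'Ada'})
    batch.delete(users.document('stale'))
# A clean exit triggered commit(); results persist on the instance.
print(batch.write_results, batch.commit_time)
```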
gh_patches_debug_16929 | rasdani/github-patches | git_diff | pulp__pulpcore-306 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix content app not showing file size for 0 byte files
fixes: #5100
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import find_packages, setup
2
3 with open('README.md') as f:
4 long_description = f.read()
5
6 requirements = [
7 'coreapi~=2.3.3',
8 'Django~=2.2.3', # LTS version, switch only if we have a compelling reason to
9 'django-filter~=2.2.0',
10 'djangorestframework~=3.10.2',
11 'djangorestframework-queryfields~=1.0.0',
12 'drf-nested-routers~=0.91.0',
13 'drf-yasg~=1.16.1',
14 'gunicorn~=19.9.0',
15 'packaging', # until drf-yasg 1.16.2 is out https://github.com/axnsan12/drf-yasg/issues/412
16 'PyYAML~=5.1.1',
17 'rq~=1.1.0',
18 'redis~=3.1.0',
19 'setuptools>=41.0.1,<41.3.0',
20 'dynaconf~=2.1.0',
21 'whitenoise~=4.1.3',
22 ]
23
24 setup(
25 name='pulpcore',
26 version='3.0.0rc6.dev',
27 description='Pulp Django Application and Related Modules',
28 long_description=long_description,
29 long_description_content_type="text/markdown",
30 license='GPLv2+',
31 packages=find_packages(exclude=['test']),
32 author='Pulp Team',
33 author_email='[email protected]',
34 url='http://www.pulpproject.org',
35 python_requires='>=3.6',
36 install_requires=requirements,
37 extras_require={
38 'postgres': ['psycopg2-binary'],
39 'mysql': ['mysqlclient']
40 },
41 include_package_data=True,
42 classifiers=(
43 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',
44 'Operating System :: POSIX :: Linux',
45 'Development Status :: 4 - Beta',
46 'Framework :: Django',
47 'Programming Language :: Python',
48 'Programming Language :: Python :: 3',
49 'Programming Language :: Python :: 3.6',
50 'Programming Language :: Python :: 3.7',
51 ),
52 scripts=['bin/pulp-content'],
53 )
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -13,6 +13,7 @@
'drf-yasg~=1.16.1',
'gunicorn~=19.9.0',
'packaging', # until drf-yasg 1.16.2 is out https://github.com/axnsan12/drf-yasg/issues/412
+ 'psycopg2-binary',
'PyYAML~=5.1.1',
'rq~=1.1.0',
'redis~=3.1.0',
@@ -34,10 +35,6 @@
url='http://www.pulpproject.org',
python_requires='>=3.6',
install_requires=requirements,
- extras_require={
- 'postgres': ['psycopg2-binary'],
- 'mysql': ['mysqlclient']
- },
include_package_data=True,
classifiers=(
'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -13,6 +13,7 @@\n 'drf-yasg~=1.16.1',\n 'gunicorn~=19.9.0',\n 'packaging', # until drf-yasg 1.16.2 is out https://github.com/axnsan12/drf-yasg/issues/412\n+ 'psycopg2-binary',\n 'PyYAML~=5.1.1',\n 'rq~=1.1.0',\n 'redis~=3.1.0',\n@@ -34,10 +35,6 @@\n url='http://www.pulpproject.org',\n python_requires='>=3.6',\n install_requires=requirements,\n- extras_require={\n- 'postgres': ['psycopg2-binary'],\n- 'mysql': ['mysqlclient']\n- },\n include_package_data=True,\n classifiers=(\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n", "issue": "Fix content app not showing file size for 0 byte files\nfixes: #5100\n", "before_files": [{"content": "from setuptools import find_packages, setup\n\nwith open('README.md') as f:\n long_description = f.read()\n\nrequirements = [\n 'coreapi~=2.3.3',\n 'Django~=2.2.3', # LTS version, switch only if we have a compelling reason to\n 'django-filter~=2.2.0',\n 'djangorestframework~=3.10.2',\n 'djangorestframework-queryfields~=1.0.0',\n 'drf-nested-routers~=0.91.0',\n 'drf-yasg~=1.16.1',\n 'gunicorn~=19.9.0',\n 'packaging', # until drf-yasg 1.16.2 is out https://github.com/axnsan12/drf-yasg/issues/412\n 'PyYAML~=5.1.1',\n 'rq~=1.1.0',\n 'redis~=3.1.0',\n 'setuptools>=41.0.1,<41.3.0',\n 'dynaconf~=2.1.0',\n 'whitenoise~=4.1.3',\n]\n\nsetup(\n name='pulpcore',\n version='3.0.0rc6.dev',\n description='Pulp Django Application and Related Modules',\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n license='GPLv2+',\n packages=find_packages(exclude=['test']),\n author='Pulp Team',\n author_email='[email protected]',\n url='http://www.pulpproject.org',\n python_requires='>=3.6',\n install_requires=requirements,\n extras_require={\n 'postgres': ['psycopg2-binary'],\n 'mysql': ['mysqlclient']\n },\n include_package_data=True,\n classifiers=(\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n 'Operating System :: POSIX :: Linux',\n 'Development Status :: 4 - Beta',\n 'Framework :: Django',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ),\n scripts=['bin/pulp-content'],\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import find_packages, setup\n\nwith open('README.md') as f:\n long_description = f.read()\n\nrequirements = [\n 'coreapi~=2.3.3',\n 'Django~=2.2.3', # LTS version, switch only if we have a compelling reason to\n 'django-filter~=2.2.0',\n 'djangorestframework~=3.10.2',\n 'djangorestframework-queryfields~=1.0.0',\n 'drf-nested-routers~=0.91.0',\n 'drf-yasg~=1.16.1',\n 'gunicorn~=19.9.0',\n 'packaging', # until drf-yasg 1.16.2 is out https://github.com/axnsan12/drf-yasg/issues/412\n 'psycopg2-binary',\n 'PyYAML~=5.1.1',\n 'rq~=1.1.0',\n 'redis~=3.1.0',\n 'setuptools>=41.0.1,<41.3.0',\n 'dynaconf~=2.1.0',\n 'whitenoise~=4.1.3',\n]\n\nsetup(\n name='pulpcore',\n version='3.0.0rc6.dev',\n description='Pulp Django Application and Related Modules',\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n license='GPLv2+',\n packages=find_packages(exclude=['test']),\n author='Pulp Team',\n author_email='[email protected]',\n url='http://www.pulpproject.org',\n python_requires='>=3.6',\n install_requires=requirements,\n include_package_data=True,\n classifiers=(\n 'License :: OSI Approved :: GNU General Public 
License v2 or later (GPLv2+)',\n 'Operating System :: POSIX :: Linux',\n 'Development Status :: 4 - Beta',\n 'Framework :: Django',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ),\n scripts=['bin/pulp-content'],\n)\n", "path": "setup.py"}]} | 891 | 241 |
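The change above relies on a basic setuptools distinction: `install_requires` entries are installed unconditionally, while `extras_require` entries are opt-in, so a plain `pip install pulpcore` previously pulled in no database driver at all. A minimal sketch of the two fields (package names are illustrative):

```python
from setuptools import setup

setup(
    name='example-app',
    version='0.1',
    # Installed by every `pip install example-app`.
    install_requires=['psycopg2-binary'],
    # Installed only on request: `pip install "example-app[mysql]"`.
    extras_require={'mysql': ['mysqlclient']},
)
```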
gh_patches_debug_23526 | rasdani/github-patches | git_diff | OpenMined__PySyft-3589 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
sy.grid.register() should print useful information
**Is your feature request related to a problem? Please describe.**
When registering a node on OpenGrid, we want to convey some information to the user using sys.stdout.write()
A few things we thought to add.
- Information: connecting to opengrid...etc.
- Information: Can I connect to the main grid node... graceful error message if you can't.
- Disclaimer: OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.
- Where to get Help:
- Join our slack (slack.openmined.org) and ask for help in the #lib_syft channel.
- File a Github Issue: https://github.com/OpenMined/PySyft and add the string "#opengrid" in the issue title.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `syft/grid/__init__.py`
Content:
```
1 from .network import Network
2 import uuid
3
4 DEFAULT_NETWORK_URL = "ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com"
5
6
7 def register(**kwargs):
8 """ Add this process as a new peer registering it in the grid network.
9
10 Returns:
11 peer: Peer Network instance.
12 """
13 if not kwargs:
14 args = args = {"max_size": None, "timeout": 444, "url": DEFAULT_NETWORK_URL}
15 else:
16 args = kwargs
17
18 peer_id = str(uuid.uuid4())
19 peer = Network(peer_id, **args)
20 peer.start()
21
22 return peer
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/syft/grid/__init__.py b/syft/grid/__init__.py
--- a/syft/grid/__init__.py
+++ b/syft/grid/__init__.py
@@ -1,4 +1,5 @@
from .network import Network
+import sys
import uuid
DEFAULT_NETWORK_URL = "ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com"
@@ -16,7 +17,32 @@
args = kwargs
peer_id = str(uuid.uuid4())
+ sys.stdout.write(
+ "Connecting to OpenGrid (" + "\033[94m" + DEFAULT_NETWORK_URL + "\033[0m" + ") ... "
+ )
peer = Network(peer_id, **args)
+
+ sys.stdout.write("\033[92m" + "OK" + "\033[0m" + "\n")
+ sys.stdout.write("Peer ID: " + peer_id + "\n")
+
+ sys.stdout.write(
+ "\033[93m" + "DISCLAIMER" + "\033[0m"
+ ":"
+ + "\033[1m"
+ + " OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.\n"
+ + "\033[0m"
+ )
+
+ sys.stdout.write("Where to get help: \n")
+ sys.stdout.write(
+ " - Join our slack (https://slack.openmined.org) and ask for help in the #lib_syft channel.\n"
+ )
+ sys.stdout.write(
+ " - File a Github Issue: https://github.com/OpenMined/PySyft and add the string '#opengrid' in the issue title.\n"
+ )
+ sys.stdout.write(
+ " - Want to join in our development team? Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6\n"
+ )
peer.start()
return peer
| {"golden_diff": "diff --git a/syft/grid/__init__.py b/syft/grid/__init__.py\n--- a/syft/grid/__init__.py\n+++ b/syft/grid/__init__.py\n@@ -1,4 +1,5 @@\n from .network import Network\n+import sys\n import uuid\n \n DEFAULT_NETWORK_URL = \"ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com\"\n@@ -16,7 +17,32 @@\n args = kwargs\n \n peer_id = str(uuid.uuid4())\n+ sys.stdout.write(\n+ \"Connecting to OpenGrid (\" + \"\\033[94m\" + DEFAULT_NETWORK_URL + \"\\033[0m\" + \") ... \"\n+ )\n peer = Network(peer_id, **args)\n+\n+ sys.stdout.write(\"\\033[92m\" + \"OK\" + \"\\033[0m\" + \"\\n\")\n+ sys.stdout.write(\"Peer ID: \" + peer_id + \"\\n\")\n+\n+ sys.stdout.write(\n+ \"\\033[93m\" + \"DISCLAIMER\" + \"\\033[0m\"\n+ \":\"\n+ + \"\\033[1m\"\n+ + \" OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.\\n\"\n+ + \"\\033[0m\"\n+ )\n+\n+ sys.stdout.write(\"Where to get help: \\n\")\n+ sys.stdout.write(\n+ \" - Join our slack (https://slack.openmined.org) and ask for help in the #lib_syft channel.\\n\"\n+ )\n+ sys.stdout.write(\n+ \" - File a Github Issue: https://github.com/OpenMined/PySyft and add the string '#opengrid' in the issue title.\\n\"\n+ )\n+ sys.stdout.write(\n+ \" - Want to join in our development team? Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6\\n\"\n+ )\n peer.start()\n \n return peer\n", "issue": "sy.grid.register() should print useful information\n**Is your feature request related to a problem? Please describe.**\r\nWhen registering a node on OpenGrid, we want to convey some information to the user using sys.stdout.write()\r\n\r\nA few things we thought to add.\r\n\r\n- Information: connecting to opengrid...etc.\r\n - Information: Can I connect to the main grid node... graceful error message if you can't.\r\n- Disclaimer: OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.\r\n- Where to get Help:\r\n - Join our slack (slack.openmined.org) and ask for help in the #lib_syft channel.\r\n - File a Github Issue: https://github.com/OpenMined/PySyft and add the string \"#opengrid\" in the issue title.\r\n \r\n\n", "before_files": [{"content": "from .network import Network\nimport uuid\n\nDEFAULT_NETWORK_URL = \"ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com\"\n\n\ndef register(**kwargs):\n \"\"\" Add this process as a new peer registering it in the grid network.\n \n Returns:\n peer: Peer Network instance.\n \"\"\"\n if not kwargs:\n args = args = {\"max_size\": None, \"timeout\": 444, \"url\": DEFAULT_NETWORK_URL}\n else:\n args = kwargs\n\n peer_id = str(uuid.uuid4())\n peer = Network(peer_id, **args)\n peer.start()\n\n return peer\n", "path": "syft/grid/__init__.py"}], "after_files": [{"content": "from .network import Network\nimport sys\nimport uuid\n\nDEFAULT_NETWORK_URL = \"ws://ec2-13-59-45-128.us-east-2.compute.amazonaws.com\"\n\n\ndef register(**kwargs):\n \"\"\" Add this process as a new peer registering it in the grid network.\n \n Returns:\n peer: Peer Network instance.\n \"\"\"\n if not kwargs:\n args = args = {\"max_size\": None, \"timeout\": 444, \"url\": DEFAULT_NETWORK_URL}\n else:\n args = kwargs\n\n peer_id = str(uuid.uuid4())\n sys.stdout.write(\n \"Connecting to OpenGrid (\" + \"\\033[94m\" + DEFAULT_NETWORK_URL + \"\\033[0m\" + \") ... 
\"\n )\n peer = Network(peer_id, **args)\n\n sys.stdout.write(\"\\033[92m\" + \"OK\" + \"\\033[0m\" + \"\\n\")\n sys.stdout.write(\"Peer ID: \" + peer_id + \"\\n\")\n\n sys.stdout.write(\n \"\\033[93m\" + \"DISCLAIMER\" + \"\\033[0m\"\n \":\"\n + \"\\033[1m\"\n + \" OpenGrid is an experimental feature currently in alpha. Do not use this to protect real-world data.\\n\"\n + \"\\033[0m\"\n )\n\n sys.stdout.write(\"Where to get help: \\n\")\n sys.stdout.write(\n \" - Join our slack (https://slack.openmined.org) and ask for help in the #lib_syft channel.\\n\"\n )\n sys.stdout.write(\n \" - File a Github Issue: https://github.com/OpenMined/PySyft and add the string '#opengrid' in the issue title.\\n\"\n )\n sys.stdout.write(\n \" - Want to join in our development team? Apply here: https://forms.gle/wcH1vxzvPyDSbSVW6\\n\"\n )\n peer.start()\n\n return peer\n", "path": "syft/grid/__init__.py"}]} | 615 | 468 |
gh_patches_debug_23054 | rasdani/github-patches | git_diff | scikit-hep__pyhf-862 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update PyPI keywords and classifiers in setup.py
# Description
As JAX is now a supported backend, it should additionally be added to the [list of keywords in `setup.py`](https://github.com/scikit-hep/pyhf/blob/917bd5127c1da023b279c076bb41614fbb859487/setup.py#L85). Additionally, the [classifiers](https://packaging.python.org/guides/distributing-packages-using-setuptools/#classifiers) should be updated as well to include a `Development Status`, `License`, `Intended Audience`, and `Topic`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_packages
2 from pathlib import Path
3
4 this_directory = Path(__file__).parent.resolve()
5 with open(Path(this_directory).joinpath('README.rst'), encoding='utf-8') as readme_rst:
6 long_description = readme_rst.read()
7
8 extras_require = {
9 'tensorflow': ['tensorflow~=2.0', 'tensorflow-probability~=0.8'],
10 'torch': ['torch~=1.2'],
11 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],
12 'xmlio': ['uproot'],
13 'minuit': ['iminuit'],
14 }
15 extras_require['backends'] = sorted(
16 set(
17 extras_require['tensorflow']
18 + extras_require['torch']
19 + extras_require['jax']
20 + extras_require['minuit']
21 )
22 )
23 extras_require['contrib'] = sorted(set(['matplotlib']))
24
25 extras_require['test'] = sorted(
26 set(
27 extras_require['backends']
28 + extras_require['xmlio']
29 + extras_require['contrib']
30 + [
31 'pyflakes',
32 'pytest~=3.5',
33 'pytest-cov>=2.5.1',
34 'pytest-mock',
35 'pytest-benchmark[histogram]',
36 'pytest-console-scripts',
37 'pytest-mpl',
38 'pydocstyle',
39 'coverage>=4.0', # coveralls
40 'papermill~=2.0',
41 'nteract-scrapbook~=0.2',
42 'check-manifest',
43 'jupyter',
44 'uproot~=3.3',
45 'graphviz',
46 'jsonpatch',
47 'black',
48 ]
49 )
50 )
51 extras_require['docs'] = sorted(
52 set(
53 [
54 'sphinx',
55 'sphinxcontrib-bibtex',
56 'sphinx-click',
57 'sphinx_rtd_theme',
58 'nbsphinx',
59 'ipywidgets',
60 'sphinx-issues',
61 'sphinx-copybutton>0.2.9',
62 ]
63 )
64 )
65 extras_require['develop'] = sorted(
66 set(
67 extras_require['docs']
68 + extras_require['test']
69 + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'twine']
70 )
71 )
72 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
73
74
75 setup(
76 name='pyhf',
77 version='0.4.1',
78 description='(partial) pure python histfactory implementation',
79 long_description=long_description,
80 long_description_content_type='text/x-rst',
81 url='https://github.com/scikit-hep/pyhf',
82 author='Lukas Heinrich, Matthew Feickert, Giordon Stark',
83 author_email='[email protected], [email protected], [email protected]',
84 license='Apache',
85 keywords='physics fitting numpy scipy tensorflow pytorch',
86 classifiers=[
87 "Programming Language :: Python :: 3",
88 "Programming Language :: Python :: 3.6",
89 "Programming Language :: Python :: 3.7",
90 "Programming Language :: Python :: 3.8",
91 ],
92 package_dir={'': 'src'},
93 packages=find_packages(where='src'),
94 include_package_data=True,
95 python_requires=">=3.6",
96 install_requires=[
97 'scipy', # requires numpy, which is required by pyhf and tensorflow
98 'click>=6.0', # for console scripts,
99 'tqdm', # for readxml
100 'jsonschema>=3.2.0', # for utils
101 'jsonpatch',
102 'pyyaml', # for parsing CLI equal-delimited options
103 ],
104 extras_require=extras_require,
105 entry_points={'console_scripts': ['pyhf=pyhf.cli:cli']},
106 dependency_links=[],
107 use_scm_version=lambda: {'local_scheme': lambda version: ''},
108 )
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -79,11 +79,21 @@
long_description=long_description,
long_description_content_type='text/x-rst',
url='https://github.com/scikit-hep/pyhf',
+ project_urls={
+ "Documentation": "https://scikit-hep.org/pyhf/",
+ "Source": "https://github.com/scikit-hep/pyhf",
+ "Tracker": "https://github.com/scikit-hep/pyhf/issues",
+ },
author='Lukas Heinrich, Matthew Feickert, Giordon Stark',
author_email='[email protected], [email protected], [email protected]',
license='Apache',
- keywords='physics fitting numpy scipy tensorflow pytorch',
+ keywords='physics fitting numpy scipy tensorflow pytorch jax',
classifiers=[
+ "Development Status :: 4 - Beta",
+ "License :: OSI Approved :: Apache Software License",
+ "Intended Audience :: Science/Research",
+ "Topic :: Scientific/Engineering",
+ "Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -79,11 +79,21 @@\n long_description=long_description,\n long_description_content_type='text/x-rst',\n url='https://github.com/scikit-hep/pyhf',\n+ project_urls={\n+ \"Documentation\": \"https://scikit-hep.org/pyhf/\",\n+ \"Source\": \"https://github.com/scikit-hep/pyhf\",\n+ \"Tracker\": \"https://github.com/scikit-hep/pyhf/issues\",\n+ },\n author='Lukas Heinrich, Matthew Feickert, Giordon Stark',\n author_email='[email protected], [email protected], [email protected]',\n license='Apache',\n- keywords='physics fitting numpy scipy tensorflow pytorch',\n+ keywords='physics fitting numpy scipy tensorflow pytorch jax',\n classifiers=[\n+ \"Development Status :: 4 - Beta\",\n+ \"License :: OSI Approved :: Apache Software License\",\n+ \"Intended Audience :: Science/Research\",\n+ \"Topic :: Scientific/Engineering\",\n+ \"Topic :: Scientific/Engineering :: Physics\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n", "issue": "Update PyPI keywords and classifies in setup.py\n# Description\r\n\r\nAs JAX is now a supported backend then it should additionally be added to the [list of keywords in `setup.py`](https://github.com/scikit-hep/pyhf/blob/917bd5127c1da023b279c076bb41614fbb859487/setup.py#L85). Additionally, the [classifies](https://packaging.python.org/guides/distributing-packages-using-setuptools/#classifiers) should be updated as well to include a `Development Status`, `License`, `Intended Audience`, and `Topic`.\n", "before_files": [{"content": "from setuptools import setup, find_packages\nfrom pathlib import Path\n\nthis_directory = Path(__file__).parent.resolve()\nwith open(Path(this_directory).joinpath('README.rst'), encoding='utf-8') as readme_rst:\n long_description = readme_rst.read()\n\nextras_require = {\n 'tensorflow': ['tensorflow~=2.0', 'tensorflow-probability~=0.8'],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot'],\n 'minuit': ['iminuit'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib']))\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + [\n 'pyflakes',\n 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'check-manifest',\n 'jupyter',\n 'uproot~=3.3',\n 'graphviz',\n 'jsonpatch',\n 'black',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['test']\n + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'twine']\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n name='pyhf',\n version='0.4.1',\n description='(partial) pure python histfactory implementation',\n long_description=long_description,\n long_description_content_type='text/x-rst',\n url='https://github.com/scikit-hep/pyhf',\n author='Lukas Heinrich, Matthew 
Feickert, Giordon Stark',\n author_email='[email protected], [email protected], [email protected]',\n license='Apache',\n keywords='physics fitting numpy scipy tensorflow pytorch',\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n package_dir={'': 'src'},\n packages=find_packages(where='src'),\n include_package_data=True,\n python_requires=\">=3.6\",\n install_requires=[\n 'scipy', # requires numpy, which is required by pyhf and tensorflow\n 'click>=6.0', # for console scripts,\n 'tqdm', # for readxml\n 'jsonschema>=3.2.0', # for utils\n 'jsonpatch',\n 'pyyaml', # for parsing CLI equal-delimited options\n ],\n extras_require=extras_require,\n entry_points={'console_scripts': ['pyhf=pyhf.cli:cli']},\n dependency_links=[],\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\nfrom pathlib import Path\n\nthis_directory = Path(__file__).parent.resolve()\nwith open(Path(this_directory).joinpath('README.rst'), encoding='utf-8') as readme_rst:\n long_description = readme_rst.read()\n\nextras_require = {\n 'tensorflow': ['tensorflow~=2.0', 'tensorflow-probability~=0.8'],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot'],\n 'minuit': ['iminuit'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib']))\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + [\n 'pyflakes',\n 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'check-manifest',\n 'jupyter',\n 'uproot~=3.3',\n 'graphviz',\n 'jsonpatch',\n 'black',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['test']\n + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'twine']\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n name='pyhf',\n version='0.4.1',\n description='(partial) pure python histfactory implementation',\n long_description=long_description,\n long_description_content_type='text/x-rst',\n url='https://github.com/scikit-hep/pyhf',\n project_urls={\n \"Documentation\": \"https://scikit-hep.org/pyhf/\",\n \"Source\": \"https://github.com/scikit-hep/pyhf\",\n \"Tracker\": \"https://github.com/scikit-hep/pyhf/issues\",\n },\n author='Lukas Heinrich, Matthew Feickert, Giordon Stark',\n author_email='[email protected], [email protected], [email protected]',\n license='Apache',\n keywords='physics fitting numpy scipy tensorflow pytorch jax',\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Physics\",\n \"Programming 
Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n package_dir={'': 'src'},\n packages=find_packages(where='src'),\n include_package_data=True,\n python_requires=\">=3.6\",\n install_requires=[\n 'scipy', # requires numpy, which is required by pyhf and tensorflow\n 'click>=6.0', # for console scripts,\n 'tqdm', # for readxml\n 'jsonschema>=3.2.0', # for utils\n 'jsonpatch',\n 'pyyaml', # for parsing CLI equal-delimited options\n ],\n extras_require=extras_require,\n entry_points={'console_scripts': ['pyhf=pyhf.cli:cli']},\n dependency_links=[],\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]} | 1,477 | 294 |
gh_patches_debug_2538 | rasdani/github-patches | git_diff | Parsl__parsl-328 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
fatal: Not a git repository: '/homes/vvikraman/anaconda3/lib/python3.6/site-packages/.git
Hi
When I try to run parsl I am getting the following issue:
fatal: Not a git repository: '/homes/vvikraman/anaconda3/lib/python3.6/site-packages/.git
Is it a real issue?
I am using python3 and jupyter but run parsl in a terminal.
Issue in parsl.log
I tried to run a simple script given in the parsl documentation
```
import parsl
from parsl import *
import time
workers = ThreadPoolExecutor(max_workers=4)
dfk = DataFlowKernel(executors=[workers])
print(1)
@App('python', dfk)
def hello ():
import time
time.sleep(5)
return 'Hello World!'
print(2)
app_future = hello()
print ('Done: %s' % app_future.done())
print ('Result: %s' % app_future.result())
print ('Done: %s' % app_future.done())
```
However, in the parsl.log shows this issue
2018-06-07 21:45:37 parsl.utils:24 [ERROR] Unable to determine code state
Traceback (most recent call last):
File "/homes/vvikraman/anaconda3/lib/python3.6/site-packages/parsl/utils.py", line 19, in get_version
head = subprocess.check_output(cmd, env=env).strip().decode('utf-8')
File "/homes/vvikraman/anaconda3/lib/python3.6/subprocess.py", line 336, in check_output
**kwargs).stdout
File "/homes/vvikraman/anaconda3/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['git', 'rev-parse', '--short', 'HEAD']' returned non-zero exit status 128.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsl/utils.py`
Content:
```
1 import logging
2 import os
3 import shlex
4 import subprocess
5 import threading
6 import time
7 from contextlib import contextmanager
8 from functools import wraps
9
10 import parsl
11 from parsl.version import VERSION
12
13 logger = logging.getLogger(__name__)
14
15
16 def get_version():
17 version = parsl.__version__
18 work_tree = os.path.dirname(os.path.dirname(__file__))
19 git_dir = os.path.join(work_tree, '.git')
20 env = {'GIT_WORK_TREE': work_tree, 'GIT_DIR': git_dir}
21 try:
22 cmd = shlex.split('git rev-parse --short HEAD')
23 head = subprocess.check_output(cmd, env=env).strip().decode('utf-8')
24 diff = subprocess.check_output(shlex.split('git diff HEAD'), env=env)
25 status = 'dirty' if diff else 'clean'
26 version = '{v}-{head}-{status}'.format(v=VERSION, head=head, status=status)
27 except Exception as e:
28 logger.exception("Unable to determine code state")
29
30 return version
31
32
33 def get_all_checkpoints(rundir="runinfo"):
34 """Finds the checkpoints from all last runs.
35
36 Note that checkpoints are incremental, and this helper will not find
37 previous checkpoints from earlier than the most recent run. It probably
38 should be made to do so.
39
40 Kwargs:
41 - rundir(str) : Path to the runinfo directory
42
43 Returns:
44 - a list suitable for the checkpointFiles parameter of DataFlowKernel
45 constructor
46
47 """
48
49 if(not(os.path.isdir(rundir))):
50 return []
51
52 dirs = sorted(os.listdir(rundir))
53
54 checkpoints = []
55
56 for runid in dirs:
57
58 checkpoint = os.path.abspath('{}/{}/checkpoint'.format(rundir, runid))
59
60 if(os.path.isdir(checkpoint)):
61 checkpoints.append(checkpoint)
62
63 return checkpoints
64
65
66 def get_last_checkpoint(rundir="runinfo"):
67 """Finds the checkpoint from the last run, if one exists.
68
69 Note that checkpoints are incremental, and this helper will not find
70 previous checkpoints from earlier than the most recent run. It probably
71 should be made to do so.
72
73 Kwargs:
74 - rundir(str) : Path to the runinfo directory
75
76 Returns:
77 - a list suitable for checkpointFiles parameter of DataFlowKernel
78 constructor, with 0 or 1 elements
79
80 """
81
82 if(not(os.path.isdir(rundir))):
83 return []
84
85 dirs = sorted(os.listdir(rundir))
86
87 if(len(dirs) == 0):
88 return []
89
90 last_runid = dirs[-1]
91 last_checkpoint = os.path.abspath('{}/{}/checkpoint'.format(rundir, last_runid))
92
93 if(not(os.path.isdir(last_checkpoint))):
94 return []
95
96 return [last_checkpoint]
97
98
99 def timeout(seconds=None):
100 def decorator(func, *args, **kwargs):
101 @wraps(func)
102 def wrapper(*args, **kwargs):
103 t = threading.Thread(target=func, args=args, kwargs=kwargs)
104 t.start()
105 result = t.join(seconds)
106 if t.is_alive():
107 raise RuntimeError('timed out in {}'.format(func))
108 return result
109 return wrapper
110 return decorator
111
112
113 @contextmanager
114 def time_limited_open(path, mode, seconds=1):
115 @timeout(seconds)
116 def check_path(path):
117 while not os.path.exists(path):
118 time.sleep(0.1)
119 check_path(path)
120 f = open(path, mode)
121 yield f
122 f.close()
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/parsl/utils.py b/parsl/utils.py
--- a/parsl/utils.py
+++ b/parsl/utils.py
@@ -25,7 +25,7 @@
status = 'dirty' if diff else 'clean'
version = '{v}-{head}-{status}'.format(v=VERSION, head=head, status=status)
except Exception as e:
- logger.exception("Unable to determine code state")
+ pass
return version
| {"golden_diff": "diff --git a/parsl/utils.py b/parsl/utils.py\n--- a/parsl/utils.py\n+++ b/parsl/utils.py\n@@ -25,7 +25,7 @@\n status = 'dirty' if diff else 'clean'\n version = '{v}-{head}-{status}'.format(v=VERSION, head=head, status=status)\n except Exception as e:\n- logger.exception(\"Unable to determine code state\")\n+ pass\n \n return version\n", "issue": "fatal: Not a git repository: '/homes/vvikraman/anaconda3/lib/python3.6/site-packages/.git\nHi \r\nWhen I try to run parsl I am getting the following issue:\r\n\r\nfatal: Not a git repository: '/homes/vvikraman/anaconda3/lib/python3.6/site-packages/.git\r\n\r\nIs it a real issue?\r\n\r\nI am using python3 and jupyter but run parsl in a terminal. \nIssue in parsl.log\nI tried to run a simple script given in the parsl documentation \r\n\r\n```\r\nimport parsl\r\nfrom parsl import *\r\nimport time\r\n\r\nworkers = ThreadPoolExecutor(max_workers=4)\r\ndfk = DataFlowKernel(executors=[workers])\r\nprint(1)\r\n@App('python', dfk)\r\ndef hello ():\r\n import time\r\n time.sleep(5)\r\n return 'Hello World!'\r\nprint(2)\r\napp_future = hello()\r\nprint ('Done: %s' % app_future.done())\r\nprint ('Result: %s' % app_future.result())\r\nprint ('Done: %s' % app_future.done())\r\n```\r\nHowever, in the parsl.log shows this issue\r\n\r\n2018-06-07 21:45:37 parsl.utils:24 [ERROR] Unable to determine code state\r\nTraceback (most recent call last):\r\n File \"/homes/vvikraman/anaconda3/lib/python3.6/site-packages/parsl/utils.py\", line 19, in get_version\r\n head = subprocess.check_output(cmd, env=env).strip().decode('utf-8')\r\n File \"/homes/vvikraman/anaconda3/lib/python3.6/subprocess.py\", line 336, in check_output\r\n **kwargs).stdout\r\n File \"/homes/vvikraman/anaconda3/lib/python3.6/subprocess.py\", line 418, in run\r\n output=stdout, stderr=stderr)\r\nsubprocess.CalledProcessError: Command '['git', 'rev-parse', '--short', 'HEAD']' returned non-zero exit status 128.\r\n\n", "before_files": [{"content": "import logging\nimport os\nimport shlex\nimport subprocess\nimport threading\nimport time\nfrom contextlib import contextmanager\nfrom functools import wraps\n\nimport parsl\nfrom parsl.version import VERSION\n\nlogger = logging.getLogger(__name__)\n\n\ndef get_version():\n version = parsl.__version__\n work_tree = os.path.dirname(os.path.dirname(__file__))\n git_dir = os.path.join(work_tree, '.git')\n env = {'GIT_WORK_TREE': work_tree, 'GIT_DIR': git_dir}\n try:\n cmd = shlex.split('git rev-parse --short HEAD')\n head = subprocess.check_output(cmd, env=env).strip().decode('utf-8')\n diff = subprocess.check_output(shlex.split('git diff HEAD'), env=env)\n status = 'dirty' if diff else 'clean'\n version = '{v}-{head}-{status}'.format(v=VERSION, head=head, status=status)\n except Exception as e:\n logger.exception(\"Unable to determine code state\")\n\n return version\n\n\ndef get_all_checkpoints(rundir=\"runinfo\"):\n \"\"\"Finds the checkpoints from all last runs.\n\n Note that checkpoints are incremental, and this helper will not find\n previous checkpoints from earlier than the most recent run. 
It probably\n should be made to do so.\n\n Kwargs:\n - rundir(str) : Path to the runinfo directory\n\n Returns:\n - a list suitable for the checkpointFiles parameter of DataFlowKernel\n constructor\n\n \"\"\"\n\n if(not(os.path.isdir(rundir))):\n return []\n\n dirs = sorted(os.listdir(rundir))\n\n checkpoints = []\n\n for runid in dirs:\n\n checkpoint = os.path.abspath('{}/{}/checkpoint'.format(rundir, runid))\n\n if(os.path.isdir(checkpoint)):\n checkpoints.append(checkpoint)\n\n return checkpoints\n\n\ndef get_last_checkpoint(rundir=\"runinfo\"):\n \"\"\"Finds the checkpoint from the last run, if one exists.\n\n Note that checkpoints are incremental, and this helper will not find\n previous checkpoints from earlier than the most recent run. It probably\n should be made to do so.\n\n Kwargs:\n - rundir(str) : Path to the runinfo directory\n\n Returns:\n - a list suitable for checkpointFiles parameter of DataFlowKernel\n constructor, with 0 or 1 elements\n\n \"\"\"\n\n if(not(os.path.isdir(rundir))):\n return []\n\n dirs = sorted(os.listdir(rundir))\n\n if(len(dirs) == 0):\n return []\n\n last_runid = dirs[-1]\n last_checkpoint = os.path.abspath('{}/{}/checkpoint'.format(rundir, last_runid))\n\n if(not(os.path.isdir(last_checkpoint))):\n return []\n\n return [last_checkpoint]\n\n\ndef timeout(seconds=None):\n def decorator(func, *args, **kwargs):\n @wraps(func)\n def wrapper(*args, **kwargs):\n t = threading.Thread(target=func, args=args, kwargs=kwargs)\n t.start()\n result = t.join(seconds)\n if t.is_alive():\n raise RuntimeError('timed out in {}'.format(func))\n return result\n return wrapper\n return decorator\n\n\n@contextmanager\ndef time_limited_open(path, mode, seconds=1):\n @timeout(seconds)\n def check_path(path):\n while not os.path.exists(path):\n time.sleep(0.1)\n check_path(path)\n f = open(path, mode)\n yield f\n f.close()\n", "path": "parsl/utils.py"}], "after_files": [{"content": "import logging\nimport os\nimport shlex\nimport subprocess\nimport threading\nimport time\nfrom contextlib import contextmanager\nfrom functools import wraps\n\nimport parsl\nfrom parsl.version import VERSION\n\nlogger = logging.getLogger(__name__)\n\n\ndef get_version():\n version = parsl.__version__\n work_tree = os.path.dirname(os.path.dirname(__file__))\n git_dir = os.path.join(work_tree, '.git')\n env = {'GIT_WORK_TREE': work_tree, 'GIT_DIR': git_dir}\n try:\n cmd = shlex.split('git rev-parse --short HEAD')\n head = subprocess.check_output(cmd, env=env).strip().decode('utf-8')\n diff = subprocess.check_output(shlex.split('git diff HEAD'), env=env)\n status = 'dirty' if diff else 'clean'\n version = '{v}-{head}-{status}'.format(v=VERSION, head=head, status=status)\n except Exception as e:\n pass\n\n return version\n\n\ndef get_all_checkpoints(rundir=\"runinfo\"):\n \"\"\"Finds the checkpoints from all last runs.\n\n Note that checkpoints are incremental, and this helper will not find\n previous checkpoints from earlier than the most recent run. 
It probably\n should be made to do so.\n\n Kwargs:\n - rundir(str) : Path to the runinfo directory\n\n Returns:\n - a list suitable for the checkpointFiles parameter of DataFlowKernel\n constructor\n\n \"\"\"\n\n if(not(os.path.isdir(rundir))):\n return []\n\n dirs = sorted(os.listdir(rundir))\n\n checkpoints = []\n\n for runid in dirs:\n\n checkpoint = os.path.abspath('{}/{}/checkpoint'.format(rundir, runid))\n\n if(os.path.isdir(checkpoint)):\n checkpoints.append(checkpoint)\n\n return checkpoints\n\n\ndef get_last_checkpoint(rundir=\"runinfo\"):\n \"\"\"Finds the checkpoint from the last run, if one exists.\n\n Note that checkpoints are incremental, and this helper will not find\n previous checkpoints from earlier than the most recent run. It probably\n should be made to do so.\n\n Kwargs:\n - rundir(str) : Path to the runinfo directory\n\n Returns:\n - a list suitable for checkpointFiles parameter of DataFlowKernel\n constructor, with 0 or 1 elements\n\n \"\"\"\n\n if(not(os.path.isdir(rundir))):\n return []\n\n dirs = sorted(os.listdir(rundir))\n\n if(len(dirs) == 0):\n return []\n\n last_runid = dirs[-1]\n last_checkpoint = os.path.abspath('{}/{}/checkpoint'.format(rundir, last_runid))\n\n if(not(os.path.isdir(last_checkpoint))):\n return []\n\n return [last_checkpoint]\n\n\ndef timeout(seconds=None):\n def decorator(func, *args, **kwargs):\n @wraps(func)\n def wrapper(*args, **kwargs):\n t = threading.Thread(target=func, args=args, kwargs=kwargs)\n t.start()\n result = t.join(seconds)\n if t.is_alive():\n raise RuntimeError('timed out in {}'.format(func))\n return result\n return wrapper\n return decorator\n\n\n@contextmanager\ndef time_limited_open(path, mode, seconds=1):\n @timeout(seconds)\n def check_path(path):\n while not os.path.exists(path):\n time.sleep(0.1)\n check_path(path)\n f = open(path, mode)\n yield f\n f.close()\n", "path": "parsl/utils.py"}]} | 1,730 | 102 |
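
The one-line fix in the record above swaps `logger.exception(...)` for `pass`, so a pip-installed parsl (a `site-packages` tree with no `.git` directory) stops logging the "Not a git repository" traceback. The fallback logic is easy to exercise outside parsl; the sketch below mirrors it but is not parsl's actual code, and the helper name plus the added `stderr=subprocess.DEVNULL` suppression are illustrative choices of mine, not part of the patch.

```python
import os
import shlex
import subprocess


def describe_version(base_version, work_tree):
    """Append '-<sha>-<clean|dirty>' when work_tree is a git checkout,
    and fall back to the plain version silently otherwise."""
    env = {"GIT_WORK_TREE": work_tree, "GIT_DIR": os.path.join(work_tree, ".git")}
    try:
        head = subprocess.check_output(
            shlex.split("git rev-parse --short HEAD"),
            env=env,
            stderr=subprocess.DEVNULL,  # keep "fatal: Not a git repository" off the console
        ).strip().decode("utf-8")
        diff = subprocess.check_output(
            shlex.split("git diff HEAD"), env=env, stderr=subprocess.DEVNULL
        )
        return "{}-{}-{}".format(base_version, head, "dirty" if diff else "clean")
    except Exception:
        return base_version  # e.g. installed into site-packages without a .git


print(describe_version("0.5.2", os.getcwd()))
```

Run from a git checkout this prints the decorated version; run from anywhere else it quietly prints the base version, which is exactly the behaviour the patch restores.
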
gh_patches_debug_9546 | rasdani/github-patches | git_diff | fossasia__open-event-server-5266 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User order_expiry_time as the parameter to expire orders
**Describe the bug**
Currently we are expiring orders after 10 minutes. We should change it to order_expiry_time parameter.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/helpers/order.py`
Content:
```
1 import logging
2 from datetime import timedelta, datetime, timezone
3
4 from flask import render_template
5
6 from app.api.helpers import ticketing
7 from app.api.helpers.db import save_to_db, safe_query_without_soft_deleted_entries, get_count
8 from app.api.helpers.exceptions import UnprocessableEntity, ConflictException
9 from app.api.helpers.files import create_save_pdf
10 from app.api.helpers.storage import UPLOAD_PATHS
11 from app.models import db
12 from app.models.ticket import Ticket
13 from app.models.ticket_holder import TicketHolder
14
15
16 def delete_related_attendees_for_order(order):
17 """
18 Delete the associated attendees of an order when it is cancelled/deleted/expired
19 :param order: Order whose attendees have to be deleted.
20 :return:
21 """
22 for ticket_holder in order.ticket_holders:
23 db.session.delete(ticket_holder)
24 try:
25 db.session.commit()
26 except Exception as e:
27 logging.error('DB Exception! %s' % e)
28 db.session.rollback()
29
30
31 def set_expiry_for_order(order, override=False):
32 """
33 Expire the order after the time slot(10 minutes) if the order is pending.
34 Also expires the order if we want to expire an order regardless of the state and time.
35 :param order: Order to be expired.
36 :param override: flag to force expiry.
37 :return:
38 """
39 if order and not order.paid_via and (override or (order.status == 'pending' and (
40 order.created_at +
41 timedelta(minutes=ticketing.TicketingManager.get_order_expiry())) < datetime.now(timezone.utc))):
42 order.status = 'expired'
43 delete_related_attendees_for_order(order)
44 save_to_db(order)
45 return order
46
47
48 def create_pdf_tickets_for_holder(order):
49 """
50 Create tickets for the holders of an order.
51 :param order: The order for which to create tickets for.
52 """
53 if order.status == 'completed':
54 pdf = create_save_pdf(render_template('pdf/ticket_purchaser.html', order=order),
55 UPLOAD_PATHS['pdf']['ticket_attendee'],
56 dir_path='/static/uploads/pdf/tickets/')
57 order.tickets_pdf_url = pdf
58
59 for holder in order.ticket_holders:
60 if (not holder.user) or holder.user.id != order.user_id:
61 # holder is not the order buyer.
62 pdf = create_save_pdf(render_template('pdf/ticket_attendee.html', order=order, holder=holder),
63 UPLOAD_PATHS['pdf']['ticket_attendee'],
64 dir_path='/static/uploads/pdf/tickets/')
65 else:
66 # holder is the order buyer.
67 pdf = order.tickets_pdf_url
68 holder.pdf_url = pdf
69 save_to_db(holder)
70
71 save_to_db(order)
72
73
74 def create_onsite_attendees_for_order(data):
75 """
76 Creates on site ticket holders for an order and adds it into the request data.
77 :param data: data initially passed in the POST request for order.
78 :return:
79 """
80 on_site_tickets = data.get('on_site_tickets')
81
82 if not on_site_tickets:
83 raise UnprocessableEntity({'pointer': 'data/attributes/on_site_tickets'}, 'on_site_tickets info missing')
84
85 data['ticket_holders'] = []
86
87 for on_site_ticket in on_site_tickets:
88 ticket_id = on_site_ticket['id']
89 quantity = int(on_site_ticket['quantity'])
90
91 ticket = safe_query_without_soft_deleted_entries(db, Ticket, 'id', ticket_id, 'ticket_id')
92
93 ticket_sold_count = get_count(db.session.query(TicketHolder.id).
94 filter_by(ticket_id=int(ticket.id), deleted_at=None))
95
96 # Check if the ticket is already sold out or not.
97 if ticket_sold_count + quantity > ticket.quantity:
98 # delete the already created attendees.
99 for holder in data['ticket_holders']:
100 ticket_holder = db.session.query(TicketHolder).filter(id == int(holder)).one()
101 db.session.delete(ticket_holder)
102 try:
103 db.session.commit()
104 except Exception as e:
105 logging.error('DB Exception! %s' % e)
106 db.session.rollback()
107
108 raise ConflictException(
109 {'pointer': '/data/attributes/on_site_tickets'},
110 "Ticket with id: {} already sold out. You can buy at most {} tickets".format(ticket_id,
111 ticket.quantity -
112 ticket_sold_count)
113 )
114
115 for _ in range(1, quantity):
116 ticket_holder = TicketHolder(firstname='onsite', lastname='attendee', email='[email protected]',
117 ticket_id=ticket.id, event_id=data.get('event'))
118 save_to_db(ticket_holder)
119 data['ticket_holders'].append(ticket_holder.id)
120
121 # delete from the data.
122 del data['on_site_tickets']
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/api/helpers/order.py b/app/api/helpers/order.py
--- a/app/api/helpers/order.py
+++ b/app/api/helpers/order.py
@@ -38,7 +38,7 @@
"""
if order and not order.paid_via and (override or (order.status == 'pending' and (
order.created_at +
- timedelta(minutes=ticketing.TicketingManager.get_order_expiry())) < datetime.now(timezone.utc))):
+ timedelta(minutes=order.event.order_expiry_time)) < datetime.now(timezone.utc))):
order.status = 'expired'
delete_related_attendees_for_order(order)
save_to_db(order)
| {"golden_diff": "diff --git a/app/api/helpers/order.py b/app/api/helpers/order.py\n--- a/app/api/helpers/order.py\n+++ b/app/api/helpers/order.py\n@@ -38,7 +38,7 @@\n \"\"\"\n if order and not order.paid_via and (override or (order.status == 'pending' and (\n order.created_at +\n- timedelta(minutes=ticketing.TicketingManager.get_order_expiry())) < datetime.now(timezone.utc))):\n+ timedelta(minutes=order.event.order_expiry_time)) < datetime.now(timezone.utc))):\n order.status = 'expired'\n delete_related_attendees_for_order(order)\n save_to_db(order)\n", "issue": "User order_expiry_time as the parameter to expire orders\n**Describe the bug**\r\nCurrently we are expiring orders after 10 minutes. We should change it to order_expiry_time parameter. \n", "before_files": [{"content": "import logging\nfrom datetime import timedelta, datetime, timezone\n\nfrom flask import render_template\n\nfrom app.api.helpers import ticketing\nfrom app.api.helpers.db import save_to_db, safe_query_without_soft_deleted_entries, get_count\nfrom app.api.helpers.exceptions import UnprocessableEntity, ConflictException\nfrom app.api.helpers.files import create_save_pdf\nfrom app.api.helpers.storage import UPLOAD_PATHS\nfrom app.models import db\nfrom app.models.ticket import Ticket\nfrom app.models.ticket_holder import TicketHolder\n\n\ndef delete_related_attendees_for_order(order):\n \"\"\"\n Delete the associated attendees of an order when it is cancelled/deleted/expired\n :param order: Order whose attendees have to be deleted.\n :return:\n \"\"\"\n for ticket_holder in order.ticket_holders:\n db.session.delete(ticket_holder)\n try:\n db.session.commit()\n except Exception as e:\n logging.error('DB Exception! %s' % e)\n db.session.rollback()\n\n\ndef set_expiry_for_order(order, override=False):\n \"\"\"\n Expire the order after the time slot(10 minutes) if the order is pending.\n Also expires the order if we want to expire an order regardless of the state and time.\n :param order: Order to be expired.\n :param override: flag to force expiry.\n :return:\n \"\"\"\n if order and not order.paid_via and (override or (order.status == 'pending' and (\n order.created_at +\n timedelta(minutes=ticketing.TicketingManager.get_order_expiry())) < datetime.now(timezone.utc))):\n order.status = 'expired'\n delete_related_attendees_for_order(order)\n save_to_db(order)\n return order\n\n\ndef create_pdf_tickets_for_holder(order):\n \"\"\"\n Create tickets for the holders of an order.\n :param order: The order for which to create tickets for.\n \"\"\"\n if order.status == 'completed':\n pdf = create_save_pdf(render_template('pdf/ticket_purchaser.html', order=order),\n UPLOAD_PATHS['pdf']['ticket_attendee'],\n dir_path='/static/uploads/pdf/tickets/')\n order.tickets_pdf_url = pdf\n\n for holder in order.ticket_holders:\n if (not holder.user) or holder.user.id != order.user_id:\n # holder is not the order buyer.\n pdf = create_save_pdf(render_template('pdf/ticket_attendee.html', order=order, holder=holder),\n UPLOAD_PATHS['pdf']['ticket_attendee'],\n dir_path='/static/uploads/pdf/tickets/')\n else:\n # holder is the order buyer.\n pdf = order.tickets_pdf_url\n holder.pdf_url = pdf\n save_to_db(holder)\n\n save_to_db(order)\n\n\ndef create_onsite_attendees_for_order(data):\n \"\"\"\n Creates on site ticket holders for an order and adds it into the request data.\n :param data: data initially passed in the POST request for order.\n :return:\n \"\"\"\n on_site_tickets = data.get('on_site_tickets')\n\n if not on_site_tickets:\n raise 
UnprocessableEntity({'pointer': 'data/attributes/on_site_tickets'}, 'on_site_tickets info missing')\n\n data['ticket_holders'] = []\n\n for on_site_ticket in on_site_tickets:\n ticket_id = on_site_ticket['id']\n quantity = int(on_site_ticket['quantity'])\n\n ticket = safe_query_without_soft_deleted_entries(db, Ticket, 'id', ticket_id, 'ticket_id')\n\n ticket_sold_count = get_count(db.session.query(TicketHolder.id).\n filter_by(ticket_id=int(ticket.id), deleted_at=None))\n\n # Check if the ticket is already sold out or not.\n if ticket_sold_count + quantity > ticket.quantity:\n # delete the already created attendees.\n for holder in data['ticket_holders']:\n ticket_holder = db.session.query(TicketHolder).filter(id == int(holder)).one()\n db.session.delete(ticket_holder)\n try:\n db.session.commit()\n except Exception as e:\n logging.error('DB Exception! %s' % e)\n db.session.rollback()\n\n raise ConflictException(\n {'pointer': '/data/attributes/on_site_tickets'},\n \"Ticket with id: {} already sold out. You can buy at most {} tickets\".format(ticket_id,\n ticket.quantity -\n ticket_sold_count)\n )\n\n for _ in range(1, quantity):\n ticket_holder = TicketHolder(firstname='onsite', lastname='attendee', email='[email protected]',\n ticket_id=ticket.id, event_id=data.get('event'))\n save_to_db(ticket_holder)\n data['ticket_holders'].append(ticket_holder.id)\n\n # delete from the data.\n del data['on_site_tickets']\n", "path": "app/api/helpers/order.py"}], "after_files": [{"content": "import logging\nfrom datetime import timedelta, datetime, timezone\n\nfrom flask import render_template\n\nfrom app.api.helpers import ticketing\nfrom app.api.helpers.db import save_to_db, safe_query_without_soft_deleted_entries, get_count\nfrom app.api.helpers.exceptions import UnprocessableEntity, ConflictException\nfrom app.api.helpers.files import create_save_pdf\nfrom app.api.helpers.storage import UPLOAD_PATHS\nfrom app.models import db\nfrom app.models.ticket import Ticket\nfrom app.models.ticket_holder import TicketHolder\n\n\ndef delete_related_attendees_for_order(order):\n \"\"\"\n Delete the associated attendees of an order when it is cancelled/deleted/expired\n :param order: Order whose attendees have to be deleted.\n :return:\n \"\"\"\n for ticket_holder in order.ticket_holders:\n db.session.delete(ticket_holder)\n try:\n db.session.commit()\n except Exception as e:\n logging.error('DB Exception! 
%s' % e)\n db.session.rollback()\n\n\ndef set_expiry_for_order(order, override=False):\n \"\"\"\n Expire the order after the time slot(10 minutes) if the order is pending.\n Also expires the order if we want to expire an order regardless of the state and time.\n :param order: Order to be expired.\n :param override: flag to force expiry.\n :return:\n \"\"\"\n if order and not order.paid_via and (override or (order.status == 'pending' and (\n order.created_at +\n timedelta(minutes=order.event.order_expiry_time)) < datetime.now(timezone.utc))):\n order.status = 'expired'\n delete_related_attendees_for_order(order)\n save_to_db(order)\n return order\n\n\ndef create_pdf_tickets_for_holder(order):\n \"\"\"\n Create tickets for the holders of an order.\n :param order: The order for which to create tickets for.\n \"\"\"\n if order.status == 'completed':\n pdf = create_save_pdf(render_template('pdf/ticket_purchaser.html', order=order),\n UPLOAD_PATHS['pdf']['ticket_attendee'],\n dir_path='/static/uploads/pdf/tickets/')\n order.tickets_pdf_url = pdf\n\n for holder in order.ticket_holders:\n if (not holder.user) or holder.user.id != order.user_id:\n # holder is not the order buyer.\n pdf = create_save_pdf(render_template('pdf/ticket_attendee.html', order=order, holder=holder),\n UPLOAD_PATHS['pdf']['ticket_attendee'],\n dir_path='/static/uploads/pdf/tickets/')\n else:\n # holder is the order buyer.\n pdf = order.tickets_pdf_url\n holder.pdf_url = pdf\n save_to_db(holder)\n\n save_to_db(order)\n\n\ndef create_onsite_attendees_for_order(data):\n \"\"\"\n Creates on site ticket holders for an order and adds it into the request data.\n :param data: data initially passed in the POST request for order.\n :return:\n \"\"\"\n on_site_tickets = data.get('on_site_tickets')\n\n if not on_site_tickets:\n raise UnprocessableEntity({'pointer': 'data/attributes/on_site_tickets'}, 'on_site_tickets info missing')\n\n data['ticket_holders'] = []\n\n for on_site_ticket in on_site_tickets:\n ticket_id = on_site_ticket['id']\n quantity = int(on_site_ticket['quantity'])\n\n ticket = safe_query_without_soft_deleted_entries(db, Ticket, 'id', ticket_id, 'ticket_id')\n\n ticket_sold_count = get_count(db.session.query(TicketHolder.id).\n filter_by(ticket_id=int(ticket.id), deleted_at=None))\n\n # Check if the ticket is already sold out or not.\n if ticket_sold_count + quantity > ticket.quantity:\n # delete the already created attendees.\n for holder in data['ticket_holders']:\n ticket_holder = db.session.query(TicketHolder).filter(id == int(holder)).one()\n db.session.delete(ticket_holder)\n try:\n db.session.commit()\n except Exception as e:\n logging.error('DB Exception! %s' % e)\n db.session.rollback()\n\n raise ConflictException(\n {'pointer': '/data/attributes/on_site_tickets'},\n \"Ticket with id: {} already sold out. You can buy at most {} tickets\".format(ticket_id,\n ticket.quantity -\n ticket_sold_count)\n )\n\n for _ in range(1, quantity):\n ticket_holder = TicketHolder(firstname='onsite', lastname='attendee', email='[email protected]',\n ticket_id=ticket.id, event_id=data.get('event'))\n save_to_db(ticket_holder)\n data['ticket_holders'].append(ticket_holder.id)\n\n # delete from the data.\n del data['on_site_tickets']\n", "path": "app/api/helpers/order.py"}]} | 1,564 | 136 |
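
The fix in the record above replaces the global `ticketing.TicketingManager.get_order_expiry()` window with the per-event `order.event.order_expiry_time` value. The expiry predicate itself is plain date arithmetic, so it can be sketched and tested without the ORM; the function name and sample timestamps below are hypothetical stand-ins for the model fields, not code from the repository.

```python
from datetime import datetime, timedelta, timezone


def order_has_expired(created_at, order_expiry_time, now=None):
    """True when a pending order is older than the event's expiry window.

    order_expiry_time is in minutes, matching the patched condition.
    """
    now = now or datetime.now(timezone.utc)
    return created_at + timedelta(minutes=order_expiry_time) < now


created = datetime(2018, 9, 1, 12, 0, tzinfo=timezone.utc)  # hypothetical order
assert order_has_expired(created, 15, now=created + timedelta(minutes=16))
assert not order_has_expired(created, 15, now=created + timedelta(minutes=14))
```
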
gh_patches_debug_18443 | rasdani/github-patches | git_diff | sunpy__sunpy-3398 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add missing ASDF schemas for new coordinate frames in 1.1
Whoops
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sunpy/io/special/asdf/tags/coordinates/frames.py`
Content:
```
1 import os
2 import glob
3
4 from astropy.io.misc.asdf.tags.coordinates.frames import BaseCoordType
5
6 import sunpy.coordinates
7
8 from ...types import SunPyType
9
10 __all__ = ['SunPyCoordType']
11
12
13 SCHEMA_PATH = os.path.abspath(os.path.join(os.path.dirname(__file__),
14 '..', '..',
15 'schemas',
16 'sunpy.org',
17 'sunpy'))
18
19
20 def _get_frames():
21 """
22 By reading the schema files, get the list of all the frames we can
23 save/load.
24 """
25 search = os.path.join(SCHEMA_PATH, 'coordinates', 'frames', '*.yaml')
26 files = glob.glob(search)
27
28 names = []
29 for fpath in files:
30 path, fname = os.path.split(fpath)
31 frame, _ = fname.split('-')
32 exclude_schemas = []
33 if frame not in exclude_schemas:
34 names.append(frame)
35
36 return names
37
38
39 class SunPyCoordType(BaseCoordType, SunPyType):
40 _tag_prefix = "coordinates/frames/"
41 name = ["coordinates/frames/" + f for f in _get_frames()]
42 types = [
43 sunpy.coordinates.HeliographicCarrington,
44 sunpy.coordinates.HeliographicStonyhurst,
45 sunpy.coordinates.Heliocentric,
46 sunpy.coordinates.Helioprojective,
47 ]
48 requires = ['sunpy', 'astropy>=3.1']
49 version = "1.0.0"
50
51 @classmethod
52 def assert_equal(cls, old, new):
53 assert isinstance(new, type(old))
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sunpy/io/special/asdf/tags/coordinates/frames.py b/sunpy/io/special/asdf/tags/coordinates/frames.py
--- a/sunpy/io/special/asdf/tags/coordinates/frames.py
+++ b/sunpy/io/special/asdf/tags/coordinates/frames.py
@@ -3,7 +3,9 @@
from astropy.io.misc.asdf.tags.coordinates.frames import BaseCoordType
-import sunpy.coordinates
+from sunpy.coordinates import frames
+
+sunpy_frames = list(map(lambda name: getattr(frames, name), frames.__all__))
from ...types import SunPyType
@@ -39,12 +41,7 @@
class SunPyCoordType(BaseCoordType, SunPyType):
_tag_prefix = "coordinates/frames/"
name = ["coordinates/frames/" + f for f in _get_frames()]
- types = [
- sunpy.coordinates.HeliographicCarrington,
- sunpy.coordinates.HeliographicStonyhurst,
- sunpy.coordinates.Heliocentric,
- sunpy.coordinates.Helioprojective,
- ]
+ types = sunpy_frames
requires = ['sunpy', 'astropy>=3.1']
version = "1.0.0"
| {"golden_diff": "diff --git a/sunpy/io/special/asdf/tags/coordinates/frames.py b/sunpy/io/special/asdf/tags/coordinates/frames.py\n--- a/sunpy/io/special/asdf/tags/coordinates/frames.py\n+++ b/sunpy/io/special/asdf/tags/coordinates/frames.py\n@@ -3,7 +3,9 @@\n \n from astropy.io.misc.asdf.tags.coordinates.frames import BaseCoordType\n \n-import sunpy.coordinates\n+from sunpy.coordinates import frames\n+\n+sunpy_frames = list(map(lambda name: getattr(frames, name), frames.__all__))\n \n from ...types import SunPyType\n \n@@ -39,12 +41,7 @@\n class SunPyCoordType(BaseCoordType, SunPyType):\n _tag_prefix = \"coordinates/frames/\"\n name = [\"coordinates/frames/\" + f for f in _get_frames()]\n- types = [\n- sunpy.coordinates.HeliographicCarrington,\n- sunpy.coordinates.HeliographicStonyhurst,\n- sunpy.coordinates.Heliocentric,\n- sunpy.coordinates.Helioprojective,\n- ]\n+ types = sunpy_frames\n requires = ['sunpy', 'astropy>=3.1']\n version = \"1.0.0\"\n", "issue": "Add missing ASDF schemas for new coordinate frames in 1.1\nWhoops\n", "before_files": [{"content": "import os\nimport glob\n\nfrom astropy.io.misc.asdf.tags.coordinates.frames import BaseCoordType\n\nimport sunpy.coordinates\n\nfrom ...types import SunPyType\n\n__all__ = ['SunPyCoordType']\n\n\nSCHEMA_PATH = os.path.abspath(os.path.join(os.path.dirname(__file__),\n '..', '..',\n 'schemas',\n 'sunpy.org',\n 'sunpy'))\n\n\ndef _get_frames():\n \"\"\"\n By reading the schema files, get the list of all the frames we can\n save/load.\n \"\"\"\n search = os.path.join(SCHEMA_PATH, 'coordinates', 'frames', '*.yaml')\n files = glob.glob(search)\n\n names = []\n for fpath in files:\n path, fname = os.path.split(fpath)\n frame, _ = fname.split('-')\n exclude_schemas = []\n if frame not in exclude_schemas:\n names.append(frame)\n\n return names\n\n\nclass SunPyCoordType(BaseCoordType, SunPyType):\n _tag_prefix = \"coordinates/frames/\"\n name = [\"coordinates/frames/\" + f for f in _get_frames()]\n types = [\n sunpy.coordinates.HeliographicCarrington,\n sunpy.coordinates.HeliographicStonyhurst,\n sunpy.coordinates.Heliocentric,\n sunpy.coordinates.Helioprojective,\n ]\n requires = ['sunpy', 'astropy>=3.1']\n version = \"1.0.0\"\n\n @classmethod\n def assert_equal(cls, old, new):\n assert isinstance(new, type(old))\n", "path": "sunpy/io/special/asdf/tags/coordinates/frames.py"}], "after_files": [{"content": "import os\nimport glob\n\nfrom astropy.io.misc.asdf.tags.coordinates.frames import BaseCoordType\n\nfrom sunpy.coordinates import frames\n\nsunpy_frames = list(map(lambda name: getattr(frames, name), frames.__all__))\n\nfrom ...types import SunPyType\n\n__all__ = ['SunPyCoordType']\n\n\nSCHEMA_PATH = os.path.abspath(os.path.join(os.path.dirname(__file__),\n '..', '..',\n 'schemas',\n 'sunpy.org',\n 'sunpy'))\n\n\ndef _get_frames():\n \"\"\"\n By reading the schema files, get the list of all the frames we can\n save/load.\n \"\"\"\n search = os.path.join(SCHEMA_PATH, 'coordinates', 'frames', '*.yaml')\n files = glob.glob(search)\n\n names = []\n for fpath in files:\n path, fname = os.path.split(fpath)\n frame, _ = fname.split('-')\n exclude_schemas = []\n if frame not in exclude_schemas:\n names.append(frame)\n\n return names\n\n\nclass SunPyCoordType(BaseCoordType, SunPyType):\n _tag_prefix = \"coordinates/frames/\"\n name = [\"coordinates/frames/\" + f for f in _get_frames()]\n types = sunpy_frames\n requires = ['sunpy', 'astropy>=3.1']\n version = \"1.0.0\"\n\n @classmethod\n def assert_equal(cls, old, new):\n assert isinstance(new, 
type(old))\n", "path": "sunpy/io/special/asdf/tags/coordinates/frames.py"}]} | 721 | 274 |
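
Rather than patching four hard-coded entries, the accepted diff above derives `types` from `frames.__all__`, so any frame the module exports (including the new ones the issue says were missing schemas) is picked up automatically. The same pattern works for any module that maintains an `__all__` list; the stand-in module below is fabricated purely to show the mechanic, and real usage would pass `sunpy.coordinates.frames`.

```python
import types


def classes_from_all(module):
    """Collect every object a module exports via __all__ (the trick the
    patch uses to keep the ASDF tag's `types` list in sync)."""
    return [getattr(module, name) for name in getattr(module, "__all__", [])]


# Fabricated stand-in module; real call: classes_from_all(sunpy.coordinates.frames)
demo = types.ModuleType("demo_frames")
demo.A = type("A", (), {})
demo.B = type("B", (), {})
demo.__all__ = ["A", "B"]
assert classes_from_all(demo) == [demo.A, demo.B]
```
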
gh_patches_debug_789 | rasdani/github-patches | git_diff | geopandas__geopandas-372 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bbox filter from read_file doesn't take advantage of fiona filtering
In line: https://github.com/geopandas/geopandas/blob/master/geopandas/io/file.py#L28
The function goes through the trouble of checking if `bbox` is not null, but just calls `f` in `from_features` just the same.
Line 28 just needs to be changed to the intended `f_filt` to return filtered results or non-filtered if no bbox is passed in.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geopandas/io/file.py`
Content:
```
1 import os
2
3 import fiona
4 import numpy as np
5 from shapely.geometry import mapping
6
7 from six import iteritems
8 from geopandas import GeoDataFrame
9
10
11 def read_file(filename, **kwargs):
12 """
13 Returns a GeoDataFrame from a file.
14
15 *filename* is either the absolute or relative path to the file to be
16 opened and *kwargs* are keyword args to be passed to the `open` method
17 in the fiona library when opening the file. For more information on
18 possible keywords, type: ``import fiona; help(fiona.open)``
19 """
20 bbox = kwargs.pop('bbox', None)
21 with fiona.open(filename, **kwargs) as f:
22 crs = f.crs
23 if bbox is not None:
24 assert len(bbox)==4
25 f_filt = f.filter(bbox=bbox)
26 else:
27 f_filt = f
28 gdf = GeoDataFrame.from_features(f, crs=crs)
29
30 return gdf
31
32
33 def to_file(df, filename, driver="ESRI Shapefile", schema=None,
34 **kwargs):
35 """
36 Write this GeoDataFrame to an OGR data source
37
38 A dictionary of supported OGR providers is available via:
39 >>> import fiona
40 >>> fiona.supported_drivers
41
42 Parameters
43 ----------
44 df : GeoDataFrame to be written
45 filename : string
46 File path or file handle to write to.
47 driver : string, default 'ESRI Shapefile'
48 The OGR format driver used to write the vector file.
49 schema : dict, default None
50 If specified, the schema dictionary is passed to Fiona to
51 better control how the file is written. If None, GeoPandas
52 will determine the schema based on each column's dtype
53
54 The *kwargs* are passed to fiona.open and can be used to write
55 to multi-layer data, store data within archives (zip files), etc.
56 """
57 if schema is None:
58 schema = infer_schema(df)
59 filename = os.path.abspath(os.path.expanduser(filename))
60 with fiona.open(filename, 'w', driver=driver, crs=df.crs,
61 schema=schema, **kwargs) as c:
62 for feature in df.iterfeatures():
63 c.write(feature)
64
65
66 def infer_schema(df):
67 try:
68 from collections import OrderedDict
69 except ImportError:
70 from ordereddict import OrderedDict
71
72 def convert_type(in_type):
73 if in_type == object:
74 return 'str'
75 out_type = type(np.asscalar(np.zeros(1, in_type))).__name__
76 if out_type == 'long':
77 out_type = 'int'
78 return out_type
79
80 properties = OrderedDict([
81 (col, convert_type(_type)) for col, _type in
82 zip(df.columns, df.dtypes) if col != df._geometry_column_name
83 ])
84
85 geom_type = _common_geom_type(df)
86 if not geom_type:
87 raise ValueError("Geometry column cannot contain mutiple "
88 "geometry types when writing to file.")
89
90 schema = {'geometry': geom_type, 'properties': properties}
91
92 return schema
93
94
95 def _common_geom_type(df):
96 # Need to check geom_types before we write to file...
97 # Some (most?) providers expect a single geometry type:
98 # Point, LineString, or Polygon
99 geom_types = df.geometry.geom_type.unique()
100
101 from os.path import commonprefix # To find longest common prefix
102 geom_type = commonprefix([g[::-1] for g in geom_types if g])[::-1] # Reverse
103 if not geom_type:
104 geom_type = None
105
106 return geom_type
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/geopandas/io/file.py b/geopandas/io/file.py
--- a/geopandas/io/file.py
+++ b/geopandas/io/file.py
@@ -25,7 +25,7 @@
f_filt = f.filter(bbox=bbox)
else:
f_filt = f
- gdf = GeoDataFrame.from_features(f, crs=crs)
+ gdf = GeoDataFrame.from_features(f_filt, crs=crs)
return gdf
| {"golden_diff": "diff --git a/geopandas/io/file.py b/geopandas/io/file.py\n--- a/geopandas/io/file.py\n+++ b/geopandas/io/file.py\n@@ -25,7 +25,7 @@\n f_filt = f.filter(bbox=bbox)\n else:\n f_filt = f\n- gdf = GeoDataFrame.from_features(f, crs=crs)\n+ gdf = GeoDataFrame.from_features(f_filt, crs=crs)\n \n return gdf\n", "issue": "bbox filter from read_file doesn't take advantage of fiona filtering\nIn line: https://github.com/geopandas/geopandas/blob/master/geopandas/io/file.py#L28\n\nThe function goes through the trouble of checking if `bbox` is not null, but just calls `f` in `from_features` just the same.\n\nLine 28 just needs to be changed to the intended `f_filt` to return filtered results or non-filtered if no bbox is passed in.\n\n", "before_files": [{"content": "import os\n\nimport fiona\nimport numpy as np\nfrom shapely.geometry import mapping\n\nfrom six import iteritems\nfrom geopandas import GeoDataFrame\n\n\ndef read_file(filename, **kwargs):\n \"\"\"\n Returns a GeoDataFrame from a file.\n\n *filename* is either the absolute or relative path to the file to be\n opened and *kwargs* are keyword args to be passed to the `open` method\n in the fiona library when opening the file. For more information on \n possible keywords, type: ``import fiona; help(fiona.open)``\n \"\"\"\n bbox = kwargs.pop('bbox', None)\n with fiona.open(filename, **kwargs) as f:\n crs = f.crs\n if bbox is not None:\n assert len(bbox)==4\n f_filt = f.filter(bbox=bbox)\n else:\n f_filt = f\n gdf = GeoDataFrame.from_features(f, crs=crs)\n\n return gdf\n\n\ndef to_file(df, filename, driver=\"ESRI Shapefile\", schema=None,\n **kwargs):\n \"\"\"\n Write this GeoDataFrame to an OGR data source\n\n A dictionary of supported OGR providers is available via:\n >>> import fiona\n >>> fiona.supported_drivers\n\n Parameters\n ----------\n df : GeoDataFrame to be written\n filename : string\n File path or file handle to write to.\n driver : string, default 'ESRI Shapefile'\n The OGR format driver used to write the vector file.\n schema : dict, default None\n If specified, the schema dictionary is passed to Fiona to\n better control how the file is written. If None, GeoPandas\n will determine the schema based on each column's dtype\n\n The *kwargs* are passed to fiona.open and can be used to write\n to multi-layer data, store data within archives (zip files), etc.\n \"\"\"\n if schema is None:\n schema = infer_schema(df)\n filename = os.path.abspath(os.path.expanduser(filename))\n with fiona.open(filename, 'w', driver=driver, crs=df.crs,\n schema=schema, **kwargs) as c:\n for feature in df.iterfeatures():\n c.write(feature)\n\n\ndef infer_schema(df):\n try:\n from collections import OrderedDict\n except ImportError:\n from ordereddict import OrderedDict\n\n def convert_type(in_type):\n if in_type == object:\n return 'str'\n out_type = type(np.asscalar(np.zeros(1, in_type))).__name__\n if out_type == 'long':\n out_type = 'int'\n return out_type\n\n properties = OrderedDict([\n (col, convert_type(_type)) for col, _type in\n zip(df.columns, df.dtypes) if col != df._geometry_column_name\n ])\n\n geom_type = _common_geom_type(df)\n if not geom_type:\n raise ValueError(\"Geometry column cannot contain mutiple \"\n \"geometry types when writing to file.\")\n\n schema = {'geometry': geom_type, 'properties': properties}\n\n return schema\n\n\ndef _common_geom_type(df):\n # Need to check geom_types before we write to file...\n # Some (most?) 
providers expect a single geometry type:\n # Point, LineString, or Polygon\n geom_types = df.geometry.geom_type.unique()\n\n from os.path import commonprefix # To find longest common prefix\n geom_type = commonprefix([g[::-1] for g in geom_types if g])[::-1] # Reverse\n if not geom_type:\n geom_type = None\n\n return geom_type\n", "path": "geopandas/io/file.py"}], "after_files": [{"content": "import os\n\nimport fiona\nimport numpy as np\nfrom shapely.geometry import mapping\n\nfrom six import iteritems\nfrom geopandas import GeoDataFrame\n\n\ndef read_file(filename, **kwargs):\n \"\"\"\n Returns a GeoDataFrame from a file.\n\n *filename* is either the absolute or relative path to the file to be\n opened and *kwargs* are keyword args to be passed to the `open` method\n in the fiona library when opening the file. For more information on \n possible keywords, type: ``import fiona; help(fiona.open)``\n \"\"\"\n bbox = kwargs.pop('bbox', None)\n with fiona.open(filename, **kwargs) as f:\n crs = f.crs\n if bbox is not None:\n assert len(bbox)==4\n f_filt = f.filter(bbox=bbox)\n else:\n f_filt = f\n gdf = GeoDataFrame.from_features(f_filt, crs=crs)\n\n return gdf\n\n\ndef to_file(df, filename, driver=\"ESRI Shapefile\", schema=None,\n **kwargs):\n \"\"\"\n Write this GeoDataFrame to an OGR data source\n\n A dictionary of supported OGR providers is available via:\n >>> import fiona\n >>> fiona.supported_drivers\n\n Parameters\n ----------\n df : GeoDataFrame to be written\n filename : string\n File path or file handle to write to.\n driver : string, default 'ESRI Shapefile'\n The OGR format driver used to write the vector file.\n schema : dict, default None\n If specified, the schema dictionary is passed to Fiona to\n better control how the file is written. If None, GeoPandas\n will determine the schema based on each column's dtype\n\n The *kwargs* are passed to fiona.open and can be used to write\n to multi-layer data, store data within archives (zip files), etc.\n \"\"\"\n if schema is None:\n schema = infer_schema(df)\n filename = os.path.abspath(os.path.expanduser(filename))\n with fiona.open(filename, 'w', driver=driver, crs=df.crs,\n schema=schema, **kwargs) as c:\n for feature in df.iterfeatures():\n c.write(feature)\n\n\ndef infer_schema(df):\n try:\n from collections import OrderedDict\n except ImportError:\n from ordereddict import OrderedDict\n\n def convert_type(in_type):\n if in_type == object:\n return 'str'\n out_type = type(np.asscalar(np.zeros(1, in_type))).__name__\n if out_type == 'long':\n out_type = 'int'\n return out_type\n\n properties = OrderedDict([\n (col, convert_type(_type)) for col, _type in\n zip(df.columns, df.dtypes) if col != df._geometry_column_name\n ])\n\n geom_type = _common_geom_type(df)\n if not geom_type:\n raise ValueError(\"Geometry column cannot contain mutiple \"\n \"geometry types when writing to file.\")\n\n schema = {'geometry': geom_type, 'properties': properties}\n\n return schema\n\n\ndef _common_geom_type(df):\n # Need to check geom_types before we write to file...\n # Some (most?) providers expect a single geometry type:\n # Point, LineString, or Polygon\n geom_types = df.geometry.geom_type.unique()\n\n from os.path import commonprefix # To find longest common prefix\n geom_type = commonprefix([g[::-1] for g in geom_types if g])[::-1] # Reverse\n if not geom_type:\n geom_type = None\n\n return geom_type\n", "path": "geopandas/io/file.py"}]} | 1,366 | 108 |
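
The one-token fix above (`f` to `f_filt`) matters because fiona's `Collection.filter(bbox=...)` performs the spatial selection on the OGR side before any features reach pandas; consuming the unfiltered handle silently loads the whole layer. The sketch below mirrors the corrected code path only; the shapefile name and bounding box are hypothetical, and it assumes fiona and geopandas are installed.

```python
import fiona
from geopandas import GeoDataFrame


def read_bbox(path, bbox):
    """Load only the features intersecting bbox = (minx, miny, maxx, maxy)."""
    with fiona.open(path) as source:
        filtered = source.filter(bbox=bbox)  # spatial filter happens in OGR
        return GeoDataFrame.from_features(filtered, crs=source.crs)


# Hypothetical usage: a western-Europe window over a world countries layer.
# gdf = read_bbox("countries.shp", (-10.0, 35.0, 30.0, 60.0))
# print(len(gdf))  # far fewer rows than the full layer if the filter is applied
```
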
gh_patches_debug_18008 | rasdani/github-patches | git_diff | comic__grand-challenge.org-3330 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Google logins broken with django-allauth 0.62+
# Recipe
- Open incognito window (just in case it matters)
- Navigate to grand-challenge.org
- Click Third party auth -> Google to login

- Acknowledge that you are sent to a "third party" by clicking continue on the next page.

# Result

> Unexpected Error
No login possible.
@amickan reported that no Sentry errors are being recorded. I cannot log in; presumably many other people cannot log in either.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/profiles/providers/gmail/views.py`
Content:
```
1 from allauth.socialaccount.providers.google.views import GoogleOAuth2Adapter
2 from allauth.socialaccount.providers.oauth2.views import (
3 OAuth2CallbackView,
4 OAuth2LoginView,
5 )
6
7 from grandchallenge.profiles.providers.gmail.provider import GmailProvider
8
9
10 class GmailOAuth2Adapter(GoogleOAuth2Adapter):
11 provider_id = GmailProvider.id
12
13
14 oauth2_login = OAuth2LoginView.adapter_view(GmailOAuth2Adapter)
15 oauth2_callback = OAuth2CallbackView.adapter_view(GmailOAuth2Adapter)
16
```
Path: `app/grandchallenge/profiles/providers/gmail/provider.py`
Content:
```
1 from allauth.socialaccount.providers.google.provider import GoogleProvider
2
3
4 class GmailProvider(GoogleProvider):
5 id = "gmail"
6 name = "Google"
7
8 def extract_uid(self, data):
9 return str(data["email"])
10
11
12 provider_classes = [GmailProvider]
13
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/grandchallenge/profiles/providers/gmail/provider.py b/app/grandchallenge/profiles/providers/gmail/provider.py
--- a/app/grandchallenge/profiles/providers/gmail/provider.py
+++ b/app/grandchallenge/profiles/providers/gmail/provider.py
@@ -1,9 +1,12 @@
from allauth.socialaccount.providers.google.provider import GoogleProvider
+from grandchallenge.profiles.providers.gmail.views import GmailOAuth2Adapter
+
class GmailProvider(GoogleProvider):
id = "gmail"
name = "Google"
+ oauth2_adapter_class = GmailOAuth2Adapter
def extract_uid(self, data):
return str(data["email"])
diff --git a/app/grandchallenge/profiles/providers/gmail/views.py b/app/grandchallenge/profiles/providers/gmail/views.py
--- a/app/grandchallenge/profiles/providers/gmail/views.py
+++ b/app/grandchallenge/profiles/providers/gmail/views.py
@@ -4,11 +4,9 @@
OAuth2LoginView,
)
-from grandchallenge.profiles.providers.gmail.provider import GmailProvider
-
class GmailOAuth2Adapter(GoogleOAuth2Adapter):
- provider_id = GmailProvider.id
+ provider_id = "gmail"
oauth2_login = OAuth2LoginView.adapter_view(GmailOAuth2Adapter)
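
The shape of the fix hints at the root cause: starting with django-allauth 0.62 the OAuth2 adapter is resolved through the provider class, so a provider that never sets `oauth2_adapter_class` breaks the login flow without raising anything Sentry would catch. A hedged sketch of the post-patch wiring (names are taken from the diff above; the attribute lookup is the allauth 0.62+ convention as I understand it, not confirmed by this record):

```python
# Sketch of the post-patch wiring. Assumption: allauth >= 0.62 looks up
# provider.oauth2_adapter_class rather than matching on provider_id alone.
from allauth.socialaccount.providers.google.provider import GoogleProvider
from allauth.socialaccount.providers.google.views import GoogleOAuth2Adapter


class GmailOAuth2Adapter(GoogleOAuth2Adapter):
    provider_id = "gmail"  # plain string, so views.py no longer imports the provider


class GmailProvider(GoogleProvider):
    id = "gmail"
    name = "Google"
    oauth2_adapter_class = GmailOAuth2Adapter  # resolved by allauth 0.62+

    def extract_uid(self, data):
        return str(data["email"])
```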
| {"golden_diff": "diff --git a/app/grandchallenge/profiles/providers/gmail/provider.py b/app/grandchallenge/profiles/providers/gmail/provider.py\n--- a/app/grandchallenge/profiles/providers/gmail/provider.py\n+++ b/app/grandchallenge/profiles/providers/gmail/provider.py\n@@ -1,9 +1,12 @@\n from allauth.socialaccount.providers.google.provider import GoogleProvider\n \n+from grandchallenge.profiles.providers.gmail.views import GmailOAuth2Adapter\n+\n \n class GmailProvider(GoogleProvider):\n id = \"gmail\"\n name = \"Google\"\n+ oauth2_adapter_class = GmailOAuth2Adapter\n \n def extract_uid(self, data):\n return str(data[\"email\"])\ndiff --git a/app/grandchallenge/profiles/providers/gmail/views.py b/app/grandchallenge/profiles/providers/gmail/views.py\n--- a/app/grandchallenge/profiles/providers/gmail/views.py\n+++ b/app/grandchallenge/profiles/providers/gmail/views.py\n@@ -4,11 +4,9 @@\n OAuth2LoginView,\n )\n \n-from grandchallenge.profiles.providers.gmail.provider import GmailProvider\n-\n \n class GmailOAuth2Adapter(GoogleOAuth2Adapter):\n- provider_id = GmailProvider.id\n+ provider_id = \"gmail\"\n \n \n oauth2_login = OAuth2LoginView.adapter_view(GmailOAuth2Adapter)\n", "issue": "Google logins broken with django-allauth 0.62+\n# Recipe\r\n\r\n- Open incognito window (just in case it matters)\r\n- Navigate to grand-challenge.org\r\n- Click Third party auth -> Google to login\r\n \r\n\r\n\r\n- Acknowledge that you are sent to a \"third party\" by clicking continue on the next page.\r\n\r\n\r\n\r\n# Result\r\n\r\n\r\n\r\n> Unexpected Error\r\n\r\nNo login possible.\r\n\r\n@amickan reported that no sentry errors are being recorded. I cannot login, presumably many other people cannot login either.\r\n\n", "before_files": [{"content": "from allauth.socialaccount.providers.google.views import GoogleOAuth2Adapter\nfrom allauth.socialaccount.providers.oauth2.views import (\n OAuth2CallbackView,\n OAuth2LoginView,\n)\n\nfrom grandchallenge.profiles.providers.gmail.provider import GmailProvider\n\n\nclass GmailOAuth2Adapter(GoogleOAuth2Adapter):\n provider_id = GmailProvider.id\n\n\noauth2_login = OAuth2LoginView.adapter_view(GmailOAuth2Adapter)\noauth2_callback = OAuth2CallbackView.adapter_view(GmailOAuth2Adapter)\n", "path": "app/grandchallenge/profiles/providers/gmail/views.py"}, {"content": "from allauth.socialaccount.providers.google.provider import GoogleProvider\n\n\nclass GmailProvider(GoogleProvider):\n id = \"gmail\"\n name = \"Google\"\n\n def extract_uid(self, data):\n return str(data[\"email\"])\n\n\nprovider_classes = [GmailProvider]\n", "path": "app/grandchallenge/profiles/providers/gmail/provider.py"}], "after_files": [{"content": "from allauth.socialaccount.providers.google.views import GoogleOAuth2Adapter\nfrom allauth.socialaccount.providers.oauth2.views import (\n OAuth2CallbackView,\n OAuth2LoginView,\n)\n\n\nclass GmailOAuth2Adapter(GoogleOAuth2Adapter):\n provider_id = \"gmail\"\n\n\noauth2_login = OAuth2LoginView.adapter_view(GmailOAuth2Adapter)\noauth2_callback = OAuth2CallbackView.adapter_view(GmailOAuth2Adapter)\n", "path": "app/grandchallenge/profiles/providers/gmail/views.py"}, {"content": "from allauth.socialaccount.providers.google.provider import GoogleProvider\n\nfrom grandchallenge.profiles.providers.gmail.views import GmailOAuth2Adapter\n\n\nclass GmailProvider(GoogleProvider):\n id = \"gmail\"\n name = \"Google\"\n oauth2_adapter_class = GmailOAuth2Adapter\n\n def extract_uid(self, data):\n return str(data[\"email\"])\n\n\nprovider_classes = 
[GmailProvider]\n", "path": "app/grandchallenge/profiles/providers/gmail/provider.py"}]} | 772 | 276 |
gh_patches_debug_25268 | rasdani/github-patches | git_diff | bridgecrewio__checkov-2543 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'NoneType' object is not subscriptable
Running `checkov -d .` now emits an exception
```
2022-02-25 17:45:59,050 [MainThread ] [ERROR] Failed to run check: Ensure no NACL allow ingress from 0.0.0.0:0 to port 21 for configuration: {'cidr_block': ['0.0.0.0/0'], 'egress': [False], 'network_acl_id': ['aws_default_network_acl.public.id'], 'protocol': ['-1'], 'rule_action': ['allow'], 'rule_number': [100]} at file: /modules/network/regional/main.tf
Process ForkProcess-1:
Traceback (most recent call last):
File "/opt/homebrew/Cellar/[email protected]/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/opt/homebrew/Cellar/[email protected]/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/homebrew/lib/python3.10/site-packages/checkov/common/parallelizer/parallel_runner.py", line 29, in func_wrapper
result = original_func(item)
File "/opt/homebrew/lib/python3.10/site-packages/checkov/common/runners/runner_registry.py", line 66, in <lambda>
lambda runner: runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,
File "/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/runner.py", line 119, in run
self.check_tf_definition(report, root_folder, runner_filter, collect_skip_comments)
File "/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/runner.py", line 215, in check_tf_definition
self.run_all_blocks(definition, self.context, full_file_path, root_folder, report,
File "/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/runner.py", line 225, in run_all_blocks
self.run_block(definition[block_type], definitions_context,
File "/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/runner.py", line 297, in run_block
results = registry.scan(scanned_file, entity, skipped_checks, runner_filter)
File "/opt/homebrew/lib/python3.10/site-packages/checkov/common/checks/base_check_registry.py", line 121, in scan
result = self.run_check(check, entity_configuration, entity_name, entity_type, scanned_file, skip_info)
File "/opt/homebrew/lib/python3.10/site-packages/checkov/common/checks/base_check_registry.py", line 135, in run_check
result = check.run(
File "/opt/homebrew/lib/python3.10/site-packages/checkov/common/checks/base_check.py", line 86, in run
raise e
File "/opt/homebrew/lib/python3.10/site-packages/checkov/common/checks/base_check.py", line 73, in run
check_result["result"] = self.scan_entity_conf(entity_configuration, entity_type)
File "/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/checks/resource/base_resource_check.py", line 70, in scan_entity_conf
return self.scan_resource_conf(conf)
File "/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py", line 41, in scan_resource_conf
if not self.check_rule(conf):
File "/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py", line 51, in check_rule
if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):
TypeError: 'NoneType' object is not subscriptable
```
- OS: MacOS 12.2
- Checkov Version 2.0.902
Relevant resource maybe as follows:
```
resource "aws_network_acl_rule" "public_ingress" {
network_acl_id = aws_default_network_acl.public.id
rule_number = 100
egress = false
protocol = "-1"
rule_action = "allow"
cidr_block = "0.0.0.0/0"
}
```
--- END ISSUE ---
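
A minimal, hypothetical reproduction of the traceback above (not taken from the report): checkov wraps each parsed HCL attribute in a one-element list and simply omits attributes the configuration never sets, so a `protocol = "-1"` rule arrives with no `from_port`/`to_port` keys and `dict.get` returns `None`.

```python
# Hypothetical sketch of the parsed aws_network_acl_rule from the issue.
# Assumption: checkov stores each attribute as a single-element list and
# drops attributes (like the port range) that the HCL never sets.
rule = {
    "cidr_block": ["0.0.0.0/0"],
    "egress": [False],
    "protocol": ["-1"],       # "all protocols" rule: no port range present
    "rule_action": ["allow"],
    "rule_number": [100],
}

try:
    rule.get("from_port")[0]  # .get() returned None for the missing key
except TypeError as exc:
    print(exc)                # 'NoneType' object is not subscriptable
```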
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py`
Content:
```
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
3 from checkov.common.util.type_forcers import force_list
4 from checkov.common.util.type_forcers import force_int
5
6
7 class AbsNACLUnrestrictedIngress(BaseResourceCheck):
8 def __init__(self, check_id, port):
9 name = "Ensure no NACL allow ingress from 0.0.0.0:0 to port %d" % port
10 supported_resources = ['aws_network_acl', 'aws_network_acl_rule']
11 categories = [CheckCategories.NETWORKING]
12 super().__init__(name=name, id=check_id, categories=categories, supported_resources=supported_resources)
13 self.port = port
14
15 def scan_resource_conf(self, conf):
16 """
17
18 Return PASS if:
19 - The NACL doesnt allows restricted ingress access to the port
20 - The resource is an aws_network_acl of type 'ingress' that does not violate the check.
21
22 Return FAIL if:
23 - The the NACL allows unrestricted access to the port
24
25 Return UNKNOWN if:
26 - the resource is an NACL of type 'egress', OR
27
28 :param conf: aws_network_acl configuration
29 :return: <CheckResult>
30 """
31
32 if conf.get("ingress"):
33 ingress = conf.get("ingress")
34 for rule in ingress:
35 if not self.check_rule(rule):
36 return CheckResult.FAILED
37 return CheckResult.PASSED
38 # maybe its an network_acl_rule
39 if conf.get("network_acl_id"):
40 if not conf.get("egress")[0]:
41 if not self.check_rule(conf):
42 return CheckResult.FAILED
43 return CheckResult.PASSED
44
45 return CheckResult.UNKNOWN
46
47 def check_rule(self, rule):
48 if rule.get('cidr_block'):
49 if rule.get('cidr_block') == ["0.0.0.0/0"]:
50 if rule.get('action') == ["allow"] or rule.get('rule_action') == ["allow"]:
51 if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):
52 return False
53 if rule.get('ipv6_cidr_block'):
54 if rule.get('ipv6_cidr_block') == ["::/0"]:
55 if rule.get('action') == ["allow"] or rule.get('rule_action') == ["allow"]:
56 if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):
57 return False
58 return True
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py b/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py
--- a/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py
+++ b/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py
@@ -48,11 +48,17 @@
if rule.get('cidr_block'):
if rule.get('cidr_block') == ["0.0.0.0/0"]:
if rule.get('action') == ["allow"] or rule.get('rule_action') == ["allow"]:
+ protocol = rule.get('protocol')
+ if protocol and str(protocol[0]) == "-1":
+ return False
if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):
return False
if rule.get('ipv6_cidr_block'):
if rule.get('ipv6_cidr_block') == ["::/0"]:
if rule.get('action') == ["allow"] or rule.get('rule_action') == ["allow"]:
+ protocol = rule.get('protocol')
+ if protocol and str(protocol[0]) == "-1":
+ return False
if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):
return False
return True
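
The guard the patch inserts encodes an AWS semantic: a NACL rule with protocol `-1` matches every protocol on every port, which is why Terraform omits the port range for such rules. Checking it before any port comparison both removes the crash and, correctly, fails the check. A standalone sketch of just that guard:

```python
# Standalone sketch of the inserted guard: an "all protocols" rule carries
# no port range and necessarily covers the port under test.
def covers_port(rule: dict) -> bool:
    protocol = rule.get("protocol")
    if protocol and str(protocol[0]) == "-1":
        return True  # all ports open; no from_port/to_port to compare
    return False
```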
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py b/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py\n--- a/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py\n+++ b/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py\n@@ -48,11 +48,17 @@\n if rule.get('cidr_block'):\n if rule.get('cidr_block') == [\"0.0.0.0/0\"]:\n if rule.get('action') == [\"allow\"] or rule.get('rule_action') == [\"allow\"]:\n+ protocol = rule.get('protocol')\n+ if protocol and str(protocol[0]) == \"-1\":\n+ return False\n if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):\n return False\n if rule.get('ipv6_cidr_block'):\n if rule.get('ipv6_cidr_block') == [\"::/0\"]:\n if rule.get('action') == [\"allow\"] or rule.get('rule_action') == [\"allow\"]:\n+ protocol = rule.get('protocol')\n+ if protocol and str(protocol[0]) == \"-1\":\n+ return False\n if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):\n return False\n return True\n", "issue": "'NoneType' object is not subscriptable\nRunning `checkov -d .` now emits an exception\r\n\r\n```\r\n2022-02-25 17:45:59,050 [MainThread ] [ERROR] Failed to run check: Ensure no NACL allow ingress from 0.0.0.0:0 to port 21 for configuration: {'cidr_block': ['0.0.0.0/0'], 'egress': [False], 'network_acl_id': ['aws_default_network_acl.public.id'], 'protocol': ['-1'], 'rule_action': ['allow'], 'rule_number': [100]} at file: /modules/network/regional/main.tf\r\nProcess ForkProcess-1:\r\nTraceback (most recent call last):\r\n File \"/opt/homebrew/Cellar/[email protected]/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py\", line 315, in _bootstrap\r\n self.run()\r\n File \"/opt/homebrew/Cellar/[email protected]/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py\", line 108, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/common/parallelizer/parallel_runner.py\", line 29, in func_wrapper\r\n result = original_func(item)\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/common/runners/runner_registry.py\", line 66, in <lambda>\r\n lambda runner: runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/runner.py\", line 119, in run\r\n self.check_tf_definition(report, root_folder, runner_filter, collect_skip_comments)\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/runner.py\", line 215, in check_tf_definition\r\n self.run_all_blocks(definition, self.context, full_file_path, root_folder, report,\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/runner.py\", line 225, in run_all_blocks\r\n self.run_block(definition[block_type], definitions_context,\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/runner.py\", line 297, in run_block\r\n results = registry.scan(scanned_file, entity, skipped_checks, runner_filter)\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/common/checks/base_check_registry.py\", line 121, in scan\r\n result = self.run_check(check, entity_configuration, entity_name, entity_type, scanned_file, skip_info)\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/common/checks/base_check_registry.py\", line 135, in run_check\r\n result = check.run(\r\n File 
\"/opt/homebrew/lib/python3.10/site-packages/checkov/common/checks/base_check.py\", line 86, in run\r\n raise e\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/common/checks/base_check.py\", line 73, in run\r\n check_result[\"result\"] = self.scan_entity_conf(entity_configuration, entity_type)\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/checks/resource/base_resource_check.py\", line 70, in scan_entity_conf\r\n return self.scan_resource_conf(conf)\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py\", line 41, in scan_resource_conf\r\n if not self.check_rule(conf):\r\n File \"/opt/homebrew/lib/python3.10/site-packages/checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py\", line 51, in check_rule\r\n if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):\r\nTypeError: 'NoneType' object is not subscriptable\r\n```\r\n\r\n - OS: MacOS 12.2\r\n - Checkov Version 2.0.902\r\n\r\nRelevant resource maybe as follows:\r\n```\r\nresource \"aws_network_acl_rule\" \"public_ingress\" {\r\n network_acl_id = aws_default_network_acl.public.id\r\n rule_number = 100\r\n egress = false\r\n protocol = \"-1\"\r\n rule_action = \"allow\"\r\n cidr_block = \"0.0.0.0/0\"\r\n}\r\n```\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nfrom checkov.common.util.type_forcers import force_list\nfrom checkov.common.util.type_forcers import force_int\n\n\nclass AbsNACLUnrestrictedIngress(BaseResourceCheck):\n def __init__(self, check_id, port):\n name = \"Ensure no NACL allow ingress from 0.0.0.0:0 to port %d\" % port\n supported_resources = ['aws_network_acl', 'aws_network_acl_rule']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=check_id, categories=categories, supported_resources=supported_resources)\n self.port = port\n\n def scan_resource_conf(self, conf):\n \"\"\"\n\n Return PASS if:\n - The NACL doesnt allows restricted ingress access to the port\n - The resource is an aws_network_acl of type 'ingress' that does not violate the check.\n\n Return FAIL if:\n - The the NACL allows unrestricted access to the port\n\n Return UNKNOWN if:\n - the resource is an NACL of type 'egress', OR\n\n :param conf: aws_network_acl configuration\n :return: <CheckResult>\n \"\"\"\n\n if conf.get(\"ingress\"):\n ingress = conf.get(\"ingress\")\n for rule in ingress:\n if not self.check_rule(rule):\n return CheckResult.FAILED\n return CheckResult.PASSED\n # maybe its an network_acl_rule\n if conf.get(\"network_acl_id\"):\n if not conf.get(\"egress\")[0]:\n if not self.check_rule(conf):\n return CheckResult.FAILED\n return CheckResult.PASSED\n\n return CheckResult.UNKNOWN\n\n def check_rule(self, rule):\n if rule.get('cidr_block'):\n if rule.get('cidr_block') == [\"0.0.0.0/0\"]:\n if rule.get('action') == [\"allow\"] or rule.get('rule_action') == [\"allow\"]:\n if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):\n return False\n if rule.get('ipv6_cidr_block'):\n if rule.get('ipv6_cidr_block') == [\"::/0\"]:\n if rule.get('action') == [\"allow\"] or rule.get('rule_action') == [\"allow\"]:\n if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):\n return False\n return True\n", "path": "checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py"}], "after_files": 
[{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nfrom checkov.common.util.type_forcers import force_list\nfrom checkov.common.util.type_forcers import force_int\n\n\nclass AbsNACLUnrestrictedIngress(BaseResourceCheck):\n def __init__(self, check_id, port):\n name = \"Ensure no NACL allow ingress from 0.0.0.0:0 to port %d\" % port\n supported_resources = ['aws_network_acl', 'aws_network_acl_rule']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=check_id, categories=categories, supported_resources=supported_resources)\n self.port = port\n\n def scan_resource_conf(self, conf):\n \"\"\"\n\n Return PASS if:\n - The NACL doesnt allows restricted ingress access to the port\n - The resource is an aws_network_acl of type 'ingress' that does not violate the check.\n\n Return FAIL if:\n - The the NACL allows unrestricted access to the port\n\n Return UNKNOWN if:\n - the resource is an NACL of type 'egress', OR\n\n :param conf: aws_network_acl configuration\n :return: <CheckResult>\n \"\"\"\n\n if conf.get(\"ingress\"):\n ingress = conf.get(\"ingress\")\n for rule in ingress:\n if not self.check_rule(rule):\n return CheckResult.FAILED\n return CheckResult.PASSED\n # maybe its an network_acl_rule\n if conf.get(\"network_acl_id\"):\n if not conf.get(\"egress\")[0]:\n if not self.check_rule(conf):\n return CheckResult.FAILED\n return CheckResult.PASSED\n\n return CheckResult.UNKNOWN\n\n def check_rule(self, rule):\n if rule.get('cidr_block'):\n if rule.get('cidr_block') == [\"0.0.0.0/0\"]:\n if rule.get('action') == [\"allow\"] or rule.get('rule_action') == [\"allow\"]:\n protocol = rule.get('protocol')\n if protocol and str(protocol[0]) == \"-1\":\n return False\n if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):\n return False\n if rule.get('ipv6_cidr_block'):\n if rule.get('ipv6_cidr_block') == [\"::/0\"]:\n if rule.get('action') == [\"allow\"] or rule.get('rule_action') == [\"allow\"]:\n protocol = rule.get('protocol')\n if protocol and str(protocol[0]) == \"-1\":\n return False\n if int(rule.get('from_port')[0]) <= self.port <= int(rule.get('to_port')[0]):\n return False\n return True\n", "path": "checkov/terraform/checks/resource/aws/AbsNACLUnrestrictedIngress.py"}]} | 1,987 | 312 |
gh_patches_debug_32497 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1600 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: abfallwirtschaft_pforzheim_de has changed the URL.
### I Have A Problem With:
A specific source
### What's Your Problem
The URL changed from "https://www.abfallwirtschaft-pforzheim.de/kundenportal/abfallkalender" to "https://www.abfallwirtschaft-pforzheim.de/abfallkalender". On the new site you need to select a checkbox for the year. I think this option will disappear at the beginning of the next year. But the add-on doesn't show me the calendar for 2023 anymore. It's completely empty.
### Source (if relevant)
abfallwirtschaft_pforzheim_de
### Logs
```Shell
no relevant logs
```
### Relevant Configuration
```YAML
abfallwirtschaft_pforzheim_de
```
### Checklist Source Error
- [ ] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [ ] Checked that the website of your service provider is still working
- [ ] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [ ] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py`
Content:
```
1 from html.parser import HTMLParser
2
3 import requests
4 from waste_collection_schedule import Collection # type: ignore[attr-defined]
5 from waste_collection_schedule.service.ICS import ICS
6
7 # Source code based on rh_entsorgung_de.md
8 TITLE = "Abfallwirtschaft Pforzheim"
9 DESCRIPTION = "Source for Abfallwirtschaft Pforzheim."
10 URL = "https://www.abfallwirtschaft-pforzheim.de"
11 TEST_CASES = {
12 "Abnobstraße": {
13 "street": "Abnobastraße",
14 "house_number": 3,
15 "address_suffix": "",
16 },
17 "Im Buchbusch": {
18 "street": "Im Buchbusch",
19 "house_number": 12,
20 },
21 "Eisenbahnstraße": {
22 "street": "Eisenbahnstraße",
23 "house_number": 29,
24 "address_suffix": "-33",
25 },
26 }
27
28 ICON_MAP = {
29 "Restmuell": "mdi:trash-can",
30 "Biobehaelter": "mdi:leaf",
31 "Papierbehaelter": "mdi:package-variant",
32 "Gelbe": "mdi:recycle",
33 "Grossmuellbehaelter": "mdi:delete-circle",
34 }
35
36
37 API_URL = "https://onlineservices.abfallwirtschaft-pforzheim.de/WasteManagementPforzheim/WasteManagementServlet"
38
39 # Parser for HTML input (hidden) text
40
41
42 class HiddenInputParser(HTMLParser):
43 def __init__(self):
44 super().__init__()
45 self._args = {}
46
47 @property
48 def args(self):
49 return self._args
50
51 def handle_starttag(self, tag, attrs):
52 if tag == "input":
53 d = dict(attrs)
54 if str(d["type"]).lower() == "hidden":
55 self._args[d["name"]] = d["value"] if "value" in d else ""
56
57
58 class Source:
59 def __init__(self, street: str, house_number: int, address_suffix: str = ""):
60 self._street = street
61 self._hnr = house_number
62 self._suffix = address_suffix
63 self._ics = ICS()
64
65 def fetch(self):
66 session = requests.session()
67
68 r = session.get(
69 API_URL,
70 params={"SubmitAction": "wasteDisposalServices",
71 "InFrameMode": "TRUE"},
72 )
73 r.raise_for_status()
74 r.encoding = "utf-8"
75
76 parser = HiddenInputParser()
77 parser.feed(r.text)
78
79 args = parser.args
80 args["Ort"] = self._street[0].upper()
81 args["Strasse"] = self._street
82 args["Hausnummer"] = str(self._hnr)
83 args["Hausnummerzusatz"] = self._suffix
84 args["SubmitAction"] = "CITYCHANGED"
85 r = session.post(
86 API_URL,
87 data=args,
88 )
89 r.raise_for_status()
90
91 args["SubmitAction"] = "forward"
92 args["ContainerGewaehltRM"] = "on"
93 args["ContainerGewaehltBM"] = "on"
94 args["ContainerGewaehltLVP"] = "on"
95 args["ContainerGewaehltPA"] = "on"
96 args["ContainerGewaehltPrMuell"] = "on"
97 r = session.post(
98 API_URL,
99 data=args,
100 )
101 r.raise_for_status()
102
103 args["ApplicationName"] = "com.athos.nl.mvc.abfterm.AbfuhrTerminModel"
104 args["SubmitAction"] = "filedownload_ICAL"
105
106 r = session.post(
107 API_URL,
108 data=args,
109 )
110 r.raise_for_status()
111
112 dates = self._ics.convert(r.text)
113
114 entries = []
115 for d in dates:
116 entries.append(
117 Collection(
118 d[0], d[1], ICON_MAP.get(d[1].split(" ")[0])
119 )
120 )
121 return entries
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py
@@ -1,3 +1,4 @@
+from datetime import datetime
from html.parser import HTMLParser
import requests
@@ -63,12 +64,21 @@
self._ics = ICS()
def fetch(self):
+ now = datetime.now()
+ entries = self.get_data(now.year)
+ if now.month == 12:
+ try:
+ entries += self.get_data(now.year + 1)
+ except Exception:
+ pass
+ return entries
+
+ def get_data(self, year):
session = requests.session()
r = session.get(
API_URL,
- params={"SubmitAction": "wasteDisposalServices",
- "InFrameMode": "TRUE"},
+ params={"SubmitAction": "wasteDisposalServices", "InFrameMode": "TRUE"},
)
r.raise_for_status()
r.encoding = "utf-8"
@@ -82,6 +92,7 @@
args["Hausnummer"] = str(self._hnr)
args["Hausnummerzusatz"] = self._suffix
args["SubmitAction"] = "CITYCHANGED"
+ args["Zeitraum"] = f"Jahresübersicht {year}"
r = session.post(
API_URL,
data=args,
@@ -113,9 +124,5 @@
entries = []
for d in dates:
- entries.append(
- Collection(
- d[0], d[1], ICON_MAP.get(d[1].split(" ")[0])
- )
- )
+ entries.append(Collection(d[0], d[1], ICON_MAP.get(d[1].split(" ")[0])))
return entries
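
Besides the new `Zeitraum` form field, the patch adds a year-boundary lookahead, shown in isolation below. The sketch mirrors the diff above and assumes `get_data` raises on any upstream failure:

```python
# Method excerpt (sketch): fetch the current year, and in December also try
# the next year, tolerating failure because the provider may not have
# published that calendar yet.
from datetime import datetime

def fetch(self):
    now = datetime.now()
    entries = self.get_data(now.year)
    if now.month == 12:
        try:
            entries += self.get_data(now.year + 1)
        except Exception:
            pass  # next year's ICS not available yet
    return entries
```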
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py\n@@ -1,3 +1,4 @@\n+from datetime import datetime\n from html.parser import HTMLParser\n \n import requests\n@@ -63,12 +64,21 @@\n self._ics = ICS()\n \n def fetch(self):\n+ now = datetime.now()\n+ entries = self.get_data(now.year)\n+ if now.month == 12:\n+ try:\n+ entries += self.get_data(now.year + 1)\n+ except Exception:\n+ pass\n+ return entries\n+\n+ def get_data(self, year):\n session = requests.session()\n \n r = session.get(\n API_URL,\n- params={\"SubmitAction\": \"wasteDisposalServices\",\n- \"InFrameMode\": \"TRUE\"},\n+ params={\"SubmitAction\": \"wasteDisposalServices\", \"InFrameMode\": \"TRUE\"},\n )\n r.raise_for_status()\n r.encoding = \"utf-8\"\n@@ -82,6 +92,7 @@\n args[\"Hausnummer\"] = str(self._hnr)\n args[\"Hausnummerzusatz\"] = self._suffix\n args[\"SubmitAction\"] = \"CITYCHANGED\"\n+ args[\"Zeitraum\"] = f\"Jahres\u00fcbersicht {year}\"\n r = session.post(\n API_URL,\n data=args,\n@@ -113,9 +124,5 @@\n \n entries = []\n for d in dates:\n- entries.append(\n- Collection(\n- d[0], d[1], ICON_MAP.get(d[1].split(\" \")[0])\n- )\n- )\n+ entries.append(Collection(d[0], d[1], ICON_MAP.get(d[1].split(\" \")[0])))\n return entries\n", "issue": "[Bug]: abfallwirtschaft_pforzheim_de has change the URL.\n### I Have A Problem With:\n\nA specific source\n\n### What's Your Problem\n\nThe URL changes from \"https://www.abfallwirtschaft-pforzheim.de/kundenportal/abfallkalender\" to \"https://www.abfallwirtschaft-pforzheim.de/abfallkalender\". On the new Site you need to select a checkbox for the year. I think this option would disappear on the beginning of the next year. But the addon doesnt show me the calendar for 2023 anymore. 
Its complete empty.\n\n### Source (if relevant)\n\nabfallwirtschaft_pforzheim_de\n\n### Logs\n\n```Shell\nno relevant logs\n```\n\n\n### Relevant Configuration\n\n```YAML\nabfallwirtschaft_pforzheim_de\n```\n\n\n### Checklist Source Error\n\n- [ ] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [ ] Checked that the website of your service provider is still working\n- [ ] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [ ] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "from html.parser import HTMLParser\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\n# Source code based on rh_entsorgung_de.md\nTITLE = \"Abfallwirtschaft Pforzheim\"\nDESCRIPTION = \"Source for Abfallwirtschaft Pforzheim.\"\nURL = \"https://www.abfallwirtschaft-pforzheim.de\"\nTEST_CASES = {\n \"Abnobstra\u00dfe\": {\n \"street\": \"Abnobastra\u00dfe\",\n \"house_number\": 3,\n \"address_suffix\": \"\",\n },\n \"Im Buchbusch\": {\n \"street\": \"Im Buchbusch\",\n \"house_number\": 12,\n },\n \"Eisenbahnstra\u00dfe\": {\n \"street\": \"Eisenbahnstra\u00dfe\",\n \"house_number\": 29,\n \"address_suffix\": \"-33\",\n },\n}\n\nICON_MAP = {\n \"Restmuell\": \"mdi:trash-can\",\n \"Biobehaelter\": \"mdi:leaf\",\n \"Papierbehaelter\": \"mdi:package-variant\",\n \"Gelbe\": \"mdi:recycle\",\n \"Grossmuellbehaelter\": \"mdi:delete-circle\",\n}\n\n\nAPI_URL = \"https://onlineservices.abfallwirtschaft-pforzheim.de/WasteManagementPforzheim/WasteManagementServlet\"\n\n# Parser for HTML input (hidden) text\n\n\nclass HiddenInputParser(HTMLParser):\n def __init__(self):\n super().__init__()\n self._args = {}\n\n @property\n def args(self):\n return self._args\n\n def handle_starttag(self, tag, attrs):\n if tag == \"input\":\n d = dict(attrs)\n if str(d[\"type\"]).lower() == \"hidden\":\n self._args[d[\"name\"]] = d[\"value\"] if \"value\" in d else \"\"\n\n\nclass Source:\n def __init__(self, street: str, house_number: int, address_suffix: str = \"\"):\n self._street = street\n self._hnr = house_number\n self._suffix = address_suffix\n self._ics = ICS()\n\n def fetch(self):\n session = requests.session()\n\n r = session.get(\n API_URL,\n params={\"SubmitAction\": \"wasteDisposalServices\",\n \"InFrameMode\": \"TRUE\"},\n )\n r.raise_for_status()\n r.encoding = \"utf-8\"\n\n parser = HiddenInputParser()\n parser.feed(r.text)\n\n args = parser.args\n args[\"Ort\"] = self._street[0].upper()\n args[\"Strasse\"] = self._street\n args[\"Hausnummer\"] = str(self._hnr)\n args[\"Hausnummerzusatz\"] = self._suffix\n args[\"SubmitAction\"] = \"CITYCHANGED\"\n r = session.post(\n API_URL,\n data=args,\n )\n r.raise_for_status()\n\n args[\"SubmitAction\"] = \"forward\"\n args[\"ContainerGewaehltRM\"] = \"on\"\n 
args[\"ContainerGewaehltBM\"] = \"on\"\n args[\"ContainerGewaehltLVP\"] = \"on\"\n args[\"ContainerGewaehltPA\"] = \"on\"\n args[\"ContainerGewaehltPrMuell\"] = \"on\"\n r = session.post(\n API_URL,\n data=args,\n )\n r.raise_for_status()\n\n args[\"ApplicationName\"] = \"com.athos.nl.mvc.abfterm.AbfuhrTerminModel\"\n args[\"SubmitAction\"] = \"filedownload_ICAL\"\n\n r = session.post(\n API_URL,\n data=args,\n )\n r.raise_for_status()\n\n dates = self._ics.convert(r.text)\n\n entries = []\n for d in dates:\n entries.append(\n Collection(\n d[0], d[1], ICON_MAP.get(d[1].split(\" \")[0])\n )\n )\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py"}], "after_files": [{"content": "from datetime import datetime\nfrom html.parser import HTMLParser\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\n# Source code based on rh_entsorgung_de.md\nTITLE = \"Abfallwirtschaft Pforzheim\"\nDESCRIPTION = \"Source for Abfallwirtschaft Pforzheim.\"\nURL = \"https://www.abfallwirtschaft-pforzheim.de\"\nTEST_CASES = {\n \"Abnobstra\u00dfe\": {\n \"street\": \"Abnobastra\u00dfe\",\n \"house_number\": 3,\n \"address_suffix\": \"\",\n },\n \"Im Buchbusch\": {\n \"street\": \"Im Buchbusch\",\n \"house_number\": 12,\n },\n \"Eisenbahnstra\u00dfe\": {\n \"street\": \"Eisenbahnstra\u00dfe\",\n \"house_number\": 29,\n \"address_suffix\": \"-33\",\n },\n}\n\nICON_MAP = {\n \"Restmuell\": \"mdi:trash-can\",\n \"Biobehaelter\": \"mdi:leaf\",\n \"Papierbehaelter\": \"mdi:package-variant\",\n \"Gelbe\": \"mdi:recycle\",\n \"Grossmuellbehaelter\": \"mdi:delete-circle\",\n}\n\n\nAPI_URL = \"https://onlineservices.abfallwirtschaft-pforzheim.de/WasteManagementPforzheim/WasteManagementServlet\"\n\n# Parser for HTML input (hidden) text\n\n\nclass HiddenInputParser(HTMLParser):\n def __init__(self):\n super().__init__()\n self._args = {}\n\n @property\n def args(self):\n return self._args\n\n def handle_starttag(self, tag, attrs):\n if tag == \"input\":\n d = dict(attrs)\n if str(d[\"type\"]).lower() == \"hidden\":\n self._args[d[\"name\"]] = d[\"value\"] if \"value\" in d else \"\"\n\n\nclass Source:\n def __init__(self, street: str, house_number: int, address_suffix: str = \"\"):\n self._street = street\n self._hnr = house_number\n self._suffix = address_suffix\n self._ics = ICS()\n\n def fetch(self):\n now = datetime.now()\n entries = self.get_data(now.year)\n if now.month == 12:\n try:\n entries += self.get_data(now.year + 1)\n except Exception:\n pass\n return entries\n\n def get_data(self, year):\n session = requests.session()\n\n r = session.get(\n API_URL,\n params={\"SubmitAction\": \"wasteDisposalServices\", \"InFrameMode\": \"TRUE\"},\n )\n r.raise_for_status()\n r.encoding = \"utf-8\"\n\n parser = HiddenInputParser()\n parser.feed(r.text)\n\n args = parser.args\n args[\"Ort\"] = self._street[0].upper()\n args[\"Strasse\"] = self._street\n args[\"Hausnummer\"] = str(self._hnr)\n args[\"Hausnummerzusatz\"] = self._suffix\n args[\"SubmitAction\"] = \"CITYCHANGED\"\n args[\"Zeitraum\"] = f\"Jahres\u00fcbersicht {year}\"\n r = session.post(\n API_URL,\n data=args,\n )\n r.raise_for_status()\n\n args[\"SubmitAction\"] = \"forward\"\n args[\"ContainerGewaehltRM\"] = \"on\"\n args[\"ContainerGewaehltBM\"] = \"on\"\n args[\"ContainerGewaehltLVP\"] = \"on\"\n args[\"ContainerGewaehltPA\"] = \"on\"\n args[\"ContainerGewaehltPrMuell\"] = 
\"on\"\n r = session.post(\n API_URL,\n data=args,\n )\n r.raise_for_status()\n\n args[\"ApplicationName\"] = \"com.athos.nl.mvc.abfterm.AbfuhrTerminModel\"\n args[\"SubmitAction\"] = \"filedownload_ICAL\"\n\n r = session.post(\n API_URL,\n data=args,\n )\n r.raise_for_status()\n\n dates = self._ics.convert(r.text)\n\n entries = []\n for d in dates:\n entries.append(Collection(d[0], d[1], ICON_MAP.get(d[1].split(\" \")[0])))\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/abfallwirtschaft_pforzheim_de.py"}]} | 1,799 | 481 |
gh_patches_debug_16795 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-3042 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[chatgpt] change critic input as state
> ## 📌 Checklist before creating the PR
> * [x] I have created an issue for this PR for traceability
> * [x] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description`
> * [ ] I have added relevant tags if possible for us to better distinguish different PRs
>
> ## 🚨 Issue number
> > Link this PR to your issue with words like fixed to automatically close the linked issue upon merge
> > e.g. `fixed #1234`, `closed #1234`, `resolved #1234`
> > fixed #3042
>
> ## 📝 What does this PR do?
> > Summarize your work here.
> > if you have any plots/diagrams/screenshots/tables, please attach them here.
>
> This commit fixes the ChatGPT critic input to be the state, as required by the A2C RL algorithm.
>
> ## 💥 Checklist before requesting a review
> * [x] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))
> * [x] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
> * [x] I have performed a self-review of my code
> * [ ] I have added thorough tests.
> * [ ] I have added docstrings for all the functions/methods I implemented
>
> ## ⭐️ Do you enjoy contributing to Colossal-AI?
> * [x] 🌝 Yes, I do.
> * [ ] 🌚 No, I don't.
>
> Tell us more if you don't enjoy contributing to Colossal-AI.
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `applications/ChatGPT/chatgpt/models/base/critic.py`
Content:
```
1 from typing import Optional
2
3 import torch
4 import torch.nn as nn
5
6 from ..lora import LoRAModule
7 from ..utils import masked_mean
8
9
10 class Critic(LoRAModule):
11 """
12 Critic model base class.
13
14 Args:
15 model (nn.Module): Critic model.
16 value_head (nn.Module): Value head to get value.
17 lora_rank (int): LoRA rank.
18 lora_train_bias (str): LoRA bias training mode.
19 """
20
21 def __init__(self,
22 model: nn.Module,
23 value_head: nn.Module,
24 lora_rank: int = 0,
25 lora_train_bias: str = 'none') -> None:
26
27 super().__init__(lora_rank=lora_rank, lora_train_bias=lora_train_bias)
28 self.model = model
29 self.value_head = value_head
30 self.convert_to_lora()
31
32 def forward(self,
33 sequences: torch.LongTensor,
34 action_mask: Optional[torch.Tensor] = None,
35 attention_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
36 outputs = self.model(sequences, attention_mask=attention_mask)
37 last_hidden_states = outputs['last_hidden_state']
38
39 values = self.value_head(last_hidden_states).squeeze(-1)[:, :-1]
40
41 if action_mask is not None:
42 num_actions = action_mask.size(1)
43 values = values[:, -num_actions:]
44 value = masked_mean(values, action_mask, dim=1)
45 return value
46 value = values.mean(dim=1).squeeze(1)
47 return value
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/applications/ChatGPT/chatgpt/models/base/critic.py b/applications/ChatGPT/chatgpt/models/base/critic.py
--- a/applications/ChatGPT/chatgpt/models/base/critic.py
+++ b/applications/ChatGPT/chatgpt/models/base/critic.py
@@ -36,12 +36,15 @@
outputs = self.model(sequences, attention_mask=attention_mask)
last_hidden_states = outputs['last_hidden_state']
- values = self.value_head(last_hidden_states).squeeze(-1)[:, :-1]
+ values = self.value_head(last_hidden_states).squeeze(-1)
if action_mask is not None:
num_actions = action_mask.size(1)
- values = values[:, -num_actions:]
- value = masked_mean(values, action_mask, dim=1)
+ prompt_mask = attention_mask[:, :-num_actions]
+ values = values[:, :-num_actions]
+ value = masked_mean(values, prompt_mask, dim=1)
return value
+
+ values = values[:, :-1]
value = values.mean(dim=1).squeeze(1)
return value
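
The net effect of the patch: the value head now scores the state (the prompt tokens) rather than the generated action tokens. A small sketch of the pooling, using a stand-in for the repository's `masked_mean` (assumed here to be a mask-weighted mean):

```python
# Sketch of the post-patch value computation: drop the values over the
# action tokens and mean-pool the rest under the prompt attention mask.
import torch

def masked_mean(t: torch.Tensor, mask: torch.Tensor, dim: int) -> torch.Tensor:
    t = t * mask  # zero out positions outside the mask
    return t.sum(dim=dim) / mask.sum(dim=dim).clamp(min=1)

num_actions = 4                      # assumption: 4 generated tokens
values = torch.randn(2, 10)          # (batch, seq_len) from the value head
attention_mask = torch.ones(2, 10)
prompt_mask = attention_mask[:, :-num_actions]
value = masked_mean(values[:, :-num_actions], prompt_mask, dim=1)  # (batch,)
```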
| {"golden_diff": "diff --git a/applications/ChatGPT/chatgpt/models/base/critic.py b/applications/ChatGPT/chatgpt/models/base/critic.py\n--- a/applications/ChatGPT/chatgpt/models/base/critic.py\n+++ b/applications/ChatGPT/chatgpt/models/base/critic.py\n@@ -36,12 +36,15 @@\n outputs = self.model(sequences, attention_mask=attention_mask)\n last_hidden_states = outputs['last_hidden_state']\n \n- values = self.value_head(last_hidden_states).squeeze(-1)[:, :-1]\n+ values = self.value_head(last_hidden_states).squeeze(-1)\n \n if action_mask is not None:\n num_actions = action_mask.size(1)\n- values = values[:, -num_actions:]\n- value = masked_mean(values, action_mask, dim=1)\n+ prompt_mask = attention_mask[:, :-num_actions]\n+ values = values[:, :-num_actions]\n+ value = masked_mean(values, prompt_mask, dim=1)\n return value\n+\n+ values = values[:, :-1]\n value = values.mean(dim=1).squeeze(1)\n return value\n", "issue": "[chatgpt] change critic input as state\n> ## \ud83d\udccc Checklist before creating the PR\r\n> * [x] I have created an issue for this PR for traceability\r\n> * [x] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description`\r\n> * [ ] I have added relevant tags if possible for us to better distinguish different PRs\r\n> \r\n> ## \ud83d\udea8 Issue number\r\n> > Link this PR to your issue with words like fixed to automatically close the linked issue upon merge\r\n> > e.g. `fixed #1234`, `closed #1234`, `resolved #1234`\r\n> > fixed #3042\r\n> \r\n> ## \ud83d\udcdd What does this PR do?\r\n> > Summarize your work here.\r\n> > if you have any plots/diagrams/screenshots/tables, please attach them here.\r\n> \r\n> This commit fix chatgpt critic input as state according to A2C RL algorithm.\r\n> \r\n> ## \ud83d\udca5 Checklist before requesting a review\r\n> * [x] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))\r\n> * [x] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible\r\n> * [x] I have performed a self-review of my code\r\n> * [ ] I have added thorough tests.\r\n> * [ ] I have added docstrings for all the functions/methods I implemented\r\n> \r\n> ## \u2b50\ufe0f Do you enjoy contributing to Colossal-AI?\r\n> * [x] \ud83c\udf1d Yes, I do.\r\n> * [ ] \ud83c\udf1a No, I don't.\r\n> \r\n> Tell us more if you don't enjoy contributing to Colossal-AI.\r\n\r\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from typing import Optional\n\nimport torch\nimport torch.nn as nn\n\nfrom ..lora import LoRAModule\nfrom ..utils import masked_mean\n\n\nclass Critic(LoRAModule):\n \"\"\"\n Critic model base class.\n\n Args:\n model (nn.Module): Critic model.\n value_head (nn.Module): Value head to get value.\n lora_rank (int): LoRA rank.\n lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n\n def __init__(self,\n model: nn.Module,\n value_head: nn.Module,\n lora_rank: int = 0,\n lora_train_bias: str = 'none') -> None:\n\n super().__init__(lora_rank=lora_rank, lora_train_bias=lora_train_bias)\n self.model = model\n self.value_head = value_head\n self.convert_to_lora()\n\n def forward(self,\n sequences: torch.LongTensor,\n action_mask: Optional[torch.Tensor] = None,\n attention_mask: Optional[torch.Tensor] = None) -> torch.Tensor:\n outputs = self.model(sequences, attention_mask=attention_mask)\n last_hidden_states = outputs['last_hidden_state']\n\n values = 
self.value_head(last_hidden_states).squeeze(-1)[:, :-1]\n\n if action_mask is not None:\n num_actions = action_mask.size(1)\n values = values[:, -num_actions:]\n value = masked_mean(values, action_mask, dim=1)\n return value\n value = values.mean(dim=1).squeeze(1)\n return value\n", "path": "applications/ChatGPT/chatgpt/models/base/critic.py"}], "after_files": [{"content": "from typing import Optional\n\nimport torch\nimport torch.nn as nn\n\nfrom ..lora import LoRAModule\nfrom ..utils import masked_mean\n\n\nclass Critic(LoRAModule):\n \"\"\"\n Critic model base class.\n\n Args:\n model (nn.Module): Critic model.\n value_head (nn.Module): Value head to get value.\n lora_rank (int): LoRA rank.\n lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n\n def __init__(self,\n model: nn.Module,\n value_head: nn.Module,\n lora_rank: int = 0,\n lora_train_bias: str = 'none') -> None:\n\n super().__init__(lora_rank=lora_rank, lora_train_bias=lora_train_bias)\n self.model = model\n self.value_head = value_head\n self.convert_to_lora()\n\n def forward(self,\n sequences: torch.LongTensor,\n action_mask: Optional[torch.Tensor] = None,\n attention_mask: Optional[torch.Tensor] = None) -> torch.Tensor:\n outputs = self.model(sequences, attention_mask=attention_mask)\n last_hidden_states = outputs['last_hidden_state']\n\n values = self.value_head(last_hidden_states).squeeze(-1)\n\n if action_mask is not None:\n num_actions = action_mask.size(1)\n prompt_mask = attention_mask[:, :-num_actions]\n values = values[:, :-num_actions]\n value = masked_mean(values, prompt_mask, dim=1)\n return value\n\n values = values[:, :-1]\n value = values.mean(dim=1).squeeze(1)\n return value\n", "path": "applications/ChatGPT/chatgpt/models/base/critic.py"}]} | 1,120 | 254 |
gh_patches_debug_36284 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2966 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider albert_heijn is broken
During the global build at 2021-06-02-14-42-40, spider **albert_heijn** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/albert_heijn.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/albert_heijn.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/albert_heijn.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/albert_heijn.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import scrapy
3 from locations.items import GeojsonPointItem
4 import json
5
6 class AlbertHeijnSpider(scrapy.Spider):
7 name = 'albert_heijn'
8 item_attributes = {'brand': "Albert Heijn"}
9 allowed_domains = ['www.ah.nl']
10
11 def start_requests(self):
12 url = 'https://www.ah.nl/data/winkelinformatie/winkels/json'
13 yield scrapy.Request(url, callback=self.parse)
14
15 def parse(self, response):
16 stores = json.loads(response.body_as_unicode())
17 for store in stores['stores']:
18 try:
19 phone_number = store['phoneNumber']
20 except:
21 phone_number = ""
22 yield GeojsonPointItem(
23 lat=store['lat'],
24 lon=store['lng'],
25 addr_full="%s %s" % (store['street'], store["housenr"]),
26 city=store['city'],
27 phone=phone_number,
28 state="",
29 postcode=store['zip'],
30 ref=store['no'],
31 country="Netherlands",
32 website="https://www.ah.nl/winkel/albert-heijn/%s/%s/%s" % (store['city'], store['street'], store['no'])
33 )
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/albert_heijn.py b/locations/spiders/albert_heijn.py
--- a/locations/spiders/albert_heijn.py
+++ b/locations/spiders/albert_heijn.py
@@ -1,33 +1,53 @@
# -*- coding: utf-8 -*-
+import json
+import re
+
import scrapy
+
+from locations.hours import OpeningHours
from locations.items import GeojsonPointItem
-import json
-class AlbertHeijnSpider(scrapy.Spider):
- name = 'albert_heijn'
- item_attributes = {'brand': "Albert Heijn"}
- allowed_domains = ['www.ah.nl']
- def start_requests(self):
- url = 'https://www.ah.nl/data/winkelinformatie/winkels/json'
- yield scrapy.Request(url, callback=self.parse)
+class AlbertHeijnSpider(scrapy.Spider):
+ name = "albert_heijn"
+ item_attributes = {"brand": "Albert Heijn", "brand_wikidata": "Q1653985"}
+ allowed_domains = ["www.ah.nl", "www.ah.be"]
+ start_urls = (
+ "https://www.ah.nl/sitemaps/entities/stores/stores.xml",
+ "https://www.ah.be/sitemaps/entities/stores/stores.xml",
+ )
def parse(self, response):
- stores = json.loads(response.body_as_unicode())
- for store in stores['stores']:
- try:
- phone_number = store['phoneNumber']
- except:
- phone_number = ""
- yield GeojsonPointItem(
- lat=store['lat'],
- lon=store['lng'],
- addr_full="%s %s" % (store['street'], store["housenr"]),
- city=store['city'],
- phone=phone_number,
- state="",
- postcode=store['zip'],
- ref=store['no'],
- country="Netherlands",
- website="https://www.ah.nl/winkel/albert-heijn/%s/%s/%s" % (store['city'], store['street'], store['no'])
- )
+ response.selector.remove_namespaces()
+ for url in response.xpath("//loc/text()").extract():
+ if re.search("/winkel/albert-heijn/", url):
+ yield scrapy.Request(url, callback=self.parse_store)
+
+ def parse_store(self, response):
+ for ldjson in response.xpath(
+ '//script[@type="application/ld+json"]/text()'
+ ).extract():
+ data = json.loads(ldjson)
+ if data["@type"] != "GroceryStore":
+ continue
+
+ opening_hours = OpeningHours()
+ for spec in data["openingHoursSpecification"]:
+ opening_hours.add_range(
+ spec["dayOfWeek"][:2], spec["opens"], spec["closes"]
+ )
+
+ properties = {
+ "ref": response.url,
+ "website": response.url,
+ "name": data["name"],
+ "phone": data["telephone"],
+ "lat": data["geo"]["latitude"],
+ "lon": data["geo"]["longitude"],
+ "addr_full": data["address"]["streetAddress"],
+ "city": data["address"]["addressLocality"],
+ "postcode": data["address"]["postalCode"],
+ "country": data["address"]["addressCountry"],
+ "opening_hours": opening_hours.as_opening_hours(),
+ }
+ yield GeojsonPointItem(**properties)
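
The rewrite drops the removed `/data/winkelinformatie` JSON endpoint in favour of walking the store sitemap and reading each page's embedded schema.org block. A sketch of that ld+json step; the sample payload is illustrative, not captured from ah.nl:

```python
# Sketch of the ld+json parsing the rewritten spider relies on. Assumption:
# each store page embeds a GroceryStore schema.org block like this one.
import json

ldjson = """{
  "@type": "GroceryStore",
  "openingHoursSpecification": [
    {"dayOfWeek": "Monday", "opens": "08:00", "closes": "22:00"}
  ]
}"""

data = json.loads(ldjson)
for spec in data["openingHoursSpecification"]:
    day = spec["dayOfWeek"][:2]  # "Mo", the form OpeningHours.add_range expects
    print(day, spec["opens"], spec["closes"])
```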
| {"golden_diff": "diff --git a/locations/spiders/albert_heijn.py b/locations/spiders/albert_heijn.py\n--- a/locations/spiders/albert_heijn.py\n+++ b/locations/spiders/albert_heijn.py\n@@ -1,33 +1,53 @@\n # -*- coding: utf-8 -*-\n+import json\n+import re\n+\n import scrapy\n+\n+from locations.hours import OpeningHours\n from locations.items import GeojsonPointItem\n-import json\n \n-class AlbertHeijnSpider(scrapy.Spider):\n- name = 'albert_heijn'\n- item_attributes = {'brand': \"Albert Heijn\"}\n- allowed_domains = ['www.ah.nl']\n \n- def start_requests(self):\n- url = 'https://www.ah.nl/data/winkelinformatie/winkels/json'\n- yield scrapy.Request(url, callback=self.parse)\n+class AlbertHeijnSpider(scrapy.Spider):\n+ name = \"albert_heijn\"\n+ item_attributes = {\"brand\": \"Albert Heijn\", \"brand_wikidata\": \"Q1653985\"}\n+ allowed_domains = [\"www.ah.nl\", \"www.ah.be\"]\n+ start_urls = (\n+ \"https://www.ah.nl/sitemaps/entities/stores/stores.xml\",\n+ \"https://www.ah.be/sitemaps/entities/stores/stores.xml\",\n+ )\n \n def parse(self, response):\n- stores = json.loads(response.body_as_unicode())\n- for store in stores['stores']:\n- try:\n- phone_number = store['phoneNumber']\n- except:\n- phone_number = \"\"\n- yield GeojsonPointItem(\n- lat=store['lat'],\n- lon=store['lng'],\n- addr_full=\"%s %s\" % (store['street'], store[\"housenr\"]),\n- city=store['city'],\n- phone=phone_number,\n- state=\"\",\n- postcode=store['zip'],\n- ref=store['no'],\n- country=\"Netherlands\",\n- website=\"https://www.ah.nl/winkel/albert-heijn/%s/%s/%s\" % (store['city'], store['street'], store['no'])\n- )\n+ response.selector.remove_namespaces()\n+ for url in response.xpath(\"//loc/text()\").extract():\n+ if re.search(\"/winkel/albert-heijn/\", url):\n+ yield scrapy.Request(url, callback=self.parse_store)\n+\n+ def parse_store(self, response):\n+ for ldjson in response.xpath(\n+ '//script[@type=\"application/ld+json\"]/text()'\n+ ).extract():\n+ data = json.loads(ldjson)\n+ if data[\"@type\"] != \"GroceryStore\":\n+ continue\n+\n+ opening_hours = OpeningHours()\n+ for spec in data[\"openingHoursSpecification\"]:\n+ opening_hours.add_range(\n+ spec[\"dayOfWeek\"][:2], spec[\"opens\"], spec[\"closes\"]\n+ )\n+\n+ properties = {\n+ \"ref\": response.url,\n+ \"website\": response.url,\n+ \"name\": data[\"name\"],\n+ \"phone\": data[\"telephone\"],\n+ \"lat\": data[\"geo\"][\"latitude\"],\n+ \"lon\": data[\"geo\"][\"longitude\"],\n+ \"addr_full\": data[\"address\"][\"streetAddress\"],\n+ \"city\": data[\"address\"][\"addressLocality\"],\n+ \"postcode\": data[\"address\"][\"postalCode\"],\n+ \"country\": data[\"address\"][\"addressCountry\"],\n+ \"opening_hours\": opening_hours.as_opening_hours(),\n+ }\n+ yield GeojsonPointItem(**properties)\n", "issue": "Spider albert_heijn is broken\nDuring the global build at 2021-06-02-14-42-40, spider **albert_heijn** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/albert_heijn.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/albert_heijn.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/albert_heijn.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nfrom locations.items import GeojsonPointItem\nimport json\n\nclass AlbertHeijnSpider(scrapy.Spider):\n name = 'albert_heijn'\n item_attributes = {'brand': \"Albert Heijn\"}\n allowed_domains = ['www.ah.nl']\n\n 
def start_requests(self):\n url = 'https://www.ah.nl/data/winkelinformatie/winkels/json'\n yield scrapy.Request(url, callback=self.parse)\n\n def parse(self, response):\n stores = json.loads(response.body_as_unicode())\n for store in stores['stores']:\n try:\n phone_number = store['phoneNumber']\n except:\n phone_number = \"\"\n yield GeojsonPointItem(\n lat=store['lat'],\n lon=store['lng'],\n addr_full=\"%s %s\" % (store['street'], store[\"housenr\"]),\n city=store['city'],\n phone=phone_number,\n state=\"\",\n postcode=store['zip'],\n ref=store['no'],\n country=\"Netherlands\",\n website=\"https://www.ah.nl/winkel/albert-heijn/%s/%s/%s\" % (store['city'], store['street'], store['no'])\n )\n", "path": "locations/spiders/albert_heijn.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\nimport re\n\nimport scrapy\n\nfrom locations.hours import OpeningHours\nfrom locations.items import GeojsonPointItem\n\n\nclass AlbertHeijnSpider(scrapy.Spider):\n name = \"albert_heijn\"\n item_attributes = {\"brand\": \"Albert Heijn\", \"brand_wikidata\": \"Q1653985\"}\n allowed_domains = [\"www.ah.nl\", \"www.ah.be\"]\n start_urls = (\n \"https://www.ah.nl/sitemaps/entities/stores/stores.xml\",\n \"https://www.ah.be/sitemaps/entities/stores/stores.xml\",\n )\n\n def parse(self, response):\n response.selector.remove_namespaces()\n for url in response.xpath(\"//loc/text()\").extract():\n if re.search(\"/winkel/albert-heijn/\", url):\n yield scrapy.Request(url, callback=self.parse_store)\n\n def parse_store(self, response):\n for ldjson in response.xpath(\n '//script[@type=\"application/ld+json\"]/text()'\n ).extract():\n data = json.loads(ldjson)\n if data[\"@type\"] != \"GroceryStore\":\n continue\n\n opening_hours = OpeningHours()\n for spec in data[\"openingHoursSpecification\"]:\n opening_hours.add_range(\n spec[\"dayOfWeek\"][:2], spec[\"opens\"], spec[\"closes\"]\n )\n\n properties = {\n \"ref\": response.url,\n \"website\": response.url,\n \"name\": data[\"name\"],\n \"phone\": data[\"telephone\"],\n \"lat\": data[\"geo\"][\"latitude\"],\n \"lon\": data[\"geo\"][\"longitude\"],\n \"addr_full\": data[\"address\"][\"streetAddress\"],\n \"city\": data[\"address\"][\"addressLocality\"],\n \"postcode\": data[\"address\"][\"postalCode\"],\n \"country\": data[\"address\"][\"addressCountry\"],\n \"opening_hours\": opening_hours.as_opening_hours(),\n }\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/albert_heijn.py"}]} | 776 | 782 |
gh_patches_debug_25605 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-1878 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider tesla is broken
During the global build at 2021-05-26-14-42-23, spider **tesla** failed with **486 features** and **5 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/tesla.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tesla.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tesla.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/tesla.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import re
3 import scrapy
4 import urllib.parse
5 from locations.items import GeojsonPointItem
6
7
8 class TeslaSpider(scrapy.Spider):
9 name = "tesla"
10 item_attributes = { 'brand': "Tesla" }
11 allowed_domains = ['www.tesla.com']
12 start_urls = [
13 'https://www.tesla.com/findus/list',
14 ]
15 download_delay = 0.5
16 custom_settings = {
17 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',
18 }
19
20 def parse(self, response):
21 # Only scrape stores and service centers
22 country_urls = response.xpath('//a[contains(@href,"stores") or contains(@href,"services")]/@href').extract()
23 for country_url in country_urls:
24 yield scrapy.Request(response.urljoin(country_url), callback=self.parse_store_list)
25
26 def parse_store_list(self, response):
27 store_urls = response.xpath('//a[@class="fn org url"]/@href').extract()
28 for store_url in store_urls:
29 yield scrapy.Request(response.urljoin(store_url), callback=self.parse_store)
30
31 def parse_store(self, response):
32 # Skip if "Coming Soon" - no content to capture yet
33 if response.xpath('//span[@class="coming-soon"]/text()').extract_first() == "Coming Soon":
34 pass
35 else:
36 ref = re.search(r'.+/(.+?)/?(?:\.html|$)', response.url).group(1)
37
38 # city, state, and zip do not have separate classes - contained together in locality class as text
39 name = response.xpath('normalize-space(//header/h1/text())').extract_first()
40 common_name = response.xpath('normalize-space(//span[@class="common-name"]//text())').extract_first()
41 street_address = response.xpath('normalize-space(//span[@class="street-address"]//text())').extract_first()
42 city_state_zip = response.xpath('normalize-space(//span[@class="locality"]//text())').extract_first()
43
44 if common_name and street_address and city_state_zip:
45 addr_full = common_name + ' ' + street_address + ', ' + city_state_zip
46 elif street_address and not city_state_zip:
47 addr_full = street_address
48 elif city_state_zip and not street_address:
49 addr_full = city_state_zip
50 elif street_address and city_state_zip:
51 addr_full = street_address + ', ' + city_state_zip
52
53 country_url = response.xpath('//header[@class="findus-list-header"]/a/@href').extract_first()
54 country = urllib.parse.unquote_plus(re.search(r'.+/(.+?)/?(?:\.html|$)', country_url).group(1))
55 phone = response.xpath('normalize-space(//span[@class="tel"]/span[2]/text())').extract_first()
56 location_type = re.search(r".+/(.+?)/(.+?)/?(?:\.html|$)", response.url).group(1)
57
58 # map link varies across store pages
59 if response.xpath('normalize-space(//a[contains(@href,"maps.google")]/@href)').extract_first():
60 map_link = response.xpath('normalize-space(//a[contains(@href,"maps.google")]/@href)').extract_first()
61 else:
62 map_link = response.xpath('normalize-space(//img[contains(@src,"maps.google")]/@src)').extract_first()
63
64 # extract coordinates from map link
65 if re.search(r'.+=([0-9.-]+),\s?([0-9.-]+)', map_link):
66 lat = re.search(r'.+=([0-9.-]+),\s?([0-9.-]+)', map_link).group(1)
67 lon = re.search(r'.+=([0-9.-]+),\s?([0-9.-]+)', map_link).group(2)
68 elif re.search(r'.+@([0-9.-]+),\s?([0-9.-]+)', map_link):
69 lat = re.search(r'.+@([0-9.-]+),\s?([0-9.-]+)', map_link).group(1)
70 lon = re.search(r'.+@([0-9.-]+),\s?([0-9.-]+)', map_link).group(2)
71 else:
72 lat = None
73 lon = None
74
75 properties = {
76 'ref': ref,
77 'name': name,
78 'addr_full': addr_full,
79 'country': country,
80 'phone': phone,
81 'website': response.url,
82 'lat': lat,
83 'lon': lon,
84 'extras':
85 {
86 'location_type': location_type # Is this a service center or store/gallery
87 }
88 }
89
90 yield GeojsonPointItem(**properties)
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/tesla.py b/locations/spiders/tesla.py
--- a/locations/spiders/tesla.py
+++ b/locations/spiders/tesla.py
@@ -19,7 +19,7 @@
def parse(self, response):
# Only scrape stores and service centers
- country_urls = response.xpath('//a[contains(@href,"stores") or contains(@href,"services")]/@href').extract()
+ country_urls = response.xpath('//a[contains(@href,"stores") or contains(@href,"services") or contains(@href,"superchargers")]/@href').extract()
for country_url in country_urls:
yield scrapy.Request(response.urljoin(country_url), callback=self.parse_store_list)
@@ -41,6 +41,7 @@
street_address = response.xpath('normalize-space(//span[@class="street-address"]//text())').extract_first()
city_state_zip = response.xpath('normalize-space(//span[@class="locality"]//text())').extract_first()
+ addr_full = ""
if common_name and street_address and city_state_zip:
addr_full = common_name + ' ' + street_address + ', ' + city_state_zip
elif street_address and not city_state_zip:
| {"golden_diff": "diff --git a/locations/spiders/tesla.py b/locations/spiders/tesla.py\n--- a/locations/spiders/tesla.py\n+++ b/locations/spiders/tesla.py\n@@ -19,7 +19,7 @@\n \n def parse(self, response):\n # Only scrape stores and service centers\n- country_urls = response.xpath('//a[contains(@href,\"stores\") or contains(@href,\"services\")]/@href').extract()\n+ country_urls = response.xpath('//a[contains(@href,\"stores\") or contains(@href,\"services\") or contains(@href,\"superchargers\")]/@href').extract()\n for country_url in country_urls:\n yield scrapy.Request(response.urljoin(country_url), callback=self.parse_store_list)\n \n@@ -41,6 +41,7 @@\n street_address = response.xpath('normalize-space(//span[@class=\"street-address\"]//text())').extract_first()\n city_state_zip = response.xpath('normalize-space(//span[@class=\"locality\"]//text())').extract_first()\n \n+ addr_full = \"\"\n if common_name and street_address and city_state_zip:\n addr_full = common_name + ' ' + street_address + ', ' + city_state_zip\n elif street_address and not city_state_zip:\n", "issue": "Spider tesla is broken\nDuring the global build at 2021-05-26-14-42-23, spider **tesla** failed with **486 features** and **5 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/tesla.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tesla.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/tesla.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport re\nimport scrapy\nimport urllib.parse\nfrom locations.items import GeojsonPointItem\n\n\nclass TeslaSpider(scrapy.Spider):\n name = \"tesla\"\n item_attributes = { 'brand': \"Tesla\" }\n allowed_domains = ['www.tesla.com']\n start_urls = [\n 'https://www.tesla.com/findus/list',\n ]\n download_delay = 0.5\n custom_settings = {\n 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',\n }\n\n def parse(self, response):\n # Only scrape stores and service centers\n country_urls = response.xpath('//a[contains(@href,\"stores\") or contains(@href,\"services\")]/@href').extract()\n for country_url in country_urls:\n yield scrapy.Request(response.urljoin(country_url), callback=self.parse_store_list)\n\n def parse_store_list(self, response):\n store_urls = response.xpath('//a[@class=\"fn org url\"]/@href').extract()\n for store_url in store_urls:\n yield scrapy.Request(response.urljoin(store_url), callback=self.parse_store)\n\n def parse_store(self, response):\n # Skip if \"Coming Soon\" - no content to capture yet\n if response.xpath('//span[@class=\"coming-soon\"]/text()').extract_first() == \"Coming Soon\":\n pass\n else:\n ref = re.search(r'.+/(.+?)/?(?:\\.html|$)', response.url).group(1)\n\n # city, state, and zip do not have separate classes - contained together in locality class as text\n name = response.xpath('normalize-space(//header/h1/text())').extract_first()\n common_name = response.xpath('normalize-space(//span[@class=\"common-name\"]//text())').extract_first()\n street_address = response.xpath('normalize-space(//span[@class=\"street-address\"]//text())').extract_first()\n city_state_zip = response.xpath('normalize-space(//span[@class=\"locality\"]//text())').extract_first()\n\n if common_name and street_address and city_state_zip:\n addr_full = common_name + ' ' + street_address + ', ' + city_state_zip\n elif street_address and not 
city_state_zip:\n addr_full = street_address\n elif city_state_zip and not street_address:\n addr_full = city_state_zip\n elif street_address and city_state_zip:\n addr_full = street_address + ', ' + city_state_zip\n\n country_url = response.xpath('//header[@class=\"findus-list-header\"]/a/@href').extract_first()\n country = urllib.parse.unquote_plus(re.search(r'.+/(.+?)/?(?:\\.html|$)', country_url).group(1))\n phone = response.xpath('normalize-space(//span[@class=\"tel\"]/span[2]/text())').extract_first()\n location_type = re.search(r\".+/(.+?)/(.+?)/?(?:\\.html|$)\", response.url).group(1)\n\n # map link varies across store pages\n if response.xpath('normalize-space(//a[contains(@href,\"maps.google\")]/@href)').extract_first():\n map_link = response.xpath('normalize-space(//a[contains(@href,\"maps.google\")]/@href)').extract_first()\n else:\n map_link = response.xpath('normalize-space(//img[contains(@src,\"maps.google\")]/@src)').extract_first()\n\n # extract coordinates from map link\n if re.search(r'.+=([0-9.-]+),\\s?([0-9.-]+)', map_link):\n lat = re.search(r'.+=([0-9.-]+),\\s?([0-9.-]+)', map_link).group(1)\n lon = re.search(r'.+=([0-9.-]+),\\s?([0-9.-]+)', map_link).group(2)\n elif re.search(r'.+@([0-9.-]+),\\s?([0-9.-]+)', map_link):\n lat = re.search(r'.+@([0-9.-]+),\\s?([0-9.-]+)', map_link).group(1)\n lon = re.search(r'.+@([0-9.-]+),\\s?([0-9.-]+)', map_link).group(2)\n else:\n lat = None\n lon = None\n\n properties = {\n 'ref': ref,\n 'name': name,\n 'addr_full': addr_full,\n 'country': country,\n 'phone': phone,\n 'website': response.url,\n 'lat': lat,\n 'lon': lon,\n 'extras':\n {\n 'location_type': location_type # Is this a service center or store/gallery\n }\n }\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/tesla.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport re\nimport scrapy\nimport urllib.parse\nfrom locations.items import GeojsonPointItem\n\n\nclass TeslaSpider(scrapy.Spider):\n name = \"tesla\"\n item_attributes = { 'brand': \"Tesla\" }\n allowed_domains = ['www.tesla.com']\n start_urls = [\n 'https://www.tesla.com/findus/list',\n ]\n download_delay = 0.5\n custom_settings = {\n 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',\n }\n\n def parse(self, response):\n # Only scrape stores and service centers\n country_urls = response.xpath('//a[contains(@href,\"stores\") or contains(@href,\"services\") or contains(@href,\"superchargers\")]/@href').extract()\n for country_url in country_urls:\n yield scrapy.Request(response.urljoin(country_url), callback=self.parse_store_list)\n\n def parse_store_list(self, response):\n store_urls = response.xpath('//a[@class=\"fn org url\"]/@href').extract()\n for store_url in store_urls:\n yield scrapy.Request(response.urljoin(store_url), callback=self.parse_store)\n\n def parse_store(self, response):\n # Skip if \"Coming Soon\" - no content to capture yet\n if response.xpath('//span[@class=\"coming-soon\"]/text()').extract_first() == \"Coming Soon\":\n pass\n else:\n ref = re.search(r'.+/(.+?)/?(?:\\.html|$)', response.url).group(1)\n\n # city, state, and zip do not have separate classes - contained together in locality class as text\n name = response.xpath('normalize-space(//header/h1/text())').extract_first()\n common_name = response.xpath('normalize-space(//span[@class=\"common-name\"]//text())').extract_first()\n street_address = 
response.xpath('normalize-space(//span[@class=\"street-address\"]//text())').extract_first()\n city_state_zip = response.xpath('normalize-space(//span[@class=\"locality\"]//text())').extract_first()\n\n addr_full = \"\"\n if common_name and street_address and city_state_zip:\n addr_full = common_name + ' ' + street_address + ', ' + city_state_zip\n elif street_address and not city_state_zip:\n addr_full = street_address\n elif city_state_zip and not street_address:\n addr_full = city_state_zip\n elif street_address and city_state_zip:\n addr_full = street_address + ', ' + city_state_zip\n\n country_url = response.xpath('//header[@class=\"findus-list-header\"]/a/@href').extract_first()\n country = urllib.parse.unquote_plus(re.search(r'.+/(.+?)/?(?:\\.html|$)', country_url).group(1))\n phone = response.xpath('normalize-space(//span[@class=\"tel\"]/span[2]/text())').extract_first()\n location_type = re.search(r\".+/(.+?)/(.+?)/?(?:\\.html|$)\", response.url).group(1)\n\n # map link varies across store pages\n if response.xpath('normalize-space(//a[contains(@href,\"maps.google\")]/@href)').extract_first():\n map_link = response.xpath('normalize-space(//a[contains(@href,\"maps.google\")]/@href)').extract_first()\n else:\n map_link = response.xpath('normalize-space(//img[contains(@src,\"maps.google\")]/@src)').extract_first()\n\n # extract coordinates from map link\n if re.search(r'.+=([0-9.-]+),\\s?([0-9.-]+)', map_link):\n lat = re.search(r'.+=([0-9.-]+),\\s?([0-9.-]+)', map_link).group(1)\n lon = re.search(r'.+=([0-9.-]+),\\s?([0-9.-]+)', map_link).group(2)\n elif re.search(r'.+@([0-9.-]+),\\s?([0-9.-]+)', map_link):\n lat = re.search(r'.+@([0-9.-]+),\\s?([0-9.-]+)', map_link).group(1)\n lon = re.search(r'.+@([0-9.-]+),\\s?([0-9.-]+)', map_link).group(2)\n else:\n lat = None\n lon = None\n\n properties = {\n 'ref': ref,\n 'name': name,\n 'addr_full': addr_full,\n 'country': country,\n 'phone': phone,\n 'website': response.url,\n 'lat': lat,\n 'lon': lon,\n 'extras':\n {\n 'location_type': location_type # Is this a service center or store/gallery\n }\n }\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/tesla.py"}]} | 1,692 | 275 |
gh_patches_debug_19241 | rasdani/github-patches | git_diff | Gallopsled__pwntools-2240 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Do not overwrite global `bytes` in code or examples
It looks like there are a few places where we overwrite `bytes` (the type identifier) with a local variable.
```
$ git grep -E -e '^ +bytes *=' -- '*.py'
pwnlib/commandline/disasm.py:81: bytes = disasm(dat, vma=safeeval.const(args.address), instructions=False, offset=False)
pwnlib/commandline/elfpatch.py:29: bytes = unhex(a.bytes)
pwnlib/elf/elf.py:195: bytes = 4
```
And there are a few cases where we do it in tests, which could have cross-test impact if the global state isn't reset (hint: it isn't).
```
~/pwntools $ git grep -E -e '^ +>>> bytes *=' -- '*.py'
pwnlib/runner.py:42: >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
pwnlib/runner.py:48: >>> bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')
pwnlib/runner.py:87: >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
```
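For clarity, here is a minimal, self-contained sketch of the failure mode; it is illustrative only and not code from pwntools:

```python
# Once the name `bytes` is rebound, the built-in type is no longer
# reachable by that name in the same scope.
bytes = b"\x90\x90"      # shadows the built-in `bytes` type
try:
    bytes(4)             # would normally produce b"\x00\x00\x00\x00"
except TypeError as exc:
    print(exc)           # 'bytes' object is not callable
```

Renaming such variables (e.g. to `insn_bytes`) restores access to the built-in.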
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwnlib/runner.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import division
3
4 import os
5 import tempfile
6
7 from pwnlib.context import LocalContext
8 from pwnlib.elf import ELF
9 from pwnlib.tubes.process import process
10
11 __all__ = ['run_assembly', 'run_shellcode', 'run_assembly_exitcode', 'run_shellcode_exitcode']
12
13 @LocalContext
14 def run_assembly(assembly):
15 """
16 Given an assembly listing, assemble and execute it.
17
18 Returns:
19
20 A :class:`pwnlib.tubes.process.process` tube to interact with the process.
21
22 Example:
23
24 >>> p = run_assembly('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
25 >>> p.wait_for_close()
26 >>> p.poll()
27 3
28
29 >>> p = run_assembly('mov r0, #12; mov r7, #1; svc #0', arch='arm')
30 >>> p.wait_for_close()
31 >>> p.poll()
32 12
33 """
34 return ELF.from_assembly(assembly).process()
35
36 @LocalContext
37 def run_shellcode(bytes, **kw):
38 """Given assembled machine code bytes, execute them.
39
40 Example:
41
42 >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
43 >>> p = run_shellcode(bytes)
44 >>> p.wait_for_close()
45 >>> p.poll()
46 3
47
48 >>> bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')
49 >>> p = run_shellcode(bytes, arch='arm')
50 >>> p.wait_for_close()
51 >>> p.poll()
52 12
53 """
54 return ELF.from_bytes(bytes, **kw).process()
55
56 @LocalContext
57 def run_assembly_exitcode(assembly):
58 """
59 Given an assembly listing, assemble and execute it, and wait for
60 the process to die.
61
62 Returns:
63
64 The exit code of the process.
65
66 Example:
67
68 >>> run_assembly_exitcode('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
69 3
70 """
71 p = run_assembly(assembly)
72 p.wait_for_close()
73 return p.poll()
74
75 @LocalContext
76 def run_shellcode_exitcode(bytes):
77 """
78 Given assembled machine code bytes, execute them, and wait for
79 the process to die.
80
81 Returns:
82
83 The exit code of the process.
84
85 Example:
86
87 >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
88 >>> run_shellcode_exitcode(bytes)
89 3
90 """
91 p = run_shellcode(bytes)
92 p.wait_for_close()
93 return p.poll()
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pwnlib/runner.py b/pwnlib/runner.py
--- a/pwnlib/runner.py
+++ b/pwnlib/runner.py
@@ -39,14 +39,14 @@
Example:
- >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
- >>> p = run_shellcode(bytes)
+ >>> insn_bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
+ >>> p = run_shellcode(insn_bytes)
>>> p.wait_for_close()
>>> p.poll()
3
- >>> bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')
- >>> p = run_shellcode(bytes, arch='arm')
+ >>> insn_bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')
+ >>> p = run_shellcode(insn_bytes, arch='arm')
>>> p.wait_for_close()
>>> p.poll()
12
@@ -84,8 +84,8 @@
Example:
- >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
- >>> run_shellcode_exitcode(bytes)
+ >>> insn_bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')
+ >>> run_shellcode_exitcode(insn_bytes)
3
"""
p = run_shellcode(bytes)
| {"golden_diff": "diff --git a/pwnlib/runner.py b/pwnlib/runner.py\n--- a/pwnlib/runner.py\n+++ b/pwnlib/runner.py\n@@ -39,14 +39,14 @@\n \n Example:\n \n- >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n- >>> p = run_shellcode(bytes)\n+ >>> insn_bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n+ >>> p = run_shellcode(insn_bytes)\n >>> p.wait_for_close()\n >>> p.poll()\n 3\n \n- >>> bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')\n- >>> p = run_shellcode(bytes, arch='arm')\n+ >>> insn_bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')\n+ >>> p = run_shellcode(insn_bytes, arch='arm')\n >>> p.wait_for_close()\n >>> p.poll()\n 12\n@@ -84,8 +84,8 @@\n \n Example:\n \n- >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n- >>> run_shellcode_exitcode(bytes)\n+ >>> insn_bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n+ >>> run_shellcode_exitcode(insn_bytes)\n 3\n \"\"\"\n p = run_shellcode(bytes)\n", "issue": "Do not overwrite global `bytes` in code or examples\nIt looks like there's a few places we overwrite `bytes` (the type identifier) with a local variable.\r\n\r\n```\r\n$ git grep -E -e '^ +bytes *=' -- '*.py'\r\npwnlib/commandline/disasm.py:81: bytes = disasm(dat, vma=safeeval.const(args.address), instructions=False, offset=False)\r\npwnlib/commandline/elfpatch.py:29: bytes = unhex(a.bytes)\r\npwnlib/elf/elf.py:195: bytes = 4\r\n```\r\n\r\nAnd a few cases we do it in tests, which could have cross-test impact if the global state isn't reset (hint: it isn't).\r\n\r\n```\r\n~/pwntools $ git grep -E -e '^ +>>> bytes *=' -- '*.py'\r\npwnlib/runner.py:42: >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\r\npwnlib/runner.py:48: >>> bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')\r\npwnlib/runner.py:87: >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\n\nimport os\nimport tempfile\n\nfrom pwnlib.context import LocalContext\nfrom pwnlib.elf import ELF\nfrom pwnlib.tubes.process import process\n\n__all__ = ['run_assembly', 'run_shellcode', 'run_assembly_exitcode', 'run_shellcode_exitcode']\n\n@LocalContext\ndef run_assembly(assembly):\n \"\"\"\n Given an assembly listing, assemble and execute it.\n\n Returns:\n\n A :class:`pwnlib.tubes.process.process` tube to interact with the process.\n\n Example:\n\n >>> p = run_assembly('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n >>> p.wait_for_close()\n >>> p.poll()\n 3\n\n >>> p = run_assembly('mov r0, #12; mov r7, #1; svc #0', arch='arm')\n >>> p.wait_for_close()\n >>> p.poll()\n 12\n \"\"\"\n return ELF.from_assembly(assembly).process()\n\n@LocalContext\ndef run_shellcode(bytes, **kw):\n \"\"\"Given assembled machine code bytes, execute them.\n\n Example:\n\n >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n >>> p = run_shellcode(bytes)\n >>> p.wait_for_close()\n >>> p.poll()\n 3\n\n >>> bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')\n >>> p = run_shellcode(bytes, arch='arm')\n >>> p.wait_for_close()\n >>> p.poll()\n 12\n \"\"\"\n return ELF.from_bytes(bytes, **kw).process()\n\n@LocalContext\ndef run_assembly_exitcode(assembly):\n \"\"\"\n Given an assembly listing, assemble and execute it, and wait for\n the process to die.\n\n Returns:\n\n The exit code of the process.\n\n Example:\n\n >>> run_assembly_exitcode('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n 3\n \"\"\"\n p = run_assembly(assembly)\n p.wait_for_close()\n return 
p.poll()\n\n@LocalContext\ndef run_shellcode_exitcode(bytes):\n \"\"\"\n Given assembled machine code bytes, execute them, and wait for\n the process to die.\n\n Returns:\n\n The exit code of the process.\n\n Example:\n\n >>> bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n >>> run_shellcode_exitcode(bytes)\n 3\n \"\"\"\n p = run_shellcode(bytes)\n p.wait_for_close()\n return p.poll()\n", "path": "pwnlib/runner.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\n\nimport os\nimport tempfile\n\nfrom pwnlib.context import LocalContext\nfrom pwnlib.elf import ELF\nfrom pwnlib.tubes.process import process\n\n__all__ = ['run_assembly', 'run_shellcode', 'run_assembly_exitcode', 'run_shellcode_exitcode']\n\n@LocalContext\ndef run_assembly(assembly):\n \"\"\"\n Given an assembly listing, assemble and execute it.\n\n Returns:\n\n A :class:`pwnlib.tubes.process.process` tube to interact with the process.\n\n Example:\n\n >>> p = run_assembly('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n >>> p.wait_for_close()\n >>> p.poll()\n 3\n\n >>> p = run_assembly('mov r0, #12; mov r7, #1; svc #0', arch='arm')\n >>> p.wait_for_close()\n >>> p.poll()\n 12\n \"\"\"\n return ELF.from_assembly(assembly).process()\n\n@LocalContext\ndef run_shellcode(bytes, **kw):\n \"\"\"Given assembled machine code bytes, execute them.\n\n Example:\n\n >>> insn_bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n >>> p = run_shellcode(insn_bytes)\n >>> p.wait_for_close()\n >>> p.poll()\n 3\n\n >>> insn_bytes = asm('mov r0, #12; mov r7, #1; svc #0', arch='arm')\n >>> p = run_shellcode(insn_bytes, arch='arm')\n >>> p.wait_for_close()\n >>> p.poll()\n 12\n \"\"\"\n return ELF.from_bytes(bytes, **kw).process()\n\n@LocalContext\ndef run_assembly_exitcode(assembly):\n \"\"\"\n Given an assembly listing, assemble and execute it, and wait for\n the process to die.\n\n Returns:\n\n The exit code of the process.\n\n Example:\n\n >>> run_assembly_exitcode('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n 3\n \"\"\"\n p = run_assembly(assembly)\n p.wait_for_close()\n return p.poll()\n\n@LocalContext\ndef run_shellcode_exitcode(bytes):\n \"\"\"\n Given assembled machine code bytes, execute them, and wait for\n the process to die.\n\n Returns:\n\n The exit code of the process.\n\n Example:\n\n >>> insn_bytes = asm('mov ebx, 3; mov eax, SYS_exit; int 0x80;')\n >>> run_shellcode_exitcode(insn_bytes)\n 3\n \"\"\"\n p = run_shellcode(bytes)\n p.wait_for_close()\n return p.poll()\n", "path": "pwnlib/runner.py"}]} | 1,355 | 364 |
gh_patches_debug_35111 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-2854 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
King Island: battery never seems to discharge
I've been keeping an eye on AUS-TAS-KI since it was added to the map. Charging works fine, but discharging never shows up.
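For reference, a minimal sketch of the sign conventions involved, assuming (as the parser's comments state) that the feed reports battery and flywheel as negative when storing and positive when feeding the grid, while the map's `storage` field treats positive as charging and negative as discharging; the function name is illustrative, not parser code:

```python
def storage_value(battery_kw: float, flywheel_kw: float) -> float:
    # Flip the sign so that charging is positive and discharging is negative.
    return -(battery_kw + flywheel_kw)

assert storage_value(-50, -10) == 60   # storing energy -> charging (positive)
assert storage_value(30, 5) == -35     # feeding the grid -> discharging (negative)
```

If discharge never appears on the map, the likely suspects are this sign flip or a branch that zeroes out one side of the storage value.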
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/AUS_TAS_KI.py`
Content:
```
1 # Initial PR https://github.com/tmrowco/electricitymap-contrib/pull/2456
2 # Discussion thread https://github.com/tmrowco/electricitymap-contrib/issues/636
3 # A promotional webpage for King Island's energy production is here: https://www.hydro.com.au/clean-energy/hybrid-energy-solutions/success-stories/king-island
4 # As of 09/2020, it embeds with <iframe> the URI https://data.ajenti.com.au/KIREIP/index.html
5 # About the data, the feed we get seems to be counters with a 2-second interval.
6 # That means that if we fetch these counters every 15 minutes, we are only reading "instantaneous" meters that could differ from the total quantity of energy at play. To get the exact data, we would need a parser running constantly to collect those 2-second-interval counters.
7
8 import asyncio
9 import json
10 import logging
11 import arrow
12 from signalr import Connection
13 from requests import Session
14
15 class SignalR:
16 def __init__(self, url):
17 self.url = url
18
19 def update_res(self, msg):
20 if (msg != {}):
21 self.res = msg
22
23 def get_value(self, hub, method):
24 self.res = {}
25 with Session() as session:
26 #create a connection
27 connection = Connection(self.url, session)
28 chat = connection.register_hub(hub)
29 chat.client.on(method, self.update_res)
30 connection.start()
31 connection.wait(3)
32 connection.close()
33 return self.res
34
35 def parse_payload(logger, payload):
36 technologies_parsed = {}
37 if not 'technologies' in payload:
38 raise KeyError(
39 f"No 'technologies' in payload\n"
40 f"serie : {json.dumps(payload)}"
41 )
42 else:
43 logger.debug(f"serie : {json.dumps(payload)}")
44 for technology in payload['technologies']:
45 assert technology['unit'] == 'kW'
46 # The upstream API gives us kW, we need MW
47 technologies_parsed[technology['id']] = int(technology['value'])/1000
48 logger.debug(f"production : {json.dumps(technologies_parsed)}")
49
50 biodiesel_percent = payload['biodiesel']['percent']
51
52 return technologies_parsed, biodiesel_percent
53
54 # Both keys battery and flywheel are negative when storing energy, and positive when feeding energy to the grid
55 def format_storage_techs(technologies_parsed):
56 storage_techs = technologies_parsed['battery']+technologies_parsed['flywheel']
57 battery_production = storage_techs if storage_techs > 0 else 0
58 battery_storage = storage_techs if storage_techs < 0 else 0
59
60 return battery_production, battery_storage
61
62 def fetch_production(zone_key='AUS-TAS-KI', session=None, target_datetime=None, logger: logging.Logger = logging.getLogger(__name__)):
63
64 if target_datetime is not None:
65 raise NotImplementedError('The datasource currently implemented is only real time')
66
67 payload = SignalR("https://data.ajenti.com.au/live/signalr").get_value("TagHub", "Dashboard")
68 technologies_parsed, biodiesel_percent = parse_payload(logger, payload)
69 battery_production, battery_storage = format_storage_techs(technologies_parsed)
70 return {
71 'zoneKey': zone_key,
72 'datetime': arrow.now(tz='Australia/Currie').datetime,
73 'production': {
74 'battery discharge': battery_production,
75 'biomass': technologies_parsed['diesel']*biodiesel_percent/100,
76 'coal': 0,
77 'gas': 0,
78 'hydro': 0,
79 'nuclear': 0,
80 'oil': technologies_parsed['diesel']*(100-biodiesel_percent)/100,
81 'solar': technologies_parsed['solar'],
82 'wind': 0 if technologies_parsed['wind'] < 0 and technologies_parsed['wind'] > -0.1 else technologies_parsed['wind'], #If wind between 0 and -0.1 set to 0 to ignore self-consumption
83 'geothermal': 0,
84 'unknown': 0
85 },
86 'storage': {
87 'battery': battery_storage*-1
88 },
89 'source': 'https://data.ajenti.com.au/KIREIP/index.html'
90 }
91
92 if __name__ == '__main__':
93 print(fetch_production())
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/parsers/AUS_TAS_KI.py b/parsers/AUS_TAS_KI.py
--- a/parsers/AUS_TAS_KI.py
+++ b/parsers/AUS_TAS_KI.py
@@ -52,12 +52,10 @@
return technologies_parsed, biodiesel_percent
# Both keys battery and flywheel are negative when storing energy, and positive when feeding energy to the grid
-def format_storage_techs(technologies_parsed):
+def sum_storage_techs(technologies_parsed):
storage_techs = technologies_parsed['battery']+technologies_parsed['flywheel']
- battery_production = storage_techs if storage_techs > 0 else 0
- battery_storage = storage_techs if storage_techs < 0 else 0
- return battery_production, battery_storage
+ return storage_techs
def fetch_production(zone_key='AUS-TAS-KI', session=None, target_datetime=None, logger: logging.Logger = logging.getLogger(__name__)):
@@ -66,12 +64,11 @@
payload = SignalR("https://data.ajenti.com.au/live/signalr").get_value("TagHub", "Dashboard")
technologies_parsed, biodiesel_percent = parse_payload(logger, payload)
- battery_production, battery_storage = format_storage_techs(technologies_parsed)
+ storage_techs = sum_storage_techs(technologies_parsed)
return {
'zoneKey': zone_key,
'datetime': arrow.now(tz='Australia/Currie').datetime,
'production': {
- 'battery discharge': battery_production,
'biomass': technologies_parsed['diesel']*biodiesel_percent/100,
'coal': 0,
'gas': 0,
@@ -84,9 +81,9 @@
'unknown': 0
},
'storage': {
- 'battery': battery_storage*-1
+ 'battery': storage_techs*-1 #Somewhat counterintuitively,to ElectricityMap positive means charging and negative means discharging
},
- 'source': 'https://data.ajenti.com.au/KIREIP/index.html'
+ 'source': 'https://www.hydro.com.au/clean-energy/hybrid-energy-solutions/success-stories/king-island' #Iframe: https://data.ajenti.com.au/KIREIP/index.html
}
if __name__ == '__main__':
| {"golden_diff": "diff --git a/parsers/AUS_TAS_KI.py b/parsers/AUS_TAS_KI.py\n--- a/parsers/AUS_TAS_KI.py\n+++ b/parsers/AUS_TAS_KI.py\n@@ -52,12 +52,10 @@\n return technologies_parsed, biodiesel_percent\n \n # Both keys battery and flywheel are negative when storing energy, and positive when feeding energy to the grid\n-def format_storage_techs(technologies_parsed):\n+def sum_storage_techs(technologies_parsed):\n storage_techs = technologies_parsed['battery']+technologies_parsed['flywheel']\n- battery_production = storage_techs if storage_techs > 0 else 0\n- battery_storage = storage_techs if storage_techs < 0 else 0\n \n- return battery_production, battery_storage\n+ return storage_techs\n \n def fetch_production(zone_key='AUS-TAS-KI', session=None, target_datetime=None, logger: logging.Logger = logging.getLogger(__name__)):\n \n@@ -66,12 +64,11 @@\n \n payload = SignalR(\"https://data.ajenti.com.au/live/signalr\").get_value(\"TagHub\", \"Dashboard\")\n technologies_parsed, biodiesel_percent = parse_payload(logger, payload)\n- battery_production, battery_storage = format_storage_techs(technologies_parsed)\n+ storage_techs = sum_storage_techs(technologies_parsed)\n return {\n 'zoneKey': zone_key,\n 'datetime': arrow.now(tz='Australia/Currie').datetime,\n 'production': {\n- 'battery discharge': battery_production,\n 'biomass': technologies_parsed['diesel']*biodiesel_percent/100,\n 'coal': 0,\n 'gas': 0,\n@@ -84,9 +81,9 @@\n 'unknown': 0\n },\n 'storage': {\n- 'battery': battery_storage*-1\n+ 'battery': storage_techs*-1 #Somewhat counterintuitively,to ElectricityMap positive means charging and negative means discharging\n },\n- 'source': 'https://data.ajenti.com.au/KIREIP/index.html'\n+ 'source': 'https://www.hydro.com.au/clean-energy/hybrid-energy-solutions/success-stories/king-island' #Iframe: https://data.ajenti.com.au/KIREIP/index.html\n }\n \n if __name__ == '__main__':\n", "issue": "King Island: battery never seems to discharge \nI've been keeping an eye on AUS-TAS-KI since it was added to the map. Charging works fine, discharging doesn't show up.\n", "before_files": [{"content": "# Initial PR https://github.com/tmrowco/electricitymap-contrib/pull/2456\n# Discussion thread https://github.com/tmrowco/electricitymap-contrib/issues/636\n# A promotion webpage for King's Island energy production is here : https://www.hydro.com.au/clean-energy/hybrid-energy-solutions/success-stories/king-island\n# As of 09/2020, it embeds with <iframe> the URI https://data.ajenti.com.au/KIREIP/index.html\n# About the data, the feed we get seems to be counters with a 2 seconds interval.\n# That means that if we fetch these counters every 15 minutes, we only are reading \"instantaneous\" metters that could differ from the total quantity of energies at play. 
To get the very exact data, we would need to have a parser running constanty to collect those 2-sec interval counters.\n\nimport asyncio\nimport json\nimport logging\nimport arrow\nfrom signalr import Connection\nfrom requests import Session\n\nclass SignalR:\n def __init__(self, url):\n self.url = url\n \n def update_res(self, msg):\n if (msg != {}):\n self.res = msg\n\n def get_value(self, hub, method):\n self.res = {}\n with Session() as session:\n #create a connection\n connection = Connection(self.url, session)\n chat = connection.register_hub(hub)\n chat.client.on(method, self.update_res)\n connection.start()\n connection.wait(3)\n connection.close()\n return self.res\n \ndef parse_payload(logger, payload):\n technologies_parsed = {}\n if not 'technologies' in payload:\n raise KeyError(\n f\"No 'technologies' in payload\\n\"\n f\"serie : {json.dumps(payload)}\"\n )\n else:\n logger.debug(f\"serie : {json.dumps(payload)}\")\n for technology in payload['technologies']:\n assert technology['unit'] == 'kW'\n # The upstream API gives us kW, we need MW\n technologies_parsed[technology['id']] = int(technology['value'])/1000\n logger.debug(f\"production : {json.dumps(technologies_parsed)}\")\n\n biodiesel_percent = payload['biodiesel']['percent']\n\n return technologies_parsed, biodiesel_percent\n\n# Both keys battery and flywheel are negative when storing energy, and positive when feeding energy to the grid\ndef format_storage_techs(technologies_parsed):\n storage_techs = technologies_parsed['battery']+technologies_parsed['flywheel']\n battery_production = storage_techs if storage_techs > 0 else 0\n battery_storage = storage_techs if storage_techs < 0 else 0\n\n return battery_production, battery_storage\n\ndef fetch_production(zone_key='AUS-TAS-KI', session=None, target_datetime=None, logger: logging.Logger = logging.getLogger(__name__)):\n\n if target_datetime is not None:\n raise NotImplementedError('The datasource currently implemented is only real time')\n \n payload = SignalR(\"https://data.ajenti.com.au/live/signalr\").get_value(\"TagHub\", \"Dashboard\")\n technologies_parsed, biodiesel_percent = parse_payload(logger, payload)\n battery_production, battery_storage = format_storage_techs(technologies_parsed)\n return {\n 'zoneKey': zone_key,\n 'datetime': arrow.now(tz='Australia/Currie').datetime,\n 'production': {\n 'battery discharge': battery_production,\n 'biomass': technologies_parsed['diesel']*biodiesel_percent/100,\n 'coal': 0,\n 'gas': 0,\n 'hydro': 0,\n 'nuclear': 0,\n 'oil': technologies_parsed['diesel']*(100-biodiesel_percent)/100,\n 'solar': technologies_parsed['solar'],\n 'wind': 0 if technologies_parsed['wind'] < 0 and technologies_parsed['wind'] > -0.1 else technologies_parsed['wind'], #If wind between 0 and -0.1 set to 0 to ignore self-consumption\n 'geothermal': 0,\n 'unknown': 0\n },\n 'storage': {\n 'battery': battery_storage*-1\n },\n 'source': 'https://data.ajenti.com.au/KIREIP/index.html'\n }\n\nif __name__ == '__main__':\n print(fetch_production())\n", "path": "parsers/AUS_TAS_KI.py"}], "after_files": [{"content": "# Initial PR https://github.com/tmrowco/electricitymap-contrib/pull/2456\n# Discussion thread https://github.com/tmrowco/electricitymap-contrib/issues/636\n# A promotion webpage for King's Island energy production is here : https://www.hydro.com.au/clean-energy/hybrid-energy-solutions/success-stories/king-island\n# As of 09/2020, it embeds with <iframe> the URI https://data.ajenti.com.au/KIREIP/index.html\n# About the data, the feed we get seems to 
be counters with a 2 seconds interval.\n# That means that if we fetch these counters every 15 minutes, we only are reading \"instantaneous\" metters that could differ from the total quantity of energies at play. To get the very exact data, we would need to have a parser running constanty to collect those 2-sec interval counters.\n\nimport asyncio\nimport json\nimport logging\nimport arrow\nfrom signalr import Connection\nfrom requests import Session\n\nclass SignalR:\n def __init__(self, url):\n self.url = url\n \n def update_res(self, msg):\n if (msg != {}):\n self.res = msg\n\n def get_value(self, hub, method):\n self.res = {}\n with Session() as session:\n #create a connection\n connection = Connection(self.url, session)\n chat = connection.register_hub(hub)\n chat.client.on(method, self.update_res)\n connection.start()\n connection.wait(3)\n connection.close()\n return self.res\n \ndef parse_payload(logger, payload):\n technologies_parsed = {}\n if not 'technologies' in payload:\n raise KeyError(\n f\"No 'technologies' in payload\\n\"\n f\"serie : {json.dumps(payload)}\"\n )\n else:\n logger.debug(f\"serie : {json.dumps(payload)}\")\n for technology in payload['technologies']:\n assert technology['unit'] == 'kW'\n # The upstream API gives us kW, we need MW\n technologies_parsed[technology['id']] = int(technology['value'])/1000\n logger.debug(f\"production : {json.dumps(technologies_parsed)}\")\n\n biodiesel_percent = payload['biodiesel']['percent']\n\n return technologies_parsed, biodiesel_percent\n\n# Both keys battery and flywheel are negative when storing energy, and positive when feeding energy to the grid\ndef sum_storage_techs(technologies_parsed):\n storage_techs = technologies_parsed['battery']+technologies_parsed['flywheel']\n\n return storage_techs\n\ndef fetch_production(zone_key='AUS-TAS-KI', session=None, target_datetime=None, logger: logging.Logger = logging.getLogger(__name__)):\n\n if target_datetime is not None:\n raise NotImplementedError('The datasource currently implemented is only real time')\n \n payload = SignalR(\"https://data.ajenti.com.au/live/signalr\").get_value(\"TagHub\", \"Dashboard\")\n technologies_parsed, biodiesel_percent = parse_payload(logger, payload)\n storage_techs = sum_storage_techs(technologies_parsed)\n return {\n 'zoneKey': zone_key,\n 'datetime': arrow.now(tz='Australia/Currie').datetime,\n 'production': {\n 'biomass': technologies_parsed['diesel']*biodiesel_percent/100,\n 'coal': 0,\n 'gas': 0,\n 'hydro': 0,\n 'nuclear': 0,\n 'oil': technologies_parsed['diesel']*(100-biodiesel_percent)/100,\n 'solar': technologies_parsed['solar'],\n 'wind': 0 if technologies_parsed['wind'] < 0 and technologies_parsed['wind'] > -0.1 else technologies_parsed['wind'], #If wind between 0 and -0.1 set to 0 to ignore self-consumption\n 'geothermal': 0,\n 'unknown': 0\n },\n 'storage': {\n 'battery': storage_techs*-1 #Somewhat counterintuitively,to ElectricityMap positive means charging and negative means discharging\n },\n 'source': 'https://www.hydro.com.au/clean-energy/hybrid-energy-solutions/success-stories/king-island' #Iframe: https://data.ajenti.com.au/KIREIP/index.html\n }\n\nif __name__ == '__main__':\n print(fetch_production())\n", "path": "parsers/AUS_TAS_KI.py"}]} | 1,447 | 540 |
gh_patches_debug_3443 | rasdani/github-patches | git_diff | crytic__slither-1971 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Suggestion to make the recommendation in the `msgvalue-inside-a-loop` detector wiki clearer
### Describe the desired feature
Reference: https://github.com/crytic/slither/wiki/Detector-Documentation#msgvalue-inside-a-loop
This is the current recommendation for the `msgvalue-inside-a-loop` detector:
```
Track msg.value through a local variable and decrease its amount on every iteration/usage.
```
This is a vague recommendation: it does not address the issue head-on, i.e., it does not say what mathematical technique the developer should use to remedy the bug.
My suggestions:
1. Recommend dividing by the number of `receivers`
2. Recommend providing an explicit array of amounts alongside the `receivers` array, and check that the sum of the elements in that array matches `msg.value` (a sketch of this check follows below)
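
As a rough, language-agnostic illustration of suggestion 2 (sketched in Python rather than Solidity; the names are hypothetical, not Slither code):

```python
# Hypothetical sketch: reject any call where the explicit per-receiver
# amounts do not add up to the total value sent.
def distribute(receivers, amounts, msg_value):
    if len(receivers) != len(amounts):
        raise ValueError("receivers and amounts must have the same length")
    if sum(amounts) != msg_value:
        raise ValueError("sum(amounts) must equal msg.value")
    return dict(zip(receivers, amounts))
```

The same invariant, written as a Solidity `require` over the summed amounts, is what the detector's recommendation could point developers toward.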
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `slither/detectors/statements/msg_value_in_loop.py`
Content:
```
1 from typing import List, Optional
2 from slither.core.cfg.node import NodeType, Node
3 from slither.detectors.abstract_detector import (
4 AbstractDetector,
5 DetectorClassification,
6 DETECTOR_INFO,
7 )
8 from slither.slithir.operations import InternalCall
9 from slither.core.declarations import SolidityVariableComposed, Contract
10 from slither.utils.output import Output
11
12
13 def detect_msg_value_in_loop(contract: Contract) -> List[Node]:
14 results: List[Node] = []
15 for f in contract.functions_entry_points:
16 if f.is_implemented and f.payable:
17 msg_value_in_loop(f.entry_point, 0, [], results)
18 return results
19
20
21 def msg_value_in_loop(
22 node: Optional[Node], in_loop_counter: int, visited: List[Node], results: List[Node]
23 ) -> None:
24
25 if node is None:
26 return
27
28 if node in visited:
29 return
30 # shared visited
31 visited.append(node)
32
33 if node.type == NodeType.STARTLOOP:
34 in_loop_counter += 1
35 elif node.type == NodeType.ENDLOOP:
36 in_loop_counter -= 1
37
38 for ir in node.all_slithir_operations():
39 if in_loop_counter > 0 and SolidityVariableComposed("msg.value") in ir.read:
40 results.append(ir.node)
41 if isinstance(ir, (InternalCall)):
42 msg_value_in_loop(ir.function.entry_point, in_loop_counter, visited, results)
43
44 for son in node.sons:
45 msg_value_in_loop(son, in_loop_counter, visited, results)
46
47
48 class MsgValueInLoop(AbstractDetector):
49 """
50 Detect the use of msg.value inside a loop
51 """
52
53 ARGUMENT = "msg-value-loop"
54 HELP = "msg.value inside a loop"
55 IMPACT = DetectorClassification.HIGH
56 CONFIDENCE = DetectorClassification.MEDIUM
57
58 WIKI = "https://github.com/crytic/slither/wiki/Detector-Documentation/#msgvalue-inside-a-loop"
59
60 WIKI_TITLE = "`msg.value` inside a loop"
61 WIKI_DESCRIPTION = "Detect the use of `msg.value` inside a loop."
62
63 # region wiki_exploit_scenario
64 WIKI_EXPLOIT_SCENARIO = """
65 ```solidity
66 contract MsgValueInLoop{
67
68 mapping (address => uint256) balances;
69
70 function bad(address[] memory receivers) public payable {
71 for (uint256 i=0; i < receivers.length; i++) {
72 balances[receivers[i]] += msg.value;
73 }
74 }
75
76 }
77 ```
78 """
79 # endregion wiki_exploit_scenario
80
81 WIKI_RECOMMENDATION = """
82 Track msg.value through a local variable and decrease its amount on every iteration/usage.
83 """
84
85 def _detect(self) -> List[Output]:
86 """"""
87 results: List[Output] = []
88 for c in self.compilation_unit.contracts_derived:
89 values = detect_msg_value_in_loop(c)
90 for node in values:
91 func = node.function
92
93 info: DETECTOR_INFO = [func, " use msg.value in a loop: ", node, "\n"]
94 res = self.generate_result(info)
95 results.append(res)
96
97 return results
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/slither/detectors/statements/msg_value_in_loop.py b/slither/detectors/statements/msg_value_in_loop.py
--- a/slither/detectors/statements/msg_value_in_loop.py
+++ b/slither/detectors/statements/msg_value_in_loop.py
@@ -79,7 +79,7 @@
# endregion wiki_exploit_scenario
WIKI_RECOMMENDATION = """
-Track msg.value through a local variable and decrease its amount on every iteration/usage.
+Provide an explicit array of amounts alongside the receivers array, and check that the sum of all amounts matches `msg.value`.
"""
def _detect(self) -> List[Output]:
| {"golden_diff": "diff --git a/slither/detectors/statements/msg_value_in_loop.py b/slither/detectors/statements/msg_value_in_loop.py\n--- a/slither/detectors/statements/msg_value_in_loop.py\n+++ b/slither/detectors/statements/msg_value_in_loop.py\n@@ -79,7 +79,7 @@\n # endregion wiki_exploit_scenario\n \n WIKI_RECOMMENDATION = \"\"\"\n-Track msg.value through a local variable and decrease its amount on every iteration/usage.\n+Provide an explicit array of amounts alongside the receivers array, and check that the sum of all amounts matches `msg.value`.\n \"\"\"\n \n def _detect(self) -> List[Output]:\n", "issue": "Suggestion to make the recommendation in the `msgvalue-inside-a-loop` detector wiki clearer\n### Describe the desired feature\n\nReference: https://github.com/crytic/slither/wiki/Detector-Documentation#msgvalue-inside-a-loop\r\n\r\nThis is the current recommendation for the `msgvalue-inside-a-loop` detector:\r\n\r\n```solidity\r\nTrack msg.value through a local variable and decrease its amount on every iteration/usage.\r\n```\r\n\r\nThis is a vague recommendation - it does not address the issue head-on, i.e., what mathematical technique the developer should use to remedy the bug.\r\n\r\nMy suggestions:\r\n\r\n1. Recommend dividing by the number of `receivers`\r\n2. Recommend providing an explicit array of amounts alongside the `receivers` array, and check that the sum of each element in that array matches `msg.value`\n", "before_files": [{"content": "from typing import List, Optional\nfrom slither.core.cfg.node import NodeType, Node\nfrom slither.detectors.abstract_detector import (\n AbstractDetector,\n DetectorClassification,\n DETECTOR_INFO,\n)\nfrom slither.slithir.operations import InternalCall\nfrom slither.core.declarations import SolidityVariableComposed, Contract\nfrom slither.utils.output import Output\n\n\ndef detect_msg_value_in_loop(contract: Contract) -> List[Node]:\n results: List[Node] = []\n for f in contract.functions_entry_points:\n if f.is_implemented and f.payable:\n msg_value_in_loop(f.entry_point, 0, [], results)\n return results\n\n\ndef msg_value_in_loop(\n node: Optional[Node], in_loop_counter: int, visited: List[Node], results: List[Node]\n) -> None:\n\n if node is None:\n return\n\n if node in visited:\n return\n # shared visited\n visited.append(node)\n\n if node.type == NodeType.STARTLOOP:\n in_loop_counter += 1\n elif node.type == NodeType.ENDLOOP:\n in_loop_counter -= 1\n\n for ir in node.all_slithir_operations():\n if in_loop_counter > 0 and SolidityVariableComposed(\"msg.value\") in ir.read:\n results.append(ir.node)\n if isinstance(ir, (InternalCall)):\n msg_value_in_loop(ir.function.entry_point, in_loop_counter, visited, results)\n\n for son in node.sons:\n msg_value_in_loop(son, in_loop_counter, visited, results)\n\n\nclass MsgValueInLoop(AbstractDetector):\n \"\"\"\n Detect the use of msg.value inside a loop\n \"\"\"\n\n ARGUMENT = \"msg-value-loop\"\n HELP = \"msg.value inside a loop\"\n IMPACT = DetectorClassification.HIGH\n CONFIDENCE = DetectorClassification.MEDIUM\n\n WIKI = \"https://github.com/crytic/slither/wiki/Detector-Documentation/#msgvalue-inside-a-loop\"\n\n WIKI_TITLE = \"`msg.value` inside a loop\"\n WIKI_DESCRIPTION = \"Detect the use of `msg.value` inside a loop.\"\n\n # region wiki_exploit_scenario\n WIKI_EXPLOIT_SCENARIO = \"\"\"\n```solidity\ncontract MsgValueInLoop{\n\n mapping (address => uint256) balances;\n\n function bad(address[] memory receivers) public payable {\n for (uint256 i=0; i < receivers.length; i++) {\n 
balances[receivers[i]] += msg.value;\n }\n }\n\n}\n```\n\"\"\"\n # endregion wiki_exploit_scenario\n\n WIKI_RECOMMENDATION = \"\"\"\nTrack msg.value through a local variable and decrease its amount on every iteration/usage.\n\"\"\"\n\n def _detect(self) -> List[Output]:\n \"\"\"\"\"\"\n results: List[Output] = []\n for c in self.compilation_unit.contracts_derived:\n values = detect_msg_value_in_loop(c)\n for node in values:\n func = node.function\n\n info: DETECTOR_INFO = [func, \" use msg.value in a loop: \", node, \"\\n\"]\n res = self.generate_result(info)\n results.append(res)\n\n return results\n", "path": "slither/detectors/statements/msg_value_in_loop.py"}], "after_files": [{"content": "from typing import List, Optional\nfrom slither.core.cfg.node import NodeType, Node\nfrom slither.detectors.abstract_detector import (\n AbstractDetector,\n DetectorClassification,\n DETECTOR_INFO,\n)\nfrom slither.slithir.operations import InternalCall\nfrom slither.core.declarations import SolidityVariableComposed, Contract\nfrom slither.utils.output import Output\n\n\ndef detect_msg_value_in_loop(contract: Contract) -> List[Node]:\n results: List[Node] = []\n for f in contract.functions_entry_points:\n if f.is_implemented and f.payable:\n msg_value_in_loop(f.entry_point, 0, [], results)\n return results\n\n\ndef msg_value_in_loop(\n node: Optional[Node], in_loop_counter: int, visited: List[Node], results: List[Node]\n) -> None:\n\n if node is None:\n return\n\n if node in visited:\n return\n # shared visited\n visited.append(node)\n\n if node.type == NodeType.STARTLOOP:\n in_loop_counter += 1\n elif node.type == NodeType.ENDLOOP:\n in_loop_counter -= 1\n\n for ir in node.all_slithir_operations():\n if in_loop_counter > 0 and SolidityVariableComposed(\"msg.value\") in ir.read:\n results.append(ir.node)\n if isinstance(ir, (InternalCall)):\n msg_value_in_loop(ir.function.entry_point, in_loop_counter, visited, results)\n\n for son in node.sons:\n msg_value_in_loop(son, in_loop_counter, visited, results)\n\n\nclass MsgValueInLoop(AbstractDetector):\n \"\"\"\n Detect the use of msg.value inside a loop\n \"\"\"\n\n ARGUMENT = \"msg-value-loop\"\n HELP = \"msg.value inside a loop\"\n IMPACT = DetectorClassification.HIGH\n CONFIDENCE = DetectorClassification.MEDIUM\n\n WIKI = \"https://github.com/crytic/slither/wiki/Detector-Documentation/#msgvalue-inside-a-loop\"\n\n WIKI_TITLE = \"`msg.value` inside a loop\"\n WIKI_DESCRIPTION = \"Detect the use of `msg.value` inside a loop.\"\n\n # region wiki_exploit_scenario\n WIKI_EXPLOIT_SCENARIO = \"\"\"\n```solidity\ncontract MsgValueInLoop{\n\n mapping (address => uint256) balances;\n\n function bad(address[] memory receivers) public payable {\n for (uint256 i=0; i < receivers.length; i++) {\n balances[receivers[i]] += msg.value;\n }\n }\n\n}\n```\n\"\"\"\n # endregion wiki_exploit_scenario\n\n WIKI_RECOMMENDATION = \"\"\"\nProvide an explicit array of amounts alongside the receivers array, and check that the sum of all amounts matches `msg.value`.\n\"\"\"\n\n def _detect(self) -> List[Output]:\n \"\"\"\"\"\"\n results: List[Output] = []\n for c in self.compilation_unit.contracts_derived:\n values = detect_msg_value_in_loop(c)\n for node in values:\n func = node.function\n\n info: DETECTOR_INFO = [func, \" use msg.value in a loop: \", node, \"\\n\"]\n res = self.generate_result(info)\n results.append(res)\n\n return results\n", "path": "slither/detectors/statements/msg_value_in_loop.py"}]} | 1,317 | 146 |
gh_patches_debug_29162 | rasdani/github-patches | git_diff | techmatters__terraso-backend-103 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove /auth/user endpoint
<!--
Use a concise title that describes the problem. Include key words from error messages.
Bad: Problem with newsletter form
Good: Newsletter form missing email address field
Bad: Issue with website
Good: Footer missing from homepage
-->
## Description
Clients in general should use the GraphQL interface to gather and manipulate data; right now this endpoint conflicts with that. It was created to provide the initial authenticated user data, but with the data provided by the JWT token it is no longer needed.
--- END ISSUE ---
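For context, a minimal sketch of what "using the data provided by the JWT token" can look like on the client side — the claim names (`email`, `given_name`, `family_name`) and the use of PyJWT here are illustrative assumptions, not this project's actual token schema:

```python
import jwt  # PyJWT


def user_from_token(access_token, secret):
    # Decode the access token and read the profile fields directly,
    # instead of round-tripping through the removed /auth/user endpoint.
    payload = jwt.decode(access_token, secret, algorithms=["HS256"])
    return {
        "email": payload.get("email"),
        "first_name": payload.get("given_name"),
        "last_name": payload.get("family_name"),
    }
```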
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `terraso_backend/apps/auth/views.py`
Content:
```
1 import json
2
3 from django.conf import settings
4 from django.contrib.auth import get_user_model
5 from django.http import HttpResponse, HttpResponseRedirect, JsonResponse
6 from django.views import View
7
8 from .mixins import AuthenticationRequiredMixin
9 from .providers import AppleProvider, GoogleProvider
10 from .services import AccountService, JWTService
11
12 User = get_user_model()
13
14
15 class AbstractAuthorizeView(View):
16 def get(self, request, *args, **kwargs):
17 return JsonResponse({"request_url": self.provider.login_url()})
18
19 @property
20 def provider(self):
21 return NotImplementedError("AbstractAuthorizeView must be inherited")
22
23
24 class GoogleAuthorizeView(AbstractAuthorizeView):
25 @property
26 def provider(self):
27 return GoogleProvider
28
29
30 class AppleAuthorizeView(AbstractAuthorizeView):
31 @property
32 def provider(self):
33 return AppleProvider
34
35
36 class AbstractCallbackView(View):
37 def get(self, request, *args, **kwargs):
38 self.authorization_code = self.request.GET.get("code")
39 self.error = self.request.GET.get("error")
40
41 return self.process_callback()
42
43 def post(self, request, *args, **kwargs):
44 self.authorization_code = self.request.POST.get("code")
45 self.error = self.request.POST.get("error")
46
47 return self.process_callback()
48
49 def process_callback(self):
50 if self.error:
51 return HttpResponse(f"Error: {self.error}", status=400)
52
53 if not self.authorization_code:
54 return HttpResponse("Error: no authorization code informed", status=400)
55
56 jwt_service = JWTService()
57
58 try:
59 user = self.process_signup()
60 access_token = jwt_service.create_access_token(user)
61 refresh_token = jwt_service.create_refresh_token(user)
62 except Exception as exc:
63 return HttpResponse(f"Error: {exc}", status=400)
64
65 response = HttpResponseRedirect(settings.WEB_CLIENT_URL)
66 response.set_cookie("atoken", access_token, domain=settings.AUTH_COOKIE_DOMAIN)
67 response.set_cookie("rtoken", refresh_token, domain=settings.AUTH_COOKIE_DOMAIN)
68
69 return response
70
71 def process_signup(self):
72 raise NotImplementedError("AbstractCallbackView must be inherited.")
73
74
75 class GoogleCallbackView(AbstractCallbackView):
76 def process_signup(self):
77 return AccountService().sign_up_with_google(self.authorization_code)
78
79
80 class AppleCallbackView(AbstractCallbackView):
81 def process_signup(self):
82 try:
83 apple_user_data = json.loads(self.request.POST.get("user", "{}"))
84 except json.JSONDecodeError:
85 raise Exception("couldn't parse User data from Apple")
86
87 first_name = apple_user_data.get("name", {}).get("firstName", "")
88 last_name = apple_user_data.get("name", {}).get("lastName", "")
89
90 return AccountService().sign_up_with_apple(
91 self.authorization_code, first_name=first_name, last_name=last_name
92 )
93
94
95 class RefreshAccessTokenView(View):
96 def post(self, request, *args, **kwargs):
97 try:
98 request_data = json.loads(request.body)
99 except json.decoder.JSONDecodeError:
100 return JsonResponse({"error": "The request expects a json body"}, status=400)
101
102 try:
103 refresh_token = request_data["refresh_token"]
104 except KeyError:
105 return JsonResponse(
106 {"error": "The request expects a 'refresh_token' parameter"}, status=400
107 )
108
109 jwt_service = JWTService()
110
111 try:
112 refresh_payload = jwt_service.verify_token(refresh_token)
113 except Exception as exc:
114 return JsonResponse({"error": str(exc)}, status=400)
115
116 try:
117 user = User.objects.get(id=refresh_payload["sub"])
118 except User.DoesNotExist:
119 return JsonResponse({"error": "User not found"}, status=400)
120
121 if not user.is_active:
122 return JsonResponse({"error": "User not found"}, status=400)
123
124 access_token = jwt_service.create_access_token(user)
125 refresh_token = jwt_service.create_refresh_token(user)
126
127 return JsonResponse(
128 {
129 "access_token": access_token,
130 "refresh_token": refresh_token,
131 }
132 )
133
134
135 class CheckUserView(AuthenticationRequiredMixin, View):
136 def get(self, request, *args, **kwargs):
137 user = request.user
138 return JsonResponse(
139 {
140 "user": {
141 "email": user.email,
142 "first_name": user.first_name,
143 "last_name": user.last_name,
144 "profile_image": user.profile_image,
145 }
146 }
147 )
148
```
Path: `terraso_backend/apps/auth/urls.py`
Content:
```
1 from django.urls import path
2 from django.views.decorators.csrf import csrf_exempt
3
4 from apps.auth.views import (
5 AppleAuthorizeView,
6 AppleCallbackView,
7 CheckUserView,
8 GoogleAuthorizeView,
9 GoogleCallbackView,
10 RefreshAccessTokenView,
11 )
12
13 app_name = "apps.auth"
14
15 urlpatterns = [
16 path("apple/authorize", csrf_exempt(AppleAuthorizeView.as_view()), name="apple-authorize"),
17 path(
18 "apple/callback",
19 csrf_exempt(AppleCallbackView.as_view()),
20 name="apple-callback",
21 ),
22 path("google/authorize", csrf_exempt(GoogleAuthorizeView.as_view()), name="google-authorize"),
23 path(
24 "google/callback",
25 csrf_exempt(GoogleCallbackView.as_view()),
26 name="google-callback",
27 ),
28 path("tokens", csrf_exempt(RefreshAccessTokenView.as_view()), name="tokens"),
29 path("user", csrf_exempt(CheckUserView.as_view()), name="user"),
30 ]
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/terraso_backend/apps/auth/urls.py b/terraso_backend/apps/auth/urls.py
--- a/terraso_backend/apps/auth/urls.py
+++ b/terraso_backend/apps/auth/urls.py
@@ -4,7 +4,6 @@
from apps.auth.views import (
AppleAuthorizeView,
AppleCallbackView,
- CheckUserView,
GoogleAuthorizeView,
GoogleCallbackView,
RefreshAccessTokenView,
@@ -26,5 +25,4 @@
name="google-callback",
),
path("tokens", csrf_exempt(RefreshAccessTokenView.as_view()), name="tokens"),
- path("user", csrf_exempt(CheckUserView.as_view()), name="user"),
]
diff --git a/terraso_backend/apps/auth/views.py b/terraso_backend/apps/auth/views.py
--- a/terraso_backend/apps/auth/views.py
+++ b/terraso_backend/apps/auth/views.py
@@ -5,7 +5,6 @@
from django.http import HttpResponse, HttpResponseRedirect, JsonResponse
from django.views import View
-from .mixins import AuthenticationRequiredMixin
from .providers import AppleProvider, GoogleProvider
from .services import AccountService, JWTService
@@ -130,18 +129,3 @@
"refresh_token": refresh_token,
}
)
-
-
-class CheckUserView(AuthenticationRequiredMixin, View):
- def get(self, request, *args, **kwargs):
- user = request.user
- return JsonResponse(
- {
- "user": {
- "email": user.email,
- "first_name": user.first_name,
- "last_name": user.last_name,
- "profile_image": user.profile_image,
- }
- }
- )
| {"golden_diff": "diff --git a/terraso_backend/apps/auth/urls.py b/terraso_backend/apps/auth/urls.py\n--- a/terraso_backend/apps/auth/urls.py\n+++ b/terraso_backend/apps/auth/urls.py\n@@ -4,7 +4,6 @@\n from apps.auth.views import (\n AppleAuthorizeView,\n AppleCallbackView,\n- CheckUserView,\n GoogleAuthorizeView,\n GoogleCallbackView,\n RefreshAccessTokenView,\n@@ -26,5 +25,4 @@\n name=\"google-callback\",\n ),\n path(\"tokens\", csrf_exempt(RefreshAccessTokenView.as_view()), name=\"tokens\"),\n- path(\"user\", csrf_exempt(CheckUserView.as_view()), name=\"user\"),\n ]\ndiff --git a/terraso_backend/apps/auth/views.py b/terraso_backend/apps/auth/views.py\n--- a/terraso_backend/apps/auth/views.py\n+++ b/terraso_backend/apps/auth/views.py\n@@ -5,7 +5,6 @@\n from django.http import HttpResponse, HttpResponseRedirect, JsonResponse\n from django.views import View\n \n-from .mixins import AuthenticationRequiredMixin\n from .providers import AppleProvider, GoogleProvider\n from .services import AccountService, JWTService\n \n@@ -130,18 +129,3 @@\n \"refresh_token\": refresh_token,\n }\n )\n-\n-\n-class CheckUserView(AuthenticationRequiredMixin, View):\n- def get(self, request, *args, **kwargs):\n- user = request.user\n- return JsonResponse(\n- {\n- \"user\": {\n- \"email\": user.email,\n- \"first_name\": user.first_name,\n- \"last_name\": user.last_name,\n- \"profile_image\": user.profile_image,\n- }\n- }\n- )\n", "issue": "Remove /auth/user endpoint\n<!--\r\nUse a concise title that describes the problem. Include key words from error messages.\r\nBad: Problem with newsletter form\r\nGood: Newsletter form missing email address field\r\n\r\nBad: Issue with website\r\nGood: Footer missing from homepage\r\n-->\r\n\r\n## Description\r\nClients in general should use the graph QL interface to gather and manipulate data, right now this enpoint is conflicting with that. 
It was created to provide the initial authenticated user data, but using the data provided by the JWT token this is not needed anymore.\n", "before_files": [{"content": "import json\n\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.http import HttpResponse, HttpResponseRedirect, JsonResponse\nfrom django.views import View\n\nfrom .mixins import AuthenticationRequiredMixin\nfrom .providers import AppleProvider, GoogleProvider\nfrom .services import AccountService, JWTService\n\nUser = get_user_model()\n\n\nclass AbstractAuthorizeView(View):\n def get(self, request, *args, **kwargs):\n return JsonResponse({\"request_url\": self.provider.login_url()})\n\n @property\n def provider(self):\n return NotImplementedError(\"AbstractAuthorizeView must be inherited\")\n\n\nclass GoogleAuthorizeView(AbstractAuthorizeView):\n @property\n def provider(self):\n return GoogleProvider\n\n\nclass AppleAuthorizeView(AbstractAuthorizeView):\n @property\n def provider(self):\n return AppleProvider\n\n\nclass AbstractCallbackView(View):\n def get(self, request, *args, **kwargs):\n self.authorization_code = self.request.GET.get(\"code\")\n self.error = self.request.GET.get(\"error\")\n\n return self.process_callback()\n\n def post(self, request, *args, **kwargs):\n self.authorization_code = self.request.POST.get(\"code\")\n self.error = self.request.POST.get(\"error\")\n\n return self.process_callback()\n\n def process_callback(self):\n if self.error:\n return HttpResponse(f\"Error: {self.error}\", status=400)\n\n if not self.authorization_code:\n return HttpResponse(\"Error: no authorization code informed\", status=400)\n\n jwt_service = JWTService()\n\n try:\n user = self.process_signup()\n access_token = jwt_service.create_access_token(user)\n refresh_token = jwt_service.create_refresh_token(user)\n except Exception as exc:\n return HttpResponse(f\"Error: {exc}\", status=400)\n\n response = HttpResponseRedirect(settings.WEB_CLIENT_URL)\n response.set_cookie(\"atoken\", access_token, domain=settings.AUTH_COOKIE_DOMAIN)\n response.set_cookie(\"rtoken\", refresh_token, domain=settings.AUTH_COOKIE_DOMAIN)\n\n return response\n\n def process_signup(self):\n raise NotImplementedError(\"AbstractCallbackView must be inherited.\")\n\n\nclass GoogleCallbackView(AbstractCallbackView):\n def process_signup(self):\n return AccountService().sign_up_with_google(self.authorization_code)\n\n\nclass AppleCallbackView(AbstractCallbackView):\n def process_signup(self):\n try:\n apple_user_data = json.loads(self.request.POST.get(\"user\", \"{}\"))\n except json.JSONDecodeError:\n raise Exception(\"couldn't parse User data from Apple\")\n\n first_name = apple_user_data.get(\"name\", {}).get(\"firstName\", \"\")\n last_name = apple_user_data.get(\"name\", {}).get(\"lastName\", \"\")\n\n return AccountService().sign_up_with_apple(\n self.authorization_code, first_name=first_name, last_name=last_name\n )\n\n\nclass RefreshAccessTokenView(View):\n def post(self, request, *args, **kwargs):\n try:\n request_data = json.loads(request.body)\n except json.decoder.JSONDecodeError:\n return JsonResponse({\"error\": \"The request expects a json body\"}, status=400)\n\n try:\n refresh_token = request_data[\"refresh_token\"]\n except KeyError:\n return JsonResponse(\n {\"error\": \"The request expects a 'refresh_token' parameter\"}, status=400\n )\n\n jwt_service = JWTService()\n\n try:\n refresh_payload = jwt_service.verify_token(refresh_token)\n except Exception as exc:\n return 
JsonResponse({\"error\": str(exc)}, status=400)\n\n try:\n user = User.objects.get(id=refresh_payload[\"sub\"])\n except User.DoesNotExist:\n return JsonResponse({\"error\": \"User not found\"}, status=400)\n\n if not user.is_active:\n return JsonResponse({\"error\": \"User not found\"}, status=400)\n\n access_token = jwt_service.create_access_token(user)\n refresh_token = jwt_service.create_refresh_token(user)\n\n return JsonResponse(\n {\n \"access_token\": access_token,\n \"refresh_token\": refresh_token,\n }\n )\n\n\nclass CheckUserView(AuthenticationRequiredMixin, View):\n def get(self, request, *args, **kwargs):\n user = request.user\n return JsonResponse(\n {\n \"user\": {\n \"email\": user.email,\n \"first_name\": user.first_name,\n \"last_name\": user.last_name,\n \"profile_image\": user.profile_image,\n }\n }\n )\n", "path": "terraso_backend/apps/auth/views.py"}, {"content": "from django.urls import path\nfrom django.views.decorators.csrf import csrf_exempt\n\nfrom apps.auth.views import (\n AppleAuthorizeView,\n AppleCallbackView,\n CheckUserView,\n GoogleAuthorizeView,\n GoogleCallbackView,\n RefreshAccessTokenView,\n)\n\napp_name = \"apps.auth\"\n\nurlpatterns = [\n path(\"apple/authorize\", csrf_exempt(AppleAuthorizeView.as_view()), name=\"apple-authorize\"),\n path(\n \"apple/callback\",\n csrf_exempt(AppleCallbackView.as_view()),\n name=\"apple-callback\",\n ),\n path(\"google/authorize\", csrf_exempt(GoogleAuthorizeView.as_view()), name=\"google-authorize\"),\n path(\n \"google/callback\",\n csrf_exempt(GoogleCallbackView.as_view()),\n name=\"google-callback\",\n ),\n path(\"tokens\", csrf_exempt(RefreshAccessTokenView.as_view()), name=\"tokens\"),\n path(\"user\", csrf_exempt(CheckUserView.as_view()), name=\"user\"),\n]\n", "path": "terraso_backend/apps/auth/urls.py"}], "after_files": [{"content": "import json\n\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.http import HttpResponse, HttpResponseRedirect, JsonResponse\nfrom django.views import View\n\nfrom .providers import AppleProvider, GoogleProvider\nfrom .services import AccountService, JWTService\n\nUser = get_user_model()\n\n\nclass AbstractAuthorizeView(View):\n def get(self, request, *args, **kwargs):\n return JsonResponse({\"request_url\": self.provider.login_url()})\n\n @property\n def provider(self):\n return NotImplementedError(\"AbstractAuthorizeView must be inherited\")\n\n\nclass GoogleAuthorizeView(AbstractAuthorizeView):\n @property\n def provider(self):\n return GoogleProvider\n\n\nclass AppleAuthorizeView(AbstractAuthorizeView):\n @property\n def provider(self):\n return AppleProvider\n\n\nclass AbstractCallbackView(View):\n def get(self, request, *args, **kwargs):\n self.authorization_code = self.request.GET.get(\"code\")\n self.error = self.request.GET.get(\"error\")\n\n return self.process_callback()\n\n def post(self, request, *args, **kwargs):\n self.authorization_code = self.request.POST.get(\"code\")\n self.error = self.request.POST.get(\"error\")\n\n return self.process_callback()\n\n def process_callback(self):\n if self.error:\n return HttpResponse(f\"Error: {self.error}\", status=400)\n\n if not self.authorization_code:\n return HttpResponse(\"Error: no authorization code informed\", status=400)\n\n jwt_service = JWTService()\n\n try:\n user = self.process_signup()\n access_token = jwt_service.create_access_token(user)\n refresh_token = jwt_service.create_refresh_token(user)\n except Exception as exc:\n return HttpResponse(f\"Error: {exc}\", 
status=400)\n\n response = HttpResponseRedirect(settings.WEB_CLIENT_URL)\n response.set_cookie(\"atoken\", access_token, domain=settings.AUTH_COOKIE_DOMAIN)\n response.set_cookie(\"rtoken\", refresh_token, domain=settings.AUTH_COOKIE_DOMAIN)\n\n return response\n\n def process_signup(self):\n raise NotImplementedError(\"AbstractCallbackView must be inherited.\")\n\n\nclass GoogleCallbackView(AbstractCallbackView):\n def process_signup(self):\n return AccountService().sign_up_with_google(self.authorization_code)\n\n\nclass AppleCallbackView(AbstractCallbackView):\n def process_signup(self):\n try:\n apple_user_data = json.loads(self.request.POST.get(\"user\", \"{}\"))\n except json.JSONDecodeError:\n raise Exception(\"couldn't parse User data from Apple\")\n\n first_name = apple_user_data.get(\"name\", {}).get(\"firstName\", \"\")\n last_name = apple_user_data.get(\"name\", {}).get(\"lastName\", \"\")\n\n return AccountService().sign_up_with_apple(\n self.authorization_code, first_name=first_name, last_name=last_name\n )\n\n\nclass RefreshAccessTokenView(View):\n def post(self, request, *args, **kwargs):\n try:\n request_data = json.loads(request.body)\n except json.decoder.JSONDecodeError:\n return JsonResponse({\"error\": \"The request expects a json body\"}, status=400)\n\n try:\n refresh_token = request_data[\"refresh_token\"]\n except KeyError:\n return JsonResponse(\n {\"error\": \"The request expects a 'refresh_token' parameter\"}, status=400\n )\n\n jwt_service = JWTService()\n\n try:\n refresh_payload = jwt_service.verify_token(refresh_token)\n except Exception as exc:\n return JsonResponse({\"error\": str(exc)}, status=400)\n\n try:\n user = User.objects.get(id=refresh_payload[\"sub\"])\n except User.DoesNotExist:\n return JsonResponse({\"error\": \"User not found\"}, status=400)\n\n if not user.is_active:\n return JsonResponse({\"error\": \"User not found\"}, status=400)\n\n access_token = jwt_service.create_access_token(user)\n refresh_token = jwt_service.create_refresh_token(user)\n\n return JsonResponse(\n {\n \"access_token\": access_token,\n \"refresh_token\": refresh_token,\n }\n )\n", "path": "terraso_backend/apps/auth/views.py"}, {"content": "from django.urls import path\nfrom django.views.decorators.csrf import csrf_exempt\n\nfrom apps.auth.views import (\n AppleAuthorizeView,\n AppleCallbackView,\n GoogleAuthorizeView,\n GoogleCallbackView,\n RefreshAccessTokenView,\n)\n\napp_name = \"apps.auth\"\n\nurlpatterns = [\n path(\"apple/authorize\", csrf_exempt(AppleAuthorizeView.as_view()), name=\"apple-authorize\"),\n path(\n \"apple/callback\",\n csrf_exempt(AppleCallbackView.as_view()),\n name=\"apple-callback\",\n ),\n path(\"google/authorize\", csrf_exempt(GoogleAuthorizeView.as_view()), name=\"google-authorize\"),\n path(\n \"google/callback\",\n csrf_exempt(GoogleCallbackView.as_view()),\n name=\"google-callback\",\n ),\n path(\"tokens\", csrf_exempt(RefreshAccessTokenView.as_view()), name=\"tokens\"),\n]\n", "path": "terraso_backend/apps/auth/urls.py"}]} | 1,930 | 381 |
gh_patches_debug_39882 | rasdani/github-patches | git_diff | DDMAL__CantusDB-223 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Promote people to superuser in Django admin
Replace the current `is_staff` checkbox with an `is_superuser` checkbox. This way, the project manager can give superuser access (access to the Django admin interface) to people without touching the command line. 
Project managers can access the Cantus editing functionalities (within the Cantus DB website, not the Admin interface).
Superusers can access the Django admin interface.
If we want someone to have access to both, they need to be in the `project manager` group and have `is_superuser` as `True`.
--- END ISSUE ---
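A minimal sketch of the requested swap, assuming Django's stock `UserAdmin` as a base (the project's real fieldsets appear in the files below and carry more fields):

```python
from django.contrib.auth.admin import UserAdmin as BaseUserAdmin


class UserAdmin(BaseUserAdmin):
    # Exposing is_superuser next to is_staff lets the project manager
    # grant Django-admin access entirely from the admin UI.
    fieldsets = (
        ('Permissions', {'fields': ('is_staff', 'is_superuser', 'groups')}),
    )
```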
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/admin.py`
Content:
```
1 from django.contrib import admin
2 from main_app.models import *
3 # Register your models here.
4 admin.site.register(Chant)
5 admin.site.register(Feast)
6 admin.site.register(Genre)
7 admin.site.register(Indexer)
8 admin.site.register(Notation)
9 admin.site.register(Office)
10 admin.site.register(Provenance)
11 admin.site.register(Segment)
12 admin.site.register(Sequence)
13 admin.site.register(Source)
14
```
Path: `django/cantusdb_project/users/admin.py`
Content:
```
1 from django.contrib import admin
2 from .models import *
3 from django.contrib.auth.admin import UserAdmin as BaseUserAdmin
4 from django.contrib.auth.forms import UserCreationForm, UserChangeForm
5
6 # Register your models here.
7
8 class CustomUserCreationForm(UserCreationForm):
9
10 class Meta:
11 model = User
12 fields = (
13 'email', 'password', 'is_active', 'date_joined', 'last_login',
14 'full_name', 'first_name', 'last_name', 'institution', 'city',
15 'country', 'website', 'is_staff', 'groups', 'sources_user_can_edit',
16 )
17
18
19 class CustomUserChangeForm(UserChangeForm):
20
21 class Meta:
22 model = User
23 fields = (
24 'email', 'password', 'is_active', 'date_joined', 'last_login',
25 'full_name', 'first_name', 'last_name', 'institution', 'city',
26 'country', 'website', 'is_staff', 'groups', 'sources_user_can_edit',
27 )
28
29
30 class UserAdmin(BaseUserAdmin):
31 form = CustomUserChangeForm
32 add_form = CustomUserCreationForm
33 list_display = ('email', 'first_name', 'last_name', 'institution',)
34 list_filter = ('groups',)
35 fieldsets = (
36 ('Account info', {'fields': (('email', 'password'), 'is_active', ('date_joined', 'last_login'))}),
37 ('Personal info', {'fields': ('full_name', ('first_name', 'last_name'), 'institution', ('city', 'country'), 'website',)}),
38 ('Permissions', {'fields': ('is_staff', 'groups', 'sources_user_can_edit',)}),
39 )
40 add_fieldsets = (
41 ('Account info', {'fields': ('email', ('password1', 'password2'),)}),
42 ('Personal info', {'fields': ('full_name', ('first_name', 'last_name'), 'institution', ('city', 'country'), 'website',)}),
43 ('Permissions', {'fields': ('is_staff', 'groups', 'sources_user_can_edit',)}),
44 )
45 search_fields = ('email', 'first_name', 'last_name', 'institution',)
46 ordering = ('email',)
47 filter_horizontal = ('groups', 'sources_user_can_edit',)
48
49 admin.site.register(User, UserAdmin)
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django/cantusdb_project/main_app/admin.py b/django/cantusdb_project/main_app/admin.py
--- a/django/cantusdb_project/main_app/admin.py
+++ b/django/cantusdb_project/main_app/admin.py
@@ -1,6 +1,15 @@
from django.contrib import admin
from main_app.models import *
+from django.contrib.auth import get_user_model
# Register your models here.
+
+class SourcesUserCanEditInline(admin.TabularInline):
+ model = get_user_model().sources_user_can_edit.through
+
+class SourceAdmin(admin.ModelAdmin):
+ inlines = [SourcesUserCanEditInline]
+ filter_horizontal = ('century', 'notation', 'current_editors', 'inventoried_by', 'full_text_entered_by', 'melodies_entered_by', 'proofreaders', 'other_editors')
+
admin.site.register(Chant)
admin.site.register(Feast)
admin.site.register(Genre)
@@ -10,4 +19,4 @@
admin.site.register(Provenance)
admin.site.register(Segment)
admin.site.register(Sequence)
-admin.site.register(Source)
+admin.site.register(Source, SourceAdmin)
diff --git a/django/cantusdb_project/users/admin.py b/django/cantusdb_project/users/admin.py
--- a/django/cantusdb_project/users/admin.py
+++ b/django/cantusdb_project/users/admin.py
@@ -5,42 +5,19 @@
# Register your models here.
-class CustomUserCreationForm(UserCreationForm):
-
- class Meta:
- model = User
- fields = (
- 'email', 'password', 'is_active', 'date_joined', 'last_login',
- 'full_name', 'first_name', 'last_name', 'institution', 'city',
- 'country', 'website', 'is_staff', 'groups', 'sources_user_can_edit',
- )
-
-
-class CustomUserChangeForm(UserChangeForm):
-
- class Meta:
- model = User
- fields = (
- 'email', 'password', 'is_active', 'date_joined', 'last_login',
- 'full_name', 'first_name', 'last_name', 'institution', 'city',
- 'country', 'website', 'is_staff', 'groups', 'sources_user_can_edit',
- )
-
-
class UserAdmin(BaseUserAdmin):
- form = CustomUserChangeForm
- add_form = CustomUserCreationForm
+ readonly_fields = ('date_joined', 'last_login',)
list_display = ('email', 'first_name', 'last_name', 'institution',)
list_filter = ('groups',)
fieldsets = (
('Account info', {'fields': (('email', 'password'), 'is_active', ('date_joined', 'last_login'))}),
('Personal info', {'fields': ('full_name', ('first_name', 'last_name'), 'institution', ('city', 'country'), 'website',)}),
- ('Permissions', {'fields': ('is_staff', 'groups', 'sources_user_can_edit',)}),
+ ('Permissions', {'fields': ('is_staff', 'is_superuser', 'groups', 'sources_user_can_edit',)}),
)
add_fieldsets = (
('Account info', {'fields': ('email', ('password1', 'password2'),)}),
('Personal info', {'fields': ('full_name', ('first_name', 'last_name'), 'institution', ('city', 'country'), 'website',)}),
- ('Permissions', {'fields': ('is_staff', 'groups', 'sources_user_can_edit',)}),
+ ('Permissions', {'fields': ('is_staff', 'is_superuser', 'groups', 'sources_user_can_edit',)}),
)
search_fields = ('email', 'first_name', 'last_name', 'institution',)
ordering = ('email',)
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/admin.py b/django/cantusdb_project/main_app/admin.py\n--- a/django/cantusdb_project/main_app/admin.py\n+++ b/django/cantusdb_project/main_app/admin.py\n@@ -1,6 +1,15 @@\n from django.contrib import admin\n from main_app.models import *\n+from django.contrib.auth import get_user_model\n # Register your models here.\n+\n+class SourcesUserCanEditInline(admin.TabularInline):\n+ model = get_user_model().sources_user_can_edit.through\n+\n+class SourceAdmin(admin.ModelAdmin):\n+ inlines = [SourcesUserCanEditInline]\n+ filter_horizontal = ('century', 'notation', 'current_editors', 'inventoried_by', 'full_text_entered_by', 'melodies_entered_by', 'proofreaders', 'other_editors')\n+\n admin.site.register(Chant)\n admin.site.register(Feast)\n admin.site.register(Genre)\n@@ -10,4 +19,4 @@\n admin.site.register(Provenance)\n admin.site.register(Segment)\n admin.site.register(Sequence)\n-admin.site.register(Source)\n+admin.site.register(Source, SourceAdmin)\ndiff --git a/django/cantusdb_project/users/admin.py b/django/cantusdb_project/users/admin.py\n--- a/django/cantusdb_project/users/admin.py\n+++ b/django/cantusdb_project/users/admin.py\n@@ -5,42 +5,19 @@\n \n # Register your models here.\n \n-class CustomUserCreationForm(UserCreationForm):\n-\n- class Meta:\n- model = User\n- fields = (\n- 'email', 'password', 'is_active', 'date_joined', 'last_login', \n- 'full_name', 'first_name', 'last_name', 'institution', 'city', \n- 'country', 'website', 'is_staff', 'groups', 'sources_user_can_edit',\n- )\n-\n-\n-class CustomUserChangeForm(UserChangeForm):\n-\n- class Meta:\n- model = User\n- fields = (\n- 'email', 'password', 'is_active', 'date_joined', 'last_login', \n- 'full_name', 'first_name', 'last_name', 'institution', 'city', \n- 'country', 'website', 'is_staff', 'groups', 'sources_user_can_edit',\n- )\n-\n-\n class UserAdmin(BaseUserAdmin):\n- form = CustomUserChangeForm\n- add_form = CustomUserCreationForm \n+ readonly_fields = ('date_joined', 'last_login',)\n list_display = ('email', 'first_name', 'last_name', 'institution',)\n list_filter = ('groups',)\n fieldsets = (\n ('Account info', {'fields': (('email', 'password'), 'is_active', ('date_joined', 'last_login'))}),\n ('Personal info', {'fields': ('full_name', ('first_name', 'last_name'), 'institution', ('city', 'country'), 'website',)}),\n- ('Permissions', {'fields': ('is_staff', 'groups', 'sources_user_can_edit',)}),\n+ ('Permissions', {'fields': ('is_staff', 'is_superuser', 'groups', 'sources_user_can_edit',)}),\n )\n add_fieldsets = (\n ('Account info', {'fields': ('email', ('password1', 'password2'),)}),\n ('Personal info', {'fields': ('full_name', ('first_name', 'last_name'), 'institution', ('city', 'country'), 'website',)}),\n- ('Permissions', {'fields': ('is_staff', 'groups', 'sources_user_can_edit',)}),\n+ ('Permissions', {'fields': ('is_staff', 'is_superuser', 'groups', 'sources_user_can_edit',)}),\n )\n search_fields = ('email', 'first_name', 'last_name', 'institution',)\n ordering = ('email',)\n", "issue": "Promote people to superuser in Django admin\nReplace the current `is_staff` checkbox with a `is_superuser` checkbox. This way, the project manager can give superuser access (access to Django admin interface) to people without touching the command line. \r\n\r\nProject managers can access the Cantus editing functionalities (within the Cantus DB website, not the Admin interface). \r\nSuperusers can access the Django admin interface. 
\r\nIf we want someone to have access to both, they need to be in the `project manager` group and have `is_superuser` as `True`.\n", "before_files": [{"content": "from django.contrib import admin\nfrom main_app.models import *\n# Register your models here.\nadmin.site.register(Chant)\nadmin.site.register(Feast)\nadmin.site.register(Genre)\nadmin.site.register(Indexer)\nadmin.site.register(Notation)\nadmin.site.register(Office)\nadmin.site.register(Provenance)\nadmin.site.register(Segment)\nadmin.site.register(Sequence)\nadmin.site.register(Source)\n", "path": "django/cantusdb_project/main_app/admin.py"}, {"content": "from django.contrib import admin\nfrom .models import *\nfrom django.contrib.auth.admin import UserAdmin as BaseUserAdmin\nfrom django.contrib.auth.forms import UserCreationForm, UserChangeForm\n\n# Register your models here.\n\nclass CustomUserCreationForm(UserCreationForm):\n\n class Meta:\n model = User\n fields = (\n 'email', 'password', 'is_active', 'date_joined', 'last_login', \n 'full_name', 'first_name', 'last_name', 'institution', 'city', \n 'country', 'website', 'is_staff', 'groups', 'sources_user_can_edit',\n )\n\n\nclass CustomUserChangeForm(UserChangeForm):\n\n class Meta:\n model = User\n fields = (\n 'email', 'password', 'is_active', 'date_joined', 'last_login', \n 'full_name', 'first_name', 'last_name', 'institution', 'city', \n 'country', 'website', 'is_staff', 'groups', 'sources_user_can_edit',\n )\n\n\nclass UserAdmin(BaseUserAdmin):\n form = CustomUserChangeForm\n add_form = CustomUserCreationForm \n list_display = ('email', 'first_name', 'last_name', 'institution',)\n list_filter = ('groups',)\n fieldsets = (\n ('Account info', {'fields': (('email', 'password'), 'is_active', ('date_joined', 'last_login'))}),\n ('Personal info', {'fields': ('full_name', ('first_name', 'last_name'), 'institution', ('city', 'country'), 'website',)}),\n ('Permissions', {'fields': ('is_staff', 'groups', 'sources_user_can_edit',)}),\n )\n add_fieldsets = (\n ('Account info', {'fields': ('email', ('password1', 'password2'),)}),\n ('Personal info', {'fields': ('full_name', ('first_name', 'last_name'), 'institution', ('city', 'country'), 'website',)}),\n ('Permissions', {'fields': ('is_staff', 'groups', 'sources_user_can_edit',)}),\n )\n search_fields = ('email', 'first_name', 'last_name', 'institution',)\n ordering = ('email',)\n filter_horizontal = ('groups', 'sources_user_can_edit',)\n\nadmin.site.register(User, UserAdmin)\n", "path": "django/cantusdb_project/users/admin.py"}], "after_files": [{"content": "from django.contrib import admin\nfrom main_app.models import *\nfrom django.contrib.auth import get_user_model\n# Register your models here.\n\nclass SourcesUserCanEditInline(admin.TabularInline):\n model = get_user_model().sources_user_can_edit.through\n\nclass SourceAdmin(admin.ModelAdmin):\n inlines = [SourcesUserCanEditInline]\n filter_horizontal = ('century', 'notation', 'current_editors', 'inventoried_by', 'full_text_entered_by', 'melodies_entered_by', 'proofreaders', 'other_editors')\n\nadmin.site.register(Chant)\nadmin.site.register(Feast)\nadmin.site.register(Genre)\nadmin.site.register(Indexer)\nadmin.site.register(Notation)\nadmin.site.register(Office)\nadmin.site.register(Provenance)\nadmin.site.register(Segment)\nadmin.site.register(Sequence)\nadmin.site.register(Source, SourceAdmin)\n", "path": "django/cantusdb_project/main_app/admin.py"}, {"content": "from django.contrib import admin\nfrom .models import *\nfrom django.contrib.auth.admin import UserAdmin as 
BaseUserAdmin\nfrom django.contrib.auth.forms import UserCreationForm, UserChangeForm\n\n# Register your models here.\n\nclass UserAdmin(BaseUserAdmin):\n readonly_fields = ('date_joined', 'last_login',)\n list_display = ('email', 'first_name', 'last_name', 'institution',)\n list_filter = ('groups',)\n fieldsets = (\n ('Account info', {'fields': (('email', 'password'), 'is_active', ('date_joined', 'last_login'))}),\n ('Personal info', {'fields': ('full_name', ('first_name', 'last_name'), 'institution', ('city', 'country'), 'website',)}),\n ('Permissions', {'fields': ('is_staff', 'is_superuser', 'groups', 'sources_user_can_edit',)}),\n )\n add_fieldsets = (\n ('Account info', {'fields': ('email', ('password1', 'password2'),)}),\n ('Personal info', {'fields': ('full_name', ('first_name', 'last_name'), 'institution', ('city', 'country'), 'website',)}),\n ('Permissions', {'fields': ('is_staff', 'is_superuser', 'groups', 'sources_user_can_edit',)}),\n )\n search_fields = ('email', 'first_name', 'last_name', 'institution',)\n ordering = ('email',)\n filter_horizontal = ('groups', 'sources_user_can_edit',)\n\nadmin.site.register(User, UserAdmin)\n", "path": "django/cantusdb_project/users/admin.py"}]} | 1,087 | 842 |
gh_patches_debug_39641 | rasdani/github-patches | git_diff | great-expectations__great_expectations-6275 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use cleaner solution for non-truncating division in python 2
Prefer `from __future__ import division` to `1.*x/y`
--- END ISSUE ---
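A self-contained illustration of the two styles the issue contrasts (runnable on both Python 2 and 3; this snippet is not taken from the repository files below):

```python
from __future__ import division  # future imports must precede other statements

x, y = 3, 2

print(x / y)       # 1.5 on Python 2 and 3 alike, thanks to the import
print(1. * x / y)  # 1.5 as well, but only via the explicit float coercion
print(x // y)      # 1 -- truncating (floor) division stays available
```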
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `great_expectations/rule_based_profiler/rule_state.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING, Dict, List, Optional
4
5 import great_expectations.exceptions as ge_exceptions
6 from great_expectations.rule_based_profiler.domain import Domain
7 from great_expectations.rule_based_profiler.parameter_container import (
8 ParameterContainer,
9 )
10
11 if TYPE_CHECKING:
12 from great_expectations.rule_based_profiler.rule.rule import Rule
13
14
15 class RuleState:
16 """
17 RuleState maintains state information, resulting from executing "Rule.run()" method by combining passed "Batch" data
18 with currently loaded configuration of "Rule" components ("DomainBuilder" object, "ParameterBuilder" objects, and
19 "ExpectationConfigurationBuilder" objects). Using "RuleState" with correponding flags is sufficient for generating
20 outputs for different purposes (in raw and aggregated form) from available "Domain" objects and computed parameters.
21 """
22
23 def __init__(
24 self,
25 rule: Optional[Rule] = None,
26 variables: Optional[ParameterContainer] = None,
27 domains: Optional[List[Domain]] = None,
28 parameters: Optional[Dict[str, ParameterContainer]] = None,
29 ) -> None:
30 """
31 Args:
32 rule: Rule object for which present RuleState object corresponds (needed for various Rule properties).
33 variables: attribute name/value pairs (part of state, relevant for associated Rule).
34 domains: List of Domain objects, which DomainBuilder of associated Rule generated.
35 parameters: Dictionary of ParameterContainer objects corresponding to all Domain objects in memory.
36 """
37 self._rule = rule
38
39 self._variables = variables
40
41 if domains is None:
42 domains = []
43
44 self._domains = domains
45
46 if parameters is None:
47 parameters = {}
48
49 self._parameters = parameters
50
51 self._rule_domain_builder_execution_time = 0.0
52 self._rule_execution_time = 0.0
53
54 @property
55 def rule(self) -> Optional[Rule]:
56 return self._rule
57
58 @rule.setter
59 def rule(self, value: Rule) -> None:
60 self._rule = value
61
62 @property
63 def variables(self) -> Optional[ParameterContainer]:
64 return self._variables
65
66 @variables.setter
67 def variables(self, value: Optional[ParameterContainer]) -> None:
68 self._variables = value
69
70 @property
71 def domains(self) -> List[Domain]:
72 return self._domains
73
74 @domains.setter
75 def domains(self, value: Optional[List[Domain]]) -> None:
76 self._domains = value
77
78 @property
79 def parameters(self) -> Dict[str, ParameterContainer]:
80 return self._parameters
81
82 @parameters.setter
83 def parameters(self, value: Optional[Dict[str, ParameterContainer]]) -> None:
84 self._parameters = value
85
86 @property
87 def rule_domain_builder_execution_time(self) -> float:
88 return self._rule_domain_builder_execution_time
89
90 @rule_domain_builder_execution_time.setter
91 def rule_domain_builder_execution_time(self, value: float) -> None:
92 self._rule_domain_builder_execution_time = value
93
94 @property
95 def rule_execution_time(self) -> float:
96 return self._rule_execution_time
97
98 @rule_execution_time.setter
99 def rule_execution_time(self, value: float) -> None:
100 self._rule_execution_time = value
101
102 def reset(self) -> None:
103 self.reset_domains()
104 self.reset_parameter_containers()
105
106 def reset_domains(self) -> None:
107 self.domains = []
108
109 def reset_parameter_containers(self) -> None:
110 self.parameters = {}
111
112 def add_domain(
113 self,
114 domain: Domain,
115 allow_duplicates: bool = False,
116 ) -> None:
117 domain_cursor: Domain
118 if not allow_duplicates and domain.id in [
119 domain_cursor.id for domain_cursor in self.domains
120 ]:
121 raise ge_exceptions.ProfilerConfigurationError(
122 f"""Error: Domain\n{domain}\nalready exists. In order to add it, either pass "allow_duplicates=True" \
123 or call "RuleState.remove_domain_if_exists()" with Domain having ID equal to "{domain.id}" as argument first.
124 """
125 )
126 self.domains.append(domain)
127
128 def remove_domain_if_exists(self, domain: Domain) -> None:
129 domain_cursor: Domain
130 if domain.id in [domain_cursor.id for domain_cursor in self.domains]:
131 self.domains.remove(domain)
132 self.remove_domain_if_exists(domain=domain)
133
134 def get_domains_as_dict(self) -> Dict[str, Domain]:
135 domain: Domain
136 return {domain.id: domain for domain in self.domains}
137
138 def initialize_parameter_container_for_domain(
139 self,
140 domain: Domain,
141 overwrite: bool = True,
142 ) -> None:
143 if not overwrite and domain.id in self.parameters:
144 raise ge_exceptions.ProfilerConfigurationError(
145 f"""Error: ParameterContainer for Domain\n{domain}\nalready exists. In order to overwrite it, either \
146 pass "overwrite=True" or call "RuleState.remove_parameter_container_from_domain()" with Domain having ID equal to \
147 "{domain.id}" as argument first.
148 """
149 )
150
151 parameter_container = ParameterContainer(parameter_nodes=None)
152 self._parameters[domain.id] = parameter_container
153
154 def remove_parameter_container_from_domain_if_exists(self, domain: Domain) -> None:
155 self.parameters.pop(domain.id, None)
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/great_expectations/rule_based_profiler/rule_state.py b/great_expectations/rule_based_profiler/rule_state.py
--- a/great_expectations/rule_based_profiler/rule_state.py
+++ b/great_expectations/rule_based_profiler/rule_state.py
@@ -23,26 +23,26 @@
def __init__(
self,
rule: Optional[Rule] = None,
- variables: Optional[ParameterContainer] = None,
domains: Optional[List[Domain]] = None,
+ variables: Optional[ParameterContainer] = None,
parameters: Optional[Dict[str, ParameterContainer]] = None,
) -> None:
"""
Args:
rule: Rule object for which present RuleState object corresponds (needed for various Rule properties).
- variables: attribute name/value pairs (part of state, relevant for associated Rule).
domains: List of Domain objects, which DomainBuilder of associated Rule generated.
+ variables: attribute name/value pairs (part of state, relevant for associated Rule).
parameters: Dictionary of ParameterContainer objects corresponding to all Domain objects in memory.
"""
self._rule = rule
- self._variables = variables
-
if domains is None:
domains = []
self._domains = domains
+ self._variables = variables
+
if parameters is None:
parameters = {}
@@ -59,14 +59,6 @@
def rule(self, value: Rule) -> None:
self._rule = value
- @property
- def variables(self) -> Optional[ParameterContainer]:
- return self._variables
-
- @variables.setter
- def variables(self, value: Optional[ParameterContainer]) -> None:
- self._variables = value
-
@property
def domains(self) -> List[Domain]:
return self._domains
@@ -75,6 +67,14 @@
def domains(self, value: Optional[List[Domain]]) -> None:
self._domains = value
+ @property
+ def variables(self) -> Optional[ParameterContainer]:
+ return self._variables
+
+ @variables.setter
+ def variables(self, value: Optional[ParameterContainer]) -> None:
+ self._variables = value
+
@property
def parameters(self) -> Dict[str, ParameterContainer]:
return self._parameters
@@ -123,6 +123,7 @@
or call "RuleState.remove_domain_if_exists()" with Domain having ID equal to "{domain.id}" as argument first.
"""
)
+
self.domains.append(domain)
def remove_domain_if_exists(self, domain: Domain) -> None:
| {"golden_diff": "diff --git a/great_expectations/rule_based_profiler/rule_state.py b/great_expectations/rule_based_profiler/rule_state.py\n--- a/great_expectations/rule_based_profiler/rule_state.py\n+++ b/great_expectations/rule_based_profiler/rule_state.py\n@@ -23,26 +23,26 @@\n def __init__(\n self,\n rule: Optional[Rule] = None,\n- variables: Optional[ParameterContainer] = None,\n domains: Optional[List[Domain]] = None,\n+ variables: Optional[ParameterContainer] = None,\n parameters: Optional[Dict[str, ParameterContainer]] = None,\n ) -> None:\n \"\"\"\n Args:\n rule: Rule object for which present RuleState object corresponds (needed for various Rule properties).\n- variables: attribute name/value pairs (part of state, relevant for associated Rule).\n domains: List of Domain objects, which DomainBuilder of associated Rule generated.\n+ variables: attribute name/value pairs (part of state, relevant for associated Rule).\n parameters: Dictionary of ParameterContainer objects corresponding to all Domain objects in memory.\n \"\"\"\n self._rule = rule\n \n- self._variables = variables\n-\n if domains is None:\n domains = []\n \n self._domains = domains\n \n+ self._variables = variables\n+\n if parameters is None:\n parameters = {}\n \n@@ -59,14 +59,6 @@\n def rule(self, value: Rule) -> None:\n self._rule = value\n \n- @property\n- def variables(self) -> Optional[ParameterContainer]:\n- return self._variables\n-\n- @variables.setter\n- def variables(self, value: Optional[ParameterContainer]) -> None:\n- self._variables = value\n-\n @property\n def domains(self) -> List[Domain]:\n return self._domains\n@@ -75,6 +67,14 @@\n def domains(self, value: Optional[List[Domain]]) -> None:\n self._domains = value\n \n+ @property\n+ def variables(self) -> Optional[ParameterContainer]:\n+ return self._variables\n+\n+ @variables.setter\n+ def variables(self, value: Optional[ParameterContainer]) -> None:\n+ self._variables = value\n+\n @property\n def parameters(self) -> Dict[str, ParameterContainer]:\n return self._parameters\n@@ -123,6 +123,7 @@\n or call \"RuleState.remove_domain_if_exists()\" with Domain having ID equal to \"{domain.id}\" as argument first.\n \"\"\"\n )\n+\n self.domains.append(domain)\n \n def remove_domain_if_exists(self, domain: Domain) -> None:\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Dict, List, Optional\n\nimport great_expectations.exceptions as ge_exceptions\nfrom great_expectations.rule_based_profiler.domain import Domain\nfrom great_expectations.rule_based_profiler.parameter_container import (\n ParameterContainer,\n)\n\nif TYPE_CHECKING:\n from great_expectations.rule_based_profiler.rule.rule import Rule\n\n\nclass RuleState:\n \"\"\"\n RuleState maintains state information, resulting from executing \"Rule.run()\" method by combining passed \"Batch\" data\n with currently loaded configuration of \"Rule\" components (\"DomainBuilder\" object, \"ParameterBuilder\" objects, and\n \"ExpectationConfigurationBuilder\" objects). 
Using \"RuleState\" with correponding flags is sufficient for generating\n outputs for different purposes (in raw and aggregated form) from available \"Domain\" objects and computed parameters.\n \"\"\"\n\n def __init__(\n self,\n rule: Optional[Rule] = None,\n variables: Optional[ParameterContainer] = None,\n domains: Optional[List[Domain]] = None,\n parameters: Optional[Dict[str, ParameterContainer]] = None,\n ) -> None:\n \"\"\"\n Args:\n rule: Rule object for which present RuleState object corresponds (needed for various Rule properties).\n variables: attribute name/value pairs (part of state, relevant for associated Rule).\n domains: List of Domain objects, which DomainBuilder of associated Rule generated.\n parameters: Dictionary of ParameterContainer objects corresponding to all Domain objects in memory.\n \"\"\"\n self._rule = rule\n\n self._variables = variables\n\n if domains is None:\n domains = []\n\n self._domains = domains\n\n if parameters is None:\n parameters = {}\n\n self._parameters = parameters\n\n self._rule_domain_builder_execution_time = 0.0\n self._rule_execution_time = 0.0\n\n @property\n def rule(self) -> Optional[Rule]:\n return self._rule\n\n @rule.setter\n def rule(self, value: Rule) -> None:\n self._rule = value\n\n @property\n def variables(self) -> Optional[ParameterContainer]:\n return self._variables\n\n @variables.setter\n def variables(self, value: Optional[ParameterContainer]) -> None:\n self._variables = value\n\n @property\n def domains(self) -> List[Domain]:\n return self._domains\n\n @domains.setter\n def domains(self, value: Optional[List[Domain]]) -> None:\n self._domains = value\n\n @property\n def parameters(self) -> Dict[str, ParameterContainer]:\n return self._parameters\n\n @parameters.setter\n def parameters(self, value: Optional[Dict[str, ParameterContainer]]) -> None:\n self._parameters = value\n\n @property\n def rule_domain_builder_execution_time(self) -> float:\n return self._rule_domain_builder_execution_time\n\n @rule_domain_builder_execution_time.setter\n def rule_domain_builder_execution_time(self, value: float) -> None:\n self._rule_domain_builder_execution_time = value\n\n @property\n def rule_execution_time(self) -> float:\n return self._rule_execution_time\n\n @rule_execution_time.setter\n def rule_execution_time(self, value: float) -> None:\n self._rule_execution_time = value\n\n def reset(self) -> None:\n self.reset_domains()\n self.reset_parameter_containers()\n\n def reset_domains(self) -> None:\n self.domains = []\n\n def reset_parameter_containers(self) -> None:\n self.parameters = {}\n\n def add_domain(\n self,\n domain: Domain,\n allow_duplicates: bool = False,\n ) -> None:\n domain_cursor: Domain\n if not allow_duplicates and domain.id in [\n domain_cursor.id for domain_cursor in self.domains\n ]:\n raise ge_exceptions.ProfilerConfigurationError(\n f\"\"\"Error: Domain\\n{domain}\\nalready exists. 
In order to add it, either pass \"allow_duplicates=True\" \\\nor call \"RuleState.remove_domain_if_exists()\" with Domain having ID equal to \"{domain.id}\" as argument first.\n\"\"\"\n )\n self.domains.append(domain)\n\n def remove_domain_if_exists(self, domain: Domain) -> None:\n domain_cursor: Domain\n if domain.id in [domain_cursor.id for domain_cursor in self.domains]:\n self.domains.remove(domain)\n self.remove_domain_if_exists(domain=domain)\n\n def get_domains_as_dict(self) -> Dict[str, Domain]:\n domain: Domain\n return {domain.id: domain for domain in self.domains}\n\n def initialize_parameter_container_for_domain(\n self,\n domain: Domain,\n overwrite: bool = True,\n ) -> None:\n if not overwrite and domain.id in self.parameters:\n raise ge_exceptions.ProfilerConfigurationError(\n f\"\"\"Error: ParameterContainer for Domain\\n{domain}\\nalready exists. In order to overwrite it, either \\\npass \"overwrite=True\" or call \"RuleState.remove_parameter_container_from_domain()\" with Domain having ID equal to \\\n\"{domain.id}\" as argument first.\n\"\"\"\n )\n\n parameter_container = ParameterContainer(parameter_nodes=None)\n self._parameters[domain.id] = parameter_container\n\n def remove_parameter_container_from_domain_if_exists(self, domain: Domain) -> None:\n self.parameters.pop(domain.id, None)\n", "path": "great_expectations/rule_based_profiler/rule_state.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Dict, List, Optional\n\nimport great_expectations.exceptions as ge_exceptions\nfrom great_expectations.rule_based_profiler.domain import Domain\nfrom great_expectations.rule_based_profiler.parameter_container import (\n ParameterContainer,\n)\n\nif TYPE_CHECKING:\n from great_expectations.rule_based_profiler.rule.rule import Rule\n\n\nclass RuleState:\n \"\"\"\n RuleState maintains state information, resulting from executing \"Rule.run()\" method by combining passed \"Batch\" data\n with currently loaded configuration of \"Rule\" components (\"DomainBuilder\" object, \"ParameterBuilder\" objects, and\n \"ExpectationConfigurationBuilder\" objects). 
Using \"RuleState\" with correponding flags is sufficient for generating\n outputs for different purposes (in raw and aggregated form) from available \"Domain\" objects and computed parameters.\n \"\"\"\n\n def __init__(\n self,\n rule: Optional[Rule] = None,\n domains: Optional[List[Domain]] = None,\n variables: Optional[ParameterContainer] = None,\n parameters: Optional[Dict[str, ParameterContainer]] = None,\n ) -> None:\n \"\"\"\n Args:\n rule: Rule object for which present RuleState object corresponds (needed for various Rule properties).\n domains: List of Domain objects, which DomainBuilder of associated Rule generated.\n variables: attribute name/value pairs (part of state, relevant for associated Rule).\n parameters: Dictionary of ParameterContainer objects corresponding to all Domain objects in memory.\n \"\"\"\n self._rule = rule\n\n if domains is None:\n domains = []\n\n self._domains = domains\n\n self._variables = variables\n\n if parameters is None:\n parameters = {}\n\n self._parameters = parameters\n\n self._rule_domain_builder_execution_time = 0.0\n self._rule_execution_time = 0.0\n\n @property\n def rule(self) -> Optional[Rule]:\n return self._rule\n\n @rule.setter\n def rule(self, value: Rule) -> None:\n self._rule = value\n\n @property\n def domains(self) -> List[Domain]:\n return self._domains\n\n @domains.setter\n def domains(self, value: Optional[List[Domain]]) -> None:\n self._domains = value\n\n @property\n def variables(self) -> Optional[ParameterContainer]:\n return self._variables\n\n @variables.setter\n def variables(self, value: Optional[ParameterContainer]) -> None:\n self._variables = value\n\n @property\n def parameters(self) -> Dict[str, ParameterContainer]:\n return self._parameters\n\n @parameters.setter\n def parameters(self, value: Optional[Dict[str, ParameterContainer]]) -> None:\n self._parameters = value\n\n @property\n def rule_domain_builder_execution_time(self) -> float:\n return self._rule_domain_builder_execution_time\n\n @rule_domain_builder_execution_time.setter\n def rule_domain_builder_execution_time(self, value: float) -> None:\n self._rule_domain_builder_execution_time = value\n\n @property\n def rule_execution_time(self) -> float:\n return self._rule_execution_time\n\n @rule_execution_time.setter\n def rule_execution_time(self, value: float) -> None:\n self._rule_execution_time = value\n\n def reset(self) -> None:\n self.reset_domains()\n self.reset_parameter_containers()\n\n def reset_domains(self) -> None:\n self.domains = []\n\n def reset_parameter_containers(self) -> None:\n self.parameters = {}\n\n def add_domain(\n self,\n domain: Domain,\n allow_duplicates: bool = False,\n ) -> None:\n domain_cursor: Domain\n if not allow_duplicates and domain.id in [\n domain_cursor.id for domain_cursor in self.domains\n ]:\n raise ge_exceptions.ProfilerConfigurationError(\n f\"\"\"Error: Domain\\n{domain}\\nalready exists. 
In order to add it, either pass \"allow_duplicates=True\" \\\nor call \"RuleState.remove_domain_if_exists()\" with Domain having ID equal to \"{domain.id}\" as argument first.\n\"\"\"\n )\n\n self.domains.append(domain)\n\n def remove_domain_if_exists(self, domain: Domain) -> None:\n domain_cursor: Domain\n if domain.id in [domain_cursor.id for domain_cursor in self.domains]:\n self.domains.remove(domain)\n self.remove_domain_if_exists(domain=domain)\n\n def get_domains_as_dict(self) -> Dict[str, Domain]:\n domain: Domain\n return {domain.id: domain for domain in self.domains}\n\n def initialize_parameter_container_for_domain(\n self,\n domain: Domain,\n overwrite: bool = True,\n ) -> None:\n if not overwrite and domain.id in self.parameters:\n raise ge_exceptions.ProfilerConfigurationError(\n f\"\"\"Error: ParameterContainer for Domain\\n{domain}\\nalready exists. In order to overwrite it, either \\\npass \"overwrite=True\" or call \"RuleState.remove_parameter_container_from_domain()\" with Domain having ID equal to \\\n\"{domain.id}\" as argument first.\n\"\"\"\n )\n\n parameter_container = ParameterContainer(parameter_nodes=None)\n self._parameters[domain.id] = parameter_container\n\n def remove_parameter_container_from_domain_if_exists(self, domain: Domain) -> None:\n self.parameters.pop(domain.id, None)\n", "path": "great_expectations/rule_based_profiler/rule_state.py"}]} | 1,807 | 584 |
gh_patches_debug_13542 | rasdani/github-patches | git_diff | bokeh__bokeh-10032 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Documentation spelling and formatting
There are various typos scattered throughout the documentation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sphinx/source/docs/user_guide/examples/layout_sizing_mode_multiple.py`
Content:
```
1 import numpy as np
2
3 from bokeh.layouts import column, row
4 from bokeh.models import CustomJS, Div, Slider
5 from bokeh.plotting import ColumnDataSource, figure, output_file, show
6
7 x = np.linspace(0, 10, 500)
8 y = np.sin(x)
9
10 source = ColumnDataSource(data=dict(x=x, y=y))
11
12 plot = figure(y_range=(-10, 10), plot_width=400, plot_height=200, background_fill_color="#fafafa", sizing_mode="scale_width")
13
14 plot.line('x', 'y', source=source, line_width=3, line_alpha=0.6)
15
16 amp = Slider(start=0.1, end=10, value=1, step=.1, title="Amplitude", sizing_mode="stretch_both")
17 freq = Slider(start=0.1, end=10, value=1, step=.1, title="Frequency", sizing_mode="stretch_both")
18 phase = Slider(start=0, end=6.4, value=0, step=.1, title="Phase", sizing_mode="stretch_both")
19 offset = Slider(start=-5, end=5, value=0, step=.1, title="Offset", sizing_mode="stretch_both")
20
21 widgets = column(amp, freq, phase, offset, sizing_mode="fixed", height=250, width=150)
22
23 callback = CustomJS(args=dict(source=source, amp=amp, freq=freq, phase=phase, offset=offset),
24 code="""
25 const data = source.data;
26 const A = amp.value;
27 const k = freq.value;
28 const phi = phase.value;
29 const B = offset.value;
30 const x = data['x']
31 const y = data['y']
32 for (var i = 0; i < x.length; i++) {
33 y[i] = B + A*Math.sin(k*x[i]+phi);
34 }
35 source.change.emit();
36 """)
37
38 amp.js_on_change('value', callback)
39 freq.js_on_change('value', callback)
40 phase.js_on_change('value', callback)
41 offset.js_on_change('value', callback)
42
43 heading = Div(sizing_mode="stretch_width", height=80, text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. "
44 "Sed elementum lorem lacus, eget scelerisque diam rutrum ac. Nunc est urna, fringilla nec placerat vitae, venenatis egestas eros. "
45 "Aliquam eleifend orci gravida nulla tempor, sit amet malesuada lacus efficitur. Mauris placerat.")
46
47 layout = column(heading, row(widgets, plot), sizing_mode="stretch_both")
48
49 output_file("slider.html", title="slider.py example")
50
51 show(layout)
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sphinx/source/docs/user_guide/examples/layout_sizing_mode_multiple.py b/sphinx/source/docs/user_guide/examples/layout_sizing_mode_multiple.py
--- a/sphinx/source/docs/user_guide/examples/layout_sizing_mode_multiple.py
+++ b/sphinx/source/docs/user_guide/examples/layout_sizing_mode_multiple.py
@@ -40,9 +40,8 @@
phase.js_on_change('value', callback)
offset.js_on_change('value', callback)
-heading = Div(sizing_mode="stretch_width", height=80, text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. "
-"Sed elementum lorem lacus, eget scelerisque diam rutrum ac. Nunc est urna, fringilla nec placerat vitae, venenatis egestas eros. "
-"Aliquam eleifend orci gravida nulla tempor, sit amet malesuada lacus efficitur. Mauris placerat.")
+heading = Div(sizing_mode="stretch_width", height=80, text="In this wave example, the sliders on the left "
+ "can be used to change the amplitude, frequency, phase and offset of the wave.")
layout = column(heading, row(widgets, plot), sizing_mode="stretch_both")
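Editor's note: a small style observation on the patched example above - the four repeated `js_on_change` hookups in the file could be collapsed with a loop. This is only an illustrative sketch using the objects defined in the example file shown earlier, not part of the recorded patch:

```python
# Sketch: wire all four sliders to the shared CustomJS callback in one pass.
# `amp`, `freq`, `phase`, `offset` and `callback` are the objects created in
# the example file above.
for widget in (amp, freq, phase, offset):
    widget.js_on_change('value', callback)
```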
| {"golden_diff": "diff --git a/sphinx/source/docs/user_guide/examples/layout_sizing_mode_multiple.py b/sphinx/source/docs/user_guide/examples/layout_sizing_mode_multiple.py\n--- a/sphinx/source/docs/user_guide/examples/layout_sizing_mode_multiple.py\n+++ b/sphinx/source/docs/user_guide/examples/layout_sizing_mode_multiple.py\n@@ -40,9 +40,8 @@\n phase.js_on_change('value', callback)\n offset.js_on_change('value', callback)\n \n-heading = Div(sizing_mode=\"stretch_width\", height=80, text=\"Lorem ipsum dolor sit amet, consectetur adipiscing elit. \"\n-\"Sed elementum lorem lacus, eget scelerisque diam rutrum ac. Nunc est urna, fringilla nec placerat vitae, venenatis egestas eros. \"\n-\"Aliquam eleifend orci gravida nulla tempor, sit amet malesuada lacus efficitur. Mauris placerat.\")\n+heading = Div(sizing_mode=\"stretch_width\", height=80, text=\"In this wave example, the sliders on the left \"\n+ \"can be used to change the amplitude, frequency, phase and offset of the wave.\")\n \n layout = column(heading, row(widgets, plot), sizing_mode=\"stretch_both\")\n", "issue": "Documentation spelling and formatting\nThere are various typos scattered throughout the documentation.\n", "before_files": [{"content": "import numpy as np\n\nfrom bokeh.layouts import column, row\nfrom bokeh.models import CustomJS, Div, Slider\nfrom bokeh.plotting import ColumnDataSource, figure, output_file, show\n\nx = np.linspace(0, 10, 500)\ny = np.sin(x)\n\nsource = ColumnDataSource(data=dict(x=x, y=y))\n\nplot = figure(y_range=(-10, 10), plot_width=400, plot_height=200, background_fill_color=\"#fafafa\", sizing_mode=\"scale_width\")\n\nplot.line('x', 'y', source=source, line_width=3, line_alpha=0.6)\n\namp = Slider(start=0.1, end=10, value=1, step=.1, title=\"Amplitude\", sizing_mode=\"stretch_both\")\nfreq = Slider(start=0.1, end=10, value=1, step=.1, title=\"Frequency\", sizing_mode=\"stretch_both\")\nphase = Slider(start=0, end=6.4, value=0, step=.1, title=\"Phase\", sizing_mode=\"stretch_both\")\noffset = Slider(start=-5, end=5, value=0, step=.1, title=\"Offset\", sizing_mode=\"stretch_both\")\n\nwidgets = column(amp, freq, phase, offset, sizing_mode=\"fixed\", height=250, width=150)\n\ncallback = CustomJS(args=dict(source=source, amp=amp, freq=freq, phase=phase, offset=offset),\n code=\"\"\"\n const data = source.data;\n const A = amp.value;\n const k = freq.value;\n const phi = phase.value;\n const B = offset.value;\n const x = data['x']\n const y = data['y']\n for (var i = 0; i < x.length; i++) {\n y[i] = B + A*Math.sin(k*x[i]+phi);\n }\n source.change.emit();\n\"\"\")\n\namp.js_on_change('value', callback)\nfreq.js_on_change('value', callback)\nphase.js_on_change('value', callback)\noffset.js_on_change('value', callback)\n\nheading = Div(sizing_mode=\"stretch_width\", height=80, text=\"Lorem ipsum dolor sit amet, consectetur adipiscing elit. \"\n\"Sed elementum lorem lacus, eget scelerisque diam rutrum ac. Nunc est urna, fringilla nec placerat vitae, venenatis egestas eros. \"\n\"Aliquam eleifend orci gravida nulla tempor, sit amet malesuada lacus efficitur. 
Mauris placerat.\")\n\nlayout = column(heading, row(widgets, plot), sizing_mode=\"stretch_both\")\n\noutput_file(\"slider.html\", title=\"slider.py example\")\n\nshow(layout)\n", "path": "sphinx/source/docs/user_guide/examples/layout_sizing_mode_multiple.py"}], "after_files": [{"content": "import numpy as np\n\nfrom bokeh.layouts import column, row\nfrom bokeh.models import CustomJS, Div, Slider\nfrom bokeh.plotting import ColumnDataSource, figure, output_file, show\n\nx = np.linspace(0, 10, 500)\ny = np.sin(x)\n\nsource = ColumnDataSource(data=dict(x=x, y=y))\n\nplot = figure(y_range=(-10, 10), plot_width=400, plot_height=200, background_fill_color=\"#fafafa\", sizing_mode=\"scale_width\")\n\nplot.line('x', 'y', source=source, line_width=3, line_alpha=0.6)\n\namp = Slider(start=0.1, end=10, value=1, step=.1, title=\"Amplitude\", sizing_mode=\"stretch_both\")\nfreq = Slider(start=0.1, end=10, value=1, step=.1, title=\"Frequency\", sizing_mode=\"stretch_both\")\nphase = Slider(start=0, end=6.4, value=0, step=.1, title=\"Phase\", sizing_mode=\"stretch_both\")\noffset = Slider(start=-5, end=5, value=0, step=.1, title=\"Offset\", sizing_mode=\"stretch_both\")\n\nwidgets = column(amp, freq, phase, offset, sizing_mode=\"fixed\", height=250, width=150)\n\ncallback = CustomJS(args=dict(source=source, amp=amp, freq=freq, phase=phase, offset=offset),\n code=\"\"\"\n const data = source.data;\n const A = amp.value;\n const k = freq.value;\n const phi = phase.value;\n const B = offset.value;\n const x = data['x']\n const y = data['y']\n for (var i = 0; i < x.length; i++) {\n y[i] = B + A*Math.sin(k*x[i]+phi);\n }\n source.change.emit();\n\"\"\")\n\namp.js_on_change('value', callback)\nfreq.js_on_change('value', callback)\nphase.js_on_change('value', callback)\noffset.js_on_change('value', callback)\n\nheading = Div(sizing_mode=\"stretch_width\", height=80, text=\"In this wave example, the sliders on the left \"\n \"can be used to change the amplitude, frequency, phase and offset of the wave.\")\n\nlayout = column(heading, row(widgets, plot), sizing_mode=\"stretch_both\")\n\noutput_file(\"slider.html\", title=\"slider.py example\")\n\nshow(layout)\n", "path": "sphinx/source/docs/user_guide/examples/layout_sizing_mode_multiple.py"}]} | 973 | 259 |
gh_patches_debug_23031 | rasdani/github-patches | git_diff | nilearn__nilearn-4373 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[DOC] Minor discrepancy in comparing means example
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe your proposed suggestion in detail.
The example uses a movie-watching dataset but refers to it as "resting state" in several places.
### List any pages that would be impacted.
The "comparing means of images with `math_img`" example: https://nilearn.github.io/stable/auto_examples/06_manipulating_images/plot_compare_mean_image.html
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/06_manipulating_images/plot_compare_mean_image.py`
Content:
```
1 """
2 Comparing the means of 2 images
3 ===============================
4
5 The goal of this example is to illustrate the use of the function
6 :func:`nilearn.image.math_img` with a list of images as input.
7 We compare the means of 2 resting state 4D images. The mean of the images
8 could have been computed with nilearn :func:`nilearn.image.mean_img` function.
9 """
10
11 # %%
12 # Fetching 2 subject movie watching brain development :term:`fMRI` datasets.
13 from nilearn import datasets
14
15 dataset = datasets.fetch_development_fmri(n_subjects=2)
16
17
18 # %%
19 # Print basic information on the adhd subjects resting state datasets.
20 print(f"Subject 1 resting state dataset at: {dataset.func[0]}")
21 print(f"Subject 2 resting state dataset at: {dataset.func[1]}")
22
23
24 # %%
25 # Comparing the means of the 2 movie watching datasets.
26 from nilearn import image, plotting
27
28 result_img = image.math_img(
29 "np.mean(img1, axis=-1) - np.mean(img2, axis=-1)",
30 img1=dataset.func[0],
31 img2=dataset.func[1],
32 )
33
34 plotting.plot_stat_map(
35 result_img, title="Comparing means of 2 resting state 4D images."
36 )
37 plotting.show()
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/06_manipulating_images/plot_compare_mean_image.py b/examples/06_manipulating_images/plot_compare_mean_image.py
--- a/examples/06_manipulating_images/plot_compare_mean_image.py
+++ b/examples/06_manipulating_images/plot_compare_mean_image.py
@@ -4,7 +4,7 @@
The goal of this example is to illustrate the use of the function
:func:`nilearn.image.math_img` with a list of images as input.
-We compare the means of 2 resting state 4D images. The mean of the images
+We compare the means of 2 movie watching 4D images. The mean of the images
could have been computed with nilearn :func:`nilearn.image.mean_img` function.
"""
@@ -16,9 +16,9 @@
# %%
-# Print basic information on the adhd subjects resting state datasets.
-print(f"Subject 1 resting state dataset at: {dataset.func[0]}")
-print(f"Subject 2 resting state dataset at: {dataset.func[1]}")
+# Print file locations for both subjects.
+print(f"Subject 1 dataset at: {dataset.func[0]}")
+print(f"Subject 2 dataset at: {dataset.func[1]}")
# %%
@@ -32,6 +32,6 @@
)
plotting.plot_stat_map(
- result_img, title="Comparing means of 2 resting state 4D images."
+ result_img, title="Comparing means of 2 movie watching 4D images."
)
plotting.show()
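Editor's note: the docstring kept by the patch points out that the same result is available through `nilearn.image.mean_img`. A minimal sketch of that equivalent route (assuming the `dataset` object fetched in the example; this is an illustration, not part of the recorded patch):

```python
# Sketch: compute each subject's temporal mean first, then subtract.
from nilearn import image

mean_img_1 = image.mean_img(dataset.func[0])  # 4D functional image -> 3D mean
mean_img_2 = image.mean_img(dataset.func[1])
result_img = image.math_img("img1 - img2", img1=mean_img_1, img2=mean_img_2)
```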
| {"golden_diff": "diff --git a/examples/06_manipulating_images/plot_compare_mean_image.py b/examples/06_manipulating_images/plot_compare_mean_image.py\n--- a/examples/06_manipulating_images/plot_compare_mean_image.py\n+++ b/examples/06_manipulating_images/plot_compare_mean_image.py\n@@ -4,7 +4,7 @@\n \n The goal of this example is to illustrate the use of the function\n :func:`nilearn.image.math_img` with a list of images as input.\n-We compare the means of 2 resting state 4D images. The mean of the images\n+We compare the means of 2 movie watching 4D images. The mean of the images\n could have been computed with nilearn :func:`nilearn.image.mean_img` function.\n \"\"\"\n \n@@ -16,9 +16,9 @@\n \n \n # %%\n-# Print basic information on the adhd subjects resting state datasets.\n-print(f\"Subject 1 resting state dataset at: {dataset.func[0]}\")\n-print(f\"Subject 2 resting state dataset at: {dataset.func[1]}\")\n+# Print file locations for both subjects.\n+print(f\"Subject 1 dataset at: {dataset.func[0]}\")\n+print(f\"Subject 2 dataset at: {dataset.func[1]}\")\n \n \n # %%\n@@ -32,6 +32,6 @@\n )\n \n plotting.plot_stat_map(\n- result_img, title=\"Comparing means of 2 resting state 4D images.\"\n+ result_img, title=\"Comparing means of 2 movie watching 4D images.\"\n )\n plotting.show()\n", "issue": "[DOC] Minor discrepancy in comparing means example\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Describe your proposed suggestion in detail.\n\nUsing a movie watching dataset as an example but calling it \"resting state\" in several places\n\n### List any pages that would be impacted.\n\nThe \"comparing means of images with `math_img`\" example: https://nilearn.github.io/stable/auto_examples/06_manipulating_images/plot_compare_mean_image.html\n", "before_files": [{"content": "\"\"\"\nComparing the means of 2 images\n===============================\n\nThe goal of this example is to illustrate the use of the function\n:func:`nilearn.image.math_img` with a list of images as input.\nWe compare the means of 2 resting state 4D images. The mean of the images\ncould have been computed with nilearn :func:`nilearn.image.mean_img` function.\n\"\"\"\n\n# %%\n# Fetching 2 subject movie watching brain development :term:`fMRI` datasets.\nfrom nilearn import datasets\n\ndataset = datasets.fetch_development_fmri(n_subjects=2)\n\n\n# %%\n# Print basic information on the adhd subjects resting state datasets.\nprint(f\"Subject 1 resting state dataset at: {dataset.func[0]}\")\nprint(f\"Subject 2 resting state dataset at: {dataset.func[1]}\")\n\n\n# %%\n# Comparing the means of the 2 movie watching datasets.\nfrom nilearn import image, plotting\n\nresult_img = image.math_img(\n \"np.mean(img1, axis=-1) - np.mean(img2, axis=-1)\",\n img1=dataset.func[0],\n img2=dataset.func[1],\n)\n\nplotting.plot_stat_map(\n result_img, title=\"Comparing means of 2 resting state 4D images.\"\n)\nplotting.show()\n", "path": "examples/06_manipulating_images/plot_compare_mean_image.py"}], "after_files": [{"content": "\"\"\"\nComparing the means of 2 images\n===============================\n\nThe goal of this example is to illustrate the use of the function\n:func:`nilearn.image.math_img` with a list of images as input.\nWe compare the means of 2 movie watching 4D images. 
The mean of the images\ncould have been computed with nilearn :func:`nilearn.image.mean_img` function.\n\"\"\"\n\n# %%\n# Fetching 2 subject movie watching brain development :term:`fMRI` datasets.\nfrom nilearn import datasets\n\ndataset = datasets.fetch_development_fmri(n_subjects=2)\n\n\n# %%\n# Print file locations for both subjects.\nprint(f\"Subject 1 dataset at: {dataset.func[0]}\")\nprint(f\"Subject 2 dataset at: {dataset.func[1]}\")\n\n\n# %%\n# Comparing the means of the 2 movie watching datasets.\nfrom nilearn import image, plotting\n\nresult_img = image.math_img(\n \"np.mean(img1, axis=-1) - np.mean(img2, axis=-1)\",\n img1=dataset.func[0],\n img2=dataset.func[1],\n)\n\nplotting.plot_stat_map(\n result_img, title=\"Comparing means of 2 movie watching 4D images.\"\n)\nplotting.show()\n", "path": "examples/06_manipulating_images/plot_compare_mean_image.py"}]} | 727 | 348 |
gh_patches_debug_5762 | rasdani/github-patches | git_diff | fossasia__open-event-server-4445 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Not allowing attendee POST request
**I'm submitting a ...**
- [x] bug report
**Current behavior:**
Created a new event with a new user account, then created a free ticket for the event. For the create-attendee request, the response is:
```
"errors": [
{
"status": 403,
"source": {
"source": "event_id"
},
"detail": "Access Forbidden",
"title": "Access Forbidden"
}
],
"jsonapi": {
"version": "1.0"
}
}
```
--- END ISSUE ---
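Editor's note: the 403 payload above is produced by the `before_post` hook of `AttendeeListPost` in the file below - buying a ticket does not make the user a co-organizer, so the gate always fires for ordinary attendees. A stripped-down sketch of the failing path (helper names taken from that file; an illustration, not the recorded fix):

```python
# Sketch of the permission gate that rejects ordinary ticket buyers.
def before_post(self, args, kwargs, data):
    require_relationship(['ticket', 'event'], data)
    # A regular attendee is not a co-organizer of the event, so has_access()
    # returns False and the request ends as the 403 shown in the issue.
    if not has_access('is_coorganizer', event_id=data['event']):
        raise ForbiddenException({'source': 'event_id'}, "Access Forbidden")
```

The recorded patch (see the golden diff further down) drops this check and instead requires only an authenticated user via the `jwt_required` decorator.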
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/attendees.py`
Content:
```
1 from flask_jwt import current_identity
2 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
3
4 from app.api.bootstrap import api
5 from app.api.helpers.db import safe_query
6 from app.api.helpers.exceptions import ForbiddenException, UnprocessableEntity
7 from app.api.helpers.permission_manager import has_access
8 from app.api.helpers.permissions import jwt_required
9 from app.api.helpers.query import event_query
10 from app.api.helpers.utilities import require_relationship
11 from app.api.schema.attendees import AttendeeSchema, AttendeeSchemaPublic
12 from app.models import db
13 from app.models.order import Order
14 from app.models.ticket import Ticket
15 from app.models.ticket_holder import TicketHolder
16 from app.models.user import User
17
18
19 class AttendeeListPost(ResourceList):
20 """
21 List and create Attendees through direct URL
22 """
23
24 def before_post(self, args, kwargs, data):
25 """
26 Before post method to check for required relationship and proper permissions
27 :param args:
28 :param kwargs:
29 :param data:
30 :return:
31 """
32 require_relationship(['ticket', 'event'], data)
33 if not has_access('is_coorganizer', event_id=data['event']):
34 raise ForbiddenException({'source': 'event_id'}, "Access Forbidden")
35
36 methods = ['POST']
37 schema = AttendeeSchema
38 data_layer = {'session': db.session,
39 'model': TicketHolder}
40
41
42 class AttendeeList(ResourceList):
43 """
44 List Attendees
45 """
46 def query(self, view_kwargs):
47 """
48 query method for Attendees List
49 :param view_kwargs:
50 :return:
51 """
52 query_ = self.session.query(TicketHolder)
53
54 if view_kwargs.get('order_identifier'):
55 order = safe_query(self, Order, 'identifier', view_kwargs['order_identifier'], 'order_identifier')
56 if not has_access('is_registrar', event_id=order.event_id) or not has_access('is_user_itself',
57 id=order.user_id):
58 raise ForbiddenException({'source': ''}, 'Access Forbidden')
59 query_ = query_.join(Order).filter(Order.id == order.id)
60
61 if view_kwargs.get('ticket_id'):
62 ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')
63 if not has_access('is_registrar', event_id=ticket.event_id):
64 raise ForbiddenException({'source': ''}, 'Access Forbidden')
65 query_ = query_.join(Ticket).filter(Ticket.id == ticket.id)
66
67 if view_kwargs.get('user_id'):
68 user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')
69 if not has_access('is_user_itself', id=user.id):
70 raise ForbiddenException({'source': ''}, 'Access Forbidden')
71 query_ = query_.join(User, User.email == TicketHolder.email).filter(User.id == user.id)
72
73 query_ = event_query(self, query_, view_kwargs, permission='is_registrar')
74 return query_
75
76 view_kwargs = True
77 methods = ['GET', ]
78 schema = AttendeeSchema
79 data_layer = {'session': db.session,
80 'model': TicketHolder,
81 'methods': {
82 'query': query
83 }}
84
85
86 class AttendeeDetail(ResourceDetail):
87 """
88 Attendee detail by id
89 """
90 def before_get_object(self, view_kwargs):
91 """
92 before get object method for attendee detail
93 :param view_kwargs:
94 :return:
95 """
96 attendee = safe_query(self, TicketHolder, 'id', view_kwargs['id'], 'attendee_id')
97 if not has_access('is_registrar_or_user_itself', user_id=current_identity.id, event_id=attendee.event_id):
98 raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')
99
100 def before_delete_object(self, obj, kwargs):
101 """
102 before delete object method for attendee detail
103 :param obj:
104 :param kwargs:
105 :return:
106 """
107 if not has_access('is_registrar', event_id=obj.event_id):
108 raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')
109
110 def before_update_object(self, obj, data, kwargs):
111 """
112 before update object method for attendee detail
113 :param obj:
114 :param data:
115 :param kwargs:
116 :return:
117 """
118 if not has_access('is_registrar', event_id=obj.event_id):
119 raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')
120
121 if 'is_checked_in' in data:
122 if data['is_checked_in'] and 'checkin_times' not in data:
123 raise UnprocessableEntity({'pointer': '/data/attributes/checkin_times'},
124 "Check in time missing while trying to check in attendee")
125
126 if obj.checkin_times and data['checkin_times'] not in obj.checkin_times.split(","):
127 data['checkin_times'] = '{},{}'.format(obj.checkin_times, data['checkin_times'])
128
129 decorators = (jwt_required,)
130 schema = AttendeeSchema
131 data_layer = {'session': db.session,
132 'model': TicketHolder,
133 'methods': {
134 'before_get_object': before_get_object,
135 'before_update_object': before_update_object,
136 'before_delete_object': before_delete_object
137 }}
138
139
140 class AttendeeRelationshipRequired(ResourceRelationship):
141 """
142 Attendee Relationship (Required)
143 """
144 decorators = (jwt_required,)
145 methods = ['GET', 'PATCH']
146 schema = AttendeeSchema
147 data_layer = {'session': db.session,
148 'model': TicketHolder}
149
150
151 class AttendeeRelationshipOptional(ResourceRelationship):
152 """
153 Attendee Relationship(Optional)
154 """
155 decorators = (api.has_permission('is_user_itself', fetch="user_id", fetch_as="id", model=TicketHolder),)
156 schema = AttendeeSchema
157 data_layer = {'session': db.session,
158 'model': TicketHolder}
159
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/api/attendees.py b/app/api/attendees.py
--- a/app/api/attendees.py
+++ b/app/api/attendees.py
@@ -30,9 +30,8 @@
:return:
"""
require_relationship(['ticket', 'event'], data)
- if not has_access('is_coorganizer', event_id=data['event']):
- raise ForbiddenException({'source': 'event_id'}, "Access Forbidden")
+ decorators = (jwt_required,)
methods = ['POST']
schema = AttendeeSchema
data_layer = {'session': db.session,
| {"golden_diff": "diff --git a/app/api/attendees.py b/app/api/attendees.py\n--- a/app/api/attendees.py\n+++ b/app/api/attendees.py\n@@ -30,9 +30,8 @@\n :return:\n \"\"\"\n require_relationship(['ticket', 'event'], data)\n- if not has_access('is_coorganizer', event_id=data['event']):\n- raise ForbiddenException({'source': 'event_id'}, \"Access Forbidden\")\n \n+ decorators = (jwt_required,)\n methods = ['POST']\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n", "issue": "Not allowing attendee POST request\n**I'm submitting a ...**\r\n- [x] bug report\r\n\r\n**Current behavior:**\r\nCreated a new event with new user account. Created a free ticket for the event. for create attendee request the response is:\r\n```\r\n\"errors\": [\r\n {\r\n \"status\": 403,\r\n \"source\": {\r\n \"source\": \"event_id\"\r\n },\r\n \"detail\": \"Access Forbidden\",\r\n \"title\": \"Access Forbidden\"\r\n }\r\n ],\r\n \"jsonapi\": {\r\n \"version\": \"1.0\"\r\n }\r\n}\r\n```\n", "before_files": [{"content": "from flask_jwt import current_identity\nfrom flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.exceptions import ForbiddenException, UnprocessableEntity\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.helpers.query import event_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.attendees import AttendeeSchema, AttendeeSchemaPublic\nfrom app.models import db\nfrom app.models.order import Order\nfrom app.models.ticket import Ticket\nfrom app.models.ticket_holder import TicketHolder\nfrom app.models.user import User\n\n\nclass AttendeeListPost(ResourceList):\n \"\"\"\n List and create Attendees through direct URL\n \"\"\"\n\n def before_post(self, args, kwargs, data):\n \"\"\"\n Before post method to check for required relationship and proper permissions\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['ticket', 'event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ForbiddenException({'source': 'event_id'}, \"Access Forbidden\")\n\n methods = ['POST']\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder}\n\n\nclass AttendeeList(ResourceList):\n \"\"\"\n List Attendees\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n query method for Attendees List\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(TicketHolder)\n\n if view_kwargs.get('order_identifier'):\n order = safe_query(self, Order, 'identifier', view_kwargs['order_identifier'], 'order_identifier')\n if not has_access('is_registrar', event_id=order.event_id) or not has_access('is_user_itself',\n id=order.user_id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(Order).filter(Order.id == order.id)\n\n if view_kwargs.get('ticket_id'):\n ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')\n if not has_access('is_registrar', event_id=ticket.event_id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(Ticket).filter(Ticket.id == ticket.id)\n\n if view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n if not has_access('is_user_itself', id=user.id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = 
query_.join(User, User.email == TicketHolder.email).filter(User.id == user.id)\n\n query_ = event_query(self, query_, view_kwargs, permission='is_registrar')\n return query_\n\n view_kwargs = True\n methods = ['GET', ]\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder,\n 'methods': {\n 'query': query\n }}\n\n\nclass AttendeeDetail(ResourceDetail):\n \"\"\"\n Attendee detail by id\n \"\"\"\n def before_get_object(self, view_kwargs):\n \"\"\"\n before get object method for attendee detail\n :param view_kwargs:\n :return:\n \"\"\"\n attendee = safe_query(self, TicketHolder, 'id', view_kwargs['id'], 'attendee_id')\n if not has_access('is_registrar_or_user_itself', user_id=current_identity.id, event_id=attendee.event_id):\n raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')\n\n def before_delete_object(self, obj, kwargs):\n \"\"\"\n before delete object method for attendee detail\n :param obj:\n :param kwargs:\n :return:\n \"\"\"\n if not has_access('is_registrar', event_id=obj.event_id):\n raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')\n\n def before_update_object(self, obj, data, kwargs):\n \"\"\"\n before update object method for attendee detail\n :param obj:\n :param data:\n :param kwargs:\n :return:\n \"\"\"\n if not has_access('is_registrar', event_id=obj.event_id):\n raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')\n\n if 'is_checked_in' in data:\n if data['is_checked_in'] and 'checkin_times' not in data:\n raise UnprocessableEntity({'pointer': '/data/attributes/checkin_times'},\n \"Check in time missing while trying to check in attendee\")\n\n if obj.checkin_times and data['checkin_times'] not in obj.checkin_times.split(\",\"):\n data['checkin_times'] = '{},{}'.format(obj.checkin_times, data['checkin_times'])\n\n decorators = (jwt_required,)\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder,\n 'methods': {\n 'before_get_object': before_get_object,\n 'before_update_object': before_update_object,\n 'before_delete_object': before_delete_object\n }}\n\n\nclass AttendeeRelationshipRequired(ResourceRelationship):\n \"\"\"\n Attendee Relationship (Required)\n \"\"\"\n decorators = (jwt_required,)\n methods = ['GET', 'PATCH']\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder}\n\n\nclass AttendeeRelationshipOptional(ResourceRelationship):\n \"\"\"\n Attendee Relationship(Optional)\n \"\"\"\n decorators = (api.has_permission('is_user_itself', fetch=\"user_id\", fetch_as=\"id\", model=TicketHolder),)\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder}\n", "path": "app/api/attendees.py"}], "after_files": [{"content": "from flask_jwt import current_identity\nfrom flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.exceptions import ForbiddenException, UnprocessableEntity\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.helpers.query import event_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.attendees import AttendeeSchema, AttendeeSchemaPublic\nfrom app.models import db\nfrom app.models.order import Order\nfrom app.models.ticket import Ticket\nfrom app.models.ticket_holder import 
TicketHolder\nfrom app.models.user import User\n\n\nclass AttendeeListPost(ResourceList):\n \"\"\"\n List and create Attendees through direct URL\n \"\"\"\n\n def before_post(self, args, kwargs, data):\n \"\"\"\n Before post method to check for required relationship and proper permissions\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['ticket', 'event'], data)\n\n decorators = (jwt_required,)\n methods = ['POST']\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder}\n\n\nclass AttendeeList(ResourceList):\n \"\"\"\n List Attendees\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n query method for Attendees List\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(TicketHolder)\n\n if view_kwargs.get('order_identifier'):\n order = safe_query(self, Order, 'identifier', view_kwargs['order_identifier'], 'order_identifier')\n if not has_access('is_registrar', event_id=order.event_id) or not has_access('is_user_itself',\n id=order.user_id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(Order).filter(Order.id == order.id)\n\n if view_kwargs.get('ticket_id'):\n ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')\n if not has_access('is_registrar', event_id=ticket.event_id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(Ticket).filter(Ticket.id == ticket.id)\n\n if view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n if not has_access('is_user_itself', id=user.id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(User, User.email == TicketHolder.email).filter(User.id == user.id)\n\n query_ = event_query(self, query_, view_kwargs, permission='is_registrar')\n return query_\n\n view_kwargs = True\n methods = ['GET', ]\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder,\n 'methods': {\n 'query': query\n }}\n\n\nclass AttendeeDetail(ResourceDetail):\n \"\"\"\n Attendee detail by id\n \"\"\"\n def before_get_object(self, view_kwargs):\n \"\"\"\n before get object method for attendee detail\n :param view_kwargs:\n :return:\n \"\"\"\n attendee = safe_query(self, TicketHolder, 'id', view_kwargs['id'], 'attendee_id')\n if not has_access('is_registrar_or_user_itself', user_id=current_identity.id, event_id=attendee.event_id):\n raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')\n\n def before_delete_object(self, obj, kwargs):\n \"\"\"\n before delete object method for attendee detail\n :param obj:\n :param kwargs:\n :return:\n \"\"\"\n if not has_access('is_registrar', event_id=obj.event_id):\n raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')\n\n def before_update_object(self, obj, data, kwargs):\n \"\"\"\n before update object method for attendee detail\n :param obj:\n :param data:\n :param kwargs:\n :return:\n \"\"\"\n if not has_access('is_registrar', event_id=obj.event_id):\n raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')\n\n if 'is_checked_in' in data:\n if data['is_checked_in'] and 'checkin_times' not in data:\n raise UnprocessableEntity({'pointer': '/data/attributes/checkin_times'},\n \"Check in time missing while trying to check in attendee\")\n\n if obj.checkin_times and data['checkin_times'] not in obj.checkin_times.split(\",\"):\n data['checkin_times'] = 
'{},{}'.format(obj.checkin_times, data['checkin_times'])\n\n decorators = (jwt_required,)\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder,\n 'methods': {\n 'before_get_object': before_get_object,\n 'before_update_object': before_update_object,\n 'before_delete_object': before_delete_object\n }}\n\n\nclass AttendeeRelationshipRequired(ResourceRelationship):\n \"\"\"\n Attendee Relationship (Required)\n \"\"\"\n decorators = (jwt_required,)\n methods = ['GET', 'PATCH']\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder}\n\n\nclass AttendeeRelationshipOptional(ResourceRelationship):\n \"\"\"\n Attendee Relationship(Optional)\n \"\"\"\n decorators = (api.has_permission('is_user_itself', fetch=\"user_id\", fetch_as=\"id\", model=TicketHolder),)\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder}\n", "path": "app/api/attendees.py"}]} | 2,017 | 131 |
gh_patches_debug_2739 | rasdani/github-patches | git_diff | translate__translate-3435 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
multistring needs a __hash__ method
In old ttk you could do something like
``` python
foo = multistring("foo")
foodict = {foo: "bar"}
assert 'foo' in foodict
```
It seems this no longer works - I'm not sure why, but a `__hash__` method that returns `hash(str(self))` should fix the problem, I believe.
@claudep @julen any thoughts on this?
--- END ISSUE ---
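Editor's note: the dict lookup fails because Python requires that objects comparing equal also hash equally. `multistring.__eq__` (via `__cmp__` in the file below) compares against the primary string, while the old `__hash__` hashed the join of all alternate strings, so the two disagree as soon as `.strings` holds more than one entry. A minimal sketch of the invariant (my own illustration, not from the repository):

```python
# a == b must imply hash(a) == hash(b) for dict/set membership to work.
foo = multistring(["foo", "foos"])   # plural form stored in .strings
assert foo == "foo"                  # __cmp__ compares the primary string
# old: hash(''.join(foo.strings)) == hash("foofoos") != hash("foo") -> lookup misses
# fixed: hash(str(foo)) == hash("foo") -> {'foo': ...} lookups behave again
```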
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `translate/misc/multistring.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright 2006 Zuza Software Foundation
4 #
5 # This file is part of translate.
6 #
7 # translate is free software; you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation; either version 2 of the License, or
10 # (at your option) any later version.
11 #
12 # translate is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with this program; if not, see <http://www.gnu.org/licenses/>.
19
20 """Supports a hybrid Unicode string that can also have a list of alternate
21 strings in the strings attribute
22 """
23
24 import warnings
25
26 import six
27
28 from .deprecation import RemovedInTTK2Warning
29
30
31 def _create_text_type(newtype, string, encoding):
32 """Helper to construct a text type out of characters or bytes. Required to
33 temporarily preserve backwards compatibility. Must be removed in TTK2.
34 """
35 if isinstance(string, six.text_type):
36 return six.text_type.__new__(newtype, string)
37
38 warnings.warn(
39 'Passing non-ASCII bytes as well as the `encoding` argument to '
40 '`multistring` is deprecated. Always pass unicode characters instead.',
41 RemovedInTTK2Warning, stacklevel=2,
42 )
43 return six.text_type.__new__(newtype, string or six.binary_type(), encoding)
44
45
46 class multistring(six.text_type):
47
48 def __new__(newtype, string=u"", *args, **kwargs):
49 encoding = kwargs.pop('encoding', 'utf-8')
50 if isinstance(string, list):
51 if not string:
52 raise ValueError("multistring must contain at least one string")
53 newstring = _create_text_type(newtype, string[0], encoding)
54 newstring.strings = [newstring] + [multistring.__new__(newtype, altstring) for altstring in string[1:]]
55 else:
56 newstring = _create_text_type(newtype, string, encoding)
57 newstring.strings = [newstring]
58 return newstring
59
60 def __init__(self, *args, **kwargs):
61 super(multistring, self).__init__()
62 if not hasattr(self, "strings"):
63 self.strings = []
64
65 def __cmp__(self, otherstring):
66 def cmp_compat(s1, s2):
67 # Python 3 compatible cmp() equivalent
68 return (s1 > s2) - (s1 < s2)
69 if isinstance(otherstring, multistring):
70 parentcompare = cmp_compat(six.text_type(self), otherstring)
71 if parentcompare:
72 return parentcompare
73 else:
74 return cmp_compat(self.strings[1:], otherstring.strings[1:])
75 elif isinstance(otherstring, six.text_type):
76 return cmp_compat(six.text_type(self), otherstring)
77 elif isinstance(otherstring, bytes):
78 return cmp_compat(self.encode('utf-8'), otherstring)
79 elif isinstance(otherstring, list) and otherstring:
80 return cmp_compat(self, multistring(otherstring))
81 else:
82 return cmp_compat(str(type(self)), str(type(otherstring)))
83
84 def __hash__(self):
85 return hash(''.join(self.strings))
86
87 def __ne__(self, otherstring):
88 return self.__cmp__(otherstring) != 0
89
90 def __eq__(self, otherstring):
91 return self.__cmp__(otherstring) == 0
92
93 def __repr__(self):
94 _repr = u"multistring(%r)" % (
95 [six.text_type(item) for item in self.strings]
96 )
97 return _repr.encode('utf-8') if six.PY2 else _repr
98
99 def __str__(self):
100 if six.PY2:
101 return self.encode('utf-8')
102 return super(multistring, self).__str__()
103
104 def replace(self, old, new, count=None):
105 if count is None:
106 newstr = multistring(super(multistring, self).replace(old, new))
107 else:
108 newstr = multistring(super(multistring, self).replace(old, new, count))
109 for s in self.strings[1:]:
110 if count is None:
111 newstr.strings.append(s.replace(old, new))
112 else:
113 newstr.strings.append(s.replace(old, new, count))
114 return newstr
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/translate/misc/multistring.py b/translate/misc/multistring.py
--- a/translate/misc/multistring.py
+++ b/translate/misc/multistring.py
@@ -82,7 +82,7 @@
return cmp_compat(str(type(self)), str(type(otherstring)))
def __hash__(self):
- return hash(''.join(self.strings))
+ return hash(str(self))
def __ne__(self, otherstring):
return self.__cmp__(otherstring) != 0
| {"golden_diff": "diff --git a/translate/misc/multistring.py b/translate/misc/multistring.py\n--- a/translate/misc/multistring.py\n+++ b/translate/misc/multistring.py\n@@ -82,7 +82,7 @@\n return cmp_compat(str(type(self)), str(type(otherstring)))\n \n def __hash__(self):\n- return hash(''.join(self.strings))\n+ return hash(str(self))\n \n def __ne__(self, otherstring):\n return self.__cmp__(otherstring) != 0\n", "issue": "multistring needs a __hash__ method\nIn old ttk you could do something like\n\n``` python\nfoo = multistring(\"foo\")\nfoodict = {foo: \"bar\"}\nassert 'foo' in foodict\n```\n\nIt seems this no longer works - not sure why, but a `__hash__` method that returns `hash(str(self))` should fix the problem i believe\n\n@claudep @julen any thoughts on this?\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2006 Zuza Software Foundation\n#\n# This file is part of translate.\n#\n# translate is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# translate is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Supports a hybrid Unicode string that can also have a list of alternate\nstrings in the strings attribute\n\"\"\"\n\nimport warnings\n\nimport six\n\nfrom .deprecation import RemovedInTTK2Warning\n\n\ndef _create_text_type(newtype, string, encoding):\n \"\"\"Helper to construct a text type out of characters or bytes. Required to\n temporarily preserve backwards compatibility. Must be removed in TTK2.\n \"\"\"\n if isinstance(string, six.text_type):\n return six.text_type.__new__(newtype, string)\n\n warnings.warn(\n 'Passing non-ASCII bytes as well as the `encoding` argument to '\n '`multistring` is deprecated. 
Always pass unicode characters instead.',\n RemovedInTTK2Warning, stacklevel=2,\n )\n return six.text_type.__new__(newtype, string or six.binary_type(), encoding)\n\n\nclass multistring(six.text_type):\n\n def __new__(newtype, string=u\"\", *args, **kwargs):\n encoding = kwargs.pop('encoding', 'utf-8')\n if isinstance(string, list):\n if not string:\n raise ValueError(\"multistring must contain at least one string\")\n newstring = _create_text_type(newtype, string[0], encoding)\n newstring.strings = [newstring] + [multistring.__new__(newtype, altstring) for altstring in string[1:]]\n else:\n newstring = _create_text_type(newtype, string, encoding)\n newstring.strings = [newstring]\n return newstring\n\n def __init__(self, *args, **kwargs):\n super(multistring, self).__init__()\n if not hasattr(self, \"strings\"):\n self.strings = []\n\n def __cmp__(self, otherstring):\n def cmp_compat(s1, s2):\n # Python 3 compatible cmp() equivalent\n return (s1 > s2) - (s1 < s2)\n if isinstance(otherstring, multistring):\n parentcompare = cmp_compat(six.text_type(self), otherstring)\n if parentcompare:\n return parentcompare\n else:\n return cmp_compat(self.strings[1:], otherstring.strings[1:])\n elif isinstance(otherstring, six.text_type):\n return cmp_compat(six.text_type(self), otherstring)\n elif isinstance(otherstring, bytes):\n return cmp_compat(self.encode('utf-8'), otherstring)\n elif isinstance(otherstring, list) and otherstring:\n return cmp_compat(self, multistring(otherstring))\n else:\n return cmp_compat(str(type(self)), str(type(otherstring)))\n\n def __hash__(self):\n return hash(''.join(self.strings))\n\n def __ne__(self, otherstring):\n return self.__cmp__(otherstring) != 0\n\n def __eq__(self, otherstring):\n return self.__cmp__(otherstring) == 0\n\n def __repr__(self):\n _repr = u\"multistring(%r)\" % (\n [six.text_type(item) for item in self.strings]\n )\n return _repr.encode('utf-8') if six.PY2 else _repr\n\n def __str__(self):\n if six.PY2:\n return self.encode('utf-8')\n return super(multistring, self).__str__()\n\n def replace(self, old, new, count=None):\n if count is None:\n newstr = multistring(super(multistring, self).replace(old, new))\n else:\n newstr = multistring(super(multistring, self).replace(old, new, count))\n for s in self.strings[1:]:\n if count is None:\n newstr.strings.append(s.replace(old, new))\n else:\n newstr.strings.append(s.replace(old, new, count))\n return newstr\n", "path": "translate/misc/multistring.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2006 Zuza Software Foundation\n#\n# This file is part of translate.\n#\n# translate is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# translate is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Supports a hybrid Unicode string that can also have a list of alternate\nstrings in the strings attribute\n\"\"\"\n\nimport warnings\n\nimport six\n\nfrom .deprecation import RemovedInTTK2Warning\n\n\ndef _create_text_type(newtype, string, encoding):\n \"\"\"Helper to construct a text type out of characters or bytes. Required to\n temporarily preserve backwards compatibility. Must be removed in TTK2.\n \"\"\"\n if isinstance(string, six.text_type):\n return six.text_type.__new__(newtype, string)\n\n warnings.warn(\n 'Passing non-ASCII bytes as well as the `encoding` argument to '\n '`multistring` is deprecated. Always pass unicode characters instead.',\n RemovedInTTK2Warning, stacklevel=2,\n )\n return six.text_type.__new__(newtype, string or six.binary_type(), encoding)\n\n\nclass multistring(six.text_type):\n\n def __new__(newtype, string=u\"\", *args, **kwargs):\n encoding = kwargs.pop('encoding', 'utf-8')\n if isinstance(string, list):\n if not string:\n raise ValueError(\"multistring must contain at least one string\")\n newstring = _create_text_type(newtype, string[0], encoding)\n newstring.strings = [newstring] + [multistring.__new__(newtype, altstring) for altstring in string[1:]]\n else:\n newstring = _create_text_type(newtype, string, encoding)\n newstring.strings = [newstring]\n return newstring\n\n def __init__(self, *args, **kwargs):\n super(multistring, self).__init__()\n if not hasattr(self, \"strings\"):\n self.strings = []\n\n def __cmp__(self, otherstring):\n def cmp_compat(s1, s2):\n # Python 3 compatible cmp() equivalent\n return (s1 > s2) - (s1 < s2)\n if isinstance(otherstring, multistring):\n parentcompare = cmp_compat(six.text_type(self), otherstring)\n if parentcompare:\n return parentcompare\n else:\n return cmp_compat(self.strings[1:], otherstring.strings[1:])\n elif isinstance(otherstring, six.text_type):\n return cmp_compat(six.text_type(self), otherstring)\n elif isinstance(otherstring, bytes):\n return cmp_compat(self.encode('utf-8'), otherstring)\n elif isinstance(otherstring, list) and otherstring:\n return cmp_compat(self, multistring(otherstring))\n else:\n return cmp_compat(str(type(self)), str(type(otherstring)))\n\n def __hash__(self):\n return hash(str(self))\n\n def __ne__(self, otherstring):\n return self.__cmp__(otherstring) != 0\n\n def __eq__(self, otherstring):\n return self.__cmp__(otherstring) == 0\n\n def __repr__(self):\n _repr = u\"multistring(%r)\" % (\n [six.text_type(item) for item in self.strings]\n )\n return _repr.encode('utf-8') if six.PY2 else _repr\n\n def __str__(self):\n if six.PY2:\n return self.encode('utf-8')\n return super(multistring, self).__str__()\n\n def replace(self, old, new, count=None):\n if count is None:\n newstr = multistring(super(multistring, self).replace(old, new))\n else:\n newstr = multistring(super(multistring, self).replace(old, new, count))\n for s in self.strings[1:]:\n if count is None:\n newstr.strings.append(s.replace(old, new))\n else:\n newstr.strings.append(s.replace(old, new, count))\n return newstr\n", "path": "translate/misc/multistring.py"}]} | 1,573 | 114 |
gh_patches_debug_45322 | rasdani/github-patches | git_diff | zestedesavoir__zds-site-3531 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[v18] I have lost my followed topics
Version: 18
OS: Windows
Browser: Firefox
After a sync was done between production and the beta, I can no longer find my followed topics on the page https://beta.zestedesavoir.com/forums/, as can be seen in the screenshot below (I have only one followed topic left instead of many more).

--- END ISSUE ---
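Editor's note (issue translated from French above): the followed topics disappear because the old migration only created a `TopicAnswerSubscription` for followed topics that still had unread posts - the queryset excludes topics whose read marker equals `topic__last_message`, so fully-read followed topics were silently skipped. The recorded fix (golden diff below) first creates a subscription for every `TopicFollowed` row; paraphrased:

```python
# Sketch paraphrasing the fix: subscribe the user to *every* followed topic,
# independently of whether it still has unread posts.
for topic_id in (TopicFollowed.objects.filter(user=profile.user)
                 .values_list('topic', flat=True).distinct()):
    TopicAnswerSubscription.objects.get_or_create_active(
        user=profile.user, content_object=Topic.objects.get(id=topic_id))
```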
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zds/notification/management/commands/migrate_subscriptions.py`
Content:
```
1 # coding: utf-8
2 from django.core.management import BaseCommand
3 from django.db.models import F, Q
4 from zds.forum.models import TopicRead
5 from zds.member.models import Profile
6 from zds.mp.models import PrivateTopicRead, PrivateTopic
7 from zds.notification.models import TopicFollowed, TopicAnswerSubscription, ContentReactionAnswerSubscription, \
8 PrivateTopicAnswerSubscription
9 from zds.tutorialv2.models.models_database import ContentReaction, ContentRead
10
11
12 class Command(BaseCommand):
13 help = 'Migrate old subscriptions and notifications for new models.'
14
15 def handle(self, *args, **options):
16 for profile in Profile.objects.all():
17 self.stdout.write(u'Migrate all notifications of {}...'.format(profile.user.username))
18 # Forums.
19 self.stdout.write(u'Starting migration with topics...')
20 topics_followed = TopicFollowed.objects.filter(user=profile.user).values("topic").distinct().all()
21 topics_never_read = TopicRead.objects\
22 .filter(user=profile.user)\
23 .filter(topic__in=topics_followed)\
24 .select_related("topic")\
25 .exclude(post=F('topic__last_message')).all()
26
27 for topic_never_read in topics_never_read:
28 content = topic_never_read.topic.first_unread_post(profile.user)
29 if content is None:
30 content = topic_never_read.topic.last_message
31
32 # Migrate subscriptions.
33 content_object = topic_never_read.topic
34 subscription = TopicAnswerSubscription.objects.get_or_create_active(
35 user=profile.user, content_object=content_object)
36
37 # Migrate notifications.
38 subscription.send_notification(content=content, sender=content.author)
39 notification = TopicAnswerSubscription.objects\
40 .get_existing(profile.user, content_object, is_active=True).last_notification
41 notification.pubdate = content.pubdate
42 notification.save()
43
44 self.stdout.write(u'Migration about « {} » [OK]'.format(topic_never_read.topic.title))
45
46 # Private messages.
47 self.stdout.write(u'Starting migration with private topics...')
48 topics_never_read = list(PrivateTopicRead.objects
49 .filter(user=profile.user)
50 .filter(privatepost=F('privatetopic__last_message')).all())
51
52 tnrs = []
53 for tnr in topics_never_read:
54 tnrs.append(tnr.privatetopic.pk)
55
56 private_topics_unread = PrivateTopic.objects \
57 .filter(Q(author=profile.user) | Q(participants__in=[profile.user])) \
58 .exclude(pk__in=tnrs) \
59 .order_by("-pubdate") \
60 .distinct()
61
62 for private_topic_unread in private_topics_unread:
63 answer = private_topic_unread.last_read_post(profile.user)
64 if answer is None:
65 answer = private_topic_unread.get_last_answer()
66
67 # Migrate subscriptions.
68 subscription = PrivateTopicAnswerSubscription.objects.get_or_create_active(
69 user=profile.user, content_object=private_topic_unread)
70
71 # Migrate notifications.
72 subscription.send_notification(content=answer, sender=answer.author, send_email=False)
73 notification = PrivateTopicAnswerSubscription.objects\
74 .get_existing(profile.user, private_topic_unread, is_active=True).last_notification
75 notification.pubdate = answer.pubdate
76 notification.save()
77
78 self.stdout.write(u'Migration about « {} » [OK]'.format(private_topic_unread.title))
79
80 # Contents.
81 self.stdout.write(u'Starting migration with contents...')
82 content_followed_pk = ContentReaction.objects\
83 .filter(author=profile.user, related_content__public_version__isnull=False)\
84 .values_list('related_content__pk', flat=True)
85
86 content_to_read = ContentRead.objects\
87 .select_related('note')\
88 .select_related('note__author')\
89 .select_related('content')\
90 .select_related('note__related_content__public_version')\
91 .filter(user=profile.user)\
92 .exclude(note__pk=F('content__last_note__pk')).all()
93
94 for content_read in content_to_read:
95 content = content_read.content
96 if content.pk not in content_followed_pk and profile.user not in content.authors.all():
97 continue
98 reaction = content.first_unread_note(user=profile.user)
99 if reaction is None:
100 reaction = content.first_note()
101 if reaction is None:
102 continue
103
104 # Migrate subscriptions.
105 content_object = reaction.related_content
106 subscription = ContentReactionAnswerSubscription.objects.get_or_create_active(
107 user=profile.user, content_object=content_object)
108
109 # Migrate notifications.
110 subscription.send_notification(content=reaction, sender=reaction.author)
111 notification = ContentReactionAnswerSubscription.objects\
112 .get_existing(profile.user, content_object, is_active=True).last_notification
113 notification.pubdate = reaction.pubdate
114 notification.save()
115
116 self.stdout.write(u'Migration about « {} » [OK]'.format(content.title))
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zds/notification/management/commands/migrate_subscriptions.py b/zds/notification/management/commands/migrate_subscriptions.py
--- a/zds/notification/management/commands/migrate_subscriptions.py
+++ b/zds/notification/management/commands/migrate_subscriptions.py
@@ -1,12 +1,12 @@
# coding: utf-8
from django.core.management import BaseCommand
from django.db.models import F, Q
-from zds.forum.models import TopicRead
+from zds.forum.models import TopicRead, Topic
from zds.member.models import Profile
from zds.mp.models import PrivateTopicRead, PrivateTopic
from zds.notification.models import TopicFollowed, TopicAnswerSubscription, ContentReactionAnswerSubscription, \
PrivateTopicAnswerSubscription
-from zds.tutorialv2.models.models_database import ContentReaction, ContentRead
+from zds.tutorialv2.models.models_database import ContentReaction, ContentRead, PublishableContent
class Command(BaseCommand):
@@ -24,12 +24,17 @@
.select_related("topic")\
.exclude(post=F('topic__last_message')).all()
+ # Migrate subscriptions.
+ for topic_followed in TopicFollowed.objects \
+ .filter(user=profile.user).values_list('topic', flat=True).distinct().all():
+ content_object = Topic.objects.get(id=topic_followed)
+ TopicAnswerSubscription.objects.get_or_create_active(user=profile.user, content_object=content_object)
+
for topic_never_read in topics_never_read:
content = topic_never_read.topic.first_unread_post(profile.user)
if content is None:
content = topic_never_read.topic.last_message
- # Migrate subscriptions.
content_object = topic_never_read.topic
subscription = TopicAnswerSubscription.objects.get_or_create_active(
user=profile.user, content_object=content_object)
@@ -53,6 +58,16 @@
for tnr in topics_never_read:
tnrs.append(tnr.privatetopic.pk)
+ # Migrate subscriptions.
+ private_topics = PrivateTopic.objects \
+ .filter(Q(author=profile.user) | Q(participants__in=[profile.user])) \
+ .order_by("-pubdate") \
+ .distinct()
+
+ for private_topic in private_topics:
+ PrivateTopicAnswerSubscription.objects.get_or_create_active(
+ user=profile.user, content_object=private_topic)
+
private_topics_unread = PrivateTopic.objects \
.filter(Q(author=profile.user) | Q(participants__in=[profile.user])) \
.exclude(pk__in=tnrs) \
@@ -64,7 +79,6 @@
if answer is None:
answer = private_topic_unread.get_last_answer()
- # Migrate subscriptions.
subscription = PrivateTopicAnswerSubscription.objects.get_or_create_active(
user=profile.user, content_object=private_topic_unread)
@@ -79,6 +93,16 @@
# Contents.
self.stdout.write(u'Starting migration with contents...')
+ # Migrate subscriptions.
+ contents_followed = ContentReaction.objects \
+ .filter(author=profile.user, related_content__public_version__isnull=False) \
+ .values_list('related_content', flat=True)
+
+ for content_followed in contents_followed:
+ content_object = PublishableContent.objects.get(id=content_followed)
+ ContentReactionAnswerSubscription.objects.get_or_create_active(
+ user=profile.user, content_object=content_object)
+
content_followed_pk = ContentReaction.objects\
.filter(author=profile.user, related_content__public_version__isnull=False)\
.values_list('related_content__pk', flat=True)
@@ -101,7 +125,6 @@
if reaction is None:
continue
- # Migrate subscriptions.
content_object = reaction.related_content
subscription = ContentReactionAnswerSubscription.objects.get_or_create_active(
user=profile.user, content_object=content_object)
| {"golden_diff": "diff --git a/zds/notification/management/commands/migrate_subscriptions.py b/zds/notification/management/commands/migrate_subscriptions.py\n--- a/zds/notification/management/commands/migrate_subscriptions.py\n+++ b/zds/notification/management/commands/migrate_subscriptions.py\n@@ -1,12 +1,12 @@\n # coding: utf-8\n from django.core.management import BaseCommand\n from django.db.models import F, Q\n-from zds.forum.models import TopicRead\n+from zds.forum.models import TopicRead, Topic\n from zds.member.models import Profile\n from zds.mp.models import PrivateTopicRead, PrivateTopic\n from zds.notification.models import TopicFollowed, TopicAnswerSubscription, ContentReactionAnswerSubscription, \\\n PrivateTopicAnswerSubscription\n-from zds.tutorialv2.models.models_database import ContentReaction, ContentRead\n+from zds.tutorialv2.models.models_database import ContentReaction, ContentRead, PublishableContent\n \n \n class Command(BaseCommand):\n@@ -24,12 +24,17 @@\n .select_related(\"topic\")\\\n .exclude(post=F('topic__last_message')).all()\n \n+ # Migrate subscriptions.\n+ for topic_followed in TopicFollowed.objects \\\n+ .filter(user=profile.user).values_list('topic', flat=True).distinct().all():\n+ content_object = Topic.objects.get(id=topic_followed)\n+ TopicAnswerSubscription.objects.get_or_create_active(user=profile.user, content_object=content_object)\n+\n for topic_never_read in topics_never_read:\n content = topic_never_read.topic.first_unread_post(profile.user)\n if content is None:\n content = topic_never_read.topic.last_message\n \n- # Migrate subscriptions.\n content_object = topic_never_read.topic\n subscription = TopicAnswerSubscription.objects.get_or_create_active(\n user=profile.user, content_object=content_object)\n@@ -53,6 +58,16 @@\n for tnr in topics_never_read:\n tnrs.append(tnr.privatetopic.pk)\n \n+ # Migrate subscriptions.\n+ private_topics = PrivateTopic.objects \\\n+ .filter(Q(author=profile.user) | Q(participants__in=[profile.user])) \\\n+ .order_by(\"-pubdate\") \\\n+ .distinct()\n+\n+ for private_topic in private_topics:\n+ PrivateTopicAnswerSubscription.objects.get_or_create_active(\n+ user=profile.user, content_object=private_topic)\n+\n private_topics_unread = PrivateTopic.objects \\\n .filter(Q(author=profile.user) | Q(participants__in=[profile.user])) \\\n .exclude(pk__in=tnrs) \\\n@@ -64,7 +79,6 @@\n if answer is None:\n answer = private_topic_unread.get_last_answer()\n \n- # Migrate subscriptions.\n subscription = PrivateTopicAnswerSubscription.objects.get_or_create_active(\n user=profile.user, content_object=private_topic_unread)\n \n@@ -79,6 +93,16 @@\n \n # Contents.\n self.stdout.write(u'Starting migration with contents...')\n+ # Migrate subscriptions.\n+ contents_followed = ContentReaction.objects \\\n+ .filter(author=profile.user, related_content__public_version__isnull=False) \\\n+ .values_list('related_content', flat=True)\n+\n+ for content_followed in contents_followed:\n+ content_object = PublishableContent.objects.get(id=content_followed)\n+ ContentReactionAnswerSubscription.objects.get_or_create_active(\n+ user=profile.user, content_object=content_object)\n+\n content_followed_pk = ContentReaction.objects\\\n .filter(author=profile.user, related_content__public_version__isnull=False)\\\n .values_list('related_content__pk', flat=True)\n@@ -101,7 +125,6 @@\n if reaction is None:\n continue\n \n- # Migrate subscriptions.\n content_object = reaction.related_content\n subscription = 
ContentReactionAnswerSubscription.objects.get_or_create_active(\n user=profile.user, content_object=content_object)\n", "issue": "[v18] J'ai perdu mes sujets suivis\nVersion : 18\nOS Windows\nNavigateur Firefox\n\nAlors qu'une synchro a \u00e9t\u00e9 faite entre la prod et la beta, je ne retrouve plus mes sujets suivis dans la page https://beta.zestedesavoir.com/forums/ comme on peut le voir sur la capture ci-dessous (il ne me reste plus qu'un sujet suivis au lieu de beaucoup plus)\n\n\n\n", "before_files": [{"content": "# coding: utf-8\nfrom django.core.management import BaseCommand\nfrom django.db.models import F, Q\nfrom zds.forum.models import TopicRead\nfrom zds.member.models import Profile\nfrom zds.mp.models import PrivateTopicRead, PrivateTopic\nfrom zds.notification.models import TopicFollowed, TopicAnswerSubscription, ContentReactionAnswerSubscription, \\\n PrivateTopicAnswerSubscription\nfrom zds.tutorialv2.models.models_database import ContentReaction, ContentRead\n\n\nclass Command(BaseCommand):\n help = 'Migrate old subscriptions and notifications for new models.'\n\n def handle(self, *args, **options):\n for profile in Profile.objects.all():\n self.stdout.write(u'Migrate all notifications of {}...'.format(profile.user.username))\n # Forums.\n self.stdout.write(u'Starting migration with topics...')\n topics_followed = TopicFollowed.objects.filter(user=profile.user).values(\"topic\").distinct().all()\n topics_never_read = TopicRead.objects\\\n .filter(user=profile.user)\\\n .filter(topic__in=topics_followed)\\\n .select_related(\"topic\")\\\n .exclude(post=F('topic__last_message')).all()\n\n for topic_never_read in topics_never_read:\n content = topic_never_read.topic.first_unread_post(profile.user)\n if content is None:\n content = topic_never_read.topic.last_message\n\n # Migrate subscriptions.\n content_object = topic_never_read.topic\n subscription = TopicAnswerSubscription.objects.get_or_create_active(\n user=profile.user, content_object=content_object)\n\n # Migrate notifications.\n subscription.send_notification(content=content, sender=content.author)\n notification = TopicAnswerSubscription.objects\\\n .get_existing(profile.user, content_object, is_active=True).last_notification\n notification.pubdate = content.pubdate\n notification.save()\n\n self.stdout.write(u'Migration about \u00ab {} \u00bb [OK]'.format(topic_never_read.topic.title))\n\n # Private messages.\n self.stdout.write(u'Starting migration with private topics...')\n topics_never_read = list(PrivateTopicRead.objects\n .filter(user=profile.user)\n .filter(privatepost=F('privatetopic__last_message')).all())\n\n tnrs = []\n for tnr in topics_never_read:\n tnrs.append(tnr.privatetopic.pk)\n\n private_topics_unread = PrivateTopic.objects \\\n .filter(Q(author=profile.user) | Q(participants__in=[profile.user])) \\\n .exclude(pk__in=tnrs) \\\n .order_by(\"-pubdate\") \\\n .distinct()\n\n for private_topic_unread in private_topics_unread:\n answer = private_topic_unread.last_read_post(profile.user)\n if answer is None:\n answer = private_topic_unread.get_last_answer()\n\n # Migrate subscriptions.\n subscription = PrivateTopicAnswerSubscription.objects.get_or_create_active(\n user=profile.user, content_object=private_topic_unread)\n\n # Migrate notifications.\n subscription.send_notification(content=answer, sender=answer.author, send_email=False)\n notification = PrivateTopicAnswerSubscription.objects\\\n .get_existing(profile.user, private_topic_unread, is_active=True).last_notification\n notification.pubdate = 
answer.pubdate\n notification.save()\n\n self.stdout.write(u'Migration about \u00ab {} \u00bb [OK]'.format(private_topic_unread.title))\n\n # Contents.\n self.stdout.write(u'Starting migration with contents...')\n content_followed_pk = ContentReaction.objects\\\n .filter(author=profile.user, related_content__public_version__isnull=False)\\\n .values_list('related_content__pk', flat=True)\n\n content_to_read = ContentRead.objects\\\n .select_related('note')\\\n .select_related('note__author')\\\n .select_related('content')\\\n .select_related('note__related_content__public_version')\\\n .filter(user=profile.user)\\\n .exclude(note__pk=F('content__last_note__pk')).all()\n\n for content_read in content_to_read:\n content = content_read.content\n if content.pk not in content_followed_pk and profile.user not in content.authors.all():\n continue\n reaction = content.first_unread_note(user=profile.user)\n if reaction is None:\n reaction = content.first_note()\n if reaction is None:\n continue\n\n # Migrate subscriptions.\n content_object = reaction.related_content\n subscription = ContentReactionAnswerSubscription.objects.get_or_create_active(\n user=profile.user, content_object=content_object)\n\n # Migrate notifications.\n subscription.send_notification(content=reaction, sender=reaction.author)\n notification = ContentReactionAnswerSubscription.objects\\\n .get_existing(profile.user, content_object, is_active=True).last_notification\n notification.pubdate = reaction.pubdate\n notification.save()\n\n self.stdout.write(u'Migration about \u00ab {} \u00bb [OK]'.format(content.title))\n", "path": "zds/notification/management/commands/migrate_subscriptions.py"}], "after_files": [{"content": "# coding: utf-8\nfrom django.core.management import BaseCommand\nfrom django.db.models import F, Q\nfrom zds.forum.models import TopicRead, Topic\nfrom zds.member.models import Profile\nfrom zds.mp.models import PrivateTopicRead, PrivateTopic\nfrom zds.notification.models import TopicFollowed, TopicAnswerSubscription, ContentReactionAnswerSubscription, \\\n PrivateTopicAnswerSubscription\nfrom zds.tutorialv2.models.models_database import ContentReaction, ContentRead, PublishableContent\n\n\nclass Command(BaseCommand):\n help = 'Migrate old subscriptions and notifications for new models.'\n\n def handle(self, *args, **options):\n for profile in Profile.objects.all():\n self.stdout.write(u'Migrate all notifications of {}...'.format(profile.user.username))\n # Forums.\n self.stdout.write(u'Starting migration with topics...')\n topics_followed = TopicFollowed.objects.filter(user=profile.user).values(\"topic\").distinct().all()\n topics_never_read = TopicRead.objects\\\n .filter(user=profile.user)\\\n .filter(topic__in=topics_followed)\\\n .select_related(\"topic\")\\\n .exclude(post=F('topic__last_message')).all()\n\n # Migrate subscriptions.\n for topic_followed in TopicFollowed.objects \\\n .filter(user=profile.user).values_list('topic', flat=True).distinct().all():\n content_object = Topic.objects.get(id=topic_followed)\n TopicAnswerSubscription.objects.get_or_create_active(user=profile.user, content_object=content_object)\n\n for topic_never_read in topics_never_read:\n content = topic_never_read.topic.first_unread_post(profile.user)\n if content is None:\n content = topic_never_read.topic.last_message\n\n content_object = topic_never_read.topic\n subscription = TopicAnswerSubscription.objects.get_or_create_active(\n user=profile.user, content_object=content_object)\n\n # Migrate notifications.\n 
subscription.send_notification(content=content, sender=content.author)\n notification = TopicAnswerSubscription.objects\\\n .get_existing(profile.user, content_object, is_active=True).last_notification\n notification.pubdate = content.pubdate\n notification.save()\n\n self.stdout.write(u'Migration about \u00ab {} \u00bb [OK]'.format(topic_never_read.topic.title))\n\n # Private messages.\n self.stdout.write(u'Starting migration with private topics...')\n topics_never_read = list(PrivateTopicRead.objects\n .filter(user=profile.user)\n .filter(privatepost=F('privatetopic__last_message')).all())\n\n tnrs = []\n for tnr in topics_never_read:\n tnrs.append(tnr.privatetopic.pk)\n\n # Migrate subscriptions.\n private_topics = PrivateTopic.objects \\\n .filter(Q(author=profile.user) | Q(participants__in=[profile.user])) \\\n .order_by(\"-pubdate\") \\\n .distinct()\n\n for private_topic in private_topics:\n PrivateTopicAnswerSubscription.objects.get_or_create_active(\n user=profile.user, content_object=private_topic)\n\n private_topics_unread = PrivateTopic.objects \\\n .filter(Q(author=profile.user) | Q(participants__in=[profile.user])) \\\n .exclude(pk__in=tnrs) \\\n .order_by(\"-pubdate\") \\\n .distinct()\n\n for private_topic_unread in private_topics_unread:\n answer = private_topic_unread.last_read_post(profile.user)\n if answer is None:\n answer = private_topic_unread.get_last_answer()\n\n subscription = PrivateTopicAnswerSubscription.objects.get_or_create_active(\n user=profile.user, content_object=private_topic_unread)\n\n # Migrate notifications.\n subscription.send_notification(content=answer, sender=answer.author, send_email=False)\n notification = PrivateTopicAnswerSubscription.objects\\\n .get_existing(profile.user, private_topic_unread, is_active=True).last_notification\n notification.pubdate = answer.pubdate\n notification.save()\n\n self.stdout.write(u'Migration about \u00ab {} \u00bb [OK]'.format(private_topic_unread.title))\n\n # Contents.\n self.stdout.write(u'Starting migration with contents...')\n # Migrate subscriptions.\n contents_followed = ContentReaction.objects \\\n .filter(author=profile.user, related_content__public_version__isnull=False) \\\n .values_list('related_content', flat=True)\n\n for content_followed in contents_followed:\n content_object = PublishableContent.objects.get(id=content_followed)\n ContentReactionAnswerSubscription.objects.get_or_create_active(\n user=profile.user, content_object=content_object)\n\n content_followed_pk = ContentReaction.objects\\\n .filter(author=profile.user, related_content__public_version__isnull=False)\\\n .values_list('related_content__pk', flat=True)\n\n content_to_read = ContentRead.objects\\\n .select_related('note')\\\n .select_related('note__author')\\\n .select_related('content')\\\n .select_related('note__related_content__public_version')\\\n .filter(user=profile.user)\\\n .exclude(note__pk=F('content__last_note__pk')).all()\n\n for content_read in content_to_read:\n content = content_read.content\n if content.pk not in content_followed_pk and profile.user not in content.authors.all():\n continue\n reaction = content.first_unread_note(user=profile.user)\n if reaction is None:\n reaction = content.first_note()\n if reaction is None:\n continue\n\n content_object = reaction.related_content\n subscription = ContentReactionAnswerSubscription.objects.get_or_create_active(\n user=profile.user, content_object=content_object)\n\n # Migrate notifications.\n subscription.send_notification(content=reaction, 
sender=reaction.author)\n notification = ContentReactionAnswerSubscription.objects\\\n .get_existing(profile.user, content_object, is_active=True).last_notification\n notification.pubdate = reaction.pubdate\n notification.save()\n\n self.stdout.write(u'Migration about \u00ab {} \u00bb [OK]'.format(content.title))\n", "path": "zds/notification/management/commands/migrate_subscriptions.py"}]} | 1,718 | 876 |
gh_patches_debug_14068 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-3269 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add OpenAPI specification for /users/ endpoint
Generate spec for `users` endpoint
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `config/settings/openapi.py`
Content:
```
1 def custom_preprocessing_hook(endpoints):
2 filtered = []
3 for (path, path_regex, method, callback) in endpoints:
4 # Remove all but DRF API endpoints
5 if path.startswith("/api/db/v0/databases/") or path.startswith("/api/db/v0/data_files/") or path.startswith("/api/db/v0/schemas/") or path.startswith("/api/db/v0/tables/"):
6 filtered.append((path, path_regex, method, callback))
7 return filtered
8
9
10 def remove_url_prefix_hook(result, **kwargs):
11 # Remove namespace and version URL prefix from the operation Id of the generated API schema
12 for path, path_info in result['paths'].items():
13 for method, operation in path_info.items():
14 operation_id = operation.get('operationId')
15 if operation_id:
16 if path.startswith('/api/db/v0/'):
17 operation['operationId'] = operation_id.replace('db_v0_', '')
18 elif path.startswith('/api/ui/v0/'):
19 operation['operationId'] = operation_id.replace('ui_v0_', '')
20
21 return result
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/config/settings/openapi.py b/config/settings/openapi.py
--- a/config/settings/openapi.py
+++ b/config/settings/openapi.py
@@ -1,9 +1,15 @@
def custom_preprocessing_hook(endpoints):
- filtered = []
- for (path, path_regex, method, callback) in endpoints:
- # Remove all but DRF API endpoints
- if path.startswith("/api/db/v0/databases/") or path.startswith("/api/db/v0/data_files/") or path.startswith("/api/db/v0/schemas/") or path.startswith("/api/db/v0/tables/"):
- filtered.append((path, path_regex, method, callback))
+ prefixes = [
+ "/api/db/v0/databases/",
+ "/api/db/v0/data_files/",
+ "/api/db/v0/schemas/",
+ "/api/db/v0/tables/",
+ "/api/db/v0/links/",
+ "/api/db/v0/queries/",
+ "/api/ui/v0/databases/",
+ "/api/ui/v0/users/"
+ ]
+ filtered = [(path, path_regex, method, callback) for path, path_regex, method, callback in endpoints if any(path.startswith(prefix) for prefix in prefixes)]
return filtered
| {"golden_diff": "diff --git a/config/settings/openapi.py b/config/settings/openapi.py\n--- a/config/settings/openapi.py\n+++ b/config/settings/openapi.py\n@@ -1,9 +1,15 @@\n def custom_preprocessing_hook(endpoints):\n- filtered = []\n- for (path, path_regex, method, callback) in endpoints:\n- # Remove all but DRF API endpoints\n- if path.startswith(\"/api/db/v0/databases/\") or path.startswith(\"/api/db/v0/data_files/\") or path.startswith(\"/api/db/v0/schemas/\") or path.startswith(\"/api/db/v0/tables/\"):\n- filtered.append((path, path_regex, method, callback))\n+ prefixes = [\n+ \"/api/db/v0/databases/\",\n+ \"/api/db/v0/data_files/\",\n+ \"/api/db/v0/schemas/\",\n+ \"/api/db/v0/tables/\",\n+ \"/api/db/v0/links/\",\n+ \"/api/db/v0/queries/\",\n+ \"/api/ui/v0/databases/\",\n+ \"/api/ui/v0/users/\"\n+ ]\n+ filtered = [(path, path_regex, method, callback) for path, path_regex, method, callback in endpoints if any(path.startswith(prefix) for prefix in prefixes)]\n return filtered\n", "issue": "Add openAPI Specification for /users/ endpoint\nGenerate spec for `users` endpoint\n", "before_files": [{"content": "def custom_preprocessing_hook(endpoints):\n filtered = []\n for (path, path_regex, method, callback) in endpoints:\n # Remove all but DRF API endpoints\n if path.startswith(\"/api/db/v0/databases/\") or path.startswith(\"/api/db/v0/data_files/\") or path.startswith(\"/api/db/v0/schemas/\") or path.startswith(\"/api/db/v0/tables/\"):\n filtered.append((path, path_regex, method, callback))\n return filtered\n\n\ndef remove_url_prefix_hook(result, **kwargs):\n # Remove namespace and version URL prefix from the operation Id of the generated API schema\n for path, path_info in result['paths'].items():\n for method, operation in path_info.items():\n operation_id = operation.get('operationId')\n if operation_id:\n if path.startswith('/api/db/v0/'):\n operation['operationId'] = operation_id.replace('db_v0_', '')\n elif path.startswith('/api/ui/v0/'):\n operation['operationId'] = operation_id.replace('ui_v0_', '')\n\n return result\n", "path": "config/settings/openapi.py"}], "after_files": [{"content": "def custom_preprocessing_hook(endpoints):\n prefixes = [\n \"/api/db/v0/databases/\",\n \"/api/db/v0/data_files/\",\n \"/api/db/v0/schemas/\",\n \"/api/db/v0/tables/\",\n \"/api/db/v0/links/\",\n \"/api/db/v0/queries/\",\n \"/api/ui/v0/databases/\",\n \"/api/ui/v0/users/\"\n ]\n filtered = [(path, path_regex, method, callback) for path, path_regex, method, callback in endpoints if any(path.startswith(prefix) for prefix in prefixes)]\n return filtered\n\n\ndef remove_url_prefix_hook(result, **kwargs):\n # Remove namespace and version URL prefix from the operation Id of the generated API schema\n for path, path_info in result['paths'].items():\n for method, operation in path_info.items():\n operation_id = operation.get('operationId')\n if operation_id:\n if path.startswith('/api/db/v0/'):\n operation['operationId'] = operation_id.replace('db_v0_', '')\n elif path.startswith('/api/ui/v0/'):\n operation['operationId'] = operation_id.replace('ui_v0_', '')\n\n return result\n", "path": "config/settings/openapi.py"}]} | 539 | 271 |
gh_patches_debug_36550 | rasdani/github-patches | git_diff | python-gitlab__python-gitlab-2816 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support /projects/:id/job_token_scope/allowlist
I would like to access the job token allowlist through the GitLab API.
Docs:
- [Get a project’s CI/CD job token inbound allowlist](https://docs.gitlab.com/ee/api/project_job_token_scopes.html#get-a-projects-cicd-job-token-inbound-allowlist)
- [Allow access to your project with a job token](https://docs.gitlab.com/ee/ci/jobs/ci_job_token.html#allow-access-to-your-project-with-a-job-token)
API endpoint:
```
GET/POST/DELETE /projects/:id/job_token_scope/allowlist
```
I currently have the following manager to access the job token scope allowlist:
```
class ProjectJobTokenScopeAllowlistManager(
gitlab.mixins.ListMixin,
gitlab.mixins.CreateMixin,
gitlab.mixins.DeleteMixin,
gitlab.base.RESTManager,
):
_path = "/projects/{project_id}/job_token_scope/allowlist"
_obj_cls = gitlab.v4.objects.Project
_from_parent_attrs = {"project_id": "id"}
_create_attrs = gitlab.types.RequiredOptional(required=("target_project_id",))
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gitlab/v4/objects/job_token_scope.py`
Content:
```
1 from typing import Any, cast
2
3 from gitlab.base import RESTManager, RESTObject
4 from gitlab.mixins import (
5 GetWithoutIdMixin,
6 RefreshMixin,
7 SaveMixin,
8 UpdateMethod,
9 UpdateMixin,
10 )
11
12 __all__ = [
13 "ProjectJobTokenScope",
14 "ProjectJobTokenScopeManager",
15 ]
16
17
18 class ProjectJobTokenScope(RefreshMixin, SaveMixin, RESTObject):
19 _id_attr = None
20
21
22 class ProjectJobTokenScopeManager(GetWithoutIdMixin, UpdateMixin, RESTManager):
23 _path = "/projects/{project_id}/job_token_scope"
24 _obj_cls = ProjectJobTokenScope
25 _from_parent_attrs = {"project_id": "id"}
26 _update_method = UpdateMethod.PATCH
27
28 def get(self, **kwargs: Any) -> ProjectJobTokenScope:
29 return cast(ProjectJobTokenScope, super().get(**kwargs))
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gitlab/v4/objects/job_token_scope.py b/gitlab/v4/objects/job_token_scope.py
--- a/gitlab/v4/objects/job_token_scope.py
+++ b/gitlab/v4/objects/job_token_scope.py
@@ -2,12 +2,17 @@
from gitlab.base import RESTManager, RESTObject
from gitlab.mixins import (
+ CreateMixin,
+ DeleteMixin,
GetWithoutIdMixin,
+ ListMixin,
+ ObjectDeleteMixin,
RefreshMixin,
SaveMixin,
UpdateMethod,
UpdateMixin,
)
+from gitlab.types import RequiredOptional
__all__ = [
"ProjectJobTokenScope",
@@ -18,6 +23,9 @@
class ProjectJobTokenScope(RefreshMixin, SaveMixin, RESTObject):
_id_attr = None
+ allowlist: "AllowlistProjectManager"
+ groups_allowlist: "AllowlistGroupManager"
+
class ProjectJobTokenScopeManager(GetWithoutIdMixin, UpdateMixin, RESTManager):
_path = "/projects/{project_id}/job_token_scope"
@@ -27,3 +35,43 @@
def get(self, **kwargs: Any) -> ProjectJobTokenScope:
return cast(ProjectJobTokenScope, super().get(**kwargs))
+
+
+class AllowlistProject(ObjectDeleteMixin, RESTObject):
+ _id_attr = "target_project_id" # note: only true for create endpoint
+
+ def get_id(self) -> int:
+ """Returns the id of the resource. This override deals with
+ the fact that either an `id` or a `target_project_id` attribute
+ is returned by the server depending on the endpoint called."""
+ target_project_id = cast(int, super().get_id())
+ if target_project_id is not None:
+ return target_project_id
+ return cast(int, self.id)
+
+
+class AllowlistProjectManager(ListMixin, CreateMixin, DeleteMixin, RESTManager):
+ _path = "/projects/{project_id}/job_token_scope/allowlist"
+ _obj_cls = AllowlistProject
+ _from_parent_attrs = {"project_id": "project_id"}
+ _create_attrs = RequiredOptional(required=("target_project_id",))
+
+
+class AllowlistGroup(ObjectDeleteMixin, RESTObject):
+ _id_attr = "target_group_id" # note: only true for create endpoint
+
+ def get_id(self) -> int:
+ """Returns the id of the resource. This override deals with
+ the fact that either an `id` or a `target_group_id` attribute
+ is returned by the server depending on the endpoint called."""
+ target_group_id = cast(int, super().get_id())
+ if target_group_id is not None:
+ return target_group_id
+ return cast(int, self.id)
+
+
+class AllowlistGroupManager(ListMixin, CreateMixin, DeleteMixin, RESTManager):
+ _path = "/projects/{project_id}/job_token_scope/groups_allowlist"
+ _obj_cls = AllowlistGroup
+ _from_parent_attrs = {"project_id": "project_id"}
+ _create_attrs = RequiredOptional(required=("target_group_id",))
| {"golden_diff": "diff --git a/gitlab/v4/objects/job_token_scope.py b/gitlab/v4/objects/job_token_scope.py\n--- a/gitlab/v4/objects/job_token_scope.py\n+++ b/gitlab/v4/objects/job_token_scope.py\n@@ -2,12 +2,17 @@\n \n from gitlab.base import RESTManager, RESTObject\n from gitlab.mixins import (\n+ CreateMixin,\n+ DeleteMixin,\n GetWithoutIdMixin,\n+ ListMixin,\n+ ObjectDeleteMixin,\n RefreshMixin,\n SaveMixin,\n UpdateMethod,\n UpdateMixin,\n )\n+from gitlab.types import RequiredOptional\n \n __all__ = [\n \"ProjectJobTokenScope\",\n@@ -18,6 +23,9 @@\n class ProjectJobTokenScope(RefreshMixin, SaveMixin, RESTObject):\n _id_attr = None\n \n+ allowlist: \"AllowlistProjectManager\"\n+ groups_allowlist: \"AllowlistGroupManager\"\n+\n \n class ProjectJobTokenScopeManager(GetWithoutIdMixin, UpdateMixin, RESTManager):\n _path = \"/projects/{project_id}/job_token_scope\"\n@@ -27,3 +35,43 @@\n \n def get(self, **kwargs: Any) -> ProjectJobTokenScope:\n return cast(ProjectJobTokenScope, super().get(**kwargs))\n+\n+\n+class AllowlistProject(ObjectDeleteMixin, RESTObject):\n+ _id_attr = \"target_project_id\" # note: only true for create endpoint\n+\n+ def get_id(self) -> int:\n+ \"\"\"Returns the id of the resource. This override deals with\n+ the fact that either an `id` or a `target_project_id` attribute\n+ is returned by the server depending on the endpoint called.\"\"\"\n+ target_project_id = cast(int, super().get_id())\n+ if target_project_id is not None:\n+ return target_project_id\n+ return cast(int, self.id)\n+\n+\n+class AllowlistProjectManager(ListMixin, CreateMixin, DeleteMixin, RESTManager):\n+ _path = \"/projects/{project_id}/job_token_scope/allowlist\"\n+ _obj_cls = AllowlistProject\n+ _from_parent_attrs = {\"project_id\": \"project_id\"}\n+ _create_attrs = RequiredOptional(required=(\"target_project_id\",))\n+\n+\n+class AllowlistGroup(ObjectDeleteMixin, RESTObject):\n+ _id_attr = \"target_group_id\" # note: only true for create endpoint\n+\n+ def get_id(self) -> int:\n+ \"\"\"Returns the id of the resource. This override deals with\n+ the fact that either an `id` or a `target_group_id` attribute\n+ is returned by the server depending on the endpoint called.\"\"\"\n+ target_group_id = cast(int, super().get_id())\n+ if target_group_id is not None:\n+ return target_group_id\n+ return cast(int, self.id)\n+\n+\n+class AllowlistGroupManager(ListMixin, CreateMixin, DeleteMixin, RESTManager):\n+ _path = \"/projects/{project_id}/job_token_scope/groups_allowlist\"\n+ _obj_cls = AllowlistGroup\n+ _from_parent_attrs = {\"project_id\": \"project_id\"}\n+ _create_attrs = RequiredOptional(required=(\"target_group_id\",))\n", "issue": "Support /projects/:id/job_token_scope/allowlist\nI would like to access the job token allow list through the Gitlab API. 
\r\n\r\nDocs: \r\n- [Get a project\u2019s CI/CD job token inbound allowlist](https://docs.gitlab.com/ee/api/project_job_token_scopes.html#get-a-projects-cicd-job-token-inbound-allowlist)\r\n- [Allow access to your project with a job token](https://docs.gitlab.com/ee/ci/jobs/ci_job_token.html#allow-access-to-your-project-with-a-job-token)\r\n\r\nAPI endpoint: \r\n```\r\nGET/POST/DELETE /projects/:id/job_token_scope/allowlist\r\n```\r\n\r\nI currently have the following manager to access the job token scope allow list:\r\n\r\n```\r\nclass ProjectJobTokenScopeAllowlistManager(\r\n gitlab.mixins.ListMixin,\r\n gitlab.mixins.CreateMixin,\r\n gitlab.mixins.DeleteMixin,\r\n gitlab.base.RESTManager,\r\n):\r\n _path = \"/projects/{project_id}/job_token_scope/allowlist\"\r\n _obj_cls = gitlab.v4.objects.Project\r\n _from_parent_attrs = {\"project_id\": \"id\"}\r\n _create_attrs = gitlab.types.RequiredOptional(required=(\"target_project_id\",))\r\n```\n", "before_files": [{"content": "from typing import Any, cast\n\nfrom gitlab.base import RESTManager, RESTObject\nfrom gitlab.mixins import (\n GetWithoutIdMixin,\n RefreshMixin,\n SaveMixin,\n UpdateMethod,\n UpdateMixin,\n)\n\n__all__ = [\n \"ProjectJobTokenScope\",\n \"ProjectJobTokenScopeManager\",\n]\n\n\nclass ProjectJobTokenScope(RefreshMixin, SaveMixin, RESTObject):\n _id_attr = None\n\n\nclass ProjectJobTokenScopeManager(GetWithoutIdMixin, UpdateMixin, RESTManager):\n _path = \"/projects/{project_id}/job_token_scope\"\n _obj_cls = ProjectJobTokenScope\n _from_parent_attrs = {\"project_id\": \"id\"}\n _update_method = UpdateMethod.PATCH\n\n def get(self, **kwargs: Any) -> ProjectJobTokenScope:\n return cast(ProjectJobTokenScope, super().get(**kwargs))\n", "path": "gitlab/v4/objects/job_token_scope.py"}], "after_files": [{"content": "from typing import Any, cast\n\nfrom gitlab.base import RESTManager, RESTObject\nfrom gitlab.mixins import (\n CreateMixin,\n DeleteMixin,\n GetWithoutIdMixin,\n ListMixin,\n ObjectDeleteMixin,\n RefreshMixin,\n SaveMixin,\n UpdateMethod,\n UpdateMixin,\n)\nfrom gitlab.types import RequiredOptional\n\n__all__ = [\n \"ProjectJobTokenScope\",\n \"ProjectJobTokenScopeManager\",\n]\n\n\nclass ProjectJobTokenScope(RefreshMixin, SaveMixin, RESTObject):\n _id_attr = None\n\n allowlist: \"AllowlistProjectManager\"\n groups_allowlist: \"AllowlistGroupManager\"\n\n\nclass ProjectJobTokenScopeManager(GetWithoutIdMixin, UpdateMixin, RESTManager):\n _path = \"/projects/{project_id}/job_token_scope\"\n _obj_cls = ProjectJobTokenScope\n _from_parent_attrs = {\"project_id\": \"id\"}\n _update_method = UpdateMethod.PATCH\n\n def get(self, **kwargs: Any) -> ProjectJobTokenScope:\n return cast(ProjectJobTokenScope, super().get(**kwargs))\n\n\nclass AllowlistProject(ObjectDeleteMixin, RESTObject):\n _id_attr = \"target_project_id\" # note: only true for create endpoint\n\n def get_id(self) -> int:\n \"\"\"Returns the id of the resource. 
This override deals with\n the fact that either an `id` or a `target_project_id` attribute\n is returned by the server depending on the endpoint called.\"\"\"\n target_project_id = cast(int, super().get_id())\n if target_project_id is not None:\n return target_project_id\n return cast(int, self.id)\n\n\nclass AllowlistProjectManager(ListMixin, CreateMixin, DeleteMixin, RESTManager):\n _path = \"/projects/{project_id}/job_token_scope/allowlist\"\n _obj_cls = AllowlistProject\n _from_parent_attrs = {\"project_id\": \"project_id\"}\n _create_attrs = RequiredOptional(required=(\"target_project_id\",))\n\n\nclass AllowlistGroup(ObjectDeleteMixin, RESTObject):\n _id_attr = \"target_group_id\" # note: only true for create endpoint\n\n def get_id(self) -> int:\n \"\"\"Returns the id of the resource. This override deals with\n the fact that either an `id` or a `target_group_id` attribute\n is returned by the server depending on the endpoint called.\"\"\"\n target_group_id = cast(int, super().get_id())\n if target_group_id is not None:\n return target_group_id\n return cast(int, self.id)\n\n\nclass AllowlistGroupManager(ListMixin, CreateMixin, DeleteMixin, RESTManager):\n _path = \"/projects/{project_id}/job_token_scope/groups_allowlist\"\n _obj_cls = AllowlistGroup\n _from_parent_attrs = {\"project_id\": \"project_id\"}\n _create_attrs = RequiredOptional(required=(\"target_group_id\",))\n", "path": "gitlab/v4/objects/job_token_scope.py"}]} | 769 | 704 |
gh_patches_debug_6178 | rasdani/github-patches | git_diff | Nitrate__Nitrate-440 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Django 2.2.x
Add a testenv in tox.ini for Django 2.2.
Add "Framework :: Django :: 2.2" to ``setup.py``.
Update README to add version `2.2`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from setuptools import setup, find_packages
4
5
6 with open('VERSION.txt', 'r') as f:
7 pkg_version = f.read().strip()
8
9
10 def get_long_description():
11 with open('README.rst', 'r') as f:
12 return f.read()
13
14
15 install_requires = [
16 'beautifulsoup4 >= 4.1.1',
17 'django >= 2.0,<3.0',
18 'django-contrib-comments == 1.8.0',
19 'django-tinymce == 2.7.0',
20 'django-uuslug == 1.1.8',
21 'html2text',
22 'odfpy >= 0.9.6',
23 'python-bugzilla',
24 'xmltodict',
25 'kobo == 0.9.0'
26 ]
27
28 extras_require = {
29 'mysql': ['mysqlclient >= 1.2.3'],
30 'pgsql': ['psycopg2 == 2.7.5'],
31
32 # Required for tcms.auth.backends.KerberosBackend
33 'krbauth': [
34 'kerberos == 1.2.5'
35 ],
36
37 # Packages for building documentation
38 'docs': [
39 'Sphinx >= 1.1.2',
40 'sphinx_rtd_theme',
41 ],
42
43 # Necessary packages for running tests
44 'tests': [
45 'beautifulsoup4',
46 'coverage',
47 'factory_boy',
48 'flake8',
49 'mock',
50 'pytest < 4.2.0',
51 'pytest-cov',
52 'pytest-django',
53 ],
54
55 # Contain tools that assists the development
56 'devtools': [
57 'django-debug-toolbar == 1.7',
58 'tox',
59 'django-extensions',
60 'pygraphviz',
61 'future-breakpoint',
62 ],
63
64 # Required packages required to run async tasks
65 'async': [
66 'celery == 4.2.0',
67 ],
68
69 'multiauth': [
70 'social-auth-app-django == 3.1.0',
71 ]
72 }
73
74 setup(
75 name='Nitrate',
76 version=pkg_version,
77 description='Test Case Management System',
78 long_description=get_long_description(),
79 author='Nitrate Team',
80 maintainer='Chenxiong Qi',
81 maintainer_email='[email protected]',
82 url='https://github.com/Nitrate/Nitrate/',
83 license='GPLv2+',
84 keywords='test case',
85 install_requires=install_requires,
86 extras_require=extras_require,
87 python_requires='>=3.6',
88 package_dir={'': 'src'},
89 packages=find_packages('src', exclude=['test*']),
90 include_package_data=True,
91 zip_safe=False,
92 classifiers=[
93 'Framework :: Django',
94 'Framework :: Django :: 2.0',
95 'Framework :: Django :: 2.1',
96 'Intended Audience :: Developers',
97 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',
98 'Programming Language :: Python :: 3',
99 'Programming Language :: Python :: 3.6',
100 'Programming Language :: Python :: 3.7',
101 'Programming Language :: Python :: 3 :: Only',
102 'Topic :: Software Development :: Quality Assurance',
103 'Topic :: Software Development :: Testing',
104 ],
105 project_urls={
106 'Issue Tracker': 'https://github.com/Nitrate/Nitrate/issues',
107 'Source Code': 'https://github.com/Nitrate/Nitrate',
108 'Documentation': 'https://nitrate.readthedocs.io/',
109 },
110 )
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -93,6 +93,7 @@
'Framework :: Django',
'Framework :: Django :: 2.0',
'Framework :: Django :: 2.1',
+ 'Framework :: Django :: 2.2',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',
'Programming Language :: Python :: 3',
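The new classifier pairs with a tox environment for Django 2.2; a sketch of the corresponding `tox.ini` stanza is shown below as comments, since the project's actual factor names are an assumption:
```
# Hypothetical tox.ini stanza matching the new classifier (factor names
# are assumed, not taken from the repo):
#
#   [tox]
#   envlist = {py36,py37}-django{20,21,22}
#
#   [testenv]
#   deps =
#       django22: Django>=2.2,<3.0
#       .[tests]
#   commands = py.test {posargs}
```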
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -93,6 +93,7 @@\n 'Framework :: Django',\n 'Framework :: Django :: 2.0',\n 'Framework :: Django :: 2.1',\n+ 'Framework :: Django :: 2.2',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n 'Programming Language :: Python :: 3',\n", "issue": "Add Django 2.2.x\nAdd a testenv in tox.ini for Django 2.2.\r\nAdd \"Framework :: Django :: 2.2\" to ``setup.py``.\r\nUpdate README to add version `2.2`.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom setuptools import setup, find_packages\n\n\nwith open('VERSION.txt', 'r') as f:\n pkg_version = f.read().strip()\n\n\ndef get_long_description():\n with open('README.rst', 'r') as f:\n return f.read()\n\n\ninstall_requires = [\n 'beautifulsoup4 >= 4.1.1',\n 'django >= 2.0,<3.0',\n 'django-contrib-comments == 1.8.0',\n 'django-tinymce == 2.7.0',\n 'django-uuslug == 1.1.8',\n 'html2text',\n 'odfpy >= 0.9.6',\n 'python-bugzilla',\n 'xmltodict',\n 'kobo == 0.9.0'\n]\n\nextras_require = {\n 'mysql': ['mysqlclient >= 1.2.3'],\n 'pgsql': ['psycopg2 == 2.7.5'],\n\n # Required for tcms.auth.backends.KerberosBackend\n 'krbauth': [\n 'kerberos == 1.2.5'\n ],\n\n # Packages for building documentation\n 'docs': [\n 'Sphinx >= 1.1.2',\n 'sphinx_rtd_theme',\n ],\n\n # Necessary packages for running tests\n 'tests': [\n 'beautifulsoup4',\n 'coverage',\n 'factory_boy',\n 'flake8',\n 'mock',\n 'pytest < 4.2.0',\n 'pytest-cov',\n 'pytest-django',\n ],\n\n # Contain tools that assists the development\n 'devtools': [\n 'django-debug-toolbar == 1.7',\n 'tox',\n 'django-extensions',\n 'pygraphviz',\n 'future-breakpoint',\n ],\n\n # Required packages required to run async tasks\n 'async': [\n 'celery == 4.2.0',\n ],\n\n 'multiauth': [\n 'social-auth-app-django == 3.1.0',\n ]\n}\n\nsetup(\n name='Nitrate',\n version=pkg_version,\n description='Test Case Management System',\n long_description=get_long_description(),\n author='Nitrate Team',\n maintainer='Chenxiong Qi',\n maintainer_email='[email protected]',\n url='https://github.com/Nitrate/Nitrate/',\n license='GPLv2+',\n keywords='test case',\n install_requires=install_requires,\n extras_require=extras_require,\n python_requires='>=3.6',\n package_dir={'': 'src'},\n packages=find_packages('src', exclude=['test*']),\n include_package_data=True,\n zip_safe=False,\n classifiers=[\n 'Framework :: Django',\n 'Framework :: Django :: 2.0',\n 'Framework :: Django :: 2.1',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Software Development :: Quality Assurance',\n 'Topic :: Software Development :: Testing',\n ],\n project_urls={\n 'Issue Tracker': 'https://github.com/Nitrate/Nitrate/issues',\n 'Source Code': 'https://github.com/Nitrate/Nitrate',\n 'Documentation': 'https://nitrate.readthedocs.io/',\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom setuptools import setup, find_packages\n\n\nwith open('VERSION.txt', 'r') as f:\n pkg_version = f.read().strip()\n\n\ndef get_long_description():\n with open('README.rst', 'r') as f:\n return f.read()\n\n\ninstall_requires = [\n 'beautifulsoup4 >= 4.1.1',\n 'django >= 2.0,<3.0',\n 'django-contrib-comments == 1.8.0',\n 
'django-tinymce == 2.7.0',\n 'django-uuslug == 1.1.8',\n 'html2text',\n 'odfpy >= 0.9.6',\n 'python-bugzilla',\n 'xmltodict',\n 'kobo == 0.9.0'\n]\n\nextras_require = {\n 'mysql': ['mysqlclient >= 1.2.3'],\n 'pgsql': ['psycopg2 == 2.7.5'],\n\n # Required for tcms.auth.backends.KerberosBackend\n 'krbauth': [\n 'kerberos == 1.2.5'\n ],\n\n # Packages for building documentation\n 'docs': [\n 'Sphinx >= 1.1.2',\n 'sphinx_rtd_theme',\n ],\n\n # Necessary packages for running tests\n 'tests': [\n 'beautifulsoup4',\n 'coverage',\n 'factory_boy',\n 'flake8',\n 'mock',\n 'pytest < 4.2.0',\n 'pytest-cov',\n 'pytest-django',\n ],\n\n # Contain tools that assists the development\n 'devtools': [\n 'django-debug-toolbar == 1.7',\n 'tox',\n 'django-extensions',\n 'pygraphviz',\n 'future-breakpoint',\n ],\n\n # Required packages required to run async tasks\n 'async': [\n 'celery == 4.2.0',\n ],\n\n 'multiauth': [\n 'social-auth-app-django == 3.1.0',\n ]\n}\n\nsetup(\n name='Nitrate',\n version=pkg_version,\n description='Test Case Management System',\n long_description=get_long_description(),\n author='Nitrate Team',\n maintainer='Chenxiong Qi',\n maintainer_email='[email protected]',\n url='https://github.com/Nitrate/Nitrate/',\n license='GPLv2+',\n keywords='test case',\n install_requires=install_requires,\n extras_require=extras_require,\n python_requires='>=3.6',\n package_dir={'': 'src'},\n packages=find_packages('src', exclude=['test*']),\n include_package_data=True,\n zip_safe=False,\n classifiers=[\n 'Framework :: Django',\n 'Framework :: Django :: 2.0',\n 'Framework :: Django :: 2.1',\n 'Framework :: Django :: 2.2',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Software Development :: Quality Assurance',\n 'Topic :: Software Development :: Testing',\n ],\n project_urls={\n 'Issue Tracker': 'https://github.com/Nitrate/Nitrate/issues',\n 'Source Code': 'https://github.com/Nitrate/Nitrate',\n 'Documentation': 'https://nitrate.readthedocs.io/',\n },\n)\n", "path": "setup.py"}]} | 1,319 | 111 |
gh_patches_debug_14972 | rasdani/github-patches | git_diff | kserve__kserve-1349 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Test] Fix lightgbm test in new test-infra
While helping kfserving migrate to the new test-infra, I noticed that the lightgbm test failed every time; to mitigate the issue, I disabled it in pytest.
I'm working on the lightgbm test cases to see if I can reproduce the failure locally.
I need to spend some time digging into it.
Details can be found at https://github.com/kubeflow/kfserving/pull/1329#issuecomment-770325947
/cc @yuzisun
/assign
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/lgbserver/setup.py`
Content:
```
1 # Copyright 2019 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from setuptools import setup, find_packages
16
17 tests_require = [
18 'pytest',
19 'pytest-asyncio',
20 'pytest-tornasync',
21 'mypy'
22 ]
23
24 setup(
25 name='lgbserver',
26 version='0.1.0',
27 author_email='[email protected]',
28 license='../../LICENSE.txt',
29 url='https://github.com/kubeflow/kfserving/python/kfserving/lgbserver',
30 description='Model Server implementation for LightGBM. \
31 Not intended for use outside KFServing Frameworks Images',
32 long_description=open('README.md').read(),
33 python_requires='>3.4',
34 packages=find_packages("lgbserver"),
35 install_requires=[
36 "kfserving>=0.4.0",
37 "lightgbm == 2.3.1",
38 "pandas == 0.25.3",
39 "argparse >= 1.4.0"
40 ],
41 tests_require=tests_require,
42 extras_require={'test': tests_require}
43 )
44
```
Path: `python/aixexplainer/setup.py`
Content:
```
1 # Copyright 2019 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from setuptools import setup, find_packages
16
17 tests_require = [
18 'pytest',
19 'pytest-tornasync',
20 'mypy'
21 ]
22 setup(
23 name='aixserver',
24 version='0.2.1',
25 author_email='[email protected]',
26 license='https://github.com/kubeflow/kfserving/LICENSE',
27 url='https://github.com/kubeflow/kfserving/python/aixserver',
28 description='Model Server implementation for AI eXplainability with LIME. \
29 Not intended for use outside KFServing Frameworks Images',
30 long_description=open('README.md').read(),
31 python_requires='>3.4',
32 packages=find_packages("aixserver"),
33 install_requires=[
34 "kfserving>=0.5.0",
35 "argparse >= 1.4.0",
36 "aix360 >= 0.1.0",
37 "lime >= 0.1.1.37",
38 "nest_asyncio>=1.4.0"
39 ],
40 tests_require=tests_require,
41 extras_require={'test': tests_require}
42 )
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/aixexplainer/setup.py b/python/aixexplainer/setup.py
--- a/python/aixexplainer/setup.py
+++ b/python/aixexplainer/setup.py
@@ -35,7 +35,8 @@
"argparse >= 1.4.0",
"aix360 >= 0.1.0",
"lime >= 0.1.1.37",
- "nest_asyncio>=1.4.0"
+ "nest_asyncio>=1.4.0",
+ "cvxpy == 1.1.7"
],
tests_require=tests_require,
extras_require={'test': tests_require}
diff --git a/python/lgbserver/setup.py b/python/lgbserver/setup.py
--- a/python/lgbserver/setup.py
+++ b/python/lgbserver/setup.py
@@ -36,7 +36,8 @@
"kfserving>=0.4.0",
"lightgbm == 2.3.1",
"pandas == 0.25.3",
- "argparse >= 1.4.0"
+ "argparse >= 1.4.0",
+ "numpy == 1.19.5",
],
tests_require=tests_require,
extras_require={'test': tests_require}
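A quick local sanity check for the pinned stack; the snippet below is an illustrative sketch (the iris data and training params are assumptions, not lgbserver's actual test fixtures):
```
# Sanity-check sketch for the pinned lightgbm/numpy combination.
import numpy as np
import lightgbm as lgb
from sklearn.datasets import load_iris

assert np.__version__.startswith("1.19"), "expected the pinned numpy==1.19.5"

X, y = load_iris(return_X_y=True)
booster = lgb.train(
    {"objective": "multiclass", "num_class": 3, "verbose": -1},
    lgb.Dataset(X, label=y),
    num_boost_round=5,
)
print(booster.predict(X[:2]).shape)  # (2, 3): per-class probabilities
```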
| {"golden_diff": "diff --git a/python/aixexplainer/setup.py b/python/aixexplainer/setup.py\n--- a/python/aixexplainer/setup.py\n+++ b/python/aixexplainer/setup.py\n@@ -35,7 +35,8 @@\n \"argparse >= 1.4.0\",\n \"aix360 >= 0.1.0\",\n \"lime >= 0.1.1.37\",\n- \"nest_asyncio>=1.4.0\"\n+ \"nest_asyncio>=1.4.0\",\n+ \"cvxpy == 1.1.7\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\ndiff --git a/python/lgbserver/setup.py b/python/lgbserver/setup.py\n--- a/python/lgbserver/setup.py\n+++ b/python/lgbserver/setup.py\n@@ -36,7 +36,8 @@\n \"kfserving>=0.4.0\",\n \"lightgbm == 2.3.1\",\n \"pandas == 0.25.3\",\n- \"argparse >= 1.4.0\"\n+ \"argparse >= 1.4.0\",\n+ \"numpy == 1.19.5\",\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n", "issue": "[Test] Fix lightgbm test in new test-infra\nDuring the process I helped kfserving migrate to new test-infra, noticed that lightgbm failed every time, and to mitigate the issue, I disabled it in the pytest.\r\n\r\nI'm working on the lightgbm test cases, and see if I can reproduce locally. \r\n\r\nNeed to spend time and dig into it \r\n\r\nDetailed can be found https://github.com/kubeflow/kfserving/pull/1329#issuecomment-770325947\r\n\r\n/cc @yuzisun \r\n/assign \n", "before_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-asyncio',\n 'pytest-tornasync',\n 'mypy'\n]\n\nsetup(\n name='lgbserver',\n version='0.1.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kubeflow/kfserving/python/kfserving/lgbserver',\n description='Model Server implementation for LightGBM. 
\\\n Not intended for use outside KFServing Frameworks Images',\n long_description=open('README.md').read(),\n python_requires='>3.4',\n packages=find_packages(\"lgbserver\"),\n install_requires=[\n \"kfserving>=0.4.0\",\n \"lightgbm == 2.3.1\",\n \"pandas == 0.25.3\",\n \"argparse >= 1.4.0\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "python/lgbserver/setup.py"}, {"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\nsetup(\n name='aixserver',\n version='0.2.1',\n author_email='[email protected]',\n license='https://github.com/kubeflow/kfserving/LICENSE',\n url='https://github.com/kubeflow/kfserving/python/aixserver',\n description='Model Server implementation for AI eXplainability with LIME. \\\n Not intended for use outside KFServing Frameworks Images',\n long_description=open('README.md').read(),\n python_requires='>3.4',\n packages=find_packages(\"aixserver\"),\n install_requires=[\n \"kfserving>=0.5.0\",\n \"argparse >= 1.4.0\",\n \"aix360 >= 0.1.0\",\n \"lime >= 0.1.1.37\",\n \"nest_asyncio>=1.4.0\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "python/aixexplainer/setup.py"}], "after_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-asyncio',\n 'pytest-tornasync',\n 'mypy'\n]\n\nsetup(\n name='lgbserver',\n version='0.1.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kubeflow/kfserving/python/kfserving/lgbserver',\n description='Model Server implementation for LightGBM. 
\\\n Not intended for use outside KFServing Frameworks Images',\n long_description=open('README.md').read(),\n python_requires='>3.4',\n packages=find_packages(\"lgbserver\"),\n install_requires=[\n \"kfserving>=0.4.0\",\n \"lightgbm == 2.3.1\",\n \"pandas == 0.25.3\",\n \"argparse >= 1.4.0\",\n \"numpy == 1.19.5\",\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "python/lgbserver/setup.py"}, {"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\nsetup(\n name='aixserver',\n version='0.2.1',\n author_email='[email protected]',\n license='https://github.com/kubeflow/kfserving/LICENSE',\n url='https://github.com/kubeflow/kfserving/python/aixserver',\n description='Model Server implementation for AI eXplainability with LIME. \\\n Not intended for use outside KFServing Frameworks Images',\n long_description=open('README.md').read(),\n python_requires='>3.4',\n packages=find_packages(\"aixserver\"),\n install_requires=[\n \"kfserving>=0.5.0\",\n \"argparse >= 1.4.0\",\n \"aix360 >= 0.1.0\",\n \"lime >= 0.1.1.37\",\n \"nest_asyncio>=1.4.0\",\n \"cvxpy == 1.1.7\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "python/aixexplainer/setup.py"}]} | 1,291 | 294 |
gh_patches_debug_13923 | rasdani/github-patches | git_diff | litestar-org__litestar-1791 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Enhancement: Skip the Sonar & Snyk jobs on forks
### Summary
When forking the repo, tests are brought with it but Snyk & Sonar CI fails because the secrets are not set.
### Basic Example
Add ` && github.repository_owner == 'starlite-api'` to the if-check in `ci.yaml` to prevent these two CI items from running on non-upstream repos.
### Drawbacks and Impact
_No response_
### Unresolved questions
_No response_
--- END ISSUE ---
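As a sketch of the guard described above (the real change is a one-line `if:` condition in `ci.yaml`; this Python rendering of the predicate is illustrative only, and the owner string is taken from the issue text rather than verified against the workflow file):

```python
# Illustration of the proposed guard -- the actual fix lives in ci.yaml.
def should_run_secret_dependent_jobs(existing_condition: bool,
                                     repository_owner: str) -> bool:
    # Jobs needing repository secrets (Sonar, Snyk) should only run on the
    # upstream repo; forks do not have those secrets configured.
    return existing_condition and repository_owner == "starlite-api"


assert should_run_secret_dependent_jobs(True, "starlite-api")
assert not should_run_secret_dependent_jobs(True, "some-fork-owner")
```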
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/types/composite_types.py`
Content:
```
1 from functools import partial
2 from os import PathLike
3 from pathlib import Path
4 from typing import (
5 TYPE_CHECKING,
6 Any,
7 AsyncIterable,
8 AsyncIterator,
9 Callable,
10 Dict,
11 Iterable,
12 Iterator,
13 Literal,
14 Mapping,
15 Sequence,
16 Set,
17 Tuple,
18 Type,
19 TypeVar,
20 Union,
21 )
22
23 from litestar.enums import ScopeType
24
25 from .asgi_types import ASGIApp
26 from .callable_types import AnyCallable, ExceptionHandler
27
28 if TYPE_CHECKING:
29 from litestar.datastructures.cookie import Cookie
30 from litestar.datastructures.response_header import ResponseHeader
31 from litestar.di import Provide
32 from litestar.middleware.base import DefineMiddleware, MiddlewareProtocol
33 from litestar.params import ParameterKwarg
34 else:
35 BaseHTTPMiddleware = Any
36 Cookie = Any
37 DefineMiddleware = Any
38 ImmutableState = Any
39 MiddlewareProtocol = Any
40 ParameterKwarg = Any
41 Provide = Any
42 ResponseHeader = Any
43
44 T = TypeVar("T")
45
46
47 Dependencies = Mapping[str, Union[Provide, AnyCallable]]
48 ExceptionHandlersMap = Mapping[Union[int, Type[Exception]], ExceptionHandler]
49 MaybePartial = Union[T, partial]
50 Middleware = Union[
51 Callable[..., ASGIApp], DefineMiddleware, Iterator[Tuple[ASGIApp, Dict[str, Any]]], Type[MiddlewareProtocol]
52 ]
53 ParametersMap = Mapping[str, ParameterKwarg]
54 PathType = Union[Path, PathLike, str]
55 ResponseCookies = Union[Sequence[Cookie], Mapping[str, str]]
56 ResponseHeaders = Union[Sequence[ResponseHeader], Mapping[str, str]]
57 Scopes = Set[Literal[ScopeType.HTTP, ScopeType.WEBSOCKET]]
58 StreamType = Union[Iterable[T], Iterator[T], AsyncIterable[T], AsyncIterator[T]]
59 TypeEncodersMap = Mapping[Any, Callable[[Any], Any]]
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/litestar/types/composite_types.py b/litestar/types/composite_types.py
--- a/litestar/types/composite_types.py
+++ b/litestar/types/composite_types.py
@@ -12,6 +12,7 @@
Iterator,
Literal,
Mapping,
+ MutableMapping,
Sequence,
Set,
Tuple,
@@ -45,7 +46,7 @@
Dependencies = Mapping[str, Union[Provide, AnyCallable]]
-ExceptionHandlersMap = Mapping[Union[int, Type[Exception]], ExceptionHandler]
+ExceptionHandlersMap = MutableMapping[Union[int, Type[Exception]], ExceptionHandler]
MaybePartial = Union[T, partial]
Middleware = Union[
Callable[..., ASGIApp], DefineMiddleware, Iterator[Tuple[ASGIApp, Dict[str, Any]]], Type[MiddlewareProtocol]
| {"golden_diff": "diff --git a/litestar/types/composite_types.py b/litestar/types/composite_types.py\n--- a/litestar/types/composite_types.py\n+++ b/litestar/types/composite_types.py\n@@ -12,6 +12,7 @@\n Iterator,\n Literal,\n Mapping,\n+ MutableMapping,\n Sequence,\n Set,\n Tuple,\n@@ -45,7 +46,7 @@\n \n \n Dependencies = Mapping[str, Union[Provide, AnyCallable]]\n-ExceptionHandlersMap = Mapping[Union[int, Type[Exception]], ExceptionHandler]\n+ExceptionHandlersMap = MutableMapping[Union[int, Type[Exception]], ExceptionHandler]\n MaybePartial = Union[T, partial]\n Middleware = Union[\n Callable[..., ASGIApp], DefineMiddleware, Iterator[Tuple[ASGIApp, Dict[str, Any]]], Type[MiddlewareProtocol]\n", "issue": "Enhancement: Only run Sonar & Snyk on forks\n### Summary\n\nWhen forking the repo, tests are brought with it but Snyk & Sonar CI fails because the secrets are not set.\n\n### Basic Example\n\nAdd ` && github.repository_owner == 'starlite-api'` to the if-check in `ci.yaml` to prevent these two CI items from running on non-upstream repos.\n\n### Drawbacks and Impact\n\n_No response_\n\n### Unresolved questions\n\n_No response_\n", "before_files": [{"content": "from functools import partial\nfrom os import PathLike\nfrom pathlib import Path\nfrom typing import (\n TYPE_CHECKING,\n Any,\n AsyncIterable,\n AsyncIterator,\n Callable,\n Dict,\n Iterable,\n Iterator,\n Literal,\n Mapping,\n Sequence,\n Set,\n Tuple,\n Type,\n TypeVar,\n Union,\n)\n\nfrom litestar.enums import ScopeType\n\nfrom .asgi_types import ASGIApp\nfrom .callable_types import AnyCallable, ExceptionHandler\n\nif TYPE_CHECKING:\n from litestar.datastructures.cookie import Cookie\n from litestar.datastructures.response_header import ResponseHeader\n from litestar.di import Provide\n from litestar.middleware.base import DefineMiddleware, MiddlewareProtocol\n from litestar.params import ParameterKwarg\nelse:\n BaseHTTPMiddleware = Any\n Cookie = Any\n DefineMiddleware = Any\n ImmutableState = Any\n MiddlewareProtocol = Any\n ParameterKwarg = Any\n Provide = Any\n ResponseHeader = Any\n\nT = TypeVar(\"T\")\n\n\nDependencies = Mapping[str, Union[Provide, AnyCallable]]\nExceptionHandlersMap = Mapping[Union[int, Type[Exception]], ExceptionHandler]\nMaybePartial = Union[T, partial]\nMiddleware = Union[\n Callable[..., ASGIApp], DefineMiddleware, Iterator[Tuple[ASGIApp, Dict[str, Any]]], Type[MiddlewareProtocol]\n]\nParametersMap = Mapping[str, ParameterKwarg]\nPathType = Union[Path, PathLike, str]\nResponseCookies = Union[Sequence[Cookie], Mapping[str, str]]\nResponseHeaders = Union[Sequence[ResponseHeader], Mapping[str, str]]\nScopes = Set[Literal[ScopeType.HTTP, ScopeType.WEBSOCKET]]\nStreamType = Union[Iterable[T], Iterator[T], AsyncIterable[T], AsyncIterator[T]]\nTypeEncodersMap = Mapping[Any, Callable[[Any], Any]]\n", "path": "litestar/types/composite_types.py"}], "after_files": [{"content": "from functools import partial\nfrom os import PathLike\nfrom pathlib import Path\nfrom typing import (\n TYPE_CHECKING,\n Any,\n AsyncIterable,\n AsyncIterator,\n Callable,\n Dict,\n Iterable,\n Iterator,\n Literal,\n Mapping,\n MutableMapping,\n Sequence,\n Set,\n Tuple,\n Type,\n TypeVar,\n Union,\n)\n\nfrom litestar.enums import ScopeType\n\nfrom .asgi_types import ASGIApp\nfrom .callable_types import AnyCallable, ExceptionHandler\n\nif TYPE_CHECKING:\n from litestar.datastructures.cookie import Cookie\n from litestar.datastructures.response_header import ResponseHeader\n from litestar.di import Provide\n from litestar.middleware.base 
import DefineMiddleware, MiddlewareProtocol\n from litestar.params import ParameterKwarg\nelse:\n BaseHTTPMiddleware = Any\n Cookie = Any\n DefineMiddleware = Any\n ImmutableState = Any\n MiddlewareProtocol = Any\n ParameterKwarg = Any\n Provide = Any\n ResponseHeader = Any\n\nT = TypeVar(\"T\")\n\n\nDependencies = Mapping[str, Union[Provide, AnyCallable]]\nExceptionHandlersMap = MutableMapping[Union[int, Type[Exception]], ExceptionHandler]\nMaybePartial = Union[T, partial]\nMiddleware = Union[\n Callable[..., ASGIApp], DefineMiddleware, Iterator[Tuple[ASGIApp, Dict[str, Any]]], Type[MiddlewareProtocol]\n]\nParametersMap = Mapping[str, ParameterKwarg]\nPathType = Union[Path, PathLike, str]\nResponseCookies = Union[Sequence[Cookie], Mapping[str, str]]\nResponseHeaders = Union[Sequence[ResponseHeader], Mapping[str, str]]\nScopes = Set[Literal[ScopeType.HTTP, ScopeType.WEBSOCKET]]\nStreamType = Union[Iterable[T], Iterator[T], AsyncIterable[T], AsyncIterator[T]]\nTypeEncodersMap = Mapping[Any, Callable[[Any], Any]]\n", "path": "litestar/types/composite_types.py"}]} | 872 | 180 |
gh_patches_debug_7610 | rasdani/github-patches | git_diff | pytorch__ignite-1675 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Loss automatically detaching inputs breaks some workflows
## 🐛 Bug description
https://github.com/pytorch/ignite/blob/master/ignite/metrics/loss.py#L60
Before 0.4.3, `y_pred` and `y` were passed through without calling detach. Now, because of the detach call, a number of use cases are no longer possible.
An example is: https://docs.gpytorch.ai/en/v1.3.1/marginal_log_likelihoods.html#variationalelbo
The output of `model(train_x)` is an object which does not support detach. This leads to a crash when passing the `mll` function to Loss as a `loss_fn`.
Another example is https://github.com/y0ast/deterministic-uncertainty-quantification/blob/master/train_duq_cifar.py#L153
The loss depends on the gradient, which means that once detach is called the loss can no longer be computed.
I have been trying to work around it, but I can't find a clean way. https://pytorch.org/ignite/metrics.html#ignite.metrics.Average is not aware of the batch size, so it cannot correctly compute a weighted average of the loss.
--- END ISSUE ---
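A minimal sketch of the failure mode described in the issue -- a gradient-penalty style loss (in the spirit of the linked DUQ training script) that needs the autograd graph which `y_pred.detach()` destroys; the model and shapes here are illustrative:

```python
import torch

model = torch.nn.Linear(4, 2)
x = torch.randn(8, 4, requires_grad=True)
y_pred = model(x)


def grad_penalty_loss(y_pred: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Differentiating the prediction w.r.t. the input requires y_pred to be
    # attached to the autograd graph; a detached y_pred makes this call raise.
    grads = torch.autograd.grad(y_pred.sum(), x, create_graph=True)[0]
    return grads.norm(2, dim=1).mean()


loss = grad_penalty_loss(y_pred, torch.randn(8, 2))     # works
# grad_penalty_loss(y_pred.detach(), torch.randn(8, 2))  # RuntimeError: no graph
```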
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/metrics/loss.py`
Content:
```
1 from typing import Callable, Dict, Sequence, Tuple, Union, cast
2
3 import torch
4
5 from ignite.exceptions import NotComputableError
6 from ignite.metrics.metric import Metric, reinit__is_reduced, sync_all_reduce
7
8 __all__ = ["Loss"]
9
10
11 class Loss(Metric):
12 """
13 Calculates the average loss according to the passed loss_fn.
14
15 Args:
16 loss_fn (callable): a callable taking a prediction tensor, a target
17 tensor, optionally other arguments, and returns the average loss
18 over all observations in the batch.
19 output_transform (callable): a callable that is used to transform the
20 :class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the
21 form expected by the metric.
22 This can be useful if, for example, you have a multi-output model and
23 you want to compute the metric with respect to one of the outputs.
24 The output is expected to be a tuple `(prediction, target)` or
25 (prediction, target, kwargs) where kwargs is a dictionary of extra
26 keywords arguments. If extra keywords arguments are provided they are passed to `loss_fn`.
27 batch_size (callable): a callable taking a target tensor that returns the
28 first dimension size (usually the batch size).
29 device (str or torch.device): specifies which device updates are accumulated on. Setting the
30 metric's device to be the same as your ``update`` arguments ensures the ``update`` method is
31 non-blocking. By default, CPU.
32
33 """
34
35 required_output_keys = None
36
37 def __init__(
38 self,
39 loss_fn: Callable,
40 output_transform: Callable = lambda x: x,
41 batch_size: Callable = lambda x: len(x),
42 device: Union[str, torch.device] = torch.device("cpu"),
43 ):
44 super(Loss, self).__init__(output_transform, device=device)
45 self._loss_fn = loss_fn
46 self._batch_size = batch_size
47
48 @reinit__is_reduced
49 def reset(self) -> None:
50 self._sum = torch.tensor(0.0, device=self._device)
51 self._num_examples = 0
52
53 @reinit__is_reduced
54 def update(self, output: Sequence[Union[torch.Tensor, Dict]]) -> None:
55 if len(output) == 2:
56 y_pred, y = cast(Tuple[torch.Tensor, torch.Tensor], output)
57 kwargs = {} # type: Dict
58 else:
59 y_pred, y, kwargs = cast(Tuple[torch.Tensor, torch.Tensor, Dict], output)
60 average_loss = self._loss_fn(y_pred.detach(), y.detach(), **kwargs)
61
62 if len(average_loss.shape) != 0:
63 raise ValueError("loss_fn did not return the average loss.")
64
65 n = self._batch_size(y)
66 self._sum += average_loss.to(self._device) * n
67 self._num_examples += n
68
69 @sync_all_reduce("_sum", "_num_examples")
70 def compute(self) -> float:
71 if self._num_examples == 0:
72 raise NotComputableError("Loss must have at least one example before it can be computed.")
73 return self._sum.item() / self._num_examples
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ignite/metrics/loss.py b/ignite/metrics/loss.py
--- a/ignite/metrics/loss.py
+++ b/ignite/metrics/loss.py
@@ -57,7 +57,7 @@
kwargs = {} # type: Dict
else:
y_pred, y, kwargs = cast(Tuple[torch.Tensor, torch.Tensor, Dict], output)
- average_loss = self._loss_fn(y_pred.detach(), y.detach(), **kwargs)
+ average_loss = self._loss_fn(y_pred, y, **kwargs).detach()
if len(average_loss.shape) != 0:
raise ValueError("loss_fn did not return the average loss.")
| {"golden_diff": "diff --git a/ignite/metrics/loss.py b/ignite/metrics/loss.py\n--- a/ignite/metrics/loss.py\n+++ b/ignite/metrics/loss.py\n@@ -57,7 +57,7 @@\n kwargs = {} # type: Dict\n else:\n y_pred, y, kwargs = cast(Tuple[torch.Tensor, torch.Tensor, Dict], output)\n- average_loss = self._loss_fn(y_pred.detach(), y.detach(), **kwargs)\n+ average_loss = self._loss_fn(y_pred, y, **kwargs).detach()\n \n if len(average_loss.shape) != 0:\n raise ValueError(\"loss_fn did not return the average loss.\")\n", "issue": "Loss automatically detaching inputs breaks some workflows\n## \ud83d\udc1b Bug description\r\n\r\nhttps://github.com/pytorch/ignite/blob/master/ignite/metrics/loss.py#L60\r\n\r\nBefore 0.4.3, the `y_pred` and `y` were passed without calling detach. Now because of detach a number of usecases are no longer possible.\r\n\r\nAn example is: https://docs.gpytorch.ai/en/v1.3.1/marginal_log_likelihoods.html#variationalelbo\r\n\r\nThe output of `model(train_x)` is an object which does not support detach. This leads to a crash when passing the `mll` function to Loss as a `loss_fn`.\r\n\r\nAnother example is https://github.com/y0ast/deterministic-uncertainty-quantification/blob/master/train_duq_cifar.py#L153\r\n\r\nThe loss is dependent on the gradient, which means that by calling detach the loss cannot be computed anymore.\r\n\r\nI have been trying to work around it, but I can't figure out a nice way. https://pytorch.org/ignite/metrics.html#ignite.metrics.Average is not input size aware so it cannot correctly compute the average of the loss.\n", "before_files": [{"content": "from typing import Callable, Dict, Sequence, Tuple, Union, cast\n\nimport torch\n\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import Metric, reinit__is_reduced, sync_all_reduce\n\n__all__ = [\"Loss\"]\n\n\nclass Loss(Metric):\n \"\"\"\n Calculates the average loss according to the passed loss_fn.\n\n Args:\n loss_fn (callable): a callable taking a prediction tensor, a target\n tensor, optionally other arguments, and returns the average loss\n over all observations in the batch.\n output_transform (callable): a callable that is used to transform the\n :class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the\n form expected by the metric.\n This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n The output is expected to be a tuple `(prediction, target)` or\n (prediction, target, kwargs) where kwargs is a dictionary of extra\n keywords arguments. If extra keywords arguments are provided they are passed to `loss_fn`.\n batch_size (callable): a callable taking a target tensor that returns the\n first dimension size (usually the batch size).\n device (str or torch.device): specifies which device updates are accumulated on. Setting the\n metric's device to be the same as your ``update`` arguments ensures the ``update`` method is\n non-blocking. 
By default, CPU.\n\n \"\"\"\n\n required_output_keys = None\n\n def __init__(\n self,\n loss_fn: Callable,\n output_transform: Callable = lambda x: x,\n batch_size: Callable = lambda x: len(x),\n device: Union[str, torch.device] = torch.device(\"cpu\"),\n ):\n super(Loss, self).__init__(output_transform, device=device)\n self._loss_fn = loss_fn\n self._batch_size = batch_size\n\n @reinit__is_reduced\n def reset(self) -> None:\n self._sum = torch.tensor(0.0, device=self._device)\n self._num_examples = 0\n\n @reinit__is_reduced\n def update(self, output: Sequence[Union[torch.Tensor, Dict]]) -> None:\n if len(output) == 2:\n y_pred, y = cast(Tuple[torch.Tensor, torch.Tensor], output)\n kwargs = {} # type: Dict\n else:\n y_pred, y, kwargs = cast(Tuple[torch.Tensor, torch.Tensor, Dict], output)\n average_loss = self._loss_fn(y_pred.detach(), y.detach(), **kwargs)\n\n if len(average_loss.shape) != 0:\n raise ValueError(\"loss_fn did not return the average loss.\")\n\n n = self._batch_size(y)\n self._sum += average_loss.to(self._device) * n\n self._num_examples += n\n\n @sync_all_reduce(\"_sum\", \"_num_examples\")\n def compute(self) -> float:\n if self._num_examples == 0:\n raise NotComputableError(\"Loss must have at least one example before it can be computed.\")\n return self._sum.item() / self._num_examples\n", "path": "ignite/metrics/loss.py"}], "after_files": [{"content": "from typing import Callable, Dict, Sequence, Tuple, Union, cast\n\nimport torch\n\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import Metric, reinit__is_reduced, sync_all_reduce\n\n__all__ = [\"Loss\"]\n\n\nclass Loss(Metric):\n \"\"\"\n Calculates the average loss according to the passed loss_fn.\n\n Args:\n loss_fn (callable): a callable taking a prediction tensor, a target\n tensor, optionally other arguments, and returns the average loss\n over all observations in the batch.\n output_transform (callable): a callable that is used to transform the\n :class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the\n form expected by the metric.\n This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n The output is expected to be a tuple `(prediction, target)` or\n (prediction, target, kwargs) where kwargs is a dictionary of extra\n keywords arguments. If extra keywords arguments are provided they are passed to `loss_fn`.\n batch_size (callable): a callable taking a target tensor that returns the\n first dimension size (usually the batch size).\n device (str or torch.device): specifies which device updates are accumulated on. Setting the\n metric's device to be the same as your ``update`` arguments ensures the ``update`` method is\n non-blocking. 
By default, CPU.\n\n \"\"\"\n\n required_output_keys = None\n\n def __init__(\n self,\n loss_fn: Callable,\n output_transform: Callable = lambda x: x,\n batch_size: Callable = lambda x: len(x),\n device: Union[str, torch.device] = torch.device(\"cpu\"),\n ):\n super(Loss, self).__init__(output_transform, device=device)\n self._loss_fn = loss_fn\n self._batch_size = batch_size\n\n @reinit__is_reduced\n def reset(self) -> None:\n self._sum = torch.tensor(0.0, device=self._device)\n self._num_examples = 0\n\n @reinit__is_reduced\n def update(self, output: Sequence[Union[torch.Tensor, Dict]]) -> None:\n if len(output) == 2:\n y_pred, y = cast(Tuple[torch.Tensor, torch.Tensor], output)\n kwargs = {} # type: Dict\n else:\n y_pred, y, kwargs = cast(Tuple[torch.Tensor, torch.Tensor, Dict], output)\n average_loss = self._loss_fn(y_pred, y, **kwargs).detach()\n\n if len(average_loss.shape) != 0:\n raise ValueError(\"loss_fn did not return the average loss.\")\n\n n = self._batch_size(y)\n self._sum += average_loss.to(self._device) * n\n self._num_examples += n\n\n @sync_all_reduce(\"_sum\", \"_num_examples\")\n def compute(self) -> float:\n if self._num_examples == 0:\n raise NotComputableError(\"Loss must have at least one example before it can be computed.\")\n return self._sum.item() / self._num_examples\n", "path": "ignite/metrics/loss.py"}]} | 1,343 | 151 |
gh_patches_debug_1493 | rasdani/github-patches | git_diff | plotly__dash-601 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Extract meta failure on missing props docstring.
If a prop is missing a docstring, component generation fails with the JS error `Cannot read property 'length' of undefined`.
https://community.plot.ly/t/dash-component-creation-javascript-ok-nothing-rendered-in-python/19369
--- END ISSUE ---
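The crash itself happens on the JavaScript side of `extract-meta.js` when a prop's `description` is `undefined`. Since the upstream fix is in the repo, the snippet below is only a hypothetical Python-side guard for code consuming the extracted metadata; the key names mirror react-docgen output but are assumptions here:

```python
def prop_docstring_lines(prop_meta: dict) -> list:
    # Tolerate props without docstrings instead of assuming 'description'
    # always exists (the undefined '.length' in the reported JS error).
    description = prop_meta.get("description") or ""
    return description.splitlines()


assert prop_docstring_lines({"type": {"name": "string"}}) == []
assert prop_docstring_lines({"description": "One line."}) == ["One line."]
```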
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dash/development/component_generator.py`
Content:
```
1 from __future__ import print_function
2 from collections import OrderedDict
3
4 import json
5 import sys
6 import subprocess
7 import shlex
8 import os
9 import argparse
10 import shutil
11 import functools
12
13 import pkg_resources
14
15 from ._r_components_generation import write_class_file
16 from ._r_components_generation import generate_exports
17 from ._py_components_generation import generate_class_file
18 from ._py_components_generation import generate_imports
19 from ._py_components_generation import generate_classes_files
20
21
22 class _CombinedFormatter(argparse.ArgumentDefaultsHelpFormatter,
23 argparse.RawDescriptionHelpFormatter):
24 pass
25
26
27 # pylint: disable=too-many-locals
28 def generate_components(components_source, project_shortname,
29 package_info_filename='package.json',
30 ignore='^_',
31 rprefix=None):
32
33 project_shortname = project_shortname.replace('-', '_').rstrip('/\\')
34
35 if rprefix:
36 prefix = rprefix
37
38 is_windows = sys.platform == 'win32'
39
40 extract_path = pkg_resources.resource_filename('dash', 'extract-meta.js')
41
42 os.environ['NODE_PATH'] = 'node_modules'
43 cmd = shlex.split(
44 'node {} {} {}'.format(extract_path, ignore, components_source),
45 posix=not is_windows
46 )
47
48 shutil.copyfile('package.json',
49 os.path.join(project_shortname, package_info_filename))
50
51 proc = subprocess.Popen(cmd,
52 stdout=subprocess.PIPE,
53 stderr=subprocess.PIPE,
54 shell=is_windows)
55 out, err = proc.communicate()
56 status = proc.poll()
57
58 if err:
59 print(err.decode(), file=sys.stderr)
60
61 if not out:
62 print(
63 'Error generating metadata in {} (status={})'.format(
64 project_shortname, status),
65 file=sys.stderr)
66 sys.exit(1)
67
68 jsondata_unicode = json.loads(out.decode(), object_pairs_hook=OrderedDict)
69
70 if sys.version_info[0] >= 3:
71 metadata = jsondata_unicode
72 else:
73 metadata = byteify(jsondata_unicode)
74
75 generator_methods = [generate_class_file]
76
77 if rprefix:
78 if not os.path.exists('man'):
79 os.makedirs('man')
80 if not os.path.exists('R'):
81 os.makedirs('R')
82 generator_methods.append(
83 functools.partial(write_class_file, prefix=prefix))
84
85 components = generate_classes_files(
86 project_shortname,
87 metadata,
88 *generator_methods
89 )
90
91 with open(os.path.join(project_shortname, 'metadata.json'), 'w') as f:
92 json.dump(metadata, f)
93
94 generate_imports(project_shortname, components)
95
96 if rprefix:
97 with open('package.json', 'r') as f:
98 jsondata_unicode = json.load(f, object_pairs_hook=OrderedDict)
99 if sys.version_info[0] >= 3:
100 pkg_data = jsondata_unicode
101 else:
102 pkg_data = byteify(jsondata_unicode)
103
104 generate_exports(
105 project_shortname, components, metadata, pkg_data, prefix
106 )
107
108
109 def cli():
110 parser = argparse.ArgumentParser(
111 prog='dash-generate-components',
112 formatter_class=_CombinedFormatter,
113 description='Generate dash components by extracting the metadata '
114 'using react-docgen. Then map the metadata to python classes.'
115 )
116 parser.add_argument('components_source',
117 help='React components source directory.')
118 parser.add_argument(
119 'project_shortname',
120 help='Name of the project to export the classes files.'
121 )
122 parser.add_argument(
123 '-p', '--package-info-filename',
124 default='package.json',
125 help='The filename of the copied `package.json` to `project_shortname`'
126 )
127 parser.add_argument(
128 '-i', '--ignore',
129 default='^_',
130 help='Files/directories matching the pattern will be ignored'
131 )
132 parser.add_argument(
133 '--r-prefix',
134 help='Experimental: specify a prefix for DashR component names, write'
135 'DashR components to R dir, create R package.'
136 )
137
138 args = parser.parse_args()
139 generate_components(
140 args.components_source, args.project_shortname,
141 package_info_filename=args.package_info_filename,
142 ignore=args.ignore,
143 rprefix=args.r_prefix)
144
145
146 # pylint: disable=undefined-variable
147 def byteify(input_object):
148 if isinstance(input_object, dict):
149 return OrderedDict([
150 (byteify(key), byteify(value))
151 for key, value in input_object.iteritems()
152 ])
153 elif isinstance(input_object, list):
154 return [byteify(element) for element in input_object]
155 elif isinstance(input_object, unicode): # noqa:F821
156 return input_object.encode('utf-8')
157 return input_object
158
159
160 if __name__ == '__main__':
161 cli()
162
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dash/development/component_generator.py b/dash/development/component_generator.py
--- a/dash/development/component_generator.py
+++ b/dash/development/component_generator.py
@@ -89,7 +89,7 @@
)
with open(os.path.join(project_shortname, 'metadata.json'), 'w') as f:
- json.dump(metadata, f)
+ json.dump(metadata, f, indent=2)
generate_imports(project_shortname, components)
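For reference, the only behavioral difference introduced by the diff above is pretty-printed output; a self-contained sketch with made-up metadata:

```python
import json

metadata = {"props": {"id": {"description": "The component ID."}}}
with open("metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)  # human-readable; content otherwise identical
```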
| {"golden_diff": "diff --git a/dash/development/component_generator.py b/dash/development/component_generator.py\n--- a/dash/development/component_generator.py\n+++ b/dash/development/component_generator.py\n@@ -89,7 +89,7 @@\n )\n \n with open(os.path.join(project_shortname, 'metadata.json'), 'w') as f:\n- json.dump(metadata, f)\n+ json.dump(metadata, f, indent=2)\n \n generate_imports(project_shortname, components)\n", "issue": "Extract meta failure on missing props docstring.\nIf a props is missing a docstring, it will fail to generate the component with js error, `Cannot read property 'length' of undefined`.\r\n\r\nhttps://community.plot.ly/t/dash-component-creation-javascript-ok-nothing-rendered-in-python/19369\n", "before_files": [{"content": "from __future__ import print_function\nfrom collections import OrderedDict\n\nimport json\nimport sys\nimport subprocess\nimport shlex\nimport os\nimport argparse\nimport shutil\nimport functools\n\nimport pkg_resources\n\nfrom ._r_components_generation import write_class_file\nfrom ._r_components_generation import generate_exports\nfrom ._py_components_generation import generate_class_file\nfrom ._py_components_generation import generate_imports\nfrom ._py_components_generation import generate_classes_files\n\n\nclass _CombinedFormatter(argparse.ArgumentDefaultsHelpFormatter,\n argparse.RawDescriptionHelpFormatter):\n pass\n\n\n# pylint: disable=too-many-locals\ndef generate_components(components_source, project_shortname,\n package_info_filename='package.json',\n ignore='^_',\n rprefix=None):\n\n project_shortname = project_shortname.replace('-', '_').rstrip('/\\\\')\n\n if rprefix:\n prefix = rprefix\n\n is_windows = sys.platform == 'win32'\n\n extract_path = pkg_resources.resource_filename('dash', 'extract-meta.js')\n\n os.environ['NODE_PATH'] = 'node_modules'\n cmd = shlex.split(\n 'node {} {} {}'.format(extract_path, ignore, components_source),\n posix=not is_windows\n )\n\n shutil.copyfile('package.json',\n os.path.join(project_shortname, package_info_filename))\n\n proc = subprocess.Popen(cmd,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n shell=is_windows)\n out, err = proc.communicate()\n status = proc.poll()\n\n if err:\n print(err.decode(), file=sys.stderr)\n\n if not out:\n print(\n 'Error generating metadata in {} (status={})'.format(\n project_shortname, status),\n file=sys.stderr)\n sys.exit(1)\n\n jsondata_unicode = json.loads(out.decode(), object_pairs_hook=OrderedDict)\n\n if sys.version_info[0] >= 3:\n metadata = jsondata_unicode\n else:\n metadata = byteify(jsondata_unicode)\n\n generator_methods = [generate_class_file]\n\n if rprefix:\n if not os.path.exists('man'):\n os.makedirs('man')\n if not os.path.exists('R'):\n os.makedirs('R')\n generator_methods.append(\n functools.partial(write_class_file, prefix=prefix))\n\n components = generate_classes_files(\n project_shortname,\n metadata,\n *generator_methods\n )\n\n with open(os.path.join(project_shortname, 'metadata.json'), 'w') as f:\n json.dump(metadata, f)\n\n generate_imports(project_shortname, components)\n\n if rprefix:\n with open('package.json', 'r') as f:\n jsondata_unicode = json.load(f, object_pairs_hook=OrderedDict)\n if sys.version_info[0] >= 3:\n pkg_data = jsondata_unicode\n else:\n pkg_data = byteify(jsondata_unicode)\n\n generate_exports(\n project_shortname, components, metadata, pkg_data, prefix\n )\n\n\ndef cli():\n parser = argparse.ArgumentParser(\n prog='dash-generate-components',\n formatter_class=_CombinedFormatter,\n description='Generate 
dash components by extracting the metadata '\n 'using react-docgen. Then map the metadata to python classes.'\n )\n parser.add_argument('components_source',\n help='React components source directory.')\n parser.add_argument(\n 'project_shortname',\n help='Name of the project to export the classes files.'\n )\n parser.add_argument(\n '-p', '--package-info-filename',\n default='package.json',\n help='The filename of the copied `package.json` to `project_shortname`'\n )\n parser.add_argument(\n '-i', '--ignore',\n default='^_',\n help='Files/directories matching the pattern will be ignored'\n )\n parser.add_argument(\n '--r-prefix',\n help='Experimental: specify a prefix for DashR component names, write'\n 'DashR components to R dir, create R package.'\n )\n\n args = parser.parse_args()\n generate_components(\n args.components_source, args.project_shortname,\n package_info_filename=args.package_info_filename,\n ignore=args.ignore,\n rprefix=args.r_prefix)\n\n\n# pylint: disable=undefined-variable\ndef byteify(input_object):\n if isinstance(input_object, dict):\n return OrderedDict([\n (byteify(key), byteify(value))\n for key, value in input_object.iteritems()\n ])\n elif isinstance(input_object, list):\n return [byteify(element) for element in input_object]\n elif isinstance(input_object, unicode): # noqa:F821\n return input_object.encode('utf-8')\n return input_object\n\n\nif __name__ == '__main__':\n cli()\n", "path": "dash/development/component_generator.py"}], "after_files": [{"content": "from __future__ import print_function\nfrom collections import OrderedDict\n\nimport json\nimport sys\nimport subprocess\nimport shlex\nimport os\nimport argparse\nimport shutil\nimport functools\n\nimport pkg_resources\n\nfrom ._r_components_generation import write_class_file\nfrom ._r_components_generation import generate_exports\nfrom ._py_components_generation import generate_class_file\nfrom ._py_components_generation import generate_imports\nfrom ._py_components_generation import generate_classes_files\n\n\nclass _CombinedFormatter(argparse.ArgumentDefaultsHelpFormatter,\n argparse.RawDescriptionHelpFormatter):\n pass\n\n\n# pylint: disable=too-many-locals\ndef generate_components(components_source, project_shortname,\n package_info_filename='package.json',\n ignore='^_',\n rprefix=None):\n\n project_shortname = project_shortname.replace('-', '_').rstrip('/\\\\')\n\n if rprefix:\n prefix = rprefix\n\n is_windows = sys.platform == 'win32'\n\n extract_path = pkg_resources.resource_filename('dash', 'extract-meta.js')\n\n os.environ['NODE_PATH'] = 'node_modules'\n cmd = shlex.split(\n 'node {} {} {}'.format(extract_path, ignore, components_source),\n posix=not is_windows\n )\n\n shutil.copyfile('package.json',\n os.path.join(project_shortname, package_info_filename))\n\n proc = subprocess.Popen(cmd,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n shell=is_windows)\n out, err = proc.communicate()\n status = proc.poll()\n\n if err:\n print(err.decode(), file=sys.stderr)\n\n if not out:\n print(\n 'Error generating metadata in {} (status={})'.format(\n project_shortname, status),\n file=sys.stderr)\n sys.exit(1)\n\n jsondata_unicode = json.loads(out.decode(), object_pairs_hook=OrderedDict)\n\n if sys.version_info[0] >= 3:\n metadata = jsondata_unicode\n else:\n metadata = byteify(jsondata_unicode)\n\n generator_methods = [generate_class_file]\n\n if rprefix:\n if not os.path.exists('man'):\n os.makedirs('man')\n if not os.path.exists('R'):\n os.makedirs('R')\n generator_methods.append(\n 
functools.partial(write_class_file, prefix=prefix))\n\n components = generate_classes_files(\n project_shortname,\n metadata,\n *generator_methods\n )\n\n with open(os.path.join(project_shortname, 'metadata.json'), 'w') as f:\n json.dump(metadata, f, indent=2)\n\n generate_imports(project_shortname, components)\n\n if rprefix:\n with open('package.json', 'r') as f:\n jsondata_unicode = json.load(f, object_pairs_hook=OrderedDict)\n if sys.version_info[0] >= 3:\n pkg_data = jsondata_unicode\n else:\n pkg_data = byteify(jsondata_unicode)\n\n generate_exports(\n project_shortname, components, metadata, pkg_data, prefix\n )\n\n\ndef cli():\n parser = argparse.ArgumentParser(\n prog='dash-generate-components',\n formatter_class=_CombinedFormatter,\n description='Generate dash components by extracting the metadata '\n 'using react-docgen. Then map the metadata to python classes.'\n )\n parser.add_argument('components_source',\n help='React components source directory.')\n parser.add_argument(\n 'project_shortname',\n help='Name of the project to export the classes files.'\n )\n parser.add_argument(\n '-p', '--package-info-filename',\n default='package.json',\n help='The filename of the copied `package.json` to `project_shortname`'\n )\n parser.add_argument(\n '-i', '--ignore',\n default='^_',\n help='Files/directories matching the pattern will be ignored'\n )\n parser.add_argument(\n '--r-prefix',\n help='Experimental: specify a prefix for DashR component names, write'\n 'DashR components to R dir, create R package.'\n )\n\n args = parser.parse_args()\n generate_components(\n args.components_source, args.project_shortname,\n package_info_filename=args.package_info_filename,\n ignore=args.ignore,\n rprefix=args.r_prefix)\n\n\n# pylint: disable=undefined-variable\ndef byteify(input_object):\n if isinstance(input_object, dict):\n return OrderedDict([\n (byteify(key), byteify(value))\n for key, value in input_object.iteritems()\n ])\n elif isinstance(input_object, list):\n return [byteify(element) for element in input_object]\n elif isinstance(input_object, unicode): # noqa:F821\n return input_object.encode('utf-8')\n return input_object\n\n\nif __name__ == '__main__':\n cli()\n", "path": "dash/development/component_generator.py"}]} | 1,715 | 106 |
gh_patches_debug_1743 | rasdani/github-patches | git_diff | conan-io__conan-3816 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Conan crashes if .count files contain a NULL character
conan version 1.8.0, Windows 7 x64
For some reason (maybe a crash), the .count files of every conan package in the cache were corrupted and contained a Unicode NULL character (https://www.fileformat.info/info/unicode/char/0000/index.htm). This led to a conan crash on basically every package action (e.g. `conan info`).
I already prepared a fix (https://github.com/conan-io/conan/compare/develop...pianoslum:develop) which also adds a little bit more verbosity in case there is no number in the .count-file.
Is this enough info or should I elaborate?
--- END ISSUE ---
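A minimal reproduction of the crash path under the corruption described above: `int()` raises `ValueError` on a NUL-only string, and the pre-fix `_readers()` only caught `IOError`:

```python
corrupted = "\x00"  # content of a corrupted .count file


def readers_old(content: str) -> int:
    try:
        return int(content)
    except IOError:  # the only exception the pre-fix code caught
        return 0


def readers_fixed(content: str) -> int:
    try:
        return int(content)
    except (IOError, UnicodeEncodeError, ValueError):
        return 0  # the actual patch also emits a warning here


# readers_old(corrupted) raises ValueError; the fixed version degrades gracefully.
assert readers_fixed(corrupted) == 0
```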
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/util/locks.py`
Content:
```
1 import fasteners
2 from conans.util.log import logger
3 import time
4 from conans.util.files import save, load
5 import os
6
7
8 class NoLock(object):
9
10 def __enter__(self):
11 pass
12
13 def __exit__(self, exc_type, exc_val, exc_tb): # @UnusedVariable
14 pass
15
16
17 class SimpleLock(object):
18
19 def __init__(self, filename):
20 self._lock = fasteners.InterProcessLock(filename, logger=logger)
21
22 def __enter__(self):
23 self._lock.acquire()
24
25 def __exit__(self, exc_type, exc_val, exc_tb): # @UnusedVariable
26 self._lock.release()
27
28
29 READ_BUSY_DELAY = 0.5
30 WRITE_BUSY_DELAY = 0.25
31
32
33 class Lock(object):
34
35 @staticmethod
36 def clean(folder):
37 if os.path.exists(folder + ".count"):
38 os.remove(folder + ".count")
39 if os.path.exists(folder + ".count.lock"):
40 os.remove(folder + ".count.lock")
41
42 def __init__(self, folder, locked_item, output):
43 self._count_file = folder + ".count"
44 self._count_lock_file = folder + ".count.lock"
45 self._locked_item = locked_item
46 self._output = output
47 self._first_lock = True
48
49 @property
50 def files(self):
51 return (self._count_file, self._count_lock_file)
52
53 def _info_locked(self):
54 if self._first_lock:
55 self._first_lock = False
56 self._output.info("%s is locked by another concurrent conan process, wait..."
57 % str(self._locked_item))
58 self._output.info("If not the case, quit, and do 'conan remove --locks'")
59
60 def _readers(self):
61 try:
62 return int(load(self._count_file))
63 except IOError:
64 return 0
65
66
67 class ReadLock(Lock):
68
69 def __enter__(self):
70 while True:
71 with fasteners.InterProcessLock(self._count_lock_file, logger=logger):
72 readers = self._readers()
73 if readers >= 0:
74 save(self._count_file, str(readers + 1))
75 break
76 self._info_locked()
77 time.sleep(READ_BUSY_DELAY)
78
79 def __exit__(self, exc_type, exc_val, exc_tb): # @UnusedVariable
80 with fasteners.InterProcessLock(self._count_lock_file, logger=logger):
81 readers = self._readers()
82 save(self._count_file, str(readers - 1))
83
84
85 class WriteLock(Lock):
86
87 def __enter__(self):
88 while True:
89 with fasteners.InterProcessLock(self._count_lock_file, logger=logger):
90 readers = self._readers()
91 if readers == 0:
92 save(self._count_file, "-1")
93 break
94 self._info_locked()
95 time.sleep(WRITE_BUSY_DELAY)
96
97 def __exit__(self, exc_type, exc_val, exc_tb): # @UnusedVariable
98 with fasteners.InterProcessLock(self._count_lock_file, logger=logger):
99 save(self._count_file, "0")
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conans/util/locks.py b/conans/util/locks.py
--- a/conans/util/locks.py
+++ b/conans/util/locks.py
@@ -60,7 +60,8 @@
def _readers(self):
try:
return int(load(self._count_file))
- except IOError:
+ except (IOError, UnicodeEncodeError, ValueError):
+ self._output.warn("%s does not contain a number!" % self._count_file)
return 0
| {"golden_diff": "diff --git a/conans/util/locks.py b/conans/util/locks.py\n--- a/conans/util/locks.py\n+++ b/conans/util/locks.py\n@@ -60,7 +60,8 @@\n def _readers(self):\n try:\n return int(load(self._count_file))\n- except IOError:\n+ except (IOError, UnicodeEncodeError, ValueError):\n+ self._output.warn(\"%s does not contain a number!\" % self._count_file)\n return 0\n", "issue": "Conan crash if .count-files consist NULL-character\nconan version 1.8.0, Windows 7 x64\r\n\r\nFor some reason (maybe a crash), the .count-files in each conan package in the cache were corrupted and contained a unicode NULL character (https://www.fileformat.info/info/unicode/char/0000/index.htm). This led to a conan crash for basically every package action (e.g. conan info).\r\n\r\nI already prepared a fix (https://github.com/conan-io/conan/compare/develop...pianoslum:develop) which also adds a little bit more verbosity in case there is no number in the .count-file.\r\n\r\nIs this enough info or should I elaborate?\n", "before_files": [{"content": "import fasteners\nfrom conans.util.log import logger\nimport time\nfrom conans.util.files import save, load\nimport os\n\n\nclass NoLock(object):\n\n def __enter__(self):\n pass\n\n def __exit__(self, exc_type, exc_val, exc_tb): # @UnusedVariable\n pass\n\n\nclass SimpleLock(object):\n\n def __init__(self, filename):\n self._lock = fasteners.InterProcessLock(filename, logger=logger)\n\n def __enter__(self):\n self._lock.acquire()\n\n def __exit__(self, exc_type, exc_val, exc_tb): # @UnusedVariable\n self._lock.release()\n\n\nREAD_BUSY_DELAY = 0.5\nWRITE_BUSY_DELAY = 0.25\n\n\nclass Lock(object):\n\n @staticmethod\n def clean(folder):\n if os.path.exists(folder + \".count\"):\n os.remove(folder + \".count\")\n if os.path.exists(folder + \".count.lock\"):\n os.remove(folder + \".count.lock\")\n\n def __init__(self, folder, locked_item, output):\n self._count_file = folder + \".count\"\n self._count_lock_file = folder + \".count.lock\"\n self._locked_item = locked_item\n self._output = output\n self._first_lock = True\n\n @property\n def files(self):\n return (self._count_file, self._count_lock_file)\n\n def _info_locked(self):\n if self._first_lock:\n self._first_lock = False\n self._output.info(\"%s is locked by another concurrent conan process, wait...\"\n % str(self._locked_item))\n self._output.info(\"If not the case, quit, and do 'conan remove --locks'\")\n\n def _readers(self):\n try:\n return int(load(self._count_file))\n except IOError:\n return 0\n\n\nclass ReadLock(Lock):\n\n def __enter__(self):\n while True:\n with fasteners.InterProcessLock(self._count_lock_file, logger=logger):\n readers = self._readers()\n if readers >= 0:\n save(self._count_file, str(readers + 1))\n break\n self._info_locked()\n time.sleep(READ_BUSY_DELAY)\n\n def __exit__(self, exc_type, exc_val, exc_tb): # @UnusedVariable\n with fasteners.InterProcessLock(self._count_lock_file, logger=logger):\n readers = self._readers()\n save(self._count_file, str(readers - 1))\n\n\nclass WriteLock(Lock):\n\n def __enter__(self):\n while True:\n with fasteners.InterProcessLock(self._count_lock_file, logger=logger):\n readers = self._readers()\n if readers == 0:\n save(self._count_file, \"-1\")\n break\n self._info_locked()\n time.sleep(WRITE_BUSY_DELAY)\n\n def __exit__(self, exc_type, exc_val, exc_tb): # @UnusedVariable\n with fasteners.InterProcessLock(self._count_lock_file, logger=logger):\n save(self._count_file, \"0\")\n", "path": "conans/util/locks.py"}], "after_files": [{"content": "import 
fasteners\nfrom conans.util.log import logger\nimport time\nfrom conans.util.files import save, load\nimport os\n\n\nclass NoLock(object):\n\n def __enter__(self):\n pass\n\n def __exit__(self, exc_type, exc_val, exc_tb): # @UnusedVariable\n pass\n\n\nclass SimpleLock(object):\n\n def __init__(self, filename):\n self._lock = fasteners.InterProcessLock(filename, logger=logger)\n\n def __enter__(self):\n self._lock.acquire()\n\n def __exit__(self, exc_type, exc_val, exc_tb): # @UnusedVariable\n self._lock.release()\n\n\nREAD_BUSY_DELAY = 0.5\nWRITE_BUSY_DELAY = 0.25\n\n\nclass Lock(object):\n\n @staticmethod\n def clean(folder):\n if os.path.exists(folder + \".count\"):\n os.remove(folder + \".count\")\n if os.path.exists(folder + \".count.lock\"):\n os.remove(folder + \".count.lock\")\n\n def __init__(self, folder, locked_item, output):\n self._count_file = folder + \".count\"\n self._count_lock_file = folder + \".count.lock\"\n self._locked_item = locked_item\n self._output = output\n self._first_lock = True\n\n @property\n def files(self):\n return (self._count_file, self._count_lock_file)\n\n def _info_locked(self):\n if self._first_lock:\n self._first_lock = False\n self._output.info(\"%s is locked by another concurrent conan process, wait...\"\n % str(self._locked_item))\n self._output.info(\"If not the case, quit, and do 'conan remove --locks'\")\n\n def _readers(self):\n try:\n return int(load(self._count_file))\n except (IOError, UnicodeEncodeError, ValueError):\n self._output.warn(\"%s does not contain a number!\" % self._count_file)\n return 0\n\n\nclass ReadLock(Lock):\n\n def __enter__(self):\n while True:\n with fasteners.InterProcessLock(self._count_lock_file, logger=logger):\n readers = self._readers()\n if readers >= 0:\n save(self._count_file, str(readers + 1))\n break\n self._info_locked()\n time.sleep(READ_BUSY_DELAY)\n\n def __exit__(self, exc_type, exc_val, exc_tb): # @UnusedVariable\n with fasteners.InterProcessLock(self._count_lock_file, logger=logger):\n readers = self._readers()\n save(self._count_file, str(readers - 1))\n\n\nclass WriteLock(Lock):\n\n def __enter__(self):\n while True:\n with fasteners.InterProcessLock(self._count_lock_file, logger=logger):\n readers = self._readers()\n if readers == 0:\n save(self._count_file, \"-1\")\n break\n self._info_locked()\n time.sleep(WRITE_BUSY_DELAY)\n\n def __exit__(self, exc_type, exc_val, exc_tb): # @UnusedVariable\n with fasteners.InterProcessLock(self._count_lock_file, logger=logger):\n save(self._count_file, \"0\")\n", "path": "conans/util/locks.py"}]} | 1,289 | 110 |
gh_patches_debug_15170 | rasdani/github-patches | git_diff | dmlc__dgl-1082 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AmazonCoBuy dataset typo in API document
## 📚 Documentation
In the API reference of `dgl.data.AmazonCoBuy` at https://docs.dgl.ai/en/latest/api/python/data.html#dgl.data.AmazonCoBuy, the parameter `name` is documented as having to be ‘computer’ or ‘photo’. However, it should be ‘computer**s**’ or ‘photo’ according to the code:
https://github.com/dmlc/dgl/blob/fa0ee46a0146d21f46427fd9288dfe18074d6109/python/dgl/data/gnn_benckmark.py#L107-L125
--- END ISSUE ---
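A usage sketch matching the code linked above -- the `assert name.lower() in self._url` in `GNNBenchmarkDataset.__init__` is what makes the documented 'computer' fail (note this downloads data on first use):

```python
from dgl.data import AmazonCoBuy

dataset = AmazonCoBuy(name="computers")  # key that actually exists in AmazonCoBuy._url
g = dataset[0]                           # the single DGLGraph in this dataset

# AmazonCoBuy(name="computer")  # AssertionError: Name not valid
```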
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/dgl/data/gnn_benckmark.py`
Content:
```
1 import scipy.sparse as sp
2 import numpy as np
3 from dgl import graph_index, DGLGraph, transform
4 import os
5 from .utils import download, extract_archive, get_download_dir, _get_dgl_url
6
7 __all__=["AmazonCoBuy", "Coauthor", 'CoraFull']
8
9 def eliminate_self_loops(A):
10 """Remove self-loops from the adjacency matrix."""
11 A = A.tolil()
12 A.setdiag(0)
13 A = A.tocsr()
14 A.eliminate_zeros()
15 return A
16
17
18 class GNNBenchmarkDataset(object):
19 """Base Class for GNN Benchmark dataset from https://github.com/shchur/gnn-benchmark#datasets"""
20 _url = {}
21
22 def __init__(self, name):
23 assert name.lower() in self._url, "Name not valid"
24 self.dir = get_download_dir()
25 self.path = os.path.join(
26 self.dir, 'gnn_benckmark', self._url[name.lower()].split('/')[-1])
27 download(self._url[name.lower()], path=self.path)
28 g = self.load_npz(self.path)
29 self.data = [g]
30
31 @staticmethod
32 def load_npz(file_name):
33 with np.load(file_name) as loader:
34 loader = dict(loader)
35 num_nodes = loader['adj_shape'][0]
36 adj_matrix = sp.csr_matrix((loader['adj_data'], loader['adj_indices'], loader['adj_indptr']),
37 shape=loader['adj_shape']).tocoo()
38
39 if 'attr_data' in loader:
40 # Attributes are stored as a sparse CSR matrix
41 attr_matrix = sp.csr_matrix((loader['attr_data'], loader['attr_indices'], loader['attr_indptr']),
42 shape=loader['attr_shape']).todense()
43 elif 'attr_matrix' in loader:
44 # Attributes are stored as a (dense) np.ndarray
45 attr_matrix = loader['attr_matrix']
46 else:
47 attr_matrix = None
48
49 if 'labels_data' in loader:
50 # Labels are stored as a CSR matrix
51 labels = sp.csr_matrix((loader['labels_data'], loader['labels_indices'], loader['labels_indptr']),
52 shape=loader['labels_shape']).todense()
53 elif 'labels' in loader:
54 # Labels are stored as a numpy array
55 labels = loader['labels']
56 else:
57 labels = None
58 g = DGLGraph()
59 g.add_nodes(num_nodes)
60 g.add_edges(adj_matrix.row, adj_matrix.col)
61 g.add_edges(adj_matrix.col, adj_matrix.row)
62 g.ndata['feat'] = attr_matrix
63 g.ndata['label'] = labels
64 return g
65
66 def __getitem__(self, idx):
67 assert idx == 0, "This dataset has only one graph"
68 return self.data[0]
69
70 def __len__(self):
71 return len(self.data)
72
73
74 class CoraFull(GNNBenchmarkDataset):
75 r"""
76 Extended Cora dataset from `Deep Gaussian Embedding of Graphs:
77 Unsupervised Inductive Learning via Ranking`. Nodes represent paper and edges represent citations.
78
79 Reference: https://github.com/shchur/gnn-benchmark#datasets
80 """
81 _url = {"cora_full":'https://github.com/shchur/gnn-benchmark/raw/master/data/npz/cora_full.npz'}
82
83 def __init__(self):
84 super().__init__("cora_full")
85
86
87 class Coauthor(GNNBenchmarkDataset):
88 r"""
89 Coauthor CS and Coauthor Physics are co-authorship graphs based on the Microsoft Academic Graph
90 from the KDD Cup 2016 challenge 3
91 . Here, nodes are authors, that are connected by an edge if they
92 co-authored a paper; node features represent paper keywords for each author’s papers, and class
93 labels indicate most active fields of study for each author.
94
95 Parameters
96 ---------------
97 name: str
98 Name of the dataset, has to be 'cs' or 'physics'
99
100 """
101 _url = {
102 'cs': "https://github.com/shchur/gnn-benchmark/raw/master/data/npz/ms_academic_cs.npz",
103 'physics': "https://github.com/shchur/gnn-benchmark/raw/master/data/npz/ms_academic_phy.npz"
104 }
105
106
107 class AmazonCoBuy(GNNBenchmarkDataset):
108 r"""
109 Amazon Computers and Amazon Photo are segments of the Amazon co-purchase graph [McAuley
110 et al., 2015], where nodes represent goods, edges indicate that two goods are frequently bought
111 together, node features are bag-of-words encoded product reviews, and class labels are given by the
112 product category.
113
114 Reference: https://github.com/shchur/gnn-benchmark#datasets
115
116 Parameters
117 ---------------
118 name: str
119 Name of the dataset, has to be 'computer' or 'photo'
120
121 """
122 _url = {
123 'computers': "https://github.com/shchur/gnn-benchmark/raw/master/data/npz/amazon_electronics_computers.npz",
124 'photo': "https://github.com/shchur/gnn-benchmark/raw/master/data/npz/amazon_electronics_photo.npz"
125 }
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/dgl/data/gnn_benckmark.py b/python/dgl/data/gnn_benckmark.py
--- a/python/dgl/data/gnn_benckmark.py
+++ b/python/dgl/data/gnn_benckmark.py
@@ -30,7 +30,7 @@
@staticmethod
def load_npz(file_name):
- with np.load(file_name) as loader:
+ with np.load(file_name, allow_pickle=True) as loader:
loader = dict(loader)
num_nodes = loader['adj_shape'][0]
adj_matrix = sp.csr_matrix((loader['adj_data'], loader['adj_indices'], loader['adj_indptr']),
@@ -116,7 +116,7 @@
Parameters
---------------
name: str
- Name of the dataset, has to be 'computer' or 'photo'
+ Name of the dataset, has to be 'computers' or 'photo'
"""
_url = {
| {"golden_diff": "diff --git a/python/dgl/data/gnn_benckmark.py b/python/dgl/data/gnn_benckmark.py\n--- a/python/dgl/data/gnn_benckmark.py\n+++ b/python/dgl/data/gnn_benckmark.py\n@@ -30,7 +30,7 @@\n \n @staticmethod\n def load_npz(file_name):\n- with np.load(file_name) as loader:\n+ with np.load(file_name, allow_pickle=True) as loader:\n loader = dict(loader)\n num_nodes = loader['adj_shape'][0]\n adj_matrix = sp.csr_matrix((loader['adj_data'], loader['adj_indices'], loader['adj_indptr']),\n@@ -116,7 +116,7 @@\n Parameters\n ---------------\n name: str\n- Name of the dataset, has to be 'computer' or 'photo'\n+ Name of the dataset, has to be 'computers' or 'photo'\n \n \"\"\"\n _url = {\n", "issue": "AmazonCoBuy dataset typo in API document\n## \ud83d\udcda Documentation\r\n\r\nIn API reference of `dgl.data.AmazonCoBuy` at https://docs.dgl.ai/en/latest/api/python/data.html#dgl.data.AmazonCoBuy, parameter `name` has to be \u2018computer\u2019 or \u2018photo\u2019. However, it should be \u2018computer**s**\u2019 or \u2018photo\u2019 according to the code:\r\n\r\nhttps://github.com/dmlc/dgl/blob/fa0ee46a0146d21f46427fd9288dfe18074d6109/python/dgl/data/gnn_benckmark.py#L107-L125\r\n\r\n\n", "before_files": [{"content": "import scipy.sparse as sp\nimport numpy as np\nfrom dgl import graph_index, DGLGraph, transform\nimport os\nfrom .utils import download, extract_archive, get_download_dir, _get_dgl_url\n\n__all__=[\"AmazonCoBuy\", \"Coauthor\", 'CoraFull']\n\ndef eliminate_self_loops(A):\n \"\"\"Remove self-loops from the adjacency matrix.\"\"\"\n A = A.tolil()\n A.setdiag(0)\n A = A.tocsr()\n A.eliminate_zeros()\n return A\n\n\nclass GNNBenchmarkDataset(object):\n \"\"\"Base Class for GNN Benchmark dataset from https://github.com/shchur/gnn-benchmark#datasets\"\"\"\n _url = {}\n\n def __init__(self, name):\n assert name.lower() in self._url, \"Name not valid\"\n self.dir = get_download_dir()\n self.path = os.path.join(\n self.dir, 'gnn_benckmark', self._url[name.lower()].split('/')[-1])\n download(self._url[name.lower()], path=self.path)\n g = self.load_npz(self.path)\n self.data = [g]\n\n @staticmethod\n def load_npz(file_name):\n with np.load(file_name) as loader:\n loader = dict(loader)\n num_nodes = loader['adj_shape'][0]\n adj_matrix = sp.csr_matrix((loader['adj_data'], loader['adj_indices'], loader['adj_indptr']),\n shape=loader['adj_shape']).tocoo()\n\n if 'attr_data' in loader:\n # Attributes are stored as a sparse CSR matrix\n attr_matrix = sp.csr_matrix((loader['attr_data'], loader['attr_indices'], loader['attr_indptr']),\n shape=loader['attr_shape']).todense()\n elif 'attr_matrix' in loader:\n # Attributes are stored as a (dense) np.ndarray\n attr_matrix = loader['attr_matrix']\n else:\n attr_matrix = None\n\n if 'labels_data' in loader:\n # Labels are stored as a CSR matrix\n labels = sp.csr_matrix((loader['labels_data'], loader['labels_indices'], loader['labels_indptr']),\n shape=loader['labels_shape']).todense()\n elif 'labels' in loader:\n # Labels are stored as a numpy array\n labels = loader['labels']\n else:\n labels = None\n g = DGLGraph()\n g.add_nodes(num_nodes)\n g.add_edges(adj_matrix.row, adj_matrix.col)\n g.add_edges(adj_matrix.col, adj_matrix.row)\n g.ndata['feat'] = attr_matrix\n g.ndata['label'] = labels\n return g \n\n def __getitem__(self, idx):\n assert idx == 0, \"This dataset has only one graph\"\n return self.data[0]\n\n def __len__(self):\n return len(self.data)\n\n\nclass CoraFull(GNNBenchmarkDataset):\n r\"\"\"\n Extended Cora dataset from `Deep Gaussian 
Embedding of Graphs: \n Unsupervised Inductive Learning via Ranking`. Nodes represent paper and edges represent citations.\n\n Reference: https://github.com/shchur/gnn-benchmark#datasets\n \"\"\"\n _url = {\"cora_full\":'https://github.com/shchur/gnn-benchmark/raw/master/data/npz/cora_full.npz'}\n\n def __init__(self):\n super().__init__(\"cora_full\")\n\n\nclass Coauthor(GNNBenchmarkDataset):\n r\"\"\"\n Coauthor CS and Coauthor Physics are co-authorship graphs based on the Microsoft Academic Graph\n from the KDD Cup 2016 challenge 3\n . Here, nodes are authors, that are connected by an edge if they\n co-authored a paper; node features represent paper keywords for each author\u2019s papers, and class\n labels indicate most active fields of study for each author.\n\n Parameters\n ---------------\n name: str\n Name of the dataset, has to be 'cs' or 'physics'\n\n \"\"\"\n _url = {\n 'cs': \"https://github.com/shchur/gnn-benchmark/raw/master/data/npz/ms_academic_cs.npz\",\n 'physics': \"https://github.com/shchur/gnn-benchmark/raw/master/data/npz/ms_academic_phy.npz\"\n }\n\n\nclass AmazonCoBuy(GNNBenchmarkDataset):\n r\"\"\"\n Amazon Computers and Amazon Photo are segments of the Amazon co-purchase graph [McAuley\n et al., 2015], where nodes represent goods, edges indicate that two goods are frequently bought\n together, node features are bag-of-words encoded product reviews, and class labels are given by the\n product category.\n\n Reference: https://github.com/shchur/gnn-benchmark#datasets\n\n Parameters\n ---------------\n name: str\n Name of the dataset, has to be 'computer' or 'photo'\n\n \"\"\"\n _url = {\n 'computers': \"https://github.com/shchur/gnn-benchmark/raw/master/data/npz/amazon_electronics_computers.npz\",\n 'photo': \"https://github.com/shchur/gnn-benchmark/raw/master/data/npz/amazon_electronics_photo.npz\"\n }\n", "path": "python/dgl/data/gnn_benckmark.py"}], "after_files": [{"content": "import scipy.sparse as sp\nimport numpy as np\nfrom dgl import graph_index, DGLGraph, transform\nimport os\nfrom .utils import download, extract_archive, get_download_dir, _get_dgl_url\n\n__all__=[\"AmazonCoBuy\", \"Coauthor\", 'CoraFull']\n\ndef eliminate_self_loops(A):\n \"\"\"Remove self-loops from the adjacency matrix.\"\"\"\n A = A.tolil()\n A.setdiag(0)\n A = A.tocsr()\n A.eliminate_zeros()\n return A\n\n\nclass GNNBenchmarkDataset(object):\n \"\"\"Base Class for GNN Benchmark dataset from https://github.com/shchur/gnn-benchmark#datasets\"\"\"\n _url = {}\n\n def __init__(self, name):\n assert name.lower() in self._url, \"Name not valid\"\n self.dir = get_download_dir()\n self.path = os.path.join(\n self.dir, 'gnn_benckmark', self._url[name.lower()].split('/')[-1])\n download(self._url[name.lower()], path=self.path)\n g = self.load_npz(self.path)\n self.data = [g]\n\n @staticmethod\n def load_npz(file_name):\n with np.load(file_name, allow_pickle=True) as loader:\n loader = dict(loader)\n num_nodes = loader['adj_shape'][0]\n adj_matrix = sp.csr_matrix((loader['adj_data'], loader['adj_indices'], loader['adj_indptr']),\n shape=loader['adj_shape']).tocoo()\n\n if 'attr_data' in loader:\n # Attributes are stored as a sparse CSR matrix\n attr_matrix = sp.csr_matrix((loader['attr_data'], loader['attr_indices'], loader['attr_indptr']),\n shape=loader['attr_shape']).todense()\n elif 'attr_matrix' in loader:\n # Attributes are stored as a (dense) np.ndarray\n attr_matrix = loader['attr_matrix']\n else:\n attr_matrix = None\n\n if 'labels_data' in loader:\n # Labels are stored as a CSR matrix\n 
labels = sp.csr_matrix((loader['labels_data'], loader['labels_indices'], loader['labels_indptr']),\n shape=loader['labels_shape']).todense()\n elif 'labels' in loader:\n # Labels are stored as a numpy array\n labels = loader['labels']\n else:\n labels = None\n g = DGLGraph()\n g.add_nodes(num_nodes)\n g.add_edges(adj_matrix.row, adj_matrix.col)\n g.add_edges(adj_matrix.col, adj_matrix.row)\n g.ndata['feat'] = attr_matrix\n g.ndata['label'] = labels\n return g \n\n def __getitem__(self, idx):\n assert idx == 0, \"This dataset has only one graph\"\n return self.data[0]\n\n def __len__(self):\n return len(self.data)\n\n\nclass CoraFull(GNNBenchmarkDataset):\n r\"\"\"\n Extended Cora dataset from `Deep Gaussian Embedding of Graphs: \n Unsupervised Inductive Learning via Ranking`. Nodes represent paper and edges represent citations.\n\n Reference: https://github.com/shchur/gnn-benchmark#datasets\n \"\"\"\n _url = {\"cora_full\":'https://github.com/shchur/gnn-benchmark/raw/master/data/npz/cora_full.npz'}\n\n def __init__(self):\n super().__init__(\"cora_full\")\n\n\nclass Coauthor(GNNBenchmarkDataset):\n r\"\"\"\n Coauthor CS and Coauthor Physics are co-authorship graphs based on the Microsoft Academic Graph\n from the KDD Cup 2016 challenge 3\n . Here, nodes are authors, that are connected by an edge if they\n co-authored a paper; node features represent paper keywords for each author\u2019s papers, and class\n labels indicate most active fields of study for each author.\n\n Parameters\n ---------------\n name: str\n Name of the dataset, has to be 'cs' or 'physics'\n\n \"\"\"\n _url = {\n 'cs': \"https://github.com/shchur/gnn-benchmark/raw/master/data/npz/ms_academic_cs.npz\",\n 'physics': \"https://github.com/shchur/gnn-benchmark/raw/master/data/npz/ms_academic_phy.npz\"\n }\n\n\nclass AmazonCoBuy(GNNBenchmarkDataset):\n r\"\"\"\n Amazon Computers and Amazon Photo are segments of the Amazon co-purchase graph [McAuley\n et al., 2015], where nodes represent goods, edges indicate that two goods are frequently bought\n together, node features are bag-of-words encoded product reviews, and class labels are given by the\n product category.\n\n Reference: https://github.com/shchur/gnn-benchmark#datasets\n\n Parameters\n ---------------\n name: str\n Name of the dataset, has to be 'computers' or 'photo'\n\n \"\"\"\n _url = {\n 'computers': \"https://github.com/shchur/gnn-benchmark/raw/master/data/npz/amazon_electronics_computers.npz\",\n 'photo': \"https://github.com/shchur/gnn-benchmark/raw/master/data/npz/amazon_electronics_photo.npz\"\n }\n", "path": "python/dgl/data/gnn_benckmark.py"}]} | 1,825 | 217 |
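A note on the `allow_pickle=True` fix in the row above: NumPy 1.16.3 changed `np.load` to refuse pickled object arrays by default, which is exactly what breaks loading these `.npz` graph archives. A minimal sketch of the failure mode (the file name is illustrative):

```python
import numpy as np

arr = np.array([{"label": 1}], dtype=object)  # object array, stored via pickle
np.savez("demo.npz", data=arr)

try:
    with np.load("demo.npz") as loader:  # NumPy >= 1.16.3 defaults to allow_pickle=False
        loader["data"]
except ValueError as exc:
    print(exc)  # "Object arrays cannot be loaded when allow_pickle=False"

with np.load("demo.npz", allow_pickle=True) as loader:  # the patched call
    print(loader["data"][0]["label"])  # 1
```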
gh_patches_debug_27764 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-4025 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
motherless extractor - test 0 fails
```
$ python ~/projects/youtube-dl/test/test_download.py TestDownload.test_Motherless
[Motherless] AC3FFE1: Downloading webpage
[info] Writing video description metadata as JSON to: AC3FFE1.info.json
[debug] Invoking downloader on 'http://s17.motherlessmedia.com/dev386/0/572/287/0572287847.mp4/5cb6d38eccba71d7f6bb2ef260997c3d/544A96C0.mp4'
[download] Destination: AC3FFE1.mp4
[download] 100% of 10.00KiB in 00:00
F
======================================================================
FAIL: test_Motherless (__main__.TestDownload)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/crabman/projects/youtube-dl/test/test_download.py", line 170, in test_template
self.assertTrue(os.path.exists(tc_filename), msg='Missing file ' + tc_filename)
AssertionError: False is not true : Missing file AC3FFE1.flv
----------------------------------------------------------------------
Ran 1 test in 1.690s
FAILED (failures=1)
```
Apparently that video's page no longer gives you flv, but instead gives you an mp4 file. I am not sure why that happened. Maybe motherless doesn't serve flv files anymore, or maybe it still does for some videos - I don't know.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `youtube_dl/extractor/motherless.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import datetime
4 import re
5
6 from .common import InfoExtractor
7 from ..utils import (
8 int_or_none,
9 unified_strdate,
10 )
11
12
13 class MotherlessIE(InfoExtractor):
14 _VALID_URL = r'http://(?:www\.)?motherless\.com/(?P<id>[A-Z0-9]+)'
15 _TESTS = [
16 {
17 'url': 'http://motherless.com/AC3FFE1',
18 'md5': '5527fef81d2e529215dad3c2d744a7d9',
19 'info_dict': {
20 'id': 'AC3FFE1',
21 'ext': 'flv',
22 'title': 'Fucked in the ass while playing PS3',
23 'categories': ['Gaming', 'anal', 'reluctant', 'rough', 'Wife'],
24 'upload_date': '20100913',
25 'uploader_id': 'famouslyfuckedup',
26 'thumbnail': 're:http://.*\.jpg',
27 'age_limit': 18,
28 }
29 },
30 {
31 'url': 'http://motherless.com/532291B',
32 'md5': 'bc59a6b47d1f958e61fbd38a4d31b131',
33 'info_dict': {
34 'id': '532291B',
35 'ext': 'mp4',
36 'title': 'Amazing girl playing the omegle game, PERFECT!',
37 'categories': ['Amateur', 'webcam', 'omegle', 'pink', 'young', 'masturbate', 'teen', 'game', 'hairy'],
38 'upload_date': '20140622',
39 'uploader_id': 'Sulivana7x',
40 'thumbnail': 're:http://.*\.jpg',
41 'age_limit': 18,
42 }
43 }
44 ]
45
46 def _real_extract(self,url):
47 mobj = re.match(self._VALID_URL, url)
48 video_id = mobj.group('id')
49
50 webpage = self._download_webpage(url, video_id)
51
52 title = self._html_search_regex(r'id="view-upload-title">\s+([^<]+)<', webpage, 'title')
53
54 video_url = self._html_search_regex(r'setup\(\{\s+"file".+: "([^"]+)",', webpage, 'video_url')
55 age_limit = self._rta_search(webpage)
56
57 view_count = self._html_search_regex(r'<strong>Views</strong>\s+([^<]+)<', webpage, 'view_count')
58
59 upload_date = self._html_search_regex(r'<strong>Uploaded</strong>\s+([^<]+)<', webpage, 'upload_date')
60 if 'Ago' in upload_date:
61 days = int(re.search(r'([0-9]+)', upload_date).group(1))
62 upload_date = (datetime.datetime.now() - datetime.timedelta(days=days)).strftime('%Y%m%d')
63 else:
64 upload_date = unified_strdate(upload_date)
65
66 like_count = self._html_search_regex(r'<strong>Favorited</strong>\s+([^<]+)<', webpage, 'like_count')
67
68 comment_count = webpage.count('class="media-comment-contents"')
69 uploader_id = self._html_search_regex(r'"thumb-member-username">\s+<a href="/m/([^"]+)"', webpage, 'uploader_id')
70
71 categories = self._html_search_meta('keywords', webpage)
72 if categories:
73 categories = [cat.strip() for cat in categories.split(',')]
74
75 return {
76 'id': video_id,
77 'title': title,
78 'upload_date': upload_date,
79 'uploader_id': uploader_id,
80 'thumbnail': self._og_search_thumbnail(webpage),
81 'categories': categories,
82 'view_count': int_or_none(view_count.replace(',', '')),
83 'like_count': int_or_none(like_count.replace(',', '')),
84 'comment_count': comment_count,
85 'age_limit': age_limit,
86 'url': video_url,
87 }
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/youtube_dl/extractor/motherless.py b/youtube_dl/extractor/motherless.py
--- a/youtube_dl/extractor/motherless.py
+++ b/youtube_dl/extractor/motherless.py
@@ -11,14 +11,14 @@
class MotherlessIE(InfoExtractor):
- _VALID_URL = r'http://(?:www\.)?motherless\.com/(?P<id>[A-Z0-9]+)'
+ _VALID_URL = r'http://(?:www\.)?motherless\.com/(?:g/[a-z0-9_]+/)?(?P<id>[A-Z0-9]+)'
_TESTS = [
{
'url': 'http://motherless.com/AC3FFE1',
- 'md5': '5527fef81d2e529215dad3c2d744a7d9',
+ 'md5': '310f62e325a9fafe64f68c0bccb6e75f',
'info_dict': {
'id': 'AC3FFE1',
- 'ext': 'flv',
+ 'ext': 'mp4',
'title': 'Fucked in the ass while playing PS3',
'categories': ['Gaming', 'anal', 'reluctant', 'rough', 'Wife'],
'upload_date': '20100913',
@@ -40,6 +40,20 @@
'thumbnail': 're:http://.*\.jpg',
'age_limit': 18,
}
+ },
+ {
+ 'url': 'http://motherless.com/g/cosplay/633979F',
+ 'md5': '0b2a43f447a49c3e649c93ad1fafa4a0',
+ 'info_dict': {
+ 'id': '633979F',
+ 'ext': 'mp4',
+ 'title': 'Turtlette',
+ 'categories': ['superheroine heroine superher'],
+ 'upload_date': '20140827',
+ 'uploader_id': 'shade0230',
+ 'thumbnail': 're:http://.*\.jpg',
+ 'age_limit': 18,
+ }
}
]
| {"golden_diff": "diff --git a/youtube_dl/extractor/motherless.py b/youtube_dl/extractor/motherless.py\n--- a/youtube_dl/extractor/motherless.py\n+++ b/youtube_dl/extractor/motherless.py\n@@ -11,14 +11,14 @@\n \n \n class MotherlessIE(InfoExtractor):\n- _VALID_URL = r'http://(?:www\\.)?motherless\\.com/(?P<id>[A-Z0-9]+)'\n+ _VALID_URL = r'http://(?:www\\.)?motherless\\.com/(?:g/[a-z0-9_]+/)?(?P<id>[A-Z0-9]+)'\n _TESTS = [\n {\n 'url': 'http://motherless.com/AC3FFE1',\n- 'md5': '5527fef81d2e529215dad3c2d744a7d9',\n+ 'md5': '310f62e325a9fafe64f68c0bccb6e75f',\n 'info_dict': {\n 'id': 'AC3FFE1',\n- 'ext': 'flv',\n+ 'ext': 'mp4',\n 'title': 'Fucked in the ass while playing PS3',\n 'categories': ['Gaming', 'anal', 'reluctant', 'rough', 'Wife'],\n 'upload_date': '20100913',\n@@ -40,6 +40,20 @@\n 'thumbnail': 're:http://.*\\.jpg',\n 'age_limit': 18,\n }\n+ },\n+ {\n+ 'url': 'http://motherless.com/g/cosplay/633979F',\n+ 'md5': '0b2a43f447a49c3e649c93ad1fafa4a0',\n+ 'info_dict': {\n+ 'id': '633979F',\n+ 'ext': 'mp4',\n+ 'title': 'Turtlette',\n+ 'categories': ['superheroine heroine superher'],\n+ 'upload_date': '20140827',\n+ 'uploader_id': 'shade0230',\n+ 'thumbnail': 're:http://.*\\.jpg',\n+ 'age_limit': 18,\n+ }\n }\n ]\n", "issue": "motherless extractor - test 0 fails\n```\n$ python ~/projects/youtube-dl/test/test_download.py TestDownload.test_Motherless\n[Motherless] AC3FFE1: Downloading webpage\n[info] Writing video description metadata as JSON to: AC3FFE1.info.json\n[debug] Invoking downloader on 'http://s17.motherlessmedia.com/dev386/0/572/287/0572287847.mp4/5cb6d38eccba71d7f6bb2ef260997c3d/544A96C0.mp4'\n[download] Destination: AC3FFE1.mp4\n[download] 100% of 10.00KiB in 00:00\nF\n======================================================================\nFAIL: test_Motherless (__main__.TestDownload)\n----------------------------------------------------------------------\nTraceback (most recent call last):\nFile \"/home/crabman/projects/youtube-dl/test/test_download.py\", line 170, in test_template\n self.assertTrue(os.path.exists(tc_filename), msg='Missing file ' + tc_filename)\nAssertionError: False is not true : Missing file AC3FFE1.flv\n\n----------------------------------------------------------------------\nRan 1 test in 1.690s\n\nFAILED (failures=1)\n```\n\nApparently that video's page no longer give you flv, but instead gives you mp4 file. I am not sure why it happened. 
Maybe motherless doesn't serve flv files anymore, maybe it still does for some videos - I don't know.\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport datetime\nimport re\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n int_or_none,\n unified_strdate,\n)\n\n\nclass MotherlessIE(InfoExtractor):\n _VALID_URL = r'http://(?:www\\.)?motherless\\.com/(?P<id>[A-Z0-9]+)'\n _TESTS = [\n {\n 'url': 'http://motherless.com/AC3FFE1',\n 'md5': '5527fef81d2e529215dad3c2d744a7d9',\n 'info_dict': {\n 'id': 'AC3FFE1',\n 'ext': 'flv',\n 'title': 'Fucked in the ass while playing PS3',\n 'categories': ['Gaming', 'anal', 'reluctant', 'rough', 'Wife'],\n 'upload_date': '20100913',\n 'uploader_id': 'famouslyfuckedup',\n 'thumbnail': 're:http://.*\\.jpg',\n 'age_limit': 18,\n }\n },\n {\n 'url': 'http://motherless.com/532291B',\n 'md5': 'bc59a6b47d1f958e61fbd38a4d31b131',\n 'info_dict': {\n 'id': '532291B',\n 'ext': 'mp4',\n 'title': 'Amazing girl playing the omegle game, PERFECT!',\n 'categories': ['Amateur', 'webcam', 'omegle', 'pink', 'young', 'masturbate', 'teen', 'game', 'hairy'],\n 'upload_date': '20140622',\n 'uploader_id': 'Sulivana7x',\n 'thumbnail': 're:http://.*\\.jpg',\n 'age_limit': 18,\n }\n }\n ]\n\n def _real_extract(self,url):\n mobj = re.match(self._VALID_URL, url)\n video_id = mobj.group('id')\n\n webpage = self._download_webpage(url, video_id)\n\n title = self._html_search_regex(r'id=\"view-upload-title\">\\s+([^<]+)<', webpage, 'title')\n \n video_url = self._html_search_regex(r'setup\\(\\{\\s+\"file\".+: \"([^\"]+)\",', webpage, 'video_url')\n age_limit = self._rta_search(webpage)\n\n view_count = self._html_search_regex(r'<strong>Views</strong>\\s+([^<]+)<', webpage, 'view_count')\n \n upload_date = self._html_search_regex(r'<strong>Uploaded</strong>\\s+([^<]+)<', webpage, 'upload_date')\n if 'Ago' in upload_date:\n days = int(re.search(r'([0-9]+)', upload_date).group(1))\n upload_date = (datetime.datetime.now() - datetime.timedelta(days=days)).strftime('%Y%m%d')\n else:\n upload_date = unified_strdate(upload_date)\n\n like_count = self._html_search_regex(r'<strong>Favorited</strong>\\s+([^<]+)<', webpage, 'like_count')\n\n comment_count = webpage.count('class=\"media-comment-contents\"')\n uploader_id = self._html_search_regex(r'\"thumb-member-username\">\\s+<a href=\"/m/([^\"]+)\"', webpage, 'uploader_id')\n\n categories = self._html_search_meta('keywords', webpage)\n if categories:\n categories = [cat.strip() for cat in categories.split(',')]\n\n return {\n 'id': video_id,\n 'title': title,\n 'upload_date': upload_date,\n 'uploader_id': uploader_id,\n 'thumbnail': self._og_search_thumbnail(webpage),\n 'categories': categories,\n 'view_count': int_or_none(view_count.replace(',', '')),\n 'like_count': int_or_none(like_count.replace(',', '')),\n 'comment_count': comment_count,\n 'age_limit': age_limit,\n 'url': video_url,\n }\n", "path": "youtube_dl/extractor/motherless.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport datetime\nimport re\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n int_or_none,\n unified_strdate,\n)\n\n\nclass MotherlessIE(InfoExtractor):\n _VALID_URL = r'http://(?:www\\.)?motherless\\.com/(?:g/[a-z0-9_]+/)?(?P<id>[A-Z0-9]+)'\n _TESTS = [\n {\n 'url': 'http://motherless.com/AC3FFE1',\n 'md5': '310f62e325a9fafe64f68c0bccb6e75f',\n 'info_dict': {\n 'id': 'AC3FFE1',\n 'ext': 'mp4',\n 'title': 'Fucked in the ass while playing PS3',\n 'categories': ['Gaming', 'anal', 
'reluctant', 'rough', 'Wife'],\n 'upload_date': '20100913',\n 'uploader_id': 'famouslyfuckedup',\n 'thumbnail': 're:http://.*\\.jpg',\n 'age_limit': 18,\n }\n },\n {\n 'url': 'http://motherless.com/532291B',\n 'md5': 'bc59a6b47d1f958e61fbd38a4d31b131',\n 'info_dict': {\n 'id': '532291B',\n 'ext': 'mp4',\n 'title': 'Amazing girl playing the omegle game, PERFECT!',\n 'categories': ['Amateur', 'webcam', 'omegle', 'pink', 'young', 'masturbate', 'teen', 'game', 'hairy'],\n 'upload_date': '20140622',\n 'uploader_id': 'Sulivana7x',\n 'thumbnail': 're:http://.*\\.jpg',\n 'age_limit': 18,\n }\n },\n {\n 'url': 'http://motherless.com/g/cosplay/633979F',\n 'md5': '0b2a43f447a49c3e649c93ad1fafa4a0',\n 'info_dict': {\n 'id': '633979F',\n 'ext': 'mp4',\n 'title': 'Turtlette',\n 'categories': ['superheroine heroine superher'],\n 'upload_date': '20140827',\n 'uploader_id': 'shade0230',\n 'thumbnail': 're:http://.*\\.jpg',\n 'age_limit': 18,\n }\n }\n ]\n\n def _real_extract(self,url):\n mobj = re.match(self._VALID_URL, url)\n video_id = mobj.group('id')\n\n webpage = self._download_webpage(url, video_id)\n\n title = self._html_search_regex(r'id=\"view-upload-title\">\\s+([^<]+)<', webpage, 'title')\n \n video_url = self._html_search_regex(r'setup\\(\\{\\s+\"file\".+: \"([^\"]+)\",', webpage, 'video_url')\n age_limit = self._rta_search(webpage)\n\n view_count = self._html_search_regex(r'<strong>Views</strong>\\s+([^<]+)<', webpage, 'view_count')\n \n upload_date = self._html_search_regex(r'<strong>Uploaded</strong>\\s+([^<]+)<', webpage, 'upload_date')\n if 'Ago' in upload_date:\n days = int(re.search(r'([0-9]+)', upload_date).group(1))\n upload_date = (datetime.datetime.now() - datetime.timedelta(days=days)).strftime('%Y%m%d')\n else:\n upload_date = unified_strdate(upload_date)\n\n like_count = self._html_search_regex(r'<strong>Favorited</strong>\\s+([^<]+)<', webpage, 'like_count')\n\n comment_count = webpage.count('class=\"media-comment-contents\"')\n uploader_id = self._html_search_regex(r'\"thumb-member-username\">\\s+<a href=\"/m/([^\"]+)\"', webpage, 'uploader_id')\n\n categories = self._html_search_meta('keywords', webpage)\n if categories:\n categories = [cat.strip() for cat in categories.split(',')]\n\n return {\n 'id': video_id,\n 'title': title,\n 'upload_date': upload_date,\n 'uploader_id': uploader_id,\n 'thumbnail': self._og_search_thumbnail(webpage),\n 'categories': categories,\n 'view_count': int_or_none(view_count.replace(',', '')),\n 'like_count': int_or_none(like_count.replace(',', '')),\n 'comment_count': comment_count,\n 'age_limit': age_limit,\n 'url': video_url,\n }\n", "path": "youtube_dl/extractor/motherless.py"}]} | 1,688 | 547 |
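A quick sanity check of the widened `_VALID_URL` from the patch above: both the original test URL and the new `/g/<category>/` style resolve to the same `id` group.

```python
import re

_VALID_URL = r'http://(?:www\.)?motherless\.com/(?:g/[a-z0-9_]+/)?(?P<id>[A-Z0-9]+)'

for url in ('http://motherless.com/AC3FFE1',
            'http://motherless.com/g/cosplay/633979F'):
    print(re.match(_VALID_URL, url).group('id'))
# AC3FFE1
# 633979F
```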
gh_patches_debug_14166 | rasdani/github-patches | git_diff | tiangolo__fastapi-454 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for Pydantic's dataclasses
**Describe the question**
I am using a pydantic.dataclasses.dataclass as the response_model of my FastAPI route.
```
from fastapi import FastAPI
from pydantic.dataclasses import dataclass
@dataclass
class Foo:
bar: int
app = FastAPI()
@app.get('/foo', response_model=Foo)
def get_bar() -> Foo:
return Foo(bar=1)
```
**To Reproduce**
Steps to reproduce the behavior:
1. Create a file **app.py** with the above code.
2. Start the app: `uvicorn app:app`
3. Open the browser and go to **localhost:8000/foo**
4. See error:
```
pydantic.error_wrappers.ValidationError: 1 validation error
response
__init__() got an unexpected keyword argument '__initialised__' (type=type_error)
```
**Expected behavior**
No validation error, and the serialized JSON does not contain an `__initialised__` key.
**Environment:**
- OS: Linux
- FastAPI Version 0.25.0
- Python version 3.7.3
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `fastapi/utils.py`
Content:
```
1 import re
2 from typing import Any, Dict, List, Sequence, Set, Type, cast
3
4 from fastapi import routing
5 from fastapi.openapi.constants import REF_PREFIX
6 from pydantic import BaseConfig, BaseModel, Schema, create_model
7 from pydantic.fields import Field
8 from pydantic.schema import get_flat_models_from_fields, model_process_schema
9 from pydantic.utils import lenient_issubclass
10 from starlette.routing import BaseRoute
11
12
13 def get_flat_models_from_routes(routes: Sequence[BaseRoute]) -> Set[Type[BaseModel]]:
14 body_fields_from_routes: List[Field] = []
15 responses_from_routes: List[Field] = []
16 for route in routes:
17 if getattr(route, "include_in_schema", None) and isinstance(
18 route, routing.APIRoute
19 ):
20 if route.body_field:
21 assert isinstance(
22 route.body_field, Field
23 ), "A request body must be a Pydantic Field"
24 body_fields_from_routes.append(route.body_field)
25 if route.response_field:
26 responses_from_routes.append(route.response_field)
27 if route.response_fields:
28 responses_from_routes.extend(route.response_fields.values())
29 flat_models = get_flat_models_from_fields(
30 body_fields_from_routes + responses_from_routes, known_models=set()
31 )
32 return flat_models
33
34
35 def get_model_definitions(
36 *, flat_models: Set[Type[BaseModel]], model_name_map: Dict[Type[BaseModel], str]
37 ) -> Dict[str, Any]:
38 definitions: Dict[str, Dict] = {}
39 for model in flat_models:
40 m_schema, m_definitions, m_nested_models = model_process_schema(
41 model, model_name_map=model_name_map, ref_prefix=REF_PREFIX
42 )
43 definitions.update(m_definitions)
44 model_name = model_name_map[model]
45 definitions[model_name] = m_schema
46 return definitions
47
48
49 def get_path_param_names(path: str) -> Set[str]:
50 return {item.strip("{}") for item in re.findall("{[^}]*}", path)}
51
52
53 def create_cloned_field(field: Field) -> Field:
54 original_type = field.type_
55 use_type = original_type
56 if lenient_issubclass(original_type, BaseModel):
57 original_type = cast(Type[BaseModel], original_type)
58 use_type = create_model( # type: ignore
59 original_type.__name__,
60 __config__=original_type.__config__,
61 __validators__=original_type.__validators__,
62 )
63 for f in original_type.__fields__.values():
64 use_type.__fields__[f.name] = f
65 new_field = Field(
66 name=field.name,
67 type_=use_type,
68 class_validators={},
69 default=None,
70 required=False,
71 model_config=BaseConfig,
72 schema=Schema(None),
73 )
74 new_field.has_alias = field.has_alias
75 new_field.alias = field.alias
76 new_field.class_validators = field.class_validators
77 new_field.default = field.default
78 new_field.required = field.required
79 new_field.model_config = field.model_config
80 new_field.schema = field.schema
81 new_field.allow_none = field.allow_none
82 new_field.validate_always = field.validate_always
83 if field.sub_fields:
84 new_field.sub_fields = [
85 create_cloned_field(sub_field) for sub_field in field.sub_fields
86 ]
87 if field.key_field:
88 new_field.key_field = create_cloned_field(field.key_field)
89 new_field.validators = field.validators
90 new_field.whole_pre_validators = field.whole_pre_validators
91 new_field.whole_post_validators = field.whole_post_validators
92 new_field.parse_json = field.parse_json
93 new_field.shape = field.shape
94 new_field._populate_validators()
95 return new_field
96
97
98 def generate_operation_id_for_path(*, name: str, path: str, method: str) -> str:
99 operation_id = name + path
100 operation_id = operation_id.replace("{", "_").replace("}", "_").replace("/", "_")
101 operation_id = operation_id + "_" + method.lower()
102 return operation_id
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/fastapi/utils.py b/fastapi/utils.py
--- a/fastapi/utils.py
+++ b/fastapi/utils.py
@@ -1,4 +1,5 @@
import re
+from dataclasses import is_dataclass
from typing import Any, Dict, List, Sequence, Set, Type, cast
from fastapi import routing
@@ -52,6 +53,8 @@
def create_cloned_field(field: Field) -> Field:
original_type = field.type_
+ if is_dataclass(original_type) and hasattr(original_type, "__pydantic_model__"):
+ original_type = original_type.__pydantic_model__ # type: ignore
use_type = original_type
if lenient_issubclass(original_type, BaseModel):
original_type = cast(Type[BaseModel], original_type)
| {"golden_diff": "diff --git a/fastapi/utils.py b/fastapi/utils.py\n--- a/fastapi/utils.py\n+++ b/fastapi/utils.py\n@@ -1,4 +1,5 @@\n import re\n+from dataclasses import is_dataclass\n from typing import Any, Dict, List, Sequence, Set, Type, cast\n \n from fastapi import routing\n@@ -52,6 +53,8 @@\n \n def create_cloned_field(field: Field) -> Field:\n original_type = field.type_\n+ if is_dataclass(original_type) and hasattr(original_type, \"__pydantic_model__\"):\n+ original_type = original_type.__pydantic_model__ # type: ignore\n use_type = original_type\n if lenient_issubclass(original_type, BaseModel):\n original_type = cast(Type[BaseModel], original_type)\n", "issue": "Add support for Pydantic's dataclasses\n**Describe the question**\r\nI am using a pydantic.dataclasses.dataclass as a response_model of my fast api route.\r\n\r\n\r\n```\r\nfrom fastapi import FastAPI\r\nfrom pydantic.dataclasses import dataclass\r\n\r\n\r\n@dataclass\r\nclass Foo:\r\n bar: int\r\n\r\napp = FastAPI()\r\n\r\[email protected]('/foo', response_model=Foo)\r\ndef get_bar() -> Foo:\r\n return Foo(bar=1)\r\n```\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Create a file **app.py** with the above code.\r\n2. Start the app: `uvicorn app:app`\r\n3. Open the browser and go to **localhost:8000/foo**\r\n4. See error: \r\n```\r\npydantic.error_wrappers.ValidationError: 1 validation error\r\nresponse\r\n __init__() got an unexpected keyword argument '__initialised__' (type=type_error)\r\n```\r\n\r\n**Expected behavior**\r\nNo validation error and serialized json does not contain any keys `__initialised__`.\r\n\r\n\r\n**Environment:**\r\n - OS: Linux\r\n - FastAPI Version 0.25.0\r\n- Python version 3.7.3\r\n\n", "before_files": [{"content": "import re\nfrom typing import Any, Dict, List, Sequence, Set, Type, cast\n\nfrom fastapi import routing\nfrom fastapi.openapi.constants import REF_PREFIX\nfrom pydantic import BaseConfig, BaseModel, Schema, create_model\nfrom pydantic.fields import Field\nfrom pydantic.schema import get_flat_models_from_fields, model_process_schema\nfrom pydantic.utils import lenient_issubclass\nfrom starlette.routing import BaseRoute\n\n\ndef get_flat_models_from_routes(routes: Sequence[BaseRoute]) -> Set[Type[BaseModel]]:\n body_fields_from_routes: List[Field] = []\n responses_from_routes: List[Field] = []\n for route in routes:\n if getattr(route, \"include_in_schema\", None) and isinstance(\n route, routing.APIRoute\n ):\n if route.body_field:\n assert isinstance(\n route.body_field, Field\n ), \"A request body must be a Pydantic Field\"\n body_fields_from_routes.append(route.body_field)\n if route.response_field:\n responses_from_routes.append(route.response_field)\n if route.response_fields:\n responses_from_routes.extend(route.response_fields.values())\n flat_models = get_flat_models_from_fields(\n body_fields_from_routes + responses_from_routes, known_models=set()\n )\n return flat_models\n\n\ndef get_model_definitions(\n *, flat_models: Set[Type[BaseModel]], model_name_map: Dict[Type[BaseModel], str]\n) -> Dict[str, Any]:\n definitions: Dict[str, Dict] = {}\n for model in flat_models:\n m_schema, m_definitions, m_nested_models = model_process_schema(\n model, model_name_map=model_name_map, ref_prefix=REF_PREFIX\n )\n definitions.update(m_definitions)\n model_name = model_name_map[model]\n definitions[model_name] = m_schema\n return definitions\n\n\ndef get_path_param_names(path: str) -> Set[str]:\n return {item.strip(\"{}\") for item in re.findall(\"{[^}]*}\", 
path)}\n\n\ndef create_cloned_field(field: Field) -> Field:\n original_type = field.type_\n use_type = original_type\n if lenient_issubclass(original_type, BaseModel):\n original_type = cast(Type[BaseModel], original_type)\n use_type = create_model( # type: ignore\n original_type.__name__,\n __config__=original_type.__config__,\n __validators__=original_type.__validators__,\n )\n for f in original_type.__fields__.values():\n use_type.__fields__[f.name] = f\n new_field = Field(\n name=field.name,\n type_=use_type,\n class_validators={},\n default=None,\n required=False,\n model_config=BaseConfig,\n schema=Schema(None),\n )\n new_field.has_alias = field.has_alias\n new_field.alias = field.alias\n new_field.class_validators = field.class_validators\n new_field.default = field.default\n new_field.required = field.required\n new_field.model_config = field.model_config\n new_field.schema = field.schema\n new_field.allow_none = field.allow_none\n new_field.validate_always = field.validate_always\n if field.sub_fields:\n new_field.sub_fields = [\n create_cloned_field(sub_field) for sub_field in field.sub_fields\n ]\n if field.key_field:\n new_field.key_field = create_cloned_field(field.key_field)\n new_field.validators = field.validators\n new_field.whole_pre_validators = field.whole_pre_validators\n new_field.whole_post_validators = field.whole_post_validators\n new_field.parse_json = field.parse_json\n new_field.shape = field.shape\n new_field._populate_validators()\n return new_field\n\n\ndef generate_operation_id_for_path(*, name: str, path: str, method: str) -> str:\n operation_id = name + path\n operation_id = operation_id.replace(\"{\", \"_\").replace(\"}\", \"_\").replace(\"/\", \"_\")\n operation_id = operation_id + \"_\" + method.lower()\n return operation_id\n", "path": "fastapi/utils.py"}], "after_files": [{"content": "import re\nfrom dataclasses import is_dataclass\nfrom typing import Any, Dict, List, Sequence, Set, Type, cast\n\nfrom fastapi import routing\nfrom fastapi.openapi.constants import REF_PREFIX\nfrom pydantic import BaseConfig, BaseModel, Schema, create_model\nfrom pydantic.fields import Field\nfrom pydantic.schema import get_flat_models_from_fields, model_process_schema\nfrom pydantic.utils import lenient_issubclass\nfrom starlette.routing import BaseRoute\n\n\ndef get_flat_models_from_routes(routes: Sequence[BaseRoute]) -> Set[Type[BaseModel]]:\n body_fields_from_routes: List[Field] = []\n responses_from_routes: List[Field] = []\n for route in routes:\n if getattr(route, \"include_in_schema\", None) and isinstance(\n route, routing.APIRoute\n ):\n if route.body_field:\n assert isinstance(\n route.body_field, Field\n ), \"A request body must be a Pydantic Field\"\n body_fields_from_routes.append(route.body_field)\n if route.response_field:\n responses_from_routes.append(route.response_field)\n if route.response_fields:\n responses_from_routes.extend(route.response_fields.values())\n flat_models = get_flat_models_from_fields(\n body_fields_from_routes + responses_from_routes, known_models=set()\n )\n return flat_models\n\n\ndef get_model_definitions(\n *, flat_models: Set[Type[BaseModel]], model_name_map: Dict[Type[BaseModel], str]\n) -> Dict[str, Any]:\n definitions: Dict[str, Dict] = {}\n for model in flat_models:\n m_schema, m_definitions, m_nested_models = model_process_schema(\n model, model_name_map=model_name_map, ref_prefix=REF_PREFIX\n )\n definitions.update(m_definitions)\n model_name = model_name_map[model]\n definitions[model_name] = m_schema\n return 
definitions\n\n\ndef get_path_param_names(path: str) -> Set[str]:\n return {item.strip(\"{}\") for item in re.findall(\"{[^}]*}\", path)}\n\n\ndef create_cloned_field(field: Field) -> Field:\n original_type = field.type_\n if is_dataclass(original_type) and hasattr(original_type, \"__pydantic_model__\"):\n original_type = original_type.__pydantic_model__ # type: ignore\n use_type = original_type\n if lenient_issubclass(original_type, BaseModel):\n original_type = cast(Type[BaseModel], original_type)\n use_type = create_model( # type: ignore\n original_type.__name__,\n __config__=original_type.__config__,\n __validators__=original_type.__validators__,\n )\n for f in original_type.__fields__.values():\n use_type.__fields__[f.name] = f\n new_field = Field(\n name=field.name,\n type_=use_type,\n class_validators={},\n default=None,\n required=False,\n model_config=BaseConfig,\n schema=Schema(None),\n )\n new_field.has_alias = field.has_alias\n new_field.alias = field.alias\n new_field.class_validators = field.class_validators\n new_field.default = field.default\n new_field.required = field.required\n new_field.model_config = field.model_config\n new_field.schema = field.schema\n new_field.allow_none = field.allow_none\n new_field.validate_always = field.validate_always\n if field.sub_fields:\n new_field.sub_fields = [\n create_cloned_field(sub_field) for sub_field in field.sub_fields\n ]\n if field.key_field:\n new_field.key_field = create_cloned_field(field.key_field)\n new_field.validators = field.validators\n new_field.whole_pre_validators = field.whole_pre_validators\n new_field.whole_post_validators = field.whole_post_validators\n new_field.parse_json = field.parse_json\n new_field.shape = field.shape\n new_field._populate_validators()\n return new_field\n\n\ndef generate_operation_id_for_path(*, name: str, path: str, method: str) -> str:\n operation_id = name + path\n operation_id = operation_id.replace(\"{\", \"_\").replace(\"}\", \"_\").replace(\"/\", \"_\")\n operation_id = operation_id + \"_\" + method.lower()\n return operation_id\n", "path": "fastapi/utils.py"}]} | 1,582 | 178 |
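The patch above hinges on pydantic attaching its generated model to the decorated dataclass; a minimal sketch of what the new `is_dataclass`/`__pydantic_model__` check unwraps, assuming a v1-era pydantic:

```python
from dataclasses import is_dataclass
from pydantic.dataclasses import dataclass

@dataclass
class Foo:
    bar: int

assert is_dataclass(Foo)                   # the stdlib check used by the patch
assert hasattr(Foo, "__pydantic_model__")  # pydantic's generated BaseModel
print(Foo.__pydantic_model__.__fields__)   # fields derived from the dataclass
```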
gh_patches_debug_3231 | rasdani/github-patches | git_diff | scikit-image__scikit-image-2784 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deprecate and remove `skimage.novice`
`skimage.novice` was built by and for Software Carpentry. I've recently learned that it is no longer being used, so I don't see much use for it remaining in the package.
If someone feels excited about turning it into a new package, please do so.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/novice/__init__.py`
Content:
```
1 """
2 skimage.novice
3 ==============
4 A special Python image submodule for beginners.
5
6 Description
7 -----------
8 ``skimage.novice`` provides a simple image manipulation interface for
9 beginners. It allows for easy loading, manipulating, and saving of image
10 files.
11
12 This module is primarily intended for teaching and differs significantly from
13 the normal, array-oriented image functions used by scikit-image.
14
15 .. note::
16
17 This module uses the Cartesian coordinate system, where the origin is at
18 the lower-left corner instead of the upper-right and the order is x, y
19 instead of row, column.
20
21
22 Examples
23 --------
24 We can create a Picture object open opening an image file:
25
26 >>> from skimage import novice
27 >>> from skimage import data
28 >>> picture = novice.open(data.data_dir + '/chelsea.png')
29
30 We can display pictures (after running this command, close the window to access the prompt again):
31
32 >>> picture.show() # doctest: +SKIP
33
34 Pictures know their format:
35
36 >>> picture.format
37 'png'
38
39 ... and where they came from:
40
41 >>> picture.path.endswith('chelsea.png')
42 True
43
44 ... and their size:
45
46 >>> picture.size
47 (451, 300)
48 >>> picture.width
49 451
50
51 As a reminder, we can preview the picture with our earlier command:
52
53 >>> picture.show() # doctest: +SKIP
54
55 Changing `size` resizes the picture.
56
57 >>> picture.size = (45, 30)
58
59 We can preview the changes we made to the picture with the ``compare`` command:
60
61 >>> picture.compare() # doctest: +SKIP
62
63 You can iterate over pixels, which have RGB values between 0 and 255,
64 and know their location in the picture.
65
66 >>> for pixel in picture:
67 ... if (pixel.red > 128) and (pixel.x < picture.width):
68 ... pixel.red = pixel.red / 2
69
70 Pictures know if they've been modified from the original file
71
72 >>> picture.modified
73 True
74 >>> print(picture.path)
75 None
76
77 Pictures can be indexed like arrays
78
79 >>> picture[0:20, 0:20] = (0, 0, 0)
80
81 Saving the picture updates the path attribute, format, and modified state.
82
83 >>> picture.save('save-demo.jpg')
84 >>> picture.path.endswith('save-demo.jpg')
85 True
86 >>> picture.format
87 'jpeg'
88 >>> picture.modified
89 False
90
91 An image can also be restored to its original state after modification:
92
93 >>> picture[0:20, 0:20] = (0, 0, 0)
94 >>> picture.compare() # doctest: +SKIP
95 >>> picture.reset()
96 >>> picture.compare() # doctest: +SKIP
97
98 """
99 from ._novice import Picture, open, colors, color_dict
100
101
102 __all__ = ['Picture', 'open', 'colors', 'color_dict']
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/skimage/novice/__init__.py b/skimage/novice/__init__.py
--- a/skimage/novice/__init__.py
+++ b/skimage/novice/__init__.py
@@ -96,7 +96,11 @@
>>> picture.compare() # doctest: +SKIP
"""
+import warnings
from ._novice import Picture, open, colors, color_dict
+warnings.warn("This module was deprecated in version 0.14. "
+ "It will be removed in 0.16.")
+
__all__ = ['Picture', 'open', 'colors', 'color_dict']
| {"golden_diff": "diff --git a/skimage/novice/__init__.py b/skimage/novice/__init__.py\n--- a/skimage/novice/__init__.py\n+++ b/skimage/novice/__init__.py\n@@ -96,7 +96,11 @@\n >>> picture.compare() # doctest: +SKIP\n \n \"\"\"\n+import warnings\n from ._novice import Picture, open, colors, color_dict\n \n \n+warnings.warn(\"This module was deprecated in version 0.14. \"\n+ \"It will be removed in 0.16.\")\n+\n __all__ = ['Picture', 'open', 'colors', 'color_dict']\n", "issue": "Deprecate and remove `skimage.novice`\n`skimage.novice` was built by and for Software Carpentry. I've recently learned that it is no longer being used, so I don't see much use for it remaining in the package.\r\n\r\nIf someone feels excited about turning it into a new package, please do so.\n", "before_files": [{"content": "\"\"\"\nskimage.novice\n==============\nA special Python image submodule for beginners.\n\nDescription\n-----------\n``skimage.novice`` provides a simple image manipulation interface for\nbeginners. It allows for easy loading, manipulating, and saving of image\nfiles.\n\nThis module is primarily intended for teaching and differs significantly from\nthe normal, array-oriented image functions used by scikit-image.\n\n.. note::\n\n This module uses the Cartesian coordinate system, where the origin is at\n the lower-left corner instead of the upper-right and the order is x, y\n instead of row, column.\n\n\nExamples\n--------\nWe can create a Picture object open opening an image file:\n\n>>> from skimage import novice\n>>> from skimage import data\n>>> picture = novice.open(data.data_dir + '/chelsea.png')\n\nWe can display pictures (after running this command, close the window to access the prompt again):\n\n>>> picture.show() # doctest: +SKIP\n\nPictures know their format:\n\n>>> picture.format\n'png'\n\n... and where they came from:\n\n>>> picture.path.endswith('chelsea.png')\nTrue\n\n... and their size:\n\n>>> picture.size\n(451, 300)\n>>> picture.width\n451\n\nAs a reminder, we can preview the picture with our earlier command:\n\n>>> picture.show() # doctest: +SKIP\n\nChanging `size` resizes the picture.\n\n>>> picture.size = (45, 30)\n\nWe can preview the changes we made to the picture with the ``compare`` command:\n\n>>> picture.compare() # doctest: +SKIP\n\nYou can iterate over pixels, which have RGB values between 0 and 255,\nand know their location in the picture.\n\n>>> for pixel in picture:\n... if (pixel.red > 128) and (pixel.x < picture.width):\n... 
pixel.red = pixel.red / 2\n\nPictures know if they've been modified from the original file\n\n>>> picture.modified\nTrue\n>>> print(picture.path)\nNone\n\nPictures can be indexed like arrays\n\n>>> picture[0:20, 0:20] = (0, 0, 0)\n\nSaving the picture updates the path attribute, format, and modified state.\n\n>>> picture.save('save-demo.jpg')\n>>> picture.path.endswith('save-demo.jpg')\nTrue\n>>> picture.format\n'jpeg'\n>>> picture.modified\nFalse\n\nAn image can also be restored to its original state after modification:\n\n>>> picture[0:20, 0:20] = (0, 0, 0)\n>>> picture.compare() # doctest: +SKIP\n>>> picture.reset()\n>>> picture.compare() # doctest: +SKIP\n\n\"\"\"\nfrom ._novice import Picture, open, colors, color_dict\n\n\n__all__ = ['Picture', 'open', 'colors', 'color_dict']\n", "path": "skimage/novice/__init__.py"}], "after_files": [{"content": "\"\"\"\nskimage.novice\n==============\nA special Python image submodule for beginners.\n\nDescription\n-----------\n``skimage.novice`` provides a simple image manipulation interface for\nbeginners. It allows for easy loading, manipulating, and saving of image\nfiles.\n\nThis module is primarily intended for teaching and differs significantly from\nthe normal, array-oriented image functions used by scikit-image.\n\n.. note::\n\n This module uses the Cartesian coordinate system, where the origin is at\n the lower-left corner instead of the upper-right and the order is x, y\n instead of row, column.\n\n\nExamples\n--------\nWe can create a Picture object open opening an image file:\n\n>>> from skimage import novice\n>>> from skimage import data\n>>> picture = novice.open(data.data_dir + '/chelsea.png')\n\nWe can display pictures (after running this command, close the window to access the prompt again):\n\n>>> picture.show() # doctest: +SKIP\n\nPictures know their format:\n\n>>> picture.format\n'png'\n\n... and where they came from:\n\n>>> picture.path.endswith('chelsea.png')\nTrue\n\n... and their size:\n\n>>> picture.size\n(451, 300)\n>>> picture.width\n451\n\nAs a reminder, we can preview the picture with our earlier command:\n\n>>> picture.show() # doctest: +SKIP\n\nChanging `size` resizes the picture.\n\n>>> picture.size = (45, 30)\n\nWe can preview the changes we made to the picture with the ``compare`` command:\n\n>>> picture.compare() # doctest: +SKIP\n\nYou can iterate over pixels, which have RGB values between 0 and 255,\nand know their location in the picture.\n\n>>> for pixel in picture:\n... if (pixel.red > 128) and (pixel.x < picture.width):\n... pixel.red = pixel.red / 2\n\nPictures know if they've been modified from the original file\n\n>>> picture.modified\nTrue\n>>> print(picture.path)\nNone\n\nPictures can be indexed like arrays\n\n>>> picture[0:20, 0:20] = (0, 0, 0)\n\nSaving the picture updates the path attribute, format, and modified state.\n\n>>> picture.save('save-demo.jpg')\n>>> picture.path.endswith('save-demo.jpg')\nTrue\n>>> picture.format\n'jpeg'\n>>> picture.modified\nFalse\n\nAn image can also be restored to its original state after modification:\n\n>>> picture[0:20, 0:20] = (0, 0, 0)\n>>> picture.compare() # doctest: +SKIP\n>>> picture.reset()\n>>> picture.compare() # doctest: +SKIP\n\n\"\"\"\nimport warnings\nfrom ._novice import Picture, open, colors, color_dict\n\n\nwarnings.warn(\"This module was deprecated in version 0.14. \"\n \"It will be removed in 0.16.\")\n\n__all__ = ['Picture', 'open', 'colors', 'color_dict']\n", "path": "skimage/novice/__init__.py"}]} | 1,160 | 146 |
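For reference, the import-time warning added by the patch above can be observed (or silenced) with the standard `warnings` machinery; a sketch assuming scikit-image 0.14 is installed (first import only, since Python caches modules):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    from skimage import novice  # triggers the module-level warnings.warn(...)

print([str(w.message) for w in caught])
# ['This module was deprecated in version 0.14. It will be removed in 0.16.']
```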
gh_patches_debug_21859 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-1540 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add readthedocs documentation for pika instrumentation
Part of [1491](https://github.com/open-telemetry/opentelemetry-python-contrib/issues/1491)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `instrumentation/opentelemetry-instrumentation-pika/src/opentelemetry/instrumentation/pika/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 Instrument `pika` to trace RabbitMQ applications.
16
17 Usage
18 -----
19
20 * Start broker backend
21
22 .. code-block:: python
23
24 docker run -p 5672:5672 rabbitmq
25
26 * Run instrumented task
27
28 .. code-block:: python
29
30 import pika
31 from opentelemetry.instrumentation.pika import PikaInstrumentor
32
33 PikaInstrumentor().instrument()
34
35 connection = pika.BlockingConnection(pika.URLParameters('amqp://localhost'))
36 channel = connection.channel()
37 channel.queue_declare(queue='hello')
38 channel.basic_publish(exchange='', routing_key='hello', body=b'Hello World!')
39
40 * PikaInstrumentor also supports instrumentation of a single channel
41
42 .. code-block:: python
43
44 import pika
45 from opentelemetry.instrumentation.pika import PikaInstrumentor
46
47 connection = pika.BlockingConnection(pika.URLParameters('amqp://localhost'))
48 channel = connection.channel()
49 channel.queue_declare(queue='hello')
50
51 pika_instrumentation = PikaInstrumentor()
52 pika_instrumentation.instrument_channel(channel=channel)
53
54
55 channel.basic_publish(exchange='', routing_key='hello', body=b'Hello World!')
56
57 pika_instrumentation.uninstrument_channel(channel=channel)
58
59 * PikaInstrumentor also supports instrumentation without creating an object, and receiving a tracer_provider
60
61 .. code-block:: python
62
63 PikaInstrumentor.instrument_channel(channel, tracer_provider=tracer_provider)
64
65 * PikaInstrumentor also supports instrumenting with hooks that will be called when producing or consuming a message.
66 The hooks should be of type `Callable[[Span, bytes, BasicProperties], None]`
67 where the first parameter is the span, the second parameter is the message body
68 and the third parameter is the message properties
69
70 .. code-block:: python
71
72 def publish_hook(span: Span, body: bytes, properties: BasicProperties):
73 span.set_attribute("messaging.payload", body.decode())
74
75 def consume_hook(span: Span, body: bytes, properties: BasicProperties):
76 span.set_attribute("messaging.id", properties.message_id)
77
78 PikaInstrumentor.instrument_channel(channel, publish_hook=publish_hook, consume_hook=consume_hook)
79
80 API
81 ---
82 """
83 # pylint: disable=import-error
84
85 from .pika_instrumentor import PikaInstrumentor
86 from .version import __version__
87
88 __all__ = ["PikaInstrumentor", "__version__"]
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/instrumentation/opentelemetry-instrumentation-pika/src/opentelemetry/instrumentation/pika/__init__.py b/instrumentation/opentelemetry-instrumentation-pika/src/opentelemetry/instrumentation/pika/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-pika/src/opentelemetry/instrumentation/pika/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-pika/src/opentelemetry/instrumentation/pika/__init__.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
"""
-Instrument `pika` to trace RabbitMQ applications.
+Instrument pika to trace RabbitMQ applications.
Usage
-----
@@ -63,7 +63,7 @@
PikaInstrumentor.instrument_channel(channel, tracer_provider=tracer_provider)
* PikaInstrumentor also supports instrumenting with hooks that will be called when producing or consuming a message.
- The hooks should be of type `Callable[[Span, bytes, BasicProperties], None]`
+ The hooks should be of type "Callable[[Span, bytes, BasicProperties], None]"
where the first parameter is the span, the second parameter is the message body
and the third parameter is the message properties
| {"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-pika/src/opentelemetry/instrumentation/pika/__init__.py b/instrumentation/opentelemetry-instrumentation-pika/src/opentelemetry/instrumentation/pika/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-pika/src/opentelemetry/instrumentation/pika/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-pika/src/opentelemetry/instrumentation/pika/__init__.py\n@@ -12,7 +12,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \"\"\"\n-Instrument `pika` to trace RabbitMQ applications.\n+Instrument pika to trace RabbitMQ applications.\n \n Usage\n -----\n@@ -63,7 +63,7 @@\n PikaInstrumentor.instrument_channel(channel, tracer_provider=tracer_provider)\n \n * PikaInstrumentor also supports instrumenting with hooks that will be called when producing or consuming a message.\n- The hooks should be of type `Callable[[Span, bytes, BasicProperties], None]`\n+ The hooks should be of type \"Callable[[Span, bytes, BasicProperties], None]\"\n where the first parameter is the span, the second parameter is the message body\n and the third parameter is the message properties\n", "issue": "Add readthedocs documentation for pika instrumentation\nPart of [1491](https://github.com/open-telemetry/opentelemetry-python-contrib/issues/1491)\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nInstrument `pika` to trace RabbitMQ applications.\n\nUsage\n-----\n\n* Start broker backend\n\n.. code-block:: python\n\n docker run -p 5672:5672 rabbitmq\n\n* Run instrumented task\n\n.. code-block:: python\n\n import pika\n from opentelemetry.instrumentation.pika import PikaInstrumentor\n\n PikaInstrumentor().instrument()\n\n connection = pika.BlockingConnection(pika.URLParameters('amqp://localhost'))\n channel = connection.channel()\n channel.queue_declare(queue='hello')\n channel.basic_publish(exchange='', routing_key='hello', body=b'Hello World!')\n\n* PikaInstrumentor also supports instrumentation of a single channel\n\n.. code-block:: python\n\n import pika\n from opentelemetry.instrumentation.pika import PikaInstrumentor\n\n connection = pika.BlockingConnection(pika.URLParameters('amqp://localhost'))\n channel = connection.channel()\n channel.queue_declare(queue='hello')\n\n pika_instrumentation = PikaInstrumentor()\n pika_instrumentation.instrument_channel(channel=channel)\n\n\n channel.basic_publish(exchange='', routing_key='hello', body=b'Hello World!')\n\n pika_instrumentation.uninstrument_channel(channel=channel)\n\n* PikaInstrumentor also supports instrumentation without creating an object, and receiving a tracer_provider\n\n.. 
code-block:: python\n\n PikaInstrumentor.instrument_channel(channel, tracer_provider=tracer_provider)\n\n* PikaInstrumentor also supports instrumenting with hooks that will be called when producing or consuming a message.\n The hooks should be of type `Callable[[Span, bytes, BasicProperties], None]`\n where the first parameter is the span, the second parameter is the message body\n and the third parameter is the message properties\n\n.. code-block:: python\n\n def publish_hook(span: Span, body: bytes, properties: BasicProperties):\n span.set_attribute(\"messaging.payload\", body.decode())\n\n def consume_hook(span: Span, body: bytes, properties: BasicProperties):\n span.set_attribute(\"messaging.id\", properties.message_id)\n\n PikaInstrumentor.instrument_channel(channel, publish_hook=publish_hook, consume_hook=consume_hook)\n\nAPI\n---\n\"\"\"\n# pylint: disable=import-error\n\nfrom .pika_instrumentor import PikaInstrumentor\nfrom .version import __version__\n\n__all__ = [\"PikaInstrumentor\", \"__version__\"]\n", "path": "instrumentation/opentelemetry-instrumentation-pika/src/opentelemetry/instrumentation/pika/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nInstrument pika to trace RabbitMQ applications.\n\nUsage\n-----\n\n* Start broker backend\n\n.. code-block:: python\n\n docker run -p 5672:5672 rabbitmq\n\n* Run instrumented task\n\n.. code-block:: python\n\n import pika\n from opentelemetry.instrumentation.pika import PikaInstrumentor\n\n PikaInstrumentor().instrument()\n\n connection = pika.BlockingConnection(pika.URLParameters('amqp://localhost'))\n channel = connection.channel()\n channel.queue_declare(queue='hello')\n channel.basic_publish(exchange='', routing_key='hello', body=b'Hello World!')\n\n* PikaInstrumentor also supports instrumentation of a single channel\n\n.. code-block:: python\n\n import pika\n from opentelemetry.instrumentation.pika import PikaInstrumentor\n\n connection = pika.BlockingConnection(pika.URLParameters('amqp://localhost'))\n channel = connection.channel()\n channel.queue_declare(queue='hello')\n\n pika_instrumentation = PikaInstrumentor()\n pika_instrumentation.instrument_channel(channel=channel)\n\n\n channel.basic_publish(exchange='', routing_key='hello', body=b'Hello World!')\n\n pika_instrumentation.uninstrument_channel(channel=channel)\n\n* PikaInstrumentor also supports instrumentation without creating an object, and receiving a tracer_provider\n\n.. code-block:: python\n\n PikaInstrumentor.instrument_channel(channel, tracer_provider=tracer_provider)\n\n* PikaInstrumentor also supports instrumenting with hooks that will be called when producing or consuming a message.\n The hooks should be of type \"Callable[[Span, bytes, BasicProperties], None]\"\n where the first parameter is the span, the second parameter is the message body\n and the third parameter is the message properties\n\n.. 
code-block:: python\n\n def publish_hook(span: Span, body: bytes, properties: BasicProperties):\n span.set_attribute(\"messaging.payload\", body.decode())\n\n def consume_hook(span: Span, body: bytes, properties: BasicProperties):\n span.set_attribute(\"messaging.id\", properties.message_id)\n\n PikaInstrumentor.instrument_channel(channel, publish_hook=publish_hook, consume_hook=consume_hook)\n\nAPI\n---\n\"\"\"\n# pylint: disable=import-error\n\nfrom .pika_instrumentor import PikaInstrumentor\nfrom .version import __version__\n\n__all__ = [\"PikaInstrumentor\", \"__version__\"]\n", "path": "instrumentation/opentelemetry-instrumentation-pika/src/opentelemetry/instrumentation/pika/__init__.py"}]} | 1,142 | 283 |
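A note on the record above, whose diff only swaps backtick-quoted names in the module docstring for plain or double-quoted text: in reStructuredText, single backticks mark interpreted text for the default role, so a docs build may try to resolve `` `pika` `` as a cross-reference and emit warnings. The sketch below uses docutils to show the parse difference; the use of docutils here is an illustrative assumption and is not part of the record:

```python
from docutils.core import publish_doctree

# The backticked word parses as a <title_reference> node (role-interpreted
# text), while a plain or double-quoted word stays ordinary text.
doc = publish_doctree("Instrument `pika` to trace RabbitMQ applications.")
print(doc.pformat())
```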
gh_patches_debug_15296 | rasdani/github-patches | git_diff | litestar-org__litestar-1748 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks whether the given directories exist in the local filesystem.
That assumption does not hold in general, especially for any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
--- END ISSUE ---
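To make the reported failure concrete, here is a minimal reproduction sketch. It assumes pydantic's `DirectoryPath` type; the config class below is a simplified stand-in for illustration, not the repository's real class:

```python
from typing import List

from pydantic import BaseModel, DirectoryPath, ValidationError

class StaticFilesConfigSketch(BaseModel):
    # simplified stand-in for the real StaticFilesConfig
    directories: List[DirectoryPath]

try:
    # A path that is only meaningful inside a zipped package or other
    # virtual filesystem fails validation, because DirectoryPath requires
    # an existing directory on the local disk.
    StaticFilesConfigSketch(directories=["my_package/static"])
except ValidationError as exc:
    print(exc)
```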
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/contrib/pydantic.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING, Generic, TypeVar
4
5 from pydantic import BaseModel
6
7 from litestar.dto.factory.abc import AbstractDTOFactory
8 from litestar.dto.factory.field import DTO_FIELD_META_KEY, DTOField
9 from litestar.dto.factory.types import FieldDefinition
10 from litestar.dto.factory.utils import get_model_type_hints
11 from litestar.types.empty import Empty
12 from litestar.utils.helpers import get_fully_qualified_class_name
13
14 if TYPE_CHECKING:
15 from typing import Any, ClassVar, Collection, Generator
16
17 from pydantic.fields import ModelField
18
19 from litestar.typing import ParsedType
20
21 __all__ = ("PydanticDTO",)
22
23 T = TypeVar("T", bound="BaseModel | Collection[BaseModel]")
24
25
26 class PydanticDTO(AbstractDTOFactory[T], Generic[T]):
27 """Support for domain modelling with Pydantic."""
28
29 __slots__ = ()
30
31 model_type: ClassVar[type[BaseModel]]
32
33 @classmethod
34 def generate_field_definitions(cls, model_type: type[BaseModel]) -> Generator[FieldDefinition, None, None]:
35 for key, parsed_type in get_model_type_hints(model_type).items():
36 model_field = model_type.__fields__[key]
37
38 dto_field: DTOField | None = model_field.field_info.extra.get(DTO_FIELD_META_KEY)
39
40 def determine_default(_parsed_type: ParsedType, _model_field: ModelField) -> Any:
41 if (
42 _model_field.default is Ellipsis
43 or _model_field.default_factory is not None
44 or (_model_field.default is None and not _parsed_type.is_optional)
45 ):
46 return Empty
47
48 return _model_field.default
49
50 field_def = FieldDefinition(
51 name=key,
52 default=determine_default(parsed_type, model_field),
53 parsed_type=parsed_type,
54 default_factory=model_field.default_factory or Empty,
55 dto_field=dto_field,
56 unique_model_name=get_fully_qualified_class_name(model_type),
57 )
58
59 yield field_def
60
61 @classmethod
62 def detect_nested_field(cls, parsed_type: ParsedType) -> bool:
63 return parsed_type.is_subclass_of(BaseModel)
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/litestar/contrib/pydantic.py b/litestar/contrib/pydantic.py
--- a/litestar/contrib/pydantic.py
+++ b/litestar/contrib/pydantic.py
@@ -32,9 +32,10 @@
@classmethod
def generate_field_definitions(cls, model_type: type[BaseModel]) -> Generator[FieldDefinition, None, None]:
- for key, parsed_type in get_model_type_hints(model_type).items():
+ model_parsed_types = get_model_type_hints(model_type)
+ for key, model_field in model_type.__fields__.items():
+ parsed_type = model_parsed_types[key]
model_field = model_type.__fields__[key]
-
dto_field: DTOField | None = model_field.field_info.extra.get(DTO_FIELD_META_KEY)
def determine_default(_parsed_type: ParsedType, _model_field: ModelField) -> Any:
| {"golden_diff": "diff --git a/litestar/contrib/pydantic.py b/litestar/contrib/pydantic.py\n--- a/litestar/contrib/pydantic.py\n+++ b/litestar/contrib/pydantic.py\n@@ -32,9 +32,10 @@\n \n @classmethod\n def generate_field_definitions(cls, model_type: type[BaseModel]) -> Generator[FieldDefinition, None, None]:\n- for key, parsed_type in get_model_type_hints(model_type).items():\n+ model_parsed_types = get_model_type_hints(model_type)\n+ for key, model_field in model_type.__fields__.items():\n+ parsed_type = model_parsed_types[key]\n model_field = model_type.__fields__[key]\n-\n dto_field: DTOField | None = model_field.field_info.extra.get(DTO_FIELD_META_KEY)\n \n def determine_default(_parsed_type: ParsedType, _model_field: ModelField) -> Any:\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Generic, TypeVar\n\nfrom pydantic import BaseModel\n\nfrom litestar.dto.factory.abc import AbstractDTOFactory\nfrom litestar.dto.factory.field import DTO_FIELD_META_KEY, DTOField\nfrom litestar.dto.factory.types import FieldDefinition\nfrom litestar.dto.factory.utils import get_model_type_hints\nfrom litestar.types.empty import Empty\nfrom litestar.utils.helpers import get_fully_qualified_class_name\n\nif TYPE_CHECKING:\n from typing import Any, ClassVar, Collection, Generator\n\n from pydantic.fields import ModelField\n\n from litestar.typing import ParsedType\n\n__all__ = (\"PydanticDTO\",)\n\nT = TypeVar(\"T\", bound=\"BaseModel | Collection[BaseModel]\")\n\n\nclass PydanticDTO(AbstractDTOFactory[T], Generic[T]):\n \"\"\"Support for domain modelling with Pydantic.\"\"\"\n\n __slots__ = ()\n\n model_type: ClassVar[type[BaseModel]]\n\n @classmethod\n def generate_field_definitions(cls, model_type: type[BaseModel]) -> Generator[FieldDefinition, None, None]:\n for key, parsed_type in get_model_type_hints(model_type).items():\n model_field = model_type.__fields__[key]\n\n dto_field: DTOField | None = model_field.field_info.extra.get(DTO_FIELD_META_KEY)\n\n def determine_default(_parsed_type: ParsedType, _model_field: ModelField) -> Any:\n if (\n _model_field.default is Ellipsis\n or _model_field.default_factory is not None\n or (_model_field.default is None and not _parsed_type.is_optional)\n ):\n return Empty\n\n return _model_field.default\n\n field_def = FieldDefinition(\n name=key,\n default=determine_default(parsed_type, model_field),\n parsed_type=parsed_type,\n default_factory=model_field.default_factory or Empty,\n dto_field=dto_field,\n unique_model_name=get_fully_qualified_class_name(model_type),\n )\n\n yield field_def\n\n @classmethod\n def detect_nested_field(cls, parsed_type: ParsedType) -> bool:\n return parsed_type.is_subclass_of(BaseModel)\n", "path": "litestar/contrib/pydantic.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import 
TYPE_CHECKING, Generic, TypeVar\n\nfrom pydantic import BaseModel\n\nfrom litestar.dto.factory.abc import AbstractDTOFactory\nfrom litestar.dto.factory.field import DTO_FIELD_META_KEY, DTOField\nfrom litestar.dto.factory.types import FieldDefinition\nfrom litestar.dto.factory.utils import get_model_type_hints\nfrom litestar.types.empty import Empty\nfrom litestar.utils.helpers import get_fully_qualified_class_name\n\nif TYPE_CHECKING:\n from typing import Any, ClassVar, Collection, Generator\n\n from pydantic.fields import ModelField\n\n from litestar.typing import ParsedType\n\n__all__ = (\"PydanticDTO\",)\n\nT = TypeVar(\"T\", bound=\"BaseModel | Collection[BaseModel]\")\n\n\nclass PydanticDTO(AbstractDTOFactory[T], Generic[T]):\n \"\"\"Support for domain modelling with Pydantic.\"\"\"\n\n __slots__ = ()\n\n model_type: ClassVar[type[BaseModel]]\n\n @classmethod\n def generate_field_definitions(cls, model_type: type[BaseModel]) -> Generator[FieldDefinition, None, None]:\n model_parsed_types = get_model_type_hints(model_type)\n for key, model_field in model_type.__fields__.items():\n parsed_type = model_parsed_types[key]\n model_field = model_type.__fields__[key]\n dto_field: DTOField | None = model_field.field_info.extra.get(DTO_FIELD_META_KEY)\n\n def determine_default(_parsed_type: ParsedType, _model_field: ModelField) -> Any:\n if (\n _model_field.default is Ellipsis\n or _model_field.default_factory is not None\n or (_model_field.default is None and not _parsed_type.is_optional)\n ):\n return Empty\n\n return _model_field.default\n\n field_def = FieldDefinition(\n name=key,\n default=determine_default(parsed_type, model_field),\n parsed_type=parsed_type,\n default_factory=model_field.default_factory or Empty,\n dto_field=dto_field,\n unique_model_name=get_fully_qualified_class_name(model_type),\n )\n\n yield field_def\n\n @classmethod\n def detect_nested_field(cls, parsed_type: ParsedType) -> bool:\n return parsed_type.is_subclass_of(BaseModel)\n", "path": "litestar/contrib/pydantic.py"}]} | 1,024 | 204 |
gh_patches_debug_23893 | rasdani/github-patches | git_diff | vispy__vispy-2179 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update volume_plane.py to allow timer to be stopped
I noticed that volume_plane.py has keyboard handling for moving the plane through the volume, but it doesn't allow you to stop the animation, so these events don't actually mean much.

@alisterburt do you have time to fix this? The convention is to use the spacebar (`" "`) to toggle the timer start/stop.
--- END ISSUE ---
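The conventional fix is exactly the spacebar toggle suggested above. A minimal sketch of such a handler, assuming the canvas and timer setup from the file listing that follows and vispy's `app.Timer` API with its `running` attribute and `start()`/`stop()` methods (the record's golden diff applies the same idea inside the existing `on_key_press` handler):

```python
from vispy import app

# assumes a running vispy application, as in the script below
timer = app.Timer('auto', connect=lambda event: None, start=True)

def on_key_press(event):
    # Spacebar toggles the animation on and off.
    if event.text == ' ':
        if timer.running:
            timer.stop()
        else:
            timer.start()
```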
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/scene/volume_plane.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # -----------------------------------------------------------------------------
3 # Copyright (c) Vispy Development Team. All Rights Reserved.
4 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
5 # -----------------------------------------------------------------------------
6 # vispy: gallery 10:200:5
7 """
8 Rendering Planes through 3D Data
9 ================================
10
11 Controls:
12 * 1 - toggle between volume rendering methods
13 * 2 - toggle between volume rendering modes ('volume', 'plane')
14 * [] - shift plane along plane normal
15 * {} - decrease/increase plane thickness
16
17 * x/y/z/o - set plane normal along x/y/z or [1,1,1] oblique axis
18 """
19 import sys
20
21 import numpy as np
22
23 from vispy import app, scene, io
24 from vispy.visuals.transforms import STTransform
25
26 # Read volume
27 vol = np.load(io.load_data_file('volume/stent.npz'))['arr_0']
28
29 # Prepare canvas
30 canvas = scene.SceneCanvas(keys='interactive', show=True)
31 view = canvas.central_widget.add_view()
32
33 # Create the volume visual for plane rendering
34 plane = scene.visuals.Volume(
35 vol,
36 parent=view.scene,
37 raycasting_mode='plane',
38 method='mip',
39 plane_thickness=3.0,
40 plane_position=(128, 60, 64),
41 plane_normal=(1, 0, 0),
42 )
43
44 volume = scene.visuals.Volume(
45 vol,
46 parent=view.scene,
47 raycasting_mode='volume',
48 method='mip',
49 )
50 volume.set_gl_state('additive')
51 volume.opacity = 0.25
52
53 # Create a camera
54 cam = scene.cameras.TurntableCamera(
55 parent=view.scene, fov=60.0, azimuth=-42.0, elevation=30.0
56 )
57 view.camera = cam
58
59 # Create an XYZAxis visual
60 axis = scene.visuals.XYZAxis(parent=view)
61 s = STTransform(translate=(50, 50), scale=(50, 50, 50, 1))
62 affine = s.as_matrix()
63 axis.transform = affine
64
65
66 def update_axis_visual():
67 """Sync XYZAxis visual with camera angles"""
68 axis.transform.reset()
69
70 axis.transform.rotate(cam.roll, (0, 0, 1))
71 axis.transform.rotate(cam.elevation, (1, 0, 0))
72 axis.transform.rotate(cam.azimuth, (0, 1, 0))
73 axis.transform.scale((50, 50, 0.001))
74 axis.transform.translate((50., 50.))
75
76 axis.update()
77
78
79 update_axis_visual()
80
81
82 @canvas.events.mouse_move.connect
83 def on_mouse_move(event):
84 if event.button == 1 and event.is_dragging:
85 update_axis_visual()
86
87
88 # Implement key presses
89 @canvas.events.key_press.connect
90 def on_key_press(event):
91 if event.text == '1':
92 methods = ['mip', 'average']
93 method = methods[(methods.index(plane.method) + 1) % 2]
94 print("Volume render method: %s" % method)
95 plane.method = method
96 elif event.text == '2':
97 modes = ['volume', 'plane']
98 if plane.raycasting_mode == modes[0]:
99 plane.raycasting_mode = modes[1]
100 print(modes[1])
101 else:
102 plane.raycasting_mode = modes[0]
103 print(modes[0])
104 elif event.text != '' and event.text in '{}':
105 t = -1 if event.text == '{' else 1
106 plane.plane_thickness += t
107 plane.plane_thickness += t
108 print(f"plane thickness: {plane.plane_thickness}")
109 elif event.text != '' and event.text in '[]':
110 shift = plane.plane_normal / np.linalg.norm(plane.plane_normal)
111 if event.text == '[':
112 plane.plane_position -= 2 * shift
113 elif event.text == ']':
114 plane.plane_position += 2 * shift
115 print(f"plane position: {plane.plane_position}")
116
117 elif event.text == 'x':
118 plane.plane_normal = [0, 0, 1]
119 elif event.text == 'y':
120 plane.plane_normal = [0, 1, 0]
121 elif event.text == 'z':
122 plane.plane_normal = [1, 0, 0]
123 elif event.text == 'o':
124 plane.plane_normal = [1, 1, 1]
125
126
127 def move_plane(event):
128 z_pos = plane.plane_position[0]
129 if z_pos < 32:
130 plane.plane_position = plane.plane_position + [1, 0, 0]
131 elif 32 < z_pos <= 220:
132 plane.plane_position = plane.plane_position - [1, 0, 0]
133 else:
134 plane.plane_position = (220, 64, 64)
135
136
137 timer = app.Timer('auto', connect=move_plane, start=True)
138
139 if __name__ == '__main__':
140 canvas.show()
141 print(__doc__)
142 if sys.flags.interactive == 0:
143 plane.plane_position = (220, 64, 64)
144 app.run()
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/scene/volume_plane.py b/examples/scene/volume_plane.py
--- a/examples/scene/volume_plane.py
+++ b/examples/scene/volume_plane.py
@@ -13,6 +13,7 @@
* 2 - toggle between volume rendering modes ('volume', 'plane')
* [] - shift plane along plane normal
* {} - decrease/increase plane thickness
+* Spacebar - stop/start animation
* x/y/z/o - set plane normal along x/y/z or [1,1,1] oblique axis
"""
@@ -113,7 +114,6 @@
elif event.text == ']':
plane.plane_position += 2 * shift
print(f"plane position: {plane.plane_position}")
-
elif event.text == 'x':
plane.plane_normal = [0, 0, 1]
elif event.text == 'y':
@@ -122,6 +122,11 @@
plane.plane_normal = [1, 0, 0]
elif event.text == 'o':
plane.plane_normal = [1, 1, 1]
+ elif event.text == ' ':
+ if timer.running:
+ timer.stop()
+ else:
+ timer.start()
def move_plane(event):
| {"golden_diff": "diff --git a/examples/scene/volume_plane.py b/examples/scene/volume_plane.py\n--- a/examples/scene/volume_plane.py\n+++ b/examples/scene/volume_plane.py\n@@ -13,6 +13,7 @@\n * 2 - toggle between volume rendering modes ('volume', 'plane')\n * [] - shift plane along plane normal\n * {} - decrease/increase plane thickness\n+* Spacebar - stop/start animation\n \n * x/y/z/o - set plane normal along x/y/z or [1,1,1] oblique axis\n \"\"\"\n@@ -113,7 +114,6 @@\n elif event.text == ']':\n plane.plane_position += 2 * shift\n print(f\"plane position: {plane.plane_position}\")\n-\n elif event.text == 'x':\n plane.plane_normal = [0, 0, 1]\n elif event.text == 'y':\n@@ -122,6 +122,11 @@\n plane.plane_normal = [1, 0, 0]\n elif event.text == 'o':\n plane.plane_normal = [1, 1, 1]\n+ elif event.text == ' ':\n+ if timer.running:\n+ timer.stop()\n+ else:\n+ timer.start()\n \n \n def move_plane(event):\n", "issue": "Update volume_plane.py to allow timer to be stopped\nI noticed the volume_plane.py has keyboard handling for moving the plane through the volume, but it doesn't allow you to stop the animation so these events don't actually mean much.\r\n\r\n@alisterburt do you have time to fix this? Tradition is to use the spacebar (`\" \"`) to toggle the timer start/stop\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# -----------------------------------------------------------------------------\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n# -----------------------------------------------------------------------------\n# vispy: gallery 10:200:5\n\"\"\"\nRendering Planes through 3D Data\n================================\n\nControls:\n* 1 - toggle between volume rendering methods\n* 2 - toggle between volume rendering modes ('volume', 'plane')\n* [] - shift plane along plane normal\n* {} - decrease/increase plane thickness\n\n* x/y/z/o - set plane normal along x/y/z or [1,1,1] oblique axis\n\"\"\"\nimport sys\n\nimport numpy as np\n\nfrom vispy import app, scene, io\nfrom vispy.visuals.transforms import STTransform\n\n# Read volume\nvol = np.load(io.load_data_file('volume/stent.npz'))['arr_0']\n\n# Prepare canvas\ncanvas = scene.SceneCanvas(keys='interactive', show=True)\nview = canvas.central_widget.add_view()\n\n# Create the volume visual for plane rendering\nplane = scene.visuals.Volume(\n vol,\n parent=view.scene,\n raycasting_mode='plane',\n method='mip',\n plane_thickness=3.0,\n plane_position=(128, 60, 64),\n plane_normal=(1, 0, 0),\n)\n\nvolume = scene.visuals.Volume(\n vol,\n parent=view.scene,\n raycasting_mode='volume',\n method='mip',\n)\nvolume.set_gl_state('additive')\nvolume.opacity = 0.25\n\n# Create a camera\ncam = scene.cameras.TurntableCamera(\n parent=view.scene, fov=60.0, azimuth=-42.0, elevation=30.0\n)\nview.camera = cam\n\n# Create an XYZAxis visual\naxis = scene.visuals.XYZAxis(parent=view)\ns = STTransform(translate=(50, 50), scale=(50, 50, 50, 1))\naffine = s.as_matrix()\naxis.transform = affine\n\n\ndef update_axis_visual():\n \"\"\"Sync XYZAxis visual with camera angles\"\"\"\n axis.transform.reset()\n\n axis.transform.rotate(cam.roll, (0, 0, 1))\n axis.transform.rotate(cam.elevation, (1, 0, 0))\n axis.transform.rotate(cam.azimuth, (0, 1, 0))\n axis.transform.scale((50, 50, 0.001))\n axis.transform.translate((50., 50.))\n\n axis.update()\n\n\nupdate_axis_visual()\n\n\[email protected]_move.connect\ndef on_mouse_move(event):\n if event.button == 1 and 
event.is_dragging:\n update_axis_visual()\n\n\n# Implement key presses\[email protected]_press.connect\ndef on_key_press(event):\n if event.text == '1':\n methods = ['mip', 'average']\n method = methods[(methods.index(plane.method) + 1) % 2]\n print(\"Volume render method: %s\" % method)\n plane.method = method\n elif event.text == '2':\n modes = ['volume', 'plane']\n if plane.raycasting_mode == modes[0]:\n plane.raycasting_mode = modes[1]\n print(modes[1])\n else:\n plane.raycasting_mode = modes[0]\n print(modes[0])\n elif event.text != '' and event.text in '{}':\n t = -1 if event.text == '{' else 1\n plane.plane_thickness += t\n plane.plane_thickness += t\n print(f\"plane thickness: {plane.plane_thickness}\")\n elif event.text != '' and event.text in '[]':\n shift = plane.plane_normal / np.linalg.norm(plane.plane_normal)\n if event.text == '[':\n plane.plane_position -= 2 * shift\n elif event.text == ']':\n plane.plane_position += 2 * shift\n print(f\"plane position: {plane.plane_position}\")\n\n elif event.text == 'x':\n plane.plane_normal = [0, 0, 1]\n elif event.text == 'y':\n plane.plane_normal = [0, 1, 0]\n elif event.text == 'z':\n plane.plane_normal = [1, 0, 0]\n elif event.text == 'o':\n plane.plane_normal = [1, 1, 1]\n\n\ndef move_plane(event):\n z_pos = plane.plane_position[0]\n if z_pos < 32:\n plane.plane_position = plane.plane_position + [1, 0, 0]\n elif 32 < z_pos <= 220:\n plane.plane_position = plane.plane_position - [1, 0, 0]\n else:\n plane.plane_position = (220, 64, 64)\n\n\ntimer = app.Timer('auto', connect=move_plane, start=True)\n\nif __name__ == '__main__':\n canvas.show()\n print(__doc__)\n if sys.flags.interactive == 0:\n plane.plane_position = (220, 64, 64)\n app.run()\n", "path": "examples/scene/volume_plane.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# -----------------------------------------------------------------------------\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. 
See LICENSE.txt for more info.\n# -----------------------------------------------------------------------------\n# vispy: gallery 10:200:5\n\"\"\"\nRendering Planes through 3D Data\n================================\n\nControls:\n* 1 - toggle between volume rendering methods\n* 2 - toggle between volume rendering modes ('volume', 'plane')\n* [] - shift plane along plane normal\n* {} - decrease/increase plane thickness\n* Spacebar - stop/start animation\n\n* x/y/z/o - set plane normal along x/y/z or [1,1,1] oblique axis\n\"\"\"\nimport sys\n\nimport numpy as np\n\nfrom vispy import app, scene, io\nfrom vispy.visuals.transforms import STTransform\n\n# Read volume\nvol = np.load(io.load_data_file('volume/stent.npz'))['arr_0']\n\n# Prepare canvas\ncanvas = scene.SceneCanvas(keys='interactive', show=True)\nview = canvas.central_widget.add_view()\n\n# Create the volume visual for plane rendering\nplane = scene.visuals.Volume(\n vol,\n parent=view.scene,\n raycasting_mode='plane',\n method='mip',\n plane_thickness=3.0,\n plane_position=(128, 60, 64),\n plane_normal=(1, 0, 0),\n)\n\nvolume = scene.visuals.Volume(\n vol,\n parent=view.scene,\n raycasting_mode='volume',\n method='mip',\n)\nvolume.set_gl_state('additive')\nvolume.opacity = 0.25\n\n# Create a camera\ncam = scene.cameras.TurntableCamera(\n parent=view.scene, fov=60.0, azimuth=-42.0, elevation=30.0\n)\nview.camera = cam\n\n# Create an XYZAxis visual\naxis = scene.visuals.XYZAxis(parent=view)\ns = STTransform(translate=(50, 50), scale=(50, 50, 50, 1))\naffine = s.as_matrix()\naxis.transform = affine\n\n\ndef update_axis_visual():\n \"\"\"Sync XYZAxis visual with camera angles\"\"\"\n axis.transform.reset()\n\n axis.transform.rotate(cam.roll, (0, 0, 1))\n axis.transform.rotate(cam.elevation, (1, 0, 0))\n axis.transform.rotate(cam.azimuth, (0, 1, 0))\n axis.transform.scale((50, 50, 0.001))\n axis.transform.translate((50., 50.))\n\n axis.update()\n\n\nupdate_axis_visual()\n\n\[email protected]_move.connect\ndef on_mouse_move(event):\n if event.button == 1 and event.is_dragging:\n update_axis_visual()\n\n\n# Implement key presses\[email protected]_press.connect\ndef on_key_press(event):\n if event.text == '1':\n methods = ['mip', 'average']\n method = methods[(methods.index(plane.method) + 1) % 2]\n print(\"Volume render method: %s\" % method)\n plane.method = method\n elif event.text == '2':\n modes = ['volume', 'plane']\n if plane.raycasting_mode == modes[0]:\n plane.raycasting_mode = modes[1]\n print(modes[1])\n else:\n plane.raycasting_mode = modes[0]\n print(modes[0])\n elif event.text != '' and event.text in '{}':\n t = -1 if event.text == '{' else 1\n plane.plane_thickness += t\n plane.plane_thickness += t\n print(f\"plane thickness: {plane.plane_thickness}\")\n elif event.text != '' and event.text in '[]':\n shift = plane.plane_normal / np.linalg.norm(plane.plane_normal)\n if event.text == '[':\n plane.plane_position -= 2 * shift\n elif event.text == ']':\n plane.plane_position += 2 * shift\n print(f\"plane position: {plane.plane_position}\")\n elif event.text == 'x':\n plane.plane_normal = [0, 0, 1]\n elif event.text == 'y':\n plane.plane_normal = [0, 1, 0]\n elif event.text == 'z':\n plane.plane_normal = [1, 0, 0]\n elif event.text == 'o':\n plane.plane_normal = [1, 1, 1]\n elif event.text == ' ':\n if timer.running:\n timer.stop()\n else:\n timer.start()\n\n\ndef move_plane(event):\n z_pos = plane.plane_position[0]\n if z_pos < 32:\n plane.plane_position = plane.plane_position + [1, 0, 0]\n elif 32 < z_pos <= 220:\n 
plane.plane_position = plane.plane_position - [1, 0, 0]\n else:\n plane.plane_position = (220, 64, 64)\n\n\ntimer = app.Timer('auto', connect=move_plane, start=True)\n\nif __name__ == '__main__':\n canvas.show()\n print(__doc__)\n if sys.flags.interactive == 0:\n plane.plane_position = (220, 64, 64)\n app.run()\n", "path": "examples/scene/volume_plane.py"}]} | 1,813 | 286 |
gh_patches_debug_34571 | rasdani/github-patches | git_diff | iterative__dvc-5080 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
config: add --list to list all variables and their values
This would be handy for users and for us during debugging, in the same spirit as `git config --list`. It would also help avoid confusion such as in https://github.com/iterative/dvc/issues/4970
--- END ISSUE ---
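To illustrate the requested behavior, here is a minimal sketch of listing a nested config as dotted `key=value` lines, the same shape `git config --list` produces. The flattening helper below is a stand-alone stand-in written for this sketch; the actual patch shown later reuses DVC's own `flatten` utility:

```python
conf = {"core": {"remote": "myremote"},
        "remote": {"myremote": {"url": "s3://bucket"}}}

def flatten(d, prefix=""):
    # Recursively turn nested dicts into dotted keys.
    for key, value in d.items():
        dotted = f"{prefix}{key}"
        if isinstance(value, dict):
            yield from flatten(value, dotted + ".")
        else:
            yield f"{dotted}={value}"

print("\n".join(flatten(conf)))
# core.remote=myremote
# remote.myremote.url=s3://bucket
```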
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/command/config.py`
Content:
```
1 import argparse
2 import logging
3
4 from dvc.command.base import CmdBaseNoRepo, append_doc_link
5 from dvc.config import Config, ConfigError
6
7 logger = logging.getLogger(__name__)
8
9
10 class CmdConfig(CmdBaseNoRepo):
11 def __init__(self, args):
12 super().__init__(args)
13
14 self.config = Config(validate=False)
15
16 def run(self):
17 section, opt = self.args.name.lower().strip().split(".", 1)
18
19 if self.args.value is None and not self.args.unset:
20 conf = self.config.load_one(self.args.level)
21 self._check(conf, section, opt)
22 logger.info(conf[section][opt])
23 return 0
24
25 with self.config.edit(self.args.level) as conf:
26 if self.args.unset:
27 self._check(conf, section, opt)
28 del conf[section][opt]
29 else:
30 self._check(conf, section)
31 conf[section][opt] = self.args.value
32
33 if self.args.name == "cache.type":
34 logger.warning(
35 "You have changed the 'cache.type' option. This doesn't update"
36 " any existing workspace file links, but it can be done with:"
37 "\n dvc checkout --relink"
38 )
39
40 return 0
41
42 def _check(self, conf, section, opt=None):
43 if section not in conf:
44 msg = "section {} doesn't exist"
45 raise ConfigError(msg.format(self.args.name))
46
47 if opt and opt not in conf[section]:
48 msg = "option {} doesn't exist"
49 raise ConfigError(msg.format(self.args.name))
50
51
52 parent_config_parser = argparse.ArgumentParser(add_help=False)
53 level_group = parent_config_parser.add_mutually_exclusive_group()
54 level_group.add_argument(
55 "--global",
56 dest="level",
57 action="store_const",
58 const="global",
59 help="Use global config.",
60 )
61 level_group.add_argument(
62 "--system",
63 dest="level",
64 action="store_const",
65 const="system",
66 help="Use system config.",
67 )
68 level_group.add_argument(
69 "--local",
70 dest="level",
71 action="store_const",
72 const="local",
73 help="Use local config.",
74 )
75 parent_config_parser.set_defaults(level="repo")
76
77
78 def add_parser(subparsers, parent_parser):
79 CONFIG_HELP = "Get or set config options."
80
81 config_parser = subparsers.add_parser(
82 "config",
83 parents=[parent_config_parser, parent_parser],
84 description=append_doc_link(CONFIG_HELP, "config"),
85 help=CONFIG_HELP,
86 formatter_class=argparse.RawDescriptionHelpFormatter,
87 )
88 config_parser.add_argument(
89 "-u",
90 "--unset",
91 default=False,
92 action="store_true",
93 help="Unset option.",
94 )
95 config_parser.add_argument("name", help="Option name.")
96 config_parser.add_argument("value", nargs="?", help="Option value.")
97 config_parser.set_defaults(func=CmdConfig)
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dvc/command/config.py b/dvc/command/config.py
--- a/dvc/command/config.py
+++ b/dvc/command/config.py
@@ -3,6 +3,7 @@
from dvc.command.base import CmdBaseNoRepo, append_doc_link
from dvc.config import Config, ConfigError
+from dvc.utils.flatten import flatten
logger = logging.getLogger(__name__)
@@ -14,6 +15,22 @@
self.config = Config(validate=False)
def run(self):
+ if self.args.list:
+ if any((self.args.name, self.args.value, self.args.unset)):
+ logger.error(
+ "-l/--list can't be used together with any of these "
+ "options: -u/--unset, name, value"
+ )
+ return 1
+
+ conf = self.config.load_one(self.args.level)
+ logger.info("\n".join(self._format_config(conf)))
+ return 0
+
+ if self.args.name is None:
+ logger.error("name argument is required")
+ return 1
+
section, opt = self.args.name.lower().strip().split(".", 1)
if self.args.value is None and not self.args.unset:
@@ -48,6 +65,11 @@
msg = "option {} doesn't exist"
raise ConfigError(msg.format(self.args.name))
+ @staticmethod
+ def _format_config(config):
+ for key, value in flatten(config).items():
+ yield f"{key}={value}"
+
parent_config_parser = argparse.ArgumentParser(add_help=False)
level_group = parent_config_parser.add_mutually_exclusive_group()
@@ -92,6 +114,13 @@
action="store_true",
help="Unset option.",
)
- config_parser.add_argument("name", help="Option name.")
+ config_parser.add_argument("name", nargs="?", help="Option name.")
config_parser.add_argument("value", nargs="?", help="Option value.")
+ config_parser.add_argument(
+ "-l",
+ "--list",
+ default=False,
+ action="store_true",
+ help="list all defined config values",
+ )
config_parser.set_defaults(func=CmdConfig)
| {"golden_diff": "diff --git a/dvc/command/config.py b/dvc/command/config.py\n--- a/dvc/command/config.py\n+++ b/dvc/command/config.py\n@@ -3,6 +3,7 @@\n \n from dvc.command.base import CmdBaseNoRepo, append_doc_link\n from dvc.config import Config, ConfigError\n+from dvc.utils.flatten import flatten\n \n logger = logging.getLogger(__name__)\n \n@@ -14,6 +15,22 @@\n self.config = Config(validate=False)\n \n def run(self):\n+ if self.args.list:\n+ if any((self.args.name, self.args.value, self.args.unset)):\n+ logger.error(\n+ \"-l/--list can't be used together with any of these \"\n+ \"options: -u/--unset, name, value\"\n+ )\n+ return 1\n+\n+ conf = self.config.load_one(self.args.level)\n+ logger.info(\"\\n\".join(self._format_config(conf)))\n+ return 0\n+\n+ if self.args.name is None:\n+ logger.error(\"name argument is required\")\n+ return 1\n+\n section, opt = self.args.name.lower().strip().split(\".\", 1)\n \n if self.args.value is None and not self.args.unset:\n@@ -48,6 +65,11 @@\n msg = \"option {} doesn't exist\"\n raise ConfigError(msg.format(self.args.name))\n \n+ @staticmethod\n+ def _format_config(config):\n+ for key, value in flatten(config).items():\n+ yield f\"{key}={value}\"\n+\n \n parent_config_parser = argparse.ArgumentParser(add_help=False)\n level_group = parent_config_parser.add_mutually_exclusive_group()\n@@ -92,6 +114,13 @@\n action=\"store_true\",\n help=\"Unset option.\",\n )\n- config_parser.add_argument(\"name\", help=\"Option name.\")\n+ config_parser.add_argument(\"name\", nargs=\"?\", help=\"Option name.\")\n config_parser.add_argument(\"value\", nargs=\"?\", help=\"Option value.\")\n+ config_parser.add_argument(\n+ \"-l\",\n+ \"--list\",\n+ default=False,\n+ action=\"store_true\",\n+ help=\"list all defined config values\",\n+ )\n config_parser.set_defaults(func=CmdConfig)\n", "issue": "config: add --list to list all variables and their values\nSomething that would be handy for us during debugging and for users. Same as `git config --list`. Useful to avoid confusion such as in https://github.com/iterative/dvc/issues/4970\n", "before_files": [{"content": "import argparse\nimport logging\n\nfrom dvc.command.base import CmdBaseNoRepo, append_doc_link\nfrom dvc.config import Config, ConfigError\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdConfig(CmdBaseNoRepo):\n def __init__(self, args):\n super().__init__(args)\n\n self.config = Config(validate=False)\n\n def run(self):\n section, opt = self.args.name.lower().strip().split(\".\", 1)\n\n if self.args.value is None and not self.args.unset:\n conf = self.config.load_one(self.args.level)\n self._check(conf, section, opt)\n logger.info(conf[section][opt])\n return 0\n\n with self.config.edit(self.args.level) as conf:\n if self.args.unset:\n self._check(conf, section, opt)\n del conf[section][opt]\n else:\n self._check(conf, section)\n conf[section][opt] = self.args.value\n\n if self.args.name == \"cache.type\":\n logger.warning(\n \"You have changed the 'cache.type' option. 
This doesn't update\"\n \" any existing workspace file links, but it can be done with:\"\n \"\\n dvc checkout --relink\"\n )\n\n return 0\n\n def _check(self, conf, section, opt=None):\n if section not in conf:\n msg = \"section {} doesn't exist\"\n raise ConfigError(msg.format(self.args.name))\n\n if opt and opt not in conf[section]:\n msg = \"option {} doesn't exist\"\n raise ConfigError(msg.format(self.args.name))\n\n\nparent_config_parser = argparse.ArgumentParser(add_help=False)\nlevel_group = parent_config_parser.add_mutually_exclusive_group()\nlevel_group.add_argument(\n \"--global\",\n dest=\"level\",\n action=\"store_const\",\n const=\"global\",\n help=\"Use global config.\",\n)\nlevel_group.add_argument(\n \"--system\",\n dest=\"level\",\n action=\"store_const\",\n const=\"system\",\n help=\"Use system config.\",\n)\nlevel_group.add_argument(\n \"--local\",\n dest=\"level\",\n action=\"store_const\",\n const=\"local\",\n help=\"Use local config.\",\n)\nparent_config_parser.set_defaults(level=\"repo\")\n\n\ndef add_parser(subparsers, parent_parser):\n CONFIG_HELP = \"Get or set config options.\"\n\n config_parser = subparsers.add_parser(\n \"config\",\n parents=[parent_config_parser, parent_parser],\n description=append_doc_link(CONFIG_HELP, \"config\"),\n help=CONFIG_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n config_parser.add_argument(\n \"-u\",\n \"--unset\",\n default=False,\n action=\"store_true\",\n help=\"Unset option.\",\n )\n config_parser.add_argument(\"name\", help=\"Option name.\")\n config_parser.add_argument(\"value\", nargs=\"?\", help=\"Option value.\")\n config_parser.set_defaults(func=CmdConfig)\n", "path": "dvc/command/config.py"}], "after_files": [{"content": "import argparse\nimport logging\n\nfrom dvc.command.base import CmdBaseNoRepo, append_doc_link\nfrom dvc.config import Config, ConfigError\nfrom dvc.utils.flatten import flatten\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdConfig(CmdBaseNoRepo):\n def __init__(self, args):\n super().__init__(args)\n\n self.config = Config(validate=False)\n\n def run(self):\n if self.args.list:\n if any((self.args.name, self.args.value, self.args.unset)):\n logger.error(\n \"-l/--list can't be used together with any of these \"\n \"options: -u/--unset, name, value\"\n )\n return 1\n\n conf = self.config.load_one(self.args.level)\n logger.info(\"\\n\".join(self._format_config(conf)))\n return 0\n\n if self.args.name is None:\n logger.error(\"name argument is required\")\n return 1\n\n section, opt = self.args.name.lower().strip().split(\".\", 1)\n\n if self.args.value is None and not self.args.unset:\n conf = self.config.load_one(self.args.level)\n self._check(conf, section, opt)\n logger.info(conf[section][opt])\n return 0\n\n with self.config.edit(self.args.level) as conf:\n if self.args.unset:\n self._check(conf, section, opt)\n del conf[section][opt]\n else:\n self._check(conf, section)\n conf[section][opt] = self.args.value\n\n if self.args.name == \"cache.type\":\n logger.warning(\n \"You have changed the 'cache.type' option. 
This doesn't update\"\n \" any existing workspace file links, but it can be done with:\"\n \"\\n dvc checkout --relink\"\n )\n\n return 0\n\n def _check(self, conf, section, opt=None):\n if section not in conf:\n msg = \"section {} doesn't exist\"\n raise ConfigError(msg.format(self.args.name))\n\n if opt and opt not in conf[section]:\n msg = \"option {} doesn't exist\"\n raise ConfigError(msg.format(self.args.name))\n\n @staticmethod\n def _format_config(config):\n for key, value in flatten(config).items():\n yield f\"{key}={value}\"\n\n\nparent_config_parser = argparse.ArgumentParser(add_help=False)\nlevel_group = parent_config_parser.add_mutually_exclusive_group()\nlevel_group.add_argument(\n \"--global\",\n dest=\"level\",\n action=\"store_const\",\n const=\"global\",\n help=\"Use global config.\",\n)\nlevel_group.add_argument(\n \"--system\",\n dest=\"level\",\n action=\"store_const\",\n const=\"system\",\n help=\"Use system config.\",\n)\nlevel_group.add_argument(\n \"--local\",\n dest=\"level\",\n action=\"store_const\",\n const=\"local\",\n help=\"Use local config.\",\n)\nparent_config_parser.set_defaults(level=\"repo\")\n\n\ndef add_parser(subparsers, parent_parser):\n CONFIG_HELP = \"Get or set config options.\"\n\n config_parser = subparsers.add_parser(\n \"config\",\n parents=[parent_config_parser, parent_parser],\n description=append_doc_link(CONFIG_HELP, \"config\"),\n help=CONFIG_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n config_parser.add_argument(\n \"-u\",\n \"--unset\",\n default=False,\n action=\"store_true\",\n help=\"Unset option.\",\n )\n config_parser.add_argument(\"name\", nargs=\"?\", help=\"Option name.\")\n config_parser.add_argument(\"value\", nargs=\"?\", help=\"Option value.\")\n config_parser.add_argument(\n \"-l\",\n \"--list\",\n default=False,\n action=\"store_true\",\n help=\"list all defined config values\",\n )\n config_parser.set_defaults(func=CmdConfig)\n", "path": "dvc/command/config.py"}]} | 1,130 | 495 |
gh_patches_debug_26903 | rasdani/github-patches | git_diff | ipython__ipython-6139 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use WebSocket.ping to keep connections alive across proxies
I am accessing a remote IPython Notebook and need to use a proxy (Squid/SOCKS) for internet access.
As long as I am running cells, everything works fine, but after one minute of inactivity the proxy seems to close the connection, resulting in the following Firefox console log:
```
The connection to wss://example.org/kernels/$uuid/shell was interrupted while the page
was loading. @ https://example.org/static/js/kernel.js
```
The websocket connection is reestablished immediately (and interrupted again after one minute), but running cells does not work anymore until I manually restart the kernel.
```
ipython --version
0.13.1.rc2
```
Sending any message via each websocket (shell+iopub) avoids the interruption. This is a quick and _dirty_ client-side workaround:
```
from IPython.core.display import HTML
HTML('''<script>
var keepalive_timer;
function keep_connection_alive() {
IPython.notebook.kernel.execute();
keepalive_timer = setTimeout('keep_connection_alive()', 55000);
}
keep_connection_alive();
</script>''')
```
WebSocket has ping and pong frames for keep-alive, and tornado has a ping() function to send such a frame. Could the ping function be called somewhere in the kernel code to keep connections alive even when going through a proxy?
--- END ISSUE ---
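A server-side sketch of the keep-alive the reporter asks for, using tornado's `WebSocketHandler.ping` and `ioloop.PeriodicCallback`. The handler below is a simplified illustration, not the notebook's actual handler class, and the interval is an assumption chosen to stay under the proxy's one-minute idle timeout; the record's golden diff implements the same idea on the notebook handler:

```python
from tornado import ioloop, websocket

WS_PING_INTERVAL_MS = 30000  # well under a one-minute proxy idle timeout

class KeepAliveWebSocket(websocket.WebSocketHandler):
    def open(self):
        self._pinger = ioloop.PeriodicCallback(self._send_ping,
                                               WS_PING_INTERVAL_MS)
        self._pinger.start()

    def _send_ping(self):
        # An empty ping frame is enough to keep intermediaries from
        # dropping an idle connection.
        self.ping(b'')

    def on_close(self):
        self._pinger.stop()
```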
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `IPython/html/base/zmqhandlers.py`
Content:
```
1 """Tornado handlers for WebSocket <-> ZMQ sockets."""
2
3 # Copyright (c) IPython Development Team.
4 # Distributed under the terms of the Modified BSD License.
5
6 import json
7
8 try:
9 from urllib.parse import urlparse # Py 3
10 except ImportError:
11 from urlparse import urlparse # Py 2
12
13 try:
14 from http.cookies import SimpleCookie # Py 3
15 except ImportError:
16 from Cookie import SimpleCookie # Py 2
17 import logging
18
19 import tornado
20 from tornado import web
21 from tornado import websocket
22
23 from IPython.kernel.zmq.session import Session
24 from IPython.utils.jsonutil import date_default
25 from IPython.utils.py3compat import PY3, cast_unicode
26
27 from .handlers import IPythonHandler
28
29
30 class ZMQStreamHandler(websocket.WebSocketHandler):
31
32 def check_origin(self, origin):
33 """Check Origin == Host or Access-Control-Allow-Origin.
34
35 Tornado >= 4 calls this method automatically, raising 403 if it returns False.
36 We call it explicitly in `open` on Tornado < 4.
37 """
38 if self.allow_origin == '*':
39 return True
40
41 host = self.request.headers.get("Host")
42
43 # If no header is provided, assume we can't verify origin
44 if(origin is None or host is None):
45 return False
46
47 host_origin = "{0}://{1}".format(self.request.protocol, host)
48
49 # OK if origin matches host
50 if origin == host_origin:
51 return True
52
53 # Check CORS headers
54 if self.allow_origin:
55 return self.allow_origin == origin
56 elif self.allow_origin_pat:
57 return bool(self.allow_origin_pat.match(origin))
58 else:
59 # No CORS headers deny the request
60 return False
61
62 def clear_cookie(self, *args, **kwargs):
63 """meaningless for websockets"""
64 pass
65
66 def _reserialize_reply(self, msg_list):
67 """Reserialize a reply message using JSON.
68
69 This takes the msg list from the ZMQ socket, unserializes it using
70 self.session and then serializes the result using JSON. This method
71 should be used by self._on_zmq_reply to build messages that can
72 be sent back to the browser.
73 """
74 idents, msg_list = self.session.feed_identities(msg_list)
75 msg = self.session.unserialize(msg_list)
76 try:
77 msg['header'].pop('date')
78 except KeyError:
79 pass
80 try:
81 msg['parent_header'].pop('date')
82 except KeyError:
83 pass
84 msg.pop('buffers')
85 return json.dumps(msg, default=date_default)
86
87 def _on_zmq_reply(self, msg_list):
88 # Sometimes this gets triggered when the on_close method is scheduled in the
89 # eventloop but hasn't been called.
90 if self.stream.closed(): return
91 try:
92 msg = self._reserialize_reply(msg_list)
93 except Exception:
94 self.log.critical("Malformed message: %r" % msg_list, exc_info=True)
95 else:
96 self.write_message(msg)
97
98 def allow_draft76(self):
99 """Allow draft 76, until browsers such as Safari update to RFC 6455.
100
101 This has been disabled by default in tornado in release 2.2.0, and
102 support will be removed in later versions.
103 """
104 return True
105
106
107 class AuthenticatedZMQStreamHandler(ZMQStreamHandler, IPythonHandler):
108 def set_default_headers(self):
109 """Undo the set_default_headers in IPythonHandler
110
111 which doesn't make sense for websockets
112 """
113 pass
114
115 def open(self, kernel_id):
116 self.kernel_id = cast_unicode(kernel_id, 'ascii')
117 # Check to see that origin matches host directly, including ports
118 # Tornado 4 already does CORS checking
119 if tornado.version_info[0] < 4:
120 if not self.check_origin(self.get_origin()):
121 self.log.warn("Cross Origin WebSocket Attempt from %s", self.get_origin())
122 raise web.HTTPError(403)
123
124 self.session = Session(config=self.config)
125 self.save_on_message = self.on_message
126 self.on_message = self.on_first_message
127
128 def _inject_cookie_message(self, msg):
129 """Inject the first message, which is the document cookie,
130 for authentication."""
131 if not PY3 and isinstance(msg, unicode):
132 # Cookie constructor doesn't accept unicode strings
133 # under Python 2.x for some reason
134 msg = msg.encode('utf8', 'replace')
135 try:
136 identity, msg = msg.split(':', 1)
137 self.session.session = cast_unicode(identity, 'ascii')
138 except Exception:
139 logging.error("First ws message didn't have the form 'identity:[cookie]' - %r", msg)
140
141 try:
142 self.request._cookies = SimpleCookie(msg)
143 except:
144 self.log.warn("couldn't parse cookie string: %s",msg, exc_info=True)
145
146 def on_first_message(self, msg):
147 self._inject_cookie_message(msg)
148 if self.get_current_user() is None:
149 self.log.warn("Couldn't authenticate WebSocket connection")
150 raise web.HTTPError(403)
151 self.on_message = self.save_on_message
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/IPython/html/base/zmqhandlers.py b/IPython/html/base/zmqhandlers.py
--- a/IPython/html/base/zmqhandlers.py
+++ b/IPython/html/base/zmqhandlers.py
@@ -17,6 +17,7 @@
import logging
import tornado
+from tornado import ioloop
from tornado import web
from tornado import websocket
@@ -103,8 +104,12 @@
"""
return True
+# ping interval for keeping websockets alive (30 seconds)
+WS_PING_INTERVAL = 30000
class AuthenticatedZMQStreamHandler(ZMQStreamHandler, IPythonHandler):
+ ping_callback = None
+
def set_default_headers(self):
"""Undo the set_default_headers in IPythonHandler
@@ -124,6 +129,16 @@
self.session = Session(config=self.config)
self.save_on_message = self.on_message
self.on_message = self.on_first_message
+ self.ping_callback = ioloop.PeriodicCallback(self.send_ping, WS_PING_INTERVAL)
+ self.ping_callback.start()
+
+ def send_ping(self):
+ """send a ping to keep the websocket alive"""
+ if self.stream.closed() and self.ping_callback is not None:
+ self.ping_callback.stop()
+ return
+
+ self.ping(b'')
def _inject_cookie_message(self, msg):
"""Inject the first message, which is the document cookie,
| {"golden_diff": "diff --git a/IPython/html/base/zmqhandlers.py b/IPython/html/base/zmqhandlers.py\n--- a/IPython/html/base/zmqhandlers.py\n+++ b/IPython/html/base/zmqhandlers.py\n@@ -17,6 +17,7 @@\n import logging\n \n import tornado\n+from tornado import ioloop\n from tornado import web\n from tornado import websocket\n \n@@ -103,8 +104,12 @@\n \"\"\"\n return True\n \n+# ping interval for keeping websockets alive (30 seconds)\n+WS_PING_INTERVAL = 30000\n \n class AuthenticatedZMQStreamHandler(ZMQStreamHandler, IPythonHandler):\n+ ping_callback = None\n+\n def set_default_headers(self):\n \"\"\"Undo the set_default_headers in IPythonHandler\n \n@@ -124,6 +129,16 @@\n self.session = Session(config=self.config)\n self.save_on_message = self.on_message\n self.on_message = self.on_first_message\n+ self.ping_callback = ioloop.PeriodicCallback(self.send_ping, WS_PING_INTERVAL)\n+ self.ping_callback.start()\n+\n+ def send_ping(self):\n+ \"\"\"send a ping to keep the websocket alive\"\"\"\n+ if self.stream.closed() and self.ping_callback is not None:\n+ self.ping_callback.stop()\n+ return\n+\n+ self.ping(b'')\n \n def _inject_cookie_message(self, msg):\n \"\"\"Inject the first message, which is the document cookie,\n", "issue": "Use WebSocket.ping to keep connections alive across proxies\nI am accessing a remote IPython Notebook and need to use a proxy (Squid/SOCKS) for internet access.\nAs long as I am running cells everything works fine, but after one minute of inactivity the proxy seems to close the connection, resulting in the following Firefox console log:\n\n```\nThe connection to wss://example.org/kernels/$uuid/shell was interrupted while the page \nwas loading. @ https://example.org/static/js/kernel.js\n```\n\nThe websocket connection is reestablished immediately (and interrupted again after one minute), but running cells does not work anymore until I manually restart the kernel.\n\n```\nipython --version\n0.13.1.rc2\n```\n\nSending any message via each websocket (shell+iopub) will avoid interruption. This is a quick and _dirty_ client side workaround:\n\n```\nfrom IPython.core.display import HTML\nHTML('''<script>\nvar keepalive_timer;\nfunction keep_connection_alive() {\n IPython.notebook.kernel.execute();\n keepalive_timer = setTimeout('keep_connection_alive()', 55000);\n}\nkeep_connection_alive();\n</script>''')\n```\n\nWebSocket has ping and pong frames for keep-alive, tornado has a ping() function to send such a frame. 
Could the ping function be called somewhere in the kernel code to keep alive connections even via proxy?\n\n", "before_files": [{"content": "\"\"\"Tornado handlers for WebSocket <-> ZMQ sockets.\"\"\"\n\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport json\n\ntry:\n from urllib.parse import urlparse # Py 3\nexcept ImportError:\n from urlparse import urlparse # Py 2\n\ntry:\n from http.cookies import SimpleCookie # Py 3\nexcept ImportError:\n from Cookie import SimpleCookie # Py 2\nimport logging\n\nimport tornado\nfrom tornado import web\nfrom tornado import websocket\n\nfrom IPython.kernel.zmq.session import Session\nfrom IPython.utils.jsonutil import date_default\nfrom IPython.utils.py3compat import PY3, cast_unicode\n\nfrom .handlers import IPythonHandler\n\n\nclass ZMQStreamHandler(websocket.WebSocketHandler):\n \n def check_origin(self, origin):\n \"\"\"Check Origin == Host or Access-Control-Allow-Origin.\n \n Tornado >= 4 calls this method automatically, raising 403 if it returns False.\n We call it explicitly in `open` on Tornado < 4.\n \"\"\"\n if self.allow_origin == '*':\n return True\n\n host = self.request.headers.get(\"Host\")\n\n # If no header is provided, assume we can't verify origin\n if(origin is None or host is None):\n return False\n \n host_origin = \"{0}://{1}\".format(self.request.protocol, host)\n \n # OK if origin matches host\n if origin == host_origin:\n return True\n \n # Check CORS headers\n if self.allow_origin:\n return self.allow_origin == origin\n elif self.allow_origin_pat:\n return bool(self.allow_origin_pat.match(origin))\n else:\n # No CORS headers deny the request\n return False\n\n def clear_cookie(self, *args, **kwargs):\n \"\"\"meaningless for websockets\"\"\"\n pass\n\n def _reserialize_reply(self, msg_list):\n \"\"\"Reserialize a reply message using JSON.\n\n This takes the msg list from the ZMQ socket, unserializes it using\n self.session and then serializes the result using JSON. 
This method\n should be used by self._on_zmq_reply to build messages that can\n be sent back to the browser.\n \"\"\"\n idents, msg_list = self.session.feed_identities(msg_list)\n msg = self.session.unserialize(msg_list)\n try:\n msg['header'].pop('date')\n except KeyError:\n pass\n try:\n msg['parent_header'].pop('date')\n except KeyError:\n pass\n msg.pop('buffers')\n return json.dumps(msg, default=date_default)\n\n def _on_zmq_reply(self, msg_list):\n # Sometimes this gets triggered when the on_close method is scheduled in the\n # eventloop but hasn't been called.\n if self.stream.closed(): return\n try:\n msg = self._reserialize_reply(msg_list)\n except Exception:\n self.log.critical(\"Malformed message: %r\" % msg_list, exc_info=True)\n else:\n self.write_message(msg)\n\n def allow_draft76(self):\n \"\"\"Allow draft 76, until browsers such as Safari update to RFC 6455.\n \n This has been disabled by default in tornado in release 2.2.0, and\n support will be removed in later versions.\n \"\"\"\n return True\n\n\nclass AuthenticatedZMQStreamHandler(ZMQStreamHandler, IPythonHandler):\n def set_default_headers(self):\n \"\"\"Undo the set_default_headers in IPythonHandler\n \n which doesn't make sense for websockets\n \"\"\"\n pass\n\n def open(self, kernel_id):\n self.kernel_id = cast_unicode(kernel_id, 'ascii')\n # Check to see that origin matches host directly, including ports\n # Tornado 4 already does CORS checking\n if tornado.version_info[0] < 4:\n if not self.check_origin(self.get_origin()):\n self.log.warn(\"Cross Origin WebSocket Attempt from %s\", self.get_origin())\n raise web.HTTPError(403)\n\n self.session = Session(config=self.config)\n self.save_on_message = self.on_message\n self.on_message = self.on_first_message\n\n def _inject_cookie_message(self, msg):\n \"\"\"Inject the first message, which is the document cookie,\n for authentication.\"\"\"\n if not PY3 and isinstance(msg, unicode):\n # Cookie constructor doesn't accept unicode strings\n # under Python 2.x for some reason\n msg = msg.encode('utf8', 'replace')\n try:\n identity, msg = msg.split(':', 1)\n self.session.session = cast_unicode(identity, 'ascii')\n except Exception:\n logging.error(\"First ws message didn't have the form 'identity:[cookie]' - %r\", msg)\n \n try:\n self.request._cookies = SimpleCookie(msg)\n except:\n self.log.warn(\"couldn't parse cookie string: %s\",msg, exc_info=True)\n\n def on_first_message(self, msg):\n self._inject_cookie_message(msg)\n if self.get_current_user() is None:\n self.log.warn(\"Couldn't authenticate WebSocket connection\")\n raise web.HTTPError(403)\n self.on_message = self.save_on_message\n", "path": "IPython/html/base/zmqhandlers.py"}], "after_files": [{"content": "\"\"\"Tornado handlers for WebSocket <-> ZMQ sockets.\"\"\"\n\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport json\n\ntry:\n from urllib.parse import urlparse # Py 3\nexcept ImportError:\n from urlparse import urlparse # Py 2\n\ntry:\n from http.cookies import SimpleCookie # Py 3\nexcept ImportError:\n from Cookie import SimpleCookie # Py 2\nimport logging\n\nimport tornado\nfrom tornado import ioloop\nfrom tornado import web\nfrom tornado import websocket\n\nfrom IPython.kernel.zmq.session import Session\nfrom IPython.utils.jsonutil import date_default\nfrom IPython.utils.py3compat import PY3, cast_unicode\n\nfrom .handlers import IPythonHandler\n\n\nclass ZMQStreamHandler(websocket.WebSocketHandler):\n \n def check_origin(self, origin):\n 
\"\"\"Check Origin == Host or Access-Control-Allow-Origin.\n \n Tornado >= 4 calls this method automatically, raising 403 if it returns False.\n We call it explicitly in `open` on Tornado < 4.\n \"\"\"\n if self.allow_origin == '*':\n return True\n\n host = self.request.headers.get(\"Host\")\n\n # If no header is provided, assume we can't verify origin\n if(origin is None or host is None):\n return False\n \n host_origin = \"{0}://{1}\".format(self.request.protocol, host)\n \n # OK if origin matches host\n if origin == host_origin:\n return True\n \n # Check CORS headers\n if self.allow_origin:\n return self.allow_origin == origin\n elif self.allow_origin_pat:\n return bool(self.allow_origin_pat.match(origin))\n else:\n # No CORS headers deny the request\n return False\n\n def clear_cookie(self, *args, **kwargs):\n \"\"\"meaningless for websockets\"\"\"\n pass\n\n def _reserialize_reply(self, msg_list):\n \"\"\"Reserialize a reply message using JSON.\n\n This takes the msg list from the ZMQ socket, unserializes it using\n self.session and then serializes the result using JSON. This method\n should be used by self._on_zmq_reply to build messages that can\n be sent back to the browser.\n \"\"\"\n idents, msg_list = self.session.feed_identities(msg_list)\n msg = self.session.unserialize(msg_list)\n try:\n msg['header'].pop('date')\n except KeyError:\n pass\n try:\n msg['parent_header'].pop('date')\n except KeyError:\n pass\n msg.pop('buffers')\n return json.dumps(msg, default=date_default)\n\n def _on_zmq_reply(self, msg_list):\n # Sometimes this gets triggered when the on_close method is scheduled in the\n # eventloop but hasn't been called.\n if self.stream.closed(): return\n try:\n msg = self._reserialize_reply(msg_list)\n except Exception:\n self.log.critical(\"Malformed message: %r\" % msg_list, exc_info=True)\n else:\n self.write_message(msg)\n\n def allow_draft76(self):\n \"\"\"Allow draft 76, until browsers such as Safari update to RFC 6455.\n \n This has been disabled by default in tornado in release 2.2.0, and\n support will be removed in later versions.\n \"\"\"\n return True\n\n# ping interval for keeping websockets alive (30 seconds)\nWS_PING_INTERVAL = 30000\n\nclass AuthenticatedZMQStreamHandler(ZMQStreamHandler, IPythonHandler):\n ping_callback = None\n\n def set_default_headers(self):\n \"\"\"Undo the set_default_headers in IPythonHandler\n \n which doesn't make sense for websockets\n \"\"\"\n pass\n\n def open(self, kernel_id):\n self.kernel_id = cast_unicode(kernel_id, 'ascii')\n # Check to see that origin matches host directly, including ports\n # Tornado 4 already does CORS checking\n if tornado.version_info[0] < 4:\n if not self.check_origin(self.get_origin()):\n self.log.warn(\"Cross Origin WebSocket Attempt from %s\", self.get_origin())\n raise web.HTTPError(403)\n\n self.session = Session(config=self.config)\n self.save_on_message = self.on_message\n self.on_message = self.on_first_message\n self.ping_callback = ioloop.PeriodicCallback(self.send_ping, WS_PING_INTERVAL)\n self.ping_callback.start()\n\n def send_ping(self):\n \"\"\"send a ping to keep the websocket alive\"\"\"\n if self.stream.closed() and self.ping_callback is not None:\n self.ping_callback.stop()\n return\n\n self.ping(b'')\n\n def _inject_cookie_message(self, msg):\n \"\"\"Inject the first message, which is the document cookie,\n for authentication.\"\"\"\n if not PY3 and isinstance(msg, unicode):\n # Cookie constructor doesn't accept unicode strings\n # under Python 2.x for some reason\n msg = 
msg.encode('utf8', 'replace')\n try:\n identity, msg = msg.split(':', 1)\n self.session.session = cast_unicode(identity, 'ascii')\n except Exception:\n logging.error(\"First ws message didn't have the form 'identity:[cookie]' - %r\", msg)\n \n try:\n self.request._cookies = SimpleCookie(msg)\n except:\n self.log.warn(\"couldn't parse cookie string: %s\",msg, exc_info=True)\n\n def on_first_message(self, msg):\n self._inject_cookie_message(msg)\n if self.get_current_user() is None:\n self.log.warn(\"Couldn't authenticate WebSocket connection\")\n raise web.HTTPError(403)\n self.on_message = self.save_on_message\n", "path": "IPython/html/base/zmqhandlers.py"}]} | 2,023 | 322 |
gh_patches_debug_2257 | rasdani/github-patches | git_diff | apluslms__a-plus-204 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Numbers should be safe characters in file names
If the attachment for the "Exercise with attachment" is named, for example, "2.zip", the name for that file will be just ".zip" in the database, because numbers are not considered safe characters:
https://github.com/Aalto-LeTech/a-plus/blob/a86bf/lib/helpers.py#L62
--- END ISSUE ---
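For illustration, a minimal standalone sketch of the filtering behaviour described above; it mirrors the `safe_file_name` logic shown in the code segments below, minus the leading-dash handling:
```python
# Reproduction sketch: digits are absent from the whitelist, so they vanish.
FILENAME_CHARS = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ._-"

def safe_file_name(name):
    return "".join(c for c in name if c in FILENAME_CHARS)[:80]

print(safe_file_name("2.zip"))  # -> ".zip"
```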
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/helpers.py`
Content:
```
1 from django.conf import settings
2 from random import choice
3 from PIL import Image
4 import string
5 import urllib
6
7
8 def extract_form_errors(form):
9 """
10 Extracts Django form errors to a list of error messages.
11 """
12 errors = []
13 for field in form.errors:
14 for err in form.errors[field]:
15 errors.append("%s: %s" % (field, err))
16 return errors
17
18
19 def get_random_string(length=32):
20 """
21 This function creates a random string with a given length.
22 The strings consist of upper and lower case letters and numbers.
23
24 @param length: the length of the randomized string, defaults to 32
25 @return: a random string containing lower and upper case letters and digits
26 """
27
28 # Use all letters and numbers in the identifier
29 choices = string.ascii_letters + string.digits
30
31 return ''.join([choice(choices) for _ in range(length)])
32
33
34 def query_dict_to_list_of_tuples(query_dict):
35 """
36 This helper function creates a list of tuples with the values
37 from a QueryDict object. In a QueryDict the same key can have
38 several values, which is not possible with a typical dict nor a JSON
39 object. The resulting list will be similar to [(key1, value1), (key2, value2)].
40
41 @param query_dict: a QueryDict object
42 @return: a list of tuples with the same keys and values as in the given QueryDict
43 """
44 list_of_tuples = []
45 for key in query_dict:
46 for val in query_dict.getlist(key):
47 list_of_tuples.append((key, val))
48 return list_of_tuples
49
50
51 def update_url_params(url, params):
52 delimiter = "&" if "?" in url else "?"
53 return url + delimiter + urllib.parse.urlencode(params)
54
55
56 def has_same_domain(url1, url2):
57 uri1 = urllib.parse.urlparse(url1)
58 uri2 = urllib.parse.urlparse(url2)
59 return uri1.netloc == uri2.netloc
60
61
62 FILENAME_CHARS = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ._-"
63
64 def safe_file_name(name):
65 safename = "".join(c for c in name if c in FILENAME_CHARS)
66 if safename[0] == "-":
67 return "_" + safename[1:80]
68 return safename[:80]
69
70
71 def resize_image(path, max_size):
72 image = Image.open(path)
73 image.thumbnail(max_size, Image.ANTIALIAS)
74 image.save(path)
75
76
77 def roman_numeral(number):
78 numbers = [1000,900,500,400,100,90,50,40,10,9,5,4,1];
79 letters = ["M","CM","D","CD","C","XC","L","XL","X","IX","V","IV","I"];
80 roman = ""
81 for i in range(len(numbers)):
82 while number >= numbers[i]:
83 roman += letters[i]
84 number -= numbers[i]
85 return roman
86
87
88 def settings_text(request, key):
89 def get(name):
90 if hasattr(settings, name):
91 return getattr(settings, name)
92 return None
93 return get('{}_{}'.format(key, request.LANGUAGE_CODE.upper())) or get(key)
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/helpers.py b/lib/helpers.py
--- a/lib/helpers.py
+++ b/lib/helpers.py
@@ -59,7 +59,7 @@
return uri1.netloc == uri2.netloc
-FILENAME_CHARS = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ._-"
+FILENAME_CHARS = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ._-0123456789"
def safe_file_name(name):
safename = "".join(c for c in name if c in FILENAME_CHARS)
| {"golden_diff": "diff --git a/lib/helpers.py b/lib/helpers.py\n--- a/lib/helpers.py\n+++ b/lib/helpers.py\n@@ -59,7 +59,7 @@\n return uri1.netloc == uri2.netloc\n \n \n-FILENAME_CHARS = \"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ._-\"\n+FILENAME_CHARS = \"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ._-0123456789\"\n \n def safe_file_name(name):\n safename = \"\".join(c for c in name if c in FILENAME_CHARS)\n", "issue": "Numbers should be safe characters in file names\nIf the attachment for the \"Exercise with attachment\" is named for example \"2.zip\", the name for that file will be just \".zip\" in the database because numbers are not considered as safe characters:\n\nhttps://github.com/Aalto-LeTech/a-plus/blob/a86bf/lib/helpers.py#L62\n\n", "before_files": [{"content": "from django.conf import settings\nfrom random import choice\nfrom PIL import Image\nimport string\nimport urllib\n\n\ndef extract_form_errors(form):\n \"\"\"\n Extracts Django form errors to a list of error messages.\n \"\"\"\n errors = []\n for field in form.errors:\n for err in form.errors[field]:\n errors.append(\"%s: %s\" % (field, err))\n return errors\n\n\ndef get_random_string(length=32):\n \"\"\"\n This function creates a random string with a given length.\n The strings consist of upper and lower case letters and numbers.\n\n @param length: the length of the randomized string, defaults to 32\n @return: a random string containing lower and upper case letters and digits\n \"\"\"\n\n # Use all letters and numbers in the identifier\n choices = string.ascii_letters + string.digits\n\n return ''.join([choice(choices) for _ in range(length)])\n\n\ndef query_dict_to_list_of_tuples(query_dict):\n \"\"\"\n This helper function creates a list of tuples with the values\n from a QueryDict object. In a QueryDict the same key can have\n several values, which is not possible with a typical dict nor a JSON\n object. 
The resulting list will be similar to [(key1, value1), (key2, value2)].\n\n @param query_dict: a QueryDict object\n @return: a list of tuples with the same keys and values as in the given QueryDict\n \"\"\"\n list_of_tuples = []\n for key in query_dict:\n for val in query_dict.getlist(key):\n list_of_tuples.append((key, val))\n return list_of_tuples\n\n\ndef update_url_params(url, params):\n delimiter = \"&\" if \"?\" in url else \"?\"\n return url + delimiter + urllib.parse.urlencode(params)\n\n\ndef has_same_domain(url1, url2):\n uri1 = urllib.parse.urlparse(url1)\n uri2 = urllib.parse.urlparse(url2)\n return uri1.netloc == uri2.netloc\n\n\nFILENAME_CHARS = \"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ._-\"\n\ndef safe_file_name(name):\n safename = \"\".join(c for c in name if c in FILENAME_CHARS)\n if safename[0] == \"-\":\n return \"_\" + safename[1:80]\n return safename[:80]\n\n\ndef resize_image(path, max_size):\n image = Image.open(path)\n image.thumbnail(max_size, Image.ANTIALIAS)\n image.save(path)\n\n\ndef roman_numeral(number):\n numbers = [1000,900,500,400,100,90,50,40,10,9,5,4,1];\n letters = [\"M\",\"CM\",\"D\",\"CD\",\"C\",\"XC\",\"L\",\"XL\",\"X\",\"IX\",\"V\",\"IV\",\"I\"];\n roman = \"\"\n for i in range(len(numbers)):\n while number >= numbers[i]:\n roman += letters[i]\n number -= numbers[i]\n return roman\n\n\ndef settings_text(request, key):\n def get(name):\n if hasattr(settings, name):\n return getattr(settings, name)\n return None\n return get('{}_{}'.format(key, request.LANGUAGE_CODE.upper())) or get(key)\n", "path": "lib/helpers.py"}], "after_files": [{"content": "from django.conf import settings\nfrom random import choice\nfrom PIL import Image\nimport string\nimport urllib\n\n\ndef extract_form_errors(form):\n \"\"\"\n Extracts Django form errors to a list of error messages.\n \"\"\"\n errors = []\n for field in form.errors:\n for err in form.errors[field]:\n errors.append(\"%s: %s\" % (field, err))\n return errors\n\n\ndef get_random_string(length=32):\n \"\"\"\n This function creates a random string with a given length.\n The strings consist of upper and lower case letters and numbers.\n\n @param length: the length of the randomized string, defaults to 32\n @return: a random string containing lower and upper case letters and digits\n \"\"\"\n\n # Use all letters and numbers in the identifier\n choices = string.ascii_letters + string.digits\n\n return ''.join([choice(choices) for _ in range(length)])\n\n\ndef query_dict_to_list_of_tuples(query_dict):\n \"\"\"\n This helper function creates a list of tuples with the values\n from a QueryDict object. In a QueryDict the same key can have\n several values, which is not possible with a typical dict nor a JSON\n object. 
The resulting list will be similar to [(key1, value1), (key2, value2)].\n\n @param query_dict: a QueryDict object\n @return: a list of tuples with the same keys and values as in the given QueryDict\n \"\"\"\n list_of_tuples = []\n for key in query_dict:\n for val in query_dict.getlist(key):\n list_of_tuples.append((key, val))\n return list_of_tuples\n\n\ndef update_url_params(url, params):\n delimiter = \"&\" if \"?\" in url else \"?\"\n return url + delimiter + urllib.parse.urlencode(params)\n\n\ndef has_same_domain(url1, url2):\n uri1 = urllib.parse.urlparse(url1)\n uri2 = urllib.parse.urlparse(url2)\n return uri1.netloc == uri2.netloc\n\n\nFILENAME_CHARS = \"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ._-0123456789\"\n\ndef safe_file_name(name):\n safename = \"\".join(c for c in name if c in FILENAME_CHARS)\n if safename[0] == \"-\":\n return \"_\" + safename[1:80]\n return safename[:80]\n\n\ndef resize_image(path, max_size):\n image = Image.open(path)\n image.thumbnail(max_size, Image.ANTIALIAS)\n image.save(path)\n\n\ndef roman_numeral(number):\n numbers = [1000,900,500,400,100,90,50,40,10,9,5,4,1];\n letters = [\"M\",\"CM\",\"D\",\"CD\",\"C\",\"XC\",\"L\",\"XL\",\"X\",\"IX\",\"V\",\"IV\",\"I\"];\n roman = \"\"\n for i in range(len(numbers)):\n while number >= numbers[i]:\n roman += letters[i]\n number -= numbers[i]\n return roman\n\n\ndef settings_text(request, key):\n def get(name):\n if hasattr(settings, name):\n return getattr(settings, name)\n return None\n return get('{}_{}'.format(key, request.LANGUAGE_CODE.upper())) or get(key)\n", "path": "lib/helpers.py"}]} | 1,216 | 104 |
gh_patches_debug_10343 | rasdani/github-patches | git_diff | scrapy__scrapy-3671 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Copying items does not work correctly with list values
> item = Item()
> item['some_field'] = []
>
> c_item = Item(item)
> c_item['some_field'].append(1)
>
> print item['some_field'], c_item['some_field']
> [1] [1]
--- END ISSUE ---
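The sharing is ordinary shallow-copy behaviour and can be demonstrated with plain dicts; a minimal sketch (the golden diff below addresses it by exposing an explicit `deepcopy()` on items):
```python
from copy import deepcopy

original = {"some_field": []}
shallow = dict(original)           # new dict, but the same list object
shallow["some_field"].append(1)
print(original["some_field"])      # [1] -- the mutation is visible through both

deep = deepcopy(original)          # copies the nested list as well
deep["some_field"].append(2)
print(original["some_field"])      # still [1]
```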
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/item.py`
Content:
```
1 """
2 Scrapy Item
3
4 See documentation in docs/topics/item.rst
5 """
6
7 from pprint import pformat
8 from collections import MutableMapping
9
10 from abc import ABCMeta
11 import six
12
13 from scrapy.utils.trackref import object_ref
14
15
16 class BaseItem(object_ref):
17 """Base class for all scraped items."""
18 pass
19
20
21 class Field(dict):
22 """Container of field metadata"""
23
24
25 class ItemMeta(ABCMeta):
26
27 def __new__(mcs, class_name, bases, attrs):
28 classcell = attrs.pop('__classcell__', None)
29 new_bases = tuple(base._class for base in bases if hasattr(base, '_class'))
30 _class = super(ItemMeta, mcs).__new__(mcs, 'x_' + class_name, new_bases, attrs)
31
32 fields = getattr(_class, 'fields', {})
33 new_attrs = {}
34 for n in dir(_class):
35 v = getattr(_class, n)
36 if isinstance(v, Field):
37 fields[n] = v
38 elif n in attrs:
39 new_attrs[n] = attrs[n]
40
41 new_attrs['fields'] = fields
42 new_attrs['_class'] = _class
43 if classcell is not None:
44 new_attrs['__classcell__'] = classcell
45 return super(ItemMeta, mcs).__new__(mcs, class_name, bases, new_attrs)
46
47
48 class DictItem(MutableMapping, BaseItem):
49
50 fields = {}
51
52 def __init__(self, *args, **kwargs):
53 self._values = {}
54 if args or kwargs: # avoid creating dict for most common case
55 for k, v in six.iteritems(dict(*args, **kwargs)):
56 self[k] = v
57
58 def __getitem__(self, key):
59 return self._values[key]
60
61 def __setitem__(self, key, value):
62 if key in self.fields:
63 self._values[key] = value
64 else:
65 raise KeyError("%s does not support field: %s" %
66 (self.__class__.__name__, key))
67
68 def __delitem__(self, key):
69 del self._values[key]
70
71 def __getattr__(self, name):
72 if name in self.fields:
73 raise AttributeError("Use item[%r] to get field value" % name)
74 raise AttributeError(name)
75
76 def __setattr__(self, name, value):
77 if not name.startswith('_'):
78 raise AttributeError("Use item[%r] = %r to set field value" %
79 (name, value))
80 super(DictItem, self).__setattr__(name, value)
81
82 def __len__(self):
83 return len(self._values)
84
85 def __iter__(self):
86 return iter(self._values)
87
88 __hash__ = BaseItem.__hash__
89
90 def keys(self):
91 return self._values.keys()
92
93 def __repr__(self):
94 return pformat(dict(self))
95
96 def copy(self):
97 return self.__class__(self)
98
99
100 @six.add_metaclass(ItemMeta)
101 class Item(DictItem):
102 pass
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/item.py b/scrapy/item.py
--- a/scrapy/item.py
+++ b/scrapy/item.py
@@ -6,6 +6,7 @@
from pprint import pformat
from collections import MutableMapping
+from copy import deepcopy
from abc import ABCMeta
import six
@@ -96,6 +97,13 @@
def copy(self):
return self.__class__(self)
+ def deepcopy(self):
+ """Return a `deep copy`_ of this item.
+
+ .. _deep copy: https://docs.python.org/library/copy.html#copy.deepcopy
+ """
+ return deepcopy(self)
+
@six.add_metaclass(ItemMeta)
class Item(DictItem):
| {"golden_diff": "diff --git a/scrapy/item.py b/scrapy/item.py\n--- a/scrapy/item.py\n+++ b/scrapy/item.py\n@@ -6,6 +6,7 @@\n \n from pprint import pformat\n from collections import MutableMapping\n+from copy import deepcopy\n \n from abc import ABCMeta\n import six\n@@ -96,6 +97,13 @@\n def copy(self):\n return self.__class__(self)\n \n+ def deepcopy(self):\n+ \"\"\"Return a `deep copy`_ of this item.\n+\n+ .. _deep copy: https://docs.python.org/library/copy.html#copy.deepcopy\n+ \"\"\"\n+ return deepcopy(self)\n+\n \n @six.add_metaclass(ItemMeta)\n class Item(DictItem):\n", "issue": "Copying items not work correctly with list values\n> > item = Item()\n> > item['some_field'] = []\n> > \n> > c_item = Item(item)\n> > c_item['some_field'].append(1)\n> > \n> > print item['some_field'], c_item['some_field']\n> > [1] [1]\n\n", "before_files": [{"content": "\"\"\"\nScrapy Item\n\nSee documentation in docs/topics/item.rst\n\"\"\"\n\nfrom pprint import pformat\nfrom collections import MutableMapping\n\nfrom abc import ABCMeta\nimport six\n\nfrom scrapy.utils.trackref import object_ref\n\n\nclass BaseItem(object_ref):\n \"\"\"Base class for all scraped items.\"\"\"\n pass\n\n\nclass Field(dict):\n \"\"\"Container of field metadata\"\"\"\n\n\nclass ItemMeta(ABCMeta):\n\n def __new__(mcs, class_name, bases, attrs):\n classcell = attrs.pop('__classcell__', None)\n new_bases = tuple(base._class for base in bases if hasattr(base, '_class'))\n _class = super(ItemMeta, mcs).__new__(mcs, 'x_' + class_name, new_bases, attrs)\n\n fields = getattr(_class, 'fields', {})\n new_attrs = {}\n for n in dir(_class):\n v = getattr(_class, n)\n if isinstance(v, Field):\n fields[n] = v\n elif n in attrs:\n new_attrs[n] = attrs[n]\n\n new_attrs['fields'] = fields\n new_attrs['_class'] = _class\n if classcell is not None:\n new_attrs['__classcell__'] = classcell\n return super(ItemMeta, mcs).__new__(mcs, class_name, bases, new_attrs)\n\n\nclass DictItem(MutableMapping, BaseItem):\n\n fields = {}\n\n def __init__(self, *args, **kwargs):\n self._values = {}\n if args or kwargs: # avoid creating dict for most common case\n for k, v in six.iteritems(dict(*args, **kwargs)):\n self[k] = v\n\n def __getitem__(self, key):\n return self._values[key]\n\n def __setitem__(self, key, value):\n if key in self.fields:\n self._values[key] = value\n else:\n raise KeyError(\"%s does not support field: %s\" %\n (self.__class__.__name__, key))\n\n def __delitem__(self, key):\n del self._values[key]\n\n def __getattr__(self, name):\n if name in self.fields:\n raise AttributeError(\"Use item[%r] to get field value\" % name)\n raise AttributeError(name)\n\n def __setattr__(self, name, value):\n if not name.startswith('_'):\n raise AttributeError(\"Use item[%r] = %r to set field value\" %\n (name, value))\n super(DictItem, self).__setattr__(name, value)\n\n def __len__(self):\n return len(self._values)\n\n def __iter__(self):\n return iter(self._values)\n\n __hash__ = BaseItem.__hash__\n\n def keys(self):\n return self._values.keys()\n\n def __repr__(self):\n return pformat(dict(self))\n\n def copy(self):\n return self.__class__(self)\n\n\[email protected]_metaclass(ItemMeta)\nclass Item(DictItem):\n pass\n", "path": "scrapy/item.py"}], "after_files": [{"content": "\"\"\"\nScrapy Item\n\nSee documentation in docs/topics/item.rst\n\"\"\"\n\nfrom pprint import pformat\nfrom collections import MutableMapping\nfrom copy import deepcopy\n\nfrom abc import ABCMeta\nimport six\n\nfrom scrapy.utils.trackref import object_ref\n\n\nclass 
BaseItem(object_ref):\n \"\"\"Base class for all scraped items.\"\"\"\n pass\n\n\nclass Field(dict):\n \"\"\"Container of field metadata\"\"\"\n\n\nclass ItemMeta(ABCMeta):\n\n def __new__(mcs, class_name, bases, attrs):\n classcell = attrs.pop('__classcell__', None)\n new_bases = tuple(base._class for base in bases if hasattr(base, '_class'))\n _class = super(ItemMeta, mcs).__new__(mcs, 'x_' + class_name, new_bases, attrs)\n\n fields = getattr(_class, 'fields', {})\n new_attrs = {}\n for n in dir(_class):\n v = getattr(_class, n)\n if isinstance(v, Field):\n fields[n] = v\n elif n in attrs:\n new_attrs[n] = attrs[n]\n\n new_attrs['fields'] = fields\n new_attrs['_class'] = _class\n if classcell is not None:\n new_attrs['__classcell__'] = classcell\n return super(ItemMeta, mcs).__new__(mcs, class_name, bases, new_attrs)\n\n\nclass DictItem(MutableMapping, BaseItem):\n\n fields = {}\n\n def __init__(self, *args, **kwargs):\n self._values = {}\n if args or kwargs: # avoid creating dict for most common case\n for k, v in six.iteritems(dict(*args, **kwargs)):\n self[k] = v\n\n def __getitem__(self, key):\n return self._values[key]\n\n def __setitem__(self, key, value):\n if key in self.fields:\n self._values[key] = value\n else:\n raise KeyError(\"%s does not support field: %s\" %\n (self.__class__.__name__, key))\n\n def __delitem__(self, key):\n del self._values[key]\n\n def __getattr__(self, name):\n if name in self.fields:\n raise AttributeError(\"Use item[%r] to get field value\" % name)\n raise AttributeError(name)\n\n def __setattr__(self, name, value):\n if not name.startswith('_'):\n raise AttributeError(\"Use item[%r] = %r to set field value\" %\n (name, value))\n super(DictItem, self).__setattr__(name, value)\n\n def __len__(self):\n return len(self._values)\n\n def __iter__(self):\n return iter(self._values)\n\n __hash__ = BaseItem.__hash__\n\n def keys(self):\n return self._values.keys()\n\n def __repr__(self):\n return pformat(dict(self))\n\n def copy(self):\n return self.__class__(self)\n\n def deepcopy(self):\n \"\"\"Return a `deep copy`_ of this item.\n\n .. _deep copy: https://docs.python.org/library/copy.html#copy.deepcopy\n \"\"\"\n return deepcopy(self)\n\n\[email protected]_metaclass(ItemMeta)\nclass Item(DictItem):\n pass\n", "path": "scrapy/item.py"}]} | 1,187 | 159 |
gh_patches_debug_24337 | rasdani/github-patches | git_diff | joke2k__faker-258 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Faker should provide company VAT numbers
It would be nice if Faker could provide company VAT numbers. More information about VAT codes is available on Wikipedia: https://en.wikipedia.org/wiki/VAT_identification_number
--- END ISSUE ---
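For reference, the mod 11-2 check-digit arithmetic that the patch below uses for Finnish business IDs can be sketched on its own. Edge cases such as a remainder of 1 (invalid in the real scheme) are ignored here, just as in the patch:
```python
def check_digit(body):
    # Weights for the seven body digits of a Finnish y-tunnus.
    factors = [7, 9, 10, 5, 8, 4, 2]
    total = sum(int(d) * f for d, f in zip(body, factors))
    return "0" if total % 11 == 0 else str(11 - total % 11)

print("1572860-" + check_digit("1572860"))  # -> 1572860-0
```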
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/company/fi_FI/__init__.py`
Content:
```
1 from __future__ import unicode_literals
2 from .. import Provider as CompanyProvider
3
4
5 class Provider(CompanyProvider):
6 formats = (
7 '{{last_name}} {{company_suffix}}',
8 '{{last_name}} {{last_name}} {{company_suffix}}',
9 '{{last_name}} {{last_name}} {{company_suffix}}',
10 '{{last_name}}'
11 )
12
13 company_suffixes = (
14 'As Oy', 'Tmi', 'Oy', 'Oyj', 'Ky', 'Osk', 'ry'
15 )
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/providers/company/fi_FI/__init__.py b/faker/providers/company/fi_FI/__init__.py
--- a/faker/providers/company/fi_FI/__init__.py
+++ b/faker/providers/company/fi_FI/__init__.py
@@ -13,3 +13,41 @@
company_suffixes = (
'As Oy', 'Tmi', 'Oy', 'Oyj', 'Ky', 'Osk', 'ry'
)
+
+ def company_business_id(self):
+ """
+ Returns Finnish company Business Identity Code (y-tunnus).
+ Format is 8 digits - e.g. FI99999999,[8] last digit is a check
+ digit utilizing MOD 11-2. The first digit is zero for some old
+ organizations. This function provides current codes starting with
+ non-zero.
+ """
+ def calculate_checksum(number):
+ """Calculate the checksum using mod 11,2 method"""
+ factors = [7, 9, 10, 5, 8, 4, 2]
+ sum = 0
+ for x, y in zip(number, factors):
+ sum = sum + int(x) * y
+ if sum % 11 == 0:
+ return '0'
+ else:
+ return str(11 - sum % 11)
+
+ first_digit = str(self.random_digit_not_null())
+ body = first_digit + self.bothify(self.random_element(('######',)))
+ cs = calculate_checksum(body)
+ return (body + '-' + str(cs))
+
+ def company_vat(self):
+ """
+ Returns Finnish VAT identification number (Arvonlisaveronumero).
+ This can be calculated from company business identity code by
+ adding prefix "FI" and removing dash before checksum.
+ """
+ def convert_to_vat(business_id):
+ """
+ Convert business id to VATIN
+ """
+ return 'FI' + business_id.replace('-', '')
+
+ return convert_to_vat(self.company_business_id())
| {"golden_diff": "diff --git a/faker/providers/company/fi_FI/__init__.py b/faker/providers/company/fi_FI/__init__.py\n--- a/faker/providers/company/fi_FI/__init__.py\n+++ b/faker/providers/company/fi_FI/__init__.py\n@@ -13,3 +13,41 @@\n company_suffixes = (\n 'As Oy', 'Tmi', 'Oy', 'Oyj', 'Ky', 'Osk', 'ry'\n )\n+\n+ def company_business_id(self):\n+ \"\"\"\n+ Returns Finnish company Business Identity Code (y-tunnus).\n+ Format is 8 digits - e.g. FI99999999,[8] last digit is a check\n+ digit utilizing MOD 11-2. The first digit is zero for some old\n+ organizations. This function provides current codes starting with\n+ non-zero.\n+ \"\"\"\n+ def calculate_checksum(number):\n+ \"\"\"Calculate the checksum using mod 11,2 method\"\"\"\n+ factors = [7, 9, 10, 5, 8, 4, 2]\n+ sum = 0\n+ for x, y in zip(number, factors):\n+ sum = sum + int(x) * y\n+ if sum % 11 == 0:\n+ return '0'\n+ else:\n+ return str(11 - sum % 11)\n+\n+ first_digit = str(self.random_digit_not_null())\n+ body = first_digit + self.bothify(self.random_element(('######',)))\n+ cs = calculate_checksum(body)\n+ return (body + '-' + str(cs))\n+\n+ def company_vat(self):\n+ \"\"\"\n+ Returns Finnish VAT identification number (Arvonlisaveronumero).\n+ This can be calculated from company business identity code by\n+ adding prefix \"FI\" and removing dash before checksum.\n+ \"\"\"\n+ def convert_to_vat(business_id):\n+ \"\"\"\n+ Convert business id to VATIN\n+ \"\"\"\n+ return 'FI' + business_id.replace('-', '')\n+\n+ return convert_to_vat(self.company_business_id())\n", "issue": "Faker should provide company VAT numbers\nIt would be nice if faker could provide company VAT numbers. More information about VAT codes is in Wikipedia: https://en.wikipedia.org/wiki/VAT_identification_number\n\n", "before_files": [{"content": "from __future__ import unicode_literals\nfrom .. import Provider as CompanyProvider\n\n\nclass Provider(CompanyProvider):\n formats = (\n '{{last_name}} {{company_suffix}}',\n '{{last_name}} {{last_name}} {{company_suffix}}',\n '{{last_name}} {{last_name}} {{company_suffix}}',\n '{{last_name}}'\n )\n\n company_suffixes = (\n 'As Oy', 'Tmi', 'Oy', 'Oyj', 'Ky', 'Osk', 'ry'\n )\n", "path": "faker/providers/company/fi_FI/__init__.py"}], "after_files": [{"content": "from __future__ import unicode_literals\nfrom .. import Provider as CompanyProvider\n\n\nclass Provider(CompanyProvider):\n formats = (\n '{{last_name}} {{company_suffix}}',\n '{{last_name}} {{last_name}} {{company_suffix}}',\n '{{last_name}} {{last_name}} {{company_suffix}}',\n '{{last_name}}'\n )\n\n company_suffixes = (\n 'As Oy', 'Tmi', 'Oy', 'Oyj', 'Ky', 'Osk', 'ry'\n )\n\n def company_business_id(self):\n \"\"\"\n Returns Finnish company Business Identity Code (y-tunnus).\n Format is 8 digits - e.g. FI99999999,[8] last digit is a check\n digit utilizing MOD 11-2. The first digit is zero for some old\n organizations. 
This function provides current codes starting with\n non-zero.\n \"\"\"\n def calculate_checksum(number):\n \"\"\"Calculate the checksum using mod 11,2 method\"\"\"\n factors = [7, 9, 10, 5, 8, 4, 2]\n sum = 0\n for x, y in zip(number, factors):\n sum = sum + int(x) * y\n if sum % 11 == 0:\n return '0'\n else:\n return str(11 - sum % 11)\n\n first_digit = str(self.random_digit_not_null())\n body = first_digit + self.bothify(self.random_element(('######',)))\n cs = calculate_checksum(body)\n return (body + '-' + str(cs))\n\n def company_vat(self):\n \"\"\"\n Returns Finnish VAT identification number (Arvonlisaveronumero).\n This can be calculated from company business identity code by\n adding prefix \"FI\" and removing dash before checksum.\n \"\"\"\n def convert_to_vat(business_id):\n \"\"\"\n Convert business id to VATIN\n \"\"\"\n return 'FI' + business_id.replace('-', '')\n\n return convert_to_vat(self.company_business_id())\n", "path": "faker/providers/company/fi_FI/__init__.py"}]} | 441 | 478 |
gh_patches_debug_61685 | rasdani/github-patches | git_diff | joke2k__faker-1432 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
gmail.co.uk isn't a valid free email domain in the UK
* Faker version: 6.6.2
* OS: macOS 11.2.3
When generating a free email address, I got a result with the domain `gmail.co.uk`. From the source code, this list of free UK email domains was copied from the PHP version of Faker, which is now archived.
According to [this Google support thread](https://support.google.com/mail/thread/4572636?hl=en) (albeit not necessarily from someone with the authority to speak on behalf of Google), there is no such domain.
### Steps to reproduce
1. Configure Faker with the `en_UK` locale.
1. Generate free emails by calling `fake.free_email()` repeatedly
1. Observe that some of them end in `gmail.co.uk`
### Expected behavior
Email addresses should not have `gmail.co.uk` as a domain.
### Actual behavior
As a replacement, maybe include Hotmail's successor, `outlook.com`? It's not UK specific, but I don't know anything about the state of free UK email providers.
--- END ISSUE ---
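A reproduction sketch along the lines of the steps above (the provider shown below lives under `en_GB`, so that locale code is used here; the sample size is arbitrary):
```python
from faker import Faker

fake = Faker("en_GB")
domains = {fake.free_email().split("@")[1] for _ in range(1000)}
print(sorted(domains))  # before the fix, this set can include 'gmail.co.uk'
```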
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/internet/en_GB/__init__.py`
Content:
```
1 from .. import Provider as InternetProvider
2
3
4 class Provider(InternetProvider):
5 # Data taken from
6 # https://github.com/fzaninotto/Faker/blob/master/src/Faker/Provider/en_GB/Internet.php
7
8 free_email_domains = (
9 'gmail.com',
10 'yahoo.com',
11 'hotmail.com',
12 'gmail.co.uk',
13 'yahoo.co.uk',
14 'hotmail.co.uk',
15 )
16
17 tlds = ('com', 'com', 'com', 'com', 'com', 'com', 'biz', 'info', 'net', 'org', 'co.uk')
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/providers/internet/en_GB/__init__.py b/faker/providers/internet/en_GB/__init__.py
--- a/faker/providers/internet/en_GB/__init__.py
+++ b/faker/providers/internet/en_GB/__init__.py
@@ -9,9 +9,9 @@
'gmail.com',
'yahoo.com',
'hotmail.com',
- 'gmail.co.uk',
'yahoo.co.uk',
'hotmail.co.uk',
+ 'outlook.com',
)
tlds = ('com', 'com', 'com', 'com', 'com', 'com', 'biz', 'info', 'net', 'org', 'co.uk')
| {"golden_diff": "diff --git a/faker/providers/internet/en_GB/__init__.py b/faker/providers/internet/en_GB/__init__.py\n--- a/faker/providers/internet/en_GB/__init__.py\n+++ b/faker/providers/internet/en_GB/__init__.py\n@@ -9,9 +9,9 @@\n 'gmail.com',\n 'yahoo.com',\n 'hotmail.com',\n- 'gmail.co.uk',\n 'yahoo.co.uk',\n 'hotmail.co.uk',\n+ 'outlook.com',\n )\n \n tlds = ('com', 'com', 'com', 'com', 'com', 'com', 'biz', 'info', 'net', 'org', 'co.uk')\n", "issue": "gmail.co.uk isn't a valid free email domain in the UK\n* Faker version: 6.6.2\r\n* OS: macOS 11.2.3\r\n\r\nWhen generating a free email address, I got a result with the domain `gmail.co.uk`. From the source code, this list of free UK email domains was copied from the PHP version of Faker, which is now archived. \r\n\r\nAccording to [this Google support thread](https://support.google.com/mail/thread/4572636?hl=en) (albeit not necessarily from someone with the authority to speak on behalf of Google), there is no such domain.\r\n\r\n### Steps to reproduce\r\n\r\n1. Configure Faker with the `en_UK` locale.\r\n1. Generate free emails by calling `fake.free_email()` repeatedly\r\n1. Observe that some of them end in `gmail.co.uk`\r\n\r\n### Expected behavior\r\nEmail addresses should not have `gmail.co.uk` as a domain.\r\n\r\n### Actual behavior\r\nAs a replacement, maybe include Hotmail's successor, `outlook.com`? It's not UK specific, but I don't know anything about the state of free UK email providers.\ngmail.co.uk isn't a valid free email domain in the UK\n* Faker version: 6.6.2\r\n* OS: macOS 11.2.3\r\n\r\nWhen generating a free email address, I got a result with the domain `gmail.co.uk`. From the source code, this list of free UK email domains was copied from the PHP version of Faker, which is now archived. \r\n\r\nAccording to [this Google support thread](https://support.google.com/mail/thread/4572636?hl=en) (albeit not necessarily from someone with the authority to speak on behalf of Google), there is no such domain.\r\n\r\n### Steps to reproduce\r\n\r\n1. Configure Faker with the `en_UK` locale.\r\n1. Generate free emails by calling `fake.free_email()` repeatedly\r\n1. Observe that some of them end in `gmail.co.uk`\r\n\r\n### Expected behavior\r\nEmail addresses should not have `gmail.co.uk` as a domain.\r\n\r\n### Actual behavior\r\nAs a replacement, maybe include Hotmail's successor, `outlook.com`? It's not UK specific, but I don't know anything about the state of free UK email providers.\n", "before_files": [{"content": "from .. import Provider as InternetProvider\n\n\nclass Provider(InternetProvider):\n # Data taken from\n # https://github.com/fzaninotto/Faker/blob/master/src/Faker/Provider/en_GB/Internet.php\n\n free_email_domains = (\n 'gmail.com',\n 'yahoo.com',\n 'hotmail.com',\n 'gmail.co.uk',\n 'yahoo.co.uk',\n 'hotmail.co.uk',\n )\n\n tlds = ('com', 'com', 'com', 'com', 'com', 'com', 'biz', 'info', 'net', 'org', 'co.uk')\n", "path": "faker/providers/internet/en_GB/__init__.py"}], "after_files": [{"content": "from .. import Provider as InternetProvider\n\n\nclass Provider(InternetProvider):\n # Data taken from\n # https://github.com/fzaninotto/Faker/blob/master/src/Faker/Provider/en_GB/Internet.php\n\n free_email_domains = (\n 'gmail.com',\n 'yahoo.com',\n 'hotmail.com',\n 'yahoo.co.uk',\n 'hotmail.co.uk',\n 'outlook.com',\n )\n\n tlds = ('com', 'com', 'com', 'com', 'com', 'com', 'biz', 'info', 'net', 'org', 'co.uk')\n", "path": "faker/providers/internet/en_GB/__init__.py"}]} | 903 | 148 |
gh_patches_debug_60838 | rasdani/github-patches | git_diff | docarray__docarray-60 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
fix: fix tags type after pydantic model
--- END ISSUE ---
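The one-line issue title is terse, but the problem it points at is pydantic union coercion order: as the comment added in the golden diff notes, union members are validated left to right, so placing `str` first coerces booleans and floats in `tags` to strings. A hypothetical illustration, assuming pydantic v1 default (non-smart) union behaviour:
```python
from typing import Optional, Union
from pydantic import BaseModel  # pydantic v1 semantics assumed

class Tagged(BaseModel):
    value: Optional[Union[str, bool, float]]  # the order from the buggy model

print(repr(Tagged(value=True).value))   # 'True' -- bool swallowed by str
print(repr(Tagged(value=1.5).value))    # '1.5'  -- float swallowed by str
```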
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docarray/document/pydantic_model.py`
Content:
```
1 from typing import Optional, List, Dict, Any, TYPE_CHECKING, Union
2
3 from pydantic import BaseModel, validator
4
5 from ..math.ndarray import to_list
6
7 if TYPE_CHECKING:
8 from ..types import ArrayType
9
10 _ProtoValueType = Optional[Union[str, bool, float]]
11 _StructValueType = Union[
12 _ProtoValueType, List[_ProtoValueType], Dict[str, _ProtoValueType]
13 ]
14
15
16 def _convert_ndarray_to_list(v: 'ArrayType'):
17 if v is not None:
18 return to_list(v)
19
20
21 class PydanticDocument(BaseModel):
22 id: str
23 parent_id: Optional[str]
24 granularity: Optional[int]
25 adjacency: Optional[int]
26 blob: Optional[bytes]
27 tensor: Optional[Any]
28 mime_type: Optional[str]
29 text: Optional[str]
30 weight: Optional[float]
31 uri: Optional[str]
32 tags: Optional[Dict[str, '_StructValueType']]
33 offset: Optional[float]
34 location: Optional[List[float]]
35 embedding: Optional[Any]
36 modality: Optional[str]
37 evaluations: Optional[Dict[str, Dict[str, '_StructValueType']]]
38 scores: Optional[Dict[str, Dict[str, '_StructValueType']]]
39 chunks: Optional[List['PydanticDocument']]
40 matches: Optional[List['PydanticDocument']]
41
42 _tensor2list = validator('tensor', allow_reuse=True)(_convert_ndarray_to_list)
43 _embedding2list = validator('embedding', allow_reuse=True)(_convert_ndarray_to_list)
44
45
46 PydanticDocument.update_forward_refs()
47
48 PydanticDocumentArray = List[PydanticDocument]
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docarray/document/pydantic_model.py b/docarray/document/pydantic_model.py
--- a/docarray/document/pydantic_model.py
+++ b/docarray/document/pydantic_model.py
@@ -7,7 +7,8 @@
if TYPE_CHECKING:
from ..types import ArrayType
-_ProtoValueType = Optional[Union[str, bool, float]]
+# this order must be preserved: https://pydantic-docs.helpmanual.io/usage/types/#unions
+_ProtoValueType = Optional[Union[bool, float, str]]
_StructValueType = Union[
_ProtoValueType, List[_ProtoValueType], Dict[str, _ProtoValueType]
]
| {"golden_diff": "diff --git a/docarray/document/pydantic_model.py b/docarray/document/pydantic_model.py\n--- a/docarray/document/pydantic_model.py\n+++ b/docarray/document/pydantic_model.py\n@@ -7,7 +7,8 @@\n if TYPE_CHECKING:\n from ..types import ArrayType\n \n-_ProtoValueType = Optional[Union[str, bool, float]]\n+# this order must be preserved: https://pydantic-docs.helpmanual.io/usage/types/#unions\n+_ProtoValueType = Optional[Union[bool, float, str]]\n _StructValueType = Union[\n _ProtoValueType, List[_ProtoValueType], Dict[str, _ProtoValueType]\n ]\n", "issue": "fix: fix tags type after pydantic model\n\n", "before_files": [{"content": "from typing import Optional, List, Dict, Any, TYPE_CHECKING, Union\n\nfrom pydantic import BaseModel, validator\n\nfrom ..math.ndarray import to_list\n\nif TYPE_CHECKING:\n from ..types import ArrayType\n\n_ProtoValueType = Optional[Union[str, bool, float]]\n_StructValueType = Union[\n _ProtoValueType, List[_ProtoValueType], Dict[str, _ProtoValueType]\n]\n\n\ndef _convert_ndarray_to_list(v: 'ArrayType'):\n if v is not None:\n return to_list(v)\n\n\nclass PydanticDocument(BaseModel):\n id: str\n parent_id: Optional[str]\n granularity: Optional[int]\n adjacency: Optional[int]\n blob: Optional[bytes]\n tensor: Optional[Any]\n mime_type: Optional[str]\n text: Optional[str]\n weight: Optional[float]\n uri: Optional[str]\n tags: Optional[Dict[str, '_StructValueType']]\n offset: Optional[float]\n location: Optional[List[float]]\n embedding: Optional[Any]\n modality: Optional[str]\n evaluations: Optional[Dict[str, Dict[str, '_StructValueType']]]\n scores: Optional[Dict[str, Dict[str, '_StructValueType']]]\n chunks: Optional[List['PydanticDocument']]\n matches: Optional[List['PydanticDocument']]\n\n _tensor2list = validator('tensor', allow_reuse=True)(_convert_ndarray_to_list)\n _embedding2list = validator('embedding', allow_reuse=True)(_convert_ndarray_to_list)\n\n\nPydanticDocument.update_forward_refs()\n\nPydanticDocumentArray = List[PydanticDocument]\n", "path": "docarray/document/pydantic_model.py"}], "after_files": [{"content": "from typing import Optional, List, Dict, Any, TYPE_CHECKING, Union\n\nfrom pydantic import BaseModel, validator\n\nfrom ..math.ndarray import to_list\n\nif TYPE_CHECKING:\n from ..types import ArrayType\n\n# this order must be preserved: https://pydantic-docs.helpmanual.io/usage/types/#unions\n_ProtoValueType = Optional[Union[bool, float, str]]\n_StructValueType = Union[\n _ProtoValueType, List[_ProtoValueType], Dict[str, _ProtoValueType]\n]\n\n\ndef _convert_ndarray_to_list(v: 'ArrayType'):\n if v is not None:\n return to_list(v)\n\n\nclass PydanticDocument(BaseModel):\n id: str\n parent_id: Optional[str]\n granularity: Optional[int]\n adjacency: Optional[int]\n blob: Optional[bytes]\n tensor: Optional[Any]\n mime_type: Optional[str]\n text: Optional[str]\n weight: Optional[float]\n uri: Optional[str]\n tags: Optional[Dict[str, '_StructValueType']]\n offset: Optional[float]\n location: Optional[List[float]]\n embedding: Optional[Any]\n modality: Optional[str]\n evaluations: Optional[Dict[str, Dict[str, '_StructValueType']]]\n scores: Optional[Dict[str, Dict[str, '_StructValueType']]]\n chunks: Optional[List['PydanticDocument']]\n matches: Optional[List['PydanticDocument']]\n\n _tensor2list = validator('tensor', allow_reuse=True)(_convert_ndarray_to_list)\n _embedding2list = validator('embedding', allow_reuse=True)(_convert_ndarray_to_list)\n\n\nPydanticDocument.update_forward_refs()\n\nPydanticDocumentArray = 
List[PydanticDocument]\n", "path": "docarray/document/pydantic_model.py"}]} | 706 | 142 |
gh_patches_debug_32437 | rasdani/github-patches | git_diff | safe-global__safe-config-service-292 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow querying `ChainInfo` by `shortName`
@rmeissner add use case
--- END ISSUE ---
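A hypothetical use case sketch, assuming the route shape the patch below introduces (`/<str:short_name>/`); the host, the `eth` short name, and the response fields are placeholders:
```python
import requests

BASE = "https://config.example/api/v1"           # placeholder host
r_by_id = requests.get(f"{BASE}/chains/1/")      # numeric pk, existing route
r_by_name = requests.get(f"{BASE}/chains/eth/")  # EIP-3770 style shortName
print(r_by_name.status_code, r_by_name.json())
```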
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/chains/urls.py`
Content:
```
1 from django.urls import path
2
3 from chains.views import ChainsDetailView, ChainsListView
4
5 app_name = "chains"
6
7 urlpatterns = [
8 path("", ChainsListView.as_view(), name="list"),
9 path("<pk>/", ChainsDetailView.as_view(), name="detail"),
10 ]
11
```
Path: `src/chains/views.py`
Content:
```
1 from rest_framework import filters
2 from rest_framework.generics import ListAPIView, RetrieveAPIView
3 from rest_framework.pagination import LimitOffsetPagination
4
5 from .models import Chain
6 from .serializers import ChainSerializer
7
8
9 class ChainsListView(ListAPIView):
10 serializer_class = ChainSerializer
11 pagination_class = LimitOffsetPagination
12 pagination_class.max_limit = 100
13 pagination_class.default_limit = 10
14 queryset = Chain.objects.all()
15 filter_backends = [filters.OrderingFilter]
16 ordering_fields = ["relevance", "name"]
17 ordering = [
18 "relevance",
19 "name",
20 ]
21
22
23 class ChainsDetailView(RetrieveAPIView):
24 serializer_class = ChainSerializer
25 queryset = Chain.objects.all()
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/chains/urls.py b/src/chains/urls.py
--- a/src/chains/urls.py
+++ b/src/chains/urls.py
@@ -1,10 +1,15 @@
from django.urls import path
-from chains.views import ChainsDetailView, ChainsListView
+from chains.views import ChainsDetailView, ChainsDetailViewByShortName, ChainsListView
app_name = "chains"
urlpatterns = [
path("", ChainsListView.as_view(), name="list"),
- path("<pk>/", ChainsDetailView.as_view(), name="detail"),
+ path("<int:pk>/", ChainsDetailView.as_view(), name="detail"),
+ path(
+ "<str:short_name>/",
+ ChainsDetailViewByShortName.as_view(),
+ name="detail_by_short_name",
+ ),
]
diff --git a/src/chains/views.py b/src/chains/views.py
--- a/src/chains/views.py
+++ b/src/chains/views.py
@@ -1,6 +1,11 @@
+from typing import Any
+
+from drf_yasg.utils import swagger_auto_schema
from rest_framework import filters
from rest_framework.generics import ListAPIView, RetrieveAPIView
from rest_framework.pagination import LimitOffsetPagination
+from rest_framework.request import Request
+from rest_framework.response import Response
from .models import Chain
from .serializers import ChainSerializer
@@ -23,3 +28,22 @@
class ChainsDetailView(RetrieveAPIView):
serializer_class = ChainSerializer
queryset = Chain.objects.all()
+
+ @swagger_auto_schema(
+ operation_id="Get chain by id"
+ ) # type: ignore[misc] # Untyped decorator makes function "get" untyped
+ def get(self, request: Request, *args: Any, **kwargs: Any) -> Response:
+ return super().get(request, *args, **kwargs)
+
+
+class ChainsDetailViewByShortName(RetrieveAPIView):
+ lookup_field = "short_name"
+ serializer_class = ChainSerializer
+ queryset = Chain.objects.all()
+
+ @swagger_auto_schema(
+ operation_id="Get chain by shortName",
+ operation_description="Warning: `shortNames` may contain characters that need to be URL encoded (i.e.: whitespaces)", # noqa E501
+ ) # type: ignore[misc] # Untyped decorator makes function "get" untyped
+ def get(self, request: Request, *args: Any, **kwargs: Any) -> Response:
+ return super().get(request, *args, **kwargs)
| {"golden_diff": "diff --git a/src/chains/urls.py b/src/chains/urls.py\n--- a/src/chains/urls.py\n+++ b/src/chains/urls.py\n@@ -1,10 +1,15 @@\n from django.urls import path\n \n-from chains.views import ChainsDetailView, ChainsListView\n+from chains.views import ChainsDetailView, ChainsDetailViewByShortName, ChainsListView\n \n app_name = \"chains\"\n \n urlpatterns = [\n path(\"\", ChainsListView.as_view(), name=\"list\"),\n- path(\"<pk>/\", ChainsDetailView.as_view(), name=\"detail\"),\n+ path(\"<int:pk>/\", ChainsDetailView.as_view(), name=\"detail\"),\n+ path(\n+ \"<str:short_name>/\",\n+ ChainsDetailViewByShortName.as_view(),\n+ name=\"detail_by_short_name\",\n+ ),\n ]\ndiff --git a/src/chains/views.py b/src/chains/views.py\n--- a/src/chains/views.py\n+++ b/src/chains/views.py\n@@ -1,6 +1,11 @@\n+from typing import Any\n+\n+from drf_yasg.utils import swagger_auto_schema\n from rest_framework import filters\n from rest_framework.generics import ListAPIView, RetrieveAPIView\n from rest_framework.pagination import LimitOffsetPagination\n+from rest_framework.request import Request\n+from rest_framework.response import Response\n \n from .models import Chain\n from .serializers import ChainSerializer\n@@ -23,3 +28,22 @@\n class ChainsDetailView(RetrieveAPIView):\n serializer_class = ChainSerializer\n queryset = Chain.objects.all()\n+\n+ @swagger_auto_schema(\n+ operation_id=\"Get chain by id\"\n+ ) # type: ignore[misc] # Untyped decorator makes function \"get\" untyped\n+ def get(self, request: Request, *args: Any, **kwargs: Any) -> Response:\n+ return super().get(request, *args, **kwargs)\n+\n+\n+class ChainsDetailViewByShortName(RetrieveAPIView):\n+ lookup_field = \"short_name\"\n+ serializer_class = ChainSerializer\n+ queryset = Chain.objects.all()\n+\n+ @swagger_auto_schema(\n+ operation_id=\"Get chain by shortName\",\n+ operation_description=\"Warning: `shortNames` may contain characters that need to be URL encoded (i.e.: whitespaces)\", # noqa E501\n+ ) # type: ignore[misc] # Untyped decorator makes function \"get\" untyped\n+ def get(self, request: Request, *args: Any, **kwargs: Any) -> Response:\n+ return super().get(request, *args, **kwargs)\n", "issue": "Allow querying `ChainInfo` by `shortName`\n@rmeissner add use case\n", "before_files": [{"content": "from django.urls import path\n\nfrom chains.views import ChainsDetailView, ChainsListView\n\napp_name = \"chains\"\n\nurlpatterns = [\n path(\"\", ChainsListView.as_view(), name=\"list\"),\n path(\"<pk>/\", ChainsDetailView.as_view(), name=\"detail\"),\n]\n", "path": "src/chains/urls.py"}, {"content": "from rest_framework import filters\nfrom rest_framework.generics import ListAPIView, RetrieveAPIView\nfrom rest_framework.pagination import LimitOffsetPagination\n\nfrom .models import Chain\nfrom .serializers import ChainSerializer\n\n\nclass ChainsListView(ListAPIView):\n serializer_class = ChainSerializer\n pagination_class = LimitOffsetPagination\n pagination_class.max_limit = 100\n pagination_class.default_limit = 10\n queryset = Chain.objects.all()\n filter_backends = [filters.OrderingFilter]\n ordering_fields = [\"relevance\", \"name\"]\n ordering = [\n \"relevance\",\n \"name\",\n ]\n\n\nclass ChainsDetailView(RetrieveAPIView):\n serializer_class = ChainSerializer\n queryset = Chain.objects.all()\n", "path": "src/chains/views.py"}], "after_files": [{"content": "from django.urls import path\n\nfrom chains.views import ChainsDetailView, ChainsDetailViewByShortName, ChainsListView\n\napp_name = \"chains\"\n\nurlpatterns = [\n 
path(\"\", ChainsListView.as_view(), name=\"list\"),\n path(\"<int:pk>/\", ChainsDetailView.as_view(), name=\"detail\"),\n path(\n \"<str:short_name>/\",\n ChainsDetailViewByShortName.as_view(),\n name=\"detail_by_short_name\",\n ),\n]\n", "path": "src/chains/urls.py"}, {"content": "from typing import Any\n\nfrom drf_yasg.utils import swagger_auto_schema\nfrom rest_framework import filters\nfrom rest_framework.generics import ListAPIView, RetrieveAPIView\nfrom rest_framework.pagination import LimitOffsetPagination\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\n\nfrom .models import Chain\nfrom .serializers import ChainSerializer\n\n\nclass ChainsListView(ListAPIView):\n serializer_class = ChainSerializer\n pagination_class = LimitOffsetPagination\n pagination_class.max_limit = 100\n pagination_class.default_limit = 10\n queryset = Chain.objects.all()\n filter_backends = [filters.OrderingFilter]\n ordering_fields = [\"relevance\", \"name\"]\n ordering = [\n \"relevance\",\n \"name\",\n ]\n\n\nclass ChainsDetailView(RetrieveAPIView):\n serializer_class = ChainSerializer\n queryset = Chain.objects.all()\n\n @swagger_auto_schema(\n operation_id=\"Get chain by id\"\n ) # type: ignore[misc] # Untyped decorator makes function \"get\" untyped\n def get(self, request: Request, *args: Any, **kwargs: Any) -> Response:\n return super().get(request, *args, **kwargs)\n\n\nclass ChainsDetailViewByShortName(RetrieveAPIView):\n lookup_field = \"short_name\"\n serializer_class = ChainSerializer\n queryset = Chain.objects.all()\n\n @swagger_auto_schema(\n operation_id=\"Get chain by shortName\",\n operation_description=\"Warning: `shortNames` may contain characters that need to be URL encoded (i.e.: whitespaces)\", # noqa E501\n ) # type: ignore[misc] # Untyped decorator makes function \"get\" untyped\n def get(self, request: Request, *args: Any, **kwargs: Any) -> Response:\n return super().get(request, *args, **kwargs)\n", "path": "src/chains/views.py"}]} | 562 | 558 |
gh_patches_debug_2542 | rasdani/github-patches | git_diff | CTFd__CTFd-1508 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Access media library from Challenge UI
Accessing the media library from the challenge UI is a useful idea if you're using images in the challenge interface. Saves some clicks.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/admin/pages.py`
Content:
```
1 from flask import render_template, request
2
3 from CTFd.admin import admin
4 from CTFd.models import Pages
5 from CTFd.schemas.pages import PageSchema
6 from CTFd.utils import markdown
7 from CTFd.utils.config.pages import build_html
8 from CTFd.utils.decorators import admins_only
9
10
11 @admin.route("/admin/pages")
12 @admins_only
13 def pages_listing():
14 pages = Pages.query.all()
15 return render_template("admin/pages.html", pages=pages)
16
17
18 @admin.route("/admin/pages/new")
19 @admins_only
20 def pages_new():
21 return render_template("admin/editor.html")
22
23
24 @admin.route("/admin/pages/preview", methods=["POST"])
25 @admins_only
26 def pages_preview():
27 data = request.form.to_dict()
28 schema = PageSchema()
29 page = schema.load(data)
30 return render_template("page.html", content=build_html(page.data["content"]))
31
32
33 @admin.route("/admin/pages/<int:page_id>")
34 @admins_only
35 def pages_detail(page_id):
36 page = Pages.query.filter_by(id=page_id).first_or_404()
37 page_op = request.args.get("operation")
38
39 if request.method == "GET" and page_op == "preview":
40 return render_template("page.html", content=markdown(page.content))
41
42 if request.method == "GET" and page_op == "create":
43 return render_template("admin/editor.html")
44
45 return render_template("admin/editor.html", page=page)
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/CTFd/admin/pages.py b/CTFd/admin/pages.py
--- a/CTFd/admin/pages.py
+++ b/CTFd/admin/pages.py
@@ -27,7 +27,7 @@
data = request.form.to_dict()
schema = PageSchema()
page = schema.load(data)
- return render_template("page.html", content=build_html(page.data["content"]))
+ return render_template("page.html", content=build_html(page.data.content))
@admin.route("/admin/pages/<int:page_id>")
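The diff above swaps `page.data["content"]` for `page.data.content`; a hypothetical stand-alone sketch of why, assuming marshmallow 2.x semantics where `Schema.load()` returns an `UnmarshalResult` whose `.data` attribute is the deserialized model instance rather than a dict:
```python
from collections import namedtuple

# Stand-ins for marshmallow 2.x Schema.load() output and the Pages model.
UnmarshalResult = namedtuple("UnmarshalResult", ["data", "errors"])

class Page:
    def __init__(self, content):
        self.content = content

page = UnmarshalResult(data=Page("# hello"), errors={})
print(page.data.content)          # attribute access: works
# page.data["content"]            # TypeError: 'Page' object is not subscriptable
```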
| {"golden_diff": "diff --git a/CTFd/admin/pages.py b/CTFd/admin/pages.py\n--- a/CTFd/admin/pages.py\n+++ b/CTFd/admin/pages.py\n@@ -27,7 +27,7 @@\n data = request.form.to_dict()\n schema = PageSchema()\n page = schema.load(data)\n- return render_template(\"page.html\", content=build_html(page.data[\"content\"]))\n+ return render_template(\"page.html\", content=build_html(page.data.content))\n \n \n @admin.route(\"/admin/pages/<int:page_id>\")\n", "issue": "Access media library from Challenge UI\nAccessing the media library from the challenge UI is a useful idea if you're using images in the challenge interface. Saves some clicks. \n", "before_files": [{"content": "from flask import render_template, request\n\nfrom CTFd.admin import admin\nfrom CTFd.models import Pages\nfrom CTFd.schemas.pages import PageSchema\nfrom CTFd.utils import markdown\nfrom CTFd.utils.config.pages import build_html\nfrom CTFd.utils.decorators import admins_only\n\n\[email protected](\"/admin/pages\")\n@admins_only\ndef pages_listing():\n pages = Pages.query.all()\n return render_template(\"admin/pages.html\", pages=pages)\n\n\[email protected](\"/admin/pages/new\")\n@admins_only\ndef pages_new():\n return render_template(\"admin/editor.html\")\n\n\[email protected](\"/admin/pages/preview\", methods=[\"POST\"])\n@admins_only\ndef pages_preview():\n data = request.form.to_dict()\n schema = PageSchema()\n page = schema.load(data)\n return render_template(\"page.html\", content=build_html(page.data[\"content\"]))\n\n\[email protected](\"/admin/pages/<int:page_id>\")\n@admins_only\ndef pages_detail(page_id):\n page = Pages.query.filter_by(id=page_id).first_or_404()\n page_op = request.args.get(\"operation\")\n\n if request.method == \"GET\" and page_op == \"preview\":\n return render_template(\"page.html\", content=markdown(page.content))\n\n if request.method == \"GET\" and page_op == \"create\":\n return render_template(\"admin/editor.html\")\n\n return render_template(\"admin/editor.html\", page=page)\n", "path": "CTFd/admin/pages.py"}], "after_files": [{"content": "from flask import render_template, request\n\nfrom CTFd.admin import admin\nfrom CTFd.models import Pages\nfrom CTFd.schemas.pages import PageSchema\nfrom CTFd.utils import markdown\nfrom CTFd.utils.config.pages import build_html\nfrom CTFd.utils.decorators import admins_only\n\n\[email protected](\"/admin/pages\")\n@admins_only\ndef pages_listing():\n pages = Pages.query.all()\n return render_template(\"admin/pages.html\", pages=pages)\n\n\[email protected](\"/admin/pages/new\")\n@admins_only\ndef pages_new():\n return render_template(\"admin/editor.html\")\n\n\[email protected](\"/admin/pages/preview\", methods=[\"POST\"])\n@admins_only\ndef pages_preview():\n data = request.form.to_dict()\n schema = PageSchema()\n page = schema.load(data)\n return render_template(\"page.html\", content=build_html(page.data.content))\n\n\[email protected](\"/admin/pages/<int:page_id>\")\n@admins_only\ndef pages_detail(page_id):\n page = Pages.query.filter_by(id=page_id).first_or_404()\n page_op = request.args.get(\"operation\")\n\n if request.method == \"GET\" and page_op == \"preview\":\n return render_template(\"page.html\", content=markdown(page.content))\n\n if request.method == \"GET\" and page_op == \"create\":\n return render_template(\"admin/editor.html\")\n\n return render_template(\"admin/editor.html\", page=page)\n", "path": "CTFd/admin/pages.py"}]} | 689 | 117 |