problem_id (stringlengths 18-22) | source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-10.2k) | golden_diff (stringlengths 151-4.94k) | verification_info (stringlengths 582-21k) | num_tokens (int64, 271-2.05k) | num_tokens_diff (int64, 47-1.02k) |
---|---|---|---|---|---|---|---|---|
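Each row that follows uses this schema: `prompt` carries the full task text (issue, file excerpts, and patch-format instructions), `golden_diff` carries the reference patch, and `verification_info` is a JSON string bundling the same diff together with the issue text and `before_files`/`after_files` snapshots of the touched files. A minimal sketch for iterating the rows programmatically is shown below; it assumes the rows are hosted as a Hugging Face dataset whose ID matches the `source` column and that a `train` split exists.

```python
import json

from datasets import load_dataset  # assumption: the rows are published on the Hugging Face hub

# The dataset ID is taken from the `source` column; the split name is an assumption.
ds = load_dataset("rasdani/github-patches", split="train")

for row in ds:
    info = json.loads(row["verification_info"])   # JSON string, per the schema above
    print(row["problem_id"], row["in_source_id"], row["num_tokens"], row["num_tokens_diff"])
    print(info["golden_diff"].splitlines()[0])     # same patch as the golden_diff column
    for f in info["before_files"]:
        print("  before:", f["path"], "->", len(f["content"]), "chars")
```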
gh_patches_debug_57793 | rasdani/github-patches | git_diff | catalyst-team__catalyst-855 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
EarlyStoppingCallback considers first epoch as bad
## 🐛 Bug Report
EarlyStoppingCallback considers first epoch as bad. This can lead for example to always stopping after first epoch if patience=1.
### How To Reproduce
You can train a model with early stopping and patience=1 and see that it always stops after first epoch. Or you can use the unit test below that I added to pull request.
#### Code sample
```python
from unittest.mock import MagicMock, PropertyMock
from catalyst.core import EarlyStoppingCallback
def test_patience1():
"""@TODO: Docs. Contribution is welcome."""
early_stop = EarlyStoppingCallback(1)
runner = MagicMock()
type(runner).stage_name = PropertyMock(return_value="training")
type(runner).valid_metrics = PropertyMock(return_value={"loss": 0.001})
stop_mock = PropertyMock(return_value=False)
type(runner).need_early_stop = stop_mock
early_stop.on_epoch_end(runner)
assert stop_mock.mock_calls == []
```
### Expected behavior
Training doesn't stop after first epoch. And the unit test passes.
### Environment
```bash
Catalyst version: 20.06
PyTorch version: 1.5.1
Is debug build: No
CUDA used to build PyTorch: None
TensorFlow version: N/A
TensorBoard version: 2.2.2
OS: Mac OSX 10.15.5
GCC version: Could not collect
CMake version: version 3.8.0
Python version: 3.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip3] catalyst-codestyle==20.4
[pip3] catalyst-sphinx-theme==1.1.1
[pip3] efficientnet-pytorch==0.6.3
[pip3] numpy==1.18.5
[pip3] segmentation-models-pytorch==0.1.0
[pip3] tensorboard==2.2.2
[pip3] tensorboard-plugin-wit==1.6.0.post3
[pip3] tensorboardX==2.0
[pip3] torch==1.5.1
[pip3] torchvision==0.6.1
[conda] catalyst-codestyle 20.4 <pip>
[conda] catalyst-sphinx-theme 1.1.1 <pip>
[conda] efficientnet-pytorch 0.6.3 <pip>
[conda] numpy 1.18.5 <pip>
[conda] segmentation-models-pytorch 0.1.0 <pip>
[conda] tensorboard 2.2.2 <pip>
[conda] tensorboard-plugin-wit 1.6.0.post3 <pip>
[conda] tensorboardX 2.0 <pip>
[conda] torch 1.5.1 <pip>
[conda] torchvision 0.6.1 <pip>
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `catalyst/core/callbacks/early_stop.py`
Content:
```
1 from catalyst.core.callback import Callback, CallbackNode, CallbackOrder
2 from catalyst.core.runner import IRunner
3
4
5 class CheckRunCallback(Callback):
6 """@TODO: Docs. Contribution is welcome."""
7
8 def __init__(self, num_batch_steps: int = 3, num_epoch_steps: int = 2):
9 """@TODO: Docs. Contribution is welcome."""
10 super().__init__(order=CallbackOrder.external, node=CallbackNode.all)
11 self.num_batch_steps = num_batch_steps
12 self.num_epoch_steps = num_epoch_steps
13
14 def on_epoch_end(self, runner: IRunner):
15 """@TODO: Docs. Contribution is welcome."""
16 if runner.epoch >= self.num_epoch_steps:
17 runner.need_early_stop = True
18
19 def on_batch_end(self, runner: IRunner):
20 """@TODO: Docs. Contribution is welcome."""
21 if runner.loader_batch_step >= self.num_batch_steps:
22 runner.need_early_stop = True
23
24
25 class EarlyStoppingCallback(Callback):
26 """@TODO: Docs. Contribution is welcome."""
27
28 def __init__(
29 self,
30 patience: int,
31 metric: str = "loss",
32 minimize: bool = True,
33 min_delta: float = 1e-6,
34 ):
35 """@TODO: Docs. Contribution is welcome."""
36 super().__init__(order=CallbackOrder.external, node=CallbackNode.all)
37 self.best_score = None
38 self.metric = metric
39 self.patience = patience
40 self.num_bad_epochs = 0
41 self.is_better = None
42
43 if minimize:
44 self.is_better = lambda score, best: score <= (best - min_delta)
45 else:
46 self.is_better = lambda score, best: score >= (best + min_delta)
47
48 def on_epoch_end(self, runner: IRunner) -> None:
49 """@TODO: Docs. Contribution is welcome."""
50 if runner.stage_name.startswith("infer"):
51 return
52
53 score = runner.valid_metrics[self.metric]
54 if self.best_score is None:
55 self.best_score = score
56 if self.is_better(score, self.best_score):
57 self.num_bad_epochs = 0
58 self.best_score = score
59 else:
60 self.num_bad_epochs += 1
61
62 if self.num_bad_epochs >= self.patience:
63 print(f"Early stop at {runner.epoch} epoch")
64 runner.need_early_stop = True
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/catalyst/core/callbacks/early_stop.py b/catalyst/core/callbacks/early_stop.py
--- a/catalyst/core/callbacks/early_stop.py
+++ b/catalyst/core/callbacks/early_stop.py
@@ -51,9 +51,7 @@
return
score = runner.valid_metrics[self.metric]
- if self.best_score is None:
- self.best_score = score
- if self.is_better(score, self.best_score):
+ if self.best_score is None or self.is_better(score, self.best_score):
self.num_bad_epochs = 0
self.best_score = score
else:
| {"golden_diff": "diff --git a/catalyst/core/callbacks/early_stop.py b/catalyst/core/callbacks/early_stop.py\n--- a/catalyst/core/callbacks/early_stop.py\n+++ b/catalyst/core/callbacks/early_stop.py\n@@ -51,9 +51,7 @@\n return\n \n score = runner.valid_metrics[self.metric]\n- if self.best_score is None:\n- self.best_score = score\n- if self.is_better(score, self.best_score):\n+ if self.best_score is None or self.is_better(score, self.best_score):\n self.num_bad_epochs = 0\n self.best_score = score\n else:\n", "issue": "EarlyStoppingCallback considers first epoch as bad\n## \ud83d\udc1b Bug Report\r\nEarlyStoppingCallback considers first epoch as bad. This can lead for example to always stopping after first epoch if patience=1.\r\n\r\n\r\n### How To Reproduce\r\nYou can train a model with early stopping and patience=1 and see that it always stops after first epoch. Or you can use the unit test below that I added to pull request.\r\n\r\n#### Code sample\r\n```python\r\nfrom unittest.mock import MagicMock, PropertyMock\r\n\r\nfrom catalyst.core import EarlyStoppingCallback\r\n\r\n\r\ndef test_patience1():\r\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\r\n early_stop = EarlyStoppingCallback(1)\r\n runner = MagicMock()\r\n type(runner).stage_name = PropertyMock(return_value=\"training\")\r\n type(runner).valid_metrics = PropertyMock(return_value={\"loss\": 0.001})\r\n stop_mock = PropertyMock(return_value=False)\r\n type(runner).need_early_stop = stop_mock\r\n\r\n early_stop.on_epoch_end(runner)\r\n\r\n assert stop_mock.mock_calls == []\r\n```\r\n\r\n### Expected behavior\r\nTraining doesn't stop after first epoch. And the unit test passes.\r\n\r\n\r\n### Environment\r\n```bash\r\nCatalyst version: 20.06\r\nPyTorch version: 1.5.1\r\nIs debug build: No\r\nCUDA used to build PyTorch: None\r\nTensorFlow version: N/A\r\nTensorBoard version: 2.2.2\r\n\r\nOS: Mac OSX 10.15.5\r\nGCC version: Could not collect\r\nCMake version: version 3.8.0\r\n\r\nPython version: 3.7\r\nIs CUDA available: No\r\nCUDA runtime version: No CUDA\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\n\r\nVersions of relevant libraries:\r\n[pip3] catalyst-codestyle==20.4\r\n[pip3] catalyst-sphinx-theme==1.1.1\r\n[pip3] efficientnet-pytorch==0.6.3\r\n[pip3] numpy==1.18.5\r\n[pip3] segmentation-models-pytorch==0.1.0\r\n[pip3] tensorboard==2.2.2\r\n[pip3] tensorboard-plugin-wit==1.6.0.post3\r\n[pip3] tensorboardX==2.0\r\n[pip3] torch==1.5.1\r\n[pip3] torchvision==0.6.1\r\n[conda] catalyst-codestyle 20.4 <pip>\r\n[conda] catalyst-sphinx-theme 1.1.1 <pip>\r\n[conda] efficientnet-pytorch 0.6.3 <pip>\r\n[conda] numpy 1.18.5 <pip>\r\n[conda] segmentation-models-pytorch 0.1.0 <pip>\r\n[conda] tensorboard 2.2.2 <pip>\r\n[conda] tensorboard-plugin-wit 1.6.0.post3 <pip>\r\n[conda] tensorboardX 2.0 <pip>\r\n[conda] torch 1.5.1 <pip>\r\n[conda] torchvision 0.6.1 <pip>\r\n```\r\n\n", "before_files": [{"content": "from catalyst.core.callback import Callback, CallbackNode, CallbackOrder\nfrom catalyst.core.runner import IRunner\n\n\nclass CheckRunCallback(Callback):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n\n def __init__(self, num_batch_steps: int = 3, num_epoch_steps: int = 2):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n super().__init__(order=CallbackOrder.external, node=CallbackNode.all)\n self.num_batch_steps = num_batch_steps\n self.num_epoch_steps = num_epoch_steps\n\n def on_epoch_end(self, runner: IRunner):\n \"\"\"@TODO: Docs. 
Contribution is welcome.\"\"\"\n if runner.epoch >= self.num_epoch_steps:\n runner.need_early_stop = True\n\n def on_batch_end(self, runner: IRunner):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n if runner.loader_batch_step >= self.num_batch_steps:\n runner.need_early_stop = True\n\n\nclass EarlyStoppingCallback(Callback):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n\n def __init__(\n self,\n patience: int,\n metric: str = \"loss\",\n minimize: bool = True,\n min_delta: float = 1e-6,\n ):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n super().__init__(order=CallbackOrder.external, node=CallbackNode.all)\n self.best_score = None\n self.metric = metric\n self.patience = patience\n self.num_bad_epochs = 0\n self.is_better = None\n\n if minimize:\n self.is_better = lambda score, best: score <= (best - min_delta)\n else:\n self.is_better = lambda score, best: score >= (best + min_delta)\n\n def on_epoch_end(self, runner: IRunner) -> None:\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n if runner.stage_name.startswith(\"infer\"):\n return\n\n score = runner.valid_metrics[self.metric]\n if self.best_score is None:\n self.best_score = score\n if self.is_better(score, self.best_score):\n self.num_bad_epochs = 0\n self.best_score = score\n else:\n self.num_bad_epochs += 1\n\n if self.num_bad_epochs >= self.patience:\n print(f\"Early stop at {runner.epoch} epoch\")\n runner.need_early_stop = True\n", "path": "catalyst/core/callbacks/early_stop.py"}], "after_files": [{"content": "from catalyst.core.callback import Callback, CallbackNode, CallbackOrder\nfrom catalyst.core.runner import IRunner\n\n\nclass CheckRunCallback(Callback):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n\n def __init__(self, num_batch_steps: int = 3, num_epoch_steps: int = 2):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n super().__init__(order=CallbackOrder.external, node=CallbackNode.all)\n self.num_batch_steps = num_batch_steps\n self.num_epoch_steps = num_epoch_steps\n\n def on_epoch_end(self, runner: IRunner):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n if runner.epoch >= self.num_epoch_steps:\n runner.need_early_stop = True\n\n def on_batch_end(self, runner: IRunner):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n if runner.loader_batch_step >= self.num_batch_steps:\n runner.need_early_stop = True\n\n\nclass EarlyStoppingCallback(Callback):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n\n def __init__(\n self,\n patience: int,\n metric: str = \"loss\",\n minimize: bool = True,\n min_delta: float = 1e-6,\n ):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n super().__init__(order=CallbackOrder.external, node=CallbackNode.all)\n self.best_score = None\n self.metric = metric\n self.patience = patience\n self.num_bad_epochs = 0\n self.is_better = None\n\n if minimize:\n self.is_better = lambda score, best: score <= (best - min_delta)\n else:\n self.is_better = lambda score, best: score >= (best + min_delta)\n\n def on_epoch_end(self, runner: IRunner) -> None:\n \"\"\"@TODO: Docs. 
Contribution is welcome.\"\"\"\n if runner.stage_name.startswith(\"infer\"):\n return\n\n score = runner.valid_metrics[self.metric]\n if self.best_score is None or self.is_better(score, self.best_score):\n self.num_bad_epochs = 0\n self.best_score = score\n else:\n self.num_bad_epochs += 1\n\n if self.num_bad_epochs >= self.patience:\n print(f\"Early stop at {runner.epoch} epoch\")\n runner.need_early_stop = True\n", "path": "catalyst/core/callbacks/early_stop.py"}]} | 1,605 | 144 |
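The catalyst fix above is easiest to see by tracing `on_epoch_end` on the very first epoch: the original code set `best_score = score` and then immediately evaluated `is_better(score, best_score)`, which for the minimizing case is `score <= score - min_delta` and therefore false, so `num_bad_epochs` jumped to 1 and `patience=1` stopped training immediately. A self-contained sketch of both variants (plain Python, no catalyst imports) is below.

```python
def stops_after_first_epoch(score, min_delta=1e-6, patience=1, fixed=False):
    """Trace the first on_epoch_end call for a minimized metric (sketch, no catalyst imports)."""
    is_better = lambda s, best: s <= (best - min_delta)
    best_score, num_bad_epochs = None, 0

    if fixed:
        # patched logic: a missing best_score counts as an improvement
        if best_score is None or is_better(score, best_score):
            num_bad_epochs = 0
            best_score = score
        else:
            num_bad_epochs += 1
    else:
        # original logic: best_score is assigned first, then compared against itself
        if best_score is None:
            best_score = score
        if is_better(score, best_score):  # score <= score - min_delta -> always False
            num_bad_epochs = 0
            best_score = score
        else:
            num_bad_epochs += 1

    return num_bad_epochs >= patience  # True means "early stop"


print(stops_after_first_epoch(0.001))              # True  - the reported bug
print(stops_after_first_epoch(0.001, fixed=True))  # False - training continues
```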
gh_patches_debug_840 | rasdani/github-patches | git_diff | nilearn__nilearn-507 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add test for compatibility of old version of six
For the moment, we are compatible with the latest version of six. Recently, somebody pointed out that we did not support six 1.5.2. We should investigate, decide which version we should be compatible with and then add this to Travis.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `continuous_integration/show-python-packages-versions.py`
Content:
```
1 import sys
2
3 DEPENDENCIES = ['numpy', 'scipy', 'sklearn', 'matplotlib', 'nibabel']
4
5
6 def print_package_version(package_name, indent=' '):
7 try:
8 package = __import__(package_name)
9 version = getattr(package, '__version__', None)
10 package_file = getattr(package, '__file__', )
11 provenance_info = '{0} from {1}'.format(version, package_file)
12 except ImportError:
13 provenance_info = 'not installed'
14
15 print('{0}{1}: {2}'.format(indent, package_name, provenance_info))
16
17 if __name__ == '__main__':
18 print('=' * 120)
19 print('Python %s' % str(sys.version))
20 print('from: %s\n' % sys.executable)
21
22 print('Dependencies versions')
23 for package_name in DEPENDENCIES:
24 print_package_version(package_name)
25 print('=' * 120)
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/continuous_integration/show-python-packages-versions.py b/continuous_integration/show-python-packages-versions.py
--- a/continuous_integration/show-python-packages-versions.py
+++ b/continuous_integration/show-python-packages-versions.py
@@ -1,6 +1,6 @@
import sys
-DEPENDENCIES = ['numpy', 'scipy', 'sklearn', 'matplotlib', 'nibabel']
+DEPENDENCIES = ['six', 'numpy', 'scipy', 'sklearn', 'matplotlib', 'nibabel']
def print_package_version(package_name, indent=' '):
| {"golden_diff": "diff --git a/continuous_integration/show-python-packages-versions.py b/continuous_integration/show-python-packages-versions.py\n--- a/continuous_integration/show-python-packages-versions.py\n+++ b/continuous_integration/show-python-packages-versions.py\n@@ -1,6 +1,6 @@\n import sys\n \n-DEPENDENCIES = ['numpy', 'scipy', 'sklearn', 'matplotlib', 'nibabel']\n+DEPENDENCIES = ['six', 'numpy', 'scipy', 'sklearn', 'matplotlib', 'nibabel']\n \n \n def print_package_version(package_name, indent=' '):\n", "issue": "Add test for compatibility of old version of six\nFor the moment, we are compatible with the latest version of six. Recently, somebody pointed out that we did not support six 1.5.2. We should investigate, decide which version we should be compatible with and then add this to Travis.\n\n", "before_files": [{"content": "import sys\n\nDEPENDENCIES = ['numpy', 'scipy', 'sklearn', 'matplotlib', 'nibabel']\n\n\ndef print_package_version(package_name, indent=' '):\n try:\n package = __import__(package_name)\n version = getattr(package, '__version__', None)\n package_file = getattr(package, '__file__', )\n provenance_info = '{0} from {1}'.format(version, package_file)\n except ImportError:\n provenance_info = 'not installed'\n\n print('{0}{1}: {2}'.format(indent, package_name, provenance_info))\n\nif __name__ == '__main__':\n print('=' * 120)\n print('Python %s' % str(sys.version))\n print('from: %s\\n' % sys.executable)\n\n print('Dependencies versions')\n for package_name in DEPENDENCIES:\n print_package_version(package_name)\n print('=' * 120)\n", "path": "continuous_integration/show-python-packages-versions.py"}], "after_files": [{"content": "import sys\n\nDEPENDENCIES = ['six', 'numpy', 'scipy', 'sklearn', 'matplotlib', 'nibabel']\n\n\ndef print_package_version(package_name, indent=' '):\n try:\n package = __import__(package_name)\n version = getattr(package, '__version__', None)\n package_file = getattr(package, '__file__', )\n provenance_info = '{0} from {1}'.format(version, package_file)\n except ImportError:\n provenance_info = 'not installed'\n\n print('{0}{1}: {2}'.format(indent, package_name, provenance_info))\n\nif __name__ == '__main__':\n print('=' * 120)\n print('Python %s' % str(sys.version))\n print('from: %s\\n' % sys.executable)\n\n print('Dependencies versions')\n for package_name in DEPENDENCIES:\n print_package_version(package_name)\n print('=' * 120)\n", "path": "continuous_integration/show-python-packages-versions.py"}]} | 569 | 123 |
gh_patches_debug_11637 | rasdani/github-patches | git_diff | getsentry__sentry-59857 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Jira deprecation of glance panels
Notice from Atlassian Support team about glance panel deprecation.
AC:
- Review the deprecation plan
- Build a recommendation based on how we're impacted. If minor development work is required, complete that with this ticket. If significant work is required, notify EM/PM to share impact and come up with next steps together.
Email from Atlassian:
```
Hope you are having a good day!
As part of this deprecation notice (https://developer.atlassian.com/cloud/jira/platform/changelog/#CHANGE-897), we are reaching out because we have identified that your app, “Sentry,” will be affected by the deprecation of glance panels.
This was initially scheduled for the 6th of October, but we have delayed it until the 30th of November.
The jiraIssueGlances and jira:issueGlance modules in Forge (https://developer.atlassian.com/platform/forge/manifest-reference/modules/jira-issue-glance/) and Connect (https://developer.atlassian.com/cloud/jira/platform/modules/issue-glance/) are being deprecated and replaced with the issueContext module.
We recommend transitioning from the glance panel to the new issue context module before the 30th of November.
Please note, we will not be extending this deprecation date as we announced it on the 30th of March.
Let me know if you need any further assistance,
Ahmud
Product Manager-Jira Cloud
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/integrations/jira/endpoints/descriptor.py`
Content:
```
1 from django.conf import settings
2 from django.urls import reverse
3 from rest_framework.request import Request
4 from rest_framework.response import Response
5
6 from sentry.api.api_publish_status import ApiPublishStatus
7 from sentry.api.base import Endpoint, control_silo_endpoint
8 from sentry.utils.assets import get_frontend_app_asset_url
9 from sentry.utils.http import absolute_uri
10
11 from .. import JIRA_KEY
12
13 scopes = ["read", "write", "act_as_user"]
14 # For Jira, only approved apps can use the access_email_addresses scope
15 # This scope allows Sentry to use the email endpoint (https://developer.atlassian.com/cloud/jira/platform/rest/v3/#api-rest-api-3-user-email-get)
16 # We use the email with Jira 2-way sync in order to match the user
17 if settings.JIRA_USE_EMAIL_SCOPE:
18 scopes.append("access_email_addresses")
19
20
21 @control_silo_endpoint
22 class JiraDescriptorEndpoint(Endpoint):
23 publish_status = {
24 "GET": ApiPublishStatus.UNKNOWN,
25 }
26 """
27 Provides the metadata needed by Jira to setup an instance of the Sentry integration within Jira.
28 Only used by on-prem orgs and devs setting up local instances of the integration. (Sentry SAAS
29 already has an established, official instance of the Sentry integration registered with Jira.)
30 """
31
32 authentication_classes = ()
33 permission_classes = ()
34
35 def get(self, request: Request) -> Response:
36 sentry_logo = absolute_uri(
37 get_frontend_app_asset_url("sentry", "entrypoints/logo-sentry.svg")
38 )
39 return self.respond(
40 {
41 "name": "Sentry",
42 "description": "Connect your Sentry organization to one or more of your Jira cloud instances. Get started streamlining your bug-squashing workflow by allowing your Sentry and Jira instances to work together.",
43 "key": JIRA_KEY,
44 "baseUrl": absolute_uri(),
45 "vendor": {"name": "Sentry", "url": "https://sentry.io"},
46 "authentication": {"type": "jwt"},
47 "lifecycle": {
48 "installed": "/extensions/jira/installed/",
49 "uninstalled": "/extensions/jira/uninstalled/",
50 },
51 "apiVersion": 1,
52 "modules": {
53 "postInstallPage": {
54 "url": "/extensions/jira/ui-hook/",
55 "name": {"value": "Configure Sentry Add-on"},
56 "key": "post-install-sentry",
57 },
58 "configurePage": {
59 "url": "/extensions/jira/ui-hook/",
60 "name": {"value": "Configure Sentry Add-on"},
61 "key": "configure-sentry",
62 },
63 "jiraIssueGlances": [
64 {
65 "icon": {"width": 24, "height": 24, "url": sentry_logo},
66 "content": {"type": "label", "label": {"value": "Linked Issues"}},
67 "target": {
68 "type": "web_panel",
69 "url": "/extensions/jira/issue/{issue.key}/",
70 },
71 "name": {"value": "Sentry "},
72 "key": "sentry-issues-glance",
73 }
74 ],
75 "webhooks": [
76 {
77 "event": "jira:issue_updated",
78 "url": reverse("sentry-extensions-jira-issue-updated"),
79 "excludeBody": False,
80 }
81 ],
82 },
83 "apiMigrations": {"gdpr": True, "context-qsh": True, "signed-install": True},
84 "scopes": scopes,
85 }
86 )
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/integrations/jira/endpoints/descriptor.py b/src/sentry/integrations/jira/endpoints/descriptor.py
--- a/src/sentry/integrations/jira/endpoints/descriptor.py
+++ b/src/sentry/integrations/jira/endpoints/descriptor.py
@@ -60,7 +60,7 @@
"name": {"value": "Configure Sentry Add-on"},
"key": "configure-sentry",
},
- "jiraIssueGlances": [
+ "jiraIssueContexts": [
{
"icon": {"width": 24, "height": 24, "url": sentry_logo},
"content": {"type": "label", "label": {"value": "Linked Issues"}},
| {"golden_diff": "diff --git a/src/sentry/integrations/jira/endpoints/descriptor.py b/src/sentry/integrations/jira/endpoints/descriptor.py\n--- a/src/sentry/integrations/jira/endpoints/descriptor.py\n+++ b/src/sentry/integrations/jira/endpoints/descriptor.py\n@@ -60,7 +60,7 @@\n \"name\": {\"value\": \"Configure Sentry Add-on\"},\n \"key\": \"configure-sentry\",\n },\n- \"jiraIssueGlances\": [\n+ \"jiraIssueContexts\": [\n {\n \"icon\": {\"width\": 24, \"height\": 24, \"url\": sentry_logo},\n \"content\": {\"type\": \"label\", \"label\": {\"value\": \"Linked Issues\"}},\n", "issue": "Jira deprecation of glance panels\nNotice from Atlassian Support team about glance panel deprecation. \r\n\r\nAC:\r\n- Review the deprecation plan\r\n- Build a recommendation based on how we're impacted. If minor development work is required, complete that with this ticket. If significant work is required, notify EM/PM to share impact and come up with next steps together.\r\n\r\nEmail from Atlassian:\r\n```\r\nHope you are having a good day!\r\nAs part of this deprecation notice (https://developer.atlassian.com/cloud/jira/platform/changelog/#CHANGE-897), we are reaching out because we have identified that your app, \u201cSentry,\u201d will be affected by the deprecation of glance panels. \r\nThis was initially scheduled for the 6th of October, but we have delayed it until the 30th of November.\r\nThe jiraIssueGlances and jira:issueGlance modules in Forge (https://developer.atlassian.com/platform/forge/manifest-reference/modules/jira-issue-glance/) and Connect (https://developer.atlassian.com/cloud/jira/platform/modules/issue-glance/) are being deprecated and replaced with the issueContext module. \r\nWe recommend transitioning from the glance panel to the new issue context module before the 30th of November. \r\nPlease note, we will not be extending this deprecation date as we announced it on the 30th of March.\r\nLet me know if you need any further assistance,\r\nAhmud\r\nProduct Manager-Jira Cloud\r\n```\n", "before_files": [{"content": "from django.conf import settings\nfrom django.urls import reverse\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\n\nfrom sentry.api.api_publish_status import ApiPublishStatus\nfrom sentry.api.base import Endpoint, control_silo_endpoint\nfrom sentry.utils.assets import get_frontend_app_asset_url\nfrom sentry.utils.http import absolute_uri\n\nfrom .. import JIRA_KEY\n\nscopes = [\"read\", \"write\", \"act_as_user\"]\n# For Jira, only approved apps can use the access_email_addresses scope\n# This scope allows Sentry to use the email endpoint (https://developer.atlassian.com/cloud/jira/platform/rest/v3/#api-rest-api-3-user-email-get)\n# We use the email with Jira 2-way sync in order to match the user\nif settings.JIRA_USE_EMAIL_SCOPE:\n scopes.append(\"access_email_addresses\")\n\n\n@control_silo_endpoint\nclass JiraDescriptorEndpoint(Endpoint):\n publish_status = {\n \"GET\": ApiPublishStatus.UNKNOWN,\n }\n \"\"\"\n Provides the metadata needed by Jira to setup an instance of the Sentry integration within Jira.\n Only used by on-prem orgs and devs setting up local instances of the integration. 
(Sentry SAAS\n already has an established, official instance of the Sentry integration registered with Jira.)\n \"\"\"\n\n authentication_classes = ()\n permission_classes = ()\n\n def get(self, request: Request) -> Response:\n sentry_logo = absolute_uri(\n get_frontend_app_asset_url(\"sentry\", \"entrypoints/logo-sentry.svg\")\n )\n return self.respond(\n {\n \"name\": \"Sentry\",\n \"description\": \"Connect your Sentry organization to one or more of your Jira cloud instances. Get started streamlining your bug-squashing workflow by allowing your Sentry and Jira instances to work together.\",\n \"key\": JIRA_KEY,\n \"baseUrl\": absolute_uri(),\n \"vendor\": {\"name\": \"Sentry\", \"url\": \"https://sentry.io\"},\n \"authentication\": {\"type\": \"jwt\"},\n \"lifecycle\": {\n \"installed\": \"/extensions/jira/installed/\",\n \"uninstalled\": \"/extensions/jira/uninstalled/\",\n },\n \"apiVersion\": 1,\n \"modules\": {\n \"postInstallPage\": {\n \"url\": \"/extensions/jira/ui-hook/\",\n \"name\": {\"value\": \"Configure Sentry Add-on\"},\n \"key\": \"post-install-sentry\",\n },\n \"configurePage\": {\n \"url\": \"/extensions/jira/ui-hook/\",\n \"name\": {\"value\": \"Configure Sentry Add-on\"},\n \"key\": \"configure-sentry\",\n },\n \"jiraIssueGlances\": [\n {\n \"icon\": {\"width\": 24, \"height\": 24, \"url\": sentry_logo},\n \"content\": {\"type\": \"label\", \"label\": {\"value\": \"Linked Issues\"}},\n \"target\": {\n \"type\": \"web_panel\",\n \"url\": \"/extensions/jira/issue/{issue.key}/\",\n },\n \"name\": {\"value\": \"Sentry \"},\n \"key\": \"sentry-issues-glance\",\n }\n ],\n \"webhooks\": [\n {\n \"event\": \"jira:issue_updated\",\n \"url\": reverse(\"sentry-extensions-jira-issue-updated\"),\n \"excludeBody\": False,\n }\n ],\n },\n \"apiMigrations\": {\"gdpr\": True, \"context-qsh\": True, \"signed-install\": True},\n \"scopes\": scopes,\n }\n )\n", "path": "src/sentry/integrations/jira/endpoints/descriptor.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.urls import reverse\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\n\nfrom sentry.api.api_publish_status import ApiPublishStatus\nfrom sentry.api.base import Endpoint, control_silo_endpoint\nfrom sentry.utils.assets import get_frontend_app_asset_url\nfrom sentry.utils.http import absolute_uri\n\nfrom .. import JIRA_KEY\n\nscopes = [\"read\", \"write\", \"act_as_user\"]\n# For Jira, only approved apps can use the access_email_addresses scope\n# This scope allows Sentry to use the email endpoint (https://developer.atlassian.com/cloud/jira/platform/rest/v3/#api-rest-api-3-user-email-get)\n# We use the email with Jira 2-way sync in order to match the user\nif settings.JIRA_USE_EMAIL_SCOPE:\n scopes.append(\"access_email_addresses\")\n\n\n@control_silo_endpoint\nclass JiraDescriptorEndpoint(Endpoint):\n publish_status = {\n \"GET\": ApiPublishStatus.UNKNOWN,\n }\n \"\"\"\n Provides the metadata needed by Jira to setup an instance of the Sentry integration within Jira.\n Only used by on-prem orgs and devs setting up local instances of the integration. 
(Sentry SAAS\n already has an established, official instance of the Sentry integration registered with Jira.)\n \"\"\"\n\n authentication_classes = ()\n permission_classes = ()\n\n def get(self, request: Request) -> Response:\n sentry_logo = absolute_uri(\n get_frontend_app_asset_url(\"sentry\", \"entrypoints/logo-sentry.svg\")\n )\n return self.respond(\n {\n \"name\": \"Sentry\",\n \"description\": \"Connect your Sentry organization to one or more of your Jira cloud instances. Get started streamlining your bug-squashing workflow by allowing your Sentry and Jira instances to work together.\",\n \"key\": JIRA_KEY,\n \"baseUrl\": absolute_uri(),\n \"vendor\": {\"name\": \"Sentry\", \"url\": \"https://sentry.io\"},\n \"authentication\": {\"type\": \"jwt\"},\n \"lifecycle\": {\n \"installed\": \"/extensions/jira/installed/\",\n \"uninstalled\": \"/extensions/jira/uninstalled/\",\n },\n \"apiVersion\": 1,\n \"modules\": {\n \"postInstallPage\": {\n \"url\": \"/extensions/jira/ui-hook/\",\n \"name\": {\"value\": \"Configure Sentry Add-on\"},\n \"key\": \"post-install-sentry\",\n },\n \"configurePage\": {\n \"url\": \"/extensions/jira/ui-hook/\",\n \"name\": {\"value\": \"Configure Sentry Add-on\"},\n \"key\": \"configure-sentry\",\n },\n \"jiraIssueContexts\": [\n {\n \"icon\": {\"width\": 24, \"height\": 24, \"url\": sentry_logo},\n \"content\": {\"type\": \"label\", \"label\": {\"value\": \"Linked Issues\"}},\n \"target\": {\n \"type\": \"web_panel\",\n \"url\": \"/extensions/jira/issue/{issue.key}/\",\n },\n \"name\": {\"value\": \"Sentry \"},\n \"key\": \"sentry-issues-glance\",\n }\n ],\n \"webhooks\": [\n {\n \"event\": \"jira:issue_updated\",\n \"url\": reverse(\"sentry-extensions-jira-issue-updated\"),\n \"excludeBody\": False,\n }\n ],\n },\n \"apiMigrations\": {\"gdpr\": True, \"context-qsh\": True, \"signed-install\": True},\n \"scopes\": scopes,\n }\n )\n", "path": "src/sentry/integrations/jira/endpoints/descriptor.py"}]} | 1,499 | 171 |
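For the Jira descriptor change, the only delta is the module key (`jiraIssueGlances` becomes `jiraIssueContexts`); the icon, label, target URL and `sentry-issues-glance` key are carried over unchanged. Below is a small framework-free sketch of a regression check over the descriptor payload; it assumes the JSON returned by `JiraDescriptorEndpoint` has already been fetched and decoded into a dict.

```python
def check_descriptor_modules(descriptor):
    """Sketch: assert the glance module is gone and the issue-context module kept its payload."""
    modules = descriptor["modules"]
    assert "jiraIssueGlances" not in modules, "deprecated glance module still registered"
    contexts = modules["jiraIssueContexts"]
    assert contexts[0]["key"] == "sentry-issues-glance"
    assert contexts[0]["target"]["url"] == "/extensions/jira/issue/{issue.key}/"
    return True


# Trimmed-down illustrative payload mirroring the descriptor endpoint above.
print(check_descriptor_modules({
    "modules": {
        "jiraIssueContexts": [
            {
                "key": "sentry-issues-glance",
                "target": {"type": "web_panel", "url": "/extensions/jira/issue/{issue.key}/"},
            }
        ]
    }
}))
```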
gh_patches_debug_21950 | rasdani/github-patches | git_diff | cornellius-gp__gpytorch-1670 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] The Added Loss term for InducingKernel seems flipped in sign?
# 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
```
def loss(self, *params):
prior_covar = self.prior_dist.lazy_covariance_matrix
variational_covar = self.variational_dist.lazy_covariance_matrix
diag = prior_covar.diag() - variational_covar.diag()
shape = prior_covar.shape[:-1]
noise_diag = self.likelihood._shaped_noise_covar(shape, *params).diag()
return 0.5 * (diag / noise_diag).sum()
```
This is the current code for InducingPointKernelAddedLossTerm.loss
From what I see, this "loss term" is added into the mll that is returned by the `ExactMarginalLogLikelihood` class. This in itself is misleading as the loss is usually the negative of the mll.
Moreover, the variational negative loss used to evaluate inducing points is given below

As can be seen, the above is the expression for the pseudo-mll that is maximized when optimizing the inducing points. in this, the component of `InducingPointKernelAddedLossTerm` is negative to the value that is being added into the loss.
This is quite likely a significant bug. Please fix (just invert the sign of `diag` above)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gpytorch/mlls/inducing_point_kernel_added_loss_term.py`
Content:
```
1 #!/usr/bin/env python3
2
3 from .added_loss_term import AddedLossTerm
4
5
6 class InducingPointKernelAddedLossTerm(AddedLossTerm):
7 def __init__(self, variational_dist, prior_dist, likelihood):
8 self.prior_dist = prior_dist
9 self.variational_dist = variational_dist
10 self.likelihood = likelihood
11
12 def loss(self, *params):
13 prior_covar = self.prior_dist.lazy_covariance_matrix
14 variational_covar = self.variational_dist.lazy_covariance_matrix
15 diag = prior_covar.diag() - variational_covar.diag()
16 shape = prior_covar.shape[:-1]
17 noise_diag = self.likelihood._shaped_noise_covar(shape, *params).diag()
18 return 0.5 * (diag / noise_diag).sum()
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gpytorch/mlls/inducing_point_kernel_added_loss_term.py b/gpytorch/mlls/inducing_point_kernel_added_loss_term.py
--- a/gpytorch/mlls/inducing_point_kernel_added_loss_term.py
+++ b/gpytorch/mlls/inducing_point_kernel_added_loss_term.py
@@ -4,7 +4,7 @@
class InducingPointKernelAddedLossTerm(AddedLossTerm):
- def __init__(self, variational_dist, prior_dist, likelihood):
+ def __init__(self, prior_dist, variational_dist, likelihood):
self.prior_dist = prior_dist
self.variational_dist = variational_dist
self.likelihood = likelihood
@@ -12,7 +12,7 @@
def loss(self, *params):
prior_covar = self.prior_dist.lazy_covariance_matrix
variational_covar = self.variational_dist.lazy_covariance_matrix
- diag = prior_covar.diag() - variational_covar.diag()
+ diag = variational_covar.diag() - prior_covar.diag()
shape = prior_covar.shape[:-1]
noise_diag = self.likelihood._shaped_noise_covar(shape, *params).diag()
return 0.5 * (diag / noise_diag).sum()
| {"golden_diff": "diff --git a/gpytorch/mlls/inducing_point_kernel_added_loss_term.py b/gpytorch/mlls/inducing_point_kernel_added_loss_term.py\n--- a/gpytorch/mlls/inducing_point_kernel_added_loss_term.py\n+++ b/gpytorch/mlls/inducing_point_kernel_added_loss_term.py\n@@ -4,7 +4,7 @@\n \n \n class InducingPointKernelAddedLossTerm(AddedLossTerm):\n- def __init__(self, variational_dist, prior_dist, likelihood):\n+ def __init__(self, prior_dist, variational_dist, likelihood):\n self.prior_dist = prior_dist\n self.variational_dist = variational_dist\n self.likelihood = likelihood\n@@ -12,7 +12,7 @@\n def loss(self, *params):\n prior_covar = self.prior_dist.lazy_covariance_matrix\n variational_covar = self.variational_dist.lazy_covariance_matrix\n- diag = prior_covar.diag() - variational_covar.diag()\n+ diag = variational_covar.diag() - prior_covar.diag()\n shape = prior_covar.shape[:-1]\n noise_diag = self.likelihood._shaped_noise_covar(shape, *params).diag()\n return 0.5 * (diag / noise_diag).sum()\n", "issue": "[Bug] The Added Loss term for InducingKernel seems flipped in sign?\n# \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n```\r\n def loss(self, *params):\r\n prior_covar = self.prior_dist.lazy_covariance_matrix\r\n variational_covar = self.variational_dist.lazy_covariance_matrix\r\n diag = prior_covar.diag() - variational_covar.diag()\r\n shape = prior_covar.shape[:-1]\r\n noise_diag = self.likelihood._shaped_noise_covar(shape, *params).diag()\r\n return 0.5 * (diag / noise_diag).sum()\r\n```\r\nThis is the current code for InducingPointKernelAddedLossTerm.loss\r\n\r\nFrom what I see, this \"loss term\" is added into the mll that is returned by the `ExactMarginalLogLikelihood` class. This in itself is misleading as the loss is usually the negative of the mll.\r\n\r\nMoreover, the variational negative loss used to evaluate inducing points is given below\r\n\r\n\r\n\r\nAs can be seen, the above is the expression for the pseudo-mll that is maximized when optimizing the inducing points. in this, the component of `InducingPointKernelAddedLossTerm` is negative to the value that is being added into the loss.\r\n\r\nThis is quite likely a significant bug. 
Please fix (just invert the sign of `diag` above)\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom .added_loss_term import AddedLossTerm\n\n\nclass InducingPointKernelAddedLossTerm(AddedLossTerm):\n def __init__(self, variational_dist, prior_dist, likelihood):\n self.prior_dist = prior_dist\n self.variational_dist = variational_dist\n self.likelihood = likelihood\n\n def loss(self, *params):\n prior_covar = self.prior_dist.lazy_covariance_matrix\n variational_covar = self.variational_dist.lazy_covariance_matrix\n diag = prior_covar.diag() - variational_covar.diag()\n shape = prior_covar.shape[:-1]\n noise_diag = self.likelihood._shaped_noise_covar(shape, *params).diag()\n return 0.5 * (diag / noise_diag).sum()\n", "path": "gpytorch/mlls/inducing_point_kernel_added_loss_term.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nfrom .added_loss_term import AddedLossTerm\n\n\nclass InducingPointKernelAddedLossTerm(AddedLossTerm):\n def __init__(self, prior_dist, variational_dist, likelihood):\n self.prior_dist = prior_dist\n self.variational_dist = variational_dist\n self.likelihood = likelihood\n\n def loss(self, *params):\n prior_covar = self.prior_dist.lazy_covariance_matrix\n variational_covar = self.variational_dist.lazy_covariance_matrix\n diag = variational_covar.diag() - prior_covar.diag()\n shape = prior_covar.shape[:-1]\n noise_diag = self.likelihood._shaped_noise_covar(shape, *params).diag()\n return 0.5 * (diag / noise_diag).sum()\n", "path": "gpytorch/mlls/inducing_point_kernel_added_loss_term.py"}]} | 824 | 284 |
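The gpytorch patch flips both the constructor argument order and the sign of the added term: with `diag = variational_covar.diag() - prior_covar.diag()` the result is non-positive (the variational/Nystrom diagonal never exceeds the exact prior diagonal), so adding it to the marginal log likelihood penalizes poor inducing-point placement instead of rewarding it, in line with the trace term of the SGPR bound. The arithmetic can be checked with plain tensors, as in the toy sketch below (made-up numbers, no gpytorch objects).

```python
import torch

prior_diag       = torch.tensor([1.00, 1.00, 1.00])  # diag of the exact prior covariance K_ff
variational_diag = torch.tensor([0.90, 0.75, 0.95])  # diag of the approximate covariance Q_ff
noise_diag       = torch.tensor([0.10, 0.10, 0.10])  # likelihood noise variances

old_term = 0.5 * ((prior_diag - variational_diag) / noise_diag).sum()  # +2.0: inflates the mll
new_term = 0.5 * ((variational_diag - prior_diag) / noise_diag).sum()  # -2.0: penalizes the mll

print(old_term.item(), new_term.item())
```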
gh_patches_debug_29548 | rasdani/github-patches | git_diff | translate__pootle-3719 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Running migrate twice gives an error about changed models
If you run `migrate` a second time directly after an initial migration you will get the following error.
```
Running migrations:
No migrations to apply.
Your models have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
```
`makemigrations` produces this file:
``` py
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
import pootle.core.markup.fields
class Migration(migrations.Migration):
dependencies = [
('virtualfolder', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='virtualfolder',
name='description',
field=pootle.core.markup.fields.MarkupField(help_text='Use this to provide more information or instructions. Allowed markup: HTML', verbose_name='Description', blank=True),
preserve_default=True,
),
]
```
@unho Why are virtualfolders doing this?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/virtualfolder/migrations/0001_initial.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import unicode_literals
3
4 from django.db import models, migrations
5 import pootle.core.markup.fields
6
7
8 class Migration(migrations.Migration):
9
10 dependencies = [
11 ('pootle_store', '0001_initial'),
12 ]
13
14 operations = [
15 migrations.CreateModel(
16 name='VirtualFolder',
17 fields=[
18 ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
19 ('name', models.CharField(max_length=70, verbose_name='Name')),
20 ('location', models.CharField(help_text='Root path where this virtual folder is applied.', max_length=255, verbose_name='Location')),
21 ('filter_rules', models.TextField(help_text='Filtering rules that tell which stores this virtual folder comprises.', verbose_name='Filter')),
22 ('priority', models.FloatField(default=1, help_text='Number specifying importance. Greater priority means it is more important.', verbose_name='Priority')),
23 ('is_browsable', models.BooleanField(default=True, help_text='Whether this virtual folder is active or not.', verbose_name='Is browsable?')),
24 ('description', pootle.core.markup.fields.MarkupField(help_text='Use this to provide more information or instructions. Allowed markup: HTML', verbose_name='Description', blank=True)),
25 ('units', models.ManyToManyField(related_name='vfolders', to='pootle_store.Unit', db_index=True)),
26 ],
27 options={
28 'ordering': ['-priority', 'name'],
29 },
30 bases=(models.Model,),
31 ),
32 migrations.AlterUniqueTogether(
33 name='virtualfolder',
34 unique_together=set([('name', 'location')]),
35 ),
36 ]
37
```
Path: `pootle/core/markup/fields.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) Pootle contributors.
5 #
6 # This file is a part of the Pootle project. It is distributed under the GPL3
7 # or later license. See the LICENSE file for a copy of the license and the
8 # AUTHORS file for copyright and authorship information.
9
10 import logging
11
12 from django.conf import settings
13 from django.core.cache import cache
14 from django.db import models
15 from django.utils.safestring import mark_safe
16
17 from .filters import apply_markup_filter
18 from .widgets import MarkupTextarea
19
20
21 __all__ = ('Markup', 'MarkupField',)
22
23
24 logger = logging.getLogger('pootle.markup')
25
26
27 _rendered_cache_key = lambda obj, pk, field: '_%s_%s_%s_rendered' % \
28 (obj, pk, field)
29
30
31 class Markup(object):
32
33 def __init__(self, instance, field_name, rendered_cache_key):
34 self.instance = instance
35 self.field_name = field_name
36 self.cache_key = rendered_cache_key
37
38 @property
39 def raw(self):
40 return self.instance.__dict__[self.field_name]
41
42 @raw.setter
43 def raw(self, value):
44 setattr(self.instance, self.field_name, value)
45
46 @property
47 def rendered(self):
48 rendered = cache.get(self.cache_key)
49
50 if not rendered:
51 logger.debug(u'Caching rendered output of %r', self.cache_key)
52 rendered = apply_markup_filter(self.raw)
53 cache.set(self.cache_key, rendered,
54 settings.OBJECT_CACHE_TIMEOUT)
55
56 return rendered
57
58 def __unicode__(self):
59 return mark_safe(self.rendered)
60
61 def __nonzero__(self):
62 return self.raw.strip() != '' and self.raw is not None
63
64
65 class MarkupDescriptor(object):
66
67 def __init__(self, field):
68 self.field = field
69
70 def __get__(self, obj, owner):
71 if obj is None:
72 raise AttributeError('Can only be accessed via an instance.')
73
74 markup = obj.__dict__[self.field.name]
75 if markup is None:
76 return None
77
78 cache_key = _rendered_cache_key(obj.__class__.__name__,
79 obj.pk,
80 self.field.name)
81 return Markup(obj, self.field.name, cache_key)
82
83 def __set__(self, obj, value):
84 if isinstance(value, Markup):
85 obj.__dict__[self.field.name] = value.raw
86 else:
87 obj.__dict__[self.field.name] = value
88
89
90 class MarkupField(models.TextField):
91
92 description = 'Text field supporting different markup formats.'
93
94 def contribute_to_class(self, cls, name):
95 super(MarkupField, self).contribute_to_class(cls, name)
96 setattr(cls, self.name, MarkupDescriptor(self))
97
98 def pre_save(self, model_instance, add):
99 value = super(MarkupField, self).pre_save(model_instance, add)
100
101 if not add:
102 # Invalidate cache to force rendering upon next retrieval
103 cache_key = _rendered_cache_key(model_instance.__class__.__name__,
104 model_instance.pk,
105 self.name)
106 logger.debug('Invalidating cache for %r', cache_key)
107 cache.delete(cache_key)
108
109 return value.raw
110
111 def get_prep_value(self, value):
112 if isinstance(value, Markup):
113 return value.raw
114
115 return value
116
117 def value_to_string(self, obj):
118 value = self._get_val_from_obj(obj)
119 return self.get_prep_value(value)
120
121 def formfield(self, **kwargs):
122 defaults = {'widget': MarkupTextarea}
123 defaults.update(kwargs)
124 return super(MarkupField, self).formfield(**defaults)
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/virtualfolder/migrations/0001_initial.py b/pootle/apps/virtualfolder/migrations/0001_initial.py
--- a/pootle/apps/virtualfolder/migrations/0001_initial.py
+++ b/pootle/apps/virtualfolder/migrations/0001_initial.py
@@ -21,7 +21,7 @@
('filter_rules', models.TextField(help_text='Filtering rules that tell which stores this virtual folder comprises.', verbose_name='Filter')),
('priority', models.FloatField(default=1, help_text='Number specifying importance. Greater priority means it is more important.', verbose_name='Priority')),
('is_browsable', models.BooleanField(default=True, help_text='Whether this virtual folder is active or not.', verbose_name='Is browsable?')),
- ('description', pootle.core.markup.fields.MarkupField(help_text='Use this to provide more information or instructions. Allowed markup: HTML', verbose_name='Description', blank=True)),
+ ('description', pootle.core.markup.fields.MarkupField(verbose_name='Description', blank=True)),
('units', models.ManyToManyField(related_name='vfolders', to='pootle_store.Unit', db_index=True)),
],
options={
diff --git a/pootle/core/markup/fields.py b/pootle/core/markup/fields.py
--- a/pootle/core/markup/fields.py
+++ b/pootle/core/markup/fields.py
@@ -122,3 +122,8 @@
defaults = {'widget': MarkupTextarea}
defaults.update(kwargs)
return super(MarkupField, self).formfield(**defaults)
+
+ def deconstruct(self):
+ name, path, args, kwargs = super(MarkupField, self).deconstruct()
+ kwargs.pop('help_text', None)
+ return name, path, args, kwargs
| {"golden_diff": "diff --git a/pootle/apps/virtualfolder/migrations/0001_initial.py b/pootle/apps/virtualfolder/migrations/0001_initial.py\n--- a/pootle/apps/virtualfolder/migrations/0001_initial.py\n+++ b/pootle/apps/virtualfolder/migrations/0001_initial.py\n@@ -21,7 +21,7 @@\n ('filter_rules', models.TextField(help_text='Filtering rules that tell which stores this virtual folder comprises.', verbose_name='Filter')),\n ('priority', models.FloatField(default=1, help_text='Number specifying importance. Greater priority means it is more important.', verbose_name='Priority')),\n ('is_browsable', models.BooleanField(default=True, help_text='Whether this virtual folder is active or not.', verbose_name='Is browsable?')),\n- ('description', pootle.core.markup.fields.MarkupField(help_text='Use this to provide more information or instructions. Allowed markup: HTML', verbose_name='Description', blank=True)),\n+ ('description', pootle.core.markup.fields.MarkupField(verbose_name='Description', blank=True)),\n ('units', models.ManyToManyField(related_name='vfolders', to='pootle_store.Unit', db_index=True)),\n ],\n options={\ndiff --git a/pootle/core/markup/fields.py b/pootle/core/markup/fields.py\n--- a/pootle/core/markup/fields.py\n+++ b/pootle/core/markup/fields.py\n@@ -122,3 +122,8 @@\n defaults = {'widget': MarkupTextarea}\n defaults.update(kwargs)\n return super(MarkupField, self).formfield(**defaults)\n+\n+ def deconstruct(self):\n+ name, path, args, kwargs = super(MarkupField, self).deconstruct()\n+ kwargs.pop('help_text', None)\n+ return name, path, args, kwargs\n", "issue": "Running migrate twice gives an error about changed models\nIf you run `migrate` a second time directly after an initial migration you will get the following error.\n\n```\nRunning migrations:\n No migrations to apply.\n Your models have changes that are not yet reflected in a migration, and so won't be applied.\n Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.\n```\n\n`makemigrations` produces this file:\n\n``` py\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models, migrations\nimport pootle.core.markup.fields\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('virtualfolder', '0001_initial'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='virtualfolder',\n name='description',\n field=pootle.core.markup.fields.MarkupField(help_text='Use this to provide more information or instructions. 
Allowed markup: HTML', verbose_name='Description', blank=True),\n preserve_default=True,\n ),\n ]\n```\n\n@unho Why are virtualfolders doing this?\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models, migrations\nimport pootle.core.markup.fields\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('pootle_store', '0001_initial'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='VirtualFolder',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('name', models.CharField(max_length=70, verbose_name='Name')),\n ('location', models.CharField(help_text='Root path where this virtual folder is applied.', max_length=255, verbose_name='Location')),\n ('filter_rules', models.TextField(help_text='Filtering rules that tell which stores this virtual folder comprises.', verbose_name='Filter')),\n ('priority', models.FloatField(default=1, help_text='Number specifying importance. Greater priority means it is more important.', verbose_name='Priority')),\n ('is_browsable', models.BooleanField(default=True, help_text='Whether this virtual folder is active or not.', verbose_name='Is browsable?')),\n ('description', pootle.core.markup.fields.MarkupField(help_text='Use this to provide more information or instructions. Allowed markup: HTML', verbose_name='Description', blank=True)),\n ('units', models.ManyToManyField(related_name='vfolders', to='pootle_store.Unit', db_index=True)),\n ],\n options={\n 'ordering': ['-priority', 'name'],\n },\n bases=(models.Model,),\n ),\n migrations.AlterUniqueTogether(\n name='virtualfolder',\n unique_together=set([('name', 'location')]),\n ),\n ]\n", "path": "pootle/apps/virtualfolder/migrations/0001_initial.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport logging\n\nfrom django.conf import settings\nfrom django.core.cache import cache\nfrom django.db import models\nfrom django.utils.safestring import mark_safe\n\nfrom .filters import apply_markup_filter\nfrom .widgets import MarkupTextarea\n\n\n__all__ = ('Markup', 'MarkupField',)\n\n\nlogger = logging.getLogger('pootle.markup')\n\n\n_rendered_cache_key = lambda obj, pk, field: '_%s_%s_%s_rendered' % \\\n (obj, pk, field)\n\n\nclass Markup(object):\n\n def __init__(self, instance, field_name, rendered_cache_key):\n self.instance = instance\n self.field_name = field_name\n self.cache_key = rendered_cache_key\n\n @property\n def raw(self):\n return self.instance.__dict__[self.field_name]\n\n @raw.setter\n def raw(self, value):\n setattr(self.instance, self.field_name, value)\n\n @property\n def rendered(self):\n rendered = cache.get(self.cache_key)\n\n if not rendered:\n logger.debug(u'Caching rendered output of %r', self.cache_key)\n rendered = apply_markup_filter(self.raw)\n cache.set(self.cache_key, rendered,\n settings.OBJECT_CACHE_TIMEOUT)\n\n return rendered\n\n def __unicode__(self):\n return mark_safe(self.rendered)\n\n def __nonzero__(self):\n return self.raw.strip() != '' and self.raw is not None\n\n\nclass MarkupDescriptor(object):\n\n def __init__(self, field):\n self.field = field\n\n def __get__(self, obj, owner):\n if obj is None:\n raise AttributeError('Can only be accessed via an instance.')\n\n markup = obj.__dict__[self.field.name]\n if markup is None:\n return None\n\n cache_key = _rendered_cache_key(obj.__class__.__name__,\n obj.pk,\n self.field.name)\n return Markup(obj, self.field.name, cache_key)\n\n def __set__(self, obj, value):\n if isinstance(value, Markup):\n obj.__dict__[self.field.name] = value.raw\n else:\n obj.__dict__[self.field.name] = value\n\n\nclass MarkupField(models.TextField):\n\n description = 'Text field supporting different markup formats.'\n\n def contribute_to_class(self, cls, name):\n super(MarkupField, self).contribute_to_class(cls, name)\n setattr(cls, self.name, MarkupDescriptor(self))\n\n def pre_save(self, model_instance, add):\n value = super(MarkupField, self).pre_save(model_instance, add)\n\n if not add:\n # Invalidate cache to force rendering upon next retrieval\n cache_key = _rendered_cache_key(model_instance.__class__.__name__,\n model_instance.pk,\n self.name)\n logger.debug('Invalidating cache for %r', cache_key)\n cache.delete(cache_key)\n\n return value.raw\n\n def get_prep_value(self, value):\n if isinstance(value, Markup):\n return value.raw\n\n return value\n\n def value_to_string(self, obj):\n value = self._get_val_from_obj(obj)\n return self.get_prep_value(value)\n\n def formfield(self, **kwargs):\n defaults = {'widget': MarkupTextarea}\n defaults.update(kwargs)\n return super(MarkupField, self).formfield(**defaults)\n", "path": "pootle/core/markup/fields.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models, migrations\nimport pootle.core.markup.fields\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('pootle_store', '0001_initial'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='VirtualFolder',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('name', models.CharField(max_length=70, verbose_name='Name')),\n ('location', 
models.CharField(help_text='Root path where this virtual folder is applied.', max_length=255, verbose_name='Location')),\n ('filter_rules', models.TextField(help_text='Filtering rules that tell which stores this virtual folder comprises.', verbose_name='Filter')),\n ('priority', models.FloatField(default=1, help_text='Number specifying importance. Greater priority means it is more important.', verbose_name='Priority')),\n ('is_browsable', models.BooleanField(default=True, help_text='Whether this virtual folder is active or not.', verbose_name='Is browsable?')),\n ('description', pootle.core.markup.fields.MarkupField(verbose_name='Description', blank=True)),\n ('units', models.ManyToManyField(related_name='vfolders', to='pootle_store.Unit', db_index=True)),\n ],\n options={\n 'ordering': ['-priority', 'name'],\n },\n bases=(models.Model,),\n ),\n migrations.AlterUniqueTogether(\n name='virtualfolder',\n unique_together=set([('name', 'location')]),\n ),\n ]\n", "path": "pootle/apps/virtualfolder/migrations/0001_initial.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport logging\n\nfrom django.conf import settings\nfrom django.core.cache import cache\nfrom django.db import models\nfrom django.utils.safestring import mark_safe\n\nfrom .filters import apply_markup_filter\nfrom .widgets import MarkupTextarea\n\n\n__all__ = ('Markup', 'MarkupField',)\n\n\nlogger = logging.getLogger('pootle.markup')\n\n\n_rendered_cache_key = lambda obj, pk, field: '_%s_%s_%s_rendered' % \\\n (obj, pk, field)\n\n\nclass Markup(object):\n\n def __init__(self, instance, field_name, rendered_cache_key):\n self.instance = instance\n self.field_name = field_name\n self.cache_key = rendered_cache_key\n\n @property\n def raw(self):\n return self.instance.__dict__[self.field_name]\n\n @raw.setter\n def raw(self, value):\n setattr(self.instance, self.field_name, value)\n\n @property\n def rendered(self):\n rendered = cache.get(self.cache_key)\n\n if not rendered:\n logger.debug(u'Caching rendered output of %r', self.cache_key)\n rendered = apply_markup_filter(self.raw)\n cache.set(self.cache_key, rendered,\n settings.OBJECT_CACHE_TIMEOUT)\n\n return rendered\n\n def __unicode__(self):\n return mark_safe(self.rendered)\n\n def __nonzero__(self):\n return self.raw.strip() != '' and self.raw is not None\n\n\nclass MarkupDescriptor(object):\n\n def __init__(self, field):\n self.field = field\n\n def __get__(self, obj, owner):\n if obj is None:\n raise AttributeError('Can only be accessed via an instance.')\n\n markup = obj.__dict__[self.field.name]\n if markup is None:\n return None\n\n cache_key = _rendered_cache_key(obj.__class__.__name__,\n obj.pk,\n self.field.name)\n return Markup(obj, self.field.name, cache_key)\n\n def __set__(self, obj, value):\n if isinstance(value, Markup):\n obj.__dict__[self.field.name] = value.raw\n else:\n obj.__dict__[self.field.name] = value\n\n\nclass MarkupField(models.TextField):\n\n description = 'Text field supporting different markup formats.'\n\n def contribute_to_class(self, cls, name):\n super(MarkupField, self).contribute_to_class(cls, name)\n setattr(cls, self.name, MarkupDescriptor(self))\n\n def pre_save(self, model_instance, add):\n value = super(MarkupField, self).pre_save(model_instance, add)\n\n if not 
add:\n # Invalidate cache to force rendering upon next retrieval\n cache_key = _rendered_cache_key(model_instance.__class__.__name__,\n model_instance.pk,\n self.name)\n logger.debug('Invalidating cache for %r', cache_key)\n cache.delete(cache_key)\n\n return value.raw\n\n def get_prep_value(self, value):\n if isinstance(value, Markup):\n return value.raw\n\n return value\n\n def value_to_string(self, obj):\n value = self._get_val_from_obj(obj)\n return self.get_prep_value(value)\n\n def formfield(self, **kwargs):\n defaults = {'widget': MarkupTextarea}\n defaults.update(kwargs)\n return super(MarkupField, self).formfield(**defaults)\n\n def deconstruct(self):\n name, path, args, kwargs = super(MarkupField, self).deconstruct()\n kwargs.pop('help_text', None)\n return name, path, args, kwargs\n", "path": "pootle/core/markup/fields.py"}]} | 2,022 | 410 |
gh_patches_debug_4294 | rasdani/github-patches | git_diff | open-mmlab__mmpretrain-286 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Feature Request] CPU Testing
Since CPU training is already supported in PR #219, what about also adding support for CPU testing?
Besides, it seems there are still some problems with the CPU training feature @wangruohui:
When we set `--device CPU`, the expected behavior is to use the CPU for training, regardless of whether GPUs exist on the machine. However, mmcls will use the GPU for training if one exists, even if we set `--device CPU`.
--- END ISSUE ---
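
For context, a `--device cpu` flag is normally expected to pin execution to the CPU even on a machine that has GPUs. The snippet below is a minimal, hypothetical sketch of that behaviour in plain PyTorch — `resolve_device` is an illustrative helper and not part of mmcls — assuming the flag value has already been normalised to lower case.

```python
# Hypothetical helper (not mmcls code): an explicit CPU request must win
# even when CUDA is available.
import torch


def resolve_device(requested: str) -> torch.device:
    if requested == "cpu":
        return torch.device("cpu")
    if requested == "cuda" and torch.cuda.is_available():
        return torch.device("cuda")
    raise ValueError(f"unsupported or unavailable device: {requested}")


# Usage sketch: the model stays on the CPU even if a GPU is present.
model = torch.nn.Linear(4, 2).to(resolve_device("cpu"))
```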
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmcls/apis/train.py`
Content:
```
1 import random
2 import warnings
3
4 import numpy as np
5 import torch
6 from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
7 from mmcv.runner import DistSamplerSeedHook, build_optimizer, build_runner
8
9 from mmcls.core import DistOptimizerHook
10 from mmcls.datasets import build_dataloader, build_dataset
11 from mmcls.utils import get_root_logger
12
13 # TODO import eval hooks from mmcv and delete them from mmcls
14 try:
15 from mmcv.runner.hooks import EvalHook, DistEvalHook
16 except ImportError:
17 warnings.warn('DeprecationWarning: EvalHook and DistEvalHook from mmcls '
18 'will be deprecated.'
19 'Please install mmcv through master branch.')
20 from mmcls.core import EvalHook, DistEvalHook
21
22 # TODO import optimizer hook from mmcv and delete them from mmcls
23 try:
24 from mmcv.runner import Fp16OptimizerHook
25 except ImportError:
26 warnings.warn('DeprecationWarning: FP16OptimizerHook from mmcls will be '
27 'deprecated. Please install mmcv>=1.1.4.')
28 from mmcls.core import Fp16OptimizerHook
29
30
31 def set_random_seed(seed, deterministic=False):
32 """Set random seed.
33
34 Args:
35 seed (int): Seed to be used.
36 deterministic (bool): Whether to set the deterministic option for
37 CUDNN backend, i.e., set `torch.backends.cudnn.deterministic`
38 to True and `torch.backends.cudnn.benchmark` to False.
39 Default: False.
40 """
41 random.seed(seed)
42 np.random.seed(seed)
43 torch.manual_seed(seed)
44 torch.cuda.manual_seed_all(seed)
45 if deterministic:
46 torch.backends.cudnn.deterministic = True
47 torch.backends.cudnn.benchmark = False
48
49
50 def train_model(model,
51 dataset,
52 cfg,
53 distributed=False,
54 validate=False,
55 timestamp=None,
56 device='cuda',
57 meta=None):
58 logger = get_root_logger(cfg.log_level)
59
60 # prepare data loaders
61 dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]
62
63 data_loaders = [
64 build_dataloader(
65 ds,
66 cfg.data.samples_per_gpu,
67 cfg.data.workers_per_gpu,
68 # cfg.gpus will be ignored if distributed
69 num_gpus=len(cfg.gpu_ids),
70 dist=distributed,
71 round_up=True,
72 seed=cfg.seed) for ds in dataset
73 ]
74
75 # put model on gpus
76 if distributed:
77 find_unused_parameters = cfg.get('find_unused_parameters', False)
78 # Sets the `find_unused_parameters` parameter in
79 # torch.nn.parallel.DistributedDataParallel
80 model = MMDistributedDataParallel(
81 model.cuda(),
82 device_ids=[torch.cuda.current_device()],
83 broadcast_buffers=False,
84 find_unused_parameters=find_unused_parameters)
85 else:
86 if device == 'cuda':
87 model = MMDataParallel(
88 model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)
89 elif device == 'cpu':
90 model = MMDataParallel(model.cpu())
91 else:
92 raise ValueError(F'unsupported device name {device}.')
93
94 # build runner
95 optimizer = build_optimizer(model, cfg.optimizer)
96
97 if cfg.get('runner') is None:
98 cfg.runner = {
99 'type': 'EpochBasedRunner',
100 'max_epochs': cfg.total_epochs
101 }
102 warnings.warn(
103 'config is now expected to have a `runner` section, '
104 'please set `runner` in your config.', UserWarning)
105
106 runner = build_runner(
107 cfg.runner,
108 default_args=dict(
109 model=model,
110 batch_processor=None,
111 optimizer=optimizer,
112 work_dir=cfg.work_dir,
113 logger=logger,
114 meta=meta))
115
116 # an ugly walkaround to make the .log and .log.json filenames the same
117 runner.timestamp = timestamp
118
119 # fp16 setting
120 fp16_cfg = cfg.get('fp16', None)
121 if fp16_cfg is not None:
122 optimizer_config = Fp16OptimizerHook(
123 **cfg.optimizer_config, **fp16_cfg, distributed=distributed)
124 elif distributed and 'type' not in cfg.optimizer_config:
125 optimizer_config = DistOptimizerHook(**cfg.optimizer_config)
126 else:
127 optimizer_config = cfg.optimizer_config
128
129 # register hooks
130 runner.register_training_hooks(cfg.lr_config, optimizer_config,
131 cfg.checkpoint_config, cfg.log_config,
132 cfg.get('momentum_config', None))
133 if distributed:
134 runner.register_hook(DistSamplerSeedHook())
135
136 # register eval hooks
137 if validate:
138 val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))
139 val_dataloader = build_dataloader(
140 val_dataset,
141 samples_per_gpu=cfg.data.samples_per_gpu,
142 workers_per_gpu=cfg.data.workers_per_gpu,
143 dist=distributed,
144 shuffle=False,
145 round_up=True)
146 eval_cfg = cfg.get('evaluation', {})
147 eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner'
148 eval_hook = DistEvalHook if distributed else EvalHook
149 runner.register_hook(eval_hook(val_dataloader, **eval_cfg))
150
151 if cfg.resume_from:
152 runner.resume(cfg.resume_from)
153 elif cfg.load_from:
154 runner.load_checkpoint(cfg.load_from)
155 runner.run(data_loaders, cfg.workflow)
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mmcls/apis/train.py b/mmcls/apis/train.py
--- a/mmcls/apis/train.py
+++ b/mmcls/apis/train.py
@@ -87,7 +87,7 @@
model = MMDataParallel(
model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)
elif device == 'cpu':
- model = MMDataParallel(model.cpu())
+ model = model.cpu()
else:
raise ValueError(F'unsupported device name {device}.')
| {"golden_diff": "diff --git a/mmcls/apis/train.py b/mmcls/apis/train.py\n--- a/mmcls/apis/train.py\n+++ b/mmcls/apis/train.py\n@@ -87,7 +87,7 @@\n model = MMDataParallel(\n model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)\n elif device == 'cpu':\n- model = MMDataParallel(model.cpu())\n+ model = model.cpu()\n else:\n raise ValueError(F'unsupported device name {device}.')\n", "issue": "[Feature Request] CPU Testing\nSince CPU training is already supported in PR #219, what about also adding the feature of CPU testing. \r\n\r\nBesides, it seems there are still some problems with the CPU training feature @wangruohui : \r\nWhen we set `--device CPU`, the expected behavior is using CPU for training, no matter if there exist GPUs on this machine. However, mmcls will use GPU for training if it exists, even if we set `--device CPU`. \n", "before_files": [{"content": "import random\nimport warnings\n\nimport numpy as np\nimport torch\nfrom mmcv.parallel import MMDataParallel, MMDistributedDataParallel\nfrom mmcv.runner import DistSamplerSeedHook, build_optimizer, build_runner\n\nfrom mmcls.core import DistOptimizerHook\nfrom mmcls.datasets import build_dataloader, build_dataset\nfrom mmcls.utils import get_root_logger\n\n# TODO import eval hooks from mmcv and delete them from mmcls\ntry:\n from mmcv.runner.hooks import EvalHook, DistEvalHook\nexcept ImportError:\n warnings.warn('DeprecationWarning: EvalHook and DistEvalHook from mmcls '\n 'will be deprecated.'\n 'Please install mmcv through master branch.')\n from mmcls.core import EvalHook, DistEvalHook\n\n# TODO import optimizer hook from mmcv and delete them from mmcls\ntry:\n from mmcv.runner import Fp16OptimizerHook\nexcept ImportError:\n warnings.warn('DeprecationWarning: FP16OptimizerHook from mmcls will be '\n 'deprecated. 
Please install mmcv>=1.1.4.')\n from mmcls.core import Fp16OptimizerHook\n\n\ndef set_random_seed(seed, deterministic=False):\n \"\"\"Set random seed.\n\n Args:\n seed (int): Seed to be used.\n deterministic (bool): Whether to set the deterministic option for\n CUDNN backend, i.e., set `torch.backends.cudnn.deterministic`\n to True and `torch.backends.cudnn.benchmark` to False.\n Default: False.\n \"\"\"\n random.seed(seed)\n np.random.seed(seed)\n torch.manual_seed(seed)\n torch.cuda.manual_seed_all(seed)\n if deterministic:\n torch.backends.cudnn.deterministic = True\n torch.backends.cudnn.benchmark = False\n\n\ndef train_model(model,\n dataset,\n cfg,\n distributed=False,\n validate=False,\n timestamp=None,\n device='cuda',\n meta=None):\n logger = get_root_logger(cfg.log_level)\n\n # prepare data loaders\n dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]\n\n data_loaders = [\n build_dataloader(\n ds,\n cfg.data.samples_per_gpu,\n cfg.data.workers_per_gpu,\n # cfg.gpus will be ignored if distributed\n num_gpus=len(cfg.gpu_ids),\n dist=distributed,\n round_up=True,\n seed=cfg.seed) for ds in dataset\n ]\n\n # put model on gpus\n if distributed:\n find_unused_parameters = cfg.get('find_unused_parameters', False)\n # Sets the `find_unused_parameters` parameter in\n # torch.nn.parallel.DistributedDataParallel\n model = MMDistributedDataParallel(\n model.cuda(),\n device_ids=[torch.cuda.current_device()],\n broadcast_buffers=False,\n find_unused_parameters=find_unused_parameters)\n else:\n if device == 'cuda':\n model = MMDataParallel(\n model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)\n elif device == 'cpu':\n model = MMDataParallel(model.cpu())\n else:\n raise ValueError(F'unsupported device name {device}.')\n\n # build runner\n optimizer = build_optimizer(model, cfg.optimizer)\n\n if cfg.get('runner') is None:\n cfg.runner = {\n 'type': 'EpochBasedRunner',\n 'max_epochs': cfg.total_epochs\n }\n warnings.warn(\n 'config is now expected to have a `runner` section, '\n 'please set `runner` in your config.', UserWarning)\n\n runner = build_runner(\n cfg.runner,\n default_args=dict(\n model=model,\n batch_processor=None,\n optimizer=optimizer,\n work_dir=cfg.work_dir,\n logger=logger,\n meta=meta))\n\n # an ugly walkaround to make the .log and .log.json filenames the same\n runner.timestamp = timestamp\n\n # fp16 setting\n fp16_cfg = cfg.get('fp16', None)\n if fp16_cfg is not None:\n optimizer_config = Fp16OptimizerHook(\n **cfg.optimizer_config, **fp16_cfg, distributed=distributed)\n elif distributed and 'type' not in cfg.optimizer_config:\n optimizer_config = DistOptimizerHook(**cfg.optimizer_config)\n else:\n optimizer_config = cfg.optimizer_config\n\n # register hooks\n runner.register_training_hooks(cfg.lr_config, optimizer_config,\n cfg.checkpoint_config, cfg.log_config,\n cfg.get('momentum_config', None))\n if distributed:\n runner.register_hook(DistSamplerSeedHook())\n\n # register eval hooks\n if validate:\n val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))\n val_dataloader = build_dataloader(\n val_dataset,\n samples_per_gpu=cfg.data.samples_per_gpu,\n workers_per_gpu=cfg.data.workers_per_gpu,\n dist=distributed,\n shuffle=False,\n round_up=True)\n eval_cfg = cfg.get('evaluation', {})\n eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner'\n eval_hook = DistEvalHook if distributed else EvalHook\n runner.register_hook(eval_hook(val_dataloader, **eval_cfg))\n\n if cfg.resume_from:\n runner.resume(cfg.resume_from)\n elif cfg.load_from:\n 
runner.load_checkpoint(cfg.load_from)\n runner.run(data_loaders, cfg.workflow)\n", "path": "mmcls/apis/train.py"}], "after_files": [{"content": "import random\nimport warnings\n\nimport numpy as np\nimport torch\nfrom mmcv.parallel import MMDataParallel, MMDistributedDataParallel\nfrom mmcv.runner import DistSamplerSeedHook, build_optimizer, build_runner\n\nfrom mmcls.core import DistOptimizerHook\nfrom mmcls.datasets import build_dataloader, build_dataset\nfrom mmcls.utils import get_root_logger\n\n# TODO import eval hooks from mmcv and delete them from mmcls\ntry:\n from mmcv.runner.hooks import EvalHook, DistEvalHook\nexcept ImportError:\n warnings.warn('DeprecationWarning: EvalHook and DistEvalHook from mmcls '\n 'will be deprecated.'\n 'Please install mmcv through master branch.')\n from mmcls.core import EvalHook, DistEvalHook\n\n# TODO import optimizer hook from mmcv and delete them from mmcls\ntry:\n from mmcv.runner import Fp16OptimizerHook\nexcept ImportError:\n warnings.warn('DeprecationWarning: FP16OptimizerHook from mmcls will be '\n 'deprecated. Please install mmcv>=1.1.4.')\n from mmcls.core import Fp16OptimizerHook\n\n\ndef set_random_seed(seed, deterministic=False):\n \"\"\"Set random seed.\n\n Args:\n seed (int): Seed to be used.\n deterministic (bool): Whether to set the deterministic option for\n CUDNN backend, i.e., set `torch.backends.cudnn.deterministic`\n to True and `torch.backends.cudnn.benchmark` to False.\n Default: False.\n \"\"\"\n random.seed(seed)\n np.random.seed(seed)\n torch.manual_seed(seed)\n torch.cuda.manual_seed_all(seed)\n if deterministic:\n torch.backends.cudnn.deterministic = True\n torch.backends.cudnn.benchmark = False\n\n\ndef train_model(model,\n dataset,\n cfg,\n distributed=False,\n validate=False,\n timestamp=None,\n device='cuda',\n meta=None):\n logger = get_root_logger(cfg.log_level)\n\n # prepare data loaders\n dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]\n\n data_loaders = [\n build_dataloader(\n ds,\n cfg.data.samples_per_gpu,\n cfg.data.workers_per_gpu,\n # cfg.gpus will be ignored if distributed\n num_gpus=len(cfg.gpu_ids),\n dist=distributed,\n round_up=True,\n seed=cfg.seed) for ds in dataset\n ]\n\n # put model on gpus\n if distributed:\n find_unused_parameters = cfg.get('find_unused_parameters', False)\n # Sets the `find_unused_parameters` parameter in\n # torch.nn.parallel.DistributedDataParallel\n model = MMDistributedDataParallel(\n model.cuda(),\n device_ids=[torch.cuda.current_device()],\n broadcast_buffers=False,\n find_unused_parameters=find_unused_parameters)\n else:\n if device == 'cuda':\n model = MMDataParallel(\n model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)\n elif device == 'cpu':\n model = model.cpu()\n else:\n raise ValueError(F'unsupported device name {device}.')\n\n # build runner\n optimizer = build_optimizer(model, cfg.optimizer)\n\n if cfg.get('runner') is None:\n cfg.runner = {\n 'type': 'EpochBasedRunner',\n 'max_epochs': cfg.total_epochs\n }\n warnings.warn(\n 'config is now expected to have a `runner` section, '\n 'please set `runner` in your config.', UserWarning)\n\n runner = build_runner(\n cfg.runner,\n default_args=dict(\n model=model,\n batch_processor=None,\n optimizer=optimizer,\n work_dir=cfg.work_dir,\n logger=logger,\n meta=meta))\n\n # an ugly walkaround to make the .log and .log.json filenames the same\n runner.timestamp = timestamp\n\n # fp16 setting\n fp16_cfg = cfg.get('fp16', None)\n if fp16_cfg is not None:\n optimizer_config = Fp16OptimizerHook(\n 
**cfg.optimizer_config, **fp16_cfg, distributed=distributed)\n elif distributed and 'type' not in cfg.optimizer_config:\n optimizer_config = DistOptimizerHook(**cfg.optimizer_config)\n else:\n optimizer_config = cfg.optimizer_config\n\n # register hooks\n runner.register_training_hooks(cfg.lr_config, optimizer_config,\n cfg.checkpoint_config, cfg.log_config,\n cfg.get('momentum_config', None))\n if distributed:\n runner.register_hook(DistSamplerSeedHook())\n\n # register eval hooks\n if validate:\n val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))\n val_dataloader = build_dataloader(\n val_dataset,\n samples_per_gpu=cfg.data.samples_per_gpu,\n workers_per_gpu=cfg.data.workers_per_gpu,\n dist=distributed,\n shuffle=False,\n round_up=True)\n eval_cfg = cfg.get('evaluation', {})\n eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner'\n eval_hook = DistEvalHook if distributed else EvalHook\n runner.register_hook(eval_hook(val_dataloader, **eval_cfg))\n\n if cfg.resume_from:\n runner.resume(cfg.resume_from)\n elif cfg.load_from:\n runner.load_checkpoint(cfg.load_from)\n runner.run(data_loaders, cfg.workflow)\n", "path": "mmcls/apis/train.py"}]} | 1,869 | 106 |
gh_patches_debug_5537 | rasdani/github-patches | git_diff | nextcloud__appstore-619 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Verify email addresses after E-Mail change
When a user changes their email address, it should be verified. allauth provides some views for that, which may or may not be useful. It is unclear whether email addresses are currently verified at signup, but it would be appropriate to use the same mechanism here.
--- END ISSUE ---
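
For background, allauth models addresses with `EmailAddress`, whose `change()` method can swap the primary address and send a fresh confirmation mail. The sketch below only illustrates that idea, assuming the `change(request, new_email, confirm=True)` signature; it is not the app store's implementation.

```python
# Illustrative sketch built on an assumed allauth API, not nextcloudappstore code.
from allauth.account.models import EmailAddress


def update_primary_email(request, user, new_email: str) -> None:
    email = EmailAddress.objects.get_primary(user=user)
    # change() updates the stored address and, with confirm=True, sends a
    # confirmation mail so the new address gets verified.
    email.change(request, new_email, confirm=True)
```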
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nextcloudappstore/user/views.py`
Content:
```
1 from allauth.account.models import EmailAddress
2 from allauth.account.views import PasswordChangeView
3 from django.contrib import messages
4 from django.contrib.auth.mixins import LoginRequiredMixin
5 from django.urls import reverse_lazy
6 from django.shortcuts import redirect, render, get_object_or_404
7 from django.urls import reverse
8 from django.views.generic import TemplateView
9 from django.views.generic import UpdateView
10
11 from nextcloudappstore.core.models import App
12 from nextcloudappstore.user.forms import DeleteAccountForm, AccountForm
13
14
15 class TransferAppsView(LoginRequiredMixin, TemplateView):
16 template_name = 'user/transfer-apps.html'
17
18 def post(self, request, pk):
19 app = get_object_or_404(App, pk=pk, owner=self.request.user)
20 app.ownership_transfer_enabled = not app.ownership_transfer_enabled
21 app.save()
22 return redirect(reverse('user:account-transfer-apps'))
23
24 def get_context_data(self, **kwargs):
25 context = super().get_context_data(**kwargs)
26 context['apps'] = App.objects.filter(owner=self.request.user)
27 context['acc_page'] = 'account-transfer-apps'
28 return context
29
30
31 class ChangeLanguageView(LoginRequiredMixin, TemplateView):
32 template_name = 'user/set-language.html'
33
34 def get_context_data(self, **kwargs):
35 context = super().get_context_data(**kwargs)
36 context['acc_page'] = 'account-change-language'
37 return context
38
39
40 class DeleteAccountView(LoginRequiredMixin, TemplateView):
41 template_name = 'user/delete-account.html'
42
43 def get_context_data(self, **kwargs):
44 context = super().get_context_data(**kwargs)
45 context['form'] = DeleteAccountForm()
46 context['acc_page'] = 'delete-account'
47 return context
48
49 def post(self, request, *args, **kwargs):
50 form = DeleteAccountForm(request.POST, user=request.user)
51 if form.is_valid():
52 request.user.delete()
53 return redirect(reverse_lazy('home'))
54 else:
55 return render(request, self.template_name, {'form': form})
56
57
58 class AccountView(LoginRequiredMixin, UpdateView):
59 """Display and allow changing of the user's name."""
60
61 template_name = 'user/account.html'
62 template_name_suffix = ''
63 form_class = AccountForm
64 success_url = reverse_lazy('user:account')
65
66 def get_context_data(self, **kwargs):
67 context = super().get_context_data(**kwargs)
68 context['acc_page'] = 'account'
69 return context
70
71 def form_valid(self, form):
72 email = EmailAddress.objects.get_primary(user=self.request.user)
73 email.email = form.cleaned_data['email']
74 email.save()
75 messages.success(self.request, 'Account details saved.')
76 return super().form_valid(form)
77
78 def get_object(self, queryset=None):
79 return self.request.user
80
81
82 class PasswordView(LoginRequiredMixin, PasswordChangeView):
83 """Allow the user to change their password."""
84
85 template_name = 'user/password.html'
86 success_url = reverse_lazy('user:account-password')
87
88 def get_context_data(self, **kwargs):
89 context = super().get_context_data(**kwargs)
90 context['acc_page'] = 'password'
91 return context
92
93
94 class APITokenView(LoginRequiredMixin, TemplateView):
95 """Display the user's API token, and allow it to be regenerated."""
96
97 template_name = 'user/api-token.html'
98
99 def get_context_data(self, **kwargs):
100 context = super().get_context_data(**kwargs)
101 context['acc_page'] = 'api-token'
102 return context
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nextcloudappstore/user/views.py b/nextcloudappstore/user/views.py
--- a/nextcloudappstore/user/views.py
+++ b/nextcloudappstore/user/views.py
@@ -70,8 +70,7 @@
def form_valid(self, form):
email = EmailAddress.objects.get_primary(user=self.request.user)
- email.email = form.cleaned_data['email']
- email.save()
+ email.change(None, form.cleaned_data['email'])
messages.success(self.request, 'Account details saved.')
return super().form_valid(form)
| {"golden_diff": "diff --git a/nextcloudappstore/user/views.py b/nextcloudappstore/user/views.py\n--- a/nextcloudappstore/user/views.py\n+++ b/nextcloudappstore/user/views.py\n@@ -70,8 +70,7 @@\n \n def form_valid(self, form):\n email = EmailAddress.objects.get_primary(user=self.request.user)\n- email.email = form.cleaned_data['email']\n- email.save()\n+ email.change(None, form.cleaned_data['email'])\n messages.success(self.request, 'Account details saved.')\n return super().form_valid(form)\n", "issue": "Verify email addresses after E-Mail change\nWhen a user changes their email address, it should be verified. allauth provides some views for that which may or may not be useful. Unsure whether email addresses currently are verified at signup, but it would be appropriate for it to use the same mechanism.\n\n", "before_files": [{"content": "from allauth.account.models import EmailAddress\nfrom allauth.account.views import PasswordChangeView\nfrom django.contrib import messages\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.urls import reverse_lazy\nfrom django.shortcuts import redirect, render, get_object_or_404\nfrom django.urls import reverse\nfrom django.views.generic import TemplateView\nfrom django.views.generic import UpdateView\n\nfrom nextcloudappstore.core.models import App\nfrom nextcloudappstore.user.forms import DeleteAccountForm, AccountForm\n\n\nclass TransferAppsView(LoginRequiredMixin, TemplateView):\n template_name = 'user/transfer-apps.html'\n\n def post(self, request, pk):\n app = get_object_or_404(App, pk=pk, owner=self.request.user)\n app.ownership_transfer_enabled = not app.ownership_transfer_enabled\n app.save()\n return redirect(reverse('user:account-transfer-apps'))\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['apps'] = App.objects.filter(owner=self.request.user)\n context['acc_page'] = 'account-transfer-apps'\n return context\n\n\nclass ChangeLanguageView(LoginRequiredMixin, TemplateView):\n template_name = 'user/set-language.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['acc_page'] = 'account-change-language'\n return context\n\n\nclass DeleteAccountView(LoginRequiredMixin, TemplateView):\n template_name = 'user/delete-account.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['form'] = DeleteAccountForm()\n context['acc_page'] = 'delete-account'\n return context\n\n def post(self, request, *args, **kwargs):\n form = DeleteAccountForm(request.POST, user=request.user)\n if form.is_valid():\n request.user.delete()\n return redirect(reverse_lazy('home'))\n else:\n return render(request, self.template_name, {'form': form})\n\n\nclass AccountView(LoginRequiredMixin, UpdateView):\n \"\"\"Display and allow changing of the user's name.\"\"\"\n\n template_name = 'user/account.html'\n template_name_suffix = ''\n form_class = AccountForm\n success_url = reverse_lazy('user:account')\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['acc_page'] = 'account'\n return context\n\n def form_valid(self, form):\n email = EmailAddress.objects.get_primary(user=self.request.user)\n email.email = form.cleaned_data['email']\n email.save()\n messages.success(self.request, 'Account details saved.')\n return super().form_valid(form)\n\n def get_object(self, queryset=None):\n return self.request.user\n\n\nclass PasswordView(LoginRequiredMixin, 
PasswordChangeView):\n \"\"\"Allow the user to change their password.\"\"\"\n\n template_name = 'user/password.html'\n success_url = reverse_lazy('user:account-password')\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['acc_page'] = 'password'\n return context\n\n\nclass APITokenView(LoginRequiredMixin, TemplateView):\n \"\"\"Display the user's API token, and allow it to be regenerated.\"\"\"\n\n template_name = 'user/api-token.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['acc_page'] = 'api-token'\n return context\n", "path": "nextcloudappstore/user/views.py"}], "after_files": [{"content": "from allauth.account.models import EmailAddress\nfrom allauth.account.views import PasswordChangeView\nfrom django.contrib import messages\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.urls import reverse_lazy\nfrom django.shortcuts import redirect, render, get_object_or_404\nfrom django.urls import reverse\nfrom django.views.generic import TemplateView\nfrom django.views.generic import UpdateView\n\nfrom nextcloudappstore.core.models import App\nfrom nextcloudappstore.user.forms import DeleteAccountForm, AccountForm\n\n\nclass TransferAppsView(LoginRequiredMixin, TemplateView):\n template_name = 'user/transfer-apps.html'\n\n def post(self, request, pk):\n app = get_object_or_404(App, pk=pk, owner=self.request.user)\n app.ownership_transfer_enabled = not app.ownership_transfer_enabled\n app.save()\n return redirect(reverse('user:account-transfer-apps'))\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['apps'] = App.objects.filter(owner=self.request.user)\n context['acc_page'] = 'account-transfer-apps'\n return context\n\n\nclass ChangeLanguageView(LoginRequiredMixin, TemplateView):\n template_name = 'user/set-language.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['acc_page'] = 'account-change-language'\n return context\n\n\nclass DeleteAccountView(LoginRequiredMixin, TemplateView):\n template_name = 'user/delete-account.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['form'] = DeleteAccountForm()\n context['acc_page'] = 'delete-account'\n return context\n\n def post(self, request, *args, **kwargs):\n form = DeleteAccountForm(request.POST, user=request.user)\n if form.is_valid():\n request.user.delete()\n return redirect(reverse_lazy('home'))\n else:\n return render(request, self.template_name, {'form': form})\n\n\nclass AccountView(LoginRequiredMixin, UpdateView):\n \"\"\"Display and allow changing of the user's name.\"\"\"\n\n template_name = 'user/account.html'\n template_name_suffix = ''\n form_class = AccountForm\n success_url = reverse_lazy('user:account')\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['acc_page'] = 'account'\n return context\n\n def form_valid(self, form):\n email = EmailAddress.objects.get_primary(user=self.request.user)\n email.change(None, form.cleaned_data['email'])\n messages.success(self.request, 'Account details saved.')\n return super().form_valid(form)\n\n def get_object(self, queryset=None):\n return self.request.user\n\n\nclass PasswordView(LoginRequiredMixin, PasswordChangeView):\n \"\"\"Allow the user to change their password.\"\"\"\n\n template_name = 'user/password.html'\n success_url = 
reverse_lazy('user:account-password')\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['acc_page'] = 'password'\n return context\n\n\nclass APITokenView(LoginRequiredMixin, TemplateView):\n \"\"\"Display the user's API token, and allow it to be regenerated.\"\"\"\n\n template_name = 'user/api-token.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['acc_page'] = 'api-token'\n return context\n", "path": "nextcloudappstore/user/views.py"}]} | 1,277 | 125 |
gh_patches_debug_31147 | rasdani/github-patches | git_diff | onnx__onnx-5757 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
check_function requires contexts as arguments which breaks backward compatibility
https://github.com/onnx/onnx/pull/5693 added required parameters to the `check_function` function in the checker, which breaks backward compatibility. Should we provide default contexts to `check_function` as well?
--- END ISSUE ---
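
For reference, the backward-compatible shape of the API would give the new context parameters default values so that existing one-argument calls keep working. The fragment below is only a sketch of such a signature, reusing the `DEFAULT_CONTEXT` and `LEXICAL_SCOPE_CONTEXT` constants that the checker module already defines; it is not the committed fix.

```python
# Sketch only: defaults on the context arguments preserve the old call style.
import onnx.onnx_cpp2py_export.checker as C
from onnx import FunctionProto
from onnx.checker import DEFAULT_CONTEXT, LEXICAL_SCOPE_CONTEXT


def check_function_compat(
    function: FunctionProto,
    ctx: C.CheckerContext = DEFAULT_CONTEXT,
    lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
) -> None:
    C.check_function(function.SerializeToString(), ctx, lexical_scope_ctx)
```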
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `onnx/checker.py`
Content:
```
1 # Copyright (c) ONNX Project Contributors
2 #
3 # SPDX-License-Identifier: Apache-2.0
4 """Graph utilities for checking whether an ONNX proto message is legal."""
5
6 from __future__ import annotations
7
8 __all__ = [
9 "check_attribute",
10 "check_function",
11 "check_graph",
12 "check_model",
13 "check_node",
14 "check_sparse_tensor",
15 "check_tensor",
16 "check_value_info",
17 "DEFAULT_CONTEXT",
18 "LEXICAL_SCOPE_CONTEXT",
19 "ValidationError",
20 "C",
21 "MAXIMUM_PROTOBUF",
22 ]
23
24 import os
25 import sys
26 from typing import Any, Callable, TypeVar
27
28 from google.protobuf.message import Message
29
30 import onnx.defs
31 import onnx.onnx_cpp2py_export.checker as C # noqa: N812
32 import onnx.shape_inference
33 from onnx import (
34 IR_VERSION,
35 AttributeProto,
36 FunctionProto,
37 GraphProto,
38 ModelProto,
39 NodeProto,
40 SparseTensorProto,
41 TensorProto,
42 ValueInfoProto,
43 )
44
45 # Limitation of single protobuf file is 2GB
46 MAXIMUM_PROTOBUF = 2000000000
47
48 # TODO: This thing where we reserialize the protobuf back into the
49 # string, only to deserialize it at the call site, is really goofy.
50 # Stop doing that.
51
52
53 # NB: Please don't edit this context!
54 DEFAULT_CONTEXT = C.CheckerContext()
55 DEFAULT_CONTEXT.ir_version = IR_VERSION
56 # TODO: Maybe ONNX-ML should also be defaulted?
57 DEFAULT_CONTEXT.opset_imports = {"": onnx.defs.onnx_opset_version()}
58
59 LEXICAL_SCOPE_CONTEXT = C.LexicalScopeContext()
60
61
62 FuncType = TypeVar("FuncType", bound=Callable[..., Any])
63
64
65 def _ensure_proto_type(proto: Message, proto_type: type[Message]) -> None:
66 if not isinstance(proto, proto_type):
67 raise TypeError(
68 f"The proto message needs to be of type '{proto_type.__name__}'"
69 )
70
71
72 def check_value_info(
73 value_info: ValueInfoProto, ctx: C.CheckerContext = DEFAULT_CONTEXT
74 ) -> None:
75 _ensure_proto_type(value_info, ValueInfoProto)
76 return C.check_value_info(value_info.SerializeToString(), ctx)
77
78
79 def check_tensor(tensor: TensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT) -> None:
80 _ensure_proto_type(tensor, TensorProto)
81 return C.check_tensor(tensor.SerializeToString(), ctx)
82
83
84 def check_attribute(
85 attr: AttributeProto,
86 ctx: C.CheckerContext = DEFAULT_CONTEXT,
87 lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
88 ) -> None:
89 _ensure_proto_type(attr, AttributeProto)
90 return C.check_attribute(attr.SerializeToString(), ctx, lex_ctx)
91
92
93 def check_node(
94 node: NodeProto,
95 ctx: C.CheckerContext = DEFAULT_CONTEXT,
96 lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
97 ) -> None:
98 _ensure_proto_type(node, NodeProto)
99 return C.check_node(node.SerializeToString(), ctx, lex_ctx)
100
101
102 def check_function(
103 function: FunctionProto,
104 ctx: C.CheckerContext,
105 lex_ctx: C.LexicalScopeContext,
106 ) -> None:
107 _ensure_proto_type(function, FunctionProto)
108 C.check_function(function.SerializeToString(), ctx, lex_ctx)
109
110
111 def check_graph(
112 graph: GraphProto,
113 ctx: C.CheckerContext = DEFAULT_CONTEXT,
114 lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
115 ) -> None:
116 _ensure_proto_type(graph, GraphProto)
117 return C.check_graph(graph.SerializeToString(), ctx, lex_ctx)
118
119
120 def check_sparse_tensor(
121 sparse: SparseTensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT
122 ) -> None:
123 _ensure_proto_type(sparse, SparseTensorProto)
124 C.check_sparse_tensor(sparse.SerializeToString(), ctx)
125
126
127 def check_model(
128 model: ModelProto | str | bytes | os.PathLike,
129 full_check: bool = False,
130 skip_opset_compatibility_check: bool = False,
131 ) -> None:
132 """Check the consistency of a model.
133
134 An exception will be raised if the model's ir_version is not set
135 properly or is higher than checker's ir_version, or if the model
136 has duplicate keys in metadata_props.
137
138 If IR version >= 3, the model must specify opset_import.
139 If IR version < 3, the model cannot have any opset_import specified.
140
141 Args:
142 model: Model to check. If model is a path, the function checks model
143 path first. If the model bytes size is larger than 2GB, function
144 should be called using model path.
145 full_check: If True, the function also runs shape inference check.
146 skip_opset_compatibility_check: If True, the function skips the check for
147 opset compatibility.
148 """
149 # If model is a path instead of ModelProto
150 if isinstance(model, (str, os.PathLike)):
151 C.check_model_path(os.fspath(model), full_check, skip_opset_compatibility_check)
152 else:
153 protobuf_string = (
154 model if isinstance(model, bytes) else model.SerializeToString()
155 )
156 # If the protobuf is larger than 2GB,
157 # remind users should use the model path to check
158 if sys.getsizeof(protobuf_string) > MAXIMUM_PROTOBUF:
159 raise ValueError(
160 "This protobuf of onnx model is too large (>2GB). Call check_model with model path instead."
161 )
162 C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
163
164
165 ValidationError = C.ValidationError
166
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/onnx/checker.py b/onnx/checker.py
--- a/onnx/checker.py
+++ b/onnx/checker.py
@@ -84,37 +84,37 @@
def check_attribute(
attr: AttributeProto,
ctx: C.CheckerContext = DEFAULT_CONTEXT,
- lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
+ lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
) -> None:
_ensure_proto_type(attr, AttributeProto)
- return C.check_attribute(attr.SerializeToString(), ctx, lex_ctx)
+ return C.check_attribute(attr.SerializeToString(), ctx, lexical_scope_ctx)
def check_node(
node: NodeProto,
ctx: C.CheckerContext = DEFAULT_CONTEXT,
- lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
+ lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
) -> None:
_ensure_proto_type(node, NodeProto)
- return C.check_node(node.SerializeToString(), ctx, lex_ctx)
+ return C.check_node(node.SerializeToString(), ctx, lexical_scope_ctx)
def check_function(
function: FunctionProto,
- ctx: C.CheckerContext,
- lex_ctx: C.LexicalScopeContext,
+ ctx: C.CheckerContext = DEFAULT_CONTEXT,
+ lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
) -> None:
_ensure_proto_type(function, FunctionProto)
- C.check_function(function.SerializeToString(), ctx, lex_ctx)
+ C.check_function(function.SerializeToString(), ctx, lexical_scope_ctx)
def check_graph(
graph: GraphProto,
ctx: C.CheckerContext = DEFAULT_CONTEXT,
- lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
+ lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
) -> None:
_ensure_proto_type(graph, GraphProto)
- return C.check_graph(graph.SerializeToString(), ctx, lex_ctx)
+ return C.check_graph(graph.SerializeToString(), ctx, lexical_scope_ctx)
def check_sparse_tensor(
| {"golden_diff": "diff --git a/onnx/checker.py b/onnx/checker.py\n--- a/onnx/checker.py\n+++ b/onnx/checker.py\n@@ -84,37 +84,37 @@\n def check_attribute(\n attr: AttributeProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n- lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n+ lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n ) -> None:\n _ensure_proto_type(attr, AttributeProto)\n- return C.check_attribute(attr.SerializeToString(), ctx, lex_ctx)\n+ return C.check_attribute(attr.SerializeToString(), ctx, lexical_scope_ctx)\n \n \n def check_node(\n node: NodeProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n- lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n+ lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n ) -> None:\n _ensure_proto_type(node, NodeProto)\n- return C.check_node(node.SerializeToString(), ctx, lex_ctx)\n+ return C.check_node(node.SerializeToString(), ctx, lexical_scope_ctx)\n \n \n def check_function(\n function: FunctionProto,\n- ctx: C.CheckerContext,\n- lex_ctx: C.LexicalScopeContext,\n+ ctx: C.CheckerContext = DEFAULT_CONTEXT,\n+ lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n ) -> None:\n _ensure_proto_type(function, FunctionProto)\n- C.check_function(function.SerializeToString(), ctx, lex_ctx)\n+ C.check_function(function.SerializeToString(), ctx, lexical_scope_ctx)\n \n \n def check_graph(\n graph: GraphProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n- lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n+ lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n ) -> None:\n _ensure_proto_type(graph, GraphProto)\n- return C.check_graph(graph.SerializeToString(), ctx, lex_ctx)\n+ return C.check_graph(graph.SerializeToString(), ctx, lexical_scope_ctx)\n \n \n def check_sparse_tensor(\n", "issue": "check_function requires contexts as arguments which breaks backward compatibility\nhttps://github.com/onnx/onnx/pull/5693 added required parameters to the `check_function` function in checker which breaks backward compatibility. 
Should we provide default contexts to `check_function` as well?\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) ONNX Project Contributors\n#\n# SPDX-License-Identifier: Apache-2.0\n\"\"\"Graph utilities for checking whether an ONNX proto message is legal.\"\"\"\n\nfrom __future__ import annotations\n\n__all__ = [\n \"check_attribute\",\n \"check_function\",\n \"check_graph\",\n \"check_model\",\n \"check_node\",\n \"check_sparse_tensor\",\n \"check_tensor\",\n \"check_value_info\",\n \"DEFAULT_CONTEXT\",\n \"LEXICAL_SCOPE_CONTEXT\",\n \"ValidationError\",\n \"C\",\n \"MAXIMUM_PROTOBUF\",\n]\n\nimport os\nimport sys\nfrom typing import Any, Callable, TypeVar\n\nfrom google.protobuf.message import Message\n\nimport onnx.defs\nimport onnx.onnx_cpp2py_export.checker as C # noqa: N812\nimport onnx.shape_inference\nfrom onnx import (\n IR_VERSION,\n AttributeProto,\n FunctionProto,\n GraphProto,\n ModelProto,\n NodeProto,\n SparseTensorProto,\n TensorProto,\n ValueInfoProto,\n)\n\n# Limitation of single protobuf file is 2GB\nMAXIMUM_PROTOBUF = 2000000000\n\n# TODO: This thing where we reserialize the protobuf back into the\n# string, only to deserialize it at the call site, is really goofy.\n# Stop doing that.\n\n\n# NB: Please don't edit this context!\nDEFAULT_CONTEXT = C.CheckerContext()\nDEFAULT_CONTEXT.ir_version = IR_VERSION\n# TODO: Maybe ONNX-ML should also be defaulted?\nDEFAULT_CONTEXT.opset_imports = {\"\": onnx.defs.onnx_opset_version()}\n\nLEXICAL_SCOPE_CONTEXT = C.LexicalScopeContext()\n\n\nFuncType = TypeVar(\"FuncType\", bound=Callable[..., Any])\n\n\ndef _ensure_proto_type(proto: Message, proto_type: type[Message]) -> None:\n if not isinstance(proto, proto_type):\n raise TypeError(\n f\"The proto message needs to be of type '{proto_type.__name__}'\"\n )\n\n\ndef check_value_info(\n value_info: ValueInfoProto, ctx: C.CheckerContext = DEFAULT_CONTEXT\n) -> None:\n _ensure_proto_type(value_info, ValueInfoProto)\n return C.check_value_info(value_info.SerializeToString(), ctx)\n\n\ndef check_tensor(tensor: TensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT) -> None:\n _ensure_proto_type(tensor, TensorProto)\n return C.check_tensor(tensor.SerializeToString(), ctx)\n\n\ndef check_attribute(\n attr: AttributeProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n) -> None:\n _ensure_proto_type(attr, AttributeProto)\n return C.check_attribute(attr.SerializeToString(), ctx, lex_ctx)\n\n\ndef check_node(\n node: NodeProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n) -> None:\n _ensure_proto_type(node, NodeProto)\n return C.check_node(node.SerializeToString(), ctx, lex_ctx)\n\n\ndef check_function(\n function: FunctionProto,\n ctx: C.CheckerContext,\n lex_ctx: C.LexicalScopeContext,\n) -> None:\n _ensure_proto_type(function, FunctionProto)\n C.check_function(function.SerializeToString(), ctx, lex_ctx)\n\n\ndef check_graph(\n graph: GraphProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n) -> None:\n _ensure_proto_type(graph, GraphProto)\n return C.check_graph(graph.SerializeToString(), ctx, lex_ctx)\n\n\ndef check_sparse_tensor(\n sparse: SparseTensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT\n) -> None:\n _ensure_proto_type(sparse, SparseTensorProto)\n C.check_sparse_tensor(sparse.SerializeToString(), ctx)\n\n\ndef check_model(\n model: ModelProto | str | bytes | os.PathLike,\n full_check: bool = False,\n 
skip_opset_compatibility_check: bool = False,\n) -> None:\n \"\"\"Check the consistency of a model.\n\n An exception will be raised if the model's ir_version is not set\n properly or is higher than checker's ir_version, or if the model\n has duplicate keys in metadata_props.\n\n If IR version >= 3, the model must specify opset_import.\n If IR version < 3, the model cannot have any opset_import specified.\n\n Args:\n model: Model to check. If model is a path, the function checks model\n path first. If the model bytes size is larger than 2GB, function\n should be called using model path.\n full_check: If True, the function also runs shape inference check.\n skip_opset_compatibility_check: If True, the function skips the check for\n opset compatibility.\n \"\"\"\n # If model is a path instead of ModelProto\n if isinstance(model, (str, os.PathLike)):\n C.check_model_path(os.fspath(model), full_check, skip_opset_compatibility_check)\n else:\n protobuf_string = (\n model if isinstance(model, bytes) else model.SerializeToString()\n )\n # If the protobuf is larger than 2GB,\n # remind users should use the model path to check\n if sys.getsizeof(protobuf_string) > MAXIMUM_PROTOBUF:\n raise ValueError(\n \"This protobuf of onnx model is too large (>2GB). Call check_model with model path instead.\"\n )\n C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)\n\n\nValidationError = C.ValidationError\n", "path": "onnx/checker.py"}], "after_files": [{"content": "# Copyright (c) ONNX Project Contributors\n#\n# SPDX-License-Identifier: Apache-2.0\n\"\"\"Graph utilities for checking whether an ONNX proto message is legal.\"\"\"\n\nfrom __future__ import annotations\n\n__all__ = [\n \"check_attribute\",\n \"check_function\",\n \"check_graph\",\n \"check_model\",\n \"check_node\",\n \"check_sparse_tensor\",\n \"check_tensor\",\n \"check_value_info\",\n \"DEFAULT_CONTEXT\",\n \"LEXICAL_SCOPE_CONTEXT\",\n \"ValidationError\",\n \"C\",\n \"MAXIMUM_PROTOBUF\",\n]\n\nimport os\nimport sys\nfrom typing import Any, Callable, TypeVar\n\nfrom google.protobuf.message import Message\n\nimport onnx.defs\nimport onnx.onnx_cpp2py_export.checker as C # noqa: N812\nimport onnx.shape_inference\nfrom onnx import (\n IR_VERSION,\n AttributeProto,\n FunctionProto,\n GraphProto,\n ModelProto,\n NodeProto,\n SparseTensorProto,\n TensorProto,\n ValueInfoProto,\n)\n\n# Limitation of single protobuf file is 2GB\nMAXIMUM_PROTOBUF = 2000000000\n\n# TODO: This thing where we reserialize the protobuf back into the\n# string, only to deserialize it at the call site, is really goofy.\n# Stop doing that.\n\n\n# NB: Please don't edit this context!\nDEFAULT_CONTEXT = C.CheckerContext()\nDEFAULT_CONTEXT.ir_version = IR_VERSION\n# TODO: Maybe ONNX-ML should also be defaulted?\nDEFAULT_CONTEXT.opset_imports = {\"\": onnx.defs.onnx_opset_version()}\n\nLEXICAL_SCOPE_CONTEXT = C.LexicalScopeContext()\n\n\nFuncType = TypeVar(\"FuncType\", bound=Callable[..., Any])\n\n\ndef _ensure_proto_type(proto: Message, proto_type: type[Message]) -> None:\n if not isinstance(proto, proto_type):\n raise TypeError(\n f\"The proto message needs to be of type '{proto_type.__name__}'\"\n )\n\n\ndef check_value_info(\n value_info: ValueInfoProto, ctx: C.CheckerContext = DEFAULT_CONTEXT\n) -> None:\n _ensure_proto_type(value_info, ValueInfoProto)\n return C.check_value_info(value_info.SerializeToString(), ctx)\n\n\ndef check_tensor(tensor: TensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT) -> None:\n _ensure_proto_type(tensor, TensorProto)\n 
return C.check_tensor(tensor.SerializeToString(), ctx)\n\n\ndef check_attribute(\n attr: AttributeProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n) -> None:\n _ensure_proto_type(attr, AttributeProto)\n return C.check_attribute(attr.SerializeToString(), ctx, lexical_scope_ctx)\n\n\ndef check_node(\n node: NodeProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n) -> None:\n _ensure_proto_type(node, NodeProto)\n return C.check_node(node.SerializeToString(), ctx, lexical_scope_ctx)\n\n\ndef check_function(\n function: FunctionProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n) -> None:\n _ensure_proto_type(function, FunctionProto)\n C.check_function(function.SerializeToString(), ctx, lexical_scope_ctx)\n\n\ndef check_graph(\n graph: GraphProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n) -> None:\n _ensure_proto_type(graph, GraphProto)\n return C.check_graph(graph.SerializeToString(), ctx, lexical_scope_ctx)\n\n\ndef check_sparse_tensor(\n sparse: SparseTensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT\n) -> None:\n _ensure_proto_type(sparse, SparseTensorProto)\n C.check_sparse_tensor(sparse.SerializeToString(), ctx)\n\n\ndef check_model(\n model: ModelProto | str | bytes | os.PathLike,\n full_check: bool = False,\n skip_opset_compatibility_check: bool = False,\n) -> None:\n \"\"\"Check the consistency of a model.\n\n An exception will be raised if the model's ir_version is not set\n properly or is higher than checker's ir_version, or if the model\n has duplicate keys in metadata_props.\n\n If IR version >= 3, the model must specify opset_import.\n If IR version < 3, the model cannot have any opset_import specified.\n\n Args:\n model: Model to check. If model is a path, the function checks model\n path first. If the model bytes size is larger than 2GB, function\n should be called using model path.\n full_check: If True, the function also runs shape inference check.\n skip_opset_compatibility_check: If True, the function skips the check for\n opset compatibility.\n \"\"\"\n # If model is a path instead of ModelProto\n if isinstance(model, (str, os.PathLike)):\n C.check_model_path(os.fspath(model), full_check, skip_opset_compatibility_check)\n else:\n protobuf_string = (\n model if isinstance(model, bytes) else model.SerializeToString()\n )\n # If the protobuf is larger than 2GB,\n # remind users should use the model path to check\n if sys.getsizeof(protobuf_string) > MAXIMUM_PROTOBUF:\n raise ValueError(\n \"This protobuf of onnx model is too large (>2GB). Call check_model with model path instead.\"\n )\n C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)\n\n\nValidationError = C.ValidationError\n", "path": "onnx/checker.py"}]} | 1,933 | 469 |
gh_patches_debug_11270 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-2886 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
E2520 raised for mutually exclusive properties when using Conditions
### CloudFormation Lint Version
cfn-lint 0.80.2
### What operating system are you using?
Windows
### Describe the bug
[E2520](https://github.com/aws-cloudformation/cfn-lint/blob/main/docs/rules.md#E2520) is raised for mutually exclusive properties when using Conditions
```
cfn-lint -t ./template.yaml
E2520 Property SourceSecurityGroupId should NOT exist with CidrIp for Resources/Ingress/Properties
.\template.yaml:13:7
```
The same was working prior to `0.79.11`. PR [2875](https://github.com/aws-cloudformation/cfn-lint/pull/2875) seems to be the cause.
```
> cfn-lint --version
cfn-lint 0.79.10
> cfn-lint -t ./template.yaml
> echo $lastexitcode
0
```
### Expected behavior
E2520 is ignored for mutually exclusive properties that use the same Condition and the Fn::If intrinsic function, which ensures that only one of the properties has a value.
### Reproduction template
```yaml
AWSTemplateFormatVersion: 2010-09-09
Parameters:
pCidr:
Type: String
Default: ''
Conditions:
cIsCidr: !Not [!Equals [!Ref pCidr, '']]
Resources:
Ingress:
Type: AWS::EC2::SecurityGroupIngress
Properties:
SourceSecurityGroupId: !If [ cIsCidr, !Ref AWS::NoValue, sg-abc12345 ]
CidrIp: !If [ cIsCidr, !Ref pCidr, !Ref AWS::NoValue ]
IpProtocol: "-1"
GroupId: sg-abc1234567
```
--- END ISSUE ---
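
As a plain illustration of why neither scenario should trigger E2520: under either value of `cIsCidr`, the resolved resource keeps exactly one of the two exclusive properties. The toy check below models that condition expansion with hypothetical values; it is not cfn-lint code.

```python
# Toy model of the two Fn::If scenarios from the template above (values are
# made up for illustration).
scenarios = {
    "cIsCidr=true": {"CidrIp": "10.0.0.0/24", "IpProtocol": "-1", "GroupId": "sg-abc1234567"},
    "cIsCidr=false": {"SourceSecurityGroupId": "sg-abc12345", "IpProtocol": "-1", "GroupId": "sg-abc1234567"},
}
exclusive = {"CidrIp", "SourceSecurityGroupId"}
for name, resolved in scenarios.items():
    present = exclusive & set(resolved)
    assert len(present) == 1, name
```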
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/properties/Exclusive.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import cfnlint.helpers
6 from cfnlint.data import AdditionalSpecs
7 from cfnlint.rules import CloudFormationLintRule, RuleMatch
8
9
10 class Exclusive(CloudFormationLintRule):
11 """Check Properties Resource Configuration"""
12
13 id = "E2520"
14 shortdesc = "Check Properties that are mutually exclusive"
15 description = (
16 "Making sure CloudFormation properties that are exclusive are not defined"
17 )
18 source_url = "https://github.com/aws-cloudformation/cfn-python-lint"
19 tags = ["resources"]
20
21 def __init__(self):
22 """Init"""
23 super().__init__()
24 exclusivespec = cfnlint.helpers.load_resource(AdditionalSpecs, "Exclusive.json")
25 self.resource_types_specs = exclusivespec["ResourceTypes"]
26 self.property_types_specs = exclusivespec["PropertyTypes"]
27 for resource_type_spec in self.resource_types_specs:
28 self.resource_property_types.append(resource_type_spec)
29 for property_type_spec in self.property_types_specs:
30 self.resource_sub_property_types.append(property_type_spec)
31
32 def check(self, properties, exclusions, path, cfn):
33 """Check itself"""
34 matches = []
35 for p_value, p_path in properties.items_safe(path[:]):
36 for k, v in exclusions.items():
37 property_sets = cfn.get_object_without_conditions(p_value, [k] + v)
38 for property_set in property_sets:
39 obj = property_set["Object"].clean()
40 for prop in obj:
41 if prop in exclusions:
42 for excl_property in exclusions[prop]:
43 if excl_property in obj:
44 if property_set["Scenario"] is None:
45 message = "Property {0} should NOT exist with {1} for {2}"
46 matches.append(
47 RuleMatch(
48 p_path + [prop],
49 message.format(
50 excl_property,
51 prop,
52 "/".join(map(str, p_path)),
53 ),
54 )
55 )
56 else:
57 scenario_text = " and ".join(
58 [
59 f'when condition "{k}" is {v}'
60 for (k, v) in property_set[
61 "Scenario"
62 ].items()
63 ]
64 )
65 message = "Property {0} should NOT exist with {1} {2} for {3}"
66 matches.append(
67 RuleMatch(
68 p_path + [prop],
69 message.format(
70 excl_property,
71 prop,
72 scenario_text,
73 "/".join(map(str, p_path)),
74 ),
75 )
76 )
77
78 return matches
79
80 def match_resource_sub_properties(self, properties, property_type, path, cfn):
81 """Match for sub properties"""
82 matches = []
83
84 exclusions = self.property_types_specs.get(property_type, {})
85 matches.extend(self.check(properties, exclusions, path, cfn))
86
87 return matches
88
89 def match_resource_properties(self, properties, resource_type, path, cfn):
90 """Check CloudFormation Properties"""
91 matches = []
92
93 exclusions = self.resource_types_specs.get(resource_type, {})
94 matches.extend(self.check(properties, exclusions, path, cfn))
95
96 return matches
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/resources/properties/Exclusive.py b/src/cfnlint/rules/resources/properties/Exclusive.py
--- a/src/cfnlint/rules/resources/properties/Exclusive.py
+++ b/src/cfnlint/rules/resources/properties/Exclusive.py
@@ -38,7 +38,7 @@
for property_set in property_sets:
obj = property_set["Object"].clean()
for prop in obj:
- if prop in exclusions:
+ if prop == k:
for excl_property in exclusions[prop]:
if excl_property in obj:
if property_set["Scenario"] is None:
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/properties/Exclusive.py b/src/cfnlint/rules/resources/properties/Exclusive.py\n--- a/src/cfnlint/rules/resources/properties/Exclusive.py\n+++ b/src/cfnlint/rules/resources/properties/Exclusive.py\n@@ -38,7 +38,7 @@\n for property_set in property_sets:\n obj = property_set[\"Object\"].clean()\n for prop in obj:\n- if prop in exclusions:\n+ if prop == k:\n for excl_property in exclusions[prop]:\n if excl_property in obj:\n if property_set[\"Scenario\"] is None:\n", "issue": "E2520 raised for mutually exclusive properties when using Conditions\n### CloudFormation Lint Version\n\ncfn-lint 0.80.2\n\n### What operating system are you using?\n\nWindows\n\n### Describe the bug\n\n[E2520](https://github.com/aws-cloudformation/cfn-lint/blob/main/docs/rules.md#E2520) is raised for mutually exclusive properties when using Conditions\r\n\r\n```\r\ncfn-lint -t ./template.yaml\r\nE2520 Property SourceSecurityGroupId should NOT exist with CidrIp for Resources/Ingress/Properties\r\n.\\template.yaml:13:7\r\n```\r\n\r\nThe same was working prior `0.79.11`. PR [2875](https://github.com/aws-cloudformation/cfn-lint/pull/2875) seems to be the cause.\r\n\r\n```\r\n> cfn-lint --version \r\ncfn-lint 0.79.10\r\n> cfn-lint -t ./template.yaml \r\n> echo $lastexitcode\r\n0\r\n```\n\n### Expected behavior\n\nE2520 is ignored for mutually exclusive properties that use the same Condition and Fn::If intrinsic function which makes sure only one of the properties has value.\n\n### Reproduction template\n\n```yaml\r\nAWSTemplateFormatVersion: 2010-09-09\r\nParameters:\r\n pCidr:\r\n Type: String\r\n Default: ''\r\nConditions:\r\n cIsCidr: !Not [!Equals [!Ref pCidr, '']]\r\nResources:\r\n Ingress:\r\n Type: AWS::EC2::SecurityGroupIngress\r\n Properties:\r\n SourceSecurityGroupId: !If [ cIsCidr, !Ref AWS::NoValue, sg-abc12345 ]\r\n CidrIp: !If [ cIsCidr, !Ref pCidr, !Ref AWS::NoValue ]\r\n IpProtocol: \"-1\"\r\n GroupId: sg-abc1234567\r\n```\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport cfnlint.helpers\nfrom cfnlint.data import AdditionalSpecs\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass Exclusive(CloudFormationLintRule):\n \"\"\"Check Properties Resource Configuration\"\"\"\n\n id = \"E2520\"\n shortdesc = \"Check Properties that are mutually exclusive\"\n description = (\n \"Making sure CloudFormation properties that are exclusive are not defined\"\n )\n source_url = \"https://github.com/aws-cloudformation/cfn-python-lint\"\n tags = [\"resources\"]\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super().__init__()\n exclusivespec = cfnlint.helpers.load_resource(AdditionalSpecs, \"Exclusive.json\")\n self.resource_types_specs = exclusivespec[\"ResourceTypes\"]\n self.property_types_specs = exclusivespec[\"PropertyTypes\"]\n for resource_type_spec in self.resource_types_specs:\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in self.property_types_specs:\n self.resource_sub_property_types.append(property_type_spec)\n\n def check(self, properties, exclusions, path, cfn):\n \"\"\"Check itself\"\"\"\n matches = []\n for p_value, p_path in properties.items_safe(path[:]):\n for k, v in exclusions.items():\n property_sets = cfn.get_object_without_conditions(p_value, [k] + v)\n for property_set in property_sets:\n obj = property_set[\"Object\"].clean()\n for prop in obj:\n if prop in exclusions:\n for excl_property in exclusions[prop]:\n if excl_property in obj:\n if property_set[\"Scenario\"] is None:\n message = \"Property {0} should NOT exist with {1} for {2}\"\n matches.append(\n RuleMatch(\n p_path + [prop],\n message.format(\n excl_property,\n prop,\n \"/\".join(map(str, p_path)),\n ),\n )\n )\n else:\n scenario_text = \" and \".join(\n [\n f'when condition \"{k}\" is {v}'\n for (k, v) in property_set[\n \"Scenario\"\n ].items()\n ]\n )\n message = \"Property {0} should NOT exist with {1} {2} for {3}\"\n matches.append(\n RuleMatch(\n p_path + [prop],\n message.format(\n excl_property,\n prop,\n scenario_text,\n \"/\".join(map(str, p_path)),\n ),\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = []\n\n exclusions = self.property_types_specs.get(property_type, {})\n matches.extend(self.check(properties, exclusions, path, cfn))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n exclusions = self.resource_types_specs.get(resource_type, {})\n matches.extend(self.check(properties, exclusions, path, cfn))\n\n return matches\n", "path": "src/cfnlint/rules/resources/properties/Exclusive.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport cfnlint.helpers\nfrom cfnlint.data import AdditionalSpecs\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass Exclusive(CloudFormationLintRule):\n \"\"\"Check Properties Resource Configuration\"\"\"\n\n id = \"E2520\"\n shortdesc = \"Check Properties that are mutually exclusive\"\n description = (\n \"Making sure CloudFormation properties that are exclusive are not defined\"\n )\n source_url = \"https://github.com/aws-cloudformation/cfn-python-lint\"\n tags = [\"resources\"]\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super().__init__()\n exclusivespec = cfnlint.helpers.load_resource(AdditionalSpecs, \"Exclusive.json\")\n self.resource_types_specs = exclusivespec[\"ResourceTypes\"]\n self.property_types_specs = exclusivespec[\"PropertyTypes\"]\n for resource_type_spec in self.resource_types_specs:\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in self.property_types_specs:\n self.resource_sub_property_types.append(property_type_spec)\n\n def check(self, properties, exclusions, path, cfn):\n \"\"\"Check itself\"\"\"\n matches = []\n for p_value, p_path in properties.items_safe(path[:]):\n for k, v in exclusions.items():\n property_sets = cfn.get_object_without_conditions(p_value, [k] + v)\n for property_set in property_sets:\n obj = property_set[\"Object\"].clean()\n for prop in obj:\n if prop == k:\n for excl_property in exclusions[prop]:\n if excl_property in obj:\n if property_set[\"Scenario\"] is None:\n message = \"Property {0} should NOT exist with {1} for {2}\"\n matches.append(\n RuleMatch(\n p_path + [prop],\n message.format(\n excl_property,\n prop,\n \"/\".join(map(str, p_path)),\n ),\n )\n )\n else:\n scenario_text = \" and \".join(\n [\n f'when condition \"{k}\" is {v}'\n for (k, v) in property_set[\n \"Scenario\"\n ].items()\n ]\n )\n message = \"Property {0} should NOT exist with {1} {2} for {3}\"\n matches.append(\n RuleMatch(\n p_path + [prop],\n message.format(\n excl_property,\n prop,\n scenario_text,\n \"/\".join(map(str, p_path)),\n ),\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = []\n\n exclusions = self.property_types_specs.get(property_type, {})\n matches.extend(self.check(properties, exclusions, path, cfn))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n exclusions = self.resource_types_specs.get(resource_type, {})\n matches.extend(self.check(properties, exclusions, path, cfn))\n\n return matches\n", "path": "src/cfnlint/rules/resources/properties/Exclusive.py"}]} | 1,566 | 133 |
gh_patches_debug_7736 | rasdani/github-patches | git_diff | google__flax-2492 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve documentation for `Dropout` and `rngs` argument in `linen.Module.apply()`
Here is an example of `Dropout` in a model definition:
https://github.com/google/flax/blob/d068512a932da3e05b822790a591bac391aeab36/examples/nlp_seq/models.py#L211
Here is the `apply()`, where `rngs` is passed in
https://github.com/google/flax/blob/d068512a932da3e05b822790a591bac391aeab36/examples/nlp_seq/train.py#L206-L207
However the `rng` is not very clearly explained in `apply()`
https://github.com/google/flax/blob/615f40be774e7ed66fd344e8291ac0d48ebcef7d/flax/linen/module.py#L749
The `rngs` seems to be passed to `flax/core/scope.py`
Here is the code for `Dropout` (linen)
https://github.com/google/flax/blob/9b4807840c5cb26ef5e29028e3558d404aee00a0/flax/linen/stochastic.py#L56-L57
Here is the code for `make_rng()`
https://github.com/google/flax/blob/615f40be774e7ed66fd344e8291ac0d48ebcef7d/flax/core/scope.py#L441-L447
The documentation for `rngs` in `apply()` should have a (pointer to) list of names of possible rngs
And documentation for `Dropout` should mention how to pass in rng using `apply()`, without directly passing in like `Dropout()(x,rng=rng)`.
Also probably need to mention that `make_rng()` folds in the rng so each dropout layer will use a different rng if there are multiple dropout layers.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flax/linen/stochastic.py`
Content:
```
1 # Copyright 2022 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Stochastic modules."""
16
17 from typing import Optional, Sequence
18
19 from flax.linen.module import compact
20 from flax.linen.module import merge_param
21 from flax.linen.module import Module
22 from jax import lax
23 from jax import random
24 import jax.numpy as jnp
25
26
27 class Dropout(Module):
28 """Create a dropout layer.
29
30 Attributes:
31 rate: the dropout probability. (_not_ the keep rate!)
32 broadcast_dims: dimensions that will share the same dropout mask
33 deterministic: if false the inputs are scaled by `1 / (1 - rate)` and
34 masked, whereas if true, no mask is applied and the inputs are returned
35 as is.
36 """
37 rate: float
38 broadcast_dims: Sequence[int] = ()
39 deterministic: Optional[bool] = None
40
41 @compact
42 def __call__(self, inputs, deterministic: Optional[bool] = None):
43 """Applies a random dropout mask to the input.
44
45 Args:
46 inputs: the inputs that should be randomly masked.
47 deterministic: if false the inputs are scaled by `1 / (1 - rate)` and
48 masked, whereas if true, no mask is applied and the inputs are returned
49 as is.
50
51 Returns:
52 The masked inputs reweighted to preserve mean.
53 """
54 deterministic = merge_param(
55 'deterministic', self.deterministic, deterministic)
56 if self.rate == 0.:
57 return inputs
58 # Prevent gradient NaNs in 1.0 edge-case.
59 if self.rate == 1.0:
60 return jnp.zeros_like(inputs)
61 keep_prob = 1. - self.rate
62 if deterministic:
63 return inputs
64 else:
65 rng = self.make_rng('dropout')
66 broadcast_shape = list(inputs.shape)
67 for dim in self.broadcast_dims:
68 broadcast_shape[dim] = 1
69 mask = random.bernoulli(rng, p=keep_prob, shape=broadcast_shape)
70 mask = jnp.broadcast_to(mask, inputs.shape)
71 return lax.select(mask, inputs / keep_prob, jnp.zeros_like(inputs))
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flax/linen/stochastic.py b/flax/linen/stochastic.py
--- a/flax/linen/stochastic.py
+++ b/flax/linen/stochastic.py
@@ -27,6 +27,11 @@
class Dropout(Module):
"""Create a dropout layer.
+ Note: When using :meth:`Module.apply() <flax.linen.Module.apply>`, make sure
+ to include an RNG seed named `'dropout'`. For example::
+
+ model.apply({'params': params}, inputs=inputs, train=True, rngs={'dropout': dropout_rng})`
+
Attributes:
rate: the dropout probability. (_not_ the keep rate!)
broadcast_dims: dimensions that will share the same dropout mask
| {"golden_diff": "diff --git a/flax/linen/stochastic.py b/flax/linen/stochastic.py\n--- a/flax/linen/stochastic.py\n+++ b/flax/linen/stochastic.py\n@@ -27,6 +27,11 @@\n class Dropout(Module):\n \"\"\"Create a dropout layer.\n \n+ Note: When using :meth:`Module.apply() <flax.linen.Module.apply>`, make sure\n+ to include an RNG seed named `'dropout'`. For example::\n+ \n+ model.apply({'params': params}, inputs=inputs, train=True, rngs={'dropout': dropout_rng})`\n+\n Attributes:\n rate: the dropout probability. (_not_ the keep rate!)\n broadcast_dims: dimensions that will share the same dropout mask\n", "issue": "Improve documentation for `Dropout` and `rngs` argument in `linen.Module.apply()`\n\r\nHere is an example of `Dropout` in a model definition:\r\nhttps://github.com/google/flax/blob/d068512a932da3e05b822790a591bac391aeab36/examples/nlp_seq/models.py#L211\r\n\r\nHere is the `apply()`, where `rngs` is passed in\r\nhttps://github.com/google/flax/blob/d068512a932da3e05b822790a591bac391aeab36/examples/nlp_seq/train.py#L206-L207\r\nHowever the `rng` is not very clearly explained in `apply()`\r\nhttps://github.com/google/flax/blob/615f40be774e7ed66fd344e8291ac0d48ebcef7d/flax/linen/module.py#L749\r\nThe `rngs` seems to be passed to `flax/core/scope.py`\r\nHere is the code for `Dropout` (linen)\r\nhttps://github.com/google/flax/blob/9b4807840c5cb26ef5e29028e3558d404aee00a0/flax/linen/stochastic.py#L56-L57\r\nHere is the code for `make_rng()`\r\nhttps://github.com/google/flax/blob/615f40be774e7ed66fd344e8291ac0d48ebcef7d/flax/core/scope.py#L441-L447\r\n\r\nThe documentation for `rngs` in `apply()` should have a (pointer to) list of names of possible rngs\r\nAnd documentation for `Dropout` should mention how to pass in rng using `apply()`, without directly passing in like `Dropout()(x,rng=rng)`.\r\nAlso probably need to mention the `make_rng()` `fold_in` the rng so each dropout layer will use different rng if there are multiple dropout layers.\n", "before_files": [{"content": "# Copyright 2022 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Stochastic modules.\"\"\"\n\nfrom typing import Optional, Sequence\n\nfrom flax.linen.module import compact\nfrom flax.linen.module import merge_param\nfrom flax.linen.module import Module\nfrom jax import lax\nfrom jax import random\nimport jax.numpy as jnp\n\n\nclass Dropout(Module):\n \"\"\"Create a dropout layer.\n\n Attributes:\n rate: the dropout probability. 
(_not_ the keep rate!)\n broadcast_dims: dimensions that will share the same dropout mask\n deterministic: if false the inputs are scaled by `1 / (1 - rate)` and\n masked, whereas if true, no mask is applied and the inputs are returned\n as is.\n \"\"\"\n rate: float\n broadcast_dims: Sequence[int] = ()\n deterministic: Optional[bool] = None\n\n @compact\n def __call__(self, inputs, deterministic: Optional[bool] = None):\n \"\"\"Applies a random dropout mask to the input.\n\n Args:\n inputs: the inputs that should be randomly masked.\n deterministic: if false the inputs are scaled by `1 / (1 - rate)` and\n masked, whereas if true, no mask is applied and the inputs are returned\n as is.\n\n Returns:\n The masked inputs reweighted to preserve mean.\n \"\"\"\n deterministic = merge_param(\n 'deterministic', self.deterministic, deterministic)\n if self.rate == 0.:\n return inputs\n # Prevent gradient NaNs in 1.0 edge-case.\n if self.rate == 1.0:\n return jnp.zeros_like(inputs)\n keep_prob = 1. - self.rate\n if deterministic:\n return inputs\n else:\n rng = self.make_rng('dropout')\n broadcast_shape = list(inputs.shape)\n for dim in self.broadcast_dims:\n broadcast_shape[dim] = 1\n mask = random.bernoulli(rng, p=keep_prob, shape=broadcast_shape)\n mask = jnp.broadcast_to(mask, inputs.shape)\n return lax.select(mask, inputs / keep_prob, jnp.zeros_like(inputs))\n", "path": "flax/linen/stochastic.py"}], "after_files": [{"content": "# Copyright 2022 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Stochastic modules.\"\"\"\n\nfrom typing import Optional, Sequence\n\nfrom flax.linen.module import compact\nfrom flax.linen.module import merge_param\nfrom flax.linen.module import Module\nfrom jax import lax\nfrom jax import random\nimport jax.numpy as jnp\n\n\nclass Dropout(Module):\n \"\"\"Create a dropout layer.\n\n Note: When using :meth:`Module.apply() <flax.linen.Module.apply>`, make sure\n to include an RNG seed named `'dropout'`. For example::\n \n model.apply({'params': params}, inputs=inputs, train=True, rngs={'dropout': dropout_rng})`\n\n Attributes:\n rate: the dropout probability. 
(_not_ the keep rate!)\n broadcast_dims: dimensions that will share the same dropout mask\n deterministic: if false the inputs are scaled by `1 / (1 - rate)` and\n masked, whereas if true, no mask is applied and the inputs are returned\n as is.\n \"\"\"\n rate: float\n broadcast_dims: Sequence[int] = ()\n deterministic: Optional[bool] = None\n\n @compact\n def __call__(self, inputs, deterministic: Optional[bool] = None):\n \"\"\"Applies a random dropout mask to the input.\n\n Args:\n inputs: the inputs that should be randomly masked.\n deterministic: if false the inputs are scaled by `1 / (1 - rate)` and\n masked, whereas if true, no mask is applied and the inputs are returned\n as is.\n\n Returns:\n The masked inputs reweighted to preserve mean.\n \"\"\"\n deterministic = merge_param(\n 'deterministic', self.deterministic, deterministic)\n if self.rate == 0.:\n return inputs\n # Prevent gradient NaNs in 1.0 edge-case.\n if self.rate == 1.0:\n return jnp.zeros_like(inputs)\n keep_prob = 1. - self.rate\n if deterministic:\n return inputs\n else:\n rng = self.make_rng('dropout')\n broadcast_shape = list(inputs.shape)\n for dim in self.broadcast_dims:\n broadcast_shape[dim] = 1\n mask = random.bernoulli(rng, p=keep_prob, shape=broadcast_shape)\n mask = jnp.broadcast_to(mask, inputs.shape)\n return lax.select(mask, inputs / keep_prob, jnp.zeros_like(inputs))\n", "path": "flax/linen/stochastic.py"}]} | 1,471 | 167 |
gh_patches_debug_18985 | rasdani/github-patches | git_diff | oppia__oppia-6309 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
InteractiveMap interaction: in the rule editor, clicks on the map are not displayed correctly
Create an exploration with a map interaction. Add a rule and click on the map to choose the point the rule applies to. A marker should appear where you click, but it does not.
Save and close the rule, then re-open it. The marker is now displayed correctly.
Create a new rule. Before being clicked on, the map should be blank, but instead it displays the position of the marker from the previous rule.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `extensions/dependencies/dependencies_config.py`
Content:
```
1 # coding: utf-8
2 #
3 # Copyright 2014 The Oppia Authors. All Rights Reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS-IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 """Configuration for JavaScript library dependencies."""
18
19
20 # A dict mapping dependency ids to the Angular module names they
21 # should insert when the Angular app is first initialized.
22 DEPENDENCIES_TO_ANGULAR_MODULES_DICT = {
23 'codemirror': ['ui.codemirror'],
24 'google_maps': ['ui.map'],
25 'guppy': [],
26 'logic_proof': [],
27 'math_expressions': [],
28 'midijs': [],
29 'pencilcode': [],
30 'skulpt': [],
31 }
32
```
Path: `extensions/interactions/InteractiveMap/InteractiveMap.py`
Content:
```
1 # coding: utf-8
2 #
3 # Copyright 2014 The Oppia Authors. All Rights Reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, softwar
12 # distributed under the License is distributed on an "AS-IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 """Python configuration for InteractiveMap interaction."""
18
19 from extensions.interactions import base
20
21
22 class InteractiveMap(base.BaseInteraction):
23 """Interaction for pinpointing a location on a map."""
24
25 name = 'World Map'
26 description = 'Allows learners to specify a position on a world map.'
27 display_mode = base.DISPLAY_MODE_SUPPLEMENTAL
28 is_trainable = False
29 _dependency_ids = ['google_maps']
30 answer_type = 'CoordTwoDim'
31 instructions = 'Click on the map'
32 narrow_instructions = 'View map'
33 needs_summary = True
34 # There needs to be a way to pass marker location so that an answer can be
35 # conveyed meaningfully to the learner. Once this issue is fixed,
36 # InteractiveMap interaction can be supported by the solution feature.
37 can_have_solution = False
38 show_generic_submit_button = False
39
40 _customization_arg_specs = [{
41 'name': 'latitude',
42 'description': 'Starting center latitude (-90 to 90)',
43 'schema': {
44 'type': 'float',
45 'validators': [{
46 'id': 'is_at_least',
47 'min_value': -90.0,
48 }, {
49 'id': 'is_at_most',
50 'max_value': 90.0,
51 }]
52 },
53 'default_value': 0.0,
54 }, {
55 'name': 'longitude',
56 'description': 'Starting center longitude (-180 to 180)',
57 'schema': {
58 'type': 'float',
59 'validators': [{
60 'id': 'is_at_least',
61 'min_value': -180.0,
62 }, {
63 'id': 'is_at_most',
64 'max_value': 180.0,
65 }]
66 },
67 'default_value': 0.0,
68 }, {
69 'name': 'zoom',
70 'description': 'Starting zoom level (0 shows the entire earth)',
71 'schema': {
72 'type': 'float',
73 },
74 'default_value': 0.0,
75 }]
76
77 _answer_visualization_specs = [{
78 # Table with answer counts for top N answers.
79 'id': 'FrequencyTable',
80 'options': {
81 'column_headers': ['Answer', 'Count'],
82 'title': 'Top 10 answers',
83 },
84 'calculation_id': 'Top10AnswerFrequencies',
85 'addressed_info_is_supported': True,
86 }]
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/extensions/dependencies/dependencies_config.py b/extensions/dependencies/dependencies_config.py
--- a/extensions/dependencies/dependencies_config.py
+++ b/extensions/dependencies/dependencies_config.py
@@ -21,7 +21,7 @@
# should insert when the Angular app is first initialized.
DEPENDENCIES_TO_ANGULAR_MODULES_DICT = {
'codemirror': ['ui.codemirror'],
- 'google_maps': ['ui.map'],
+ 'ui_leaflet': ['ui-leaflet'],
'guppy': [],
'logic_proof': [],
'math_expressions': [],
diff --git a/extensions/interactions/InteractiveMap/InteractiveMap.py b/extensions/interactions/InteractiveMap/InteractiveMap.py
--- a/extensions/interactions/InteractiveMap/InteractiveMap.py
+++ b/extensions/interactions/InteractiveMap/InteractiveMap.py
@@ -26,7 +26,7 @@
description = 'Allows learners to specify a position on a world map.'
display_mode = base.DISPLAY_MODE_SUPPLEMENTAL
is_trainable = False
- _dependency_ids = ['google_maps']
+ _dependency_ids = ['ui_leaflet']
answer_type = 'CoordTwoDim'
instructions = 'Click on the map'
narrow_instructions = 'View map'
| {"golden_diff": "diff --git a/extensions/dependencies/dependencies_config.py b/extensions/dependencies/dependencies_config.py\n--- a/extensions/dependencies/dependencies_config.py\n+++ b/extensions/dependencies/dependencies_config.py\n@@ -21,7 +21,7 @@\n # should insert when the Angular app is first initialized.\n DEPENDENCIES_TO_ANGULAR_MODULES_DICT = {\n 'codemirror': ['ui.codemirror'],\n- 'google_maps': ['ui.map'],\n+ 'ui_leaflet': ['ui-leaflet'],\n 'guppy': [],\n 'logic_proof': [],\n 'math_expressions': [],\ndiff --git a/extensions/interactions/InteractiveMap/InteractiveMap.py b/extensions/interactions/InteractiveMap/InteractiveMap.py\n--- a/extensions/interactions/InteractiveMap/InteractiveMap.py\n+++ b/extensions/interactions/InteractiveMap/InteractiveMap.py\n@@ -26,7 +26,7 @@\n description = 'Allows learners to specify a position on a world map.'\n display_mode = base.DISPLAY_MODE_SUPPLEMENTAL\n is_trainable = False\n- _dependency_ids = ['google_maps']\n+ _dependency_ids = ['ui_leaflet']\n answer_type = 'CoordTwoDim'\n instructions = 'Click on the map'\n narrow_instructions = 'View map'\n", "issue": "InteractiveMap interaction: in the rule editor, clicks on the map are not displayed correctly\nCreate an exploration with a map interaction. Add a rule and click on the map to choose the point the rule applies to. A marker should appear where you click, but it does not.\n\nSave and close the rule, then re-open it. The marker is now displayed correctly.\n\nCreate a new rule. Before being clicked on the map should be blank, but instead it displays the position of the marker from the previous rule.\n\n", "before_files": [{"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Configuration for JavaScript library dependencies.\"\"\"\n\n\n# A dict mapping dependency ids to the Angular module names they\n# should insert when the Angular app is first initialized.\nDEPENDENCIES_TO_ANGULAR_MODULES_DICT = {\n 'codemirror': ['ui.codemirror'],\n 'google_maps': ['ui.map'],\n 'guppy': [],\n 'logic_proof': [],\n 'math_expressions': [],\n 'midijs': [],\n 'pencilcode': [],\n 'skulpt': [],\n}\n", "path": "extensions/dependencies/dependencies_config.py"}, {"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, softwar\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Python configuration for InteractiveMap interaction.\"\"\"\n\nfrom extensions.interactions import base\n\n\nclass InteractiveMap(base.BaseInteraction):\n \"\"\"Interaction for pinpointing a location on a map.\"\"\"\n\n name = 'World Map'\n description = 'Allows learners to specify a position on a world map.'\n display_mode = base.DISPLAY_MODE_SUPPLEMENTAL\n is_trainable = False\n _dependency_ids = ['google_maps']\n answer_type = 'CoordTwoDim'\n instructions = 'Click on the map'\n narrow_instructions = 'View map'\n needs_summary = True\n # There needs to be a way to pass marker location so that an answer can be\n # conveyed meaningfully to the learner. Once this issue is fixed,\n # InteractiveMap interaction can be supported by the solution feature.\n can_have_solution = False\n show_generic_submit_button = False\n\n _customization_arg_specs = [{\n 'name': 'latitude',\n 'description': 'Starting center latitude (-90 to 90)',\n 'schema': {\n 'type': 'float',\n 'validators': [{\n 'id': 'is_at_least',\n 'min_value': -90.0,\n }, {\n 'id': 'is_at_most',\n 'max_value': 90.0,\n }]\n },\n 'default_value': 0.0,\n }, {\n 'name': 'longitude',\n 'description': 'Starting center longitude (-180 to 180)',\n 'schema': {\n 'type': 'float',\n 'validators': [{\n 'id': 'is_at_least',\n 'min_value': -180.0,\n }, {\n 'id': 'is_at_most',\n 'max_value': 180.0,\n }]\n },\n 'default_value': 0.0,\n }, {\n 'name': 'zoom',\n 'description': 'Starting zoom level (0 shows the entire earth)',\n 'schema': {\n 'type': 'float',\n },\n 'default_value': 0.0,\n }]\n\n _answer_visualization_specs = [{\n # Table with answer counts for top N answers.\n 'id': 'FrequencyTable',\n 'options': {\n 'column_headers': ['Answer', 'Count'],\n 'title': 'Top 10 answers',\n },\n 'calculation_id': 'Top10AnswerFrequencies',\n 'addressed_info_is_supported': True,\n }]\n", "path": "extensions/interactions/InteractiveMap/InteractiveMap.py"}], "after_files": [{"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Configuration for JavaScript library dependencies.\"\"\"\n\n\n# A dict mapping dependency ids to the Angular module names they\n# should insert when the Angular app is first initialized.\nDEPENDENCIES_TO_ANGULAR_MODULES_DICT = {\n 'codemirror': ['ui.codemirror'],\n 'ui_leaflet': ['ui-leaflet'],\n 'guppy': [],\n 'logic_proof': [],\n 'math_expressions': [],\n 'midijs': [],\n 'pencilcode': [],\n 'skulpt': [],\n}\n", "path": "extensions/dependencies/dependencies_config.py"}, {"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, softwar\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Python configuration for InteractiveMap interaction.\"\"\"\n\nfrom extensions.interactions import base\n\n\nclass InteractiveMap(base.BaseInteraction):\n \"\"\"Interaction for pinpointing a location on a map.\"\"\"\n\n name = 'World Map'\n description = 'Allows learners to specify a position on a world map.'\n display_mode = base.DISPLAY_MODE_SUPPLEMENTAL\n is_trainable = False\n _dependency_ids = ['ui_leaflet']\n answer_type = 'CoordTwoDim'\n instructions = 'Click on the map'\n narrow_instructions = 'View map'\n needs_summary = True\n # There needs to be a way to pass marker location so that an answer can be\n # conveyed meaningfully to the learner. 
Once this issue is fixed,\n # InteractiveMap interaction can be supported by the solution feature.\n can_have_solution = False\n show_generic_submit_button = False\n\n _customization_arg_specs = [{\n 'name': 'latitude',\n 'description': 'Starting center latitude (-90 to 90)',\n 'schema': {\n 'type': 'float',\n 'validators': [{\n 'id': 'is_at_least',\n 'min_value': -90.0,\n }, {\n 'id': 'is_at_most',\n 'max_value': 90.0,\n }]\n },\n 'default_value': 0.0,\n }, {\n 'name': 'longitude',\n 'description': 'Starting center longitude (-180 to 180)',\n 'schema': {\n 'type': 'float',\n 'validators': [{\n 'id': 'is_at_least',\n 'min_value': -180.0,\n }, {\n 'id': 'is_at_most',\n 'max_value': 180.0,\n }]\n },\n 'default_value': 0.0,\n }, {\n 'name': 'zoom',\n 'description': 'Starting zoom level (0 shows the entire earth)',\n 'schema': {\n 'type': 'float',\n },\n 'default_value': 0.0,\n }]\n\n _answer_visualization_specs = [{\n # Table with answer counts for top N answers.\n 'id': 'FrequencyTable',\n 'options': {\n 'column_headers': ['Answer', 'Count'],\n 'title': 'Top 10 answers',\n },\n 'calculation_id': 'Top10AnswerFrequencies',\n 'addressed_info_is_supported': True,\n }]\n", "path": "extensions/interactions/InteractiveMap/InteractiveMap.py"}]} | 1,527 | 276 |
gh_patches_debug_34246 | rasdani/github-patches | git_diff | uccser__cs-unplugged-318 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support multiple page resources
Currently the create image function for a resource returns a single image. Instead it should return a list of images, which would allow multiple page resources.
For example, for 4 pages of a single page resource the content would be:
```
Image output: [A]
Final document: A, A, A, A
```
For 4 pages of a three page resource the content would be:
```
Image output: [A, B, C], [A, B, C], [A, B, C], [A, B, C]
Final document: A, B, C, A, B, C, A, B, C, A, B, C
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `csunplugged/resources/views/generate_resource_pdf.py`
Content:
```
1 """Module for generating custom resource PDFs."""
2
3 from django.http import HttpResponse
4 from django.template.loader import render_to_string
5 from django.contrib.staticfiles import finders
6 from django.conf import settings
7 from PIL import Image
8 from io import BytesIO
9 import importlib
10 import base64
11
12 RESPONSE_CONTENT_DISPOSITION = 'attachment; filename="{filename}.pdf"'
13 MM_TO_PIXEL_RATIO = 3.78
14
15
16 def generate_resource_pdf(request, resource, module_path):
17 """Return a response containing a generated PDF resource.
18
19 Args:
20 request: HTTP request object
21 resource: Object of resource data.
22 module_path: Path to module for generating resource.
23
24 Returns:
25 HTTP response containing generated resource PDF.
26 """
27 # TODO: Weasyprint handling in production
28 import environ
29 env = environ.Env(
30 DJANGO_PRODUCTION=(bool),
31 )
32 if env("DJANGO_PRODUCTION"):
33 return HttpResponse("<html><body>PDF generation is currently not supported in production.</body></html>")
34 else:
35 from weasyprint import HTML, CSS
36 context = dict()
37 get_request = request.GET
38 context["paper_size"] = get_request["paper_size"]
39 context["resource"] = resource
40 context["header_text"] = get_request["header_text"]
41
42 resource_image_generator = importlib.import_module(module_path)
43 filename = "{} ({})".format(resource.name, resource_image_generator.subtitle(get_request, resource))
44 context["filename"] = filename
45
46 num_copies = range(0, int(get_request["copies"]))
47 context["resource_images"] = []
48 for copy in num_copies:
49 context["resource_images"].append(
50 generate_resource_image(get_request, resource, module_path)
51 )
52
53 pdf_html = render_to_string("resources/base-resource-pdf.html", context)
54 html = HTML(string=pdf_html, base_url=settings.STATIC_ROOT)
55 css_file = finders.find("css/print-resource-pdf.css")
56 css_string = open(css_file, encoding="UTF-8").read()
57 base_css = CSS(string=css_string)
58 pdf_file = html.write_pdf(stylesheets=[base_css])
59
60 response = HttpResponse(pdf_file, content_type="application/pdf")
61 response["Content-Disposition"] = RESPONSE_CONTENT_DISPOSITION.format(filename=filename)
62 return response
63
64
65 def generate_resource_image(get_request, resource, module_path):
66 """Retrieve image from resource generator and resize to size.
67
68 Args:
69 get_request: HTTP request object
70 resource: Object of resource data.
71 module_path: Path to module for generating resource.
72
73 Returns:
74 Base64 string of a generated resource image.
75 """
76 # Get image from resource image creator
77 resource_image_generator = importlib.import_module(module_path)
78 image = resource_image_generator.resource_image(get_request, resource)
79
80 # Resize image to reduce file size
81 if get_request["paper_size"] == "a4":
82 max_pixel_height = 267 * MM_TO_PIXEL_RATIO
83 elif get_request["paper_size"] == "letter":
84 max_pixel_height = 249 * MM_TO_PIXEL_RATIO
85 (width, height) = image.size
86 if height > max_pixel_height:
87 ratio = max_pixel_height / height
88 width *= ratio
89 height *= ratio
90 image = image.resize((int(width), int(height)), Image.ANTIALIAS)
91
92 # Save image to buffer
93 image_buffer = BytesIO()
94 image.save(image_buffer, format="PNG")
95
96 # Return base64 of image
97 return base64.b64encode(image_buffer.getvalue())
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/csunplugged/resources/views/generate_resource_pdf.py b/csunplugged/resources/views/generate_resource_pdf.py
--- a/csunplugged/resources/views/generate_resource_pdf.py
+++ b/csunplugged/resources/views/generate_resource_pdf.py
@@ -63,7 +63,9 @@
def generate_resource_image(get_request, resource, module_path):
- """Retrieve image from resource generator and resize to size.
+ """Retrieve image(s) for one copy of resource from resource generator.
+
+ Images are resized to size.
Args:
get_request: HTTP request object
@@ -71,27 +73,33 @@
module_path: Path to module for generating resource.
Returns:
- Base64 string of a generated resource image.
+ List of Base64 strings of a generated resource images for one copy.
"""
- # Get image from resource image creator
+ # Get images from resource image creator
resource_image_generator = importlib.import_module(module_path)
- image = resource_image_generator.resource_image(get_request, resource)
+ raw_images = resource_image_generator.resource_image(get_request, resource)
+ if not isinstance(raw_images, list):
+ raw_images = [raw_images]
- # Resize image to reduce file size
+ # Resize images to reduce file size
if get_request["paper_size"] == "a4":
max_pixel_height = 267 * MM_TO_PIXEL_RATIO
elif get_request["paper_size"] == "letter":
max_pixel_height = 249 * MM_TO_PIXEL_RATIO
- (width, height) = image.size
- if height > max_pixel_height:
- ratio = max_pixel_height / height
- width *= ratio
- height *= ratio
- image = image.resize((int(width), int(height)), Image.ANTIALIAS)
-
- # Save image to buffer
- image_buffer = BytesIO()
- image.save(image_buffer, format="PNG")
-
- # Return base64 of image
- return base64.b64encode(image_buffer.getvalue())
+
+ images = []
+ for image in raw_images:
+ (width, height) = image.size
+ if height > max_pixel_height:
+ ratio = max_pixel_height / height
+ width *= ratio
+ height *= ratio
+ image = image.resize((int(width), int(height)), Image.ANTIALIAS)
+
+ # Save image to buffer
+ image_buffer = BytesIO()
+ image.save(image_buffer, format="PNG")
+ # Add base64 of image to list of images
+ images.append(base64.b64encode(image_buffer.getvalue()))
+
+ return images
| {"golden_diff": "diff --git a/csunplugged/resources/views/generate_resource_pdf.py b/csunplugged/resources/views/generate_resource_pdf.py\n--- a/csunplugged/resources/views/generate_resource_pdf.py\n+++ b/csunplugged/resources/views/generate_resource_pdf.py\n@@ -63,7 +63,9 @@\n \n \n def generate_resource_image(get_request, resource, module_path):\n- \"\"\"Retrieve image from resource generator and resize to size.\n+ \"\"\"Retrieve image(s) for one copy of resource from resource generator.\n+\n+ Images are resized to size.\n \n Args:\n get_request: HTTP request object\n@@ -71,27 +73,33 @@\n module_path: Path to module for generating resource.\n \n Returns:\n- Base64 string of a generated resource image.\n+ List of Base64 strings of a generated resource images for one copy.\n \"\"\"\n- # Get image from resource image creator\n+ # Get images from resource image creator\n resource_image_generator = importlib.import_module(module_path)\n- image = resource_image_generator.resource_image(get_request, resource)\n+ raw_images = resource_image_generator.resource_image(get_request, resource)\n+ if not isinstance(raw_images, list):\n+ raw_images = [raw_images]\n \n- # Resize image to reduce file size\n+ # Resize images to reduce file size\n if get_request[\"paper_size\"] == \"a4\":\n max_pixel_height = 267 * MM_TO_PIXEL_RATIO\n elif get_request[\"paper_size\"] == \"letter\":\n max_pixel_height = 249 * MM_TO_PIXEL_RATIO\n- (width, height) = image.size\n- if height > max_pixel_height:\n- ratio = max_pixel_height / height\n- width *= ratio\n- height *= ratio\n- image = image.resize((int(width), int(height)), Image.ANTIALIAS)\n-\n- # Save image to buffer\n- image_buffer = BytesIO()\n- image.save(image_buffer, format=\"PNG\")\n-\n- # Return base64 of image\n- return base64.b64encode(image_buffer.getvalue())\n+\n+ images = []\n+ for image in raw_images:\n+ (width, height) = image.size\n+ if height > max_pixel_height:\n+ ratio = max_pixel_height / height\n+ width *= ratio\n+ height *= ratio\n+ image = image.resize((int(width), int(height)), Image.ANTIALIAS)\n+\n+ # Save image to buffer\n+ image_buffer = BytesIO()\n+ image.save(image_buffer, format=\"PNG\")\n+ # Add base64 of image to list of images\n+ images.append(base64.b64encode(image_buffer.getvalue()))\n+\n+ return images\n", "issue": "Support multiple page resources\nCurrently the create image function for a resource return a single image. 
Instead it should return a list of images, which would allow multiple page resources.\r\n\r\nFor example, for 4 pages of a single page resource the content would be:\r\n\r\n```\r\nImage output: [A]\r\nFinal document: A, A, A, A\r\n```\r\n\r\nFor 4 pages of a three page resource the content would be:\r\n\r\n```\r\nImage output: [A, B, C], [A, B, C], [A, B, C], [A, B, C] \r\nFinal document: A, B, C, A, B, C, A, B, C, A, B, C\r\n```\n", "before_files": [{"content": "\"\"\"Module for generating custom resource PDFs.\"\"\"\n\nfrom django.http import HttpResponse\nfrom django.template.loader import render_to_string\nfrom django.contrib.staticfiles import finders\nfrom django.conf import settings\nfrom PIL import Image\nfrom io import BytesIO\nimport importlib\nimport base64\n\nRESPONSE_CONTENT_DISPOSITION = 'attachment; filename=\"{filename}.pdf\"'\nMM_TO_PIXEL_RATIO = 3.78\n\n\ndef generate_resource_pdf(request, resource, module_path):\n \"\"\"Return a response containing a generated PDF resource.\n\n Args:\n request: HTTP request object\n resource: Object of resource data.\n module_path: Path to module for generating resource.\n\n Returns:\n HTTP response containing generated resource PDF.\n \"\"\"\n # TODO: Weasyprint handling in production\n import environ\n env = environ.Env(\n DJANGO_PRODUCTION=(bool),\n )\n if env(\"DJANGO_PRODUCTION\"):\n return HttpResponse(\"<html><body>PDF generation is currently not supported in production.</body></html>\")\n else:\n from weasyprint import HTML, CSS\n context = dict()\n get_request = request.GET\n context[\"paper_size\"] = get_request[\"paper_size\"]\n context[\"resource\"] = resource\n context[\"header_text\"] = get_request[\"header_text\"]\n\n resource_image_generator = importlib.import_module(module_path)\n filename = \"{} ({})\".format(resource.name, resource_image_generator.subtitle(get_request, resource))\n context[\"filename\"] = filename\n\n num_copies = range(0, int(get_request[\"copies\"]))\n context[\"resource_images\"] = []\n for copy in num_copies:\n context[\"resource_images\"].append(\n generate_resource_image(get_request, resource, module_path)\n )\n\n pdf_html = render_to_string(\"resources/base-resource-pdf.html\", context)\n html = HTML(string=pdf_html, base_url=settings.STATIC_ROOT)\n css_file = finders.find(\"css/print-resource-pdf.css\")\n css_string = open(css_file, encoding=\"UTF-8\").read()\n base_css = CSS(string=css_string)\n pdf_file = html.write_pdf(stylesheets=[base_css])\n\n response = HttpResponse(pdf_file, content_type=\"application/pdf\")\n response[\"Content-Disposition\"] = RESPONSE_CONTENT_DISPOSITION.format(filename=filename)\n return response\n\n\ndef generate_resource_image(get_request, resource, module_path):\n \"\"\"Retrieve image from resource generator and resize to size.\n\n Args:\n get_request: HTTP request object\n resource: Object of resource data.\n module_path: Path to module for generating resource.\n\n Returns:\n Base64 string of a generated resource image.\n \"\"\"\n # Get image from resource image creator\n resource_image_generator = importlib.import_module(module_path)\n image = resource_image_generator.resource_image(get_request, resource)\n\n # Resize image to reduce file size\n if get_request[\"paper_size\"] == \"a4\":\n max_pixel_height = 267 * MM_TO_PIXEL_RATIO\n elif get_request[\"paper_size\"] == \"letter\":\n max_pixel_height = 249 * MM_TO_PIXEL_RATIO\n (width, height) = image.size\n if height > max_pixel_height:\n ratio = max_pixel_height / height\n width *= ratio\n height *= ratio\n 
image = image.resize((int(width), int(height)), Image.ANTIALIAS)\n\n # Save image to buffer\n image_buffer = BytesIO()\n image.save(image_buffer, format=\"PNG\")\n\n # Return base64 of image\n return base64.b64encode(image_buffer.getvalue())\n", "path": "csunplugged/resources/views/generate_resource_pdf.py"}], "after_files": [{"content": "\"\"\"Module for generating custom resource PDFs.\"\"\"\n\nfrom django.http import HttpResponse\nfrom django.template.loader import render_to_string\nfrom django.contrib.staticfiles import finders\nfrom django.conf import settings\nfrom PIL import Image\nfrom io import BytesIO\nimport importlib\nimport base64\n\nRESPONSE_CONTENT_DISPOSITION = 'attachment; filename=\"{filename}.pdf\"'\nMM_TO_PIXEL_RATIO = 3.78\n\n\ndef generate_resource_pdf(request, resource, module_path):\n \"\"\"Return a response containing a generated PDF resource.\n\n Args:\n request: HTTP request object\n resource: Object of resource data.\n module_path: Path to module for generating resource.\n\n Returns:\n HTTP response containing generated resource PDF.\n \"\"\"\n # TODO: Weasyprint handling in production\n import environ\n env = environ.Env(\n DJANGO_PRODUCTION=(bool),\n )\n if env(\"DJANGO_PRODUCTION\"):\n return HttpResponse(\"<html><body>PDF generation is currently not supported in production.</body></html>\")\n else:\n from weasyprint import HTML, CSS\n context = dict()\n get_request = request.GET\n context[\"paper_size\"] = get_request[\"paper_size\"]\n context[\"resource\"] = resource\n context[\"header_text\"] = get_request[\"header_text\"]\n\n resource_image_generator = importlib.import_module(module_path)\n filename = \"{} ({})\".format(resource.name, resource_image_generator.subtitle(get_request, resource))\n context[\"filename\"] = filename\n\n num_copies = range(0, int(get_request[\"copies\"]))\n context[\"resource_images\"] = []\n for copy in num_copies:\n context[\"resource_images\"].append(\n generate_resource_image(get_request, resource, module_path)\n )\n\n pdf_html = render_to_string(\"resources/base-resource-pdf.html\", context)\n html = HTML(string=pdf_html, base_url=settings.STATIC_ROOT)\n css_file = finders.find(\"css/print-resource-pdf.css\")\n css_string = open(css_file, encoding=\"UTF-8\").read()\n base_css = CSS(string=css_string)\n pdf_file = html.write_pdf(stylesheets=[base_css])\n\n response = HttpResponse(pdf_file, content_type=\"application/pdf\")\n response[\"Content-Disposition\"] = RESPONSE_CONTENT_DISPOSITION.format(filename=filename)\n return response\n\n\ndef generate_resource_image(get_request, resource, module_path):\n \"\"\"Retrieve image(s) for one copy of resource from resource generator.\n\n Images are resized to size.\n\n Args:\n get_request: HTTP request object\n resource: Object of resource data.\n module_path: Path to module for generating resource.\n\n Returns:\n List of Base64 strings of a generated resource images for one copy.\n \"\"\"\n # Get images from resource image creator\n resource_image_generator = importlib.import_module(module_path)\n raw_images = resource_image_generator.resource_image(get_request, resource)\n if not isinstance(raw_images, list):\n raw_images = [raw_images]\n\n # Resize images to reduce file size\n if get_request[\"paper_size\"] == \"a4\":\n max_pixel_height = 267 * MM_TO_PIXEL_RATIO\n elif get_request[\"paper_size\"] == \"letter\":\n max_pixel_height = 249 * MM_TO_PIXEL_RATIO\n\n images = []\n for image in raw_images:\n (width, height) = image.size\n if height > max_pixel_height:\n ratio = 
max_pixel_height / height\n width *= ratio\n height *= ratio\n image = image.resize((int(width), int(height)), Image.ANTIALIAS)\n\n # Save image to buffer\n image_buffer = BytesIO()\n image.save(image_buffer, format=\"PNG\")\n # Add base64 of image to list of images\n images.append(base64.b64encode(image_buffer.getvalue()))\n\n return images\n", "path": "csunplugged/resources/views/generate_resource_pdf.py"}]} | 1,366 | 599 |
gh_patches_debug_32375 | rasdani/github-patches | git_diff | getsentry__sentry-python-897 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Crash in pure_eval
This happened while we were experiencing a DB outage:
```
File "/usr/local/lib/python3.8/dist-packages/asyncpg/connection.py", line 443, in fetch
return await self._execute(query, args, 0, timeout)
File "/usr/local/lib/python3.8/dist-packages/asyncpg/connection.py", line 1445, in _execute
result, _ = await self.__execute(
File "/server/athenian/api/db.py", line 191, in _asyncpg_execute
result = await self._execute_original(query, args, limit, timeout, return_status)
File "/usr/local/lib/python3.8/dist-packages/asyncpg/connection.py", line 1454, in __execute
return await self._do_execute(query, executor, timeout)
File "/usr/local/lib/python3.8/dist-packages/asyncpg/connection.py", line 1476, in _do_execute
result = await executor(stmt, None)
File "asyncpg/protocol/protocol.pyx", line 196, in bind_execute
return await waiter
asyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/sentry_sdk/scope.py", line 353, in apply_to_event
new_event = event_processor(event, hint)
File "/usr/local/lib/python3.8/dist-packages/sentry_sdk/integrations/pure_eval.py", line 79, in add_executing_info
pure_eval_frame(tb.tb_frame) or sentry_frame["vars"]
File "/usr/local/lib/python3.8/dist-packages/sentry_sdk/integrations/pure_eval.py", line 128, in pure_eval_frame
expressions.sort(key=closeness, reverse=True)
File "/usr/local/lib/python3.8/dist-packages/sentry_sdk/integrations/pure_eval.py", line 113, in closeness
nodes_before_stmt = [
File "/usr/local/lib/python3.8/dist-packages/sentry_sdk/integrations/pure_eval.py", line 114, in <listcomp>
node for node in nodes if node.first_token.startpos < stmt.last_token.endpos
AttributeError: 'Name' object has no attribute 'first_token'
```
--- END ISSUE ---
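The `AttributeError` above suggests that at least one node reaching the sort was never annotated by `asttokens` (only annotated nodes carry `first_token`/`last_token`), while `lineno`/`col_offset` exist on every parsed node. A minimal, hypothetical sketch of that distinction (not taken from the report):

```python
# Hypothetical illustration: plain ast nodes always expose lineno/col_offset,
# but first_token only exists after asttokens annotates that exact tree.
import ast

tree = ast.parse("value = some_name")
name_node = tree.body[0].value                 # an ast.Name node
print(name_node.lineno, name_node.col_offset)  # position info is always present
print(hasattr(name_node, "first_token"))       # False - reading it would raise AttributeError
```

This is consistent with the accepted fix further down, which sorts on `(lineno, col_offset)` instead of token offsets.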
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/pure_eval.py`
Content:
```
1 from __future__ import absolute_import
2
3 import ast
4
5 from sentry_sdk import Hub, serializer
6 from sentry_sdk._types import MYPY
7 from sentry_sdk.integrations import Integration, DidNotEnable
8 from sentry_sdk.scope import add_global_event_processor
9 from sentry_sdk.utils import walk_exception_chain, iter_stacks
10
11 if MYPY:
12 from typing import Optional, Dict, Any, Tuple, List
13 from types import FrameType
14
15 from sentry_sdk._types import Event, Hint
16
17 try:
18 import executing
19 except ImportError:
20 raise DidNotEnable("executing is not installed")
21
22 try:
23 import pure_eval
24 except ImportError:
25 raise DidNotEnable("pure_eval is not installed")
26
27 try:
28 # Used implicitly, just testing it's available
29 import asttokens # noqa
30 except ImportError:
31 raise DidNotEnable("asttokens is not installed")
32
33
34 class PureEvalIntegration(Integration):
35 identifier = "pure_eval"
36
37 @staticmethod
38 def setup_once():
39 # type: () -> None
40
41 @add_global_event_processor
42 def add_executing_info(event, hint):
43 # type: (Event, Optional[Hint]) -> Optional[Event]
44 if Hub.current.get_integration(PureEvalIntegration) is None:
45 return event
46
47 if hint is None:
48 return event
49
50 exc_info = hint.get("exc_info", None)
51
52 if exc_info is None:
53 return event
54
55 exception = event.get("exception", None)
56
57 if exception is None:
58 return event
59
60 values = exception.get("values", None)
61
62 if values is None:
63 return event
64
65 for exception, (_exc_type, _exc_value, exc_tb) in zip(
66 reversed(values), walk_exception_chain(exc_info)
67 ):
68 sentry_frames = [
69 frame
70 for frame in exception.get("stacktrace", {}).get("frames", [])
71 if frame.get("function")
72 ]
73 tbs = list(iter_stacks(exc_tb))
74 if len(sentry_frames) != len(tbs):
75 continue
76
77 for sentry_frame, tb in zip(sentry_frames, tbs):
78 sentry_frame["vars"] = (
79 pure_eval_frame(tb.tb_frame) or sentry_frame["vars"]
80 )
81 return event
82
83
84 def pure_eval_frame(frame):
85 # type: (FrameType) -> Dict[str, Any]
86 source = executing.Source.for_frame(frame)
87 if not source.tree:
88 return {}
89
90 statements = source.statements_at_line(frame.f_lineno)
91 if not statements:
92 return {}
93
94 scope = stmt = list(statements)[0]
95 while True:
96 # Get the parent first in case the original statement is already
97 # a function definition, e.g. if we're calling a decorator
98 # In that case we still want the surrounding scope, not that function
99 scope = scope.parent
100 if isinstance(scope, (ast.FunctionDef, ast.ClassDef, ast.Module)):
101 break
102
103 evaluator = pure_eval.Evaluator.from_frame(frame)
104 expressions = evaluator.interesting_expressions_grouped(scope)
105
106 def closeness(expression):
107 # type: (Tuple[List[Any], Any]) -> int
108 # Prioritise expressions with a node closer to the statement executed
109 # without being after that statement
110 # A higher return value is better - the expression will appear
111 # earlier in the list of values and is less likely to be trimmed
112 nodes, _value = expression
113 nodes_before_stmt = [
114 node for node in nodes if node.first_token.startpos < stmt.last_token.endpos
115 ]
116 if nodes_before_stmt:
117 # The position of the last node before or in the statement
118 return max(node.first_token.startpos for node in nodes_before_stmt)
119 else:
120 # The position of the first node after the statement
121 # Negative means it's always lower priority than nodes that come before
122 # Less negative means closer to the statement and higher priority
123 return -min(node.first_token.startpos for node in nodes)
124
125 # This adds the first_token and last_token attributes to nodes
126 atok = source.asttokens()
127
128 expressions.sort(key=closeness, reverse=True)
129 return {
130 atok.get_text(nodes[0]): value
131 for nodes, value in expressions[: serializer.MAX_DATABAG_BREADTH]
132 }
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sentry_sdk/integrations/pure_eval.py b/sentry_sdk/integrations/pure_eval.py
--- a/sentry_sdk/integrations/pure_eval.py
+++ b/sentry_sdk/integrations/pure_eval.py
@@ -104,23 +104,29 @@
expressions = evaluator.interesting_expressions_grouped(scope)
def closeness(expression):
- # type: (Tuple[List[Any], Any]) -> int
+ # type: (Tuple[List[Any], Any]) -> Tuple[int, int]
# Prioritise expressions with a node closer to the statement executed
# without being after that statement
# A higher return value is better - the expression will appear
# earlier in the list of values and is less likely to be trimmed
nodes, _value = expression
+
+ def start(n):
+ # type: (ast.expr) -> Tuple[int, int]
+ return (n.lineno, n.col_offset)
+
nodes_before_stmt = [
- node for node in nodes if node.first_token.startpos < stmt.last_token.endpos
+ node for node in nodes if start(node) < stmt.last_token.end
]
if nodes_before_stmt:
# The position of the last node before or in the statement
- return max(node.first_token.startpos for node in nodes_before_stmt)
+ return max(start(node) for node in nodes_before_stmt)
else:
# The position of the first node after the statement
# Negative means it's always lower priority than nodes that come before
# Less negative means closer to the statement and higher priority
- return -min(node.first_token.startpos for node in nodes)
+ lineno, col_offset = min(start(node) for node in nodes)
+ return (-lineno, -col_offset)
# This adds the first_token and last_token attributes to nodes
atok = source.asttokens()
| {"golden_diff": "diff --git a/sentry_sdk/integrations/pure_eval.py b/sentry_sdk/integrations/pure_eval.py\n--- a/sentry_sdk/integrations/pure_eval.py\n+++ b/sentry_sdk/integrations/pure_eval.py\n@@ -104,23 +104,29 @@\n expressions = evaluator.interesting_expressions_grouped(scope)\n \n def closeness(expression):\n- # type: (Tuple[List[Any], Any]) -> int\n+ # type: (Tuple[List[Any], Any]) -> Tuple[int, int]\n # Prioritise expressions with a node closer to the statement executed\n # without being after that statement\n # A higher return value is better - the expression will appear\n # earlier in the list of values and is less likely to be trimmed\n nodes, _value = expression\n+\n+ def start(n):\n+ # type: (ast.expr) -> Tuple[int, int]\n+ return (n.lineno, n.col_offset)\n+\n nodes_before_stmt = [\n- node for node in nodes if node.first_token.startpos < stmt.last_token.endpos\n+ node for node in nodes if start(node) < stmt.last_token.end\n ]\n if nodes_before_stmt:\n # The position of the last node before or in the statement\n- return max(node.first_token.startpos for node in nodes_before_stmt)\n+ return max(start(node) for node in nodes_before_stmt)\n else:\n # The position of the first node after the statement\n # Negative means it's always lower priority than nodes that come before\n # Less negative means closer to the statement and higher priority\n- return -min(node.first_token.startpos for node in nodes)\n+ lineno, col_offset = min(start(node) for node in nodes)\n+ return (-lineno, -col_offset)\n \n # This adds the first_token and last_token attributes to nodes\n atok = source.asttokens()\n", "issue": "Crash in pure_eval\nThis happened while we were experiencing a DB outage:\r\n```\r\n File \"/usr/local/lib/python3.8/dist-packages/asyncpg/connection.py\", line 443, in fetch\r\n return await self._execute(query, args, 0, timeout)\r\n File \"/usr/local/lib/python3.8/dist-packages/asyncpg/connection.py\", line 1445, in _execute\r\n result, _ = await self.__execute(\r\n File \"/server/athenian/api/db.py\", line 191, in _asyncpg_execute\r\n result = await self._execute_original(query, args, limit, timeout, return_status)\r\n File \"/usr/local/lib/python3.8/dist-packages/asyncpg/connection.py\", line 1454, in __execute\r\n return await self._do_execute(query, executor, timeout)\r\n File \"/usr/local/lib/python3.8/dist-packages/asyncpg/connection.py\", line 1476, in _do_execute\r\n result = await executor(stmt, None)\r\n File \"asyncpg/protocol/protocol.pyx\", line 196, in bind_execute\r\n return await waiter\r\nasyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/dist-packages/sentry_sdk/scope.py\", line 353, in apply_to_event\r\n new_event = event_processor(event, hint)\r\n File \"/usr/local/lib/python3.8/dist-packages/sentry_sdk/integrations/pure_eval.py\", line 79, in add_executing_info\r\n pure_eval_frame(tb.tb_frame) or sentry_frame[\"vars\"]\r\n File \"/usr/local/lib/python3.8/dist-packages/sentry_sdk/integrations/pure_eval.py\", line 128, in pure_eval_frame\r\n expressions.sort(key=closeness, reverse=True)\r\n File \"/usr/local/lib/python3.8/dist-packages/sentry_sdk/integrations/pure_eval.py\", line 113, in closeness\r\n nodes_before_stmt = [\r\n File \"/usr/local/lib/python3.8/dist-packages/sentry_sdk/integrations/pure_eval.py\", line 114, in <listcomp>\r\n node for node in nodes if 
node.first_token.startpos < stmt.last_token.endpos\r\nAttributeError: 'Name' object has no attribute 'first_token'\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport ast\n\nfrom sentry_sdk import Hub, serializer\nfrom sentry_sdk._types import MYPY\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk.scope import add_global_event_processor\nfrom sentry_sdk.utils import walk_exception_chain, iter_stacks\n\nif MYPY:\n from typing import Optional, Dict, Any, Tuple, List\n from types import FrameType\n\n from sentry_sdk._types import Event, Hint\n\ntry:\n import executing\nexcept ImportError:\n raise DidNotEnable(\"executing is not installed\")\n\ntry:\n import pure_eval\nexcept ImportError:\n raise DidNotEnable(\"pure_eval is not installed\")\n\ntry:\n # Used implicitly, just testing it's available\n import asttokens # noqa\nexcept ImportError:\n raise DidNotEnable(\"asttokens is not installed\")\n\n\nclass PureEvalIntegration(Integration):\n identifier = \"pure_eval\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n\n @add_global_event_processor\n def add_executing_info(event, hint):\n # type: (Event, Optional[Hint]) -> Optional[Event]\n if Hub.current.get_integration(PureEvalIntegration) is None:\n return event\n\n if hint is None:\n return event\n\n exc_info = hint.get(\"exc_info\", None)\n\n if exc_info is None:\n return event\n\n exception = event.get(\"exception\", None)\n\n if exception is None:\n return event\n\n values = exception.get(\"values\", None)\n\n if values is None:\n return event\n\n for exception, (_exc_type, _exc_value, exc_tb) in zip(\n reversed(values), walk_exception_chain(exc_info)\n ):\n sentry_frames = [\n frame\n for frame in exception.get(\"stacktrace\", {}).get(\"frames\", [])\n if frame.get(\"function\")\n ]\n tbs = list(iter_stacks(exc_tb))\n if len(sentry_frames) != len(tbs):\n continue\n\n for sentry_frame, tb in zip(sentry_frames, tbs):\n sentry_frame[\"vars\"] = (\n pure_eval_frame(tb.tb_frame) or sentry_frame[\"vars\"]\n )\n return event\n\n\ndef pure_eval_frame(frame):\n # type: (FrameType) -> Dict[str, Any]\n source = executing.Source.for_frame(frame)\n if not source.tree:\n return {}\n\n statements = source.statements_at_line(frame.f_lineno)\n if not statements:\n return {}\n\n scope = stmt = list(statements)[0]\n while True:\n # Get the parent first in case the original statement is already\n # a function definition, e.g. 
if we're calling a decorator\n # In that case we still want the surrounding scope, not that function\n scope = scope.parent\n if isinstance(scope, (ast.FunctionDef, ast.ClassDef, ast.Module)):\n break\n\n evaluator = pure_eval.Evaluator.from_frame(frame)\n expressions = evaluator.interesting_expressions_grouped(scope)\n\n def closeness(expression):\n # type: (Tuple[List[Any], Any]) -> int\n # Prioritise expressions with a node closer to the statement executed\n # without being after that statement\n # A higher return value is better - the expression will appear\n # earlier in the list of values and is less likely to be trimmed\n nodes, _value = expression\n nodes_before_stmt = [\n node for node in nodes if node.first_token.startpos < stmt.last_token.endpos\n ]\n if nodes_before_stmt:\n # The position of the last node before or in the statement\n return max(node.first_token.startpos for node in nodes_before_stmt)\n else:\n # The position of the first node after the statement\n # Negative means it's always lower priority than nodes that come before\n # Less negative means closer to the statement and higher priority\n return -min(node.first_token.startpos for node in nodes)\n\n # This adds the first_token and last_token attributes to nodes\n atok = source.asttokens()\n\n expressions.sort(key=closeness, reverse=True)\n return {\n atok.get_text(nodes[0]): value\n for nodes, value in expressions[: serializer.MAX_DATABAG_BREADTH]\n }\n", "path": "sentry_sdk/integrations/pure_eval.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport ast\n\nfrom sentry_sdk import Hub, serializer\nfrom sentry_sdk._types import MYPY\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk.scope import add_global_event_processor\nfrom sentry_sdk.utils import walk_exception_chain, iter_stacks\n\nif MYPY:\n from typing import Optional, Dict, Any, Tuple, List\n from types import FrameType\n\n from sentry_sdk._types import Event, Hint\n\ntry:\n import executing\nexcept ImportError:\n raise DidNotEnable(\"executing is not installed\")\n\ntry:\n import pure_eval\nexcept ImportError:\n raise DidNotEnable(\"pure_eval is not installed\")\n\ntry:\n # Used implicitly, just testing it's available\n import asttokens # noqa\nexcept ImportError:\n raise DidNotEnable(\"asttokens is not installed\")\n\n\nclass PureEvalIntegration(Integration):\n identifier = \"pure_eval\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n\n @add_global_event_processor\n def add_executing_info(event, hint):\n # type: (Event, Optional[Hint]) -> Optional[Event]\n if Hub.current.get_integration(PureEvalIntegration) is None:\n return event\n\n if hint is None:\n return event\n\n exc_info = hint.get(\"exc_info\", None)\n\n if exc_info is None:\n return event\n\n exception = event.get(\"exception\", None)\n\n if exception is None:\n return event\n\n values = exception.get(\"values\", None)\n\n if values is None:\n return event\n\n for exception, (_exc_type, _exc_value, exc_tb) in zip(\n reversed(values), walk_exception_chain(exc_info)\n ):\n sentry_frames = [\n frame\n for frame in exception.get(\"stacktrace\", {}).get(\"frames\", [])\n if frame.get(\"function\")\n ]\n tbs = list(iter_stacks(exc_tb))\n if len(sentry_frames) != len(tbs):\n continue\n\n for sentry_frame, tb in zip(sentry_frames, tbs):\n sentry_frame[\"vars\"] = (\n pure_eval_frame(tb.tb_frame) or sentry_frame[\"vars\"]\n )\n return event\n\n\ndef pure_eval_frame(frame):\n # type: (FrameType) -> Dict[str, Any]\n source = 
executing.Source.for_frame(frame)\n if not source.tree:\n return {}\n\n statements = source.statements_at_line(frame.f_lineno)\n if not statements:\n return {}\n\n scope = stmt = list(statements)[0]\n while True:\n # Get the parent first in case the original statement is already\n # a function definition, e.g. if we're calling a decorator\n # In that case we still want the surrounding scope, not that function\n scope = scope.parent\n if isinstance(scope, (ast.FunctionDef, ast.ClassDef, ast.Module)):\n break\n\n evaluator = pure_eval.Evaluator.from_frame(frame)\n expressions = evaluator.interesting_expressions_grouped(scope)\n\n def closeness(expression):\n # type: (Tuple[List[Any], Any]) -> Tuple[int, int]\n # Prioritise expressions with a node closer to the statement executed\n # without being after that statement\n # A higher return value is better - the expression will appear\n # earlier in the list of values and is less likely to be trimmed\n nodes, _value = expression\n\n def start(n):\n # type: (ast.expr) -> Tuple[int, int]\n return (n.lineno, n.col_offset)\n\n nodes_before_stmt = [\n node for node in nodes if start(node) < stmt.last_token.end\n ]\n if nodes_before_stmt:\n # The position of the last node before or in the statement\n return max(start(node) for node in nodes_before_stmt)\n else:\n # The position of the first node after the statement\n # Negative means it's always lower priority than nodes that come before\n # Less negative means closer to the statement and higher priority\n lineno, col_offset = min(start(node) for node in nodes)\n return (-lineno, -col_offset)\n\n # This adds the first_token and last_token attributes to nodes\n atok = source.asttokens()\n\n expressions.sort(key=closeness, reverse=True)\n return {\n atok.get_text(nodes[0]): value\n for nodes, value in expressions[: serializer.MAX_DATABAG_BREADTH]\n }\n", "path": "sentry_sdk/integrations/pure_eval.py"}]} | 2,023 | 418 |
gh_patches_debug_34688 | rasdani/github-patches | git_diff | tensorflow__addons-271 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Automate Build Process
Currently we have no automated process for building Addons across python versions and operating systems. Going forward we'll want this process to be automated... but it may be challenging for us to start builds without access to the Google internal tooling.
We could conceivably use Travis... but if we can keep consistent CI that would be ideal.
--- END ISSUE ---
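The fix that was eventually accepted (shown further down) addresses part of this by adding a `--nightly` switch to `setup.py`, publishing the package as `tfa-nightly` with a date-stamped version. A rough, hypothetical sketch of the version stamping that switch implies; the base version string and a `python setup.py --nightly bdist_wheel` invocation are assumptions, not taken from the issue:

```python
# Hypothetical sketch of nightly version stamping; the base version is assumed.
from datetime import datetime

base_version = "0.4.0"
nightly_version = base_version + datetime.strftime(datetime.today(), "%Y%m%d")
print(nightly_version)  # e.g. "0.4.020190612" - the date is appended directly to the version
```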
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """TensorFlow Addons
16
17 TensorFlow Addons is a repository of contributions that conform to
18 well-established API patterns,but implement new functionality not available in
19 core TensorFlow.TensorFlow natively supports a large number of operators,
20 layers, metrics, losses, and optimizers. However, in a fast movingfield like
21 ML, there are many interesting new developments that cannot be integrated into
22 core TensorFlow (because their broad applicability is not yet clear, or it is
23 mostly used by a smallersubset of the community).
24 """
25
26 from __future__ import absolute_import
27 from __future__ import division
28 from __future__ import print_function
29
30 import os
31
32 from setuptools import find_packages
33 from setuptools import setup
34 from setuptools.dist import Distribution
35
36 DOCLINES = __doc__.split('\n')
37
38 version = {}
39 base_dir = os.path.dirname(os.path.abspath(__file__))
40 with open(os.path.join(base_dir, "tensorflow_addons", "version.py")) as fp:
41 # yapf: disable
42 exec(fp.read(), version)
43 # yapf: enable
44
45 REQUIRED_PACKAGES = [
46 'six >= 1.10.0',
47 ]
48
49 project_name = 'tensorflow-addons'
50
51
52 class BinaryDistribution(Distribution):
53 """This class is needed in order to create OS specific wheels."""
54
55 def has_ext_modules(self):
56 return True
57
58
59 setup(
60 name=project_name,
61 version=version['__version__'],
62 description=DOCLINES[0],
63 long_description='\n'.join(DOCLINES[2:]),
64 author='Google Inc.',
65 author_email='[email protected]',
66 packages=find_packages(),
67 install_requires=REQUIRED_PACKAGES,
68 include_package_data=True,
69 zip_safe=False,
70 distclass=BinaryDistribution,
71 classifiers=[
72 'Development Status :: 4 - Beta',
73 'Intended Audience :: Developers',
74 'Intended Audience :: Education',
75 'Intended Audience :: Science/Research',
76 'License :: OSI Approved :: Apache Software License',
77 'Programming Language :: Python :: 2.7',
78 'Programming Language :: Python :: 3.4',
79 'Programming Language :: Python :: 3.5',
80 'Programming Language :: Python :: 3.6',
81 'Programming Language :: Python :: 3.7',
82 'Topic :: Scientific/Engineering :: Mathematics',
83 'Topic :: Software Development :: Libraries :: Python Modules',
84 'Topic :: Software Development :: Libraries',
85 ],
86 license='Apache 2.0',
87 keywords='tensorflow addons machine learning',
88 )
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -17,10 +17,10 @@
TensorFlow Addons is a repository of contributions that conform to
well-established API patterns,but implement new functionality not available in
core TensorFlow.TensorFlow natively supports a large number of operators,
-layers, metrics, losses, and optimizers. However, in a fast movingfield like
+layers, metrics, losses, and optimizers. However, in a fast moving field like
ML, there are many interesting new developments that cannot be integrated into
core TensorFlow (because their broad applicability is not yet clear, or it is
-mostly used by a smallersubset of the community).
+mostly used by a smaller subset of the community).
"""
from __future__ import absolute_import
@@ -28,7 +28,9 @@
from __future__ import print_function
import os
+import sys
+from datetime import datetime
from setuptools import find_packages
from setuptools import setup
from setuptools.dist import Distribution
@@ -46,7 +48,13 @@
'six >= 1.10.0',
]
-project_name = 'tensorflow-addons'
+if '--nightly' in sys.argv:
+ project_name = 'tfa-nightly'
+ nightly_idx = sys.argv.index('--nightly')
+ sys.argv.pop(nightly_idx)
+ version['__version__'] += datetime.strftime(datetime.today(), "%Y%m%d")
+else:
+ project_name = 'tensorflow-addons'
class BinaryDistribution(Distribution):
@@ -78,7 +86,6 @@
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
- 'Programming Language :: Python :: 3.7',
'Topic :: Scientific/Engineering :: Mathematics',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: Software Development :: Libraries',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -17,10 +17,10 @@\n TensorFlow Addons is a repository of contributions that conform to\n well-established API patterns,but implement new functionality not available in\n core TensorFlow.TensorFlow natively supports a large number of operators,\n-layers, metrics, losses, and optimizers. However, in a fast movingfield like\n+layers, metrics, losses, and optimizers. However, in a fast moving field like\n ML, there are many interesting new developments that cannot be integrated into\n core TensorFlow (because their broad applicability is not yet clear, or it is\n-mostly used by a smallersubset of the community).\n+mostly used by a smaller subset of the community).\n \"\"\"\n \n from __future__ import absolute_import\n@@ -28,7 +28,9 @@\n from __future__ import print_function\n \n import os\n+import sys\n \n+from datetime import datetime\n from setuptools import find_packages\n from setuptools import setup\n from setuptools.dist import Distribution\n@@ -46,7 +48,13 @@\n 'six >= 1.10.0',\n ]\n \n-project_name = 'tensorflow-addons'\n+if '--nightly' in sys.argv:\n+ project_name = 'tfa-nightly'\n+ nightly_idx = sys.argv.index('--nightly')\n+ sys.argv.pop(nightly_idx)\n+ version['__version__'] += datetime.strftime(datetime.today(), \"%Y%m%d\")\n+else:\n+ project_name = 'tensorflow-addons'\n \n \n class BinaryDistribution(Distribution):\n@@ -78,7 +86,6 @@\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n- 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n", "issue": "Automate Build Process\nCurrently we have no automated process for building Addons across python version and operating systems. Going forward we'll want this process to be automated.. but it may be challenging for us to start builds without access to the Google internal tooling.\r\n\r\nWe could conceivably use Travis... but if we can keep consistent CI that would be ideal.\r\n\r\n\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"TensorFlow Addons \n\nTensorFlow Addons is a repository of contributions that conform to\nwell-established API patterns,but implement new functionality not available in\ncore TensorFlow.TensorFlow natively supports a large number of operators,\nlayers, metrics, losses, and optimizers. 
However, in a fast movingfield like\nML, there are many interesting new developments that cannot be integrated into\ncore TensorFlow (because their broad applicability is not yet clear, or it is\nmostly used by a smallersubset of the community).\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\n\nfrom setuptools import find_packages\nfrom setuptools import setup\nfrom setuptools.dist import Distribution\n\nDOCLINES = __doc__.split('\\n')\n\nversion = {}\nbase_dir = os.path.dirname(os.path.abspath(__file__))\nwith open(os.path.join(base_dir, \"tensorflow_addons\", \"version.py\")) as fp:\n # yapf: disable\n exec(fp.read(), version)\n # yapf: enable\n\nREQUIRED_PACKAGES = [\n 'six >= 1.10.0',\n]\n\nproject_name = 'tensorflow-addons'\n\n\nclass BinaryDistribution(Distribution):\n \"\"\"This class is needed in order to create OS specific wheels.\"\"\"\n\n def has_ext_modules(self):\n return True\n\n\nsetup(\n name=project_name,\n version=version['__version__'],\n description=DOCLINES[0],\n long_description='\\n'.join(DOCLINES[2:]),\n author='Google Inc.',\n author_email='[email protected]',\n packages=find_packages(),\n install_requires=REQUIRED_PACKAGES,\n include_package_data=True,\n zip_safe=False,\n distclass=BinaryDistribution,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n license='Apache 2.0',\n keywords='tensorflow addons machine learning',\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"TensorFlow Addons \n\nTensorFlow Addons is a repository of contributions that conform to\nwell-established API patterns,but implement new functionality not available in\ncore TensorFlow.TensorFlow natively supports a large number of operators,\nlayers, metrics, losses, and optimizers. 
However, in a fast moving field like\nML, there are many interesting new developments that cannot be integrated into\ncore TensorFlow (because their broad applicability is not yet clear, or it is\nmostly used by a smaller subset of the community).\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport sys\n\nfrom datetime import datetime\nfrom setuptools import find_packages\nfrom setuptools import setup\nfrom setuptools.dist import Distribution\n\nDOCLINES = __doc__.split('\\n')\n\nversion = {}\nbase_dir = os.path.dirname(os.path.abspath(__file__))\nwith open(os.path.join(base_dir, \"tensorflow_addons\", \"version.py\")) as fp:\n # yapf: disable\n exec(fp.read(), version)\n # yapf: enable\n\nREQUIRED_PACKAGES = [\n 'six >= 1.10.0',\n]\n\nif '--nightly' in sys.argv:\n project_name = 'tfa-nightly'\n nightly_idx = sys.argv.index('--nightly')\n sys.argv.pop(nightly_idx)\n version['__version__'] += datetime.strftime(datetime.today(), \"%Y%m%d\")\nelse:\n project_name = 'tensorflow-addons'\n\n\nclass BinaryDistribution(Distribution):\n \"\"\"This class is needed in order to create OS specific wheels.\"\"\"\n\n def has_ext_modules(self):\n return True\n\n\nsetup(\n name=project_name,\n version=version['__version__'],\n description=DOCLINES[0],\n long_description='\\n'.join(DOCLINES[2:]),\n author='Google Inc.',\n author_email='[email protected]',\n packages=find_packages(),\n install_requires=REQUIRED_PACKAGES,\n include_package_data=True,\n zip_safe=False,\n distclass=BinaryDistribution,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n license='Apache 2.0',\n keywords='tensorflow addons machine learning',\n)\n", "path": "setup.py"}]} | 1,169 | 431 |
gh_patches_debug_35290 | rasdani/github-patches | git_diff | docarray__docarray-979 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bug(v2): relative file paths in url types
Passing relative file paths gives a validation error:
```python
from docarray import Image
url = 'Test/05978.jpg'
img = Image(url=url)
```
```text
Test/05978.jpg
Traceback (most recent call last):
File "/home/johannes/.config/JetBrains/PyCharmCE2022.3/scratches/scratch_116.py", line 12, in <module>
img = Image(url=url)
File "pydantic/main.py", line 342, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Image
url
unsupported operand type(s) for +: 'NoneType' and 'str' (type=type_error)
```
--- END ISSUE ---
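The `TypeError` in the traceback points at a `None` scheme being concatenated with a string while the URL is rebuilt from its parts; the accepted fix further down handles a missing scheme explicitly. A small, hypothetical sketch of the failure and the guard (these helper functions are illustrative, not docarray API):

```python
# Hypothetical illustration, not docarray code: rebuilding a URL from parts
# fails when the scheme is None; the guard mirrors the accepted fix.
def build_url(scheme, path):
    return scheme + '://' + path                       # TypeError when scheme is None

def build_url_guarded(scheme, path):
    scheme = scheme if scheme is not None else ''      # allow a missing scheme
    url = scheme + '://' + path
    return url[3:] if url.startswith('://') else url   # drop '://' for bare file paths

print(build_url_guarded(None, 'Test/05978.jpg'))       # -> Test/05978.jpg
```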
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docarray/typing/url/any_url.py`
Content:
```
1 from typing import TYPE_CHECKING, Type, TypeVar
2
3 from pydantic import AnyUrl as BaseAnyUrl
4 from pydantic import errors, parse_obj_as
5
6 from docarray.typing.abstract_type import AbstractType
7
8 if TYPE_CHECKING:
9 from pydantic.networks import Parts
10
11 from docarray.proto import NodeProto
12
13 T = TypeVar('T', bound='AnyUrl')
14
15
16 class AnyUrl(BaseAnyUrl, AbstractType):
17 host_required = (
18 False # turn off host requirement to allow passing of local paths as URL
19 )
20
21 def _to_node_protobuf(self) -> 'NodeProto':
22 """Convert Document into a NodeProto protobuf message. This function should
23 be called when the Document is nested into another Document that need to
24 be converted into a protobuf
25
26 :return: the nested item protobuf message
27 """
28 from docarray.proto import NodeProto
29
30 return NodeProto(any_url=str(self))
31
32 @classmethod
33 def validate_parts(cls, parts: 'Parts', validate_port: bool = True) -> 'Parts':
34 """
35 A method used to validate parts of a URL.
36 Our URLs should be able to function both in local and remote settings.
37 Therefore, we allow missing `scheme`, making it possible to pass a file path.
38 """
39 scheme = parts['scheme']
40 if scheme is None:
41 pass # allow missing scheme, unlike pydantic
42
43 elif cls.allowed_schemes and scheme.lower() not in cls.allowed_schemes:
44 raise errors.UrlSchemePermittedError(set(cls.allowed_schemes))
45
46 if validate_port:
47 cls._validate_port(parts['port'])
48
49 user = parts['user']
50 if cls.user_required and user is None:
51 raise errors.UrlUserInfoError()
52
53 return parts
54
55 @classmethod
56 def from_protobuf(cls: Type[T], pb_msg: 'str') -> T:
57 """
58 read url from a proto msg
59 :param pb_msg:
60 :return: url
61 """
62 return parse_obj_as(cls, pb_msg)
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docarray/typing/url/any_url.py b/docarray/typing/url/any_url.py
--- a/docarray/typing/url/any_url.py
+++ b/docarray/typing/url/any_url.py
@@ -1,4 +1,4 @@
-from typing import TYPE_CHECKING, Type, TypeVar
+from typing import TYPE_CHECKING, Optional, Type, TypeVar
from pydantic import AnyUrl as BaseAnyUrl
from pydantic import errors, parse_obj_as
@@ -34,11 +34,14 @@
"""
A method used to validate parts of a URL.
Our URLs should be able to function both in local and remote settings.
- Therefore, we allow missing `scheme`, making it possible to pass a file path.
+ Therefore, we allow missing `scheme`, making it possible to pass a file
+ path without prefix.
+ If `scheme` is missing, we assume it is a local file path.
"""
scheme = parts['scheme']
if scheme is None:
- pass # allow missing scheme, unlike pydantic
+ # allow missing scheme, unlike pydantic
+ pass
elif cls.allowed_schemes and scheme.lower() not in cls.allowed_schemes:
raise errors.UrlSchemePermittedError(set(cls.allowed_schemes))
@@ -52,6 +55,44 @@
return parts
+ @classmethod
+ def build(
+ cls,
+ *,
+ scheme: str,
+ user: Optional[str] = None,
+ password: Optional[str] = None,
+ host: str,
+ port: Optional[str] = None,
+ path: Optional[str] = None,
+ query: Optional[str] = None,
+ fragment: Optional[str] = None,
+ **_kwargs: str,
+ ) -> str:
+ """
+ Build a URL from its parts.
+ The only difference from the pydantic implementation is that we allow
+ missing `scheme`, making it possible to pass a file path without prefix.
+ """
+
+ # allow missing scheme, unlike pydantic
+ scheme_ = scheme if scheme is not None else ''
+ url = super().build(
+ scheme=scheme_,
+ user=user,
+ password=password,
+ host=host,
+ port=port,
+ path=path,
+ query=query,
+ fragment=fragment,
+ **_kwargs,
+ )
+ if scheme is None and url.startswith('://'):
+ # remove the `://` prefix, since scheme is missing
+ url = url[3:]
+ return url
+
@classmethod
def from_protobuf(cls: Type[T], pb_msg: 'str') -> T:
"""
| {"golden_diff": "diff --git a/docarray/typing/url/any_url.py b/docarray/typing/url/any_url.py\n--- a/docarray/typing/url/any_url.py\n+++ b/docarray/typing/url/any_url.py\n@@ -1,4 +1,4 @@\n-from typing import TYPE_CHECKING, Type, TypeVar\n+from typing import TYPE_CHECKING, Optional, Type, TypeVar\n \n from pydantic import AnyUrl as BaseAnyUrl\n from pydantic import errors, parse_obj_as\n@@ -34,11 +34,14 @@\n \"\"\"\n A method used to validate parts of a URL.\n Our URLs should be able to function both in local and remote settings.\n- Therefore, we allow missing `scheme`, making it possible to pass a file path.\n+ Therefore, we allow missing `scheme`, making it possible to pass a file\n+ path without prefix.\n+ If `scheme` is missing, we assume it is a local file path.\n \"\"\"\n scheme = parts['scheme']\n if scheme is None:\n- pass # allow missing scheme, unlike pydantic\n+ # allow missing scheme, unlike pydantic\n+ pass\n \n elif cls.allowed_schemes and scheme.lower() not in cls.allowed_schemes:\n raise errors.UrlSchemePermittedError(set(cls.allowed_schemes))\n@@ -52,6 +55,44 @@\n \n return parts\n \n+ @classmethod\n+ def build(\n+ cls,\n+ *,\n+ scheme: str,\n+ user: Optional[str] = None,\n+ password: Optional[str] = None,\n+ host: str,\n+ port: Optional[str] = None,\n+ path: Optional[str] = None,\n+ query: Optional[str] = None,\n+ fragment: Optional[str] = None,\n+ **_kwargs: str,\n+ ) -> str:\n+ \"\"\"\n+ Build a URL from its parts.\n+ The only difference from the pydantic implementation is that we allow\n+ missing `scheme`, making it possible to pass a file path without prefix.\n+ \"\"\"\n+\n+ # allow missing scheme, unlike pydantic\n+ scheme_ = scheme if scheme is not None else ''\n+ url = super().build(\n+ scheme=scheme_,\n+ user=user,\n+ password=password,\n+ host=host,\n+ port=port,\n+ path=path,\n+ query=query,\n+ fragment=fragment,\n+ **_kwargs,\n+ )\n+ if scheme is None and url.startswith('://'):\n+ # remove the `://` prefix, since scheme is missing\n+ url = url[3:]\n+ return url\n+\n @classmethod\n def from_protobuf(cls: Type[T], pb_msg: 'str') -> T:\n \"\"\"\n", "issue": "bug(v2): relative file paths in url types\nPassing relative file paths gives a validation error:\n\n```python\nfrom docarray import Image\n\nurl = 'Test/05978.jpg'\nimg = Image(url=url)\n```\n\n```text\nTest/05978.jpg\nTraceback (most recent call last):\n File \"/home/johannes/.config/JetBrains/PyCharmCE2022.3/scratches/scratch_116.py\", line 12, in <module>\n img = Image(url=url)\n File \"pydantic/main.py\", line 342, in pydantic.main.BaseModel.__init__\npydantic.error_wrappers.ValidationError: 1 validation error for Image\nurl\n unsupported operand type(s) for +: 'NoneType' and 'str' (type=type_error)\n```\n\n\n", "before_files": [{"content": "from typing import TYPE_CHECKING, Type, TypeVar\n\nfrom pydantic import AnyUrl as BaseAnyUrl\nfrom pydantic import errors, parse_obj_as\n\nfrom docarray.typing.abstract_type import AbstractType\n\nif TYPE_CHECKING:\n from pydantic.networks import Parts\n\n from docarray.proto import NodeProto\n\nT = TypeVar('T', bound='AnyUrl')\n\n\nclass AnyUrl(BaseAnyUrl, AbstractType):\n host_required = (\n False # turn off host requirement to allow passing of local paths as URL\n )\n\n def _to_node_protobuf(self) -> 'NodeProto':\n \"\"\"Convert Document into a NodeProto protobuf message. 
This function should\n be called when the Document is nested into another Document that need to\n be converted into a protobuf\n\n :return: the nested item protobuf message\n \"\"\"\n from docarray.proto import NodeProto\n\n return NodeProto(any_url=str(self))\n\n @classmethod\n def validate_parts(cls, parts: 'Parts', validate_port: bool = True) -> 'Parts':\n \"\"\"\n A method used to validate parts of a URL.\n Our URLs should be able to function both in local and remote settings.\n Therefore, we allow missing `scheme`, making it possible to pass a file path.\n \"\"\"\n scheme = parts['scheme']\n if scheme is None:\n pass # allow missing scheme, unlike pydantic\n\n elif cls.allowed_schemes and scheme.lower() not in cls.allowed_schemes:\n raise errors.UrlSchemePermittedError(set(cls.allowed_schemes))\n\n if validate_port:\n cls._validate_port(parts['port'])\n\n user = parts['user']\n if cls.user_required and user is None:\n raise errors.UrlUserInfoError()\n\n return parts\n\n @classmethod\n def from_protobuf(cls: Type[T], pb_msg: 'str') -> T:\n \"\"\"\n read url from a proto msg\n :param pb_msg:\n :return: url\n \"\"\"\n return parse_obj_as(cls, pb_msg)\n", "path": "docarray/typing/url/any_url.py"}], "after_files": [{"content": "from typing import TYPE_CHECKING, Optional, Type, TypeVar\n\nfrom pydantic import AnyUrl as BaseAnyUrl\nfrom pydantic import errors, parse_obj_as\n\nfrom docarray.typing.abstract_type import AbstractType\n\nif TYPE_CHECKING:\n from pydantic.networks import Parts\n\n from docarray.proto import NodeProto\n\nT = TypeVar('T', bound='AnyUrl')\n\n\nclass AnyUrl(BaseAnyUrl, AbstractType):\n host_required = (\n False # turn off host requirement to allow passing of local paths as URL\n )\n\n def _to_node_protobuf(self) -> 'NodeProto':\n \"\"\"Convert Document into a NodeProto protobuf message. 
This function should\n be called when the Document is nested into another Document that need to\n be converted into a protobuf\n\n :return: the nested item protobuf message\n \"\"\"\n from docarray.proto import NodeProto\n\n return NodeProto(any_url=str(self))\n\n @classmethod\n def validate_parts(cls, parts: 'Parts', validate_port: bool = True) -> 'Parts':\n \"\"\"\n A method used to validate parts of a URL.\n Our URLs should be able to function both in local and remote settings.\n Therefore, we allow missing `scheme`, making it possible to pass a file\n path without prefix.\n If `scheme` is missing, we assume it is a local file path.\n \"\"\"\n scheme = parts['scheme']\n if scheme is None:\n # allow missing scheme, unlike pydantic\n pass\n\n elif cls.allowed_schemes and scheme.lower() not in cls.allowed_schemes:\n raise errors.UrlSchemePermittedError(set(cls.allowed_schemes))\n\n if validate_port:\n cls._validate_port(parts['port'])\n\n user = parts['user']\n if cls.user_required and user is None:\n raise errors.UrlUserInfoError()\n\n return parts\n\n @classmethod\n def build(\n cls,\n *,\n scheme: str,\n user: Optional[str] = None,\n password: Optional[str] = None,\n host: str,\n port: Optional[str] = None,\n path: Optional[str] = None,\n query: Optional[str] = None,\n fragment: Optional[str] = None,\n **_kwargs: str,\n ) -> str:\n \"\"\"\n Build a URL from its parts.\n The only difference from the pydantic implementation is that we allow\n missing `scheme`, making it possible to pass a file path without prefix.\n \"\"\"\n\n # allow missing scheme, unlike pydantic\n scheme_ = scheme if scheme is not None else ''\n url = super().build(\n scheme=scheme_,\n user=user,\n password=password,\n host=host,\n port=port,\n path=path,\n query=query,\n fragment=fragment,\n **_kwargs,\n )\n if scheme is None and url.startswith('://'):\n # remove the `://` prefix, since scheme is missing\n url = url[3:]\n return url\n\n @classmethod\n def from_protobuf(cls: Type[T], pb_msg: 'str') -> T:\n \"\"\"\n read url from a proto msg\n :param pb_msg:\n :return: url\n \"\"\"\n return parse_obj_as(cls, pb_msg)\n", "path": "docarray/typing/url/any_url.py"}]} | 1,010 | 609 |
gh_patches_debug_34183 | rasdani/github-patches | git_diff | sonic-net__sonic-mgmt-4352 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Investigate RDMA nightly run failures on 202012
<!--
If you are reporting a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you fail to provide this
information within 7 days, we cannot debug your issue and will close it. We
will, however, reopen it if you later provide the information.
For more information about reporting issues, see
https://github.com/Azure/SONiC/wiki#report-issues
---------------------------------------------------
GENERAL SUPPORT INFORMATION
---------------------------------------------------
The GitHub issue tracker is for bug reports and feature requests.
General support can be found at the following locations:
- SONiC Support Forums - https://groups.google.com/forum/#!forum/sonicproject
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
-->
**Description**
RDMA test runs on TD2 with 202012 are quite flaky. A different set of test failures is seen daily, and sometimes the test run fails at pretest.
09/09 run skipped all tgen tests with the following reason
SKIPPED [1] /azp/agent/_work/27/s/tests/common/helpers/assertions.py:13: Port is not mapped to the expected DUT
--- END ISSUE ---
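For the tgen-related skip above, the accepted fix further down stops looking tgen neighbors up in the VM inventory and instead assigns their management IPs out of a fixed `10.65.32.0/24` network. A small illustrative sketch of that mechanism; it uses the stdlib `ipaddress` module for brevity, whereas the Ansible module itself imports the legacy `ipaddr` package, and the neighbor names are made up:

```python
# Illustrative sketch of indexing management IPs out of a fixed /24,
# as the accepted fix does for tgen topologies; neighbor names are hypothetical.
import ipaddress

TGEN_MGMT_NETWORK = '10.65.32.0/24'
tgen_mgmt_ips = list(ipaddress.ip_network(TGEN_MGMT_NETWORK))
for index, neighbor in enumerate(['ARISTA01T1', 'ARISTA02T1']):
    print(neighbor, str(tgen_mgmt_ips[index]))  # 10.65.32.0, then 10.65.32.1, ...
```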
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ansible/library/testbed_vm_info.py`
Content:
```
1 #!/usr/bin/env python
2
3 import re
4 import yaml
5 import os
6 import traceback
7 import subprocess
8 import ipaddr as ipaddress
9 from operator import itemgetter
10 from itertools import groupby
11 from collections import defaultdict
12 import re
13
14 from ansible.parsing.dataloader import DataLoader
15 from ansible.inventory.manager import InventoryManager
16
17 DOCUMENTATION = '''
18 module: testbed_vm_info.py
19 Ansible_version_added: 2.0.0.2
20 short_description: Gather all related VMs info
21 Description:
22 When deploy testbed topology with VM connected to SONiC, gather neighbor VMs info for generating SONiC minigraph file
23 options:
24 base_vm: base vm name defined in testbed.csv for the deployed topology; required: True
25 topo: topology name defined in testbed.csv for the deployed topology; required: True
26 vm_file: the virtual machine file path ; default: 'veos'
27
28 Ansible_facts:
29 'neighbor_eosvm_mgmt': all VM hosts management IPs
30 'topoall': topology information
31
32 '''
33
34 EXAMPLES = '''
35 - name: gather vm information
36 testbed_vm_info: base_vm='VM0100' topo='t1' vm_file='veos'
37 '''
38
39 ### Here are the assumption/expectation of files to gather VM informations, if the file location or name changes, please modify it here
40 TOPO_PATH = 'vars/'
41 VM_INV_FILE = 'veos'
42
43
44 class TestbedVMFacts():
45 """
46 Retrieve testbed VMs management information that for a specified toplogy defined in testbed.csv
47
48 """
49
50 def __init__(self, toponame, vmbase, vmfile):
51 CLET_SUFFIX = "-clet"
52 toponame = re.sub(CLET_SUFFIX + "$", "", toponame)
53 self.topofile = TOPO_PATH+'topo_'+toponame +'.yml'
54 self.start_index = int(re.findall('VM(\d+)', vmbase)[0])
55 self.vmhosts = {}
56 self.vmfile = vmfile
57 self.inv_mgr = InventoryManager(loader=DataLoader(), sources=self.vmfile)
58 return
59
60
61 def get_neighbor_eos(self):
62 eos = {}
63 with open(self.topofile) as f:
64 vm_topology = yaml.load(f)
65 self.topoall = vm_topology
66 for vm in vm_topology['topology']['VMs']:
67 vm_index = int(vm_topology['topology']['VMs'][vm]['vm_offset'])+self.start_index
68 eos[vm] = vm_index
69 return eos
70
71
72 def main():
73 module = AnsibleModule(
74 argument_spec=dict(
75 base_vm=dict(required=True, type='str'),
76 topo=dict(required=True, type='str'),
77 vm_file=dict(default=VM_INV_FILE, type='str')
78 ),
79 supports_check_mode=True
80 )
81 m_args = module.params
82 topo_type = m_args['topo']
83 if 'ptf' in topo_type:
84 module.exit_json(ansible_facts={'neighbor_eosvm_mgmt': {}})
85 try:
86 vmsall = TestbedVMFacts(m_args['topo'], m_args['base_vm'], m_args['vm_file'])
87 neighbor_eos = vmsall.get_neighbor_eos()
88 for eos in neighbor_eos:
89 vmname = 'VM'+format(neighbor_eos[eos], '04d')
90 if vmname in vmsall.inv_mgr.hosts:
91 vmsall.vmhosts[eos] = vmsall.inv_mgr.get_host(vmname).get_vars()['ansible_host']
92 else:
93 err_msg = "cannot find the vm " + vmname + " in VM inventory file, please make sure you have enough VMs for the topology you are using"
94 module.fail_json(msg=err_msg)
95 module.exit_json(ansible_facts={'neighbor_eosvm_mgmt':vmsall.vmhosts, 'topoall': vmsall.topoall})
96 except (IOError, OSError):
97 module.fail_json(msg="Can not find file "+vmsall.topofile+" or "+m_args['vm_file']+" or "+VM_INV_FILE)
98 except Exception as e:
99 module.fail_json(msg=traceback.format_exc())
100
101 from ansible.module_utils.basic import *
102 if __name__ == "__main__":
103 main()
104
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ansible/library/testbed_vm_info.py b/ansible/library/testbed_vm_info.py
--- a/ansible/library/testbed_vm_info.py
+++ b/ansible/library/testbed_vm_info.py
@@ -39,6 +39,7 @@
### Here are the assumption/expectation of files to gather VM informations, if the file location or name changes, please modify it here
TOPO_PATH = 'vars/'
VM_INV_FILE = 'veos'
+TGEN_MGMT_NETWORK = '10.65.32.0/24'
class TestbedVMFacts():
@@ -51,7 +52,10 @@
CLET_SUFFIX = "-clet"
toponame = re.sub(CLET_SUFFIX + "$", "", toponame)
self.topofile = TOPO_PATH+'topo_'+toponame +'.yml'
- self.start_index = int(re.findall('VM(\d+)', vmbase)[0])
+ if vmbase != '':
+ self.start_index = int(re.findall('VM(\d+)', vmbase)[0])
+ else:
+ self.start_index = 0
self.vmhosts = {}
self.vmfile = vmfile
self.inv_mgr = InventoryManager(loader=DataLoader(), sources=self.vmfile)
@@ -85,9 +89,12 @@
try:
vmsall = TestbedVMFacts(m_args['topo'], m_args['base_vm'], m_args['vm_file'])
neighbor_eos = vmsall.get_neighbor_eos()
- for eos in neighbor_eos:
+ tgen_mgmt_ips = list(ipaddress.IPNetwork(unicode(TGEN_MGMT_NETWORK)))
+ for index, eos in enumerate(neighbor_eos):
vmname = 'VM'+format(neighbor_eos[eos], '04d')
- if vmname in vmsall.inv_mgr.hosts:
+ if 'tgen' in topo_type:
+ vmsall.vmhosts[eos] = str(tgen_mgmt_ips[index])
+ elif vmname in vmsall.inv_mgr.hosts:
vmsall.vmhosts[eos] = vmsall.inv_mgr.get_host(vmname).get_vars()['ansible_host']
else:
err_msg = "cannot find the vm " + vmname + " in VM inventory file, please make sure you have enough VMs for the topology you are using"
| {"golden_diff": "diff --git a/ansible/library/testbed_vm_info.py b/ansible/library/testbed_vm_info.py\n--- a/ansible/library/testbed_vm_info.py\n+++ b/ansible/library/testbed_vm_info.py\n@@ -39,6 +39,7 @@\n ### Here are the assumption/expectation of files to gather VM informations, if the file location or name changes, please modify it here\n TOPO_PATH = 'vars/'\n VM_INV_FILE = 'veos'\n+TGEN_MGMT_NETWORK = '10.65.32.0/24'\n \n \n class TestbedVMFacts():\n@@ -51,7 +52,10 @@\n CLET_SUFFIX = \"-clet\"\n toponame = re.sub(CLET_SUFFIX + \"$\", \"\", toponame)\n self.topofile = TOPO_PATH+'topo_'+toponame +'.yml'\n- self.start_index = int(re.findall('VM(\\d+)', vmbase)[0])\n+ if vmbase != '':\n+ self.start_index = int(re.findall('VM(\\d+)', vmbase)[0])\n+ else:\n+ self.start_index = 0\n self.vmhosts = {}\n self.vmfile = vmfile\n self.inv_mgr = InventoryManager(loader=DataLoader(), sources=self.vmfile)\n@@ -85,9 +89,12 @@\n try:\n vmsall = TestbedVMFacts(m_args['topo'], m_args['base_vm'], m_args['vm_file'])\n neighbor_eos = vmsall.get_neighbor_eos()\n- for eos in neighbor_eos:\n+ tgen_mgmt_ips = list(ipaddress.IPNetwork(unicode(TGEN_MGMT_NETWORK)))\n+ for index, eos in enumerate(neighbor_eos):\n vmname = 'VM'+format(neighbor_eos[eos], '04d')\n- if vmname in vmsall.inv_mgr.hosts:\n+ if 'tgen' in topo_type:\n+ vmsall.vmhosts[eos] = str(tgen_mgmt_ips[index])\n+ elif vmname in vmsall.inv_mgr.hosts:\n vmsall.vmhosts[eos] = vmsall.inv_mgr.get_host(vmname).get_vars()['ansible_host']\n else:\n err_msg = \"cannot find the vm \" + vmname + \" in VM inventory file, please make sure you have enough VMs for the topology you are using\"\n", "issue": "Investigate RDMA nightly run failures on 202012\n<!--\r\nIf you are reporting a new issue, make sure that we do not have any duplicates\r\nalready open. You can ensure this by searching the issue list for this\r\nrepository. If there is a duplicate, please close your issue and add a comment\r\nto the existing issue instead.\r\n\r\nIf you suspect your issue is a bug, please edit your issue description to\r\ninclude the BUG REPORT INFORMATION shown below. If you fail to provide this\r\ninformation within 7 days, we cannot debug your issue and will close it. We\r\nwill, however, reopen it if you later provide the information.\r\n\r\nFor more information about reporting issues, see\r\nhttps://github.com/Azure/SONiC/wiki#report-issues\r\n\r\n---------------------------------------------------\r\nGENERAL SUPPORT INFORMATION\r\n---------------------------------------------------\r\n\r\nThe GitHub issue tracker is for bug reports and feature requests.\r\nGeneral support can be found at the following locations:\r\n\r\n- SONiC Support Forums - https://groups.google.com/forum/#!forum/sonicproject\r\n\r\n---------------------------------------------------\r\nBUG REPORT INFORMATION\r\n---------------------------------------------------\r\nUse the commands below to provide key information from your environment:\r\nYou do NOT have to include this information if this is a FEATURE REQUEST\r\n-->\r\n\r\n**Description**\r\nRDMA test runs on TD2 with 202012 are quite flaky. 
Different set of test failures are seen daily and sometimes test fails at pretest\r\n09/09 run skipped all tgen tests with the following reason\r\nSKIPPED [1] /azp/agent/_work/27/s/tests/common/helpers/assertions.py:13: Port is not mapped to the expected DUT\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport re\nimport yaml\nimport os\nimport traceback\nimport subprocess\nimport ipaddr as ipaddress\nfrom operator import itemgetter\nfrom itertools import groupby\nfrom collections import defaultdict\nimport re\n\nfrom ansible.parsing.dataloader import DataLoader\nfrom ansible.inventory.manager import InventoryManager\n\nDOCUMENTATION = '''\nmodule: testbed_vm_info.py\nAnsible_version_added: 2.0.0.2\nshort_description: Gather all related VMs info\nDescription:\n When deploy testbed topology with VM connected to SONiC, gather neighbor VMs info for generating SONiC minigraph file\n options:\n base_vm: base vm name defined in testbed.csv for the deployed topology; required: True\n topo: topology name defined in testbed.csv for the deployed topology; required: True\n vm_file: the virtual machine file path ; default: 'veos'\n\nAnsible_facts:\n 'neighbor_eosvm_mgmt': all VM hosts management IPs\n 'topoall': topology information\n\n'''\n\nEXAMPLES = '''\n - name: gather vm information\n testbed_vm_info: base_vm='VM0100' topo='t1' vm_file='veos'\n'''\n\n### Here are the assumption/expectation of files to gather VM informations, if the file location or name changes, please modify it here\nTOPO_PATH = 'vars/'\nVM_INV_FILE = 'veos'\n\n\nclass TestbedVMFacts():\n \"\"\"\n Retrieve testbed VMs management information that for a specified toplogy defined in testbed.csv\n\n \"\"\"\n\n def __init__(self, toponame, vmbase, vmfile):\n CLET_SUFFIX = \"-clet\"\n toponame = re.sub(CLET_SUFFIX + \"$\", \"\", toponame)\n self.topofile = TOPO_PATH+'topo_'+toponame +'.yml'\n self.start_index = int(re.findall('VM(\\d+)', vmbase)[0])\n self.vmhosts = {}\n self.vmfile = vmfile\n self.inv_mgr = InventoryManager(loader=DataLoader(), sources=self.vmfile)\n return\n\n\n def get_neighbor_eos(self):\n eos = {}\n with open(self.topofile) as f:\n vm_topology = yaml.load(f)\n self.topoall = vm_topology\n for vm in vm_topology['topology']['VMs']:\n vm_index = int(vm_topology['topology']['VMs'][vm]['vm_offset'])+self.start_index\n eos[vm] = vm_index\n return eos\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n base_vm=dict(required=True, type='str'),\n topo=dict(required=True, type='str'),\n vm_file=dict(default=VM_INV_FILE, type='str')\n ),\n supports_check_mode=True\n )\n m_args = module.params\n topo_type = m_args['topo']\n if 'ptf' in topo_type:\n module.exit_json(ansible_facts={'neighbor_eosvm_mgmt': {}})\n try:\n vmsall = TestbedVMFacts(m_args['topo'], m_args['base_vm'], m_args['vm_file'])\n neighbor_eos = vmsall.get_neighbor_eos()\n for eos in neighbor_eos:\n vmname = 'VM'+format(neighbor_eos[eos], '04d')\n if vmname in vmsall.inv_mgr.hosts:\n vmsall.vmhosts[eos] = vmsall.inv_mgr.get_host(vmname).get_vars()['ansible_host']\n else:\n err_msg = \"cannot find the vm \" + vmname + \" in VM inventory file, please make sure you have enough VMs for the topology you are using\"\n module.fail_json(msg=err_msg)\n module.exit_json(ansible_facts={'neighbor_eosvm_mgmt':vmsall.vmhosts, 'topoall': vmsall.topoall})\n except (IOError, OSError):\n module.fail_json(msg=\"Can not find file \"+vmsall.topofile+\" or \"+m_args['vm_file']+\" or \"+VM_INV_FILE)\n except Exception as e:\n 
module.fail_json(msg=traceback.format_exc())\n\nfrom ansible.module_utils.basic import *\nif __name__ == \"__main__\":\n main()\n\n", "path": "ansible/library/testbed_vm_info.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nimport re\nimport yaml\nimport os\nimport traceback\nimport subprocess\nimport ipaddr as ipaddress\nfrom operator import itemgetter\nfrom itertools import groupby\nfrom collections import defaultdict\nimport re\n\nfrom ansible.parsing.dataloader import DataLoader\nfrom ansible.inventory.manager import InventoryManager\n\nDOCUMENTATION = '''\nmodule: testbed_vm_info.py\nAnsible_version_added: 2.0.0.2\nshort_description: Gather all related VMs info\nDescription:\n When deploy testbed topology with VM connected to SONiC, gather neighbor VMs info for generating SONiC minigraph file\n options:\n base_vm: base vm name defined in testbed.csv for the deployed topology; required: True\n topo: topology name defined in testbed.csv for the deployed topology; required: True\n vm_file: the virtual machine file path ; default: 'veos'\n\nAnsible_facts:\n 'neighbor_eosvm_mgmt': all VM hosts management IPs\n 'topoall': topology information\n\n'''\n\nEXAMPLES = '''\n - name: gather vm information\n testbed_vm_info: base_vm='VM0100' topo='t1' vm_file='veos'\n'''\n\n### Here are the assumption/expectation of files to gather VM informations, if the file location or name changes, please modify it here\nTOPO_PATH = 'vars/'\nVM_INV_FILE = 'veos'\nTGEN_MGMT_NETWORK = '10.65.32.0/24'\n\n\nclass TestbedVMFacts():\n \"\"\"\n Retrieve testbed VMs management information that for a specified toplogy defined in testbed.csv\n\n \"\"\"\n\n def __init__(self, toponame, vmbase, vmfile):\n CLET_SUFFIX = \"-clet\"\n toponame = re.sub(CLET_SUFFIX + \"$\", \"\", toponame)\n self.topofile = TOPO_PATH+'topo_'+toponame +'.yml'\n if vmbase != '':\n self.start_index = int(re.findall('VM(\\d+)', vmbase)[0])\n else:\n self.start_index = 0\n self.vmhosts = {}\n self.vmfile = vmfile\n self.inv_mgr = InventoryManager(loader=DataLoader(), sources=self.vmfile)\n return\n\n\n def get_neighbor_eos(self):\n eos = {}\n with open(self.topofile) as f:\n vm_topology = yaml.load(f)\n self.topoall = vm_topology\n for vm in vm_topology['topology']['VMs']:\n vm_index = int(vm_topology['topology']['VMs'][vm]['vm_offset'])+self.start_index\n eos[vm] = vm_index\n return eos\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n base_vm=dict(required=True, type='str'),\n topo=dict(required=True, type='str'),\n vm_file=dict(default=VM_INV_FILE, type='str')\n ),\n supports_check_mode=True\n )\n m_args = module.params\n topo_type = m_args['topo']\n if 'ptf' in topo_type:\n module.exit_json(ansible_facts={'neighbor_eosvm_mgmt': {}})\n try:\n vmsall = TestbedVMFacts(m_args['topo'], m_args['base_vm'], m_args['vm_file'])\n neighbor_eos = vmsall.get_neighbor_eos()\n tgen_mgmt_ips = list(ipaddress.IPNetwork(unicode(TGEN_MGMT_NETWORK)))\n for index, eos in enumerate(neighbor_eos):\n vmname = 'VM'+format(neighbor_eos[eos], '04d')\n if 'tgen' in topo_type:\n vmsall.vmhosts[eos] = str(tgen_mgmt_ips[index])\n elif vmname in vmsall.inv_mgr.hosts:\n vmsall.vmhosts[eos] = vmsall.inv_mgr.get_host(vmname).get_vars()['ansible_host']\n else:\n err_msg = \"cannot find the vm \" + vmname + \" in VM inventory file, please make sure you have enough VMs for the topology you are using\"\n module.fail_json(msg=err_msg)\n module.exit_json(ansible_facts={'neighbor_eosvm_mgmt':vmsall.vmhosts, 'topoall': vmsall.topoall})\n except (IOError, 
OSError):\n module.fail_json(msg=\"Can not find file \"+vmsall.topofile+\" or \"+m_args['vm_file']+\" or \"+VM_INV_FILE)\n except Exception as e:\n module.fail_json(msg=traceback.format_exc())\n\nfrom ansible.module_utils.basic import *\nif __name__ == \"__main__\":\n main()\n\n", "path": "ansible/library/testbed_vm_info.py"}]} | 1,748 | 523 |
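The heart of the patch above is handing out management IPs for 'tgen' neighbors by index from a fixed subnet instead of resolving each VM through the inventory. Below is a minimal sketch of that allocation, using the standard-library `ipaddress` module rather than the `ipaddr`/Python 2 `unicode` calls in the real module; the subnet and neighbor names are illustrative only.

```python
import ipaddress

# Illustrative inputs; the real module reads these from the topology file
# and the testbed inventory.
TGEN_MGMT_NETWORK = "10.65.32.0/24"
neighbor_eos = {"ARISTA01T1": 0, "ARISTA02T1": 1, "ARISTA03T1": 2}

# Pre-compute every address in the management subnet, then assign them by
# enumeration index, mirroring what the patch does for 'tgen' topologies.
tgen_mgmt_ips = list(ipaddress.ip_network(TGEN_MGMT_NETWORK))

vmhosts = {}
for index, eos in enumerate(neighbor_eos):
    vmhosts[eos] = str(tgen_mgmt_ips[index])

print(vmhosts)
# {'ARISTA01T1': '10.65.32.0', 'ARISTA02T1': '10.65.32.1', 'ARISTA03T1': '10.65.32.2'}
```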
gh_patches_debug_32068 | rasdani/github-patches | git_diff | conan-io__conan-center-index-16242 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] libudev/system: Fails build for conan 2.0
### Description
libudev/system fails to download or build with conan 2.0 installed. It needs an update to use conan 2.0 tooling, as it currently depends on conan 1.x code.
### Package and Environment Details
* Package Name/Version: **libudev/system**
* Operating System+version: **Linux Ubuntu 20.04**
### Conan profile
[settings]
arch=x86_64
build_type=Release
compiler=gcc
compiler.cppstd=gnu17
compiler.libcxx=libstdc++11
compiler.version=9
os=Linux
### Steps to reproduce
conan download -r conancenter libudev/system@
### Logs
ERROR: Error loading conanfile at '/home/tbitz/.conan2/p/libudadcb0d08572c6/e/conanfile.py': Unable to load conanfile in /home/tbitz/.conan2/p/libudadcb0d08572c6/e/conanfile.py
File "<frozen importlib._bootstrap_external>", line 848, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/tbitz/.conan2/p/libudadcb0d08572c6/e/conanfile.py", line 4, in <module>
from conans import tools
ImportError: cannot import name 'tools' from 'conans' (/home/tbitz/.local/lib/python3.8/site-packages/conans/__init__.py)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/libudev/all/conanfile.py`
Content:
```
1 from conan import ConanFile
2 from conan.errors import ConanException, ConanInvalidConfiguration
3 from conan.tools.system import package_manager
4 from conans import tools
5
6 required_conan_version = ">=1.47"
7
8
9 class LibUDEVConan(ConanFile):
10 name = "libudev"
11 version = "system"
12 description = "API for enumerating and introspecting local devices"
13 topics = ("udev", "devices", "enumerating")
14 url = "https://github.com/conan-io/conan-center-index"
15 homepage = "https://www.freedesktop.org/software/systemd/man/udev.html"
16 license = "GPL-2.0-or-later", "LGPL-2.1-or-later"
17 settings = "os", "arch", "compiler", "build_type"
18
19 def validate(self):
20 if self.settings.os != "Linux":
21 raise ConanInvalidConfiguration("libudev is only supported on Linux.")
22
23 def package_id(self):
24 self.info.header_only()
25
26 def _fill_cppinfo_from_pkgconfig(self, name):
27 pkg_config = tools.PkgConfig(name)
28 if not pkg_config.provides:
29 raise ConanException("libudev development files aren't available, give up")
30 libs = [lib[2:] for lib in pkg_config.libs_only_l]
31 lib_dirs = [lib[2:] for lib in pkg_config.libs_only_L]
32 ldflags = [flag for flag in pkg_config.libs_only_other]
33 include_dirs = [include[2:] for include in pkg_config.cflags_only_I]
34 cflags = [flag for flag in pkg_config.cflags_only_other if not flag.startswith("-D")]
35 defines = [flag[2:] for flag in pkg_config.cflags_only_other if flag.startswith("-D")]
36
37 self.cpp_info.system_libs = libs
38 self.cpp_info.libdirs = lib_dirs
39 self.cpp_info.sharedlinkflags = ldflags
40 self.cpp_info.exelinkflags = ldflags
41 self.cpp_info.defines = defines
42 self.cpp_info.includedirs = include_dirs
43 self.cpp_info.cflags = cflags
44 self.cpp_info.cxxflags = cflags
45
46 def system_requirements(self):
47 dnf = package_manager.Dnf(self)
48 dnf.install(["systemd-devel"], update=True, check=True)
49
50 yum = package_manager.Yum(self)
51 yum.install(["systemd-devel"], update=True, check=True)
52
53 apt = package_manager.Apt(self)
54 apt.install(["libudev-dev"], update=True, check=True)
55
56 pacman = package_manager.PacMan(self)
57 pacman.install(["systemd-libs"], update=True, check=True)
58
59 zypper = package_manager.Zypper(self)
60 zypper.install(["libudev-devel"], update=True, check=True)
61
62 def package_info(self):
63 self.cpp_info.includedirs = []
64 self.cpp_info.libdirs = []
65 self._fill_cppinfo_from_pkgconfig("libudev")
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/libudev/all/conanfile.py b/recipes/libudev/all/conanfile.py
--- a/recipes/libudev/all/conanfile.py
+++ b/recipes/libudev/all/conanfile.py
@@ -1,7 +1,7 @@
from conan import ConanFile
-from conan.errors import ConanException, ConanInvalidConfiguration
+from conan.errors import ConanInvalidConfiguration
from conan.tools.system import package_manager
-from conans import tools
+from conan.tools.gnu import PkgConfig
required_conan_version = ">=1.47"
@@ -21,27 +21,7 @@
raise ConanInvalidConfiguration("libudev is only supported on Linux.")
def package_id(self):
- self.info.header_only()
-
- def _fill_cppinfo_from_pkgconfig(self, name):
- pkg_config = tools.PkgConfig(name)
- if not pkg_config.provides:
- raise ConanException("libudev development files aren't available, give up")
- libs = [lib[2:] for lib in pkg_config.libs_only_l]
- lib_dirs = [lib[2:] for lib in pkg_config.libs_only_L]
- ldflags = [flag for flag in pkg_config.libs_only_other]
- include_dirs = [include[2:] for include in pkg_config.cflags_only_I]
- cflags = [flag for flag in pkg_config.cflags_only_other if not flag.startswith("-D")]
- defines = [flag[2:] for flag in pkg_config.cflags_only_other if flag.startswith("-D")]
-
- self.cpp_info.system_libs = libs
- self.cpp_info.libdirs = lib_dirs
- self.cpp_info.sharedlinkflags = ldflags
- self.cpp_info.exelinkflags = ldflags
- self.cpp_info.defines = defines
- self.cpp_info.includedirs = include_dirs
- self.cpp_info.cflags = cflags
- self.cpp_info.cxxflags = cflags
+ self.info.clear()
def system_requirements(self):
dnf = package_manager.Dnf(self)
@@ -62,4 +42,5 @@
def package_info(self):
self.cpp_info.includedirs = []
self.cpp_info.libdirs = []
- self._fill_cppinfo_from_pkgconfig("libudev")
+ pkg_config = PkgConfig(self, "libudev")
+ pkg_config.fill_cpp_info(self.cpp_info)
| {"golden_diff": "diff --git a/recipes/libudev/all/conanfile.py b/recipes/libudev/all/conanfile.py\n--- a/recipes/libudev/all/conanfile.py\n+++ b/recipes/libudev/all/conanfile.py\n@@ -1,7 +1,7 @@\n from conan import ConanFile\n-from conan.errors import ConanException, ConanInvalidConfiguration\n+from conan.errors import ConanInvalidConfiguration\n from conan.tools.system import package_manager\n-from conans import tools\n+from conan.tools.gnu import PkgConfig\n \n required_conan_version = \">=1.47\"\n \n@@ -21,27 +21,7 @@\n raise ConanInvalidConfiguration(\"libudev is only supported on Linux.\")\n \n def package_id(self):\n- self.info.header_only()\n-\n- def _fill_cppinfo_from_pkgconfig(self, name):\n- pkg_config = tools.PkgConfig(name)\n- if not pkg_config.provides:\n- raise ConanException(\"libudev development files aren't available, give up\")\n- libs = [lib[2:] for lib in pkg_config.libs_only_l]\n- lib_dirs = [lib[2:] for lib in pkg_config.libs_only_L]\n- ldflags = [flag for flag in pkg_config.libs_only_other]\n- include_dirs = [include[2:] for include in pkg_config.cflags_only_I]\n- cflags = [flag for flag in pkg_config.cflags_only_other if not flag.startswith(\"-D\")]\n- defines = [flag[2:] for flag in pkg_config.cflags_only_other if flag.startswith(\"-D\")]\n-\n- self.cpp_info.system_libs = libs\n- self.cpp_info.libdirs = lib_dirs\n- self.cpp_info.sharedlinkflags = ldflags\n- self.cpp_info.exelinkflags = ldflags\n- self.cpp_info.defines = defines\n- self.cpp_info.includedirs = include_dirs\n- self.cpp_info.cflags = cflags\n- self.cpp_info.cxxflags = cflags\n+ self.info.clear()\n \n def system_requirements(self):\n dnf = package_manager.Dnf(self)\n@@ -62,4 +42,5 @@\n def package_info(self):\n self.cpp_info.includedirs = []\n self.cpp_info.libdirs = []\n- self._fill_cppinfo_from_pkgconfig(\"libudev\")\n+ pkg_config = PkgConfig(self, \"libudev\")\n+ pkg_config.fill_cpp_info(self.cpp_info)\n", "issue": "[package] libudev/system: Fails build for conan 2.0\n### Description\n\nlibudev/system fails to download or build with conan 2.0 installed. it needs an update to use conan 2.0 code for conan tools as it currently is dependent on conan 1.x code. 
\n\n### Package and Environment Details\n\n* Package Name/Version: **libudev/system**\r\n* Operating System+version: **Linux Ubuntu 20.04**\r\n\n\n### Conan profile\n\n[settings]\r\narch=x86_64\r\nbuild_type=Release\r\ncompiler=gcc\r\ncompiler.cppstd=gnu17\r\ncompiler.libcxx=libstdc++11\r\ncompiler.version=9\r\nos=Linux\r\n\n\n### Steps to reproduce\n\nconan download -r conancenter libudev/system@\n\n### Logs\n\nERROR: Error loading conanfile at '/home/tbitz/.conan2/p/libudadcb0d08572c6/e/conanfile.py': Unable to load conanfile in /home/tbitz/.conan2/p/libudadcb0d08572c6/e/conanfile.py\r\n File \"<frozen importlib._bootstrap_external>\", line 848, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/home/tbitz/.conan2/p/libudadcb0d08572c6/e/conanfile.py\", line 4, in <module>\r\n from conans import tools\r\nImportError: cannot import name 'tools' from 'conans' (/home/tbitz/.local/lib/python3.8/site-packages/conans/__init__.py)\r\n\r\n\n", "before_files": [{"content": "from conan import ConanFile\nfrom conan.errors import ConanException, ConanInvalidConfiguration\nfrom conan.tools.system import package_manager\nfrom conans import tools\n\nrequired_conan_version = \">=1.47\"\n\n\nclass LibUDEVConan(ConanFile):\n name = \"libudev\"\n version = \"system\"\n description = \"API for enumerating and introspecting local devices\"\n topics = (\"udev\", \"devices\", \"enumerating\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://www.freedesktop.org/software/systemd/man/udev.html\"\n license = \"GPL-2.0-or-later\", \"LGPL-2.1-or-later\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n\n def validate(self):\n if self.settings.os != \"Linux\":\n raise ConanInvalidConfiguration(\"libudev is only supported on Linux.\")\n\n def package_id(self):\n self.info.header_only()\n\n def _fill_cppinfo_from_pkgconfig(self, name):\n pkg_config = tools.PkgConfig(name)\n if not pkg_config.provides:\n raise ConanException(\"libudev development files aren't available, give up\")\n libs = [lib[2:] for lib in pkg_config.libs_only_l]\n lib_dirs = [lib[2:] for lib in pkg_config.libs_only_L]\n ldflags = [flag for flag in pkg_config.libs_only_other]\n include_dirs = [include[2:] for include in pkg_config.cflags_only_I]\n cflags = [flag for flag in pkg_config.cflags_only_other if not flag.startswith(\"-D\")]\n defines = [flag[2:] for flag in pkg_config.cflags_only_other if flag.startswith(\"-D\")]\n\n self.cpp_info.system_libs = libs\n self.cpp_info.libdirs = lib_dirs\n self.cpp_info.sharedlinkflags = ldflags\n self.cpp_info.exelinkflags = ldflags\n self.cpp_info.defines = defines\n self.cpp_info.includedirs = include_dirs\n self.cpp_info.cflags = cflags\n self.cpp_info.cxxflags = cflags\n\n def system_requirements(self):\n dnf = package_manager.Dnf(self)\n dnf.install([\"systemd-devel\"], update=True, check=True)\n\n yum = package_manager.Yum(self)\n yum.install([\"systemd-devel\"], update=True, check=True)\n\n apt = package_manager.Apt(self)\n apt.install([\"libudev-dev\"], update=True, check=True)\n\n pacman = package_manager.PacMan(self)\n pacman.install([\"systemd-libs\"], update=True, check=True)\n\n zypper = package_manager.Zypper(self)\n zypper.install([\"libudev-devel\"], update=True, check=True)\n\n def package_info(self):\n self.cpp_info.includedirs = []\n self.cpp_info.libdirs = []\n self._fill_cppinfo_from_pkgconfig(\"libudev\")\n", "path": "recipes/libudev/all/conanfile.py"}], "after_files": [{"content": "from 
conan import ConanFile\nfrom conan.errors import ConanInvalidConfiguration\nfrom conan.tools.system import package_manager\nfrom conan.tools.gnu import PkgConfig\n\nrequired_conan_version = \">=1.47\"\n\n\nclass LibUDEVConan(ConanFile):\n name = \"libudev\"\n version = \"system\"\n description = \"API for enumerating and introspecting local devices\"\n topics = (\"udev\", \"devices\", \"enumerating\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://www.freedesktop.org/software/systemd/man/udev.html\"\n license = \"GPL-2.0-or-later\", \"LGPL-2.1-or-later\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n\n def validate(self):\n if self.settings.os != \"Linux\":\n raise ConanInvalidConfiguration(\"libudev is only supported on Linux.\")\n\n def package_id(self):\n self.info.clear()\n\n def system_requirements(self):\n dnf = package_manager.Dnf(self)\n dnf.install([\"systemd-devel\"], update=True, check=True)\n\n yum = package_manager.Yum(self)\n yum.install([\"systemd-devel\"], update=True, check=True)\n\n apt = package_manager.Apt(self)\n apt.install([\"libudev-dev\"], update=True, check=True)\n\n pacman = package_manager.PacMan(self)\n pacman.install([\"systemd-libs\"], update=True, check=True)\n\n zypper = package_manager.Zypper(self)\n zypper.install([\"libudev-devel\"], update=True, check=True)\n\n def package_info(self):\n self.cpp_info.includedirs = []\n self.cpp_info.libdirs = []\n pkg_config = PkgConfig(self, \"libudev\")\n pkg_config.fill_cpp_info(self.cpp_info)\n", "path": "recipes/libudev/all/conanfile.py"}]} | 1,396 | 529 |
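The migration pattern behind the record above—replacing the removed `conans.tools.PkgConfig` parsing with `conan.tools.gnu.PkgConfig` and `self.info.clear()`—is easier to see in a stripped-down recipe. The following is a sketch, not the full recipe: only the Apt branch of `system_requirements` is kept, and it assumes `pkg-config` and the libudev development package are available on the build host.

```python
from conan import ConanFile
from conan.tools.gnu import PkgConfig
from conan.tools.system import package_manager


class LibUdevSystemSketch(ConanFile):
    # Trimmed-down system/"header-only" package, mirroring the patched recipe.
    name = "libudev"
    version = "system"
    settings = "os", "arch", "compiler", "build_type"

    def package_id(self):
        # Conan 2.x replacement for the removed self.info.header_only()
        self.info.clear()

    def system_requirements(self):
        # Only the apt branch is shown; the real recipe also handles
        # dnf/yum/pacman/zypper.
        apt = package_manager.Apt(self)
        apt.install(["libudev-dev"], update=True, check=True)

    def package_info(self):
        self.cpp_info.includedirs = []
        self.cpp_info.libdirs = []
        # Let pkg-config fill libs, flags and defines instead of the old
        # hand-rolled conans.tools.PkgConfig parsing.
        pkg_config = PkgConfig(self, "libudev")
        pkg_config.fill_cpp_info(self.cpp_info)
```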
gh_patches_debug_36567 | rasdani/github-patches | git_diff | Slicer__ExtensionsIndex-1759 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bad dependencies kill entire extension build
[SlicerVideoCamera name change](https://github.com/Slicer/ExtensionsIndex/commit/93d1942ed51a5c576f477dab77df9529ce788754) introduced this [bug](https://github.com/Slicer/ExtensionsIndex/commit/4181b49933cca4bf1420d1b8f7b54017bbfe131c) where an extension had a non-existent dependency.
The resulting [CMake Error](https://slicer.cdash.org/build/2225046/configure) terminated the whole build process.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/check_description_files.py`
Content:
```
1 #!/usr/bin/env python
2
3 """
4 Python 3.x CLI for validating extension description files.
5 """
6
7 import argparse
8 import os
9 import sys
10 import textwrap
11 import urllib.parse as urlparse
12
13 from functools import wraps
14
15
16 class ExtensionCheckError(RuntimeError):
17 """Exception raised when a particular extension check failed.
18 """
19 def __init__(self, extension_name, check_name, details):
20 self.extension_name = extension_name
21 self.check_name = check_name
22 self.details = details
23
24 def __str__(self):
25 return self.details
26
27
28 def require_metadata_key(metadata_key):
29 check_name = "require_metadata_key"
30
31 def dec(fun):
32 @wraps(fun)
33 def wrapped(*args, **kwargs):
34 extension_name = args[0]
35 metadata = args[1]
36 if metadata_key not in metadata.keys():
37 raise ExtensionCheckError(extension_name, check_name, "%s key is missing" % metadata_key)
38 return fun(*args, **kwargs)
39 return wrapped
40 return dec
41
42
43 def parse_s4ext(ext_file_path):
44 """Parse a Slicer extension description file.
45 :param ext_file_path: Path to a Slicer extension description file.
46 :return: Dictionary of extension metadata.
47 """
48 ext_metadata = {}
49 with open(ext_file_path) as ext_file:
50 for line in ext_file:
51 if not line.strip() or line.startswith("#"):
52 continue
53 fields = [field.strip() for field in line.split(' ', 1)]
54 assert(len(fields) <= 2)
55 ext_metadata[fields[0]] = fields[1] if len(fields) == 2 else None
56 return ext_metadata
57
58
59 @require_metadata_key("scmurl")
60 def check_scmurl_syntax(extension_name, metadata):
61 check_name = "check_scmurl_syntax"
62
63 if "://" not in metadata["scmurl"]:
64 raise ExtensionCheckError(extension_name, check_name, "scmurl do not match scheme://host/path")
65
66 supported_schemes = ["git", "https", "svn"]
67 scheme = urlparse.urlsplit(metadata["scmurl"]).scheme
68 if scheme not in supported_schemes:
69 raise ExtensionCheckError(
70 extension_name, check_name,
71 "scmurl scheme is '%s' but it should by any of %s" % (scheme, supported_schemes))
72
73
74 @require_metadata_key("scmurl")
75 @require_metadata_key("scm")
76 def check_git_repository_name(extension_name, metadata):
77 """See https://www.slicer.org/wiki/Documentation/Nightly/Developers/FAQ#Should_the_name_of_the_source_repository_match_the_name_of_the_extension_.3F
78 """
79 check_name = "check_git_repository_name"
80
81 if metadata["scm"] != "git":
82 return
83
84 repo_name = os.path.splitext(urlparse.urlsplit(metadata["scmurl"]).path.split("/")[-1])[0]
85
86 if not repo_name.startswith("Slicer"):
87
88 variations = [prefix + repo_name for prefix in ["Slicer-", "Slicer_", "SlicerExtension-", "SlicerExtension_"]]
89
90 raise ExtensionCheckError(
91 extension_name, check_name,
92 textwrap.dedent("""
93 extension repository name is '%s'. Please, consider changing it to 'Slicer%s' or any of
94 these variations %s.
95 """ % (
96 repo_name, repo_name, variations)))
97
98
99 def main():
100 parser = argparse.ArgumentParser(
101 description='Validate extension description files.')
102 parser.add_argument(
103 "--check-git-repository-name", action="store_true",
104 help="Check extension git repository name. Disabled by default.")
105 parser.add_argument("/path/to/description.s4ext", nargs='*')
106 args = parser.parse_args()
107
108 checks = []
109
110 if args.check_git_repository_name:
111 checks.append(check_git_repository_name)
112
113 if not checks:
114 checks = [
115 check_scmurl_syntax,
116 ]
117
118 total_failure_count = 0
119
120 file_paths = getattr(args, "/path/to/description.s4ext")
121 for file_path in file_paths:
122 extension_name = os.path.splitext(os.path.basename(file_path))[0]
123
124 failures = []
125
126 metadata = parse_s4ext(file_path)
127 for check in checks:
128 try:
129 check(extension_name, metadata)
130 except ExtensionCheckError as exc:
131 failures.append(str(exc))
132
133 if failures:
134 total_failure_count += len(failures)
135 print("%s.s4ext" % extension_name)
136 for failure in set(failures):
137 print(" %s" % failure)
138
139 print("Checked %d description files: Found %d errors" % (len(file_paths), total_failure_count))
140 sys.exit(total_failure_count)
141
142
143 if __name__ == "__main__":
144 main()
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/check_description_files.py b/scripts/check_description_files.py
--- a/scripts/check_description_files.py
+++ b/scripts/check_description_files.py
@@ -95,6 +95,38 @@
""" % (
repo_name, repo_name, variations)))
+def check_dependencies(directory):
+ import os
+ required_extensions = {} # for each extension it contains a list of extensions that require it
+ available_extensions = []
+ for filename in os.listdir(directory):
+ f = os.path.join(directory, filename)
+ if not os.path.isfile(f):
+ continue
+ extension_name = os.path.splitext(os.path.basename(filename))[0]
+ available_extensions.append(extension_name)
+ extension_description = parse_s4ext(f)
+ if 'depends' not in extension_description:
+ continue
+ dependencies = extension_description['depends'].split(' ')
+ for dependency in dependencies:
+ if dependency == 'NA':
+ # special value, just a placeholder that must be ignored
+ continue
+ if dependency in required_extensions:
+ required_extensions[dependency].append(extension_name)
+ else:
+ required_extensions[dependency] = [extension_name]
+ print(f"Checked dependency between {len(available_extensions)} extensions.")
+ error_count = 0
+ for extension in required_extensions:
+ if extension in available_extensions:
+ # required extension is found
+ continue
+ required_by_extensions = ', '.join(required_extensions[extension])
+ print(f"{extension} extension is not found. It is required by extension: {required_by_extensions}.")
+ error_count += 1
+ return error_count
def main():
parser = argparse.ArgumentParser(
@@ -102,6 +134,7 @@
parser.add_argument(
"--check-git-repository-name", action="store_true",
help="Check extension git repository name. Disabled by default.")
+ parser.add_argument("-d", "--check-dependencies", help="Check all extension dsecription files in the provided folder.")
parser.add_argument("/path/to/description.s4ext", nargs='*')
args = parser.parse_args()
@@ -136,7 +169,13 @@
for failure in set(failures):
print(" %s" % failure)
- print("Checked %d description files: Found %d errors" % (len(file_paths), total_failure_count))
+ print(f"Checked content of {len(file_paths)} description files.")
+
+
+ if args.check_dependencies:
+ total_failure_count += check_dependencies(args.check_dependencies)
+
+ print(f"Total errors found in extension descriptions: {total_failure_count}")
sys.exit(total_failure_count)
| {"golden_diff": "diff --git a/scripts/check_description_files.py b/scripts/check_description_files.py\n--- a/scripts/check_description_files.py\n+++ b/scripts/check_description_files.py\n@@ -95,6 +95,38 @@\n \"\"\" % (\n repo_name, repo_name, variations)))\n \n+def check_dependencies(directory):\n+ import os\n+ required_extensions = {} # for each extension it contains a list of extensions that require it\n+ available_extensions = []\n+ for filename in os.listdir(directory):\n+ f = os.path.join(directory, filename)\n+ if not os.path.isfile(f):\n+ continue\n+ extension_name = os.path.splitext(os.path.basename(filename))[0]\n+ available_extensions.append(extension_name)\n+ extension_description = parse_s4ext(f)\n+ if 'depends' not in extension_description:\n+ continue\n+ dependencies = extension_description['depends'].split(' ')\n+ for dependency in dependencies:\n+ if dependency == 'NA':\n+ # special value, just a placeholder that must be ignored\n+ continue\n+ if dependency in required_extensions:\n+ required_extensions[dependency].append(extension_name)\n+ else:\n+ required_extensions[dependency] = [extension_name]\n+ print(f\"Checked dependency between {len(available_extensions)} extensions.\")\n+ error_count = 0\n+ for extension in required_extensions:\n+ if extension in available_extensions:\n+ # required extension is found\n+ continue\n+ required_by_extensions = ', '.join(required_extensions[extension])\n+ print(f\"{extension} extension is not found. It is required by extension: {required_by_extensions}.\")\n+ error_count += 1\n+ return error_count\n \n def main():\n parser = argparse.ArgumentParser(\n@@ -102,6 +134,7 @@\n parser.add_argument(\n \"--check-git-repository-name\", action=\"store_true\",\n help=\"Check extension git repository name. 
Disabled by default.\")\n+ parser.add_argument(\"-d\", \"--check-dependencies\", help=\"Check all extension dsecription files in the provided folder.\")\n parser.add_argument(\"/path/to/description.s4ext\", nargs='*')\n args = parser.parse_args()\n \n@@ -136,7 +169,13 @@\n for failure in set(failures):\n print(\" %s\" % failure)\n \n- print(\"Checked %d description files: Found %d errors\" % (len(file_paths), total_failure_count))\n+ print(f\"Checked content of {len(file_paths)} description files.\")\n+\n+\n+ if args.check_dependencies:\n+ total_failure_count += check_dependencies(args.check_dependencies)\n+\n+ print(f\"Total errors found in extension descriptions: {total_failure_count}\")\n sys.exit(total_failure_count)\n", "issue": "Bad dependencies kill entire extension build\n[SlicerVideoCamera name change](https://github.com/Slicer/ExtensionsIndex/commit/93d1942ed51a5c576f477dab77df9529ce788754) introduced this [bug](https://github.com/Slicer/ExtensionsIndex/commit/4181b49933cca4bf1420d1b8f7b54017bbfe131c) where an extension had a non-existent dependency.\r\n\r\nResulting [CMake Error](https://slicer.cdash.org/build/2225046/configure) terminated the whole build process.\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\nPython 3.x CLI for validating extension description files.\n\"\"\"\n\nimport argparse\nimport os\nimport sys\nimport textwrap\nimport urllib.parse as urlparse\n\nfrom functools import wraps\n\n\nclass ExtensionCheckError(RuntimeError):\n \"\"\"Exception raised when a particular extension check failed.\n \"\"\"\n def __init__(self, extension_name, check_name, details):\n self.extension_name = extension_name\n self.check_name = check_name\n self.details = details\n\n def __str__(self):\n return self.details\n\n\ndef require_metadata_key(metadata_key):\n check_name = \"require_metadata_key\"\n\n def dec(fun):\n @wraps(fun)\n def wrapped(*args, **kwargs):\n extension_name = args[0]\n metadata = args[1]\n if metadata_key not in metadata.keys():\n raise ExtensionCheckError(extension_name, check_name, \"%s key is missing\" % metadata_key)\n return fun(*args, **kwargs)\n return wrapped\n return dec\n\n\ndef parse_s4ext(ext_file_path):\n \"\"\"Parse a Slicer extension description file.\n :param ext_file_path: Path to a Slicer extension description file.\n :return: Dictionary of extension metadata.\n \"\"\"\n ext_metadata = {}\n with open(ext_file_path) as ext_file:\n for line in ext_file:\n if not line.strip() or line.startswith(\"#\"):\n continue\n fields = [field.strip() for field in line.split(' ', 1)]\n assert(len(fields) <= 2)\n ext_metadata[fields[0]] = fields[1] if len(fields) == 2 else None\n return ext_metadata\n\n\n@require_metadata_key(\"scmurl\")\ndef check_scmurl_syntax(extension_name, metadata):\n check_name = \"check_scmurl_syntax\"\n\n if \"://\" not in metadata[\"scmurl\"]:\n raise ExtensionCheckError(extension_name, check_name, \"scmurl do not match scheme://host/path\")\n\n supported_schemes = [\"git\", \"https\", \"svn\"]\n scheme = urlparse.urlsplit(metadata[\"scmurl\"]).scheme\n if scheme not in supported_schemes:\n raise ExtensionCheckError(\n extension_name, check_name,\n \"scmurl scheme is '%s' but it should by any of %s\" % (scheme, supported_schemes))\n\n\n@require_metadata_key(\"scmurl\")\n@require_metadata_key(\"scm\")\ndef check_git_repository_name(extension_name, metadata):\n \"\"\"See https://www.slicer.org/wiki/Documentation/Nightly/Developers/FAQ#Should_the_name_of_the_source_repository_match_the_name_of_the_extension_.3F\n 
\"\"\"\n check_name = \"check_git_repository_name\"\n\n if metadata[\"scm\"] != \"git\":\n return\n\n repo_name = os.path.splitext(urlparse.urlsplit(metadata[\"scmurl\"]).path.split(\"/\")[-1])[0]\n\n if not repo_name.startswith(\"Slicer\"):\n\n variations = [prefix + repo_name for prefix in [\"Slicer-\", \"Slicer_\", \"SlicerExtension-\", \"SlicerExtension_\"]]\n\n raise ExtensionCheckError(\n extension_name, check_name,\n textwrap.dedent(\"\"\"\n extension repository name is '%s'. Please, consider changing it to 'Slicer%s' or any of\n these variations %s.\n \"\"\" % (\n repo_name, repo_name, variations)))\n\n\ndef main():\n parser = argparse.ArgumentParser(\n description='Validate extension description files.')\n parser.add_argument(\n \"--check-git-repository-name\", action=\"store_true\",\n help=\"Check extension git repository name. Disabled by default.\")\n parser.add_argument(\"/path/to/description.s4ext\", nargs='*')\n args = parser.parse_args()\n\n checks = []\n\n if args.check_git_repository_name:\n checks.append(check_git_repository_name)\n\n if not checks:\n checks = [\n check_scmurl_syntax,\n ]\n\n total_failure_count = 0\n\n file_paths = getattr(args, \"/path/to/description.s4ext\")\n for file_path in file_paths:\n extension_name = os.path.splitext(os.path.basename(file_path))[0]\n\n failures = []\n \n metadata = parse_s4ext(file_path)\n for check in checks:\n try:\n check(extension_name, metadata)\n except ExtensionCheckError as exc:\n failures.append(str(exc))\n\n if failures:\n total_failure_count += len(failures)\n print(\"%s.s4ext\" % extension_name)\n for failure in set(failures):\n print(\" %s\" % failure)\n\n print(\"Checked %d description files: Found %d errors\" % (len(file_paths), total_failure_count))\n sys.exit(total_failure_count)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "scripts/check_description_files.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\nPython 3.x CLI for validating extension description files.\n\"\"\"\n\nimport argparse\nimport os\nimport sys\nimport textwrap\nimport urllib.parse as urlparse\n\nfrom functools import wraps\n\n\nclass ExtensionCheckError(RuntimeError):\n \"\"\"Exception raised when a particular extension check failed.\n \"\"\"\n def __init__(self, extension_name, check_name, details):\n self.extension_name = extension_name\n self.check_name = check_name\n self.details = details\n\n def __str__(self):\n return self.details\n\n\ndef require_metadata_key(metadata_key):\n check_name = \"require_metadata_key\"\n\n def dec(fun):\n @wraps(fun)\n def wrapped(*args, **kwargs):\n extension_name = args[0]\n metadata = args[1]\n if metadata_key not in metadata.keys():\n raise ExtensionCheckError(extension_name, check_name, \"%s key is missing\" % metadata_key)\n return fun(*args, **kwargs)\n return wrapped\n return dec\n\n\ndef parse_s4ext(ext_file_path):\n \"\"\"Parse a Slicer extension description file.\n :param ext_file_path: Path to a Slicer extension description file.\n :return: Dictionary of extension metadata.\n \"\"\"\n ext_metadata = {}\n with open(ext_file_path) as ext_file:\n for line in ext_file:\n if not line.strip() or line.startswith(\"#\"):\n continue\n fields = [field.strip() for field in line.split(' ', 1)]\n assert(len(fields) <= 2)\n ext_metadata[fields[0]] = fields[1] if len(fields) == 2 else None\n return ext_metadata\n\n\n@require_metadata_key(\"scmurl\")\ndef check_scmurl_syntax(extension_name, metadata):\n check_name = \"check_scmurl_syntax\"\n\n if \"://\" not in 
metadata[\"scmurl\"]:\n raise ExtensionCheckError(extension_name, check_name, \"scmurl do not match scheme://host/path\")\n\n supported_schemes = [\"git\", \"https\", \"svn\"]\n scheme = urlparse.urlsplit(metadata[\"scmurl\"]).scheme\n if scheme not in supported_schemes:\n raise ExtensionCheckError(\n extension_name, check_name,\n \"scmurl scheme is '%s' but it should by any of %s\" % (scheme, supported_schemes))\n\n\n@require_metadata_key(\"scmurl\")\n@require_metadata_key(\"scm\")\ndef check_git_repository_name(extension_name, metadata):\n \"\"\"See https://www.slicer.org/wiki/Documentation/Nightly/Developers/FAQ#Should_the_name_of_the_source_repository_match_the_name_of_the_extension_.3F\n \"\"\"\n check_name = \"check_git_repository_name\"\n\n if metadata[\"scm\"] != \"git\":\n return\n\n repo_name = os.path.splitext(urlparse.urlsplit(metadata[\"scmurl\"]).path.split(\"/\")[-1])[0]\n\n if not repo_name.startswith(\"Slicer\"):\n\n variations = [prefix + repo_name for prefix in [\"Slicer-\", \"Slicer_\", \"SlicerExtension-\", \"SlicerExtension_\"]]\n\n raise ExtensionCheckError(\n extension_name, check_name,\n textwrap.dedent(\"\"\"\n extension repository name is '%s'. Please, consider changing it to 'Slicer%s' or any of\n these variations %s.\n \"\"\" % (\n repo_name, repo_name, variations)))\n\ndef check_dependencies(directory):\n import os\n required_extensions = {} # for each extension it contains a list of extensions that require it\n available_extensions = []\n for filename in os.listdir(directory):\n f = os.path.join(directory, filename)\n if not os.path.isfile(f):\n continue\n extension_name = os.path.splitext(os.path.basename(filename))[0]\n available_extensions.append(extension_name)\n extension_description = parse_s4ext(f)\n if 'depends' not in extension_description:\n continue\n dependencies = extension_description['depends'].split(' ')\n for dependency in dependencies:\n if dependency == 'NA':\n # special value, just a placeholder that must be ignored\n continue\n if dependency in required_extensions:\n required_extensions[dependency].append(extension_name)\n else:\n required_extensions[dependency] = [extension_name]\n print(f\"Checked dependency between {len(available_extensions)} extensions.\")\n error_count = 0\n for extension in required_extensions:\n if extension in available_extensions:\n # required extension is found\n continue\n required_by_extensions = ', '.join(required_extensions[extension])\n print(f\"{extension} extension is not found. It is required by extension: {required_by_extensions}.\")\n error_count += 1\n return error_count\n\ndef main():\n parser = argparse.ArgumentParser(\n description='Validate extension description files.')\n parser.add_argument(\n \"--check-git-repository-name\", action=\"store_true\",\n help=\"Check extension git repository name. 
Disabled by default.\")\n parser.add_argument(\"-d\", \"--check-dependencies\", help=\"Check all extension dsecription files in the provided folder.\")\n parser.add_argument(\"/path/to/description.s4ext\", nargs='*')\n args = parser.parse_args()\n\n checks = []\n\n if args.check_git_repository_name:\n checks.append(check_git_repository_name)\n\n if not checks:\n checks = [\n check_scmurl_syntax,\n ]\n\n total_failure_count = 0\n\n file_paths = getattr(args, \"/path/to/description.s4ext\")\n for file_path in file_paths:\n extension_name = os.path.splitext(os.path.basename(file_path))[0]\n\n failures = []\n \n metadata = parse_s4ext(file_path)\n for check in checks:\n try:\n check(extension_name, metadata)\n except ExtensionCheckError as exc:\n failures.append(str(exc))\n\n if failures:\n total_failure_count += len(failures)\n print(\"%s.s4ext\" % extension_name)\n for failure in set(failures):\n print(\" %s\" % failure)\n\n print(f\"Checked content of {len(file_paths)} description files.\")\n\n\n if args.check_dependencies:\n total_failure_count += check_dependencies(args.check_dependencies)\n\n print(f\"Total errors found in extension descriptions: {total_failure_count}\")\n sys.exit(total_failure_count)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "scripts/check_description_files.py"}]} | 1,774 | 591 |
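The dependency check added in the record above reduces to building a reverse map from each declared dependency to the extensions that require it, then flagging dependencies that are not themselves indexed. Here is a minimal sketch of that idea over already-parsed description dictionaries; the extension names and `depends` values are made up for illustration, with `NA` as the ignored placeholder used in real `.s4ext` files.

```python
# Parsed s4ext metadata keyed by extension name; 'depends' is a space-separated
# list, with 'NA' used as an ignored placeholder (as in the real files).
descriptions = {
    "SlicerIGT": {"depends": "NA"},
    "SlicerVideoCameras": {"depends": "NA"},
    "SlicerIGSIO": {"depends": "SlicerVideoCamera"},  # stale name -> error
}

required_by = {}  # dependency -> list of extensions that need it
for name, metadata in descriptions.items():
    for dependency in metadata.get("depends", "").split():
        if dependency == "NA":
            continue
        required_by.setdefault(dependency, []).append(name)

errors = 0
for dependency, users in required_by.items():
    if dependency not in descriptions:
        print(f"{dependency} is not in the index; required by: {', '.join(users)}")
        errors += 1

# With the data above this reports the missing 'SlicerVideoCamera' dependency
# instead of letting one bad entry break the whole extension build.
```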
gh_patches_debug_32378 | rasdani/github-patches | git_diff | optuna__optuna-4684 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove experimental label from `_ProgressBar`
### Motivation
Several issues related to `_ProgressBar` have already been addressed (ref: https://github.com/optuna/optuna/issues/2892, https://github.com/optuna/optuna/issues/2957, https://github.com/optuna/optuna/issues/2958). Now we can remove the experimental label from `_ProgressBar`.
### Suggestion
Remove the `@experimental_func` decorator from `_ProgressBar`. Also, the `_init_valid` method can be removed, as explained in the [TODO comment](https://github.com/optuna/optuna/blob/806448420863606c113aeb2e33457acf022be066/optuna/progress_bar.py#L57C28-L58).
### Additional context (optional)
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optuna/progress_bar.py`
Content:
```
1 import logging
2 from typing import Any
3 from typing import Optional
4 from typing import TYPE_CHECKING
5
6 from tqdm.auto import tqdm
7
8 from optuna import logging as optuna_logging
9 from optuna._experimental import experimental_func
10
11
12 if TYPE_CHECKING:
13 from optuna.study import Study
14
15 _tqdm_handler: Optional["_TqdmLoggingHandler"] = None
16
17
18 # Reference: https://gist.github.com/hvy/8b80c2cedf02b15c24f85d1fa17ebe02
19 class _TqdmLoggingHandler(logging.StreamHandler):
20 def emit(self, record: Any) -> None:
21 try:
22 msg = self.format(record)
23 tqdm.write(msg)
24 self.flush()
25 except (KeyboardInterrupt, SystemExit):
26 raise
27 except Exception:
28 self.handleError(record)
29
30
31 class _ProgressBar:
32 """Progress Bar implementation for :func:`~optuna.study.Study.optimize` on the top of `tqdm`.
33
34 Args:
35 is_valid:
36 Whether to show progress bars in :func:`~optuna.study.Study.optimize`.
37 n_trials:
38 The number of trials.
39 timeout:
40 Stop study after the given number of second(s).
41 """
42
43 def __init__(
44 self,
45 is_valid: bool,
46 n_trials: Optional[int] = None,
47 timeout: Optional[float] = None,
48 ) -> None:
49 self._is_valid = is_valid and (n_trials or timeout) is not None
50 self._n_trials = n_trials
51 self._timeout = timeout
52 self._last_elapsed_seconds = 0.0
53
54 if self._is_valid:
55 self._init_valid()
56
57 # TODO(hvy): Remove initialization indirection via this method when the progress bar is no
58 # longer experimental.
59 @experimental_func("1.2.0", name="Progress bar")
60 def _init_valid(self) -> None:
61 if self._n_trials is not None:
62 self._progress_bar = tqdm(total=self._n_trials)
63
64 elif self._timeout is not None:
65 total = tqdm.format_interval(self._timeout)
66 fmt = "{desc} {percentage:3.0f}%|{bar}| {elapsed}/" + total
67 self._progress_bar = tqdm(total=self._timeout, bar_format=fmt)
68 else:
69 assert False
70
71 global _tqdm_handler
72
73 _tqdm_handler = _TqdmLoggingHandler()
74 _tqdm_handler.setLevel(logging.INFO)
75 _tqdm_handler.setFormatter(optuna_logging.create_default_formatter())
76 optuna_logging.disable_default_handler()
77 optuna_logging._get_library_root_logger().addHandler(_tqdm_handler)
78
79 def update(self, elapsed_seconds: float, study: "Study") -> None:
80 """Update the progress bars if ``is_valid`` is :obj:`True`.
81
82 Args:
83 elapsed_seconds:
84 The time past since :func:`~optuna.study.Study.optimize` started.
85 study:
86 The current study object.
87 """
88
89 if self._is_valid:
90 if not study._is_multi_objective():
91 # Not updating the progress bar when there are no complete trial.
92 try:
93 msg = (
94 f"Best trial: {study.best_trial.number}. "
95 f"Best value: {study.best_value:.6g}"
96 )
97
98 self._progress_bar.set_description(msg)
99 except ValueError:
100 pass
101
102 if self._n_trials is not None:
103 self._progress_bar.update(1)
104 if self._timeout is not None:
105 self._progress_bar.set_postfix_str(
106 "{:.02f}/{} seconds".format(elapsed_seconds, self._timeout)
107 )
108
109 elif self._timeout is not None:
110 time_diff = elapsed_seconds - self._last_elapsed_seconds
111 if elapsed_seconds > self._timeout:
112 # Clip elapsed time to avoid tqdm warnings.
113 time_diff -= elapsed_seconds - self._timeout
114
115 self._progress_bar.update(time_diff)
116 self._last_elapsed_seconds = elapsed_seconds
117
118 else:
119 assert False
120
121 def close(self) -> None:
122 """Close progress bars."""
123
124 if self._is_valid:
125 self._progress_bar.close()
126 assert _tqdm_handler is not None
127 optuna_logging._get_library_root_logger().removeHandler(_tqdm_handler)
128 optuna_logging.enable_default_handler()
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/optuna/progress_bar.py b/optuna/progress_bar.py
--- a/optuna/progress_bar.py
+++ b/optuna/progress_bar.py
@@ -6,7 +6,6 @@
from tqdm.auto import tqdm
from optuna import logging as optuna_logging
-from optuna._experimental import experimental_func
if TYPE_CHECKING:
@@ -52,29 +51,22 @@
self._last_elapsed_seconds = 0.0
if self._is_valid:
- self._init_valid()
-
- # TODO(hvy): Remove initialization indirection via this method when the progress bar is no
- # longer experimental.
- @experimental_func("1.2.0", name="Progress bar")
- def _init_valid(self) -> None:
- if self._n_trials is not None:
- self._progress_bar = tqdm(total=self._n_trials)
-
- elif self._timeout is not None:
- total = tqdm.format_interval(self._timeout)
- fmt = "{desc} {percentage:3.0f}%|{bar}| {elapsed}/" + total
- self._progress_bar = tqdm(total=self._timeout, bar_format=fmt)
- else:
- assert False
-
- global _tqdm_handler
-
- _tqdm_handler = _TqdmLoggingHandler()
- _tqdm_handler.setLevel(logging.INFO)
- _tqdm_handler.setFormatter(optuna_logging.create_default_formatter())
- optuna_logging.disable_default_handler()
- optuna_logging._get_library_root_logger().addHandler(_tqdm_handler)
+ if self._n_trials is not None:
+ self._progress_bar = tqdm(total=self._n_trials)
+ elif self._timeout is not None:
+ total = tqdm.format_interval(self._timeout)
+ fmt = "{desc} {percentage:3.0f}%|{bar}| {elapsed}/" + total
+ self._progress_bar = tqdm(total=self._timeout, bar_format=fmt)
+ else:
+ assert False
+
+ global _tqdm_handler
+
+ _tqdm_handler = _TqdmLoggingHandler()
+ _tqdm_handler.setLevel(logging.INFO)
+ _tqdm_handler.setFormatter(optuna_logging.create_default_formatter())
+ optuna_logging.disable_default_handler()
+ optuna_logging._get_library_root_logger().addHandler(_tqdm_handler)
def update(self, elapsed_seconds: float, study: "Study") -> None:
"""Update the progress bars if ``is_valid`` is :obj:`True`.
| {"golden_diff": "diff --git a/optuna/progress_bar.py b/optuna/progress_bar.py\n--- a/optuna/progress_bar.py\n+++ b/optuna/progress_bar.py\n@@ -6,7 +6,6 @@\n from tqdm.auto import tqdm\n \n from optuna import logging as optuna_logging\n-from optuna._experimental import experimental_func\n \n \n if TYPE_CHECKING:\n@@ -52,29 +51,22 @@\n self._last_elapsed_seconds = 0.0\n \n if self._is_valid:\n- self._init_valid()\n-\n- # TODO(hvy): Remove initialization indirection via this method when the progress bar is no\n- # longer experimental.\n- @experimental_func(\"1.2.0\", name=\"Progress bar\")\n- def _init_valid(self) -> None:\n- if self._n_trials is not None:\n- self._progress_bar = tqdm(total=self._n_trials)\n-\n- elif self._timeout is not None:\n- total = tqdm.format_interval(self._timeout)\n- fmt = \"{desc} {percentage:3.0f}%|{bar}| {elapsed}/\" + total\n- self._progress_bar = tqdm(total=self._timeout, bar_format=fmt)\n- else:\n- assert False\n-\n- global _tqdm_handler\n-\n- _tqdm_handler = _TqdmLoggingHandler()\n- _tqdm_handler.setLevel(logging.INFO)\n- _tqdm_handler.setFormatter(optuna_logging.create_default_formatter())\n- optuna_logging.disable_default_handler()\n- optuna_logging._get_library_root_logger().addHandler(_tqdm_handler)\n+ if self._n_trials is not None:\n+ self._progress_bar = tqdm(total=self._n_trials)\n+ elif self._timeout is not None:\n+ total = tqdm.format_interval(self._timeout)\n+ fmt = \"{desc} {percentage:3.0f}%|{bar}| {elapsed}/\" + total\n+ self._progress_bar = tqdm(total=self._timeout, bar_format=fmt)\n+ else:\n+ assert False\n+\n+ global _tqdm_handler\n+\n+ _tqdm_handler = _TqdmLoggingHandler()\n+ _tqdm_handler.setLevel(logging.INFO)\n+ _tqdm_handler.setFormatter(optuna_logging.create_default_formatter())\n+ optuna_logging.disable_default_handler()\n+ optuna_logging._get_library_root_logger().addHandler(_tqdm_handler)\n \n def update(self, elapsed_seconds: float, study: \"Study\") -> None:\n \"\"\"Update the progress bars if ``is_valid`` is :obj:`True`.\n", "issue": "Remove experimental label from `_ProgressBar`\n### Motivation\n\nSeveral issues related to `_ProgressBar` have been already addressed (ref: https://github.com/optuna/optuna/issues/2892, https://github.com/optuna/optuna/issues/2957, https://github.com/optuna/optuna/issues/2958). Now we can remove the experimental label from `_ProgressBar`.\n\n### Suggestion\n\nRemove the `@experimental_func` decorator from `_ProgressBar`. 
Also, `_init_valid` method can be removed as explained in [TODO comment](https://github.com/optuna/optuna/blob/806448420863606c113aeb2e33457acf022be066/optuna/progress_bar.py#L57C28-L58).\n\n### Additional context (optional)\n\n_No response_\n", "before_files": [{"content": "import logging\nfrom typing import Any\nfrom typing import Optional\nfrom typing import TYPE_CHECKING\n\nfrom tqdm.auto import tqdm\n\nfrom optuna import logging as optuna_logging\nfrom optuna._experimental import experimental_func\n\n\nif TYPE_CHECKING:\n from optuna.study import Study\n\n_tqdm_handler: Optional[\"_TqdmLoggingHandler\"] = None\n\n\n# Reference: https://gist.github.com/hvy/8b80c2cedf02b15c24f85d1fa17ebe02\nclass _TqdmLoggingHandler(logging.StreamHandler):\n def emit(self, record: Any) -> None:\n try:\n msg = self.format(record)\n tqdm.write(msg)\n self.flush()\n except (KeyboardInterrupt, SystemExit):\n raise\n except Exception:\n self.handleError(record)\n\n\nclass _ProgressBar:\n \"\"\"Progress Bar implementation for :func:`~optuna.study.Study.optimize` on the top of `tqdm`.\n\n Args:\n is_valid:\n Whether to show progress bars in :func:`~optuna.study.Study.optimize`.\n n_trials:\n The number of trials.\n timeout:\n Stop study after the given number of second(s).\n \"\"\"\n\n def __init__(\n self,\n is_valid: bool,\n n_trials: Optional[int] = None,\n timeout: Optional[float] = None,\n ) -> None:\n self._is_valid = is_valid and (n_trials or timeout) is not None\n self._n_trials = n_trials\n self._timeout = timeout\n self._last_elapsed_seconds = 0.0\n\n if self._is_valid:\n self._init_valid()\n\n # TODO(hvy): Remove initialization indirection via this method when the progress bar is no\n # longer experimental.\n @experimental_func(\"1.2.0\", name=\"Progress bar\")\n def _init_valid(self) -> None:\n if self._n_trials is not None:\n self._progress_bar = tqdm(total=self._n_trials)\n\n elif self._timeout is not None:\n total = tqdm.format_interval(self._timeout)\n fmt = \"{desc} {percentage:3.0f}%|{bar}| {elapsed}/\" + total\n self._progress_bar = tqdm(total=self._timeout, bar_format=fmt)\n else:\n assert False\n\n global _tqdm_handler\n\n _tqdm_handler = _TqdmLoggingHandler()\n _tqdm_handler.setLevel(logging.INFO)\n _tqdm_handler.setFormatter(optuna_logging.create_default_formatter())\n optuna_logging.disable_default_handler()\n optuna_logging._get_library_root_logger().addHandler(_tqdm_handler)\n\n def update(self, elapsed_seconds: float, study: \"Study\") -> None:\n \"\"\"Update the progress bars if ``is_valid`` is :obj:`True`.\n\n Args:\n elapsed_seconds:\n The time past since :func:`~optuna.study.Study.optimize` started.\n study:\n The current study object.\n \"\"\"\n\n if self._is_valid:\n if not study._is_multi_objective():\n # Not updating the progress bar when there are no complete trial.\n try:\n msg = (\n f\"Best trial: {study.best_trial.number}. 
\"\n f\"Best value: {study.best_value:.6g}\"\n )\n\n self._progress_bar.set_description(msg)\n except ValueError:\n pass\n\n if self._n_trials is not None:\n self._progress_bar.update(1)\n if self._timeout is not None:\n self._progress_bar.set_postfix_str(\n \"{:.02f}/{} seconds\".format(elapsed_seconds, self._timeout)\n )\n\n elif self._timeout is not None:\n time_diff = elapsed_seconds - self._last_elapsed_seconds\n if elapsed_seconds > self._timeout:\n # Clip elapsed time to avoid tqdm warnings.\n time_diff -= elapsed_seconds - self._timeout\n\n self._progress_bar.update(time_diff)\n self._last_elapsed_seconds = elapsed_seconds\n\n else:\n assert False\n\n def close(self) -> None:\n \"\"\"Close progress bars.\"\"\"\n\n if self._is_valid:\n self._progress_bar.close()\n assert _tqdm_handler is not None\n optuna_logging._get_library_root_logger().removeHandler(_tqdm_handler)\n optuna_logging.enable_default_handler()\n", "path": "optuna/progress_bar.py"}], "after_files": [{"content": "import logging\nfrom typing import Any\nfrom typing import Optional\nfrom typing import TYPE_CHECKING\n\nfrom tqdm.auto import tqdm\n\nfrom optuna import logging as optuna_logging\n\n\nif TYPE_CHECKING:\n from optuna.study import Study\n\n_tqdm_handler: Optional[\"_TqdmLoggingHandler\"] = None\n\n\n# Reference: https://gist.github.com/hvy/8b80c2cedf02b15c24f85d1fa17ebe02\nclass _TqdmLoggingHandler(logging.StreamHandler):\n def emit(self, record: Any) -> None:\n try:\n msg = self.format(record)\n tqdm.write(msg)\n self.flush()\n except (KeyboardInterrupt, SystemExit):\n raise\n except Exception:\n self.handleError(record)\n\n\nclass _ProgressBar:\n \"\"\"Progress Bar implementation for :func:`~optuna.study.Study.optimize` on the top of `tqdm`.\n\n Args:\n is_valid:\n Whether to show progress bars in :func:`~optuna.study.Study.optimize`.\n n_trials:\n The number of trials.\n timeout:\n Stop study after the given number of second(s).\n \"\"\"\n\n def __init__(\n self,\n is_valid: bool,\n n_trials: Optional[int] = None,\n timeout: Optional[float] = None,\n ) -> None:\n self._is_valid = is_valid and (n_trials or timeout) is not None\n self._n_trials = n_trials\n self._timeout = timeout\n self._last_elapsed_seconds = 0.0\n\n if self._is_valid:\n if self._n_trials is not None:\n self._progress_bar = tqdm(total=self._n_trials)\n elif self._timeout is not None:\n total = tqdm.format_interval(self._timeout)\n fmt = \"{desc} {percentage:3.0f}%|{bar}| {elapsed}/\" + total\n self._progress_bar = tqdm(total=self._timeout, bar_format=fmt)\n else:\n assert False\n\n global _tqdm_handler\n\n _tqdm_handler = _TqdmLoggingHandler()\n _tqdm_handler.setLevel(logging.INFO)\n _tqdm_handler.setFormatter(optuna_logging.create_default_formatter())\n optuna_logging.disable_default_handler()\n optuna_logging._get_library_root_logger().addHandler(_tqdm_handler)\n\n def update(self, elapsed_seconds: float, study: \"Study\") -> None:\n \"\"\"Update the progress bars if ``is_valid`` is :obj:`True`.\n\n Args:\n elapsed_seconds:\n The time past since :func:`~optuna.study.Study.optimize` started.\n study:\n The current study object.\n \"\"\"\n\n if self._is_valid:\n if not study._is_multi_objective():\n # Not updating the progress bar when there are no complete trial.\n try:\n msg = (\n f\"Best trial: {study.best_trial.number}. 
\"\n f\"Best value: {study.best_value:.6g}\"\n )\n\n self._progress_bar.set_description(msg)\n except ValueError:\n pass\n\n if self._n_trials is not None:\n self._progress_bar.update(1)\n if self._timeout is not None:\n self._progress_bar.set_postfix_str(\n \"{:.02f}/{} seconds\".format(elapsed_seconds, self._timeout)\n )\n\n elif self._timeout is not None:\n time_diff = elapsed_seconds - self._last_elapsed_seconds\n if elapsed_seconds > self._timeout:\n # Clip elapsed time to avoid tqdm warnings.\n time_diff -= elapsed_seconds - self._timeout\n\n self._progress_bar.update(time_diff)\n self._last_elapsed_seconds = elapsed_seconds\n\n else:\n assert False\n\n def close(self) -> None:\n \"\"\"Close progress bars.\"\"\"\n\n if self._is_valid:\n self._progress_bar.close()\n assert _tqdm_handler is not None\n optuna_logging._get_library_root_logger().removeHandler(_tqdm_handler)\n optuna_logging.enable_default_handler()\n", "path": "optuna/progress_bar.py"}]} | 1,676 | 570 |
gh_patches_debug_20648 | rasdani/github-patches | git_diff | microsoft__ptvsd-1253 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PTVSD_LOG_DIR doesn't work with VS
No logs are generated even with the environment variable set. It looks like logging initialization is missing on the VS entry point (`debugger.py`).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/ptvsd/debugger.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 import sys
6
7 from ptvsd._local import run_module, run_file, run_main
8
9
10 # TODO: not needed?
11 DONT_DEBUG = []
12
13 LOCALHOST = 'localhost'
14
15 RUNNERS = {
16 'module': run_module, # python -m spam
17 'script': run_file, # python spam.py
18 'code': run_file, # python -c 'print("spam")'
19 None: run_file, # catchall
20 }
21
22
23 def debug(filename, port_num, debug_id, debug_options, run_as,
24 _runners=RUNNERS, _extra=None, *args, **kwargs):
25 # TODO: docstring
26 if _extra is None:
27 _extra = sys.argv[1:]
28 address = (LOCALHOST, port_num)
29 try:
30 run = _runners[run_as]
31 except KeyError:
32 # TODO: fail?
33 run = _runners[None]
34 if _extra:
35 args = _extra + list(args)
36 kwargs.setdefault('singlesession', True)
37 run(address, filename, *args, **kwargs)
38
39
40 def run(filename, port_num, run_as,
41 *args, **kwargs):
42 address = (LOCALHOST, port_num)
43 run_main(address, filename, run_as, *args, **kwargs)
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/ptvsd/debugger.py b/src/ptvsd/debugger.py
--- a/src/ptvsd/debugger.py
+++ b/src/ptvsd/debugger.py
@@ -4,6 +4,7 @@
import sys
+import ptvsd.log
from ptvsd._local import run_module, run_file, run_main
@@ -22,7 +23,10 @@
def debug(filename, port_num, debug_id, debug_options, run_as,
_runners=RUNNERS, _extra=None, *args, **kwargs):
- # TODO: docstring
+
+ ptvsd.log.to_file()
+ ptvsd.log.info('debug{0!r}', (filename, port_num, debug_id, debug_options, run_as))
+
if _extra is None:
_extra = sys.argv[1:]
address = (LOCALHOST, port_num)
@@ -39,5 +43,9 @@
def run(filename, port_num, run_as,
*args, **kwargs):
+
+ ptvsd.log.to_file()
+ ptvsd.log.info('run{0!r}', (filename, port_num, run_as))
+
address = (LOCALHOST, port_num)
run_main(address, filename, run_as, *args, **kwargs)
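
For illustration only, a minimal regression-test sketch (not part of the record above) showing how the patched entry point could be exercised. It assumes pytest-style collection and `unittest.mock`; the fake runner passed through the existing `_runners` hook avoids starting a real debug session, and only `ptvsd.debugger.debug` and `ptvsd.log.to_file` come from the files and diff shown here.

```python
from unittest import mock

import ptvsd.debugger as debugger


def test_debug_initializes_logging_before_running():
    # Sketch: patch the logging helpers so no files are written, and swap in a
    # fake runner so no debug session is actually launched.
    fake_run = mock.Mock()
    with mock.patch("ptvsd.log.to_file") as to_file, mock.patch("ptvsd.log.info"):
        debugger.debug("spam.py", 5678, "debug-id", [], "script",
                       _runners={"script": fake_run}, _extra=[])
    assert to_file.called   # logging is set up before the program runs
    assert fake_run.called  # the runner is still invoked as before
```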
| {"golden_diff": "diff --git a/src/ptvsd/debugger.py b/src/ptvsd/debugger.py\n--- a/src/ptvsd/debugger.py\n+++ b/src/ptvsd/debugger.py\n@@ -4,6 +4,7 @@\n \n import sys\n \n+import ptvsd.log\n from ptvsd._local import run_module, run_file, run_main\n \n \n@@ -22,7 +23,10 @@\n \n def debug(filename, port_num, debug_id, debug_options, run_as,\n _runners=RUNNERS, _extra=None, *args, **kwargs):\n- # TODO: docstring\n+\n+ ptvsd.log.to_file()\n+ ptvsd.log.info('debug{0!r}', (filename, port_num, debug_id, debug_options, run_as))\n+\n if _extra is None:\n _extra = sys.argv[1:]\n address = (LOCALHOST, port_num)\n@@ -39,5 +43,9 @@\n \n def run(filename, port_num, run_as,\n *args, **kwargs):\n+\n+ ptvsd.log.to_file()\n+ ptvsd.log.info('run{0!r}', (filename, port_num, run_as))\n+\n address = (LOCALHOST, port_num)\n run_main(address, filename, run_as, *args, **kwargs)\n", "issue": "PTVSD_LOG_DIR doesn't work with VS\nNo logs are generated even with the environment variable set. It looks like logging initialization is missing on the VS entry point (`debugger.py`).\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport sys\n\nfrom ptvsd._local import run_module, run_file, run_main\n\n\n# TODO: not needed?\nDONT_DEBUG = []\n\nLOCALHOST = 'localhost'\n\nRUNNERS = {\n 'module': run_module, # python -m spam\n 'script': run_file, # python spam.py\n 'code': run_file, # python -c 'print(\"spam\")'\n None: run_file, # catchall\n}\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as,\n _runners=RUNNERS, _extra=None, *args, **kwargs):\n # TODO: docstring\n if _extra is None:\n _extra = sys.argv[1:]\n address = (LOCALHOST, port_num)\n try:\n run = _runners[run_as]\n except KeyError:\n # TODO: fail?\n run = _runners[None]\n if _extra:\n args = _extra + list(args)\n kwargs.setdefault('singlesession', True)\n run(address, filename, *args, **kwargs)\n\n\ndef run(filename, port_num, run_as,\n *args, **kwargs):\n address = (LOCALHOST, port_num)\n run_main(address, filename, run_as, *args, **kwargs)\n", "path": "src/ptvsd/debugger.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport sys\n\nimport ptvsd.log\nfrom ptvsd._local import run_module, run_file, run_main\n\n\n# TODO: not needed?\nDONT_DEBUG = []\n\nLOCALHOST = 'localhost'\n\nRUNNERS = {\n 'module': run_module, # python -m spam\n 'script': run_file, # python spam.py\n 'code': run_file, # python -c 'print(\"spam\")'\n None: run_file, # catchall\n}\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as,\n _runners=RUNNERS, _extra=None, *args, **kwargs):\n\n ptvsd.log.to_file()\n ptvsd.log.info('debug{0!r}', (filename, port_num, debug_id, debug_options, run_as))\n\n if _extra is None:\n _extra = sys.argv[1:]\n address = (LOCALHOST, port_num)\n try:\n run = _runners[run_as]\n except KeyError:\n # TODO: fail?\n run = _runners[None]\n if _extra:\n args = _extra + list(args)\n kwargs.setdefault('singlesession', True)\n run(address, filename, *args, **kwargs)\n\n\ndef run(filename, port_num, run_as,\n *args, **kwargs):\n\n ptvsd.log.to_file()\n ptvsd.log.info('run{0!r}', (filename, port_num, run_as))\n\n address = (LOCALHOST, port_num)\n run_main(address, filename, run_as, *args, **kwargs)\n", "path": "src/ptvsd/debugger.py"}]} | 699 | 295 |
gh_patches_debug_36977 | rasdani/github-patches | git_diff | bridgecrewio__checkov-4942 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Failed to run check CKV_AWS_224: TemplateAttributeError: get is invalid
**Describe the issue**
Error occurs when checking an ECS Cluster using the terraform_plan framework.
**Examples**
```
module "cluster" {
source = "terraform-aws-modules/ecs/aws"
version = "4.1.3"
cluster_name = "foo"
fargate_capacity_providers = {
FARGATE = {}
}
}
```
**Version (please complete the following information):**
- checkov 2.3.165
- terraform 1.4.5
- aws provider 4.63.0
**Additional context**
traceback:
```
2023-04-18 09:53:09,676 [MainThread ] [ERROR] Failed to run check CKV_AWS_224 on /tfplan.json:aws_ecs_cluster.this
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/checkov/common/checks/base_check.py", line 73, in run
check_result["result"] = self.scan_entity_conf(entity_configuration, entity_type)
File "/usr/local/lib/python3.9/site-packages/checkov/terraform/checks/resource/base_resource_check.py", line 43, in scan_entity_conf
return self.scan_resource_conf(conf)
File "/usr/local/lib/python3.9/site-packages/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py", line 21, in scan_resource_conf
if log_conf.get('cloud_watch_encryption_enabled') == [True] or \
File "/usr/local/lib/python3.9/site-packages/checkov/common/parsers/node.py", line 189, in __getattr__
raise TemplateAttributeError(f'{name} is invalid')
checkov.common.parsers.node.TemplateAttributeError: get is invalid
```
This only occurs when using terraform_plan framework. It works without issue when using vanilla terraform framework.
The plan generation is just `terraform plan -out tfplan.bin && terraform show -json tfplan.bin > tfplan.json`, then running `checkov -f tfplan.json`.
Here is my checkov config file in repo:
```
➜ cat .checkov.yaml
block-list-secret-scan: []
compact: true
download-external-modules: true
evaluate-variables: true
external-modules-download-path: .external_modules
file:
- tfplan.json
framework:
- terraform_plan
mask: []
quiet: true
repo-root-for-plan-enrichment:
- .
secrets-history-timeout: 12h
secrets-scan-file-type: []
skip-check:
- CKV2_AWS_34
summary-position: top
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py`
Content:
```
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
3
4
5 class ECSClusterLoggingEncryptedWithCMK(BaseResourceCheck):
6 def __init__(self):
7 name = "Ensure Cluster logging with CMK"
8 id = "CKV_AWS_224"
9 supported_resources = ['aws_ecs_cluster']
10 categories = [CheckCategories.ENCRYPTION]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def scan_resource_conf(self, conf):
14 configuration = conf.get("configuration")
15 if configuration and isinstance(configuration[0], dict) and configuration[0].get('execute_command_configuration'):
16 command_conf = configuration[0].get('execute_command_configuration')[0]
17 if not command_conf.get('logging') == ['NONE']:
18 if command_conf.get('kms_key_id'):
19 if command_conf.get('log_configuration'):
20 log_conf = command_conf.get('log_configuration')[0]
21 if log_conf.get('cloud_watch_encryption_enabled') == [True] or \
22 log_conf.get('s3_bucket_encryption_enabled') == [True]:
23 return CheckResult.PASSED
24 return CheckResult.FAILED
25 else:
26 return CheckResult.FAILED
27
28 return CheckResult.UNKNOWN
29
30
31 check = ECSClusterLoggingEncryptedWithCMK()
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py b/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py
--- a/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py
+++ b/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py
@@ -1,28 +1,36 @@
+from __future__ import annotations
+
+from typing import Any
+
from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
class ECSClusterLoggingEncryptedWithCMK(BaseResourceCheck):
- def __init__(self):
- name = "Ensure Cluster logging with CMK"
+ def __init__(self) -> None:
+ name = "Ensure ECS Cluster logging uses CMK"
id = "CKV_AWS_224"
- supported_resources = ['aws_ecs_cluster']
- categories = [CheckCategories.ENCRYPTION]
+ supported_resources = ("aws_ecs_cluster",)
+ categories = (CheckCategories.ENCRYPTION,)
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
- def scan_resource_conf(self, conf):
+ def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:
configuration = conf.get("configuration")
- if configuration and isinstance(configuration[0], dict) and configuration[0].get('execute_command_configuration'):
- command_conf = configuration[0].get('execute_command_configuration')[0]
- if not command_conf.get('logging') == ['NONE']:
- if command_conf.get('kms_key_id'):
- if command_conf.get('log_configuration'):
- log_conf = command_conf.get('log_configuration')[0]
- if log_conf.get('cloud_watch_encryption_enabled') == [True] or \
- log_conf.get('s3_bucket_encryption_enabled') == [True]:
- return CheckResult.PASSED
- return CheckResult.FAILED
- else:
+ if configuration and isinstance(configuration, list) and isinstance(configuration[0], dict):
+ execute_command = configuration[0].get("execute_command_configuration")
+ if execute_command and isinstance(execute_command, list):
+ execute_command = execute_command[0]
+ if isinstance(execute_command, dict) and not execute_command.get("logging") == ["NONE"]:
+ if execute_command.get("kms_key_id"):
+ log_conf = execute_command.get("log_configuration")
+ if log_conf and isinstance(log_conf, list):
+ log_conf = log_conf[0]
+ if isinstance(log_conf, dict) and (
+ log_conf.get("cloud_watch_encryption_enabled") == [True]
+ or log_conf.get("s3_bucket_encryption_enabled") == [True]
+ ):
+ return CheckResult.PASSED
+
return CheckResult.FAILED
return CheckResult.UNKNOWN
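
For illustration only, a minimal test sketch (not part of the record above) capturing the crash scenario: a nested block parsed from a Terraform plan that is not a plain dict. The exact `conf` shape is an assumption loosely based on the traceback; `check` and `CheckResult` are the real names from the file above.

```python
from checkov.common.models.enums import CheckResult
from checkov.terraform.checks.resource.aws.ECSClusterLoggingEncryptedWithCMK import check


def test_plan_style_conf_does_not_crash():
    conf = {
        "configuration": [
            {
                "execute_command_configuration": [
                    {
                        "kms_key_id": ["alias/foo"],
                        "logging": ["DEFAULT"],
                        # In plan output this can resolve to a non-dict value,
                        # which previously blew up on .get()
                        "log_configuration": [[]],
                    }
                ]
            }
        ]
    }
    # With the guarded isinstance checks the check degrades gracefully.
    assert check.scan_resource_conf(conf) == CheckResult.FAILED
```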
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py b/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py\n--- a/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py\n+++ b/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py\n@@ -1,28 +1,36 @@\n+from __future__ import annotations\n+\n+from typing import Any\n+\n from checkov.common.models.enums import CheckResult, CheckCategories\n from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n \n \n class ECSClusterLoggingEncryptedWithCMK(BaseResourceCheck):\n- def __init__(self):\n- name = \"Ensure Cluster logging with CMK\"\n+ def __init__(self) -> None:\n+ name = \"Ensure ECS Cluster logging uses CMK\"\n id = \"CKV_AWS_224\"\n- supported_resources = ['aws_ecs_cluster']\n- categories = [CheckCategories.ENCRYPTION]\n+ supported_resources = (\"aws_ecs_cluster\",)\n+ categories = (CheckCategories.ENCRYPTION,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n- def scan_resource_conf(self, conf):\n+ def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:\n configuration = conf.get(\"configuration\")\n- if configuration and isinstance(configuration[0], dict) and configuration[0].get('execute_command_configuration'):\n- command_conf = configuration[0].get('execute_command_configuration')[0]\n- if not command_conf.get('logging') == ['NONE']:\n- if command_conf.get('kms_key_id'):\n- if command_conf.get('log_configuration'):\n- log_conf = command_conf.get('log_configuration')[0]\n- if log_conf.get('cloud_watch_encryption_enabled') == [True] or \\\n- log_conf.get('s3_bucket_encryption_enabled') == [True]:\n- return CheckResult.PASSED\n- return CheckResult.FAILED\n- else:\n+ if configuration and isinstance(configuration, list) and isinstance(configuration[0], dict):\n+ execute_command = configuration[0].get(\"execute_command_configuration\")\n+ if execute_command and isinstance(execute_command, list):\n+ execute_command = execute_command[0]\n+ if isinstance(execute_command, dict) and not execute_command.get(\"logging\") == [\"NONE\"]:\n+ if execute_command.get(\"kms_key_id\"):\n+ log_conf = execute_command.get(\"log_configuration\")\n+ if log_conf and isinstance(log_conf, list):\n+ log_conf = log_conf[0]\n+ if isinstance(log_conf, dict) and (\n+ log_conf.get(\"cloud_watch_encryption_enabled\") == [True]\n+ or log_conf.get(\"s3_bucket_encryption_enabled\") == [True]\n+ ):\n+ return CheckResult.PASSED\n+\n return CheckResult.FAILED\n \n return CheckResult.UNKNOWN\n", "issue": "Failed to run check CKV_AWS_224: TemplateAttributeError: get is invalid\n**Describe the issue**\r\nError occurs when checked ECS Cluster using terraform_plan framework.\r\n\r\n**Examples**\r\n```\r\nmodule \"cluster\" {\r\n source = \"terraform-aws-modules/ecs/aws\"\r\n version = \"4.1.3\"\r\n\r\n cluster_name = \"foo\"\r\n fargate_capacity_providers = {\r\n FARGATE = {}\r\n }\r\n}\r\n```\r\n\r\n**Version (please complete the following information):**\r\n- checkov 2.3.165\r\n- terraform 1.4.5\r\n- aws provider 4.63.0\r\n\r\n**Additional context**\r\ntraceback:\r\n```\r\n2023-04-18 09:53:09,676 [MainThread ] [ERROR] Failed to run check CKV_AWS_224 on /tfplan.json:aws_ecs_cluster.this\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/common/checks/base_check.py\", line 73, in run\r\n check_result[\"result\"] = 
self.scan_entity_conf(entity_configuration, entity_type)\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/terraform/checks/resource/base_resource_check.py\", line 43, in scan_entity_conf\r\n return self.scan_resource_conf(conf)\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py\", line 21, in scan_resource_conf\r\n if log_conf.get('cloud_watch_encryption_enabled') == [True] or \\\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/common/parsers/node.py\", line 189, in __getattr__\r\n raise TemplateAttributeError(f'{name} is invalid')\r\ncheckov.common.parsers.node.TemplateAttributeError: get is invalid\r\n```\r\n\r\nThis only occurs when using terraform_plan framework. It works without issue when using vanilla terraform framework.\r\n\r\nThe plan generation is just `terraform plan -out tfplan.bin && terraform show -json tfplan.bin > tfplan.json` then running `checkof -f tfplan.json`.\r\n\r\nHere is my checkov config file in repo:\r\n```\r\n\u279c cat .checkov.yaml \r\nblock-list-secret-scan: []\r\ncompact: true\r\ndownload-external-modules: true\r\nevaluate-variables: true\r\nexternal-modules-download-path: .external_modules\r\nfile:\r\n- tfplan.json\r\nframework:\r\n- terraform_plan\r\nmask: []\r\nquiet: true\r\nrepo-root-for-plan-enrichment:\r\n- .\r\nsecrets-history-timeout: 12h\r\nsecrets-scan-file-type: []\r\nskip-check:\r\n- CKV2_AWS_34\r\nsummary-position: top\r\n```\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass ECSClusterLoggingEncryptedWithCMK(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure Cluster logging with CMK\"\n id = \"CKV_AWS_224\"\n supported_resources = ['aws_ecs_cluster']\n categories = [CheckCategories.ENCRYPTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n configuration = conf.get(\"configuration\")\n if configuration and isinstance(configuration[0], dict) and configuration[0].get('execute_command_configuration'):\n command_conf = configuration[0].get('execute_command_configuration')[0]\n if not command_conf.get('logging') == ['NONE']:\n if command_conf.get('kms_key_id'):\n if command_conf.get('log_configuration'):\n log_conf = command_conf.get('log_configuration')[0]\n if log_conf.get('cloud_watch_encryption_enabled') == [True] or \\\n log_conf.get('s3_bucket_encryption_enabled') == [True]:\n return CheckResult.PASSED\n return CheckResult.FAILED\n else:\n return CheckResult.FAILED\n\n return CheckResult.UNKNOWN\n\n\ncheck = ECSClusterLoggingEncryptedWithCMK()\n", "path": "checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass ECSClusterLoggingEncryptedWithCMK(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure ECS Cluster logging uses CMK\"\n id = \"CKV_AWS_224\"\n supported_resources = (\"aws_ecs_cluster\",)\n categories = (CheckCategories.ENCRYPTION,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: dict[str, list[Any]]) -> 
CheckResult:\n configuration = conf.get(\"configuration\")\n if configuration and isinstance(configuration, list) and isinstance(configuration[0], dict):\n execute_command = configuration[0].get(\"execute_command_configuration\")\n if execute_command and isinstance(execute_command, list):\n execute_command = execute_command[0]\n if isinstance(execute_command, dict) and not execute_command.get(\"logging\") == [\"NONE\"]:\n if execute_command.get(\"kms_key_id\"):\n log_conf = execute_command.get(\"log_configuration\")\n if log_conf and isinstance(log_conf, list):\n log_conf = log_conf[0]\n if isinstance(log_conf, dict) and (\n log_conf.get(\"cloud_watch_encryption_enabled\") == [True]\n or log_conf.get(\"s3_bucket_encryption_enabled\") == [True]\n ):\n return CheckResult.PASSED\n\n return CheckResult.FAILED\n\n return CheckResult.UNKNOWN\n\n\ncheck = ECSClusterLoggingEncryptedWithCMK()\n", "path": "checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py"}]} | 1,234 | 671 |
gh_patches_debug_16904 | rasdani/github-patches | git_diff | saleor__saleor-5443 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Creating a new sale raises error in Celery task
### Steps to reproduce the problem
1. Run the following mutation as an admin user (with `MANAGE_DISCOUNTS` permission):
```
mutation {
saleCreate(input: {name: "Test"}) {
errors {
field
message
}
sale {
id
name
}
}
}
```
The response from API is successful, but in the Django server console I'm getting the following error:
```
ERROR celery.app.trace Task saleor.product.tasks.update_products_minimal_variant_prices_of_discount_task[4ec46245-d1f1-47ae-ab23-0c0ab73a9981] raised unexpected: ValueError('Provide at least one of the ID lists:\n\tproduct_ids,\n\tcategory_ids,\n\tcollection_ids.') [PID:31316:Thread-175]
Traceback (most recent call last):
File "/Users/marcin/.pyenv/versions/saleor3.8.1/lib/python3.8/site-packages/celery/app/trace.py", line 385, in trace_task
R = retval = fun(*args, **kwargs)
File "/Users/marcin/mirumee/saleor-platform/saleor/saleor/product/tasks.py", line 64, in update_products_minimal_variant_prices_of_discount_task
update_products_minimal_variant_prices_of_discount(discount)
File "/Users/marcin/mirumee/saleor-platform/saleor/saleor/product/utils/variant_prices.py", line 76, in update_products_minimal_variant_prices_of_discount
update_products_minimal_variant_prices_of_catalogues(
File "/Users/marcin/mirumee/saleor-platform/saleor/saleor/product/utils/variant_prices.py", line 62, in update_products_minimal_variant_prices_of_catalogues
raise ValueError(
ValueError: Provide at least one of the ID lists:
product_ids,
category_ids,
collection_ids.
```
I suppose that the Celery task that recalculates minimal variant prices runs even when there are no products to update. Probably an additional check needs to be added so the task is not run in this case.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/product/utils/variant_prices.py`
Content:
```
1 import operator
2 from functools import reduce
3
4 from django.db.models.query_utils import Q
5 from prices import Money
6
7 from ...discount.utils import fetch_active_discounts
8 from ..models import Product
9
10
11 def _get_product_minimal_variant_price(product, discounts) -> Money:
12 # Start with the product's price as the minimal one
13 minimal_variant_price = product.price
14 for variant in product.variants.all():
15 variant_price = variant.get_price(discounts=discounts)
16 minimal_variant_price = min(minimal_variant_price, variant_price)
17 return minimal_variant_price
18
19
20 def update_product_minimal_variant_price(product, discounts=None, save=True):
21 if discounts is None:
22 discounts = fetch_active_discounts()
23 minimal_variant_price = _get_product_minimal_variant_price(product, discounts)
24 if product.minimal_variant_price != minimal_variant_price:
25 product.minimal_variant_price_amount = minimal_variant_price.amount
26 if save:
27 product.save(update_fields=["minimal_variant_price_amount", "updated_at"])
28 return product
29
30
31 def update_products_minimal_variant_prices(products, discounts=None):
32 if discounts is None:
33 discounts = fetch_active_discounts()
34 changed_products_to_update = []
35 for product in products:
36 old_minimal_variant_price = product.minimal_variant_price
37 updated_product = update_product_minimal_variant_price(
38 product, discounts, save=False
39 )
40 # Check if the "minimal_variant_price" has changed
41 if updated_product.minimal_variant_price != old_minimal_variant_price:
42 changed_products_to_update.append(updated_product)
43 # Bulk update the changed products
44 Product.objects.bulk_update(
45 changed_products_to_update, ["minimal_variant_price_amount"]
46 )
47
48
49 def update_products_minimal_variant_prices_of_catalogues(
50 product_ids=None, category_ids=None, collection_ids=None
51 ):
52 # Building the matching products query
53 q_list = []
54 if product_ids:
55 q_list.append(Q(pk__in=product_ids))
56 if category_ids:
57 q_list.append(Q(category_id__in=category_ids))
58 if collection_ids:
59 q_list.append(Q(collectionproduct__collection_id__in=collection_ids))
60 # Asserting that the function was called with some ids
61 if not q_list:
62 raise ValueError(
63 "Provide at least one of the ID lists:\n"
64 "\tproduct_ids,\n"
65 "\tcategory_ids,\n"
66 "\tcollection_ids."
67 )
68 # Querying the products
69 q_or = reduce(operator.or_, q_list)
70 products = Product.objects.filter(q_or).distinct()
71
72 update_products_minimal_variant_prices(products)
73
74
75 def update_products_minimal_variant_prices_of_discount(discount):
76 update_products_minimal_variant_prices_of_catalogues(
77 product_ids=discount.products.all().values_list("id", flat=True),
78 category_ids=discount.categories.all().values_list("id", flat=True),
79 collection_ids=discount.collections.all().values_list("id", flat=True),
80 )
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/product/utils/variant_prices.py b/saleor/product/utils/variant_prices.py
--- a/saleor/product/utils/variant_prices.py
+++ b/saleor/product/utils/variant_prices.py
@@ -58,18 +58,12 @@
if collection_ids:
q_list.append(Q(collectionproduct__collection_id__in=collection_ids))
# Asserting that the function was called with some ids
- if not q_list:
- raise ValueError(
- "Provide at least one of the ID lists:\n"
- "\tproduct_ids,\n"
- "\tcategory_ids,\n"
- "\tcollection_ids."
- )
- # Querying the products
- q_or = reduce(operator.or_, q_list)
- products = Product.objects.filter(q_or).distinct()
+ if q_list:
+ # Querying the products
+ q_or = reduce(operator.or_, q_list)
+ products = Product.objects.filter(q_or).distinct()
- update_products_minimal_variant_prices(products)
+ update_products_minimal_variant_prices(products)
def update_products_minimal_variant_prices_of_discount(discount):
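
For illustration only, a minimal test sketch (not part of the record above): after the patch, calling the catalogue helper with no ID lists is a silent no-op, which is exactly the case the Celery task hits for a freshly created sale. It assumes a configured Django settings module (e.g. via pytest-django) so that Saleor modules can be imported.

```python
from saleor.product.utils.variant_prices import (
    update_products_minimal_variant_prices_of_catalogues,
)


def test_empty_catalogue_update_is_a_noop():
    # Previously raised ValueError("Provide at least one of the ID lists: ...");
    # with the patch it returns without querying or updating anything.
    update_products_minimal_variant_prices_of_catalogues()
```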
| {"golden_diff": "diff --git a/saleor/product/utils/variant_prices.py b/saleor/product/utils/variant_prices.py\n--- a/saleor/product/utils/variant_prices.py\n+++ b/saleor/product/utils/variant_prices.py\n@@ -58,18 +58,12 @@\n if collection_ids:\n q_list.append(Q(collectionproduct__collection_id__in=collection_ids))\n # Asserting that the function was called with some ids\n- if not q_list:\n- raise ValueError(\n- \"Provide at least one of the ID lists:\\n\"\n- \"\\tproduct_ids,\\n\"\n- \"\\tcategory_ids,\\n\"\n- \"\\tcollection_ids.\"\n- )\n- # Querying the products\n- q_or = reduce(operator.or_, q_list)\n- products = Product.objects.filter(q_or).distinct()\n+ if q_list:\n+ # Querying the products\n+ q_or = reduce(operator.or_, q_list)\n+ products = Product.objects.filter(q_or).distinct()\n \n- update_products_minimal_variant_prices(products)\n+ update_products_minimal_variant_prices(products)\n \n \n def update_products_minimal_variant_prices_of_discount(discount):\n", "issue": "Creating a new sale raises error in Celery task\n### Steps to reproduce the problem\r\n1. Run the following mutation as an admin user (with `MANAGE_DISCOUNTS` permission):\r\n```\r\nmutation {\r\n saleCreate(input: {name: \"Test\"}) {\r\n errors {\r\n field\r\n message\r\n }\r\n sale {\r\n id\r\n name\r\n }\r\n }\r\n}\r\n```\r\n\r\nThe response from API is successful, but in the Django server console I'm getting the following error:\r\n\r\n```\r\nERROR celery.app.trace Task saleor.product.tasks.update_products_minimal_variant_prices_of_discount_task[4ec46245-d1f1-47ae-ab23-0c0ab73a9981] raised unexpected: ValueError('Provide at least one of the ID lists:\\n\\tproduct_ids,\\n\\tcategory_ids,\\n\\tcollection_ids.') [PID:31316:Thread-175]\r\nTraceback (most recent call last):\r\n File \"/Users/marcin/.pyenv/versions/saleor3.8.1/lib/python3.8/site-packages/celery/app/trace.py\", line 385, in trace_task\r\n R = retval = fun(*args, **kwargs)\r\n File \"/Users/marcin/mirumee/saleor-platform/saleor/saleor/product/tasks.py\", line 64, in update_products_minimal_variant_prices_of_discount_task\r\n update_products_minimal_variant_prices_of_discount(discount)\r\n File \"/Users/marcin/mirumee/saleor-platform/saleor/saleor/product/utils/variant_prices.py\", line 76, in update_products_minimal_variant_prices_of_discount\r\n update_products_minimal_variant_prices_of_catalogues(\r\n File \"/Users/marcin/mirumee/saleor-platform/saleor/saleor/product/utils/variant_prices.py\", line 62, in update_products_minimal_variant_prices_of_catalogues\r\n raise ValueError(\r\nValueError: Provide at least one of the ID lists:\r\n\tproduct_ids,\r\n\tcategory_ids,\r\n\tcollection_ids.\r\n```\r\n\r\nI suppose that the Celery task that recalculates minimal variant prices is run even there are no products to update. 
Probably an additional check needs to be added to not run the task in this case.\n", "before_files": [{"content": "import operator\nfrom functools import reduce\n\nfrom django.db.models.query_utils import Q\nfrom prices import Money\n\nfrom ...discount.utils import fetch_active_discounts\nfrom ..models import Product\n\n\ndef _get_product_minimal_variant_price(product, discounts) -> Money:\n # Start with the product's price as the minimal one\n minimal_variant_price = product.price\n for variant in product.variants.all():\n variant_price = variant.get_price(discounts=discounts)\n minimal_variant_price = min(minimal_variant_price, variant_price)\n return minimal_variant_price\n\n\ndef update_product_minimal_variant_price(product, discounts=None, save=True):\n if discounts is None:\n discounts = fetch_active_discounts()\n minimal_variant_price = _get_product_minimal_variant_price(product, discounts)\n if product.minimal_variant_price != minimal_variant_price:\n product.minimal_variant_price_amount = minimal_variant_price.amount\n if save:\n product.save(update_fields=[\"minimal_variant_price_amount\", \"updated_at\"])\n return product\n\n\ndef update_products_minimal_variant_prices(products, discounts=None):\n if discounts is None:\n discounts = fetch_active_discounts()\n changed_products_to_update = []\n for product in products:\n old_minimal_variant_price = product.minimal_variant_price\n updated_product = update_product_minimal_variant_price(\n product, discounts, save=False\n )\n # Check if the \"minimal_variant_price\" has changed\n if updated_product.minimal_variant_price != old_minimal_variant_price:\n changed_products_to_update.append(updated_product)\n # Bulk update the changed products\n Product.objects.bulk_update(\n changed_products_to_update, [\"minimal_variant_price_amount\"]\n )\n\n\ndef update_products_minimal_variant_prices_of_catalogues(\n product_ids=None, category_ids=None, collection_ids=None\n):\n # Building the matching products query\n q_list = []\n if product_ids:\n q_list.append(Q(pk__in=product_ids))\n if category_ids:\n q_list.append(Q(category_id__in=category_ids))\n if collection_ids:\n q_list.append(Q(collectionproduct__collection_id__in=collection_ids))\n # Asserting that the function was called with some ids\n if not q_list:\n raise ValueError(\n \"Provide at least one of the ID lists:\\n\"\n \"\\tproduct_ids,\\n\"\n \"\\tcategory_ids,\\n\"\n \"\\tcollection_ids.\"\n )\n # Querying the products\n q_or = reduce(operator.or_, q_list)\n products = Product.objects.filter(q_or).distinct()\n\n update_products_minimal_variant_prices(products)\n\n\ndef update_products_minimal_variant_prices_of_discount(discount):\n update_products_minimal_variant_prices_of_catalogues(\n product_ids=discount.products.all().values_list(\"id\", flat=True),\n category_ids=discount.categories.all().values_list(\"id\", flat=True),\n collection_ids=discount.collections.all().values_list(\"id\", flat=True),\n )\n", "path": "saleor/product/utils/variant_prices.py"}], "after_files": [{"content": "import operator\nfrom functools import reduce\n\nfrom django.db.models.query_utils import Q\nfrom prices import Money\n\nfrom ...discount.utils import fetch_active_discounts\nfrom ..models import Product\n\n\ndef _get_product_minimal_variant_price(product, discounts) -> Money:\n # Start with the product's price as the minimal one\n minimal_variant_price = product.price\n for variant in product.variants.all():\n variant_price = variant.get_price(discounts=discounts)\n minimal_variant_price = 
min(minimal_variant_price, variant_price)\n return minimal_variant_price\n\n\ndef update_product_minimal_variant_price(product, discounts=None, save=True):\n if discounts is None:\n discounts = fetch_active_discounts()\n minimal_variant_price = _get_product_minimal_variant_price(product, discounts)\n if product.minimal_variant_price != minimal_variant_price:\n product.minimal_variant_price_amount = minimal_variant_price.amount\n if save:\n product.save(update_fields=[\"minimal_variant_price_amount\", \"updated_at\"])\n return product\n\n\ndef update_products_minimal_variant_prices(products, discounts=None):\n if discounts is None:\n discounts = fetch_active_discounts()\n changed_products_to_update = []\n for product in products:\n old_minimal_variant_price = product.minimal_variant_price\n updated_product = update_product_minimal_variant_price(\n product, discounts, save=False\n )\n # Check if the \"minimal_variant_price\" has changed\n if updated_product.minimal_variant_price != old_minimal_variant_price:\n changed_products_to_update.append(updated_product)\n # Bulk update the changed products\n Product.objects.bulk_update(\n changed_products_to_update, [\"minimal_variant_price_amount\"]\n )\n\n\ndef update_products_minimal_variant_prices_of_catalogues(\n product_ids=None, category_ids=None, collection_ids=None\n):\n # Building the matching products query\n q_list = []\n if product_ids:\n q_list.append(Q(pk__in=product_ids))\n if category_ids:\n q_list.append(Q(category_id__in=category_ids))\n if collection_ids:\n q_list.append(Q(collectionproduct__collection_id__in=collection_ids))\n # Asserting that the function was called with some ids\n if q_list:\n # Querying the products\n q_or = reduce(operator.or_, q_list)\n products = Product.objects.filter(q_or).distinct()\n\n update_products_minimal_variant_prices(products)\n\n\ndef update_products_minimal_variant_prices_of_discount(discount):\n update_products_minimal_variant_prices_of_catalogues(\n product_ids=discount.products.all().values_list(\"id\", flat=True),\n category_ids=discount.categories.all().values_list(\"id\", flat=True),\n collection_ids=discount.collections.all().values_list(\"id\", flat=True),\n )\n", "path": "saleor/product/utils/variant_prices.py"}]} | 1,523 | 254 |
gh_patches_debug_24558 | rasdani/github-patches | git_diff | marshmallow-code__webargs-43 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pyramid parser use_kwargs throws exception when used
The following code using the pyramid parser throws an exception:
``` python
@parser.use_kwargs({'myvalue': Arg(int)})
def baz(request, myvalue):
return {'myvalue': myvalue}
```
The exception:
```
kwargs['as_kwargs'] = True
> return self.use_args(*args, **kwargs)
E TypeError: use_args() got an unexpected keyword argument 'as_kwargs'
```
Pyramid parser use_kwargs throws exception when used
The following code using the pyramid parser throws an exception:
``` python
@parser.use_kwargs({'myvalue': Arg(int)})
def baz(request, myvalue):
return {'myvalue': myvalue}
```
The exception:
```
kwargs['as_kwargs'] = True
> return self.use_args(*args, **kwargs)
E TypeError: use_args() got an unexpected keyword argument 'as_kwargs'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `webargs/pyramidparser.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Pyramid request argument parsing.
3
4 Example usage: ::
5
6 from wsgiref.simple_server import make_server
7 from pyramid.config import Configurator
8 from pyramid.response import Response
9 from webargs import Arg
10 from webargs.pyramidparser import use_args
11
12 hello_args = {
13 'name': Arg(str, default='World')
14 }
15
16 @use_args(hello_args)
17 def hello_world(request, args):
18 return Response('Hello ' + args['name'])
19
20 if __name__ == '__main__':
21 config = Configurator()
22 config.add_route('hello', '/')
23 config.add_view(hello_world, route_name='hello')
24 app = config.make_wsgi_app()
25 server = make_server('0.0.0.0', 6543, app)
26 server.serve_forever()
27 """
28 import functools
29 import logging
30
31 from webob.multidict import MultiDict
32 from pyramid.httpexceptions import exception_response
33
34 from webargs import core
35 from webargs.core import text_type
36
37 logger = logging.getLogger(__name__)
38
39 class PyramidParser(core.Parser):
40 """Pyramid request argument parser."""
41
42 def parse_querystring(self, req, name, arg):
43 """Pull a querystring value from the request."""
44 return core.get_value(req.GET, name, arg.multiple)
45
46 def parse_form(self, req, name, arg):
47 """Pull a form value from the request."""
48 return core.get_value(req.POST, name, arg.multiple)
49
50 def parse_json(self, req, name, arg):
51 """Pull a json value from the request."""
52 try:
53 json_data = req.json_body
54 except ValueError:
55 return core.Missing
56
57 return core.get_value(json_data, name, arg.multiple)
58
59 def parse_cookies(self, req, name, arg):
60 """Pull the value from the cookiejar."""
61 return core.get_value(req.cookies, name, arg.multiple)
62
63 def parse_headers(self, req, name, arg):
64 """Pull a value from the header data."""
65 return core.get_value(req.headers, name, arg.multiple)
66
67 def parse_files(self, req, name, arg):
68 """Pull a file from the request."""
69 files = ((k, v) for k, v in req.POST.items() if hasattr(v, 'file'))
70 return core.get_value(MultiDict(files), name, arg.multiple)
71
72 def handle_error(self, error):
73 """Handles errors during parsing. Aborts the current HTTP request and
74 responds with a 400 error.
75 """
76 logger.error(error)
77 status_code = getattr(error, 'status_code', 400)
78 data = getattr(error, 'data', {})
79 raise exception_response(status_code, detail=text_type(error), **data)
80
81 def use_args(self, argmap, req=None, locations=core.Parser.DEFAULT_LOCATIONS,
82 validate=None):
83 """Decorator that injects parsed arguments into a view callable.
84 Supports the *Class-based View* pattern where `request` is saved as an instance
85 attribute on a view class.
86
87 :param dict argmap: Dictionary of argument_name:Arg object pairs.
88 :param req: The request object to parse
89 :param tuple locations: Where on the request to search for values.
90 :param callable validate:
91 Validation function that receives the dictionary of parsed arguments.
92 If the function returns ``False``, the parser will raise a
93 :exc:`ValidationError`.
94 """
95 def decorator(func):
96 @functools.wraps(func)
97 def wrapper(obj, *args, **kwargs):
98 # The first argument is either `self` or `request`
99 try: # get self.request
100 request = obj.request
101 except AttributeError: # first arg is request
102 request = obj
103 parsed_args = self.parse(argmap, req=request, locations=locations,
104 validate=None)
105 return func(obj, parsed_args, *args, **kwargs)
106 return wrapper
107 return decorator
108
109 parser = PyramidParser()
110 use_args = parser.use_args
111 use_kwargs = parser.use_kwargs
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/webargs/pyramidparser.py b/webargs/pyramidparser.py
--- a/webargs/pyramidparser.py
+++ b/webargs/pyramidparser.py
@@ -79,7 +79,7 @@
raise exception_response(status_code, detail=text_type(error), **data)
def use_args(self, argmap, req=None, locations=core.Parser.DEFAULT_LOCATIONS,
- validate=None):
+ as_kwargs=False, validate=None):
"""Decorator that injects parsed arguments into a view callable.
Supports the *Class-based View* pattern where `request` is saved as an instance
attribute on a view class.
@@ -102,7 +102,11 @@
request = obj
parsed_args = self.parse(argmap, req=request, locations=locations,
validate=None)
- return func(obj, parsed_args, *args, **kwargs)
+ if as_kwargs:
+ kwargs.update(parsed_args)
+ return func(obj, *args, **kwargs)
+ else:
+ return func(obj, parsed_args, *args, **kwargs)
return wrapper
return decorator
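
For illustration only (not part of the record above): with `as_kwargs` now accepted by the Pyramid parser, the decorator from the issue works as written. The view below simply repeats the issue's own example to show the intended usage.

```python
from webargs import Arg
from webargs.pyramidparser import parser


@parser.use_kwargs({'myvalue': Arg(int)})
def baz(request, myvalue):
    # Parsed arguments are injected as keyword arguments.
    return {'myvalue': myvalue}
```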
| {"golden_diff": "diff --git a/webargs/pyramidparser.py b/webargs/pyramidparser.py\n--- a/webargs/pyramidparser.py\n+++ b/webargs/pyramidparser.py\n@@ -79,7 +79,7 @@\n raise exception_response(status_code, detail=text_type(error), **data)\n \n def use_args(self, argmap, req=None, locations=core.Parser.DEFAULT_LOCATIONS,\n- validate=None):\n+ as_kwargs=False, validate=None):\n \"\"\"Decorator that injects parsed arguments into a view callable.\n Supports the *Class-based View* pattern where `request` is saved as an instance\n attribute on a view class.\n@@ -102,7 +102,11 @@\n request = obj\n parsed_args = self.parse(argmap, req=request, locations=locations,\n validate=None)\n- return func(obj, parsed_args, *args, **kwargs)\n+ if as_kwargs:\n+ kwargs.update(parsed_args)\n+ return func(obj, *args, **kwargs)\n+ else:\n+ return func(obj, parsed_args, *args, **kwargs)\n return wrapper\n return decorator\n", "issue": "Pyramid parser use_kwargs throws exception when used\nThe following code using the pyramid parser throws an exception:\n\n``` python\[email protected]_kwargs({'myvalue': Arg(int)})\ndef baz(request, myvalue):\n return {'myvalue': myvalue}\n```\n\nThe exception:\n\n```\n kwargs['as_kwargs'] = True\n> return self.use_args(*args, **kwargs)\nE TypeError: use_args() got an unexpected keyword argument 'as_kwargs'\n```\n\nPyramid parser use_kwargs throws exception when used\nThe following code using the pyramid parser throws an exception:\n\n``` python\[email protected]_kwargs({'myvalue': Arg(int)})\ndef baz(request, myvalue):\n return {'myvalue': myvalue}\n```\n\nThe exception:\n\n```\n kwargs['as_kwargs'] = True\n> return self.use_args(*args, **kwargs)\nE TypeError: use_args() got an unexpected keyword argument 'as_kwargs'\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Pyramid request argument parsing.\n\nExample usage: ::\n\n from wsgiref.simple_server import make_server\n from pyramid.config import Configurator\n from pyramid.response import Response\n from webargs import Arg\n from webargs.pyramidparser import use_args\n\n hello_args = {\n 'name': Arg(str, default='World')\n }\n\n @use_args(hello_args)\n def hello_world(request, args):\n return Response('Hello ' + args['name'])\n\n if __name__ == '__main__':\n config = Configurator()\n config.add_route('hello', '/')\n config.add_view(hello_world, route_name='hello')\n app = config.make_wsgi_app()\n server = make_server('0.0.0.0', 6543, app)\n server.serve_forever()\n\"\"\"\nimport functools\nimport logging\n\nfrom webob.multidict import MultiDict\nfrom pyramid.httpexceptions import exception_response\n\nfrom webargs import core\nfrom webargs.core import text_type\n\nlogger = logging.getLogger(__name__)\n\nclass PyramidParser(core.Parser):\n \"\"\"Pyramid request argument parser.\"\"\"\n\n def parse_querystring(self, req, name, arg):\n \"\"\"Pull a querystring value from the request.\"\"\"\n return core.get_value(req.GET, name, arg.multiple)\n\n def parse_form(self, req, name, arg):\n \"\"\"Pull a form value from the request.\"\"\"\n return core.get_value(req.POST, name, arg.multiple)\n\n def parse_json(self, req, name, arg):\n \"\"\"Pull a json value from the request.\"\"\"\n try:\n json_data = req.json_body\n except ValueError:\n return core.Missing\n\n return core.get_value(json_data, name, arg.multiple)\n\n def parse_cookies(self, req, name, arg):\n \"\"\"Pull the value from the cookiejar.\"\"\"\n return core.get_value(req.cookies, name, arg.multiple)\n\n def parse_headers(self, req, name, arg):\n 
\"\"\"Pull a value from the header data.\"\"\"\n return core.get_value(req.headers, name, arg.multiple)\n\n def parse_files(self, req, name, arg):\n \"\"\"Pull a file from the request.\"\"\"\n files = ((k, v) for k, v in req.POST.items() if hasattr(v, 'file'))\n return core.get_value(MultiDict(files), name, arg.multiple)\n\n def handle_error(self, error):\n \"\"\"Handles errors during parsing. Aborts the current HTTP request and\n responds with a 400 error.\n \"\"\"\n logger.error(error)\n status_code = getattr(error, 'status_code', 400)\n data = getattr(error, 'data', {})\n raise exception_response(status_code, detail=text_type(error), **data)\n\n def use_args(self, argmap, req=None, locations=core.Parser.DEFAULT_LOCATIONS,\n validate=None):\n \"\"\"Decorator that injects parsed arguments into a view callable.\n Supports the *Class-based View* pattern where `request` is saved as an instance\n attribute on a view class.\n\n :param dict argmap: Dictionary of argument_name:Arg object pairs.\n :param req: The request object to parse\n :param tuple locations: Where on the request to search for values.\n :param callable validate:\n Validation function that receives the dictionary of parsed arguments.\n If the function returns ``False``, the parser will raise a\n :exc:`ValidationError`.\n \"\"\"\n def decorator(func):\n @functools.wraps(func)\n def wrapper(obj, *args, **kwargs):\n # The first argument is either `self` or `request`\n try: # get self.request\n request = obj.request\n except AttributeError: # first arg is request\n request = obj\n parsed_args = self.parse(argmap, req=request, locations=locations,\n validate=None)\n return func(obj, parsed_args, *args, **kwargs)\n return wrapper\n return decorator\n\nparser = PyramidParser()\nuse_args = parser.use_args\nuse_kwargs = parser.use_kwargs\n", "path": "webargs/pyramidparser.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Pyramid request argument parsing.\n\nExample usage: ::\n\n from wsgiref.simple_server import make_server\n from pyramid.config import Configurator\n from pyramid.response import Response\n from webargs import Arg\n from webargs.pyramidparser import use_args\n\n hello_args = {\n 'name': Arg(str, default='World')\n }\n\n @use_args(hello_args)\n def hello_world(request, args):\n return Response('Hello ' + args['name'])\n\n if __name__ == '__main__':\n config = Configurator()\n config.add_route('hello', '/')\n config.add_view(hello_world, route_name='hello')\n app = config.make_wsgi_app()\n server = make_server('0.0.0.0', 6543, app)\n server.serve_forever()\n\"\"\"\nimport functools\nimport logging\n\nfrom webob.multidict import MultiDict\nfrom pyramid.httpexceptions import exception_response\n\nfrom webargs import core\nfrom webargs.core import text_type\n\nlogger = logging.getLogger(__name__)\n\nclass PyramidParser(core.Parser):\n \"\"\"Pyramid request argument parser.\"\"\"\n\n def parse_querystring(self, req, name, arg):\n \"\"\"Pull a querystring value from the request.\"\"\"\n return core.get_value(req.GET, name, arg.multiple)\n\n def parse_form(self, req, name, arg):\n \"\"\"Pull a form value from the request.\"\"\"\n return core.get_value(req.POST, name, arg.multiple)\n\n def parse_json(self, req, name, arg):\n \"\"\"Pull a json value from the request.\"\"\"\n try:\n json_data = req.json_body\n except ValueError:\n return core.Missing\n\n return core.get_value(json_data, name, arg.multiple)\n\n def parse_cookies(self, req, name, arg):\n \"\"\"Pull the value from the cookiejar.\"\"\"\n return 
core.get_value(req.cookies, name, arg.multiple)\n\n def parse_headers(self, req, name, arg):\n \"\"\"Pull a value from the header data.\"\"\"\n return core.get_value(req.headers, name, arg.multiple)\n\n def parse_files(self, req, name, arg):\n \"\"\"Pull a file from the request.\"\"\"\n files = ((k, v) for k, v in req.POST.items() if hasattr(v, 'file'))\n return core.get_value(MultiDict(files), name, arg.multiple)\n\n def handle_error(self, error):\n \"\"\"Handles errors during parsing. Aborts the current HTTP request and\n responds with a 400 error.\n \"\"\"\n logger.error(error)\n status_code = getattr(error, 'status_code', 400)\n data = getattr(error, 'data', {})\n raise exception_response(status_code, detail=text_type(error), **data)\n\n def use_args(self, argmap, req=None, locations=core.Parser.DEFAULT_LOCATIONS,\n as_kwargs=False, validate=None):\n \"\"\"Decorator that injects parsed arguments into a view callable.\n Supports the *Class-based View* pattern where `request` is saved as an instance\n attribute on a view class.\n\n :param dict argmap: Dictionary of argument_name:Arg object pairs.\n :param req: The request object to parse\n :param tuple locations: Where on the request to search for values.\n :param callable validate:\n Validation function that receives the dictionary of parsed arguments.\n If the function returns ``False``, the parser will raise a\n :exc:`ValidationError`.\n \"\"\"\n def decorator(func):\n @functools.wraps(func)\n def wrapper(obj, *args, **kwargs):\n # The first argument is either `self` or `request`\n try: # get self.request\n request = obj.request\n except AttributeError: # first arg is request\n request = obj\n parsed_args = self.parse(argmap, req=request, locations=locations,\n validate=None)\n if as_kwargs:\n kwargs.update(parsed_args)\n return func(obj, *args, **kwargs)\n else:\n return func(obj, parsed_args, *args, **kwargs)\n return wrapper\n return decorator\n\nparser = PyramidParser()\nuse_args = parser.use_args\nuse_kwargs = parser.use_kwargs\n", "path": "webargs/pyramidparser.py"}]} | 1,565 | 244 |
gh_patches_debug_10387 | rasdani/github-patches | git_diff | WordPress__openverse-api-727 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Possibly make `thumbnail` null for audio files without artwork
## Description
<!-- Concisely describe the bug. -->
Currently the frontend tries to fetch thumbnails for all audio files regardless of whether the audio file in question has one or not.
I noticed that the API returns the thumbnail URL for all tracks. That makes sense, but could we improve this to be `null` for audio tracks without artwork? Then we could check the field in the frontend before making a network request.
--- END ISSUE ---
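
For illustration only, a minimal sketch (not the record's actual patch, which appears further below): the requested behaviour is simply that a track without artwork serializes `thumbnail` as `None`, so clients can check the field before issuing a request. Class and method names here are assumptions for the sketch; only the `thumbnail` attribute comes from the record.

```python
from rest_framework import serializers


class ThumbnailAwareAudioSerializer(serializers.Serializer):
    thumbnail = serializers.SerializerMethodField()

    def get_thumbnail(self, obj):
        # Yield None instead of a URL when the track has no artwork.
        return obj.thumbnail or None
```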
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `api/catalog/api/serializers/audio_serializers.py`
Content:
```
1 from rest_framework import serializers
2
3 from elasticsearch_dsl.response import Hit
4
5 from catalog.api.constants.field_order import field_position_map
6 from catalog.api.constants.field_values import AUDIO_CATEGORIES, LENGTHS
7 from catalog.api.docs.media_docs import fields_to_md
8 from catalog.api.models import Audio, AudioReport, AudioSet
9 from catalog.api.serializers.fields import (
10 EnumCharField,
11 SchemableHyperlinkedIdentityField,
12 )
13 from catalog.api.serializers.media_serializers import (
14 MediaReportRequestSerializer,
15 MediaSearchRequestSerializer,
16 MediaSearchSerializer,
17 MediaSerializer,
18 get_hyperlinks_serializer,
19 get_search_request_source_serializer,
20 )
21
22
23 #######################
24 # Request serializers #
25 #######################
26
27
28 AudioSearchRequestSourceSerializer = get_search_request_source_serializer("audio")
29
30
31 class AudioSearchRequestSerializer(
32 AudioSearchRequestSourceSerializer,
33 MediaSearchRequestSerializer,
34 ):
35 """Parse and validate search query string parameters."""
36
37 fields_names = [
38 *MediaSearchRequestSerializer.fields_names,
39 *AudioSearchRequestSourceSerializer.field_names,
40 "category",
41 "length",
42 ]
43 """
44 Keep the fields names in sync with the actual fields below as this list is
45 used to generate Swagger documentation.
46 """
47
48 category = EnumCharField(
49 plural="categories",
50 enum_class=AUDIO_CATEGORIES,
51 required=False,
52 )
53 length = EnumCharField(
54 plural="lengths",
55 enum_class=LENGTHS,
56 required=False,
57 )
58
59
60 class AudioReportRequestSerializer(MediaReportRequestSerializer):
61 class Meta(MediaReportRequestSerializer.Meta):
62 model = AudioReport
63
64
65 ########################
66 # Response serializers #
67 ########################
68
69
70 class AudioSetSerializer(serializers.ModelSerializer):
71 """An audio set, rendered as a part of the ``AudioSerializer`` output."""
72
73 class Meta:
74 model = AudioSet
75 fields = [
76 "title",
77 "foreign_landing_url",
78 "creator",
79 "creator_url",
80 "url",
81 "filesize",
82 "filetype",
83 ]
84
85
86 AudioHyperlinksSerializer = get_hyperlinks_serializer("audio")
87
88
89 class AudioSerializer(AudioHyperlinksSerializer, MediaSerializer):
90 """A single audio file. Used in search results."""
91
92 class Meta:
93 model = Audio
94 fields = sorted( # keep this list ordered logically
95 [
96 *MediaSerializer.Meta.fields,
97 *AudioHyperlinksSerializer.field_names,
98 "genres",
99 "alt_files",
100 "audio_set",
101 "duration",
102 "bit_rate",
103 "sample_rate",
104 "waveform", # hyperlink to the endpoint that generates the waveform
105 "peaks", # waveform peaks, if they have already been generated
106 ],
107 key=lambda val: field_position_map.get(val, 999),
108 )
109 """
110 Keep the fields names in sync with the actual fields below as this list is
111 used to generate Swagger documentation.
112 """
113
114 audio_set = AudioSetSerializer(
115 allow_null=True,
116 help_text="Reference to set of which this track is a part.",
117 read_only=True,
118 )
119
120 waveform = SchemableHyperlinkedIdentityField(
121 read_only=True,
122 view_name="audio-waveform",
123 lookup_field="identifier",
124 help_text="A direct link to the waveform peaks.",
125 )
126
127 # Add-on data
128 peaks = serializers.SerializerMethodField(
129 help_text="The list of peaks used to generate the waveform for the audio."
130 )
131
132 @staticmethod
133 def get_peaks(obj) -> list[int]:
134 if isinstance(obj, Hit):
135 obj = Audio.objects.get(identifier=obj.identifier)
136 return obj.get_waveform()
137
138
139 class AudioSearchSerializer(MediaSearchSerializer):
140 """
141 The full audio search response.
142 This serializer is purely representational and not actually used to
143 serialize the response.
144 """
145
146 results = AudioSerializer(
147 many=True,
148 help_text=(
149 "An array of audios and their details such as "
150 f"{fields_to_md(AudioSerializer.Meta.fields)}."
151 ),
152 )
153
154
155 ##########################
156 # Additional serializers #
157 ##########################
158
159
160 class AudioWaveformSerializer(serializers.Serializer):
161 len = serializers.SerializerMethodField()
162 points = serializers.ListField(
163 child=serializers.FloatField(min_value=0, max_value=1)
164 )
165
166 @staticmethod
167 def get_len(obj) -> int:
168 return len(obj.get("points", []))
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/api/catalog/api/serializers/audio_serializers.py b/api/catalog/api/serializers/audio_serializers.py
--- a/api/catalog/api/serializers/audio_serializers.py
+++ b/api/catalog/api/serializers/audio_serializers.py
@@ -135,6 +135,18 @@
obj = Audio.objects.get(identifier=obj.identifier)
return obj.get_waveform()
+ def to_representation(self, instance):
+ # Get the original representation
+ output = super().to_representation(instance)
+
+ if isinstance(instance, Hit):
+ # TODO: Remove when updating ES indexes
+ audio = Audio.objects.get(identifier=instance.identifier)
+ if not audio.thumbnail:
+ output["thumbnail"] = None
+
+ return output
+
class AudioSearchSerializer(MediaSearchSerializer):
"""
| {"golden_diff": "diff --git a/api/catalog/api/serializers/audio_serializers.py b/api/catalog/api/serializers/audio_serializers.py\n--- a/api/catalog/api/serializers/audio_serializers.py\n+++ b/api/catalog/api/serializers/audio_serializers.py\n@@ -135,6 +135,18 @@\n obj = Audio.objects.get(identifier=obj.identifier)\n return obj.get_waveform()\n \n+ def to_representation(self, instance):\n+ # Get the original representation\n+ output = super().to_representation(instance)\n+\n+ if isinstance(instance, Hit):\n+ # TODO: Remove when updating ES indexes\n+ audio = Audio.objects.get(identifier=instance.identifier)\n+ if not audio.thumbnail:\n+ output[\"thumbnail\"] = None\n+\n+ return output\n+\n \n class AudioSearchSerializer(MediaSearchSerializer):\n \"\"\"\n", "issue": "Possibly make `thumbnail` null for audio files without artwork\n## Description\r\n<!-- Concisely describe the bug. -->\r\n\r\nCurrently the frontend tries to fetch thumbnails for all audio files regardless of whether the audio file in question has one or not. \r\nI noticed that the API returns the thumbnail URL for all tracks. That makes sense, but could we improve this to be `null` for audio tracks without artwork? Then we could check the field in the frontend before making a network request.\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom elasticsearch_dsl.response import Hit\n\nfrom catalog.api.constants.field_order import field_position_map\nfrom catalog.api.constants.field_values import AUDIO_CATEGORIES, LENGTHS\nfrom catalog.api.docs.media_docs import fields_to_md\nfrom catalog.api.models import Audio, AudioReport, AudioSet\nfrom catalog.api.serializers.fields import (\n EnumCharField,\n SchemableHyperlinkedIdentityField,\n)\nfrom catalog.api.serializers.media_serializers import (\n MediaReportRequestSerializer,\n MediaSearchRequestSerializer,\n MediaSearchSerializer,\n MediaSerializer,\n get_hyperlinks_serializer,\n get_search_request_source_serializer,\n)\n\n\n#######################\n# Request serializers #\n#######################\n\n\nAudioSearchRequestSourceSerializer = get_search_request_source_serializer(\"audio\")\n\n\nclass AudioSearchRequestSerializer(\n AudioSearchRequestSourceSerializer,\n MediaSearchRequestSerializer,\n):\n \"\"\"Parse and validate search query string parameters.\"\"\"\n\n fields_names = [\n *MediaSearchRequestSerializer.fields_names,\n *AudioSearchRequestSourceSerializer.field_names,\n \"category\",\n \"length\",\n ]\n \"\"\"\n Keep the fields names in sync with the actual fields below as this list is\n used to generate Swagger documentation.\n \"\"\"\n\n category = EnumCharField(\n plural=\"categories\",\n enum_class=AUDIO_CATEGORIES,\n required=False,\n )\n length = EnumCharField(\n plural=\"lengths\",\n enum_class=LENGTHS,\n required=False,\n )\n\n\nclass AudioReportRequestSerializer(MediaReportRequestSerializer):\n class Meta(MediaReportRequestSerializer.Meta):\n model = AudioReport\n\n\n########################\n# Response serializers #\n########################\n\n\nclass AudioSetSerializer(serializers.ModelSerializer):\n \"\"\"An audio set, rendered as a part of the ``AudioSerializer`` output.\"\"\"\n\n class Meta:\n model = AudioSet\n fields = [\n \"title\",\n \"foreign_landing_url\",\n \"creator\",\n \"creator_url\",\n \"url\",\n \"filesize\",\n \"filetype\",\n ]\n\n\nAudioHyperlinksSerializer = get_hyperlinks_serializer(\"audio\")\n\n\nclass AudioSerializer(AudioHyperlinksSerializer, MediaSerializer):\n \"\"\"A single audio file. 
Used in search results.\"\"\"\n\n class Meta:\n model = Audio\n fields = sorted( # keep this list ordered logically\n [\n *MediaSerializer.Meta.fields,\n *AudioHyperlinksSerializer.field_names,\n \"genres\",\n \"alt_files\",\n \"audio_set\",\n \"duration\",\n \"bit_rate\",\n \"sample_rate\",\n \"waveform\", # hyperlink to the endpoint that generates the waveform\n \"peaks\", # waveform peaks, if they have already been generated\n ],\n key=lambda val: field_position_map.get(val, 999),\n )\n \"\"\"\n Keep the fields names in sync with the actual fields below as this list is\n used to generate Swagger documentation.\n \"\"\"\n\n audio_set = AudioSetSerializer(\n allow_null=True,\n help_text=\"Reference to set of which this track is a part.\",\n read_only=True,\n )\n\n waveform = SchemableHyperlinkedIdentityField(\n read_only=True,\n view_name=\"audio-waveform\",\n lookup_field=\"identifier\",\n help_text=\"A direct link to the waveform peaks.\",\n )\n\n # Add-on data\n peaks = serializers.SerializerMethodField(\n help_text=\"The list of peaks used to generate the waveform for the audio.\"\n )\n\n @staticmethod\n def get_peaks(obj) -> list[int]:\n if isinstance(obj, Hit):\n obj = Audio.objects.get(identifier=obj.identifier)\n return obj.get_waveform()\n\n\nclass AudioSearchSerializer(MediaSearchSerializer):\n \"\"\"\n The full audio search response.\n This serializer is purely representational and not actually used to\n serialize the response.\n \"\"\"\n\n results = AudioSerializer(\n many=True,\n help_text=(\n \"An array of audios and their details such as \"\n f\"{fields_to_md(AudioSerializer.Meta.fields)}.\"\n ),\n )\n\n\n##########################\n# Additional serializers #\n##########################\n\n\nclass AudioWaveformSerializer(serializers.Serializer):\n len = serializers.SerializerMethodField()\n points = serializers.ListField(\n child=serializers.FloatField(min_value=0, max_value=1)\n )\n\n @staticmethod\n def get_len(obj) -> int:\n return len(obj.get(\"points\", []))\n", "path": "api/catalog/api/serializers/audio_serializers.py"}], "after_files": [{"content": "from rest_framework import serializers\n\nfrom elasticsearch_dsl.response import Hit\n\nfrom catalog.api.constants.field_order import field_position_map\nfrom catalog.api.constants.field_values import AUDIO_CATEGORIES, LENGTHS\nfrom catalog.api.docs.media_docs import fields_to_md\nfrom catalog.api.models import Audio, AudioReport, AudioSet\nfrom catalog.api.serializers.fields import (\n EnumCharField,\n SchemableHyperlinkedIdentityField,\n)\nfrom catalog.api.serializers.media_serializers import (\n MediaReportRequestSerializer,\n MediaSearchRequestSerializer,\n MediaSearchSerializer,\n MediaSerializer,\n get_hyperlinks_serializer,\n get_search_request_source_serializer,\n)\n\n\n#######################\n# Request serializers #\n#######################\n\n\nAudioSearchRequestSourceSerializer = get_search_request_source_serializer(\"audio\")\n\n\nclass AudioSearchRequestSerializer(\n AudioSearchRequestSourceSerializer,\n MediaSearchRequestSerializer,\n):\n \"\"\"Parse and validate search query string parameters.\"\"\"\n\n fields_names = [\n *MediaSearchRequestSerializer.fields_names,\n *AudioSearchRequestSourceSerializer.field_names,\n \"category\",\n \"length\",\n ]\n \"\"\"\n Keep the fields names in sync with the actual fields below as this list is\n used to generate Swagger documentation.\n \"\"\"\n\n category = EnumCharField(\n plural=\"categories\",\n enum_class=AUDIO_CATEGORIES,\n required=False,\n )\n length = 
EnumCharField(\n plural=\"lengths\",\n enum_class=LENGTHS,\n required=False,\n )\n\n\nclass AudioReportRequestSerializer(MediaReportRequestSerializer):\n class Meta(MediaReportRequestSerializer.Meta):\n model = AudioReport\n\n\n########################\n# Response serializers #\n########################\n\n\nclass AudioSetSerializer(serializers.ModelSerializer):\n \"\"\"An audio set, rendered as a part of the ``AudioSerializer`` output.\"\"\"\n\n class Meta:\n model = AudioSet\n fields = [\n \"title\",\n \"foreign_landing_url\",\n \"creator\",\n \"creator_url\",\n \"url\",\n \"filesize\",\n \"filetype\",\n ]\n\n\nAudioHyperlinksSerializer = get_hyperlinks_serializer(\"audio\")\n\n\nclass AudioSerializer(AudioHyperlinksSerializer, MediaSerializer):\n \"\"\"A single audio file. Used in search results.\"\"\"\n\n class Meta:\n model = Audio\n fields = sorted( # keep this list ordered logically\n [\n *MediaSerializer.Meta.fields,\n *AudioHyperlinksSerializer.field_names,\n \"genres\",\n \"alt_files\",\n \"audio_set\",\n \"duration\",\n \"bit_rate\",\n \"sample_rate\",\n \"waveform\", # hyperlink to the endpoint that generates the waveform\n \"peaks\", # waveform peaks, if they have already been generated\n ],\n key=lambda val: field_position_map.get(val, 999),\n )\n \"\"\"\n Keep the fields names in sync with the actual fields below as this list is\n used to generate Swagger documentation.\n \"\"\"\n\n audio_set = AudioSetSerializer(\n allow_null=True,\n help_text=\"Reference to set of which this track is a part.\",\n read_only=True,\n )\n\n waveform = SchemableHyperlinkedIdentityField(\n read_only=True,\n view_name=\"audio-waveform\",\n lookup_field=\"identifier\",\n help_text=\"A direct link to the waveform peaks.\",\n )\n\n # Add-on data\n peaks = serializers.SerializerMethodField(\n help_text=\"The list of peaks used to generate the waveform for the audio.\"\n )\n\n @staticmethod\n def get_peaks(obj) -> list[int]:\n if isinstance(obj, Hit):\n obj = Audio.objects.get(identifier=obj.identifier)\n return obj.get_waveform()\n\n def to_representation(self, instance):\n # Get the original representation\n output = super().to_representation(instance)\n\n if isinstance(instance, Hit):\n # TODO: Remove when updating ES indexes\n audio = Audio.objects.get(identifier=instance.identifier)\n if not audio.thumbnail:\n output[\"thumbnail\"] = None\n\n return output\n\n\nclass AudioSearchSerializer(MediaSearchSerializer):\n \"\"\"\n The full audio search response.\n This serializer is purely representational and not actually used to\n serialize the response.\n \"\"\"\n\n results = AudioSerializer(\n many=True,\n help_text=(\n \"An array of audios and their details such as \"\n f\"{fields_to_md(AudioSerializer.Meta.fields)}.\"\n ),\n )\n\n\n##########################\n# Additional serializers #\n##########################\n\n\nclass AudioWaveformSerializer(serializers.Serializer):\n len = serializers.SerializerMethodField()\n points = serializers.ListField(\n child=serializers.FloatField(min_value=0, max_value=1)\n )\n\n @staticmethod\n def get_len(obj) -> int:\n return len(obj.get(\"points\", []))\n", "path": "api/catalog/api/serializers/audio_serializers.py"}]} | 1,687 | 178 |
gh_patches_debug_25250 | rasdani/github-patches | git_diff | pre-commit__pre-commit-193 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
^C^C during installation may leave pre-commit in a bad state
There's code which handles the first ^C; however, I think the second one (during execution of the finally block) may not be handled well. I probably need to make the cleanup atomic somehow...
--- END ISSUE ---
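As a rough illustration of the "atomic cleanup" idea mentioned above, here is a self-contained sketch of a sentinel-file pattern; the directory layout and the `install` callable are placeholders, not pre-commit's real API:
```python
import os
import shutil


def ensure_installed(env_dir: str, install) -> None:
    """Install into ``env_dir`` so that an interrupted run can be detected and redone."""
    marker = os.path.join(env_dir, ".installed")
    if os.path.exists(marker):
        return  # a previous run finished; nothing to do
    if os.path.exists(env_dir):
        # a previous run was interrupted mid-install: wipe the partial state
        shutil.rmtree(env_dir)
    os.makedirs(env_dir)
    install(env_dir)
    # the marker is only created after install succeeds, so a ^C at any point
    # before this line leaves a directory that the next run will rebuild
    open(marker, "w").close()
```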
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/repository.py`
Content:
```
1 from __future__ import unicode_literals
2
3 from cached_property import cached_property
4
5 from pre_commit.languages.all import languages
6 from pre_commit.manifest import Manifest
7 from pre_commit.prefixed_command_runner import PrefixedCommandRunner
8
9
10 class Repository(object):
11 def __init__(self, repo_config, repo_path_getter):
12 self.repo_config = repo_config
13 self.repo_path_getter = repo_path_getter
14 self.__installed = False
15
16 @classmethod
17 def create(cls, config, store):
18 repo_path_getter = store.get_repo_path_getter(
19 config['repo'], config['sha']
20 )
21 return cls(config, repo_path_getter)
22
23 @cached_property
24 def repo_url(self):
25 return self.repo_config['repo']
26
27 @cached_property
28 def sha(self):
29 return self.repo_config['sha']
30
31 @cached_property
32 def languages(self):
33 return set(
34 (hook['language'], hook['language_version'])
35 for _, hook in self.hooks
36 )
37
38 @cached_property
39 def hooks(self):
40 # TODO: merging in manifest dicts is a smell imo
41 return tuple(
42 (hook['id'], dict(self.manifest.hooks[hook['id']], **hook))
43 for hook in self.repo_config['hooks']
44 )
45
46 @cached_property
47 def manifest(self):
48 return Manifest(self.repo_path_getter)
49
50 @cached_property
51 def cmd_runner(self):
52 return PrefixedCommandRunner(self.repo_path_getter.repo_path)
53
54 def require_installed(self):
55 if self.__installed:
56 return
57
58 self.install()
59 self.__installed = True
60
61 def install(self):
62 """Install the hook repository."""
63 for language_name, language_version in self.languages:
64 language = languages[language_name]
65 if (
66 language.ENVIRONMENT_DIR is None or
67 self.cmd_runner.exists(language.ENVIRONMENT_DIR)
68 ):
69 # The language is already installed
70 continue
71 language.install_environment(self.cmd_runner, language_version)
72
73 def run_hook(self, hook, file_args):
74 """Run a hook.
75
76 Args:
77 hook - Hook dictionary
78 file_args - List of files to run
79 """
80 self.require_installed()
81 return languages[hook['language']].run_hook(
82 self.cmd_runner, hook, file_args,
83 )
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/repository.py b/pre_commit/repository.py
--- a/pre_commit/repository.py
+++ b/pre_commit/repository.py
@@ -1,5 +1,7 @@
from __future__ import unicode_literals
+import shutil
+
from cached_property import cached_property
from pre_commit.languages.all import languages
@@ -64,11 +66,21 @@
language = languages[language_name]
if (
language.ENVIRONMENT_DIR is None or
- self.cmd_runner.exists(language.ENVIRONMENT_DIR)
+ self.cmd_runner.exists(language.ENVIRONMENT_DIR, '.installed')
):
# The language is already installed
continue
+ # There's potentially incomplete cleanup from previous runs
+ # Clean it up!
+ if self.cmd_runner.exists(language.ENVIRONMENT_DIR):
+ shutil.rmtree(self.cmd_runner.path(language.ENVIRONMENT_DIR))
+
language.install_environment(self.cmd_runner, language_version)
+ # Touch the .installed file (atomic) to indicate we've installed
+ open(
+ self.cmd_runner.path(language.ENVIRONMENT_DIR, '.installed'),
+ 'w',
+ ).close()
def run_hook(self, hook, file_args):
"""Run a hook.
| {"golden_diff": "diff --git a/pre_commit/repository.py b/pre_commit/repository.py\n--- a/pre_commit/repository.py\n+++ b/pre_commit/repository.py\n@@ -1,5 +1,7 @@\n from __future__ import unicode_literals\n \n+import shutil\n+\n from cached_property import cached_property\n \n from pre_commit.languages.all import languages\n@@ -64,11 +66,21 @@\n language = languages[language_name]\n if (\n language.ENVIRONMENT_DIR is None or\n- self.cmd_runner.exists(language.ENVIRONMENT_DIR)\n+ self.cmd_runner.exists(language.ENVIRONMENT_DIR, '.installed')\n ):\n # The language is already installed\n continue\n+ # There's potentially incomplete cleanup from previous runs\n+ # Clean it up!\n+ if self.cmd_runner.exists(language.ENVIRONMENT_DIR):\n+ shutil.rmtree(self.cmd_runner.path(language.ENVIRONMENT_DIR))\n+\n language.install_environment(self.cmd_runner, language_version)\n+ # Touch the .installed file (atomic) to indicate we've installed\n+ open(\n+ self.cmd_runner.path(language.ENVIRONMENT_DIR, '.installed'),\n+ 'w',\n+ ).close()\n \n def run_hook(self, hook, file_args):\n \"\"\"Run a hook.\n", "issue": "^C^C during installation may leave pre-commit in a bad state\nThere's code which handles the first ^C, however I think the second one (during execution of the finally block) may not be handled well. I probably need to make the cleanup atomic somehow...\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom cached_property import cached_property\n\nfrom pre_commit.languages.all import languages\nfrom pre_commit.manifest import Manifest\nfrom pre_commit.prefixed_command_runner import PrefixedCommandRunner\n\n\nclass Repository(object):\n def __init__(self, repo_config, repo_path_getter):\n self.repo_config = repo_config\n self.repo_path_getter = repo_path_getter\n self.__installed = False\n\n @classmethod\n def create(cls, config, store):\n repo_path_getter = store.get_repo_path_getter(\n config['repo'], config['sha']\n )\n return cls(config, repo_path_getter)\n\n @cached_property\n def repo_url(self):\n return self.repo_config['repo']\n\n @cached_property\n def sha(self):\n return self.repo_config['sha']\n\n @cached_property\n def languages(self):\n return set(\n (hook['language'], hook['language_version'])\n for _, hook in self.hooks\n )\n\n @cached_property\n def hooks(self):\n # TODO: merging in manifest dicts is a smell imo\n return tuple(\n (hook['id'], dict(self.manifest.hooks[hook['id']], **hook))\n for hook in self.repo_config['hooks']\n )\n\n @cached_property\n def manifest(self):\n return Manifest(self.repo_path_getter)\n\n @cached_property\n def cmd_runner(self):\n return PrefixedCommandRunner(self.repo_path_getter.repo_path)\n\n def require_installed(self):\n if self.__installed:\n return\n\n self.install()\n self.__installed = True\n\n def install(self):\n \"\"\"Install the hook repository.\"\"\"\n for language_name, language_version in self.languages:\n language = languages[language_name]\n if (\n language.ENVIRONMENT_DIR is None or\n self.cmd_runner.exists(language.ENVIRONMENT_DIR)\n ):\n # The language is already installed\n continue\n language.install_environment(self.cmd_runner, language_version)\n\n def run_hook(self, hook, file_args):\n \"\"\"Run a hook.\n\n Args:\n hook - Hook dictionary\n file_args - List of files to run\n \"\"\"\n self.require_installed()\n return languages[hook['language']].run_hook(\n self.cmd_runner, hook, file_args,\n )\n", "path": "pre_commit/repository.py"}], "after_files": [{"content": "from __future__ import 
unicode_literals\n\nimport shutil\n\nfrom cached_property import cached_property\n\nfrom pre_commit.languages.all import languages\nfrom pre_commit.manifest import Manifest\nfrom pre_commit.prefixed_command_runner import PrefixedCommandRunner\n\n\nclass Repository(object):\n def __init__(self, repo_config, repo_path_getter):\n self.repo_config = repo_config\n self.repo_path_getter = repo_path_getter\n self.__installed = False\n\n @classmethod\n def create(cls, config, store):\n repo_path_getter = store.get_repo_path_getter(\n config['repo'], config['sha']\n )\n return cls(config, repo_path_getter)\n\n @cached_property\n def repo_url(self):\n return self.repo_config['repo']\n\n @cached_property\n def sha(self):\n return self.repo_config['sha']\n\n @cached_property\n def languages(self):\n return set(\n (hook['language'], hook['language_version'])\n for _, hook in self.hooks\n )\n\n @cached_property\n def hooks(self):\n # TODO: merging in manifest dicts is a smell imo\n return tuple(\n (hook['id'], dict(self.manifest.hooks[hook['id']], **hook))\n for hook in self.repo_config['hooks']\n )\n\n @cached_property\n def manifest(self):\n return Manifest(self.repo_path_getter)\n\n @cached_property\n def cmd_runner(self):\n return PrefixedCommandRunner(self.repo_path_getter.repo_path)\n\n def require_installed(self):\n if self.__installed:\n return\n\n self.install()\n self.__installed = True\n\n def install(self):\n \"\"\"Install the hook repository.\"\"\"\n for language_name, language_version in self.languages:\n language = languages[language_name]\n if (\n language.ENVIRONMENT_DIR is None or\n self.cmd_runner.exists(language.ENVIRONMENT_DIR, '.installed')\n ):\n # The language is already installed\n continue\n # There's potentially incomplete cleanup from previous runs\n # Clean it up!\n if self.cmd_runner.exists(language.ENVIRONMENT_DIR):\n shutil.rmtree(self.cmd_runner.path(language.ENVIRONMENT_DIR))\n\n language.install_environment(self.cmd_runner, language_version)\n # Touch the .installed file (atomic) to indicate we've installed\n open(\n self.cmd_runner.path(language.ENVIRONMENT_DIR, '.installed'),\n 'w',\n ).close()\n\n def run_hook(self, hook, file_args):\n \"\"\"Run a hook.\n\n Args:\n hook - Hook dictionary\n file_args - List of files to run\n \"\"\"\n self.require_installed()\n return languages[hook['language']].run_hook(\n self.cmd_runner, hook, file_args,\n )\n", "path": "pre_commit/repository.py"}]} | 974 | 263 |
gh_patches_debug_27672 | rasdani/github-patches | git_diff | bids-standard__pybids-589 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
model: JSON to dict modified key values for transformation
In the `Replace` transformation, you specify the values to replace as a dict.
e.g.:
```
{'LIKELY': "5"}
```
However, the parser that converts BIDS Stats Models from JSON to dict lowercases the keys, so case-sensitive values such as these are altered and the transformation itself is changed.
--- END ISSUE ---
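A self-contained reproduction of the mismatch, using a simplified copy of the camelCase-to-snake_case conversion described above (the helper below is illustrative, not the pybids function itself):
```python
import re


def camel_to_snake(s):
    # same idea as the conversion in the report: insert underscores, then lowercase
    return re.sub(r'((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))', r'_\1', s).lower()


replace_spec = {'LIKELY': "5"}                       # case-sensitive Replace mapping
converted = {camel_to_snake(k): v for k, v in replace_spec.items()}
print(converted)                                     # {'likely': '5'} -- the key was mangled
```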
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bids/utils.py`
Content:
```
1 """ Utility functions. """
2
3 import re
4 import os
5
6
7 def listify(obj):
8 ''' Wraps all non-list or tuple objects in a list; provides a simple way
9 to accept flexible arguments. '''
10 return obj if isinstance(obj, (list, tuple, type(None))) else [obj]
11
12
13 def matches_entities(obj, entities, strict=False):
14 ''' Checks whether an object's entities match the input. '''
15 if strict and set(obj.entities.keys()) != set(entities.keys()):
16 return False
17
18 comm_ents = list(set(obj.entities.keys()) & set(entities.keys()))
19 for k in comm_ents:
20 current = obj.entities[k]
21 target = entities[k]
22 if isinstance(target, (list, tuple)):
23 if current not in target:
24 return False
25 elif current != target:
26 return False
27 return True
28
29
30 def natural_sort(l, field=None):
31 '''
32 based on snippet found at http://stackoverflow.com/a/4836734/2445984
33 '''
34 convert = lambda text: int(text) if text.isdigit() else text.lower()
35
36 def alphanum_key(key):
37 if field is not None:
38 key = getattr(key, field)
39 if not isinstance(key, str):
40 key = str(key)
41 return [convert(c) for c in re.split('([0-9]+)', key)]
42 return sorted(l, key=alphanum_key)
43
44
45 def convert_JSON(j):
46 """ Recursively convert CamelCase keys to snake_case.
47 From: https://stackoverflow.com/questions/17156078/converting-identifier-naming-between-camelcase-and-underscores-during-json-seria
48 """
49
50 def camel_to_snake(s):
51 a = re.compile('((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))')
52 return a.sub(r'_\1', s).lower()
53
54 def convertArray(a):
55 newArr = []
56 for i in a:
57 if isinstance(i,list):
58 newArr.append(convertArray(i))
59 elif isinstance(i, dict):
60 newArr.append(convert_JSON(i))
61 else:
62 newArr.append(i)
63 return newArr
64
65 out = {}
66 for k, value in j.items():
67 newK = camel_to_snake(k)
68
69 if isinstance(value, dict):
70 out[newK] = convert_JSON(value)
71 elif isinstance(value, list):
72 out[newK] = convertArray(value)
73 else:
74 out[newK] = value
75
76 return out
77
78
79 def splitext(path):
80 """splitext for paths with directories that may contain dots.
81 From https://stackoverflow.com/questions/5930036/separating-file-extensions-using-python-os-path-module"""
82 li = []
83 path_without_extensions = os.path.join(os.path.dirname(path),
84 os.path.basename(path).split(os.extsep)[0])
85 extensions = os.path.basename(path).split(os.extsep)[1:]
86 li.append(path_without_extensions)
87 # li.append(extensions) if you want extensions in another list inside the list that is returned.
88 li.extend(extensions)
89 return li
90
91
92 def make_bidsfile(filename):
93 """Create a BIDSFile instance of the appropriate class. """
94 from .layout import models
95
96 patt = re.compile("[._]*[a-zA-Z0-9]*?\\.([^/\\\\]+)$")
97 m = re.search(patt, filename)
98
99 ext = None if not m else m.group(1)
100
101 if ext in ['nii', 'nii.gz']:
102 cls = 'BIDSImageFile'
103 elif ext in ['tsv', 'tsv.gz']:
104 cls = 'BIDSDataFile'
105 elif ext == 'json':
106 cls = 'BIDSJSONFile'
107 else:
108 cls = 'BIDSFile'
109
110 Cls = getattr(models, cls)
111 return Cls(filename)
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bids/utils.py b/bids/utils.py
--- a/bids/utils.py
+++ b/bids/utils.py
@@ -44,9 +44,10 @@
def convert_JSON(j):
""" Recursively convert CamelCase keys to snake_case.
- From: https://stackoverflow.com/questions/17156078/converting-identifier-naming-between-camelcase-and-underscores-during-json-seria
+ From: https://stackoverflow.com/questions/17156078/
+ converting-identifier-naming-between-camelcase-and-
+ underscores-during-json-seria
"""
-
def camel_to_snake(s):
a = re.compile('((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))')
return a.sub(r'_\1', s).lower()
@@ -54,7 +55,7 @@
def convertArray(a):
newArr = []
for i in a:
- if isinstance(i,list):
+ if isinstance(i, list):
newArr.append(convertArray(i))
elif isinstance(i, dict):
newArr.append(convert_JSON(i))
@@ -66,7 +67,8 @@
for k, value in j.items():
newK = camel_to_snake(k)
- if isinstance(value, dict):
+ # Replace transformation uses a dict, so skip lower-casing
+ if isinstance(value, dict) and k != 'Replace':
out[newK] = convert_JSON(value)
elif isinstance(value, list):
out[newK] = convertArray(value)
| {"golden_diff": "diff --git a/bids/utils.py b/bids/utils.py\n--- a/bids/utils.py\n+++ b/bids/utils.py\n@@ -44,9 +44,10 @@\n \n def convert_JSON(j):\n \"\"\" Recursively convert CamelCase keys to snake_case.\n- From: https://stackoverflow.com/questions/17156078/converting-identifier-naming-between-camelcase-and-underscores-during-json-seria\n+ From: https://stackoverflow.com/questions/17156078/\n+ converting-identifier-naming-between-camelcase-and-\n+ underscores-during-json-seria\n \"\"\"\n-\n def camel_to_snake(s):\n a = re.compile('((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))')\n return a.sub(r'_\\1', s).lower()\n@@ -54,7 +55,7 @@\n def convertArray(a):\n newArr = []\n for i in a:\n- if isinstance(i,list):\n+ if isinstance(i, list):\n newArr.append(convertArray(i))\n elif isinstance(i, dict):\n newArr.append(convert_JSON(i))\n@@ -66,7 +67,8 @@\n for k, value in j.items():\n newK = camel_to_snake(k)\n \n- if isinstance(value, dict):\n+ # Replace transformation uses a dict, so skip lower-casing\n+ if isinstance(value, dict) and k != 'Replace':\n out[newK] = convert_JSON(value)\n elif isinstance(value, list):\n out[newK] = convertArray(value)\n", "issue": "model: JSON to dict modified key values for transformation\nIn ` Replace` transformation, you specify as a dict which variables to transform.\r\n\r\ne.g.:\r\n\r\n```\r\n{'LIKELY': \"5\"}\r\n```\r\n\r\nHowever, the parser from JSON to dict to convert BIDS Stats Models modifies keys to lower case, which in the case of specific case sensitive values modifies the transformation itself.\n", "before_files": [{"content": "\"\"\" Utility functions. \"\"\"\n\nimport re\nimport os\n\n\ndef listify(obj):\n ''' Wraps all non-list or tuple objects in a list; provides a simple way\n to accept flexible arguments. '''\n return obj if isinstance(obj, (list, tuple, type(None))) else [obj]\n\n\ndef matches_entities(obj, entities, strict=False):\n ''' Checks whether an object's entities match the input. 
'''\n if strict and set(obj.entities.keys()) != set(entities.keys()):\n return False\n\n comm_ents = list(set(obj.entities.keys()) & set(entities.keys()))\n for k in comm_ents:\n current = obj.entities[k]\n target = entities[k]\n if isinstance(target, (list, tuple)):\n if current not in target:\n return False\n elif current != target:\n return False\n return True\n\n\ndef natural_sort(l, field=None):\n '''\n based on snippet found at http://stackoverflow.com/a/4836734/2445984\n '''\n convert = lambda text: int(text) if text.isdigit() else text.lower()\n\n def alphanum_key(key):\n if field is not None:\n key = getattr(key, field)\n if not isinstance(key, str):\n key = str(key)\n return [convert(c) for c in re.split('([0-9]+)', key)]\n return sorted(l, key=alphanum_key)\n\n\ndef convert_JSON(j):\n \"\"\" Recursively convert CamelCase keys to snake_case.\n From: https://stackoverflow.com/questions/17156078/converting-identifier-naming-between-camelcase-and-underscores-during-json-seria\n \"\"\"\n\n def camel_to_snake(s):\n a = re.compile('((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))')\n return a.sub(r'_\\1', s).lower()\n\n def convertArray(a):\n newArr = []\n for i in a:\n if isinstance(i,list):\n newArr.append(convertArray(i))\n elif isinstance(i, dict):\n newArr.append(convert_JSON(i))\n else:\n newArr.append(i)\n return newArr\n\n out = {}\n for k, value in j.items():\n newK = camel_to_snake(k)\n\n if isinstance(value, dict):\n out[newK] = convert_JSON(value)\n elif isinstance(value, list):\n out[newK] = convertArray(value)\n else:\n out[newK] = value\n\n return out\n\n\ndef splitext(path):\n \"\"\"splitext for paths with directories that may contain dots.\n From https://stackoverflow.com/questions/5930036/separating-file-extensions-using-python-os-path-module\"\"\"\n li = []\n path_without_extensions = os.path.join(os.path.dirname(path),\n os.path.basename(path).split(os.extsep)[0])\n extensions = os.path.basename(path).split(os.extsep)[1:]\n li.append(path_without_extensions)\n # li.append(extensions) if you want extensions in another list inside the list that is returned.\n li.extend(extensions)\n return li\n\n\ndef make_bidsfile(filename):\n \"\"\"Create a BIDSFile instance of the appropriate class. \"\"\"\n from .layout import models\n\n patt = re.compile(\"[._]*[a-zA-Z0-9]*?\\\\.([^/\\\\\\\\]+)$\")\n m = re.search(patt, filename)\n\n ext = None if not m else m.group(1)\n\n if ext in ['nii', 'nii.gz']:\n cls = 'BIDSImageFile'\n elif ext in ['tsv', 'tsv.gz']:\n cls = 'BIDSDataFile'\n elif ext == 'json':\n cls = 'BIDSJSONFile'\n else:\n cls = 'BIDSFile'\n\n Cls = getattr(models, cls)\n return Cls(filename)\n", "path": "bids/utils.py"}], "after_files": [{"content": "\"\"\" Utility functions. \"\"\"\n\nimport re\nimport os\n\n\ndef listify(obj):\n ''' Wraps all non-list or tuple objects in a list; provides a simple way\n to accept flexible arguments. '''\n return obj if isinstance(obj, (list, tuple, type(None))) else [obj]\n\n\ndef matches_entities(obj, entities, strict=False):\n ''' Checks whether an object's entities match the input. 
'''\n if strict and set(obj.entities.keys()) != set(entities.keys()):\n return False\n\n comm_ents = list(set(obj.entities.keys()) & set(entities.keys()))\n for k in comm_ents:\n current = obj.entities[k]\n target = entities[k]\n if isinstance(target, (list, tuple)):\n if current not in target:\n return False\n elif current != target:\n return False\n return True\n\n\ndef natural_sort(l, field=None):\n '''\n based on snippet found at http://stackoverflow.com/a/4836734/2445984\n '''\n convert = lambda text: int(text) if text.isdigit() else text.lower()\n\n def alphanum_key(key):\n if field is not None:\n key = getattr(key, field)\n if not isinstance(key, str):\n key = str(key)\n return [convert(c) for c in re.split('([0-9]+)', key)]\n return sorted(l, key=alphanum_key)\n\n\ndef convert_JSON(j):\n \"\"\" Recursively convert CamelCase keys to snake_case.\n From: https://stackoverflow.com/questions/17156078/\n converting-identifier-naming-between-camelcase-and-\n underscores-during-json-seria\n \"\"\"\n def camel_to_snake(s):\n a = re.compile('((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))')\n return a.sub(r'_\\1', s).lower()\n\n def convertArray(a):\n newArr = []\n for i in a:\n if isinstance(i, list):\n newArr.append(convertArray(i))\n elif isinstance(i, dict):\n newArr.append(convert_JSON(i))\n else:\n newArr.append(i)\n return newArr\n\n out = {}\n for k, value in j.items():\n newK = camel_to_snake(k)\n\n # Replace transformation uses a dict, so skip lower-casing\n if isinstance(value, dict) and k != 'Replace':\n out[newK] = convert_JSON(value)\n elif isinstance(value, list):\n out[newK] = convertArray(value)\n else:\n out[newK] = value\n\n return out\n\n\ndef splitext(path):\n \"\"\"splitext for paths with directories that may contain dots.\n From https://stackoverflow.com/questions/5930036/separating-file-extensions-using-python-os-path-module\"\"\"\n li = []\n path_without_extensions = os.path.join(os.path.dirname(path),\n os.path.basename(path).split(os.extsep)[0])\n extensions = os.path.basename(path).split(os.extsep)[1:]\n li.append(path_without_extensions)\n # li.append(extensions) if you want extensions in another list inside the list that is returned.\n li.extend(extensions)\n return li\n\n\ndef make_bidsfile(filename):\n \"\"\"Create a BIDSFile instance of the appropriate class. \"\"\"\n from .layout import models\n\n patt = re.compile(\"[._]*[a-zA-Z0-9]*?\\\\.([^/\\\\\\\\]+)$\")\n m = re.search(patt, filename)\n\n ext = None if not m else m.group(1)\n\n if ext in ['nii', 'nii.gz']:\n cls = 'BIDSImageFile'\n elif ext in ['tsv', 'tsv.gz']:\n cls = 'BIDSDataFile'\n elif ext == 'json':\n cls = 'BIDSJSONFile'\n else:\n cls = 'BIDSFile'\n\n Cls = getattr(models, cls)\n return Cls(filename)\n", "path": "bids/utils.py"}]} | 1,410 | 353 |
gh_patches_debug_24872 | rasdani/github-patches | git_diff | rotki__rotki-174 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
USD Value for IOTA is incorrect
## Problem Definition
The USD value reported on my exchange is inconsistent with the USD value that rotkehlchen shows.
I tried to find where the USD value is calculated for exchange assets and I found the following API call: [rotkehlchen.inquirer#L68](https://github.com/kascheri12/rotkehlchen/blob/master/rotkehlchen/inquirer.py#L68)
The asset "IOTA" uses the symbol "IOT" at the API endpoint, so the incorrect rate is returned when querying:
https://min-api.cryptocompare.com/data/price?fsym=IOTA&tsyms=USD
vs.
https://min-api.cryptocompare.com/data/price?fsym=IOT&tsyms=USD
--- END ISSUE ---
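For illustration, a small self-contained sketch of the symbol translation the report implies; the mapping dict below is an assumption for demonstration, not rotkehlchen's code:
```python
# cryptocompare knows the asset as "IOT", so translate the symbol before building the query URL
CRYPTOCOMPARE_SYMBOLS = {"IOTA": "IOT", "RDN": "RDN*", "DATAcoin": "DATA"}


def world_to_cryptocompare(asset: str) -> str:
    return CRYPTOCOMPARE_SYMBOLS.get(asset, asset)


url = ("https://min-api.cryptocompare.com/data/price?"
       "fsym={}&tsyms=USD".format(world_to_cryptocompare("IOTA")))
print(url)  # ends with fsym=IOT&tsyms=USD, the corrected query from the report
```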
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rotkehlchen/constants.py`
Content:
```
1 from typing import cast
2 from rotkehlchen import typing
3
4 ETH_DAO_FORK_TS = 1469020840 # 2016-07-20 13:20:40 UTC
5 BTC_BCH_FORK_TS = 1501593374 # 2017-08-01 13:16:14 UTC
6
7 SUPPORTED_EXCHANGES = ['kraken', 'poloniex', 'bittrex', 'bitmex', 'binance']
8 ROTKEHLCHEN_SERVER_TIMEOUT = 5
9 ALL_REMOTES_TIMEOUT = 20
10
11 YEAR_IN_SECONDS = 31536000 # 60 * 60 * 24 * 365
12
13 S_EMPTYSTR = typing.EmptyStr('')
14
15 S_BTC = cast(typing.NonEthTokenBlockchainAsset, 'BTC')
16 S_ETH = cast(typing.NonEthTokenBlockchainAsset, 'ETH')
17 S_DATACOIN = cast(typing.NonEthTokenBlockchainAsset, 'DATAcoin')
18
19 S_RDN = cast(typing.EthToken, 'RDN')
20
21
22 S_USD = typing.FiatAsset('USD')
23 S_EUR = typing.FiatAsset('EUR')
24 S_GBP = typing.FiatAsset('GBP')
25 S_JPY = typing.FiatAsset('JPY')
26 S_CNY = typing.FiatAsset('CNY')
27 FIAT_CURRENCIES = (S_USD, S_EUR, S_GBP, S_JPY, S_CNY)
28
29 EV_BUY = typing.EventType('buy')
30 EV_SELL = typing.EventType('sell')
31 EV_TX_GAS_COST = typing.EventType('tx_gas_cost')
32 EV_ASSET_MOVE = typing.EventType('asset_movement')
33 EV_LOAN_SETTLE = typing.EventType('loan_settlement')
34 EV_INTEREST_PAYMENT = typing.EventType('interest_rate_payment')
35 EV_MARGIN_CLOSE = typing.EventType('margin_position_close')
36
```
Path: `rotkehlchen/inquirer.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import logging
4 from typing import Dict, Iterable, Optional, cast
5
6 import requests
7
8 from rotkehlchen import typing
9 from rotkehlchen.constants import FIAT_CURRENCIES, S_DATACOIN, S_RDN, S_USD
10 from rotkehlchen.errors import RemoteError
11 from rotkehlchen.fval import FVal
12 from rotkehlchen.utils import query_fiat_pair, retry_calls, rlk_jsonloads
13
14 logger = logging.getLogger(__name__)
15
16
17 def get_fiat_usd_exchange_rates(
18 currencies: Optional[Iterable[typing.FiatAsset]] = None,
19 ) -> Dict[typing.FiatAsset, FVal]:
20 rates = {S_USD: FVal(1)}
21 if not currencies:
22 currencies = FIAT_CURRENCIES[1:]
23 for currency in currencies:
24 rates[currency] = query_fiat_pair(S_USD, currency)
25
26 return rates
27
28
29 def world_to_cryptocompare(asset):
30 # Adjust some ETH tokens to how cryptocompare knows them
31 if asset == S_RDN:
32 # remove this if cryptocompare changes the symbol
33 asset = cast(typing.EthToken, 'RDN*')
34 elif asset == S_DATACOIN:
35 asset = cast(typing.NonEthTokenBlockchainAsset, 'DATA')
36
37 return asset
38
39
40 class Inquirer(object):
41 def __init__(self, kraken=None): # TODO: Add type after fixing cyclic dependency
42 self.kraken = kraken
43 self.session = requests.session()
44
45 def query_kraken_for_price(
46 self,
47 asset: typing.Asset,
48 asset_btc_price: FVal,
49 ) -> FVal:
50 if asset == 'BTC':
51 return self.kraken.usdprice['BTC']
52 return asset_btc_price * self.kraken.usdprice['BTC']
53
54 def find_usd_price(
55 self,
56 asset: typing.Asset,
57 asset_btc_price: Optional[FVal] = None,
58 ) -> FVal:
59 if self.kraken and self.kraken.first_connection_made and asset_btc_price is not None:
60 return self.query_kraken_for_price(asset, asset_btc_price)
61
62 asset = world_to_cryptocompare(asset)
63 resp = retry_calls(
64 5,
65 'find_usd_price',
66 'requests.get',
67 requests.get,
68 u'https://min-api.cryptocompare.com/data/price?'
69 'fsym={}&tsyms=USD'.format(asset)
70 )
71
72 if resp.status_code != 200:
73 raise RemoteError('Cant reach cryptocompare to get USD value of {}'.format(asset))
74
75 resp = rlk_jsonloads(resp.text)
76
77 # If there is an error in the response skip this token
78 if 'USD' not in resp:
79 if resp['Response'] == 'Error':
80 print('Could not query USD price for {}. Error: "{}"'.format(
81 asset,
82 resp['Message']),
83 )
84 else:
85 print('Could not query USD price for {}'.format(asset))
86 return FVal(0)
87
88 return FVal(resp['USD'])
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rotkehlchen/constants.py b/rotkehlchen/constants.py
--- a/rotkehlchen/constants.py
+++ b/rotkehlchen/constants.py
@@ -15,6 +15,7 @@
S_BTC = cast(typing.NonEthTokenBlockchainAsset, 'BTC')
S_ETH = cast(typing.NonEthTokenBlockchainAsset, 'ETH')
S_DATACOIN = cast(typing.NonEthTokenBlockchainAsset, 'DATAcoin')
+S_IOTA = cast(typing.NonEthTokenBlockchainAsset, 'IOTA')
S_RDN = cast(typing.EthToken, 'RDN')
diff --git a/rotkehlchen/inquirer.py b/rotkehlchen/inquirer.py
--- a/rotkehlchen/inquirer.py
+++ b/rotkehlchen/inquirer.py
@@ -6,7 +6,7 @@
import requests
from rotkehlchen import typing
-from rotkehlchen.constants import FIAT_CURRENCIES, S_DATACOIN, S_RDN, S_USD
+from rotkehlchen.constants import FIAT_CURRENCIES, S_DATACOIN, S_RDN, S_USD, S_IOTA
from rotkehlchen.errors import RemoteError
from rotkehlchen.fval import FVal
from rotkehlchen.utils import query_fiat_pair, retry_calls, rlk_jsonloads
@@ -33,6 +33,8 @@
asset = cast(typing.EthToken, 'RDN*')
elif asset == S_DATACOIN:
asset = cast(typing.NonEthTokenBlockchainAsset, 'DATA')
+ elif asset == S_IOTA:
+ asset = cast(typing.NonEthTokenBlockchainAsset, 'IOT')
return asset
| {"golden_diff": "diff --git a/rotkehlchen/constants.py b/rotkehlchen/constants.py\n--- a/rotkehlchen/constants.py\n+++ b/rotkehlchen/constants.py\n@@ -15,6 +15,7 @@\n S_BTC = cast(typing.NonEthTokenBlockchainAsset, 'BTC')\n S_ETH = cast(typing.NonEthTokenBlockchainAsset, 'ETH')\n S_DATACOIN = cast(typing.NonEthTokenBlockchainAsset, 'DATAcoin')\n+S_IOTA = cast(typing.NonEthTokenBlockchainAsset, 'IOTA')\n \n S_RDN = cast(typing.EthToken, 'RDN')\n \ndiff --git a/rotkehlchen/inquirer.py b/rotkehlchen/inquirer.py\n--- a/rotkehlchen/inquirer.py\n+++ b/rotkehlchen/inquirer.py\n@@ -6,7 +6,7 @@\n import requests\n \n from rotkehlchen import typing\n-from rotkehlchen.constants import FIAT_CURRENCIES, S_DATACOIN, S_RDN, S_USD\n+from rotkehlchen.constants import FIAT_CURRENCIES, S_DATACOIN, S_RDN, S_USD, S_IOTA\n from rotkehlchen.errors import RemoteError\n from rotkehlchen.fval import FVal\n from rotkehlchen.utils import query_fiat_pair, retry_calls, rlk_jsonloads\n@@ -33,6 +33,8 @@\n asset = cast(typing.EthToken, 'RDN*')\n elif asset == S_DATACOIN:\n asset = cast(typing.NonEthTokenBlockchainAsset, 'DATA')\n+ elif asset == S_IOTA:\n+ asset = cast(typing.NonEthTokenBlockchainAsset, 'IOT')\n \n return asset\n", "issue": "USD Value for IOTA is incorrect\n## Problem Definition\r\n\r\nThe usd value reported on my exchange is inconsistent with the usd value that rotkehlchen shows.\r\n\r\nI tried to find where the USD value is calculated for exchange assets and I found the following API call: [rotkehlchen.inquirer#L68](https://github.com/kascheri12/rotkehlchen/blob/master/rotkehlchen/inquirer.py#L68) \r\n\r\nThe asset \"IOTA\" uses symbol \"IOT\" at the api endpoint therefore the incorrect rate is returned when querying: \r\nhttps://min-api.cryptocompare.com/data/price?fsym=IOTA&tsyms=USD\r\nvs.\r\nhttps://min-api.cryptocompare.com/data/price?fsym=IOT&tsyms=USD\nUSD Value for IOTA is incorrect\n## Problem Definition\r\n\r\nThe usd value reported on my exchange is inconsistent with the usd value that rotkehlchen shows.\r\n\r\nI tried to find where the USD value is calculated for exchange assets and I found the following API call: [rotkehlchen.inquirer#L68](https://github.com/kascheri12/rotkehlchen/blob/master/rotkehlchen/inquirer.py#L68) \r\n\r\nThe asset \"IOTA\" uses symbol \"IOT\" at the api endpoint therefore the incorrect rate is returned when querying: \r\nhttps://min-api.cryptocompare.com/data/price?fsym=IOTA&tsyms=USD\r\nvs.\r\nhttps://min-api.cryptocompare.com/data/price?fsym=IOT&tsyms=USD\n", "before_files": [{"content": "from typing import cast\nfrom rotkehlchen import typing\n\nETH_DAO_FORK_TS = 1469020840 # 2016-07-20 13:20:40 UTC\nBTC_BCH_FORK_TS = 1501593374 # 2017-08-01 13:16:14 UTC\n\nSUPPORTED_EXCHANGES = ['kraken', 'poloniex', 'bittrex', 'bitmex', 'binance']\nROTKEHLCHEN_SERVER_TIMEOUT = 5\nALL_REMOTES_TIMEOUT = 20\n\nYEAR_IN_SECONDS = 31536000 # 60 * 60 * 24 * 365\n\nS_EMPTYSTR = typing.EmptyStr('')\n\nS_BTC = cast(typing.NonEthTokenBlockchainAsset, 'BTC')\nS_ETH = cast(typing.NonEthTokenBlockchainAsset, 'ETH')\nS_DATACOIN = cast(typing.NonEthTokenBlockchainAsset, 'DATAcoin')\n\nS_RDN = cast(typing.EthToken, 'RDN')\n\n\nS_USD = typing.FiatAsset('USD')\nS_EUR = typing.FiatAsset('EUR')\nS_GBP = typing.FiatAsset('GBP')\nS_JPY = typing.FiatAsset('JPY')\nS_CNY = typing.FiatAsset('CNY')\nFIAT_CURRENCIES = (S_USD, S_EUR, S_GBP, S_JPY, S_CNY)\n\nEV_BUY = typing.EventType('buy')\nEV_SELL = typing.EventType('sell')\nEV_TX_GAS_COST = typing.EventType('tx_gas_cost')\nEV_ASSET_MOVE = 
typing.EventType('asset_movement')\nEV_LOAN_SETTLE = typing.EventType('loan_settlement')\nEV_INTEREST_PAYMENT = typing.EventType('interest_rate_payment')\nEV_MARGIN_CLOSE = typing.EventType('margin_position_close')\n", "path": "rotkehlchen/constants.py"}, {"content": "from __future__ import unicode_literals\n\nimport logging\nfrom typing import Dict, Iterable, Optional, cast\n\nimport requests\n\nfrom rotkehlchen import typing\nfrom rotkehlchen.constants import FIAT_CURRENCIES, S_DATACOIN, S_RDN, S_USD\nfrom rotkehlchen.errors import RemoteError\nfrom rotkehlchen.fval import FVal\nfrom rotkehlchen.utils import query_fiat_pair, retry_calls, rlk_jsonloads\n\nlogger = logging.getLogger(__name__)\n\n\ndef get_fiat_usd_exchange_rates(\n currencies: Optional[Iterable[typing.FiatAsset]] = None,\n) -> Dict[typing.FiatAsset, FVal]:\n rates = {S_USD: FVal(1)}\n if not currencies:\n currencies = FIAT_CURRENCIES[1:]\n for currency in currencies:\n rates[currency] = query_fiat_pair(S_USD, currency)\n\n return rates\n\n\ndef world_to_cryptocompare(asset):\n # Adjust some ETH tokens to how cryptocompare knows them\n if asset == S_RDN:\n # remove this if cryptocompare changes the symbol\n asset = cast(typing.EthToken, 'RDN*')\n elif asset == S_DATACOIN:\n asset = cast(typing.NonEthTokenBlockchainAsset, 'DATA')\n\n return asset\n\n\nclass Inquirer(object):\n def __init__(self, kraken=None): # TODO: Add type after fixing cyclic dependency\n self.kraken = kraken\n self.session = requests.session()\n\n def query_kraken_for_price(\n self,\n asset: typing.Asset,\n asset_btc_price: FVal,\n ) -> FVal:\n if asset == 'BTC':\n return self.kraken.usdprice['BTC']\n return asset_btc_price * self.kraken.usdprice['BTC']\n\n def find_usd_price(\n self,\n asset: typing.Asset,\n asset_btc_price: Optional[FVal] = None,\n ) -> FVal:\n if self.kraken and self.kraken.first_connection_made and asset_btc_price is not None:\n return self.query_kraken_for_price(asset, asset_btc_price)\n\n asset = world_to_cryptocompare(asset)\n resp = retry_calls(\n 5,\n 'find_usd_price',\n 'requests.get',\n requests.get,\n u'https://min-api.cryptocompare.com/data/price?'\n 'fsym={}&tsyms=USD'.format(asset)\n )\n\n if resp.status_code != 200:\n raise RemoteError('Cant reach cryptocompare to get USD value of {}'.format(asset))\n\n resp = rlk_jsonloads(resp.text)\n\n # If there is an error in the response skip this token\n if 'USD' not in resp:\n if resp['Response'] == 'Error':\n print('Could not query USD price for {}. 
Error: \"{}\"'.format(\n asset,\n resp['Message']),\n )\n else:\n print('Could not query USD price for {}'.format(asset))\n return FVal(0)\n\n return FVal(resp['USD'])\n", "path": "rotkehlchen/inquirer.py"}], "after_files": [{"content": "from typing import cast\nfrom rotkehlchen import typing\n\nETH_DAO_FORK_TS = 1469020840 # 2016-07-20 13:20:40 UTC\nBTC_BCH_FORK_TS = 1501593374 # 2017-08-01 13:16:14 UTC\n\nSUPPORTED_EXCHANGES = ['kraken', 'poloniex', 'bittrex', 'bitmex', 'binance']\nROTKEHLCHEN_SERVER_TIMEOUT = 5\nALL_REMOTES_TIMEOUT = 20\n\nYEAR_IN_SECONDS = 31536000 # 60 * 60 * 24 * 365\n\nS_EMPTYSTR = typing.EmptyStr('')\n\nS_BTC = cast(typing.NonEthTokenBlockchainAsset, 'BTC')\nS_ETH = cast(typing.NonEthTokenBlockchainAsset, 'ETH')\nS_DATACOIN = cast(typing.NonEthTokenBlockchainAsset, 'DATAcoin')\nS_IOTA = cast(typing.NonEthTokenBlockchainAsset, 'IOTA')\n\nS_RDN = cast(typing.EthToken, 'RDN')\n\n\nS_USD = typing.FiatAsset('USD')\nS_EUR = typing.FiatAsset('EUR')\nS_GBP = typing.FiatAsset('GBP')\nS_JPY = typing.FiatAsset('JPY')\nS_CNY = typing.FiatAsset('CNY')\nFIAT_CURRENCIES = (S_USD, S_EUR, S_GBP, S_JPY, S_CNY)\n\nEV_BUY = typing.EventType('buy')\nEV_SELL = typing.EventType('sell')\nEV_TX_GAS_COST = typing.EventType('tx_gas_cost')\nEV_ASSET_MOVE = typing.EventType('asset_movement')\nEV_LOAN_SETTLE = typing.EventType('loan_settlement')\nEV_INTEREST_PAYMENT = typing.EventType('interest_rate_payment')\nEV_MARGIN_CLOSE = typing.EventType('margin_position_close')\n", "path": "rotkehlchen/constants.py"}, {"content": "from __future__ import unicode_literals\n\nimport logging\nfrom typing import Dict, Iterable, Optional, cast\n\nimport requests\n\nfrom rotkehlchen import typing\nfrom rotkehlchen.constants import FIAT_CURRENCIES, S_DATACOIN, S_RDN, S_USD, S_IOTA\nfrom rotkehlchen.errors import RemoteError\nfrom rotkehlchen.fval import FVal\nfrom rotkehlchen.utils import query_fiat_pair, retry_calls, rlk_jsonloads\n\nlogger = logging.getLogger(__name__)\n\n\ndef get_fiat_usd_exchange_rates(\n currencies: Optional[Iterable[typing.FiatAsset]] = None,\n) -> Dict[typing.FiatAsset, FVal]:\n rates = {S_USD: FVal(1)}\n if not currencies:\n currencies = FIAT_CURRENCIES[1:]\n for currency in currencies:\n rates[currency] = query_fiat_pair(S_USD, currency)\n\n return rates\n\n\ndef world_to_cryptocompare(asset):\n # Adjust some ETH tokens to how cryptocompare knows them\n if asset == S_RDN:\n # remove this if cryptocompare changes the symbol\n asset = cast(typing.EthToken, 'RDN*')\n elif asset == S_DATACOIN:\n asset = cast(typing.NonEthTokenBlockchainAsset, 'DATA')\n elif asset == S_IOTA:\n asset = cast(typing.NonEthTokenBlockchainAsset, 'IOT')\n\n return asset\n\n\nclass Inquirer(object):\n def __init__(self, kraken=None): # TODO: Add type after fixing cyclic dependency\n self.kraken = kraken\n self.session = requests.session()\n\n def query_kraken_for_price(\n self,\n asset: typing.Asset,\n asset_btc_price: FVal,\n ) -> FVal:\n if asset == 'BTC':\n return self.kraken.usdprice['BTC']\n return asset_btc_price * self.kraken.usdprice['BTC']\n\n def find_usd_price(\n self,\n asset: typing.Asset,\n asset_btc_price: Optional[FVal] = None,\n ) -> FVal:\n if self.kraken and self.kraken.first_connection_made and asset_btc_price is not None:\n return self.query_kraken_for_price(asset, asset_btc_price)\n\n asset = world_to_cryptocompare(asset)\n resp = retry_calls(\n 5,\n 'find_usd_price',\n 'requests.get',\n requests.get,\n u'https://min-api.cryptocompare.com/data/price?'\n 'fsym={}&tsyms=USD'.format(asset)\n 
)\n\n if resp.status_code != 200:\n raise RemoteError('Cant reach cryptocompare to get USD value of {}'.format(asset))\n\n resp = rlk_jsonloads(resp.text)\n\n # If there is an error in the response skip this token\n if 'USD' not in resp:\n if resp['Response'] == 'Error':\n print('Could not query USD price for {}. Error: \"{}\"'.format(\n asset,\n resp['Message']),\n )\n else:\n print('Could not query USD price for {}'.format(asset))\n return FVal(0)\n\n return FVal(resp['USD'])\n", "path": "rotkehlchen/inquirer.py"}]} | 1,973 | 386 |
gh_patches_debug_20588 | rasdani/github-patches | git_diff | dotkom__onlineweb4-812 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Hide attendanceevent from django admin
https://online.ntnu.no/admin/events/attendanceevent/
This view should not be used by anyone and attendance info should be edited through the event directly.
It should be possible to hide this by removing
`admin.site.register(AttendanceEvent, AttendanceEventAdmin)`
from events/admin.py (untested).
--- END ISSUE ---
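A hedged sketch of the suggested change: in Django, a model only appears in the admin index while it is registered, so dropping the `register` call (or unregistering the model) hides the standalone view while the inline on `Event` keeps working. Illustrative only and, as the report says, untested against onlineweb4:
```python
from django.contrib import admin

from apps.events.models import AttendanceEvent  # project import, shown for context

# Simply omit the registration ...
# admin.site.register(AttendanceEvent, AttendanceEventAdmin)

# ... or remove it explicitly if it was registered elsewhere:
if admin.site.is_registered(AttendanceEvent):
    admin.site.unregister(AttendanceEvent)
```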
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/events/admin.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from django import forms
4 from django.contrib import admin
5 from django.core import validators
6 from django.utils.translation import ugettext as _
7
8 from apps.events.models import Event
9 from apps.events.models import AttendanceEvent
10 from apps.events.models import Attendee
11 from apps.events.models import CompanyEvent
12 from apps.events.models import RuleBundle
13 from apps.events.models import FieldOfStudyRule
14 from apps.events.models import GradeRule
15 from apps.events.models import UserGroupRule
16 from apps.feedback.admin import FeedbackRelationInline
17
18
19
20 class AttendeeInline(admin.TabularInline):
21 model = Attendee
22 extra = 1
23 classes = ('grp-collapse grp-open',) # style
24 inline_classes = ('grp-collapse grp-open',) # style
25
26
27 class CompanyInline(admin.TabularInline):
28 model = CompanyEvent
29 max_num = 20
30 extra = 0
31 classes = ('grp-collapse grp-open',) # style
32 inline_classes = ('grp-collapse grp-open',) # style
33
34
35 class RuleBundleInline(admin.TabularInline):
36 model = RuleBundle
37 extra = 1
38 max_num = 20
39 classes = ('grp-collapse grp-open',) # style
40 inline_classes = ('grp-collapse grp-open',) # style
41
42
43 class AttendanceEventAdmin(admin.ModelAdmin):
44 model = AttendanceEvent
45 inlines = (AttendeeInline, RuleBundleInline)
46
47
48 class AttendeeAdmin(admin.ModelAdmin):
49 model = Attendee
50 list_display = ('user', 'event', 'paid')
51 actions = None
52
53 def delete_model(self, request, obj):
54 event = obj.event.event
55 event.notify_waiting_list(host=request.META['HTTP_HOST'], unattended_user=obj.user)
56 obj.delete()
57
58
59 class CompanyEventAdmin(admin.ModelAdmin):
60 model = CompanyEvent
61 inlines = (CompanyInline,)
62
63
64 class RuleBundleAdmin(admin.ModelAdmin):
65 model = RuleBundle
66
67
68 class FieldOfStudyRuleAdmin(admin.ModelAdmin):
69 model = FieldOfStudyRule
70
71
72 class GradeRuleAdmin(admin.ModelAdmin):
73 model = GradeRule
74
75
76 class UserGroupRuleAdmin(admin.ModelAdmin):
77 model = UserGroupRule
78
79
80 class AttendanceEventInline(admin.StackedInline):
81 model = AttendanceEvent
82 max_num = 1
83 extra = 0
84 filter_horizontal = ('rule_bundles',)
85 classes = ('grp-collapse grp-open',) # style
86 inline_classes = ('grp-collapse grp-open',) # style
87
88
89 class EventAdmin(admin.ModelAdmin):
90 inlines = (AttendanceEventInline, FeedbackRelationInline, CompanyInline)
91 exclude = ("author", )
92 search_fields = ('title',)
93
94 def save_model(self, request, obj, form, change):
95 if not change: # created
96 obj.author = request.user
97 else:
98 # If attendance max capacity changed we will notify users that they are now on the attend list
99 old_event = Event.objects.get(id=obj.id)
100 if old_event.is_attendance_event() and old_event.wait_list:
101 diff_capacity = obj.attendance_event.max_capacity - old_event.attendance_event.max_capacity
102 if diff_capacity > 0:
103 if diff_capacity > len(old_event.wait_list):
104 diff_capacity = len(old_event.wait_list)
105 # Using old_event because max_capacity has already been changed in obj
106 old_event.notify_waiting_list(host=request.META['HTTP_HOST'], extra_capacity=diff_capacity)
107 obj.save()
108
109 def save_formset(self, request, form, formset, change):
110 instances = formset.save(commit=False)
111 for instance in instances:
112 instance.save()
113 formset.save_m2m()
114
115 def get_form(self, request, obj=None, **kwargs):
116 form = super(EventAdmin, self).get_form(request, obj, **kwargs)
117 form.base_fields['ingress_short'].validators=[validators.MinLengthValidator(50)]
118 form.base_fields['ingress'].validators=[validators.MinLengthValidator(75)]
119 form.base_fields['description'].validators=[validators.MinLengthValidator(140)]
120 return form
121
122 admin.site.register(Event, EventAdmin)
123 admin.site.register(Attendee, AttendeeAdmin)
124 admin.site.register(AttendanceEvent, AttendanceEventAdmin)
125 admin.site.register(RuleBundle, RuleBundleAdmin)
126 admin.site.register(GradeRule, GradeRuleAdmin)
127 admin.site.register(UserGroupRule, UserGroupRuleAdmin)
128 admin.site.register(FieldOfStudyRule, FieldOfStudyRuleAdmin)
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/events/admin.py b/apps/events/admin.py
--- a/apps/events/admin.py
+++ b/apps/events/admin.py
@@ -40,11 +40,6 @@
inline_classes = ('grp-collapse grp-open',) # style
-class AttendanceEventAdmin(admin.ModelAdmin):
- model = AttendanceEvent
- inlines = (AttendeeInline, RuleBundleInline)
-
-
class AttendeeAdmin(admin.ModelAdmin):
model = Attendee
list_display = ('user', 'event', 'paid')
@@ -119,9 +114,9 @@
form.base_fields['description'].validators=[validators.MinLengthValidator(140)]
return form
+
admin.site.register(Event, EventAdmin)
admin.site.register(Attendee, AttendeeAdmin)
-admin.site.register(AttendanceEvent, AttendanceEventAdmin)
admin.site.register(RuleBundle, RuleBundleAdmin)
admin.site.register(GradeRule, GradeRuleAdmin)
admin.site.register(UserGroupRule, UserGroupRuleAdmin)
| {"golden_diff": "diff --git a/apps/events/admin.py b/apps/events/admin.py\n--- a/apps/events/admin.py\n+++ b/apps/events/admin.py\n@@ -40,11 +40,6 @@\n inline_classes = ('grp-collapse grp-open',) # style\n \n \n-class AttendanceEventAdmin(admin.ModelAdmin):\n- model = AttendanceEvent\n- inlines = (AttendeeInline, RuleBundleInline)\n-\n-\n class AttendeeAdmin(admin.ModelAdmin):\n model = Attendee\n list_display = ('user', 'event', 'paid')\n@@ -119,9 +114,9 @@\n form.base_fields['description'].validators=[validators.MinLengthValidator(140)]\n return form\n \n+\n admin.site.register(Event, EventAdmin)\n admin.site.register(Attendee, AttendeeAdmin)\n-admin.site.register(AttendanceEvent, AttendanceEventAdmin)\n admin.site.register(RuleBundle, RuleBundleAdmin)\n admin.site.register(GradeRule, GradeRuleAdmin)\n admin.site.register(UserGroupRule, UserGroupRuleAdmin)\n", "issue": "Hide attendanceevent from django admin\nhttps://online.ntnu.no/admin/events/attendanceevent/\n\nThis view should not be used by anyone and attendance info should be edited through the event directly. \n\nShould be possible to hide this by removing \n`admin.site.register(AttendanceEvent, AttendanceEventAdmin)`\n in events/admin.py (untested)\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom django import forms\nfrom django.contrib import admin\nfrom django.core import validators\nfrom django.utils.translation import ugettext as _\n\nfrom apps.events.models import Event\nfrom apps.events.models import AttendanceEvent\nfrom apps.events.models import Attendee\nfrom apps.events.models import CompanyEvent\nfrom apps.events.models import RuleBundle\nfrom apps.events.models import FieldOfStudyRule\nfrom apps.events.models import GradeRule\nfrom apps.events.models import UserGroupRule\nfrom apps.feedback.admin import FeedbackRelationInline\n\n\n\nclass AttendeeInline(admin.TabularInline):\n model = Attendee\n extra = 1\n classes = ('grp-collapse grp-open',) # style\n inline_classes = ('grp-collapse grp-open',) # style\n\n\nclass CompanyInline(admin.TabularInline):\n model = CompanyEvent\n max_num = 20\n extra = 0\n classes = ('grp-collapse grp-open',) # style\n inline_classes = ('grp-collapse grp-open',) # style\n\n\nclass RuleBundleInline(admin.TabularInline):\n model = RuleBundle\n extra = 1\n max_num = 20\n classes = ('grp-collapse grp-open',) # style\n inline_classes = ('grp-collapse grp-open',) # style\n\n\nclass AttendanceEventAdmin(admin.ModelAdmin):\n model = AttendanceEvent\n inlines = (AttendeeInline, RuleBundleInline)\n\n\nclass AttendeeAdmin(admin.ModelAdmin):\n model = Attendee\n list_display = ('user', 'event', 'paid')\n actions = None\n\n def delete_model(self, request, obj):\n event = obj.event.event\n event.notify_waiting_list(host=request.META['HTTP_HOST'], unattended_user=obj.user)\n obj.delete()\n\n\nclass CompanyEventAdmin(admin.ModelAdmin):\n model = CompanyEvent\n inlines = (CompanyInline,)\n\n\nclass RuleBundleAdmin(admin.ModelAdmin):\n model = RuleBundle\n\n\nclass FieldOfStudyRuleAdmin(admin.ModelAdmin):\n model = FieldOfStudyRule\n\n\nclass GradeRuleAdmin(admin.ModelAdmin):\n model = GradeRule\n\n\nclass UserGroupRuleAdmin(admin.ModelAdmin):\n model = UserGroupRule\n\n\nclass AttendanceEventInline(admin.StackedInline):\n model = AttendanceEvent\n max_num = 1\n extra = 0\n filter_horizontal = ('rule_bundles',)\n classes = ('grp-collapse grp-open',) # style\n inline_classes = ('grp-collapse grp-open',) # style\n\n\nclass EventAdmin(admin.ModelAdmin):\n inlines = (AttendanceEventInline, 
FeedbackRelationInline, CompanyInline)\n exclude = (\"author\", )\n search_fields = ('title',)\n\n def save_model(self, request, obj, form, change):\n if not change: # created\n obj.author = request.user\n else:\n # If attendance max capacity changed we will notify users that they are now on the attend list\n old_event = Event.objects.get(id=obj.id)\n if old_event.is_attendance_event() and old_event.wait_list:\n diff_capacity = obj.attendance_event.max_capacity - old_event.attendance_event.max_capacity\n if diff_capacity > 0:\n if diff_capacity > len(old_event.wait_list):\n diff_capacity = len(old_event.wait_list)\n # Using old_event because max_capacity has already been changed in obj\n old_event.notify_waiting_list(host=request.META['HTTP_HOST'], extra_capacity=diff_capacity)\n obj.save()\n\n def save_formset(self, request, form, formset, change):\n instances = formset.save(commit=False)\n for instance in instances:\n instance.save()\n formset.save_m2m()\n\n def get_form(self, request, obj=None, **kwargs):\n form = super(EventAdmin, self).get_form(request, obj, **kwargs)\n form.base_fields['ingress_short'].validators=[validators.MinLengthValidator(50)]\n form.base_fields['ingress'].validators=[validators.MinLengthValidator(75)]\n form.base_fields['description'].validators=[validators.MinLengthValidator(140)]\n return form\n\nadmin.site.register(Event, EventAdmin)\nadmin.site.register(Attendee, AttendeeAdmin)\nadmin.site.register(AttendanceEvent, AttendanceEventAdmin)\nadmin.site.register(RuleBundle, RuleBundleAdmin)\nadmin.site.register(GradeRule, GradeRuleAdmin)\nadmin.site.register(UserGroupRule, UserGroupRuleAdmin)\nadmin.site.register(FieldOfStudyRule, FieldOfStudyRuleAdmin)\n", "path": "apps/events/admin.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom django import forms\nfrom django.contrib import admin\nfrom django.core import validators\nfrom django.utils.translation import ugettext as _\n\nfrom apps.events.models import Event\nfrom apps.events.models import AttendanceEvent\nfrom apps.events.models import Attendee\nfrom apps.events.models import CompanyEvent\nfrom apps.events.models import RuleBundle\nfrom apps.events.models import FieldOfStudyRule\nfrom apps.events.models import GradeRule\nfrom apps.events.models import UserGroupRule\nfrom apps.feedback.admin import FeedbackRelationInline\n\n\n\nclass AttendeeInline(admin.TabularInline):\n model = Attendee\n extra = 1\n classes = ('grp-collapse grp-open',) # style\n inline_classes = ('grp-collapse grp-open',) # style\n\n\nclass CompanyInline(admin.TabularInline):\n model = CompanyEvent\n max_num = 20\n extra = 0\n classes = ('grp-collapse grp-open',) # style\n inline_classes = ('grp-collapse grp-open',) # style\n\n\nclass RuleBundleInline(admin.TabularInline):\n model = RuleBundle\n extra = 1\n max_num = 20\n classes = ('grp-collapse grp-open',) # style\n inline_classes = ('grp-collapse grp-open',) # style\n\n\nclass AttendeeAdmin(admin.ModelAdmin):\n model = Attendee\n list_display = ('user', 'event', 'paid')\n actions = None\n\n def delete_model(self, request, obj):\n event = obj.event.event\n event.notify_waiting_list(host=request.META['HTTP_HOST'], unattended_user=obj.user)\n obj.delete()\n\n\nclass CompanyEventAdmin(admin.ModelAdmin):\n model = CompanyEvent\n inlines = (CompanyInline,)\n\n\nclass RuleBundleAdmin(admin.ModelAdmin):\n model = RuleBundle\n\n\nclass FieldOfStudyRuleAdmin(admin.ModelAdmin):\n model = FieldOfStudyRule\n\n\nclass GradeRuleAdmin(admin.ModelAdmin):\n model = GradeRule\n\n\nclass 
UserGroupRuleAdmin(admin.ModelAdmin):\n model = UserGroupRule\n\n\nclass AttendanceEventInline(admin.StackedInline):\n model = AttendanceEvent\n max_num = 1\n extra = 0\n filter_horizontal = ('rule_bundles',)\n classes = ('grp-collapse grp-open',) # style\n inline_classes = ('grp-collapse grp-open',) # style\n\n\nclass EventAdmin(admin.ModelAdmin):\n inlines = (AttendanceEventInline, FeedbackRelationInline, CompanyInline)\n exclude = (\"author\", )\n search_fields = ('title',)\n\n def save_model(self, request, obj, form, change):\n if not change: # created\n obj.author = request.user\n else:\n # If attendance max capacity changed we will notify users that they are now on the attend list\n old_event = Event.objects.get(id=obj.id)\n if old_event.is_attendance_event() and old_event.wait_list:\n diff_capacity = obj.attendance_event.max_capacity - old_event.attendance_event.max_capacity\n if diff_capacity > 0:\n if diff_capacity > len(old_event.wait_list):\n diff_capacity = len(old_event.wait_list)\n # Using old_event because max_capacity has already been changed in obj\n old_event.notify_waiting_list(host=request.META['HTTP_HOST'], extra_capacity=diff_capacity)\n obj.save()\n\n def save_formset(self, request, form, formset, change):\n instances = formset.save(commit=False)\n for instance in instances:\n instance.save()\n formset.save_m2m()\n\n def get_form(self, request, obj=None, **kwargs):\n form = super(EventAdmin, self).get_form(request, obj, **kwargs)\n form.base_fields['ingress_short'].validators=[validators.MinLengthValidator(50)]\n form.base_fields['ingress'].validators=[validators.MinLengthValidator(75)]\n form.base_fields['description'].validators=[validators.MinLengthValidator(140)]\n return form\n\n\nadmin.site.register(Event, EventAdmin)\nadmin.site.register(Attendee, AttendeeAdmin)\nadmin.site.register(RuleBundle, RuleBundleAdmin)\nadmin.site.register(GradeRule, GradeRuleAdmin)\nadmin.site.register(UserGroupRule, UserGroupRuleAdmin)\nadmin.site.register(FieldOfStudyRule, FieldOfStudyRuleAdmin)\n", "path": "apps/events/admin.py"}]} | 1,567 | 214 |
gh_patches_debug_13193 | rasdani/github-patches | git_diff | opensearch-project__opensearch-build-499 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make plugin integtest.sh run against non-snapshot build
The plugin integtest.sh picks up the opensearch version provided in build.gradle, which is 1.1.0-SNAPSHOT. Since the release candidates are non snapshot built artifacts, make this configurable in integ test job
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bundle-workflow/src/paths/script_finder.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 import os
8
9
10 class ScriptFinder:
11 class ScriptNotFoundError(Exception):
12 def __init__(self, kind, paths):
13 self.kind = kind
14 self.paths = paths
15 super().__init__(f"Could not find {kind} script. Looked in {paths}.")
16
17 component_scripts_path = os.path.realpath(
18 os.path.join(
19 os.path.dirname(os.path.abspath(__file__)), "../../scripts/components"
20 )
21 )
22
23 default_scripts_path = os.path.realpath(
24 os.path.join(
25 os.path.dirname(os.path.abspath(__file__)), "../../scripts/default"
26 )
27 )
28
29 """
30 ScriptFinder is a helper that abstracts away the details of where to look for build, test and install scripts.
31
32 For build.sh and integtest.sh scripts, given a component name and a checked-out Git repository,
33 it will look in the following locations, in order:
34 * Root of the Git repository
35 * /scripts/<script-name> in the Git repository
36 * <component_scripts_path>/<component_name>/<script-name>
37 * <default_scripts_path>/<script-name>
38
39 For install.sh scripts, given a component name, it will look in the following locations, in order:
40 * <component_scripts_path>/<component_name>/<script-name>
41 * <default_scripts_path>/<script-name>
42 """
43
44 @classmethod
45 def __find_script(cls, name, paths):
46 script = next(filter(lambda path: os.path.exists(path), paths), None)
47 if script is None:
48 raise ScriptFinder.ScriptNotFoundError(name, paths)
49 return script
50
51 @classmethod
52 def find_build_script(cls, component_name, git_dir):
53 paths = [
54 os.path.realpath(os.path.join(git_dir, "build.sh")),
55 os.path.realpath(os.path.join(git_dir, "scripts/build.sh")),
56 os.path.realpath(
57 os.path.join(cls.component_scripts_path, component_name, "build.sh")
58 ),
59 os.path.realpath(os.path.join(cls.default_scripts_path, "build.sh")),
60 ]
61
62 return cls.__find_script("build.sh", paths)
63
64 @classmethod
65 def find_integ_test_script(cls, component_name, git_dir):
66 paths = [
67 os.path.realpath(os.path.join(git_dir, "integtest.sh")),
68 os.path.realpath(os.path.join(git_dir, "scripts/integtest.sh")),
69 os.path.realpath(
70 os.path.join(cls.component_scripts_path, component_name, "integtest.sh")
71 ),
72 os.path.realpath(os.path.join(cls.default_scripts_path, "integtest.sh")),
73 ]
74
75 return cls.__find_script("integtest.sh", paths)
76
77 @classmethod
78 def find_install_script(cls, component_name):
79 paths = [
80 os.path.realpath(
81 os.path.join(cls.component_scripts_path, component_name, "install.sh")
82 ),
83 os.path.realpath(os.path.join(cls.default_scripts_path, "install.sh")),
84 ]
85
86 return cls.__find_script("install.sh", paths)
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bundle-workflow/src/paths/script_finder.py b/bundle-workflow/src/paths/script_finder.py
--- a/bundle-workflow/src/paths/script_finder.py
+++ b/bundle-workflow/src/paths/script_finder.py
@@ -64,8 +64,9 @@
@classmethod
def find_integ_test_script(cls, component_name, git_dir):
paths = [
- os.path.realpath(os.path.join(git_dir, "integtest.sh")),
- os.path.realpath(os.path.join(git_dir, "scripts/integtest.sh")),
+ # TODO: Uncomment this after the integtest.sh tool is removed from plugin repos. See issue #497
+ # os.path.realpath(os.path.join(git_dir, "integtest.sh")),
+ # os.path.realpath(os.path.join(git_dir, "scripts/integtest.sh")),
os.path.realpath(
os.path.join(cls.component_scripts_path, component_name, "integtest.sh")
),
| {"golden_diff": "diff --git a/bundle-workflow/src/paths/script_finder.py b/bundle-workflow/src/paths/script_finder.py\n--- a/bundle-workflow/src/paths/script_finder.py\n+++ b/bundle-workflow/src/paths/script_finder.py\n@@ -64,8 +64,9 @@\n @classmethod\n def find_integ_test_script(cls, component_name, git_dir):\n paths = [\n- os.path.realpath(os.path.join(git_dir, \"integtest.sh\")),\n- os.path.realpath(os.path.join(git_dir, \"scripts/integtest.sh\")),\n+ # TODO: Uncomment this after the integtest.sh tool is removed from plugin repos. See issue #497\n+ # os.path.realpath(os.path.join(git_dir, \"integtest.sh\")),\n+ # os.path.realpath(os.path.join(git_dir, \"scripts/integtest.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"integtest.sh\")\n ),\n", "issue": "Make plugin integtest.sh run against non-snapshot build\nThe plugin integtest.sh picks up the opensearch version provided in build.gradle, which is 1.1.0-SNAPSHOT. Since the release candidates are non snapshot built artifacts, make this configurable in integ test job\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\n\n\nclass ScriptFinder:\n class ScriptNotFoundError(Exception):\n def __init__(self, kind, paths):\n self.kind = kind\n self.paths = paths\n super().__init__(f\"Could not find {kind} script. Looked in {paths}.\")\n\n component_scripts_path = os.path.realpath(\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"../../scripts/components\"\n )\n )\n\n default_scripts_path = os.path.realpath(\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"../../scripts/default\"\n )\n )\n\n \"\"\"\n ScriptFinder is a helper that abstracts away the details of where to look for build, test and install scripts.\n\n For build.sh and integtest.sh scripts, given a component name and a checked-out Git repository,\n it will look in the following locations, in order:\n * Root of the Git repository\n * /scripts/<script-name> in the Git repository\n * <component_scripts_path>/<component_name>/<script-name>\n * <default_scripts_path>/<script-name>\n\n For install.sh scripts, given a component name, it will look in the following locations, in order:\n * <component_scripts_path>/<component_name>/<script-name>\n * <default_scripts_path>/<script-name>\n \"\"\"\n\n @classmethod\n def __find_script(cls, name, paths):\n script = next(filter(lambda path: os.path.exists(path), paths), None)\n if script is None:\n raise ScriptFinder.ScriptNotFoundError(name, paths)\n return script\n\n @classmethod\n def find_build_script(cls, component_name, git_dir):\n paths = [\n os.path.realpath(os.path.join(git_dir, \"build.sh\")),\n os.path.realpath(os.path.join(git_dir, \"scripts/build.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"build.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"build.sh\")),\n ]\n\n return cls.__find_script(\"build.sh\", paths)\n\n @classmethod\n def find_integ_test_script(cls, component_name, git_dir):\n paths = [\n os.path.realpath(os.path.join(git_dir, \"integtest.sh\")),\n os.path.realpath(os.path.join(git_dir, \"scripts/integtest.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"integtest.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"integtest.sh\")),\n ]\n\n 
return cls.__find_script(\"integtest.sh\", paths)\n\n @classmethod\n def find_install_script(cls, component_name):\n paths = [\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"install.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"install.sh\")),\n ]\n\n return cls.__find_script(\"install.sh\", paths)\n", "path": "bundle-workflow/src/paths/script_finder.py"}], "after_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\n\n\nclass ScriptFinder:\n class ScriptNotFoundError(Exception):\n def __init__(self, kind, paths):\n self.kind = kind\n self.paths = paths\n super().__init__(f\"Could not find {kind} script. Looked in {paths}.\")\n\n component_scripts_path = os.path.realpath(\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"../../scripts/components\"\n )\n )\n\n default_scripts_path = os.path.realpath(\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"../../scripts/default\"\n )\n )\n\n \"\"\"\n ScriptFinder is a helper that abstracts away the details of where to look for build, test and install scripts.\n\n For build.sh and integtest.sh scripts, given a component name and a checked-out Git repository,\n it will look in the following locations, in order:\n * Root of the Git repository\n * /scripts/<script-name> in the Git repository\n * <component_scripts_path>/<component_name>/<script-name>\n * <default_scripts_path>/<script-name>\n\n For install.sh scripts, given a component name, it will look in the following locations, in order:\n * <component_scripts_path>/<component_name>/<script-name>\n * <default_scripts_path>/<script-name>\n \"\"\"\n\n @classmethod\n def __find_script(cls, name, paths):\n script = next(filter(lambda path: os.path.exists(path), paths), None)\n if script is None:\n raise ScriptFinder.ScriptNotFoundError(name, paths)\n return script\n\n @classmethod\n def find_build_script(cls, component_name, git_dir):\n paths = [\n os.path.realpath(os.path.join(git_dir, \"build.sh\")),\n os.path.realpath(os.path.join(git_dir, \"scripts/build.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"build.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"build.sh\")),\n ]\n\n return cls.__find_script(\"build.sh\", paths)\n\n @classmethod\n def find_integ_test_script(cls, component_name, git_dir):\n paths = [\n # TODO: Uncomment this after the integtest.sh tool is removed from plugin repos. See issue #497\n # os.path.realpath(os.path.join(git_dir, \"integtest.sh\")),\n # os.path.realpath(os.path.join(git_dir, \"scripts/integtest.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"integtest.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"integtest.sh\")),\n ]\n\n return cls.__find_script(\"integtest.sh\", paths)\n\n @classmethod\n def find_install_script(cls, component_name):\n paths = [\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"install.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"install.sh\")),\n ]\n\n return cls.__find_script(\"install.sh\", paths)\n", "path": "bundle-workflow/src/paths/script_finder.py"}]} | 1,165 | 214 |
gh_patches_debug_15180 | rasdani/github-patches | git_diff | pre-commit__pre-commit-38 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Consider using --no-checkout for cloning
I'd assume it is faster...
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/repository.py`
Content:
```
1
2 import contextlib
3 from plumbum import local
4
5 import pre_commit.constants as C
6 from pre_commit.clientlib.validate_manifest import validate_manifest
7 from pre_commit.hooks_workspace import in_hooks_workspace
8 from pre_commit.languages.all import languages
9 from pre_commit.util import cached_property
10
11
12 class Repository(object):
13 def __init__(self, repo_config):
14 self.repo_config = repo_config
15
16 @cached_property
17 def repo_url(self):
18 return self.repo_config['repo']
19
20 @cached_property
21 def sha(self):
22 return self.repo_config['sha']
23
24 @cached_property
25 def languages(self):
26 return set(filter(None, (
27 hook.get('language') for hook in self.hooks.values()
28 )))
29
30 @cached_property
31 def hooks(self):
32 return dict(
33 (hook['id'], dict(hook, **self.manifest[hook['id']]))
34 for hook in self.repo_config['hooks']
35 )
36
37 @cached_property
38 def manifest(self):
39 with self.in_checkout():
40 return dict(
41 (hook['id'], hook)
42 for hook in validate_manifest(C.MANIFEST_FILE)
43 )
44
45 @contextlib.contextmanager
46 def in_checkout(self):
47 with in_hooks_workspace():
48 # SMELL:
49 self.create()
50 with local.cwd(self.sha):
51 yield
52
53 def create(self):
54 with in_hooks_workspace():
55 if local.path(self.sha).exists():
56 # Project already exists, no reason to re-create it
57 return
58
59 local['git']['clone', self.repo_url, self.sha]()
60 with self.in_checkout():
61 local['git']['checkout', self.sha]()
62
63 def install(self):
64 with self.in_checkout():
65 for language in C.SUPPORTED_LANGUAGES:
66 if language in self.languages:
67 languages[language].install_environment()
68
69 def run_hook(self, hook_id, file_args):
70 with self.in_checkout():
71 hook = self.hooks[hook_id]
72 return languages[hook['language']].run_hook(hook, file_args)
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/repository.py b/pre_commit/repository.py
--- a/pre_commit/repository.py
+++ b/pre_commit/repository.py
@@ -56,7 +56,7 @@
# Project already exists, no reason to re-create it
return
- local['git']['clone', self.repo_url, self.sha]()
+ local['git']['clone', '--no-checkout', self.repo_url, self.sha]()
with self.in_checkout():
local['git']['checkout', self.sha]()
@@ -69,4 +69,4 @@
def run_hook(self, hook_id, file_args):
with self.in_checkout():
hook = self.hooks[hook_id]
- return languages[hook['language']].run_hook(hook, file_args)
\ No newline at end of file
+ return languages[hook['language']].run_hook(hook, file_args)
| {"golden_diff": "diff --git a/pre_commit/repository.py b/pre_commit/repository.py\n--- a/pre_commit/repository.py\n+++ b/pre_commit/repository.py\n@@ -56,7 +56,7 @@\n # Project already exists, no reason to re-create it\n return\n \n- local['git']['clone', self.repo_url, self.sha]()\n+ local['git']['clone', '--no-checkout', self.repo_url, self.sha]()\n with self.in_checkout():\n local['git']['checkout', self.sha]()\n \n@@ -69,4 +69,4 @@\n def run_hook(self, hook_id, file_args):\n with self.in_checkout():\n hook = self.hooks[hook_id]\n- return languages[hook['language']].run_hook(hook, file_args)\n\\ No newline at end of file\n+ return languages[hook['language']].run_hook(hook, file_args)\n", "issue": "Consider using --no-checkout for cloning\nI'd assume it is faster...\n\n", "before_files": [{"content": "\nimport contextlib\nfrom plumbum import local\n\nimport pre_commit.constants as C\nfrom pre_commit.clientlib.validate_manifest import validate_manifest\nfrom pre_commit.hooks_workspace import in_hooks_workspace\nfrom pre_commit.languages.all import languages\nfrom pre_commit.util import cached_property\n\n\nclass Repository(object):\n def __init__(self, repo_config):\n self.repo_config = repo_config\n\n @cached_property\n def repo_url(self):\n return self.repo_config['repo']\n\n @cached_property\n def sha(self):\n return self.repo_config['sha']\n\n @cached_property\n def languages(self):\n return set(filter(None, (\n hook.get('language') for hook in self.hooks.values()\n )))\n\n @cached_property\n def hooks(self):\n return dict(\n (hook['id'], dict(hook, **self.manifest[hook['id']]))\n for hook in self.repo_config['hooks']\n )\n\n @cached_property\n def manifest(self):\n with self.in_checkout():\n return dict(\n (hook['id'], hook)\n for hook in validate_manifest(C.MANIFEST_FILE)\n )\n\n @contextlib.contextmanager\n def in_checkout(self):\n with in_hooks_workspace():\n # SMELL:\n self.create()\n with local.cwd(self.sha):\n yield\n\n def create(self):\n with in_hooks_workspace():\n if local.path(self.sha).exists():\n # Project already exists, no reason to re-create it\n return\n\n local['git']['clone', self.repo_url, self.sha]()\n with self.in_checkout():\n local['git']['checkout', self.sha]()\n\n def install(self):\n with self.in_checkout():\n for language in C.SUPPORTED_LANGUAGES:\n if language in self.languages:\n languages[language].install_environment()\n\n def run_hook(self, hook_id, file_args):\n with self.in_checkout():\n hook = self.hooks[hook_id]\n return languages[hook['language']].run_hook(hook, file_args)", "path": "pre_commit/repository.py"}], "after_files": [{"content": "\nimport contextlib\nfrom plumbum import local\n\nimport pre_commit.constants as C\nfrom pre_commit.clientlib.validate_manifest import validate_manifest\nfrom pre_commit.hooks_workspace import in_hooks_workspace\nfrom pre_commit.languages.all import languages\nfrom pre_commit.util import cached_property\n\n\nclass Repository(object):\n def __init__(self, repo_config):\n self.repo_config = repo_config\n\n @cached_property\n def repo_url(self):\n return self.repo_config['repo']\n\n @cached_property\n def sha(self):\n return self.repo_config['sha']\n\n @cached_property\n def languages(self):\n return set(filter(None, (\n hook.get('language') for hook in self.hooks.values()\n )))\n\n @cached_property\n def hooks(self):\n return dict(\n (hook['id'], dict(hook, **self.manifest[hook['id']]))\n for hook in self.repo_config['hooks']\n )\n\n @cached_property\n def manifest(self):\n with self.in_checkout():\n return 
dict(\n (hook['id'], hook)\n for hook in validate_manifest(C.MANIFEST_FILE)\n )\n\n @contextlib.contextmanager\n def in_checkout(self):\n with in_hooks_workspace():\n # SMELL:\n self.create()\n with local.cwd(self.sha):\n yield\n\n def create(self):\n with in_hooks_workspace():\n if local.path(self.sha).exists():\n # Project already exists, no reason to re-create it\n return\n\n local['git']['clone', '--no-checkout', self.repo_url, self.sha]()\n with self.in_checkout():\n local['git']['checkout', self.sha]()\n\n def install(self):\n with self.in_checkout():\n for language in C.SUPPORTED_LANGUAGES:\n if language in self.languages:\n languages[language].install_environment()\n\n def run_hook(self, hook_id, file_args):\n with self.in_checkout():\n hook = self.hooks[hook_id]\n return languages[hook['language']].run_hook(hook, file_args)\n", "path": "pre_commit/repository.py"}]} | 844 | 190 |
gh_patches_debug_12470 | rasdani/github-patches | git_diff | joke2k__faker-759 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Generating invalid cpf (brazillian ssn)
Faker is generating invalid checksum digits for cpf (brazillian ssn).
### Steps to reproduce
1. Create fake instance using localization "pt_BR"
1. Call fake.cpf()
### Expected behavior
It should generate a valid CPF.
### Actual behavior
It is generating a CPF with invalid checksum digits, in some cases.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/ssn/pt_BR/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from __future__ import unicode_literals
4 from .. import Provider as SsnProvider
5
6
7 def checksum(digits):
8 s = 0
9 p = len(digits) + 1
10 for i in range(0, len(digits)):
11 s += digits[i] * p
12 p -= 1
13
14 reminder = s % 11
15 if reminder == 0 or reminder == 1:
16 return 1
17 else:
18 return 11 - reminder
19
20
21 class Provider(SsnProvider):
22 """
23 Provider for Brazilian SSN also known in Brazil as CPF.
24 There are two methods Provider.ssn and Provider.cpf
25 The snn returns a valid number with numbers only
26 The cpf return a valid number formatted with brazilian mask. eg nnn.nnn.nnn-nn
27 """
28
29 def ssn(self):
30 digits = self.generator.random.sample(range(10), 9)
31
32 dv = checksum(digits)
33 digits.append(dv)
34 digits.append(checksum(digits))
35
36 return ''.join(map(str, digits))
37
38 def cpf(self):
39 c = self.ssn()
40 return c[:3] + '.' + c[3:6] + '.' + c[6:9] + '-' + c[9:]
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/providers/ssn/pt_BR/__init__.py b/faker/providers/ssn/pt_BR/__init__.py
--- a/faker/providers/ssn/pt_BR/__init__.py
+++ b/faker/providers/ssn/pt_BR/__init__.py
@@ -5,6 +5,12 @@
def checksum(digits):
+ """
+ Returns the checksum of CPF digits.
+ References to the algorithm:
+ https://pt.wikipedia.org/wiki/Cadastro_de_pessoas_f%C3%ADsicas#Algoritmo
+ https://metacpan.org/source/MAMAWE/Algorithm-CheckDigits-v1.3.0/lib/Algorithm/CheckDigits/M11_004.pm
+ """
s = 0
p = len(digits) + 1
for i in range(0, len(digits)):
@@ -13,7 +19,7 @@
reminder = s % 11
if reminder == 0 or reminder == 1:
- return 1
+ return 0
else:
return 11 - reminder
| {"golden_diff": "diff --git a/faker/providers/ssn/pt_BR/__init__.py b/faker/providers/ssn/pt_BR/__init__.py\n--- a/faker/providers/ssn/pt_BR/__init__.py\n+++ b/faker/providers/ssn/pt_BR/__init__.py\n@@ -5,6 +5,12 @@\n \n \n def checksum(digits):\n+ \"\"\"\n+ Returns the checksum of CPF digits.\n+ References to the algorithm:\n+ https://pt.wikipedia.org/wiki/Cadastro_de_pessoas_f%C3%ADsicas#Algoritmo\n+ https://metacpan.org/source/MAMAWE/Algorithm-CheckDigits-v1.3.0/lib/Algorithm/CheckDigits/M11_004.pm\n+ \"\"\"\n s = 0\n p = len(digits) + 1\n for i in range(0, len(digits)):\n@@ -13,7 +19,7 @@\n \n reminder = s % 11\n if reminder == 0 or reminder == 1:\n- return 1\n+ return 0\n else:\n return 11 - reminder\n", "issue": "Generating invalid cpf (brazillian ssn)\nFaker is generating invalid checksum digits for cpf (brazillian ssn).\r\n\r\n### Steps to reproduce\r\n\r\n1. Create fake instance using localization \"pt_BR\"\r\n1. Call fake.cpf()\r\n\r\n### Expected behavior\r\n\r\nIt should generate a valid CPF.\r\n\r\n### Actual behavior\r\n\r\nIt is generating a CPF with invalid checksum digits, in some cases.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom __future__ import unicode_literals\nfrom .. import Provider as SsnProvider\n\n\ndef checksum(digits):\n s = 0\n p = len(digits) + 1\n for i in range(0, len(digits)):\n s += digits[i] * p\n p -= 1\n\n reminder = s % 11\n if reminder == 0 or reminder == 1:\n return 1\n else:\n return 11 - reminder\n\n\nclass Provider(SsnProvider):\n \"\"\"\n Provider for Brazilian SSN also known in Brazil as CPF.\n There are two methods Provider.ssn and Provider.cpf\n The snn returns a valid number with numbers only\n The cpf return a valid number formatted with brazilian mask. eg nnn.nnn.nnn-nn\n \"\"\"\n\n def ssn(self):\n digits = self.generator.random.sample(range(10), 9)\n\n dv = checksum(digits)\n digits.append(dv)\n digits.append(checksum(digits))\n\n return ''.join(map(str, digits))\n\n def cpf(self):\n c = self.ssn()\n return c[:3] + '.' + c[3:6] + '.' + c[6:9] + '-' + c[9:]\n", "path": "faker/providers/ssn/pt_BR/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom __future__ import unicode_literals\nfrom .. import Provider as SsnProvider\n\n\ndef checksum(digits):\n \"\"\"\n Returns the checksum of CPF digits.\n References to the algorithm:\n https://pt.wikipedia.org/wiki/Cadastro_de_pessoas_f%C3%ADsicas#Algoritmo\n https://metacpan.org/source/MAMAWE/Algorithm-CheckDigits-v1.3.0/lib/Algorithm/CheckDigits/M11_004.pm\n \"\"\"\n s = 0\n p = len(digits) + 1\n for i in range(0, len(digits)):\n s += digits[i] * p\n p -= 1\n\n reminder = s % 11\n if reminder == 0 or reminder == 1:\n return 0\n else:\n return 11 - reminder\n\n\nclass Provider(SsnProvider):\n \"\"\"\n Provider for Brazilian SSN also known in Brazil as CPF.\n There are two methods Provider.ssn and Provider.cpf\n The snn returns a valid number with numbers only\n The cpf return a valid number formatted with brazilian mask. eg nnn.nnn.nnn-nn\n \"\"\"\n\n def ssn(self):\n digits = self.generator.random.sample(range(10), 9)\n\n dv = checksum(digits)\n digits.append(dv)\n digits.append(checksum(digits))\n\n return ''.join(map(str, digits))\n\n def cpf(self):\n c = self.ssn()\n return c[:3] + '.' + c[3:6] + '.' + c[6:9] + '-' + c[9:]\n", "path": "faker/providers/ssn/pt_BR/__init__.py"}]} | 701 | 246 |
gh_patches_debug_4256 | rasdani/github-patches | git_diff | ivy-llc__ivy-17092 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
solve
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/paddle/tensor/linalg.py`
Content:
```
1 # global
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
4 from ivy.functional.frontends.paddle import promote_types_of_paddle_inputs
5 from ivy.functional.frontends.paddle.func_wrapper import (
6 to_ivy_arrays_and_back,
7 )
8
9
10 @with_supported_dtypes(
11 {"2.4.2 and below": ("float32", "float64", "int32", "int64")}, "paddle"
12 )
13 @to_ivy_arrays_and_back
14 def cross(x, y, /, *, axis=9, name=None):
15 x, y = promote_types_of_paddle_inputs(x, y)
16 return ivy.cross(x, y, axis=axis)
17
18
19 # matmul
20 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
21 @to_ivy_arrays_and_back
22 def matmul(x, y, transpose_x=False, transpose_y=False, name=None):
23 x, y = promote_types_of_paddle_inputs(x, y)
24 return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)
25
26
27 # norm
28 @with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
29 @to_ivy_arrays_and_back
30 def norm(x, p="fro", axis=None, keepdim=False, name=None):
31 if axis is None and p is not None:
32 if p == "fro":
33 p = 2
34 ret = ivy.vector_norm(x.flatten(), ord=p, axis=-1)
35 if keepdim:
36 ret = ret.reshape([1] * len(x.shape))
37 if len(ret.shape) == 0:
38 return ivy.array([ret])
39 return ret
40
41 if isinstance(axis, tuple):
42 axis = list(axis)
43 if isinstance(axis, list) and len(axis) == 1:
44 axis = axis[0]
45
46 if isinstance(axis, int):
47 if p == "fro":
48 p = 2
49 if p in [0, 1, 2, ivy.inf, -ivy.inf]:
50 ret = ivy.vector_norm(x, ord=p, axis=axis, keepdims=keepdim)
51 elif isinstance(p, (int, float)):
52 ret = ivy.pow(
53 ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),
54 float(1.0 / p),
55 )
56
57 elif isinstance(axis, list) and len(axis) == 2:
58 if p == 0:
59 raise ValueError
60 elif p == 1:
61 ret = ivy.sum(ivy.abs(x), axis=axis, keepdims=keepdim)
62 elif p == 2 or p == "fro":
63 ret = ivy.matrix_norm(x, ord="fro", axis=axis, keepdims=keepdim)
64 elif p == ivy.inf:
65 ret = ivy.max(ivy.abs(x), axis=axis, keepdims=keepdim)
66 elif p == -ivy.inf:
67 ret = ivy.min(ivy.abs(x), axis=axis, keepdims=keepdim)
68 elif isinstance(p, (int, float)) and p > 0:
69 ret = ivy.pow(
70 ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),
71 float(1.0 / p),
72 )
73 else:
74 raise ValueError
75
76 else:
77 raise ValueError
78
79 if len(ret.shape) == 0:
80 ret = ivy.array(
81 [ret]
82 ) # this is done so as to match shape of output from paddle
83 return ret
84
85
86 # eig
87 @to_ivy_arrays_and_back
88 def eig(x, name=None):
89 return ivy.eig(x)
90
91
92 # eigvals
93 @to_ivy_arrays_and_back
94 def eigvals(x, name=None):
95 return ivy.eigvals(x)
96
97
98 # eigvalsh
99 @to_ivy_arrays_and_back
100 def eigvalsh(x, UPLO="L", name=None):
101 return ivy.eigvalsh(x, UPLO=UPLO)
102
103
104 # eigh
105 @to_ivy_arrays_and_back
106 def eigh(x, UPLO="L", name=None):
107 return ivy.eigh(x, UPLO=UPLO)
108
109
110 # pinv
111 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
112 @to_ivy_arrays_and_back
113 def pinv(x, rcond=1e-15, hermitian=False, name=None):
114 # TODO: Add hermitian functionality
115 return ivy.pinv(x, rtol=rcond)
116
117
118 # cholesky
119 @with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
120 @to_ivy_arrays_and_back
121 def cholesky(x, /, *, upper=False, name=None):
122 return ivy.cholesky(x, upper=upper)
123
124
125 # bmm
126 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
127 @to_ivy_arrays_and_back
128 def bmm(x, y, transpose_x=False, transpose_y=False, name=None):
129 if len(ivy.shape(x)) != 3 or len(ivy.shape(y)) != 3:
130 raise RuntimeError("input must be 3D matrices")
131 x, y = promote_types_of_paddle_inputs(x, y)
132 return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/paddle/tensor/linalg.py b/ivy/functional/frontends/paddle/tensor/linalg.py
--- a/ivy/functional/frontends/paddle/tensor/linalg.py
+++ b/ivy/functional/frontends/paddle/tensor/linalg.py
@@ -115,6 +115,13 @@
return ivy.pinv(x, rtol=rcond)
+# solve
+@with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
+@to_ivy_arrays_and_back
+def solve(x1, x2, name=None):
+ return ivy.solve(x1, x2)
+
+
# cholesky
@with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/linalg.py b/ivy/functional/frontends/paddle/tensor/linalg.py\n--- a/ivy/functional/frontends/paddle/tensor/linalg.py\n+++ b/ivy/functional/frontends/paddle/tensor/linalg.py\n@@ -115,6 +115,13 @@\n return ivy.pinv(x, rtol=rcond)\n \n \n+# solve\n+@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n+@to_ivy_arrays_and_back\n+def solve(x1, x2, name=None):\n+ return ivy.solve(x1, x2)\n+\n+\n # cholesky\n @with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n @to_ivy_arrays_and_back\n", "issue": "solve\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle import promote_types_of_paddle_inputs\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_supported_dtypes(\n {\"2.4.2 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef cross(x, y, /, *, axis=9, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.cross(x, y, axis=axis)\n\n\n# matmul\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef matmul(x, y, transpose_x=False, transpose_y=False, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)\n\n\n# norm\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef norm(x, p=\"fro\", axis=None, keepdim=False, name=None):\n if axis is None and p is not None:\n if p == \"fro\":\n p = 2\n ret = ivy.vector_norm(x.flatten(), ord=p, axis=-1)\n if keepdim:\n ret = ret.reshape([1] * len(x.shape))\n if len(ret.shape) == 0:\n return ivy.array([ret])\n return ret\n\n if isinstance(axis, tuple):\n axis = list(axis)\n if isinstance(axis, list) and len(axis) == 1:\n axis = axis[0]\n\n if isinstance(axis, int):\n if p == \"fro\":\n p = 2\n if p in [0, 1, 2, ivy.inf, -ivy.inf]:\n ret = ivy.vector_norm(x, ord=p, axis=axis, keepdims=keepdim)\n elif isinstance(p, (int, float)):\n ret = ivy.pow(\n ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),\n float(1.0 / p),\n )\n\n elif isinstance(axis, list) and len(axis) == 2:\n if p == 0:\n raise ValueError\n elif p == 1:\n ret = ivy.sum(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif p == 2 or p == \"fro\":\n ret = ivy.matrix_norm(x, ord=\"fro\", axis=axis, keepdims=keepdim)\n elif p == ivy.inf:\n ret = ivy.max(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif p == -ivy.inf:\n ret = ivy.min(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif isinstance(p, (int, float)) and p > 0:\n ret = ivy.pow(\n ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),\n float(1.0 / p),\n )\n else:\n raise ValueError\n\n else:\n raise ValueError\n\n if len(ret.shape) == 0:\n ret = ivy.array(\n [ret]\n ) # this is done so as to match shape of output from paddle\n return ret\n\n\n# eig\n@to_ivy_arrays_and_back\ndef eig(x, name=None):\n return ivy.eig(x)\n\n\n# eigvals\n@to_ivy_arrays_and_back\ndef eigvals(x, name=None):\n return ivy.eigvals(x)\n\n\n# eigvalsh\n@to_ivy_arrays_and_back\ndef eigvalsh(x, UPLO=\"L\", name=None):\n return ivy.eigvalsh(x, UPLO=UPLO)\n\n\n# eigh\n@to_ivy_arrays_and_back\ndef eigh(x, UPLO=\"L\", name=None):\n return ivy.eigh(x, UPLO=UPLO)\n\n\n# 
pinv\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef pinv(x, rcond=1e-15, hermitian=False, name=None):\n # TODO: Add hermitian functionality\n return ivy.pinv(x, rtol=rcond)\n\n\n# cholesky\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cholesky(x, /, *, upper=False, name=None):\n return ivy.cholesky(x, upper=upper)\n\n\n# bmm\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef bmm(x, y, transpose_x=False, transpose_y=False, name=None):\n if len(ivy.shape(x)) != 3 or len(ivy.shape(y)) != 3:\n raise RuntimeError(\"input must be 3D matrices\")\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)\n", "path": "ivy/functional/frontends/paddle/tensor/linalg.py"}], "after_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle import promote_types_of_paddle_inputs\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_supported_dtypes(\n {\"2.4.2 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef cross(x, y, /, *, axis=9, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.cross(x, y, axis=axis)\n\n\n# matmul\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef matmul(x, y, transpose_x=False, transpose_y=False, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)\n\n\n# norm\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef norm(x, p=\"fro\", axis=None, keepdim=False, name=None):\n if axis is None and p is not None:\n if p == \"fro\":\n p = 2\n ret = ivy.vector_norm(x.flatten(), ord=p, axis=-1)\n if keepdim:\n ret = ret.reshape([1] * len(x.shape))\n if len(ret.shape) == 0:\n return ivy.array([ret])\n return ret\n\n if isinstance(axis, tuple):\n axis = list(axis)\n if isinstance(axis, list) and len(axis) == 1:\n axis = axis[0]\n\n if isinstance(axis, int):\n if p == \"fro\":\n p = 2\n if p in [0, 1, 2, ivy.inf, -ivy.inf]:\n ret = ivy.vector_norm(x, ord=p, axis=axis, keepdims=keepdim)\n elif isinstance(p, (int, float)):\n ret = ivy.pow(\n ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),\n float(1.0 / p),\n )\n\n elif isinstance(axis, list) and len(axis) == 2:\n if p == 0:\n raise ValueError\n elif p == 1:\n ret = ivy.sum(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif p == 2 or p == \"fro\":\n ret = ivy.matrix_norm(x, ord=\"fro\", axis=axis, keepdims=keepdim)\n elif p == ivy.inf:\n ret = ivy.max(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif p == -ivy.inf:\n ret = ivy.min(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif isinstance(p, (int, float)) and p > 0:\n ret = ivy.pow(\n ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),\n float(1.0 / p),\n )\n else:\n raise ValueError\n\n else:\n raise ValueError\n\n if len(ret.shape) == 0:\n ret = ivy.array(\n [ret]\n ) # this is done so as to match shape of output from paddle\n return ret\n\n\n# eig\n@to_ivy_arrays_and_back\ndef eig(x, name=None):\n return ivy.eig(x)\n\n\n# eigvals\n@to_ivy_arrays_and_back\ndef eigvals(x, 
name=None):\n return ivy.eigvals(x)\n\n\n# eigvalsh\n@to_ivy_arrays_and_back\ndef eigvalsh(x, UPLO=\"L\", name=None):\n return ivy.eigvalsh(x, UPLO=UPLO)\n\n\n# eigh\n@to_ivy_arrays_and_back\ndef eigh(x, UPLO=\"L\", name=None):\n return ivy.eigh(x, UPLO=UPLO)\n\n\n# pinv\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef pinv(x, rcond=1e-15, hermitian=False, name=None):\n # TODO: Add hermitian functionality\n return ivy.pinv(x, rtol=rcond)\n\n\n# solve\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef solve(x1, x2, name=None):\n return ivy.solve(x1, x2)\n\n\n# cholesky\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cholesky(x, /, *, upper=False, name=None):\n return ivy.cholesky(x, upper=upper)\n\n\n# bmm\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef bmm(x, y, transpose_x=False, transpose_y=False, name=None):\n if len(ivy.shape(x)) != 3 or len(ivy.shape(y)) != 3:\n raise RuntimeError(\"input must be 3D matrices\")\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)\n", "path": "ivy/functional/frontends/paddle/tensor/linalg.py"}]} | 1,818 | 205 |
gh_patches_debug_63106 | rasdani/github-patches | git_diff | kornia__kornia-1263 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] save pointcloud not updates num_points when inf
## 🐛 Bug
The function `K.utils.save_pointcloud_ply` doesn't update the final number of points to be serialized when one of the values contain an infinite value.
How to fix:
update this line https://github.com/kornia/kornia/blob/master/kornia/utils/pointcloud_io.py#L34
```python
if not bool(torch.isfinite(xyz).any()):
continue
```
by
```python
if not bool(torch.isfinite(xyz).any()):
num_points -= 1
continue
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kornia/utils/pointcloud_io.py`
Content:
```
1 import os
2 from typing import Optional
3
4 import torch
5
6
7 def save_pointcloud_ply(filename: str, pointcloud: torch.Tensor) -> None:
8 r"""Utility function to save to disk a pointcloud in PLY format.
9
10 Args:
11 filename: the path to save the pointcloud.
12 pointcloud: tensor containing the pointcloud to save.
13 The tensor must be in the shape of :math:`(*, 3)` where the last
14 component is assumed to be a 3d point coordinate :math:`(X, Y, Z)`.
15 """
16 if not isinstance(filename, str) and filename[-3:] == '.ply':
17 raise TypeError("Input filename must be a string in with the .ply " "extension. Got {}".format(filename))
18
19 if not torch.is_tensor(pointcloud):
20 raise TypeError(f"Input pointcloud type is not a torch.Tensor. Got {type(pointcloud)}")
21
22 if not len(pointcloud.shape) == 3 and pointcloud.shape[-1] == 3:
23 raise TypeError("Input pointcloud must be in the following shape " "HxWx3. Got {}.".format(pointcloud.shape))
24
25 # flatten the input pointcloud in a vector to iterate points
26 xyz_vec: torch.Tensor = pointcloud.reshape(-1, 3)
27
28 with open(filename, 'w') as f:
29 data_str: str = ''
30 num_points: int = xyz_vec.shape[0]
31 for idx in range(num_points):
32 xyz = xyz_vec[idx]
33 if not bool(torch.isfinite(xyz).any()):
34 continue
35 x: float = xyz[0].item()
36 y: float = xyz[1].item()
37 z: float = xyz[2].item()
38 data_str += f'{x} {y} {z}\n'
39
40 f.write("ply\n")
41 f.write("format ascii 1.0\n")
42 f.write("comment arraiy generated\n")
43 f.write("element vertex %d\n" % num_points)
44 f.write("property double x\n")
45 f.write("property double y\n")
46 f.write("property double z\n")
47 f.write("end_header\n")
48 f.write(data_str)
49
50
51 def load_pointcloud_ply(filename: str, header_size: int = 8) -> torch.Tensor:
52 r"""Utility function to load from disk a pointcloud in PLY format.
53
54 Args:
55 filename: the path to the pointcloud.
56 header_size: the size of the ply file header that will
57 be skipped during loading.
58
59 Return:
60 tensor containing the loaded point with shape :math:`(*, 3)` where
61 :math:`*` represents the number of points.
62 """
63 if not isinstance(filename, str) and filename[-3:] == '.ply':
64 raise TypeError("Input filename must be a string in with the .ply " "extension. Got {}".format(filename))
65 if not os.path.isfile(filename):
66 raise ValueError("Input filename is not an existing file.")
67 if not (isinstance(header_size, int) and header_size > 0):
68 raise TypeError(f"Input header_size must be a positive integer. Got {header_size}.")
69 # open the file and populate tensor
70 with open(filename) as f:
71 points = []
72
73 # skip header
74 lines = f.readlines()[header_size:]
75
76 # iterate over the points
77 for line in lines:
78 x_str, y_str, z_str = line.split()
79 points.append((torch.tensor(float(x_str)), torch.tensor(float(y_str)), torch.tensor(float(z_str))))
80
81 # create tensor from list
82 pointcloud: torch.Tensor = torch.tensor(points)
83 return pointcloud
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kornia/utils/pointcloud_io.py b/kornia/utils/pointcloud_io.py
--- a/kornia/utils/pointcloud_io.py
+++ b/kornia/utils/pointcloud_io.py
@@ -31,6 +31,7 @@
for idx in range(num_points):
xyz = xyz_vec[idx]
if not bool(torch.isfinite(xyz).any()):
+ num_points -= 1
continue
x: float = xyz[0].item()
y: float = xyz[1].item()
| {"golden_diff": "diff --git a/kornia/utils/pointcloud_io.py b/kornia/utils/pointcloud_io.py\n--- a/kornia/utils/pointcloud_io.py\n+++ b/kornia/utils/pointcloud_io.py\n@@ -31,6 +31,7 @@\n for idx in range(num_points):\n xyz = xyz_vec[idx]\n if not bool(torch.isfinite(xyz).any()):\n+ num_points -= 1\n continue\n x: float = xyz[0].item()\n y: float = xyz[1].item()\n", "issue": "[Bug] save pointcloud not updates num_points when inf\n## \ud83d\udc1b Bug\r\n\r\nThe function `K.utils.save_pointcloud_ply` doesn't update the final number of points to be serialized when one of the values contain an infinite value.\r\n\r\nHow to fix:\r\n\r\nupdate this line https://github.com/kornia/kornia/blob/master/kornia/utils/pointcloud_io.py#L34\r\n\r\n```python\r\n if not bool(torch.isfinite(xyz).any()):\r\n continue\r\n```\r\nby\r\n\r\n```python\r\n if not bool(torch.isfinite(xyz).any()):\r\n num_points -= 1\r\n continue\r\n```\n", "before_files": [{"content": "import os\nfrom typing import Optional\n\nimport torch\n\n\ndef save_pointcloud_ply(filename: str, pointcloud: torch.Tensor) -> None:\n r\"\"\"Utility function to save to disk a pointcloud in PLY format.\n\n Args:\n filename: the path to save the pointcloud.\n pointcloud: tensor containing the pointcloud to save.\n The tensor must be in the shape of :math:`(*, 3)` where the last\n component is assumed to be a 3d point coordinate :math:`(X, Y, Z)`.\n \"\"\"\n if not isinstance(filename, str) and filename[-3:] == '.ply':\n raise TypeError(\"Input filename must be a string in with the .ply \" \"extension. Got {}\".format(filename))\n\n if not torch.is_tensor(pointcloud):\n raise TypeError(f\"Input pointcloud type is not a torch.Tensor. Got {type(pointcloud)}\")\n\n if not len(pointcloud.shape) == 3 and pointcloud.shape[-1] == 3:\n raise TypeError(\"Input pointcloud must be in the following shape \" \"HxWx3. Got {}.\".format(pointcloud.shape))\n\n # flatten the input pointcloud in a vector to iterate points\n xyz_vec: torch.Tensor = pointcloud.reshape(-1, 3)\n\n with open(filename, 'w') as f:\n data_str: str = ''\n num_points: int = xyz_vec.shape[0]\n for idx in range(num_points):\n xyz = xyz_vec[idx]\n if not bool(torch.isfinite(xyz).any()):\n continue\n x: float = xyz[0].item()\n y: float = xyz[1].item()\n z: float = xyz[2].item()\n data_str += f'{x} {y} {z}\\n'\n\n f.write(\"ply\\n\")\n f.write(\"format ascii 1.0\\n\")\n f.write(\"comment arraiy generated\\n\")\n f.write(\"element vertex %d\\n\" % num_points)\n f.write(\"property double x\\n\")\n f.write(\"property double y\\n\")\n f.write(\"property double z\\n\")\n f.write(\"end_header\\n\")\n f.write(data_str)\n\n\ndef load_pointcloud_ply(filename: str, header_size: int = 8) -> torch.Tensor:\n r\"\"\"Utility function to load from disk a pointcloud in PLY format.\n\n Args:\n filename: the path to the pointcloud.\n header_size: the size of the ply file header that will\n be skipped during loading.\n\n Return:\n tensor containing the loaded point with shape :math:`(*, 3)` where\n :math:`*` represents the number of points.\n \"\"\"\n if not isinstance(filename, str) and filename[-3:] == '.ply':\n raise TypeError(\"Input filename must be a string in with the .ply \" \"extension. Got {}\".format(filename))\n if not os.path.isfile(filename):\n raise ValueError(\"Input filename is not an existing file.\")\n if not (isinstance(header_size, int) and header_size > 0):\n raise TypeError(f\"Input header_size must be a positive integer. 
Got {header_size}.\")\n # open the file and populate tensor\n with open(filename) as f:\n points = []\n\n # skip header\n lines = f.readlines()[header_size:]\n\n # iterate over the points\n for line in lines:\n x_str, y_str, z_str = line.split()\n points.append((torch.tensor(float(x_str)), torch.tensor(float(y_str)), torch.tensor(float(z_str))))\n\n # create tensor from list\n pointcloud: torch.Tensor = torch.tensor(points)\n return pointcloud\n", "path": "kornia/utils/pointcloud_io.py"}], "after_files": [{"content": "import os\nfrom typing import Optional\n\nimport torch\n\n\ndef save_pointcloud_ply(filename: str, pointcloud: torch.Tensor) -> None:\n r\"\"\"Utility function to save to disk a pointcloud in PLY format.\n\n Args:\n filename: the path to save the pointcloud.\n pointcloud: tensor containing the pointcloud to save.\n The tensor must be in the shape of :math:`(*, 3)` where the last\n component is assumed to be a 3d point coordinate :math:`(X, Y, Z)`.\n \"\"\"\n if not isinstance(filename, str) and filename[-3:] == '.ply':\n raise TypeError(\"Input filename must be a string in with the .ply \" \"extension. Got {}\".format(filename))\n\n if not torch.is_tensor(pointcloud):\n raise TypeError(f\"Input pointcloud type is not a torch.Tensor. Got {type(pointcloud)}\")\n\n if not len(pointcloud.shape) == 3 and pointcloud.shape[-1] == 3:\n raise TypeError(\"Input pointcloud must be in the following shape \" \"HxWx3. Got {}.\".format(pointcloud.shape))\n\n # flatten the input pointcloud in a vector to iterate points\n xyz_vec: torch.Tensor = pointcloud.reshape(-1, 3)\n\n with open(filename, 'w') as f:\n data_str: str = ''\n num_points: int = xyz_vec.shape[0]\n for idx in range(num_points):\n xyz = xyz_vec[idx]\n if not bool(torch.isfinite(xyz).any()):\n num_points -= 1\n continue\n x: float = xyz[0].item()\n y: float = xyz[1].item()\n z: float = xyz[2].item()\n data_str += f'{x} {y} {z}\\n'\n\n f.write(\"ply\\n\")\n f.write(\"format ascii 1.0\\n\")\n f.write(\"comment arraiy generated\\n\")\n f.write(\"element vertex %d\\n\" % num_points)\n f.write(\"property double x\\n\")\n f.write(\"property double y\\n\")\n f.write(\"property double z\\n\")\n f.write(\"end_header\\n\")\n f.write(data_str)\n\n\ndef load_pointcloud_ply(filename: str, header_size: int = 8) -> torch.Tensor:\n r\"\"\"Utility function to load from disk a pointcloud in PLY format.\n\n Args:\n filename: the path to the pointcloud.\n header_size: the size of the ply file header that will\n be skipped during loading.\n\n Return:\n tensor containing the loaded point with shape :math:`(*, 3)` where\n :math:`*` represents the number of points.\n \"\"\"\n if not isinstance(filename, str) and filename[-3:] == '.ply':\n raise TypeError(\"Input filename must be a string in with the .ply \" \"extension. Got {}\".format(filename))\n if not os.path.isfile(filename):\n raise ValueError(\"Input filename is not an existing file.\")\n if not (isinstance(header_size, int) and header_size > 0):\n raise TypeError(f\"Input header_size must be a positive integer. 
Got {header_size}.\")\n # open the file and populate tensor\n with open(filename) as f:\n points = []\n\n # skip header\n lines = f.readlines()[header_size:]\n\n # iterate over the points\n for line in lines:\n x_str, y_str, z_str = line.split()\n points.append((torch.tensor(float(x_str)), torch.tensor(float(y_str)), torch.tensor(float(z_str))))\n\n # create tensor from list\n pointcloud: torch.Tensor = torch.tensor(points)\n return pointcloud\n", "path": "kornia/utils/pointcloud_io.py"}]} | 1,354 | 120 |
gh_patches_debug_42864 | rasdani/github-patches | git_diff | sunpy__sunpy-4129 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Maintain coherence between keycomments and the metadict
See #2748
This is probably best implemented by adding the functionality to our `MetaDict` object or something, so that we don't have to do it manually everywhere.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sunpy/util/metadata.py`
Content:
```
1 """
2 This module provides a generalized dictionary class that deals with header
3 parsing and normalization.
4 """
5 from collections import OrderedDict
6
7 __all__ = ['MetaDict']
8
9
10 class MetaDict(OrderedDict):
11 """
12 A class to hold metadata associated with a `sunpy.map.Map
13 <sunpy.map.map_factory.MapFactory.__call__>` derivative.
14
15 This class handles everything in lower case. This allows case
16 insensitive indexing.
17 """
18
19 def __init__(self, *args):
20 """
21 Creates a new MapHeader instance.
22 """
23 # Store all keys as upper-case to allow for case-insensitive indexing
24 # OrderedDict can be instantiated from a list of lists or a tuple of tuples
25 tags = dict()
26 if args:
27 args = list(args)
28 adict = args[0]
29 if isinstance(adict, list) or isinstance(adict, tuple):
30 tags = OrderedDict((k.upper(), v) for k, v in adict)
31 elif isinstance(adict, dict):
32 tags = OrderedDict((k.upper(), v) for k, v in adict.items())
33 else:
34 raise TypeError("Can not create a MetaDict from this type input")
35 args[0] = tags
36
37 super().__init__(*args)
38
39 def __contains__(self, key):
40 """
41 Override ``__contains__``.
42 """
43 return OrderedDict.__contains__(self, key.lower())
44
45 def __getitem__(self, key):
46 """
47 Override ``[]`` indexing.
48 """
49 return OrderedDict.__getitem__(self, key.lower())
50
51 def __setitem__(self, key, value):
52 """
53 Override ``[]`` indexing.
54 """
55 return OrderedDict.__setitem__(self, key.lower(), value)
56
57 def get(self, key, default=None):
58 """
59 Override ``.get()`` indexing.
60 """
61 return OrderedDict.get(self, key.lower(), default)
62
63 def has_key(self, key):
64 """
65 Override ``.has_key()`` to perform case-insensitively.
66 """
67 return key.lower() in self
68
69 def pop(self, key, default=None):
70 """
71 Override ``.pop()`` to perform case-insensitively.
72 """
73 return OrderedDict.pop(self, key.lower(), default)
74
75 def update(self, d2):
76 """
77 Override ``.update()`` to perform case-insensitively.
78 """
79 return OrderedDict.update(self, OrderedDict((k.lower(), v) for k, v in d2.items()))
80
81 def setdefault(self, key, default=None):
82 """
83 Override ``.setdefault()`` to perform case-insensitively.
84 """
85 return OrderedDict.setdefault(self, key.lower(), default)
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sunpy/util/metadata.py b/sunpy/util/metadata.py
--- a/sunpy/util/metadata.py
+++ b/sunpy/util/metadata.py
@@ -1,6 +1,6 @@
"""
This module provides a generalized dictionary class that deals with header
-parsing and normalization.
+parsing, normalization, and maintaining coherence between keys and keycomments.
"""
from collections import OrderedDict
@@ -14,28 +14,67 @@
This class handles everything in lower case. This allows case
insensitive indexing.
+
+ If the key 'keycomments' exists, its value must be a dictionary mapping
+ keys in the `MetaDict` to their comments. The casing of keys in the
+ keycomments dictionary is not significant. If a key is removed from the
+ `MetaDict`, it will also be removed from the keycomments dictionary.
+ Additionally, any extraneous keycomments will be removed when the
+ `MetaDict` is instantiated.
"""
def __init__(self, *args):
"""
- Creates a new MapHeader instance.
+ Creates a new MetaDict instance.
"""
- # Store all keys as upper-case to allow for case-insensitive indexing
+ # Store all keys as lower-case to allow for case-insensitive indexing
# OrderedDict can be instantiated from a list of lists or a tuple of tuples
tags = dict()
if args:
args = list(args)
adict = args[0]
if isinstance(adict, list) or isinstance(adict, tuple):
- tags = OrderedDict((k.upper(), v) for k, v in adict)
+ tags = OrderedDict((k.lower(), v) for k, v in adict)
elif isinstance(adict, dict):
- tags = OrderedDict((k.upper(), v) for k, v in adict.items())
+ tags = OrderedDict((k.lower(), v) for k, v in adict.items())
else:
raise TypeError("Can not create a MetaDict from this type input")
args[0] = tags
super().__init__(*args)
+ # Use `copy=True` to avoid mutating the caller's keycomments
+ # dictionary (if they provided one).
+ self._prune_keycomments(copy=True)
+
+ def _prune_keycomments(self, copy=False):
+ """
+ Remove keycomments for keys that are not contained in the MetaDict.
+
+ Parameters
+ ----------
+ copy : `bool`, optional
+ Make a copy of the current keycomments dict before removing keys.
+ """
+ if 'keycomments' not in self:
+ return
+
+ keycomments = self['keycomments']
+
+ if not isinstance(keycomments, dict):
+ raise TypeError(
+ "'keycomments' key must have a value of type `dict`. Found "
+ "the following type: %r" % type(keycomments))
+
+ if copy:
+ keycomments = keycomments.copy()
+
+ for key in list(keycomments.keys()):
+ if key not in self:
+ del keycomments[key]
+
+ self['keycomments'] = keycomments
+
def __contains__(self, key):
"""
Override ``__contains__``.
@@ -54,6 +93,15 @@
"""
return OrderedDict.__setitem__(self, key.lower(), value)
+ # Note: `OrderedDict.popitem()` does not need to be overridden to prune
+ # keycomments because it calls `__delitem__` internally.
+ def __delitem__(self, key):
+ """
+ Override ``del dict[key]`` key deletion.
+ """
+ OrderedDict.__delitem__(self, key.lower())
+ self._prune_keycomments()
+
def get(self, key, default=None):
"""
Override ``.get()`` indexing.
@@ -70,7 +118,11 @@
"""
Override ``.pop()`` to perform case-insensitively.
"""
- return OrderedDict.pop(self, key.lower(), default)
+ has_key = key in self
+ result = OrderedDict.pop(self, key.lower(), default)
+ if has_key:
+ self._prune_keycomments()
+ return result
def update(self, d2):
"""
| {"golden_diff": "diff --git a/sunpy/util/metadata.py b/sunpy/util/metadata.py\n--- a/sunpy/util/metadata.py\n+++ b/sunpy/util/metadata.py\n@@ -1,6 +1,6 @@\n \"\"\"\n This module provides a generalized dictionary class that deals with header\n-parsing and normalization.\n+parsing, normalization, and maintaining coherence between keys and keycomments.\n \"\"\"\n from collections import OrderedDict\n \n@@ -14,28 +14,67 @@\n \n This class handles everything in lower case. This allows case\n insensitive indexing.\n+\n+ If the key 'keycomments' exists, its value must be a dictionary mapping\n+ keys in the `MetaDict` to their comments. The casing of keys in the\n+ keycomments dictionary is not significant. If a key is removed from the\n+ `MetaDict`, it will also be removed from the keycomments dictionary.\n+ Additionally, any extraneous keycomments will be removed when the\n+ `MetaDict` is instantiated.\n \"\"\"\n \n def __init__(self, *args):\n \"\"\"\n- Creates a new MapHeader instance.\n+ Creates a new MetaDict instance.\n \"\"\"\n- # Store all keys as upper-case to allow for case-insensitive indexing\n+ # Store all keys as lower-case to allow for case-insensitive indexing\n # OrderedDict can be instantiated from a list of lists or a tuple of tuples\n tags = dict()\n if args:\n args = list(args)\n adict = args[0]\n if isinstance(adict, list) or isinstance(adict, tuple):\n- tags = OrderedDict((k.upper(), v) for k, v in adict)\n+ tags = OrderedDict((k.lower(), v) for k, v in adict)\n elif isinstance(adict, dict):\n- tags = OrderedDict((k.upper(), v) for k, v in adict.items())\n+ tags = OrderedDict((k.lower(), v) for k, v in adict.items())\n else:\n raise TypeError(\"Can not create a MetaDict from this type input\")\n args[0] = tags\n \n super().__init__(*args)\n \n+ # Use `copy=True` to avoid mutating the caller's keycomments\n+ # dictionary (if they provided one).\n+ self._prune_keycomments(copy=True)\n+\n+ def _prune_keycomments(self, copy=False):\n+ \"\"\"\n+ Remove keycomments for keys that are not contained in the MetaDict.\n+\n+ Parameters\n+ ----------\n+ copy : `bool`, optional\n+ Make a copy of the current keycomments dict before removing keys.\n+ \"\"\"\n+ if 'keycomments' not in self:\n+ return\n+\n+ keycomments = self['keycomments']\n+\n+ if not isinstance(keycomments, dict):\n+ raise TypeError(\n+ \"'keycomments' key must have a value of type `dict`. 
Found \"\n+ \"the following type: %r\" % type(keycomments))\n+\n+ if copy:\n+ keycomments = keycomments.copy()\n+\n+ for key in list(keycomments.keys()):\n+ if key not in self:\n+ del keycomments[key]\n+\n+ self['keycomments'] = keycomments\n+\n def __contains__(self, key):\n \"\"\"\n Override ``__contains__``.\n@@ -54,6 +93,15 @@\n \"\"\"\n return OrderedDict.__setitem__(self, key.lower(), value)\n \n+ # Note: `OrderedDict.popitem()` does not need to be overridden to prune\n+ # keycomments because it calls `__delitem__` internally.\n+ def __delitem__(self, key):\n+ \"\"\"\n+ Override ``del dict[key]`` key deletion.\n+ \"\"\"\n+ OrderedDict.__delitem__(self, key.lower())\n+ self._prune_keycomments()\n+\n def get(self, key, default=None):\n \"\"\"\n Override ``.get()`` indexing.\n@@ -70,7 +118,11 @@\n \"\"\"\n Override ``.pop()`` to perform case-insensitively.\n \"\"\"\n- return OrderedDict.pop(self, key.lower(), default)\n+ has_key = key in self\n+ result = OrderedDict.pop(self, key.lower(), default)\n+ if has_key:\n+ self._prune_keycomments()\n+ return result\n \n def update(self, d2):\n \"\"\"\n", "issue": "Maintain coherence between keycomments and the metadict\nSee #2748 \r\n\r\nThis is probably best implemented by adding the functionality to our `MetaDict` object or something, so that we don't have to do it manually everywhere.\n", "before_files": [{"content": "\"\"\"\nThis module provides a generalized dictionary class that deals with header\nparsing and normalization.\n\"\"\"\nfrom collections import OrderedDict\n\n__all__ = ['MetaDict']\n\n\nclass MetaDict(OrderedDict):\n \"\"\"\n A class to hold metadata associated with a `sunpy.map.Map\n <sunpy.map.map_factory.MapFactory.__call__>` derivative.\n\n This class handles everything in lower case. 
This allows case\n insensitive indexing.\n \"\"\"\n\n def __init__(self, *args):\n \"\"\"\n Creates a new MapHeader instance.\n \"\"\"\n # Store all keys as upper-case to allow for case-insensitive indexing\n # OrderedDict can be instantiated from a list of lists or a tuple of tuples\n tags = dict()\n if args:\n args = list(args)\n adict = args[0]\n if isinstance(adict, list) or isinstance(adict, tuple):\n tags = OrderedDict((k.upper(), v) for k, v in adict)\n elif isinstance(adict, dict):\n tags = OrderedDict((k.upper(), v) for k, v in adict.items())\n else:\n raise TypeError(\"Can not create a MetaDict from this type input\")\n args[0] = tags\n\n super().__init__(*args)\n\n def __contains__(self, key):\n \"\"\"\n Override ``__contains__``.\n \"\"\"\n return OrderedDict.__contains__(self, key.lower())\n\n def __getitem__(self, key):\n \"\"\"\n Override ``[]`` indexing.\n \"\"\"\n return OrderedDict.__getitem__(self, key.lower())\n\n def __setitem__(self, key, value):\n \"\"\"\n Override ``[]`` indexing.\n \"\"\"\n return OrderedDict.__setitem__(self, key.lower(), value)\n\n def get(self, key, default=None):\n \"\"\"\n Override ``.get()`` indexing.\n \"\"\"\n return OrderedDict.get(self, key.lower(), default)\n\n def has_key(self, key):\n \"\"\"\n Override ``.has_key()`` to perform case-insensitively.\n \"\"\"\n return key.lower() in self\n\n def pop(self, key, default=None):\n \"\"\"\n Override ``.pop()`` to perform case-insensitively.\n \"\"\"\n return OrderedDict.pop(self, key.lower(), default)\n\n def update(self, d2):\n \"\"\"\n Override ``.update()`` to perform case-insensitively.\n \"\"\"\n return OrderedDict.update(self, OrderedDict((k.lower(), v) for k, v in d2.items()))\n\n def setdefault(self, key, default=None):\n \"\"\"\n Override ``.setdefault()`` to perform case-insensitively.\n \"\"\"\n return OrderedDict.setdefault(self, key.lower(), default)\n", "path": "sunpy/util/metadata.py"}], "after_files": [{"content": "\"\"\"\nThis module provides a generalized dictionary class that deals with header\nparsing, normalization, and maintaining coherence between keys and keycomments.\n\"\"\"\nfrom collections import OrderedDict\n\n__all__ = ['MetaDict']\n\n\nclass MetaDict(OrderedDict):\n \"\"\"\n A class to hold metadata associated with a `sunpy.map.Map\n <sunpy.map.map_factory.MapFactory.__call__>` derivative.\n\n This class handles everything in lower case. This allows case\n insensitive indexing.\n\n If the key 'keycomments' exists, its value must be a dictionary mapping\n keys in the `MetaDict` to their comments. The casing of keys in the\n keycomments dictionary is not significant. 
If a key is removed from the\n `MetaDict`, it will also be removed from the keycomments dictionary.\n Additionally, any extraneous keycomments will be removed when the\n `MetaDict` is instantiated.\n \"\"\"\n\n def __init__(self, *args):\n \"\"\"\n Creates a new MetaDict instance.\n \"\"\"\n # Store all keys as lower-case to allow for case-insensitive indexing\n # OrderedDict can be instantiated from a list of lists or a tuple of tuples\n tags = dict()\n if args:\n args = list(args)\n adict = args[0]\n if isinstance(adict, list) or isinstance(adict, tuple):\n tags = OrderedDict((k.lower(), v) for k, v in adict)\n elif isinstance(adict, dict):\n tags = OrderedDict((k.lower(), v) for k, v in adict.items())\n else:\n raise TypeError(\"Can not create a MetaDict from this type input\")\n args[0] = tags\n\n super().__init__(*args)\n\n # Use `copy=True` to avoid mutating the caller's keycomments\n # dictionary (if they provided one).\n self._prune_keycomments(copy=True)\n\n def _prune_keycomments(self, copy=False):\n \"\"\"\n Remove keycomments for keys that are not contained in the MetaDict.\n\n Parameters\n ----------\n copy : `bool`, optional\n Make a copy of the current keycomments dict before removing keys.\n \"\"\"\n if 'keycomments' not in self:\n return\n\n keycomments = self['keycomments']\n\n if not isinstance(keycomments, dict):\n raise TypeError(\n \"'keycomments' key must have a value of type `dict`. Found \"\n \"the following type: %r\" % type(keycomments))\n\n if copy:\n keycomments = keycomments.copy()\n\n for key in list(keycomments.keys()):\n if key not in self:\n del keycomments[key]\n\n self['keycomments'] = keycomments\n\n def __contains__(self, key):\n \"\"\"\n Override ``__contains__``.\n \"\"\"\n return OrderedDict.__contains__(self, key.lower())\n\n def __getitem__(self, key):\n \"\"\"\n Override ``[]`` indexing.\n \"\"\"\n return OrderedDict.__getitem__(self, key.lower())\n\n def __setitem__(self, key, value):\n \"\"\"\n Override ``[]`` indexing.\n \"\"\"\n return OrderedDict.__setitem__(self, key.lower(), value)\n\n # Note: `OrderedDict.popitem()` does not need to be overridden to prune\n # keycomments because it calls `__delitem__` internally.\n def __delitem__(self, key):\n \"\"\"\n Override ``del dict[key]`` key deletion.\n \"\"\"\n OrderedDict.__delitem__(self, key.lower())\n self._prune_keycomments()\n\n def get(self, key, default=None):\n \"\"\"\n Override ``.get()`` indexing.\n \"\"\"\n return OrderedDict.get(self, key.lower(), default)\n\n def has_key(self, key):\n \"\"\"\n Override ``.has_key()`` to perform case-insensitively.\n \"\"\"\n return key.lower() in self\n\n def pop(self, key, default=None):\n \"\"\"\n Override ``.pop()`` to perform case-insensitively.\n \"\"\"\n has_key = key in self\n result = OrderedDict.pop(self, key.lower(), default)\n if has_key:\n self._prune_keycomments()\n return result\n\n def update(self, d2):\n \"\"\"\n Override ``.update()`` to perform case-insensitively.\n \"\"\"\n return OrderedDict.update(self, OrderedDict((k.lower(), v) for k, v in d2.items()))\n\n def setdefault(self, key, default=None):\n \"\"\"\n Override ``.setdefault()`` to perform case-insensitively.\n \"\"\"\n return OrderedDict.setdefault(self, key.lower(), default)\n", "path": "sunpy/util/metadata.py"}]} | 1,048 | 951 |
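Aside: the coherence rule described in the sunpy issue can be sketched with a toy mapping; a plain `dict` subclass stands in for `MetaDict`, and only the pruning idea (drop a key's comment when the key is dropped) is taken from the diff above.

```python
class PrunedMeta(dict):
    """Toy stand-in for MetaDict: keeps 'keycomments' in step with the keys."""

    def _prune(self):
        comments = self.get("keycomments", {})
        self["keycomments"] = {k: c for k, c in comments.items() if k in self}

    def __delitem__(self, key):
        super().__delitem__(key)
        self._prune()


meta = PrunedMeta(crpix1=10, keycomments={"crpix1": "reference pixel"})
del meta["crpix1"]
assert meta["keycomments"] == {}  # the orphaned comment is pruned automatically
```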
gh_patches_debug_16288 | rasdani/github-patches | git_diff | pytorch__vision-7702 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
to_grayscale gives non-actionable deprecation warning
_Originally reported in the [user forum](https://discuss.pytorch.org/t/cannot-find-convert-color-space/182591) by `@function2`._
> When I use to_grayscale, there’s a deprecation warning:
> ```
> UserWarning: The function `to_grayscale(...)` is deprecated in will be removed in a future release. Instead, please use `convert_color_space(..., color_space=datapoints.ColorSpace.GRAY)`.
> ```
> However, I can’t find this function in the current code base
---
Note that this only applies to `torchvision.transforms.v2.function`
https://github.com/pytorch/vision/blob/52eb5039bed1a23eee14014ff4cd6fd9cc9b2b08/torchvision/transforms/v2/functional/_deprecated.py#L12-L22
since the v1 version, i.e. `torchvision.transforms.functional` does not emit the warning
https://github.com/pytorch/vision/blob/52eb5039bed1a23eee14014ff4cd6fd9cc9b2b08/torchvision/transforms/functional.py#L1249-L1253
Fixing the v2 warning was forgotten in #7120.
cc @vfdev-5
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchvision/transforms/v2/functional/_deprecated.py`
Content:
```
1 import warnings
2 from typing import Any, List, Union
3
4 import PIL.Image
5 import torch
6
7 from torchvision import datapoints
8 from torchvision.transforms import functional as _F
9
10
11 @torch.jit.unused
12 def to_grayscale(inpt: PIL.Image.Image, num_output_channels: int = 1) -> PIL.Image.Image:
13 call = ", num_output_channels=3" if num_output_channels == 3 else ""
14 replacement = "convert_color_space(..., color_space=datapoints.ColorSpace.GRAY)"
15 if num_output_channels == 3:
16 replacement = f"convert_color_space({replacement}, color_space=datapoints.ColorSpace.RGB)"
17 warnings.warn(
18 f"The function `to_grayscale(...{call})` is deprecated in will be removed in a future release. "
19 f"Instead, please use `{replacement}`.",
20 )
21
22 return _F.to_grayscale(inpt, num_output_channels=num_output_channels)
23
24
25 @torch.jit.unused
26 def to_tensor(inpt: Any) -> torch.Tensor:
27 warnings.warn(
28 "The function `to_tensor(...)` is deprecated and will be removed in a future release. "
29 "Instead, please use `to_image_tensor(...)` followed by `convert_image_dtype(...)`."
30 )
31 return _F.to_tensor(inpt)
32
33
34 def get_image_size(inpt: Union[datapoints._ImageTypeJIT, datapoints._VideoTypeJIT]) -> List[int]:
35 warnings.warn(
36 "The function `get_image_size(...)` is deprecated and will be removed in a future release. "
37 "Instead, please use `get_spatial_size(...)` which returns `[h, w]` instead of `[w, h]`."
38 )
39 return _F.get_image_size(inpt)
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchvision/transforms/v2/functional/_deprecated.py b/torchvision/transforms/v2/functional/_deprecated.py
--- a/torchvision/transforms/v2/functional/_deprecated.py
+++ b/torchvision/transforms/v2/functional/_deprecated.py
@@ -10,15 +10,10 @@
@torch.jit.unused
def to_grayscale(inpt: PIL.Image.Image, num_output_channels: int = 1) -> PIL.Image.Image:
- call = ", num_output_channels=3" if num_output_channels == 3 else ""
- replacement = "convert_color_space(..., color_space=datapoints.ColorSpace.GRAY)"
- if num_output_channels == 3:
- replacement = f"convert_color_space({replacement}, color_space=datapoints.ColorSpace.RGB)"
warnings.warn(
- f"The function `to_grayscale(...{call})` is deprecated in will be removed in a future release. "
- f"Instead, please use `{replacement}`.",
+ "The function `to_grayscale` is deprecated in will be removed in a future release. "
+ "Instead, please use `rgb_to_grayscale`.",
)
-
return _F.to_grayscale(inpt, num_output_channels=num_output_channels)
| {"golden_diff": "diff --git a/torchvision/transforms/v2/functional/_deprecated.py b/torchvision/transforms/v2/functional/_deprecated.py\n--- a/torchvision/transforms/v2/functional/_deprecated.py\n+++ b/torchvision/transforms/v2/functional/_deprecated.py\n@@ -10,15 +10,10 @@\n \n @torch.jit.unused\n def to_grayscale(inpt: PIL.Image.Image, num_output_channels: int = 1) -> PIL.Image.Image:\n- call = \", num_output_channels=3\" if num_output_channels == 3 else \"\"\n- replacement = \"convert_color_space(..., color_space=datapoints.ColorSpace.GRAY)\"\n- if num_output_channels == 3:\n- replacement = f\"convert_color_space({replacement}, color_space=datapoints.ColorSpace.RGB)\"\n warnings.warn(\n- f\"The function `to_grayscale(...{call})` is deprecated in will be removed in a future release. \"\n- f\"Instead, please use `{replacement}`.\",\n+ \"The function `to_grayscale` is deprecated in will be removed in a future release. \"\n+ \"Instead, please use `rgb_to_grayscale`.\",\n )\n-\n return _F.to_grayscale(inpt, num_output_channels=num_output_channels)\n", "issue": "to_grayscale gives non-actionable deprecation warning\n_Originally reported in the [user forum](https://discuss.pytorch.org/t/cannot-find-convert-color-space/182591) by `@function2`._\r\n\r\n> When I use to_grayscale, there\u2019s a deprecation warning:\r\n> ```\r\n> UserWarning: The function `to_grayscale(...)` is deprecated in will be removed in a future release. Instead, please use `convert_color_space(..., color_space=datapoints.ColorSpace.GRAY)`.\r\n> ```\r\n> However, I can\u2019t find this function in the current code base\r\n\r\n---\r\n\r\nNote that this only applies to `torchvision.transforms.v2.function`\r\n\r\nhttps://github.com/pytorch/vision/blob/52eb5039bed1a23eee14014ff4cd6fd9cc9b2b08/torchvision/transforms/v2/functional/_deprecated.py#L12-L22\r\n\r\nsince the v1 version, i.e. `torchvision.transforms.functional` does not emit the warning\r\n\r\nhttps://github.com/pytorch/vision/blob/52eb5039bed1a23eee14014ff4cd6fd9cc9b2b08/torchvision/transforms/functional.py#L1249-L1253\r\n\r\nFixing the v2 warning was forgotten in #7120.\r\n\n\ncc @vfdev-5\n", "before_files": [{"content": "import warnings\nfrom typing import Any, List, Union\n\nimport PIL.Image\nimport torch\n\nfrom torchvision import datapoints\nfrom torchvision.transforms import functional as _F\n\n\[email protected]\ndef to_grayscale(inpt: PIL.Image.Image, num_output_channels: int = 1) -> PIL.Image.Image:\n call = \", num_output_channels=3\" if num_output_channels == 3 else \"\"\n replacement = \"convert_color_space(..., color_space=datapoints.ColorSpace.GRAY)\"\n if num_output_channels == 3:\n replacement = f\"convert_color_space({replacement}, color_space=datapoints.ColorSpace.RGB)\"\n warnings.warn(\n f\"The function `to_grayscale(...{call})` is deprecated in will be removed in a future release. \"\n f\"Instead, please use `{replacement}`.\",\n )\n\n return _F.to_grayscale(inpt, num_output_channels=num_output_channels)\n\n\[email protected]\ndef to_tensor(inpt: Any) -> torch.Tensor:\n warnings.warn(\n \"The function `to_tensor(...)` is deprecated and will be removed in a future release. \"\n \"Instead, please use `to_image_tensor(...)` followed by `convert_image_dtype(...)`.\"\n )\n return _F.to_tensor(inpt)\n\n\ndef get_image_size(inpt: Union[datapoints._ImageTypeJIT, datapoints._VideoTypeJIT]) -> List[int]:\n warnings.warn(\n \"The function `get_image_size(...)` is deprecated and will be removed in a future release. 
\"\n \"Instead, please use `get_spatial_size(...)` which returns `[h, w]` instead of `[w, h]`.\"\n )\n return _F.get_image_size(inpt)\n", "path": "torchvision/transforms/v2/functional/_deprecated.py"}], "after_files": [{"content": "import warnings\nfrom typing import Any, List, Union\n\nimport PIL.Image\nimport torch\n\nfrom torchvision import datapoints\nfrom torchvision.transforms import functional as _F\n\n\[email protected]\ndef to_grayscale(inpt: PIL.Image.Image, num_output_channels: int = 1) -> PIL.Image.Image:\n warnings.warn(\n \"The function `to_grayscale` is deprecated in will be removed in a future release. \"\n \"Instead, please use `rgb_to_grayscale`.\",\n )\n return _F.to_grayscale(inpt, num_output_channels=num_output_channels)\n\n\[email protected]\ndef to_tensor(inpt: Any) -> torch.Tensor:\n warnings.warn(\n \"The function `to_tensor(...)` is deprecated and will be removed in a future release. \"\n \"Instead, please use `to_image_tensor(...)` followed by `convert_image_dtype(...)`.\"\n )\n return _F.to_tensor(inpt)\n\n\ndef get_image_size(inpt: Union[datapoints._ImageTypeJIT, datapoints._VideoTypeJIT]) -> List[int]:\n warnings.warn(\n \"The function `get_image_size(...)` is deprecated and will be removed in a future release. \"\n \"Instead, please use `get_spatial_size(...)` which returns `[h, w]` instead of `[w, h]`.\"\n )\n return _F.get_image_size(inpt)\n", "path": "torchvision/transforms/v2/functional/_deprecated.py"}]} | 1,025 | 276 |
gh_patches_debug_28038 | rasdani/github-patches | git_diff | TheAlgorithms__Python-796 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
math CheckPrime is wrong
The current implementation doesn't support:
* Negative values, -1 is not a prime number. The current implementation raises a TypeError due to `math.sqrt` on negative values.
The current implementation returns the wrong value for:
* 0, 0 doesn't have any divider, primes must have two.
* 1, 1 just have one divider, primes must have two.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `maths/PrimeCheck.py`
Content:
```
1 import math
2 def primeCheck(number):
3 if number % 2 == 0 and number > 2:
4 return False
5 return all(number % i for i in range(3, int(math.sqrt(number)) + 1, 2))
6
7 def main():
8 print(primeCheck(37))
9 print(primeCheck(100))
10 print(primeCheck(77))
11
12 if __name__ == '__main__':
13 main()
14
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/maths/PrimeCheck.py b/maths/PrimeCheck.py
--- a/maths/PrimeCheck.py
+++ b/maths/PrimeCheck.py
@@ -1,13 +1,54 @@
import math
+import unittest
+
+
def primeCheck(number):
- if number % 2 == 0 and number > 2:
+ """
+ A number is prime if it has exactly two dividers: 1 and itself.
+ """
+ if number < 2:
+ # Negatives, 0 and 1 are not primes
return False
- return all(number % i for i in range(3, int(math.sqrt(number)) + 1, 2))
+ if number < 4:
+ # 2 and 3 are primes
+ return True
+ if number % 2 == 0:
+ # Even values are not primes
+ return False
+
+ # Except 2, all primes are odd. If any odd value divide
+ # the number, then that number is not prime.
+ odd_numbers = range(3, int(math.sqrt(number)) + 1, 2)
+ return not any(number % i == 0 for i in odd_numbers)
+
+
+class Test(unittest.TestCase):
+ def test_primes(self):
+ self.assertTrue(primeCheck(2))
+ self.assertTrue(primeCheck(3))
+ self.assertTrue(primeCheck(5))
+ self.assertTrue(primeCheck(7))
+ self.assertTrue(primeCheck(11))
+ self.assertTrue(primeCheck(13))
+ self.assertTrue(primeCheck(17))
+ self.assertTrue(primeCheck(19))
+ self.assertTrue(primeCheck(23))
+ self.assertTrue(primeCheck(29))
+
+ def test_not_primes(self):
+ self.assertFalse(primeCheck(-19),
+ "Negative numbers are not prime.")
+ self.assertFalse(primeCheck(0),
+ "Zero doesn't have any divider, primes must have two")
+ self.assertFalse(primeCheck(1),
+ "One just have 1 divider, primes must have two.")
+ self.assertFalse(primeCheck(2 * 2))
+ self.assertFalse(primeCheck(2 * 3))
+ self.assertFalse(primeCheck(3 * 3))
+ self.assertFalse(primeCheck(3 * 5))
+ self.assertFalse(primeCheck(3 * 5 * 7))
-def main():
- print(primeCheck(37))
- print(primeCheck(100))
- print(primeCheck(77))
if __name__ == '__main__':
- main()
+ unittest.main()
+
| {"golden_diff": "diff --git a/maths/PrimeCheck.py b/maths/PrimeCheck.py\n--- a/maths/PrimeCheck.py\n+++ b/maths/PrimeCheck.py\n@@ -1,13 +1,54 @@\n import math\n+import unittest\n+\n+\n def primeCheck(number):\n- if number % 2 == 0 and number > 2: \n+ \"\"\"\n+ A number is prime if it has exactly two dividers: 1 and itself.\n+ \"\"\"\n+ if number < 2:\n+ # Negatives, 0 and 1 are not primes\n return False\n- return all(number % i for i in range(3, int(math.sqrt(number)) + 1, 2))\n+ if number < 4:\n+ # 2 and 3 are primes\n+ return True\n+ if number % 2 == 0:\n+ # Even values are not primes\n+ return False\n+\n+ # Except 2, all primes are odd. If any odd value divide\n+ # the number, then that number is not prime.\n+ odd_numbers = range(3, int(math.sqrt(number)) + 1, 2)\n+ return not any(number % i == 0 for i in odd_numbers)\n+\n+\n+class Test(unittest.TestCase):\n+ def test_primes(self):\n+ self.assertTrue(primeCheck(2))\n+ self.assertTrue(primeCheck(3))\n+ self.assertTrue(primeCheck(5))\n+ self.assertTrue(primeCheck(7))\n+ self.assertTrue(primeCheck(11))\n+ self.assertTrue(primeCheck(13))\n+ self.assertTrue(primeCheck(17))\n+ self.assertTrue(primeCheck(19))\n+ self.assertTrue(primeCheck(23))\n+ self.assertTrue(primeCheck(29))\n+\n+ def test_not_primes(self):\n+ self.assertFalse(primeCheck(-19),\n+ \"Negative numbers are not prime.\")\n+ self.assertFalse(primeCheck(0),\n+ \"Zero doesn't have any divider, primes must have two\")\n+ self.assertFalse(primeCheck(1),\n+ \"One just have 1 divider, primes must have two.\")\n+ self.assertFalse(primeCheck(2 * 2))\n+ self.assertFalse(primeCheck(2 * 3))\n+ self.assertFalse(primeCheck(3 * 3))\n+ self.assertFalse(primeCheck(3 * 5))\n+ self.assertFalse(primeCheck(3 * 5 * 7))\n \n-def main():\n- print(primeCheck(37))\n- print(primeCheck(100))\n- print(primeCheck(77))\n \n if __name__ == '__main__':\n-\tmain()\n+ unittest.main()\n+\n", "issue": "math CheckPrime is wrong\nThe current implementation doesn't support:\r\n\r\n* Negative values, -1 is not a prime number. Current implementation raise a TypeError due to `math.sqrt` on negative values.\r\n\r\nThe current implementation return the wrong value for:\r\n\r\n* 0, 0 doesn't have any divider, primes must have two.\r\n* 1, 1 just have one divider, primes must have two.\n", "before_files": [{"content": "import math\ndef primeCheck(number):\n if number % 2 == 0 and number > 2: \n return False\n return all(number % i for i in range(3, int(math.sqrt(number)) + 1, 2))\n\ndef main():\n print(primeCheck(37))\n print(primeCheck(100))\n print(primeCheck(77))\n\nif __name__ == '__main__':\n\tmain()\n", "path": "maths/PrimeCheck.py"}], "after_files": [{"content": "import math\nimport unittest\n\n\ndef primeCheck(number):\n \"\"\"\n A number is prime if it has exactly two dividers: 1 and itself.\n \"\"\"\n if number < 2:\n # Negatives, 0 and 1 are not primes\n return False\n if number < 4:\n # 2 and 3 are primes\n return True\n if number % 2 == 0:\n # Even values are not primes\n return False\n\n # Except 2, all primes are odd. 
If any odd value divide\n # the number, then that number is not prime.\n odd_numbers = range(3, int(math.sqrt(number)) + 1, 2)\n return not any(number % i == 0 for i in odd_numbers)\n\n\nclass Test(unittest.TestCase):\n def test_primes(self):\n self.assertTrue(primeCheck(2))\n self.assertTrue(primeCheck(3))\n self.assertTrue(primeCheck(5))\n self.assertTrue(primeCheck(7))\n self.assertTrue(primeCheck(11))\n self.assertTrue(primeCheck(13))\n self.assertTrue(primeCheck(17))\n self.assertTrue(primeCheck(19))\n self.assertTrue(primeCheck(23))\n self.assertTrue(primeCheck(29))\n\n def test_not_primes(self):\n self.assertFalse(primeCheck(-19),\n \"Negative numbers are not prime.\")\n self.assertFalse(primeCheck(0),\n \"Zero doesn't have any divider, primes must have two\")\n self.assertFalse(primeCheck(1),\n \"One just have 1 divider, primes must have two.\")\n self.assertFalse(primeCheck(2 * 2))\n self.assertFalse(primeCheck(2 * 3))\n self.assertFalse(primeCheck(3 * 3))\n self.assertFalse(primeCheck(3 * 5))\n self.assertFalse(primeCheck(3 * 5 * 7))\n\n\nif __name__ == '__main__':\n unittest.main()\n\n", "path": "maths/PrimeCheck.py"}]} | 458 | 599 |
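Aside: the corrected predicate is ordinary trial division by odd candidates up to the square root. A compact equivalent with the same semantics as the patched `primeCheck` (negatives, 0 and 1 rejected; 2 and 3 accepted; even numbers above 2 rejected):

```python
import math

def is_prime(n: int) -> bool:
    if n < 2:       # negatives, 0 and 1 have fewer than two divisors
        return False
    if n < 4:       # 2 and 3 are prime
        return True
    if n % 2 == 0:  # even numbers greater than 2 are composite
        return False
    return all(n % d for d in range(3, int(math.sqrt(n)) + 1, 2))

assert [n for n in range(-3, 30) if is_prime(n)] == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```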
gh_patches_debug_655 | rasdani/github-patches | git_diff | pex-tool__pex-2104 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.130
On the docket:
+ [x] Pex fails to lock - missing artifact #2098
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.129"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.129"
+__version__ = "2.1.130"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.129\"\n+__version__ = \"2.1.130\"\n", "issue": "Release 2.1.130\nOn the docket:\r\n+ [x] Pex fails to lock - missing artifact #2098 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.129\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.130\"\n", "path": "pex/version.py"}]} | 341 | 98 |
gh_patches_debug_22746 | rasdani/github-patches | git_diff | pre-commit__pre-commit-346 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Windows: Terminal width support
We detect terminal width on unix-likes by running `tput cols`. This works fine for those platforms but doesn't work well for Windows. Maybe find a package which does this logic for us and depend on that.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/output.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import os
4 import subprocess
5 import sys
6
7 from pre_commit import color
8 from pre_commit import five
9
10
11 # TODO: smell: import side-effects
12 try:
13 if not os.environ.get('TERM'): # pragma: no cover (dumb terminal)
14 raise OSError('Cannot determine width without TERM')
15 else: # pragma no cover (windows)
16 COLS = int(
17 subprocess.Popen(
18 ('tput', 'cols'), stdout=subprocess.PIPE,
19 ).communicate()[0] or
20 # Default in the case of no terminal
21 80
22 )
23 except OSError: # pragma: no cover (windows)
24 COLS = 80
25
26
27 def get_hook_message(
28 start,
29 postfix='',
30 end_msg=None,
31 end_len=0,
32 end_color=None,
33 use_color=None,
34 cols=COLS,
35 ):
36 """Prints a message for running a hook.
37
38 This currently supports three approaches:
39
40 # Print `start` followed by dots, leaving 6 characters at the end
41 >>> print_hook_message('start', end_len=6)
42 start...............................................................
43
44 # Print `start` followed by dots with the end message colored if coloring
45 # is specified and a newline afterwards
46 >>> print_hook_message(
47 'start',
48 end_msg='end',
49 end_color=color.RED,
50 use_color=True,
51 )
52 start...................................................................end
53
54 # Print `start` followed by dots, followed by the `postfix` message
55 # uncolored, followed by the `end_msg` colored if specified and a newline
56 # afterwards
57 >>> print_hook_message(
58 'start',
59 postfix='postfix ',
60 end_msg='end',
61 end_color=color.RED,
62 use_color=True,
63 )
64 start...........................................................postfix end
65 """
66 if bool(end_msg) == bool(end_len):
67 raise ValueError('Expected one of (`end_msg`, `end_len`)')
68 if end_msg is not None and (end_color is None or use_color is None):
69 raise ValueError(
70 '`end_color` and `use_color` are required with `end_msg`'
71 )
72
73 if end_len:
74 return start + '.' * (cols - len(start) - end_len - 1)
75 else:
76 return '{0}{1}{2}{3}\n'.format(
77 start,
78 '.' * (cols - len(start) - len(postfix) - len(end_msg) - 1),
79 postfix,
80 color.format_color(end_msg, end_color, use_color),
81 )
82
83
84 stdout_byte_stream = getattr(sys.stdout, 'buffer', sys.stdout)
85
86
87 def sys_stdout_write_wrapper(s, stream=stdout_byte_stream):
88 stream.write(five.to_bytes(s))
89
```
Path: `setup.py`
Content:
```
1 from setuptools import find_packages
2 from setuptools import setup
3
4
5 setup(
6 name='pre_commit',
7 description=(
8 'A framework for managing and maintaining multi-language pre-commit '
9 'hooks.'
10 ),
11 url='https://github.com/pre-commit/pre-commit',
12 version='0.7.6',
13
14 author='Anthony Sottile',
15 author_email='[email protected]',
16
17 platforms='linux',
18 classifiers=[
19 'License :: OSI Approved :: MIT License',
20 'Programming Language :: Python :: 2',
21 'Programming Language :: Python :: 2.6',
22 'Programming Language :: Python :: 2.7',
23 'Programming Language :: Python :: 3',
24 'Programming Language :: Python :: 3.3',
25 'Programming Language :: Python :: 3.4',
26 'Programming Language :: Python :: 3.5',
27 'Programming Language :: Python :: Implementation :: CPython',
28 'Programming Language :: Python :: Implementation :: PyPy',
29 ],
30
31 packages=find_packages('.', exclude=('tests*', 'testing*')),
32 package_data={
33 'pre_commit': [
34 'resources/hook-tmpl',
35 'resources/pre-push-tmpl',
36 'resources/rbenv.tar.gz',
37 'resources/ruby-build.tar.gz',
38 'resources/ruby-download.tar.gz',
39 ]
40 },
41 install_requires=[
42 'argparse',
43 'aspy.yaml',
44 'cached-property',
45 'jsonschema',
46 'nodeenv>=0.11.1',
47 'ordereddict',
48 'pyyaml',
49 'virtualenv',
50 ],
51 entry_points={
52 'console_scripts': [
53 'pre-commit = pre_commit.main:main',
54 'pre-commit-validate-config = pre_commit.clientlib.validate_config:run', # noqa
55 'pre-commit-validate-manifest = pre_commit.clientlib.validate_manifest:run', # noqa
56 ],
57 },
58 )
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/output.py b/pre_commit/output.py
--- a/pre_commit/output.py
+++ b/pre_commit/output.py
@@ -1,27 +1,15 @@
from __future__ import unicode_literals
-import os
-import subprocess
import sys
+from backports.shutil_get_terminal_size import get_terminal_size
+
from pre_commit import color
from pre_commit import five
-
# TODO: smell: import side-effects
-try:
- if not os.environ.get('TERM'): # pragma: no cover (dumb terminal)
- raise OSError('Cannot determine width without TERM')
- else: # pragma no cover (windows)
- COLS = int(
- subprocess.Popen(
- ('tput', 'cols'), stdout=subprocess.PIPE,
- ).communicate()[0] or
- # Default in the case of no terminal
- 80
- )
-except OSError: # pragma: no cover (windows)
- COLS = 80
+# TODO: https://github.com/chrippa/backports.shutil_get_terminal_size/issues/4
+COLS = get_terminal_size().columns or 80
def get_hook_message(
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -41,6 +41,7 @@
install_requires=[
'argparse',
'aspy.yaml',
+ 'backports.shutil_get_terminal_size',
'cached-property',
'jsonschema',
'nodeenv>=0.11.1',
| {"golden_diff": "diff --git a/pre_commit/output.py b/pre_commit/output.py\n--- a/pre_commit/output.py\n+++ b/pre_commit/output.py\n@@ -1,27 +1,15 @@\n from __future__ import unicode_literals\n \n-import os\n-import subprocess\n import sys\n \n+from backports.shutil_get_terminal_size import get_terminal_size\n+\n from pre_commit import color\n from pre_commit import five\n \n-\n # TODO: smell: import side-effects\n-try:\n- if not os.environ.get('TERM'): # pragma: no cover (dumb terminal)\n- raise OSError('Cannot determine width without TERM')\n- else: # pragma no cover (windows)\n- COLS = int(\n- subprocess.Popen(\n- ('tput', 'cols'), stdout=subprocess.PIPE,\n- ).communicate()[0] or\n- # Default in the case of no terminal\n- 80\n- )\n-except OSError: # pragma: no cover (windows)\n- COLS = 80\n+# TODO: https://github.com/chrippa/backports.shutil_get_terminal_size/issues/4\n+COLS = get_terminal_size().columns or 80\n \n \n def get_hook_message(\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -41,6 +41,7 @@\n install_requires=[\n 'argparse',\n 'aspy.yaml',\n+ 'backports.shutil_get_terminal_size',\n 'cached-property',\n 'jsonschema',\n 'nodeenv>=0.11.1',\n", "issue": "Windows: Terminal width support\nWe detect terminal width in unixlikes by running `tput cols`. This works fine for those platforms but doesn't work well for windows. Maybe find a package which does this logic for us and depend on that.\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport os\nimport subprocess\nimport sys\n\nfrom pre_commit import color\nfrom pre_commit import five\n\n\n# TODO: smell: import side-effects\ntry:\n if not os.environ.get('TERM'): # pragma: no cover (dumb terminal)\n raise OSError('Cannot determine width without TERM')\n else: # pragma no cover (windows)\n COLS = int(\n subprocess.Popen(\n ('tput', 'cols'), stdout=subprocess.PIPE,\n ).communicate()[0] or\n # Default in the case of no terminal\n 80\n )\nexcept OSError: # pragma: no cover (windows)\n COLS = 80\n\n\ndef get_hook_message(\n start,\n postfix='',\n end_msg=None,\n end_len=0,\n end_color=None,\n use_color=None,\n cols=COLS,\n):\n \"\"\"Prints a message for running a hook.\n\n This currently supports three approaches:\n\n # Print `start` followed by dots, leaving 6 characters at the end\n >>> print_hook_message('start', end_len=6)\n start...............................................................\n\n # Print `start` followed by dots with the end message colored if coloring\n # is specified and a newline afterwards\n >>> print_hook_message(\n 'start',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...................................................................end\n\n # Print `start` followed by dots, followed by the `postfix` message\n # uncolored, followed by the `end_msg` colored if specified and a newline\n # afterwards\n >>> print_hook_message(\n 'start',\n postfix='postfix ',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...........................................................postfix end\n \"\"\"\n if bool(end_msg) == bool(end_len):\n raise ValueError('Expected one of (`end_msg`, `end_len`)')\n if end_msg is not None and (end_color is None or use_color is None):\n raise ValueError(\n '`end_color` and `use_color` are required with `end_msg`'\n )\n\n if end_len:\n return start + '.' * (cols - len(start) - end_len - 1)\n else:\n return '{0}{1}{2}{3}\\n'.format(\n start,\n '.' 
* (cols - len(start) - len(postfix) - len(end_msg) - 1),\n postfix,\n color.format_color(end_msg, end_color, use_color),\n )\n\n\nstdout_byte_stream = getattr(sys.stdout, 'buffer', sys.stdout)\n\n\ndef sys_stdout_write_wrapper(s, stream=stdout_byte_stream):\n stream.write(five.to_bytes(s))\n", "path": "pre_commit/output.py"}, {"content": "from setuptools import find_packages\nfrom setuptools import setup\n\n\nsetup(\n name='pre_commit',\n description=(\n 'A framework for managing and maintaining multi-language pre-commit '\n 'hooks.'\n ),\n url='https://github.com/pre-commit/pre-commit',\n version='0.7.6',\n\n author='Anthony Sottile',\n author_email='[email protected]',\n\n platforms='linux',\n classifiers=[\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n\n packages=find_packages('.', exclude=('tests*', 'testing*')),\n package_data={\n 'pre_commit': [\n 'resources/hook-tmpl',\n 'resources/pre-push-tmpl',\n 'resources/rbenv.tar.gz',\n 'resources/ruby-build.tar.gz',\n 'resources/ruby-download.tar.gz',\n ]\n },\n install_requires=[\n 'argparse',\n 'aspy.yaml',\n 'cached-property',\n 'jsonschema',\n 'nodeenv>=0.11.1',\n 'ordereddict',\n 'pyyaml',\n 'virtualenv',\n ],\n entry_points={\n 'console_scripts': [\n 'pre-commit = pre_commit.main:main',\n 'pre-commit-validate-config = pre_commit.clientlib.validate_config:run', # noqa\n 'pre-commit-validate-manifest = pre_commit.clientlib.validate_manifest:run', # noqa\n ],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport sys\n\nfrom backports.shutil_get_terminal_size import get_terminal_size\n\nfrom pre_commit import color\nfrom pre_commit import five\n\n# TODO: smell: import side-effects\n# TODO: https://github.com/chrippa/backports.shutil_get_terminal_size/issues/4\nCOLS = get_terminal_size().columns or 80\n\n\ndef get_hook_message(\n start,\n postfix='',\n end_msg=None,\n end_len=0,\n end_color=None,\n use_color=None,\n cols=COLS,\n):\n \"\"\"Prints a message for running a hook.\n\n This currently supports three approaches:\n\n # Print `start` followed by dots, leaving 6 characters at the end\n >>> print_hook_message('start', end_len=6)\n start...............................................................\n\n # Print `start` followed by dots with the end message colored if coloring\n # is specified and a newline afterwards\n >>> print_hook_message(\n 'start',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...................................................................end\n\n # Print `start` followed by dots, followed by the `postfix` message\n # uncolored, followed by the `end_msg` colored if specified and a newline\n # afterwards\n >>> print_hook_message(\n 'start',\n postfix='postfix ',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...........................................................postfix end\n \"\"\"\n if bool(end_msg) == bool(end_len):\n raise ValueError('Expected one of (`end_msg`, `end_len`)')\n if end_msg is not None and (end_color is None or use_color is None):\n raise ValueError(\n '`end_color` and 
`use_color` are required with `end_msg`'\n )\n\n if end_len:\n return start + '.' * (cols - len(start) - end_len - 1)\n else:\n return '{0}{1}{2}{3}\\n'.format(\n start,\n '.' * (cols - len(start) - len(postfix) - len(end_msg) - 1),\n postfix,\n color.format_color(end_msg, end_color, use_color),\n )\n\n\nstdout_byte_stream = getattr(sys.stdout, 'buffer', sys.stdout)\n\n\ndef sys_stdout_write_wrapper(s, stream=stdout_byte_stream):\n stream.write(five.to_bytes(s))\n", "path": "pre_commit/output.py"}, {"content": "from setuptools import find_packages\nfrom setuptools import setup\n\n\nsetup(\n name='pre_commit',\n description=(\n 'A framework for managing and maintaining multi-language pre-commit '\n 'hooks.'\n ),\n url='https://github.com/pre-commit/pre-commit',\n version='0.7.6',\n\n author='Anthony Sottile',\n author_email='[email protected]',\n\n platforms='linux',\n classifiers=[\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n\n packages=find_packages('.', exclude=('tests*', 'testing*')),\n package_data={\n 'pre_commit': [\n 'resources/hook-tmpl',\n 'resources/pre-push-tmpl',\n 'resources/rbenv.tar.gz',\n 'resources/ruby-build.tar.gz',\n 'resources/ruby-download.tar.gz',\n ]\n },\n install_requires=[\n 'argparse',\n 'aspy.yaml',\n 'backports.shutil_get_terminal_size',\n 'cached-property',\n 'jsonschema',\n 'nodeenv>=0.11.1',\n 'ordereddict',\n 'pyyaml',\n 'virtualenv',\n ],\n entry_points={\n 'console_scripts': [\n 'pre-commit = pre_commit.main:main',\n 'pre-commit-validate-config = pre_commit.clientlib.validate_config:run', # noqa\n 'pre-commit-validate-manifest = pre_commit.clientlib.validate_manifest:run', # noqa\n ],\n },\n)\n", "path": "setup.py"}]} | 1,602 | 339 |
gh_patches_debug_60612 | rasdani/github-patches | git_diff | cloudtools__troposphere-2037 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for additional Flink runtimes in Kinesis Data Analytics.
Kinesis supports additional Flink runtimes (FLINK-1_13, ZEPPELIN-FLINK-1_0, ZEPPELIN-FLINK-2_0), see https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kinesisanalyticsv2-application.html.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `troposphere/validators/kinesisanalyticsv2.py`
Content:
```
1 # Copyright (c) 2012-2022, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6
7 def validate_runtime_environment(runtime_environment):
8 """
9 Validate RuntimeEnvironment for Application
10 Property: Application.RuntimeEnvironment
11 """
12
13 VALID_RUNTIME_ENVIRONMENTS = ("SQL-1_0", "FLINK-1_6", "FLINK-1_8", "FLINK-1_11")
14
15 if runtime_environment not in VALID_RUNTIME_ENVIRONMENTS:
16 raise ValueError(
17 "Application RuntimeEnvironment must be one of: %s"
18 % ", ".join(VALID_RUNTIME_ENVIRONMENTS)
19 )
20 return runtime_environment
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/troposphere/validators/kinesisanalyticsv2.py b/troposphere/validators/kinesisanalyticsv2.py
--- a/troposphere/validators/kinesisanalyticsv2.py
+++ b/troposphere/validators/kinesisanalyticsv2.py
@@ -10,7 +10,15 @@
Property: Application.RuntimeEnvironment
"""
- VALID_RUNTIME_ENVIRONMENTS = ("SQL-1_0", "FLINK-1_6", "FLINK-1_8", "FLINK-1_11")
+ VALID_RUNTIME_ENVIRONMENTS = (
+ "FLINK-1_6",
+ "FLINK-1_8",
+ "FLINK-1_11",
+ "FLINK-1_13",
+ "SQL-1_0",
+ "ZEPPELIN-FLINK-1_0",
+ "ZEPPELIN-FLINK-2_0",
+ )
if runtime_environment not in VALID_RUNTIME_ENVIRONMENTS:
raise ValueError(
| {"golden_diff": "diff --git a/troposphere/validators/kinesisanalyticsv2.py b/troposphere/validators/kinesisanalyticsv2.py\n--- a/troposphere/validators/kinesisanalyticsv2.py\n+++ b/troposphere/validators/kinesisanalyticsv2.py\n@@ -10,7 +10,15 @@\n Property: Application.RuntimeEnvironment\n \"\"\"\n \n- VALID_RUNTIME_ENVIRONMENTS = (\"SQL-1_0\", \"FLINK-1_6\", \"FLINK-1_8\", \"FLINK-1_11\")\n+ VALID_RUNTIME_ENVIRONMENTS = (\n+ \"FLINK-1_6\",\n+ \"FLINK-1_8\",\n+ \"FLINK-1_11\",\n+ \"FLINK-1_13\",\n+ \"SQL-1_0\",\n+ \"ZEPPELIN-FLINK-1_0\",\n+ \"ZEPPELIN-FLINK-2_0\",\n+ )\n \n if runtime_environment not in VALID_RUNTIME_ENVIRONMENTS:\n raise ValueError(\n", "issue": "Add support for additional Flink runtimes in Kinesis Data Analytics.\nKinesis supports additional Flink runtimes (FLINK-1_13, ZEPPELIN-FLINK-1_0, ZEPPELIN-FLINK-2_0), see https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kinesisanalyticsv2-application.html.\n", "before_files": [{"content": "# Copyright (c) 2012-2022, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\n\ndef validate_runtime_environment(runtime_environment):\n \"\"\"\n Validate RuntimeEnvironment for Application\n Property: Application.RuntimeEnvironment\n \"\"\"\n\n VALID_RUNTIME_ENVIRONMENTS = (\"SQL-1_0\", \"FLINK-1_6\", \"FLINK-1_8\", \"FLINK-1_11\")\n\n if runtime_environment not in VALID_RUNTIME_ENVIRONMENTS:\n raise ValueError(\n \"Application RuntimeEnvironment must be one of: %s\"\n % \", \".join(VALID_RUNTIME_ENVIRONMENTS)\n )\n return runtime_environment\n", "path": "troposphere/validators/kinesisanalyticsv2.py"}], "after_files": [{"content": "# Copyright (c) 2012-2022, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\n\ndef validate_runtime_environment(runtime_environment):\n \"\"\"\n Validate RuntimeEnvironment for Application\n Property: Application.RuntimeEnvironment\n \"\"\"\n\n VALID_RUNTIME_ENVIRONMENTS = (\n \"FLINK-1_6\",\n \"FLINK-1_8\",\n \"FLINK-1_11\",\n \"FLINK-1_13\",\n \"SQL-1_0\",\n \"ZEPPELIN-FLINK-1_0\",\n \"ZEPPELIN-FLINK-2_0\",\n )\n\n if runtime_environment not in VALID_RUNTIME_ENVIRONMENTS:\n raise ValueError(\n \"Application RuntimeEnvironment must be one of: %s\"\n % \", \".join(VALID_RUNTIME_ENVIRONMENTS)\n )\n return runtime_environment\n", "path": "troposphere/validators/kinesisanalyticsv2.py"}]} | 534 | 233 |
gh_patches_debug_6154 | rasdani/github-patches | git_diff | litestar-org__litestar-1659 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem.
This is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/contrib/repository/filters.py`
Content:
```
1 """Collection filter datastructures."""
2 from __future__ import annotations
3
4 from dataclasses import dataclass
5 from datetime import datetime # noqa: TCH003
6 from typing import TYPE_CHECKING, Generic, Literal, TypeVar
7
8 if TYPE_CHECKING:
9 from collections import abc
10
11
12 T = TypeVar("T")
13
14 __all__ = ["BeforeAfter", "CollectionFilter", "LimitOffset", "OrderBy", "SearchFilter"]
15
16
17 @dataclass
18 class BeforeAfter:
19 """Data required to filter a query on a ``datetime`` column."""
20
21 field_name: str
22 """Name of the model attribute to filter on."""
23 before: datetime | None
24 """Filter results where field earlier than this."""
25 after: datetime | None
26 """Filter results where field later than this."""
27
28
29 @dataclass
30 class CollectionFilter(Generic[T]):
31 """Data required to construct a ``WHERE ... IN (...)`` clause."""
32
33 field_name: str
34 """Name of the model attribute to filter on."""
35 values: abc.Collection[T]
36 """Values for ``IN`` clause."""
37
38
39 @dataclass
40 class LimitOffset:
41 """Data required to add limit/offset filtering to a query."""
42
43 limit: int
44 """Value for ``LIMIT`` clause of query."""
45 offset: int
46 """Value for ``OFFSET`` clause of query."""
47
48
49 @dataclass
50 class OrderBy:
51 """Data required to construct a ``ORDER BY ...`` clause."""
52
53 field_name: str
54 """Name of the model attribute to sort on."""
55 sort_order: Literal["asc", "desc"] = "asc"
56 """Sort ascending or descending"""
57
58
59 @dataclass
60 class SearchFilter:
61 """Data required to construct a ``WHERE field_name LIKE '%' || :value || '%'`` clause."""
62
63 field_name: str
64 """Name of the model attribute to sort on."""
65 value: str
66 """Values for ``LIKE`` clause."""
67 ignore_case: bool | None = False
68 """Should the search be case insensitive."""
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/litestar/contrib/repository/filters.py b/litestar/contrib/repository/filters.py
--- a/litestar/contrib/repository/filters.py
+++ b/litestar/contrib/repository/filters.py
@@ -1,13 +1,10 @@
"""Collection filter datastructures."""
from __future__ import annotations
+from collections import abc # noqa: TCH003
from dataclasses import dataclass
from datetime import datetime # noqa: TCH003
-from typing import TYPE_CHECKING, Generic, Literal, TypeVar
-
-if TYPE_CHECKING:
- from collections import abc
-
+from typing import Generic, Literal, TypeVar
T = TypeVar("T")
| {"golden_diff": "diff --git a/litestar/contrib/repository/filters.py b/litestar/contrib/repository/filters.py\n--- a/litestar/contrib/repository/filters.py\n+++ b/litestar/contrib/repository/filters.py\n@@ -1,13 +1,10 @@\n \"\"\"Collection filter datastructures.\"\"\"\n from __future__ import annotations\n \n+from collections import abc # noqa: TCH003\n from dataclasses import dataclass\n from datetime import datetime # noqa: TCH003\n-from typing import TYPE_CHECKING, Generic, Literal, TypeVar\n-\n-if TYPE_CHECKING:\n- from collections import abc\n-\n+from typing import Generic, Literal, TypeVar\n \n T = TypeVar(\"T\")\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "\"\"\"Collection filter datastructures.\"\"\"\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom datetime import datetime # noqa: TCH003\nfrom typing import TYPE_CHECKING, Generic, Literal, TypeVar\n\nif TYPE_CHECKING:\n from collections import abc\n\n\nT = TypeVar(\"T\")\n\n__all__ = [\"BeforeAfter\", \"CollectionFilter\", \"LimitOffset\", \"OrderBy\", \"SearchFilter\"]\n\n\n@dataclass\nclass BeforeAfter:\n \"\"\"Data required to filter a query on a ``datetime`` column.\"\"\"\n\n field_name: str\n \"\"\"Name of the model attribute to filter on.\"\"\"\n before: datetime | None\n \"\"\"Filter results where field earlier than this.\"\"\"\n after: datetime | None\n \"\"\"Filter results where field later than this.\"\"\"\n\n\n@dataclass\nclass CollectionFilter(Generic[T]):\n \"\"\"Data required to construct a ``WHERE ... 
IN (...)`` clause.\"\"\"\n\n field_name: str\n \"\"\"Name of the model attribute to filter on.\"\"\"\n values: abc.Collection[T]\n \"\"\"Values for ``IN`` clause.\"\"\"\n\n\n@dataclass\nclass LimitOffset:\n \"\"\"Data required to add limit/offset filtering to a query.\"\"\"\n\n limit: int\n \"\"\"Value for ``LIMIT`` clause of query.\"\"\"\n offset: int\n \"\"\"Value for ``OFFSET`` clause of query.\"\"\"\n\n\n@dataclass\nclass OrderBy:\n \"\"\"Data required to construct a ``ORDER BY ...`` clause.\"\"\"\n\n field_name: str\n \"\"\"Name of the model attribute to sort on.\"\"\"\n sort_order: Literal[\"asc\", \"desc\"] = \"asc\"\n \"\"\"Sort ascending or descending\"\"\"\n\n\n@dataclass\nclass SearchFilter:\n \"\"\"Data required to construct a ``WHERE field_name LIKE '%' || :value || '%'`` clause.\"\"\"\n\n field_name: str\n \"\"\"Name of the model attribute to sort on.\"\"\"\n value: str\n \"\"\"Values for ``LIKE`` clause.\"\"\"\n ignore_case: bool | None = False\n \"\"\"Should the search be case insensitive.\"\"\"\n", "path": "litestar/contrib/repository/filters.py"}], "after_files": [{"content": "\"\"\"Collection filter datastructures.\"\"\"\nfrom __future__ import annotations\n\nfrom collections import abc # noqa: TCH003\nfrom dataclasses import dataclass\nfrom datetime import datetime # noqa: TCH003\nfrom typing import Generic, Literal, TypeVar\n\nT = TypeVar(\"T\")\n\n__all__ = [\"BeforeAfter\", \"CollectionFilter\", \"LimitOffset\", \"OrderBy\", \"SearchFilter\"]\n\n\n@dataclass\nclass BeforeAfter:\n \"\"\"Data required to filter a query on a ``datetime`` column.\"\"\"\n\n field_name: str\n \"\"\"Name of the model attribute to filter on.\"\"\"\n before: datetime | None\n \"\"\"Filter results where field earlier than this.\"\"\"\n after: datetime | None\n \"\"\"Filter results where field later than this.\"\"\"\n\n\n@dataclass\nclass CollectionFilter(Generic[T]):\n \"\"\"Data required to construct a ``WHERE ... IN (...)`` clause.\"\"\"\n\n field_name: str\n \"\"\"Name of the model attribute to filter on.\"\"\"\n values: abc.Collection[T]\n \"\"\"Values for ``IN`` clause.\"\"\"\n\n\n@dataclass\nclass LimitOffset:\n \"\"\"Data required to add limit/offset filtering to a query.\"\"\"\n\n limit: int\n \"\"\"Value for ``LIMIT`` clause of query.\"\"\"\n offset: int\n \"\"\"Value for ``OFFSET`` clause of query.\"\"\"\n\n\n@dataclass\nclass OrderBy:\n \"\"\"Data required to construct a ``ORDER BY ...`` clause.\"\"\"\n\n field_name: str\n \"\"\"Name of the model attribute to sort on.\"\"\"\n sort_order: Literal[\"asc\", \"desc\"] = \"asc\"\n \"\"\"Sort ascending or descending\"\"\"\n\n\n@dataclass\nclass SearchFilter:\n \"\"\"Data required to construct a ``WHERE field_name LIKE '%' || :value || '%'`` clause.\"\"\"\n\n field_name: str\n \"\"\"Name of the model attribute to sort on.\"\"\"\n value: str\n \"\"\"Values for ``LIKE`` clause.\"\"\"\n ignore_case: bool | None = False\n \"\"\"Should the search be case insensitive.\"\"\"\n", "path": "litestar/contrib/repository/filters.py"}]} | 994 | 155 |
gh_patches_debug_26852 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-1999 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tiles on plans and container: blue corner missing for external projects
for external projects the little blue corner is missing
mac on chrome and firefox
<img width="400" alt="bildschirmfoto 2019-02-11 um 16 45 01" src="https://user-images.githubusercontent.com/35491681/52574395-7d708980-2e1c-11e9-8cfd-b9f8be74ea16.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/dashboard/__init__.py`
Content:
```
1 from adhocracy4.dashboard import components
2 from adhocracy4.dashboard import ProjectDashboard
3 from meinberlin.apps.projects import get_project_type
4
5
6 default_app_config = 'meinberlin.apps.dashboard.apps.Config'
7
8
9 class TypedProjectDashboard(ProjectDashboard):
10 def __init__(self, project):
11 self.project_type = get_project_type(project)
12 if self.project_type == 'bplan':
13 project = project.externalproject.bplan
14 elif self.project_type == 'external':
15 project = project.externalproject
16 elif self.project_type == 'container':
17 project = project.projectcontainer
18 super().__init__(project)
19
20 def get_project_components(self):
21 if self.project_type == 'bplan':
22 return [components.projects.get('bplan'),
23 components.projects.get('adminlog')]
24 elif self.project_type == 'external':
25 return [components.projects.get('external'),
26 components.projects.get('adminlog')]
27 elif self.project_type == 'container':
28 return [components.projects.get('container-basic'),
29 components.projects.get('container-information'),
30 components.projects.get('topics'),
31 components.projects.get('point'),
32 components.projects.get('container-projects')]
33
34 return [component for component in components.get_project_components()
35 if component.is_effective(self.project)]
36
37 def get_module_components(self):
38 if self.project_type == 'bplan':
39 return []
40 elif self.project_type == 'external':
41 return []
42 elif self.project_type == 'container':
43 return []
44
45 return components.get_module_components()
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/meinberlin/apps/dashboard/__init__.py b/meinberlin/apps/dashboard/__init__.py
--- a/meinberlin/apps/dashboard/__init__.py
+++ b/meinberlin/apps/dashboard/__init__.py
@@ -20,15 +20,20 @@
def get_project_components(self):
if self.project_type == 'bplan':
return [components.projects.get('bplan'),
+ components.projects.get('plans'),
components.projects.get('adminlog')]
elif self.project_type == 'external':
return [components.projects.get('external'),
+ components.projects.get('topics'),
+ components.projects.get('point'),
+ components.projects.get('plans'),
components.projects.get('adminlog')]
elif self.project_type == 'container':
return [components.projects.get('container-basic'),
components.projects.get('container-information'),
components.projects.get('topics'),
components.projects.get('point'),
+ components.projects.get('plans'),
components.projects.get('container-projects')]
return [component for component in components.get_project_components()
| {"golden_diff": "diff --git a/meinberlin/apps/dashboard/__init__.py b/meinberlin/apps/dashboard/__init__.py\n--- a/meinberlin/apps/dashboard/__init__.py\n+++ b/meinberlin/apps/dashboard/__init__.py\n@@ -20,15 +20,20 @@\n def get_project_components(self):\n if self.project_type == 'bplan':\n return [components.projects.get('bplan'),\n+ components.projects.get('plans'),\n components.projects.get('adminlog')]\n elif self.project_type == 'external':\n return [components.projects.get('external'),\n+ components.projects.get('topics'),\n+ components.projects.get('point'),\n+ components.projects.get('plans'),\n components.projects.get('adminlog')]\n elif self.project_type == 'container':\n return [components.projects.get('container-basic'),\n components.projects.get('container-information'),\n components.projects.get('topics'),\n components.projects.get('point'),\n+ components.projects.get('plans'),\n components.projects.get('container-projects')]\n \n return [component for component in components.get_project_components()\n", "issue": "tiles on plans and container: blue corner missing for external projects\nfor external projects the little blue corner is missing\r\n\r\nmac on chrome and firefox\r\n\r\n<img width=\"400\" alt=\"bildschirmfoto 2019-02-11 um 16 45 01\" src=\"https://user-images.githubusercontent.com/35491681/52574395-7d708980-2e1c-11e9-8cfd-b9f8be74ea16.png\">\r\n\n", "before_files": [{"content": "from adhocracy4.dashboard import components\nfrom adhocracy4.dashboard import ProjectDashboard\nfrom meinberlin.apps.projects import get_project_type\n\n\ndefault_app_config = 'meinberlin.apps.dashboard.apps.Config'\n\n\nclass TypedProjectDashboard(ProjectDashboard):\n def __init__(self, project):\n self.project_type = get_project_type(project)\n if self.project_type == 'bplan':\n project = project.externalproject.bplan\n elif self.project_type == 'external':\n project = project.externalproject\n elif self.project_type == 'container':\n project = project.projectcontainer\n super().__init__(project)\n\n def get_project_components(self):\n if self.project_type == 'bplan':\n return [components.projects.get('bplan'),\n components.projects.get('adminlog')]\n elif self.project_type == 'external':\n return [components.projects.get('external'),\n components.projects.get('adminlog')]\n elif self.project_type == 'container':\n return [components.projects.get('container-basic'),\n components.projects.get('container-information'),\n components.projects.get('topics'),\n components.projects.get('point'),\n components.projects.get('container-projects')]\n\n return [component for component in components.get_project_components()\n if component.is_effective(self.project)]\n\n def get_module_components(self):\n if self.project_type == 'bplan':\n return []\n elif self.project_type == 'external':\n return []\n elif self.project_type == 'container':\n return []\n\n return components.get_module_components()\n", "path": "meinberlin/apps/dashboard/__init__.py"}], "after_files": [{"content": "from adhocracy4.dashboard import components\nfrom adhocracy4.dashboard import ProjectDashboard\nfrom meinberlin.apps.projects import get_project_type\n\n\ndefault_app_config = 'meinberlin.apps.dashboard.apps.Config'\n\n\nclass TypedProjectDashboard(ProjectDashboard):\n def __init__(self, project):\n self.project_type = get_project_type(project)\n if self.project_type == 'bplan':\n project = project.externalproject.bplan\n elif self.project_type == 'external':\n project = project.externalproject\n elif self.project_type == 
'container':\n project = project.projectcontainer\n super().__init__(project)\n\n def get_project_components(self):\n if self.project_type == 'bplan':\n return [components.projects.get('bplan'),\n components.projects.get('plans'),\n components.projects.get('adminlog')]\n elif self.project_type == 'external':\n return [components.projects.get('external'),\n components.projects.get('topics'),\n components.projects.get('point'),\n components.projects.get('plans'),\n components.projects.get('adminlog')]\n elif self.project_type == 'container':\n return [components.projects.get('container-basic'),\n components.projects.get('container-information'),\n components.projects.get('topics'),\n components.projects.get('point'),\n components.projects.get('plans'),\n components.projects.get('container-projects')]\n\n return [component for component in components.get_project_components()\n if component.is_effective(self.project)]\n\n def get_module_components(self):\n if self.project_type == 'bplan':\n return []\n elif self.project_type == 'external':\n return []\n elif self.project_type == 'container':\n return []\n\n return components.get_module_components()\n", "path": "meinberlin/apps/dashboard/__init__.py"}]} | 793 | 230 |
gh_patches_debug_4607 | rasdani/github-patches | git_diff | CTFd__CTFd-1726 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect update alert in Admin panel
<!--
If this is a bug report please fill out the template below.
If this is a feature request please describe the behavior that you'd like to see.
-->
**Environment**:
- CTFd Version/Commit: 3.1.1
- Operating System: Ubuntu 20.4
- Web Browser and Version: Chrome 85
**What happened?**
The admin panel shows an alert: "A new CTFd version is available!", which links to "https://github.com/CTFd/CTFd/releases/tag/2.4.2". I encountered the issue with version 3.0.2. as well. After complete reinstall and upgrade to version 3.1.1 the problem persisted
**What did you expect to happen?**
I expected no alert, as my CTFd version is the newest, and certainly newer than 2.4.2.
**How to reproduce your issue**
Go to the admin pages.
**Any associated stack traces or error logs**
No
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/utils/updates/__init__.py`
Content:
```
1 import sys
2 import time
3 from distutils.version import StrictVersion
4 from platform import python_version
5
6 import requests
7 from flask import current_app as app
8
9 from CTFd.models import Challenges, Teams, Users, db
10 from CTFd.utils import get_app_config, get_config, set_config
11 from CTFd.utils.config import is_setup
12 from CTFd.utils.crypto import sha256
13
14
15 def update_check(force=False):
16 """
17 Makes a request to ctfd.io to check if there is a new version of CTFd available. The service is provided in return
18 for users opting in to anonymous usage data collection. Users can opt-out of update checks by specifying
19 UPDATE_CHECK = False in config.py
20
21 :param force:
22 :return:
23 """
24 # If UPDATE_CHECK is disabled don't check for updates at all.
25 if app.config.get("UPDATE_CHECK") is False:
26 return
27
28 # Don't do an update check if not setup
29 if is_setup() is False:
30 return
31
32 # Get when we should check for updates next.
33 next_update_check = get_config("next_update_check") or 0
34
35 # If we have passed our saved time or we are forcing we should check.
36 update = (next_update_check < time.time()) or force
37
38 if update:
39 try:
40 name = str(get_config("ctf_name")) or ""
41 params = {
42 "ctf_id": sha256(name),
43 "current": app.VERSION,
44 "python_version_raw": sys.hexversion,
45 "python_version": python_version(),
46 "db_driver": db.session.bind.dialect.name,
47 "challenge_count": Challenges.query.count(),
48 "user_mode": get_config("user_mode"),
49 "user_count": Users.query.count(),
50 "team_count": Teams.query.count(),
51 "theme": get_config("ctf_theme"),
52 "upload_provider": get_app_config("UPLOAD_PROVIDER"),
53 "channel": app.CHANNEL,
54 }
55 check = requests.get(
56 "https://versioning.ctfd.io/check", params=params, timeout=0.1
57 ).json()
58 except requests.exceptions.RequestException:
59 pass
60 except ValueError:
61 pass
62 else:
63 try:
64 latest = check["resource"]["tag"]
65 html_url = check["resource"]["html_url"]
66 if StrictVersion(latest) > StrictVersion(app.VERSION):
67 set_config("version_latest", html_url)
68 elif StrictVersion(latest) <= StrictVersion(app.VERSION):
69 set_config("version_latest", None)
70 next_update_check_time = check["resource"].get(
71 "next", int(time.time() + 43200)
72 )
73 set_config("next_update_check", next_update_check_time)
74 except KeyError:
75 set_config("version_latest", None)
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/CTFd/utils/updates/__init__.py b/CTFd/utils/updates/__init__.py
--- a/CTFd/utils/updates/__init__.py
+++ b/CTFd/utils/updates/__init__.py
@@ -53,7 +53,7 @@
"channel": app.CHANNEL,
}
check = requests.get(
- "https://versioning.ctfd.io/check", params=params, timeout=0.1
+ "https://versioning.ctfd.io/check", params=params, timeout=3
).json()
except requests.exceptions.RequestException:
pass
| {"golden_diff": "diff --git a/CTFd/utils/updates/__init__.py b/CTFd/utils/updates/__init__.py\n--- a/CTFd/utils/updates/__init__.py\n+++ b/CTFd/utils/updates/__init__.py\n@@ -53,7 +53,7 @@\n \"channel\": app.CHANNEL,\n }\n check = requests.get(\n- \"https://versioning.ctfd.io/check\", params=params, timeout=0.1\n+ \"https://versioning.ctfd.io/check\", params=params, timeout=3\n ).json()\n except requests.exceptions.RequestException:\n pass\n", "issue": "Incorrect update alert in Admin panel\n<!--\r\nIf this is a bug report please fill out the template below.\r\n\r\nIf this is a feature request please describe the behavior that you'd like to see.\r\n-->\r\n\r\n**Environment**:\r\n\r\n- CTFd Version/Commit: 3.1.1\r\n- Operating System: Ubuntu 20.4\r\n- Web Browser and Version: Chrome 85\r\n\r\n**What happened?**\r\nThe admin panel shows an alert: \"A new CTFd version is available!\", which links to \"https://github.com/CTFd/CTFd/releases/tag/2.4.2\". I encountered the issue with version 3.0.2. as well. After complete reinstall and upgrade to version 3.1.1 the problem persisted\r\n\r\n**What did you expect to happen?**\r\nI expected no alert, as my CTFd version is the newest, and certainly newer than 2.4.2.\r\n\r\n**How to reproduce your issue**\r\nGo to the admin pages.\r\n\r\n**Any associated stack traces or error logs**\r\nNo\n", "before_files": [{"content": "import sys\nimport time\nfrom distutils.version import StrictVersion\nfrom platform import python_version\n\nimport requests\nfrom flask import current_app as app\n\nfrom CTFd.models import Challenges, Teams, Users, db\nfrom CTFd.utils import get_app_config, get_config, set_config\nfrom CTFd.utils.config import is_setup\nfrom CTFd.utils.crypto import sha256\n\n\ndef update_check(force=False):\n \"\"\"\n Makes a request to ctfd.io to check if there is a new version of CTFd available. The service is provided in return\n for users opting in to anonymous usage data collection. 
Users can opt-out of update checks by specifying\n UPDATE_CHECK = False in config.py\n\n :param force:\n :return:\n \"\"\"\n # If UPDATE_CHECK is disabled don't check for updates at all.\n if app.config.get(\"UPDATE_CHECK\") is False:\n return\n\n # Don't do an update check if not setup\n if is_setup() is False:\n return\n\n # Get when we should check for updates next.\n next_update_check = get_config(\"next_update_check\") or 0\n\n # If we have passed our saved time or we are forcing we should check.\n update = (next_update_check < time.time()) or force\n\n if update:\n try:\n name = str(get_config(\"ctf_name\")) or \"\"\n params = {\n \"ctf_id\": sha256(name),\n \"current\": app.VERSION,\n \"python_version_raw\": sys.hexversion,\n \"python_version\": python_version(),\n \"db_driver\": db.session.bind.dialect.name,\n \"challenge_count\": Challenges.query.count(),\n \"user_mode\": get_config(\"user_mode\"),\n \"user_count\": Users.query.count(),\n \"team_count\": Teams.query.count(),\n \"theme\": get_config(\"ctf_theme\"),\n \"upload_provider\": get_app_config(\"UPLOAD_PROVIDER\"),\n \"channel\": app.CHANNEL,\n }\n check = requests.get(\n \"https://versioning.ctfd.io/check\", params=params, timeout=0.1\n ).json()\n except requests.exceptions.RequestException:\n pass\n except ValueError:\n pass\n else:\n try:\n latest = check[\"resource\"][\"tag\"]\n html_url = check[\"resource\"][\"html_url\"]\n if StrictVersion(latest) > StrictVersion(app.VERSION):\n set_config(\"version_latest\", html_url)\n elif StrictVersion(latest) <= StrictVersion(app.VERSION):\n set_config(\"version_latest\", None)\n next_update_check_time = check[\"resource\"].get(\n \"next\", int(time.time() + 43200)\n )\n set_config(\"next_update_check\", next_update_check_time)\n except KeyError:\n set_config(\"version_latest\", None)\n", "path": "CTFd/utils/updates/__init__.py"}], "after_files": [{"content": "import sys\nimport time\nfrom distutils.version import StrictVersion\nfrom platform import python_version\n\nimport requests\nfrom flask import current_app as app\n\nfrom CTFd.models import Challenges, Teams, Users, db\nfrom CTFd.utils import get_app_config, get_config, set_config\nfrom CTFd.utils.config import is_setup\nfrom CTFd.utils.crypto import sha256\n\n\ndef update_check(force=False):\n \"\"\"\n Makes a request to ctfd.io to check if there is a new version of CTFd available. The service is provided in return\n for users opting in to anonymous usage data collection. 
Users can opt-out of update checks by specifying\n UPDATE_CHECK = False in config.py\n\n :param force:\n :return:\n \"\"\"\n # If UPDATE_CHECK is disabled don't check for updates at all.\n if app.config.get(\"UPDATE_CHECK\") is False:\n return\n\n # Don't do an update check if not setup\n if is_setup() is False:\n return\n\n # Get when we should check for updates next.\n next_update_check = get_config(\"next_update_check\") or 0\n\n # If we have passed our saved time or we are forcing we should check.\n update = (next_update_check < time.time()) or force\n\n if update:\n try:\n name = str(get_config(\"ctf_name\")) or \"\"\n params = {\n \"ctf_id\": sha256(name),\n \"current\": app.VERSION,\n \"python_version_raw\": sys.hexversion,\n \"python_version\": python_version(),\n \"db_driver\": db.session.bind.dialect.name,\n \"challenge_count\": Challenges.query.count(),\n \"user_mode\": get_config(\"user_mode\"),\n \"user_count\": Users.query.count(),\n \"team_count\": Teams.query.count(),\n \"theme\": get_config(\"ctf_theme\"),\n \"upload_provider\": get_app_config(\"UPLOAD_PROVIDER\"),\n \"channel\": app.CHANNEL,\n }\n check = requests.get(\n \"https://versioning.ctfd.io/check\", params=params, timeout=3\n ).json()\n except requests.exceptions.RequestException:\n pass\n except ValueError:\n pass\n else:\n try:\n latest = check[\"resource\"][\"tag\"]\n html_url = check[\"resource\"][\"html_url\"]\n if StrictVersion(latest) > StrictVersion(app.VERSION):\n set_config(\"version_latest\", html_url)\n elif StrictVersion(latest) <= StrictVersion(app.VERSION):\n set_config(\"version_latest\", None)\n next_update_check_time = check[\"resource\"].get(\n \"next\", int(time.time() + 43200)\n )\n set_config(\"next_update_check\", next_update_check_time)\n except KeyError:\n set_config(\"version_latest\", None)\n", "path": "CTFd/utils/updates/__init__.py"}]} | 1,224 | 134 |
gh_patches_debug_44031 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-26 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support forward references
See: https://www.python.org/dev/peps/pep-0563/#forward-references
Right now the following code would break:
```python
from __future__ import annotations
import strawberry
import typing
@strawberry.type
class User:
name: str
friend: typing.Optional[User]
```
This is the error we get:
```
File "/Users/patrickarminio/Documents/personal/temp/stra/app.py", line 7, in <module>
from schema import schema
File "/Users/patrickarminio/Documents/personal/temp/stra/schema.py", line 10, in <module>
@strawberry.type
File "/Users/patrickarminio/.virtualenvs/stra-so-aNvo2/lib/python3.7/site-packages/strawberry/type.py", line 60, in type
return wrap()
File "/Users/patrickarminio/.virtualenvs/stra-so-aNvo2/lib/python3.7/site-packages/strawberry/type.py", line 55, in wrap
cls._fields = _get_fields(cls)
File "/Users/patrickarminio/.virtualenvs/stra-so-aNvo2/lib/python3.7/site-packages/strawberry/type.py", line 27, in _get_fields
cls_annotations = typing.get_type_hints(cls)
File "/Users/patrickarminio/.pyenv/versions/3.7.0/lib/python3.7/typing.py", line 973, in get_type_hints
value = _eval_type(value, base_globals, localns)
File "/Users/patrickarminio/.pyenv/versions/3.7.0/lib/python3.7/typing.py", line 260, in _eval_type
return t._evaluate(globalns, localns)
File "/Users/patrickarminio/.pyenv/versions/3.7.0/lib/python3.7/typing.py", line 464, in _evaluate
eval(self.__forward_code__, globalns, localns),
File "<string>", line 1, in <module>
NameError: name 'User' is not defined
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/type.py`
Content:
```
1 import typing
2
3 from dataclasses import dataclass
4 from graphql import GraphQLField, GraphQLObjectType
5 from graphql.utilities.schema_printer import print_type
6
7 from .constants import IS_STRAWBERRY_FIELD
8 from .type_converter import get_graphql_type_for_annotation
9
10
11 def _get_resolver(cls, field_name):
12 def _resolver(obj, info):
13 # TODO: can we make this nicer?
14 # does it work in all the cases?
15
16 field_resolver = getattr(cls(**(obj.__dict__ if obj else {})), field_name)
17
18 if getattr(field_resolver, IS_STRAWBERRY_FIELD, False):
19 return field_resolver(obj, info)
20
21 return field_resolver
22
23 return _resolver
24
25
26 def _get_fields(cls):
27 cls_annotations = typing.get_type_hints(cls)
28
29 fields = {
30 key: GraphQLField(
31 get_graphql_type_for_annotation(value, field_name=key),
32 resolve=_get_resolver(cls, key),
33 )
34 for key, value in cls_annotations.items()
35 }
36
37 fields.update(
38 {
39 key: value.field
40 for key, value in cls.__dict__.items()
41 if getattr(value, IS_STRAWBERRY_FIELD, False)
42 }
43 )
44
45 return fields
46
47
48 def type(cls):
49 def wrap():
50 def repr_(self):
51 return print_type(self.field)
52
53 setattr(cls, "__repr__", repr_)
54
55 cls._fields = _get_fields(cls)
56 cls.field = GraphQLObjectType(name=cls.__name__, fields=cls._fields)
57
58 return dataclass(cls, repr=False)
59
60 return wrap()
61
```
Path: `strawberry/type_converter.py`
Content:
```
1 from graphql import (
2 GraphQLBoolean,
3 GraphQLFloat,
4 GraphQLID,
5 GraphQLInt,
6 GraphQLList,
7 GraphQLNonNull,
8 GraphQLString,
9 GraphQLUnionType,
10 )
11
12 from .scalars import ID
13
14
15 TYPE_MAP = {
16 str: GraphQLString,
17 int: GraphQLInt,
18 float: GraphQLFloat,
19 bool: GraphQLBoolean,
20 ID: GraphQLID,
21 }
22
23
24 # TODO: make so that we don't pass force optional
25 # we use that when trying to get the type for a
26 # option field (which can either be a scalar or an object type)
27 def get_graphql_type_for_annotation(
28 annotation, field_name: str, force_optional: bool = False
29 ):
30 # TODO: nice error
31
32 is_optional = False
33
34 # TODO: this might lead to issues with types that have a field value
35 if hasattr(annotation, "field"):
36 graphql_type = annotation.field
37 else:
38 annotation_name = getattr(annotation, "_name", None)
39
40 if annotation_name == "List":
41 list_of_type = get_graphql_type_for_annotation(
42 annotation.__args__[0], field_name
43 )
44
45 return GraphQLList(list_of_type)
46
47 # for some reason _name is None for Optional and Union types, so we check if we
48 # have __args__ populated, there might be some edge cases where __args__ is
49 # populated but the type is not an Union, like in the above case with Lists
50 if hasattr(annotation, "__args__"):
51 types = annotation.__args__
52 non_none_types = [x for x in types if x != type(None)] # noqa:E721
53
54 # optionals are represented as Union[type, None]
55 if len(non_none_types) == 1:
56 is_optional = True
57 graphql_type = get_graphql_type_for_annotation(
58 non_none_types[0], field_name, force_optional=True
59 )
60 else:
61 is_optional = type(None) in types
62
63 # TODO: union types don't work with scalar types
64 # so we want to return a nice error
65 # also we want to make sure we have been passed
66 # strawberry types
67 graphql_type = GraphQLUnionType(
68 field_name, [type.field for type in types]
69 )
70 else:
71 graphql_type = TYPE_MAP.get(annotation)
72
73 if not graphql_type:
74 raise ValueError(f"Unable to get GraphQL type for {annotation}")
75
76 if is_optional or force_optional:
77 return graphql_type
78
79 return GraphQLNonNull(graphql_type)
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/strawberry/type.py b/strawberry/type.py
--- a/strawberry/type.py
+++ b/strawberry/type.py
@@ -5,7 +5,7 @@
from graphql.utilities.schema_printer import print_type
from .constants import IS_STRAWBERRY_FIELD
-from .type_converter import get_graphql_type_for_annotation
+from .type_converter import REGISTRY, get_graphql_type_for_annotation
def _get_resolver(cls, field_name):
@@ -23,37 +23,39 @@
return _resolver
-def _get_fields(cls):
- cls_annotations = typing.get_type_hints(cls)
-
- fields = {
- key: GraphQLField(
- get_graphql_type_for_annotation(value, field_name=key),
- resolve=_get_resolver(cls, key),
- )
- for key, value in cls_annotations.items()
- }
-
- fields.update(
- {
- key: value.field
- for key, value in cls.__dict__.items()
- if getattr(value, IS_STRAWBERRY_FIELD, False)
- }
- )
-
- return fields
-
-
def type(cls):
def wrap():
+ name = cls.__name__
+ REGISTRY[name] = cls
+
def repr_(self):
return print_type(self.field)
setattr(cls, "__repr__", repr_)
- cls._fields = _get_fields(cls)
- cls.field = GraphQLObjectType(name=cls.__name__, fields=cls._fields)
+ annotations = typing.get_type_hints(cls, None, REGISTRY)
+
+ def _get_fields():
+
+ fields = {
+ key: GraphQLField(
+ get_graphql_type_for_annotation(value, key),
+ resolve=_get_resolver(cls, key),
+ )
+ for key, value in annotations.items()
+ }
+
+ fields.update(
+ {
+ key: value.field
+ for key, value in cls.__dict__.items()
+ if getattr(value, IS_STRAWBERRY_FIELD, False)
+ }
+ )
+
+ return fields
+
+ cls.field = GraphQLObjectType(name, lambda: _get_fields())
return dataclass(cls, repr=False)
diff --git a/strawberry/type_converter.py b/strawberry/type_converter.py
--- a/strawberry/type_converter.py
+++ b/strawberry/type_converter.py
@@ -12,7 +12,7 @@
from .scalars import ID
-TYPE_MAP = {
+REGISTRY = {
str: GraphQLString,
int: GraphQLInt,
float: GraphQLFloat,
@@ -27,11 +27,9 @@
def get_graphql_type_for_annotation(
annotation, field_name: str, force_optional: bool = False
):
- # TODO: nice error
-
- is_optional = False
-
# TODO: this might lead to issues with types that have a field value
+ is_optional = force_optional
+
if hasattr(annotation, "field"):
graphql_type = annotation.field
else:
@@ -49,7 +47,7 @@
# populated but the type is not an Union, like in the above case with Lists
if hasattr(annotation, "__args__"):
types = annotation.__args__
- non_none_types = [x for x in types if x != type(None)] # noqa:E721
+ non_none_types = [x for x in types if x != None.__class__] # noqa:E721
# optionals are represented as Union[type, None]
if len(non_none_types) == 1:
@@ -58,7 +56,7 @@
non_none_types[0], field_name, force_optional=True
)
else:
- is_optional = type(None) in types
+ is_optional = None.__class__ in types
# TODO: union types don't work with scalar types
# so we want to return a nice error
@@ -68,12 +66,12 @@
field_name, [type.field for type in types]
)
else:
- graphql_type = TYPE_MAP.get(annotation)
+ graphql_type = REGISTRY.get(annotation)
if not graphql_type:
raise ValueError(f"Unable to get GraphQL type for {annotation}")
- if is_optional or force_optional:
+ if is_optional:
return graphql_type
return GraphQLNonNull(graphql_type)
| {"golden_diff": "diff --git a/strawberry/type.py b/strawberry/type.py\n--- a/strawberry/type.py\n+++ b/strawberry/type.py\n@@ -5,7 +5,7 @@\n from graphql.utilities.schema_printer import print_type\n \n from .constants import IS_STRAWBERRY_FIELD\n-from .type_converter import get_graphql_type_for_annotation\n+from .type_converter import REGISTRY, get_graphql_type_for_annotation\n \n \n def _get_resolver(cls, field_name):\n@@ -23,37 +23,39 @@\n return _resolver\n \n \n-def _get_fields(cls):\n- cls_annotations = typing.get_type_hints(cls)\n-\n- fields = {\n- key: GraphQLField(\n- get_graphql_type_for_annotation(value, field_name=key),\n- resolve=_get_resolver(cls, key),\n- )\n- for key, value in cls_annotations.items()\n- }\n-\n- fields.update(\n- {\n- key: value.field\n- for key, value in cls.__dict__.items()\n- if getattr(value, IS_STRAWBERRY_FIELD, False)\n- }\n- )\n-\n- return fields\n-\n-\n def type(cls):\n def wrap():\n+ name = cls.__name__\n+ REGISTRY[name] = cls\n+\n def repr_(self):\n return print_type(self.field)\n \n setattr(cls, \"__repr__\", repr_)\n \n- cls._fields = _get_fields(cls)\n- cls.field = GraphQLObjectType(name=cls.__name__, fields=cls._fields)\n+ annotations = typing.get_type_hints(cls, None, REGISTRY)\n+\n+ def _get_fields():\n+\n+ fields = {\n+ key: GraphQLField(\n+ get_graphql_type_for_annotation(value, key),\n+ resolve=_get_resolver(cls, key),\n+ )\n+ for key, value in annotations.items()\n+ }\n+\n+ fields.update(\n+ {\n+ key: value.field\n+ for key, value in cls.__dict__.items()\n+ if getattr(value, IS_STRAWBERRY_FIELD, False)\n+ }\n+ )\n+\n+ return fields\n+\n+ cls.field = GraphQLObjectType(name, lambda: _get_fields())\n \n return dataclass(cls, repr=False)\n \ndiff --git a/strawberry/type_converter.py b/strawberry/type_converter.py\n--- a/strawberry/type_converter.py\n+++ b/strawberry/type_converter.py\n@@ -12,7 +12,7 @@\n from .scalars import ID\n \n \n-TYPE_MAP = {\n+REGISTRY = {\n str: GraphQLString,\n int: GraphQLInt,\n float: GraphQLFloat,\n@@ -27,11 +27,9 @@\n def get_graphql_type_for_annotation(\n annotation, field_name: str, force_optional: bool = False\n ):\n- # TODO: nice error\n-\n- is_optional = False\n-\n # TODO: this might lead to issues with types that have a field value\n+ is_optional = force_optional\n+\n if hasattr(annotation, \"field\"):\n graphql_type = annotation.field\n else:\n@@ -49,7 +47,7 @@\n # populated but the type is not an Union, like in the above case with Lists\n if hasattr(annotation, \"__args__\"):\n types = annotation.__args__\n- non_none_types = [x for x in types if x != type(None)] # noqa:E721\n+ non_none_types = [x for x in types if x != None.__class__] # noqa:E721\n \n # optionals are represented as Union[type, None]\n if len(non_none_types) == 1:\n@@ -58,7 +56,7 @@\n non_none_types[0], field_name, force_optional=True\n )\n else:\n- is_optional = type(None) in types\n+ is_optional = None.__class__ in types\n \n # TODO: union types don't work with scalar types\n # so we want to return a nice error\n@@ -68,12 +66,12 @@\n field_name, [type.field for type in types]\n )\n else:\n- graphql_type = TYPE_MAP.get(annotation)\n+ graphql_type = REGISTRY.get(annotation)\n \n if not graphql_type:\n raise ValueError(f\"Unable to get GraphQL type for {annotation}\")\n \n- if is_optional or force_optional:\n+ if is_optional:\n return graphql_type\n \n return GraphQLNonNull(graphql_type)\n", "issue": "Support forward references\nSee: https://www.python.org/dev/peps/pep-0563/#forward-references\r\n\r\nRight now the following code would 
break:\r\n\r\n```python\r\nfrom __future__ import annotations\r\n\r\nimport strawberry\r\nimport typing\r\n\r\[email protected]\r\nclass User:\r\n name: str\r\n friend: typing.Optional[User]\r\n```\r\n\r\nThis is the error we get:\r\n\r\n```\r\n File \"/Users/patrickarminio/Documents/personal/temp/stra/app.py\", line 7, in <module>\r\n from schema import schema\r\n File \"/Users/patrickarminio/Documents/personal/temp/stra/schema.py\", line 10, in <module>\r\n @strawberry.type\r\n File \"/Users/patrickarminio/.virtualenvs/stra-so-aNvo2/lib/python3.7/site-packages/strawberry/type.py\", line 60, in type\r\n return wrap()\r\n File \"/Users/patrickarminio/.virtualenvs/stra-so-aNvo2/lib/python3.7/site-packages/strawberry/type.py\", line 55, in wrap\r\n cls._fields = _get_fields(cls)\r\n File \"/Users/patrickarminio/.virtualenvs/stra-so-aNvo2/lib/python3.7/site-packages/strawberry/type.py\", line 27, in _get_fields\r\n cls_annotations = typing.get_type_hints(cls)\r\n File \"/Users/patrickarminio/.pyenv/versions/3.7.0/lib/python3.7/typing.py\", line 973, in get_type_hints\r\n value = _eval_type(value, base_globals, localns)\r\n File \"/Users/patrickarminio/.pyenv/versions/3.7.0/lib/python3.7/typing.py\", line 260, in _eval_type\r\n return t._evaluate(globalns, localns)\r\n File \"/Users/patrickarminio/.pyenv/versions/3.7.0/lib/python3.7/typing.py\", line 464, in _evaluate\r\n eval(self.__forward_code__, globalns, localns),\r\n File \"<string>\", line 1, in <module>\r\nNameError: name 'User' is not defined\r\n```\n", "before_files": [{"content": "import typing\n\nfrom dataclasses import dataclass\nfrom graphql import GraphQLField, GraphQLObjectType\nfrom graphql.utilities.schema_printer import print_type\n\nfrom .constants import IS_STRAWBERRY_FIELD\nfrom .type_converter import get_graphql_type_for_annotation\n\n\ndef _get_resolver(cls, field_name):\n def _resolver(obj, info):\n # TODO: can we make this nicer?\n # does it work in all the cases?\n\n field_resolver = getattr(cls(**(obj.__dict__ if obj else {})), field_name)\n\n if getattr(field_resolver, IS_STRAWBERRY_FIELD, False):\n return field_resolver(obj, info)\n\n return field_resolver\n\n return _resolver\n\n\ndef _get_fields(cls):\n cls_annotations = typing.get_type_hints(cls)\n\n fields = {\n key: GraphQLField(\n get_graphql_type_for_annotation(value, field_name=key),\n resolve=_get_resolver(cls, key),\n )\n for key, value in cls_annotations.items()\n }\n\n fields.update(\n {\n key: value.field\n for key, value in cls.__dict__.items()\n if getattr(value, IS_STRAWBERRY_FIELD, False)\n }\n )\n\n return fields\n\n\ndef type(cls):\n def wrap():\n def repr_(self):\n return print_type(self.field)\n\n setattr(cls, \"__repr__\", repr_)\n\n cls._fields = _get_fields(cls)\n cls.field = GraphQLObjectType(name=cls.__name__, fields=cls._fields)\n\n return dataclass(cls, repr=False)\n\n return wrap()\n", "path": "strawberry/type.py"}, {"content": "from graphql import (\n GraphQLBoolean,\n GraphQLFloat,\n GraphQLID,\n GraphQLInt,\n GraphQLList,\n GraphQLNonNull,\n GraphQLString,\n GraphQLUnionType,\n)\n\nfrom .scalars import ID\n\n\nTYPE_MAP = {\n str: GraphQLString,\n int: GraphQLInt,\n float: GraphQLFloat,\n bool: GraphQLBoolean,\n ID: GraphQLID,\n}\n\n\n# TODO: make so that we don't pass force optional\n# we use that when trying to get the type for a\n# option field (which can either be a scalar or an object type)\ndef get_graphql_type_for_annotation(\n annotation, field_name: str, force_optional: bool = False\n):\n # TODO: nice error\n\n is_optional = 
False\n\n # TODO: this might lead to issues with types that have a field value\n if hasattr(annotation, \"field\"):\n graphql_type = annotation.field\n else:\n annotation_name = getattr(annotation, \"_name\", None)\n\n if annotation_name == \"List\":\n list_of_type = get_graphql_type_for_annotation(\n annotation.__args__[0], field_name\n )\n\n return GraphQLList(list_of_type)\n\n # for some reason _name is None for Optional and Union types, so we check if we\n # have __args__ populated, there might be some edge cases where __args__ is\n # populated but the type is not an Union, like in the above case with Lists\n if hasattr(annotation, \"__args__\"):\n types = annotation.__args__\n non_none_types = [x for x in types if x != type(None)] # noqa:E721\n\n # optionals are represented as Union[type, None]\n if len(non_none_types) == 1:\n is_optional = True\n graphql_type = get_graphql_type_for_annotation(\n non_none_types[0], field_name, force_optional=True\n )\n else:\n is_optional = type(None) in types\n\n # TODO: union types don't work with scalar types\n # so we want to return a nice error\n # also we want to make sure we have been passed\n # strawberry types\n graphql_type = GraphQLUnionType(\n field_name, [type.field for type in types]\n )\n else:\n graphql_type = TYPE_MAP.get(annotation)\n\n if not graphql_type:\n raise ValueError(f\"Unable to get GraphQL type for {annotation}\")\n\n if is_optional or force_optional:\n return graphql_type\n\n return GraphQLNonNull(graphql_type)\n", "path": "strawberry/type_converter.py"}], "after_files": [{"content": "import typing\n\nfrom dataclasses import dataclass\nfrom graphql import GraphQLField, GraphQLObjectType\nfrom graphql.utilities.schema_printer import print_type\n\nfrom .constants import IS_STRAWBERRY_FIELD\nfrom .type_converter import REGISTRY, get_graphql_type_for_annotation\n\n\ndef _get_resolver(cls, field_name):\n def _resolver(obj, info):\n # TODO: can we make this nicer?\n # does it work in all the cases?\n\n field_resolver = getattr(cls(**(obj.__dict__ if obj else {})), field_name)\n\n if getattr(field_resolver, IS_STRAWBERRY_FIELD, False):\n return field_resolver(obj, info)\n\n return field_resolver\n\n return _resolver\n\n\ndef type(cls):\n def wrap():\n name = cls.__name__\n REGISTRY[name] = cls\n\n def repr_(self):\n return print_type(self.field)\n\n setattr(cls, \"__repr__\", repr_)\n\n annotations = typing.get_type_hints(cls, None, REGISTRY)\n\n def _get_fields():\n\n fields = {\n key: GraphQLField(\n get_graphql_type_for_annotation(value, key),\n resolve=_get_resolver(cls, key),\n )\n for key, value in annotations.items()\n }\n\n fields.update(\n {\n key: value.field\n for key, value in cls.__dict__.items()\n if getattr(value, IS_STRAWBERRY_FIELD, False)\n }\n )\n\n return fields\n\n cls.field = GraphQLObjectType(name, lambda: _get_fields())\n\n return dataclass(cls, repr=False)\n\n return wrap()\n", "path": "strawberry/type.py"}, {"content": "from graphql import (\n GraphQLBoolean,\n GraphQLFloat,\n GraphQLID,\n GraphQLInt,\n GraphQLList,\n GraphQLNonNull,\n GraphQLString,\n GraphQLUnionType,\n)\n\nfrom .scalars import ID\n\n\nREGISTRY = {\n str: GraphQLString,\n int: GraphQLInt,\n float: GraphQLFloat,\n bool: GraphQLBoolean,\n ID: GraphQLID,\n}\n\n\n# TODO: make so that we don't pass force optional\n# we use that when trying to get the type for a\n# option field (which can either be a scalar or an object type)\ndef get_graphql_type_for_annotation(\n annotation, field_name: str, force_optional: bool = False\n):\n # TODO: this 
might lead to issues with types that have a field value\n is_optional = force_optional\n\n if hasattr(annotation, \"field\"):\n graphql_type = annotation.field\n else:\n annotation_name = getattr(annotation, \"_name\", None)\n\n if annotation_name == \"List\":\n list_of_type = get_graphql_type_for_annotation(\n annotation.__args__[0], field_name\n )\n\n return GraphQLList(list_of_type)\n\n # for some reason _name is None for Optional and Union types, so we check if we\n # have __args__ populated, there might be some edge cases where __args__ is\n # populated but the type is not an Union, like in the above case with Lists\n if hasattr(annotation, \"__args__\"):\n types = annotation.__args__\n non_none_types = [x for x in types if x != None.__class__] # noqa:E721\n\n # optionals are represented as Union[type, None]\n if len(non_none_types) == 1:\n is_optional = True\n graphql_type = get_graphql_type_for_annotation(\n non_none_types[0], field_name, force_optional=True\n )\n else:\n is_optional = None.__class__ in types\n\n # TODO: union types don't work with scalar types\n # so we want to return a nice error\n # also we want to make sure we have been passed\n # strawberry types\n graphql_type = GraphQLUnionType(\n field_name, [type.field for type in types]\n )\n else:\n graphql_type = REGISTRY.get(annotation)\n\n if not graphql_type:\n raise ValueError(f\"Unable to get GraphQL type for {annotation}\")\n\n if is_optional:\n return graphql_type\n\n return GraphQLNonNull(graphql_type)\n", "path": "strawberry/type_converter.py"}]} | 1,924 | 982 |
gh_patches_debug_1657 | rasdani/github-patches | git_diff | kubeflow__pipelines-5054 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TypeError occurs in gcp/automl/create_dataset_for_tables component
### What steps did you take:
[gcp/automl/create_dataset_for_tables component](https://github.com/kubeflow/pipelines/tree/master/components/gcp/automl/create_dataset_for_tables)'s `create_time` output is declared as a string:
https://github.com/kubeflow/pipelines/blob/ecb14f40bb819c0678589b6458892ece5369fa71/components/gcp/automl/create_dataset_for_tables/component.yaml#L15
however, `google.protobuf.timestamp_pb2.Timestamp` is returned in actual fact:
https://github.com/kubeflow/pipelines/blob/ecb14f40bb819c0678589b6458892ece5369fa71/components/gcp/automl/create_dataset_for_tables/component.py#L54
FYI: The `dataset` object is an instance of `google.cloud.automl_v1beta1.types.Dataset` class and its [document](https://googleapis.dev/python/automl/0.4.0/gapic/v1beta1/types.html#google.cloud.automl_v1beta1.types.Dataset.create_time) says:
> **create_time**
> Output only. Timestamp when this dataset was created.
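For illustration, here is a minimal standalone sketch (using only protobuf's standard `Timestamp` helpers, independent of the component) showing that the value is a message object rather than a string, and two ways it can be rendered as one; the printed values are examples only:

```python
from google.protobuf.timestamp_pb2 import Timestamp

ts = Timestamp()
ts.GetCurrentTime()       # populate the message with the current time
print(type(ts))           # <class 'google.protobuf.timestamp_pb2.Timestamp'>, not str
print(str(ts))            # protobuf text format, e.g. "seconds: 1612137600"
print(ts.ToJsonString())  # RFC 3339 string, e.g. "2021-02-01T00:00:00Z"
```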
### What happened:
`TypeError` occurs

### What did you expect to happen:
Work.
### Environment:
<!-- Please fill in those that seem relevant. -->
How did you deploy Kubeflow Pipelines (KFP)? AI Platform Pipelines
<!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). -->
KFP version: 1.0.4 <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. -->
KFP SDK version: 1.3.0 <!-- Please attach the output of this shell command: $pip list | grep kfp -->
### Anything else you would like to add:
/kind bug
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `components/gcp/automl/create_dataset_for_tables/component.py`
Content:
```
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import NamedTuple
16
17
18 def automl_create_dataset_for_tables(
19 gcp_project_id: str,
20 gcp_region: str,
21 display_name: str,
22 description: str = None,
23 tables_dataset_metadata: dict = {},
24 retry=None, #=google.api_core.gapic_v1.method.DEFAULT,
25 timeout: float = None, #=google.api_core.gapic_v1.method.DEFAULT,
26 metadata: dict = None,
27 ) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str), ('dataset_url', 'URI')]):
28 '''automl_create_dataset_for_tables creates an empty Dataset for AutoML tables
29 '''
30 import google
31 from google.cloud import automl
32 client = automl.AutoMlClient()
33
34 location_path = client.location_path(gcp_project_id, gcp_region)
35 dataset_dict = {
36 'display_name': display_name,
37 'description': description,
38 'tables_dataset_metadata': tables_dataset_metadata,
39 }
40 dataset = client.create_dataset(
41 location_path,
42 dataset_dict,
43 retry or google.api_core.gapic_v1.method.DEFAULT,
44 timeout or google.api_core.gapic_v1.method.DEFAULT,
45 metadata,
46 )
47 print(dataset)
48 dataset_id = dataset.name.rsplit('/', 1)[-1]
49 dataset_url = 'https://console.cloud.google.com/automl-tables/locations/{region}/datasets/{dataset_id}/schemav2?project={project_id}'.format(
50 project_id=gcp_project_id,
51 region=gcp_region,
52 dataset_id=dataset_id,
53 )
54 return (dataset.name, dataset.create_time, dataset_id, dataset_url)
55
56
57 if __name__ == '__main__':
58 import kfp
59 kfp.components.func_to_container_op(
60 automl_create_dataset_for_tables,
61 output_component_file='component.yaml',
62 base_image='python:3.7',
63 packages_to_install=['google-cloud-automl==0.4.0']
64 )
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/components/gcp/automl/create_dataset_for_tables/component.py b/components/gcp/automl/create_dataset_for_tables/component.py
--- a/components/gcp/automl/create_dataset_for_tables/component.py
+++ b/components/gcp/automl/create_dataset_for_tables/component.py
@@ -51,7 +51,7 @@
region=gcp_region,
dataset_id=dataset_id,
)
- return (dataset.name, dataset.create_time, dataset_id, dataset_url)
+ return (dataset.name, str(dataset.create_time), dataset_id, dataset_url)
if __name__ == '__main__':
| {"golden_diff": "diff --git a/components/gcp/automl/create_dataset_for_tables/component.py b/components/gcp/automl/create_dataset_for_tables/component.py\n--- a/components/gcp/automl/create_dataset_for_tables/component.py\n+++ b/components/gcp/automl/create_dataset_for_tables/component.py\n@@ -51,7 +51,7 @@\n region=gcp_region,\n dataset_id=dataset_id,\n )\n- return (dataset.name, dataset.create_time, dataset_id, dataset_url)\n+ return (dataset.name, str(dataset.create_time), dataset_id, dataset_url)\n \n \n if __name__ == '__main__':\n", "issue": "TypeErro occurs in gcp/automl/create_dataset_for_tables component\n### What steps did you take:\r\n[A clear and concise description of what the bug is.]\r\n\r\n[gcp/automl/create_dataset_for_tables component](https://github.com/kubeflow/pipelines/tree/master/components/gcp/automl/create_dataset_for_tables)'s `create_time` output is declared as a string:\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/ecb14f40bb819c0678589b6458892ece5369fa71/components/gcp/automl/create_dataset_for_tables/component.yaml#L15\r\n\r\nhowever, `google.protobuf.timestamp_pb2.Timestamp` is returned in actual fact:\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/ecb14f40bb819c0678589b6458892ece5369fa71/components/gcp/automl/create_dataset_for_tables/component.py#L54\r\n\r\nFYI: The `dataset` object is an instance of `google.cloud.automl_v1beta1.types.Dataset` class and its [document](https://googleapis.dev/python/automl/0.4.0/gapic/v1beta1/types.html#google.cloud.automl_v1beta1.types.Dataset.create_time) says:\r\n\r\n> **create_time**\r\n> Output only. Timestamp when this dataset was created.\r\n\r\n### What happened:\r\n\r\n`TypeError` occurs\r\n\r\n\r\n\r\n### What did you expect to happen:\r\n\r\nWork.\r\n\r\n### Environment:\r\n<!-- Please fill in those that seem relevant. -->\r\n\r\nHow did you deploy Kubeflow Pipelines (KFP)? AI Platform Pipelines\r\n<!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). -->\r\n\r\nKFP version: 1.0.4 <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. 
-->\r\n\r\nKFP SDK version: 1.3.0 <!-- Please attach the output of this shell command: $pip list | grep kfp -->\r\n\r\n\r\n### Anything else you would like to add:\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\r\n/kind bug\r\n<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->\r\n<!--\r\n// /area frontend\r\n// /area backend\r\n// /area sdk\r\n// /area testing\r\n// /area engprod\r\n-->\r\n\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import NamedTuple\n\n\ndef automl_create_dataset_for_tables(\n gcp_project_id: str,\n gcp_region: str,\n display_name: str,\n description: str = None,\n tables_dataset_metadata: dict = {},\n retry=None, #=google.api_core.gapic_v1.method.DEFAULT,\n timeout: float = None, #=google.api_core.gapic_v1.method.DEFAULT,\n metadata: dict = None,\n) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str), ('dataset_url', 'URI')]):\n '''automl_create_dataset_for_tables creates an empty Dataset for AutoML tables\n '''\n import google\n from google.cloud import automl\n client = automl.AutoMlClient()\n\n location_path = client.location_path(gcp_project_id, gcp_region)\n dataset_dict = {\n 'display_name': display_name,\n 'description': description,\n 'tables_dataset_metadata': tables_dataset_metadata,\n }\n dataset = client.create_dataset(\n location_path,\n dataset_dict,\n retry or google.api_core.gapic_v1.method.DEFAULT,\n timeout or google.api_core.gapic_v1.method.DEFAULT,\n metadata,\n )\n print(dataset)\n dataset_id = dataset.name.rsplit('/', 1)[-1]\n dataset_url = 'https://console.cloud.google.com/automl-tables/locations/{region}/datasets/{dataset_id}/schemav2?project={project_id}'.format(\n project_id=gcp_project_id,\n region=gcp_region,\n dataset_id=dataset_id,\n )\n return (dataset.name, dataset.create_time, dataset_id, dataset_url)\n\n\nif __name__ == '__main__':\n import kfp\n kfp.components.func_to_container_op(\n automl_create_dataset_for_tables,\n output_component_file='component.yaml',\n base_image='python:3.7',\n packages_to_install=['google-cloud-automl==0.4.0']\n )\n", "path": "components/gcp/automl/create_dataset_for_tables/component.py"}], "after_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import NamedTuple\n\n\ndef automl_create_dataset_for_tables(\n gcp_project_id: str,\n gcp_region: 
str,\n display_name: str,\n description: str = None,\n tables_dataset_metadata: dict = {},\n retry=None, #=google.api_core.gapic_v1.method.DEFAULT,\n timeout: float = None, #=google.api_core.gapic_v1.method.DEFAULT,\n metadata: dict = None,\n) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str), ('dataset_url', 'URI')]):\n '''automl_create_dataset_for_tables creates an empty Dataset for AutoML tables\n '''\n import google\n from google.cloud import automl\n client = automl.AutoMlClient()\n\n location_path = client.location_path(gcp_project_id, gcp_region)\n dataset_dict = {\n 'display_name': display_name,\n 'description': description,\n 'tables_dataset_metadata': tables_dataset_metadata,\n }\n dataset = client.create_dataset(\n location_path,\n dataset_dict,\n retry or google.api_core.gapic_v1.method.DEFAULT,\n timeout or google.api_core.gapic_v1.method.DEFAULT,\n metadata,\n )\n print(dataset)\n dataset_id = dataset.name.rsplit('/', 1)[-1]\n dataset_url = 'https://console.cloud.google.com/automl-tables/locations/{region}/datasets/{dataset_id}/schemav2?project={project_id}'.format(\n project_id=gcp_project_id,\n region=gcp_region,\n dataset_id=dataset_id,\n )\n return (dataset.name, str(dataset.create_time), dataset_id, dataset_url)\n\n\nif __name__ == '__main__':\n import kfp\n kfp.components.func_to_container_op(\n automl_create_dataset_for_tables,\n output_component_file='component.yaml',\n base_image='python:3.7',\n packages_to_install=['google-cloud-automl==0.4.0']\n )\n", "path": "components/gcp/automl/create_dataset_for_tables/component.py"}]} | 1,543 | 131 |
gh_patches_debug_7432 | rasdani/github-patches | git_diff | pulp__pulpcore-3412 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
0077_move_remote_url_credentials.py fails on Remotes that have @ in path, not netloc
**Version**
3.18.10
**Describe the bug**
Migration 0077 fails when you have a remote that has an @ somewhere in the path
```
Applying core.0077_move_remote_url_credentials...Traceback (most recent call last):
File "/usr/bin/pulpcore-manager", line 33, in <module>
sys.exit(load_entry_point('pulpcore==3.18.10', 'console_scripts', 'pulpcore-manager')())
File "/usr/lib/python3.9/site-packages/pulpcore/app/manage.py", line 11, in manage
execute_from_command_line(sys.argv)
File "/usr/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/usr/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 89, in wrapped
res = handle_func(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/django/core/management/commands/migrate.py", line 244, in handle
post_migrate_state = executor.migrate(
File "/usr/lib/python3.9/site-packages/django/db/migrations/executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/usr/lib/python3.9/site-packages/django/db/migrations/executor.py", line 147, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/usr/lib/python3.9/site-packages/django/db/migrations/executor.py", line 227, in apply_migration
state = migration.apply(state, schema_editor)
File "/usr/lib/python3.9/site-packages/django/db/migrations/migration.py", line 126, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/usr/lib/python3.9/site-packages/django/db/migrations/operations/special.py", line 190, in database_forwards
self.code(from_state.apps, schema_editor)
File "/usr/lib/python3.9/site-packages/pulpcore/app/migrations/0077_move_remote_url_credentials.py", line 19, in move_remote_url_credentials
_, url_split = url.netloc.rsplit("@", maxsplit=1)
ValueError: not enough values to unpack (expected 2, got 1)
```
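For illustration, a short sketch of why the `rsplit` fails for such a remote: the `@` is in the URL's path, so `urlparse` leaves the netloc without one and the split yields a single element:

```python
from urllib.parse import urlparse

url = urlparse(
    "https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/"
)
print(url.netloc)  # 'download.copr.fedorainfracloud.org' -- no '@' here
print(url.path)    # '/results/@caddy/caddy/epel-8-x86_64/'

parts = url.netloc.rsplit("@", maxsplit=1)
print(parts)       # ['download.copr.fedorainfracloud.org'], a single element,
                   # so `_, url_split = ...` raises the ValueError shown above
```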
**To Reproduce**
Steps to reproduce the behavior:
* Have a remote `https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/`
* Try to migrate 0077
**Expected behavior**
The migration applies.
**Additional context**
https://community.theforeman.org/t/foreman-3-3-katello-4-5-upgrade-failed-pulpcore-manager-migrate-noinput/31088
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/migrations/0077_move_remote_url_credentials.py`
Content:
```
1 # Generated by Django 3.2.6 on 2021-09-29 14:00
2
3 from urllib.parse import urlparse, urlunparse
4
5 from django.db import migrations
6
7
8 def move_remote_url_credentials(apps, schema_editor):
9 Remote = apps.get_model("core", "Remote")
10
11 for remote in Remote.objects.filter(url__contains="@").iterator():
12 url = urlparse(remote.url)
13
14 if not remote.username:
15 remote.username = url.username
16 if not remote.password:
17 remote.password = url.password
18
19 _, url_split = url.netloc.rsplit("@", maxsplit=1)
20 remote.url = urlunparse(url._replace(netloc=url_split))
21 remote.save()
22
23
24 class Migration(migrations.Migration):
25
26 dependencies = [
27 ('core', '0076_remove_reserved_resource'),
28 ]
29
30 operations = [
31 migrations.RunPython(
32 code=move_remote_url_credentials,
33 reverse_code=migrations.RunPython.noop,
34 elidable=True,
35 )
36 ]
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pulpcore/app/migrations/0077_move_remote_url_credentials.py b/pulpcore/app/migrations/0077_move_remote_url_credentials.py
--- a/pulpcore/app/migrations/0077_move_remote_url_credentials.py
+++ b/pulpcore/app/migrations/0077_move_remote_url_credentials.py
@@ -11,6 +11,11 @@
for remote in Remote.objects.filter(url__contains="@").iterator():
url = urlparse(remote.url)
+ if '@' not in url.netloc:
+ # URLs can have an @ in other places than the netloc,
+ # but those do not indicate credentials
+ continue
+
if not remote.username:
remote.username = url.username
if not remote.password:
| {"golden_diff": "diff --git a/pulpcore/app/migrations/0077_move_remote_url_credentials.py b/pulpcore/app/migrations/0077_move_remote_url_credentials.py\n--- a/pulpcore/app/migrations/0077_move_remote_url_credentials.py\n+++ b/pulpcore/app/migrations/0077_move_remote_url_credentials.py\n@@ -11,6 +11,11 @@\n for remote in Remote.objects.filter(url__contains=\"@\").iterator():\n url = urlparse(remote.url)\n \n+ if '@' not in url.netloc:\n+ # URLs can have an @ in other places than the netloc,\n+ # but those do not indicate credentials\n+ continue\n+\n if not remote.username:\n remote.username = url.username\n if not remote.password:\n", "issue": "0077_move_remote_url_credentials.py fails on Remotes that have @ in path, not netloc\n**Version**\r\n3.18.10\r\n\r\n**Describe the bug**\r\nMigration 0077 fails when you have a remote that has an @ somewhere in the path\r\n\r\n```\r\n Applying core.0077_move_remote_url_credentials...Traceback (most recent call last):\r\n File \"/usr/bin/pulpcore-manager\", line 33, in <module>\r\n sys.exit(load_entry_point('pulpcore==3.18.10', 'console_scripts', 'pulpcore-manager')())\r\n File \"/usr/lib/python3.9/site-packages/pulpcore/app/manage.py\", line 11, in manage\r\n execute_from_command_line(sys.argv)\r\n File \"/usr/lib/python3.9/site-packages/django/core/management/__init__.py\", line 419, in execute_from_command_line\r\n utility.execute()\r\n File \"/usr/lib/python3.9/site-packages/django/core/management/__init__.py\", line 413, in execute\r\n self.fetch_command(subcommand).run_from_argv(self.argv)\r\n File \"/usr/lib/python3.9/site-packages/django/core/management/base.py\", line 354, in run_from_argv\r\n self.execute(*args, **cmd_options)\r\n File \"/usr/lib/python3.9/site-packages/django/core/management/base.py\", line 398, in execute\r\n output = self.handle(*args, **options)\r\n File \"/usr/lib/python3.9/site-packages/django/core/management/base.py\", line 89, in wrapped\r\n res = handle_func(*args, **kwargs)\r\n File \"/usr/lib/python3.9/site-packages/django/core/management/commands/migrate.py\", line 244, in handle\r\n post_migrate_state = executor.migrate(\r\n File \"/usr/lib/python3.9/site-packages/django/db/migrations/executor.py\", line 117, in migrate\r\n state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)\r\n File \"/usr/lib/python3.9/site-packages/django/db/migrations/executor.py\", line 147, in _migrate_all_forwards\r\n state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)\r\n File \"/usr/lib/python3.9/site-packages/django/db/migrations/executor.py\", line 227, in apply_migration\r\n state = migration.apply(state, schema_editor)\r\n File \"/usr/lib/python3.9/site-packages/django/db/migrations/migration.py\", line 126, in apply\r\n operation.database_forwards(self.app_label, schema_editor, old_state, project_state)\r\n File \"/usr/lib/python3.9/site-packages/django/db/migrations/operations/special.py\", line 190, in database_forwards\r\n self.code(from_state.apps, schema_editor)\r\n File \"/usr/lib/python3.9/site-packages/pulpcore/app/migrations/0077_move_remote_url_credentials.py\", line 19, in move_remote_url_credentials\r\n _, url_split = url.netloc.rsplit(\"@\", maxsplit=1)\r\nValueError: not enough values to unpack (expected 2, got 1)\r\n```\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n* Have a remote `https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/`\r\n* Try to migrate 0077\r\n\r\n**Expected 
behavior**\r\nmigration aplies\r\n\r\n**Additional context**\r\nhttps://community.theforeman.org/t/foreman-3-3-katello-4-5-upgrade-failed-pulpcore-manager-migrate-noinput/31088\r\n\n", "before_files": [{"content": "# Generated by Django 3.2.6 on 2021-09-29 14:00\n\nfrom urllib.parse import urlparse, urlunparse\n\nfrom django.db import migrations\n\n\ndef move_remote_url_credentials(apps, schema_editor):\n Remote = apps.get_model(\"core\", \"Remote\")\n\n for remote in Remote.objects.filter(url__contains=\"@\").iterator():\n url = urlparse(remote.url)\n\n if not remote.username:\n remote.username = url.username\n if not remote.password:\n remote.password = url.password\n\n _, url_split = url.netloc.rsplit(\"@\", maxsplit=1)\n remote.url = urlunparse(url._replace(netloc=url_split))\n remote.save()\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0076_remove_reserved_resource'),\n ]\n\n operations = [\n migrations.RunPython(\n code=move_remote_url_credentials,\n reverse_code=migrations.RunPython.noop,\n elidable=True,\n )\n ]\n", "path": "pulpcore/app/migrations/0077_move_remote_url_credentials.py"}], "after_files": [{"content": "# Generated by Django 3.2.6 on 2021-09-29 14:00\n\nfrom urllib.parse import urlparse, urlunparse\n\nfrom django.db import migrations\n\n\ndef move_remote_url_credentials(apps, schema_editor):\n Remote = apps.get_model(\"core\", \"Remote\")\n\n for remote in Remote.objects.filter(url__contains=\"@\").iterator():\n url = urlparse(remote.url)\n\n if '@' not in url.netloc:\n # URLs can have an @ in other places than the netloc,\n # but those do not indicate credentials\n continue\n\n if not remote.username:\n remote.username = url.username\n if not remote.password:\n remote.password = url.password\n\n _, url_split = url.netloc.rsplit(\"@\", maxsplit=1)\n remote.url = urlunparse(url._replace(netloc=url_split))\n remote.save()\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0076_remove_reserved_resource'),\n ]\n\n operations = [\n migrations.RunPython(\n code=move_remote_url_credentials,\n reverse_code=migrations.RunPython.noop,\n elidable=True,\n )\n ]\n", "path": "pulpcore/app/migrations/0077_move_remote_url_credentials.py"}]} | 1,388 | 172 |
gh_patches_debug_112 | rasdani/github-patches | git_diff | InstaPy__InstaPy-4046 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Instapy-chromedriver not supporting latest Chrome browser version
The InstaPy chromedriver only supports Chrome up to version 71, so since the browser update the whole program quits with an error asking the user to ensure chromedriver is installed at .../insta-py/chromedriver_linux64.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `instapy/__init__.py`
Content:
```
1 # flake8: noqa
2
3 from .instapy import InstaPy
4 from .util import smart_run
5 from .settings import Settings
6 from .file_manager import set_workspace
7 from .file_manager import get_workspace
8
9
10 # __variables__ with double-quoted values will be available in setup.py
11 __version__ = "0.2.1"
12
13
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/instapy/__init__.py b/instapy/__init__.py
--- a/instapy/__init__.py
+++ b/instapy/__init__.py
@@ -8,5 +8,5 @@
# __variables__ with double-quoted values will be available in setup.py
-__version__ = "0.2.1"
+__version__ = "0.2.2"
| {"golden_diff": "diff --git a/instapy/__init__.py b/instapy/__init__.py\n--- a/instapy/__init__.py\n+++ b/instapy/__init__.py\n@@ -8,5 +8,5 @@\n \n \n # __variables__ with double-quoted values will be available in setup.py\n-__version__ = \"0.2.1\"\n+__version__ = \"0.2.2\"\n", "issue": "Instapy-chromedriver not supporting latest Chrome browser version\nThe Instapy-chrome driver only supports Chrome upto versions 71 and since the update, the whole program quits with the error of ensure chromedriver is installed at .../insta-py/chromedriver_linux64..\n", "before_files": [{"content": "# flake8: noqa\n\nfrom .instapy import InstaPy\nfrom .util import smart_run\nfrom .settings import Settings\nfrom .file_manager import set_workspace\nfrom .file_manager import get_workspace\n\n\n# __variables__ with double-quoted values will be available in setup.py\n__version__ = \"0.2.1\"\n\n", "path": "instapy/__init__.py"}], "after_files": [{"content": "# flake8: noqa\n\nfrom .instapy import InstaPy\nfrom .util import smart_run\nfrom .settings import Settings\nfrom .file_manager import set_workspace\nfrom .file_manager import get_workspace\n\n\n# __variables__ with double-quoted values will be available in setup.py\n__version__ = \"0.2.2\"\n\n", "path": "instapy/__init__.py"}]} | 412 | 91 |
gh_patches_debug_25769 | rasdani/github-patches | git_diff | encode__starlette-1401 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
templating: jinja2: pass kwargs for environment
I think it would be good to pass something like `env_kwargs` via https://github.com/blueyed/starlette/blob/24c135de71ac56a73f7f797258115941579155bf/starlette/templating.py#L51-L53.
While you can change the env afterwards, it would allow Jinja2 to validate e.g. `enable_async`, and call `load_extensions` etc.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/templating.py`
Content:
```
1 import typing
2 from os import PathLike
3
4 from starlette.background import BackgroundTask
5 from starlette.responses import Response
6 from starlette.types import Receive, Scope, Send
7
8 try:
9 import jinja2
10
11 # @contextfunction renamed to @pass_context in Jinja 3.0, to be removed in 3.1
12 if hasattr(jinja2, "pass_context"):
13 pass_context = jinja2.pass_context
14 else: # pragma: nocover
15 pass_context = jinja2.contextfunction
16 except ImportError: # pragma: nocover
17 jinja2 = None # type: ignore
18
19
20 class _TemplateResponse(Response):
21 media_type = "text/html"
22
23 def __init__(
24 self,
25 template: typing.Any,
26 context: dict,
27 status_code: int = 200,
28 headers: dict = None,
29 media_type: str = None,
30 background: BackgroundTask = None,
31 ):
32 self.template = template
33 self.context = context
34 content = template.render(context)
35 super().__init__(content, status_code, headers, media_type, background)
36
37 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
38 request = self.context.get("request", {})
39 extensions = request.get("extensions", {})
40 if "http.response.template" in extensions:
41 await send(
42 {
43 "type": "http.response.template",
44 "template": self.template,
45 "context": self.context,
46 }
47 )
48 await super().__call__(scope, receive, send)
49
50
51 class Jinja2Templates:
52 """
53 templates = Jinja2Templates("templates")
54
55 return templates.TemplateResponse("index.html", {"request": request})
56 """
57
58 def __init__(self, directory: typing.Union[str, PathLike]) -> None:
59 assert jinja2 is not None, "jinja2 must be installed to use Jinja2Templates"
60 self.env = self._create_env(directory)
61
62 def _create_env(
63 self, directory: typing.Union[str, PathLike]
64 ) -> "jinja2.Environment":
65 @pass_context
66 def url_for(context: dict, name: str, **path_params: typing.Any) -> str:
67 request = context["request"]
68 return request.url_for(name, **path_params)
69
70 loader = jinja2.FileSystemLoader(directory)
71 env = jinja2.Environment(loader=loader, autoescape=True)
72 env.globals["url_for"] = url_for
73 return env
74
75 def get_template(self, name: str) -> "jinja2.Template":
76 return self.env.get_template(name)
77
78 def TemplateResponse(
79 self,
80 name: str,
81 context: dict,
82 status_code: int = 200,
83 headers: dict = None,
84 media_type: str = None,
85 background: BackgroundTask = None,
86 ) -> _TemplateResponse:
87 if "request" not in context:
88 raise ValueError('context must include a "request" key')
89 template = self.get_template(name)
90 return _TemplateResponse(
91 template,
92 context,
93 status_code=status_code,
94 headers=headers,
95 media_type=media_type,
96 background=background,
97 )
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlette/templating.py b/starlette/templating.py
--- a/starlette/templating.py
+++ b/starlette/templating.py
@@ -55,12 +55,14 @@
return templates.TemplateResponse("index.html", {"request": request})
"""
- def __init__(self, directory: typing.Union[str, PathLike]) -> None:
+ def __init__(
+ self, directory: typing.Union[str, PathLike], **env_options: typing.Any
+ ) -> None:
assert jinja2 is not None, "jinja2 must be installed to use Jinja2Templates"
- self.env = self._create_env(directory)
+ self.env = self._create_env(directory, **env_options)
def _create_env(
- self, directory: typing.Union[str, PathLike]
+ self, directory: typing.Union[str, PathLike], **env_options: typing.Any
) -> "jinja2.Environment":
@pass_context
def url_for(context: dict, name: str, **path_params: typing.Any) -> str:
@@ -68,7 +70,10 @@
return request.url_for(name, **path_params)
loader = jinja2.FileSystemLoader(directory)
- env = jinja2.Environment(loader=loader, autoescape=True)
+ env_options.setdefault("loader", loader)
+ env_options.setdefault("autoescape", True)
+
+ env = jinja2.Environment(**env_options)
env.globals["url_for"] = url_for
return env
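A hypothetical call site once the keyword arguments are forwarded as above; the options shown are ordinary Jinja2 environment settings chosen only for illustration, and Jinja2 validates them (and loads any extensions) at construction time:

```python
from starlette.templating import Jinja2Templates

templates = Jinja2Templates(
    "templates",
    trim_blocks=True,
    lstrip_blocks=True,
    extensions=["jinja2.ext.i18n"],  # loaded through jinja2's own extension machinery
)
```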
| {"golden_diff": "diff --git a/starlette/templating.py b/starlette/templating.py\n--- a/starlette/templating.py\n+++ b/starlette/templating.py\n@@ -55,12 +55,14 @@\n return templates.TemplateResponse(\"index.html\", {\"request\": request})\n \"\"\"\n \n- def __init__(self, directory: typing.Union[str, PathLike]) -> None:\n+ def __init__(\n+ self, directory: typing.Union[str, PathLike], **env_options: typing.Any\n+ ) -> None:\n assert jinja2 is not None, \"jinja2 must be installed to use Jinja2Templates\"\n- self.env = self._create_env(directory)\n+ self.env = self._create_env(directory, **env_options)\n \n def _create_env(\n- self, directory: typing.Union[str, PathLike]\n+ self, directory: typing.Union[str, PathLike], **env_options: typing.Any\n ) -> \"jinja2.Environment\":\n @pass_context\n def url_for(context: dict, name: str, **path_params: typing.Any) -> str:\n@@ -68,7 +70,10 @@\n return request.url_for(name, **path_params)\n \n loader = jinja2.FileSystemLoader(directory)\n- env = jinja2.Environment(loader=loader, autoescape=True)\n+ env_options.setdefault(\"loader\", loader)\n+ env_options.setdefault(\"autoescape\", True)\n+\n+ env = jinja2.Environment(**env_options)\n env.globals[\"url_for\"] = url_for\n return env\n", "issue": "templateing: jinja2: pass kwargs for environment\nI think it would be good to pass something like `env_kwargs` via https://github.com/blueyed/starlette/blob/24c135de71ac56a73f7f797258115941579155bf/starlette/templating.py#L51-L53.\r\n\r\nWhile you can change the env afterwards, it would allow Jinja2 to validate e.g. `enable_async`, and call `load_extensions` etc.\n", "before_files": [{"content": "import typing\nfrom os import PathLike\n\nfrom starlette.background import BackgroundTask\nfrom starlette.responses import Response\nfrom starlette.types import Receive, Scope, Send\n\ntry:\n import jinja2\n\n # @contextfunction renamed to @pass_context in Jinja 3.0, to be removed in 3.1\n if hasattr(jinja2, \"pass_context\"):\n pass_context = jinja2.pass_context\n else: # pragma: nocover\n pass_context = jinja2.contextfunction\nexcept ImportError: # pragma: nocover\n jinja2 = None # type: ignore\n\n\nclass _TemplateResponse(Response):\n media_type = \"text/html\"\n\n def __init__(\n self,\n template: typing.Any,\n context: dict,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ):\n self.template = template\n self.context = context\n content = template.render(context)\n super().__init__(content, status_code, headers, media_type, background)\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n request = self.context.get(\"request\", {})\n extensions = request.get(\"extensions\", {})\n if \"http.response.template\" in extensions:\n await send(\n {\n \"type\": \"http.response.template\",\n \"template\": self.template,\n \"context\": self.context,\n }\n )\n await super().__call__(scope, receive, send)\n\n\nclass Jinja2Templates:\n \"\"\"\n templates = Jinja2Templates(\"templates\")\n\n return templates.TemplateResponse(\"index.html\", {\"request\": request})\n \"\"\"\n\n def __init__(self, directory: typing.Union[str, PathLike]) -> None:\n assert jinja2 is not None, \"jinja2 must be installed to use Jinja2Templates\"\n self.env = self._create_env(directory)\n\n def _create_env(\n self, directory: typing.Union[str, PathLike]\n ) -> \"jinja2.Environment\":\n @pass_context\n def url_for(context: dict, name: str, **path_params: typing.Any) -> str:\n request = 
context[\"request\"]\n return request.url_for(name, **path_params)\n\n loader = jinja2.FileSystemLoader(directory)\n env = jinja2.Environment(loader=loader, autoescape=True)\n env.globals[\"url_for\"] = url_for\n return env\n\n def get_template(self, name: str) -> \"jinja2.Template\":\n return self.env.get_template(name)\n\n def TemplateResponse(\n self,\n name: str,\n context: dict,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ) -> _TemplateResponse:\n if \"request\" not in context:\n raise ValueError('context must include a \"request\" key')\n template = self.get_template(name)\n return _TemplateResponse(\n template,\n context,\n status_code=status_code,\n headers=headers,\n media_type=media_type,\n background=background,\n )\n", "path": "starlette/templating.py"}], "after_files": [{"content": "import typing\nfrom os import PathLike\n\nfrom starlette.background import BackgroundTask\nfrom starlette.responses import Response\nfrom starlette.types import Receive, Scope, Send\n\ntry:\n import jinja2\n\n # @contextfunction renamed to @pass_context in Jinja 3.0, to be removed in 3.1\n if hasattr(jinja2, \"pass_context\"):\n pass_context = jinja2.pass_context\n else: # pragma: nocover\n pass_context = jinja2.contextfunction\nexcept ImportError: # pragma: nocover\n jinja2 = None # type: ignore\n\n\nclass _TemplateResponse(Response):\n media_type = \"text/html\"\n\n def __init__(\n self,\n template: typing.Any,\n context: dict,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ):\n self.template = template\n self.context = context\n content = template.render(context)\n super().__init__(content, status_code, headers, media_type, background)\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n request = self.context.get(\"request\", {})\n extensions = request.get(\"extensions\", {})\n if \"http.response.template\" in extensions:\n await send(\n {\n \"type\": \"http.response.template\",\n \"template\": self.template,\n \"context\": self.context,\n }\n )\n await super().__call__(scope, receive, send)\n\n\nclass Jinja2Templates:\n \"\"\"\n templates = Jinja2Templates(\"templates\")\n\n return templates.TemplateResponse(\"index.html\", {\"request\": request})\n \"\"\"\n\n def __init__(\n self, directory: typing.Union[str, PathLike], **env_options: typing.Any\n ) -> None:\n assert jinja2 is not None, \"jinja2 must be installed to use Jinja2Templates\"\n self.env = self._create_env(directory, **env_options)\n\n def _create_env(\n self, directory: typing.Union[str, PathLike], **env_options: typing.Any\n ) -> \"jinja2.Environment\":\n @pass_context\n def url_for(context: dict, name: str, **path_params: typing.Any) -> str:\n request = context[\"request\"]\n return request.url_for(name, **path_params)\n\n loader = jinja2.FileSystemLoader(directory)\n env_options.setdefault(\"loader\", loader)\n env_options.setdefault(\"autoescape\", True)\n\n env = jinja2.Environment(**env_options)\n env.globals[\"url_for\"] = url_for\n return env\n\n def get_template(self, name: str) -> \"jinja2.Template\":\n return self.env.get_template(name)\n\n def TemplateResponse(\n self,\n name: str,\n context: dict,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ) -> _TemplateResponse:\n if \"request\" not in context:\n raise ValueError('context must include a \"request\" key')\n template = 
self.get_template(name)\n return _TemplateResponse(\n template,\n context,\n status_code=status_code,\n headers=headers,\n media_type=media_type,\n background=background,\n )\n", "path": "starlette/templating.py"}]} | 1,268 | 349 |
gh_patches_debug_18185 | rasdani/github-patches | git_diff | mozilla__bugbug-214 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use the bug snapshot transform in the "uplift" model
Depends on #5.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bugbug/models/uplift.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import xgboost
7 from imblearn.under_sampling import RandomUnderSampler
8 from sklearn.compose import ColumnTransformer
9 from sklearn.feature_extraction import DictVectorizer
10 from sklearn.pipeline import Pipeline
11
12 from bugbug import bug_features
13 from bugbug import bugzilla
14 from bugbug.model import Model
15
16
17 class UpliftModel(Model):
18 def __init__(self, lemmatization=False):
19 Model.__init__(self, lemmatization)
20
21 self.sampler = RandomUnderSampler(random_state=0)
22
23 feature_extractors = [
24 bug_features.has_str(),
25 bug_features.has_regression_range(),
26 bug_features.severity(),
27 bug_features.keywords(),
28 bug_features.is_coverity_issue(),
29 bug_features.has_crash_signature(),
30 bug_features.has_url(),
31 bug_features.has_w3c_url(),
32 bug_features.has_github_url(),
33 bug_features.whiteboard(),
34 bug_features.patches(),
35 bug_features.landings(),
36 bug_features.title(),
37 ]
38
39 cleanup_functions = [
40 bug_features.cleanup_fileref,
41 bug_features.cleanup_url,
42 bug_features.cleanup_synonyms,
43 ]
44
45 self.extraction_pipeline = Pipeline([
46 ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions)),
47 ('union', ColumnTransformer([
48 ('data', DictVectorizer(), 'data'),
49
50 ('title', self.text_vectorizer(), 'title'),
51
52 ('comments', self.text_vectorizer(), 'comments'),
53 ])),
54 ])
55
56 self.clf = xgboost.XGBClassifier(n_jobs=16)
57 self.clf.set_params(predictor='cpu_predictor')
58
59 def get_labels(self):
60 classes = {}
61
62 for bug_data in bugzilla.get_bugs():
63 bug_id = int(bug_data['id'])
64
65 for attachment in bug_data['attachments']:
66 for flag in attachment['flags']:
67 if not flag['name'].startswith('approval-mozilla-') or flag['status'] not in ['+', '-']:
68 continue
69
70 if flag['status'] == '+':
71 classes[bug_id] = 1
72 elif flag['status'] == '-':
73 classes[bug_id] = 0
74
75 return classes
76
77 def get_feature_names(self):
78 return self.extraction_pipeline.named_steps['union'].get_feature_names()
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bugbug/models/uplift.py b/bugbug/models/uplift.py
--- a/bugbug/models/uplift.py
+++ b/bugbug/models/uplift.py
@@ -43,7 +43,7 @@
]
self.extraction_pipeline = Pipeline([
- ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions)),
+ ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions, rollback=True, rollback_when=self.rollback)),
('union', ColumnTransformer([
('data', DictVectorizer(), 'data'),
@@ -56,6 +56,9 @@
self.clf = xgboost.XGBClassifier(n_jobs=16)
self.clf.set_params(predictor='cpu_predictor')
+ def rollback(self, change):
+ return (change['field_name'] == 'flagtypes.name' and change['added'].startswith('approval-mozilla-') and (change['added'].endswith('+') or change['added'].endswith('-')))
+
def get_labels(self):
classes = {}
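For illustration, a standalone sketch of what the new rollback predicate matches — history changes that grant or deny an `approval-mozilla-*` flag, i.e. the uplift decision itself; the change dicts below are hypothetical examples of Bugzilla history entries:

```python
def rollback(change):
    return (change['field_name'] == 'flagtypes.name'
            and change['added'].startswith('approval-mozilla-')
            and (change['added'].endswith('+') or change['added'].endswith('-')))

print(rollback({'field_name': 'flagtypes.name', 'added': 'approval-mozilla-release+'}))  # True
print(rollback({'field_name': 'flagtypes.name', 'added': 'approval-mozilla-beta?'}))     # False: a request, not a decision
print(rollback({'field_name': 'status', 'added': 'RESOLVED'}))                           # False
```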
| {"golden_diff": "diff --git a/bugbug/models/uplift.py b/bugbug/models/uplift.py\n--- a/bugbug/models/uplift.py\n+++ b/bugbug/models/uplift.py\n@@ -43,7 +43,7 @@\n ]\n \n self.extraction_pipeline = Pipeline([\n- ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions)),\n+ ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions, rollback=True, rollback_when=self.rollback)),\n ('union', ColumnTransformer([\n ('data', DictVectorizer(), 'data'),\n \n@@ -56,6 +56,9 @@\n self.clf = xgboost.XGBClassifier(n_jobs=16)\n self.clf.set_params(predictor='cpu_predictor')\n \n+ def rollback(self, change):\n+ return (change['field_name'] == 'flagtypes.name' and change['added'].startswith('approval-mozilla-') and (change['added'].endswith('+') or change['added'].endswith('-')))\n+\n def get_labels(self):\n classes = {}\n", "issue": "Use the bug snapshot transform in the \"uplift\" model\nDepends on #5.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport xgboost\nfrom imblearn.under_sampling import RandomUnderSampler\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import bug_features\nfrom bugbug import bugzilla\nfrom bugbug.model import Model\n\n\nclass UpliftModel(Model):\n def __init__(self, lemmatization=False):\n Model.__init__(self, lemmatization)\n\n self.sampler = RandomUnderSampler(random_state=0)\n\n feature_extractors = [\n bug_features.has_str(),\n bug_features.has_regression_range(),\n bug_features.severity(),\n bug_features.keywords(),\n bug_features.is_coverity_issue(),\n bug_features.has_crash_signature(),\n bug_features.has_url(),\n bug_features.has_w3c_url(),\n bug_features.has_github_url(),\n bug_features.whiteboard(),\n bug_features.patches(),\n bug_features.landings(),\n bug_features.title(),\n ]\n\n cleanup_functions = [\n bug_features.cleanup_fileref,\n bug_features.cleanup_url,\n bug_features.cleanup_synonyms,\n ]\n\n self.extraction_pipeline = Pipeline([\n ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions)),\n ('union', ColumnTransformer([\n ('data', DictVectorizer(), 'data'),\n\n ('title', self.text_vectorizer(), 'title'),\n\n ('comments', self.text_vectorizer(), 'comments'),\n ])),\n ])\n\n self.clf = xgboost.XGBClassifier(n_jobs=16)\n self.clf.set_params(predictor='cpu_predictor')\n\n def get_labels(self):\n classes = {}\n\n for bug_data in bugzilla.get_bugs():\n bug_id = int(bug_data['id'])\n\n for attachment in bug_data['attachments']:\n for flag in attachment['flags']:\n if not flag['name'].startswith('approval-mozilla-') or flag['status'] not in ['+', '-']:\n continue\n\n if flag['status'] == '+':\n classes[bug_id] = 1\n elif flag['status'] == '-':\n classes[bug_id] = 0\n\n return classes\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps['union'].get_feature_names()\n", "path": "bugbug/models/uplift.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport xgboost\nfrom imblearn.under_sampling import RandomUnderSampler\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import bug_features\nfrom bugbug import bugzilla\nfrom bugbug.model import Model\n\n\nclass UpliftModel(Model):\n def __init__(self, lemmatization=False):\n Model.__init__(self, lemmatization)\n\n self.sampler = RandomUnderSampler(random_state=0)\n\n feature_extractors = [\n bug_features.has_str(),\n bug_features.has_regression_range(),\n bug_features.severity(),\n bug_features.keywords(),\n bug_features.is_coverity_issue(),\n bug_features.has_crash_signature(),\n bug_features.has_url(),\n bug_features.has_w3c_url(),\n bug_features.has_github_url(),\n bug_features.whiteboard(),\n bug_features.patches(),\n bug_features.landings(),\n bug_features.title(),\n ]\n\n cleanup_functions = [\n bug_features.cleanup_fileref,\n bug_features.cleanup_url,\n bug_features.cleanup_synonyms,\n ]\n\n self.extraction_pipeline = Pipeline([\n ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions, rollback=True, rollback_when=self.rollback)),\n ('union', ColumnTransformer([\n ('data', DictVectorizer(), 'data'),\n\n ('title', self.text_vectorizer(), 'title'),\n\n ('comments', self.text_vectorizer(), 'comments'),\n ])),\n ])\n\n self.clf = xgboost.XGBClassifier(n_jobs=16)\n self.clf.set_params(predictor='cpu_predictor')\n\n def rollback(self, change):\n return (change['field_name'] == 'flagtypes.name' and change['added'].startswith('approval-mozilla-') and (change['added'].endswith('+') or change['added'].endswith('-')))\n\n def get_labels(self):\n classes = {}\n\n for bug_data in bugzilla.get_bugs():\n bug_id = int(bug_data['id'])\n\n for attachment in bug_data['attachments']:\n for flag in attachment['flags']:\n if not flag['name'].startswith('approval-mozilla-') or flag['status'] not in ['+', '-']:\n continue\n\n if flag['status'] == '+':\n classes[bug_id] = 1\n elif flag['status'] == '-':\n classes[bug_id] = 0\n\n return classes\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps['union'].get_feature_names()\n", "path": "bugbug/models/uplift.py"}]} | 972 | 231 |
gh_patches_debug_349 | rasdani/github-patches | git_diff | google__turbinia-1070 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing sys module import in logger.py
Logger module is missing an import statement for 'sys'
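For illustration, a minimal sketch of the resulting failure mode (run in a fresh interpreter, mirroring the missing top-level import): the `sys.exit(1)` call in the config-error branch raises a `NameError` instead of exiting:

```python
# minimal reproduction, assuming "import sys" is absent as in logger.py
try:
    sys.exit(1)  # what setup() attempts when the config fails to load
except NameError as exc:
    print(exc)   # name 'sys' is not defined
```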
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `turbinia/config/logger.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2017 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Sets up logging."""
16
17 from __future__ import unicode_literals
18 import logging
19
20 import warnings
21 import logging.handlers
22 import os
23
24 from turbinia import config
25 from turbinia import TurbiniaException
26
27 # Environment variable to look for node name in
28 ENVNODENAME = 'NODE_NAME'
29
30
31 def setup(need_file_handler=True, need_stream_handler=True, log_file_path=None):
32 """Set up logging parameters.
33
34 This will also set the root logger, which is the default logger when a named
35 logger is not specified. We currently use 'turbinia' as the named logger,
36 however some external modules that are called by Turbinia can use the root
37 logger, so we want to be able to optionally configure that as well.
38 """
39 # Remove known warning about credentials
40 warnings.filterwarnings(
41 'ignore', 'Your application has authenticated using end user credentials')
42
43 logger = logging.getLogger('turbinia')
44 # Eliminate double logging from root logger
45 logger.propagate = False
46
47 # We only need a handler if one of that type doesn't exist already
48 if logger.handlers:
49 for handler in logger.handlers:
50 # Want to do strict type-checking here because is instance will include
51 # subclasses and so won't distinguish between StreamHandlers and
52 # FileHandlers.
53 # pylint: disable=unidiomatic-typecheck
54 if type(handler) == logging.FileHandler:
55 need_file_handler = False
56
57 # pylint: disable=unidiomatic-typecheck
58 if type(handler) == logging.StreamHandler:
59 need_stream_handler = False
60
61 if need_file_handler:
62 try:
63 config.LoadConfig()
64 except TurbiniaException as exception:
65 print(
66 'Could not load config file ({0!s}).\n{1:s}'.format(
67 exception, config.CONFIG_MSG))
68 sys.exit(1)
69
70 # Check if a user specified log path was provided else create default path
71 if not log_file_path:
72 log_name = os.uname().nodename
73 # Check if NODE_NAME available for GKE setups
74 if ENVNODENAME in os.environ:
75 log_name = log_name + '.{0!s}'.format(os.environ[ENVNODENAME])
76 log_file_path = os.path.join(config.LOG_DIR, log_name) + '.log'
77
78 file_handler = logging.FileHandler(log_file_path)
79 formatter = logging.Formatter('%(asctime)s:%(levelname)s:%(message)s')
80 file_handler.setFormatter(formatter)
81 file_handler.setLevel(logging.DEBUG)
82 logger.addHandler(file_handler)
83
84 console_handler = logging.StreamHandler()
85 formatter = logging.Formatter(
86 '%(asctime)s [%(levelname)s] %(message)s', "%Y-%m-%d %H:%M:%S")
87 console_handler.setFormatter(formatter)
88 if need_stream_handler:
89 logger.addHandler(console_handler)
90
91 # Configure the root logger to use exactly our handlers because other modules
92 # like PSQ use this, and we want to see log messages from it when executing
93 # from CLI.
94 root_log = logging.getLogger()
95 for handler in root_log.handlers:
96 root_log.removeHandler(handler)
97 root_log.addHandler(console_handler)
98 if need_file_handler:
99 root_log.addHandler(file_handler)
100
101 # Set filelock logging to ERROR due to log spam
102 logging.getLogger("filelock").setLevel(logging.ERROR)
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/turbinia/config/logger.py b/turbinia/config/logger.py
--- a/turbinia/config/logger.py
+++ b/turbinia/config/logger.py
@@ -20,6 +20,7 @@
import warnings
import logging.handlers
import os
+import sys
from turbinia import config
from turbinia import TurbiniaException
| {"golden_diff": "diff --git a/turbinia/config/logger.py b/turbinia/config/logger.py\n--- a/turbinia/config/logger.py\n+++ b/turbinia/config/logger.py\n@@ -20,6 +20,7 @@\n import warnings\n import logging.handlers\n import os\n+import sys\n \n from turbinia import config\n from turbinia import TurbiniaException\n", "issue": "Missing sys module import in logger.py\nLogger module is missing an import statement for 'sys'\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2017 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Sets up logging.\"\"\"\n\nfrom __future__ import unicode_literals\nimport logging\n\nimport warnings\nimport logging.handlers\nimport os\n\nfrom turbinia import config\nfrom turbinia import TurbiniaException\n\n# Environment variable to look for node name in\nENVNODENAME = 'NODE_NAME'\n\n\ndef setup(need_file_handler=True, need_stream_handler=True, log_file_path=None):\n \"\"\"Set up logging parameters.\n\n This will also set the root logger, which is the default logger when a named\n logger is not specified. We currently use 'turbinia' as the named logger,\n however some external modules that are called by Turbinia can use the root\n logger, so we want to be able to optionally configure that as well.\n \"\"\"\n # Remove known warning about credentials\n warnings.filterwarnings(\n 'ignore', 'Your application has authenticated using end user credentials')\n\n logger = logging.getLogger('turbinia')\n # Eliminate double logging from root logger\n logger.propagate = False\n\n # We only need a handler if one of that type doesn't exist already\n if logger.handlers:\n for handler in logger.handlers:\n # Want to do strict type-checking here because is instance will include\n # subclasses and so won't distinguish between StreamHandlers and\n # FileHandlers.\n # pylint: disable=unidiomatic-typecheck\n if type(handler) == logging.FileHandler:\n need_file_handler = False\n\n # pylint: disable=unidiomatic-typecheck\n if type(handler) == logging.StreamHandler:\n need_stream_handler = False\n\n if need_file_handler:\n try:\n config.LoadConfig()\n except TurbiniaException as exception:\n print(\n 'Could not load config file ({0!s}).\\n{1:s}'.format(\n exception, config.CONFIG_MSG))\n sys.exit(1)\n\n # Check if a user specified log path was provided else create default path\n if not log_file_path:\n log_name = os.uname().nodename\n # Check if NODE_NAME available for GKE setups\n if ENVNODENAME in os.environ:\n log_name = log_name + '.{0!s}'.format(os.environ[ENVNODENAME])\n log_file_path = os.path.join(config.LOG_DIR, log_name) + '.log'\n\n file_handler = logging.FileHandler(log_file_path)\n formatter = logging.Formatter('%(asctime)s:%(levelname)s:%(message)s')\n file_handler.setFormatter(formatter)\n file_handler.setLevel(logging.DEBUG)\n logger.addHandler(file_handler)\n\n console_handler = logging.StreamHandler()\n formatter = logging.Formatter(\n '%(asctime)s [%(levelname)s] %(message)s', \"%Y-%m-%d %H:%M:%S\")\n 
console_handler.setFormatter(formatter)\n if need_stream_handler:\n logger.addHandler(console_handler)\n\n # Configure the root logger to use exactly our handlers because other modules\n # like PSQ use this, and we want to see log messages from it when executing\n # from CLI.\n root_log = logging.getLogger()\n for handler in root_log.handlers:\n root_log.removeHandler(handler)\n root_log.addHandler(console_handler)\n if need_file_handler:\n root_log.addHandler(file_handler)\n\n # Set filelock logging to ERROR due to log spam\n logging.getLogger(\"filelock\").setLevel(logging.ERROR)\n", "path": "turbinia/config/logger.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2017 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Sets up logging.\"\"\"\n\nfrom __future__ import unicode_literals\nimport logging\n\nimport warnings\nimport logging.handlers\nimport os\nimport sys\n\nfrom turbinia import config\nfrom turbinia import TurbiniaException\n\n# Environment variable to look for node name in\nENVNODENAME = 'NODE_NAME'\n\n\ndef setup(need_file_handler=True, need_stream_handler=True, log_file_path=None):\n \"\"\"Set up logging parameters.\n\n This will also set the root logger, which is the default logger when a named\n logger is not specified. 
We currently use 'turbinia' as the named logger,\n however some external modules that are called by Turbinia can use the root\n logger, so we want to be able to optionally configure that as well.\n \"\"\"\n # Remove known warning about credentials\n warnings.filterwarnings(\n 'ignore', 'Your application has authenticated using end user credentials')\n\n logger = logging.getLogger('turbinia')\n # Eliminate double logging from root logger\n logger.propagate = False\n\n # We only need a handler if one of that type doesn't exist already\n if logger.handlers:\n for handler in logger.handlers:\n # Want to do strict type-checking here because is instance will include\n # subclasses and so won't distinguish between StreamHandlers and\n # FileHandlers.\n # pylint: disable=unidiomatic-typecheck\n if type(handler) == logging.FileHandler:\n need_file_handler = False\n\n # pylint: disable=unidiomatic-typecheck\n if type(handler) == logging.StreamHandler:\n need_stream_handler = False\n\n if need_file_handler:\n try:\n config.LoadConfig()\n except TurbiniaException as exception:\n print(\n 'Could not load config file ({0!s}).\\n{1:s}'.format(\n exception, config.CONFIG_MSG))\n sys.exit(1)\n\n # Check if a user specified log path was provided else create default path\n if not log_file_path:\n log_name = os.uname().nodename\n # Check if NODE_NAME available for GKE setups\n if ENVNODENAME in os.environ:\n log_name = log_name + '.{0!s}'.format(os.environ[ENVNODENAME])\n log_file_path = os.path.join(config.LOG_DIR, log_name) + '.log'\n\n file_handler = logging.FileHandler(log_file_path)\n formatter = logging.Formatter('%(asctime)s:%(levelname)s:%(message)s')\n file_handler.setFormatter(formatter)\n file_handler.setLevel(logging.DEBUG)\n logger.addHandler(file_handler)\n\n console_handler = logging.StreamHandler()\n formatter = logging.Formatter(\n '%(asctime)s [%(levelname)s] %(message)s', \"%Y-%m-%d %H:%M:%S\")\n console_handler.setFormatter(formatter)\n if need_stream_handler:\n logger.addHandler(console_handler)\n\n # Configure the root logger to use exactly our handlers because other modules\n # like PSQ use this, and we want to see log messages from it when executing\n # from CLI.\n root_log = logging.getLogger()\n for handler in root_log.handlers:\n root_log.removeHandler(handler)\n root_log.addHandler(console_handler)\n if need_file_handler:\n root_log.addHandler(file_handler)\n\n # Set filelock logging to ERROR due to log spam\n logging.getLogger(\"filelock\").setLevel(logging.ERROR)\n", "path": "turbinia/config/logger.py"}]} | 1,341 | 83 |
gh_patches_debug_23568 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-2921 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Clean up dependencies
Spring cleaning! We currently declare some dependencies which are either unused or can easily be substituted:
- h11 - not used at all?
- requests - tests + examples only.
We should IMHO also eventually consider removing the following dependencies, although that involves a bit of work and shouldn't be in scope for this issue:
- pyasn1 - replace with asn1crypto, which is used by cryptography/pyOpenSSL
- ldap3 - only used for ldap proxy auth, which should probably live outside of the core once we have a healthy addon system.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 from codecs import open
3
4 import re
5 from setuptools import setup, find_packages
6
7 # Based on https://github.com/pypa/sampleproject/blob/master/setup.py
8 # and https://python-packaging-user-guide.readthedocs.org/
9
10 here = os.path.abspath(os.path.dirname(__file__))
11
12 with open(os.path.join(here, 'README.rst'), encoding='utf-8') as f:
13 long_description = f.read()
14
15 with open(os.path.join(here, "mitmproxy", "version.py")) as f:
16 VERSION = re.search(r'VERSION = "(.+?)(?:-0x|")', f.read()).group(1)
17
18 setup(
19 name="mitmproxy",
20 version=VERSION,
21 description="An interactive, SSL-capable, man-in-the-middle HTTP proxy for penetration testers and software developers.",
22 long_description=long_description,
23 url="http://mitmproxy.org",
24 author="Aldo Cortesi",
25 author_email="[email protected]",
26 license="MIT",
27 classifiers=[
28 "License :: OSI Approved :: MIT License",
29 "Development Status :: 5 - Production/Stable",
30 "Environment :: Console",
31 "Environment :: Console :: Curses",
32 "Operating System :: MacOS :: MacOS X",
33 "Operating System :: POSIX",
34 "Operating System :: Microsoft :: Windows",
35 "Programming Language :: Python",
36 "Programming Language :: Python :: 3",
37 "Programming Language :: Python :: 3 :: Only",
38 "Programming Language :: Python :: 3.5",
39 "Programming Language :: Python :: 3.6",
40 "Programming Language :: Python :: Implementation :: CPython",
41 "Topic :: Security",
42 "Topic :: Internet",
43 "Topic :: Internet :: WWW/HTTP",
44 "Topic :: Internet :: Proxy Servers",
45 "Topic :: Software Development :: Testing"
46 ],
47 packages=find_packages(include=[
48 "mitmproxy", "mitmproxy.*",
49 "pathod", "pathod.*",
50 ]),
51 include_package_data=True,
52 entry_points={
53 'console_scripts': [
54 "mitmproxy = mitmproxy.tools.main:mitmproxy",
55 "mitmdump = mitmproxy.tools.main:mitmdump",
56 "mitmweb = mitmproxy.tools.main:mitmweb",
57 "pathod = pathod.pathod_cmdline:go_pathod",
58 "pathoc = pathod.pathoc_cmdline:go_pathoc"
59 ]
60 },
61 # https://packaging.python.org/en/latest/requirements/#install-requires
62 # It is not considered best practice to use install_requires to pin dependencies to specific versions.
63 install_requires=[
64 "blinker>=1.4, <1.5",
65 "brotlipy>=0.7.0,<0.8",
66 "certifi>=2015.11.20.1", # no semver here - this should always be on the last release!
67 "click>=6.2, <7",
68 "cryptography>=2.1.4,<2.2",
69 'h11>=0.7.0,<0.8',
70 "h2>=3.0.1,<4",
71 "hyperframe>=5.1.0,<6",
72 "kaitaistruct>=0.7,<0.9",
73 "ldap3>=2.4,<2.5",
74 "passlib>=1.6.5, <1.8",
75 "pyasn1>=0.3.1,<0.5",
76 "pyOpenSSL>=17.5,<17.6",
77 "pyparsing>=2.1.3, <2.3",
78 "pyperclip>=1.6.0, <1.7",
79 "requests>=2.9.1, <3",
80 "ruamel.yaml>=0.13.2, <0.16",
81 "sortedcontainers>=1.5.4, <1.6",
82 "tornado>=4.3, <4.6",
83 "urwid>=2.0.1,<2.1",
84 "wsproto>=0.11.0,<0.12.0",
85 ],
86 extras_require={
87 ':sys_platform == "win32"': [
88 "pydivert>=2.0.3,<2.2",
89 ],
90 'dev': [
91 "flake8>=3.5, <3.6",
92 "Flask>=0.10.1, <0.13",
93 "mypy>=0.560,<0.561",
94 "pytest-cov>=2.5.1,<3",
95 "pytest-faulthandler>=1.3.1,<2",
96 "pytest-timeout>=1.2.1,<2",
97 "pytest-xdist>=1.22,<2",
98 "pytest>=3.3,<4",
99 "tox>=2.3, <3",
100 "rstcheck>=2.2, <4.0",
101 ],
102 'examples': [
103 "beautifulsoup4>=4.4.1, <4.7",
104 "Pillow>=4.3,<5.1",
105 ]
106 }
107 )
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -66,7 +66,6 @@
"certifi>=2015.11.20.1", # no semver here - this should always be on the last release!
"click>=6.2, <7",
"cryptography>=2.1.4,<2.2",
- 'h11>=0.7.0,<0.8',
"h2>=3.0.1,<4",
"hyperframe>=5.1.0,<6",
"kaitaistruct>=0.7,<0.9",
@@ -76,7 +75,6 @@
"pyOpenSSL>=17.5,<17.6",
"pyparsing>=2.1.3, <2.3",
"pyperclip>=1.6.0, <1.7",
- "requests>=2.9.1, <3",
"ruamel.yaml>=0.13.2, <0.16",
"sortedcontainers>=1.5.4, <1.6",
"tornado>=4.3, <4.6",
@@ -96,6 +94,7 @@
"pytest-timeout>=1.2.1,<2",
"pytest-xdist>=1.22,<2",
"pytest>=3.3,<4",
+ "requests>=2.9.1, <3",
"tox>=2.3, <3",
"rstcheck>=2.2, <4.0",
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -66,7 +66,6 @@\n \"certifi>=2015.11.20.1\", # no semver here - this should always be on the last release!\n \"click>=6.2, <7\",\n \"cryptography>=2.1.4,<2.2\",\n- 'h11>=0.7.0,<0.8',\n \"h2>=3.0.1,<4\",\n \"hyperframe>=5.1.0,<6\",\n \"kaitaistruct>=0.7,<0.9\",\n@@ -76,7 +75,6 @@\n \"pyOpenSSL>=17.5,<17.6\",\n \"pyparsing>=2.1.3, <2.3\",\n \"pyperclip>=1.6.0, <1.7\",\n- \"requests>=2.9.1, <3\",\n \"ruamel.yaml>=0.13.2, <0.16\",\n \"sortedcontainers>=1.5.4, <1.6\",\n \"tornado>=4.3, <4.6\",\n@@ -96,6 +94,7 @@\n \"pytest-timeout>=1.2.1,<2\",\n \"pytest-xdist>=1.22,<2\",\n \"pytest>=3.3,<4\",\n+ \"requests>=2.9.1, <3\",\n \"tox>=2.3, <3\",\n \"rstcheck>=2.2, <4.0\",\n ],\n", "issue": "Clean up dependencies\nSpring cleaning! We currently declare some dependencies which are either unused or can easily be substituted:\r\n\r\n - h11 - not used at all?\r\n - requests - tests + examples only.\r\n\r\nWe should IMHO also eventually consider removing the following dependencies, although that involves a bit of work and shouldn't be in scope for this issue:\r\n\r\n - pyasn1 - replace with asn1crypto, which is used by cryptography/pyOpenSSL\r\n - ldap3 - only used for ldap proxy auth, which should probably live outside of the core once we have a healthy addon system.\n", "before_files": [{"content": "import os\nfrom codecs import open\n\nimport re\nfrom setuptools import setup, find_packages\n\n# Based on https://github.com/pypa/sampleproject/blob/master/setup.py\n# and https://python-packaging-user-guide.readthedocs.org/\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(here, 'README.rst'), encoding='utf-8') as f:\n long_description = f.read()\n\nwith open(os.path.join(here, \"mitmproxy\", \"version.py\")) as f:\n VERSION = re.search(r'VERSION = \"(.+?)(?:-0x|\")', f.read()).group(1)\n\nsetup(\n name=\"mitmproxy\",\n version=VERSION,\n description=\"An interactive, SSL-capable, man-in-the-middle HTTP proxy for penetration testers and software developers.\",\n long_description=long_description,\n url=\"http://mitmproxy.org\",\n author=\"Aldo Cortesi\",\n author_email=\"[email protected]\",\n license=\"MIT\",\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Environment :: Console :: Curses\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Topic :: Security\",\n \"Topic :: Internet\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: Proxy Servers\",\n \"Topic :: Software Development :: Testing\"\n ],\n packages=find_packages(include=[\n \"mitmproxy\", \"mitmproxy.*\",\n \"pathod\", \"pathod.*\",\n ]),\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n \"mitmproxy = mitmproxy.tools.main:mitmproxy\",\n \"mitmdump = mitmproxy.tools.main:mitmdump\",\n \"mitmweb = mitmproxy.tools.main:mitmweb\",\n \"pathod = pathod.pathod_cmdline:go_pathod\",\n \"pathoc = pathod.pathoc_cmdline:go_pathoc\"\n ]\n },\n # https://packaging.python.org/en/latest/requirements/#install-requires\n # It is not considered best 
practice to use install_requires to pin dependencies to specific versions.\n    install_requires=[\n        \"blinker>=1.4, <1.5\",\n
\"brotlipy>=0.7.0,<0.8\",\n \"certifi>=2015.11.20.1\", # no semver here - this should always be on the last release!\n \"click>=6.2, <7\",\n \"cryptography>=2.1.4,<2.2\",\n \"h2>=3.0.1,<4\",\n \"hyperframe>=5.1.0,<6\",\n \"kaitaistruct>=0.7,<0.9\",\n \"ldap3>=2.4,<2.5\",\n \"passlib>=1.6.5, <1.8\",\n \"pyasn1>=0.3.1,<0.5\",\n \"pyOpenSSL>=17.5,<17.6\",\n \"pyparsing>=2.1.3, <2.3\",\n \"pyperclip>=1.6.0, <1.7\",\n \"ruamel.yaml>=0.13.2, <0.16\",\n \"sortedcontainers>=1.5.4, <1.6\",\n \"tornado>=4.3, <4.6\",\n \"urwid>=2.0.1,<2.1\",\n \"wsproto>=0.11.0,<0.12.0\",\n ],\n extras_require={\n ':sys_platform == \"win32\"': [\n \"pydivert>=2.0.3,<2.2\",\n ],\n 'dev': [\n \"flake8>=3.5, <3.6\",\n \"Flask>=0.10.1, <0.13\",\n \"mypy>=0.560,<0.561\",\n \"pytest-cov>=2.5.1,<3\",\n \"pytest-faulthandler>=1.3.1,<2\",\n \"pytest-timeout>=1.2.1,<2\",\n \"pytest-xdist>=1.22,<2\",\n \"pytest>=3.3,<4\",\n \"requests>=2.9.1, <3\",\n \"tox>=2.3, <3\",\n \"rstcheck>=2.2, <4.0\",\n ],\n 'examples': [\n \"beautifulsoup4>=4.4.1, <4.7\",\n \"Pillow>=4.3,<5.1\",\n ]\n }\n)\n", "path": "setup.py"}]} | 1,718 | 367 |
gh_patches_debug_33273 | rasdani/github-patches | git_diff | GeotrekCE__Geotrek-admin-1377 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Should not disable edit button if having bypass structure permission
Workaround: write url by hand (eg. "/trek/edit/1/").
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geotrek/authent/models.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """
4 Models to manage users and profiles
5 """
6 from django.db import models
7 from django.contrib.auth.models import User
8 from django.conf import settings
9 from django.utils.translation import ugettext_lazy as _
10 from django.dispatch import receiver
11 from django.contrib.auth.signals import user_logged_in
12
13 from geotrek.common.utils import reify
14
15
16 class Structure(models.Model):
17 """
18 Represents an organisational structure, to which users are related.
19 """
20 name = models.CharField(max_length=256, verbose_name=_(u"Nom"))
21
22 def __unicode__(self):
23 return self.name
24
25 class Meta:
26 verbose_name = _(u"Structure")
27 verbose_name_plural = _(u"Structures")
28 ordering = ['name']
29 permissions = (("can_bypass_structure", _("Can by structure")),)
30
31
32 def default_structure():
33 """ Create default structure if necessary """
34 return Structure.objects.get_or_create(name=settings.DEFAULT_STRUCTURE_NAME)[0]
35
36
37 class StructureRelatedQuerySet(models.query.QuerySet):
38 def for_user(self, user):
39 return StructureRelatedQuerySet.queryset_for_user(self, user)
40
41 @staticmethod
42 def queryset_for_user(queryset, user):
43 return queryset.filter(structure=user.profile.structure)
44
45
46 class StructureRelatedManager(models.Manager):
47 """ A simple manager to manage structure related objects"""
48 def get_queryset(self):
49 return StructureRelatedQuerySet(self.model, using=self._db)
50
51 def for_user(self, user):
52 """ Filter by user's structure """
53 return self.get_queryset().for_user(user)
54
55
56 class StructureRelated(models.Model):
57 """
58 A mixin used for any entities that belong to a structure
59 """
60 structure = models.ForeignKey(Structure, default=default_structure,
61 verbose_name=_(u"Related structure"), db_column='structure')
62
63 objects = models.Manager()
64 in_structure = StructureRelatedManager()
65
66 @classmethod
67 def for_user(cls, user):
68 """ Shortcut to manager's filter by user """
69 return cls.in_structure.for_user(user)
70
71 def same_structure(self, user):
72 """ Returns True if the user is in the same structure, False otherwise. """
73 return user.profile.structure == self.structure
74
75 class Meta:
76 abstract = True
77 verbose_name = _(u"Related structures")
78 verbose_name_plural = _(u"Related structure")
79
80
81 class UserProfile(StructureRelated):
82 """
83 A custom user profile
84 """
85 user = models.OneToOneField(User, unique=True)
86
87 language = models.CharField(_(u"Language"), max_length=10,
88 choices=settings.LANGUAGES,
89 default=settings.LANGUAGE_CODE)
90
91 class Meta:
92 verbose_name = _(u"User's profile")
93 verbose_name_plural = _(u"User's profiles")
94
95 def __unicode__(self):
96 return _("Profile for %s") % self.user
97
98 User.profile = reify(lambda u: UserProfile.objects.get_or_create(user=u)[0])
99
100
101 @receiver(user_logged_in)
102 def lang(sender, **kwargs):
103 """ Set user's language in session when he logs in. """
104 lang_code = kwargs['user'].profile.language
105 kwargs['request'].session['django_language'] = lang_code
106
```
Path: `geotrek/authent/decorators.py`
Content:
```
1 from functools import wraps
2
3 from django.shortcuts import redirect
4 from django.contrib import messages
5 from django.http import HttpResponseRedirect
6 from django.utils.translation import ugettext_lazy as _
7 from django.utils.decorators import available_attrs
8
9
10 def same_structure_required(redirect_to):
11 """
12 A decorator for class-based views. It relies on ``self.get_object()``
13 method object, and assumes decorated views to handle ``StructureRelated``
14 objects.
15 """
16 def decorator(view_func):
17 @wraps(view_func, assigned=available_attrs(view_func))
18 def _wrapped_view(self, request, *args, **kwargs):
19 result = view_func(self, request, *args, **kwargs)
20
21 # Superuser is always allowed
22 if request.user.is_superuser:
23 return result
24
25 if isinstance(result, HttpResponseRedirect):
26 return result
27
28 can_bypass_structure = request.user.has_perm('can_bypass_structure')
29 obj = hasattr(self, 'get_object') and self.get_object() or getattr(self, 'object', None)
30 if can_bypass_structure or (obj and obj.same_structure(request.user)):
31 return result
32 messages.warning(request, _(u'Access to the requested resource is restricted by structure. You have been redirected.'))
33
34 return redirect(redirect_to, *args, **kwargs)
35 return _wrapped_view
36 return decorator
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/geotrek/authent/decorators.py b/geotrek/authent/decorators.py
--- a/geotrek/authent/decorators.py
+++ b/geotrek/authent/decorators.py
@@ -18,16 +18,11 @@
def _wrapped_view(self, request, *args, **kwargs):
result = view_func(self, request, *args, **kwargs)
- # Superuser is always allowed
- if request.user.is_superuser:
- return result
-
if isinstance(result, HttpResponseRedirect):
return result
- can_bypass_structure = request.user.has_perm('can_bypass_structure')
obj = hasattr(self, 'get_object') and self.get_object() or getattr(self, 'object', None)
- if can_bypass_structure or (obj and obj.same_structure(request.user)):
+ if obj.same_structure(request.user):
return result
messages.warning(request, _(u'Access to the requested resource is restricted by structure. You have been redirected.'))
diff --git a/geotrek/authent/models.py b/geotrek/authent/models.py
--- a/geotrek/authent/models.py
+++ b/geotrek/authent/models.py
@@ -26,7 +26,7 @@
verbose_name = _(u"Structure")
verbose_name_plural = _(u"Structures")
ordering = ['name']
- permissions = (("can_bypass_structure", _("Can by structure")),)
+ permissions = (("can_bypass_structure", _("Can bypass structure")),)
def default_structure():
@@ -69,8 +69,11 @@
return cls.in_structure.for_user(user)
def same_structure(self, user):
- """ Returns True if the user is in the same structure, False otherwise. """
- return user.profile.structure == self.structure
+ """ Returns True if the user is in the same structure or has
+ bypass_structure permission, False otherwise. """
+ return (user.profile.structure == self.structure or
+ user.is_superuser or
+ user.has_perm('authent.can_bypass_structure'))
class Meta:
abstract = True
| {"golden_diff": "diff --git a/geotrek/authent/decorators.py b/geotrek/authent/decorators.py\n--- a/geotrek/authent/decorators.py\n+++ b/geotrek/authent/decorators.py\n@@ -18,16 +18,11 @@\n def _wrapped_view(self, request, *args, **kwargs):\n result = view_func(self, request, *args, **kwargs)\n \n- # Superuser is always allowed\n- if request.user.is_superuser:\n- return result\n-\n if isinstance(result, HttpResponseRedirect):\n return result\n \n- can_bypass_structure = request.user.has_perm('can_bypass_structure')\n obj = hasattr(self, 'get_object') and self.get_object() or getattr(self, 'object', None)\n- if can_bypass_structure or (obj and obj.same_structure(request.user)):\n+ if obj.same_structure(request.user):\n return result\n messages.warning(request, _(u'Access to the requested resource is restricted by structure. You have been redirected.'))\n \ndiff --git a/geotrek/authent/models.py b/geotrek/authent/models.py\n--- a/geotrek/authent/models.py\n+++ b/geotrek/authent/models.py\n@@ -26,7 +26,7 @@\n verbose_name = _(u\"Structure\")\n verbose_name_plural = _(u\"Structures\")\n ordering = ['name']\n- permissions = ((\"can_bypass_structure\", _(\"Can by structure\")),)\n+ permissions = ((\"can_bypass_structure\", _(\"Can bypass structure\")),)\n \n \n def default_structure():\n@@ -69,8 +69,11 @@\n return cls.in_structure.for_user(user)\n \n def same_structure(self, user):\n- \"\"\" Returns True if the user is in the same structure, False otherwise. \"\"\"\n- return user.profile.structure == self.structure\n+ \"\"\" Returns True if the user is in the same structure or has\n+ bypass_structure permission, False otherwise. \"\"\"\n+ return (user.profile.structure == self.structure or\n+ user.is_superuser or\n+ user.has_perm('authent.can_bypass_structure'))\n \n class Meta:\n abstract = True\n", "issue": "Should not disable edit button if having bypass structure permission\nWorkaround: write url by hand (eg. 
\"/trek/edit/1/\").\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\n Models to manage users and profiles\n\"\"\"\nfrom django.db import models\nfrom django.contrib.auth.models import User\nfrom django.conf import settings\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.dispatch import receiver\nfrom django.contrib.auth.signals import user_logged_in\n\nfrom geotrek.common.utils import reify\n\n\nclass Structure(models.Model):\n \"\"\"\n Represents an organisational structure, to which users are related.\n \"\"\"\n name = models.CharField(max_length=256, verbose_name=_(u\"Nom\"))\n\n def __unicode__(self):\n return self.name\n\n class Meta:\n verbose_name = _(u\"Structure\")\n verbose_name_plural = _(u\"Structures\")\n ordering = ['name']\n permissions = ((\"can_bypass_structure\", _(\"Can by structure\")),)\n\n\ndef default_structure():\n \"\"\" Create default structure if necessary \"\"\"\n return Structure.objects.get_or_create(name=settings.DEFAULT_STRUCTURE_NAME)[0]\n\n\nclass StructureRelatedQuerySet(models.query.QuerySet):\n def for_user(self, user):\n return StructureRelatedQuerySet.queryset_for_user(self, user)\n\n @staticmethod\n def queryset_for_user(queryset, user):\n return queryset.filter(structure=user.profile.structure)\n\n\nclass StructureRelatedManager(models.Manager):\n \"\"\" A simple manager to manage structure related objects\"\"\"\n def get_queryset(self):\n return StructureRelatedQuerySet(self.model, using=self._db)\n\n def for_user(self, user):\n \"\"\" Filter by user's structure \"\"\"\n return self.get_queryset().for_user(user)\n\n\nclass StructureRelated(models.Model):\n \"\"\"\n A mixin used for any entities that belong to a structure\n \"\"\"\n structure = models.ForeignKey(Structure, default=default_structure,\n verbose_name=_(u\"Related structure\"), db_column='structure')\n\n objects = models.Manager()\n in_structure = StructureRelatedManager()\n\n @classmethod\n def for_user(cls, user):\n \"\"\" Shortcut to manager's filter by user \"\"\"\n return cls.in_structure.for_user(user)\n\n def same_structure(self, user):\n \"\"\" Returns True if the user is in the same structure, False otherwise. \"\"\"\n return user.profile.structure == self.structure\n\n class Meta:\n abstract = True\n verbose_name = _(u\"Related structures\")\n verbose_name_plural = _(u\"Related structure\")\n\n\nclass UserProfile(StructureRelated):\n \"\"\"\n A custom user profile\n \"\"\"\n user = models.OneToOneField(User, unique=True)\n\n language = models.CharField(_(u\"Language\"), max_length=10,\n choices=settings.LANGUAGES,\n default=settings.LANGUAGE_CODE)\n\n class Meta:\n verbose_name = _(u\"User's profile\")\n verbose_name_plural = _(u\"User's profiles\")\n\n def __unicode__(self):\n return _(\"Profile for %s\") % self.user\n\nUser.profile = reify(lambda u: UserProfile.objects.get_or_create(user=u)[0])\n\n\n@receiver(user_logged_in)\ndef lang(sender, **kwargs):\n \"\"\" Set user's language in session when he logs in. \"\"\"\n lang_code = kwargs['user'].profile.language\n kwargs['request'].session['django_language'] = lang_code\n", "path": "geotrek/authent/models.py"}, {"content": "from functools import wraps\n\nfrom django.shortcuts import redirect\nfrom django.contrib import messages\nfrom django.http import HttpResponseRedirect\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.utils.decorators import available_attrs\n\n\ndef same_structure_required(redirect_to):\n \"\"\"\n A decorator for class-based views. 
It relies on ``self.get_object()``\n method object, and assumes decorated views to handle ``StructureRelated``\n objects.\n \"\"\"\n def decorator(view_func):\n @wraps(view_func, assigned=available_attrs(view_func))\n def _wrapped_view(self, request, *args, **kwargs):\n result = view_func(self, request, *args, **kwargs)\n\n # Superuser is always allowed\n if request.user.is_superuser:\n return result\n\n if isinstance(result, HttpResponseRedirect):\n return result\n\n can_bypass_structure = request.user.has_perm('can_bypass_structure')\n obj = hasattr(self, 'get_object') and self.get_object() or getattr(self, 'object', None)\n if can_bypass_structure or (obj and obj.same_structure(request.user)):\n return result\n messages.warning(request, _(u'Access to the requested resource is restricted by structure. You have been redirected.'))\n\n return redirect(redirect_to, *args, **kwargs)\n return _wrapped_view\n return decorator\n", "path": "geotrek/authent/decorators.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\n Models to manage users and profiles\n\"\"\"\nfrom django.db import models\nfrom django.contrib.auth.models import User\nfrom django.conf import settings\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.dispatch import receiver\nfrom django.contrib.auth.signals import user_logged_in\n\nfrom geotrek.common.utils import reify\n\n\nclass Structure(models.Model):\n \"\"\"\n Represents an organisational structure, to which users are related.\n \"\"\"\n name = models.CharField(max_length=256, verbose_name=_(u\"Nom\"))\n\n def __unicode__(self):\n return self.name\n\n class Meta:\n verbose_name = _(u\"Structure\")\n verbose_name_plural = _(u\"Structures\")\n ordering = ['name']\n permissions = ((\"can_bypass_structure\", _(\"Can bypass structure\")),)\n\n\ndef default_structure():\n \"\"\" Create default structure if necessary \"\"\"\n return Structure.objects.get_or_create(name=settings.DEFAULT_STRUCTURE_NAME)[0]\n\n\nclass StructureRelatedQuerySet(models.query.QuerySet):\n def for_user(self, user):\n return StructureRelatedQuerySet.queryset_for_user(self, user)\n\n @staticmethod\n def queryset_for_user(queryset, user):\n return queryset.filter(structure=user.profile.structure)\n\n\nclass StructureRelatedManager(models.Manager):\n \"\"\" A simple manager to manage structure related objects\"\"\"\n def get_queryset(self):\n return StructureRelatedQuerySet(self.model, using=self._db)\n\n def for_user(self, user):\n \"\"\" Filter by user's structure \"\"\"\n return self.get_queryset().for_user(user)\n\n\nclass StructureRelated(models.Model):\n \"\"\"\n A mixin used for any entities that belong to a structure\n \"\"\"\n structure = models.ForeignKey(Structure, default=default_structure,\n verbose_name=_(u\"Related structure\"), db_column='structure')\n\n objects = models.Manager()\n in_structure = StructureRelatedManager()\n\n @classmethod\n def for_user(cls, user):\n \"\"\" Shortcut to manager's filter by user \"\"\"\n return cls.in_structure.for_user(user)\n\n def same_structure(self, user):\n \"\"\" Returns True if the user is in the same structure or has\n bypass_structure permission, False otherwise. 
\"\"\"\n return (user.profile.structure == self.structure or\n user.is_superuser or\n user.has_perm('authent.can_bypass_structure'))\n\n class Meta:\n abstract = True\n verbose_name = _(u\"Related structures\")\n verbose_name_plural = _(u\"Related structure\")\n\n\nclass UserProfile(StructureRelated):\n \"\"\"\n A custom user profile\n \"\"\"\n user = models.OneToOneField(User, unique=True)\n\n language = models.CharField(_(u\"Language\"), max_length=10,\n choices=settings.LANGUAGES,\n default=settings.LANGUAGE_CODE)\n\n class Meta:\n verbose_name = _(u\"User's profile\")\n verbose_name_plural = _(u\"User's profiles\")\n\n def __unicode__(self):\n return _(\"Profile for %s\") % self.user\n\nUser.profile = reify(lambda u: UserProfile.objects.get_or_create(user=u)[0])\n\n\n@receiver(user_logged_in)\ndef lang(sender, **kwargs):\n \"\"\" Set user's language in session when he logs in. \"\"\"\n lang_code = kwargs['user'].profile.language\n kwargs['request'].session['django_language'] = lang_code\n", "path": "geotrek/authent/models.py"}, {"content": "from functools import wraps\n\nfrom django.shortcuts import redirect\nfrom django.contrib import messages\nfrom django.http import HttpResponseRedirect\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.utils.decorators import available_attrs\n\n\ndef same_structure_required(redirect_to):\n \"\"\"\n A decorator for class-based views. It relies on ``self.get_object()``\n method object, and assumes decorated views to handle ``StructureRelated``\n objects.\n \"\"\"\n def decorator(view_func):\n @wraps(view_func, assigned=available_attrs(view_func))\n def _wrapped_view(self, request, *args, **kwargs):\n result = view_func(self, request, *args, **kwargs)\n\n if isinstance(result, HttpResponseRedirect):\n return result\n\n obj = hasattr(self, 'get_object') and self.get_object() or getattr(self, 'object', None)\n if obj.same_structure(request.user):\n return result\n messages.warning(request, _(u'Access to the requested resource is restricted by structure. You have been redirected.'))\n\n return redirect(redirect_to, *args, **kwargs)\n return _wrapped_view\n return decorator\n", "path": "geotrek/authent/decorators.py"}]} | 1,548 | 465 |
gh_patches_debug_19323 | rasdani/github-patches | git_diff | PokemonGoF__PokemonGo-Bot-5036 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Crash on Level Up
I'm gonna guess an issue with:
https://github.com/PokemonGoF/PokemonGo-Bot/pull/5016
which is also the version im on
```
Traceback (most recent call last):
File "pokecli.py", line 781, in <module>
main()
File "pokecli.py", line 139, in main
bot.tick()
File "C:\Users\Steve\Downloads\PokemonGo-Bot\pokemongo_bot\__init__.py", line 658, in tick
if worker.work() == WorkerResult.RUNNING:
File "C:\Users\Steve\Downloads\PokemonGo-Bot\pokemongo_bot\cell_workers\collect_level_up_reward.py", line 37, in work
self._collect_level_reward()
File "C:\Users\Steve\Downloads\PokemonGo-Bot\pokemongo_bot\cell_workers\collect_level_up_reward.py", line 70, in _collect_level_reward
'items': ', '.join(["{}x {}".format(data[x], x) for x in data])
TypeError: list indices must be integers, not dict
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pokemongo_bot/cell_workers/collect_level_up_reward.py`
Content:
```
1 import sys
2
3 from pokemongo_bot.base_task import BaseTask
4 from pokemongo_bot import inventory
5
6
7 class CollectLevelUpReward(BaseTask):
8 SUPPORTED_TASK_API_VERSION = 1
9
10 current_level = 0
11 previous_level = 0
12
13 def initialize(self):
14 self._process_config()
15 self.current_level = inventory.player().level
16 self.previous_level = 0
17
18 def work(self):
19 if self._should_run():
20 self.current_level = inventory.player().level
21
22 if self.collect_reward:
23 # let's check level reward on bot initialization
24 # to be able get rewards for old bots
25 if self.previous_level == 0:
26 self._collect_level_reward()
27 # level up situation
28 elif self.current_level > self.previous_level:
29 self.emit_event(
30 'level_up',
31 formatted='Level up from {previous_level} to {current_level}',
32 data={
33 'previous_level': self.previous_level,
34 'current_level': self.current_level
35 }
36 )
37 self._collect_level_reward()
38
39 if self.level_limit != -1 and self.current_level >= self.level_limit:
40 sys.exit("You have reached your target level! Exiting now.")
41
42 self.previous_level = self.current_level
43
44 def _process_config(self):
45 self.level_limit = self.config.get('level_limit', -1)
46 self.collect_reward = self.config.get('collect_reward', True)
47
48 def _should_run(self):
49 return self.level_limit != -1 or self.collect_reward
50
51 def _collect_level_reward(self):
52 response_dict = self.bot.api.level_up_rewards(level=self.current_level)
53 if 'status_code' in response_dict and response_dict['status_code'] == 1:
54 data = (response_dict
55 .get('responses', {})
56 .get('LEVEL_UP_REWARDS', {})
57 .get('items_awarded', []))
58
59 for item in data:
60 if 'item_id' in item and str(item['item_id']) in self.bot.item_list:
61 got_item = self.bot.item_list[str(item['item_id'])]
62 item['name'] = got_item
63 count = 'item_count' in item and item['item_count'] or 0
64 inventory.items().get(item['item_id']).add(count)
65 try:
66 self.emit_event(
67 'level_up_reward',
68 formatted='Received level up reward: {items}',
69 data={
70 'items': ', '.join(["{}x {}".format(data[x], x) for x in data])
71 }
72 )
73 except TypeError:
74 pass
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pokemongo_bot/cell_workers/collect_level_up_reward.py b/pokemongo_bot/cell_workers/collect_level_up_reward.py
--- a/pokemongo_bot/cell_workers/collect_level_up_reward.py
+++ b/pokemongo_bot/cell_workers/collect_level_up_reward.py
@@ -62,13 +62,11 @@
item['name'] = got_item
count = 'item_count' in item and item['item_count'] or 0
inventory.items().get(item['item_id']).add(count)
- try:
- self.emit_event(
- 'level_up_reward',
- formatted='Received level up reward: {items}',
- data={
- 'items': ', '.join(["{}x {}".format(data[x], x) for x in data])
- }
- )
- except TypeError:
- pass
+ self.emit_event(
+ 'level_up_reward',
+ formatted='Received level up reward: {items}',
+ data={
+ # [{'item_id': 3, 'name': u'Ultraball', 'item_count': 10}, {'item_id': 103, 'name': u'Hyper Potion', 'item_count': 10}]
+ 'items': ', '.join(["{}x {}".format(x['item_count'], x['name']) for x in data])
+ }
+ )
| {"golden_diff": "diff --git a/pokemongo_bot/cell_workers/collect_level_up_reward.py b/pokemongo_bot/cell_workers/collect_level_up_reward.py\n--- a/pokemongo_bot/cell_workers/collect_level_up_reward.py\n+++ b/pokemongo_bot/cell_workers/collect_level_up_reward.py\n@@ -62,13 +62,11 @@\n item['name'] = got_item\n count = 'item_count' in item and item['item_count'] or 0\n inventory.items().get(item['item_id']).add(count)\n- try:\n- self.emit_event(\n- 'level_up_reward',\n- formatted='Received level up reward: {items}',\n- data={\n- 'items': ', '.join([\"{}x {}\".format(data[x], x) for x in data])\n- }\n- )\n- except TypeError:\n- pass\n+ self.emit_event(\n+ 'level_up_reward',\n+ formatted='Received level up reward: {items}',\n+ data={\n+ # [{'item_id': 3, 'name': u'Ultraball', 'item_count': 10}, {'item_id': 103, 'name': u'Hyper Potion', 'item_count': 10}]\n+ 'items': ', '.join([\"{}x {}\".format(x['item_count'], x['name']) for x in data])\n+ }\n+ )\n", "issue": "Crash on Level Up\nI'm gonna guess an issue with:\nhttps://github.com/PokemonGoF/PokemonGo-Bot/pull/5016\n\nwhich is also the version im on\n\n```\nTraceback (most recent call last):\n File \"pokecli.py\", line 781, in <module>\n main()\n File \"pokecli.py\", line 139, in main\n bot.tick()\n File \"C:\\Users\\Steve\\Downloads\\PokemonGo-Bot\\pokemongo_bot\\__init__.py\", line 658, in tick\n if worker.work() == WorkerResult.RUNNING:\n File \"C:\\Users\\Steve\\Downloads\\PokemonGo-Bot\\pokemongo_bot\\cell_workers\\collect_level_up_reward.py\", line 37, in work\n self._collect_level_reward()\n File \"C:\\Users\\Steve\\Downloads\\PokemonGo-Bot\\pokemongo_bot\\cell_workers\\collect_level_up_reward.py\", line 70, in _collect_level_reward\n 'items': ', '.join([\"{}x {}\".format(data[x], x) for x in data])\nTypeError: list indices must be integers, not dict\n```\n\n", "before_files": [{"content": "import sys\n\nfrom pokemongo_bot.base_task import BaseTask\nfrom pokemongo_bot import inventory\n\n\nclass CollectLevelUpReward(BaseTask):\n SUPPORTED_TASK_API_VERSION = 1\n\n current_level = 0\n previous_level = 0\n\n def initialize(self):\n self._process_config()\n self.current_level = inventory.player().level\n self.previous_level = 0\n\n def work(self):\n if self._should_run():\n self.current_level = inventory.player().level\n\n if self.collect_reward:\n # let's check level reward on bot initialization\n # to be able get rewards for old bots\n if self.previous_level == 0:\n self._collect_level_reward()\n # level up situation\n elif self.current_level > self.previous_level:\n self.emit_event(\n 'level_up',\n formatted='Level up from {previous_level} to {current_level}',\n data={\n 'previous_level': self.previous_level,\n 'current_level': self.current_level\n }\n )\n self._collect_level_reward()\n\n if self.level_limit != -1 and self.current_level >= self.level_limit:\n sys.exit(\"You have reached your target level! 
Exiting now.\")\n\n self.previous_level = self.current_level\n\n def _process_config(self):\n self.level_limit = self.config.get('level_limit', -1)\n self.collect_reward = self.config.get('collect_reward', True)\n\n def _should_run(self):\n return self.level_limit != -1 or self.collect_reward\n\n def _collect_level_reward(self):\n response_dict = self.bot.api.level_up_rewards(level=self.current_level)\n if 'status_code' in response_dict and response_dict['status_code'] == 1:\n data = (response_dict\n .get('responses', {})\n .get('LEVEL_UP_REWARDS', {})\n .get('items_awarded', []))\n\n for item in data:\n if 'item_id' in item and str(item['item_id']) in self.bot.item_list:\n got_item = self.bot.item_list[str(item['item_id'])]\n item['name'] = got_item\n count = 'item_count' in item and item['item_count'] or 0\n inventory.items().get(item['item_id']).add(count)\n try:\n self.emit_event(\n 'level_up_reward',\n formatted='Received level up reward: {items}',\n data={\n 'items': ', '.join([\"{}x {}\".format(data[x], x) for x in data])\n }\n )\n except TypeError:\n pass\n", "path": "pokemongo_bot/cell_workers/collect_level_up_reward.py"}], "after_files": [{"content": "import sys\n\nfrom pokemongo_bot.base_task import BaseTask\nfrom pokemongo_bot import inventory\n\n\nclass CollectLevelUpReward(BaseTask):\n SUPPORTED_TASK_API_VERSION = 1\n\n current_level = 0\n previous_level = 0\n\n def initialize(self):\n self._process_config()\n self.current_level = inventory.player().level\n self.previous_level = 0\n\n def work(self):\n if self._should_run():\n self.current_level = inventory.player().level\n\n if self.collect_reward:\n # let's check level reward on bot initialization\n # to be able get rewards for old bots\n if self.previous_level == 0:\n self._collect_level_reward()\n # level up situation\n elif self.current_level > self.previous_level:\n self.emit_event(\n 'level_up',\n formatted='Level up from {previous_level} to {current_level}',\n data={\n 'previous_level': self.previous_level,\n 'current_level': self.current_level\n }\n )\n self._collect_level_reward()\n\n if self.level_limit != -1 and self.current_level >= self.level_limit:\n sys.exit(\"You have reached your target level! Exiting now.\")\n\n self.previous_level = self.current_level\n\n def _process_config(self):\n self.level_limit = self.config.get('level_limit', -1)\n self.collect_reward = self.config.get('collect_reward', True)\n\n def _should_run(self):\n return self.level_limit != -1 or self.collect_reward\n\n def _collect_level_reward(self):\n response_dict = self.bot.api.level_up_rewards(level=self.current_level)\n if 'status_code' in response_dict and response_dict['status_code'] == 1:\n data = (response_dict\n .get('responses', {})\n .get('LEVEL_UP_REWARDS', {})\n .get('items_awarded', []))\n\n for item in data:\n if 'item_id' in item and str(item['item_id']) in self.bot.item_list:\n got_item = self.bot.item_list[str(item['item_id'])]\n item['name'] = got_item\n count = 'item_count' in item and item['item_count'] or 0\n inventory.items().get(item['item_id']).add(count)\n self.emit_event(\n 'level_up_reward',\n formatted='Received level up reward: {items}',\n data={\n # [{'item_id': 3, 'name': u'Ultraball', 'item_count': 10}, {'item_id': 103, 'name': u'Hyper Potion', 'item_count': 10}]\n 'items': ', '.join([\"{}x {}\".format(x['item_count'], x['name']) for x in data])\n }\n )\n", "path": "pokemongo_bot/cell_workers/collect_level_up_reward.py"}]} | 1,224 | 309 |
gh_patches_debug_664 | rasdani/github-patches | git_diff | fedora-infra__bodhi-507 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
setup.py test doesn't include extra_requires from fedmsg deps
```
======================================================================
ERROR: Failure: ImportError (No module named psutil)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/decause/.virtualenvs/bodhi-python2.7/lib/python2.7/site-packages/nose/loader.py", line 418, in loadTestsFromName
addr.filename, addr.module)
File "/home/decause/.virtualenvs/bodhi-python2.7/lib/python2.7/site-packages/nose/importer.py", line 47, in importFromPath
return self.importFromDir(dir_path, fqname)
File "/home/decause/.virtualenvs/bodhi-python2.7/lib/python2.7/site-packages/nose/importer.py", line 94, in importFromDir
mod = load_module(part_fqname, fh, filename, desc)
File "/home/decause/code/bodhi/bodhi/tests/test_masher.py", line 27, in <module>
from bodhi.consumers.masher import Masher, MasherThread
File "/home/decause/code/bodhi/bodhi/consumers/masher.py", line 30, in <module>
import fedmsg.consumers
File "/home/decause/code/bodhi/.eggs/fedmsg-0.16.0-py2.7.egg/fedmsg/consumers/__init__.py", line 25, in <module>
import psutil
ImportError: No module named psutil
----------------------------------------------------------------------
Ran 335 tests in 138.787s
FAILED (errors=1)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import __main__
2 __requires__ = __main__.__requires__ = 'WebOb>=1.4.1'
3 import pkg_resources
4
5 # The following two imports are required to shut up an
6 # atexit error when running tests with python 2.7
7 import logging
8 import multiprocessing
9
10 import os
11 import sys
12
13 from setuptools import setup, find_packages
14
15 here = os.path.abspath(os.path.dirname(__file__))
16 README = open(os.path.join(here, 'README.rst')).read()
17 CHANGES = open(os.path.join(here, 'CHANGES.txt')).read()
18
19 requires = [
20 'pyramid',
21 'pyramid_mako',
22 'pyramid_debugtoolbar',
23 'pyramid_tm',
24 'waitress',
25 'colander',
26 'cornice',
27
28 'python-openid',
29 'pyramid_fas_openid',
30 'packagedb-cli',
31
32 'sqlalchemy',
33 'zope.sqlalchemy',
34
35 'webhelpers',
36 'progressbar',
37
38 'bunch',
39
40 # for captchas
41 'cryptography',
42 'Pillow',
43
44 # Useful tools
45 'kitchen',
46 'python-fedora',
47 'pylibravatar',
48 'pyDNS',
49 'dogpile.cache',
50 'arrow',
51 'markdown',
52
53 # i18n, that we're not actually doing yet.
54 #'Babel',
55 #'lingua',
56
57 # External resources
58 'python-bugzilla',
59 'simplemediawiki',
60 'fedmsg',
61
62 'Sphinx',
63
64 # For the bodhi-client
65 'click',
66
67 'WebOb>=1.4.1',
68 ]
69
70 if sys.version_info[:3] < (2,7,0):
71 requires.append('importlib')
72
73 if sys.version_info[:3] < (2,5,0):
74 requires.append('pysqlite')
75
76 setup(name='bodhi',
77 version='2.0',
78 description='bodhi',
79 long_description=README + '\n\n' + CHANGES,
80 classifiers=[
81 "Programming Language :: Python",
82 "Framework :: Pyramid",
83 "Topic :: Internet :: WWW/HTTP",
84 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
85 ],
86 author='',
87 author_email='',
88 url='',
89 keywords='web fedora pyramid',
90 packages=find_packages(),
91 include_package_data=True,
92 zip_safe=False,
93 install_requires = requires,
94 tests_require = [
95 'nose',
96 'nose-cov',
97 'webtest',
98 'mock'
99 ],
100 test_suite="nose.collector",
101 message_extractors = { '.': [
102 #('**.py', 'lingua_python', None),
103 #('**.mak', 'lingua_xml', None),
104 ]},
105 entry_points = """\
106 [paste.app_factory]
107 main = bodhi:main
108 [console_scripts]
109 initialize_bodhi_db = bodhi.scripts.initializedb:main
110 bodhi = bodhi.cli:cli
111 bodhi-push = bodhi.push:push
112 bodhi-expire-overrides = bodhi.scripts.expire_overrides:main
113 [moksha.consumer]
114 masher = bodhi.consumers.masher:Masher
115 updates = bodhi.consumers.updates:UpdatesHandler
116 """,
117 paster_plugins=['pyramid'],
118 )
119
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -57,7 +57,9 @@
# External resources
'python-bugzilla',
'simplemediawiki',
- 'fedmsg',
+
+ # "python setup.py test" needs one of fedmsg's setup.py extra_requires
+ 'fedmsg[consumers]',
'Sphinx',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -57,7 +57,9 @@\n # External resources\n 'python-bugzilla',\n 'simplemediawiki',\n- 'fedmsg',\n+\n+ # \"python setup.py test\" needs one of fedmsg's setup.py extra_requires\n+ 'fedmsg[consumers]',\n \n 'Sphinx',\n", "issue": "setup.py test doesn't include extra_requires from fedmsg deps\n```\n======================================================================\nERROR: Failure: ImportError (No module named psutil)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"/home/decause/.virtualenvs/bodhi-python2.7/lib/python2.7/site-packages/nose/loader.py\", line 418, in loadTestsFromName\n addr.filename, addr.module)\n File \"/home/decause/.virtualenvs/bodhi-python2.7/lib/python2.7/site-packages/nose/importer.py\", line 47, in importFromPath\n return self.importFromDir(dir_path, fqname)\n File \"/home/decause/.virtualenvs/bodhi-python2.7/lib/python2.7/site-packages/nose/importer.py\", line 94, in importFromDir\n mod = load_module(part_fqname, fh, filename, desc)\n File \"/home/decause/code/bodhi/bodhi/tests/test_masher.py\", line 27, in <module>\n from bodhi.consumers.masher import Masher, MasherThread\n File \"/home/decause/code/bodhi/bodhi/consumers/masher.py\", line 30, in <module>\n import fedmsg.consumers\n File \"/home/decause/code/bodhi/.eggs/fedmsg-0.16.0-py2.7.egg/fedmsg/consumers/__init__.py\", line 25, in <module>\n import psutil\nImportError: No module named psutil\n\n----------------------------------------------------------------------\nRan 335 tests in 138.787s\n\nFAILED (errors=1)\n```\n\n", "before_files": [{"content": "import __main__\n__requires__ = __main__.__requires__ = 'WebOb>=1.4.1'\nimport pkg_resources\n\n# The following two imports are required to shut up an\n# atexit error when running tests with python 2.7\nimport logging\nimport multiprocessing\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\nREADME = open(os.path.join(here, 'README.rst')).read()\nCHANGES = open(os.path.join(here, 'CHANGES.txt')).read()\n\nrequires = [\n 'pyramid',\n 'pyramid_mako',\n 'pyramid_debugtoolbar',\n 'pyramid_tm',\n 'waitress',\n 'colander',\n 'cornice',\n\n 'python-openid',\n 'pyramid_fas_openid',\n 'packagedb-cli',\n\n 'sqlalchemy',\n 'zope.sqlalchemy',\n\n 'webhelpers',\n 'progressbar',\n\n 'bunch',\n\n # for captchas\n 'cryptography',\n 'Pillow',\n\n # Useful tools\n 'kitchen',\n 'python-fedora',\n 'pylibravatar',\n 'pyDNS',\n 'dogpile.cache',\n 'arrow',\n 'markdown',\n\n # i18n, that we're not actually doing yet.\n #'Babel',\n #'lingua',\n\n # External resources\n 'python-bugzilla',\n 'simplemediawiki',\n 'fedmsg',\n\n 'Sphinx',\n\n # For the bodhi-client\n 'click',\n\n 'WebOb>=1.4.1',\n ]\n\nif sys.version_info[:3] < (2,7,0):\n requires.append('importlib')\n\nif sys.version_info[:3] < (2,5,0):\n requires.append('pysqlite')\n\nsetup(name='bodhi',\n version='2.0',\n description='bodhi',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n \"Programming Language :: Python\",\n \"Framework :: Pyramid\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n ],\n author='',\n author_email='',\n url='',\n keywords='web fedora pyramid',\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires = requires,\n tests_require = [\n 'nose',\n 'nose-cov',\n 'webtest',\n 'mock'\n 
],\n test_suite=\"nose.collector\",\n message_extractors = { '.': [\n #('**.py', 'lingua_python', None),\n #('**.mak', 'lingua_xml', None),\n ]},\n entry_points = \"\"\"\\\n [paste.app_factory]\n main = bodhi:main\n [console_scripts]\n initialize_bodhi_db = bodhi.scripts.initializedb:main\n bodhi = bodhi.cli:cli\n bodhi-push = bodhi.push:push\n bodhi-expire-overrides = bodhi.scripts.expire_overrides:main\n [moksha.consumer]\n masher = bodhi.consumers.masher:Masher\n updates = bodhi.consumers.updates:UpdatesHandler\n \"\"\",\n paster_plugins=['pyramid'],\n )\n\n", "path": "setup.py"}], "after_files": [{"content": "import __main__\n__requires__ = __main__.__requires__ = 'WebOb>=1.4.1'\nimport pkg_resources\n\n# The following two imports are required to shut up an\n# atexit error when running tests with python 2.7\nimport logging\nimport multiprocessing\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\nREADME = open(os.path.join(here, 'README.rst')).read()\nCHANGES = open(os.path.join(here, 'CHANGES.txt')).read()\n\nrequires = [\n 'pyramid',\n 'pyramid_mako',\n 'pyramid_debugtoolbar',\n 'pyramid_tm',\n 'waitress',\n 'colander',\n 'cornice',\n\n 'python-openid',\n 'pyramid_fas_openid',\n 'packagedb-cli',\n\n 'sqlalchemy',\n 'zope.sqlalchemy',\n\n 'webhelpers',\n 'progressbar',\n\n 'bunch',\n\n # for captchas\n 'cryptography',\n 'Pillow',\n\n # Useful tools\n 'kitchen',\n 'python-fedora',\n 'pylibravatar',\n 'pyDNS',\n 'dogpile.cache',\n 'arrow',\n 'markdown',\n\n # i18n, that we're not actually doing yet.\n #'Babel',\n #'lingua',\n\n # External resources\n 'python-bugzilla',\n 'simplemediawiki',\n\n # \"python setup.py test\" needs one of fedmsg's setup.py extra_requires\n 'fedmsg[consumers]',\n\n 'Sphinx',\n\n # For the bodhi-client\n 'click',\n\n 'WebOb>=1.4.1',\n ]\n\nif sys.version_info[:3] < (2,7,0):\n requires.append('importlib')\n\nif sys.version_info[:3] < (2,5,0):\n requires.append('pysqlite')\n\nsetup(name='bodhi',\n version='2.0',\n description='bodhi',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n \"Programming Language :: Python\",\n \"Framework :: Pyramid\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n ],\n author='',\n author_email='',\n url='',\n keywords='web fedora pyramid',\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires = requires,\n tests_require = [\n 'nose',\n 'nose-cov',\n 'webtest',\n 'mock'\n ],\n test_suite=\"nose.collector\",\n message_extractors = { '.': [\n #('**.py', 'lingua_python', None),\n #('**.mak', 'lingua_xml', None),\n ]},\n entry_points = \"\"\"\\\n [paste.app_factory]\n main = bodhi:main\n [console_scripts]\n initialize_bodhi_db = bodhi.scripts.initializedb:main\n bodhi = bodhi.cli:cli\n bodhi-push = bodhi.push:push\n bodhi-expire-overrides = bodhi.scripts.expire_overrides:main\n [moksha.consumer]\n masher = bodhi.consumers.masher:Masher\n updates = bodhi.consumers.updates:UpdatesHandler\n \"\"\",\n paster_plugins=['pyramid'],\n )\n\n", "path": "setup.py"}]} | 1,613 | 93 |
gh_patches_debug_25493 | rasdani/github-patches | git_diff | liqd__adhocracy4-211 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Subject with new line crashes email sending
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `adhocracy4/emails/mixins.py`
Content:
```
1 from email.mime.image import MIMEImage
2
3 from django.contrib.staticfiles import finders
4 from .base import EmailBase
5
6
7 class PlatformEmailMixin:
8 """
9 Attaches the static file images/logo.png so it can be used in an html
10 email.
11 """
12 def get_attachments(self):
13 attachments = super().get_attachments()
14 filename = (
15 finders.find('images/email_logo.png')
16 or finders.find('images/email_logo.svg')
17 )
18 if filename:
19 if filename.endswith('.png'):
20 imagetype = 'png'
21 else:
22 imagetype = 'svg+xml'
23
24 with open(filename, 'rb') as f:
25 logo = MIMEImage(f.read(), imagetype)
26
27 logo.add_header('Content-ID', '<{}>'.format('logo'))
28 return attachments + [logo]
29 return attachments
30
31
32 class SyncEmailMixin(EmailBase):
33 """Send Emails synchronously."""
34
35 @classmethod
36 def send(cls, object, *args, **kwargs):
37 """Call dispatch immediately"""
38 return cls().dispatch(object, *args, **kwargs)
39
```
Path: `adhocracy4/emails/base.py`
Content:
```
1 from django.conf import settings
2 from django.contrib.contenttypes.models import ContentType
3 from django.contrib.sites import models as site_models
4 from django.core.mail.message import EmailMultiAlternatives
5 from django.template.loader import select_template
6 from django.utils import translation
7
8 from . import tasks
9
10
11 class EmailBase:
12 site_id = 1
13 object = None
14 template_name = None
15 fallback_language = 'en'
16 for_moderator = False
17
18 def get_site(self):
19 return site_models.Site.objects.get(pk=self.site_id)
20
21 def get_host(self):
22 site = self.get_site()
23 ssl_enabled = True
24 if site.domain.startswith('localhost:'):
25 ssl_enabled = False
26
27 url = 'http{ssl_flag}://{domain}'.format(
28 ssl_flag='s' if ssl_enabled else '',
29 domain=site.domain,
30 )
31 return url
32
33 def get_context(self):
34 object_context_key = self.object.__class__.__name__.lower()
35 return {
36 'email': self,
37 'site': self.get_site(),
38 object_context_key: self.object
39 }
40
41 def get_receivers(self):
42 return []
43
44 def get_attachments(self):
45 return []
46
47 def get_languages(self, receiver):
48 return [translation.get_language(), self.fallback_language]
49
50 def get_reply_to(self):
51 return None
52
53 @classmethod
54 def send(cls, object, *args, **kwargs):
55 """Send email asynchronously.
56
57 NOTE: args and kwargs must be JSON serializable.
58 """
59 ct = ContentType.objects.get_for_model(object)
60 tasks.send_async(
61 cls.__module__, cls.__name__,
62 ct.app_label, ct.model, object.pk,
63 args, kwargs)
64 return []
65
66 def render(self, template_name, context):
67 languages = self.get_languages(context['receiver'])
68 template = select_template([
69 '{}.{}.email'.format(template_name, lang)
70 for lang in languages
71 ])
72
73 # Get the actually chosen language from the template name
74 language = template.template.name.split('.', 2)[-2]
75
76 with translation.override(language):
77 parts = []
78 for part_type in ('subject', 'txt', 'html'):
79 context['part_type'] = part_type
80 parts.append(template.render(context))
81 context.pop('part_type')
82
83 return tuple(parts)
84
85 def dispatch(self, object, *args, **kwargs):
86 self.object = object
87 self.kwargs = kwargs
88 receivers = self.get_receivers()
89 context = self.get_context()
90 context.update(kwargs)
91 attachments = self.get_attachments()
92 template = self.template_name
93
94 mails = []
95 for receiver in receivers:
96 context['receiver'] = receiver
97 (subject, text, html) = self.render(template, context)
98 context.pop('receiver')
99
100 if hasattr(receiver, 'email'):
101 to_address = receiver.email
102 else:
103 to_address = receiver
104
105 mail = EmailMultiAlternatives(
106 subject=subject.strip(),
107 body=text,
108 from_email=settings.DEFAULT_FROM_EMAIL,
109 to=[to_address],
110 reply_to=self.get_reply_to(),
111 )
112
113 if len(attachments) > 0:
114 mail.mixed_subtype = 'related'
115
116 for attachment in attachments:
117 mail.attach(attachment)
118
119 mail.attach_alternative(html, 'text/html')
120 mail.send()
121 mails.append(mail)
122 return mails
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/adhocracy4/emails/base.py b/adhocracy4/emails/base.py
--- a/adhocracy4/emails/base.py
+++ b/adhocracy4/emails/base.py
@@ -1,3 +1,5 @@
+import re
+
from django.conf import settings
from django.contrib.contenttypes.models import ContentType
from django.contrib.sites import models as site_models
@@ -102,8 +104,10 @@
else:
to_address = receiver
+ subject_clean = re.sub(r'[\r\n]', '', subject).strip()
+
mail = EmailMultiAlternatives(
- subject=subject.strip(),
+ subject=subject_clean,
body=text,
from_email=settings.DEFAULT_FROM_EMAIL,
to=[to_address],
diff --git a/adhocracy4/emails/mixins.py b/adhocracy4/emails/mixins.py
--- a/adhocracy4/emails/mixins.py
+++ b/adhocracy4/emails/mixins.py
@@ -1,7 +1,6 @@
from email.mime.image import MIMEImage
from django.contrib.staticfiles import finders
-from .base import EmailBase
class PlatformEmailMixin:
@@ -29,7 +28,7 @@
return attachments
-class SyncEmailMixin(EmailBase):
+class SyncEmailMixin:
"""Send Emails synchronously."""
@classmethod
| {"golden_diff": "diff --git a/adhocracy4/emails/base.py b/adhocracy4/emails/base.py\n--- a/adhocracy4/emails/base.py\n+++ b/adhocracy4/emails/base.py\n@@ -1,3 +1,5 @@\n+import re\n+\n from django.conf import settings\n from django.contrib.contenttypes.models import ContentType\n from django.contrib.sites import models as site_models\n@@ -102,8 +104,10 @@\n else:\n to_address = receiver\n \n+ subject_clean = re.sub(r'[\\r\\n]', '', subject).strip()\n+\n mail = EmailMultiAlternatives(\n- subject=subject.strip(),\n+ subject=subject_clean,\n body=text,\n from_email=settings.DEFAULT_FROM_EMAIL,\n to=[to_address],\ndiff --git a/adhocracy4/emails/mixins.py b/adhocracy4/emails/mixins.py\n--- a/adhocracy4/emails/mixins.py\n+++ b/adhocracy4/emails/mixins.py\n@@ -1,7 +1,6 @@\n from email.mime.image import MIMEImage\n \n from django.contrib.staticfiles import finders\n-from .base import EmailBase\n \n \n class PlatformEmailMixin:\n@@ -29,7 +28,7 @@\n return attachments\n \n \n-class SyncEmailMixin(EmailBase):\n+class SyncEmailMixin:\n \"\"\"Send Emails synchronously.\"\"\"\n \n @classmethod\n", "issue": "Subject with new line crashes email sending\n\nSubject with new line crashes email sending\n\n", "before_files": [{"content": "from email.mime.image import MIMEImage\n\nfrom django.contrib.staticfiles import finders\nfrom .base import EmailBase\n\n\nclass PlatformEmailMixin:\n \"\"\"\n Attaches the static file images/logo.png so it can be used in an html\n email.\n \"\"\"\n def get_attachments(self):\n attachments = super().get_attachments()\n filename = (\n finders.find('images/email_logo.png')\n or finders.find('images/email_logo.svg')\n )\n if filename:\n if filename.endswith('.png'):\n imagetype = 'png'\n else:\n imagetype = 'svg+xml'\n\n with open(filename, 'rb') as f:\n logo = MIMEImage(f.read(), imagetype)\n\n logo.add_header('Content-ID', '<{}>'.format('logo'))\n return attachments + [logo]\n return attachments\n\n\nclass SyncEmailMixin(EmailBase):\n \"\"\"Send Emails synchronously.\"\"\"\n\n @classmethod\n def send(cls, object, *args, **kwargs):\n \"\"\"Call dispatch immediately\"\"\"\n return cls().dispatch(object, *args, **kwargs)\n", "path": "adhocracy4/emails/mixins.py"}, {"content": "from django.conf import settings\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.contrib.sites import models as site_models\nfrom django.core.mail.message import EmailMultiAlternatives\nfrom django.template.loader import select_template\nfrom django.utils import translation\n\nfrom . 
import tasks\n\n\nclass EmailBase:\n site_id = 1\n object = None\n template_name = None\n fallback_language = 'en'\n for_moderator = False\n\n def get_site(self):\n return site_models.Site.objects.get(pk=self.site_id)\n\n def get_host(self):\n site = self.get_site()\n ssl_enabled = True\n if site.domain.startswith('localhost:'):\n ssl_enabled = False\n\n url = 'http{ssl_flag}://{domain}'.format(\n ssl_flag='s' if ssl_enabled else '',\n domain=site.domain,\n )\n return url\n\n def get_context(self):\n object_context_key = self.object.__class__.__name__.lower()\n return {\n 'email': self,\n 'site': self.get_site(),\n object_context_key: self.object\n }\n\n def get_receivers(self):\n return []\n\n def get_attachments(self):\n return []\n\n def get_languages(self, receiver):\n return [translation.get_language(), self.fallback_language]\n\n def get_reply_to(self):\n return None\n\n @classmethod\n def send(cls, object, *args, **kwargs):\n \"\"\"Send email asynchronously.\n\n NOTE: args and kwargs must be JSON serializable.\n \"\"\"\n ct = ContentType.objects.get_for_model(object)\n tasks.send_async(\n cls.__module__, cls.__name__,\n ct.app_label, ct.model, object.pk,\n args, kwargs)\n return []\n\n def render(self, template_name, context):\n languages = self.get_languages(context['receiver'])\n template = select_template([\n '{}.{}.email'.format(template_name, lang)\n for lang in languages\n ])\n\n # Get the actually chosen language from the template name\n language = template.template.name.split('.', 2)[-2]\n\n with translation.override(language):\n parts = []\n for part_type in ('subject', 'txt', 'html'):\n context['part_type'] = part_type\n parts.append(template.render(context))\n context.pop('part_type')\n\n return tuple(parts)\n\n def dispatch(self, object, *args, **kwargs):\n self.object = object\n self.kwargs = kwargs\n receivers = self.get_receivers()\n context = self.get_context()\n context.update(kwargs)\n attachments = self.get_attachments()\n template = self.template_name\n\n mails = []\n for receiver in receivers:\n context['receiver'] = receiver\n (subject, text, html) = self.render(template, context)\n context.pop('receiver')\n\n if hasattr(receiver, 'email'):\n to_address = receiver.email\n else:\n to_address = receiver\n\n mail = EmailMultiAlternatives(\n subject=subject.strip(),\n body=text,\n from_email=settings.DEFAULT_FROM_EMAIL,\n to=[to_address],\n reply_to=self.get_reply_to(),\n )\n\n if len(attachments) > 0:\n mail.mixed_subtype = 'related'\n\n for attachment in attachments:\n mail.attach(attachment)\n\n mail.attach_alternative(html, 'text/html')\n mail.send()\n mails.append(mail)\n return mails\n", "path": "adhocracy4/emails/base.py"}], "after_files": [{"content": "from email.mime.image import MIMEImage\n\nfrom django.contrib.staticfiles import finders\n\n\nclass PlatformEmailMixin:\n \"\"\"\n Attaches the static file images/logo.png so it can be used in an html\n email.\n \"\"\"\n def get_attachments(self):\n attachments = super().get_attachments()\n filename = (\n finders.find('images/email_logo.png')\n or finders.find('images/email_logo.svg')\n )\n if filename:\n if filename.endswith('.png'):\n imagetype = 'png'\n else:\n imagetype = 'svg+xml'\n\n with open(filename, 'rb') as f:\n logo = MIMEImage(f.read(), imagetype)\n\n logo.add_header('Content-ID', '<{}>'.format('logo'))\n return attachments + [logo]\n return attachments\n\n\nclass SyncEmailMixin:\n \"\"\"Send Emails synchronously.\"\"\"\n\n @classmethod\n def send(cls, object, *args, **kwargs):\n \"\"\"Call 
dispatch immediately\"\"\"\n return cls().dispatch(object, *args, **kwargs)\n", "path": "adhocracy4/emails/mixins.py"}, {"content": "import re\n\nfrom django.conf import settings\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.contrib.sites import models as site_models\nfrom django.core.mail.message import EmailMultiAlternatives\nfrom django.template.loader import select_template\nfrom django.utils import translation\n\nfrom . import tasks\n\n\nclass EmailBase:\n site_id = 1\n object = None\n template_name = None\n fallback_language = 'en'\n for_moderator = False\n\n def get_site(self):\n return site_models.Site.objects.get(pk=self.site_id)\n\n def get_host(self):\n site = self.get_site()\n ssl_enabled = True\n if site.domain.startswith('localhost:'):\n ssl_enabled = False\n\n url = 'http{ssl_flag}://{domain}'.format(\n ssl_flag='s' if ssl_enabled else '',\n domain=site.domain,\n )\n return url\n\n def get_context(self):\n object_context_key = self.object.__class__.__name__.lower()\n return {\n 'email': self,\n 'site': self.get_site(),\n object_context_key: self.object\n }\n\n def get_receivers(self):\n return []\n\n def get_attachments(self):\n return []\n\n def get_languages(self, receiver):\n return [translation.get_language(), self.fallback_language]\n\n def get_reply_to(self):\n return None\n\n @classmethod\n def send(cls, object, *args, **kwargs):\n \"\"\"Send email asynchronously.\n\n NOTE: args and kwargs must be JSON serializable.\n \"\"\"\n ct = ContentType.objects.get_for_model(object)\n tasks.send_async(\n cls.__module__, cls.__name__,\n ct.app_label, ct.model, object.pk,\n args, kwargs)\n return []\n\n def render(self, template_name, context):\n languages = self.get_languages(context['receiver'])\n template = select_template([\n '{}.{}.email'.format(template_name, lang)\n for lang in languages\n ])\n\n # Get the actually chosen language from the template name\n language = template.template.name.split('.', 2)[-2]\n\n with translation.override(language):\n parts = []\n for part_type in ('subject', 'txt', 'html'):\n context['part_type'] = part_type\n parts.append(template.render(context))\n context.pop('part_type')\n\n return tuple(parts)\n\n def dispatch(self, object, *args, **kwargs):\n self.object = object\n self.kwargs = kwargs\n receivers = self.get_receivers()\n context = self.get_context()\n context.update(kwargs)\n attachments = self.get_attachments()\n template = self.template_name\n\n mails = []\n for receiver in receivers:\n context['receiver'] = receiver\n (subject, text, html) = self.render(template, context)\n context.pop('receiver')\n\n if hasattr(receiver, 'email'):\n to_address = receiver.email\n else:\n to_address = receiver\n\n subject_clean = re.sub(r'[\\r\\n]', '', subject).strip()\n\n mail = EmailMultiAlternatives(\n subject=subject_clean,\n body=text,\n from_email=settings.DEFAULT_FROM_EMAIL,\n to=[to_address],\n reply_to=self.get_reply_to(),\n )\n\n if len(attachments) > 0:\n mail.mixed_subtype = 'related'\n\n for attachment in attachments:\n mail.attach(attachment)\n\n mail.attach_alternative(html, 'text/html')\n mail.send()\n mails.append(mail)\n return mails\n", "path": "adhocracy4/emails/base.py"}]} | 1,586 | 303 |
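For context on the patch above: Django's mail layer refuses header values that contain CR or LF (it raises `BadHeaderError` as a header-injection guard), so a subject rendered from a multi-line template fragment makes the whole `send()` call fail. The fix therefore strips line breaks before the message is constructed. A small, self-contained sketch of that sanitising step (the sample subject string is invented for illustration):

```python
import re


def clean_subject(subject: str) -> str:
    """Remove CR/LF from an email subject so Django accepts it as a header."""
    return re.sub(r"[\r\n]", "", subject).strip()


# A template-rendered subject often carries stray line breaks:
raw = "New comment \non your proposal\n"
print(repr(clean_subject(raw)))  # 'New comment on your proposal'
```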
gh_patches_debug_28410 | rasdani/github-patches | git_diff | mne-tools__mne-python-9092 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
split code block in examples/preprocessing/plot_virtual_evoked
right now, because all plots come from a single code block, they are plotted at the top of the example in a group of 4 (and consequently the plots are really small). By splitting the 4 plotting calls into different code blocks, they will plot larger / be easier to see & compare, without increasing run time of the example. Code blocks can be split with a line of 79 `#` marks (adding a bit of explanatory text too is usually a good idea)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/preprocessing/plot_virtual_evoked.py`
Content:
```
1 """
2 =======================
3 Remap MEG channel types
4 =======================
5
6 In this example, MEG data are remapped from one channel type to another.
7 This is useful to:
8
9 - visualize combined magnetometers and gradiometers as magnetometers
10 or gradiometers.
11 - run statistics from both magnetometers and gradiometers while
12 working with a single type of channels.
13 """
14
15 # Author: Mainak Jas <[email protected]>
16
17 # License: BSD (3-clause)
18
19 import mne
20 from mne.datasets import sample
21
22 print(__doc__)
23
24 # read the evoked
25 data_path = sample.data_path()
26 fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
27 evoked = mne.read_evokeds(fname, condition='Left Auditory', baseline=(None, 0))
28
29 # go from grad + mag to mag
30 virt_evoked = evoked.as_type('mag')
31 evoked.plot_topomap(ch_type='mag', title='mag (original)', time_unit='s')
32 virt_evoked.plot_topomap(ch_type='mag', time_unit='s',
33 title='mag (interpolated from mag + grad)')
34
35 # go from grad + mag to grad
36 virt_evoked = evoked.as_type('grad')
37 evoked.plot_topomap(ch_type='grad', title='grad (original)', time_unit='s')
38 virt_evoked.plot_topomap(ch_type='grad', time_unit='s',
39 title='grad (interpolated from mag + grad)')
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/preprocessing/plot_virtual_evoked.py b/examples/preprocessing/plot_virtual_evoked.py
--- a/examples/preprocessing/plot_virtual_evoked.py
+++ b/examples/preprocessing/plot_virtual_evoked.py
@@ -26,14 +26,30 @@
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname, condition='Left Auditory', baseline=(None, 0))
-# go from grad + mag to mag
+###############################################################################
+# First, let's call remap gradiometers to magnometers, and plot
+# the original and remapped topomaps of the magnetometers.
+
+# go from grad + mag to mag and plot original mag
virt_evoked = evoked.as_type('mag')
evoked.plot_topomap(ch_type='mag', title='mag (original)', time_unit='s')
+
+###############################################################################
+
+# plot interpolated grad + mag
virt_evoked.plot_topomap(ch_type='mag', time_unit='s',
title='mag (interpolated from mag + grad)')
-# go from grad + mag to grad
+###############################################################################
+# Now, we remap magnometers to gradiometers, and plot
+# the original and remapped topomaps of the gradiometers
+
+# go from grad + mag to grad and plot original grad
virt_evoked = evoked.as_type('grad')
evoked.plot_topomap(ch_type='grad', title='grad (original)', time_unit='s')
+
+###############################################################################
+
+# plot interpolated grad + mag
virt_evoked.plot_topomap(ch_type='grad', time_unit='s',
title='grad (interpolated from mag + grad)')
| {"golden_diff": "diff --git a/examples/preprocessing/plot_virtual_evoked.py b/examples/preprocessing/plot_virtual_evoked.py\n--- a/examples/preprocessing/plot_virtual_evoked.py\n+++ b/examples/preprocessing/plot_virtual_evoked.py\n@@ -26,14 +26,30 @@\n fname = data_path + '/MEG/sample/sample_audvis-ave.fif'\n evoked = mne.read_evokeds(fname, condition='Left Auditory', baseline=(None, 0))\n \n-# go from grad + mag to mag\n+###############################################################################\n+# First, let's call remap gradiometers to magnometers, and plot\n+# the original and remapped topomaps of the magnetometers.\n+\n+# go from grad + mag to mag and plot original mag\n virt_evoked = evoked.as_type('mag')\n evoked.plot_topomap(ch_type='mag', title='mag (original)', time_unit='s')\n+\n+###############################################################################\n+\n+# plot interpolated grad + mag\n virt_evoked.plot_topomap(ch_type='mag', time_unit='s',\n title='mag (interpolated from mag + grad)')\n \n-# go from grad + mag to grad\n+###############################################################################\n+# Now, we remap magnometers to gradiometers, and plot\n+# the original and remapped topomaps of the gradiometers\n+\n+# go from grad + mag to grad and plot original grad\n virt_evoked = evoked.as_type('grad')\n evoked.plot_topomap(ch_type='grad', title='grad (original)', time_unit='s')\n+\n+###############################################################################\n+\n+# plot interpolated grad + mag\n virt_evoked.plot_topomap(ch_type='grad', time_unit='s',\n title='grad (interpolated from mag + grad)')\n", "issue": "split code block in examples/preprocessing/plot_virtual_evoked\nright now, because all plots come from a single code block, they are plotted at the top of the example in a group of 4 (and consequently the plots are really small). By splitting the 4 plotting calls into different code blocks, they will plot larger / be easier to see & compare, without increasing run time of the example. 
Code blocks can be split with a line of 79 `#` marks (adding a bit of explanatory text too is usually a good idea)\n", "before_files": [{"content": "\"\"\"\n=======================\nRemap MEG channel types\n=======================\n\nIn this example, MEG data are remapped from one channel type to another.\nThis is useful to:\n\n - visualize combined magnetometers and gradiometers as magnetometers\n or gradiometers.\n - run statistics from both magnetometers and gradiometers while\n working with a single type of channels.\n\"\"\"\n\n# Author: Mainak Jas <[email protected]>\n\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.datasets import sample\n\nprint(__doc__)\n\n# read the evoked\ndata_path = sample.data_path()\nfname = data_path + '/MEG/sample/sample_audvis-ave.fif'\nevoked = mne.read_evokeds(fname, condition='Left Auditory', baseline=(None, 0))\n\n# go from grad + mag to mag\nvirt_evoked = evoked.as_type('mag')\nevoked.plot_topomap(ch_type='mag', title='mag (original)', time_unit='s')\nvirt_evoked.plot_topomap(ch_type='mag', time_unit='s',\n title='mag (interpolated from mag + grad)')\n\n# go from grad + mag to grad\nvirt_evoked = evoked.as_type('grad')\nevoked.plot_topomap(ch_type='grad', title='grad (original)', time_unit='s')\nvirt_evoked.plot_topomap(ch_type='grad', time_unit='s',\n title='grad (interpolated from mag + grad)')\n", "path": "examples/preprocessing/plot_virtual_evoked.py"}], "after_files": [{"content": "\"\"\"\n=======================\nRemap MEG channel types\n=======================\n\nIn this example, MEG data are remapped from one channel type to another.\nThis is useful to:\n\n - visualize combined magnetometers and gradiometers as magnetometers\n or gradiometers.\n - run statistics from both magnetometers and gradiometers while\n working with a single type of channels.\n\"\"\"\n\n# Author: Mainak Jas <[email protected]>\n\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.datasets import sample\n\nprint(__doc__)\n\n# read the evoked\ndata_path = sample.data_path()\nfname = data_path + '/MEG/sample/sample_audvis-ave.fif'\nevoked = mne.read_evokeds(fname, condition='Left Auditory', baseline=(None, 0))\n\n###############################################################################\n# First, let's call remap gradiometers to magnometers, and plot\n# the original and remapped topomaps of the magnetometers.\n\n# go from grad + mag to mag and plot original mag\nvirt_evoked = evoked.as_type('mag')\nevoked.plot_topomap(ch_type='mag', title='mag (original)', time_unit='s')\n\n###############################################################################\n\n# plot interpolated grad + mag\nvirt_evoked.plot_topomap(ch_type='mag', time_unit='s',\n title='mag (interpolated from mag + grad)')\n\n###############################################################################\n# Now, we remap magnometers to gradiometers, and plot\n# the original and remapped topomaps of the gradiometers\n\n# go from grad + mag to grad and plot original grad\nvirt_evoked = evoked.as_type('grad')\nevoked.plot_topomap(ch_type='grad', title='grad (original)', time_unit='s')\n\n###############################################################################\n\n# plot interpolated grad + mag\nvirt_evoked.plot_topomap(ch_type='grad', time_unit='s',\n title='grad (interpolated from mag + grad)')\n", "path": "examples/preprocessing/plot_virtual_evoked.py"}]} | 773 | 365 |
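The change above is purely presentational: in sphinx-gallery examples (which MNE uses for its gallery), a comment line of 79 `#` characters starts a new cell, so each plotting call that follows one is rendered as its own block with its own full-size figure. A stripped-down sketch of the convention, with placeholder plotting calls rather than the actual example code:

```python
"""
=============
Example title
=============

Opening text of the example.
"""

import matplotlib.pyplot as plt

plt.plot([0, 1], [0, 1])  # shown (with its figure) as the first cell

###############################################################################
# Text after a full-width ``#`` separator becomes a prose paragraph in the
# rendered example, and the code below it becomes a new, separately plotted
# cell -- which is why splitting the four topomap calls enlarges the figures.

plt.plot([0, 1], [1, 0])  # shown as the second cell
```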
gh_patches_debug_157 | rasdani/github-patches | git_diff | doccano__doccano-1907 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot access Django admin panel in a Heroku deployment
How to reproduce the behaviour
---------
The FAQ describes how to [create a user via the Django admin panel](https://github.com/doccano/doccano/blob/master/docs/faq.md#how-to-create-a-user) for a locally hosted Doccano. When run locally, I have no problem to reach the admin panel on `http://localhost:8000/admin/`, in Heroku however it is not working.
I have tried to reach it on
- `https://mydeployment.herokuapp.com/admin/`
- `https://mydeployment.herokuapp.com/admin/login`
- `https://mydeployment.herokuapp.com/admin/login/`
- `http://mydeployment.herokuapp.com/admin/`
Those urls all result in a `500 Internal Server Error`.
Am I missing something here, or is this perhaps a bug?
Your Environment
---------
<!-- Include details of your environment. -->
* Operating System: -
* Python Version Used: -
* When did you install doccano: A few days ago
* How did you install doccano (Heroku button etc): Heroku button
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/config/settings/heroku.py`
Content:
```
1 import django_heroku
2
3 from .base import * # noqa: F401,F403
4
5 django_heroku.settings(locals(), test_runner=False)
6
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend/config/settings/heroku.py b/backend/config/settings/heroku.py
--- a/backend/config/settings/heroku.py
+++ b/backend/config/settings/heroku.py
@@ -2,4 +2,4 @@
from .base import * # noqa: F401,F403
-django_heroku.settings(locals(), test_runner=False)
+django_heroku.settings(locals(), test_runner=False, staticfiles=False)
| {"golden_diff": "diff --git a/backend/config/settings/heroku.py b/backend/config/settings/heroku.py\n--- a/backend/config/settings/heroku.py\n+++ b/backend/config/settings/heroku.py\n@@ -2,4 +2,4 @@\n \n from .base import * # noqa: F401,F403\n \n-django_heroku.settings(locals(), test_runner=False)\n+django_heroku.settings(locals(), test_runner=False, staticfiles=False)\n", "issue": "Cannot access Django admin panel in a Heroku deployment\nHow to reproduce the behaviour\r\n---------\r\nThe FAQ describes how to [create a user via the Django admin panel](https://github.com/doccano/doccano/blob/master/docs/faq.md#how-to-create-a-user) for a locally hosted Doccano. When run locally, I have no problem to reach the admin panel on `http://localhost:8000/admin/`, in Heroku however it is not working.\r\n\r\nI have tried to reach it on\r\n- `https://mydeployment.herokuapp.com/admin/`\r\n- `https://mydeployment.herokuapp.com/admin/login`\r\n- `https://mydeployment.herokuapp.com/admin/login/`\r\n- `http://mydeployment.herokuapp.com/admin/`\r\n\r\nThose urls all result in a `500 Internal Server Error`.\r\nAm I missing something here, or is this perhaps a bug?\r\n\r\nYour Environment\r\n---------\r\n<!-- Include details of your environment. -->\r\n\r\n* Operating System: -\r\n* Python Version Used: -\r\n* When did you install doccano: A few days ago\r\n* How did you install doccano (Heroku button etc): Heroku button\r\n\n", "before_files": [{"content": "import django_heroku\n\nfrom .base import * # noqa: F401,F403\n\ndjango_heroku.settings(locals(), test_runner=False)\n", "path": "backend/config/settings/heroku.py"}], "after_files": [{"content": "import django_heroku\n\nfrom .base import * # noqa: F401,F403\n\ndjango_heroku.settings(locals(), test_runner=False, staticfiles=False)\n", "path": "backend/config/settings/heroku.py"}]} | 545 | 95 |
gh_patches_debug_30592 | rasdani/github-patches | git_diff | mne-tools__mne-python-4380 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove deprecated imp module
Currently, `mne/commands/utils.py` still uses the deprecated `imp` module, which has long been replaced with `importlib`. According to [this answer on SO](https://stackoverflow.com/a/67692/1112283), the current solution works only on Python 3.5/3.6, and there is a (deprecated) alternative for Python 3.3/3.4. All versions < 3.3 need to use `imp`.
How should this be handled in MNE?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mne/commands/utils.py`
Content:
```
1 """Some utility functions for commands (e.g. for cmdline handling)."""
2
3 # Authors: Yaroslav Halchenko <[email protected]>
4 #
5 # License: BSD (3-clause)
6
7 import imp
8 import os
9 import re
10 from optparse import OptionParser
11
12 import mne
13
14
15 def get_optparser(cmdpath, usage=None):
16 """Create OptionParser with cmd specific settings (e.g. prog value)."""
17 command = os.path.basename(cmdpath)
18 if re.match('mne_(.*).py', command):
19 command = command[4:-3]
20 elif re.match('mne_(.*).pyc', command):
21 command = command[4:-4]
22
23 # Fetch description
24 if cmdpath.endswith('.pyc'):
25 mod = imp.load_compiled('__temp', cmdpath)
26 else:
27 mod = imp.load_source('__temp', cmdpath)
28 if mod.__doc__:
29 doc, description, epilog = mod.__doc__, None, None
30
31 doc_lines = doc.split('\n')
32 description = doc_lines[0]
33 if len(doc_lines) > 1:
34 epilog = '\n'.join(doc_lines[1:])
35
36 # monkey patch OptionParser to not wrap epilog
37 OptionParser.format_epilog = lambda self, formatter: self.epilog
38 parser = OptionParser(prog="mne %s" % command,
39 version=mne.__version__,
40 description=description,
41 epilog=epilog, usage=usage)
42
43 return parser
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mne/commands/utils.py b/mne/commands/utils.py
--- a/mne/commands/utils.py
+++ b/mne/commands/utils.py
@@ -4,7 +4,7 @@
#
# License: BSD (3-clause)
-import imp
+import sys
import os
import re
from optparse import OptionParser
@@ -12,6 +12,42 @@
import mne
+def load_module(name, path):
+ """Load module from .py/.pyc file.
+
+ Parameters
+ ----------
+ name : str
+ Name of the module.
+ path : str
+ Path to .py/.pyc file.
+
+ Returns
+ -------
+ mod : module
+ Imported module.
+ """
+ if sys.version_info < (3, 3):
+ import imp
+ if path.endswith('.pyc'):
+ return imp.load_compiled(name, path)
+ else:
+ return imp.load_source(name, path)
+ elif sys.version_info < (3, 5):
+ if path.endswith('.pyc'):
+ from importlib.machinery import SourcelessFileLoader
+ return SourcelessFileLoader(name, path).load_module()
+ else:
+ from importlib.machinery import SourceFileLoader
+ return SourceFileLoader(name, path).load_module()
+ else: # Python 3.5 or greater
+ from importlib.util import spec_from_file_location, module_from_spec
+ spec = spec_from_file_location(name, path)
+ mod = module_from_spec(spec)
+ spec.loader.exec_module(mod)
+ return mod
+
+
def get_optparser(cmdpath, usage=None):
"""Create OptionParser with cmd specific settings (e.g. prog value)."""
command = os.path.basename(cmdpath)
@@ -21,10 +57,7 @@
command = command[4:-4]
# Fetch description
- if cmdpath.endswith('.pyc'):
- mod = imp.load_compiled('__temp', cmdpath)
- else:
- mod = imp.load_source('__temp', cmdpath)
+ mod = load_module('__temp', cmdpath)
if mod.__doc__:
doc, description, epilog = mod.__doc__, None, None
| {"golden_diff": "diff --git a/mne/commands/utils.py b/mne/commands/utils.py\n--- a/mne/commands/utils.py\n+++ b/mne/commands/utils.py\n@@ -4,7 +4,7 @@\n #\n # License: BSD (3-clause)\n \n-import imp\n+import sys\n import os\n import re\n from optparse import OptionParser\n@@ -12,6 +12,42 @@\n import mne\n \n \n+def load_module(name, path):\n+ \"\"\"Load module from .py/.pyc file.\n+\n+ Parameters\n+ ----------\n+ name : str\n+ Name of the module.\n+ path : str\n+ Path to .py/.pyc file.\n+\n+ Returns\n+ -------\n+ mod : module\n+ Imported module.\n+ \"\"\"\n+ if sys.version_info < (3, 3):\n+ import imp\n+ if path.endswith('.pyc'):\n+ return imp.load_compiled(name, path)\n+ else:\n+ return imp.load_source(name, path)\n+ elif sys.version_info < (3, 5):\n+ if path.endswith('.pyc'):\n+ from importlib.machinery import SourcelessFileLoader\n+ return SourcelessFileLoader(name, path).load_module()\n+ else:\n+ from importlib.machinery import SourceFileLoader\n+ return SourceFileLoader(name, path).load_module()\n+ else: # Python 3.5 or greater\n+ from importlib.util import spec_from_file_location, module_from_spec\n+ spec = spec_from_file_location(name, path)\n+ mod = module_from_spec(spec)\n+ spec.loader.exec_module(mod)\n+ return mod\n+\n+\n def get_optparser(cmdpath, usage=None):\n \"\"\"Create OptionParser with cmd specific settings (e.g. prog value).\"\"\"\n command = os.path.basename(cmdpath)\n@@ -21,10 +57,7 @@\n command = command[4:-4]\n \n # Fetch description\n- if cmdpath.endswith('.pyc'):\n- mod = imp.load_compiled('__temp', cmdpath)\n- else:\n- mod = imp.load_source('__temp', cmdpath)\n+ mod = load_module('__temp', cmdpath)\n if mod.__doc__:\n doc, description, epilog = mod.__doc__, None, None\n", "issue": "Remove deprecated imp module\nCurrently, `mne/commands/utils.py` still uses the deprecated `imp` module, which has long been replaced with `importlib`. According to [this answer on SO](https://stackoverflow.com/a/67692/1112283), the current solution works only on Python 3.5/3.6, and there is a (deprecated) alternative for Python 3.3/3.4. All versions < 3.3 need to use `imp`.\r\n\r\nHow should this be handled in MNE?\n", "before_files": [{"content": "\"\"\"Some utility functions for commands (e.g. for cmdline handling).\"\"\"\n\n# Authors: Yaroslav Halchenko <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport imp\nimport os\nimport re\nfrom optparse import OptionParser\n\nimport mne\n\n\ndef get_optparser(cmdpath, usage=None):\n \"\"\"Create OptionParser with cmd specific settings (e.g. prog value).\"\"\"\n command = os.path.basename(cmdpath)\n if re.match('mne_(.*).py', command):\n command = command[4:-3]\n elif re.match('mne_(.*).pyc', command):\n command = command[4:-4]\n\n # Fetch description\n if cmdpath.endswith('.pyc'):\n mod = imp.load_compiled('__temp', cmdpath)\n else:\n mod = imp.load_source('__temp', cmdpath)\n if mod.__doc__:\n doc, description, epilog = mod.__doc__, None, None\n\n doc_lines = doc.split('\\n')\n description = doc_lines[0]\n if len(doc_lines) > 1:\n epilog = '\\n'.join(doc_lines[1:])\n\n # monkey patch OptionParser to not wrap epilog\n OptionParser.format_epilog = lambda self, formatter: self.epilog\n parser = OptionParser(prog=\"mne %s\" % command,\n version=mne.__version__,\n description=description,\n epilog=epilog, usage=usage)\n\n return parser\n", "path": "mne/commands/utils.py"}], "after_files": [{"content": "\"\"\"Some utility functions for commands (e.g. 
for cmdline handling).\"\"\"\n\n# Authors: Yaroslav Halchenko <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport sys\nimport os\nimport re\nfrom optparse import OptionParser\n\nimport mne\n\n\ndef load_module(name, path):\n \"\"\"Load module from .py/.pyc file.\n\n Parameters\n ----------\n name : str\n Name of the module.\n path : str\n Path to .py/.pyc file.\n\n Returns\n -------\n mod : module\n Imported module.\n \"\"\"\n if sys.version_info < (3, 3):\n import imp\n if path.endswith('.pyc'):\n return imp.load_compiled(name, path)\n else:\n return imp.load_source(name, path)\n elif sys.version_info < (3, 5):\n if path.endswith('.pyc'):\n from importlib.machinery import SourcelessFileLoader\n return SourcelessFileLoader(name, path).load_module()\n else:\n from importlib.machinery import SourceFileLoader\n return SourceFileLoader(name, path).load_module()\n else: # Python 3.5 or greater\n from importlib.util import spec_from_file_location, module_from_spec\n spec = spec_from_file_location(name, path)\n mod = module_from_spec(spec)\n spec.loader.exec_module(mod)\n return mod\n\n\ndef get_optparser(cmdpath, usage=None):\n \"\"\"Create OptionParser with cmd specific settings (e.g. prog value).\"\"\"\n command = os.path.basename(cmdpath)\n if re.match('mne_(.*).py', command):\n command = command[4:-3]\n elif re.match('mne_(.*).pyc', command):\n command = command[4:-4]\n\n # Fetch description\n mod = load_module('__temp', cmdpath)\n if mod.__doc__:\n doc, description, epilog = mod.__doc__, None, None\n\n doc_lines = doc.split('\\n')\n description = doc_lines[0]\n if len(doc_lines) > 1:\n epilog = '\\n'.join(doc_lines[1:])\n\n # monkey patch OptionParser to not wrap epilog\n OptionParser.format_epilog = lambda self, formatter: self.epilog\n parser = OptionParser(prog=\"mne %s\" % command,\n version=mne.__version__,\n description=description,\n epilog=epilog, usage=usage)\n\n return parser\n", "path": "mne/commands/utils.py"}]} | 788 | 512 |
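The heart of the replacement above is the `importlib.util` idiom for executing a module from an explicit file path, which the patch wraps in interpreter-version checks. Isolated from those branches it reduces to the following (the module name and path in the usage comment are illustrative placeholders):

```python
from importlib.util import module_from_spec, spec_from_file_location


def load_module_from_path(name, path):
    """Import and return a module object from a .py file path (Python >= 3.5)."""
    spec = spec_from_file_location(name, path)
    module = module_from_spec(spec)
    spec.loader.exec_module(module)  # executes the file's top-level code
    return module


# Example use, mirroring how get_optparser() reads a command's docstring:
# mod = load_module_from_path("__temp", "/path/to/mne_some_command.py")
# print(mod.__doc__.splitlines()[0])
```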
gh_patches_debug_48679 | rasdani/github-patches | git_diff | ethereum__web3.py-2659 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
protobuf dependency compatibility
* Python: 3.5
* OS: osx
* `import web3` output
```
ContextualVersionConflict
```
### What was wrong?
[protobuf](https://github.com/ethereum/web3.py/pull/1493) compatibility needs updating. Needed to downgrade protobuf to get it working. The version is currently pinned to <4, but protobuf's latest version is 4.21.6.
### How can it be fixed?
The newest version of protobuf should be compatible https://pypi.org/project/protobuf/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 from setuptools import (
3 find_packages,
4 setup,
5 )
6
7 extras_require = {
8 "tester": [
9 "eth-tester[py-evm]==v0.6.0-beta.6",
10 "py-geth>=3.9.1,<4",
11 ],
12 "linter": [
13 "flake8==3.8.3",
14 "isort>=4.2.15,<4.3.5",
15 "mypy==0.910",
16 "types-setuptools>=57.4.4,<58",
17 "types-requests>=2.26.1,<3",
18 "types-protobuf==3.19.13",
19 ],
20 "docs": [
21 "mock",
22 "sphinx-better-theme>=0.1.4",
23 "click>=5.1",
24 "configparser==3.5.0",
25 "contextlib2>=0.5.4",
26 "py-geth>=3.9.1,<4",
27 "py-solc>=0.4.0",
28 "pytest>=4.4.0,<5.0.0",
29 "sphinx>=3.0,<4",
30 "sphinx_rtd_theme>=0.1.9",
31 "toposort>=1.4",
32 "towncrier==18.5.0",
33 "urllib3",
34 "wheel",
35 "Jinja2<=3.0.3", # Jinja v3.1.0 dropped support for python 3.6
36 ],
37 "dev": [
38 "bumpversion",
39 "flaky>=3.7.0,<4",
40 "hypothesis>=3.31.2,<6",
41 "pytest>=4.4.0,<5.0.0",
42 "pytest-asyncio>=0.10.0,<0.11",
43 "pytest-mock>=1.10,<2",
44 "pytest-pythonpath>=0.3",
45 "pytest-watch>=4.2,<5",
46 "pytest-xdist>=1.29,<2",
47 "setuptools>=38.6.0",
48 "tox>=1.8.0",
49 "tqdm>4.32,<5",
50 "twine>=1.13,<2",
51 "pluggy==0.13.1",
52 "when-changed>=0.3.0,<0.4",
53 ],
54 }
55
56 extras_require["dev"] = (
57 extras_require["tester"]
58 + extras_require["linter"]
59 + extras_require["docs"]
60 + extras_require["dev"]
61 )
62
63 with open("./README.md") as readme:
64 long_description = readme.read()
65
66 setup(
67 name="web3",
68 # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.
69 version="5.31.0",
70 description="""Web3.py""",
71 long_description_content_type="text/markdown",
72 long_description=long_description,
73 author="Piper Merriam",
74 author_email="[email protected]",
75 url="https://github.com/ethereum/web3.py",
76 include_package_data=True,
77 install_requires=[
78 "aiohttp>=3.7.4.post0,<4",
79 "eth-abi>=2.0.0b6,<3.0.0",
80 "eth-account>=0.5.9,<0.6.0",
81 "eth-hash[pycryptodome]>=0.2.0,<1.0.0",
82 # eth-account allows too broad of an eth-rlp dependency.
83 # This eth-rlp pin can be removed once it gets tightened up in eth-account
84 "eth-rlp<0.3",
85 "eth-typing>=2.0.0,<3.0.0",
86 "eth-utils>=1.9.5,<2.0.0",
87 "hexbytes>=0.1.0,<1.0.0",
88 "ipfshttpclient==0.8.0a2",
89 "jsonschema>=3.2.0,<5",
90 "lru-dict>=1.1.6,<2.0.0",
91 "protobuf>=3.10.0,<4",
92 "pywin32>=223;platform_system=='Windows'",
93 "requests>=2.16.0,<3.0.0",
94 # remove typing_extensions after python_requires>=3.8, see web3._utils.compat
95 "typing-extensions>=3.7.4.1,<5;python_version<'3.8'",
96 "websockets>=9.1,<10",
97 ],
98 python_requires=">=3.6,<4",
99 extras_require=extras_require,
100 py_modules=["web3", "ens", "ethpm"],
101 entry_points={"pytest11": ["pytest_ethereum = web3.tools.pytest_ethereum.plugins"]},
102 license="MIT",
103 zip_safe=False,
104 keywords="ethereum",
105 packages=find_packages(exclude=["tests", "tests.*"]),
106 package_data={"web3": ["py.typed"]},
107 classifiers=[
108 "Development Status :: 5 - Production/Stable",
109 "Intended Audience :: Developers",
110 "License :: OSI Approved :: MIT License",
111 "Natural Language :: English",
112 "Programming Language :: Python :: 3",
113 "Programming Language :: Python :: 3.6",
114 "Programming Language :: Python :: 3.7",
115 "Programming Language :: Python :: 3.8",
116 "Programming Language :: Python :: 3.9",
117 ],
118 )
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -88,7 +88,7 @@
"ipfshttpclient==0.8.0a2",
"jsonschema>=3.2.0,<5",
"lru-dict>=1.1.6,<2.0.0",
- "protobuf>=3.10.0,<4",
+ "protobuf==3.19.4",
"pywin32>=223;platform_system=='Windows'",
"requests>=2.16.0,<3.0.0",
# remove typing_extensions after python_requires>=3.8, see web3._utils.compat
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -88,7 +88,7 @@\n \"ipfshttpclient==0.8.0a2\",\n \"jsonschema>=3.2.0,<5\",\n \"lru-dict>=1.1.6,<2.0.0\",\n- \"protobuf>=3.10.0,<4\",\n+ \"protobuf==3.19.4\",\n \"pywin32>=223;platform_system=='Windows'\",\n \"requests>=2.16.0,<3.0.0\",\n # remove typing_extensions after python_requires>=3.8, see web3._utils.compat\n", "issue": "protobuf dependency compatibility\n* Python: 3.5\r\n* OS: osx\r\n* `import web3` output\r\n\r\n```\r\nContextualVersionConflict\r\n```\r\n\r\n### What was wrong?\r\n\r\n[protobuf](https://github.com/ethereum/web3.py/pull/1493) compatibility needs updating. Needed to downgrade protobuf to get it working. Version currently needs to be >4 but protobuf's latest version is 4.21.6\r\n\r\n### How can it be fixed?\r\n\r\nThe newest version of protobuf should be compatible https://pypi.org/project/protobuf/\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n \"tester\": [\n \"eth-tester[py-evm]==v0.6.0-beta.6\",\n \"py-geth>=3.9.1,<4\",\n ],\n \"linter\": [\n \"flake8==3.8.3\",\n \"isort>=4.2.15,<4.3.5\",\n \"mypy==0.910\",\n \"types-setuptools>=57.4.4,<58\",\n \"types-requests>=2.26.1,<3\",\n \"types-protobuf==3.19.13\",\n ],\n \"docs\": [\n \"mock\",\n \"sphinx-better-theme>=0.1.4\",\n \"click>=5.1\",\n \"configparser==3.5.0\",\n \"contextlib2>=0.5.4\",\n \"py-geth>=3.9.1,<4\",\n \"py-solc>=0.4.0\",\n \"pytest>=4.4.0,<5.0.0\",\n \"sphinx>=3.0,<4\",\n \"sphinx_rtd_theme>=0.1.9\",\n \"toposort>=1.4\",\n \"towncrier==18.5.0\",\n \"urllib3\",\n \"wheel\",\n \"Jinja2<=3.0.3\", # Jinja v3.1.0 dropped support for python 3.6\n ],\n \"dev\": [\n \"bumpversion\",\n \"flaky>=3.7.0,<4\",\n \"hypothesis>=3.31.2,<6\",\n \"pytest>=4.4.0,<5.0.0\",\n \"pytest-asyncio>=0.10.0,<0.11\",\n \"pytest-mock>=1.10,<2\",\n \"pytest-pythonpath>=0.3\",\n \"pytest-watch>=4.2,<5\",\n \"pytest-xdist>=1.29,<2\",\n \"setuptools>=38.6.0\",\n \"tox>=1.8.0\",\n \"tqdm>4.32,<5\",\n \"twine>=1.13,<2\",\n \"pluggy==0.13.1\",\n \"when-changed>=0.3.0,<0.4\",\n ],\n}\n\nextras_require[\"dev\"] = (\n extras_require[\"tester\"]\n + extras_require[\"linter\"]\n + extras_require[\"docs\"]\n + extras_require[\"dev\"]\n)\n\nwith open(\"./README.md\") as readme:\n long_description = readme.read()\n\nsetup(\n name=\"web3\",\n # *IMPORTANT*: Don't manually change the version here. 
Use the 'bumpversion' utility.\n version=\"5.31.0\",\n description=\"\"\"Web3.py\"\"\",\n long_description_content_type=\"text/markdown\",\n long_description=long_description,\n author=\"Piper Merriam\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ethereum/web3.py\",\n include_package_data=True,\n install_requires=[\n \"aiohttp>=3.7.4.post0,<4\",\n \"eth-abi>=2.0.0b6,<3.0.0\",\n \"eth-account>=0.5.9,<0.6.0\",\n \"eth-hash[pycryptodome]>=0.2.0,<1.0.0\",\n # eth-account allows too broad of an eth-rlp dependency.\n # This eth-rlp pin can be removed once it gets tightened up in eth-account\n \"eth-rlp<0.3\",\n \"eth-typing>=2.0.0,<3.0.0\",\n \"eth-utils>=1.9.5,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"ipfshttpclient==0.8.0a2\",\n \"jsonschema>=3.2.0,<5\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"protobuf>=3.10.0,<4\",\n \"pywin32>=223;platform_system=='Windows'\",\n \"requests>=2.16.0,<3.0.0\",\n # remove typing_extensions after python_requires>=3.8, see web3._utils.compat\n \"typing-extensions>=3.7.4.1,<5;python_version<'3.8'\",\n \"websockets>=9.1,<10\",\n ],\n python_requires=\">=3.6,<4\",\n extras_require=extras_require,\n py_modules=[\"web3\", \"ens\", \"ethpm\"],\n entry_points={\"pytest11\": [\"pytest_ethereum = web3.tools.pytest_ethereum.plugins\"]},\n license=\"MIT\",\n zip_safe=False,\n keywords=\"ethereum\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n package_data={\"web3\": [\"py.typed\"]},\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n \"tester\": [\n \"eth-tester[py-evm]==v0.6.0-beta.6\",\n \"py-geth>=3.9.1,<4\",\n ],\n \"linter\": [\n \"flake8==3.8.3\",\n \"isort>=4.2.15,<4.3.5\",\n \"mypy==0.910\",\n \"types-setuptools>=57.4.4,<58\",\n \"types-requests>=2.26.1,<3\",\n \"types-protobuf==3.19.13\",\n ],\n \"docs\": [\n \"mock\",\n \"sphinx-better-theme>=0.1.4\",\n \"click>=5.1\",\n \"configparser==3.5.0\",\n \"contextlib2>=0.5.4\",\n \"py-geth>=3.9.1,<4\",\n \"py-solc>=0.4.0\",\n \"pytest>=4.4.0,<5.0.0\",\n \"sphinx>=3.0,<4\",\n \"sphinx_rtd_theme>=0.1.9\",\n \"toposort>=1.4\",\n \"towncrier==18.5.0\",\n \"urllib3\",\n \"wheel\",\n \"Jinja2<=3.0.3\", # Jinja v3.1.0 dropped support for python 3.6\n ],\n \"dev\": [\n \"bumpversion\",\n \"flaky>=3.7.0,<4\",\n \"hypothesis>=3.31.2,<6\",\n \"pytest>=4.4.0,<5.0.0\",\n \"pytest-asyncio>=0.10.0,<0.11\",\n \"pytest-mock>=1.10,<2\",\n \"pytest-pythonpath>=0.3\",\n \"pytest-watch>=4.2,<5\",\n \"pytest-xdist>=1.29,<2\",\n \"setuptools>=38.6.0\",\n \"tox>=1.8.0\",\n \"tqdm>4.32,<5\",\n \"twine>=1.13,<2\",\n \"pluggy==0.13.1\",\n \"when-changed>=0.3.0,<0.4\",\n ],\n}\n\nextras_require[\"dev\"] = (\n extras_require[\"tester\"]\n + extras_require[\"linter\"]\n + extras_require[\"docs\"]\n + extras_require[\"dev\"]\n)\n\nwith open(\"./README.md\") as readme:\n long_description = readme.read()\n\nsetup(\n name=\"web3\",\n # *IMPORTANT*: Don't manually change the version here. 
Use the 'bumpversion' utility.\n version=\"5.31.0\",\n description=\"\"\"Web3.py\"\"\",\n long_description_content_type=\"text/markdown\",\n long_description=long_description,\n author=\"Piper Merriam\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ethereum/web3.py\",\n include_package_data=True,\n install_requires=[\n \"aiohttp>=3.7.4.post0,<4\",\n \"eth-abi>=2.0.0b6,<3.0.0\",\n \"eth-account>=0.5.9,<0.6.0\",\n \"eth-hash[pycryptodome]>=0.2.0,<1.0.0\",\n # eth-account allows too broad of an eth-rlp dependency.\n # This eth-rlp pin can be removed once it gets tightened up in eth-account\n \"eth-rlp<0.3\",\n \"eth-typing>=2.0.0,<3.0.0\",\n \"eth-utils>=1.9.5,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"ipfshttpclient==0.8.0a2\",\n \"jsonschema>=3.2.0,<5\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"protobuf==3.19.4\",\n \"pywin32>=223;platform_system=='Windows'\",\n \"requests>=2.16.0,<3.0.0\",\n # remove typing_extensions after python_requires>=3.8, see web3._utils.compat\n \"typing-extensions>=3.7.4.1,<5;python_version<'3.8'\",\n \"websockets>=9.1,<10\",\n ],\n python_requires=\">=3.6,<4\",\n extras_require=extras_require,\n py_modules=[\"web3\", \"ens\", \"ethpm\"],\n entry_points={\"pytest11\": [\"pytest_ethereum = web3.tools.pytest_ethereum.plugins\"]},\n license=\"MIT\",\n zip_safe=False,\n keywords=\"ethereum\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n package_data={\"web3\": [\"py.typed\"]},\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n ],\n)\n", "path": "setup.py"}]} | 1,857 | 158 |
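On the error named in the report above: `ContextualVersionConflict` comes from `pkg_resources` comparing an installed distribution against a declared requirement specifier. The check can be reproduced in isolation, which makes the mismatch between the `protobuf>=3.10.0,<4` pin and a protobuf 4.21.x install easy to see (the snippet only illustrates the check; it is not web3's actual import path):

```python
from pkg_resources import Requirement, get_distribution

pin = Requirement.parse("protobuf>=3.10.0,<4")  # web3 5.31.0's declared pin

dist = get_distribution("protobuf")             # whatever is installed locally
print(dist.version, "satisfies the pin:", dist in pin)

# With protobuf 4.21.x installed this prints False, and pkg_resources
# raises ContextualVersionConflict when such a requirement is enforced,
# e.g. while resolving web3's dependency metadata.
```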
gh_patches_debug_37097 | rasdani/github-patches | git_diff | AUTOMATIC1111__stable-diffusion-webui-12975 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Feature Request]: Where is the save style button?
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
Is it possible to make the old implementation of save style as well?
Not being able to save the currently typed prompt is very troublesome.
Why do we have to open the edit screen and copy/paste the prompt?
### Proposed workflow
Restore old implementation of save styles button
### Additional information
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `modules/ui_prompt_styles.py`
Content:
```
1 import gradio as gr
2
3 from modules import shared, ui_common, ui_components, styles
4
5 styles_edit_symbol = '\U0001f58c\uFE0F' # 🖌️
6 styles_materialize_symbol = '\U0001f4cb' # 📋
7
8
9 def select_style(name):
10 style = shared.prompt_styles.styles.get(name)
11 existing = style is not None
12 empty = not name
13
14 prompt = style.prompt if style else gr.update()
15 negative_prompt = style.negative_prompt if style else gr.update()
16
17 return prompt, negative_prompt, gr.update(visible=existing), gr.update(visible=not empty)
18
19
20 def save_style(name, prompt, negative_prompt):
21 if not name:
22 return gr.update(visible=False)
23
24 style = styles.PromptStyle(name, prompt, negative_prompt)
25 shared.prompt_styles.styles[style.name] = style
26 shared.prompt_styles.save_styles(shared.styles_filename)
27
28 return gr.update(visible=True)
29
30
31 def delete_style(name):
32 if name == "":
33 return
34
35 shared.prompt_styles.styles.pop(name, None)
36 shared.prompt_styles.save_styles(shared.styles_filename)
37
38 return '', '', ''
39
40
41 def materialize_styles(prompt, negative_prompt, styles):
42 prompt = shared.prompt_styles.apply_styles_to_prompt(prompt, styles)
43 negative_prompt = shared.prompt_styles.apply_negative_styles_to_prompt(negative_prompt, styles)
44
45 return [gr.Textbox.update(value=prompt), gr.Textbox.update(value=negative_prompt), gr.Dropdown.update(value=[])]
46
47
48 def refresh_styles():
49 return gr.update(choices=list(shared.prompt_styles.styles)), gr.update(choices=list(shared.prompt_styles.styles))
50
51
52 class UiPromptStyles:
53 def __init__(self, tabname, main_ui_prompt, main_ui_negative_prompt):
54 self.tabname = tabname
55
56 with gr.Row(elem_id=f"{tabname}_styles_row"):
57 self.dropdown = gr.Dropdown(label="Styles", show_label=False, elem_id=f"{tabname}_styles", choices=list(shared.prompt_styles.styles), value=[], multiselect=True, tooltip="Styles")
58 edit_button = ui_components.ToolButton(value=styles_edit_symbol, elem_id=f"{tabname}_styles_edit_button", tooltip="Edit styles")
59
60 with gr.Box(elem_id=f"{tabname}_styles_dialog", elem_classes="popup-dialog") as styles_dialog:
61 with gr.Row():
62 self.selection = gr.Dropdown(label="Styles", elem_id=f"{tabname}_styles_edit_select", choices=list(shared.prompt_styles.styles), value=[], allow_custom_value=True, info="Styles allow you to add custom text to prompt. Use the {prompt} token in style text, and it will be replaced with user's prompt when applying style. Otherwise, style's text will be added to the end of the prompt.")
63 ui_common.create_refresh_button([self.dropdown, self.selection], shared.prompt_styles.reload, lambda: {"choices": list(shared.prompt_styles.styles)}, f"refresh_{tabname}_styles")
64 self.materialize = ui_components.ToolButton(value=styles_materialize_symbol, elem_id=f"{tabname}_style_apply", tooltip="Apply all selected styles from the style selction dropdown in main UI to the prompt.")
65
66 with gr.Row():
67 self.prompt = gr.Textbox(label="Prompt", show_label=True, elem_id=f"{tabname}_edit_style_prompt", lines=3)
68
69 with gr.Row():
70 self.neg_prompt = gr.Textbox(label="Negative prompt", show_label=True, elem_id=f"{tabname}_edit_style_neg_prompt", lines=3)
71
72 with gr.Row():
73 self.save = gr.Button('Save', variant='primary', elem_id=f'{tabname}_edit_style_save', visible=False)
74 self.delete = gr.Button('Delete', variant='primary', elem_id=f'{tabname}_edit_style_delete', visible=False)
75 self.close = gr.Button('Close', variant='secondary', elem_id=f'{tabname}_edit_style_close')
76
77 self.selection.change(
78 fn=select_style,
79 inputs=[self.selection],
80 outputs=[self.prompt, self.neg_prompt, self.delete, self.save],
81 show_progress=False,
82 )
83
84 self.save.click(
85 fn=save_style,
86 inputs=[self.selection, self.prompt, self.neg_prompt],
87 outputs=[self.delete],
88 show_progress=False,
89 ).then(refresh_styles, outputs=[self.dropdown, self.selection], show_progress=False)
90
91 self.delete.click(
92 fn=delete_style,
93 _js='function(name){ if(name == "") return ""; return confirm("Delete style " + name + "?") ? name : ""; }',
94 inputs=[self.selection],
95 outputs=[self.selection, self.prompt, self.neg_prompt],
96 show_progress=False,
97 ).then(refresh_styles, outputs=[self.dropdown, self.selection], show_progress=False)
98
99 self.materialize.click(
100 fn=materialize_styles,
101 inputs=[main_ui_prompt, main_ui_negative_prompt, self.dropdown],
102 outputs=[main_ui_prompt, main_ui_negative_prompt, self.dropdown],
103 show_progress=False,
104 ).then(fn=None, _js="function(){update_"+tabname+"_tokens(); closePopup();}", show_progress=False)
105
106 ui_common.setup_dialog(button_show=edit_button, dialog=styles_dialog, button_close=self.close)
107
108
109
110
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/modules/ui_prompt_styles.py b/modules/ui_prompt_styles.py
--- a/modules/ui_prompt_styles.py
+++ b/modules/ui_prompt_styles.py
@@ -4,6 +4,7 @@
styles_edit_symbol = '\U0001f58c\uFE0F' # 🖌️
styles_materialize_symbol = '\U0001f4cb' # 📋
+styles_copy_symbol = '\U0001f4dd' # 📝
def select_style(name):
@@ -62,6 +63,7 @@
self.selection = gr.Dropdown(label="Styles", elem_id=f"{tabname}_styles_edit_select", choices=list(shared.prompt_styles.styles), value=[], allow_custom_value=True, info="Styles allow you to add custom text to prompt. Use the {prompt} token in style text, and it will be replaced with user's prompt when applying style. Otherwise, style's text will be added to the end of the prompt.")
ui_common.create_refresh_button([self.dropdown, self.selection], shared.prompt_styles.reload, lambda: {"choices": list(shared.prompt_styles.styles)}, f"refresh_{tabname}_styles")
self.materialize = ui_components.ToolButton(value=styles_materialize_symbol, elem_id=f"{tabname}_style_apply", tooltip="Apply all selected styles from the style selction dropdown in main UI to the prompt.")
+ self.copy = ui_components.ToolButton(value=styles_copy_symbol, elem_id=f"{tabname}_style_copy", tooltip="Copy main UI prompt to style.")
with gr.Row():
self.prompt = gr.Textbox(label="Prompt", show_label=True, elem_id=f"{tabname}_edit_style_prompt", lines=3)
@@ -103,6 +105,13 @@
show_progress=False,
).then(fn=None, _js="function(){update_"+tabname+"_tokens(); closePopup();}", show_progress=False)
+ self.copy.click(
+ fn=lambda p, n: (p, n),
+ inputs=[main_ui_prompt, main_ui_negative_prompt],
+ outputs=[self.prompt, self.neg_prompt],
+ show_progress=False,
+ )
+
ui_common.setup_dialog(button_show=edit_button, dialog=styles_dialog, button_close=self.close)
| {"golden_diff": "diff --git a/modules/ui_prompt_styles.py b/modules/ui_prompt_styles.py\n--- a/modules/ui_prompt_styles.py\n+++ b/modules/ui_prompt_styles.py\n@@ -4,6 +4,7 @@\n \r\n styles_edit_symbol = '\\U0001f58c\\uFE0F' # \ud83d\udd8c\ufe0f\r\n styles_materialize_symbol = '\\U0001f4cb' # \ud83d\udccb\r\n+styles_copy_symbol = '\\U0001f4dd' # \ud83d\udcdd\r\n \r\n \r\n def select_style(name):\r\n@@ -62,6 +63,7 @@\n self.selection = gr.Dropdown(label=\"Styles\", elem_id=f\"{tabname}_styles_edit_select\", choices=list(shared.prompt_styles.styles), value=[], allow_custom_value=True, info=\"Styles allow you to add custom text to prompt. Use the {prompt} token in style text, and it will be replaced with user's prompt when applying style. Otherwise, style's text will be added to the end of the prompt.\")\r\n ui_common.create_refresh_button([self.dropdown, self.selection], shared.prompt_styles.reload, lambda: {\"choices\": list(shared.prompt_styles.styles)}, f\"refresh_{tabname}_styles\")\r\n self.materialize = ui_components.ToolButton(value=styles_materialize_symbol, elem_id=f\"{tabname}_style_apply\", tooltip=\"Apply all selected styles from the style selction dropdown in main UI to the prompt.\")\r\n+ self.copy = ui_components.ToolButton(value=styles_copy_symbol, elem_id=f\"{tabname}_style_copy\", tooltip=\"Copy main UI prompt to style.\")\r\n \r\n with gr.Row():\r\n self.prompt = gr.Textbox(label=\"Prompt\", show_label=True, elem_id=f\"{tabname}_edit_style_prompt\", lines=3)\r\n@@ -103,6 +105,13 @@\n show_progress=False,\r\n ).then(fn=None, _js=\"function(){update_\"+tabname+\"_tokens(); closePopup();}\", show_progress=False)\r\n \r\n+ self.copy.click(\r\n+ fn=lambda p, n: (p, n),\r\n+ inputs=[main_ui_prompt, main_ui_negative_prompt],\r\n+ outputs=[self.prompt, self.neg_prompt],\r\n+ show_progress=False,\r\n+ )\r\n+\r\n ui_common.setup_dialog(button_show=edit_button, dialog=styles_dialog, button_close=self.close)\n", "issue": "[Feature Request]: Where is the save style button?\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What would your feature do ?\n\nIs it possible to make the old implementation of save style as well?\r\nNot being able to save the currently typed prompt is very troublesome.\r\nWhy do we have to open the edit screen and copy/paste the prompt?\n\n### Proposed workflow\n\nRestore old implementation of save styles button\n\n### Additional information\n\n_No response_\n", "before_files": [{"content": "import gradio as gr\r\n\r\nfrom modules import shared, ui_common, ui_components, styles\r\n\r\nstyles_edit_symbol = '\\U0001f58c\\uFE0F' # \ud83d\udd8c\ufe0f\r\nstyles_materialize_symbol = '\\U0001f4cb' # \ud83d\udccb\r\n\r\n\r\ndef select_style(name):\r\n style = shared.prompt_styles.styles.get(name)\r\n existing = style is not None\r\n empty = not name\r\n\r\n prompt = style.prompt if style else gr.update()\r\n negative_prompt = style.negative_prompt if style else gr.update()\r\n\r\n return prompt, negative_prompt, gr.update(visible=existing), gr.update(visible=not empty)\r\n\r\n\r\ndef save_style(name, prompt, negative_prompt):\r\n if not name:\r\n return gr.update(visible=False)\r\n\r\n style = styles.PromptStyle(name, prompt, negative_prompt)\r\n shared.prompt_styles.styles[style.name] = style\r\n shared.prompt_styles.save_styles(shared.styles_filename)\r\n\r\n return gr.update(visible=True)\r\n\r\n\r\ndef delete_style(name):\r\n if name == \"\":\r\n return\r\n\r\n 
shared.prompt_styles.styles.pop(name, None)\r\n shared.prompt_styles.save_styles(shared.styles_filename)\r\n\r\n return '', '', ''\r\n\r\n\r\ndef materialize_styles(prompt, negative_prompt, styles):\r\n prompt = shared.prompt_styles.apply_styles_to_prompt(prompt, styles)\r\n negative_prompt = shared.prompt_styles.apply_negative_styles_to_prompt(negative_prompt, styles)\r\n\r\n return [gr.Textbox.update(value=prompt), gr.Textbox.update(value=negative_prompt), gr.Dropdown.update(value=[])]\r\n\r\n\r\ndef refresh_styles():\r\n return gr.update(choices=list(shared.prompt_styles.styles)), gr.update(choices=list(shared.prompt_styles.styles))\r\n\r\n\r\nclass UiPromptStyles:\r\n def __init__(self, tabname, main_ui_prompt, main_ui_negative_prompt):\r\n self.tabname = tabname\r\n\r\n with gr.Row(elem_id=f\"{tabname}_styles_row\"):\r\n self.dropdown = gr.Dropdown(label=\"Styles\", show_label=False, elem_id=f\"{tabname}_styles\", choices=list(shared.prompt_styles.styles), value=[], multiselect=True, tooltip=\"Styles\")\r\n edit_button = ui_components.ToolButton(value=styles_edit_symbol, elem_id=f\"{tabname}_styles_edit_button\", tooltip=\"Edit styles\")\r\n\r\n with gr.Box(elem_id=f\"{tabname}_styles_dialog\", elem_classes=\"popup-dialog\") as styles_dialog:\r\n with gr.Row():\r\n self.selection = gr.Dropdown(label=\"Styles\", elem_id=f\"{tabname}_styles_edit_select\", choices=list(shared.prompt_styles.styles), value=[], allow_custom_value=True, info=\"Styles allow you to add custom text to prompt. Use the {prompt} token in style text, and it will be replaced with user's prompt when applying style. Otherwise, style's text will be added to the end of the prompt.\")\r\n ui_common.create_refresh_button([self.dropdown, self.selection], shared.prompt_styles.reload, lambda: {\"choices\": list(shared.prompt_styles.styles)}, f\"refresh_{tabname}_styles\")\r\n self.materialize = ui_components.ToolButton(value=styles_materialize_symbol, elem_id=f\"{tabname}_style_apply\", tooltip=\"Apply all selected styles from the style selction dropdown in main UI to the prompt.\")\r\n\r\n with gr.Row():\r\n self.prompt = gr.Textbox(label=\"Prompt\", show_label=True, elem_id=f\"{tabname}_edit_style_prompt\", lines=3)\r\n\r\n with gr.Row():\r\n self.neg_prompt = gr.Textbox(label=\"Negative prompt\", show_label=True, elem_id=f\"{tabname}_edit_style_neg_prompt\", lines=3)\r\n\r\n with gr.Row():\r\n self.save = gr.Button('Save', variant='primary', elem_id=f'{tabname}_edit_style_save', visible=False)\r\n self.delete = gr.Button('Delete', variant='primary', elem_id=f'{tabname}_edit_style_delete', visible=False)\r\n self.close = gr.Button('Close', variant='secondary', elem_id=f'{tabname}_edit_style_close')\r\n\r\n self.selection.change(\r\n fn=select_style,\r\n inputs=[self.selection],\r\n outputs=[self.prompt, self.neg_prompt, self.delete, self.save],\r\n show_progress=False,\r\n )\r\n\r\n self.save.click(\r\n fn=save_style,\r\n inputs=[self.selection, self.prompt, self.neg_prompt],\r\n outputs=[self.delete],\r\n show_progress=False,\r\n ).then(refresh_styles, outputs=[self.dropdown, self.selection], show_progress=False)\r\n\r\n self.delete.click(\r\n fn=delete_style,\r\n _js='function(name){ if(name == \"\") return \"\"; return confirm(\"Delete style \" + name + \"?\") ? 
name : \"\"; }',\r\n inputs=[self.selection],\r\n outputs=[self.selection, self.prompt, self.neg_prompt],\r\n show_progress=False,\r\n ).then(refresh_styles, outputs=[self.dropdown, self.selection], show_progress=False)\r\n\r\n self.materialize.click(\r\n fn=materialize_styles,\r\n inputs=[main_ui_prompt, main_ui_negative_prompt, self.dropdown],\r\n outputs=[main_ui_prompt, main_ui_negative_prompt, self.dropdown],\r\n show_progress=False,\r\n ).then(fn=None, _js=\"function(){update_\"+tabname+\"_tokens(); closePopup();}\", show_progress=False)\r\n\r\n ui_common.setup_dialog(button_show=edit_button, dialog=styles_dialog, button_close=self.close)\r\n\r\n\r\n\r\n\r\n", "path": "modules/ui_prompt_styles.py"}], "after_files": [{"content": "import gradio as gr\r\n\r\nfrom modules import shared, ui_common, ui_components, styles\r\n\r\nstyles_edit_symbol = '\\U0001f58c\\uFE0F' # \ud83d\udd8c\ufe0f\r\nstyles_materialize_symbol = '\\U0001f4cb' # \ud83d\udccb\r\nstyles_copy_symbol = '\\U0001f4dd' # \ud83d\udcdd\r\n\r\n\r\ndef select_style(name):\r\n style = shared.prompt_styles.styles.get(name)\r\n existing = style is not None\r\n empty = not name\r\n\r\n prompt = style.prompt if style else gr.update()\r\n negative_prompt = style.negative_prompt if style else gr.update()\r\n\r\n return prompt, negative_prompt, gr.update(visible=existing), gr.update(visible=not empty)\r\n\r\n\r\ndef save_style(name, prompt, negative_prompt):\r\n if not name:\r\n return gr.update(visible=False)\r\n\r\n style = styles.PromptStyle(name, prompt, negative_prompt)\r\n shared.prompt_styles.styles[style.name] = style\r\n shared.prompt_styles.save_styles(shared.styles_filename)\r\n\r\n return gr.update(visible=True)\r\n\r\n\r\ndef delete_style(name):\r\n if name == \"\":\r\n return\r\n\r\n shared.prompt_styles.styles.pop(name, None)\r\n shared.prompt_styles.save_styles(shared.styles_filename)\r\n\r\n return '', '', ''\r\n\r\n\r\ndef materialize_styles(prompt, negative_prompt, styles):\r\n prompt = shared.prompt_styles.apply_styles_to_prompt(prompt, styles)\r\n negative_prompt = shared.prompt_styles.apply_negative_styles_to_prompt(negative_prompt, styles)\r\n\r\n return [gr.Textbox.update(value=prompt), gr.Textbox.update(value=negative_prompt), gr.Dropdown.update(value=[])]\r\n\r\n\r\ndef refresh_styles():\r\n return gr.update(choices=list(shared.prompt_styles.styles)), gr.update(choices=list(shared.prompt_styles.styles))\r\n\r\n\r\nclass UiPromptStyles:\r\n def __init__(self, tabname, main_ui_prompt, main_ui_negative_prompt):\r\n self.tabname = tabname\r\n\r\n with gr.Row(elem_id=f\"{tabname}_styles_row\"):\r\n self.dropdown = gr.Dropdown(label=\"Styles\", show_label=False, elem_id=f\"{tabname}_styles\", choices=list(shared.prompt_styles.styles), value=[], multiselect=True, tooltip=\"Styles\")\r\n edit_button = ui_components.ToolButton(value=styles_edit_symbol, elem_id=f\"{tabname}_styles_edit_button\", tooltip=\"Edit styles\")\r\n\r\n with gr.Box(elem_id=f\"{tabname}_styles_dialog\", elem_classes=\"popup-dialog\") as styles_dialog:\r\n with gr.Row():\r\n self.selection = gr.Dropdown(label=\"Styles\", elem_id=f\"{tabname}_styles_edit_select\", choices=list(shared.prompt_styles.styles), value=[], allow_custom_value=True, info=\"Styles allow you to add custom text to prompt. Use the {prompt} token in style text, and it will be replaced with user's prompt when applying style. 
Otherwise, style's text will be added to the end of the prompt.\")\r\n ui_common.create_refresh_button([self.dropdown, self.selection], shared.prompt_styles.reload, lambda: {\"choices\": list(shared.prompt_styles.styles)}, f\"refresh_{tabname}_styles\")\r\n self.materialize = ui_components.ToolButton(value=styles_materialize_symbol, elem_id=f\"{tabname}_style_apply\", tooltip=\"Apply all selected styles from the style selction dropdown in main UI to the prompt.\")\r\n self.copy = ui_components.ToolButton(value=styles_copy_symbol, elem_id=f\"{tabname}_style_copy\", tooltip=\"Copy main UI prompt to style.\")\r\n\r\n with gr.Row():\r\n self.prompt = gr.Textbox(label=\"Prompt\", show_label=True, elem_id=f\"{tabname}_edit_style_prompt\", lines=3)\r\n\r\n with gr.Row():\r\n self.neg_prompt = gr.Textbox(label=\"Negative prompt\", show_label=True, elem_id=f\"{tabname}_edit_style_neg_prompt\", lines=3)\r\n\r\n with gr.Row():\r\n self.save = gr.Button('Save', variant='primary', elem_id=f'{tabname}_edit_style_save', visible=False)\r\n self.delete = gr.Button('Delete', variant='primary', elem_id=f'{tabname}_edit_style_delete', visible=False)\r\n self.close = gr.Button('Close', variant='secondary', elem_id=f'{tabname}_edit_style_close')\r\n\r\n self.selection.change(\r\n fn=select_style,\r\n inputs=[self.selection],\r\n outputs=[self.prompt, self.neg_prompt, self.delete, self.save],\r\n show_progress=False,\r\n )\r\n\r\n self.save.click(\r\n fn=save_style,\r\n inputs=[self.selection, self.prompt, self.neg_prompt],\r\n outputs=[self.delete],\r\n show_progress=False,\r\n ).then(refresh_styles, outputs=[self.dropdown, self.selection], show_progress=False)\r\n\r\n self.delete.click(\r\n fn=delete_style,\r\n _js='function(name){ if(name == \"\") return \"\"; return confirm(\"Delete style \" + name + \"?\") ? name : \"\"; }',\r\n inputs=[self.selection],\r\n outputs=[self.selection, self.prompt, self.neg_prompt],\r\n show_progress=False,\r\n ).then(refresh_styles, outputs=[self.dropdown, self.selection], show_progress=False)\r\n\r\n self.materialize.click(\r\n fn=materialize_styles,\r\n inputs=[main_ui_prompt, main_ui_negative_prompt, self.dropdown],\r\n outputs=[main_ui_prompt, main_ui_negative_prompt, self.dropdown],\r\n show_progress=False,\r\n ).then(fn=None, _js=\"function(){update_\"+tabname+\"_tokens(); closePopup();}\", show_progress=False)\r\n\r\n self.copy.click(\r\n fn=lambda p, n: (p, n),\r\n inputs=[main_ui_prompt, main_ui_negative_prompt],\r\n outputs=[self.prompt, self.neg_prompt],\r\n show_progress=False,\r\n )\r\n\r\n ui_common.setup_dialog(button_show=edit_button, dialog=styles_dialog, button_close=self.close)\r\n\r\n\r\n\r\n\r\n", "path": "modules/ui_prompt_styles.py"}]} | 1,710 | 490 |
gh_patches_debug_11197 | rasdani/github-patches | git_diff | ESMCI__cime-2860 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
floating point mpiprocs when running ./case.setup with python3
I noticed that when running with python3, mpiprocs is set to be a float, i.e.,
$ python3 ./case.setup # will create the following in .case.run:
#PBS -l select=5:ncpus=36:mpiprocs=36.0:ompthreads=1
$ python2 ./case.setup # will create the following .case.run:
#PBS -l select=5:ncpus=36:mpiprocs=36:ompthreads=1
NOTE: You'll need to rm .case.run, in between ./case.setup executions to see the difference.
I haven't looked this into depth, but I bet it has to do with "true division" that comes with python3.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/lib/CIME/XML/env_mach_pes.py`
Content:
```
1 """
2 Interface to the env_mach_pes.xml file. This class inherits from EntryID
3 """
4 from CIME.XML.standard_module_setup import *
5 from CIME.XML.env_base import EnvBase
6 import math
7
8 logger = logging.getLogger(__name__)
9
10 class EnvMachPes(EnvBase):
11
12 def __init__(self, case_root=None, infile="env_mach_pes.xml", components=None):
13 """
14 initialize an object interface to file env_mach_pes.xml in the case directory
15 """
16 self._components = components
17 schema = os.path.join(get_cime_root(), "config", "xml_schemas", "env_mach_pes.xsd")
18 EnvBase.__init__(self, case_root, infile, schema=schema)
19
20 def add_comment(self, comment):
21 if comment is not None:
22 node = self.make_child("comment", text=comment)
23 # make_child adds to the end of the file but we want it to follow the header
24 # so we need to remove it and add it in the correct position
25 self.remove_child(node)
26 self.add_child(node, position=1)
27
28 def get_value(self, vid, attribute=None, resolved=True, subgroup=None, max_mpitasks_per_node=None): # pylint: disable=arguments-differ
29 # Special variable NINST_MAX is used to determine the number of
30 # drivers in multi-driver mode.
31 if vid == "NINST_MAX":
32 value = 1
33 for comp in self._components:
34 if comp != "CPL":
35 value = max(value, self.get_value("NINST_{}".format(comp)))
36 return value
37
38 value = EnvBase.get_value(self, vid, attribute, resolved, subgroup)
39
40 if "NTASKS" in vid or "ROOTPE" in vid:
41 if max_mpitasks_per_node is None:
42 max_mpitasks_per_node = self.get_value("MAX_MPITASKS_PER_NODE")
43 if value is not None and value < 0:
44 value = -1*value*max_mpitasks_per_node
45
46 return value
47
48 def set_value(self, vid, value, subgroup=None, ignore_type=False):
49 """
50 Set the value of an entry-id field to value
51 Returns the value or None if not found
52 subgroup is ignored in the general routine and applied in specific methods
53 """
54 if vid == "MULTI_DRIVER" and value:
55 ninst_max = self.get_value("NINST_MAX")
56 for comp in self._components:
57 if comp == "CPL":
58 continue
59 ninst = self.get_value("NINST_{}".format(comp))
60 expect(ninst == ninst_max,
61 "All components must have the same NINST value in multi_driver mode. NINST_{}={} shoud be {}".format(comp,ninst,ninst_max))
62 if "NTASKS" in vid or "NTHRDS" in vid:
63 expect(value != 0, "Cannot set NTASKS or NTHRDS to 0")
64
65
66 return EnvBase.set_value(self, vid, value, subgroup=subgroup, ignore_type=ignore_type)
67
68
69 def get_max_thread_count(self, comp_classes):
70 ''' Find the maximum number of openmp threads for any component in the case '''
71 max_threads = 1
72 for comp in comp_classes:
73 threads = self.get_value("NTHRDS",attribute={"compclass":comp})
74 expect(threads is not None, "Error no thread count found for component class {}".format(comp))
75 if threads > max_threads:
76 max_threads = threads
77 return max_threads
78
79 def get_total_tasks(self, comp_classes):
80 total_tasks = 0
81 maxinst = 1
82 for comp in comp_classes:
83 ntasks = self.get_value("NTASKS", attribute={"compclass":comp})
84 rootpe = self.get_value("ROOTPE", attribute={"compclass":comp})
85 pstrid = self.get_value("PSTRID", attribute={"compclass":comp})
86 if comp != "CPL":
87 ninst = self.get_value("NINST", attribute={"compclass":comp})
88 maxinst = max(maxinst, ninst)
89 tt = rootpe + (ntasks - 1) * pstrid + 1
90 total_tasks = max(tt, total_tasks)
91 if self.get_value("MULTI_DRIVER"):
92 total_tasks *= maxinst
93 return total_tasks
94
95 def get_tasks_per_node(self, total_tasks, max_thread_count):
96 expect(total_tasks > 0,"totaltasks > 0 expected, totaltasks = {}".format(total_tasks))
97 tasks_per_node = min(self.get_value("MAX_TASKS_PER_NODE")/ max_thread_count,
98 self.get_value("MAX_MPITASKS_PER_NODE"), total_tasks)
99 return tasks_per_node if tasks_per_node > 0 else 1
100
101 def get_total_nodes(self, total_tasks, max_thread_count):
102 """
103 Return (num_active_nodes, num_spare_nodes)
104 """
105 tasks_per_node = self.get_tasks_per_node(total_tasks, max_thread_count)
106 num_nodes = int(math.ceil(float(total_tasks) / tasks_per_node))
107 return num_nodes, self.get_spare_nodes(num_nodes)
108
109 def get_spare_nodes(self, num_nodes):
110 force_spare_nodes = self.get_value("FORCE_SPARE_NODES")
111 if force_spare_nodes != -999:
112 return force_spare_nodes
113
114 if self.get_value("ALLOCATE_SPARE_NODES"):
115 ten_pct = int(math.ceil(float(num_nodes) * 0.1))
116 if ten_pct < 1:
117 return 1 # Always provide at lease one spare node
118 elif ten_pct > 10:
119 return 10 # Never provide more than 10 spare nodes
120 else:
121 return ten_pct
122 else:
123 return 0
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/lib/CIME/XML/env_mach_pes.py b/scripts/lib/CIME/XML/env_mach_pes.py
--- a/scripts/lib/CIME/XML/env_mach_pes.py
+++ b/scripts/lib/CIME/XML/env_mach_pes.py
@@ -94,7 +94,7 @@
def get_tasks_per_node(self, total_tasks, max_thread_count):
expect(total_tasks > 0,"totaltasks > 0 expected, totaltasks = {}".format(total_tasks))
- tasks_per_node = min(self.get_value("MAX_TASKS_PER_NODE")/ max_thread_count,
+ tasks_per_node = min(self.get_value("MAX_TASKS_PER_NODE")// max_thread_count,
self.get_value("MAX_MPITASKS_PER_NODE"), total_tasks)
return tasks_per_node if tasks_per_node > 0 else 1
| {"golden_diff": "diff --git a/scripts/lib/CIME/XML/env_mach_pes.py b/scripts/lib/CIME/XML/env_mach_pes.py\n--- a/scripts/lib/CIME/XML/env_mach_pes.py\n+++ b/scripts/lib/CIME/XML/env_mach_pes.py\n@@ -94,7 +94,7 @@\n \n def get_tasks_per_node(self, total_tasks, max_thread_count):\n expect(total_tasks > 0,\"totaltasks > 0 expected, totaltasks = {}\".format(total_tasks))\n- tasks_per_node = min(self.get_value(\"MAX_TASKS_PER_NODE\")/ max_thread_count,\n+ tasks_per_node = min(self.get_value(\"MAX_TASKS_PER_NODE\")// max_thread_count,\n self.get_value(\"MAX_MPITASKS_PER_NODE\"), total_tasks)\n return tasks_per_node if tasks_per_node > 0 else 1\n", "issue": "floating point mpiprocs when running ./case.setup with python3\nI noticed that when running with python3, mpiprocs is set to be a float, i.e.,\r\n\r\n$ python3 ./case.setup # will create the following in .case.run:\r\n#PBS -l select=5:ncpus=36:mpiprocs=36.0:ompthreads=1\r\n\r\n$ python2 ./case.setup # will create the following .case.run:\r\n#PBS -l select=5:ncpus=36:mpiprocs=36:ompthreads=1\r\n\r\nNOTE: You'll need to rm .case.run, in between ./case.setup executions to see the difference.\r\n\r\nI haven't looked this into depth, but I bet it has to do with \"true division\" that comes with python3.\n", "before_files": [{"content": "\"\"\"\nInterface to the env_mach_pes.xml file. This class inherits from EntryID\n\"\"\"\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.XML.env_base import EnvBase\nimport math\n\nlogger = logging.getLogger(__name__)\n\nclass EnvMachPes(EnvBase):\n\n def __init__(self, case_root=None, infile=\"env_mach_pes.xml\", components=None):\n \"\"\"\n initialize an object interface to file env_mach_pes.xml in the case directory\n \"\"\"\n self._components = components\n schema = os.path.join(get_cime_root(), \"config\", \"xml_schemas\", \"env_mach_pes.xsd\")\n EnvBase.__init__(self, case_root, infile, schema=schema)\n\n def add_comment(self, comment):\n if comment is not None:\n node = self.make_child(\"comment\", text=comment)\n # make_child adds to the end of the file but we want it to follow the header\n # so we need to remove it and add it in the correct position\n self.remove_child(node)\n self.add_child(node, position=1)\n\n def get_value(self, vid, attribute=None, resolved=True, subgroup=None, max_mpitasks_per_node=None): # pylint: disable=arguments-differ\n # Special variable NINST_MAX is used to determine the number of\n # drivers in multi-driver mode.\n if vid == \"NINST_MAX\":\n value = 1\n for comp in self._components:\n if comp != \"CPL\":\n value = max(value, self.get_value(\"NINST_{}\".format(comp)))\n return value\n\n value = EnvBase.get_value(self, vid, attribute, resolved, subgroup)\n\n if \"NTASKS\" in vid or \"ROOTPE\" in vid:\n if max_mpitasks_per_node is None:\n max_mpitasks_per_node = self.get_value(\"MAX_MPITASKS_PER_NODE\")\n if value is not None and value < 0:\n value = -1*value*max_mpitasks_per_node\n\n return value\n\n def set_value(self, vid, value, subgroup=None, ignore_type=False):\n \"\"\"\n Set the value of an entry-id field to value\n Returns the value or None if not found\n subgroup is ignored in the general routine and applied in specific methods\n \"\"\"\n if vid == \"MULTI_DRIVER\" and value:\n ninst_max = self.get_value(\"NINST_MAX\")\n for comp in self._components:\n if comp == \"CPL\":\n continue\n ninst = self.get_value(\"NINST_{}\".format(comp))\n expect(ninst == ninst_max,\n \"All components must have the same NINST value in multi_driver mode. 
NINST_{}={} shoud be {}\".format(comp,ninst,ninst_max))\n if \"NTASKS\" in vid or \"NTHRDS\" in vid:\n expect(value != 0, \"Cannot set NTASKS or NTHRDS to 0\")\n\n\n return EnvBase.set_value(self, vid, value, subgroup=subgroup, ignore_type=ignore_type)\n\n\n def get_max_thread_count(self, comp_classes):\n ''' Find the maximum number of openmp threads for any component in the case '''\n max_threads = 1\n for comp in comp_classes:\n threads = self.get_value(\"NTHRDS\",attribute={\"compclass\":comp})\n expect(threads is not None, \"Error no thread count found for component class {}\".format(comp))\n if threads > max_threads:\n max_threads = threads\n return max_threads\n\n def get_total_tasks(self, comp_classes):\n total_tasks = 0\n maxinst = 1\n for comp in comp_classes:\n ntasks = self.get_value(\"NTASKS\", attribute={\"compclass\":comp})\n rootpe = self.get_value(\"ROOTPE\", attribute={\"compclass\":comp})\n pstrid = self.get_value(\"PSTRID\", attribute={\"compclass\":comp})\n if comp != \"CPL\":\n ninst = self.get_value(\"NINST\", attribute={\"compclass\":comp})\n maxinst = max(maxinst, ninst)\n tt = rootpe + (ntasks - 1) * pstrid + 1\n total_tasks = max(tt, total_tasks)\n if self.get_value(\"MULTI_DRIVER\"):\n total_tasks *= maxinst\n return total_tasks\n\n def get_tasks_per_node(self, total_tasks, max_thread_count):\n expect(total_tasks > 0,\"totaltasks > 0 expected, totaltasks = {}\".format(total_tasks))\n tasks_per_node = min(self.get_value(\"MAX_TASKS_PER_NODE\")/ max_thread_count,\n self.get_value(\"MAX_MPITASKS_PER_NODE\"), total_tasks)\n return tasks_per_node if tasks_per_node > 0 else 1\n\n def get_total_nodes(self, total_tasks, max_thread_count):\n \"\"\"\n Return (num_active_nodes, num_spare_nodes)\n \"\"\"\n tasks_per_node = self.get_tasks_per_node(total_tasks, max_thread_count)\n num_nodes = int(math.ceil(float(total_tasks) / tasks_per_node))\n return num_nodes, self.get_spare_nodes(num_nodes)\n\n def get_spare_nodes(self, num_nodes):\n force_spare_nodes = self.get_value(\"FORCE_SPARE_NODES\")\n if force_spare_nodes != -999:\n return force_spare_nodes\n\n if self.get_value(\"ALLOCATE_SPARE_NODES\"):\n ten_pct = int(math.ceil(float(num_nodes) * 0.1))\n if ten_pct < 1:\n return 1 # Always provide at lease one spare node\n elif ten_pct > 10:\n return 10 # Never provide more than 10 spare nodes\n else:\n return ten_pct\n else:\n return 0\n", "path": "scripts/lib/CIME/XML/env_mach_pes.py"}], "after_files": [{"content": "\"\"\"\nInterface to the env_mach_pes.xml file. 
This class inherits from EntryID\n\"\"\"\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.XML.env_base import EnvBase\nimport math\n\nlogger = logging.getLogger(__name__)\n\nclass EnvMachPes(EnvBase):\n\n def __init__(self, case_root=None, infile=\"env_mach_pes.xml\", components=None):\n \"\"\"\n initialize an object interface to file env_mach_pes.xml in the case directory\n \"\"\"\n self._components = components\n schema = os.path.join(get_cime_root(), \"config\", \"xml_schemas\", \"env_mach_pes.xsd\")\n EnvBase.__init__(self, case_root, infile, schema=schema)\n\n def add_comment(self, comment):\n if comment is not None:\n node = self.make_child(\"comment\", text=comment)\n # make_child adds to the end of the file but we want it to follow the header\n # so we need to remove it and add it in the correct position\n self.remove_child(node)\n self.add_child(node, position=1)\n\n def get_value(self, vid, attribute=None, resolved=True, subgroup=None, max_mpitasks_per_node=None): # pylint: disable=arguments-differ\n # Special variable NINST_MAX is used to determine the number of\n # drivers in multi-driver mode.\n if vid == \"NINST_MAX\":\n value = 1\n for comp in self._components:\n if comp != \"CPL\":\n value = max(value, self.get_value(\"NINST_{}\".format(comp)))\n return value\n\n value = EnvBase.get_value(self, vid, attribute, resolved, subgroup)\n\n if \"NTASKS\" in vid or \"ROOTPE\" in vid:\n if max_mpitasks_per_node is None:\n max_mpitasks_per_node = self.get_value(\"MAX_MPITASKS_PER_NODE\")\n if value is not None and value < 0:\n value = -1*value*max_mpitasks_per_node\n\n return value\n\n def set_value(self, vid, value, subgroup=None, ignore_type=False):\n \"\"\"\n Set the value of an entry-id field to value\n Returns the value or None if not found\n subgroup is ignored in the general routine and applied in specific methods\n \"\"\"\n if vid == \"MULTI_DRIVER\" and value:\n ninst_max = self.get_value(\"NINST_MAX\")\n for comp in self._components:\n if comp == \"CPL\":\n continue\n ninst = self.get_value(\"NINST_{}\".format(comp))\n expect(ninst == ninst_max,\n \"All components must have the same NINST value in multi_driver mode. 
NINST_{}={} shoud be {}\".format(comp,ninst,ninst_max))\n if \"NTASKS\" in vid or \"NTHRDS\" in vid:\n expect(value != 0, \"Cannot set NTASKS or NTHRDS to 0\")\n\n\n return EnvBase.set_value(self, vid, value, subgroup=subgroup, ignore_type=ignore_type)\n\n\n def get_max_thread_count(self, comp_classes):\n ''' Find the maximum number of openmp threads for any component in the case '''\n max_threads = 1\n for comp in comp_classes:\n threads = self.get_value(\"NTHRDS\",attribute={\"compclass\":comp})\n expect(threads is not None, \"Error no thread count found for component class {}\".format(comp))\n if threads > max_threads:\n max_threads = threads\n return max_threads\n\n def get_total_tasks(self, comp_classes):\n total_tasks = 0\n maxinst = 1\n for comp in comp_classes:\n ntasks = self.get_value(\"NTASKS\", attribute={\"compclass\":comp})\n rootpe = self.get_value(\"ROOTPE\", attribute={\"compclass\":comp})\n pstrid = self.get_value(\"PSTRID\", attribute={\"compclass\":comp})\n if comp != \"CPL\":\n ninst = self.get_value(\"NINST\", attribute={\"compclass\":comp})\n maxinst = max(maxinst, ninst)\n tt = rootpe + (ntasks - 1) * pstrid + 1\n total_tasks = max(tt, total_tasks)\n if self.get_value(\"MULTI_DRIVER\"):\n total_tasks *= maxinst\n return total_tasks\n\n def get_tasks_per_node(self, total_tasks, max_thread_count):\n expect(total_tasks > 0,\"totaltasks > 0 expected, totaltasks = {}\".format(total_tasks))\n tasks_per_node = min(self.get_value(\"MAX_TASKS_PER_NODE\")// max_thread_count,\n self.get_value(\"MAX_MPITASKS_PER_NODE\"), total_tasks)\n return tasks_per_node if tasks_per_node > 0 else 1\n\n def get_total_nodes(self, total_tasks, max_thread_count):\n \"\"\"\n Return (num_active_nodes, num_spare_nodes)\n \"\"\"\n tasks_per_node = self.get_tasks_per_node(total_tasks, max_thread_count)\n num_nodes = int(math.ceil(float(total_tasks) / tasks_per_node))\n return num_nodes, self.get_spare_nodes(num_nodes)\n\n def get_spare_nodes(self, num_nodes):\n force_spare_nodes = self.get_value(\"FORCE_SPARE_NODES\")\n if force_spare_nodes != -999:\n return force_spare_nodes\n\n if self.get_value(\"ALLOCATE_SPARE_NODES\"):\n ten_pct = int(math.ceil(float(num_nodes) * 0.1))\n if ten_pct < 1:\n return 1 # Always provide at lease one spare node\n elif ten_pct > 10:\n return 10 # Never provide more than 10 spare nodes\n else:\n return ten_pct\n else:\n return 0\n", "path": "scripts/lib/CIME/XML/env_mach_pes.py"}]} | 1,955 | 179 |
gh_patches_debug_379 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-3650 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Non-executable files with shebangs in the repository
**Describe your environment**
(Nothing relevant to describe)
**Steps to reproduce**
```
$ rg -l '^#!' | xargs ls -l
-rwxr-xr-x. 1 ben ben 1420 Jul 5 2023 docs/examples/django/manage.py
-rw-r--r--. 1 ben ben 1300 Jul 5 2023 docs/examples/opencensus-exporter-tracer/collector.py
-rwxr-xr-x. 1 ben ben 1485 Jul 5 2023 docs/examples/opentracing/main.py
-rwxr-xr-x. 1 ben ben 853 Jul 13 2023 scripts/build.sh
-rwxr-xr-x. 1 ben ben 1163 Jan 22 10:06 scripts/coverage.sh
-rwxr-xr-x. 1 ben ben 20741 Jul 13 2023 scripts/eachdist.py
-rwxr-xr-x. 1 ben ben 215 Jul 5 2023 scripts/generate_website_docs.sh
-rwxr-xr-x. 1 ben ben 2377 Jan 22 10:06 scripts/proto_codegen.sh
-rwxr-xr-x. 1 ben ben 1928 Jan 22 10:06 scripts/semconv/generate.sh
-rwxr-xr-x. 1 ben ben 945 Jul 5 2023 scripts/tracecontext-integration-test.sh
-rw-r--r--. 1 ben ben 2519 Jan 22 11:43 tests/w3c_tracecontext_validation_server.py
```
Note that two files have shebang lines (`#!`) but do not have the executable bit set, which makes the shebang lines useless.
**What is the expected behavior?**
Files should either be non-executable and have no shebang line, or be executable and have a shebang line.
**What is the actual behavior?**
The following files are not executable and have useless shebang lines:
- `docs/examples/opencensus-exporter-tracer/collector.py`
- `tests/w3c_tracecontext_validation_server.py`
**Additional context**
This is a trivial thing, but I would like to fix it in a PR – either by setting the executable bit on these two files, or by removing the useless shebang lines. Both files are “script-like,” i.e. they have `if __name__ == "__main__"` or have useful side effects. Which approach would you prefer?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/examples/opencensus-exporter-tracer/collector.py`
Content:
```
1 #!/usr/bin/env python3
2 #
3 # Copyright The OpenTelemetry Authors
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 from opentelemetry import trace
18 from opentelemetry.exporter.opencensus.trace_exporter import (
19 OpenCensusSpanExporter,
20 )
21 from opentelemetry.sdk.trace import TracerProvider
22 from opentelemetry.sdk.trace.export import BatchSpanProcessor
23
24 exporter = OpenCensusSpanExporter(endpoint="localhost:55678")
25
26 trace.set_tracer_provider(TracerProvider())
27 tracer = trace.get_tracer(__name__)
28 span_processor = BatchSpanProcessor(exporter)
29
30 trace.get_tracer_provider().add_span_processor(span_processor)
31 with tracer.start_as_current_span("foo"):
32 with tracer.start_as_current_span("bar"):
33 with tracer.start_as_current_span("baz"):
34 print("Hello world from OpenTelemetry Python!")
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/examples/opencensus-exporter-tracer/collector.py b/docs/examples/opencensus-exporter-tracer/collector.py
--- a/docs/examples/opencensus-exporter-tracer/collector.py
+++ b/docs/examples/opencensus-exporter-tracer/collector.py
@@ -1,5 +1,3 @@
-#!/usr/bin/env python3
-#
# Copyright The OpenTelemetry Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
| {"golden_diff": "diff --git a/docs/examples/opencensus-exporter-tracer/collector.py b/docs/examples/opencensus-exporter-tracer/collector.py\n--- a/docs/examples/opencensus-exporter-tracer/collector.py\n+++ b/docs/examples/opencensus-exporter-tracer/collector.py\n@@ -1,5 +1,3 @@\n-#!/usr/bin/env python3\n-#\n # Copyright The OpenTelemetry Authors\n #\n # Licensed under the Apache License, Version 2.0 (the \"License\");\n", "issue": "Non-executable files with shebangs in the repository\n**Describe your environment**\r\n\r\n(Nothing relevant to describe)\r\n\r\n**Steps to reproduce**\r\n\r\n```\r\n$ rg -l '^#!' | xargs ls -l\r\n-rwxr-xr-x. 1 ben ben 1420 Jul 5 2023 docs/examples/django/manage.py\r\n-rw-r--r--. 1 ben ben 1300 Jul 5 2023 docs/examples/opencensus-exporter-tracer/collector.py\r\n-rwxr-xr-x. 1 ben ben 1485 Jul 5 2023 docs/examples/opentracing/main.py\r\n-rwxr-xr-x. 1 ben ben 853 Jul 13 2023 scripts/build.sh\r\n-rwxr-xr-x. 1 ben ben 1163 Jan 22 10:06 scripts/coverage.sh\r\n-rwxr-xr-x. 1 ben ben 20741 Jul 13 2023 scripts/eachdist.py\r\n-rwxr-xr-x. 1 ben ben 215 Jul 5 2023 scripts/generate_website_docs.sh\r\n-rwxr-xr-x. 1 ben ben 2377 Jan 22 10:06 scripts/proto_codegen.sh\r\n-rwxr-xr-x. 1 ben ben 1928 Jan 22 10:06 scripts/semconv/generate.sh\r\n-rwxr-xr-x. 1 ben ben 945 Jul 5 2023 scripts/tracecontext-integration-test.sh\r\n-rw-r--r--. 1 ben ben 2519 Jan 22 11:43 tests/w3c_tracecontext_validation_server.py\r\n```\r\n\r\nNote that two files have shebang lines (`#!`) but do not have the executable bit set, which makes the shebang lines useless.\r\n\r\n**What is the expected behavior?**\r\n\r\nFiles should either be non-executable and have no shebang line, or be executable and have a shebang line.\r\n\r\n**What is the actual behavior?**\r\n\r\nThe following files are not executable and have useless shebang lines:\r\n\r\n- `docs/examples/opencensus-exporter-tracer/collector.py`\r\n- `tests/w3c_tracecontext_validation_server.py`\r\n\r\n**Additional context**\r\n\r\nThis is a trivial thing, but I would like to fix it in a PR \u2013 either by setting the executable bit on these two files, or by removing the useless shebang lines. Both files are \u201cscript-like,\u201d i.e. they have `if __name__ == \"__main__\"` or have useful side effects. 
Which approach would you prefer?\n", "before_files": [{"content": "#!/usr/bin/env python3\n#\n# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom opentelemetry import trace\nfrom opentelemetry.exporter.opencensus.trace_exporter import (\n OpenCensusSpanExporter,\n)\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import BatchSpanProcessor\n\nexporter = OpenCensusSpanExporter(endpoint=\"localhost:55678\")\n\ntrace.set_tracer_provider(TracerProvider())\ntracer = trace.get_tracer(__name__)\nspan_processor = BatchSpanProcessor(exporter)\n\ntrace.get_tracer_provider().add_span_processor(span_processor)\nwith tracer.start_as_current_span(\"foo\"):\n with tracer.start_as_current_span(\"bar\"):\n with tracer.start_as_current_span(\"baz\"):\n print(\"Hello world from OpenTelemetry Python!\")\n", "path": "docs/examples/opencensus-exporter-tracer/collector.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom opentelemetry import trace\nfrom opentelemetry.exporter.opencensus.trace_exporter import (\n OpenCensusSpanExporter,\n)\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import BatchSpanProcessor\n\nexporter = OpenCensusSpanExporter(endpoint=\"localhost:55678\")\n\ntrace.set_tracer_provider(TracerProvider())\ntracer = trace.get_tracer(__name__)\nspan_processor = BatchSpanProcessor(exporter)\n\ntrace.get_tracer_provider().add_span_processor(span_processor)\nwith tracer.start_as_current_span(\"foo\"):\n with tracer.start_as_current_span(\"bar\"):\n with tracer.start_as_current_span(\"baz\"):\n print(\"Hello world from OpenTelemetry Python!\")\n", "path": "docs/examples/opencensus-exporter-tracer/collector.py"}]} | 1,229 | 106 |
gh_patches_debug_9537 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-1452 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SMAPE formula typo
## 📚 Documentation
There's a typo in the [SMAPE formula](https://torchmetrics.readthedocs.io/en/stable/regression/symmetric_mean_absolute_percentage_error.html). It should be `{SMAPE} = \frac{2}{n}\sum_1^n\frac{| y_i - \hat{y_i} |}{\max(| y_i | + | \hat{y_i} |, \epsilon)}` instead of `{SMAPE} = \frac{2}{n}\sum_1^n max(\frac{| y_i - \hat{y_i} |}{| y_i | + | \hat{y_i} |, \epsilon})`. The attached screenshot shows the typo and its correction.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/torchmetrics/regression/symmetric_mape.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Any
15
16 from torch import Tensor, tensor
17
18 from torchmetrics.functional.regression.symmetric_mape import (
19 _symmetric_mean_absolute_percentage_error_compute,
20 _symmetric_mean_absolute_percentage_error_update,
21 )
22 from torchmetrics.metric import Metric
23
24
25 class SymmetricMeanAbsolutePercentageError(Metric):
26 r"""Computes symmetric mean absolute percentage error (`SMAPE`_).
27
28 .. math:: \text{SMAPE} = \frac{2}{n}\sum_1^n max(\frac{| y_i - \hat{y_i} |}{| y_i | + | \hat{y_i} |, \epsilon})
29
30 Where :math:`y` is a tensor of target values, and :math:`\hat{y}` is a tensor of predictions.
31
32 As input to ``forward`` and ``update`` the metric accepts the following input:
33
34 - ``preds`` (:class:`~torch.Tensor`): Predictions from model
35 - ``target`` (:class:`~torch.Tensor`): Ground truth values
36
37 As output of ``forward`` and ``compute`` the metric returns the following output:
38
39 - ``smape`` (:class:`~torch.Tensor`): A tensor with non-negative floating point smape value between 0 and 1
40
41 Args:
42 kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
43
44 Example:
45 >>> from torchmetrics import SymmetricMeanAbsolutePercentageError
46 >>> target = tensor([1, 10, 1e6])
47 >>> preds = tensor([0.9, 15, 1.2e6])
48 >>> smape = SymmetricMeanAbsolutePercentageError()
49 >>> smape(preds, target)
50 tensor(0.2290)
51 """
52 is_differentiable: bool = True
53 higher_is_better: bool = False
54 full_state_update: bool = False
55 sum_abs_per_error: Tensor
56 total: Tensor
57
58 def __init__(
59 self,
60 **kwargs: Any,
61 ) -> None:
62 super().__init__(**kwargs)
63
64 self.add_state("sum_abs_per_error", default=tensor(0.0), dist_reduce_fx="sum")
65 self.add_state("total", default=tensor(0.0), dist_reduce_fx="sum")
66
67 def update(self, preds: Tensor, target: Tensor) -> None: # type: ignore
68 """Update state with predictions and targets."""
69 sum_abs_per_error, num_obs = _symmetric_mean_absolute_percentage_error_update(preds, target)
70
71 self.sum_abs_per_error += sum_abs_per_error
72 self.total += num_obs
73
74 def compute(self) -> Tensor:
75 """Computes mean absolute percentage error over state."""
76 return _symmetric_mean_absolute_percentage_error_compute(self.sum_abs_per_error, self.total)
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/torchmetrics/regression/symmetric_mape.py b/src/torchmetrics/regression/symmetric_mape.py
--- a/src/torchmetrics/regression/symmetric_mape.py
+++ b/src/torchmetrics/regression/symmetric_mape.py
@@ -25,7 +25,7 @@
class SymmetricMeanAbsolutePercentageError(Metric):
r"""Computes symmetric mean absolute percentage error (`SMAPE`_).
- .. math:: \text{SMAPE} = \frac{2}{n}\sum_1^n max(\frac{| y_i - \hat{y_i} |}{| y_i | + | \hat{y_i} |, \epsilon})
+ .. math:: \text{SMAPE} = \frac{2}{n}\sum_1^n\frac{| y_i - \hat{y_i} |}{\max(| y_i | + | \hat{y_i} |, \epsilon)}
Where :math:`y` is a tensor of target values, and :math:`\hat{y}` is a tensor of predictions.
| {"golden_diff": "diff --git a/src/torchmetrics/regression/symmetric_mape.py b/src/torchmetrics/regression/symmetric_mape.py\n--- a/src/torchmetrics/regression/symmetric_mape.py\n+++ b/src/torchmetrics/regression/symmetric_mape.py\n@@ -25,7 +25,7 @@\n class SymmetricMeanAbsolutePercentageError(Metric):\n r\"\"\"Computes symmetric mean absolute percentage error (`SMAPE`_).\n \n- .. math:: \\text{SMAPE} = \\frac{2}{n}\\sum_1^n max(\\frac{| y_i - \\hat{y_i} |}{| y_i | + | \\hat{y_i} |, \\epsilon})\n+ .. math:: \\text{SMAPE} = \\frac{2}{n}\\sum_1^n\\frac{| y_i - \\hat{y_i} |}{\\max(| y_i | + | \\hat{y_i} |, \\epsilon)}\n \n Where :math:`y` is a tensor of target values, and :math:`\\hat{y}` is a tensor of predictions.\n", "issue": "SMAPE formula typo\n## \ud83d\udcda Documentation\r\n\r\n\r\nThere's a typo in the [SMAPE formula](https://torchmetrics.readthedocs.io/en/stable/regression/symmetric_mean_absolute_percentage_error.html). It should be `{SMAPE} = \\frac{2}{n}\\sum_1^n\\frac{| y_i - \\hat{y_i} |}{\\max(| y_i | + | \\hat{y_i} |, \\epsilon)}` instead of `{SMAPE} = \\frac{2}{n}\\sum_1^n max(\\frac{| y_i - \\hat{y_i} |}{| y_i | + | \\hat{y_i} |, \\epsilon})`. The attached screenshot shows the typo and its correction.\r\n\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any\n\nfrom torch import Tensor, tensor\n\nfrom torchmetrics.functional.regression.symmetric_mape import (\n _symmetric_mean_absolute_percentage_error_compute,\n _symmetric_mean_absolute_percentage_error_update,\n)\nfrom torchmetrics.metric import Metric\n\n\nclass SymmetricMeanAbsolutePercentageError(Metric):\n r\"\"\"Computes symmetric mean absolute percentage error (`SMAPE`_).\n\n .. 
math:: \\text{SMAPE} = \\frac{2}{n}\\sum_1^n max(\\frac{| y_i - \\hat{y_i} |}{| y_i | + | \\hat{y_i} |, \\epsilon})\n\n Where :math:`y` is a tensor of target values, and :math:`\\hat{y}` is a tensor of predictions.\n\n As input to ``forward`` and ``update`` the metric accepts the following input:\n\n - ``preds`` (:class:`~torch.Tensor`): Predictions from model\n - ``target`` (:class:`~torch.Tensor`): Ground truth values\n\n As output of ``forward`` and ``compute`` the metric returns the following output:\n\n - ``smape`` (:class:`~torch.Tensor`): A tensor with non-negative floating point smape value between 0 and 1\n\n Args:\n kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.\n\n Example:\n >>> from torchmetrics import SymmetricMeanAbsolutePercentageError\n >>> target = tensor([1, 10, 1e6])\n >>> preds = tensor([0.9, 15, 1.2e6])\n >>> smape = SymmetricMeanAbsolutePercentageError()\n >>> smape(preds, target)\n tensor(0.2290)\n \"\"\"\n is_differentiable: bool = True\n higher_is_better: bool = False\n full_state_update: bool = False\n sum_abs_per_error: Tensor\n total: Tensor\n\n def __init__(\n self,\n **kwargs: Any,\n ) -> None:\n super().__init__(**kwargs)\n\n self.add_state(\"sum_abs_per_error\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n self.add_state(\"total\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n\n def update(self, preds: Tensor, target: Tensor) -> None: # type: ignore\n \"\"\"Update state with predictions and targets.\"\"\"\n sum_abs_per_error, num_obs = _symmetric_mean_absolute_percentage_error_update(preds, target)\n\n self.sum_abs_per_error += sum_abs_per_error\n self.total += num_obs\n\n def compute(self) -> Tensor:\n \"\"\"Computes mean absolute percentage error over state.\"\"\"\n return _symmetric_mean_absolute_percentage_error_compute(self.sum_abs_per_error, self.total)\n", "path": "src/torchmetrics/regression/symmetric_mape.py"}], "after_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any\n\nfrom torch import Tensor, tensor\n\nfrom torchmetrics.functional.regression.symmetric_mape import (\n _symmetric_mean_absolute_percentage_error_compute,\n _symmetric_mean_absolute_percentage_error_update,\n)\nfrom torchmetrics.metric import Metric\n\n\nclass SymmetricMeanAbsolutePercentageError(Metric):\n r\"\"\"Computes symmetric mean absolute percentage error (`SMAPE`_).\n\n .. 
math:: \\text{SMAPE} = \\frac{2}{n}\\sum_1^n\\frac{| y_i - \\hat{y_i} |}{\\max(| y_i | + | \\hat{y_i} |, \\epsilon)}\n\n Where :math:`y` is a tensor of target values, and :math:`\\hat{y}` is a tensor of predictions.\n\n As input to ``forward`` and ``update`` the metric accepts the following input:\n\n - ``preds`` (:class:`~torch.Tensor`): Predictions from model\n - ``target`` (:class:`~torch.Tensor`): Ground truth values\n\n As output of ``forward`` and ``compute`` the metric returns the following output:\n\n - ``smape`` (:class:`~torch.Tensor`): A tensor with non-negative floating point smape value between 0 and 1\n\n Args:\n kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.\n\n Example:\n >>> from torchmetrics import SymmetricMeanAbsolutePercentageError\n >>> target = tensor([1, 10, 1e6])\n >>> preds = tensor([0.9, 15, 1.2e6])\n >>> smape = SymmetricMeanAbsolutePercentageError()\n >>> smape(preds, target)\n tensor(0.2290)\n \"\"\"\n is_differentiable: bool = True\n higher_is_better: bool = False\n full_state_update: bool = False\n sum_abs_per_error: Tensor\n total: Tensor\n\n def __init__(\n self,\n **kwargs: Any,\n ) -> None:\n super().__init__(**kwargs)\n\n self.add_state(\"sum_abs_per_error\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n self.add_state(\"total\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n\n def update(self, preds: Tensor, target: Tensor) -> None: # type: ignore\n \"\"\"Update state with predictions and targets.\"\"\"\n sum_abs_per_error, num_obs = _symmetric_mean_absolute_percentage_error_update(preds, target)\n\n self.sum_abs_per_error += sum_abs_per_error\n self.total += num_obs\n\n def compute(self) -> Tensor:\n \"\"\"Computes mean absolute percentage error over state.\"\"\"\n return _symmetric_mean_absolute_percentage_error_compute(self.sum_abs_per_error, self.total)\n", "path": "src/torchmetrics/regression/symmetric_mape.py"}]} | 1,379 | 241 |
gh_patches_debug_11153 | rasdani/github-patches | git_diff | open-mmlab__mmsegmentation-19 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FileNotFoundError: [Errno 2] No such file or directory: 'VOCdevkit/VOCaug/dataset/trainval.txt'
https://github.com/open-mmlab/mmsegmentation/blob/1c3f54765981ba352d4cf6582edb1c8915e51d71/tools/convert_datasets/voc_aug.py#L53
The directory `VOCdevkit/VOCaug/dataset` does not contain `trainval.txt`; is `trainval.txt` the merger of `train.txt` and `val.txt`?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/convert_datasets/voc_aug.py`
Content:
```
1 import argparse
2 import os.path as osp
3 from functools import partial
4
5 import mmcv
6 import numpy as np
7 from PIL import Image
8 from scipy.io import loadmat
9
10 AUG_LEN = 10582
11
12
13 def convert_mat(mat_file, in_dir, out_dir):
14 data = loadmat(osp.join(in_dir, mat_file))
15 mask = data['GTcls'][0]['Segmentation'][0].astype(np.uint8)
16 seg_filename = osp.join(out_dir, mat_file.replace('.mat', '.png'))
17 Image.fromarray(mask).save(seg_filename, 'PNG')
18
19
20 def generate_aug_list(merged_list, excluded_list):
21 return list(set(merged_list) - set(excluded_list))
22
23
24 def parse_args():
25 parser = argparse.ArgumentParser(
26 description='Convert PASCAL VOC annotations to mmsegmentation format')
27 parser.add_argument('devkit_path', help='pascal voc devkit path')
28 parser.add_argument('aug_path', help='pascal voc aug path')
29 parser.add_argument('-o', '--out_dir', help='output path')
30 parser.add_argument(
31 '--nproc', default=1, type=int, help='number of process')
32 args = parser.parse_args()
33 return args
34
35
36 def main():
37 args = parse_args()
38 devkit_path = args.devkit_path
39 aug_path = args.aug_path
40 nproc = args.nproc
41 if args.out_dir is None:
42 out_dir = osp.join(devkit_path, 'VOC2012', 'SegmentationClassAug')
43 else:
44 out_dir = args.out_dir
45 mmcv.mkdir_or_exist(out_dir)
46 in_dir = osp.join(aug_path, 'dataset', 'cls')
47
48 mmcv.track_parallel_progress(
49 partial(convert_mat, in_dir=in_dir, out_dir=out_dir),
50 list(mmcv.scandir(in_dir, suffix='.mat')),
51 nproc=nproc)
52
53 with open(osp.join(aug_path, 'dataset', 'trainval.txt')) as f:
54 full_aug_list = [line.strip() for line in f]
55 with open(
56 osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',
57 'train.txt')) as f:
58 ori_train_list = [line.strip() for line in f]
59 with open(
60 osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',
61 'val.txt')) as f:
62 val_list = [line.strip() for line in f]
63
64 aug_train_list = generate_aug_list(ori_train_list + full_aug_list,
65 val_list)
66 assert len(aug_train_list) == AUG_LEN, 'len(aug_train_list) != {}'.format(
67 AUG_LEN)
68
69 with open(
70 osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',
71 'trainaug.txt'), 'w') as f:
72 f.writelines(line + '\n' for line in aug_train_list)
73
74 aug_list = generate_aug_list(full_aug_list, ori_train_list + val_list)
75 assert len(aug_list) == AUG_LEN - len(
76 ori_train_list), 'len(aug_list) != {}'.format(AUG_LEN -
77 len(ori_train_list))
78 with open(
79 osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation', 'aug.txt'),
80 'w') as f:
81 f.writelines(line + '\n' for line in aug_list)
82
83 print('Done!')
84
85
86 if __name__ == '__main__':
87 main()
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/convert_datasets/voc_aug.py b/tools/convert_datasets/voc_aug.py
--- a/tools/convert_datasets/voc_aug.py
+++ b/tools/convert_datasets/voc_aug.py
@@ -50,8 +50,12 @@
list(mmcv.scandir(in_dir, suffix='.mat')),
nproc=nproc)
- with open(osp.join(aug_path, 'dataset', 'trainval.txt')) as f:
- full_aug_list = [line.strip() for line in f]
+ full_aug_list = []
+ with open(osp.join(aug_path, 'dataset', 'train.txt')) as f:
+ full_aug_list += [line.strip() for line in f]
+ with open(osp.join(aug_path, 'dataset', 'val.txt')) as f:
+ full_aug_list += [line.strip() for line in f]
+
with open(
osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',
'train.txt')) as f:
| {"golden_diff": "diff --git a/tools/convert_datasets/voc_aug.py b/tools/convert_datasets/voc_aug.py\n--- a/tools/convert_datasets/voc_aug.py\n+++ b/tools/convert_datasets/voc_aug.py\n@@ -50,8 +50,12 @@\n list(mmcv.scandir(in_dir, suffix='.mat')),\n nproc=nproc)\n \n- with open(osp.join(aug_path, 'dataset', 'trainval.txt')) as f:\n- full_aug_list = [line.strip() for line in f]\n+ full_aug_list = []\n+ with open(osp.join(aug_path, 'dataset', 'train.txt')) as f:\n+ full_aug_list += [line.strip() for line in f]\n+ with open(osp.join(aug_path, 'dataset', 'val.txt')) as f:\n+ full_aug_list += [line.strip() for line in f]\n+\n with open(\n osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',\n 'train.txt')) as f:\n", "issue": "FileNotFoundError: [Errno 2] No such file or directory: 'VOCdevkit/VOCaug/dataset/trainval.txt'\nhttps://github.com/open-mmlab/mmsegmentation/blob/1c3f54765981ba352d4cf6582edb1c8915e51d71/tools/convert_datasets/voc_aug.py#L53\r\n\r\nDirectory `VOCdevkit/VOCaug/dataset` does not exist `trainval.txt`, `trainval.txt` is the merger of `train.txt` and `val.txt`?\n", "before_files": [{"content": "import argparse\nimport os.path as osp\nfrom functools import partial\n\nimport mmcv\nimport numpy as np\nfrom PIL import Image\nfrom scipy.io import loadmat\n\nAUG_LEN = 10582\n\n\ndef convert_mat(mat_file, in_dir, out_dir):\n data = loadmat(osp.join(in_dir, mat_file))\n mask = data['GTcls'][0]['Segmentation'][0].astype(np.uint8)\n seg_filename = osp.join(out_dir, mat_file.replace('.mat', '.png'))\n Image.fromarray(mask).save(seg_filename, 'PNG')\n\n\ndef generate_aug_list(merged_list, excluded_list):\n return list(set(merged_list) - set(excluded_list))\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(\n description='Convert PASCAL VOC annotations to mmsegmentation format')\n parser.add_argument('devkit_path', help='pascal voc devkit path')\n parser.add_argument('aug_path', help='pascal voc aug path')\n parser.add_argument('-o', '--out_dir', help='output path')\n parser.add_argument(\n '--nproc', default=1, type=int, help='number of process')\n args = parser.parse_args()\n return args\n\n\ndef main():\n args = parse_args()\n devkit_path = args.devkit_path\n aug_path = args.aug_path\n nproc = args.nproc\n if args.out_dir is None:\n out_dir = osp.join(devkit_path, 'VOC2012', 'SegmentationClassAug')\n else:\n out_dir = args.out_dir\n mmcv.mkdir_or_exist(out_dir)\n in_dir = osp.join(aug_path, 'dataset', 'cls')\n\n mmcv.track_parallel_progress(\n partial(convert_mat, in_dir=in_dir, out_dir=out_dir),\n list(mmcv.scandir(in_dir, suffix='.mat')),\n nproc=nproc)\n\n with open(osp.join(aug_path, 'dataset', 'trainval.txt')) as f:\n full_aug_list = [line.strip() for line in f]\n with open(\n osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',\n 'train.txt')) as f:\n ori_train_list = [line.strip() for line in f]\n with open(\n osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',\n 'val.txt')) as f:\n val_list = [line.strip() for line in f]\n\n aug_train_list = generate_aug_list(ori_train_list + full_aug_list,\n val_list)\n assert len(aug_train_list) == AUG_LEN, 'len(aug_train_list) != {}'.format(\n AUG_LEN)\n\n with open(\n osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',\n 'trainaug.txt'), 'w') as f:\n f.writelines(line + '\\n' for line in aug_train_list)\n\n aug_list = generate_aug_list(full_aug_list, ori_train_list + val_list)\n assert len(aug_list) == AUG_LEN - len(\n ori_train_list), 'len(aug_list) != {}'.format(AUG_LEN -\n len(ori_train_list))\n with open(\n 
osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation', 'aug.txt'),\n 'w') as f:\n f.writelines(line + '\\n' for line in aug_list)\n\n print('Done!')\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/convert_datasets/voc_aug.py"}], "after_files": [{"content": "import argparse\nimport os.path as osp\nfrom functools import partial\n\nimport mmcv\nimport numpy as np\nfrom PIL import Image\nfrom scipy.io import loadmat\n\nAUG_LEN = 10582\n\n\ndef convert_mat(mat_file, in_dir, out_dir):\n data = loadmat(osp.join(in_dir, mat_file))\n mask = data['GTcls'][0]['Segmentation'][0].astype(np.uint8)\n seg_filename = osp.join(out_dir, mat_file.replace('.mat', '.png'))\n Image.fromarray(mask).save(seg_filename, 'PNG')\n\n\ndef generate_aug_list(merged_list, excluded_list):\n return list(set(merged_list) - set(excluded_list))\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(\n description='Convert PASCAL VOC annotations to mmsegmentation format')\n parser.add_argument('devkit_path', help='pascal voc devkit path')\n parser.add_argument('aug_path', help='pascal voc aug path')\n parser.add_argument('-o', '--out_dir', help='output path')\n parser.add_argument(\n '--nproc', default=1, type=int, help='number of process')\n args = parser.parse_args()\n return args\n\n\ndef main():\n args = parse_args()\n devkit_path = args.devkit_path\n aug_path = args.aug_path\n nproc = args.nproc\n if args.out_dir is None:\n out_dir = osp.join(devkit_path, 'VOC2012', 'SegmentationClassAug')\n else:\n out_dir = args.out_dir\n mmcv.mkdir_or_exist(out_dir)\n in_dir = osp.join(aug_path, 'dataset', 'cls')\n\n mmcv.track_parallel_progress(\n partial(convert_mat, in_dir=in_dir, out_dir=out_dir),\n list(mmcv.scandir(in_dir, suffix='.mat')),\n nproc=nproc)\n\n full_aug_list = []\n with open(osp.join(aug_path, 'dataset', 'train.txt')) as f:\n full_aug_list += [line.strip() for line in f]\n with open(osp.join(aug_path, 'dataset', 'val.txt')) as f:\n full_aug_list += [line.strip() for line in f]\n\n with open(\n osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',\n 'train.txt')) as f:\n ori_train_list = [line.strip() for line in f]\n with open(\n osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',\n 'val.txt')) as f:\n val_list = [line.strip() for line in f]\n\n aug_train_list = generate_aug_list(ori_train_list + full_aug_list,\n val_list)\n assert len(aug_train_list) == AUG_LEN, 'len(aug_train_list) != {}'.format(\n AUG_LEN)\n\n with open(\n osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',\n 'trainaug.txt'), 'w') as f:\n f.writelines(line + '\\n' for line in aug_train_list)\n\n aug_list = generate_aug_list(full_aug_list, ori_train_list + val_list)\n assert len(aug_list) == AUG_LEN - len(\n ori_train_list), 'len(aug_list) != {}'.format(AUG_LEN -\n len(ori_train_list))\n with open(\n osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation', 'aug.txt'),\n 'w') as f:\n f.writelines(line + '\\n' for line in aug_list)\n\n print('Done!')\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/convert_datasets/voc_aug.py"}]} | 1,328 | 228 |
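The patch sidesteps the missing `trainval.txt` by taking the union of the two split files that VOCaug actually ships. A minimal standalone sketch of that idea (the directory layout `<aug_path>/dataset/{train,val}.txt` is assumed from the issue):

```python
import os.path as osp

def load_full_aug_list(aug_path):
    """Collect SBD/VOCaug image ids from train.txt and val.txt, which together
    stand in for the trainval.txt that the old code expected."""
    full_aug_list = []
    for split_file in ('train.txt', 'val.txt'):
        with open(osp.join(aug_path, 'dataset', split_file)) as f:
            full_aug_list += [line.strip() for line in f]
    return full_aug_list

# e.g. load_full_aug_list('VOCdevkit/VOCaug')
```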
gh_patches_debug_2582 | rasdani/github-patches | git_diff | azavea__raster-vision-1586 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Same explanation for SlidingWindowGeoDataset and RandomWindowGeoDataset
## 📚 Documentation
<!-- A clear and concise description of what content in https://docs.rastervision.io/ is an issue.-->
> The SlidingWindowGeoDataset allows reading the scene by sampling random window sizes and locations.
This same description is used to explain both SlidingWindowGeoDataset and RandomWindowGeoDataset. It can be found here: https://docs.rastervision.io/en/latest/tutorials/sampling_training_data.html
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rastervision_core/rastervision/core/data/class_config.py`
Content:
```
1 from typing import List, Optional, Tuple, Union
2
3 from rastervision.pipeline.config import (Config, register_config, ConfigError,
4 Field, validator)
5 from rastervision.core.data.utils import color_to_triple, normalize_color
6
7 DEFAULT_NULL_CLASS_NAME = 'null'
8 DEFAULT_NULL_CLASS_COLOR = 'black'
9
10
11 @register_config('class_config')
12 class ClassConfig(Config):
13 """Configures the class names that are being predicted."""
14 names: List[str] = Field(
15 ...,
16 description='Names of classes. The i-th class in this list will have '
17 'class ID = i.')
18 colors: Optional[List[Union[str, Tuple]]] = Field(
19 None,
20 description=
21 ('Colors used to visualize classes. Can be color strings accepted by '
22 'matplotlib or RGB tuples. If None, a random color will be auto-generated '
23 'for each class.'))
24 null_class: Optional[str] = Field(
25 None,
26 description='Optional name of class in `names` to use as the null '
27 'class. This is used in semantic segmentation to represent the label '
28 'for imagery pixels that are NODATA or that are missing a label. '
29 f'If None and the class names include "{DEFAULT_NULL_CLASS_NAME}", '
30 'it will automatically be used as the null class. If None, and this '
31 'Config is part of a SemanticSegmentationConfig, a null class will be '
32 'added automatically.')
33
34 @validator('colors', always=True)
35 def validate_colors(cls, v: Optional[List[Union[str, Tuple]]],
36 values: dict) -> Optional[List[Union[str, Tuple]]]:
37 """Compare length w/ names. Also auto-generate if not specified."""
38 class_names = values['names']
39 class_colors = v
40 if class_colors is None:
41 class_colors = [color_to_triple() for _ in class_names]
42 elif len(class_names) != len(class_colors):
43 raise ConfigError(f'len(class_names) ({len(class_names)}) != '
44 f'len(class_colors) ({len(class_colors)})\n'
45 f'class_names: {class_names}\n'
46 f'class_colors: {class_colors}')
47 return class_colors
48
49 @validator('null_class', always=True)
50 def validate_null_class(cls, v: Optional[str],
51 values: dict) -> Optional[str]:
52 """Check if in names. If 'null' in names, use it as null class."""
53 names = values['names']
54 if v is None:
55 if DEFAULT_NULL_CLASS_NAME in names:
56 v = DEFAULT_NULL_CLASS_NAME
57 else:
58 if v not in names:
59 raise ConfigError(
60 f'The null_class, "{v}", must be in list of class names.')
61
62 # edge case
63 default_null_class_in_names = (DEFAULT_NULL_CLASS_NAME in names)
64 null_class_neq_default = (v != DEFAULT_NULL_CLASS_NAME)
65 if default_null_class_in_names and null_class_neq_default:
66 raise ConfigError(
67 f'"{DEFAULT_NULL_CLASS_NAME}" is in names but the '
68 f'specified null_class is something else ("{v}").')
69 return v
70
71 def get_class_id(self, name: str) -> int:
72 return self.names.index(name)
73
74 def get_name(self, id: int) -> str:
75 return self.names[id]
76
77 @property
78 def null_class_id(self) -> int:
79 if self.null_class is None:
80 raise ValueError('null_class is not set')
81 return self.get_class_id(self.null_class)
82
83 def get_color_to_class_id(self) -> dict:
84 return dict([(self.colors[i], i) for i in range(len(self.colors))])
85
86 def ensure_null_class(self) -> None:
87 """Add a null class if one isn't set. This method is idempotent."""
88 if self.null_class is not None:
89 return
90
91 null_class_name = DEFAULT_NULL_CLASS_NAME
92 null_class_color = DEFAULT_NULL_CLASS_COLOR
93
94 # This might seeem redundant given the null class validator above, but
95 # is actually important. Sometimes there can be multiple ClassConfig
96 # instances that reference the same list objects for names and colors
97 # (not clear why this happens). This means that
98 # each ensure_null_class() call will add to names and colors in each
99 # copy of ClassConfig but only set its own null_class, which makes this
100 # method() non-idempotent.
101 if null_class_name in self.names:
102 self.null_class = null_class_name
103 return
104
105 # use random color if default color is already taken
106 null_class_color_triple = color_to_triple(null_class_color)
107 all_color_triples = [
108 color_to_triple(c) if isinstance(c, str) else c
109 for c in self.colors
110 ]
111 if null_class_color_triple in all_color_triples:
112 null_class_color = color_to_triple()
113
114 self.names.append(null_class_name)
115 self.colors.append(null_class_color)
116 self.null_class = null_class_name
117
118 def __len__(self) -> int:
119 return len(self.names)
120
121 @property
122 def color_triples(self) -> List[Tuple[float, float, float]]:
123 color_triples = [normalize_color(c) for c in self.colors]
124 return color_triples
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rastervision_core/rastervision/core/data/class_config.py b/rastervision_core/rastervision/core/data/class_config.py
--- a/rastervision_core/rastervision/core/data/class_config.py
+++ b/rastervision_core/rastervision/core/data/class_config.py
@@ -120,5 +120,6 @@
@property
def color_triples(self) -> List[Tuple[float, float, float]]:
+ """Class colors in a normalized form."""
color_triples = [normalize_color(c) for c in self.colors]
return color_triples
| {"golden_diff": "diff --git a/rastervision_core/rastervision/core/data/class_config.py b/rastervision_core/rastervision/core/data/class_config.py\n--- a/rastervision_core/rastervision/core/data/class_config.py\n+++ b/rastervision_core/rastervision/core/data/class_config.py\n@@ -120,5 +120,6 @@\n \n @property\n def color_triples(self) -> List[Tuple[float, float, float]]:\n+ \"\"\"Class colors in a normalized form.\"\"\"\n color_triples = [normalize_color(c) for c in self.colors]\n return color_triples\n", "issue": "Same explanation for SlidingWindowGeoDataset and RandomWindowGeoDataset\n## \ud83d\udcda Documentation\r\n\r\n<!-- A clear and concise description of what content in https://docs.rastervision.io/ is an issue.-->\r\n\r\n> The SlidingWindowGeoDataset allows reading the scene by sampling random window sizes and locations.\r\n\r\nThis description is same to explained both SlidingWindowGeoDataset and RandomWindowGeoDataset. This can be found here: https://docs.rastervision.io/en/latest/tutorials/sampling_training_data.html\n", "before_files": [{"content": "from typing import List, Optional, Tuple, Union\n\nfrom rastervision.pipeline.config import (Config, register_config, ConfigError,\n Field, validator)\nfrom rastervision.core.data.utils import color_to_triple, normalize_color\n\nDEFAULT_NULL_CLASS_NAME = 'null'\nDEFAULT_NULL_CLASS_COLOR = 'black'\n\n\n@register_config('class_config')\nclass ClassConfig(Config):\n \"\"\"Configures the class names that are being predicted.\"\"\"\n names: List[str] = Field(\n ...,\n description='Names of classes. The i-th class in this list will have '\n 'class ID = i.')\n colors: Optional[List[Union[str, Tuple]]] = Field(\n None,\n description=\n ('Colors used to visualize classes. Can be color strings accepted by '\n 'matplotlib or RGB tuples. If None, a random color will be auto-generated '\n 'for each class.'))\n null_class: Optional[str] = Field(\n None,\n description='Optional name of class in `names` to use as the null '\n 'class. This is used in semantic segmentation to represent the label '\n 'for imagery pixels that are NODATA or that are missing a label. '\n f'If None and the class names include \"{DEFAULT_NULL_CLASS_NAME}\", '\n 'it will automatically be used as the null class. If None, and this '\n 'Config is part of a SemanticSegmentationConfig, a null class will be '\n 'added automatically.')\n\n @validator('colors', always=True)\n def validate_colors(cls, v: Optional[List[Union[str, Tuple]]],\n values: dict) -> Optional[List[Union[str, Tuple]]]:\n \"\"\"Compare length w/ names. Also auto-generate if not specified.\"\"\"\n class_names = values['names']\n class_colors = v\n if class_colors is None:\n class_colors = [color_to_triple() for _ in class_names]\n elif len(class_names) != len(class_colors):\n raise ConfigError(f'len(class_names) ({len(class_names)}) != '\n f'len(class_colors) ({len(class_colors)})\\n'\n f'class_names: {class_names}\\n'\n f'class_colors: {class_colors}')\n return class_colors\n\n @validator('null_class', always=True)\n def validate_null_class(cls, v: Optional[str],\n values: dict) -> Optional[str]:\n \"\"\"Check if in names. 
If 'null' in names, use it as null class.\"\"\"\n names = values['names']\n if v is None:\n if DEFAULT_NULL_CLASS_NAME in names:\n v = DEFAULT_NULL_CLASS_NAME\n else:\n if v not in names:\n raise ConfigError(\n f'The null_class, \"{v}\", must be in list of class names.')\n\n # edge case\n default_null_class_in_names = (DEFAULT_NULL_CLASS_NAME in names)\n null_class_neq_default = (v != DEFAULT_NULL_CLASS_NAME)\n if default_null_class_in_names and null_class_neq_default:\n raise ConfigError(\n f'\"{DEFAULT_NULL_CLASS_NAME}\" is in names but the '\n f'specified null_class is something else (\"{v}\").')\n return v\n\n def get_class_id(self, name: str) -> int:\n return self.names.index(name)\n\n def get_name(self, id: int) -> str:\n return self.names[id]\n\n @property\n def null_class_id(self) -> int:\n if self.null_class is None:\n raise ValueError('null_class is not set')\n return self.get_class_id(self.null_class)\n\n def get_color_to_class_id(self) -> dict:\n return dict([(self.colors[i], i) for i in range(len(self.colors))])\n\n def ensure_null_class(self) -> None:\n \"\"\"Add a null class if one isn't set. This method is idempotent.\"\"\"\n if self.null_class is not None:\n return\n\n null_class_name = DEFAULT_NULL_CLASS_NAME\n null_class_color = DEFAULT_NULL_CLASS_COLOR\n\n # This might seeem redundant given the null class validator above, but\n # is actually important. Sometimes there can be multiple ClassConfig\n # instances that reference the same list objects for names and colors\n # (not clear why this happens). This means that\n # each ensure_null_class() call will add to names and colors in each\n # copy of ClassConfig but only set its own null_class, which makes this\n # method() non-idempotent.\n if null_class_name in self.names:\n self.null_class = null_class_name\n return\n\n # use random color if default color is already taken\n null_class_color_triple = color_to_triple(null_class_color)\n all_color_triples = [\n color_to_triple(c) if isinstance(c, str) else c\n for c in self.colors\n ]\n if null_class_color_triple in all_color_triples:\n null_class_color = color_to_triple()\n\n self.names.append(null_class_name)\n self.colors.append(null_class_color)\n self.null_class = null_class_name\n\n def __len__(self) -> int:\n return len(self.names)\n\n @property\n def color_triples(self) -> List[Tuple[float, float, float]]:\n color_triples = [normalize_color(c) for c in self.colors]\n return color_triples\n", "path": "rastervision_core/rastervision/core/data/class_config.py"}], "after_files": [{"content": "from typing import List, Optional, Tuple, Union\n\nfrom rastervision.pipeline.config import (Config, register_config, ConfigError,\n Field, validator)\nfrom rastervision.core.data.utils import color_to_triple, normalize_color\n\nDEFAULT_NULL_CLASS_NAME = 'null'\nDEFAULT_NULL_CLASS_COLOR = 'black'\n\n\n@register_config('class_config')\nclass ClassConfig(Config):\n \"\"\"Configures the class names that are being predicted.\"\"\"\n names: List[str] = Field(\n ...,\n description='Names of classes. The i-th class in this list will have '\n 'class ID = i.')\n colors: Optional[List[Union[str, Tuple]]] = Field(\n None,\n description=\n ('Colors used to visualize classes. Can be color strings accepted by '\n 'matplotlib or RGB tuples. If None, a random color will be auto-generated '\n 'for each class.'))\n null_class: Optional[str] = Field(\n None,\n description='Optional name of class in `names` to use as the null '\n 'class. 
This is used in semantic segmentation to represent the label '\n 'for imagery pixels that are NODATA or that are missing a label. '\n f'If None and the class names include \"{DEFAULT_NULL_CLASS_NAME}\", '\n 'it will automatically be used as the null class. If None, and this '\n 'Config is part of a SemanticSegmentationConfig, a null class will be '\n 'added automatically.')\n\n @validator('colors', always=True)\n def validate_colors(cls, v: Optional[List[Union[str, Tuple]]],\n values: dict) -> Optional[List[Union[str, Tuple]]]:\n \"\"\"Compare length w/ names. Also auto-generate if not specified.\"\"\"\n class_names = values['names']\n class_colors = v\n if class_colors is None:\n class_colors = [color_to_triple() for _ in class_names]\n elif len(class_names) != len(class_colors):\n raise ConfigError(f'len(class_names) ({len(class_names)}) != '\n f'len(class_colors) ({len(class_colors)})\\n'\n f'class_names: {class_names}\\n'\n f'class_colors: {class_colors}')\n return class_colors\n\n @validator('null_class', always=True)\n def validate_null_class(cls, v: Optional[str],\n values: dict) -> Optional[str]:\n \"\"\"Check if in names. If 'null' in names, use it as null class.\"\"\"\n names = values['names']\n if v is None:\n if DEFAULT_NULL_CLASS_NAME in names:\n v = DEFAULT_NULL_CLASS_NAME\n else:\n if v not in names:\n raise ConfigError(\n f'The null_class, \"{v}\", must be in list of class names.')\n\n # edge case\n default_null_class_in_names = (DEFAULT_NULL_CLASS_NAME in names)\n null_class_neq_default = (v != DEFAULT_NULL_CLASS_NAME)\n if default_null_class_in_names and null_class_neq_default:\n raise ConfigError(\n f'\"{DEFAULT_NULL_CLASS_NAME}\" is in names but the '\n f'specified null_class is something else (\"{v}\").')\n return v\n\n def get_class_id(self, name: str) -> int:\n return self.names.index(name)\n\n def get_name(self, id: int) -> str:\n return self.names[id]\n\n @property\n def null_class_id(self) -> int:\n if self.null_class is None:\n raise ValueError('null_class is not set')\n return self.get_class_id(self.null_class)\n\n def get_color_to_class_id(self) -> dict:\n return dict([(self.colors[i], i) for i in range(len(self.colors))])\n\n def ensure_null_class(self) -> None:\n \"\"\"Add a null class if one isn't set. This method is idempotent.\"\"\"\n if self.null_class is not None:\n return\n\n null_class_name = DEFAULT_NULL_CLASS_NAME\n null_class_color = DEFAULT_NULL_CLASS_COLOR\n\n # This might seeem redundant given the null class validator above, but\n # is actually important. Sometimes there can be multiple ClassConfig\n # instances that reference the same list objects for names and colors\n # (not clear why this happens). 
This means that\n # each ensure_null_class() call will add to names and colors in each\n # copy of ClassConfig but only set its own null_class, which makes this\n # method() non-idempotent.\n if null_class_name in self.names:\n self.null_class = null_class_name\n return\n\n # use random color if default color is already taken\n null_class_color_triple = color_to_triple(null_class_color)\n all_color_triples = [\n color_to_triple(c) if isinstance(c, str) else c\n for c in self.colors\n ]\n if null_class_color_triple in all_color_triples:\n null_class_color = color_to_triple()\n\n self.names.append(null_class_name)\n self.colors.append(null_class_color)\n self.null_class = null_class_name\n\n def __len__(self) -> int:\n return len(self.names)\n\n @property\n def color_triples(self) -> List[Tuple[float, float, float]]:\n \"\"\"Class colors in a normalized form.\"\"\"\n color_triples = [normalize_color(c) for c in self.colors]\n return color_triples\n", "path": "rastervision_core/rastervision/core/data/class_config.py"}]} | 1,792 | 136 |
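The accepted patch only adds a docstring to `color_triples`; the more involved behaviour in this file is the null-class bookkeeping that the inline comments describe. A small usage sketch of that behaviour (the class names and colors are made up, and the `rastervision.core.data` import path is assumed):

```python
from rastervision.core.data import ClassConfig

cc = ClassConfig(names=['building', 'background'], colors=['red', 'black'])
cc.ensure_null_class()   # no 'null' class declared, so one is appended
print(cc.names)          # ['building', 'background', 'null']
print(cc.null_class_id)  # 2
# 'black' is already used above, so the null class is given a random color instead.
```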
gh_patches_debug_10054 | rasdani/github-patches | git_diff | acl-org__acl-anthology-990 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Recaser bug: adding fixed-case inside tex-math markup
`<tex-math><fixed-case>O</fixed-case>(<fixed-case>M</fixed-case>(n^2))</tex-math>` caused the build to fail in #892
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bin/fixedcase/protect.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # protect.py <infile> <outfile>
4 # looks for file "truelist" in current dir
5
6 # cd data/xml
7 # for i in *xml ; do (cd ../../tools/fixedcase/ ; python3 ./protect.py ../../data/xml/$i /tmp/$i ; echo $i ); done > log
8
9
10 import lxml.etree as ET
11 import os
12 import sys
13 import copy
14 import itertools
15 import inspect
16
17 from collections import defaultdict
18
19 if __name__ == "__main__":
20 from common import *
21 else:
22 from .common import *
23
24 # recursive helper called by protect
25 # protect text of "node", including children, and tails of children
26 def protect_recurse(node, recased):
27 if node.tag == "fixed-case": # already protected
28 newnode = copy.deepcopy(node) # don't need to modify descendents
29 newnode.tail = None # tail will be protected by caller
30 return newnode
31 newnode = ET.Element(node.tag, node.attrib)
32
33 def process(text, rc):
34 i = 0
35 for upper, chars in itertools.groupby(rc[: len(text)], lambda c: c.isupper()):
36 charstr = "".join(chars)
37 if upper:
38 p = ET.Element("fixed-case")
39 p.text = charstr
40 newnode.append(p)
41 else:
42 append_text(newnode, text[i : i + len(charstr)])
43
44 assert text[i : i + len(charstr)].lower() == charstr.lower(), (
45 i,
46 text,
47 charstr,
48 )
49 i += len(charstr)
50
51 if node.text:
52 process(node.text, recased)
53 recased = recased[len(node.text) :]
54 for child in node:
55 protected_child = protect_recurse(child, recased)
56 recased = recased[len(get_text(protected_child)) :]
57 newnode.append(protected_child)
58 if child.tail:
59 process(child.tail, recased)
60 recased = recased[len(child.tail) :]
61
62 return newnode
63
64
65 def protect(node):
66 rawtext = get_text(node).strip()
67 recased = None
68 if rawtext.lower() in special_titles:
69 recased = special_titles[rawtext.lower()]
70 else:
71 text = tokenize(rawtext)
72 fixed = fixedcase_title(
73 text,
74 truelist=truelist,
75 phrase_truelist=phrase_truelist,
76 amodifiers=amodifiers,
77 ndescriptors=ndescriptors,
78 )
79 if any(fixed):
80 # Generate the recased string so we know where to look in the XML
81 # to apply fixed-case
82 recasedtoks = [(w if b else w.lower()) for w, b in zip(text, fixed)]
83 recased = TreebankWordDetokenizer().detokenize(recasedtoks)
84 # PTB (de)tokenizer doesn't think of hyphens as separate tokens,
85 # so we need to manually detokenize them.
86 # Assuming the only edits that need to be made are adding/deleting
87 # spaces, the following will work:
88 i = 0
89 while i < len(rawtext):
90 # scan rawtext from left to right and adjust recased by adding/removing
91 # spaces until it matches
92 t = rawtext[i]
93 assert i < len(recased), ((i, t), rawtext, recased)
94 c = recased[i]
95 if t.isspace() and not c.isspace(): # may be ' ' or '\n'
96 # add space to recased
97 recased = recased[:i] + t + recased[i:]
98 i += 1
99 elif c.isspace() and not t.isspace():
100 # remove space from recased
101 recased = recased[:i] + recased[i + 1 :]
102 # don't increment i
103 elif t != c and t.isspace() and c.isspace():
104 recased = recased[:i] + t + recased[i + 1 :]
105 i += 1
106 else:
107 assert t == c or t.lower() == c.lower(), (
108 (i, t, c),
109 rawtext,
110 recased,
111 text,
112 )
113 i += 1
114 if len(recased) > len(rawtext):
115 recased = recased[: len(rawtext)]
116 assert rawtext.lower() == recased.lower(), (rawtext, recased)
117
118 if recased:
119 newnode = protect_recurse(node, recased)
120 newnode.tail = node.tail # tail of top level is not protected
121 replace_node(node, newnode)
122
123
124 # Read in the truelist (list of words that should always be protected)
125 truelist, phrase_truelist, special_titles, amodifiers, ndescriptors = load_lists()
126
127 if __name__ == "__main__":
128 infile, outfile = sys.argv[1:]
129
130 tree = ET.parse(infile)
131 if not tree.getroot().tail:
132 tree.getroot().tail = "\n"
133 for paper in tree.getroot().findall(".//paper"):
134 for title in paper.xpath("./title|./booktitle"):
135 protect(title)
136 tree.write(outfile, encoding="UTF-8", xml_declaration=True)
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bin/fixedcase/protect.py b/bin/fixedcase/protect.py
--- a/bin/fixedcase/protect.py
+++ b/bin/fixedcase/protect.py
@@ -24,7 +24,7 @@
# recursive helper called by protect
# protect text of "node", including children, and tails of children
def protect_recurse(node, recased):
- if node.tag == "fixed-case": # already protected
+ if node.tag in ("fixed-case", "tex-math"): # already protected text, or math
newnode = copy.deepcopy(node) # don't need to modify descendents
newnode.tail = None # tail will be protected by caller
return newnode
| {"golden_diff": "diff --git a/bin/fixedcase/protect.py b/bin/fixedcase/protect.py\n--- a/bin/fixedcase/protect.py\n+++ b/bin/fixedcase/protect.py\n@@ -24,7 +24,7 @@\n # recursive helper called by protect\n # protect text of \"node\", including children, and tails of children\n def protect_recurse(node, recased):\n- if node.tag == \"fixed-case\": # already protected\n+ if node.tag in (\"fixed-case\", \"tex-math\"): # already protected text, or math\n newnode = copy.deepcopy(node) # don't need to modify descendents\n newnode.tail = None # tail will be protected by caller\n return newnode\n", "issue": "Recaser bug: adding fixed-case inside tex-math markup\n`<tex-math><fixed-case>O</fixed-case>(<fixed-case>M</fixed-case>(n^2))</tex-math>` caused the build to fail in #892\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# protect.py <infile> <outfile>\n# looks for file \"truelist\" in current dir\n\n# cd data/xml\n# for i in *xml ; do (cd ../../tools/fixedcase/ ; python3 ./protect.py ../../data/xml/$i /tmp/$i ; echo $i ); done > log\n\n\nimport lxml.etree as ET\nimport os\nimport sys\nimport copy\nimport itertools\nimport inspect\n\nfrom collections import defaultdict\n\nif __name__ == \"__main__\":\n from common import *\nelse:\n from .common import *\n\n# recursive helper called by protect\n# protect text of \"node\", including children, and tails of children\ndef protect_recurse(node, recased):\n if node.tag == \"fixed-case\": # already protected\n newnode = copy.deepcopy(node) # don't need to modify descendents\n newnode.tail = None # tail will be protected by caller\n return newnode\n newnode = ET.Element(node.tag, node.attrib)\n\n def process(text, rc):\n i = 0\n for upper, chars in itertools.groupby(rc[: len(text)], lambda c: c.isupper()):\n charstr = \"\".join(chars)\n if upper:\n p = ET.Element(\"fixed-case\")\n p.text = charstr\n newnode.append(p)\n else:\n append_text(newnode, text[i : i + len(charstr)])\n\n assert text[i : i + len(charstr)].lower() == charstr.lower(), (\n i,\n text,\n charstr,\n )\n i += len(charstr)\n\n if node.text:\n process(node.text, recased)\n recased = recased[len(node.text) :]\n for child in node:\n protected_child = protect_recurse(child, recased)\n recased = recased[len(get_text(protected_child)) :]\n newnode.append(protected_child)\n if child.tail:\n process(child.tail, recased)\n recased = recased[len(child.tail) :]\n\n return newnode\n\n\ndef protect(node):\n rawtext = get_text(node).strip()\n recased = None\n if rawtext.lower() in special_titles:\n recased = special_titles[rawtext.lower()]\n else:\n text = tokenize(rawtext)\n fixed = fixedcase_title(\n text,\n truelist=truelist,\n phrase_truelist=phrase_truelist,\n amodifiers=amodifiers,\n ndescriptors=ndescriptors,\n )\n if any(fixed):\n # Generate the recased string so we know where to look in the XML\n # to apply fixed-case\n recasedtoks = [(w if b else w.lower()) for w, b in zip(text, fixed)]\n recased = TreebankWordDetokenizer().detokenize(recasedtoks)\n # PTB (de)tokenizer doesn't think of hyphens as separate tokens,\n # so we need to manually detokenize them.\n # Assuming the only edits that need to be made are adding/deleting\n # spaces, the following will work:\n i = 0\n while i < len(rawtext):\n # scan rawtext from left to right and adjust recased by adding/removing\n # spaces until it matches\n t = rawtext[i]\n assert i < len(recased), ((i, t), rawtext, recased)\n c = recased[i]\n if t.isspace() and not c.isspace(): # may be ' ' or '\\n'\n # add space to recased\n recased = 
recased[:i] + t + recased[i:]\n i += 1\n elif c.isspace() and not t.isspace():\n # remove space from recased\n recased = recased[:i] + recased[i + 1 :]\n # don't increment i\n elif t != c and t.isspace() and c.isspace():\n recased = recased[:i] + t + recased[i + 1 :]\n i += 1\n else:\n assert t == c or t.lower() == c.lower(), (\n (i, t, c),\n rawtext,\n recased,\n text,\n )\n i += 1\n if len(recased) > len(rawtext):\n recased = recased[: len(rawtext)]\n assert rawtext.lower() == recased.lower(), (rawtext, recased)\n\n if recased:\n newnode = protect_recurse(node, recased)\n newnode.tail = node.tail # tail of top level is not protected\n replace_node(node, newnode)\n\n\n# Read in the truelist (list of words that should always be protected)\ntruelist, phrase_truelist, special_titles, amodifiers, ndescriptors = load_lists()\n\nif __name__ == \"__main__\":\n infile, outfile = sys.argv[1:]\n\n tree = ET.parse(infile)\n if not tree.getroot().tail:\n tree.getroot().tail = \"\\n\"\n for paper in tree.getroot().findall(\".//paper\"):\n for title in paper.xpath(\"./title|./booktitle\"):\n protect(title)\n tree.write(outfile, encoding=\"UTF-8\", xml_declaration=True)\n", "path": "bin/fixedcase/protect.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# protect.py <infile> <outfile>\n# looks for file \"truelist\" in current dir\n\n# cd data/xml\n# for i in *xml ; do (cd ../../tools/fixedcase/ ; python3 ./protect.py ../../data/xml/$i /tmp/$i ; echo $i ); done > log\n\n\nimport lxml.etree as ET\nimport os\nimport sys\nimport copy\nimport itertools\nimport inspect\n\nfrom collections import defaultdict\n\nif __name__ == \"__main__\":\n from common import *\nelse:\n from .common import *\n\n# recursive helper called by protect\n# protect text of \"node\", including children, and tails of children\ndef protect_recurse(node, recased):\n if node.tag in (\"fixed-case\", \"tex-math\"): # already protected text, or math\n newnode = copy.deepcopy(node) # don't need to modify descendents\n newnode.tail = None # tail will be protected by caller\n return newnode\n newnode = ET.Element(node.tag, node.attrib)\n\n def process(text, rc):\n i = 0\n for upper, chars in itertools.groupby(rc[: len(text)], lambda c: c.isupper()):\n charstr = \"\".join(chars)\n if upper:\n p = ET.Element(\"fixed-case\")\n p.text = charstr\n newnode.append(p)\n else:\n append_text(newnode, text[i : i + len(charstr)])\n\n assert text[i : i + len(charstr)].lower() == charstr.lower(), (\n i,\n text,\n charstr,\n )\n i += len(charstr)\n\n if node.text:\n process(node.text, recased)\n recased = recased[len(node.text) :]\n for child in node:\n protected_child = protect_recurse(child, recased)\n recased = recased[len(get_text(protected_child)) :]\n newnode.append(protected_child)\n if child.tail:\n process(child.tail, recased)\n recased = recased[len(child.tail) :]\n\n return newnode\n\n\ndef protect(node):\n rawtext = get_text(node).strip()\n recased = None\n if rawtext.lower() in special_titles:\n recased = special_titles[rawtext.lower()]\n else:\n text = tokenize(rawtext)\n fixed = fixedcase_title(\n text,\n truelist=truelist,\n phrase_truelist=phrase_truelist,\n amodifiers=amodifiers,\n ndescriptors=ndescriptors,\n )\n if any(fixed):\n # Generate the recased string so we know where to look in the XML\n # to apply fixed-case\n recasedtoks = [(w if b else w.lower()) for w, b in zip(text, fixed)]\n recased = TreebankWordDetokenizer().detokenize(recasedtoks)\n # PTB (de)tokenizer doesn't think of hyphens as separate tokens,\n # so we 
need to manually detokenize them.\n # Assuming the only edits that need to be made are adding/deleting\n # spaces, the following will work:\n i = 0\n while i < len(rawtext):\n # scan rawtext from left to right and adjust recased by adding/removing\n # spaces until it matches\n t = rawtext[i]\n assert i < len(recased), ((i, t), rawtext, recased)\n c = recased[i]\n if t.isspace() and not c.isspace(): # may be ' ' or '\\n'\n # add space to recased\n recased = recased[:i] + t + recased[i:]\n i += 1\n elif c.isspace() and not t.isspace():\n # remove space from recased\n recased = recased[:i] + recased[i + 1 :]\n # don't increment i\n elif t != c and t.isspace() and c.isspace():\n recased = recased[:i] + t + recased[i + 1 :]\n i += 1\n else:\n assert t == c or t.lower() == c.lower(), (\n (i, t, c),\n rawtext,\n recased,\n text,\n )\n i += 1\n if len(recased) > len(rawtext):\n recased = recased[: len(rawtext)]\n assert rawtext.lower() == recased.lower(), (rawtext, recased)\n\n if recased:\n newnode = protect_recurse(node, recased)\n newnode.tail = node.tail # tail of top level is not protected\n replace_node(node, newnode)\n\n\n# Read in the truelist (list of words that should always be protected)\ntruelist, phrase_truelist, special_titles, amodifiers, ndescriptors = load_lists()\n\nif __name__ == \"__main__\":\n infile, outfile = sys.argv[1:]\n\n tree = ET.parse(infile)\n if not tree.getroot().tail:\n tree.getroot().tail = \"\\n\"\n for paper in tree.getroot().findall(\".//paper\"):\n for title in paper.xpath(\"./title|./booktitle\"):\n protect(title)\n tree.write(outfile, encoding=\"UTF-8\", xml_declaration=True)\n", "path": "bin/fixedcase/protect.py"}]} | 1,767 | 161 |
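The guard added by the patch treats `<tex-math>` the same way as `<fixed-case>`: the node is deep-copied verbatim instead of being recursed into, so capitals inside math are never wrapped in `<fixed-case>` tags. A tiny self-contained illustration of the check (the title string is invented for demonstration):

```python
import lxml.etree as ET

title = ET.fromstring(
    '<title>On <tex-math>O(M(n^2))</tex-math> Parsing</title>'
)
for child in title:
    if child.tag in ('fixed-case', 'tex-math'):   # the new early-return condition
        print(child.tag, '-> copied as-is, contents left untouched')
    else:
        print(child.tag, '-> eligible for fixed-case protection')
```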
gh_patches_debug_13 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-1779 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ebola Page>Map: disable scroll wheel zoom
CJ - The specific property is here: https://github.com/OCHA-DAP/hdx-design/blob/gh-pages/js/country.js
line 111: map.scrollWheelZoom.disable();
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_theme/ckanext/hdx_theme/version.py`
Content:
```
1 hdx_version = 'v0.5.1'
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
@@ -1 +1 @@
-hdx_version = 'v0.5.1'
+hdx_version = 'v0.5.2'
| {"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n@@ -1 +1 @@\n-hdx_version = 'v0.5.1'\n+hdx_version = 'v0.5.2'\n", "issue": "Ebola Page>Map: disable scroll wheel zoom\nCJ - The specific property is here: https://github.com/OCHA-DAP/hdx-design/blob/gh-pages/js/country.js\n\nline 111: map.scrollWheelZoom.disable();\n\n", "before_files": [{"content": "hdx_version = 'v0.5.1'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}], "after_files": [{"content": "hdx_version = 'v0.5.2'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}]} | 335 | 106 |
gh_patches_debug_40775 | rasdani/github-patches | git_diff | streamlink__streamlink-3662 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.bfmtv: No playable streams found
Hello. For a few days, the plugin hasn't been working anymore.
/usr/local/bin/streamlink --loglevel debug https://rmcdecouverte.bfmtv.com/mediaplayer-direct/ best
[cli][info] streamlink is running as root! Be careful!
[cli][debug] OS: Linux-5.8.0-44-generic-x86_64-with-glibc2.29
[cli][debug] Python: 3.8.5
[cli][debug] Streamlink: 2.1.1
[cli][debug] Requests(2.22.0), Socks(1.7.1), Websocket(0.58.0)
[cli][debug] Arguments:
[cli][debug] url=https://rmcdecouverte.bfmtv.com/mediaplayer-direct/
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][info] Found matching plugin bfmtv for URL https://rmcdecouverte.bfmtv.com/mediaplayer-direct/
error: No playable streams found on this URL: https://rmcdecouverte.bfmtv.com/mediaplayer-direct/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/bfmtv.py`
Content:
```
1 import logging
2 import re
3
4 from streamlink.plugin import Plugin
5 from streamlink.plugins.brightcove import BrightcovePlayer
6
7 log = logging.getLogger(__name__)
8
9
10 class BFMTV(Plugin):
11 _url_re = re.compile(r'https://.+\.(?:bfmtv|01net)\.com')
12 _dailymotion_url = 'https://www.dailymotion.com/embed/video/{}'
13 _brightcove_video_re = re.compile(
14 r'accountid="(?P<account_id>[0-9]+).*?videoid="(?P<video_id>[0-9]+)"',
15 re.DOTALL
16 )
17 _brightcove_video_alt_re = re.compile(
18 r'data-account="(?P<account_id>[0-9]+).*?data-video-id="(?P<video_id>[0-9]+)"',
19 re.DOTALL
20 )
21 _embed_video_id_re = re.compile(
22 r'<iframe.*?src=".*?/(?P<video_id>\w+)"',
23 re.DOTALL
24 )
25
26 @classmethod
27 def can_handle_url(cls, url):
28 return cls._url_re.match(url) is not None
29
30 def _get_streams(self):
31 # Retrieve URL page and search for Brightcove video data
32 res = self.session.http.get(self.url)
33 match = self._brightcove_video_re.search(res.text) or self._brightcove_video_alt_re.search(res.text)
34 if match is not None:
35 account_id = match.group('account_id')
36 log.debug(f'Account ID: {account_id}')
37 video_id = match.group('video_id')
38 log.debug(f'Video ID: {video_id}')
39 player = BrightcovePlayer(self.session, account_id)
40 yield from player.get_streams(video_id)
41 else:
42 # Try to find the Dailymotion video ID
43 match = self._embed_video_id_re.search(res.text)
44 if match is not None:
45 video_id = match.group('video_id')
46 log.debug(f'Video ID: {video_id}')
47 yield from self.session.streams(self._dailymotion_url.format(video_id)).items()
48
49
50 __plugin__ = BFMTV
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/bfmtv.py b/src/streamlink/plugins/bfmtv.py
--- a/src/streamlink/plugins/bfmtv.py
+++ b/src/streamlink/plugins/bfmtv.py
@@ -1,8 +1,11 @@
import logging
import re
+from urllib.parse import urljoin, urlparse
from streamlink.plugin import Plugin
+from streamlink.plugin.api.utils import itertags
from streamlink.plugins.brightcove import BrightcovePlayer
+from streamlink.stream import HTTPStream
log = logging.getLogger(__name__)
@@ -22,29 +25,68 @@
r'<iframe.*?src=".*?/(?P<video_id>\w+)"',
re.DOTALL
)
+ _main_js_url_re = re.compile(r'src="([\w/]+/main\.\w+\.js)"')
+ _js_brightcove_video_re = re.compile(
+ r'i\?\([A-Z]="[^"]+",y="(?P<video_id>[0-9]+).*"data-account"\s*:\s*"(?P<account_id>[0-9]+)',
+ )
@classmethod
def can_handle_url(cls, url):
return cls._url_re.match(url) is not None
def _get_streams(self):
- # Retrieve URL page and search for Brightcove video data
res = self.session.http.get(self.url)
- match = self._brightcove_video_re.search(res.text) or self._brightcove_video_alt_re.search(res.text)
- if match is not None:
- account_id = match.group('account_id')
+
+ m = self._brightcove_video_re.search(res.text) or self._brightcove_video_alt_re.search(res.text)
+ if m:
+ account_id = m.group('account_id')
log.debug(f'Account ID: {account_id}')
- video_id = match.group('video_id')
+ video_id = m.group('video_id')
log.debug(f'Video ID: {video_id}')
player = BrightcovePlayer(self.session, account_id)
yield from player.get_streams(video_id)
- else:
- # Try to find the Dailymotion video ID
- match = self._embed_video_id_re.search(res.text)
- if match is not None:
- video_id = match.group('video_id')
+ return
+
+ # Try to find the Dailymotion video ID
+ m = self._embed_video_id_re.search(res.text)
+ if m:
+ video_id = m.group('video_id')
+ log.debug(f'Video ID: {video_id}')
+ yield from self.session.streams(self._dailymotion_url.format(video_id)).items()
+ return
+
+ # Try the JS for Brightcove video data
+ m = self._main_js_url_re.search(res.text)
+ if m:
+ log.debug(f'JS URL: {urljoin(self.url, m.group(1))}')
+ res = self.session.http.get(urljoin(self.url, m.group(1)))
+ m = self._js_brightcove_video_re.search(res.text)
+ if m:
+ account_id = m.group('account_id')
+ log.debug(f'Account ID: {account_id}')
+ video_id = m.group('video_id')
log.debug(f'Video ID: {video_id}')
- yield from self.session.streams(self._dailymotion_url.format(video_id)).items()
+ player = BrightcovePlayer(self.session, account_id)
+ yield from player.get_streams(video_id)
+ return
+
+ # Audio Live
+ audio_url = None
+ for source in itertags(res.text, 'source'):
+ url = source.attributes.get('src')
+ if url:
+ p_url = urlparse(url)
+ if p_url.path.endswith(('.mp3')):
+ audio_url = url
+
+ # Audio VOD
+ for div in itertags(res.text, 'div'):
+ if div.attributes.get('class') == 'audio-player':
+ audio_url = div.attributes.get('data-media-url')
+
+ if audio_url:
+ yield 'audio', HTTPStream(self.session, audio_url)
+ return
__plugin__ = BFMTV
| {"golden_diff": "diff --git a/src/streamlink/plugins/bfmtv.py b/src/streamlink/plugins/bfmtv.py\n--- a/src/streamlink/plugins/bfmtv.py\n+++ b/src/streamlink/plugins/bfmtv.py\n@@ -1,8 +1,11 @@\n import logging\n import re\n+from urllib.parse import urljoin, urlparse\n \n from streamlink.plugin import Plugin\n+from streamlink.plugin.api.utils import itertags\n from streamlink.plugins.brightcove import BrightcovePlayer\n+from streamlink.stream import HTTPStream\n \n log = logging.getLogger(__name__)\n \n@@ -22,29 +25,68 @@\n r'<iframe.*?src=\".*?/(?P<video_id>\\w+)\"',\n re.DOTALL\n )\n+ _main_js_url_re = re.compile(r'src=\"([\\w/]+/main\\.\\w+\\.js)\"')\n+ _js_brightcove_video_re = re.compile(\n+ r'i\\?\\([A-Z]=\"[^\"]+\",y=\"(?P<video_id>[0-9]+).*\"data-account\"\\s*:\\s*\"(?P<account_id>[0-9]+)',\n+ )\n \n @classmethod\n def can_handle_url(cls, url):\n return cls._url_re.match(url) is not None\n \n def _get_streams(self):\n- # Retrieve URL page and search for Brightcove video data\n res = self.session.http.get(self.url)\n- match = self._brightcove_video_re.search(res.text) or self._brightcove_video_alt_re.search(res.text)\n- if match is not None:\n- account_id = match.group('account_id')\n+\n+ m = self._brightcove_video_re.search(res.text) or self._brightcove_video_alt_re.search(res.text)\n+ if m:\n+ account_id = m.group('account_id')\n log.debug(f'Account ID: {account_id}')\n- video_id = match.group('video_id')\n+ video_id = m.group('video_id')\n log.debug(f'Video ID: {video_id}')\n player = BrightcovePlayer(self.session, account_id)\n yield from player.get_streams(video_id)\n- else:\n- # Try to find the Dailymotion video ID\n- match = self._embed_video_id_re.search(res.text)\n- if match is not None:\n- video_id = match.group('video_id')\n+ return\n+\n+ # Try to find the Dailymotion video ID\n+ m = self._embed_video_id_re.search(res.text)\n+ if m:\n+ video_id = m.group('video_id')\n+ log.debug(f'Video ID: {video_id}')\n+ yield from self.session.streams(self._dailymotion_url.format(video_id)).items()\n+ return\n+\n+ # Try the JS for Brightcove video data\n+ m = self._main_js_url_re.search(res.text)\n+ if m:\n+ log.debug(f'JS URL: {urljoin(self.url, m.group(1))}')\n+ res = self.session.http.get(urljoin(self.url, m.group(1)))\n+ m = self._js_brightcove_video_re.search(res.text)\n+ if m:\n+ account_id = m.group('account_id')\n+ log.debug(f'Account ID: {account_id}')\n+ video_id = m.group('video_id')\n log.debug(f'Video ID: {video_id}')\n- yield from self.session.streams(self._dailymotion_url.format(video_id)).items()\n+ player = BrightcovePlayer(self.session, account_id)\n+ yield from player.get_streams(video_id)\n+ return\n+\n+ # Audio Live\n+ audio_url = None\n+ for source in itertags(res.text, 'source'):\n+ url = source.attributes.get('src')\n+ if url:\n+ p_url = urlparse(url)\n+ if p_url.path.endswith(('.mp3')):\n+ audio_url = url\n+\n+ # Audio VOD\n+ for div in itertags(res.text, 'div'):\n+ if div.attributes.get('class') == 'audio-player':\n+ audio_url = div.attributes.get('data-media-url')\n+\n+ if audio_url:\n+ yield 'audio', HTTPStream(self.session, audio_url)\n+ return\n \n \n __plugin__ = BFMTV\n", "issue": "plugins.bfmtv: No playable streams found\n Hello. for few days, the plugin isn't working anymore\r\n\r\n\r\n/usr/local/bin/streamlink --loglevel debug https://rmcdecouverte.bfmtv.com/mediaplayer-direct/ best\r\n[cli][info] streamlink is running as root! 
Be careful!\r\n[cli][debug] OS: Linux-5.8.0-44-generic-x86_64-with-glibc2.29\r\n[cli][debug] Python: 3.8.5\r\n[cli][debug] Streamlink: 2.1.1\r\n[cli][debug] Requests(2.22.0), Socks(1.7.1), Websocket(0.58.0)\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://rmcdecouverte.bfmtv.com/mediaplayer-direct/\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][info] Found matching plugin bfmtv for URL https://rmcdecouverte.bfmtv.com/mediaplayer-direct/\r\nerror: No playable streams found on this URL: https://rmcdecouverte.bfmtv.com/mediaplayer-direct/\n", "before_files": [{"content": "import logging\nimport re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugins.brightcove import BrightcovePlayer\n\nlog = logging.getLogger(__name__)\n\n\nclass BFMTV(Plugin):\n _url_re = re.compile(r'https://.+\\.(?:bfmtv|01net)\\.com')\n _dailymotion_url = 'https://www.dailymotion.com/embed/video/{}'\n _brightcove_video_re = re.compile(\n r'accountid=\"(?P<account_id>[0-9]+).*?videoid=\"(?P<video_id>[0-9]+)\"',\n re.DOTALL\n )\n _brightcove_video_alt_re = re.compile(\n r'data-account=\"(?P<account_id>[0-9]+).*?data-video-id=\"(?P<video_id>[0-9]+)\"',\n re.DOTALL\n )\n _embed_video_id_re = re.compile(\n r'<iframe.*?src=\".*?/(?P<video_id>\\w+)\"',\n re.DOTALL\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls._url_re.match(url) is not None\n\n def _get_streams(self):\n # Retrieve URL page and search for Brightcove video data\n res = self.session.http.get(self.url)\n match = self._brightcove_video_re.search(res.text) or self._brightcove_video_alt_re.search(res.text)\n if match is not None:\n account_id = match.group('account_id')\n log.debug(f'Account ID: {account_id}')\n video_id = match.group('video_id')\n log.debug(f'Video ID: {video_id}')\n player = BrightcovePlayer(self.session, account_id)\n yield from player.get_streams(video_id)\n else:\n # Try to find the Dailymotion video ID\n match = self._embed_video_id_re.search(res.text)\n if match is not None:\n video_id = match.group('video_id')\n log.debug(f'Video ID: {video_id}')\n yield from self.session.streams(self._dailymotion_url.format(video_id)).items()\n\n\n__plugin__ = BFMTV\n", "path": "src/streamlink/plugins/bfmtv.py"}], "after_files": [{"content": "import logging\nimport re\nfrom urllib.parse import urljoin, urlparse\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api.utils import itertags\nfrom streamlink.plugins.brightcove import BrightcovePlayer\nfrom streamlink.stream import HTTPStream\n\nlog = logging.getLogger(__name__)\n\n\nclass BFMTV(Plugin):\n _url_re = re.compile(r'https://.+\\.(?:bfmtv|01net)\\.com')\n _dailymotion_url = 'https://www.dailymotion.com/embed/video/{}'\n _brightcove_video_re = re.compile(\n r'accountid=\"(?P<account_id>[0-9]+).*?videoid=\"(?P<video_id>[0-9]+)\"',\n re.DOTALL\n )\n _brightcove_video_alt_re = re.compile(\n r'data-account=\"(?P<account_id>[0-9]+).*?data-video-id=\"(?P<video_id>[0-9]+)\"',\n re.DOTALL\n )\n _embed_video_id_re = re.compile(\n r'<iframe.*?src=\".*?/(?P<video_id>\\w+)\"',\n re.DOTALL\n )\n _main_js_url_re = re.compile(r'src=\"([\\w/]+/main\\.\\w+\\.js)\"')\n _js_brightcove_video_re = re.compile(\n r'i\\?\\([A-Z]=\"[^\"]+\",y=\"(?P<video_id>[0-9]+).*\"data-account\"\\s*:\\s*\"(?P<account_id>[0-9]+)',\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls._url_re.match(url) is not None\n\n def _get_streams(self):\n res = self.session.http.get(self.url)\n\n m = self._brightcove_video_re.search(res.text) or 
self._brightcove_video_alt_re.search(res.text)\n if m:\n account_id = m.group('account_id')\n log.debug(f'Account ID: {account_id}')\n video_id = m.group('video_id')\n log.debug(f'Video ID: {video_id}')\n player = BrightcovePlayer(self.session, account_id)\n yield from player.get_streams(video_id)\n return\n\n # Try to find the Dailymotion video ID\n m = self._embed_video_id_re.search(res.text)\n if m:\n video_id = m.group('video_id')\n log.debug(f'Video ID: {video_id}')\n yield from self.session.streams(self._dailymotion_url.format(video_id)).items()\n return\n\n # Try the JS for Brightcove video data\n m = self._main_js_url_re.search(res.text)\n if m:\n log.debug(f'JS URL: {urljoin(self.url, m.group(1))}')\n res = self.session.http.get(urljoin(self.url, m.group(1)))\n m = self._js_brightcove_video_re.search(res.text)\n if m:\n account_id = m.group('account_id')\n log.debug(f'Account ID: {account_id}')\n video_id = m.group('video_id')\n log.debug(f'Video ID: {video_id}')\n player = BrightcovePlayer(self.session, account_id)\n yield from player.get_streams(video_id)\n return\n\n # Audio Live\n audio_url = None\n for source in itertags(res.text, 'source'):\n url = source.attributes.get('src')\n if url:\n p_url = urlparse(url)\n if p_url.path.endswith(('.mp3')):\n audio_url = url\n\n # Audio VOD\n for div in itertags(res.text, 'div'):\n if div.attributes.get('class') == 'audio-player':\n audio_url = div.attributes.get('data-media-url')\n\n if audio_url:\n yield 'audio', HTTPStream(self.session, audio_url)\n return\n\n\n__plugin__ = BFMTV\n", "path": "src/streamlink/plugins/bfmtv.py"}]} | 1,111 | 955 |
gh_patches_debug_486 | rasdani/github-patches | git_diff | DDMAL__CantusDB-228 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove the "Users Online" section in footer.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/templatetags/helper_tags.py`
Content:
```
1 import calendar
2 from typing import Union, Optional
3 from django.utils.http import urlencode
4 from django import template
5 from main_app.models import Source
6 from django.utils.safestring import mark_safe
7
8 register = template.Library()
9
10
11 @register.filter(name="month_to_string")
12 def month_to_string(value: Optional[Union[str, int]]) -> Optional[Union[str, int]]:
13 """Converts month number to textual representation, 3 letters (Jan, Mar, etc)"""
14 if type(value) == int and value in range(1, 13):
15 return calendar.month_abbr[value]
16 else:
17 return value
18
19
20 @register.simple_tag(takes_context=True)
21 def url_add_get_params(context, **kwargs):
22 query = context["request"].GET.copy()
23 query.pop("page", None)
24 query.update(kwargs)
25 return query.urlencode()
26
27
28 @register.simple_tag(takes_context=False)
29 def source_links():
30 sources = (
31 Source.objects.filter(public=True, visible=True, segment__id=4063)
32 .exclude(siglum=None)
33 .values("siglum", "id")
34 .order_by("siglum")
35 )
36 options = ""
37 # <option value="source1">Source 1</option>
38 # <option value="source2">Source 2</option>
39 # <option value="source3">Source 3</option>
40 for source in sources:
41 option_str = (
42 f"<option value=source/{source['id']}>{source['siglum']}</option>\n"
43 )
44 options += option_str
45
46 return mark_safe(options)
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django/cantusdb_project/main_app/templatetags/helper_tags.py b/django/cantusdb_project/main_app/templatetags/helper_tags.py
--- a/django/cantusdb_project/main_app/templatetags/helper_tags.py
+++ b/django/cantusdb_project/main_app/templatetags/helper_tags.py
@@ -44,3 +44,7 @@
options += option_str
return mark_safe(options)
+
[email protected](name='has_group')
+def has_group(user, group_name):
+ return user.groups.filter(name=group_name).exists()
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/templatetags/helper_tags.py b/django/cantusdb_project/main_app/templatetags/helper_tags.py\n--- a/django/cantusdb_project/main_app/templatetags/helper_tags.py\n+++ b/django/cantusdb_project/main_app/templatetags/helper_tags.py\n@@ -44,3 +44,7 @@\n options += option_str\n \n return mark_safe(options)\n+\[email protected](name='has_group') \n+def has_group(user, group_name):\n+ return user.groups.filter(name=group_name).exists()\n", "issue": "Remove the \"Users Online\" section in footer.\n\n", "before_files": [{"content": "import calendar\nfrom typing import Union, Optional\nfrom django.utils.http import urlencode\nfrom django import template\nfrom main_app.models import Source\nfrom django.utils.safestring import mark_safe\n\nregister = template.Library()\n\n\[email protected](name=\"month_to_string\")\ndef month_to_string(value: Optional[Union[str, int]]) -> Optional[Union[str, int]]:\n \"\"\"Converts month number to textual representation, 3 letters (Jan, Mar, etc)\"\"\"\n if type(value) == int and value in range(1, 13):\n return calendar.month_abbr[value]\n else:\n return value\n\n\[email protected]_tag(takes_context=True)\ndef url_add_get_params(context, **kwargs):\n query = context[\"request\"].GET.copy()\n query.pop(\"page\", None)\n query.update(kwargs)\n return query.urlencode()\n\n\[email protected]_tag(takes_context=False)\ndef source_links():\n sources = (\n Source.objects.filter(public=True, visible=True, segment__id=4063)\n .exclude(siglum=None)\n .values(\"siglum\", \"id\")\n .order_by(\"siglum\")\n )\n options = \"\"\n # <option value=\"source1\">Source 1</option>\n # <option value=\"source2\">Source 2</option>\n # <option value=\"source3\">Source 3</option>\n for source in sources:\n option_str = (\n f\"<option value=source/{source['id']}>{source['siglum']}</option>\\n\"\n )\n options += option_str\n\n return mark_safe(options)\n", "path": "django/cantusdb_project/main_app/templatetags/helper_tags.py"}], "after_files": [{"content": "import calendar\nfrom typing import Union, Optional\nfrom django.utils.http import urlencode\nfrom django import template\nfrom main_app.models import Source\nfrom django.utils.safestring import mark_safe\n\nregister = template.Library()\n\n\[email protected](name=\"month_to_string\")\ndef month_to_string(value: Optional[Union[str, int]]) -> Optional[Union[str, int]]:\n \"\"\"Converts month number to textual representation, 3 letters (Jan, Mar, etc)\"\"\"\n if type(value) == int and value in range(1, 13):\n return calendar.month_abbr[value]\n else:\n return value\n\n\[email protected]_tag(takes_context=True)\ndef url_add_get_params(context, **kwargs):\n query = context[\"request\"].GET.copy()\n query.pop(\"page\", None)\n query.update(kwargs)\n return query.urlencode()\n\n\[email protected]_tag(takes_context=False)\ndef source_links():\n sources = (\n Source.objects.filter(public=True, visible=True, segment__id=4063)\n .exclude(siglum=None)\n .values(\"siglum\", \"id\")\n .order_by(\"siglum\")\n )\n options = \"\"\n # <option value=\"source1\">Source 1</option>\n # <option value=\"source2\">Source 2</option>\n # <option value=\"source3\">Source 3</option>\n for source in sources:\n option_str = (\n f\"<option value=source/{source['id']}>{source['siglum']}</option>\\n\"\n )\n options += option_str\n\n return mark_safe(options)\n\[email protected](name='has_group') \ndef has_group(user, group_name):\n return user.groups.filter(name=group_name).exists() \n", "path": 
"django/cantusdb_project/main_app/templatetags/helper_tags.py"}]} | 712 | 137 |
gh_patches_debug_13979 | rasdani/github-patches | git_diff | facebookresearch__fairscale-975 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
the main branch is not compatible with python 3.6, but setup.py only requires ">=3.6"
python 3.6 can pip install latest fairscale
https://github.com/facebookresearch/fairscale/blob/1bc96fa8c69def6d990e42bfbd75f86146ce29bd/setup.py#L67
but, some code is not compatible with python 3.6
https://github.com/facebookresearch/fairscale/blob/1bc96fa8c69def6d990e42bfbd75f86146ce29bd/fairscale/experimental/nn/ssd_offload.py#L6
and python<3.7 has no dataclasses
https://github.com/facebookresearch/fairscale/blob/1bc96fa8c69def6d990e42bfbd75f86146ce29bd/fairscale/nn/data_parallel/fully_sharded_data_parallel.py#L8
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
4 #
5 # This source code is licensed under the BSD license found in the
6 # LICENSE file in the root directory of this source tree.
7
8 import os
9 import re
10
11 import setuptools
12
13 this_dir = os.path.dirname(os.path.abspath(__file__))
14
15
16 def fetch_requirements():
17 with open("requirements.txt") as f:
18 reqs = f.read().strip().split("\n")
19 return reqs
20
21
22 # https://packaging.python.org/guides/single-sourcing-package-version/
23 def find_version(version_file_path) -> str:
24 with open(version_file_path) as version_file:
25 version_match = re.search(r"^__version_tuple__ = (.*)", version_file.read(), re.M)
26 if version_match:
27 ver_tup = eval(version_match.group(1))
28 ver_str = ".".join([str(x) for x in ver_tup])
29 return ver_str
30 raise RuntimeError("Unable to find version tuple.")
31
32
33 extensions = []
34 cmdclass = {}
35
36 if os.getenv("BUILD_CUDA_EXTENSIONS", "0") == "1":
37 from torch.utils.cpp_extension import BuildExtension, CUDAExtension
38
39 extensions.extend(
40 [
41 CUDAExtension(
42 name="fairscale.fused_adam_cuda",
43 include_dirs=[os.path.join(this_dir, "fairscale/clib/fused_adam_cuda")],
44 sources=[
45 "fairscale/clib/fused_adam_cuda/fused_adam_cuda.cpp",
46 "fairscale/clib/fused_adam_cuda/fused_adam_cuda_kernel.cu",
47 ],
48 extra_compile_args={"cxx": ["-O3"], "nvcc": ["-O3", "--use_fast_math"]},
49 )
50 ]
51 )
52
53 cmdclass["build_ext"] = BuildExtension
54
55
56 if __name__ == "__main__":
57 setuptools.setup(
58 name="fairscale",
59 description="FairScale: A PyTorch library for large-scale and high-performance training.",
60 version=find_version("fairscale/version.py"),
61 setup_requires=["ninja"], # ninja is required to build extensions
62 install_requires=fetch_requirements(),
63 include_package_data=True,
64 packages=setuptools.find_packages(exclude=("tests", "tests.*")),
65 ext_modules=extensions,
66 cmdclass=cmdclass,
67 python_requires=">=3.6",
68 author="Facebook AI Research",
69 author_email="[email protected]",
70 long_description="FairScale is a PyTorch extension library for high performance and large scale training on one or multiple machines/nodes. This library extends basic PyTorch capabilities while adding new experimental ones.",
71 long_description_content_type="text/markdown",
72 classifiers=[
73 "Programming Language :: Python :: 3.7",
74 "Programming Language :: Python :: 3.8",
75 "Programming Language :: Python :: 3.9",
76 "License :: OSI Approved :: BSD License",
77 "Topic :: Scientific/Engineering :: Artificial Intelligence",
78 "Operating System :: OS Independent",
79 ],
80 )
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -64,7 +64,7 @@
packages=setuptools.find_packages(exclude=("tests", "tests.*")),
ext_modules=extensions,
cmdclass=cmdclass,
- python_requires=">=3.6",
+ python_requires=">=3.7",
author="Facebook AI Research",
author_email="[email protected]",
long_description="FairScale is a PyTorch extension library for high performance and large scale training on one or multiple machines/nodes. This library extends basic PyTorch capabilities while adding new experimental ones.",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -64,7 +64,7 @@\n packages=setuptools.find_packages(exclude=(\"tests\", \"tests.*\")),\n ext_modules=extensions,\n cmdclass=cmdclass,\n- python_requires=\">=3.6\",\n+ python_requires=\">=3.7\",\n author=\"Facebook AI Research\",\n author_email=\"[email protected]\",\n long_description=\"FairScale is a PyTorch extension library for high performance and large scale training on one or multiple machines/nodes. This library extends basic PyTorch capabilities while adding new experimental ones.\",\n", "issue": "the main branch is not compatible with python 3.6, but setup.py only requires \">=3.6\"\npython 3.6 can pip install latest fairscale\r\nhttps://github.com/facebookresearch/fairscale/blob/1bc96fa8c69def6d990e42bfbd75f86146ce29bd/setup.py#L67\r\n\r\nbut, some code is not compatible with python 3.6\r\nhttps://github.com/facebookresearch/fairscale/blob/1bc96fa8c69def6d990e42bfbd75f86146ce29bd/fairscale/experimental/nn/ssd_offload.py#L6\r\nand python<3.7 has no dataclasses\r\nhttps://github.com/facebookresearch/fairscale/blob/1bc96fa8c69def6d990e42bfbd75f86146ce29bd/fairscale/nn/data_parallel/fully_sharded_data_parallel.py#L8\r\n\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\nimport os\nimport re\n\nimport setuptools\n\nthis_dir = os.path.dirname(os.path.abspath(__file__))\n\n\ndef fetch_requirements():\n with open(\"requirements.txt\") as f:\n reqs = f.read().strip().split(\"\\n\")\n return reqs\n\n\n# https://packaging.python.org/guides/single-sourcing-package-version/\ndef find_version(version_file_path) -> str:\n with open(version_file_path) as version_file:\n version_match = re.search(r\"^__version_tuple__ = (.*)\", version_file.read(), re.M)\n if version_match:\n ver_tup = eval(version_match.group(1))\n ver_str = \".\".join([str(x) for x in ver_tup])\n return ver_str\n raise RuntimeError(\"Unable to find version tuple.\")\n\n\nextensions = []\ncmdclass = {}\n\nif os.getenv(\"BUILD_CUDA_EXTENSIONS\", \"0\") == \"1\":\n from torch.utils.cpp_extension import BuildExtension, CUDAExtension\n\n extensions.extend(\n [\n CUDAExtension(\n name=\"fairscale.fused_adam_cuda\",\n include_dirs=[os.path.join(this_dir, \"fairscale/clib/fused_adam_cuda\")],\n sources=[\n \"fairscale/clib/fused_adam_cuda/fused_adam_cuda.cpp\",\n \"fairscale/clib/fused_adam_cuda/fused_adam_cuda_kernel.cu\",\n ],\n extra_compile_args={\"cxx\": [\"-O3\"], \"nvcc\": [\"-O3\", \"--use_fast_math\"]},\n )\n ]\n )\n\n cmdclass[\"build_ext\"] = BuildExtension\n\n\nif __name__ == \"__main__\":\n setuptools.setup(\n name=\"fairscale\",\n description=\"FairScale: A PyTorch library for large-scale and high-performance training.\",\n version=find_version(\"fairscale/version.py\"),\n setup_requires=[\"ninja\"], # ninja is required to build extensions\n install_requires=fetch_requirements(),\n include_package_data=True,\n packages=setuptools.find_packages(exclude=(\"tests\", \"tests.*\")),\n ext_modules=extensions,\n cmdclass=cmdclass,\n python_requires=\">=3.6\",\n author=\"Facebook AI Research\",\n author_email=\"[email protected]\",\n long_description=\"FairScale is a PyTorch extension library for high performance and large scale training on one or multiple machines/nodes. 
This library extends basic PyTorch capabilities while adding new experimental ones.\",\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"License :: OSI Approved :: BSD License\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Operating System :: OS Independent\",\n ],\n )\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\nimport os\nimport re\n\nimport setuptools\n\nthis_dir = os.path.dirname(os.path.abspath(__file__))\n\n\ndef fetch_requirements():\n with open(\"requirements.txt\") as f:\n reqs = f.read().strip().split(\"\\n\")\n return reqs\n\n\n# https://packaging.python.org/guides/single-sourcing-package-version/\ndef find_version(version_file_path) -> str:\n with open(version_file_path) as version_file:\n version_match = re.search(r\"^__version_tuple__ = (.*)\", version_file.read(), re.M)\n if version_match:\n ver_tup = eval(version_match.group(1))\n ver_str = \".\".join([str(x) for x in ver_tup])\n return ver_str\n raise RuntimeError(\"Unable to find version tuple.\")\n\n\nextensions = []\ncmdclass = {}\n\nif os.getenv(\"BUILD_CUDA_EXTENSIONS\", \"0\") == \"1\":\n from torch.utils.cpp_extension import BuildExtension, CUDAExtension\n\n extensions.extend(\n [\n CUDAExtension(\n name=\"fairscale.fused_adam_cuda\",\n include_dirs=[os.path.join(this_dir, \"fairscale/clib/fused_adam_cuda\")],\n sources=[\n \"fairscale/clib/fused_adam_cuda/fused_adam_cuda.cpp\",\n \"fairscale/clib/fused_adam_cuda/fused_adam_cuda_kernel.cu\",\n ],\n extra_compile_args={\"cxx\": [\"-O3\"], \"nvcc\": [\"-O3\", \"--use_fast_math\"]},\n )\n ]\n )\n\n cmdclass[\"build_ext\"] = BuildExtension\n\n\nif __name__ == \"__main__\":\n setuptools.setup(\n name=\"fairscale\",\n description=\"FairScale: A PyTorch library for large-scale and high-performance training.\",\n version=find_version(\"fairscale/version.py\"),\n setup_requires=[\"ninja\"], # ninja is required to build extensions\n install_requires=fetch_requirements(),\n include_package_data=True,\n packages=setuptools.find_packages(exclude=(\"tests\", \"tests.*\")),\n ext_modules=extensions,\n cmdclass=cmdclass,\n python_requires=\">=3.7\",\n author=\"Facebook AI Research\",\n author_email=\"[email protected]\",\n long_description=\"FairScale is a PyTorch extension library for high performance and large scale training on one or multiple machines/nodes. This library extends basic PyTorch capabilities while adding new experimental ones.\",\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"License :: OSI Approved :: BSD License\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Operating System :: OS Independent\",\n ],\n )\n", "path": "setup.py"}]} | 1,297 | 138 |
gh_patches_debug_16300 | rasdani/github-patches | git_diff | pre-commit__pre-commit-399 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Drop python2.6?
Is it worth attempting to continue to support python2.6?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import find_packages
2 from setuptools import setup
3
4
5 setup(
6 name='pre_commit',
7 description=(
8 'A framework for managing and maintaining multi-language pre-commit '
9 'hooks.'
10 ),
11 url='https://github.com/pre-commit/pre-commit',
12 version='0.8.2',
13
14 author='Anthony Sottile',
15 author_email='[email protected]',
16
17 platforms='linux',
18 classifiers=[
19 'License :: OSI Approved :: MIT License',
20 'Programming Language :: Python :: 2',
21 'Programming Language :: Python :: 2.6',
22 'Programming Language :: Python :: 2.7',
23 'Programming Language :: Python :: 3',
24 'Programming Language :: Python :: 3.4',
25 'Programming Language :: Python :: 3.5',
26 'Programming Language :: Python :: Implementation :: CPython',
27 'Programming Language :: Python :: Implementation :: PyPy',
28 ],
29
30 packages=find_packages('.', exclude=('tests*', 'testing*')),
31 package_data={
32 'pre_commit': [
33 'resources/hook-tmpl',
34 'resources/pre-push-tmpl',
35 'resources/rbenv.tar.gz',
36 'resources/ruby-build.tar.gz',
37 'resources/ruby-download.tar.gz',
38 ]
39 },
40 install_requires=[
41 'aspy.yaml',
42 'cached-property',
43 'jsonschema',
44 'nodeenv>=0.11.1',
45 'pyterminalsize',
46 'pyyaml',
47 'virtualenv',
48 ],
49 extras_require={
50 ':python_version=="2.6"': ['argparse', 'ordereddict'],
51 },
52 entry_points={
53 'console_scripts': [
54 'pre-commit = pre_commit.main:main',
55 'pre-commit-validate-config = pre_commit.clientlib.validate_config:run', # noqa
56 'pre-commit-validate-manifest = pre_commit.clientlib.validate_manifest:run', # noqa
57 ],
58 },
59 )
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,7 +18,6 @@
classifiers=[
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 2',
- 'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
@@ -46,9 +45,6 @@
'pyyaml',
'virtualenv',
],
- extras_require={
- ':python_version=="2.6"': ['argparse', 'ordereddict'],
- },
entry_points={
'console_scripts': [
'pre-commit = pre_commit.main:main',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,7 +18,6 @@\n classifiers=[\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 2',\n- 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n@@ -46,9 +45,6 @@\n 'pyyaml',\n 'virtualenv',\n ],\n- extras_require={\n- ':python_version==\"2.6\"': ['argparse', 'ordereddict'],\n- },\n entry_points={\n 'console_scripts': [\n 'pre-commit = pre_commit.main:main',\n", "issue": "Drop python2.6?\nIs it worth attempting to continue to support python2.6?\n\n", "before_files": [{"content": "from setuptools import find_packages\nfrom setuptools import setup\n\n\nsetup(\n name='pre_commit',\n description=(\n 'A framework for managing and maintaining multi-language pre-commit '\n 'hooks.'\n ),\n url='https://github.com/pre-commit/pre-commit',\n version='0.8.2',\n\n author='Anthony Sottile',\n author_email='[email protected]',\n\n platforms='linux',\n classifiers=[\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n\n packages=find_packages('.', exclude=('tests*', 'testing*')),\n package_data={\n 'pre_commit': [\n 'resources/hook-tmpl',\n 'resources/pre-push-tmpl',\n 'resources/rbenv.tar.gz',\n 'resources/ruby-build.tar.gz',\n 'resources/ruby-download.tar.gz',\n ]\n },\n install_requires=[\n 'aspy.yaml',\n 'cached-property',\n 'jsonschema',\n 'nodeenv>=0.11.1',\n 'pyterminalsize',\n 'pyyaml',\n 'virtualenv',\n ],\n extras_require={\n ':python_version==\"2.6\"': ['argparse', 'ordereddict'],\n },\n entry_points={\n 'console_scripts': [\n 'pre-commit = pre_commit.main:main',\n 'pre-commit-validate-config = pre_commit.clientlib.validate_config:run', # noqa\n 'pre-commit-validate-manifest = pre_commit.clientlib.validate_manifest:run', # noqa\n ],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import find_packages\nfrom setuptools import setup\n\n\nsetup(\n name='pre_commit',\n description=(\n 'A framework for managing and maintaining multi-language pre-commit '\n 'hooks.'\n ),\n url='https://github.com/pre-commit/pre-commit',\n version='0.8.2',\n\n author='Anthony Sottile',\n author_email='[email protected]',\n\n platforms='linux',\n classifiers=[\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n\n packages=find_packages('.', exclude=('tests*', 'testing*')),\n package_data={\n 'pre_commit': [\n 'resources/hook-tmpl',\n 'resources/pre-push-tmpl',\n 'resources/rbenv.tar.gz',\n 'resources/ruby-build.tar.gz',\n 'resources/ruby-download.tar.gz',\n ]\n },\n install_requires=[\n 'aspy.yaml',\n 'cached-property',\n 'jsonschema',\n 'nodeenv>=0.11.1',\n 'pyterminalsize',\n 'pyyaml',\n 'virtualenv',\n ],\n entry_points={\n 'console_scripts': [\n 'pre-commit 
= pre_commit.main:main',\n 'pre-commit-validate-config = pre_commit.clientlib.validate_config:run', # noqa\n 'pre-commit-validate-manifest = pre_commit.clientlib.validate_manifest:run', # noqa\n ],\n },\n)\n", "path": "setup.py"}]} | 796 | 174 |
gh_patches_debug_21753 | rasdani/github-patches | git_diff | Flexget__Flexget-1600 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
nyaa changed TLD
hi peeps. it seems they switched TLD from .eu to .se
i changed my local flexget/plugins/sites/nyaa.py, removed the pyc & reloaded the daemon. its pulling stuff. but i aint got the skills to send a pull request, so i thought i'd do the next best thing and say something
if you don't want to do anything, i guess thats fine too. the old is redirecting to the new
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flexget/plugins/sites/nyaa.py`
Content:
```
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # noqa pylint: disable=unused-import, redefined-builtin
3 from future.moves.urllib.parse import quote
4
5 import logging
6
7 import feedparser
8
9 from flexget import plugin
10 from flexget.entry import Entry
11 from flexget.event import event
12 from flexget.utils.search import normalize_unicode
13
14 log = logging.getLogger('nyaa')
15
16 # TODO: Other categories
17 CATEGORIES = {'all': '0_0',
18 'anime': '1_0',
19 'anime eng': '1_37',
20 'anime non-eng': '1_38',
21 'anime raw': '1_11'}
22 FILTERS = ['all', 'filter remakes', 'trusted only', 'a+ only']
23
24
25 class UrlRewriteNyaa(object):
26 """Nyaa urlrewriter and search plugin."""
27
28 schema = {
29 'oneOf': [
30 {'type': 'string', 'enum': list(CATEGORIES)},
31 {
32 'type': 'object',
33 'properties': {
34 'category': {'type': 'string', 'enum': list(CATEGORIES)},
35 'filter': {'type': 'string', 'enum': list(FILTERS)}
36 },
37 'additionalProperties': False
38 }
39 ]
40 }
41
42 def search(self, task, entry, config):
43 if not isinstance(config, dict):
44 config = {'category': config}
45 config.setdefault('category', 'anime eng')
46 config.setdefault('filter', 'all')
47 entries = set()
48 for search_string in entry.get('search_strings', [entry['title']]):
49 name = normalize_unicode(search_string)
50 url = 'http://www.nyaa.eu/?page=rss&cats=%s&filter=%s&term=%s' % (
51 CATEGORIES[config['category']], FILTERS.index(config['filter']), quote(name.encode('utf-8')))
52
53 log.debug('requesting: %s' % url)
54 rss = feedparser.parse(url)
55
56 status = rss.get('status', False)
57 if status != 200:
58 log.debug('Search result not 200 (OK), received %s' % status)
59 if status >= 400:
60 continue
61
62 ex = rss.get('bozo_exception', False)
63 if ex:
64 log.error('Got bozo_exception (bad feed) on %s' % url)
65 continue
66
67 for item in rss.entries:
68 entry = Entry()
69 entry['title'] = item.title
70 entry['url'] = item.link
71 # TODO: parse some shit
72 # entry['torrent_seeds'] = int(item.seeds)
73 # entry['torrent_leeches'] = int(item.leechs)
74 # entry['search_sort'] = torrent_availability(entry['torrent_seeds'], entry['torrent_leeches'])
75 # entry['content_size'] = int(item.size) / 1024 / 1024
76
77 entries.add(entry)
78
79 return entries
80
81 def url_rewritable(self, task, entry):
82 return entry['url'].startswith('http://www.nyaa.eu/?page=torrentinfo&tid=')
83
84 def url_rewrite(self, task, entry):
85 entry['url'] = entry['url'].replace('torrentinfo', 'download')
86
87
88 @event('plugin.register')
89 def register_plugin():
90 plugin.register(UrlRewriteNyaa, 'nyaa', groups=['search', 'urlrewriter'], api_ver=2)
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flexget/plugins/sites/nyaa.py b/flexget/plugins/sites/nyaa.py
--- a/flexget/plugins/sites/nyaa.py
+++ b/flexget/plugins/sites/nyaa.py
@@ -47,7 +47,7 @@
entries = set()
for search_string in entry.get('search_strings', [entry['title']]):
name = normalize_unicode(search_string)
- url = 'http://www.nyaa.eu/?page=rss&cats=%s&filter=%s&term=%s' % (
+ url = 'http://www.nyaa.se/?page=rss&cats=%s&filter=%s&term=%s' % (
CATEGORIES[config['category']], FILTERS.index(config['filter']), quote(name.encode('utf-8')))
log.debug('requesting: %s' % url)
@@ -79,7 +79,7 @@
return entries
def url_rewritable(self, task, entry):
- return entry['url'].startswith('http://www.nyaa.eu/?page=torrentinfo&tid=')
+ return entry['url'].startswith('http://www.nyaa.se/?page=torrentinfo&tid=')
def url_rewrite(self, task, entry):
entry['url'] = entry['url'].replace('torrentinfo', 'download')
| {"golden_diff": "diff --git a/flexget/plugins/sites/nyaa.py b/flexget/plugins/sites/nyaa.py\n--- a/flexget/plugins/sites/nyaa.py\n+++ b/flexget/plugins/sites/nyaa.py\n@@ -47,7 +47,7 @@\n entries = set()\n for search_string in entry.get('search_strings', [entry['title']]):\n name = normalize_unicode(search_string)\n- url = 'http://www.nyaa.eu/?page=rss&cats=%s&filter=%s&term=%s' % (\n+ url = 'http://www.nyaa.se/?page=rss&cats=%s&filter=%s&term=%s' % (\n CATEGORIES[config['category']], FILTERS.index(config['filter']), quote(name.encode('utf-8')))\n \n log.debug('requesting: %s' % url)\n@@ -79,7 +79,7 @@\n return entries\n \n def url_rewritable(self, task, entry):\n- return entry['url'].startswith('http://www.nyaa.eu/?page=torrentinfo&tid=')\n+ return entry['url'].startswith('http://www.nyaa.se/?page=torrentinfo&tid=')\n \n def url_rewrite(self, task, entry):\n entry['url'] = entry['url'].replace('torrentinfo', 'download')\n", "issue": "nyaa changed TLD\nhi peeps. it seems they switched TLD from .eu to .se\r\n\r\ni changed my local flexget/plugins/sites/nyaa.py, removed the pyc & reloaded the daemon. its pulling stuff. but i aint got the skills to send a pull request, so i thought i'd do the next best thing and say something\r\n\r\nif you don't want to do anything, i guess thats fine too. the old is redirecting to the new\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\nfrom future.moves.urllib.parse import quote\n\nimport logging\n\nimport feedparser\n\nfrom flexget import plugin\nfrom flexget.entry import Entry\nfrom flexget.event import event\nfrom flexget.utils.search import normalize_unicode\n\nlog = logging.getLogger('nyaa')\n\n# TODO: Other categories\nCATEGORIES = {'all': '0_0',\n 'anime': '1_0',\n 'anime eng': '1_37',\n 'anime non-eng': '1_38',\n 'anime raw': '1_11'}\nFILTERS = ['all', 'filter remakes', 'trusted only', 'a+ only']\n\n\nclass UrlRewriteNyaa(object):\n \"\"\"Nyaa urlrewriter and search plugin.\"\"\"\n\n schema = {\n 'oneOf': [\n {'type': 'string', 'enum': list(CATEGORIES)},\n {\n 'type': 'object',\n 'properties': {\n 'category': {'type': 'string', 'enum': list(CATEGORIES)},\n 'filter': {'type': 'string', 'enum': list(FILTERS)}\n },\n 'additionalProperties': False\n }\n ]\n }\n\n def search(self, task, entry, config):\n if not isinstance(config, dict):\n config = {'category': config}\n config.setdefault('category', 'anime eng')\n config.setdefault('filter', 'all')\n entries = set()\n for search_string in entry.get('search_strings', [entry['title']]):\n name = normalize_unicode(search_string)\n url = 'http://www.nyaa.eu/?page=rss&cats=%s&filter=%s&term=%s' % (\n CATEGORIES[config['category']], FILTERS.index(config['filter']), quote(name.encode('utf-8')))\n\n log.debug('requesting: %s' % url)\n rss = feedparser.parse(url)\n\n status = rss.get('status', False)\n if status != 200:\n log.debug('Search result not 200 (OK), received %s' % status)\n if status >= 400:\n continue\n\n ex = rss.get('bozo_exception', False)\n if ex:\n log.error('Got bozo_exception (bad feed) on %s' % url)\n continue\n\n for item in rss.entries:\n entry = Entry()\n entry['title'] = item.title\n entry['url'] = item.link\n # TODO: parse some shit\n # entry['torrent_seeds'] = int(item.seeds)\n # entry['torrent_leeches'] = int(item.leechs)\n # entry['search_sort'] = torrent_availability(entry['torrent_seeds'], entry['torrent_leeches'])\n # entry['content_size'] = 
int(item.size) / 1024 / 1024\n\n entries.add(entry)\n\n return entries\n\n def url_rewritable(self, task, entry):\n return entry['url'].startswith('http://www.nyaa.eu/?page=torrentinfo&tid=')\n\n def url_rewrite(self, task, entry):\n entry['url'] = entry['url'].replace('torrentinfo', 'download')\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(UrlRewriteNyaa, 'nyaa', groups=['search', 'urlrewriter'], api_ver=2)\n", "path": "flexget/plugins/sites/nyaa.py"}], "after_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\nfrom future.moves.urllib.parse import quote\n\nimport logging\n\nimport feedparser\n\nfrom flexget import plugin\nfrom flexget.entry import Entry\nfrom flexget.event import event\nfrom flexget.utils.search import normalize_unicode\n\nlog = logging.getLogger('nyaa')\n\n# TODO: Other categories\nCATEGORIES = {'all': '0_0',\n 'anime': '1_0',\n 'anime eng': '1_37',\n 'anime non-eng': '1_38',\n 'anime raw': '1_11'}\nFILTERS = ['all', 'filter remakes', 'trusted only', 'a+ only']\n\n\nclass UrlRewriteNyaa(object):\n \"\"\"Nyaa urlrewriter and search plugin.\"\"\"\n\n schema = {\n 'oneOf': [\n {'type': 'string', 'enum': list(CATEGORIES)},\n {\n 'type': 'object',\n 'properties': {\n 'category': {'type': 'string', 'enum': list(CATEGORIES)},\n 'filter': {'type': 'string', 'enum': list(FILTERS)}\n },\n 'additionalProperties': False\n }\n ]\n }\n\n def search(self, task, entry, config):\n if not isinstance(config, dict):\n config = {'category': config}\n config.setdefault('category', 'anime eng')\n config.setdefault('filter', 'all')\n entries = set()\n for search_string in entry.get('search_strings', [entry['title']]):\n name = normalize_unicode(search_string)\n url = 'http://www.nyaa.se/?page=rss&cats=%s&filter=%s&term=%s' % (\n CATEGORIES[config['category']], FILTERS.index(config['filter']), quote(name.encode('utf-8')))\n\n log.debug('requesting: %s' % url)\n rss = feedparser.parse(url)\n\n status = rss.get('status', False)\n if status != 200:\n log.debug('Search result not 200 (OK), received %s' % status)\n if status >= 400:\n continue\n\n ex = rss.get('bozo_exception', False)\n if ex:\n log.error('Got bozo_exception (bad feed) on %s' % url)\n continue\n\n for item in rss.entries:\n entry = Entry()\n entry['title'] = item.title\n entry['url'] = item.link\n # TODO: parse some shit\n # entry['torrent_seeds'] = int(item.seeds)\n # entry['torrent_leeches'] = int(item.leechs)\n # entry['search_sort'] = torrent_availability(entry['torrent_seeds'], entry['torrent_leeches'])\n # entry['content_size'] = int(item.size) / 1024 / 1024\n\n entries.add(entry)\n\n return entries\n\n def url_rewritable(self, task, entry):\n return entry['url'].startswith('http://www.nyaa.se/?page=torrentinfo&tid=')\n\n def url_rewrite(self, task, entry):\n entry['url'] = entry['url'].replace('torrentinfo', 'download')\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(UrlRewriteNyaa, 'nyaa', groups=['search', 'urlrewriter'], api_ver=2)\n", "path": "flexget/plugins/sites/nyaa.py"}]} | 1,294 | 292 |
gh_patches_debug_10022 | rasdani/github-patches | git_diff | bokeh__bokeh-6724 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Set initial date in date picker in models/file/widgets
This is needed to make image diff not fail when example is run on different days.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/models/file/widgets.py`
Content:
```
1 from __future__ import print_function
2
3 #from datetime import date
4
5 from bokeh.document import Document
6 from bokeh.embed import file_html
7 from bokeh.resources import INLINE
8 from bokeh.util.browser import view
9 from bokeh.models import ColumnDataSource
10 from bokeh.models.layouts import Column, Row, WidgetBox
11 from bokeh.models.widgets import (
12 Button, Toggle, Dropdown,
13 CheckboxGroup, RadioGroup,
14 CheckboxButtonGroup, RadioButtonGroup,
15 TextInput, AutocompleteInput,
16 Select, MultiSelect,
17 Slider, RangeSlider, #DateRangeSlider,
18 DatePicker,
19 Paragraph, Div, PreText,
20 Panel, Tabs,
21 DataTable, TableColumn,
22 StringFormatter, NumberFormatter,
23 StringEditor, IntEditor, NumberEditor, SelectEditor,
24 )
25 from bokeh.plotting import figure
26 from bokeh.sampledata.iris import flowers
27 from bokeh.sampledata.autompg2 import autompg2 as mpg
28
29 button = Button(label="Button (disabled) - still has click event", button_type="primary", disabled=True)
30 toggle = Toggle(label="Toggle button", button_type="success")
31
32 menu = [("Item 1", "item_1_value"), ("Item 2", "item_2_value"), ("Item 3", "item_3_value")]
33
34 dropdown = Dropdown(label="Dropdown button", button_type="warning", menu=menu)
35 #dropdown_split = Dropdown(label="Split button", button_type="danger", menu=menu, default_value="default"))
36
37 checkbox_group = CheckboxGroup(labels=["Option 1", "Option 2", "Option 3"], active=[0, 1])
38 radio_group = RadioGroup(labels=["Option 1", "Option 2", "Option 3"], active=0)
39
40 checkbox_button_group = CheckboxButtonGroup(labels=["Option 1", "Option 2", "Option 3"], active=[0, 1])
41 radio_button_group = RadioButtonGroup(labels=["Option 1", "Option 2", "Option 3"], active=0)
42
43 text_input = TextInput(placeholder="Enter value ...")
44
45 completions = ["aaa", "aab", "aac", "baa", "caa"]
46 autocomplete_input = AutocompleteInput(placeholder="Enter value ...", completions=completions)
47
48 select = Select(options=["Option 1", "Option 2", "Option 3"])
49
50 multi_select = MultiSelect(options=["Option %d" % (i+1) for i in range(16)], size=6)
51
52 slider = Slider(value=10, start=0, end=100, step=0.5)
53
54 range_slider = RangeSlider(value=[10, 90], start=0, end=100, step=0.5)
55
56 #date_range_slider = DateRangeSlider(value=(date(2016, 1, 1), date(2016, 12, 31)))
57
58 date_picker = DatePicker()
59
60 paragraph = Paragraph(text="some text")
61
62 div = Div(text="some <b>text</b>")
63
64 pre_text = PreText(text="some text")
65
66 def mk_tab(color):
67 plot = figure(plot_width=300, plot_height=300)
68 plot.scatter(flowers["petal_length"], flowers["petal_width"], color=color, fill_alpha=0.2, size=12)
69 return Panel(title="Tab 1: %s" % color.capitalize(), child=plot)
70
71 tabs = Tabs(tabs=[mk_tab("red"), mk_tab("green"), mk_tab("blue")])
72
73 source = ColumnDataSource(data=mpg)
74 columns = [
75 TableColumn(field="manufacturer",
76 title="Manufacturer",
77 editor=SelectEditor(options=sorted(mpg["manufacturer"].unique())),
78 formatter=StringFormatter(font_style="bold")),
79 TableColumn(field="model",
80 title="Model",
81 editor=StringEditor(completions=sorted(mpg["model"].unique()))),
82 TableColumn(field="displ",
83 title="Displacement",
84 editor=NumberEditor(step=0.1),
85 formatter=NumberFormatter(format="0.0")),
86 TableColumn(field="year",
87 title="Year",
88 editor=IntEditor()),
89 TableColumn(field="cyl",
90 title="Cylinders",
91 editor=IntEditor()),
92 TableColumn(field="trans",
93 title="Transmission",
94 editor=SelectEditor(options=sorted(mpg["trans"].unique()))),
95 TableColumn(field="drv",
96 title="Drive",
97 editor=SelectEditor(options=sorted(mpg["drv"].unique()))),
98 TableColumn(field="class",
99 title="Class",
100 editor=SelectEditor(options=sorted(mpg["class"].unique()))),
101 TableColumn(field="cty",
102 title="City MPG",
103 editor=IntEditor()),
104 TableColumn(field="hwy",
105 title="Highway MPG",
106 editor=IntEditor()),
107 ]
108 table = DataTable(source=source, columns=columns, editable=True, width=800)
109
110 widgets = Column(children=[
111 Row(children=[
112 WidgetBox(children=[
113 button, toggle, dropdown, #dropdown_split,
114 checkbox_group, radio_group,
115 checkbox_button_group, radio_button_group,
116 ]),
117 WidgetBox(children=[
118 text_input, autocomplete_input,
119 select, multi_select,
120 slider, range_slider, #date_range_slider,
121 date_picker,
122 paragraph, div, pre_text,
123 ]),
124 WidgetBox(children=[
125 tabs,
126 ], width=400),
127 ]),
128 WidgetBox(children=[table]),
129 ])
130
131
132 doc = Document()
133 doc.add_root(widgets)
134
135 if __name__ == "__main__":
136 doc.validate()
137 filename = "widgets.html"
138 with open(filename, "w") as f:
139 f.write(file_html(doc, INLINE, "Widgets"))
140 print("Wrote %s" % filename)
141 view(filename)
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/models/file/widgets.py b/examples/models/file/widgets.py
--- a/examples/models/file/widgets.py
+++ b/examples/models/file/widgets.py
@@ -1,6 +1,6 @@
from __future__ import print_function
-#from datetime import date
+from datetime import date
from bokeh.document import Document
from bokeh.embed import file_html
@@ -55,7 +55,7 @@
#date_range_slider = DateRangeSlider(value=(date(2016, 1, 1), date(2016, 12, 31)))
-date_picker = DatePicker()
+date_picker = DatePicker(value=date(2017, 8, 1))
paragraph = Paragraph(text="some text")
| {"golden_diff": "diff --git a/examples/models/file/widgets.py b/examples/models/file/widgets.py\n--- a/examples/models/file/widgets.py\n+++ b/examples/models/file/widgets.py\n@@ -1,6 +1,6 @@\n from __future__ import print_function\n \n-#from datetime import date\n+from datetime import date\n \n from bokeh.document import Document\n from bokeh.embed import file_html\n@@ -55,7 +55,7 @@\n \n #date_range_slider = DateRangeSlider(value=(date(2016, 1, 1), date(2016, 12, 31)))\n \n-date_picker = DatePicker()\n+date_picker = DatePicker(value=date(2017, 8, 1))\n \n paragraph = Paragraph(text=\"some text\")\n", "issue": "Set initial date in date picker in models/file/widgets\nThis is needed to make image diff not fail when example is run on different days.\r\n\n", "before_files": [{"content": "from __future__ import print_function\n\n#from datetime import date\n\nfrom bokeh.document import Document\nfrom bokeh.embed import file_html\nfrom bokeh.resources import INLINE\nfrom bokeh.util.browser import view\nfrom bokeh.models import ColumnDataSource\nfrom bokeh.models.layouts import Column, Row, WidgetBox\nfrom bokeh.models.widgets import (\n Button, Toggle, Dropdown,\n CheckboxGroup, RadioGroup,\n CheckboxButtonGroup, RadioButtonGroup,\n TextInput, AutocompleteInput,\n Select, MultiSelect,\n Slider, RangeSlider, #DateRangeSlider,\n DatePicker,\n Paragraph, Div, PreText,\n Panel, Tabs,\n DataTable, TableColumn,\n StringFormatter, NumberFormatter,\n StringEditor, IntEditor, NumberEditor, SelectEditor,\n)\nfrom bokeh.plotting import figure\nfrom bokeh.sampledata.iris import flowers\nfrom bokeh.sampledata.autompg2 import autompg2 as mpg\n\nbutton = Button(label=\"Button (disabled) - still has click event\", button_type=\"primary\", disabled=True)\ntoggle = Toggle(label=\"Toggle button\", button_type=\"success\")\n\nmenu = [(\"Item 1\", \"item_1_value\"), (\"Item 2\", \"item_2_value\"), (\"Item 3\", \"item_3_value\")]\n\ndropdown = Dropdown(label=\"Dropdown button\", button_type=\"warning\", menu=menu)\n#dropdown_split = Dropdown(label=\"Split button\", button_type=\"danger\", menu=menu, default_value=\"default\"))\n\ncheckbox_group = CheckboxGroup(labels=[\"Option 1\", \"Option 2\", \"Option 3\"], active=[0, 1])\nradio_group = RadioGroup(labels=[\"Option 1\", \"Option 2\", \"Option 3\"], active=0)\n\ncheckbox_button_group = CheckboxButtonGroup(labels=[\"Option 1\", \"Option 2\", \"Option 3\"], active=[0, 1])\nradio_button_group = RadioButtonGroup(labels=[\"Option 1\", \"Option 2\", \"Option 3\"], active=0)\n\ntext_input = TextInput(placeholder=\"Enter value ...\")\n\ncompletions = [\"aaa\", \"aab\", \"aac\", \"baa\", \"caa\"]\nautocomplete_input = AutocompleteInput(placeholder=\"Enter value ...\", completions=completions)\n\nselect = Select(options=[\"Option 1\", \"Option 2\", \"Option 3\"])\n\nmulti_select = MultiSelect(options=[\"Option %d\" % (i+1) for i in range(16)], size=6)\n\nslider = Slider(value=10, start=0, end=100, step=0.5)\n\nrange_slider = RangeSlider(value=[10, 90], start=0, end=100, step=0.5)\n\n#date_range_slider = DateRangeSlider(value=(date(2016, 1, 1), date(2016, 12, 31)))\n\ndate_picker = DatePicker()\n\nparagraph = Paragraph(text=\"some text\")\n\ndiv = Div(text=\"some <b>text</b>\")\n\npre_text = PreText(text=\"some text\")\n\ndef mk_tab(color):\n plot = figure(plot_width=300, plot_height=300)\n plot.scatter(flowers[\"petal_length\"], flowers[\"petal_width\"], color=color, fill_alpha=0.2, size=12)\n return Panel(title=\"Tab 1: %s\" % color.capitalize(), child=plot)\n\ntabs = 
Tabs(tabs=[mk_tab(\"red\"), mk_tab(\"green\"), mk_tab(\"blue\")])\n\nsource = ColumnDataSource(data=mpg)\ncolumns = [\n TableColumn(field=\"manufacturer\",\n title=\"Manufacturer\",\n editor=SelectEditor(options=sorted(mpg[\"manufacturer\"].unique())),\n formatter=StringFormatter(font_style=\"bold\")),\n TableColumn(field=\"model\",\n title=\"Model\",\n editor=StringEditor(completions=sorted(mpg[\"model\"].unique()))),\n TableColumn(field=\"displ\",\n title=\"Displacement\",\n editor=NumberEditor(step=0.1),\n formatter=NumberFormatter(format=\"0.0\")),\n TableColumn(field=\"year\",\n title=\"Year\",\n editor=IntEditor()),\n TableColumn(field=\"cyl\",\n title=\"Cylinders\",\n editor=IntEditor()),\n TableColumn(field=\"trans\",\n title=\"Transmission\",\n editor=SelectEditor(options=sorted(mpg[\"trans\"].unique()))),\n TableColumn(field=\"drv\",\n title=\"Drive\",\n editor=SelectEditor(options=sorted(mpg[\"drv\"].unique()))),\n TableColumn(field=\"class\",\n title=\"Class\",\n editor=SelectEditor(options=sorted(mpg[\"class\"].unique()))),\n TableColumn(field=\"cty\",\n title=\"City MPG\",\n editor=IntEditor()),\n TableColumn(field=\"hwy\",\n title=\"Highway MPG\",\n editor=IntEditor()),\n]\ntable = DataTable(source=source, columns=columns, editable=True, width=800)\n\nwidgets = Column(children=[\n Row(children=[\n WidgetBox(children=[\n button, toggle, dropdown, #dropdown_split,\n checkbox_group, radio_group,\n checkbox_button_group, radio_button_group,\n ]),\n WidgetBox(children=[\n text_input, autocomplete_input,\n select, multi_select,\n slider, range_slider, #date_range_slider,\n date_picker,\n paragraph, div, pre_text,\n ]),\n WidgetBox(children=[\n tabs,\n ], width=400),\n ]),\n WidgetBox(children=[table]),\n])\n\n\ndoc = Document()\ndoc.add_root(widgets)\n\nif __name__ == \"__main__\":\n doc.validate()\n filename = \"widgets.html\"\n with open(filename, \"w\") as f:\n f.write(file_html(doc, INLINE, \"Widgets\"))\n print(\"Wrote %s\" % filename)\n view(filename)\n", "path": "examples/models/file/widgets.py"}], "after_files": [{"content": "from __future__ import print_function\n\nfrom datetime import date\n\nfrom bokeh.document import Document\nfrom bokeh.embed import file_html\nfrom bokeh.resources import INLINE\nfrom bokeh.util.browser import view\nfrom bokeh.models import ColumnDataSource\nfrom bokeh.models.layouts import Column, Row, WidgetBox\nfrom bokeh.models.widgets import (\n Button, Toggle, Dropdown,\n CheckboxGroup, RadioGroup,\n CheckboxButtonGroup, RadioButtonGroup,\n TextInput, AutocompleteInput,\n Select, MultiSelect,\n Slider, RangeSlider, #DateRangeSlider,\n DatePicker,\n Paragraph, Div, PreText,\n Panel, Tabs,\n DataTable, TableColumn,\n StringFormatter, NumberFormatter,\n StringEditor, IntEditor, NumberEditor, SelectEditor,\n)\nfrom bokeh.plotting import figure\nfrom bokeh.sampledata.iris import flowers\nfrom bokeh.sampledata.autompg2 import autompg2 as mpg\n\nbutton = Button(label=\"Button (disabled) - still has click event\", button_type=\"primary\", disabled=True)\ntoggle = Toggle(label=\"Toggle button\", button_type=\"success\")\n\nmenu = [(\"Item 1\", \"item_1_value\"), (\"Item 2\", \"item_2_value\"), (\"Item 3\", \"item_3_value\")]\n\ndropdown = Dropdown(label=\"Dropdown button\", button_type=\"warning\", menu=menu)\n#dropdown_split = Dropdown(label=\"Split button\", button_type=\"danger\", menu=menu, default_value=\"default\"))\n\ncheckbox_group = CheckboxGroup(labels=[\"Option 1\", \"Option 2\", \"Option 3\"], active=[0, 1])\nradio_group = 
RadioGroup(labels=[\"Option 1\", \"Option 2\", \"Option 3\"], active=0)\n\ncheckbox_button_group = CheckboxButtonGroup(labels=[\"Option 1\", \"Option 2\", \"Option 3\"], active=[0, 1])\nradio_button_group = RadioButtonGroup(labels=[\"Option 1\", \"Option 2\", \"Option 3\"], active=0)\n\ntext_input = TextInput(placeholder=\"Enter value ...\")\n\ncompletions = [\"aaa\", \"aab\", \"aac\", \"baa\", \"caa\"]\nautocomplete_input = AutocompleteInput(placeholder=\"Enter value ...\", completions=completions)\n\nselect = Select(options=[\"Option 1\", \"Option 2\", \"Option 3\"])\n\nmulti_select = MultiSelect(options=[\"Option %d\" % (i+1) for i in range(16)], size=6)\n\nslider = Slider(value=10, start=0, end=100, step=0.5)\n\nrange_slider = RangeSlider(value=[10, 90], start=0, end=100, step=0.5)\n\n#date_range_slider = DateRangeSlider(value=(date(2016, 1, 1), date(2016, 12, 31)))\n\ndate_picker = DatePicker(value=date(2017, 8, 1))\n\nparagraph = Paragraph(text=\"some text\")\n\ndiv = Div(text=\"some <b>text</b>\")\n\npre_text = PreText(text=\"some text\")\n\ndef mk_tab(color):\n plot = figure(plot_width=300, plot_height=300)\n plot.scatter(flowers[\"petal_length\"], flowers[\"petal_width\"], color=color, fill_alpha=0.2, size=12)\n return Panel(title=\"Tab 1: %s\" % color.capitalize(), child=plot)\n\ntabs = Tabs(tabs=[mk_tab(\"red\"), mk_tab(\"green\"), mk_tab(\"blue\")])\n\nsource = ColumnDataSource(data=mpg)\ncolumns = [\n TableColumn(field=\"manufacturer\",\n title=\"Manufacturer\",\n editor=SelectEditor(options=sorted(mpg[\"manufacturer\"].unique())),\n formatter=StringFormatter(font_style=\"bold\")),\n TableColumn(field=\"model\",\n title=\"Model\",\n editor=StringEditor(completions=sorted(mpg[\"model\"].unique()))),\n TableColumn(field=\"displ\",\n title=\"Displacement\",\n editor=NumberEditor(step=0.1),\n formatter=NumberFormatter(format=\"0.0\")),\n TableColumn(field=\"year\",\n title=\"Year\",\n editor=IntEditor()),\n TableColumn(field=\"cyl\",\n title=\"Cylinders\",\n editor=IntEditor()),\n TableColumn(field=\"trans\",\n title=\"Transmission\",\n editor=SelectEditor(options=sorted(mpg[\"trans\"].unique()))),\n TableColumn(field=\"drv\",\n title=\"Drive\",\n editor=SelectEditor(options=sorted(mpg[\"drv\"].unique()))),\n TableColumn(field=\"class\",\n title=\"Class\",\n editor=SelectEditor(options=sorted(mpg[\"class\"].unique()))),\n TableColumn(field=\"cty\",\n title=\"City MPG\",\n editor=IntEditor()),\n TableColumn(field=\"hwy\",\n title=\"Highway MPG\",\n editor=IntEditor()),\n]\ntable = DataTable(source=source, columns=columns, editable=True, width=800)\n\nwidgets = Column(children=[\n Row(children=[\n WidgetBox(children=[\n button, toggle, dropdown, #dropdown_split,\n checkbox_group, radio_group,\n checkbox_button_group, radio_button_group,\n ]),\n WidgetBox(children=[\n text_input, autocomplete_input,\n select, multi_select,\n slider, range_slider, #date_range_slider,\n date_picker,\n paragraph, div, pre_text,\n ]),\n WidgetBox(children=[\n tabs,\n ], width=400),\n ]),\n WidgetBox(children=[table]),\n])\n\n\ndoc = Document()\ndoc.add_root(widgets)\n\nif __name__ == \"__main__\":\n doc.validate()\n filename = \"widgets.html\"\n with open(filename, \"w\") as f:\n f.write(file_html(doc, INLINE, \"Widgets\"))\n print(\"Wrote %s\" % filename)\n view(filename)\n", "path": "examples/models/file/widgets.py"}]} | 1,837 | 163 |
gh_patches_debug_5302 | rasdani/github-patches | git_diff | searx__searx-2991 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Only a lower case "ip" displays the IP address
When the feature is enabled to show a user's IP address when "ip" is entered into the search bar, it only does so when it is all lowercase. Querying "IP" does not return an IP. This seems like a bug, apologies if this was intended.
Thanks
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/plugins/self_info.py`
Content:
```
1 '''
2 searx is free software: you can redistribute it and/or modify
3 it under the terms of the GNU Affero General Public License as published by
4 the Free Software Foundation, either version 3 of the License, or
5 (at your option) any later version.
6
7 searx is distributed in the hope that it will be useful,
8 but WITHOUT ANY WARRANTY; without even the implied warranty of
9 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
10 GNU Affero General Public License for more details.
11
12 You should have received a copy of the GNU Affero General Public License
13 along with searx. If not, see < http://www.gnu.org/licenses/ >.
14
15 (C) 2015 by Adam Tauber, <[email protected]>
16 '''
17 from flask_babel import gettext
18 import re
19 name = gettext('Self Informations')
20 description = gettext('Displays your IP if the query is "ip" and your user agent if the query contains "user agent".')
21 default_on = True
22
23
24 # Self User Agent regex
25 p = re.compile('.*user[ -]agent.*', re.IGNORECASE)
26
27
28 # attach callback to the post search hook
29 # request: flask request object
30 # ctx: the whole local context of the pre search hook
31 def post_search(request, search):
32 if search.search_query.pageno > 1:
33 return True
34 if search.search_query.query == 'ip':
35 x_forwarded_for = request.headers.getlist("X-Forwarded-For")
36 if x_forwarded_for:
37 ip = x_forwarded_for[0]
38 else:
39 ip = request.remote_addr
40 search.result_container.answers['ip'] = {'answer': ip}
41 elif p.match(search.search_query.query):
42 ua = request.user_agent
43 search.result_container.answers['user-agent'] = {'answer': ua}
44 return True
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/searx/plugins/self_info.py b/searx/plugins/self_info.py
--- a/searx/plugins/self_info.py
+++ b/searx/plugins/self_info.py
@@ -31,7 +31,7 @@
def post_search(request, search):
if search.search_query.pageno > 1:
return True
- if search.search_query.query == 'ip':
+ if search.search_query.query.lower() == 'ip':
x_forwarded_for = request.headers.getlist("X-Forwarded-For")
if x_forwarded_for:
ip = x_forwarded_for[0]
| {"golden_diff": "diff --git a/searx/plugins/self_info.py b/searx/plugins/self_info.py\n--- a/searx/plugins/self_info.py\n+++ b/searx/plugins/self_info.py\n@@ -31,7 +31,7 @@\n def post_search(request, search):\n if search.search_query.pageno > 1:\n return True\n- if search.search_query.query == 'ip':\n+ if search.search_query.query.lower() == 'ip':\n x_forwarded_for = request.headers.getlist(\"X-Forwarded-For\")\n if x_forwarded_for:\n ip = x_forwarded_for[0]\n", "issue": "Only a lower case \"ip\" displays the IP address\nWhen the feature is enabled to show a user's IP address when \"ip\" is entered into the search bar, it only does so when it is all lowercase. Querying \"IP\" does not return an IP. This seems like a bug, apologies if this was intended.\r\n\r\nThanks\n", "before_files": [{"content": "'''\nsearx is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nsearx is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with searx. If not, see < http://www.gnu.org/licenses/ >.\n\n(C) 2015 by Adam Tauber, <[email protected]>\n'''\nfrom flask_babel import gettext\nimport re\nname = gettext('Self Informations')\ndescription = gettext('Displays your IP if the query is \"ip\" and your user agent if the query contains \"user agent\".')\ndefault_on = True\n\n\n# Self User Agent regex\np = re.compile('.*user[ -]agent.*', re.IGNORECASE)\n\n\n# attach callback to the post search hook\n# request: flask request object\n# ctx: the whole local context of the pre search hook\ndef post_search(request, search):\n if search.search_query.pageno > 1:\n return True\n if search.search_query.query == 'ip':\n x_forwarded_for = request.headers.getlist(\"X-Forwarded-For\")\n if x_forwarded_for:\n ip = x_forwarded_for[0]\n else:\n ip = request.remote_addr\n search.result_container.answers['ip'] = {'answer': ip}\n elif p.match(search.search_query.query):\n ua = request.user_agent\n search.result_container.answers['user-agent'] = {'answer': ua}\n return True\n", "path": "searx/plugins/self_info.py"}], "after_files": [{"content": "'''\nsearx is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nsearx is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with searx. 
If not, see < http://www.gnu.org/licenses/ >.\n\n(C) 2015 by Adam Tauber, <[email protected]>\n'''\nfrom flask_babel import gettext\nimport re\nname = gettext('Self Informations')\ndescription = gettext('Displays your IP if the query is \"ip\" and your user agent if the query contains \"user agent\".')\ndefault_on = True\n\n\n# Self User Agent regex\np = re.compile('.*user[ -]agent.*', re.IGNORECASE)\n\n\n# attach callback to the post search hook\n# request: flask request object\n# ctx: the whole local context of the pre search hook\ndef post_search(request, search):\n if search.search_query.pageno > 1:\n return True\n if search.search_query.query.lower() == 'ip':\n x_forwarded_for = request.headers.getlist(\"X-Forwarded-For\")\n if x_forwarded_for:\n ip = x_forwarded_for[0]\n else:\n ip = request.remote_addr\n search.result_container.answers['ip'] = {'answer': ip}\n elif p.match(search.search_query.query):\n ua = request.user_agent\n search.result_container.answers['user-agent'] = {'answer': ua}\n return True\n", "path": "searx/plugins/self_info.py"}]} | 808 | 134 |
gh_patches_debug_24158 | rasdani/github-patches | git_diff | pystiche__pystiche-9 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
propagate_guide() of Encoder raises a TypeError
When running the replication of [Gatys et al. 2017](https://github.com/pmeier/pystiche/blob/3260b68ea8dd88de433777ad3750d7abe3894743/replication/gatys_et_al_2017.py#L254), the following error is raised:
```
TypeError: Unions cannot be used with isinstance().
```
This points towards the [Encoder](https://github.com/pmeier/pystiche/blob/3260b68ea8dd88de433777ad3750d7abe3894743/pystiche/encoding/encoder.py#L12), specifically these `if` statements in the `propagate_guide()` method:
https://github.com/pmeier/pystiche/blob/3260b68ea8dd88de433777ad3750d7abe3894743/pystiche/encoding/encoder.py#L50-L53
`PoolModule` and `ConvModule` are defined in `pystiche.typing`:
https://github.com/pmeier/pystiche/blob/3260b68ea8dd88de433777ad3750d7abe3894743/pystiche/typing.py#L18-L23
--- END ISSUE ---
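
Before the files, here is a self-contained reproduction of the reported `TypeError` together with the tuple-based check that does work at runtime (the same shape of helper the patch further down introduces); plain classes stand in for the `torch.nn` modules so the snippet runs without PyTorch.
```python
from typing import Union


class Conv1d: pass
class Conv2d: pass
class Conv3d: pass


ConvModule = Union[Conv1d, Conv2d, Conv3d]   # a typing alias, not a class

x = Conv2d()

# On the interpreter versions this issue targets, the next line raises
# "TypeError: Unions cannot be used with isinstance()"; newer Pythons may
# accept it, which is why the outcome is printed either way.
try:
    print(isinstance(x, ConvModule))
except TypeError as exc:
    print("isinstance() rejected the Union:", exc)


def is_conv_module(obj):
    # Version-independent alternative: isinstance() always accepts a tuple.
    return isinstance(obj, (Conv1d, Conv2d, Conv3d))


print(is_conv_module(x))                      # True
```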
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pystiche/typing.py`
Content:
```
1 from typing import Union, Sequence
2 import torch
3 from torch import nn
4
5 __all__ = [
6 "Numeric",
7 "TensorMeta",
8 "ConvModule",
9 "ConvModuleMeta",
10 "PoolModule",
11 "PoolModuleMeta",
12 ]
13
14 Numeric = Union[int, float]
15
16 TensorMeta = Union[torch.device, torch.dtype]
17
18 ConvModule = Union[nn.Conv1d, nn.Conv2d, nn.Conv2d]
19 ConvModuleMeta = Union[int, Sequence[int]]
20
21 PoolModule = Union[
22 nn.AvgPool1d, nn.AvgPool2d, nn.AvgPool3d, nn.MaxPool1d, nn.MaxPool2d, nn.MaxPool3d
23 ]
24 PoolModuleMeta = Union[int, Sequence[int]]
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pystiche/typing.py b/pystiche/typing.py
--- a/pystiche/typing.py
+++ b/pystiche/typing.py
@@ -1,4 +1,4 @@
-from typing import Union, Sequence
+from typing import Union, Any, Sequence
import torch
from torch import nn
@@ -6,8 +6,10 @@
"Numeric",
"TensorMeta",
"ConvModule",
+ "is_conv_module",
"ConvModuleMeta",
"PoolModule",
+ "is_pool_module",
"PoolModuleMeta",
]
@@ -15,10 +17,32 @@
TensorMeta = Union[torch.device, torch.dtype]
-ConvModule = Union[nn.Conv1d, nn.Conv2d, nn.Conv2d]
+ConvModule = Union[nn.Conv1d, nn.Conv2d, nn.Conv3d]
+
+
+def is_conv_module(x: Any) -> bool:
+ return isinstance(x, (nn.Conv1d, nn.Conv2d, nn.Conv3d))
+
+
ConvModuleMeta = Union[int, Sequence[int]]
PoolModule = Union[
nn.AvgPool1d, nn.AvgPool2d, nn.AvgPool3d, nn.MaxPool1d, nn.MaxPool2d, nn.MaxPool3d
]
+
+
+def is_pool_module(x: Any) -> bool:
+ return isinstance(
+ x,
+ (
+ nn.AvgPool1d,
+ nn.AvgPool2d,
+ nn.AvgPool3d,
+ nn.MaxPool1d,
+ nn.MaxPool2d,
+ nn.MaxPool3d,
+ ),
+ )
+
+
PoolModuleMeta = Union[int, Sequence[int]]
| {"golden_diff": "diff --git a/pystiche/typing.py b/pystiche/typing.py\n--- a/pystiche/typing.py\n+++ b/pystiche/typing.py\n@@ -1,4 +1,4 @@\n-from typing import Union, Sequence\n+from typing import Union, Any, Sequence\n import torch\n from torch import nn\n \n@@ -6,8 +6,10 @@\n \"Numeric\",\n \"TensorMeta\",\n \"ConvModule\",\n+ \"is_conv_module\",\n \"ConvModuleMeta\",\n \"PoolModule\",\n+ \"is_pool_module\",\n \"PoolModuleMeta\",\n ]\n \n@@ -15,10 +17,32 @@\n \n TensorMeta = Union[torch.device, torch.dtype]\n \n-ConvModule = Union[nn.Conv1d, nn.Conv2d, nn.Conv2d]\n+ConvModule = Union[nn.Conv1d, nn.Conv2d, nn.Conv3d]\n+\n+\n+def is_conv_module(x: Any) -> bool:\n+ return isinstance(x, (nn.Conv1d, nn.Conv2d, nn.Conv3d))\n+\n+\n ConvModuleMeta = Union[int, Sequence[int]]\n \n PoolModule = Union[\n nn.AvgPool1d, nn.AvgPool2d, nn.AvgPool3d, nn.MaxPool1d, nn.MaxPool2d, nn.MaxPool3d\n ]\n+\n+\n+def is_pool_module(x: Any) -> bool:\n+ return isinstance(\n+ x,\n+ (\n+ nn.AvgPool1d,\n+ nn.AvgPool2d,\n+ nn.AvgPool3d,\n+ nn.MaxPool1d,\n+ nn.MaxPool2d,\n+ nn.MaxPool3d,\n+ ),\n+ )\n+\n+\n PoolModuleMeta = Union[int, Sequence[int]]\n", "issue": "propagate_guide() of Encoder raises a TypeError\nWhen running the replication of [Gatys et al. 2017](https://github.com/pmeier/pystiche/blob/3260b68ea8dd88de433777ad3750d7abe3894743/replication/gatys_et_al_2017.py#L254), the following error is raised:\r\n\r\n```\r\nTypeError: Unions cannot be used with isinstance().\r\n```\r\n\r\nThis points towards the [Encoder](https://github.com/pmeier/pystiche/blob/3260b68ea8dd88de433777ad3750d7abe3894743/pystiche/encoding/encoder.py#L12), specifically these `if` statements in the `propagate_guide()` method:\r\n\r\nhttps://github.com/pmeier/pystiche/blob/3260b68ea8dd88de433777ad3750d7abe3894743/pystiche/encoding/encoder.py#L50-L53\r\n\r\n`PoolModule` and `ConvModule` are defined in `pystiche.typing`:\r\n\r\nhttps://github.com/pmeier/pystiche/blob/3260b68ea8dd88de433777ad3750d7abe3894743/pystiche/typing.py#L18-L23\r\n\n", "before_files": [{"content": "from typing import Union, Sequence\nimport torch\nfrom torch import nn\n\n__all__ = [\n \"Numeric\",\n \"TensorMeta\",\n \"ConvModule\",\n \"ConvModuleMeta\",\n \"PoolModule\",\n \"PoolModuleMeta\",\n]\n\nNumeric = Union[int, float]\n\nTensorMeta = Union[torch.device, torch.dtype]\n\nConvModule = Union[nn.Conv1d, nn.Conv2d, nn.Conv2d]\nConvModuleMeta = Union[int, Sequence[int]]\n\nPoolModule = Union[\n nn.AvgPool1d, nn.AvgPool2d, nn.AvgPool3d, nn.MaxPool1d, nn.MaxPool2d, nn.MaxPool3d\n]\nPoolModuleMeta = Union[int, Sequence[int]]\n", "path": "pystiche/typing.py"}], "after_files": [{"content": "from typing import Union, Any, Sequence\nimport torch\nfrom torch import nn\n\n__all__ = [\n \"Numeric\",\n \"TensorMeta\",\n \"ConvModule\",\n \"is_conv_module\",\n \"ConvModuleMeta\",\n \"PoolModule\",\n \"is_pool_module\",\n \"PoolModuleMeta\",\n]\n\nNumeric = Union[int, float]\n\nTensorMeta = Union[torch.device, torch.dtype]\n\nConvModule = Union[nn.Conv1d, nn.Conv2d, nn.Conv3d]\n\n\ndef is_conv_module(x: Any) -> bool:\n return isinstance(x, (nn.Conv1d, nn.Conv2d, nn.Conv3d))\n\n\nConvModuleMeta = Union[int, Sequence[int]]\n\nPoolModule = Union[\n nn.AvgPool1d, nn.AvgPool2d, nn.AvgPool3d, nn.MaxPool1d, nn.MaxPool2d, nn.MaxPool3d\n]\n\n\ndef is_pool_module(x: Any) -> bool:\n return isinstance(\n x,\n (\n nn.AvgPool1d,\n nn.AvgPool2d,\n nn.AvgPool3d,\n nn.MaxPool1d,\n nn.MaxPool2d,\n nn.MaxPool3d,\n ),\n )\n\n\nPoolModuleMeta = Union[int, Sequence[int]]\n", "path": 
"pystiche/typing.py"}]} | 801 | 388 |
gh_patches_debug_12132 | rasdani/github-patches | git_diff | angr__angr-1862 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Strange successors of the return block of a function
I'm analysing a MIPS binary and ran into the following problem.
The problem exists in the function `do_ssc`.
In the following block, which has a return statement:

When I run `node.successors` I got
```
In [103]: end.successors
Out[103]:
[<CFGNode 0x40a7a8[28]>,
<CFGNode do_ssc+0x12c [28]>,
<CFGNode do_ssc+0x4c4 [28]>,
<CFGNode do_ssc+0x45c [24]>,
<CFGNode do_ssc+0x2a8 [24]>]
```
Their addresses are `0x40a7a8`, `0x40a33c`, `0x40a6d4` and `0x40a4b8` respectively.
I know the CFG of angr is interprocedural; however, only `0x40a7a8` is a caller of `do_ssc`.
May I know why the other three exist?
--- END ISSUE ---
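
The patch recorded at the end of this entry changes how the resolver picks a state out of `simgr.cut`: instead of blindly taking the first one, it keeps only a state whose previous address is the indirect jump being resolved. The sketch below models that selection with plain stand-in objects, so it runs without angr; `history.addr` mirrors the attribute the real code inspects.
```python
class FakeHistory:
    def __init__(self, addr):
        self.addr = addr          # address executed just before the cut


class FakeState:
    def __init__(self, addr, prev_addr):
        self.addr = addr          # where the cut state would execute next
        self.history = FakeHistory(prev_addr)


jump_addr = 0x40A700              # hypothetical address of the indirect jump
cut = [
    FakeState(0x40A33C, 0x40A200),    # cut elsewhere in the slice
    FakeState(0x40A7A8, jump_addr),   # cut right after the jump itself
]

buggy_target = cut[0].addr        # old behaviour: the first cut state wins

try:                              # fixed behaviour: require history.addr == jump_addr
    fixed_target = next(s.addr for s in cut if s.history.addr == jump_addr)
except StopIteration:
    fixed_target = None           # treat the jump as unresolved

print(hex(buggy_target), hex(fixed_target) if fixed_target is not None else None)
```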
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `angr/analyses/cfg/indirect_jump_resolvers/mips_elf_fast.py`
Content:
```
1
2 import logging
3
4 import pyvex
5 import archinfo
6
7
8 from .... import options, BP_BEFORE
9 from ....blade import Blade
10 from ....annocfg import AnnotatedCFG
11 from ....exploration_techniques import Slicecutor
12
13 from .resolver import IndirectJumpResolver
14
15
16 l = logging.getLogger(name=__name__)
17
18
19 class MipsElfFastResolver(IndirectJumpResolver):
20 def __init__(self, project):
21 super(MipsElfFastResolver, self).__init__(project, timeless=True)
22
23 def filter(self, cfg, addr, func_addr, block, jumpkind):
24 if not isinstance(self.project.arch, (archinfo.ArchMIPS32, archinfo.ArchMIPS64, )):
25 return False
26 return True
27
28 def resolve(self, cfg, addr, func_addr, block, jumpkind):
29 """
30 Resolves the indirect jump in MIPS ELF binaries where all external function calls are indexed using gp.
31
32 :param cfg: A CFG instance.
33 :param int addr: IRSB address.
34 :param int func_addr: The function address.
35 :param pyvex.IRSB block: The IRSB.
36 :param str jumpkind: The jumpkind.
37 :return: If it was resolved and targets alongside it
38 :rtype: tuple
39 """
40
41 project = self.project
42
43 b = Blade(cfg.graph, addr, -1, cfg=cfg, project=project, ignore_sp=True, ignore_bp=True,
44 ignored_regs=('gp',)
45 )
46
47 sources = [n for n in b.slice.nodes() if b.slice.in_degree(n) == 0]
48 if not sources:
49 return False, []
50
51 source = sources[0]
52 source_addr = source[0]
53 annotated_cfg = AnnotatedCFG(project, None, detect_loops=False)
54 annotated_cfg.from_digraph(b.slice)
55
56 state = project.factory.blank_state(addr=source_addr, mode="fastpath",
57 remove_options=options.refs
58 )
59 func = cfg.kb.functions.function(addr=func_addr)
60
61 gp_offset = project.arch.registers['gp'][0]
62 if 'gp' not in func.info:
63 sec = project.loader.find_section_containing(func.addr)
64 if sec is None or sec.name != '.plt':
65 # this might a special case: gp is only used once in this function, and it can be initialized right before
66 # its use site.
67 # TODO: handle this case
68 l.debug('Failed to determine value of register gp for function %#x.', func.addr)
69 return False, [ ]
70 else:
71 state.regs.gp = func.info['gp']
72
73 def overwrite_tmp_value(state):
74 state.inspect.tmp_write_expr = state.solver.BVV(func.info['gp'], state.arch.bits)
75
76 # Special handling for cases where `gp` is stored on the stack
77 got_gp_stack_store = False
78 for block_addr_in_slice in set(slice_node[0] for slice_node in b.slice.nodes()):
79 for stmt in project.factory.block(block_addr_in_slice).vex.statements:
80 if isinstance(stmt, pyvex.IRStmt.Put) and stmt.offset == gp_offset and \
81 isinstance(stmt.data, pyvex.IRExpr.RdTmp):
82 tmp_offset = stmt.data.tmp # pylint:disable=cell-var-from-loop
83 # we must make sure value of that temporary variable equals to the correct gp value
84 state.inspect.make_breakpoint('tmp_write', when=BP_BEFORE,
85 condition=lambda s, bbl_addr_=block_addr_in_slice,
86 tmp_offset_=tmp_offset:
87 s.scratch.bbl_addr == bbl_addr_ and s.inspect.tmp_write_num == tmp_offset_,
88 action=overwrite_tmp_value
89 )
90 got_gp_stack_store = True
91 break
92 if got_gp_stack_store:
93 break
94
95 simgr = self.project.factory.simulation_manager(state)
96 simgr.use_technique(Slicecutor(annotated_cfg))
97 simgr.run()
98
99 if simgr.cut:
100 target = simgr.cut[0].addr
101
102 if self._is_target_valid(cfg, target):
103 l.debug("Indirect jump at %#x is resolved to target %#x.", addr, target)
104 return True, [ target ]
105
106 l.debug("Indirect jump at %#x is resolved to target %#x, which seems to be invalid.", addr, target)
107 return False, [ ]
108
109 l.debug("Indirect jump at %#x cannot be resolved by %s.", addr, repr(self))
110 return False, [ ]
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/angr/analyses/cfg/indirect_jump_resolvers/mips_elf_fast.py b/angr/analyses/cfg/indirect_jump_resolvers/mips_elf_fast.py
--- a/angr/analyses/cfg/indirect_jump_resolvers/mips_elf_fast.py
+++ b/angr/analyses/cfg/indirect_jump_resolvers/mips_elf_fast.py
@@ -97,7 +97,13 @@
simgr.run()
if simgr.cut:
- target = simgr.cut[0].addr
+ # pick the successor that is cut right after executing `addr`
+ try:
+ target_state = next(iter(cut for cut in simgr.cut if cut.history.addr == addr))
+ except StopIteration:
+ l.debug("Indirect jump at %#x cannot be resolved by %s.", addr, repr(self))
+ return False, [ ]
+ target = target_state.addr
if self._is_target_valid(cfg, target):
l.debug("Indirect jump at %#x is resolved to target %#x.", addr, target)
| {"golden_diff": "diff --git a/angr/analyses/cfg/indirect_jump_resolvers/mips_elf_fast.py b/angr/analyses/cfg/indirect_jump_resolvers/mips_elf_fast.py\n--- a/angr/analyses/cfg/indirect_jump_resolvers/mips_elf_fast.py\n+++ b/angr/analyses/cfg/indirect_jump_resolvers/mips_elf_fast.py\n@@ -97,7 +97,13 @@\n simgr.run()\n \n if simgr.cut:\n- target = simgr.cut[0].addr\n+ # pick the successor that is cut right after executing `addr`\n+ try:\n+ target_state = next(iter(cut for cut in simgr.cut if cut.history.addr == addr))\n+ except StopIteration:\n+ l.debug(\"Indirect jump at %#x cannot be resolved by %s.\", addr, repr(self))\n+ return False, [ ]\n+ target = target_state.addr\n \n if self._is_target_valid(cfg, target):\n l.debug(\"Indirect jump at %#x is resolved to target %#x.\", addr, target)\n", "issue": "Strange successors of the return block of a function\nI'm analysing a MIPS binary when facing the problem.\r\n\r\nThe problem exists in the funcition `do_ssc`.\r\n\r\nIn the following block which has a return statement\r\n\r\n\r\nWhen I run `node.successors` I got\r\n```\r\nIn [103]: end.successors \r\nOut[103]: \r\n[<CFGNode 0x40a7a8[28]>,\r\n <CFGNode do_ssc+0x12c [28]>,\r\n <CFGNode do_ssc+0x4c4 [28]>,\r\n <CFGNode do_ssc+0x45c [24]>,\r\n <CFGNode do_ssc+0x2a8 [24]>]\r\n```\r\nTheir addresses are `0x40a7a8`, `0x40a33c`, `0x40a6d4` and `0x40a4b8` respectively.\r\n\r\nI know the cfg of angr is interfunctional, however, only `0x40a7a8` is an caller of `do_ssc`.\r\n\r\nMay I know why other threes exist?\r\n\r\n\r\n\n", "before_files": [{"content": "\nimport logging\n\nimport pyvex\nimport archinfo\n\n\nfrom .... import options, BP_BEFORE\nfrom ....blade import Blade\nfrom ....annocfg import AnnotatedCFG\nfrom ....exploration_techniques import Slicecutor\n\nfrom .resolver import IndirectJumpResolver\n\n\nl = logging.getLogger(name=__name__)\n\n\nclass MipsElfFastResolver(IndirectJumpResolver):\n def __init__(self, project):\n super(MipsElfFastResolver, self).__init__(project, timeless=True)\n\n def filter(self, cfg, addr, func_addr, block, jumpkind):\n if not isinstance(self.project.arch, (archinfo.ArchMIPS32, archinfo.ArchMIPS64, )):\n return False\n return True\n\n def resolve(self, cfg, addr, func_addr, block, jumpkind):\n \"\"\"\n Resolves the indirect jump in MIPS ELF binaries where all external function calls are indexed using gp.\n\n :param cfg: A CFG instance.\n :param int addr: IRSB address.\n :param int func_addr: The function address.\n :param pyvex.IRSB block: The IRSB.\n :param str jumpkind: The jumpkind.\n :return: If it was resolved and targets alongside it\n :rtype: tuple\n \"\"\"\n\n project = self.project\n\n b = Blade(cfg.graph, addr, -1, cfg=cfg, project=project, ignore_sp=True, ignore_bp=True,\n ignored_regs=('gp',)\n )\n\n sources = [n for n in b.slice.nodes() if b.slice.in_degree(n) == 0]\n if not sources:\n return False, []\n\n source = sources[0]\n source_addr = source[0]\n annotated_cfg = AnnotatedCFG(project, None, detect_loops=False)\n annotated_cfg.from_digraph(b.slice)\n\n state = project.factory.blank_state(addr=source_addr, mode=\"fastpath\",\n remove_options=options.refs\n )\n func = cfg.kb.functions.function(addr=func_addr)\n\n gp_offset = project.arch.registers['gp'][0]\n if 'gp' not in func.info:\n sec = project.loader.find_section_containing(func.addr)\n if sec is None or sec.name != '.plt':\n # this might a special case: gp is only used once in this function, and it can be initialized right before\n # its use site.\n # TODO: handle this case\n 
l.debug('Failed to determine value of register gp for function %#x.', func.addr)\n return False, [ ]\n else:\n state.regs.gp = func.info['gp']\n\n def overwrite_tmp_value(state):\n state.inspect.tmp_write_expr = state.solver.BVV(func.info['gp'], state.arch.bits)\n\n # Special handling for cases where `gp` is stored on the stack\n got_gp_stack_store = False\n for block_addr_in_slice in set(slice_node[0] for slice_node in b.slice.nodes()):\n for stmt in project.factory.block(block_addr_in_slice).vex.statements:\n if isinstance(stmt, pyvex.IRStmt.Put) and stmt.offset == gp_offset and \\\n isinstance(stmt.data, pyvex.IRExpr.RdTmp):\n tmp_offset = stmt.data.tmp # pylint:disable=cell-var-from-loop\n # we must make sure value of that temporary variable equals to the correct gp value\n state.inspect.make_breakpoint('tmp_write', when=BP_BEFORE,\n condition=lambda s, bbl_addr_=block_addr_in_slice,\n tmp_offset_=tmp_offset:\n s.scratch.bbl_addr == bbl_addr_ and s.inspect.tmp_write_num == tmp_offset_,\n action=overwrite_tmp_value\n )\n got_gp_stack_store = True\n break\n if got_gp_stack_store:\n break\n\n simgr = self.project.factory.simulation_manager(state)\n simgr.use_technique(Slicecutor(annotated_cfg))\n simgr.run()\n\n if simgr.cut:\n target = simgr.cut[0].addr\n\n if self._is_target_valid(cfg, target):\n l.debug(\"Indirect jump at %#x is resolved to target %#x.\", addr, target)\n return True, [ target ]\n\n l.debug(\"Indirect jump at %#x is resolved to target %#x, which seems to be invalid.\", addr, target)\n return False, [ ]\n\n l.debug(\"Indirect jump at %#x cannot be resolved by %s.\", addr, repr(self))\n return False, [ ]\n", "path": "angr/analyses/cfg/indirect_jump_resolvers/mips_elf_fast.py"}], "after_files": [{"content": "\nimport logging\n\nimport pyvex\nimport archinfo\n\n\nfrom .... 
import options, BP_BEFORE\nfrom ....blade import Blade\nfrom ....annocfg import AnnotatedCFG\nfrom ....exploration_techniques import Slicecutor\n\nfrom .resolver import IndirectJumpResolver\n\n\nl = logging.getLogger(name=__name__)\n\n\nclass MipsElfFastResolver(IndirectJumpResolver):\n def __init__(self, project):\n super(MipsElfFastResolver, self).__init__(project, timeless=True)\n\n def filter(self, cfg, addr, func_addr, block, jumpkind):\n if not isinstance(self.project.arch, (archinfo.ArchMIPS32, archinfo.ArchMIPS64, )):\n return False\n return True\n\n def resolve(self, cfg, addr, func_addr, block, jumpkind):\n \"\"\"\n Resolves the indirect jump in MIPS ELF binaries where all external function calls are indexed using gp.\n\n :param cfg: A CFG instance.\n :param int addr: IRSB address.\n :param int func_addr: The function address.\n :param pyvex.IRSB block: The IRSB.\n :param str jumpkind: The jumpkind.\n :return: If it was resolved and targets alongside it\n :rtype: tuple\n \"\"\"\n\n project = self.project\n\n b = Blade(cfg.graph, addr, -1, cfg=cfg, project=project, ignore_sp=True, ignore_bp=True,\n ignored_regs=('gp',)\n )\n\n sources = [n for n in b.slice.nodes() if b.slice.in_degree(n) == 0]\n if not sources:\n return False, []\n\n source = sources[0]\n source_addr = source[0]\n annotated_cfg = AnnotatedCFG(project, None, detect_loops=False)\n annotated_cfg.from_digraph(b.slice)\n\n state = project.factory.blank_state(addr=source_addr, mode=\"fastpath\",\n remove_options=options.refs\n )\n func = cfg.kb.functions.function(addr=func_addr)\n\n gp_offset = project.arch.registers['gp'][0]\n if 'gp' not in func.info:\n sec = project.loader.find_section_containing(func.addr)\n if sec is None or sec.name != '.plt':\n # this might a special case: gp is only used once in this function, and it can be initialized right before\n # its use site.\n # TODO: handle this case\n l.debug('Failed to determine value of register gp for function %#x.', func.addr)\n return False, [ ]\n else:\n state.regs.gp = func.info['gp']\n\n def overwrite_tmp_value(state):\n state.inspect.tmp_write_expr = state.solver.BVV(func.info['gp'], state.arch.bits)\n\n # Special handling for cases where `gp` is stored on the stack\n got_gp_stack_store = False\n for block_addr_in_slice in set(slice_node[0] for slice_node in b.slice.nodes()):\n for stmt in project.factory.block(block_addr_in_slice).vex.statements:\n if isinstance(stmt, pyvex.IRStmt.Put) and stmt.offset == gp_offset and \\\n isinstance(stmt.data, pyvex.IRExpr.RdTmp):\n tmp_offset = stmt.data.tmp # pylint:disable=cell-var-from-loop\n # we must make sure value of that temporary variable equals to the correct gp value\n state.inspect.make_breakpoint('tmp_write', when=BP_BEFORE,\n condition=lambda s, bbl_addr_=block_addr_in_slice,\n tmp_offset_=tmp_offset:\n s.scratch.bbl_addr == bbl_addr_ and s.inspect.tmp_write_num == tmp_offset_,\n action=overwrite_tmp_value\n )\n got_gp_stack_store = True\n break\n if got_gp_stack_store:\n break\n\n simgr = self.project.factory.simulation_manager(state)\n simgr.use_technique(Slicecutor(annotated_cfg))\n simgr.run()\n\n if simgr.cut:\n # pick the successor that is cut right after executing `addr`\n try:\n target_state = next(iter(cut for cut in simgr.cut if cut.history.addr == addr))\n except StopIteration:\n l.debug(\"Indirect jump at %#x cannot be resolved by %s.\", addr, repr(self))\n return False, [ ]\n target = target_state.addr\n\n if self._is_target_valid(cfg, target):\n l.debug(\"Indirect jump at %#x is resolved to 
target %#x.\", addr, target)\n return True, [ target ]\n\n l.debug(\"Indirect jump at %#x is resolved to target %#x, which seems to be invalid.\", addr, target)\n return False, [ ]\n\n l.debug(\"Indirect jump at %#x cannot be resolved by %s.\", addr, repr(self))\n return False, [ ]\n", "path": "angr/analyses/cfg/indirect_jump_resolvers/mips_elf_fast.py"}]} | 1,794 | 246 |
gh_patches_debug_1220 | rasdani/github-patches | git_diff | DataBiosphere__toil-239 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Jenkins should only deploy to PyPI when building off the master branch
--- END ISSUE ---
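
The requirement is a single sentence, so only as context: one common way to enforce it is a branch guard around the upload step, while the patch recorded for this entry bumps the version string to a `.dev` pre-release, presumably so that non-release builds are not published. The snippet below is a sketch under assumptions: the project's actual Jenkins configuration is not shown here, and `BRANCH_NAME` plus a `twine` upload merely describe a typical setup.
```python
import glob
import os
import subprocess


def maybe_publish():
    # Hypothetical guard; Jenkins multibranch jobs usually expose BRANCH_NAME.
    branch = os.environ.get('BRANCH_NAME', '')
    if branch != 'master':
        print('Skipping PyPI upload: branch is %r, not master.' % branch)
        return
    artifacts = glob.glob('dist/*')
    if not artifacts:
        print('Nothing built under dist/; nothing to upload.')
        return
    subprocess.check_call(['twine', 'upload'] + artifacts)


if __name__ == '__main__':
    maybe_publish()
```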
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_packages
2
3 setup(
4 name='toil',
5 version='3.0.4',
6 description='Pipeline management software for clusters.',
7 author='Benedict Paten',
8 author_email='[email protected]',
9 url="https://github.com/BD2KGenomics/toil",
10 install_requires=['bd2k-python-lib>=1.7.dev1'],
11 extras_require={
12 'mesos': [
13 'mesos.interface==0.22.0',
14 'psutil==3.0.1' ],
15 'aws': [
16 'boto==2.38.0' ] },
17 package_dir={ '': 'src' },
18 packages=find_packages( 'src', exclude=[ '*.test' ] ),
19 entry_points={
20 'console_scripts': [
21 'toilKill = toil.utils.toilKill:main',
22 'toilStatus = toil.utils.toilStatus:main',
23 'toilStats = toil.utils.toilStats:main',
24 'toilRestarts = toil.utils.toilRestarts:main',
25 'multijob = toil.batchSystems.multijob:main',
26 'toil-mesos-executor = toil.batchSystems.mesos.executor:main [mesos]'] } )
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -2,7 +2,7 @@
setup(
name='toil',
- version='3.0.4',
+ version='3.0.5.dev1',
description='Pipeline management software for clusters.',
author='Benedict Paten',
author_email='[email protected]',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -2,7 +2,7 @@\n \n setup(\n name='toil',\n- version='3.0.4',\n+ version='3.0.5.dev1',\n description='Pipeline management software for clusters.',\n author='Benedict Paten',\n author_email='[email protected]',\n", "issue": "Jenkins should only deploy to PyPI when building off the master branch\n\n", "before_files": [{"content": "from setuptools import setup, find_packages\n\nsetup(\n name='toil',\n version='3.0.4',\n description='Pipeline management software for clusters.',\n author='Benedict Paten',\n author_email='[email protected]',\n url=\"https://github.com/BD2KGenomics/toil\",\n install_requires=['bd2k-python-lib>=1.7.dev1'],\n extras_require={\n 'mesos': [\n 'mesos.interface==0.22.0',\n 'psutil==3.0.1' ],\n 'aws': [\n 'boto==2.38.0' ] },\n package_dir={ '': 'src' },\n packages=find_packages( 'src', exclude=[ '*.test' ] ),\n entry_points={\n 'console_scripts': [\n 'toilKill = toil.utils.toilKill:main',\n 'toilStatus = toil.utils.toilStatus:main',\n 'toilStats = toil.utils.toilStats:main',\n 'toilRestarts = toil.utils.toilRestarts:main',\n 'multijob = toil.batchSystems.multijob:main',\n 'toil-mesos-executor = toil.batchSystems.mesos.executor:main [mesos]'] } )\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\n\nsetup(\n name='toil',\n version='3.0.5.dev1',\n description='Pipeline management software for clusters.',\n author='Benedict Paten',\n author_email='[email protected]',\n url=\"https://github.com/BD2KGenomics/toil\",\n install_requires=['bd2k-python-lib>=1.7.dev1'],\n extras_require={\n 'mesos': [\n 'mesos.interface==0.22.0',\n 'psutil==3.0.1' ],\n 'aws': [\n 'boto==2.38.0' ] },\n package_dir={ '': 'src' },\n packages=find_packages( 'src', exclude=[ '*.test' ] ),\n entry_points={\n 'console_scripts': [\n 'toilKill = toil.utils.toilKill:main',\n 'toilStatus = toil.utils.toilStatus:main',\n 'toilStats = toil.utils.toilStats:main',\n 'toilRestarts = toil.utils.toilRestarts:main',\n 'multijob = toil.batchSystems.multijob:main',\n 'toil-mesos-executor = toil.batchSystems.mesos.executor:main [mesos]'] } )\n", "path": "setup.py"}]} | 600 | 93 |
gh_patches_debug_59245 | rasdani/github-patches | git_diff | facebookresearch__hydra-287 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] example of override fail in multirun
This fails
`python examples/tutorial/5_composition/my_app.py -m db=mysql,postgresql db.user=omry`
--- END ISSUE ---
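
For context on what the failing command asks for: `db=mysql,postgresql` sweeps the `db` config group over two options in multirun mode, and `db.user=omry` applies the same override to every job. The sketch below imitates that expansion with plain Python; it makes no claims about Hydra's internals or about why the run fails.
```python
from itertools import product

overrides = ['db=mysql,postgresql', 'db.user=omry']

# Each override becomes a list of (key, value) choices; commas mean "sweep".
choices = []
for item in overrides:
    key, _, value = item.partition('=')
    choices.append([(key, v) for v in value.split(',')])

# Multirun launches one job per element of the cartesian product.
for job in product(*choices):
    print(dict(job))
# {'db': 'mysql', 'db.user': 'omry'}
# {'db': 'postgresql', 'db.user': 'omry'}
```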
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 import codecs
3 import distutils
4 import os
5 import re
6 import shutil
7 from os.path import join, exists, isdir
8
9 from setuptools import setup, find_packages
10
11 here = os.path.abspath(os.path.dirname(__file__))
12
13
14 def read(*parts):
15 with codecs.open(os.path.join(here, *parts), "r") as fp:
16 return fp.read()
17
18
19 def find_version(*file_paths):
20 version_file = read(*file_paths)
21 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M)
22 if version_match:
23 return version_match.group(1)
24 raise RuntimeError("Unable to find version string.")
25
26
27 class CleanCommand(distutils.cmd.Command):
28 """
29 Our custom command to clean out junk files.
30 """
31
32 description = "Cleans out junk files we don't want in the repo"
33 user_options = []
34
35 def initialize_options(self):
36 pass
37
38 def finalize_options(self):
39 pass
40
41 @staticmethod
42 def find(root, includes, excludes=[]):
43 res = []
44 for parent, dirs, files in os.walk(root):
45 for f in dirs + files:
46 add = list()
47 for include in includes:
48 if re.findall(include, f):
49 add.append(join(parent, f))
50 res.extend(add)
51 final_list = []
52 # Exclude things that matches an exclude pattern
53 for ex in excludes:
54 for file in res:
55 if not re.findall(ex, file):
56 final_list.append(file)
57 return final_list
58
59 def run(self):
60 delete_patterns = [
61 ".eggs",
62 ".egg-info",
63 ".pytest_cache",
64 "build",
65 "dist",
66 "__pycache__",
67 ".pyc",
68 ]
69 deletion_list = CleanCommand.find(
70 ".", includes=delete_patterns, excludes=["\\.nox/.*"]
71 )
72
73 for f in deletion_list:
74 if exists(f):
75 if isdir(f):
76 shutil.rmtree(f, ignore_errors=True)
77 else:
78 os.unlink(f)
79
80
81 with open("README.md", "r") as fh:
82 LONG_DESC = fh.read()
83 setup(
84 cmdclass={"clean": CleanCommand},
85 name="hydra-core",
86 version=find_version("hydra", "__init__.py"),
87 author="Omry Yadan",
88 author_email="[email protected]",
89 description="Hydra is a library for writing flexible command line applications",
90 long_description=LONG_DESC,
91 long_description_content_type="text/markdown",
92 url="https://github.com/facebookresearch/hydra",
93 keywords="command-line configuration yaml tab-completion",
94 packages=find_packages(),
95 include_package_data=True,
96 classifiers=[
97 "License :: OSI Approved :: MIT License",
98 "Development Status :: 4 - Beta",
99 "Programming Language :: Python :: 2.7",
100 "Programming Language :: Python :: 3.6",
101 "Programming Language :: Python :: 3.7",
102 "Operating System :: POSIX :: Linux",
103 "Operating System :: MacOS",
104 "Operating System :: Microsoft :: Windows",
105 ],
106 install_requires=[
107 "omegaconf>=1.4.0rc2",
108 'pathlib2>=2.2.0;python_version<"3.0"',
109 ],
110 # Install development dependencies with
111 # pip install -e .[dev]
112 extras_require={
113 "dev": [
114 "black",
115 "coverage",
116 "flake8",
117 "flake8-copyright",
118 "nox",
119 "pre-commit",
120 "pytest",
121 "setuptools",
122 "towncrier",
123 "twine",
124 ]
125 },
126 )
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -104,7 +104,7 @@
"Operating System :: Microsoft :: Windows",
],
install_requires=[
- "omegaconf>=1.4.0rc2",
+ "omegaconf>=1.4.0rc3",
'pathlib2>=2.2.0;python_version<"3.0"',
],
# Install development dependencies with
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -104,7 +104,7 @@\n \"Operating System :: Microsoft :: Windows\",\n ],\n install_requires=[\n- \"omegaconf>=1.4.0rc2\",\n+ \"omegaconf>=1.4.0rc3\",\n 'pathlib2>=2.2.0;python_version<\"3.0\"',\n ],\n # Install development dependencies with\n", "issue": "[Bug] example of override fail in multirun\nThis fails\r\n\r\n`python examples/tutorial/5_composition/my_app.py -m db=mysql,postgresql db.user=omry`\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport codecs\nimport distutils\nimport os\nimport re\nimport shutil\nfrom os.path import join, exists, isdir\n\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(*parts):\n with codecs.open(os.path.join(here, *parts), \"r\") as fp:\n return fp.read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nclass CleanCommand(distutils.cmd.Command):\n \"\"\"\n Our custom command to clean out junk files.\n \"\"\"\n\n description = \"Cleans out junk files we don't want in the repo\"\n user_options = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n @staticmethod\n def find(root, includes, excludes=[]):\n res = []\n for parent, dirs, files in os.walk(root):\n for f in dirs + files:\n add = list()\n for include in includes:\n if re.findall(include, f):\n add.append(join(parent, f))\n res.extend(add)\n final_list = []\n # Exclude things that matches an exclude pattern\n for ex in excludes:\n for file in res:\n if not re.findall(ex, file):\n final_list.append(file)\n return final_list\n\n def run(self):\n delete_patterns = [\n \".eggs\",\n \".egg-info\",\n \".pytest_cache\",\n \"build\",\n \"dist\",\n \"__pycache__\",\n \".pyc\",\n ]\n deletion_list = CleanCommand.find(\n \".\", includes=delete_patterns, excludes=[\"\\\\.nox/.*\"]\n )\n\n for f in deletion_list:\n if exists(f):\n if isdir(f):\n shutil.rmtree(f, ignore_errors=True)\n else:\n os.unlink(f)\n\n\nwith open(\"README.md\", \"r\") as fh:\n LONG_DESC = fh.read()\n setup(\n cmdclass={\"clean\": CleanCommand},\n name=\"hydra-core\",\n version=find_version(\"hydra\", \"__init__.py\"),\n author=\"Omry Yadan\",\n author_email=\"[email protected]\",\n description=\"Hydra is a library for writing flexible command line applications\",\n long_description=LONG_DESC,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/hydra\",\n keywords=\"command-line configuration yaml tab-completion\",\n packages=find_packages(),\n include_package_data=True,\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Development Status :: 4 - Beta\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: MacOS\",\n \"Operating System :: Microsoft :: Windows\",\n ],\n install_requires=[\n \"omegaconf>=1.4.0rc2\",\n 'pathlib2>=2.2.0;python_version<\"3.0\"',\n ],\n # Install development dependencies with\n # pip install -e .[dev]\n extras_require={\n \"dev\": [\n \"black\",\n \"coverage\",\n \"flake8\",\n \"flake8-copyright\",\n \"nox\",\n \"pre-commit\",\n \"pytest\",\n 
\"setuptools\",\n \"towncrier\",\n \"twine\",\n ]\n },\n )\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport codecs\nimport distutils\nimport os\nimport re\nimport shutil\nfrom os.path import join, exists, isdir\n\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(*parts):\n with codecs.open(os.path.join(here, *parts), \"r\") as fp:\n return fp.read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nclass CleanCommand(distutils.cmd.Command):\n \"\"\"\n Our custom command to clean out junk files.\n \"\"\"\n\n description = \"Cleans out junk files we don't want in the repo\"\n user_options = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n @staticmethod\n def find(root, includes, excludes=[]):\n res = []\n for parent, dirs, files in os.walk(root):\n for f in dirs + files:\n add = list()\n for include in includes:\n if re.findall(include, f):\n add.append(join(parent, f))\n res.extend(add)\n final_list = []\n # Exclude things that matches an exclude pattern\n for ex in excludes:\n for file in res:\n if not re.findall(ex, file):\n final_list.append(file)\n return final_list\n\n def run(self):\n delete_patterns = [\n \".eggs\",\n \".egg-info\",\n \".pytest_cache\",\n \"build\",\n \"dist\",\n \"__pycache__\",\n \".pyc\",\n ]\n deletion_list = CleanCommand.find(\n \".\", includes=delete_patterns, excludes=[\"\\\\.nox/.*\"]\n )\n\n for f in deletion_list:\n if exists(f):\n if isdir(f):\n shutil.rmtree(f, ignore_errors=True)\n else:\n os.unlink(f)\n\n\nwith open(\"README.md\", \"r\") as fh:\n LONG_DESC = fh.read()\n setup(\n cmdclass={\"clean\": CleanCommand},\n name=\"hydra-core\",\n version=find_version(\"hydra\", \"__init__.py\"),\n author=\"Omry Yadan\",\n author_email=\"[email protected]\",\n description=\"Hydra is a library for writing flexible command line applications\",\n long_description=LONG_DESC,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/hydra\",\n keywords=\"command-line configuration yaml tab-completion\",\n packages=find_packages(),\n include_package_data=True,\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Development Status :: 4 - Beta\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: MacOS\",\n \"Operating System :: Microsoft :: Windows\",\n ],\n install_requires=[\n \"omegaconf>=1.4.0rc3\",\n 'pathlib2>=2.2.0;python_version<\"3.0\"',\n ],\n # Install development dependencies with\n # pip install -e .[dev]\n extras_require={\n \"dev\": [\n \"black\",\n \"coverage\",\n \"flake8\",\n \"flake8-copyright\",\n \"nox\",\n \"pre-commit\",\n \"pytest\",\n \"setuptools\",\n \"towncrier\",\n \"twine\",\n ]\n },\n )\n", "path": "setup.py"}]} | 1,366 | 105 |
gh_patches_debug_5110 | rasdani/github-patches | git_diff | mindsdb__mindsdb-177 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: 'PredictTransactionOutputData' object has no attribute 'predicted_values'
**Describe the bug**
After running predict.py in the example mindsdb/docs/examples/time_series/ I got the following AttributeError:
```
Traceback (most recent call last):
File "predict.py", line 12, in <module>
print(result.predicted_values)
AttributeError: 'PredictTransactionOutputData' object has no attribute 'predicted_values'
```
**To Reproduce**
Steps to reproduce the behavior:
1. First run train.py, with python3 train.py
2. When training is finished, run predict.py with python3 predict.py
3. See error
**Expected behavior**
I expected to see the predicted values.
**Desktop (please complete the following information):**
- OS: Ubuntu 18.04.2 LTS
- mindsdb 1.0.5
- pip 19.1
- python 3.6.7
- virtualenv 15.1.0
- urllib3 1.24
**Additional context**
Before the Traceback I got the following warning many times:
```
WARNING:mindsdb-logger-core-logger:libs/backends/ludwig.py:141 - ('Missing previous predicted values for output column: '
'Main_Engine_Fuel_Consumption_MT_day, these should be included in your input '
'under the name: previous_Main_Engine_Fuel_Consumption_MT_day')
```
Finally, I've installed mindsdb using pip3 inside a virtualenvironment.
--- END ISSUE ---
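
The traceback shows the example script reading an attribute that the result object does not expose; the patch at the end of this entry iterates over the result instead. Below is a rough sketch of the two access patterns, using a stand-in class rather than the real `PredictTransactionOutputData`.
```python
class FakeResult:
    """Stand-in for the object returned by Predictor.predict() in this demo."""

    def __init__(self, rows):
        self._rows = rows

    def __iter__(self):
        return iter(self._rows)


result = FakeResult([
    {'Main_Engine_Fuel_Consumption_MT_day': 21.4},
    {'Main_Engine_Fuel_Consumption_MT_day': 22.1},
])

# result.predicted_values   # -> AttributeError, as reported above

for row in result:           # the pattern the fixed example script uses
    print(row)
```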
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/examples/time_series/predict.py`
Content:
```
1 """
2
3 """
4
5 from mindsdb import Predictor
6
7 # Here we use the model to make predictions (NOTE: You need to run train.py first)
8 result = Predictor(name='fuel').predict(when_data = 'fuel_predict.csv')
9
10 # you can now print the results
11 print('The predicted main engine fuel consumption')
12 print(result.predicted_values)
```
Path: `docs/examples/nlp/predict.py`
Content:
```
1 from mindsdb import *
2
3 mdb = Predictor(name='real_estate_desc')
4
5 # Here we use the model to make predictions (NOTE: You need to run train.py first)
6 result = mdb.predict(
7 when={
8 "description": """A true gem
9 rooms: 2
10 bathrooms: 0
11 neighboorhood: thowsand_oaks
12 amenities: parking
13 area: 84.0291068642868
14 condition: great !
15 """
16 }
17 )
18
19 # you can now print the results
20 print('The predicted number of rooms')
21 print(result.predicted_values)
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/examples/nlp/predict.py b/docs/examples/nlp/predict.py
--- a/docs/examples/nlp/predict.py
+++ b/docs/examples/nlp/predict.py
@@ -18,4 +18,4 @@
# you can now print the results
print('The predicted number of rooms')
-print(result.predicted_values)
+print(result)
diff --git a/docs/examples/time_series/predict.py b/docs/examples/time_series/predict.py
--- a/docs/examples/time_series/predict.py
+++ b/docs/examples/time_series/predict.py
@@ -9,4 +9,5 @@
# you can now print the results
print('The predicted main engine fuel consumption')
-print(result.predicted_values)
\ No newline at end of file
+for row in result:
+ print(row)
| {"golden_diff": "diff --git a/docs/examples/nlp/predict.py b/docs/examples/nlp/predict.py\n--- a/docs/examples/nlp/predict.py\n+++ b/docs/examples/nlp/predict.py\n@@ -18,4 +18,4 @@\n \n # you can now print the results\n print('The predicted number of rooms')\n-print(result.predicted_values)\n+print(result)\ndiff --git a/docs/examples/time_series/predict.py b/docs/examples/time_series/predict.py\n--- a/docs/examples/time_series/predict.py\n+++ b/docs/examples/time_series/predict.py\n@@ -9,4 +9,5 @@\n \n # you can now print the results\n print('The predicted main engine fuel consumption')\n-print(result.predicted_values)\n\\ No newline at end of file\n+for row in result:\n+ print(row)\n", "issue": "AttributeError: 'PredictTransactionOutputData' object has no attribute 'predicted_values'\n**Describe the bug**\r\nAfter running predict.py in the example mindsdb/docs/examples/time_series/ I got the following AttributeError:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"predict.py\", line 12, in <module>\r\n print(result.predicted_values)\r\nAttributeError: 'PredictTransactionOutputData' object has no attribute 'predicted_values'\r\n```\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. First run train.py, with python3 train.py\r\n2. When training is finished, run predict.py with python3 predict.py\r\n3. See error\r\n\r\n**Expected behavior**\r\nI expected to see the predicted values.\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Ubuntu 18.04.2 LTS\r\n- mindsdb 1.0.5\r\n- pip 19.1\r\n- python 3.6.7\r\n- virtualenv 15.1.0\r\n- urllib3 1.24\r\n\r\n**Additional context**\r\nBefore the Traceback I got the following warning many times:\r\n\r\n```\r\nWARNING:mindsdb-logger-core-logger:libs/backends/ludwig.py:141 - ('Missing previous predicted values for output column: '\r\n 'Main_Engine_Fuel_Consumption_MT_day, these should be included in your input '\r\n 'under the name: previous_Main_Engine_Fuel_Consumption_MT_day')\r\n```\r\nFinally, I've installed mindsdb using pip3 inside a virtualenvironment.\r\n\n", "before_files": [{"content": "\"\"\"\n\n\"\"\"\n\nfrom mindsdb import Predictor\n\n# Here we use the model to make predictions (NOTE: You need to run train.py first)\nresult = Predictor(name='fuel').predict(when_data = 'fuel_predict.csv')\n\n# you can now print the results\nprint('The predicted main engine fuel consumption')\nprint(result.predicted_values)", "path": "docs/examples/time_series/predict.py"}, {"content": "from mindsdb import *\n\nmdb = Predictor(name='real_estate_desc')\n\n# Here we use the model to make predictions (NOTE: You need to run train.py first)\nresult = mdb.predict(\n when={\n \"description\": \"\"\"A true gem\n rooms: 2\n bathrooms: 0\n neighboorhood: thowsand_oaks\n amenities: parking\n area: 84.0291068642868\n condition: great !\n \"\"\"\n }\n)\n\n# you can now print the results\nprint('The predicted number of rooms')\nprint(result.predicted_values)\n", "path": "docs/examples/nlp/predict.py"}], "after_files": [{"content": "\"\"\"\n\n\"\"\"\n\nfrom mindsdb import Predictor\n\n# Here we use the model to make predictions (NOTE: You need to run train.py first)\nresult = Predictor(name='fuel').predict(when_data = 'fuel_predict.csv')\n\n# you can now print the results\nprint('The predicted main engine fuel consumption')\nfor row in result:\n print(row)\n", "path": "docs/examples/time_series/predict.py"}, {"content": "from mindsdb import *\n\nmdb = Predictor(name='real_estate_desc')\n\n# Here we use the model to make 
predictions (NOTE: You need to run train.py first)\nresult = mdb.predict(\n when={\n \"description\": \"\"\"A true gem\n rooms: 2\n bathrooms: 0\n neighboorhood: thowsand_oaks\n amenities: parking\n area: 84.0291068642868\n condition: great !\n \"\"\"\n }\n)\n\n# you can now print the results\nprint('The predicted number of rooms')\nprint(result)\n", "path": "docs/examples/nlp/predict.py"}]} | 864 | 172 |
gh_patches_debug_58021 | rasdani/github-patches | git_diff | sopel-irc__sopel-949 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Problem in (at least) Wikipedia module: possibly Unicode related
Hi,
observe the following use case:
https://en.wikipedia.org/wiki/Hir%C5%8D_Onoda
@willie_5.4.1 KeyError: u'extract' (file "/usr/local/lib/python2.7/dist-packages/willie-5.4.1-py2.7.egg/willie/modules/wikipedia.py", line 89, in mw_snippet)
--- END ISSUE ---
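
The `KeyError: u'extract'` is a downstream symptom: on Python 2, passing a unicode string through `unquote` mangles multi-byte percent-escapes such as `%C5%8D`, so the MediaWiki API is asked for a page that does not exist and returns no extract. The patch at the end of this entry wraps `unquote` accordingly; the standalone sketch below shows the same encode/decode round trip and also runs on Python 3, where the extra step is not needed.
```python
# -*- coding: utf-8 -*-
try:                                   # Python 2
    from urlparse import unquote as _unquote
    # Percent-decoding must happen on bytes, then be decoded back to text,
    # otherwise u'Hir%C5%8D_Onoda' comes out as mojibake instead of Hirō_Onoda.
    unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')
except ImportError:                    # Python 3
    from urllib.parse import unquote

print(unquote('Hir%C5%8D_Onoda'))      # Hirō_Onoda
```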
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/wikipedia.py`
Content:
```
1 # coding=utf-8
2 """
3 wikipedia.py - Sopel Wikipedia Module
4 Copyright 2013 Edward Powell - embolalia.net
5 Licensed under the Eiffel Forum License 2.
6
7 http://sopel.chat
8 """
9 from __future__ import unicode_literals, absolute_import, print_function, division
10 from sopel import web, tools
11 from sopel.config.types import StaticSection, ValidatedAttribute
12 from sopel.module import NOLIMIT, commands, example, rule
13 import json
14 import re
15
16 import sys
17 if sys.version_info.major < 3:
18 from urlparse import unquote
19 else:
20 from urllib.parse import unquote
21
22 REDIRECT = re.compile(r'^REDIRECT (.*)')
23
24
25 class WikipediaSection(StaticSection):
26 default_lang = ValidatedAttribute('default_lang', default='en')
27 """The default language to find articles from."""
28 lang_per_channel = ValidatedAttribute('lang_per_channel')
29
30
31 def setup(bot):
32 bot.config.define_section('wikipedia', WikipediaSection)
33
34 regex = re.compile('([a-z]+).(wikipedia.org/wiki/)([^ ]+)')
35 if not bot.memory.contains('url_callbacks'):
36 bot.memory['url_callbacks'] = tools.SopelMemory()
37 bot.memory['url_callbacks'][regex] = mw_info
38
39
40 def configure(config):
41 config.define_section('wikipedia', WikipediaSection)
42 config.wikipedia.configure_setting(
43 'default_lang',
44 "Enter the default language to find articles from."
45 )
46
47
48 def mw_search(server, query, num):
49 """
50 Searches the specified MediaWiki server for the given query, and returns
51 the specified number of results.
52 """
53 search_url = ('http://%s/w/api.php?format=json&action=query'
54 '&list=search&srlimit=%d&srprop=timestamp&srwhat=text'
55 '&srsearch=') % (server, num)
56 search_url += query
57 query = json.loads(web.get(search_url))
58 if 'query' in query:
59 query = query['query']['search']
60 return [r['title'] for r in query]
61 else:
62 return None
63
64
65 def say_snippet(bot, server, query, show_url=True):
66 page_name = query.replace('_', ' ')
67 query = query.replace(' ', '_')
68 snippet = mw_snippet(server, query)
69 msg = '[WIKIPEDIA] {} | "{}"'.format(page_name, snippet)
70 if show_url:
71 msg = msg + ' | https://{}/wiki/{}'.format(server, query)
72 bot.say(msg)
73
74
75 def mw_snippet(server, query):
76 """
77 Retrives a snippet of the specified length from the given page on the given
78 server.
79 """
80 snippet_url = ('https://' + server + '/w/api.php?format=json'
81 '&action=query&prop=extracts&exintro&explaintext'
82 '&exchars=300&redirects&titles=')
83 snippet_url += query
84 snippet = json.loads(web.get(snippet_url))
85 snippet = snippet['query']['pages']
86
87 # For some reason, the API gives the page *number* as the key, so we just
88 # grab the first page number in the results.
89 snippet = snippet[list(snippet.keys())[0]]
90
91 return snippet['extract']
92
93
94 @rule('.*/([a-z]+\.wikipedia.org)/wiki/([^ ]+).*')
95 def mw_info(bot, trigger, found_match=None):
96 """
97 Retrives a snippet of the specified length from the given page on the given
98 server.
99 """
100 match = found_match or trigger
101 say_snippet(bot, match.group(1), unquote(match.group(2)), show_url=False)
102
103
104 @commands('w', 'wiki', 'wik')
105 @example('.w San Francisco')
106 def wikipedia(bot, trigger):
107 lang = bot.config.wikipedia.default_lang
108
109 #change lang if channel has custom language set
110 if (trigger.sender and not trigger.sender.is_nick() and
111 bot.config.wikipedia.lang_per_channel):
112 customlang = re.search('(' + trigger.sender + '):(\w+)',
113 bot.config.wikipedia.lang_per_channel)
114 if customlang is not None:
115 lang = customlang.group(2)
116
117 if trigger.group(2) is None:
118 bot.reply("What do you want me to look up?")
119 return NOLIMIT
120
121 query = trigger.group(2)
122 args = re.search(r'^-([a-z]{2,12})\s(.*)', query)
123 if args is not None:
124 lang = args.group(1)
125 query = args.group(2)
126
127 if not query:
128 bot.reply('What do you want me to look up?')
129 return NOLIMIT
130 server = lang + '.wikipedia.org'
131 query = mw_search(server, query, 1)
132 if not query:
133 bot.reply("I can't find any results for that.")
134 return NOLIMIT
135 else:
136 query = query[0]
137 say_snippet(bot, server, query)
138
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sopel/modules/wikipedia.py b/sopel/modules/wikipedia.py
--- a/sopel/modules/wikipedia.py
+++ b/sopel/modules/wikipedia.py
@@ -15,7 +15,8 @@
import sys
if sys.version_info.major < 3:
- from urlparse import unquote
+ from urlparse import unquote as _unquote
+ unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')
else:
from urllib.parse import unquote
| {"golden_diff": "diff --git a/sopel/modules/wikipedia.py b/sopel/modules/wikipedia.py\n--- a/sopel/modules/wikipedia.py\n+++ b/sopel/modules/wikipedia.py\n@@ -15,7 +15,8 @@\n \n import sys\n if sys.version_info.major < 3:\n- from urlparse import unquote\n+ from urlparse import unquote as _unquote\n+ unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')\n else:\n from urllib.parse import unquote\n", "issue": "Problem in (at least) Wikipedia module: possibly Unicode related\nHi,\nobserve the following use case:\n https://en.wikipedia.org/wiki/Hir%C5%8D_Onoda\n @willie_5.4.1 KeyError: u'extract' (file \"/usr/local/lib/python2.7/dist-packages/willie-5.4.1-py2.7.egg/willie/modules/wikipedia.py\", line 89, in mw_snippet)\n\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"\nwikipedia.py - Sopel Wikipedia Module\nCopyright 2013 Edward Powell - embolalia.net\nLicensed under the Eiffel Forum License 2.\n\nhttp://sopel.chat\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\nfrom sopel import web, tools\nfrom sopel.config.types import StaticSection, ValidatedAttribute\nfrom sopel.module import NOLIMIT, commands, example, rule\nimport json\nimport re\n\nimport sys\nif sys.version_info.major < 3:\n from urlparse import unquote\nelse:\n from urllib.parse import unquote\n\nREDIRECT = re.compile(r'^REDIRECT (.*)')\n\n\nclass WikipediaSection(StaticSection):\n default_lang = ValidatedAttribute('default_lang', default='en')\n \"\"\"The default language to find articles from.\"\"\"\n lang_per_channel = ValidatedAttribute('lang_per_channel')\n\n\ndef setup(bot):\n bot.config.define_section('wikipedia', WikipediaSection)\n\n regex = re.compile('([a-z]+).(wikipedia.org/wiki/)([^ ]+)')\n if not bot.memory.contains('url_callbacks'):\n bot.memory['url_callbacks'] = tools.SopelMemory()\n bot.memory['url_callbacks'][regex] = mw_info\n\n\ndef configure(config):\n config.define_section('wikipedia', WikipediaSection)\n config.wikipedia.configure_setting(\n 'default_lang',\n \"Enter the default language to find articles from.\"\n )\n\n\ndef mw_search(server, query, num):\n \"\"\"\n Searches the specified MediaWiki server for the given query, and returns\n the specified number of results.\n \"\"\"\n search_url = ('http://%s/w/api.php?format=json&action=query'\n '&list=search&srlimit=%d&srprop=timestamp&srwhat=text'\n '&srsearch=') % (server, num)\n search_url += query\n query = json.loads(web.get(search_url))\n if 'query' in query:\n query = query['query']['search']\n return [r['title'] for r in query]\n else:\n return None\n\n\ndef say_snippet(bot, server, query, show_url=True):\n page_name = query.replace('_', ' ')\n query = query.replace(' ', '_')\n snippet = mw_snippet(server, query)\n msg = '[WIKIPEDIA] {} | \"{}\"'.format(page_name, snippet)\n if show_url:\n msg = msg + ' | https://{}/wiki/{}'.format(server, query)\n bot.say(msg)\n\n\ndef mw_snippet(server, query):\n \"\"\"\n Retrives a snippet of the specified length from the given page on the given\n server.\n \"\"\"\n snippet_url = ('https://' + server + '/w/api.php?format=json'\n '&action=query&prop=extracts&exintro&explaintext'\n '&exchars=300&redirects&titles=')\n snippet_url += query\n snippet = json.loads(web.get(snippet_url))\n snippet = snippet['query']['pages']\n\n # For some reason, the API gives the page *number* as the key, so we just\n # grab the first page number in the results.\n snippet = snippet[list(snippet.keys())[0]]\n\n return 
snippet['extract']\n\n\n@rule('.*/([a-z]+\\.wikipedia.org)/wiki/([^ ]+).*')\ndef mw_info(bot, trigger, found_match=None):\n \"\"\"\n Retrives a snippet of the specified length from the given page on the given\n server.\n \"\"\"\n match = found_match or trigger\n say_snippet(bot, match.group(1), unquote(match.group(2)), show_url=False)\n\n\n@commands('w', 'wiki', 'wik')\n@example('.w San Francisco')\ndef wikipedia(bot, trigger):\n lang = bot.config.wikipedia.default_lang\n\n #change lang if channel has custom language set\n if (trigger.sender and not trigger.sender.is_nick() and\n bot.config.wikipedia.lang_per_channel):\n customlang = re.search('(' + trigger.sender + '):(\\w+)',\n bot.config.wikipedia.lang_per_channel)\n if customlang is not None:\n lang = customlang.group(2)\n\n if trigger.group(2) is None:\n bot.reply(\"What do you want me to look up?\")\n return NOLIMIT\n\n query = trigger.group(2)\n args = re.search(r'^-([a-z]{2,12})\\s(.*)', query)\n if args is not None:\n lang = args.group(1)\n query = args.group(2)\n\n if not query:\n bot.reply('What do you want me to look up?')\n return NOLIMIT\n server = lang + '.wikipedia.org'\n query = mw_search(server, query, 1)\n if not query:\n bot.reply(\"I can't find any results for that.\")\n return NOLIMIT\n else:\n query = query[0]\n say_snippet(bot, server, query)\n", "path": "sopel/modules/wikipedia.py"}], "after_files": [{"content": "# coding=utf-8\n\"\"\"\nwikipedia.py - Sopel Wikipedia Module\nCopyright 2013 Edward Powell - embolalia.net\nLicensed under the Eiffel Forum License 2.\n\nhttp://sopel.chat\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\nfrom sopel import web, tools\nfrom sopel.config.types import StaticSection, ValidatedAttribute\nfrom sopel.module import NOLIMIT, commands, example, rule\nimport json\nimport re\n\nimport sys\nif sys.version_info.major < 3:\n from urlparse import unquote as _unquote\n unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')\nelse:\n from urllib.parse import unquote\n\nREDIRECT = re.compile(r'^REDIRECT (.*)')\n\n\nclass WikipediaSection(StaticSection):\n default_lang = ValidatedAttribute('default_lang', default='en')\n \"\"\"The default language to find articles from.\"\"\"\n lang_per_channel = ValidatedAttribute('lang_per_channel')\n\n\ndef setup(bot):\n bot.config.define_section('wikipedia', WikipediaSection)\n\n regex = re.compile('([a-z]+).(wikipedia.org/wiki/)([^ ]+)')\n if not bot.memory.contains('url_callbacks'):\n bot.memory['url_callbacks'] = tools.SopelMemory()\n bot.memory['url_callbacks'][regex] = mw_info\n\n\ndef configure(config):\n config.define_section('wikipedia', WikipediaSection)\n config.wikipedia.configure_setting(\n 'default_lang',\n \"Enter the default language to find articles from.\"\n )\n\n\ndef mw_search(server, query, num):\n \"\"\"\n Searches the specified MediaWiki server for the given query, and returns\n the specified number of results.\n \"\"\"\n search_url = ('http://%s/w/api.php?format=json&action=query'\n '&list=search&srlimit=%d&srprop=timestamp&srwhat=text'\n '&srsearch=') % (server, num)\n search_url += query\n query = json.loads(web.get(search_url))\n if 'query' in query:\n query = query['query']['search']\n return [r['title'] for r in query]\n else:\n return None\n\n\ndef say_snippet(bot, server, query, show_url=True):\n page_name = query.replace('_', ' ')\n query = query.replace(' ', '_')\n snippet = mw_snippet(server, query)\n msg = '[WIKIPEDIA] {} | \"{}\"'.format(page_name, snippet)\n if 
show_url:\n msg = msg + ' | https://{}/wiki/{}'.format(server, query)\n bot.say(msg)\n\n\ndef mw_snippet(server, query):\n \"\"\"\n Retrives a snippet of the specified length from the given page on the given\n server.\n \"\"\"\n snippet_url = ('https://' + server + '/w/api.php?format=json'\n '&action=query&prop=extracts&exintro&explaintext'\n '&exchars=300&redirects&titles=')\n snippet_url += query\n snippet = json.loads(web.get(snippet_url))\n snippet = snippet['query']['pages']\n\n # For some reason, the API gives the page *number* as the key, so we just\n # grab the first page number in the results.\n snippet = snippet[list(snippet.keys())[0]]\n\n return snippet['extract']\n\n\n@rule('.*/([a-z]+\\.wikipedia.org)/wiki/([^ ]+).*')\ndef mw_info(bot, trigger, found_match=None):\n \"\"\"\n Retrives a snippet of the specified length from the given page on the given\n server.\n \"\"\"\n match = found_match or trigger\n say_snippet(bot, match.group(1), unquote(match.group(2)), show_url=False)\n\n\n@commands('w', 'wiki', 'wik')\n@example('.w San Francisco')\ndef wikipedia(bot, trigger):\n lang = bot.config.wikipedia.default_lang\n\n #change lang if channel has custom language set\n if (trigger.sender and not trigger.sender.is_nick() and\n bot.config.wikipedia.lang_per_channel):\n customlang = re.search('(' + trigger.sender + '):(\\w+)',\n bot.config.wikipedia.lang_per_channel)\n if customlang is not None:\n lang = customlang.group(2)\n\n if trigger.group(2) is None:\n bot.reply(\"What do you want me to look up?\")\n return NOLIMIT\n\n query = trigger.group(2)\n args = re.search(r'^-([a-z]{2,12})\\s(.*)', query)\n if args is not None:\n lang = args.group(1)\n query = args.group(2)\n\n if not query:\n bot.reply('What do you want me to look up?')\n return NOLIMIT\n server = lang + '.wikipedia.org'\n query = mw_search(server, query, 1)\n if not query:\n bot.reply(\"I can't find any results for that.\")\n return NOLIMIT\n else:\n query = query[0]\n say_snippet(bot, server, query)\n", "path": "sopel/modules/wikipedia.py"}]} | 1,752 | 119 |
gh_patches_debug_585 | rasdani/github-patches | git_diff | pex-tool__pex-1679 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.73
On the docket:
+ [x] Unexpected distribution hash #1683
+ [x] Pex fails to parse wheel tags correctly when resolving from a lock. #1676
+ [x] `pex3 lock create --style universal` does not fully patch ambient interpreter properties. #1681
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.72"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.72"
+__version__ = "2.1.73"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.72\"\n+__version__ = \"2.1.73\"\n", "issue": "Release 2.1.73\nOn the docket:\r\n+ [x] Unexpected distribution hash #1683 \r\n+ [x] Pex fails to parse wheel tags correctly when resolving from a lock. #1676 \r\n+ [x] `pex3 lock create --style universal` does not fully patch ambient interpreter properties. #1681 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.72\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.73\"\n", "path": "pex/version.py"}]} | 386 | 96 |
gh_patches_debug_40938 | rasdani/github-patches | git_diff | Cog-Creators__Red-DiscordBot-3911 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Mod cog sends owner notifications on fresh install.
# Other bugs
I got reminded about it when I saw a fix for #3587. Mod cog sends owner notifications about `[p]moveignoredchannels` and `[p]movedeletedelay` on fresh Red installs. Only viable solution seems to be looping through all guild settings and only send the message if `delete_delay` has been changed from the default in at least one of them though I'm basing that on my comment [here](https://github.com/Cog-Creators/Red-DiscordBot/pull/3638#discussion_r392119234).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redbot/cogs/mod/mod.py`
Content:
```
1 import asyncio
2 import logging
3 import re
4 from abc import ABC
5 from collections import defaultdict
6 from typing import List, Tuple
7
8 import discord
9 from redbot.core import Config, modlog, commands
10 from redbot.core.bot import Red
11 from redbot.core.i18n import Translator, cog_i18n
12 from redbot.core.utils._internal_utils import send_to_owners_with_prefix_replaced
13 from .casetypes import CASETYPES
14 from .events import Events
15 from .kickban import KickBanMixin
16 from .mutes import MuteMixin
17 from .names import ModInfo
18 from .slowmode import Slowmode
19 from .settings import ModSettings
20
21 _ = T_ = Translator("Mod", __file__)
22
23 __version__ = "1.2.0"
24
25
26 class CompositeMetaClass(type(commands.Cog), type(ABC)):
27 """
28 This allows the metaclass used for proper type detection to
29 coexist with discord.py's metaclass
30 """
31
32 pass
33
34
35 @cog_i18n(_)
36 class Mod(
37 ModSettings,
38 Events,
39 KickBanMixin,
40 MuteMixin,
41 ModInfo,
42 Slowmode,
43 commands.Cog,
44 metaclass=CompositeMetaClass,
45 ):
46 """Moderation tools."""
47
48 default_global_settings = {"version": ""}
49
50 default_guild_settings = {
51 "ban_mention_spam": False,
52 "delete_repeats": -1,
53 "ignored": False,
54 "respect_hierarchy": True,
55 "delete_delay": -1,
56 "reinvite_on_unban": False,
57 "current_tempbans": [],
58 "dm_on_kickban": False,
59 "default_days": 0,
60 }
61
62 default_channel_settings = {"ignored": False}
63
64 default_member_settings = {"past_nicks": [], "perms_cache": {}, "banned_until": False}
65
66 default_user_settings = {"past_names": []}
67
68 def __init__(self, bot: Red):
69 super().__init__()
70 self.bot = bot
71
72 self.config = Config.get_conf(self, 4961522000, force_registration=True)
73 self.config.register_global(**self.default_global_settings)
74 self.config.register_guild(**self.default_guild_settings)
75 self.config.register_channel(**self.default_channel_settings)
76 self.config.register_member(**self.default_member_settings)
77 self.config.register_user(**self.default_user_settings)
78 self.cache: dict = {}
79 self.tban_expiry_task = self.bot.loop.create_task(self.check_tempban_expirations())
80 self.last_case: dict = defaultdict(dict)
81
82 self._ready = asyncio.Event()
83
84 async def initialize(self):
85 await self._maybe_update_config()
86 self._ready.set()
87
88 async def cog_before_invoke(self, ctx: commands.Context) -> None:
89 await self._ready.wait()
90
91 def cog_unload(self):
92 self.tban_expiry_task.cancel()
93
94 async def _maybe_update_config(self):
95 """Maybe update `delete_delay` value set by Config prior to Mod 1.0.0."""
96 if not await self.config.version():
97 guild_dict = await self.config.all_guilds()
98 for guild_id, info in guild_dict.items():
99 delete_repeats = info.get("delete_repeats", False)
100 if delete_repeats:
101 val = 3
102 else:
103 val = -1
104 await self.config.guild(discord.Object(id=guild_id)).delete_repeats.set(val)
105 await self.config.version.set("1.0.0") # set version of last update
106 if await self.config.version() < "1.1.0":
107 msg = _(
108 "Ignored guilds and channels have been moved. "
109 "Please use `[p]moveignoredchannels` if "
110 "you were previously using these functions."
111 )
112 self.bot.loop.create_task(send_to_owners_with_prefix_replaced(self.bot, msg))
113 await self.config.version.set("1.1.0")
114 if await self.config.version() < "1.2.0":
115 msg = _(
116 "Delete delay settings have been moved. "
117 "Please use `[p]movedeletedelay` if "
118 "you were previously using these functions."
119 )
120 self.bot.loop.create_task(send_to_owners_with_prefix_replaced(self.bot, msg))
121 await self.config.version.set("1.2.0")
122
123 @commands.command()
124 @commands.is_owner()
125 async def moveignoredchannels(self, ctx: commands.Context) -> None:
126 """Move ignored channels and servers to core"""
127 all_guilds = await self.config.all_guilds()
128 all_channels = await self.config.all_channels()
129 for guild_id, settings in all_guilds.items():
130 await self.bot._config.guild_from_id(guild_id).ignored.set(settings["ignored"])
131 await self.config.guild_from_id(guild_id).ignored.clear()
132 for channel_id, settings in all_channels.items():
133 await self.bot._config.channel_from_id(channel_id).ignored.set(settings["ignored"])
134 await self.config.channel_from_id(channel_id).clear()
135 await ctx.send(_("Ignored channels and guilds restored."))
136
137 @commands.command()
138 @commands.is_owner()
139 async def movedeletedelay(self, ctx: commands.Context) -> None:
140 """
141 Move deletedelay settings to core
142 """
143 all_guilds = await self.config.all_guilds()
144 for guild_id, settings in all_guilds.items():
145 await self.bot._config.guild_from_id(guild_id).delete_delay.set(
146 settings["delete_delay"]
147 )
148 await self.config.guild_from_id(guild_id).delete_delay.clear()
149 await ctx.send(_("Delete delay settings restored."))
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/redbot/cogs/mod/mod.py b/redbot/cogs/mod/mod.py
--- a/redbot/cogs/mod/mod.py
+++ b/redbot/cogs/mod/mod.py
@@ -6,6 +6,8 @@
from typing import List, Tuple
import discord
+from redbot.core.utils import AsyncIter
+
from redbot.core import Config, modlog, commands
from redbot.core.bot import Red
from redbot.core.i18n import Translator, cog_i18n
@@ -95,7 +97,7 @@
"""Maybe update `delete_delay` value set by Config prior to Mod 1.0.0."""
if not await self.config.version():
guild_dict = await self.config.all_guilds()
- for guild_id, info in guild_dict.items():
+ async for guild_id, info in AsyncIter(guild_dict.items(), steps=25):
delete_repeats = info.get("delete_repeats", False)
if delete_repeats:
val = 3
@@ -104,20 +106,37 @@
await self.config.guild(discord.Object(id=guild_id)).delete_repeats.set(val)
await self.config.version.set("1.0.0") # set version of last update
if await self.config.version() < "1.1.0":
- msg = _(
- "Ignored guilds and channels have been moved. "
- "Please use `[p]moveignoredchannels` if "
- "you were previously using these functions."
- )
- self.bot.loop.create_task(send_to_owners_with_prefix_replaced(self.bot, msg))
+ message_sent = False
+ async for e in AsyncIter((await self.config.all_channels()).values(), steps=25):
+ if e["ignored"] is not False:
+ msg = _(
+ "Ignored guilds and channels have been moved. "
+ "Please use `[p]moveignoredchannels` to migrate the old settings."
+ )
+ self.bot.loop.create_task(send_to_owners_with_prefix_replaced(self.bot, msg))
+ message_sent = True
+ break
+ if message_sent is False:
+ async for e in AsyncIter((await self.config.all_guilds()).values(), steps=25):
+ if e["ignored"] is not False:
+ msg = _(
+ "Ignored guilds and channels have been moved. "
+ "Please use `[p]moveignoredchannels` to migrate the old settings."
+ )
+ self.bot.loop.create_task(
+ send_to_owners_with_prefix_replaced(self.bot, msg)
+ )
+ break
await self.config.version.set("1.1.0")
if await self.config.version() < "1.2.0":
- msg = _(
- "Delete delay settings have been moved. "
- "Please use `[p]movedeletedelay` if "
- "you were previously using these functions."
- )
- self.bot.loop.create_task(send_to_owners_with_prefix_replaced(self.bot, msg))
+ async for e in AsyncIter((await self.config.all_guilds()).values(), steps=25):
+ if e["delete_delay"] != -1:
+ msg = _(
+ "Delete delay settings have been moved. "
+ "Please use `[p]movedeletedelay` to migrate the old settings."
+ )
+ self.bot.loop.create_task(send_to_owners_with_prefix_replaced(self.bot, msg))
+ break
await self.config.version.set("1.2.0")
@commands.command()
| {"golden_diff": "diff --git a/redbot/cogs/mod/mod.py b/redbot/cogs/mod/mod.py\n--- a/redbot/cogs/mod/mod.py\n+++ b/redbot/cogs/mod/mod.py\n@@ -6,6 +6,8 @@\n from typing import List, Tuple\n \n import discord\n+from redbot.core.utils import AsyncIter\n+\n from redbot.core import Config, modlog, commands\n from redbot.core.bot import Red\n from redbot.core.i18n import Translator, cog_i18n\n@@ -95,7 +97,7 @@\n \"\"\"Maybe update `delete_delay` value set by Config prior to Mod 1.0.0.\"\"\"\n if not await self.config.version():\n guild_dict = await self.config.all_guilds()\n- for guild_id, info in guild_dict.items():\n+ async for guild_id, info in AsyncIter(guild_dict.items(), steps=25):\n delete_repeats = info.get(\"delete_repeats\", False)\n if delete_repeats:\n val = 3\n@@ -104,20 +106,37 @@\n await self.config.guild(discord.Object(id=guild_id)).delete_repeats.set(val)\n await self.config.version.set(\"1.0.0\") # set version of last update\n if await self.config.version() < \"1.1.0\":\n- msg = _(\n- \"Ignored guilds and channels have been moved. \"\n- \"Please use `[p]moveignoredchannels` if \"\n- \"you were previously using these functions.\"\n- )\n- self.bot.loop.create_task(send_to_owners_with_prefix_replaced(self.bot, msg))\n+ message_sent = False\n+ async for e in AsyncIter((await self.config.all_channels()).values(), steps=25):\n+ if e[\"ignored\"] is not False:\n+ msg = _(\n+ \"Ignored guilds and channels have been moved. \"\n+ \"Please use `[p]moveignoredchannels` to migrate the old settings.\"\n+ )\n+ self.bot.loop.create_task(send_to_owners_with_prefix_replaced(self.bot, msg))\n+ message_sent = True\n+ break\n+ if message_sent is False:\n+ async for e in AsyncIter((await self.config.all_guilds()).values(), steps=25):\n+ if e[\"ignored\"] is not False:\n+ msg = _(\n+ \"Ignored guilds and channels have been moved. \"\n+ \"Please use `[p]moveignoredchannels` to migrate the old settings.\"\n+ )\n+ self.bot.loop.create_task(\n+ send_to_owners_with_prefix_replaced(self.bot, msg)\n+ )\n+ break\n await self.config.version.set(\"1.1.0\")\n if await self.config.version() < \"1.2.0\":\n- msg = _(\n- \"Delete delay settings have been moved. \"\n- \"Please use `[p]movedeletedelay` if \"\n- \"you were previously using these functions.\"\n- )\n- self.bot.loop.create_task(send_to_owners_with_prefix_replaced(self.bot, msg))\n+ async for e in AsyncIter((await self.config.all_guilds()).values(), steps=25):\n+ if e[\"delete_delay\"] != -1:\n+ msg = _(\n+ \"Delete delay settings have been moved. \"\n+ \"Please use `[p]movedeletedelay` to migrate the old settings.\"\n+ )\n+ self.bot.loop.create_task(send_to_owners_with_prefix_replaced(self.bot, msg))\n+ break\n await self.config.version.set(\"1.2.0\")\n \n @commands.command()\n", "issue": "Mod cog sends owner notifications on fresh install.\n# Other bugs\r\n\r\nI got reminded about it when I saw a fix for #3587. Mod cog sends owner notifications about `[p]moveignoredchannels` and `[p]movedeletedelay` on fresh Red installs. 
Only viable solution seems to be looping through all guild settings and only send the message if `delete_delay` has been changed from the default in at least one of them though I'm basing that on my comment [here](https://github.com/Cog-Creators/Red-DiscordBot/pull/3638#discussion_r392119234).\r\n\n", "before_files": [{"content": "import asyncio\nimport logging\nimport re\nfrom abc import ABC\nfrom collections import defaultdict\nfrom typing import List, Tuple\n\nimport discord\nfrom redbot.core import Config, modlog, commands\nfrom redbot.core.bot import Red\nfrom redbot.core.i18n import Translator, cog_i18n\nfrom redbot.core.utils._internal_utils import send_to_owners_with_prefix_replaced\nfrom .casetypes import CASETYPES\nfrom .events import Events\nfrom .kickban import KickBanMixin\nfrom .mutes import MuteMixin\nfrom .names import ModInfo\nfrom .slowmode import Slowmode\nfrom .settings import ModSettings\n\n_ = T_ = Translator(\"Mod\", __file__)\n\n__version__ = \"1.2.0\"\n\n\nclass CompositeMetaClass(type(commands.Cog), type(ABC)):\n \"\"\"\n This allows the metaclass used for proper type detection to\n coexist with discord.py's metaclass\n \"\"\"\n\n pass\n\n\n@cog_i18n(_)\nclass Mod(\n ModSettings,\n Events,\n KickBanMixin,\n MuteMixin,\n ModInfo,\n Slowmode,\n commands.Cog,\n metaclass=CompositeMetaClass,\n):\n \"\"\"Moderation tools.\"\"\"\n\n default_global_settings = {\"version\": \"\"}\n\n default_guild_settings = {\n \"ban_mention_spam\": False,\n \"delete_repeats\": -1,\n \"ignored\": False,\n \"respect_hierarchy\": True,\n \"delete_delay\": -1,\n \"reinvite_on_unban\": False,\n \"current_tempbans\": [],\n \"dm_on_kickban\": False,\n \"default_days\": 0,\n }\n\n default_channel_settings = {\"ignored\": False}\n\n default_member_settings = {\"past_nicks\": [], \"perms_cache\": {}, \"banned_until\": False}\n\n default_user_settings = {\"past_names\": []}\n\n def __init__(self, bot: Red):\n super().__init__()\n self.bot = bot\n\n self.config = Config.get_conf(self, 4961522000, force_registration=True)\n self.config.register_global(**self.default_global_settings)\n self.config.register_guild(**self.default_guild_settings)\n self.config.register_channel(**self.default_channel_settings)\n self.config.register_member(**self.default_member_settings)\n self.config.register_user(**self.default_user_settings)\n self.cache: dict = {}\n self.tban_expiry_task = self.bot.loop.create_task(self.check_tempban_expirations())\n self.last_case: dict = defaultdict(dict)\n\n self._ready = asyncio.Event()\n\n async def initialize(self):\n await self._maybe_update_config()\n self._ready.set()\n\n async def cog_before_invoke(self, ctx: commands.Context) -> None:\n await self._ready.wait()\n\n def cog_unload(self):\n self.tban_expiry_task.cancel()\n\n async def _maybe_update_config(self):\n \"\"\"Maybe update `delete_delay` value set by Config prior to Mod 1.0.0.\"\"\"\n if not await self.config.version():\n guild_dict = await self.config.all_guilds()\n for guild_id, info in guild_dict.items():\n delete_repeats = info.get(\"delete_repeats\", False)\n if delete_repeats:\n val = 3\n else:\n val = -1\n await self.config.guild(discord.Object(id=guild_id)).delete_repeats.set(val)\n await self.config.version.set(\"1.0.0\") # set version of last update\n if await self.config.version() < \"1.1.0\":\n msg = _(\n \"Ignored guilds and channels have been moved. 
\"\n \"Please use `[p]moveignoredchannels` if \"\n \"you were previously using these functions.\"\n )\n self.bot.loop.create_task(send_to_owners_with_prefix_replaced(self.bot, msg))\n await self.config.version.set(\"1.1.0\")\n if await self.config.version() < \"1.2.0\":\n msg = _(\n \"Delete delay settings have been moved. \"\n \"Please use `[p]movedeletedelay` if \"\n \"you were previously using these functions.\"\n )\n self.bot.loop.create_task(send_to_owners_with_prefix_replaced(self.bot, msg))\n await self.config.version.set(\"1.2.0\")\n\n @commands.command()\n @commands.is_owner()\n async def moveignoredchannels(self, ctx: commands.Context) -> None:\n \"\"\"Move ignored channels and servers to core\"\"\"\n all_guilds = await self.config.all_guilds()\n all_channels = await self.config.all_channels()\n for guild_id, settings in all_guilds.items():\n await self.bot._config.guild_from_id(guild_id).ignored.set(settings[\"ignored\"])\n await self.config.guild_from_id(guild_id).ignored.clear()\n for channel_id, settings in all_channels.items():\n await self.bot._config.channel_from_id(channel_id).ignored.set(settings[\"ignored\"])\n await self.config.channel_from_id(channel_id).clear()\n await ctx.send(_(\"Ignored channels and guilds restored.\"))\n\n @commands.command()\n @commands.is_owner()\n async def movedeletedelay(self, ctx: commands.Context) -> None:\n \"\"\"\n Move deletedelay settings to core\n \"\"\"\n all_guilds = await self.config.all_guilds()\n for guild_id, settings in all_guilds.items():\n await self.bot._config.guild_from_id(guild_id).delete_delay.set(\n settings[\"delete_delay\"]\n )\n await self.config.guild_from_id(guild_id).delete_delay.clear()\n await ctx.send(_(\"Delete delay settings restored.\"))\n", "path": "redbot/cogs/mod/mod.py"}], "after_files": [{"content": "import asyncio\nimport logging\nimport re\nfrom abc import ABC\nfrom collections import defaultdict\nfrom typing import List, Tuple\n\nimport discord\nfrom redbot.core.utils import AsyncIter\n\nfrom redbot.core import Config, modlog, commands\nfrom redbot.core.bot import Red\nfrom redbot.core.i18n import Translator, cog_i18n\nfrom redbot.core.utils._internal_utils import send_to_owners_with_prefix_replaced\nfrom .casetypes import CASETYPES\nfrom .events import Events\nfrom .kickban import KickBanMixin\nfrom .mutes import MuteMixin\nfrom .names import ModInfo\nfrom .slowmode import Slowmode\nfrom .settings import ModSettings\n\n_ = T_ = Translator(\"Mod\", __file__)\n\n__version__ = \"1.2.0\"\n\n\nclass CompositeMetaClass(type(commands.Cog), type(ABC)):\n \"\"\"\n This allows the metaclass used for proper type detection to\n coexist with discord.py's metaclass\n \"\"\"\n\n pass\n\n\n@cog_i18n(_)\nclass Mod(\n ModSettings,\n Events,\n KickBanMixin,\n MuteMixin,\n ModInfo,\n Slowmode,\n commands.Cog,\n metaclass=CompositeMetaClass,\n):\n \"\"\"Moderation tools.\"\"\"\n\n default_global_settings = {\"version\": \"\"}\n\n default_guild_settings = {\n \"ban_mention_spam\": False,\n \"delete_repeats\": -1,\n \"ignored\": False,\n \"respect_hierarchy\": True,\n \"delete_delay\": -1,\n \"reinvite_on_unban\": False,\n \"current_tempbans\": [],\n \"dm_on_kickban\": False,\n \"default_days\": 0,\n }\n\n default_channel_settings = {\"ignored\": False}\n\n default_member_settings = {\"past_nicks\": [], \"perms_cache\": {}, \"banned_until\": False}\n\n default_user_settings = {\"past_names\": []}\n\n def __init__(self, bot: Red):\n super().__init__()\n self.bot = bot\n\n self.config = Config.get_conf(self, 4961522000, 
force_registration=True)\n self.config.register_global(**self.default_global_settings)\n self.config.register_guild(**self.default_guild_settings)\n self.config.register_channel(**self.default_channel_settings)\n self.config.register_member(**self.default_member_settings)\n self.config.register_user(**self.default_user_settings)\n self.cache: dict = {}\n self.tban_expiry_task = self.bot.loop.create_task(self.check_tempban_expirations())\n self.last_case: dict = defaultdict(dict)\n\n self._ready = asyncio.Event()\n\n async def initialize(self):\n await self._maybe_update_config()\n self._ready.set()\n\n async def cog_before_invoke(self, ctx: commands.Context) -> None:\n await self._ready.wait()\n\n def cog_unload(self):\n self.tban_expiry_task.cancel()\n\n async def _maybe_update_config(self):\n \"\"\"Maybe update `delete_delay` value set by Config prior to Mod 1.0.0.\"\"\"\n if not await self.config.version():\n guild_dict = await self.config.all_guilds()\n async for guild_id, info in AsyncIter(guild_dict.items(), steps=25):\n delete_repeats = info.get(\"delete_repeats\", False)\n if delete_repeats:\n val = 3\n else:\n val = -1\n await self.config.guild(discord.Object(id=guild_id)).delete_repeats.set(val)\n await self.config.version.set(\"1.0.0\") # set version of last update\n if await self.config.version() < \"1.1.0\":\n message_sent = False\n async for e in AsyncIter((await self.config.all_channels()).values(), steps=25):\n if e[\"ignored\"] is not False:\n msg = _(\n \"Ignored guilds and channels have been moved. \"\n \"Please use `[p]moveignoredchannels` to migrate the old settings.\"\n )\n self.bot.loop.create_task(send_to_owners_with_prefix_replaced(self.bot, msg))\n message_sent = True\n break\n if message_sent is False:\n async for e in AsyncIter((await self.config.all_guilds()).values(), steps=25):\n if e[\"ignored\"] is not False:\n msg = _(\n \"Ignored guilds and channels have been moved. \"\n \"Please use `[p]moveignoredchannels` to migrate the old settings.\"\n )\n self.bot.loop.create_task(\n send_to_owners_with_prefix_replaced(self.bot, msg)\n )\n break\n await self.config.version.set(\"1.1.0\")\n if await self.config.version() < \"1.2.0\":\n async for e in AsyncIter((await self.config.all_guilds()).values(), steps=25):\n if e[\"delete_delay\"] != -1:\n msg = _(\n \"Delete delay settings have been moved. 
\"\n \"Please use `[p]movedeletedelay` to migrate the old settings.\"\n )\n self.bot.loop.create_task(send_to_owners_with_prefix_replaced(self.bot, msg))\n break\n await self.config.version.set(\"1.2.0\")\n\n @commands.command()\n @commands.is_owner()\n async def moveignoredchannels(self, ctx: commands.Context) -> None:\n \"\"\"Move ignored channels and servers to core\"\"\"\n all_guilds = await self.config.all_guilds()\n all_channels = await self.config.all_channels()\n for guild_id, settings in all_guilds.items():\n await self.bot._config.guild_from_id(guild_id).ignored.set(settings[\"ignored\"])\n await self.config.guild_from_id(guild_id).ignored.clear()\n for channel_id, settings in all_channels.items():\n await self.bot._config.channel_from_id(channel_id).ignored.set(settings[\"ignored\"])\n await self.config.channel_from_id(channel_id).clear()\n await ctx.send(_(\"Ignored channels and guilds restored.\"))\n\n @commands.command()\n @commands.is_owner()\n async def movedeletedelay(self, ctx: commands.Context) -> None:\n \"\"\"\n Move deletedelay settings to core\n \"\"\"\n all_guilds = await self.config.all_guilds()\n for guild_id, settings in all_guilds.items():\n await self.bot._config.guild_from_id(guild_id).delete_delay.set(\n settings[\"delete_delay\"]\n )\n await self.config.guild_from_id(guild_id).delete_delay.clear()\n await ctx.send(_(\"Delete delay settings restored.\"))\n", "path": "redbot/cogs/mod/mod.py"}]} | 1,965 | 800 |
gh_patches_debug_577 | rasdani/github-patches | git_diff | numba__numba-1356 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use CPython allocator in NRT
NRT should optionally use the CPython memory allocation functions (when imported from CPython). This would allow Numba-allocated memory to be seen by other utilities such as `sys.getallocatedblocks()`, `sys.debugmallocstats()`, and `tracemalloc`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numba/runtime/nrt.py`
Content:
```
1 from __future__ import print_function, absolute_import, division
2
3 from collections import namedtuple
4
5 from . import atomicops
6 from llvmlite import binding as ll
7
8 from numba.utils import finalize as _finalize
9 from . import _nrt_python as _nrt
10
11 _nrt_mstats = namedtuple("nrt_mstats", ["alloc", "free", "mi_alloc", "mi_free"])
12
13
14 class _Runtime(object):
15 def __init__(self):
16 self._init = False
17
18 def initialize(self, ctx):
19 """Initializes the NRT
20
21 Must be called before any actual call to the NRT API.
22 Safe to be called multiple times.
23 """
24 if self._init:
25 # Already initialized
26 return
27
28 # Register globals into the system
29 for py_name in _nrt.c_helpers:
30 c_name = "NRT_" + py_name
31 c_address = _nrt.c_helpers[py_name]
32 ll.add_symbol(c_name, c_address)
33
34 # Compile atomic operations
35 self._library = atomicops.compile_nrt_functions(ctx)
36
37 self._ptr_inc = self._library.get_pointer_to_function("nrt_atomic_add")
38 self._ptr_dec = self._library.get_pointer_to_function("nrt_atomic_sub")
39 self._ptr_cas = self._library.get_pointer_to_function("nrt_atomic_cas")
40
41 # Install atomic ops to NRT
42 _nrt.memsys_set_atomic_inc_dec(self._ptr_inc, self._ptr_dec)
43 _nrt.memsys_set_atomic_cas(self._ptr_cas)
44
45 self._init = True
46
47 @staticmethod
48 def shutdown():
49 """
50 Shutdown the NRT
51 Safe to be called without calling Runtime.initialize first
52 """
53 _nrt.memsys_shutdown()
54
55 @property
56 def library(self):
57 """
58 Return the Library object containing the various NRT functions.
59 """
60 return self._library
61
62 def meminfo_new(self, data, pyobj):
63 """
64 Returns a MemInfo object that tracks memory at `data` owned by `pyobj`.
65 MemInfo will acquire a reference on `pyobj`.
66 The release of MemInfo will release a reference on `pyobj`.
67 """
68 mi = _nrt.meminfo_new(data, pyobj)
69 return MemInfo(mi)
70
71 def meminfo_alloc(self, size, safe=False):
72 """
73 Allocate a new memory of `size` bytes and returns a MemInfo object
74 that tracks the allocation. When there is no more reference to the
75 MemInfo object, the underlying memory will be deallocated.
76
77 If `safe` flag is True, the memory is allocated using the `safe` scheme.
78 This is used for debugging and testing purposes.
79 See `NRT_MemInfo_alloc_safe()` in "nrt.h" for details.
80 """
81 if safe:
82 mi = _nrt.meminfo_alloc_safe(size)
83 else:
84 mi = _nrt.meminfo_alloc(size)
85 return MemInfo(mi)
86
87 def get_allocation_stats(self):
88 """
89 Returns a namedtuple of (alloc, free, mi_alloc, mi_free) for count of
90 each memory operations.
91 """
92 return _nrt_mstats(alloc=_nrt.memsys_get_stats_alloc(),
93 free=_nrt.memsys_get_stats_free(),
94 mi_alloc=_nrt.memsys_get_stats_mi_alloc(),
95 mi_free=_nrt.memsys_get_stats_mi_free())
96
97
98 # Alias to _nrt_python._MemInfo
99 MemInfo = _nrt._MemInfo
100
101 # Create uninitialized runtime
102 rtsys = _Runtime()
103
104 # Install finalizer
105 _finalize(rtsys, _Runtime.shutdown)
106
107 # Avoid future use of the class
108 del _Runtime
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/numba/runtime/nrt.py b/numba/runtime/nrt.py
--- a/numba/runtime/nrt.py
+++ b/numba/runtime/nrt.py
@@ -98,7 +98,8 @@
# Alias to _nrt_python._MemInfo
MemInfo = _nrt._MemInfo
-# Create uninitialized runtime
+# Create runtime
+_nrt.memsys_use_cpython_allocator()
rtsys = _Runtime()
# Install finalizer
| {"golden_diff": "diff --git a/numba/runtime/nrt.py b/numba/runtime/nrt.py\n--- a/numba/runtime/nrt.py\n+++ b/numba/runtime/nrt.py\n@@ -98,7 +98,8 @@\n # Alias to _nrt_python._MemInfo\n MemInfo = _nrt._MemInfo\n \n-# Create uninitialized runtime\n+# Create runtime\n+_nrt.memsys_use_cpython_allocator()\n rtsys = _Runtime()\n \n # Install finalizer\n", "issue": "Use CPython allocator in NRT\nNRT should optionally use the CPython memory allocation functions (when imported from CPython). This would allow Numba-allocated memory to be seen by other utilities such as `sys.getallocatedblocks()`, `sys.debugmallocstats()`, and `tracemalloc`.\n\n", "before_files": [{"content": "from __future__ import print_function, absolute_import, division\n\nfrom collections import namedtuple\n\nfrom . import atomicops\nfrom llvmlite import binding as ll\n\nfrom numba.utils import finalize as _finalize\nfrom . import _nrt_python as _nrt\n\n_nrt_mstats = namedtuple(\"nrt_mstats\", [\"alloc\", \"free\", \"mi_alloc\", \"mi_free\"])\n\n\nclass _Runtime(object):\n def __init__(self):\n self._init = False\n\n def initialize(self, ctx):\n \"\"\"Initializes the NRT\n\n Must be called before any actual call to the NRT API.\n Safe to be called multiple times.\n \"\"\"\n if self._init:\n # Already initialized\n return\n\n # Register globals into the system\n for py_name in _nrt.c_helpers:\n c_name = \"NRT_\" + py_name\n c_address = _nrt.c_helpers[py_name]\n ll.add_symbol(c_name, c_address)\n\n # Compile atomic operations\n self._library = atomicops.compile_nrt_functions(ctx)\n\n self._ptr_inc = self._library.get_pointer_to_function(\"nrt_atomic_add\")\n self._ptr_dec = self._library.get_pointer_to_function(\"nrt_atomic_sub\")\n self._ptr_cas = self._library.get_pointer_to_function(\"nrt_atomic_cas\")\n\n # Install atomic ops to NRT\n _nrt.memsys_set_atomic_inc_dec(self._ptr_inc, self._ptr_dec)\n _nrt.memsys_set_atomic_cas(self._ptr_cas)\n\n self._init = True\n\n @staticmethod\n def shutdown():\n \"\"\"\n Shutdown the NRT\n Safe to be called without calling Runtime.initialize first\n \"\"\"\n _nrt.memsys_shutdown()\n\n @property\n def library(self):\n \"\"\"\n Return the Library object containing the various NRT functions.\n \"\"\"\n return self._library\n\n def meminfo_new(self, data, pyobj):\n \"\"\"\n Returns a MemInfo object that tracks memory at `data` owned by `pyobj`.\n MemInfo will acquire a reference on `pyobj`.\n The release of MemInfo will release a reference on `pyobj`.\n \"\"\"\n mi = _nrt.meminfo_new(data, pyobj)\n return MemInfo(mi)\n\n def meminfo_alloc(self, size, safe=False):\n \"\"\"\n Allocate a new memory of `size` bytes and returns a MemInfo object\n that tracks the allocation. 
When there is no more reference to the\n MemInfo object, the underlying memory will be deallocated.\n\n If `safe` flag is True, the memory is allocated using the `safe` scheme.\n This is used for debugging and testing purposes.\n See `NRT_MemInfo_alloc_safe()` in \"nrt.h\" for details.\n \"\"\"\n if safe:\n mi = _nrt.meminfo_alloc_safe(size)\n else:\n mi = _nrt.meminfo_alloc(size)\n return MemInfo(mi)\n\n def get_allocation_stats(self):\n \"\"\"\n Returns a namedtuple of (alloc, free, mi_alloc, mi_free) for count of\n each memory operations.\n \"\"\"\n return _nrt_mstats(alloc=_nrt.memsys_get_stats_alloc(),\n free=_nrt.memsys_get_stats_free(),\n mi_alloc=_nrt.memsys_get_stats_mi_alloc(),\n mi_free=_nrt.memsys_get_stats_mi_free())\n\n\n# Alias to _nrt_python._MemInfo\nMemInfo = _nrt._MemInfo\n\n# Create uninitialized runtime\nrtsys = _Runtime()\n\n# Install finalizer\n_finalize(rtsys, _Runtime.shutdown)\n\n# Avoid future use of the class\ndel _Runtime\n", "path": "numba/runtime/nrt.py"}], "after_files": [{"content": "from __future__ import print_function, absolute_import, division\n\nfrom collections import namedtuple\n\nfrom . import atomicops\nfrom llvmlite import binding as ll\n\nfrom numba.utils import finalize as _finalize\nfrom . import _nrt_python as _nrt\n\n_nrt_mstats = namedtuple(\"nrt_mstats\", [\"alloc\", \"free\", \"mi_alloc\", \"mi_free\"])\n\n\nclass _Runtime(object):\n def __init__(self):\n self._init = False\n\n def initialize(self, ctx):\n \"\"\"Initializes the NRT\n\n Must be called before any actual call to the NRT API.\n Safe to be called multiple times.\n \"\"\"\n if self._init:\n # Already initialized\n return\n\n # Register globals into the system\n for py_name in _nrt.c_helpers:\n c_name = \"NRT_\" + py_name\n c_address = _nrt.c_helpers[py_name]\n ll.add_symbol(c_name, c_address)\n\n # Compile atomic operations\n self._library = atomicops.compile_nrt_functions(ctx)\n\n self._ptr_inc = self._library.get_pointer_to_function(\"nrt_atomic_add\")\n self._ptr_dec = self._library.get_pointer_to_function(\"nrt_atomic_sub\")\n self._ptr_cas = self._library.get_pointer_to_function(\"nrt_atomic_cas\")\n\n # Install atomic ops to NRT\n _nrt.memsys_set_atomic_inc_dec(self._ptr_inc, self._ptr_dec)\n _nrt.memsys_set_atomic_cas(self._ptr_cas)\n\n self._init = True\n\n @staticmethod\n def shutdown():\n \"\"\"\n Shutdown the NRT\n Safe to be called without calling Runtime.initialize first\n \"\"\"\n _nrt.memsys_shutdown()\n\n @property\n def library(self):\n \"\"\"\n Return the Library object containing the various NRT functions.\n \"\"\"\n return self._library\n\n def meminfo_new(self, data, pyobj):\n \"\"\"\n Returns a MemInfo object that tracks memory at `data` owned by `pyobj`.\n MemInfo will acquire a reference on `pyobj`.\n The release of MemInfo will release a reference on `pyobj`.\n \"\"\"\n mi = _nrt.meminfo_new(data, pyobj)\n return MemInfo(mi)\n\n def meminfo_alloc(self, size, safe=False):\n \"\"\"\n Allocate a new memory of `size` bytes and returns a MemInfo object\n that tracks the allocation. 
When there is no more reference to the\n MemInfo object, the underlying memory will be deallocated.\n\n If `safe` flag is True, the memory is allocated using the `safe` scheme.\n This is used for debugging and testing purposes.\n See `NRT_MemInfo_alloc_safe()` in \"nrt.h\" for details.\n \"\"\"\n if safe:\n mi = _nrt.meminfo_alloc_safe(size)\n else:\n mi = _nrt.meminfo_alloc(size)\n return MemInfo(mi)\n\n def get_allocation_stats(self):\n \"\"\"\n Returns a namedtuple of (alloc, free, mi_alloc, mi_free) for count of\n each memory operations.\n \"\"\"\n return _nrt_mstats(alloc=_nrt.memsys_get_stats_alloc(),\n free=_nrt.memsys_get_stats_free(),\n mi_alloc=_nrt.memsys_get_stats_mi_alloc(),\n mi_free=_nrt.memsys_get_stats_mi_free())\n\n\n# Alias to _nrt_python._MemInfo\nMemInfo = _nrt._MemInfo\n\n# Create runtime\n_nrt.memsys_use_cpython_allocator()\nrtsys = _Runtime()\n\n# Install finalizer\n_finalize(rtsys, _Runtime.shutdown)\n\n# Avoid future use of the class\ndel _Runtime\n", "path": "numba/runtime/nrt.py"}]} | 1,355 | 106 |
gh_patches_debug_36848 | rasdani/github-patches | git_diff | pwndbg__pwndbg-1920 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The `ctx threads` (or `threads`) should display all threads no matter of context threads limit
cc: @CptGibbon we should probably add this option for convenience :)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/commands/tls.py`
Content:
```
1 """
2 Command to print the information of the current Thread Local Storage (TLS).
3 """
4 from __future__ import annotations
5
6 import argparse
7
8 import pwndbg.commands
9 import pwndbg.gdblib.tls
10 from pwndbg.color import message
11 from pwndbg.commands import CommandCategory
12
13 parser = argparse.ArgumentParser(
14 formatter_class=argparse.RawTextHelpFormatter,
15 description="Print out base address of the current Thread Local Storage (TLS).",
16 )
17
18 parser.add_argument(
19 "-p",
20 "--pthread-self",
21 action="store_true",
22 default=False,
23 help="Try to get the address of TLS by calling pthread_self().",
24 )
25
26
27 @pwndbg.commands.ArgparsedCommand(parser, category=CommandCategory.LINUX)
28 @pwndbg.commands.OnlyWhenRunning
29 @pwndbg.commands.OnlyWhenUserspace
30 def tls(pthread_self=False) -> None:
31 tls_base = (
32 pwndbg.gdblib.tls.find_address_with_register()
33 if not pthread_self
34 else pwndbg.gdblib.tls.find_address_with_pthread_self()
35 )
36 if pwndbg.gdblib.memory.is_readable_address(tls_base):
37 print(message.success("Thread Local Storage (TLS) base: %#x" % tls_base))
38 print(message.success("TLS is located at:"))
39 print(message.notice(pwndbg.gdblib.vmmap.find(tls_base)))
40 return
41 print(message.error("Couldn't find Thread Local Storage (TLS) base."))
42 if not pthread_self:
43 print(
44 message.notice(
45 "You can try to use -p/--pthread option to get the address of TLS by calling pthread_self().\n"
46 "(This might cause problems if the pthread_self() is not in libc or not initialized yet.)"
47 )
48 )
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pwndbg/commands/tls.py b/pwndbg/commands/tls.py
--- a/pwndbg/commands/tls.py
+++ b/pwndbg/commands/tls.py
@@ -5,6 +5,10 @@
import argparse
+import gdb
+from tabulate import tabulate
+
+import pwndbg.color.memory as M
import pwndbg.commands
import pwndbg.gdblib.tls
from pwndbg.color import message
@@ -46,3 +50,97 @@
"(This might cause problems if the pthread_self() is not in libc or not initialized yet.)"
)
)
+
+
+parser = argparse.ArgumentParser(
+ formatter_class=argparse.RawTextHelpFormatter,
+ description="List all threads belonging to the selected inferior.",
+)
+group = parser.add_mutually_exclusive_group()
+
+group.add_argument(
+ "num_threads",
+ type=int,
+ nargs="?",
+ default=None,
+ help="Number of threads to display. Omit to display all threads.",
+)
+
+group.add_argument(
+ "-c",
+ "--config",
+ action="store_true",
+ dest="respect_config",
+ help="Respect context-max-threads config to limit number of threads displayed.",
+)
+
+
[email protected](parser, category=CommandCategory.LINUX)
[email protected]
[email protected]
+def threads(num_threads, respect_config) -> None:
+ table = []
+ headers = ["global_num", "name", "status", "pc", "symbol"]
+ bold_green = lambda text: pwndbg.color.bold(pwndbg.color.green(text))
+
+ try:
+ original_thread = gdb.selected_thread()
+ except SystemError:
+ original_thread = None
+
+ all_threads = gdb.selected_inferior().threads()[::-1]
+
+ displayed_threads = []
+
+ if original_thread is not None and original_thread.is_valid():
+ displayed_threads.append(original_thread)
+
+ for thread in all_threads:
+ if respect_config and len(displayed_threads) >= int(
+ pwndbg.commands.context.config_max_threads_display
+ ):
+ break
+ elif num_threads is not None and len(displayed_threads) >= num_threads:
+ break
+
+ if thread.is_valid() and thread is not original_thread:
+ displayed_threads.append(thread)
+
+ for thread in displayed_threads:
+ name = thread.name or ""
+
+ if thread is original_thread:
+ row = [
+ bold_green(thread.global_num),
+ bold_green(name),
+ ]
+ else:
+ row = [
+ thread.global_num,
+ name,
+ ]
+
+ row.append(pwndbg.commands.context.get_thread_status(thread))
+
+ if thread.is_stopped():
+ thread.switch()
+ pc = gdb.selected_frame().pc()
+
+ pc_colored = M.get(pc)
+ symbol = pwndbg.gdblib.symbol.get(pc)
+
+ row.append(pc_colored)
+
+ if symbol:
+ if thread is original_thread:
+ row.append(bold_green(symbol))
+ else:
+ row.append(symbol)
+
+ table.append(row)
+
+ if original_thread is not None and original_thread.is_valid():
+ original_thread.switch()
+
+ print(tabulate(table, headers))
+ print(f"\nShowing {len(displayed_threads)} of {len(all_threads)} threads.")
| {"golden_diff": "diff --git a/pwndbg/commands/tls.py b/pwndbg/commands/tls.py\n--- a/pwndbg/commands/tls.py\n+++ b/pwndbg/commands/tls.py\n@@ -5,6 +5,10 @@\n \n import argparse\n \n+import gdb\n+from tabulate import tabulate\n+\n+import pwndbg.color.memory as M\n import pwndbg.commands\n import pwndbg.gdblib.tls\n from pwndbg.color import message\n@@ -46,3 +50,97 @@\n \"(This might cause problems if the pthread_self() is not in libc or not initialized yet.)\"\n )\n )\n+\n+\n+parser = argparse.ArgumentParser(\n+ formatter_class=argparse.RawTextHelpFormatter,\n+ description=\"List all threads belonging to the selected inferior.\",\n+)\n+group = parser.add_mutually_exclusive_group()\n+\n+group.add_argument(\n+ \"num_threads\",\n+ type=int,\n+ nargs=\"?\",\n+ default=None,\n+ help=\"Number of threads to display. Omit to display all threads.\",\n+)\n+\n+group.add_argument(\n+ \"-c\",\n+ \"--config\",\n+ action=\"store_true\",\n+ dest=\"respect_config\",\n+ help=\"Respect context-max-threads config to limit number of threads displayed.\",\n+)\n+\n+\[email protected](parser, category=CommandCategory.LINUX)\[email protected]\[email protected]\n+def threads(num_threads, respect_config) -> None:\n+ table = []\n+ headers = [\"global_num\", \"name\", \"status\", \"pc\", \"symbol\"]\n+ bold_green = lambda text: pwndbg.color.bold(pwndbg.color.green(text))\n+\n+ try:\n+ original_thread = gdb.selected_thread()\n+ except SystemError:\n+ original_thread = None\n+\n+ all_threads = gdb.selected_inferior().threads()[::-1]\n+\n+ displayed_threads = []\n+\n+ if original_thread is not None and original_thread.is_valid():\n+ displayed_threads.append(original_thread)\n+\n+ for thread in all_threads:\n+ if respect_config and len(displayed_threads) >= int(\n+ pwndbg.commands.context.config_max_threads_display\n+ ):\n+ break\n+ elif num_threads is not None and len(displayed_threads) >= num_threads:\n+ break\n+\n+ if thread.is_valid() and thread is not original_thread:\n+ displayed_threads.append(thread)\n+\n+ for thread in displayed_threads:\n+ name = thread.name or \"\"\n+\n+ if thread is original_thread:\n+ row = [\n+ bold_green(thread.global_num),\n+ bold_green(name),\n+ ]\n+ else:\n+ row = [\n+ thread.global_num,\n+ name,\n+ ]\n+\n+ row.append(pwndbg.commands.context.get_thread_status(thread))\n+\n+ if thread.is_stopped():\n+ thread.switch()\n+ pc = gdb.selected_frame().pc()\n+\n+ pc_colored = M.get(pc)\n+ symbol = pwndbg.gdblib.symbol.get(pc)\n+\n+ row.append(pc_colored)\n+\n+ if symbol:\n+ if thread is original_thread:\n+ row.append(bold_green(symbol))\n+ else:\n+ row.append(symbol)\n+\n+ table.append(row)\n+\n+ if original_thread is not None and original_thread.is_valid():\n+ original_thread.switch()\n+\n+ print(tabulate(table, headers))\n+ print(f\"\\nShowing {len(displayed_threads)} of {len(all_threads)} threads.\")\n", "issue": "The `ctx threads` (or `threads`) should display all threads no matter of context threads limit\ncc: @CptGibbon we should probably add this option for convenience :)\n", "before_files": [{"content": "\"\"\"\nCommand to print the information of the current Thread Local Storage (TLS).\n\"\"\"\nfrom __future__ import annotations\n\nimport argparse\n\nimport pwndbg.commands\nimport pwndbg.gdblib.tls\nfrom pwndbg.color import message\nfrom pwndbg.commands import CommandCategory\n\nparser = argparse.ArgumentParser(\n formatter_class=argparse.RawTextHelpFormatter,\n description=\"Print out base address of the current Thread Local Storage (TLS).\",\n)\n\nparser.add_argument(\n \"-p\",\n 
\"--pthread-self\",\n action=\"store_true\",\n default=False,\n help=\"Try to get the address of TLS by calling pthread_self().\",\n)\n\n\[email protected](parser, category=CommandCategory.LINUX)\[email protected]\[email protected]\ndef tls(pthread_self=False) -> None:\n tls_base = (\n pwndbg.gdblib.tls.find_address_with_register()\n if not pthread_self\n else pwndbg.gdblib.tls.find_address_with_pthread_self()\n )\n if pwndbg.gdblib.memory.is_readable_address(tls_base):\n print(message.success(\"Thread Local Storage (TLS) base: %#x\" % tls_base))\n print(message.success(\"TLS is located at:\"))\n print(message.notice(pwndbg.gdblib.vmmap.find(tls_base)))\n return\n print(message.error(\"Couldn't find Thread Local Storage (TLS) base.\"))\n if not pthread_self:\n print(\n message.notice(\n \"You can try to use -p/--pthread option to get the address of TLS by calling pthread_self().\\n\"\n \"(This might cause problems if the pthread_self() is not in libc or not initialized yet.)\"\n )\n )\n", "path": "pwndbg/commands/tls.py"}], "after_files": [{"content": "\"\"\"\nCommand to print the information of the current Thread Local Storage (TLS).\n\"\"\"\nfrom __future__ import annotations\n\nimport argparse\n\nimport gdb\nfrom tabulate import tabulate\n\nimport pwndbg.color.memory as M\nimport pwndbg.commands\nimport pwndbg.gdblib.tls\nfrom pwndbg.color import message\nfrom pwndbg.commands import CommandCategory\n\nparser = argparse.ArgumentParser(\n formatter_class=argparse.RawTextHelpFormatter,\n description=\"Print out base address of the current Thread Local Storage (TLS).\",\n)\n\nparser.add_argument(\n \"-p\",\n \"--pthread-self\",\n action=\"store_true\",\n default=False,\n help=\"Try to get the address of TLS by calling pthread_self().\",\n)\n\n\[email protected](parser, category=CommandCategory.LINUX)\[email protected]\[email protected]\ndef tls(pthread_self=False) -> None:\n tls_base = (\n pwndbg.gdblib.tls.find_address_with_register()\n if not pthread_self\n else pwndbg.gdblib.tls.find_address_with_pthread_self()\n )\n if pwndbg.gdblib.memory.is_readable_address(tls_base):\n print(message.success(\"Thread Local Storage (TLS) base: %#x\" % tls_base))\n print(message.success(\"TLS is located at:\"))\n print(message.notice(pwndbg.gdblib.vmmap.find(tls_base)))\n return\n print(message.error(\"Couldn't find Thread Local Storage (TLS) base.\"))\n if not pthread_self:\n print(\n message.notice(\n \"You can try to use -p/--pthread option to get the address of TLS by calling pthread_self().\\n\"\n \"(This might cause problems if the pthread_self() is not in libc or not initialized yet.)\"\n )\n )\n\n\nparser = argparse.ArgumentParser(\n formatter_class=argparse.RawTextHelpFormatter,\n description=\"List all threads belonging to the selected inferior.\",\n)\ngroup = parser.add_mutually_exclusive_group()\n\ngroup.add_argument(\n \"num_threads\",\n type=int,\n nargs=\"?\",\n default=None,\n help=\"Number of threads to display. 
Omit to display all threads.\",\n)\n\ngroup.add_argument(\n \"-c\",\n \"--config\",\n action=\"store_true\",\n dest=\"respect_config\",\n help=\"Respect context-max-threads config to limit number of threads displayed.\",\n)\n\n\[email protected](parser, category=CommandCategory.LINUX)\[email protected]\[email protected]\ndef threads(num_threads, respect_config) -> None:\n table = []\n headers = [\"global_num\", \"name\", \"status\", \"pc\", \"symbol\"]\n bold_green = lambda text: pwndbg.color.bold(pwndbg.color.green(text))\n\n try:\n original_thread = gdb.selected_thread()\n except SystemError:\n original_thread = None\n\n all_threads = gdb.selected_inferior().threads()[::-1]\n\n displayed_threads = []\n\n if original_thread is not None and original_thread.is_valid():\n displayed_threads.append(original_thread)\n\n for thread in all_threads:\n if respect_config and len(displayed_threads) >= int(\n pwndbg.commands.context.config_max_threads_display\n ):\n break\n elif num_threads is not None and len(displayed_threads) >= num_threads:\n break\n\n if thread.is_valid() and thread is not original_thread:\n displayed_threads.append(thread)\n\n for thread in displayed_threads:\n name = thread.name or \"\"\n\n if thread is original_thread:\n row = [\n bold_green(thread.global_num),\n bold_green(name),\n ]\n else:\n row = [\n thread.global_num,\n name,\n ]\n\n row.append(pwndbg.commands.context.get_thread_status(thread))\n\n if thread.is_stopped():\n thread.switch()\n pc = gdb.selected_frame().pc()\n\n pc_colored = M.get(pc)\n symbol = pwndbg.gdblib.symbol.get(pc)\n\n row.append(pc_colored)\n\n if symbol:\n if thread is original_thread:\n row.append(bold_green(symbol))\n else:\n row.append(symbol)\n\n table.append(row)\n\n if original_thread is not None and original_thread.is_valid():\n original_thread.switch()\n\n print(tabulate(table, headers))\n print(f\"\\nShowing {len(displayed_threads)} of {len(all_threads)} threads.\")\n", "path": "pwndbg/commands/tls.py"}]} | 766 | 781 |
gh_patches_debug_8221 | rasdani/github-patches | git_diff | cisagov__manage.get.gov-1094 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Check Domain availability via epp-Testing
### Issue Description
When adding the /availability endpoint we will need to send a CheckDomain request to epp to see if the domain is available. This epp function is already implemented in domain.py and is called available(). It just needs to be tested and updated if the tests show any problems with the implementation.
### AC
- [x] unit tests added for available
- [x] manually test via sandbox with OT&E to be sure that this is working as expected
- [x] update the implementation as needed or desired
- [x] in your tests, ensure that this function can be called by just doing Domain.available() and not by having an instance of a domain
### Additional Context (optional)
This must be tested by using Domain.available because the /availability endpoint (when implemented) will not have access to any particular domain object, and this function needs to work on its own.
### Issue Link
blocks: #1015
--- END ISSUE ---
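As a rough illustration of what the acceptance criteria ask for, here is a self-contained sketch of an available() that can be called directly on the class, plus a unit test that exercises it with a mocked client. The Domain class, the stand-in registry attribute, and the send_check_domain call below are assumptions made for illustration only; they are not the project's real epplibwrapper API, which would send an epplib CheckDomain command instead.
```python
from unittest.mock import MagicMock


class Domain:
    # Stand-in for the real EPP client wrapper; the actual project would send
    # a CheckDomain command through epplibwrapper instead.
    registry = MagicMock()

    @classmethod
    def available(cls, domain_name: str) -> bool:
        """Check availability without needing a saved Domain instance."""
        response = cls.registry.send_check_domain(domain_name)
        return bool(response.available)


def test_available_is_usable_without_an_instance():
    Domain.registry.send_check_domain.return_value = MagicMock(available=True)
    # Called on the class, not on an instance, as the acceptance criteria require.
    assert Domain.available("example.gov") is True
    Domain.registry.send_check_domain.assert_called_once_with("example.gov")


test_available_is_usable_without_an_instance()
```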
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/epplibwrapper/__init__.py`
Content:
```
1 import logging
2 from types import SimpleNamespace
3
4 try:
5 from epplib import constants
6 except ImportError:
7 # allow epplibwrapper to load without epplib, for testing and development
8 pass
9
10 logger = logging.getLogger(__name__)
11
12 NAMESPACE = SimpleNamespace(
13 EPP="urn:ietf:params:xml:ns:epp-1.0",
14 XSI="http://www.w3.org/2001/XMLSchema-instance",
15 FRED="noop",
16 NIC_CONTACT="urn:ietf:params:xml:ns:contact-1.0",
17 NIC_DOMAIN="urn:ietf:params:xml:ns:domain-1.0",
18 NIC_ENUMVAL="noop",
19 NIC_EXTRA_ADDR="noop",
20 NIC_HOST="urn:ietf:params:xml:ns:host-1.0",
21 NIC_KEYSET="noop",
22 NIC_NSSET="noop",
23 )
24
25 SCHEMA_LOCATION = SimpleNamespace(
26 XSI="urn:ietf:params:xml:ns:epp-1.0 epp-1.0.xsd",
27 FRED="noop fred-1.5.0.xsd",
28 NIC_CONTACT="urn:ietf:params:xml:ns:contact-1.0 contact-1.0.xsd",
29 NIC_DOMAIN="urn:ietf:params:xml:ns:domain-1.0 domain-1.0.xsd",
30 NIC_ENUMVAL="noop enumval-1.2.0.xsd",
31 NIC_EXTRA_ADDR="noop extra-addr-1.0.0.xsd",
32 NIC_HOST="urn:ietf:params:xml:ns:host-1.0 host-1.0.xsd",
33 NIC_KEYSET="noop keyset-1.3.2.xsd",
34 NIC_NSSET="noop nsset-1.2.2.xsd",
35 )
36
37 try:
38 constants.NAMESPACE = NAMESPACE
39 constants.SCHEMA_LOCATION = SCHEMA_LOCATION
40 except NameError:
41 pass
42
43 # Attn: these imports should NOT be at the top of the file
44 try:
45 from .client import CLIENT, commands
46 from .errors import RegistryError, ErrorCode
47 from epplib.models import common
48 except ImportError:
49 pass
50
51 __all__ = [
52 "CLIENT",
53 "commands",
54 "common",
55 "ErrorCode",
56 "RegistryError",
57 ]
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/epplibwrapper/__init__.py b/src/epplibwrapper/__init__.py
--- a/src/epplibwrapper/__init__.py
+++ b/src/epplibwrapper/__init__.py
@@ -45,6 +45,7 @@
from .client import CLIENT, commands
from .errors import RegistryError, ErrorCode
from epplib.models import common
+ from epplib import responses
except ImportError:
pass
@@ -52,6 +53,7 @@
"CLIENT",
"commands",
"common",
+ "responses",
"ErrorCode",
"RegistryError",
]
| {"golden_diff": "diff --git a/src/epplibwrapper/__init__.py b/src/epplibwrapper/__init__.py\n--- a/src/epplibwrapper/__init__.py\n+++ b/src/epplibwrapper/__init__.py\n@@ -45,6 +45,7 @@\n from .client import CLIENT, commands\n from .errors import RegistryError, ErrorCode\n from epplib.models import common\n+ from epplib import responses\n except ImportError:\n pass\n \n@@ -52,6 +53,7 @@\n \"CLIENT\",\n \"commands\",\n \"common\",\n+ \"responses\",\n \"ErrorCode\",\n \"RegistryError\",\n ]\n", "issue": "Check Domain availability via epp-Testing\n### Issue Description\r\n\r\nWhen adding the /availability endpoint we will need to send a CheckDomain request to epp to see if the domain is available. This epp function is already implemented in domain.py and is called available(). It just needs to be tested and updated if the test show any problem with the implementation\r\n\r\n### AC\r\n\r\n- [x] unit tests added for available\r\n- [x] manually test via sandbox with OT&E to be sure that this is working as expected \r\n- [x] update the implementation as needed or desired\r\n- [x] in your tests, ensure that this function can be called by just doing Domain.available() and not by having an instance of a domain\r\n\r\n### Additional Context (optional)\r\n\r\nThis must be tested by using Domain.available because the /availability endpoint (when implemented) will not have access to any particular domain object and this function needs to be able to be performed on its own.\r\n\r\n### Issue Link\r\nblocks: #1015 \n", "before_files": [{"content": "import logging\nfrom types import SimpleNamespace\n\ntry:\n from epplib import constants\nexcept ImportError:\n # allow epplibwrapper to load without epplib, for testing and development\n pass\n\nlogger = logging.getLogger(__name__)\n\nNAMESPACE = SimpleNamespace(\n EPP=\"urn:ietf:params:xml:ns:epp-1.0\",\n XSI=\"http://www.w3.org/2001/XMLSchema-instance\",\n FRED=\"noop\",\n NIC_CONTACT=\"urn:ietf:params:xml:ns:contact-1.0\",\n NIC_DOMAIN=\"urn:ietf:params:xml:ns:domain-1.0\",\n NIC_ENUMVAL=\"noop\",\n NIC_EXTRA_ADDR=\"noop\",\n NIC_HOST=\"urn:ietf:params:xml:ns:host-1.0\",\n NIC_KEYSET=\"noop\",\n NIC_NSSET=\"noop\",\n)\n\nSCHEMA_LOCATION = SimpleNamespace(\n XSI=\"urn:ietf:params:xml:ns:epp-1.0 epp-1.0.xsd\",\n FRED=\"noop fred-1.5.0.xsd\",\n NIC_CONTACT=\"urn:ietf:params:xml:ns:contact-1.0 contact-1.0.xsd\",\n NIC_DOMAIN=\"urn:ietf:params:xml:ns:domain-1.0 domain-1.0.xsd\",\n NIC_ENUMVAL=\"noop enumval-1.2.0.xsd\",\n NIC_EXTRA_ADDR=\"noop extra-addr-1.0.0.xsd\",\n NIC_HOST=\"urn:ietf:params:xml:ns:host-1.0 host-1.0.xsd\",\n NIC_KEYSET=\"noop keyset-1.3.2.xsd\",\n NIC_NSSET=\"noop nsset-1.2.2.xsd\",\n)\n\ntry:\n constants.NAMESPACE = NAMESPACE\n constants.SCHEMA_LOCATION = SCHEMA_LOCATION\nexcept NameError:\n pass\n\n# Attn: these imports should NOT be at the top of the file\ntry:\n from .client import CLIENT, commands\n from .errors import RegistryError, ErrorCode\n from epplib.models import common\nexcept ImportError:\n pass\n\n__all__ = [\n \"CLIENT\",\n \"commands\",\n \"common\",\n \"ErrorCode\",\n \"RegistryError\",\n]\n", "path": "src/epplibwrapper/__init__.py"}], "after_files": [{"content": "import logging\nfrom types import SimpleNamespace\n\ntry:\n from epplib import constants\nexcept ImportError:\n # allow epplibwrapper to load without epplib, for testing and development\n pass\n\nlogger = logging.getLogger(__name__)\n\nNAMESPACE = SimpleNamespace(\n EPP=\"urn:ietf:params:xml:ns:epp-1.0\",\n XSI=\"http://www.w3.org/2001/XMLSchema-instance\",\n 
FRED=\"noop\",\n NIC_CONTACT=\"urn:ietf:params:xml:ns:contact-1.0\",\n NIC_DOMAIN=\"urn:ietf:params:xml:ns:domain-1.0\",\n NIC_ENUMVAL=\"noop\",\n NIC_EXTRA_ADDR=\"noop\",\n NIC_HOST=\"urn:ietf:params:xml:ns:host-1.0\",\n NIC_KEYSET=\"noop\",\n NIC_NSSET=\"noop\",\n)\n\nSCHEMA_LOCATION = SimpleNamespace(\n XSI=\"urn:ietf:params:xml:ns:epp-1.0 epp-1.0.xsd\",\n FRED=\"noop fred-1.5.0.xsd\",\n NIC_CONTACT=\"urn:ietf:params:xml:ns:contact-1.0 contact-1.0.xsd\",\n NIC_DOMAIN=\"urn:ietf:params:xml:ns:domain-1.0 domain-1.0.xsd\",\n NIC_ENUMVAL=\"noop enumval-1.2.0.xsd\",\n NIC_EXTRA_ADDR=\"noop extra-addr-1.0.0.xsd\",\n NIC_HOST=\"urn:ietf:params:xml:ns:host-1.0 host-1.0.xsd\",\n NIC_KEYSET=\"noop keyset-1.3.2.xsd\",\n NIC_NSSET=\"noop nsset-1.2.2.xsd\",\n)\n\ntry:\n constants.NAMESPACE = NAMESPACE\n constants.SCHEMA_LOCATION = SCHEMA_LOCATION\nexcept NameError:\n pass\n\n# Attn: these imports should NOT be at the top of the file\ntry:\n from .client import CLIENT, commands\n from .errors import RegistryError, ErrorCode\n from epplib.models import common\n from epplib import responses\nexcept ImportError:\n pass\n\n__all__ = [\n \"CLIENT\",\n \"commands\",\n \"common\",\n \"responses\",\n \"ErrorCode\",\n \"RegistryError\",\n]\n", "path": "src/epplibwrapper/__init__.py"}]} | 1,067 | 140 |
gh_patches_debug_13162 | rasdani/github-patches | git_diff | chainer__chainer-2143 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Stop using ABC in Serializer
AbstractSerializer is currently written as an abstract base class. I don't think ABC support is needed here.
--- END ISSUE ---
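For context on the trade-off being proposed, the self-contained snippet below contrasts an abc-based base class with a plain one: the ABC version rejects an incomplete subclass at instantiation time, while the plain version only raises NotImplementedError when the missing method is actually called. The classes here are illustrative stand-ins, not Chainer's serializers.
```python
import abc


class AbstractWithABC(abc.ABC):
    @abc.abstractmethod
    def __call__(self, key, value):
        raise NotImplementedError


class AbstractPlain:
    def __call__(self, key, value):
        raise NotImplementedError


class IncompleteABC(AbstractWithABC):
    pass


class IncompletePlain(AbstractPlain):
    pass


try:
    IncompleteABC()          # TypeError at instantiation time
except TypeError as exc:
    print("ABC:", exc)

obj = IncompletePlain()      # fine to create...
try:
    obj("key", 1)            # ...but raises only when the method is used
except NotImplementedError:
    print("plain base: NotImplementedError on call")
```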
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/serializer.py`
Content:
```
1 import abc
2
3 import six
4
5
6 @six.add_metaclass(abc.ABCMeta)
7 class AbstractSerializer(object):
8
9 """Abstract base class of all serializers and deserializers."""
10
11 @abc.abstractmethod
12 def __getitem__(self, key):
13 """Gets a child serializer.
14
15 This operator creates a _child_ serializer represented by the given
16 key.
17
18 Args:
19 key (str): Name of the child serializer.
20
21 """
22 raise NotImplementedError
23
24 @abc.abstractmethod
25 def __call__(self, key, value):
26 """Serializes or deserializes a value by given name.
27
28 This operator saves or loads a value by given name.
29
30 If this is a serializer, then the value is simply saved at the key.
31 Note that some type information might be missed depending on the
32 implementation (and the target file format).
33
34 If this is a deserializer, then the value is loaded by the key. The
35 deserialization differently works on scalars and arrays. For scalars,
36 the ``value`` argument is used just for determining the type of
37 restored value to be converted, and the converted value is returned.
38 For arrays, the restored elements are directly copied into the
39 ``value`` argument. String values are treated like scalars. If the
40 ``value`` argument is ``None``, the type of the restored value will
41 typically be a numpy array but can depend on the particular subclass
42 implementation.
43
44 Args:
45 key (str): Name of the serialization entry.
46 value (scalar, array, None, or str): Object to be (de)serialized.
47 ``None`` is only supported by deserializers.
48
49 Returns:
50 Serialized or deserialized value.
51
52 """
53 raise NotImplementedError
54
55
56 class Serializer(AbstractSerializer):
57
58 """Base class of all serializers."""
59
60 def save(self, obj):
61 """Saves an object by this serializer.
62
63 This is equivalent to ``obj.serialize(self)``.
64
65 Args:
66 obj: Target object to be serialized.
67
68 """
69 obj.serialize(self)
70
71
72 class Deserializer(AbstractSerializer):
73
74 """Base class of all deserializers."""
75
76 def load(self, obj):
77 """Loads an object from this deserializer.
78
79 This is equivalent to ``obj.serialize(self)``.
80
81 Args:
82 obj: Target object to be serialized.
83
84 """
85 obj.serialize(self)
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainer/serializer.py b/chainer/serializer.py
--- a/chainer/serializer.py
+++ b/chainer/serializer.py
@@ -1,14 +1,7 @@
-import abc
-
-import six
-
-
[email protected]_metaclass(abc.ABCMeta)
class AbstractSerializer(object):
"""Abstract base class of all serializers and deserializers."""
- @abc.abstractmethod
def __getitem__(self, key):
"""Gets a child serializer.
@@ -21,7 +14,6 @@
"""
raise NotImplementedError
- @abc.abstractmethod
def __call__(self, key, value):
"""Serializes or deserializes a value by given name.
| {"golden_diff": "diff --git a/chainer/serializer.py b/chainer/serializer.py\n--- a/chainer/serializer.py\n+++ b/chainer/serializer.py\n@@ -1,14 +1,7 @@\n-import abc\n-\n-import six\n-\n-\[email protected]_metaclass(abc.ABCMeta)\n class AbstractSerializer(object):\n \n \"\"\"Abstract base class of all serializers and deserializers.\"\"\"\n \n- @abc.abstractmethod\n def __getitem__(self, key):\n \"\"\"Gets a child serializer.\n \n@@ -21,7 +14,6 @@\n \"\"\"\n raise NotImplementedError\n \n- @abc.abstractmethod\n def __call__(self, key, value):\n \"\"\"Serializes or deserializes a value by given name.\n", "issue": "Stop using ABC in Serializer\nAbstractSerializer is currently written as an abstract base class. I don't think it is needed to support ABC.\n", "before_files": [{"content": "import abc\n\nimport six\n\n\[email protected]_metaclass(abc.ABCMeta)\nclass AbstractSerializer(object):\n\n \"\"\"Abstract base class of all serializers and deserializers.\"\"\"\n\n @abc.abstractmethod\n def __getitem__(self, key):\n \"\"\"Gets a child serializer.\n\n This operator creates a _child_ serializer represented by the given\n key.\n\n Args:\n key (str): Name of the child serializer.\n\n \"\"\"\n raise NotImplementedError\n\n @abc.abstractmethod\n def __call__(self, key, value):\n \"\"\"Serializes or deserializes a value by given name.\n\n This operator saves or loads a value by given name.\n\n If this is a serializer, then the value is simply saved at the key.\n Note that some type information might be missed depending on the\n implementation (and the target file format).\n\n If this is a deserializer, then the value is loaded by the key. The\n deserialization differently works on scalars and arrays. For scalars,\n the ``value`` argument is used just for determining the type of\n restored value to be converted, and the converted value is returned.\n For arrays, the restored elements are directly copied into the\n ``value`` argument. String values are treated like scalars. 
If the\n ``value`` argument is ``None``, the type of the restored value will\n typically be a numpy array but can depend on the particular subclass\n implementation.\n\n Args:\n key (str): Name of the serialization entry.\n value (scalar, array, None, or str): Object to be (de)serialized.\n ``None`` is only supported by deserializers.\n\n Returns:\n Serialized or deserialized value.\n\n \"\"\"\n raise NotImplementedError\n\n\nclass Serializer(AbstractSerializer):\n\n \"\"\"Base class of all serializers.\"\"\"\n\n def save(self, obj):\n \"\"\"Saves an object by this serializer.\n\n This is equivalent to ``obj.serialize(self)``.\n\n Args:\n obj: Target object to be serialized.\n\n \"\"\"\n obj.serialize(self)\n\n\nclass Deserializer(AbstractSerializer):\n\n \"\"\"Base class of all deserializers.\"\"\"\n\n def load(self, obj):\n \"\"\"Loads an object from this deserializer.\n\n This is equivalent to ``obj.serialize(self)``.\n\n Args:\n obj: Target object to be serialized.\n\n \"\"\"\n obj.serialize(self)\n", "path": "chainer/serializer.py"}], "after_files": [{"content": "class AbstractSerializer(object):\n\n \"\"\"Abstract base class of all serializers and deserializers.\"\"\"\n\n def __getitem__(self, key):\n \"\"\"Gets a child serializer.\n\n This operator creates a _child_ serializer represented by the given\n key.\n\n Args:\n key (str): Name of the child serializer.\n\n \"\"\"\n raise NotImplementedError\n\n def __call__(self, key, value):\n \"\"\"Serializes or deserializes a value by given name.\n\n This operator saves or loads a value by given name.\n\n If this is a serializer, then the value is simply saved at the key.\n Note that some type information might be missed depending on the\n implementation (and the target file format).\n\n If this is a deserializer, then the value is loaded by the key. The\n deserialization differently works on scalars and arrays. For scalars,\n the ``value`` argument is used just for determining the type of\n restored value to be converted, and the converted value is returned.\n For arrays, the restored elements are directly copied into the\n ``value`` argument. String values are treated like scalars. If the\n ``value`` argument is ``None``, the type of the restored value will\n typically be a numpy array but can depend on the particular subclass\n implementation.\n\n Args:\n key (str): Name of the serialization entry.\n value (scalar, array, None, or str): Object to be (de)serialized.\n ``None`` is only supported by deserializers.\n\n Returns:\n Serialized or deserialized value.\n\n \"\"\"\n raise NotImplementedError\n\n\nclass Serializer(AbstractSerializer):\n\n \"\"\"Base class of all serializers.\"\"\"\n\n def save(self, obj):\n \"\"\"Saves an object by this serializer.\n\n This is equivalent to ``obj.serialize(self)``.\n\n Args:\n obj: Target object to be serialized.\n\n \"\"\"\n obj.serialize(self)\n\n\nclass Deserializer(AbstractSerializer):\n\n \"\"\"Base class of all deserializers.\"\"\"\n\n def load(self, obj):\n \"\"\"Loads an object from this deserializer.\n\n This is equivalent to ``obj.serialize(self)``.\n\n Args:\n obj: Target object to be serialized.\n\n \"\"\"\n obj.serialize(self)\n", "path": "chainer/serializer.py"}]} | 969 | 158 |
gh_patches_debug_9091 | rasdani/github-patches | git_diff | pytorch__ignite-320 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ParamScheduler docs missing
No docs on `ParamScheduler` and related classes on the [site](https://pytorch.org/ignite/contrib/handlers.html).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/contrib/handlers/__init__.py`
Content:
```
1
2 from ignite.contrib.handlers.param_scheduler import ParamScheduler, CyclicalScheduler, \
3 LinearCyclicalScheduler, CosineAnnealingScheduler
4
5 from ignite.contrib.handlers.tqdm_logger import ProgressBar
6
7 __all__ = ['ProgressBar']
8
```
Path: `ignite/contrib/engines/__init__.py`
Content:
```
1 # coding: utf-8
2
3 from ignite.contrib.engines.tbptt import create_supervised_tbptt_trainer
4 from ignite.contrib.engines.tbptt import Tbptt_Events
5
6
7 __all__ = ["create_supervised_tbptt_trainer", "Tbptt_Events"]
8
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ignite/contrib/engines/__init__.py b/ignite/contrib/engines/__init__.py
--- a/ignite/contrib/engines/__init__.py
+++ b/ignite/contrib/engines/__init__.py
@@ -2,6 +2,3 @@
from ignite.contrib.engines.tbptt import create_supervised_tbptt_trainer
from ignite.contrib.engines.tbptt import Tbptt_Events
-
-
-__all__ = ["create_supervised_tbptt_trainer", "Tbptt_Events"]
diff --git a/ignite/contrib/handlers/__init__.py b/ignite/contrib/handlers/__init__.py
--- a/ignite/contrib/handlers/__init__.py
+++ b/ignite/contrib/handlers/__init__.py
@@ -3,5 +3,3 @@
LinearCyclicalScheduler, CosineAnnealingScheduler
from ignite.contrib.handlers.tqdm_logger import ProgressBar
-
-__all__ = ['ProgressBar']
| {"golden_diff": "diff --git a/ignite/contrib/engines/__init__.py b/ignite/contrib/engines/__init__.py\n--- a/ignite/contrib/engines/__init__.py\n+++ b/ignite/contrib/engines/__init__.py\n@@ -2,6 +2,3 @@\n \n from ignite.contrib.engines.tbptt import create_supervised_tbptt_trainer\n from ignite.contrib.engines.tbptt import Tbptt_Events\n-\n-\n-__all__ = [\"create_supervised_tbptt_trainer\", \"Tbptt_Events\"]\ndiff --git a/ignite/contrib/handlers/__init__.py b/ignite/contrib/handlers/__init__.py\n--- a/ignite/contrib/handlers/__init__.py\n+++ b/ignite/contrib/handlers/__init__.py\n@@ -3,5 +3,3 @@\n LinearCyclicalScheduler, CosineAnnealingScheduler\n \n from ignite.contrib.handlers.tqdm_logger import ProgressBar\n-\n-__all__ = ['ProgressBar']\n", "issue": "ParamScheduler docs missing\nNo docs on `ParamScheduler` and related classes on the [site](https://pytorch.org/ignite/contrib/handlers.html).\n", "before_files": [{"content": "\nfrom ignite.contrib.handlers.param_scheduler import ParamScheduler, CyclicalScheduler, \\\n LinearCyclicalScheduler, CosineAnnealingScheduler\n\nfrom ignite.contrib.handlers.tqdm_logger import ProgressBar\n\n__all__ = ['ProgressBar']\n", "path": "ignite/contrib/handlers/__init__.py"}, {"content": "# coding: utf-8\n\nfrom ignite.contrib.engines.tbptt import create_supervised_tbptt_trainer\nfrom ignite.contrib.engines.tbptt import Tbptt_Events\n\n\n__all__ = [\"create_supervised_tbptt_trainer\", \"Tbptt_Events\"]\n", "path": "ignite/contrib/engines/__init__.py"}], "after_files": [{"content": "\nfrom ignite.contrib.handlers.param_scheduler import ParamScheduler, CyclicalScheduler, \\\n LinearCyclicalScheduler, CosineAnnealingScheduler\n\nfrom ignite.contrib.handlers.tqdm_logger import ProgressBar\n", "path": "ignite/contrib/handlers/__init__.py"}, {"content": "# coding: utf-8\n\nfrom ignite.contrib.engines.tbptt import create_supervised_tbptt_trainer\nfrom ignite.contrib.engines.tbptt import Tbptt_Events\n", "path": "ignite/contrib/engines/__init__.py"}]} | 449 | 224 |
gh_patches_debug_5758 | rasdani/github-patches | git_diff | fossasia__open-event-server-2489 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Propose attendees/ticketing API
With the orga app and the implementation of API endpoints in this PR https://github.com/fossasia/open-event-orga-server/pull/2379 we have the first steps toward an attendee API. To what extent would that overlap with a ticketing API?
What is the best way to implement this and keep it generic? Do we need two APIs - Attendees and Ticketing - or would that be handled in one API?
Also related to https://github.com/fossasia/open-event-orga-server/issues/904
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/tickets.py`
Content:
```
1 from flask.ext.restplus import Namespace
2
3 from app.helpers.ticketing import TicketingManager
4
5 from .helpers.helpers import (
6 requires_auth,
7 can_access)
8 from .helpers.utils import POST_RESPONSES
9 from .helpers.utils import Resource
10 from .helpers import custom_fields as fields
11 from ..helpers.data_getter import DataGetter
12
13 api = Namespace('tickets', description='Tickets', path='/')
14
15 ORDER = api.model('Order', {
16 'id': fields.Integer(),
17 'identifier': fields.String(),
18 'amount': fields.Float(),
19 'paid_via': fields.String(),
20 'invoice_number': fields.String(),
21 'payment_mode': fields.String(),
22 'status': fields.String(),
23 'completed_at': fields.DateTime(),
24 })
25
26 TICKET = api.model('Ticket', {
27 'id': fields.Integer(),
28 'name': fields.String(),
29 'description': fields.String(),
30 'type': fields.String(),
31 'price': fields.Float(),
32 'quantity': fields.Integer(),
33 })
34
35
36 @api.route('/events/<int:event_id>/tickets/')
37 class TicketsList(Resource):
38 @requires_auth
39 @api.doc('tickets', responses=POST_RESPONSES)
40 @api.marshal_list_with(TICKET)
41 def get(self, event_id):
42 """Get tickets of the event"""
43 return DataGetter.get_sales_open_tickets(event_id=event_id).all()
44
45
46 @api.route('/events/<int:event_id>/tickets/<int:ticket_id>')
47 class Ticket(Resource):
48 @requires_auth
49 @api.doc('ticket', responses=POST_RESPONSES)
50 @api.marshal_with(TICKET)
51 def get(self, event_id, ticket_id):
52 """Get information about a ticket"""
53 return TicketingManager.get_ticket(ticket_id=ticket_id)
54
55
56
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/api/tickets.py b/app/api/tickets.py
--- a/app/api/tickets.py
+++ b/app/api/tickets.py
@@ -52,5 +52,13 @@
"""Get information about a ticket"""
return TicketingManager.get_ticket(ticket_id=ticket_id)
[email protected]('/events/<int:event_id>/orders/<string:identifier>')
+class Order(Resource):
+ @requires_auth
+ @api.doc('order', responses=POST_RESPONSES)
+ @api.marshal_with(ORDER)
+ def get(self, event_id, identifier):
+ """Get information about a ticket"""
+ return TicketingManager.get_order_by_identifier(identifier=identifier)
| {"golden_diff": "diff --git a/app/api/tickets.py b/app/api/tickets.py\n--- a/app/api/tickets.py\n+++ b/app/api/tickets.py\n@@ -52,5 +52,13 @@\n \"\"\"Get information about a ticket\"\"\"\n return TicketingManager.get_ticket(ticket_id=ticket_id)\n \[email protected]('/events/<int:event_id>/orders/<string:identifier>')\n+class Order(Resource):\n+ @requires_auth\n+ @api.doc('order', responses=POST_RESPONSES)\n+ @api.marshal_with(ORDER)\n+ def get(self, event_id, identifier):\n+ \"\"\"Get information about a ticket\"\"\"\n+ return TicketingManager.get_order_by_identifier(identifier=identifier)\n", "issue": "Propose attendees/ticketing API\nWith the orga app and the implementation of API endpoints in this PR https://github.com/fossasia/open-event-orga-server/pull/2379 we have the first steps to an attendee API. In how far would that overlap with a ticketing API?\n\nWhat is the best way to implement this and keep it generic? Do we need two APIs - Attendees and Ticketing or would that be handled in one API?\n\nAlso related to https://github.com/fossasia/open-event-orga-server/issues/904\n\n", "before_files": [{"content": "from flask.ext.restplus import Namespace\n\nfrom app.helpers.ticketing import TicketingManager\n\nfrom .helpers.helpers import (\n requires_auth,\n can_access)\nfrom .helpers.utils import POST_RESPONSES\nfrom .helpers.utils import Resource\nfrom .helpers import custom_fields as fields\nfrom ..helpers.data_getter import DataGetter\n\napi = Namespace('tickets', description='Tickets', path='/')\n\nORDER = api.model('Order', {\n 'id': fields.Integer(),\n 'identifier': fields.String(),\n 'amount': fields.Float(),\n 'paid_via': fields.String(),\n 'invoice_number': fields.String(),\n 'payment_mode': fields.String(),\n 'status': fields.String(),\n 'completed_at': fields.DateTime(),\n})\n\nTICKET = api.model('Ticket', {\n 'id': fields.Integer(),\n 'name': fields.String(),\n 'description': fields.String(),\n 'type': fields.String(),\n 'price': fields.Float(),\n 'quantity': fields.Integer(),\n})\n\n\[email protected]('/events/<int:event_id>/tickets/')\nclass TicketsList(Resource):\n @requires_auth\n @api.doc('tickets', responses=POST_RESPONSES)\n @api.marshal_list_with(TICKET)\n def get(self, event_id):\n \"\"\"Get tickets of the event\"\"\"\n return DataGetter.get_sales_open_tickets(event_id=event_id).all()\n\n\[email protected]('/events/<int:event_id>/tickets/<int:ticket_id>')\nclass Ticket(Resource):\n @requires_auth\n @api.doc('ticket', responses=POST_RESPONSES)\n @api.marshal_with(TICKET)\n def get(self, event_id, ticket_id):\n \"\"\"Get information about a ticket\"\"\"\n return TicketingManager.get_ticket(ticket_id=ticket_id)\n\n\n\n", "path": "app/api/tickets.py"}], "after_files": [{"content": "from flask.ext.restplus import Namespace\n\nfrom app.helpers.ticketing import TicketingManager\n\nfrom .helpers.helpers import (\n requires_auth,\n can_access)\nfrom .helpers.utils import POST_RESPONSES\nfrom .helpers.utils import Resource\nfrom .helpers import custom_fields as fields\nfrom ..helpers.data_getter import DataGetter\n\napi = Namespace('tickets', description='Tickets', path='/')\n\nORDER = api.model('Order', {\n 'id': fields.Integer(),\n 'identifier': fields.String(),\n 'amount': fields.Float(),\n 'paid_via': fields.String(),\n 'invoice_number': fields.String(),\n 'payment_mode': fields.String(),\n 'status': fields.String(),\n 'completed_at': fields.DateTime(),\n})\n\nTICKET = api.model('Ticket', {\n 'id': fields.Integer(),\n 'name': fields.String(),\n 'description': 
fields.String(),\n 'type': fields.String(),\n 'price': fields.Float(),\n 'quantity': fields.Integer(),\n})\n\n\[email protected]('/events/<int:event_id>/tickets/')\nclass TicketsList(Resource):\n @requires_auth\n @api.doc('tickets', responses=POST_RESPONSES)\n @api.marshal_list_with(TICKET)\n def get(self, event_id):\n \"\"\"Get tickets of the event\"\"\"\n return DataGetter.get_sales_open_tickets(event_id=event_id).all()\n\n\[email protected]('/events/<int:event_id>/tickets/<int:ticket_id>')\nclass Ticket(Resource):\n @requires_auth\n @api.doc('ticket', responses=POST_RESPONSES)\n @api.marshal_with(TICKET)\n def get(self, event_id, ticket_id):\n \"\"\"Get information about a ticket\"\"\"\n return TicketingManager.get_ticket(ticket_id=ticket_id)\n\[email protected]('/events/<int:event_id>/orders/<string:identifier>')\nclass Order(Resource):\n @requires_auth\n @api.doc('order', responses=POST_RESPONSES)\n @api.marshal_with(ORDER)\n def get(self, event_id, identifier):\n \"\"\"Get information about a ticket\"\"\"\n return TicketingManager.get_order_by_identifier(identifier=identifier)\n\n\n", "path": "app/api/tickets.py"}]} | 852 | 153 |
gh_patches_debug_17306 | rasdani/github-patches | git_diff | cal-itp__benefits-2116 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Refactor claims handling for integer claims
During the OAuth `authorize` flow, we look for [boolean claim values](https://github.com/cal-itp/benefits/blob/dev/benefits/oauth/views.py#L75) to determine if the user is eligible.
IdG is changing their claims implementation to cut down on the size of the token being sent to Benefits. Instead of booleans, they will use integers to indicate claim values:
* `0` will indicate `False` (i.e. the claim indicates eligibility failed)
* `1` will indicate `True` (i.e. the claim indicates eligibility succeeded)
* Any other integer `>= 10` will indicate an error code
**Note:** the claim values are transmitted in the token as `str`, and should be parsed to `int` before usage.
## Acceptance Criteria
<!-- Remember to consider edge cases -->
- [ ] `authorize` processes integer claims as described above
## Additional context
While we work to implement this change, existing flows for Older Adults and Veterans will use both claim styles. New flows for CalFresh and the new Veterans API will ~only use the newer integer claim style, so this refactor is necessary for supporting those flows.~ also support both styles to allow us time to implement and cut over. There are an entirely new set of scopes created for the integer-based claims so as not to interfere with the existing implementation.
Once we have this change tested and deployed, IdG will cut over all flows to use the integer style only.
Mapping error codes to error messages and analytics will be handled in #2049.
See [this Slack thread](https://cal-itp.slack.com/archives/C037Y3UE71P/p1714434750536319) from @johnatstate for more context.
--- END ISSUE ---
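A minimal, self-contained sketch of the parsing described above: the claim arrives as a string, is parsed to an int, `1` means eligible, `0` means not eligible, and anything `>= 10` is treated as an error code. This is illustrative only and is not the project's actual authorize view; mapping error codes to messages and analytics is deliberately left out, as the issue notes it is handled separately.
```python
from typing import Optional, Tuple


def interpret_claim(raw_value: Optional[str]) -> Tuple[bool, Optional[int]]:
    """Return (eligible, error_code) for a claim value taken from userinfo."""
    if raw_value is None:
        return False, None
    try:
        value = int(raw_value)
    except ValueError:
        return False, None
    if value == 1:
        return True, None
    if value >= 10:
        return False, value  # error code, to be mapped to messages elsewhere
    return False, None  # includes 0, i.e. eligibility failed


assert interpret_claim("1") == (True, None)
assert interpret_claim("0") == (False, None)
assert interpret_claim("10") == (False, 10)
assert interpret_claim(None) == (False, None)
```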
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `benefits/oauth/views.py`
Content:
```
1 import logging
2
3 from django.shortcuts import redirect
4 from django.urls import reverse
5 from django.utils.decorators import decorator_from_middleware
6
7 from benefits.core import session
8 from . import analytics, redirects
9 from .client import oauth
10 from .middleware import VerifierUsesAuthVerificationSessionRequired
11
12
13 logger = logging.getLogger(__name__)
14
15
16 ROUTE_AUTH = "oauth:authorize"
17 ROUTE_START = "eligibility:start"
18 ROUTE_CONFIRM = "eligibility:confirm"
19 ROUTE_UNVERIFIED = "eligibility:unverified"
20 ROUTE_POST_LOGOUT = "oauth:post_logout"
21
22
23 @decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)
24 def login(request):
25 """View implementing OIDC authorize_redirect."""
26 verifier = session.verifier(request)
27 oauth_client = oauth.create_client(verifier.auth_provider.client_name)
28
29 if not oauth_client:
30 raise Exception(f"oauth_client not registered: {verifier.auth_provider.client_name}")
31
32 route = reverse(ROUTE_AUTH)
33 redirect_uri = redirects.generate_redirect_uri(request, route)
34
35 logger.debug(f"OAuth authorize_redirect with redirect_uri: {redirect_uri}")
36
37 analytics.started_sign_in(request)
38
39 return oauth_client.authorize_redirect(request, redirect_uri)
40
41
42 @decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)
43 def authorize(request):
44 """View implementing OIDC token authorization."""
45 verifier = session.verifier(request)
46 oauth_client = oauth.create_client(verifier.auth_provider.client_name)
47
48 if not oauth_client:
49 raise Exception(f"oauth_client not registered: {verifier.auth_provider.client_name}")
50
51 logger.debug("Attempting to authorize OAuth access token")
52 token = oauth_client.authorize_access_token(request)
53
54 if token is None:
55 logger.warning("Could not authorize OAuth access token")
56 return redirect(ROUTE_START)
57
58 logger.debug("OAuth access token authorized")
59
60 # We store the id_token in the user's session. This is the minimal amount of information needed later to log the user out.
61 id_token = token["id_token"]
62
63 # We store the returned claim in case it can be used later in eligibility verification.
64 verifier_claim = verifier.auth_provider.claim
65 stored_claim = None
66
67 if verifier_claim:
68 userinfo = token.get("userinfo")
69
70 if userinfo:
71 claim_value = userinfo.get(verifier_claim)
72 # the claim comes back in userinfo like { "claim": "True" | "False" }
73 if claim_value is None:
74 logger.warning(f"userinfo did not contain: {verifier_claim}")
75 elif claim_value.lower() == "true":
76 # if userinfo contains our claim and the flag is true, store the *claim*
77 stored_claim = verifier_claim
78
79 session.update(request, oauth_token=id_token, oauth_claim=stored_claim)
80
81 analytics.finished_sign_in(request)
82
83 return redirect(ROUTE_CONFIRM)
84
85
86 @decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)
87 def cancel(request):
88 """View implementing cancellation of OIDC authorization."""
89
90 analytics.canceled_sign_in(request)
91
92 return redirect(ROUTE_UNVERIFIED)
93
94
95 @decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)
96 def logout(request):
97 """View implementing OIDC and application sign out."""
98 verifier = session.verifier(request)
99 oauth_client = oauth.create_client(verifier.auth_provider.client_name)
100
101 if not oauth_client:
102 raise Exception(f"oauth_client not registered: {verifier.auth_provider.client_name}")
103
104 analytics.started_sign_out(request)
105
106 # overwrite the oauth session token, the user is signed out of the app
107 token = session.oauth_token(request)
108 session.logout(request)
109
110 route = reverse(ROUTE_POST_LOGOUT)
111 redirect_uri = redirects.generate_redirect_uri(request, route)
112
113 logger.debug(f"OAuth end_session_endpoint with redirect_uri: {redirect_uri}")
114
115 # send the user through the end_session_endpoint, redirecting back to
116 # the post_logout route
117 return redirects.deauthorize_redirect(oauth_client, token, redirect_uri)
118
119
120 @decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)
121 def post_logout(request):
122 """View routes the user to their origin after sign out."""
123
124 analytics.finished_sign_out(request)
125
126 origin = session.origin(request)
127 return redirect(origin)
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/benefits/oauth/views.py b/benefits/oauth/views.py
--- a/benefits/oauth/views.py
+++ b/benefits/oauth/views.py
@@ -69,11 +69,12 @@
if userinfo:
claim_value = userinfo.get(verifier_claim)
- # the claim comes back in userinfo like { "claim": "True" | "False" }
+ # the claim comes back in userinfo like { "claim": "1" | "0" }
+ claim_value = int(claim_value) if claim_value else None
if claim_value is None:
logger.warning(f"userinfo did not contain: {verifier_claim}")
- elif claim_value.lower() == "true":
- # if userinfo contains our claim and the flag is true, store the *claim*
+ elif claim_value == 1:
+ # if userinfo contains our claim and the flag is 1 (true), store the *claim*
stored_claim = verifier_claim
session.update(request, oauth_token=id_token, oauth_claim=stored_claim)
| {"golden_diff": "diff --git a/benefits/oauth/views.py b/benefits/oauth/views.py\n--- a/benefits/oauth/views.py\n+++ b/benefits/oauth/views.py\n@@ -69,11 +69,12 @@\n \n if userinfo:\n claim_value = userinfo.get(verifier_claim)\n- # the claim comes back in userinfo like { \"claim\": \"True\" | \"False\" }\n+ # the claim comes back in userinfo like { \"claim\": \"1\" | \"0\" }\n+ claim_value = int(claim_value) if claim_value else None\n if claim_value is None:\n logger.warning(f\"userinfo did not contain: {verifier_claim}\")\n- elif claim_value.lower() == \"true\":\n- # if userinfo contains our claim and the flag is true, store the *claim*\n+ elif claim_value == 1:\n+ # if userinfo contains our claim and the flag is 1 (true), store the *claim*\n stored_claim = verifier_claim\n \n session.update(request, oauth_token=id_token, oauth_claim=stored_claim)\n", "issue": "Refactor claims handling for integer claims\nDuring the OAuth `authorize` flow, we look for [boolean claim values](https://github.com/cal-itp/benefits/blob/dev/benefits/oauth/views.py#L75) to determine if the user is eligible.\n\nIdG is changing their claims implementation to cut down on the size of the token being sent to Benefits. Instead of booleans, they will use integers to indicate claim values:\n\n* `0` will indicate `False` (i.e. the claim indicates eligibility failed)\n* `1` will indicate `True` (i.e. the claim indicates eligibility succeeded)\n* Any other integer `>= 10` will indicate an error code\n\n**Note:** the claim values are transmitted in the token as `str`, and should be parsed to `int` before usage.\n\n## Acceptance Criteria\n\n<!-- Remember to consider edge cases -->\n\n- [ ] `authorize` processes integer claims as described above\n\n## Additional context\n\nWhile we work to implement this change, existing flows for Older Adults and Veterans will use both claim styles. New flows for CalFresh and the new Veterans API will ~only use the newer integer claim style, so this refactor is necessary for supporting those flows.~ also support both styles to allow us time to implement and cut over. There are an entirely new set of scopes created for the integer-based claims so as not to interfere with the existing implementation.\n\nOnce we have this change tested and deployed, IdG will cutover all flows to use the integer style only.\n\nMapping error codes to error messages and analytics will be handled in #2049.\n\nSee [this Slack thread](https://cal-itp.slack.com/archives/C037Y3UE71P/p1714434750536319) from @johnatstate for more context.\n", "before_files": [{"content": "import logging\n\nfrom django.shortcuts import redirect\nfrom django.urls import reverse\nfrom django.utils.decorators import decorator_from_middleware\n\nfrom benefits.core import session\nfrom . 
import analytics, redirects\nfrom .client import oauth\nfrom .middleware import VerifierUsesAuthVerificationSessionRequired\n\n\nlogger = logging.getLogger(__name__)\n\n\nROUTE_AUTH = \"oauth:authorize\"\nROUTE_START = \"eligibility:start\"\nROUTE_CONFIRM = \"eligibility:confirm\"\nROUTE_UNVERIFIED = \"eligibility:unverified\"\nROUTE_POST_LOGOUT = \"oauth:post_logout\"\n\n\n@decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)\ndef login(request):\n \"\"\"View implementing OIDC authorize_redirect.\"\"\"\n verifier = session.verifier(request)\n oauth_client = oauth.create_client(verifier.auth_provider.client_name)\n\n if not oauth_client:\n raise Exception(f\"oauth_client not registered: {verifier.auth_provider.client_name}\")\n\n route = reverse(ROUTE_AUTH)\n redirect_uri = redirects.generate_redirect_uri(request, route)\n\n logger.debug(f\"OAuth authorize_redirect with redirect_uri: {redirect_uri}\")\n\n analytics.started_sign_in(request)\n\n return oauth_client.authorize_redirect(request, redirect_uri)\n\n\n@decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)\ndef authorize(request):\n \"\"\"View implementing OIDC token authorization.\"\"\"\n verifier = session.verifier(request)\n oauth_client = oauth.create_client(verifier.auth_provider.client_name)\n\n if not oauth_client:\n raise Exception(f\"oauth_client not registered: {verifier.auth_provider.client_name}\")\n\n logger.debug(\"Attempting to authorize OAuth access token\")\n token = oauth_client.authorize_access_token(request)\n\n if token is None:\n logger.warning(\"Could not authorize OAuth access token\")\n return redirect(ROUTE_START)\n\n logger.debug(\"OAuth access token authorized\")\n\n # We store the id_token in the user's session. This is the minimal amount of information needed later to log the user out.\n id_token = token[\"id_token\"]\n\n # We store the returned claim in case it can be used later in eligibility verification.\n verifier_claim = verifier.auth_provider.claim\n stored_claim = None\n\n if verifier_claim:\n userinfo = token.get(\"userinfo\")\n\n if userinfo:\n claim_value = userinfo.get(verifier_claim)\n # the claim comes back in userinfo like { \"claim\": \"True\" | \"False\" }\n if claim_value is None:\n logger.warning(f\"userinfo did not contain: {verifier_claim}\")\n elif claim_value.lower() == \"true\":\n # if userinfo contains our claim and the flag is true, store the *claim*\n stored_claim = verifier_claim\n\n session.update(request, oauth_token=id_token, oauth_claim=stored_claim)\n\n analytics.finished_sign_in(request)\n\n return redirect(ROUTE_CONFIRM)\n\n\n@decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)\ndef cancel(request):\n \"\"\"View implementing cancellation of OIDC authorization.\"\"\"\n\n analytics.canceled_sign_in(request)\n\n return redirect(ROUTE_UNVERIFIED)\n\n\n@decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)\ndef logout(request):\n \"\"\"View implementing OIDC and application sign out.\"\"\"\n verifier = session.verifier(request)\n oauth_client = oauth.create_client(verifier.auth_provider.client_name)\n\n if not oauth_client:\n raise Exception(f\"oauth_client not registered: {verifier.auth_provider.client_name}\")\n\n analytics.started_sign_out(request)\n\n # overwrite the oauth session token, the user is signed out of the app\n token = session.oauth_token(request)\n session.logout(request)\n\n route = reverse(ROUTE_POST_LOGOUT)\n redirect_uri = redirects.generate_redirect_uri(request, route)\n\n 
logger.debug(f\"OAuth end_session_endpoint with redirect_uri: {redirect_uri}\")\n\n # send the user through the end_session_endpoint, redirecting back to\n # the post_logout route\n return redirects.deauthorize_redirect(oauth_client, token, redirect_uri)\n\n\n@decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)\ndef post_logout(request):\n \"\"\"View routes the user to their origin after sign out.\"\"\"\n\n analytics.finished_sign_out(request)\n\n origin = session.origin(request)\n return redirect(origin)\n", "path": "benefits/oauth/views.py"}], "after_files": [{"content": "import logging\n\nfrom django.shortcuts import redirect\nfrom django.urls import reverse\nfrom django.utils.decorators import decorator_from_middleware\n\nfrom benefits.core import session\nfrom . import analytics, redirects\nfrom .client import oauth\nfrom .middleware import VerifierUsesAuthVerificationSessionRequired\n\n\nlogger = logging.getLogger(__name__)\n\n\nROUTE_AUTH = \"oauth:authorize\"\nROUTE_START = \"eligibility:start\"\nROUTE_CONFIRM = \"eligibility:confirm\"\nROUTE_UNVERIFIED = \"eligibility:unverified\"\nROUTE_POST_LOGOUT = \"oauth:post_logout\"\n\n\n@decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)\ndef login(request):\n \"\"\"View implementing OIDC authorize_redirect.\"\"\"\n verifier = session.verifier(request)\n oauth_client = oauth.create_client(verifier.auth_provider.client_name)\n\n if not oauth_client:\n raise Exception(f\"oauth_client not registered: {verifier.auth_provider.client_name}\")\n\n route = reverse(ROUTE_AUTH)\n redirect_uri = redirects.generate_redirect_uri(request, route)\n\n logger.debug(f\"OAuth authorize_redirect with redirect_uri: {redirect_uri}\")\n\n analytics.started_sign_in(request)\n\n return oauth_client.authorize_redirect(request, redirect_uri)\n\n\n@decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)\ndef authorize(request):\n \"\"\"View implementing OIDC token authorization.\"\"\"\n verifier = session.verifier(request)\n oauth_client = oauth.create_client(verifier.auth_provider.client_name)\n\n if not oauth_client:\n raise Exception(f\"oauth_client not registered: {verifier.auth_provider.client_name}\")\n\n logger.debug(\"Attempting to authorize OAuth access token\")\n token = oauth_client.authorize_access_token(request)\n\n if token is None:\n logger.warning(\"Could not authorize OAuth access token\")\n return redirect(ROUTE_START)\n\n logger.debug(\"OAuth access token authorized\")\n\n # We store the id_token in the user's session. 
This is the minimal amount of information needed later to log the user out.\n id_token = token[\"id_token\"]\n\n # We store the returned claim in case it can be used later in eligibility verification.\n verifier_claim = verifier.auth_provider.claim\n stored_claim = None\n\n if verifier_claim:\n userinfo = token.get(\"userinfo\")\n\n if userinfo:\n claim_value = userinfo.get(verifier_claim)\n # the claim comes back in userinfo like { \"claim\": \"1\" | \"0\" }\n claim_value = int(claim_value) if claim_value else None\n if claim_value is None:\n logger.warning(f\"userinfo did not contain: {verifier_claim}\")\n elif claim_value == 1:\n # if userinfo contains our claim and the flag is 1 (true), store the *claim*\n stored_claim = verifier_claim\n\n session.update(request, oauth_token=id_token, oauth_claim=stored_claim)\n\n analytics.finished_sign_in(request)\n\n return redirect(ROUTE_CONFIRM)\n\n\n@decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)\ndef cancel(request):\n \"\"\"View implementing cancellation of OIDC authorization.\"\"\"\n\n analytics.canceled_sign_in(request)\n\n return redirect(ROUTE_UNVERIFIED)\n\n\n@decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)\ndef logout(request):\n \"\"\"View implementing OIDC and application sign out.\"\"\"\n verifier = session.verifier(request)\n oauth_client = oauth.create_client(verifier.auth_provider.client_name)\n\n if not oauth_client:\n raise Exception(f\"oauth_client not registered: {verifier.auth_provider.client_name}\")\n\n analytics.started_sign_out(request)\n\n # overwrite the oauth session token, the user is signed out of the app\n token = session.oauth_token(request)\n session.logout(request)\n\n route = reverse(ROUTE_POST_LOGOUT)\n redirect_uri = redirects.generate_redirect_uri(request, route)\n\n logger.debug(f\"OAuth end_session_endpoint with redirect_uri: {redirect_uri}\")\n\n # send the user through the end_session_endpoint, redirecting back to\n # the post_logout route\n return redirects.deauthorize_redirect(oauth_client, token, redirect_uri)\n\n\n@decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)\ndef post_logout(request):\n \"\"\"View routes the user to their origin after sign out.\"\"\"\n\n analytics.finished_sign_out(request)\n\n origin = session.origin(request)\n return redirect(origin)\n", "path": "benefits/oauth/views.py"}]} | 1,824 | 231 |
gh_patches_debug_43093 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-3261 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Suggestion] Add chat(s) parameter to ChatJoinRequestHandler
This param should allow filtering the chats that will be handled by the ChatJoinRequestHandler, much like the pattern argument of the CallbackQueryHandler. It should accept "username" strings as well as ids, and if set, the handler should check whether the incoming update is from that chat.
For first-time contributors, check how CallbackQueryHandler implements the pattern argument in check_update: https://github.com/python-telegram-bot/python-telegram-bot/blob/master/telegram/ext/_callbackqueryhandler.py#L123
--- END ISSUE ---
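To make the suggestion concrete, here is a small, self-contained sketch of the kind of matching check_update could do: accept integer chat ids and "@username" strings, and handle the update only when its chat matches (or when no filter is configured). It is a simplified stand-in, not the final python-telegram-bot implementation.
```python
from typing import Iterable, Optional, Tuple, Union


def _normalize(chats: Optional[Iterable[Union[int, str]]]) -> Tuple[frozenset, frozenset]:
    """Split a mixed collection of ids and usernames into two frozensets."""
    chat_ids, usernames = set(), set()
    for chat in chats or ():
        if isinstance(chat, int):
            chat_ids.add(chat)
        else:
            usernames.add(chat.lstrip("@"))
    return frozenset(chat_ids), frozenset(usernames)


def matches(chat_id: int, username: Optional[str], allowed) -> bool:
    chat_ids, usernames = _normalize(allowed)
    if not chat_ids and not usernames:
        return True  # no filter configured: handle every chat_join_request
    return chat_id in chat_ids or (username is not None and username in usernames)


assert matches(123, "somegroup", [123]) is True
assert matches(123, "somegroup", ["@somegroup"]) is True
assert matches(456, None, [123, "@somegroup"]) is False
```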
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `telegram/ext/_chatjoinrequesthandler.py`
Content:
```
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2022
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 """This module contains the ChatJoinRequestHandler class."""
20
21
22 from telegram import Update
23 from telegram.ext._handler import BaseHandler
24 from telegram.ext._utils.types import CCT
25
26
27 class ChatJoinRequestHandler(BaseHandler[Update, CCT]):
28 """BaseHandler class to handle Telegram updates that contain
29 :attr:`telegram.Update.chat_join_request`.
30
31 Warning:
32 When setting :paramref:`block` to :obj:`False`, you cannot rely on adding custom
33 attributes to :class:`telegram.ext.CallbackContext`. See its docs for more info.
34
35 .. versionadded:: 13.8
36
37 Args:
38 callback (:term:`coroutine function`): The callback function for this handler. Will be
39 called when :meth:`check_update` has determined that an update should be processed by
40 this handler. Callback signature::
41
42 async def callback(update: Update, context: CallbackContext)
43
44 The return value of the callback is usually ignored except for the special case of
45 :class:`telegram.ext.ConversationHandler`.
46 block (:obj:`bool`, optional): Determines whether the return value of the callback should
47 be awaited before processing the next handler in
48 :meth:`telegram.ext.Application.process_update`. Defaults to :obj:`True`.
49
50 Attributes:
51 callback (:term:`coroutine function`): The callback function for this handler.
52 block (:obj:`bool`): Determines whether the callback will run in a blocking way..
53
54 """
55
56 __slots__ = ()
57
58 def check_update(self, update: object) -> bool:
59 """Determines whether an update should be passed to this handler's :attr:`callback`.
60
61 Args:
62 update (:class:`telegram.Update` | :obj:`object`): Incoming update.
63
64 Returns:
65 :obj:`bool`
66
67 """
68 return isinstance(update, Update) and bool(update.chat_join_request)
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/telegram/ext/_chatjoinrequesthandler.py b/telegram/ext/_chatjoinrequesthandler.py
--- a/telegram/ext/_chatjoinrequesthandler.py
+++ b/telegram/ext/_chatjoinrequesthandler.py
@@ -18,16 +18,27 @@
# along with this program. If not, see [http://www.gnu.org/licenses/].
"""This module contains the ChatJoinRequestHandler class."""
+from typing import FrozenSet, Optional
from telegram import Update
+from telegram._utils.defaultvalue import DEFAULT_TRUE
+from telegram._utils.types import RT, SCT, DVInput
from telegram.ext._handler import BaseHandler
-from telegram.ext._utils.types import CCT
+from telegram.ext._utils.types import CCT, HandlerCallback
class ChatJoinRequestHandler(BaseHandler[Update, CCT]):
"""BaseHandler class to handle Telegram updates that contain
:attr:`telegram.Update.chat_join_request`.
+ Note:
+ If neither of :paramref:`username` and the :paramref:`chat_id` are passed, this handler
+ accepts *any* join request. Otherwise, this handler accepts all requests to join chats
+ for which the chat ID is listed in :paramref:`chat_id` or the username is listed in
+ :paramref:`username`, or both.
+
+ .. versionadded:: 20.0
+
Warning:
When setting :paramref:`block` to :obj:`False`, you cannot rely on adding custom
attributes to :class:`telegram.ext.CallbackContext`. See its docs for more info.
@@ -43,6 +54,14 @@
The return value of the callback is usually ignored except for the special case of
:class:`telegram.ext.ConversationHandler`.
+ chat_id (:obj:`int` | Collection[:obj:`int`], optional): Filters requests to allow only
+ those which are asking to join the specified chat ID(s).
+
+ .. versionadded:: 20.0
+ username (:obj:`str` | Collection[:obj:`str`], optional): Filters requests to allow only
+ those which are asking to join the specified username(s).
+
+ .. versionadded:: 20.0
block (:obj:`bool`, optional): Determines whether the return value of the callback should
be awaited before processing the next handler in
:meth:`telegram.ext.Application.process_update`. Defaults to :obj:`True`.
@@ -53,7 +72,38 @@
"""
- __slots__ = ()
+ __slots__ = (
+ "_chat_ids",
+ "_usernames",
+ )
+
+ def __init__(
+ self,
+ callback: HandlerCallback[Update, CCT, RT],
+ chat_id: SCT[int] = None,
+ username: SCT[str] = None,
+ block: DVInput[bool] = DEFAULT_TRUE,
+ ):
+ super().__init__(callback, block=block)
+
+ self._chat_ids = self._parse_chat_id(chat_id)
+ self._usernames = self._parse_username(username)
+
+ @staticmethod
+ def _parse_chat_id(chat_id: Optional[SCT[int]]) -> FrozenSet[int]:
+ if chat_id is None:
+ return frozenset()
+ if isinstance(chat_id, int):
+ return frozenset({chat_id})
+ return frozenset(chat_id)
+
+ @staticmethod
+ def _parse_username(username: Optional[SCT[str]]) -> FrozenSet[str]:
+ if username is None:
+ return frozenset()
+ if isinstance(username, str):
+ return frozenset({username[1:] if username.startswith("@") else username})
+ return frozenset({usr[1:] if usr.startswith("@") else usr for usr in username})
def check_update(self, update: object) -> bool:
"""Determines whether an update should be passed to this handler's :attr:`callback`.
@@ -65,4 +115,12 @@
:obj:`bool`
"""
- return isinstance(update, Update) and bool(update.chat_join_request)
+ if isinstance(update, Update) and update.chat_join_request:
+ if not self._chat_ids and not self._usernames:
+ return True
+ if update.chat_join_request.chat.id in self._chat_ids:
+ return True
+ if update.chat_join_request.from_user.username in self._usernames:
+ return True
+ return False
+ return False
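
For illustration, a minimal usage sketch of the handler once the patch above is applied. This is an assumption-laden sketch rather than part of the patch: the bot token, chat id, and username values are placeholders, and registration uses the v20-style `Application` API referenced in the diff.

```python
# Hypothetical usage of the patched ChatJoinRequestHandler; all values are placeholders.
from telegram import Update
from telegram.ext import Application, ChatJoinRequestHandler, ContextTypes


async def on_join_request(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Approve every request that made it through the chat_id/username filter.
    await update.chat_join_request.approve()


app = Application.builder().token("PLACEHOLDER-TOKEN").build()
app.add_handler(
    ChatJoinRequestHandler(
        callback=on_join_request,
        chat_id=-1001234567890,      # per the patch: filters on the target chat's id
        username=["@example_user"],  # per the patch: a leading "@" is stripped before matching
    )
)
app.run_polling()
```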
| {"golden_diff": "diff --git a/telegram/ext/_chatjoinrequesthandler.py b/telegram/ext/_chatjoinrequesthandler.py\n--- a/telegram/ext/_chatjoinrequesthandler.py\n+++ b/telegram/ext/_chatjoinrequesthandler.py\n@@ -18,16 +18,27 @@\n # along with this program. If not, see [http://www.gnu.org/licenses/].\n \"\"\"This module contains the ChatJoinRequestHandler class.\"\"\"\n \n+from typing import FrozenSet, Optional\n \n from telegram import Update\n+from telegram._utils.defaultvalue import DEFAULT_TRUE\n+from telegram._utils.types import RT, SCT, DVInput\n from telegram.ext._handler import BaseHandler\n-from telegram.ext._utils.types import CCT\n+from telegram.ext._utils.types import CCT, HandlerCallback\n \n \n class ChatJoinRequestHandler(BaseHandler[Update, CCT]):\n \"\"\"BaseHandler class to handle Telegram updates that contain\n :attr:`telegram.Update.chat_join_request`.\n \n+ Note:\n+ If neither of :paramref:`username` and the :paramref:`chat_id` are passed, this handler\n+ accepts *any* join request. Otherwise, this handler accepts all requests to join chats\n+ for which the chat ID is listed in :paramref:`chat_id` or the username is listed in\n+ :paramref:`username`, or both.\n+\n+ .. versionadded:: 20.0\n+\n Warning:\n When setting :paramref:`block` to :obj:`False`, you cannot rely on adding custom\n attributes to :class:`telegram.ext.CallbackContext`. See its docs for more info.\n@@ -43,6 +54,14 @@\n \n The return value of the callback is usually ignored except for the special case of\n :class:`telegram.ext.ConversationHandler`.\n+ chat_id (:obj:`int` | Collection[:obj:`int`], optional): Filters requests to allow only\n+ those which are asking to join the specified chat ID(s).\n+\n+ .. versionadded:: 20.0\n+ username (:obj:`str` | Collection[:obj:`str`], optional): Filters requests to allow only\n+ those which are asking to join the specified username(s).\n+\n+ .. versionadded:: 20.0\n block (:obj:`bool`, optional): Determines whether the return value of the callback should\n be awaited before processing the next handler in\n :meth:`telegram.ext.Application.process_update`. 
Defaults to :obj:`True`.\n@@ -53,7 +72,38 @@\n \n \"\"\"\n \n- __slots__ = ()\n+ __slots__ = (\n+ \"_chat_ids\",\n+ \"_usernames\",\n+ )\n+\n+ def __init__(\n+ self,\n+ callback: HandlerCallback[Update, CCT, RT],\n+ chat_id: SCT[int] = None,\n+ username: SCT[str] = None,\n+ block: DVInput[bool] = DEFAULT_TRUE,\n+ ):\n+ super().__init__(callback, block=block)\n+\n+ self._chat_ids = self._parse_chat_id(chat_id)\n+ self._usernames = self._parse_username(username)\n+\n+ @staticmethod\n+ def _parse_chat_id(chat_id: Optional[SCT[int]]) -> FrozenSet[int]:\n+ if chat_id is None:\n+ return frozenset()\n+ if isinstance(chat_id, int):\n+ return frozenset({chat_id})\n+ return frozenset(chat_id)\n+\n+ @staticmethod\n+ def _parse_username(username: Optional[SCT[str]]) -> FrozenSet[str]:\n+ if username is None:\n+ return frozenset()\n+ if isinstance(username, str):\n+ return frozenset({username[1:] if username.startswith(\"@\") else username})\n+ return frozenset({usr[1:] if usr.startswith(\"@\") else usr for usr in username})\n \n def check_update(self, update: object) -> bool:\n \"\"\"Determines whether an update should be passed to this handler's :attr:`callback`.\n@@ -65,4 +115,12 @@\n :obj:`bool`\n \n \"\"\"\n- return isinstance(update, Update) and bool(update.chat_join_request)\n+ if isinstance(update, Update) and update.chat_join_request:\n+ if not self._chat_ids and not self._usernames:\n+ return True\n+ if update.chat_join_request.chat.id in self._chat_ids:\n+ return True\n+ if update.chat_join_request.from_user.username in self._usernames:\n+ return True\n+ return False\n+ return False\n", "issue": "[Suggestion] Add chat(s) parameter to ChatJoinRequestHandler\nThis param should allow to filter out chats which will be handled by the ChatJoinRequestHandler, much like the pattern argument of the CallbackQueryHandler. It should allow \"username\" strings as well as ids and if set, the handler should check if the incoming update is from that chat.\r\n\r\nFor first time contributors, check how CallbackQueryHandler implements the pattern argument in check_update: https://github.com/python-telegram-bot/python-telegram-bot/blob/master/telegram/ext/_callbackqueryhandler.py#L123\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2022\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. 
If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains the ChatJoinRequestHandler class.\"\"\"\n\n\nfrom telegram import Update\nfrom telegram.ext._handler import BaseHandler\nfrom telegram.ext._utils.types import CCT\n\n\nclass ChatJoinRequestHandler(BaseHandler[Update, CCT]):\n \"\"\"BaseHandler class to handle Telegram updates that contain\n :attr:`telegram.Update.chat_join_request`.\n\n Warning:\n When setting :paramref:`block` to :obj:`False`, you cannot rely on adding custom\n attributes to :class:`telegram.ext.CallbackContext`. See its docs for more info.\n\n .. versionadded:: 13.8\n\n Args:\n callback (:term:`coroutine function`): The callback function for this handler. Will be\n called when :meth:`check_update` has determined that an update should be processed by\n this handler. Callback signature::\n\n async def callback(update: Update, context: CallbackContext)\n\n The return value of the callback is usually ignored except for the special case of\n :class:`telegram.ext.ConversationHandler`.\n block (:obj:`bool`, optional): Determines whether the return value of the callback should\n be awaited before processing the next handler in\n :meth:`telegram.ext.Application.process_update`. Defaults to :obj:`True`.\n\n Attributes:\n callback (:term:`coroutine function`): The callback function for this handler.\n block (:obj:`bool`): Determines whether the callback will run in a blocking way..\n\n \"\"\"\n\n __slots__ = ()\n\n def check_update(self, update: object) -> bool:\n \"\"\"Determines whether an update should be passed to this handler's :attr:`callback`.\n\n Args:\n update (:class:`telegram.Update` | :obj:`object`): Incoming update.\n\n Returns:\n :obj:`bool`\n\n \"\"\"\n return isinstance(update, Update) and bool(update.chat_join_request)\n", "path": "telegram/ext/_chatjoinrequesthandler.py"}], "after_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2022\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains the ChatJoinRequestHandler class.\"\"\"\n\nfrom typing import FrozenSet, Optional\n\nfrom telegram import Update\nfrom telegram._utils.defaultvalue import DEFAULT_TRUE\nfrom telegram._utils.types import RT, SCT, DVInput\nfrom telegram.ext._handler import BaseHandler\nfrom telegram.ext._utils.types import CCT, HandlerCallback\n\n\nclass ChatJoinRequestHandler(BaseHandler[Update, CCT]):\n \"\"\"BaseHandler class to handle Telegram updates that contain\n :attr:`telegram.Update.chat_join_request`.\n\n Note:\n If neither of :paramref:`username` and the :paramref:`chat_id` are passed, this handler\n accepts *any* join request. Otherwise, this handler accepts all requests to join chats\n for which the chat ID is listed in :paramref:`chat_id` or the username is listed in\n :paramref:`username`, or both.\n\n .. 
versionadded:: 20.0\n\n Warning:\n When setting :paramref:`block` to :obj:`False`, you cannot rely on adding custom\n attributes to :class:`telegram.ext.CallbackContext`. See its docs for more info.\n\n .. versionadded:: 13.8\n\n Args:\n callback (:term:`coroutine function`): The callback function for this handler. Will be\n called when :meth:`check_update` has determined that an update should be processed by\n this handler. Callback signature::\n\n async def callback(update: Update, context: CallbackContext)\n\n The return value of the callback is usually ignored except for the special case of\n :class:`telegram.ext.ConversationHandler`.\n chat_id (:obj:`int` | Collection[:obj:`int`], optional): Filters requests to allow only\n those which are asking to join the specified chat ID(s).\n\n .. versionadded:: 20.0\n username (:obj:`str` | Collection[:obj:`str`], optional): Filters requests to allow only\n those which are asking to join the specified username(s).\n\n .. versionadded:: 20.0\n block (:obj:`bool`, optional): Determines whether the return value of the callback should\n be awaited before processing the next handler in\n :meth:`telegram.ext.Application.process_update`. Defaults to :obj:`True`.\n\n Attributes:\n callback (:term:`coroutine function`): The callback function for this handler.\n block (:obj:`bool`): Determines whether the callback will run in a blocking way..\n\n \"\"\"\n\n __slots__ = (\n \"_chat_ids\",\n \"_usernames\",\n )\n\n def __init__(\n self,\n callback: HandlerCallback[Update, CCT, RT],\n chat_id: SCT[int] = None,\n username: SCT[str] = None,\n block: DVInput[bool] = DEFAULT_TRUE,\n ):\n super().__init__(callback, block=block)\n\n self._chat_ids = self._parse_chat_id(chat_id)\n self._usernames = self._parse_username(username)\n\n @staticmethod\n def _parse_chat_id(chat_id: Optional[SCT[int]]) -> FrozenSet[int]:\n if chat_id is None:\n return frozenset()\n if isinstance(chat_id, int):\n return frozenset({chat_id})\n return frozenset(chat_id)\n\n @staticmethod\n def _parse_username(username: Optional[SCT[str]]) -> FrozenSet[str]:\n if username is None:\n return frozenset()\n if isinstance(username, str):\n return frozenset({username[1:] if username.startswith(\"@\") else username})\n return frozenset({usr[1:] if usr.startswith(\"@\") else usr for usr in username})\n\n def check_update(self, update: object) -> bool:\n \"\"\"Determines whether an update should be passed to this handler's :attr:`callback`.\n\n Args:\n update (:class:`telegram.Update` | :obj:`object`): Incoming update.\n\n Returns:\n :obj:`bool`\n\n \"\"\"\n if isinstance(update, Update) and update.chat_join_request:\n if not self._chat_ids and not self._usernames:\n return True\n if update.chat_join_request.chat.id in self._chat_ids:\n return True\n if update.chat_join_request.from_user.username in self._usernames:\n return True\n return False\n return False\n", "path": "telegram/ext/_chatjoinrequesthandler.py"}]} | 1,103 | 987 |
gh_patches_debug_8876 | rasdani/github-patches | git_diff | microsoft__MLOS-211 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sphinx Python API docs generation broken in recent nightly CI runs
For example: <https://github.com/microsoft/MLOS/runs/1635132574?check_suite_focus=true>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `source/Mlos.Python/mlos/Spaces/Point.py`
Content:
```
1 #
2 # Copyright (c) Microsoft Corporation.
3 # Licensed under the MIT License.
4 #
5 import json
6 from numbers import Number
7
8 import pandas as pd
9 from mlos.Spaces.Dimensions.Dimension import Dimension
10
11
12 class Point:
13 """ Models a point in a Hypergrid.
14
15 """
16 def __init__(self, **kwargs):
17 self.dimension_value_dict = dict()
18 for dimension_name, value in kwargs.items():
19 self[dimension_name] = value
20
21 def copy(self):
22 return Point(**{key: value for key, value in self})
23
24 def flat_copy(self):
25 """ Creates a copy of the point but all dimension names are flattened.
26
27 :return:
28 """
29 flat_dict = {
30 Dimension.flatten_dimension_name(dimension_name): value
31 for dimension_name, value in self
32 }
33 return Point(**flat_dict)
34
35 def __eq__(self, other):
36 if not isinstance(other, Point):
37 return False
38 return \
39 all(other.get(dimension_name, None) == value for dimension_name, value in self) \
40 and \
41 all(self.get(dimension_name, None) == value for dimension_name, value in other)
42
43 def __ne__(self, other):
44 return not self == other
45
46 def __iter__(self):
47 for dimension_name, value in self.dimension_value_dict.items():
48 if not isinstance(value, Point):
49 yield dimension_name, value
50 else:
51 for sub_dimension_name, sub_dimension_value in value:
52 yield dimension_name + "." + sub_dimension_name, sub_dimension_value
53
54 def __getattr__(self, dimension_name):
55 if dimension_name == "__isabstractmethod__":
56 # A sad but necessary way to deal with ABC.
57 return False
58 return self[dimension_name]
59
60 def __setattr__(self, name, value):
61 if name == "dimension_value_dict":
62 self.__dict__[name] = value
63 else:
64 dimension_name = name
65 subgrid_name, dimension_name_without_subgrid_name = Dimension.split_dimension_name(dimension_name)
66 if subgrid_name is None:
67 self.dimension_value_dict[dimension_name] = value
68 else:
69 point_in_subgrid = self.dimension_value_dict.get(subgrid_name, Point())
70 point_in_subgrid[dimension_name_without_subgrid_name] = value
71 self.dimension_value_dict[subgrid_name] = point_in_subgrid
72
73 def __getitem__(self, dimension_name):
74 if dimension_name not in self:
75 raise KeyError(f"This Point does not have a value along dimension: {dimension_name}")
76 subgrid_name, dimension_name_without_subgrid_name = Dimension.split_dimension_name(dimension_name)
77 if subgrid_name is None:
78 return self.dimension_value_dict[dimension_name]
79 return self[subgrid_name][dimension_name_without_subgrid_name]
80
81 def get(self, dimension_name, default=None):
82 try:
83 return self[dimension_name]
84 except KeyError:
85 return default
86
87 def __setitem__(self, dimension_name, value):
88 subgrid_name, dimension_name_without_subgrid_name = Dimension.split_dimension_name(dimension_name)
89 if subgrid_name is None:
90 self.dimension_value_dict[dimension_name] = value
91 else:
92 point_in_subgrid = self.dimension_value_dict.get(subgrid_name, Point())
93 point_in_subgrid[dimension_name_without_subgrid_name] = value
94 self.dimension_value_dict[subgrid_name] = point_in_subgrid
95
96 def __contains__(self, dimension_name):
97 subgrid_name, dimension_name_without_subgrid_name = Dimension.split_dimension_name(dimension_name)
98 if subgrid_name is None:
99 return dimension_name in self.dimension_value_dict
100 if subgrid_name not in self.dimension_value_dict:
101 return False
102 return dimension_name_without_subgrid_name in self[subgrid_name]
103
104 def __repr__(self):
105 return self.__str__()
106
107 def __str__(self):
108 return str(self.to_json(indent=2))
109
110 def __getstate__(self):
111 return self.to_json()
112
113 def __setstate__(self, state):
114 temp_point = self.from_json(state)
115 self.dimension_value_dict = temp_point.dimension_value_dict
116
117 def to_json(self, indent=None):
118 if indent is not None:
119 return json.dumps(self.to_dict(), indent=indent)
120 return json.dumps(self.to_dict())
121
122 @classmethod
123 def from_json(cls, json_str):
124 coordinates = json.loads(json_str)
125 return Point(**coordinates)
126
127 def to_dict(self):
128 return_dict = {}
129 for param_name, value in self:
130 if isinstance(value, Number) and int(value) == value and not isinstance(value, bool):
131 value = int(value)
132 return_dict[param_name] = value
133 return return_dict
134
135 def to_dataframe(self):
136 return pd.DataFrame({param_name: [value] for param_name, value in self})
137
138 @classmethod
139 def from_dataframe(cls, dataframe: pd.DataFrame):
140 assert len(dataframe.index) == 1
141 dataframe = dataframe.dropna(axis=1)
142 dataframe_dict = dataframe.to_dict(orient='list')
143 point_dict = {key: values[0] for key, values in dataframe_dict.items()}
144 return Point(**point_dict)
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/source/Mlos.Python/mlos/Spaces/Point.py b/source/Mlos.Python/mlos/Spaces/Point.py
--- a/source/Mlos.Python/mlos/Spaces/Point.py
+++ b/source/Mlos.Python/mlos/Spaces/Point.py
@@ -55,7 +55,10 @@
if dimension_name == "__isabstractmethod__":
# A sad but necessary way to deal with ABC.
return False
- return self[dimension_name]
+ try:
+ return self[dimension_name]
+ except KeyError:
+ raise AttributeError(f"This Point does not have a {dimension_name} attribute.")
def __setattr__(self, name, value):
if name == "dimension_value_dict":
| {"golden_diff": "diff --git a/source/Mlos.Python/mlos/Spaces/Point.py b/source/Mlos.Python/mlos/Spaces/Point.py\n--- a/source/Mlos.Python/mlos/Spaces/Point.py\n+++ b/source/Mlos.Python/mlos/Spaces/Point.py\n@@ -55,7 +55,10 @@\n if dimension_name == \"__isabstractmethod__\":\r\n # A sad but necessary way to deal with ABC.\r\n return False\r\n- return self[dimension_name]\r\n+ try:\r\n+ return self[dimension_name]\r\n+ except KeyError:\r\n+ raise AttributeError(f\"This Point does not have a {dimension_name} attribute.\")\r\n \r\n def __setattr__(self, name, value):\r\n if name == \"dimension_value_dict\":\n", "issue": "Sphinx Python API docs generation broken in recent nightly CI runs\nFor example: <https://github.com/microsoft/MLOS/runs/1635132574?check_suite_focus=true>\n", "before_files": [{"content": "#\r\n# Copyright (c) Microsoft Corporation.\r\n# Licensed under the MIT License.\r\n#\r\nimport json\r\nfrom numbers import Number\r\n\r\nimport pandas as pd\r\nfrom mlos.Spaces.Dimensions.Dimension import Dimension\r\n\r\n\r\nclass Point:\r\n \"\"\" Models a point in a Hypergrid.\r\n\r\n \"\"\"\r\n def __init__(self, **kwargs):\r\n self.dimension_value_dict = dict()\r\n for dimension_name, value in kwargs.items():\r\n self[dimension_name] = value\r\n\r\n def copy(self):\r\n return Point(**{key: value for key, value in self})\r\n\r\n def flat_copy(self):\r\n \"\"\" Creates a copy of the point but all dimension names are flattened.\r\n\r\n :return:\r\n \"\"\"\r\n flat_dict = {\r\n Dimension.flatten_dimension_name(dimension_name): value\r\n for dimension_name, value in self\r\n }\r\n return Point(**flat_dict)\r\n\r\n def __eq__(self, other):\r\n if not isinstance(other, Point):\r\n return False\r\n return \\\r\n all(other.get(dimension_name, None) == value for dimension_name, value in self) \\\r\n and \\\r\n all(self.get(dimension_name, None) == value for dimension_name, value in other)\r\n\r\n def __ne__(self, other):\r\n return not self == other\r\n\r\n def __iter__(self):\r\n for dimension_name, value in self.dimension_value_dict.items():\r\n if not isinstance(value, Point):\r\n yield dimension_name, value\r\n else:\r\n for sub_dimension_name, sub_dimension_value in value:\r\n yield dimension_name + \".\" + sub_dimension_name, sub_dimension_value\r\n\r\n def __getattr__(self, dimension_name):\r\n if dimension_name == \"__isabstractmethod__\":\r\n # A sad but necessary way to deal with ABC.\r\n return False\r\n return self[dimension_name]\r\n\r\n def __setattr__(self, name, value):\r\n if name == \"dimension_value_dict\":\r\n self.__dict__[name] = value\r\n else:\r\n dimension_name = name\r\n subgrid_name, dimension_name_without_subgrid_name = Dimension.split_dimension_name(dimension_name)\r\n if subgrid_name is None:\r\n self.dimension_value_dict[dimension_name] = value\r\n else:\r\n point_in_subgrid = self.dimension_value_dict.get(subgrid_name, Point())\r\n point_in_subgrid[dimension_name_without_subgrid_name] = value\r\n self.dimension_value_dict[subgrid_name] = point_in_subgrid\r\n\r\n def __getitem__(self, dimension_name):\r\n if dimension_name not in self:\r\n raise KeyError(f\"This Point does not have a value along dimension: {dimension_name}\")\r\n subgrid_name, dimension_name_without_subgrid_name = Dimension.split_dimension_name(dimension_name)\r\n if subgrid_name is None:\r\n return self.dimension_value_dict[dimension_name]\r\n return self[subgrid_name][dimension_name_without_subgrid_name]\r\n\r\n def get(self, dimension_name, default=None):\r\n try:\r\n return 
self[dimension_name]\r\n except KeyError:\r\n return default\r\n\r\n def __setitem__(self, dimension_name, value):\r\n subgrid_name, dimension_name_without_subgrid_name = Dimension.split_dimension_name(dimension_name)\r\n if subgrid_name is None:\r\n self.dimension_value_dict[dimension_name] = value\r\n else:\r\n point_in_subgrid = self.dimension_value_dict.get(subgrid_name, Point())\r\n point_in_subgrid[dimension_name_without_subgrid_name] = value\r\n self.dimension_value_dict[subgrid_name] = point_in_subgrid\r\n\r\n def __contains__(self, dimension_name):\r\n subgrid_name, dimension_name_without_subgrid_name = Dimension.split_dimension_name(dimension_name)\r\n if subgrid_name is None:\r\n return dimension_name in self.dimension_value_dict\r\n if subgrid_name not in self.dimension_value_dict:\r\n return False\r\n return dimension_name_without_subgrid_name in self[subgrid_name]\r\n\r\n def __repr__(self):\r\n return self.__str__()\r\n\r\n def __str__(self):\r\n return str(self.to_json(indent=2))\r\n\r\n def __getstate__(self):\r\n return self.to_json()\r\n\r\n def __setstate__(self, state):\r\n temp_point = self.from_json(state)\r\n self.dimension_value_dict = temp_point.dimension_value_dict\r\n\r\n def to_json(self, indent=None):\r\n if indent is not None:\r\n return json.dumps(self.to_dict(), indent=indent)\r\n return json.dumps(self.to_dict())\r\n\r\n @classmethod\r\n def from_json(cls, json_str):\r\n coordinates = json.loads(json_str)\r\n return Point(**coordinates)\r\n\r\n def to_dict(self):\r\n return_dict = {}\r\n for param_name, value in self:\r\n if isinstance(value, Number) and int(value) == value and not isinstance(value, bool):\r\n value = int(value)\r\n return_dict[param_name] = value\r\n return return_dict\r\n\r\n def to_dataframe(self):\r\n return pd.DataFrame({param_name: [value] for param_name, value in self})\r\n\r\n @classmethod\r\n def from_dataframe(cls, dataframe: pd.DataFrame):\r\n assert len(dataframe.index) == 1\r\n dataframe = dataframe.dropna(axis=1)\r\n dataframe_dict = dataframe.to_dict(orient='list')\r\n point_dict = {key: values[0] for key, values in dataframe_dict.items()}\r\n return Point(**point_dict)\r\n", "path": "source/Mlos.Python/mlos/Spaces/Point.py"}], "after_files": [{"content": "#\r\n# Copyright (c) Microsoft Corporation.\r\n# Licensed under the MIT License.\r\n#\r\nimport json\r\nfrom numbers import Number\r\n\r\nimport pandas as pd\r\nfrom mlos.Spaces.Dimensions.Dimension import Dimension\r\n\r\n\r\nclass Point:\r\n \"\"\" Models a point in a Hypergrid.\r\n\r\n \"\"\"\r\n def __init__(self, **kwargs):\r\n self.dimension_value_dict = dict()\r\n for dimension_name, value in kwargs.items():\r\n self[dimension_name] = value\r\n\r\n def copy(self):\r\n return Point(**{key: value for key, value in self})\r\n\r\n def flat_copy(self):\r\n \"\"\" Creates a copy of the point but all dimension names are flattened.\r\n\r\n :return:\r\n \"\"\"\r\n flat_dict = {\r\n Dimension.flatten_dimension_name(dimension_name): value\r\n for dimension_name, value in self\r\n }\r\n return Point(**flat_dict)\r\n\r\n def __eq__(self, other):\r\n if not isinstance(other, Point):\r\n return False\r\n return \\\r\n all(other.get(dimension_name, None) == value for dimension_name, value in self) \\\r\n and \\\r\n all(self.get(dimension_name, None) == value for dimension_name, value in other)\r\n\r\n def __ne__(self, other):\r\n return not self == other\r\n\r\n def __iter__(self):\r\n for dimension_name, value in self.dimension_value_dict.items():\r\n if not isinstance(value, 
Point):\r\n yield dimension_name, value\r\n else:\r\n for sub_dimension_name, sub_dimension_value in value:\r\n yield dimension_name + \".\" + sub_dimension_name, sub_dimension_value\r\n\r\n def __getattr__(self, dimension_name):\r\n if dimension_name == \"__isabstractmethod__\":\r\n # A sad but necessary way to deal with ABC.\r\n return False\r\n try:\r\n return self[dimension_name]\r\n except KeyError:\r\n raise AttributeError(f\"This Point does not have a {dimension_name} attribute.\")\r\n\r\n def __setattr__(self, name, value):\r\n if name == \"dimension_value_dict\":\r\n self.__dict__[name] = value\r\n else:\r\n dimension_name = name\r\n subgrid_name, dimension_name_without_subgrid_name = Dimension.split_dimension_name(dimension_name)\r\n if subgrid_name is None:\r\n self.dimension_value_dict[dimension_name] = value\r\n else:\r\n point_in_subgrid = self.dimension_value_dict.get(subgrid_name, Point())\r\n point_in_subgrid[dimension_name_without_subgrid_name] = value\r\n self.dimension_value_dict[subgrid_name] = point_in_subgrid\r\n\r\n def __getitem__(self, dimension_name):\r\n if dimension_name not in self:\r\n raise KeyError(f\"This Point does not have a value along dimension: {dimension_name}\")\r\n subgrid_name, dimension_name_without_subgrid_name = Dimension.split_dimension_name(dimension_name)\r\n if subgrid_name is None:\r\n return self.dimension_value_dict[dimension_name]\r\n return self[subgrid_name][dimension_name_without_subgrid_name]\r\n\r\n def get(self, dimension_name, default=None):\r\n try:\r\n return self[dimension_name]\r\n except KeyError:\r\n return default\r\n\r\n def __setitem__(self, dimension_name, value):\r\n subgrid_name, dimension_name_without_subgrid_name = Dimension.split_dimension_name(dimension_name)\r\n if subgrid_name is None:\r\n self.dimension_value_dict[dimension_name] = value\r\n else:\r\n point_in_subgrid = self.dimension_value_dict.get(subgrid_name, Point())\r\n point_in_subgrid[dimension_name_without_subgrid_name] = value\r\n self.dimension_value_dict[subgrid_name] = point_in_subgrid\r\n\r\n def __contains__(self, dimension_name):\r\n subgrid_name, dimension_name_without_subgrid_name = Dimension.split_dimension_name(dimension_name)\r\n if subgrid_name is None:\r\n return dimension_name in self.dimension_value_dict\r\n if subgrid_name not in self.dimension_value_dict:\r\n return False\r\n return dimension_name_without_subgrid_name in self[subgrid_name]\r\n\r\n def __repr__(self):\r\n return self.__str__()\r\n\r\n def __str__(self):\r\n return str(self.to_json(indent=2))\r\n\r\n def __getstate__(self):\r\n return self.to_json()\r\n\r\n def __setstate__(self, state):\r\n temp_point = self.from_json(state)\r\n self.dimension_value_dict = temp_point.dimension_value_dict\r\n\r\n def to_json(self, indent=None):\r\n if indent is not None:\r\n return json.dumps(self.to_dict(), indent=indent)\r\n return json.dumps(self.to_dict())\r\n\r\n @classmethod\r\n def from_json(cls, json_str):\r\n coordinates = json.loads(json_str)\r\n return Point(**coordinates)\r\n\r\n def to_dict(self):\r\n return_dict = {}\r\n for param_name, value in self:\r\n if isinstance(value, Number) and int(value) == value and not isinstance(value, bool):\r\n value = int(value)\r\n return_dict[param_name] = value\r\n return return_dict\r\n\r\n def to_dataframe(self):\r\n return pd.DataFrame({param_name: [value] for param_name, value in self})\r\n\r\n @classmethod\r\n def from_dataframe(cls, dataframe: pd.DataFrame):\r\n assert len(dataframe.index) == 1\r\n dataframe = 
dataframe.dropna(axis=1)\r\n dataframe_dict = dataframe.to_dict(orient='list')\r\n point_dict = {key: values[0] for key, values in dataframe_dict.items()}\r\n return Point(**point_dict)\r\n", "path": "source/Mlos.Python/mlos/Spaces/Point.py"}]} | 1,758 | 162 |
gh_patches_debug_37529 | rasdani/github-patches | git_diff | dmlc__dgl-5377 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Sparse] Support column-wise softmax.
## 🔨Work Item
**IMPORTANT:**
* This template is only for the dev team to track project progress. For feature requests or bug reports, please use the corresponding issue templates.
* DO NOT create a new work item if the purpose is to fix an existing issue or feature request. We will directly use the issue in the project tracker.
Project tracker: https://github.com/orgs/dmlc/projects/2
## Description
<!-- short description of the work item -->
## Depending work items or issues
<!-- what must be done before this -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/dgl/sparse/softmax.py`
Content:
```
1 """Softmax op for SparseMatrix"""
2 # pylint: disable=invalid-name, W0622
3
4 import torch
5
6 from .sparse_matrix import SparseMatrix
7
8 __all__ = ["softmax"]
9
10
11 def softmax(input: SparseMatrix) -> SparseMatrix:
12 """Applies row-wise softmax to the non-zero elements of the sparse matrix.
13
14 Equivalently, applies softmax to the non-zero elements of the sparse
15 matrix along the column (``dim=1``) dimension.
16
17 If :attr:`input.val` takes shape ``(nnz, D)``, then the output matrix
18 :attr:`output` and :attr:`output.val` take the same shape as :attr:`input`
19 and :attr:`input.val`. :attr:`output.val[:, i]` is calculated based on
20 :attr:`input.val[:, i]`.
21
22 Parameters
23 ----------
24 input : SparseMatrix
25 The input sparse matrix
26
27 Returns
28 -------
29 SparseMatrix
30 The output sparse matrix
31
32 Examples
33 --------
34
35 Case1: matrix with values of shape (nnz)
36
37 >>> indices = torch.tensor([[0, 0, 1, 2], [1, 2, 2, 0]])
38 >>> nnz = len(row)
39 >>> val = torch.arange(nnz).float()
40 >>> A = dglsp.spmatrix(indices, val)
41 >>> dglsp.softmax(A)
42 SparseMatrix(indices=tensor([[0, 0, 1, 2],
43 [1, 2, 2, 0]]),
44 values=tensor([0.2689, 0.7311, 1.0000, 1.0000]),
45 shape=(3, 3), nnz=4)
46
47 Case2: matrix with values of shape (nnz, D)
48
49 >>> indices = torch.tensor([[0, 0, 1, 2], [1, 2, 2, 0]])
50 >>> val = torch.tensor([[0., 7.], [1., 3.], [2., 2.], [3., 1.]])
51 >>> A = dglsp.spmatrix(indices, val)
52 >>> dglsp.softmax(A)
53 SparseMatrix(indices=tensor([[0, 0, 1, 2],
54 [1, 2, 2, 0]]),
55 values=tensor([[0.2689, 0.9820],
56 [0.7311, 0.0180],
57 [1.0000, 1.0000],
58 [1.0000, 1.0000]]),
59 shape=(3, 3), nnz=4, val_size=(2,))
60 """
61 return SparseMatrix(torch.ops.dgl_sparse.softmax(input.c_sparse_matrix))
62
63
64 SparseMatrix.softmax = softmax
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/dgl/sparse/softmax.py b/python/dgl/sparse/softmax.py
--- a/python/dgl/sparse/softmax.py
+++ b/python/dgl/sparse/softmax.py
@@ -8,11 +8,10 @@
__all__ = ["softmax"]
-def softmax(input: SparseMatrix) -> SparseMatrix:
- """Applies row-wise softmax to the non-zero elements of the sparse matrix.
-
- Equivalently, applies softmax to the non-zero elements of the sparse
- matrix along the column (``dim=1``) dimension.
+def softmax(input: SparseMatrix, dim: int = 1) -> SparseMatrix:
+ """Applies softmax to the non-zero elements of the sparse matrix on the
+ dimension :attr:``dim``. dim = 0 or 1 indicates column-wise or row-wise
+ softmax respectively.
If :attr:`input.val` takes shape ``(nnz, D)``, then the output matrix
:attr:`output` and :attr:`output.val` take the same shape as :attr:`input`
@@ -32,11 +31,10 @@
Examples
--------
- Case1: matrix with values of shape (nnz)
+ Case1: row-wise softmax on matrix with values of shape (nnz)
>>> indices = torch.tensor([[0, 0, 1, 2], [1, 2, 2, 0]])
- >>> nnz = len(row)
- >>> val = torch.arange(nnz).float()
+ >>> val = torch.tensor([0., 1., 2., 3.])
>>> A = dglsp.spmatrix(indices, val)
>>> dglsp.softmax(A)
SparseMatrix(indices=tensor([[0, 0, 1, 2],
@@ -44,7 +42,7 @@
values=tensor([0.2689, 0.7311, 1.0000, 1.0000]),
shape=(3, 3), nnz=4)
- Case2: matrix with values of shape (nnz, D)
+ Case2: row-wise softmax on matrix with values of shape (nnz, D)
>>> indices = torch.tensor([[0, 0, 1, 2], [1, 2, 2, 0]])
>>> val = torch.tensor([[0., 7.], [1., 3.], [2., 2.], [3., 1.]])
@@ -57,8 +55,21 @@
[1.0000, 1.0000],
[1.0000, 1.0000]]),
shape=(3, 3), nnz=4, val_size=(2,))
+
+ Case3: column-wise softmax on matrix with values of shape (nnz)
+
+ >>> indices = torch.tensor([[0, 0, 1, 2], [1, 2, 2, 0]])
+ >>> val = torch.tensor([0., 1., 2., 3.])
+ >>> A = dglsp.spmatrix(indices, val)
+ >>> dglsp.softmax(A, 0)
+ SparseMatrix(indices=tensor([[0, 0, 1, 2],
+ [1, 2, 2, 0]]),
+ values=tensor([1.0000, 0.2689, 0.7311, 1.0000]),
+ shape=(3, 3), nnz=4)
"""
- return SparseMatrix(torch.ops.dgl_sparse.softmax(input.c_sparse_matrix))
+ return SparseMatrix(
+ torch.ops.dgl_sparse.softmax(input.c_sparse_matrix, dim)
+ )
SparseMatrix.softmax = softmax
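
For reference, a small usage sketch of the new `dim` argument. It assumes the `import dgl.sparse as dglsp` convention implied by the docstring examples; treat the import path as an assumption rather than a guarantee.

```python
import torch
import dgl.sparse as dglsp  # assumed import path for the dglsp namespace used above

indices = torch.tensor([[0, 0, 1, 2], [1, 2, 2, 0]])
val = torch.tensor([0.0, 1.0, 2.0, 3.0])
A = dglsp.spmatrix(indices, val)

row_wise = dglsp.softmax(A)     # default dim=1: normalize non-zeros within each row
col_wise = dglsp.softmax(A, 0)  # dim=0: normalize non-zeros within each column
print(row_wise.val)
print(col_wise.val)
```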
| {"golden_diff": "diff --git a/python/dgl/sparse/softmax.py b/python/dgl/sparse/softmax.py\n--- a/python/dgl/sparse/softmax.py\n+++ b/python/dgl/sparse/softmax.py\n@@ -8,11 +8,10 @@\n __all__ = [\"softmax\"]\n \n \n-def softmax(input: SparseMatrix) -> SparseMatrix:\n- \"\"\"Applies row-wise softmax to the non-zero elements of the sparse matrix.\n-\n- Equivalently, applies softmax to the non-zero elements of the sparse\n- matrix along the column (``dim=1``) dimension.\n+def softmax(input: SparseMatrix, dim: int = 1) -> SparseMatrix:\n+ \"\"\"Applies softmax to the non-zero elements of the sparse matrix on the\n+ dimension :attr:``dim``. dim = 0 or 1 indicates column-wise or row-wise\n+ softmax respectively.\n \n If :attr:`input.val` takes shape ``(nnz, D)``, then the output matrix\n :attr:`output` and :attr:`output.val` take the same shape as :attr:`input`\n@@ -32,11 +31,10 @@\n Examples\n --------\n \n- Case1: matrix with values of shape (nnz)\n+ Case1: row-wise softmax on matrix with values of shape (nnz)\n \n >>> indices = torch.tensor([[0, 0, 1, 2], [1, 2, 2, 0]])\n- >>> nnz = len(row)\n- >>> val = torch.arange(nnz).float()\n+ >>> val = torch.tensor([0., 1., 2., 3.])\n >>> A = dglsp.spmatrix(indices, val)\n >>> dglsp.softmax(A)\n SparseMatrix(indices=tensor([[0, 0, 1, 2],\n@@ -44,7 +42,7 @@\n values=tensor([0.2689, 0.7311, 1.0000, 1.0000]),\n shape=(3, 3), nnz=4)\n \n- Case2: matrix with values of shape (nnz, D)\n+ Case2: row-wise softmax on matrix with values of shape (nnz, D)\n \n >>> indices = torch.tensor([[0, 0, 1, 2], [1, 2, 2, 0]])\n >>> val = torch.tensor([[0., 7.], [1., 3.], [2., 2.], [3., 1.]])\n@@ -57,8 +55,21 @@\n [1.0000, 1.0000],\n [1.0000, 1.0000]]),\n shape=(3, 3), nnz=4, val_size=(2,))\n+\n+ Case3: column-wise softmax on matrix with values of shape (nnz)\n+\n+ >>> indices = torch.tensor([[0, 0, 1, 2], [1, 2, 2, 0]])\n+ >>> val = torch.tensor([0., 1., 2., 3.])\n+ >>> A = dglsp.spmatrix(indices, val)\n+ >>> dglsp.softmax(A, 0)\n+ SparseMatrix(indices=tensor([[0, 0, 1, 2],\n+ [1, 2, 2, 0]]),\n+ values=tensor([1.0000, 0.2689, 0.7311, 1.0000]),\n+ shape=(3, 3), nnz=4)\n \"\"\"\n- return SparseMatrix(torch.ops.dgl_sparse.softmax(input.c_sparse_matrix))\n+ return SparseMatrix(\n+ torch.ops.dgl_sparse.softmax(input.c_sparse_matrix, dim)\n+ )\n \n \n SparseMatrix.softmax = softmax\n", "issue": "[Sparse] Support column-wise softmax.\n## \ud83d\udd28Work Item\r\n\r\n**IMPORTANT:**\r\n* This template is only for dev team to track project progress. For feature request or bug report, please use the corresponding issue templates.\r\n* DO NOT create a new work item if the purpose is to fix an existing issue or feature request. 
We will directly use the issue in the project tracker.\r\n\r\nProject tracker: https://github.com/orgs/dmlc/projects/2\r\n\r\n## Description\r\n\r\n<!-- short description of the work item -->\r\n\r\n## Depending work items or issues\r\n\r\n<!-- what must be done before this -->\r\n\n", "before_files": [{"content": "\"\"\"Softmax op for SparseMatrix\"\"\"\n# pylint: disable=invalid-name, W0622\n\nimport torch\n\nfrom .sparse_matrix import SparseMatrix\n\n__all__ = [\"softmax\"]\n\n\ndef softmax(input: SparseMatrix) -> SparseMatrix:\n \"\"\"Applies row-wise softmax to the non-zero elements of the sparse matrix.\n\n Equivalently, applies softmax to the non-zero elements of the sparse\n matrix along the column (``dim=1``) dimension.\n\n If :attr:`input.val` takes shape ``(nnz, D)``, then the output matrix\n :attr:`output` and :attr:`output.val` take the same shape as :attr:`input`\n and :attr:`input.val`. :attr:`output.val[:, i]` is calculated based on\n :attr:`input.val[:, i]`.\n\n Parameters\n ----------\n input : SparseMatrix\n The input sparse matrix\n\n Returns\n -------\n SparseMatrix\n The output sparse matrix\n\n Examples\n --------\n\n Case1: matrix with values of shape (nnz)\n\n >>> indices = torch.tensor([[0, 0, 1, 2], [1, 2, 2, 0]])\n >>> nnz = len(row)\n >>> val = torch.arange(nnz).float()\n >>> A = dglsp.spmatrix(indices, val)\n >>> dglsp.softmax(A)\n SparseMatrix(indices=tensor([[0, 0, 1, 2],\n [1, 2, 2, 0]]),\n values=tensor([0.2689, 0.7311, 1.0000, 1.0000]),\n shape=(3, 3), nnz=4)\n\n Case2: matrix with values of shape (nnz, D)\n\n >>> indices = torch.tensor([[0, 0, 1, 2], [1, 2, 2, 0]])\n >>> val = torch.tensor([[0., 7.], [1., 3.], [2., 2.], [3., 1.]])\n >>> A = dglsp.spmatrix(indices, val)\n >>> dglsp.softmax(A)\n SparseMatrix(indices=tensor([[0, 0, 1, 2],\n [1, 2, 2, 0]]),\n values=tensor([[0.2689, 0.9820],\n [0.7311, 0.0180],\n [1.0000, 1.0000],\n [1.0000, 1.0000]]),\n shape=(3, 3), nnz=4, val_size=(2,))\n \"\"\"\n return SparseMatrix(torch.ops.dgl_sparse.softmax(input.c_sparse_matrix))\n\n\nSparseMatrix.softmax = softmax\n", "path": "python/dgl/sparse/softmax.py"}], "after_files": [{"content": "\"\"\"Softmax op for SparseMatrix\"\"\"\n# pylint: disable=invalid-name, W0622\n\nimport torch\n\nfrom .sparse_matrix import SparseMatrix\n\n__all__ = [\"softmax\"]\n\n\ndef softmax(input: SparseMatrix, dim: int = 1) -> SparseMatrix:\n \"\"\"Applies softmax to the non-zero elements of the sparse matrix on the\n dimension :attr:``dim``. dim = 0 or 1 indicates column-wise or row-wise\n softmax respectively.\n\n If :attr:`input.val` takes shape ``(nnz, D)``, then the output matrix\n :attr:`output` and :attr:`output.val` take the same shape as :attr:`input`\n and :attr:`input.val`. 
:attr:`output.val[:, i]` is calculated based on\n :attr:`input.val[:, i]`.\n\n Parameters\n ----------\n input : SparseMatrix\n The input sparse matrix\n\n Returns\n -------\n SparseMatrix\n The output sparse matrix\n\n Examples\n --------\n\n Case1: row-wise softmax on matrix with values of shape (nnz)\n\n >>> indices = torch.tensor([[0, 0, 1, 2], [1, 2, 2, 0]])\n >>> val = torch.tensor([0., 1., 2., 3.])\n >>> A = dglsp.spmatrix(indices, val)\n >>> dglsp.softmax(A)\n SparseMatrix(indices=tensor([[0, 0, 1, 2],\n [1, 2, 2, 0]]),\n values=tensor([0.2689, 0.7311, 1.0000, 1.0000]),\n shape=(3, 3), nnz=4)\n\n Case2: row-wise softmax on matrix with values of shape (nnz, D)\n\n >>> indices = torch.tensor([[0, 0, 1, 2], [1, 2, 2, 0]])\n >>> val = torch.tensor([[0., 7.], [1., 3.], [2., 2.], [3., 1.]])\n >>> A = dglsp.spmatrix(indices, val)\n >>> dglsp.softmax(A)\n SparseMatrix(indices=tensor([[0, 0, 1, 2],\n [1, 2, 2, 0]]),\n values=tensor([[0.2689, 0.9820],\n [0.7311, 0.0180],\n [1.0000, 1.0000],\n [1.0000, 1.0000]]),\n shape=(3, 3), nnz=4, val_size=(2,))\n\n Case3: column-wise softmax on matrix with values of shape (nnz)\n\n >>> indices = torch.tensor([[0, 0, 1, 2], [1, 2, 2, 0]])\n >>> val = torch.tensor([0., 1., 2., 3.])\n >>> A = dglsp.spmatrix(indices, val)\n >>> dglsp.softmax(A, 0)\n SparseMatrix(indices=tensor([[0, 0, 1, 2],\n [1, 2, 2, 0]]),\n values=tensor([1.0000, 0.2689, 0.7311, 1.0000]),\n shape=(3, 3), nnz=4)\n \"\"\"\n return SparseMatrix(\n torch.ops.dgl_sparse.softmax(input.c_sparse_matrix, dim)\n )\n\n\nSparseMatrix.softmax = softmax\n", "path": "python/dgl/sparse/softmax.py"}]} | 1,143 | 863 |
gh_patches_debug_20857 | rasdani/github-patches | git_diff | bridgecrewio__checkov-3127 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
baseline output can change resource order for each run
If I generate a baseline file, then make some improvements to my Terraform code and run the baseline again, what I am finding is that the order of the resources for each file can often change, which then shows up as a diff against the previous baseline file - when in reality nothing has changed but the order of the resources in the findings array in the baseline output file.
I was wondering could the findings array just be sorted before being output? Then the resource order should be fixed and any actual diffs should be real changes to check_ids (which is sorted already) or new resources being added?
e.g. this is a diff from two runs of generating a baseline file; nothing has actually changed, just resources moved around in the array.
```
@@ -100,13 +100,12 @@
"file": "/main.tf",
"findings": [
{
- "resource": "aws_s3_bucket.canary_artifacts",
+ "resource": "aws_s3_bucket.backups",
"check_ids": [
"CKV2_AWS_6",
"CKV_AWS_144",
"CKV_AWS_145",
- "CKV_AWS_18",
- "CKV_AWS_21"
+ "CKV_AWS_18"
]
},
{
@@ -119,12 +118,13 @@
]
},
{
- "resource": "aws_s3_bucket.lambdas",
+ "resource": "aws_s3_bucket.canary_artifacts",
"check_ids": [
"CKV2_AWS_6",
"CKV_AWS_144",
"CKV_AWS_145",
- "CKV_AWS_18"
+ "CKV_AWS_18",
+ "CKV_AWS_21"
]
},
{
@@ -137,7 +137,7 @@
]
},
{
- "resource": "aws_s3_bucket.backups",
+ "resource": "aws_s3_bucket.lambdas",
"check_ids": [
"CKV2_AWS_6",
"CKV_AWS_144",
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/common/output/baseline.py`
Content:
```
1 from __future__ import annotations
2
3 import json
4 from collections import defaultdict
5 from checkov.common.models.enums import CheckResult
6 from typing import Any, TYPE_CHECKING
7
8 if TYPE_CHECKING:
9 from checkov.common.output.record import Record
10 from checkov.common.output.report import Report
11 from checkov.common.typing import _BaselineFinding, _BaselineFailedChecks
12
13
14 class Baseline:
15 def __init__(self, output_skipped: bool = False) -> None:
16 self.path = ""
17 self.path_failed_checks_map: dict[str, list[_BaselineFinding]] = defaultdict(list)
18 self.failed_checks: list[_BaselineFailedChecks] = []
19 self.output_skipped = output_skipped
20
21 def add_findings_from_report(self, report: Report) -> None:
22 for check in report.failed_checks:
23 try:
24 existing = next(
25 x for x in self.path_failed_checks_map[check.file_path] if x["resource"] == check.resource
26 )
27 except StopIteration:
28 existing = {"resource": check.resource, "check_ids": []}
29 self.path_failed_checks_map[check.file_path].append(existing)
30 existing["check_ids"].append(check.check_id)
31 existing["check_ids"].sort() # Sort the check IDs to be nicer to the eye
32
33 def to_dict(self) -> dict[str, Any]:
34 """
35 The output of this class needs to be very explicit, hence the following structure of the dict:
36 {
37 "failed_checks": [
38 {
39 "file": "path/to/file",
40 "findings: [
41 {
42 "resource": "aws_s3_bucket.this",
43 "check_ids": [
44 "CKV_AWS_1",
45 "CKV_AWS_2",
46 "CKV_AWS_3"
47 ]
48 }
49 ]
50 }
51 ]
52 }
53 """
54 failed_checks_list = []
55 for file, findings in self.path_failed_checks_map.items():
56 formatted_findings = []
57 for finding in findings:
58 formatted_findings.append({"resource": finding["resource"], "check_ids": finding["check_ids"]})
59 failed_checks_list.append({"file": file, "findings": formatted_findings})
60
61 resp = {"failed_checks": failed_checks_list}
62 return resp
63
64 def compare_and_reduce_reports(self, scan_reports: list[Report]) -> None:
65 for scan_report in scan_reports:
66 scan_report.passed_checks = [
67 check for check in scan_report.passed_checks if self._is_check_in_baseline(check)
68 ]
69 scan_report.skipped_checks = [
70 check for check in scan_report.skipped_checks if self._is_check_in_baseline(check)
71 ]
72 if self.output_skipped:
73 for check in scan_report.failed_checks:
74 if self._is_check_in_baseline(check):
75 check.check_result["suppress_comment"] = "baseline-skipped"
76 check.check_result["result"] = CheckResult.SKIPPED
77 scan_report.skipped_checks.append(check)
78 scan_report.failed_checks = [
79 check for check in scan_report.failed_checks if not self._is_check_in_baseline(check)
80 ]
81
82 def _is_check_in_baseline(self, check: Record) -> bool:
83 failed_check_id = check.check_id
84 failed_check_resource = check.resource
85 for baseline_failed_check in self.failed_checks:
86 for finding in baseline_failed_check["findings"]:
87 if finding["resource"] == failed_check_resource and failed_check_id in finding["check_ids"]:
88 return True
89 return False
90
91 def from_json(self, file_path: str) -> None:
92 self.path = file_path
93 with open(file_path, "r") as f:
94 baseline_raw = json.load(f)
95 self.failed_checks = baseline_raw.get("failed_checks", {})
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/common/output/baseline.py b/checkov/common/output/baseline.py
--- a/checkov/common/output/baseline.py
+++ b/checkov/common/output/baseline.py
@@ -2,6 +2,8 @@
import json
from collections import defaultdict
+from operator import itemgetter
+
from checkov.common.models.enums import CheckResult
from typing import Any, TYPE_CHECKING
@@ -56,9 +58,9 @@
formatted_findings = []
for finding in findings:
formatted_findings.append({"resource": finding["resource"], "check_ids": finding["check_ids"]})
- failed_checks_list.append({"file": file, "findings": formatted_findings})
+ failed_checks_list.append({"file": file, "findings": sorted(formatted_findings, key=itemgetter("resource"))})
- resp = {"failed_checks": failed_checks_list}
+ resp = {"failed_checks": sorted(failed_checks_list, key=itemgetter("file"))}
return resp
def compare_and_reduce_reports(self, scan_reports: list[Report]) -> None:
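
The fix comes down to deterministic ordering with `operator.itemgetter`; the standalone sketch below shows the same idea on made-up example data (the resource names are illustrative, not taken from any real scan):

```python
from operator import itemgetter

findings = [
    {"resource": "aws_s3_bucket.lambdas", "check_ids": ["CKV_AWS_18"]},
    {"resource": "aws_s3_bucket.backups", "check_ids": ["CKV_AWS_18", "CKV_AWS_21"]},
]

# Sorting by "resource" makes repeated baseline runs byte-for-byte comparable,
# so only genuine changes to check_ids or newly added resources show up as diffs.
stable = sorted(findings, key=itemgetter("resource"))
print([f["resource"] for f in stable])  # ['aws_s3_bucket.backups', 'aws_s3_bucket.lambdas']
```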
| {"golden_diff": "diff --git a/checkov/common/output/baseline.py b/checkov/common/output/baseline.py\n--- a/checkov/common/output/baseline.py\n+++ b/checkov/common/output/baseline.py\n@@ -2,6 +2,8 @@\n \n import json\n from collections import defaultdict\n+from operator import itemgetter\n+\n from checkov.common.models.enums import CheckResult\n from typing import Any, TYPE_CHECKING\n \n@@ -56,9 +58,9 @@\n formatted_findings = []\n for finding in findings:\n formatted_findings.append({\"resource\": finding[\"resource\"], \"check_ids\": finding[\"check_ids\"]})\n- failed_checks_list.append({\"file\": file, \"findings\": formatted_findings})\n+ failed_checks_list.append({\"file\": file, \"findings\": sorted(formatted_findings, key=itemgetter(\"resource\"))})\n \n- resp = {\"failed_checks\": failed_checks_list}\n+ resp = {\"failed_checks\": sorted(failed_checks_list, key=itemgetter(\"file\"))}\n return resp\n \n def compare_and_reduce_reports(self, scan_reports: list[Report]) -> None:\n", "issue": "baseline output can change resource order for each run\nIf I generate a baseline file and I have then made some improvements to my Terraform code and I run the baseline again. What I am finding is that the order of the resources for each file can often change which then shows up as a diff against the prevous baseline file - when in reality nothing has change but the order of the resources in the findings array in the baseline output file \r\n\r\nI was wondering could the findings array just be sorted before being output? Then the resource order should be fixed and any actual diffs should be real changes to check_ids (which is sorted already) or new resources being added?\r\n\r\ne.g. this is a diff from two runs of generating a baseline file nothing has actually change just resources moved around in the array.\r\n\r\n```\r\n@@ -100,13 +100,12 @@\r\n \"file\": \"/main.tf\",\r\n \"findings\": [\r\n {\r\n- \"resource\": \"aws_s3_bucket.canary_artifacts\",\r\n+ \"resource\": \"aws_s3_bucket.backups\",\r\n \"check_ids\": [\r\n \"CKV2_AWS_6\",\r\n \"CKV_AWS_144\",\r\n \"CKV_AWS_145\",\r\n- \"CKV_AWS_18\",\r\n- \"CKV_AWS_21\"\r\n+ \"CKV_AWS_18\"\r\n ]\r\n },\r\n {\r\n@@ -119,12 +118,13 @@\r\n ]\r\n },\r\n {\r\n- \"resource\": \"aws_s3_bucket.lambdas\",\r\n+ \"resource\": \"aws_s3_bucket.canary_artifacts\",\r\n \"check_ids\": [\r\n \"CKV2_AWS_6\",\r\n \"CKV_AWS_144\",\r\n \"CKV_AWS_145\",\r\n- \"CKV_AWS_18\"\r\n+ \"CKV_AWS_18\",\r\n+ \"CKV_AWS_21\"\r\n ]\r\n },\r\n {\r\n@@ -137,7 +137,7 @@\r\n ]\r\n },\r\n {\r\n- \"resource\": \"aws_s3_bucket.backups\",\r\n+ \"resource\": \"aws_s3_bucket.lambdas\",\r\n \"check_ids\": [\r\n \"CKV2_AWS_6\",\r\n \"CKV_AWS_144\",\r\n```\n", "before_files": [{"content": "from __future__ import annotations\n\nimport json\nfrom collections import defaultdict\nfrom checkov.common.models.enums import CheckResult\nfrom typing import Any, TYPE_CHECKING\n\nif TYPE_CHECKING:\n from checkov.common.output.record import Record\n from checkov.common.output.report import Report\n from checkov.common.typing import _BaselineFinding, _BaselineFailedChecks\n\n\nclass Baseline:\n def __init__(self, output_skipped: bool = False) -> None:\n self.path = \"\"\n self.path_failed_checks_map: dict[str, list[_BaselineFinding]] = defaultdict(list)\n self.failed_checks: list[_BaselineFailedChecks] = []\n self.output_skipped = output_skipped\n\n def add_findings_from_report(self, report: Report) -> None:\n for check in report.failed_checks:\n try:\n existing = next(\n x for x in 
self.path_failed_checks_map[check.file_path] if x[\"resource\"] == check.resource\n )\n except StopIteration:\n existing = {\"resource\": check.resource, \"check_ids\": []}\n self.path_failed_checks_map[check.file_path].append(existing)\n existing[\"check_ids\"].append(check.check_id)\n existing[\"check_ids\"].sort() # Sort the check IDs to be nicer to the eye\n\n def to_dict(self) -> dict[str, Any]:\n \"\"\"\n The output of this class needs to be very explicit, hence the following structure of the dict:\n {\n \"failed_checks\": [\n {\n \"file\": \"path/to/file\",\n \"findings: [\n {\n \"resource\": \"aws_s3_bucket.this\",\n \"check_ids\": [\n \"CKV_AWS_1\",\n \"CKV_AWS_2\",\n \"CKV_AWS_3\"\n ]\n }\n ]\n }\n ]\n }\n \"\"\"\n failed_checks_list = []\n for file, findings in self.path_failed_checks_map.items():\n formatted_findings = []\n for finding in findings:\n formatted_findings.append({\"resource\": finding[\"resource\"], \"check_ids\": finding[\"check_ids\"]})\n failed_checks_list.append({\"file\": file, \"findings\": formatted_findings})\n\n resp = {\"failed_checks\": failed_checks_list}\n return resp\n\n def compare_and_reduce_reports(self, scan_reports: list[Report]) -> None:\n for scan_report in scan_reports:\n scan_report.passed_checks = [\n check for check in scan_report.passed_checks if self._is_check_in_baseline(check)\n ]\n scan_report.skipped_checks = [\n check for check in scan_report.skipped_checks if self._is_check_in_baseline(check)\n ]\n if self.output_skipped:\n for check in scan_report.failed_checks:\n if self._is_check_in_baseline(check):\n check.check_result[\"suppress_comment\"] = \"baseline-skipped\"\n check.check_result[\"result\"] = CheckResult.SKIPPED\n scan_report.skipped_checks.append(check)\n scan_report.failed_checks = [\n check for check in scan_report.failed_checks if not self._is_check_in_baseline(check)\n ]\n\n def _is_check_in_baseline(self, check: Record) -> bool:\n failed_check_id = check.check_id\n failed_check_resource = check.resource\n for baseline_failed_check in self.failed_checks:\n for finding in baseline_failed_check[\"findings\"]:\n if finding[\"resource\"] == failed_check_resource and failed_check_id in finding[\"check_ids\"]:\n return True\n return False\n\n def from_json(self, file_path: str) -> None:\n self.path = file_path\n with open(file_path, \"r\") as f:\n baseline_raw = json.load(f)\n self.failed_checks = baseline_raw.get(\"failed_checks\", {})\n", "path": "checkov/common/output/baseline.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport json\nfrom collections import defaultdict\nfrom operator import itemgetter\n\nfrom checkov.common.models.enums import CheckResult\nfrom typing import Any, TYPE_CHECKING\n\nif TYPE_CHECKING:\n from checkov.common.output.record import Record\n from checkov.common.output.report import Report\n from checkov.common.typing import _BaselineFinding, _BaselineFailedChecks\n\n\nclass Baseline:\n def __init__(self, output_skipped: bool = False) -> None:\n self.path = \"\"\n self.path_failed_checks_map: dict[str, list[_BaselineFinding]] = defaultdict(list)\n self.failed_checks: list[_BaselineFailedChecks] = []\n self.output_skipped = output_skipped\n\n def add_findings_from_report(self, report: Report) -> None:\n for check in report.failed_checks:\n try:\n existing = next(\n x for x in self.path_failed_checks_map[check.file_path] if x[\"resource\"] == check.resource\n )\n except StopIteration:\n existing = {\"resource\": check.resource, \"check_ids\": []}\n 
self.path_failed_checks_map[check.file_path].append(existing)\n existing[\"check_ids\"].append(check.check_id)\n existing[\"check_ids\"].sort() # Sort the check IDs to be nicer to the eye\n\n def to_dict(self) -> dict[str, Any]:\n \"\"\"\n The output of this class needs to be very explicit, hence the following structure of the dict:\n {\n \"failed_checks\": [\n {\n \"file\": \"path/to/file\",\n \"findings: [\n {\n \"resource\": \"aws_s3_bucket.this\",\n \"check_ids\": [\n \"CKV_AWS_1\",\n \"CKV_AWS_2\",\n \"CKV_AWS_3\"\n ]\n }\n ]\n }\n ]\n }\n \"\"\"\n failed_checks_list = []\n for file, findings in self.path_failed_checks_map.items():\n formatted_findings = []\n for finding in findings:\n formatted_findings.append({\"resource\": finding[\"resource\"], \"check_ids\": finding[\"check_ids\"]})\n failed_checks_list.append({\"file\": file, \"findings\": sorted(formatted_findings, key=itemgetter(\"resource\"))})\n\n resp = {\"failed_checks\": sorted(failed_checks_list, key=itemgetter(\"file\"))}\n return resp\n\n def compare_and_reduce_reports(self, scan_reports: list[Report]) -> None:\n for scan_report in scan_reports:\n scan_report.passed_checks = [\n check for check in scan_report.passed_checks if self._is_check_in_baseline(check)\n ]\n scan_report.skipped_checks = [\n check for check in scan_report.skipped_checks if self._is_check_in_baseline(check)\n ]\n if self.output_skipped:\n for check in scan_report.failed_checks:\n if self._is_check_in_baseline(check):\n check.check_result[\"suppress_comment\"] = \"baseline-skipped\"\n check.check_result[\"result\"] = CheckResult.SKIPPED\n scan_report.skipped_checks.append(check)\n scan_report.failed_checks = [\n check for check in scan_report.failed_checks if not self._is_check_in_baseline(check)\n ]\n\n def _is_check_in_baseline(self, check: Record) -> bool:\n failed_check_id = check.check_id\n failed_check_resource = check.resource\n for baseline_failed_check in self.failed_checks:\n for finding in baseline_failed_check[\"findings\"]:\n if finding[\"resource\"] == failed_check_resource and failed_check_id in finding[\"check_ids\"]:\n return True\n return False\n\n def from_json(self, file_path: str) -> None:\n self.path = file_path\n with open(file_path, \"r\") as f:\n baseline_raw = json.load(f)\n self.failed_checks = baseline_raw.get(\"failed_checks\", {})\n", "path": "checkov/common/output/baseline.py"}]} | 1,745 | 235 |
gh_patches_debug_10241 | rasdani/github-patches | git_diff | rootpy__rootpy-748 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error when using root_open: 'TDirectory' object has no attribute 'func'
As above:
`AttributeError: 'TDirectory' object has no attribute 'func'`
--- END ISSUE ---
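A minimal, standalone sketch of one way to sidestep this AttributeError: fall back when the global object has no `func` wrapper instead of assuming it is always present. The helper name and the `asrootpy` callable are placeholders for illustration, not rootpy API, and this is an attribute-based workaround rather than a version gate.

```python
# Hypothetical helper (names invented for illustration): wrap a ROOT thread-local
# global whether or not it carries the _ExpandMacroFunction-style ``func`` attribute.
def make_global_getter(glob, asrootpy):
    orig_func = getattr(glob, "func", None)
    if orig_func is None:
        # Some ROOT builds hand back the object itself, so convert it directly.
        return lambda: asrootpy(glob)
    # Otherwise defer to the wrapper so the current value is resolved on each access.
    return lambda: asrootpy(orig_func())


if __name__ == "__main__":
    class _FakeGlobal:  # stand-in for ROOT.gDirectory without a func() wrapper
        name = "fake"

    getter = make_global_getter(_FakeGlobal(), asrootpy=lambda obj: obj)
    print(getter().name)  # prints "fake" instead of raising AttributeError
```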
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rootpy/ROOT.py`
Content:
```
1 # Copyright 2012 the rootpy developers
2 # distributed under the terms of the GNU General Public License
3 """
4 :py:mod:`rootpy.ROOT`
5 =====================
6
7 This module is intended to be a drop-in replacement for ordinary
8 PyROOT imports by mimicking PyROOT's interface. If you find a case where it is
9 not, please report an issue to the rootpy developers.
10
11 Both ROOT and rootpy classes can be accessed in a harmonized way through this
12 module. This means you can take advantage of rootpy classes automatically by
13 replacing ``import ROOT`` with ``import rootpy.ROOT as ROOT`` or
14 ``from rootpy import ROOT`` in your code, while maintaining backward
15 compatibility with existing use of ROOT's classes.
16
17 ROOT classes are automatically "asrootpy'd" *after* the constructor in ROOT has
18 been called:
19
20 .. sourcecode:: python
21
22 >>> import rootpy.ROOT as ROOT
23 >>> h = ROOT.TH1F('name', 'title', 10, 0, 1)
24 >>> h
25 Hist('name')
26 >>> h.TYPE
27 'F'
28
29 Also access rootpy classes under this same module without needing to remember
30 where to import them from in rootpy:
31
32 .. sourcecode:: python
33
34 >>> import rootpy.ROOT as ROOT
35 >>> h = ROOT.Hist(10, 0, 1, name='name', type='F')
36 >>> h
37 Hist('name')
38 >>> h.TYPE
39 'F'
40
41 Plain old ROOT can still be accessed through the ``R`` property:
42
43 .. sourcecode:: python
44
45 >>> from rootpy import ROOT
46 >>> ROOT.R.TFile
47 <class 'ROOT.TFile'>
48
49 """
50 from __future__ import absolute_import
51
52 from copy import copy
53
54 import ROOT
55
56 from . import asrootpy, lookup_rootpy, ROOT_VERSION
57 from . import QROOT, stl
58 from .utils.module_facade import Facade
59
60 __all__ = []
61
62
63 def proxy_global(name, no_expand_macro=False):
64 """
65 Used to automatically asrootpy ROOT's thread local variables
66 """
67 if no_expand_macro: # pragma: no cover
68 # handle older ROOT versions without _ExpandMacroFunction wrapping
69 @property
70 def gSomething_no_func(self):
71 glob = self(getattr(ROOT, name))
72 # create a fake func() that just returns self
73 def func():
74 return glob
75 glob.func = func
76 return glob
77 return gSomething_no_func
78
79 @property
80 def gSomething(self):
81 glob = getattr(ROOT, name)
82 orig_func = glob.func
83
84 def asrootpy_izing_func():
85 return self(orig_func())
86
87 # new_glob = copy(glob)
88 new_glob = glob.__class__.__new__(glob.__class__)
89 new_glob.func = asrootpy_izing_func
90 # Memoize
91 setattr(type(self), name, new_glob)
92 return new_glob
93 return gSomething
94
95
96 @Facade(__name__, expose_internal=False)
97 class Module(object):
98
99 __version__ = ROOT_VERSION
100
101 def __call__(self, arg, after_init=False):
102 return asrootpy(arg, warn=False, after_init=after_init)
103
104 def __getattr__(self, what):
105 try:
106 # check ROOT
107 result = self(getattr(ROOT, what), after_init=True)
108 except AttributeError:
109 # check rootpy
110 result = lookup_rootpy(what)
111 if result is None:
112 raise AttributeError(
113 'ROOT does not have the attribute `{0}` '
114 'and rootpy does not contain the class `{0}`'.format(what))
115 return result
116
117 try:
118 # Memoize
119 setattr(self, what, result)
120 except AttributeError:
121 # Oops... Oh well. I tried.
122 pass
123
124 return result
125
126 @property
127 def R(self):
128 return ROOT
129
130 gPad = proxy_global("gPad")
131 gVirtualX = proxy_global("gVirtualX")
132
133 if ROOT_VERSION < (5, 32, 0): # pragma: no cover
134 # handle versions of ROOT older than 5.32.00
135 gDirectory = proxy_global("gDirectory", no_expand_macro=True)
136 gFile = proxy_global("gFile", no_expand_macro=True)
137 gInterpreter = proxy_global("gInterpreter", no_expand_macro=True)
138 else:
139 gDirectory = proxy_global("gDirectory")
140 gFile = proxy_global("gFile")
141 gInterpreter = proxy_global("gInterpreter")
142
143 # use the smart template STL types from rootpy.stl instead
144 for t in QROOT.std.stlclasses:
145 locals()[t] = getattr(stl, t)
146 del t
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rootpy/ROOT.py b/rootpy/ROOT.py
--- a/rootpy/ROOT.py
+++ b/rootpy/ROOT.py
@@ -130,8 +130,7 @@
gPad = proxy_global("gPad")
gVirtualX = proxy_global("gVirtualX")
- if ROOT_VERSION < (5, 32, 0): # pragma: no cover
- # handle versions of ROOT older than 5.32.00
+ if ROOT_VERSION < (5, 32, 0) or ROOT_VERSION >= (6, 9, 2): # pragma: no cover
gDirectory = proxy_global("gDirectory", no_expand_macro=True)
gFile = proxy_global("gFile", no_expand_macro=True)
gInterpreter = proxy_global("gInterpreter", no_expand_macro=True)
| {"golden_diff": "diff --git a/rootpy/ROOT.py b/rootpy/ROOT.py\n--- a/rootpy/ROOT.py\n+++ b/rootpy/ROOT.py\n@@ -130,8 +130,7 @@\n gPad = proxy_global(\"gPad\")\n gVirtualX = proxy_global(\"gVirtualX\")\n \n- if ROOT_VERSION < (5, 32, 0): # pragma: no cover\n- # handle versions of ROOT older than 5.32.00\n+ if ROOT_VERSION < (5, 32, 0) or ROOT_VERSION >= (6, 9, 2): # pragma: no cover\n gDirectory = proxy_global(\"gDirectory\", no_expand_macro=True)\n gFile = proxy_global(\"gFile\", no_expand_macro=True)\n gInterpreter = proxy_global(\"gInterpreter\", no_expand_macro=True)\n", "issue": "Error when using root_open: 'TDirectory' object has no attribute 'func'\nAs above:\r\n\r\n`AttributeError: 'TDirectory' object has no attribute 'func'`\n", "before_files": [{"content": "# Copyright 2012 the rootpy developers\n# distributed under the terms of the GNU General Public License\n\"\"\"\n:py:mod:`rootpy.ROOT`\n=====================\n\nThis module is intended to be a drop-in replacement for ordinary\nPyROOT imports by mimicking PyROOT's interface. If you find a case where it is\nnot, please report an issue to the rootpy developers.\n\nBoth ROOT and rootpy classes can be accessed in a harmonized way through this\nmodule. This means you can take advantage of rootpy classes automatically by\nreplacing ``import ROOT`` with ``import rootpy.ROOT as ROOT`` or\n``from rootpy import ROOT`` in your code, while maintaining backward\ncompatibility with existing use of ROOT's classes.\n\nROOT classes are automatically \"asrootpy'd\" *after* the constructor in ROOT has\nbeen called:\n\n.. sourcecode:: python\n\n >>> import rootpy.ROOT as ROOT\n >>> h = ROOT.TH1F('name', 'title', 10, 0, 1)\n >>> h\n Hist('name')\n >>> h.TYPE\n 'F'\n\nAlso access rootpy classes under this same module without needing to remember\nwhere to import them from in rootpy:\n\n.. sourcecode:: python\n\n >>> import rootpy.ROOT as ROOT\n >>> h = ROOT.Hist(10, 0, 1, name='name', type='F')\n >>> h\n Hist('name')\n >>> h.TYPE\n 'F'\n\nPlain old ROOT can still be accessed through the ``R`` property:\n\n.. sourcecode:: python\n\n >>> from rootpy import ROOT\n >>> ROOT.R.TFile\n <class 'ROOT.TFile'>\n\n\"\"\"\nfrom __future__ import absolute_import\n\nfrom copy import copy\n\nimport ROOT\n\nfrom . import asrootpy, lookup_rootpy, ROOT_VERSION\nfrom . 
import QROOT, stl\nfrom .utils.module_facade import Facade\n\n__all__ = []\n\n\ndef proxy_global(name, no_expand_macro=False):\n \"\"\"\n Used to automatically asrootpy ROOT's thread local variables\n \"\"\"\n if no_expand_macro: # pragma: no cover\n # handle older ROOT versions without _ExpandMacroFunction wrapping\n @property\n def gSomething_no_func(self):\n glob = self(getattr(ROOT, name))\n # create a fake func() that just returns self\n def func():\n return glob\n glob.func = func\n return glob\n return gSomething_no_func\n\n @property\n def gSomething(self):\n glob = getattr(ROOT, name)\n orig_func = glob.func\n\n def asrootpy_izing_func():\n return self(orig_func())\n\n # new_glob = copy(glob)\n new_glob = glob.__class__.__new__(glob.__class__)\n new_glob.func = asrootpy_izing_func\n # Memoize\n setattr(type(self), name, new_glob)\n return new_glob\n return gSomething\n\n\n@Facade(__name__, expose_internal=False)\nclass Module(object):\n\n __version__ = ROOT_VERSION\n\n def __call__(self, arg, after_init=False):\n return asrootpy(arg, warn=False, after_init=after_init)\n\n def __getattr__(self, what):\n try:\n # check ROOT\n result = self(getattr(ROOT, what), after_init=True)\n except AttributeError:\n # check rootpy\n result = lookup_rootpy(what)\n if result is None:\n raise AttributeError(\n 'ROOT does not have the attribute `{0}` '\n 'and rootpy does not contain the class `{0}`'.format(what))\n return result\n\n try:\n # Memoize\n setattr(self, what, result)\n except AttributeError:\n # Oops... Oh well. I tried.\n pass\n\n return result\n\n @property\n def R(self):\n return ROOT\n\n gPad = proxy_global(\"gPad\")\n gVirtualX = proxy_global(\"gVirtualX\")\n\n if ROOT_VERSION < (5, 32, 0): # pragma: no cover\n # handle versions of ROOT older than 5.32.00\n gDirectory = proxy_global(\"gDirectory\", no_expand_macro=True)\n gFile = proxy_global(\"gFile\", no_expand_macro=True)\n gInterpreter = proxy_global(\"gInterpreter\", no_expand_macro=True)\n else:\n gDirectory = proxy_global(\"gDirectory\")\n gFile = proxy_global(\"gFile\")\n gInterpreter = proxy_global(\"gInterpreter\")\n\n # use the smart template STL types from rootpy.stl instead\n for t in QROOT.std.stlclasses:\n locals()[t] = getattr(stl, t)\n del t\n", "path": "rootpy/ROOT.py"}], "after_files": [{"content": "# Copyright 2012 the rootpy developers\n# distributed under the terms of the GNU General Public License\n\"\"\"\n:py:mod:`rootpy.ROOT`\n=====================\n\nThis module is intended to be a drop-in replacement for ordinary\nPyROOT imports by mimicking PyROOT's interface. If you find a case where it is\nnot, please report an issue to the rootpy developers.\n\nBoth ROOT and rootpy classes can be accessed in a harmonized way through this\nmodule. This means you can take advantage of rootpy classes automatically by\nreplacing ``import ROOT`` with ``import rootpy.ROOT as ROOT`` or\n``from rootpy import ROOT`` in your code, while maintaining backward\ncompatibility with existing use of ROOT's classes.\n\nROOT classes are automatically \"asrootpy'd\" *after* the constructor in ROOT has\nbeen called:\n\n.. sourcecode:: python\n\n >>> import rootpy.ROOT as ROOT\n >>> h = ROOT.TH1F('name', 'title', 10, 0, 1)\n >>> h\n Hist('name')\n >>> h.TYPE\n 'F'\n\nAlso access rootpy classes under this same module without needing to remember\nwhere to import them from in rootpy:\n\n.. 
sourcecode:: python\n\n >>> import rootpy.ROOT as ROOT\n >>> h = ROOT.Hist(10, 0, 1, name='name', type='F')\n >>> h\n Hist('name')\n >>> h.TYPE\n 'F'\n\nPlain old ROOT can still be accessed through the ``R`` property:\n\n.. sourcecode:: python\n\n >>> from rootpy import ROOT\n >>> ROOT.R.TFile\n <class 'ROOT.TFile'>\n\n\"\"\"\nfrom __future__ import absolute_import\n\nfrom copy import copy\n\nimport ROOT\n\nfrom . import asrootpy, lookup_rootpy, ROOT_VERSION\nfrom . import QROOT, stl\nfrom .utils.module_facade import Facade\n\n__all__ = []\n\n\ndef proxy_global(name, no_expand_macro=False):\n \"\"\"\n Used to automatically asrootpy ROOT's thread local variables\n \"\"\"\n if no_expand_macro: # pragma: no cover\n # handle older ROOT versions without _ExpandMacroFunction wrapping\n @property\n def gSomething_no_func(self):\n glob = self(getattr(ROOT, name))\n # create a fake func() that just returns self\n def func():\n return glob\n glob.func = func\n return glob\n return gSomething_no_func\n\n @property\n def gSomething(self):\n glob = getattr(ROOT, name)\n orig_func = glob.func\n\n def asrootpy_izing_func():\n return self(orig_func())\n\n # new_glob = copy(glob)\n new_glob = glob.__class__.__new__(glob.__class__)\n new_glob.func = asrootpy_izing_func\n # Memoize\n setattr(type(self), name, new_glob)\n return new_glob\n return gSomething\n\n\n@Facade(__name__, expose_internal=False)\nclass Module(object):\n\n __version__ = ROOT_VERSION\n\n def __call__(self, arg, after_init=False):\n return asrootpy(arg, warn=False, after_init=after_init)\n\n def __getattr__(self, what):\n try:\n # check ROOT\n result = self(getattr(ROOT, what), after_init=True)\n except AttributeError:\n # check rootpy\n result = lookup_rootpy(what)\n if result is None:\n raise AttributeError(\n 'ROOT does not have the attribute `{0}` '\n 'and rootpy does not contain the class `{0}`'.format(what))\n return result\n\n try:\n # Memoize\n setattr(self, what, result)\n except AttributeError:\n # Oops... Oh well. I tried.\n pass\n\n return result\n\n @property\n def R(self):\n return ROOT\n\n gPad = proxy_global(\"gPad\")\n gVirtualX = proxy_global(\"gVirtualX\")\n\n if ROOT_VERSION < (5, 32, 0) or ROOT_VERSION >= (6, 9, 2): # pragma: no cover\n gDirectory = proxy_global(\"gDirectory\", no_expand_macro=True)\n gFile = proxy_global(\"gFile\", no_expand_macro=True)\n gInterpreter = proxy_global(\"gInterpreter\", no_expand_macro=True)\n else:\n gDirectory = proxy_global(\"gDirectory\")\n gFile = proxy_global(\"gFile\")\n gInterpreter = proxy_global(\"gInterpreter\")\n\n # use the smart template STL types from rootpy.stl instead\n for t in QROOT.std.stlclasses:\n locals()[t] = getattr(stl, t)\n del t\n", "path": "rootpy/ROOT.py"}]} | 1,662 | 191 |
gh_patches_debug_22709 | rasdani/github-patches | git_diff | sopel-irc__sopel-2494 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Root module description is a mini-rant about LC_ALL rather than a description of the library
### Description
Looking at the `sopel` module with `pydoc` (or `help(sopel)` in an interactive prompt) exposes the user to [a short rant](https://github.com/sopel-irc/sopel/blob/c26914b68913bc25bdd1f5fed9c5942a87fdfee6/sopel/__init__.py#L1-L4) about the behavior of `LC_ALL` and instructions to use only ASCII in this module.
I'm sympathetic to the frustration over #984 that led to this, but it would be an improvement to add a docstring to the module with a short description.
### Reproduction steps
Run `python3 -m pydoc sopel` or `import sopel; help(sopel)` in an interactive prompt.
### Expected behavior
The user should see a short description of Sopel
### Relevant logs
_No response_
### Notes
_No response_
### Sopel version
c26914b
### Installation method
`pip install`
### Python version
_No response_
### Operating system
_No response_
### IRCd
_No response_
### Relevant plugins
_No response_
--- END ISSUE ---
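For illustration, a standalone sketch of the kind of module header the reporter is asking for: a real docstring as the first statement plus a toned-down locale warning. The wording is illustrative only, not necessarily what the maintainers would ship.

```python
"""Sopel is a simple, easy-to-use, open-source IRC utility bot, written in Python."""
# With the docstring above as the module's first statement, ``help(sopel)`` shows a
# short description; the locale check stays, but the wording is neutral.
import locale
import sys

loc = locale.getlocale()
if not loc[1] or ("UTF-8" not in loc[1] and "utf8" not in loc[1]):
    print(
        'Warning: running with a non-UTF8 locale. If you see strange encoding '
        'errors, set LC_ALL to something like "en_US.UTF-8".',
        file=sys.stderr,
    )
```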
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/__init__.py`
Content:
```
1 # ASCII ONLY IN THIS FILE THOUGH!!!!!!!
2 # Python does some stupid bullshit of respecting LC_ALL over the encoding on the
3 # file, so in order to undo Python's ridiculous fucking idiocy, we have to have
4 # our own check.
5
6 # Copyright 2008, Sean B. Palmer, inamidst.com
7 # Copyright 2012, Elsie Powell, http://embolalia.com
8 # Copyright 2012, Elad Alfassa <[email protected]>
9 #
10 # Licensed under the Eiffel Forum License 2.
11
12 from __future__ import annotations
13
14 from collections import namedtuple
15 import locale
16 import re
17 import sys
18
19 # TODO: replace with stdlib importlib.metadata when dropping py3.7
20 # version info used in this module works from py3.8+
21 import importlib_metadata
22
23 __all__ = [
24 'bot',
25 'config',
26 'db',
27 'formatting',
28 'irc',
29 'loader',
30 'logger',
31 'module', # deprecated in 7.1, removed in 9.0
32 'plugin',
33 'tools',
34 'trigger',
35 'version_info',
36 ]
37
38 loc = locale.getlocale()
39 if not loc[1] or ('UTF-8' not in loc[1] and 'utf8' not in loc[1]):
40 print('WARNING!!! You are running with a non-UTF8 locale environment '
41 'variable (e.g. LC_ALL is set to "C"), which makes Python 3 do '
42 'stupid things. If you get strange errors, please set it to '
43 'something like "en_US.UTF-8".', file=sys.stderr)
44
45
46 __version__ = importlib_metadata.version('sopel')
47
48
49 def _version_info(version=__version__):
50 regex = re.compile(r'(\d+)\.(\d+)\.(\d+)(?:[\-\.]?(a|b|rc)(\d+))?.*')
51 version_match = regex.match(version)
52
53 if version_match is None:
54 raise RuntimeError("Can't parse version number!")
55
56 version_groups = version_match.groups()
57 major, minor, micro = (int(piece) for piece in version_groups[0:3])
58 level = version_groups[3]
59 serial = int(version_groups[4] or 0)
60 if level == 'a':
61 level = 'alpha'
62 elif level == 'b':
63 level = 'beta'
64 elif level == 'rc':
65 level = 'candidate'
66 elif not level and version_groups[4] is None:
67 level = 'final'
68 else:
69 level = 'alpha'
70
71 VersionInfo = namedtuple('VersionInfo',
72 'major, minor, micro, releaselevel, serial')
73 return VersionInfo(major, minor, micro, level, serial)
74
75
76 version_info = _version_info()
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sopel/__init__.py b/sopel/__init__.py
--- a/sopel/__init__.py
+++ b/sopel/__init__.py
@@ -1,8 +1,9 @@
-# ASCII ONLY IN THIS FILE THOUGH!!!!!!!
-# Python does some stupid bullshit of respecting LC_ALL over the encoding on the
-# file, so in order to undo Python's ridiculous fucking idiocy, we have to have
-# our own check.
+"""
+Sopel is a simple, easy-to-use, open-source IRC utility bot, written in Python.
+It’s designed to be easy to use, easy to run, and easy to extend.
+"""
+#
# Copyright 2008, Sean B. Palmer, inamidst.com
# Copyright 2012, Elsie Powell, http://embolalia.com
# Copyright 2012, Elad Alfassa <[email protected]>
@@ -37,9 +38,8 @@
loc = locale.getlocale()
if not loc[1] or ('UTF-8' not in loc[1] and 'utf8' not in loc[1]):
- print('WARNING!!! You are running with a non-UTF8 locale environment '
- 'variable (e.g. LC_ALL is set to "C"), which makes Python 3 do '
- 'stupid things. If you get strange errors, please set it to '
+ print('Warning: Running with a non-UTF8 locale. If you see strange '
+ 'encoding errors, try setting the LC_ALL environment variable to '
'something like "en_US.UTF-8".', file=sys.stderr)
| {"golden_diff": "diff --git a/sopel/__init__.py b/sopel/__init__.py\n--- a/sopel/__init__.py\n+++ b/sopel/__init__.py\n@@ -1,8 +1,9 @@\n-# ASCII ONLY IN THIS FILE THOUGH!!!!!!!\n-# Python does some stupid bullshit of respecting LC_ALL over the encoding on the\n-# file, so in order to undo Python's ridiculous fucking idiocy, we have to have\n-# our own check.\n+\"\"\"\n+Sopel is a simple, easy-to-use, open-source IRC utility bot, written in Python.\n \n+It\u2019s designed to be easy to use, easy to run, and easy to extend.\n+\"\"\"\n+#\n # Copyright 2008, Sean B. Palmer, inamidst.com\n # Copyright 2012, Elsie Powell, http://embolalia.com\n # Copyright 2012, Elad Alfassa <[email protected]>\n@@ -37,9 +38,8 @@\n \n loc = locale.getlocale()\n if not loc[1] or ('UTF-8' not in loc[1] and 'utf8' not in loc[1]):\n- print('WARNING!!! You are running with a non-UTF8 locale environment '\n- 'variable (e.g. LC_ALL is set to \"C\"), which makes Python 3 do '\n- 'stupid things. If you get strange errors, please set it to '\n+ print('Warning: Running with a non-UTF8 locale. If you see strange '\n+ 'encoding errors, try setting the LC_ALL environment variable to '\n 'something like \"en_US.UTF-8\".', file=sys.stderr)\n", "issue": "Root module description is a mini-rant about LC_ALL rather than a description of the library\n### Description\n\nLooking at the `sopel` module with `pydoc` in an interactive prompt) exposes the user to [a short rant](https://github.com/sopel-irc/sopel/blob/c26914b68913bc25bdd1f5fed9c5942a87fdfee6/sopel/__init__.py#L1-L4) about the behavior of `LC_ALL` and instructions to use only ASCII in this module.\r\n\r\nI'm sympathetic to the frustration over #984 that led to this, but it will be an improvement to add a docstring to the module with a short description.\n\n### Reproduction steps\n\nRun `python3 -m pydoc sopel` or `import sopel; help(sopel)` in an interactive prompt.\n\n### Expected behavior\n\nThe user should see a short description of Sopel\n\n### Relevant logs\n\n_No response_\n\n### Notes\n\n_No response_\n\n### Sopel version\n\nc26914b\n\n### Installation method\n\n`pip install`\n\n### Python version\n\n_No response_\n\n### Operating system\n\n_No response_\n\n### IRCd\n\n_No response_\n\n### Relevant plugins\n\n_No response_\n", "before_files": [{"content": "# ASCII ONLY IN THIS FILE THOUGH!!!!!!!\n# Python does some stupid bullshit of respecting LC_ALL over the encoding on the\n# file, so in order to undo Python's ridiculous fucking idiocy, we have to have\n# our own check.\n\n# Copyright 2008, Sean B. Palmer, inamidst.com\n# Copyright 2012, Elsie Powell, http://embolalia.com\n# Copyright 2012, Elad Alfassa <[email protected]>\n#\n# Licensed under the Eiffel Forum License 2.\n\nfrom __future__ import annotations\n\nfrom collections import namedtuple\nimport locale\nimport re\nimport sys\n\n# TODO: replace with stdlib importlib.metadata when dropping py3.7\n# version info used in this module works from py3.8+\nimport importlib_metadata\n\n__all__ = [\n 'bot',\n 'config',\n 'db',\n 'formatting',\n 'irc',\n 'loader',\n 'logger',\n 'module', # deprecated in 7.1, removed in 9.0\n 'plugin',\n 'tools',\n 'trigger',\n 'version_info',\n]\n\nloc = locale.getlocale()\nif not loc[1] or ('UTF-8' not in loc[1] and 'utf8' not in loc[1]):\n print('WARNING!!! You are running with a non-UTF8 locale environment '\n 'variable (e.g. LC_ALL is set to \"C\"), which makes Python 3 do '\n 'stupid things. 
If you get strange errors, please set it to '\n 'something like \"en_US.UTF-8\".', file=sys.stderr)\n\n\n__version__ = importlib_metadata.version('sopel')\n\n\ndef _version_info(version=__version__):\n regex = re.compile(r'(\\d+)\\.(\\d+)\\.(\\d+)(?:[\\-\\.]?(a|b|rc)(\\d+))?.*')\n version_match = regex.match(version)\n\n if version_match is None:\n raise RuntimeError(\"Can't parse version number!\")\n\n version_groups = version_match.groups()\n major, minor, micro = (int(piece) for piece in version_groups[0:3])\n level = version_groups[3]\n serial = int(version_groups[4] or 0)\n if level == 'a':\n level = 'alpha'\n elif level == 'b':\n level = 'beta'\n elif level == 'rc':\n level = 'candidate'\n elif not level and version_groups[4] is None:\n level = 'final'\n else:\n level = 'alpha'\n\n VersionInfo = namedtuple('VersionInfo',\n 'major, minor, micro, releaselevel, serial')\n return VersionInfo(major, minor, micro, level, serial)\n\n\nversion_info = _version_info()\n", "path": "sopel/__init__.py"}], "after_files": [{"content": "\"\"\"\nSopel is a simple, easy-to-use, open-source IRC utility bot, written in Python.\n\nIt\u2019s designed to be easy to use, easy to run, and easy to extend.\n\"\"\"\n#\n# Copyright 2008, Sean B. Palmer, inamidst.com\n# Copyright 2012, Elsie Powell, http://embolalia.com\n# Copyright 2012, Elad Alfassa <[email protected]>\n#\n# Licensed under the Eiffel Forum License 2.\n\nfrom __future__ import annotations\n\nfrom collections import namedtuple\nimport locale\nimport re\nimport sys\n\n# TODO: replace with stdlib importlib.metadata when dropping py3.7\n# version info used in this module works from py3.8+\nimport importlib_metadata\n\n__all__ = [\n 'bot',\n 'config',\n 'db',\n 'formatting',\n 'irc',\n 'loader',\n 'logger',\n 'module', # deprecated in 7.1, removed in 9.0\n 'plugin',\n 'tools',\n 'trigger',\n 'version_info',\n]\n\nloc = locale.getlocale()\nif not loc[1] or ('UTF-8' not in loc[1] and 'utf8' not in loc[1]):\n print('Warning: Running with a non-UTF8 locale. If you see strange '\n 'encoding errors, try setting the LC_ALL environment variable to '\n 'something like \"en_US.UTF-8\".', file=sys.stderr)\n\n\n__version__ = importlib_metadata.version('sopel')\n\n\ndef _version_info(version=__version__):\n regex = re.compile(r'(\\d+)\\.(\\d+)\\.(\\d+)(?:[\\-\\.]?(a|b|rc)(\\d+))?.*')\n version_match = regex.match(version)\n\n if version_match is None:\n raise RuntimeError(\"Can't parse version number!\")\n\n version_groups = version_match.groups()\n major, minor, micro = (int(piece) for piece in version_groups[0:3])\n level = version_groups[3]\n serial = int(version_groups[4] or 0)\n if level == 'a':\n level = 'alpha'\n elif level == 'b':\n level = 'beta'\n elif level == 'rc':\n level = 'candidate'\n elif not level and version_groups[4] is None:\n level = 'final'\n else:\n level = 'alpha'\n\n VersionInfo = namedtuple('VersionInfo',\n 'major, minor, micro, releaselevel, serial')\n return VersionInfo(major, minor, micro, level, serial)\n\n\nversion_info = _version_info()\n", "path": "sopel/__init__.py"}]} | 1,299 | 370 |
gh_patches_debug_30963 | rasdani/github-patches | git_diff | bridgecrewio__checkov-5638 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV_AZURE_226: error in check and testcase
**Describe the issue**
CKV_AZURE_226 checks for ephemeral disks within the "main resource" azurerm_kubernetes_cluster but the cluster itself doesn't have any argument called os_disk_type. The argument os_disk_type is part of the node pool.
The testcase [here](https://github.com/bridgecrewio/checkov/pull/5584/files#diff-c0b8f08537766f6eff2a5d10b9439d227fdaaebe6ff7903008825c5f9d51c22dR1) is misleading and the check itself [here](https://github.com/bridgecrewio/checkov/pull/5584/files#diff-c9248390aa120f7af4643f1908d3d824fb903fd3c6cd63e9e77fe8e9ecd59289R28) too.
In my opinion this must be something like
```
def get_inspected_key(self) -> str:
return "default_node_pool/[0]/os_disk_type"
```
otherwise it won't work?
Same for CKV_AZURE_227.
**Examples**
```
[root] # head -30 aks.tf
resource "azurerm_kubernetes_cluster" "this" {
name = local.name_prefix
location = var.resource_group.location
resource_group_name = var.resource_group.name
node_resource_group = "${local.name_prefix}-node-pool"
dns_prefix = local.name_prefix
kubernetes_version = local.kubernetes_version
sku_tier = var.sku_tier
api_server_access_profile {
authorized_ip_ranges = var.api_server_authorized_ip_ranges
}
default_node_pool {
name = "default"
enable_host_encryption = true
vm_size = "Standard_E4ads_v5"
os_disk_type = "Ephemeral"
zones = [1, 2, 3]
only_critical_addons_enabled = true
type = "VirtualMachineScaleSets"
vnet_subnet_id = var.subnet_id
enable_auto_scaling = true
max_count = 6
min_count = 2
orchestrator_version = local.kubernetes_version
upgrade_settings {
```
results in
```
[root] # checkov --skip-framework kubernetes --skip-framework helm --quiet --compact -o junitxml -o cli --directory .
2023-10-02 11:58:47,399 [MainThread ] [WARNI] The framework "sca_image" is part of the "SCA" module, which is not enabled in the platform
2023-10-02 11:58:47,399 [MainThread ] [WARNI] The framework "sca_package" is part of the "SCA" module, which is not enabled in the platform
terraform scan results:
Passed checks: 6, Failed checks: 11, Skipped checks: 0
[...]
Check: CKV_AZURE_226: "Ensure ephemeral disks are used for OS disks"
FAILED for resource: azurerm_kubernetes_cluster.this
File: /aks.tf:1-64
Check: CKV_AZURE_227: "Ensure that the AKS cluster encrypt temp disks, caches, and data flows between Compute and Storage resources"
FAILED for resource: azurerm_kubernetes_cluster.this
File: /aks.tf:1-64
[...]
```
Please also see https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster for code example.
**Version (please complete the following information):**
- Checkov Version 2.4.58
**Additional context**
This is related to https://github.com/bridgecrewio/checkov/pull/5584 and https://github.com/bridgecrewio/checkov/pull/5588.
--- END ISSUE ---
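For illustration, a self-contained sketch (checkov imports and base classes omitted) of inspected keys that point at the nested `default_node_pool` block, along the lines the reporter suggests. The `entity_type` attribute is assumed to identify the resource type being scanned.

```python
class EphemeralOSDiskKeySketch:
    """Stand-in for the CKV_AZURE_226 value check, stripped of checkov plumbing."""

    def get_inspected_key(self) -> str:
        # os_disk_type lives on the node pool, not on the cluster resource itself.
        return "default_node_pool/[0]/os_disk_type"

    def get_expected_value(self) -> str:
        return "Ephemeral"


class HostEncryptionKeySketch:
    """Stand-in for CKV_AZURE_227; standalone node pools expose the flag directly."""

    def __init__(self, entity_type: str) -> None:
        self.entity_type = entity_type  # e.g. "azurerm_kubernetes_cluster"

    def get_inspected_key(self) -> str:
        if self.entity_type == "azurerm_kubernetes_cluster":
            return "default_node_pool/[0]/enable_host_encryption"
        return "enable_host_encryption"


print(HostEncryptionKeySketch("azurerm_kubernetes_cluster").get_inspected_key())
```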
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/azure/AKSEphemeralOSDisks.py`
Content:
```
1 from checkov.common.models.enums import CheckCategories
2 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
3 from typing import Any
4
5
6 class AKSEphemeralOSDisks(BaseResourceValueCheck):
7 def __init__(self) -> None:
8 """
9 Temporary data can contain sensitive data at some points, by using ephemeral disks,
10 we ensure that data written to OS disk is stored on local VM storage and isn't persisted to Azure Storage
11
12 Azure automatically replicates data stored in the managed OS disk of a virtual machine to Azure storage
13 to avoid data loss in case the virtual machine needs to be relocated to another host.
14 Generally speaking, containers are not designed to have local state persisted to the managed OS disk,
15 hence this behavior offers limited value to AKS hosted while providing some drawbacks,
16 including slower node provisioning and higher read/write latency.
17
18 Ephemeral disks allow us also to have faster cluster operations like scale or upgrade
19 due to faster re-imaging and boot times.
20 """
21 name = "Ensure ephemeral disks are used for OS disks"
22 id = "CKV_AZURE_226"
23 supported_resources = ("azurerm_kubernetes_cluster",)
24 categories = (CheckCategories.KUBERNETES,)
25 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
26
27 def get_inspected_key(self) -> str:
28 return "os_disk_type"
29
30 def get_expected_value(self) -> Any:
31 return "Ephemeral"
32
33
34 check = AKSEphemeralOSDisks()
35
```
Path: `checkov/terraform/checks/resource/azure/AKSEncryptionAtHostEnabled.py`
Content:
```
1
2 from checkov.common.models.enums import CheckCategories, CheckResult
3 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
4
5
6 class AKSEncryptionAtHostEnabled(BaseResourceValueCheck):
7 def __init__(self) -> None:
8 """
9 With host-based encryption, the data stored on the VM host of
10 your AKS agent nodes' VMs is encrypted at rest and flows encrypted to the Storage service.
11
12 This means the temp disks are encrypted at rest with platform-managed keys.
13 The cache of OS and data disks is encrypted at rest with either platform-managed keys
14 or customer-managed keys depending on the encryption type set on those disks.
15 """
16 name = "Ensure that the AKS cluster encrypt temp disks, caches, and data flows "
17 name += "between Compute and Storage resources"
18 id = "CKV_AZURE_227"
19 supported_resources = ("azurerm_kubernetes_cluster", "azurerm_kubernetes_cluster_node_pool")
20 categories = (CheckCategories.KUBERNETES,)
21 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,
22 missing_block_result=CheckResult.FAILED)
23
24 def get_inspected_key(self) -> str:
25 return "enable_host_encryption"
26
27
28 check = AKSEncryptionAtHostEnabled()
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/azure/AKSEncryptionAtHostEnabled.py b/checkov/terraform/checks/resource/azure/AKSEncryptionAtHostEnabled.py
--- a/checkov/terraform/checks/resource/azure/AKSEncryptionAtHostEnabled.py
+++ b/checkov/terraform/checks/resource/azure/AKSEncryptionAtHostEnabled.py
@@ -1,4 +1,3 @@
-
from checkov.common.models.enums import CheckCategories, CheckResult
from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
@@ -18,11 +17,19 @@
id = "CKV_AZURE_227"
supported_resources = ("azurerm_kubernetes_cluster", "azurerm_kubernetes_cluster_node_pool")
categories = (CheckCategories.KUBERNETES,)
- super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,
- missing_block_result=CheckResult.FAILED)
+ super().__init__(
+ name=name,
+ id=id,
+ categories=categories,
+ supported_resources=supported_resources,
+ missing_block_result=CheckResult.FAILED,
+ )
def get_inspected_key(self) -> str:
- return "enable_host_encryption"
+ if self.entity_type == "azurerm_kubernetes_cluster":
+ return "default_node_pool/[0]/enable_host_encryption"
+ else:
+ return "enable_host_encryption"
check = AKSEncryptionAtHostEnabled()
diff --git a/checkov/terraform/checks/resource/azure/AKSEphemeralOSDisks.py b/checkov/terraform/checks/resource/azure/AKSEphemeralOSDisks.py
--- a/checkov/terraform/checks/resource/azure/AKSEphemeralOSDisks.py
+++ b/checkov/terraform/checks/resource/azure/AKSEphemeralOSDisks.py
@@ -25,7 +25,7 @@
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
def get_inspected_key(self) -> str:
- return "os_disk_type"
+ return "default_node_pool/[0]/os_disk_type"
def get_expected_value(self) -> Any:
return "Ephemeral"
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/AKSEncryptionAtHostEnabled.py b/checkov/terraform/checks/resource/azure/AKSEncryptionAtHostEnabled.py\n--- a/checkov/terraform/checks/resource/azure/AKSEncryptionAtHostEnabled.py\n+++ b/checkov/terraform/checks/resource/azure/AKSEncryptionAtHostEnabled.py\n@@ -1,4 +1,3 @@\n-\n from checkov.common.models.enums import CheckCategories, CheckResult\n from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n \n@@ -18,11 +17,19 @@\n id = \"CKV_AZURE_227\"\n supported_resources = (\"azurerm_kubernetes_cluster\", \"azurerm_kubernetes_cluster_node_pool\")\n categories = (CheckCategories.KUBERNETES,)\n- super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,\n- missing_block_result=CheckResult.FAILED)\n+ super().__init__(\n+ name=name,\n+ id=id,\n+ categories=categories,\n+ supported_resources=supported_resources,\n+ missing_block_result=CheckResult.FAILED,\n+ )\n \n def get_inspected_key(self) -> str:\n- return \"enable_host_encryption\"\n+ if self.entity_type == \"azurerm_kubernetes_cluster\":\n+ return \"default_node_pool/[0]/enable_host_encryption\"\n+ else:\n+ return \"enable_host_encryption\"\n \n \n check = AKSEncryptionAtHostEnabled()\ndiff --git a/checkov/terraform/checks/resource/azure/AKSEphemeralOSDisks.py b/checkov/terraform/checks/resource/azure/AKSEphemeralOSDisks.py\n--- a/checkov/terraform/checks/resource/azure/AKSEphemeralOSDisks.py\n+++ b/checkov/terraform/checks/resource/azure/AKSEphemeralOSDisks.py\n@@ -25,7 +25,7 @@\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n def get_inspected_key(self) -> str:\n- return \"os_disk_type\"\n+ return \"default_node_pool/[0]/os_disk_type\"\n \n def get_expected_value(self) -> Any:\n return \"Ephemeral\"\n", "issue": "CKV_AZURE_226: error in check and testcase\n**Describe the issue**\r\nCKV_AZURE_226 checks for ephemeral disks within the \"main resource\" azurerm_kubernetes_cluster but the cluster itself doesn't have any argument called os_disk_type. The argument os_disk_type is part of the node pool. \r\nThe testcase [here](https://github.com/bridgecrewio/checkov/pull/5584/files#diff-c0b8f08537766f6eff2a5d10b9439d227fdaaebe6ff7903008825c5f9d51c22dR1) is misleading and the check itself [here](https://github.com/bridgecrewio/checkov/pull/5584/files#diff-c9248390aa120f7af4643f1908d3d824fb903fd3c6cd63e9e77fe8e9ecd59289R28) too. 
\r\n\r\nIn my opinion this must be something like \r\n```\r\n def get_inspected_key(self) -> str:\r\n return \"default_node_pool/[0]/os_disk_type\"\r\n```\r\notherwise it won't work?\r\n\r\nSame for CKV_AZURE_227.\r\n\r\n**Examples**\r\n```\r\n[root] # head -30 aks.tf\r\nresource \"azurerm_kubernetes_cluster\" \"this\" {\r\n name = local.name_prefix\r\n location = var.resource_group.location\r\n resource_group_name = var.resource_group.name\r\n node_resource_group = \"${local.name_prefix}-node-pool\"\r\n dns_prefix = local.name_prefix\r\n kubernetes_version = local.kubernetes_version\r\n sku_tier = var.sku_tier\r\n\r\n api_server_access_profile {\r\n authorized_ip_ranges = var.api_server_authorized_ip_ranges\r\n }\r\n\r\n default_node_pool {\r\n name = \"default\"\r\n\r\n enable_host_encryption = true\r\n vm_size = \"Standard_E4ads_v5\"\r\n os_disk_type = \"Ephemeral\"\r\n zones = [1, 2, 3]\r\n only_critical_addons_enabled = true\r\n\r\n type = \"VirtualMachineScaleSets\"\r\n vnet_subnet_id = var.subnet_id\r\n enable_auto_scaling = true\r\n max_count = 6\r\n min_count = 2\r\n orchestrator_version = local.kubernetes_version\r\n\r\n upgrade_settings {\r\n```\r\n\r\nresults in\r\n```\r\n[root] # checkov --skip-framework kubernetes --skip-framework helm --quiet --compact -o junitxml -o cli --directory .\r\n2023-10-02 11:58:47,399 [MainThread ] [WARNI] The framework \"sca_image\" is part of the \"SCA\" module, which is not enabled in the platform\r\n2023-10-02 11:58:47,399 [MainThread ] [WARNI] The framework \"sca_package\" is part of the \"SCA\" module, which is not enabled in the platform\r\nterraform scan results:\r\n\r\nPassed checks: 6, Failed checks: 11, Skipped checks: 0\r\n\r\n[...]\r\nCheck: CKV_AZURE_226: \"Ensure ephemeral disks are used for OS disks\"\r\n FAILED for resource: azurerm_kubernetes_cluster.this\r\n File: /aks.tf:1-64\r\nCheck: CKV_AZURE_227: \"Ensure that the AKS cluster encrypt temp disks, caches, and data flows between Compute and Storage resources\"\r\n FAILED for resource: azurerm_kubernetes_cluster.this\r\n File: /aks.tf:1-64\r\n[...]\r\n```\r\n\r\nPlease also see https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster for code example.\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version 2.4.58\r\n\r\n**Additional context**\r\nThis is related to https://github.com/bridgecrewio/checkov/pull/5584 and https://github.com/bridgecrewio/checkov/pull/5588.\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\nfrom typing import Any\n\n\nclass AKSEphemeralOSDisks(BaseResourceValueCheck):\n def __init__(self) -> None:\n \"\"\"\n Temporary data can contain sensitive data at some points, by using ephemeral disks,\n we ensure that data written to OS disk is stored on local VM storage and isn't persisted to Azure Storage\n\n Azure automatically replicates data stored in the managed OS disk of a virtual machine to Azure storage\n to avoid data loss in case the virtual machine needs to be relocated to another host.\n Generally speaking, containers are not designed to have local state persisted to the managed OS disk,\n hence this behavior offers limited value to AKS hosted while providing some drawbacks,\n including slower node provisioning and higher read/write latency.\n\n Ephemeral disks allow us also to have faster cluster operations like scale or upgrade\n due to 
faster re-imaging and boot times.\n \"\"\"\n name = \"Ensure ephemeral disks are used for OS disks\"\n id = \"CKV_AZURE_226\"\n supported_resources = (\"azurerm_kubernetes_cluster\",)\n categories = (CheckCategories.KUBERNETES,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self) -> str:\n return \"os_disk_type\"\n\n def get_expected_value(self) -> Any:\n return \"Ephemeral\"\n\n\ncheck = AKSEphemeralOSDisks()\n", "path": "checkov/terraform/checks/resource/azure/AKSEphemeralOSDisks.py"}, {"content": "\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass AKSEncryptionAtHostEnabled(BaseResourceValueCheck):\n def __init__(self) -> None:\n \"\"\"\n With host-based encryption, the data stored on the VM host of\n your AKS agent nodes' VMs is encrypted at rest and flows encrypted to the Storage service.\n\n This means the temp disks are encrypted at rest with platform-managed keys.\n The cache of OS and data disks is encrypted at rest with either platform-managed keys\n or customer-managed keys depending on the encryption type set on those disks.\n \"\"\"\n name = \"Ensure that the AKS cluster encrypt temp disks, caches, and data flows \"\n name += \"between Compute and Storage resources\"\n id = \"CKV_AZURE_227\"\n supported_resources = (\"azurerm_kubernetes_cluster\", \"azurerm_kubernetes_cluster_node_pool\")\n categories = (CheckCategories.KUBERNETES,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,\n missing_block_result=CheckResult.FAILED)\n\n def get_inspected_key(self) -> str:\n return \"enable_host_encryption\"\n\n\ncheck = AKSEncryptionAtHostEnabled()\n", "path": "checkov/terraform/checks/resource/azure/AKSEncryptionAtHostEnabled.py"}], "after_files": [{"content": "from checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\nfrom typing import Any\n\n\nclass AKSEphemeralOSDisks(BaseResourceValueCheck):\n def __init__(self) -> None:\n \"\"\"\n Temporary data can contain sensitive data at some points, by using ephemeral disks,\n we ensure that data written to OS disk is stored on local VM storage and isn't persisted to Azure Storage\n\n Azure automatically replicates data stored in the managed OS disk of a virtual machine to Azure storage\n to avoid data loss in case the virtual machine needs to be relocated to another host.\n Generally speaking, containers are not designed to have local state persisted to the managed OS disk,\n hence this behavior offers limited value to AKS hosted while providing some drawbacks,\n including slower node provisioning and higher read/write latency.\n\n Ephemeral disks allow us also to have faster cluster operations like scale or upgrade\n due to faster re-imaging and boot times.\n \"\"\"\n name = \"Ensure ephemeral disks are used for OS disks\"\n id = \"CKV_AZURE_226\"\n supported_resources = (\"azurerm_kubernetes_cluster\",)\n categories = (CheckCategories.KUBERNETES,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self) -> str:\n return \"default_node_pool/[0]/os_disk_type\"\n\n def get_expected_value(self) -> Any:\n return \"Ephemeral\"\n\n\ncheck = AKSEphemeralOSDisks()\n", "path": 
"checkov/terraform/checks/resource/azure/AKSEphemeralOSDisks.py"}, {"content": "from checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass AKSEncryptionAtHostEnabled(BaseResourceValueCheck):\n def __init__(self) -> None:\n \"\"\"\n With host-based encryption, the data stored on the VM host of\n your AKS agent nodes' VMs is encrypted at rest and flows encrypted to the Storage service.\n\n This means the temp disks are encrypted at rest with platform-managed keys.\n The cache of OS and data disks is encrypted at rest with either platform-managed keys\n or customer-managed keys depending on the encryption type set on those disks.\n \"\"\"\n name = \"Ensure that the AKS cluster encrypt temp disks, caches, and data flows \"\n name += \"between Compute and Storage resources\"\n id = \"CKV_AZURE_227\"\n supported_resources = (\"azurerm_kubernetes_cluster\", \"azurerm_kubernetes_cluster_node_pool\")\n categories = (CheckCategories.KUBERNETES,)\n super().__init__(\n name=name,\n id=id,\n categories=categories,\n supported_resources=supported_resources,\n missing_block_result=CheckResult.FAILED,\n )\n\n def get_inspected_key(self) -> str:\n if self.entity_type == \"azurerm_kubernetes_cluster\":\n return \"default_node_pool/[0]/enable_host_encryption\"\n else:\n return \"enable_host_encryption\"\n\n\ncheck = AKSEncryptionAtHostEnabled()\n", "path": "checkov/terraform/checks/resource/azure/AKSEncryptionAtHostEnabled.py"}]} | 1,981 | 511 |
gh_patches_debug_20164 | rasdani/github-patches | git_diff | pytorch__vision-2258 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Raise error if target boxes are degenerate in Faster R-CNN
We have had a number of reports with users saying that their training loss is nan after a few iterations.
Most of the time, this is due to degenerate boxes (i.e., boxes with negative sizes or zero area). We should improve the user experience in those situations.
I think that raising an error in `GeneralizedRCNN` if the target boxes are degenerate would be a good compromise.
Related issues: https://github.com/pytorch/vision/issues/2235 https://github.com/pytorch/vision/issues/1994 https://github.com/pytorch/vision/issues/1176 https://github.com/pytorch/vision/issues/1128 #1120 and #997
--- END ISSUE ---
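A small standalone sketch of what such a guard could look like, run on the targets before training starts. The function name and message wording are illustrative; only `torch` is required.

```python
import torch


def assert_boxes_are_valid(targets):
    """Raise early when any target box has non-positive width or height (xyxy format)."""
    for target_idx, target in enumerate(targets):
        boxes = target["boxes"]
        # xyxy convention: x2 must exceed x1 and y2 must exceed y1
        degenerate = boxes[:, 2:] <= boxes[:, :2]
        if degenerate.any():
            bad_box = boxes[degenerate.any(dim=1)][0].tolist()
            raise ValueError(
                "All bounding boxes should have positive height and width. "
                f"Found invalid box {bad_box} for target at index {target_idx}."
            )


assert_boxes_are_valid([{"boxes": torch.tensor([[0.0, 0.0, 5.0, 5.0]])}])   # passes
# assert_boxes_are_valid([{"boxes": torch.tensor([[3.0, 3.0, 3.0, 8.0]])}])  # would raise
```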
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchvision/models/detection/generalized_rcnn.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
2 """
3 Implements the Generalized R-CNN framework
4 """
5
6 from collections import OrderedDict
7 import torch
8 from torch import nn
9 import warnings
10 from torch.jit.annotations import Tuple, List, Dict, Optional
11 from torch import Tensor
12
13
14 class GeneralizedRCNN(nn.Module):
15 """
16 Main class for Generalized R-CNN.
17
18 Arguments:
19 backbone (nn.Module):
20 rpn (nn.Module):
21 roi_heads (nn.Module): takes the features + the proposals from the RPN and computes
22 detections / masks from it.
23 transform (nn.Module): performs the data transformation from the inputs to feed into
24 the model
25 """
26
27 def __init__(self, backbone, rpn, roi_heads, transform):
28 super(GeneralizedRCNN, self).__init__()
29 self.transform = transform
30 self.backbone = backbone
31 self.rpn = rpn
32 self.roi_heads = roi_heads
33 # used only on torchscript mode
34 self._has_warned = False
35
36 @torch.jit.unused
37 def eager_outputs(self, losses, detections):
38 # type: (Dict[str, Tensor], List[Dict[str, Tensor]]) -> Tuple[Dict[str, Tensor], List[Dict[str, Tensor]]]
39 if self.training:
40 return losses
41
42 return detections
43
44 def forward(self, images, targets=None):
45 # type: (List[Tensor], Optional[List[Dict[str, Tensor]]]) -> Tuple[Dict[str, Tensor], List[Dict[str, Tensor]]]
46 """
47 Arguments:
48 images (list[Tensor]): images to be processed
49 targets (list[Dict[Tensor]]): ground-truth boxes present in the image (optional)
50
51 Returns:
52 result (list[BoxList] or dict[Tensor]): the output from the model.
53 During training, it returns a dict[Tensor] which contains the losses.
54 During testing, it returns list[BoxList] contains additional fields
55 like `scores`, `labels` and `mask` (for Mask R-CNN models).
56
57 """
58 if self.training and targets is None:
59 raise ValueError("In training mode, targets should be passed")
60 if self.training:
61 assert targets is not None
62 for target in targets:
63 boxes = target["boxes"]
64 if isinstance(boxes, torch.Tensor):
65 if len(boxes.shape) != 2 or boxes.shape[-1] != 4:
66 raise ValueError("Expected target boxes to be a tensor"
67 "of shape [N, 4], got {:}.".format(
68 boxes.shape))
69 else:
70 raise ValueError("Expected target boxes to be of type "
71 "Tensor, got {:}.".format(type(boxes)))
72
73 original_image_sizes = torch.jit.annotate(List[Tuple[int, int]], [])
74 for img in images:
75 val = img.shape[-2:]
76 assert len(val) == 2
77 original_image_sizes.append((val[0], val[1]))
78
79 images, targets = self.transform(images, targets)
80 features = self.backbone(images.tensors)
81 if isinstance(features, torch.Tensor):
82 features = OrderedDict([('0', features)])
83 proposals, proposal_losses = self.rpn(images, features, targets)
84 detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)
85 detections = self.transform.postprocess(detections, images.image_sizes, original_image_sizes)
86
87 losses = {}
88 losses.update(detector_losses)
89 losses.update(proposal_losses)
90
91 if torch.jit.is_scripting():
92 if not self._has_warned:
93 warnings.warn("RCNN always returns a (Losses, Detections) tuple in scripting")
94 self._has_warned = True
95 return (losses, detections)
96 else:
97 return self.eager_outputs(losses, detections)
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchvision/models/detection/generalized_rcnn.py b/torchvision/models/detection/generalized_rcnn.py
--- a/torchvision/models/detection/generalized_rcnn.py
+++ b/torchvision/models/detection/generalized_rcnn.py
@@ -77,6 +77,21 @@
original_image_sizes.append((val[0], val[1]))
images, targets = self.transform(images, targets)
+
+ # Check for degenerate boxes
+ # TODO: Move this to a function
+ if targets is not None:
+ for target_idx, target in enumerate(targets):
+ boxes = target["boxes"]
+ degenerate_boxes = boxes[:, 2:] <= boxes[:, :2]
+ if degenerate_boxes.any():
+ # print the first degenrate box
+ bb_idx = degenerate_boxes.any(dim=1).nonzero().view(-1)[0]
+ degen_bb: List[float] = boxes[bb_idx].tolist()
+ raise ValueError("All bounding boxes should have positive height and width."
+ " Found invaid box {} for target at index {}."
+ .format(degen_bb, target_idx))
+
features = self.backbone(images.tensors)
if isinstance(features, torch.Tensor):
features = OrderedDict([('0', features)])
| {"golden_diff": "diff --git a/torchvision/models/detection/generalized_rcnn.py b/torchvision/models/detection/generalized_rcnn.py\n--- a/torchvision/models/detection/generalized_rcnn.py\n+++ b/torchvision/models/detection/generalized_rcnn.py\n@@ -77,6 +77,21 @@\n original_image_sizes.append((val[0], val[1]))\n \n images, targets = self.transform(images, targets)\n+\n+ # Check for degenerate boxes\n+ # TODO: Move this to a function\n+ if targets is not None:\n+ for target_idx, target in enumerate(targets):\n+ boxes = target[\"boxes\"]\n+ degenerate_boxes = boxes[:, 2:] <= boxes[:, :2]\n+ if degenerate_boxes.any():\n+ # print the first degenrate box\n+ bb_idx = degenerate_boxes.any(dim=1).nonzero().view(-1)[0]\n+ degen_bb: List[float] = boxes[bb_idx].tolist()\n+ raise ValueError(\"All bounding boxes should have positive height and width.\"\n+ \" Found invaid box {} for target at index {}.\"\n+ .format(degen_bb, target_idx))\n+\n features = self.backbone(images.tensors)\n if isinstance(features, torch.Tensor):\n features = OrderedDict([('0', features)])\n", "issue": "Raise error if target boxes are degenerate in Faster R-CNN\nWe have had a number of reports with users saying that their training loss is nan after a few iterations.\r\n\r\nMost of the time, this is due to degenerate boxes (i.e., boxes with negative sizes or zero area). We should improve the user experience in those situations.\r\n\r\nI think that raising an error in `GeneralizedRCNN` if the target boxes are degenerate would be a good compromise.\r\n\r\nRelated issues: https://github.com/pytorch/vision/issues/2235 https://github.com/pytorch/vision/issues/1994 https://github.com/pytorch/vision/issues/1176 https://github.com/pytorch/vision/issues/1128 #1120 and #997\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved.\n\"\"\"\nImplements the Generalized R-CNN framework\n\"\"\"\n\nfrom collections import OrderedDict\nimport torch\nfrom torch import nn\nimport warnings\nfrom torch.jit.annotations import Tuple, List, Dict, Optional\nfrom torch import Tensor\n\n\nclass GeneralizedRCNN(nn.Module):\n \"\"\"\n Main class for Generalized R-CNN.\n\n Arguments:\n backbone (nn.Module):\n rpn (nn.Module):\n roi_heads (nn.Module): takes the features + the proposals from the RPN and computes\n detections / masks from it.\n transform (nn.Module): performs the data transformation from the inputs to feed into\n the model\n \"\"\"\n\n def __init__(self, backbone, rpn, roi_heads, transform):\n super(GeneralizedRCNN, self).__init__()\n self.transform = transform\n self.backbone = backbone\n self.rpn = rpn\n self.roi_heads = roi_heads\n # used only on torchscript mode\n self._has_warned = False\n\n @torch.jit.unused\n def eager_outputs(self, losses, detections):\n # type: (Dict[str, Tensor], List[Dict[str, Tensor]]) -> Tuple[Dict[str, Tensor], List[Dict[str, Tensor]]]\n if self.training:\n return losses\n\n return detections\n\n def forward(self, images, targets=None):\n # type: (List[Tensor], Optional[List[Dict[str, Tensor]]]) -> Tuple[Dict[str, Tensor], List[Dict[str, Tensor]]]\n \"\"\"\n Arguments:\n images (list[Tensor]): images to be processed\n targets (list[Dict[Tensor]]): ground-truth boxes present in the image (optional)\n\n Returns:\n result (list[BoxList] or dict[Tensor]): the output from the model.\n During training, it returns a dict[Tensor] which contains the losses.\n During testing, it returns list[BoxList] contains additional fields\n like `scores`, `labels` and `mask` (for Mask R-CNN models).\n\n \"\"\"\n if self.training and targets is None:\n raise ValueError(\"In training mode, targets should be passed\")\n if self.training:\n assert targets is not None\n for target in targets:\n boxes = target[\"boxes\"]\n if isinstance(boxes, torch.Tensor):\n if len(boxes.shape) != 2 or boxes.shape[-1] != 4:\n raise ValueError(\"Expected target boxes to be a tensor\"\n \"of shape [N, 4], got {:}.\".format(\n boxes.shape))\n else:\n raise ValueError(\"Expected target boxes to be of type \"\n \"Tensor, got {:}.\".format(type(boxes)))\n\n original_image_sizes = torch.jit.annotate(List[Tuple[int, int]], [])\n for img in images:\n val = img.shape[-2:]\n assert len(val) == 2\n original_image_sizes.append((val[0], val[1]))\n\n images, targets = self.transform(images, targets)\n features = self.backbone(images.tensors)\n if isinstance(features, torch.Tensor):\n features = OrderedDict([('0', features)])\n proposals, proposal_losses = self.rpn(images, features, targets)\n detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)\n detections = self.transform.postprocess(detections, images.image_sizes, original_image_sizes)\n\n losses = {}\n losses.update(detector_losses)\n losses.update(proposal_losses)\n\n if torch.jit.is_scripting():\n if not self._has_warned:\n warnings.warn(\"RCNN always returns a (Losses, Detections) tuple in scripting\")\n self._has_warned = True\n return (losses, detections)\n else:\n return self.eager_outputs(losses, detections)\n", "path": "torchvision/models/detection/generalized_rcnn.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved.\n\"\"\"\nImplements the Generalized R-CNN framework\n\"\"\"\n\nfrom collections import OrderedDict\nimport torch\nfrom torch import nn\nimport warnings\nfrom torch.jit.annotations import Tuple, List, Dict, Optional\nfrom torch import Tensor\n\n\nclass GeneralizedRCNN(nn.Module):\n \"\"\"\n Main class for Generalized R-CNN.\n\n Arguments:\n backbone (nn.Module):\n rpn (nn.Module):\n roi_heads (nn.Module): takes the features + the proposals from the RPN and computes\n detections / masks from it.\n transform (nn.Module): performs the data transformation from the inputs to feed into\n the model\n \"\"\"\n\n def __init__(self, backbone, rpn, roi_heads, transform):\n super(GeneralizedRCNN, self).__init__()\n self.transform = transform\n self.backbone = backbone\n self.rpn = rpn\n self.roi_heads = roi_heads\n # used only on torchscript mode\n self._has_warned = False\n\n @torch.jit.unused\n def eager_outputs(self, losses, detections):\n # type: (Dict[str, Tensor], List[Dict[str, Tensor]]) -> Tuple[Dict[str, Tensor], List[Dict[str, Tensor]]]\n if self.training:\n return losses\n\n return detections\n\n def forward(self, images, targets=None):\n # type: (List[Tensor], Optional[List[Dict[str, Tensor]]]) -> Tuple[Dict[str, Tensor], List[Dict[str, Tensor]]]\n \"\"\"\n Arguments:\n images (list[Tensor]): images to be processed\n targets (list[Dict[Tensor]]): ground-truth boxes present in the image (optional)\n\n Returns:\n result (list[BoxList] or dict[Tensor]): the output from the model.\n During training, it returns a dict[Tensor] which contains the losses.\n During testing, it returns list[BoxList] contains additional fields\n like `scores`, `labels` and `mask` (for Mask R-CNN models).\n\n \"\"\"\n if self.training and targets is None:\n raise ValueError(\"In training mode, targets should be passed\")\n if self.training:\n assert targets is not None\n for target in targets:\n boxes = target[\"boxes\"]\n if isinstance(boxes, torch.Tensor):\n if len(boxes.shape) != 2 or boxes.shape[-1] != 4:\n raise ValueError(\"Expected target boxes to be a tensor\"\n \"of shape [N, 4], got {:}.\".format(\n boxes.shape))\n else:\n raise ValueError(\"Expected target boxes to be of type \"\n \"Tensor, got {:}.\".format(type(boxes)))\n\n original_image_sizes = torch.jit.annotate(List[Tuple[int, int]], [])\n for img in images:\n val = img.shape[-2:]\n assert len(val) == 2\n original_image_sizes.append((val[0], val[1]))\n\n images, targets = self.transform(images, targets)\n\n # Check for degenerate boxes\n # TODO: Move this to a function\n if targets is not None:\n for target_idx, target in enumerate(targets):\n boxes = target[\"boxes\"]\n degenerate_boxes = boxes[:, 2:] <= boxes[:, :2]\n if degenerate_boxes.any():\n # print the first degenrate box\n bb_idx = degenerate_boxes.any(dim=1).nonzero().view(-1)[0]\n degen_bb: List[float] = boxes[bb_idx].tolist()\n raise ValueError(\"All bounding boxes should have positive height and width.\"\n \" Found invaid box {} for target at index {}.\"\n .format(degen_bb, target_idx))\n\n features = self.backbone(images.tensors)\n if isinstance(features, torch.Tensor):\n features = OrderedDict([('0', features)])\n proposals, proposal_losses = self.rpn(images, features, targets)\n detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)\n detections = self.transform.postprocess(detections, images.image_sizes, original_image_sizes)\n\n losses = {}\n losses.update(detector_losses)\n losses.update(proposal_losses)\n\n if 
torch.jit.is_scripting():\n if not self._has_warned:\n warnings.warn(\"RCNN always returns a (Losses, Detections) tuple in scripting\")\n self._has_warned = True\n return (losses, detections)\n else:\n return self.eager_outputs(losses, detections)\n", "path": "torchvision/models/detection/generalized_rcnn.py"}]} | 1,445 | 286 |
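Aside: the reference patch above rejects degenerate boxes inside `forward()`. Below is a minimal sketch of the same check pulled out as a standalone pre-training sanity pass over targets; the function name and the example inputs are illustrative, not from the repository.
```python
# Same predicate as the patch: a box is degenerate when x2 <= x1 or y2 <= y1.
import torch


def check_targets(targets):
    for target_idx, target in enumerate(targets):
        boxes = target["boxes"]  # expected shape [N, 4] as [x1, y1, x2, y2]
        degenerate = boxes[:, 2:] <= boxes[:, :2]  # width or height <= 0
        if degenerate.any():
            bad_idx = degenerate.any(dim=1).nonzero().view(-1)[0]
            raise ValueError(
                f"Degenerate box {boxes[bad_idx].tolist()} in target {target_idx}"
            )


# The first target is fine; the second contains a zero-width box and is rejected.
check_targets([{"boxes": torch.tensor([[0.0, 0.0, 10.0, 10.0]])}])
try:
    check_targets([{"boxes": torch.tensor([[5.0, 5.0, 5.0, 9.0]])}])
except ValueError as err:
    print(err)
```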
gh_patches_debug_1514 | rasdani/github-patches | git_diff | ocadotechnology__aimmo-543 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Latest minikube not starting on Travis CI
Same issue and hopefully fix as this https://github.com/kubernetes/minikube/issues/2704
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from setuptools import find_packages, setup
3
4 import versioneer
5
6 setup(
7 name='aimmo',
8 cmdclass=versioneer.get_cmdclass(),
9 packages=find_packages(),
10 include_package_data=True,
11 install_requires=[
12 'django >= 1.8.3, < 1.9.0',
13 'django-autoconfig >= 0.3.6, < 1.0.0',
14 'django-forms-bootstrap',
15 'django-js-reverse',
16 'eventlet',
17 'flask',
18 'flask-socketio',
19 'requests',
20 'six',
21 'pykube',
22 'hypothesis',
23 'flask-cors >= 3.0, < 3.1',
24 'psutil >= 5.4, < 5.5',
25 ],
26 tests_require=[
27 'django-setuptest',
28 'httmock',
29 'mock == 2.0.0',
30 'docker == 2.7.0',
31 'kubernetes == 4.0.0',
32 'PyYAML == 3.12',
33 ],
34 test_suite='setuptest.setuptest.SetupTestSuite',
35 version=versioneer.get_version(),
36 zip_safe=False,
37 )
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -28,7 +28,7 @@
'httmock',
'mock == 2.0.0',
'docker == 2.7.0',
- 'kubernetes == 4.0.0',
+ 'kubernetes == 5.0.0',
'PyYAML == 3.12',
],
test_suite='setuptest.setuptest.SetupTestSuite',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -28,7 +28,7 @@\n 'httmock',\n 'mock == 2.0.0',\n 'docker == 2.7.0',\n- 'kubernetes == 4.0.0',\n+ 'kubernetes == 5.0.0',\n 'PyYAML == 3.12',\n ],\n test_suite='setuptest.setuptest.SetupTestSuite',\n", "issue": "Latest minikube not starting on Travis CI\nSame issue and hopefully fix as this https://github.com/kubernetes/minikube/issues/2704\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\nsetup(\n name='aimmo',\n cmdclass=versioneer.get_cmdclass(),\n packages=find_packages(),\n include_package_data=True,\n install_requires=[\n 'django >= 1.8.3, < 1.9.0',\n 'django-autoconfig >= 0.3.6, < 1.0.0',\n 'django-forms-bootstrap',\n 'django-js-reverse',\n 'eventlet',\n 'flask',\n 'flask-socketio',\n 'requests',\n 'six',\n 'pykube',\n 'hypothesis',\n 'flask-cors >= 3.0, < 3.1',\n 'psutil >= 5.4, < 5.5',\n ],\n tests_require=[\n 'django-setuptest',\n 'httmock',\n 'mock == 2.0.0',\n 'docker == 2.7.0',\n 'kubernetes == 4.0.0',\n 'PyYAML == 3.12',\n ],\n test_suite='setuptest.setuptest.SetupTestSuite',\n version=versioneer.get_version(),\n zip_safe=False,\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\nsetup(\n name='aimmo',\n cmdclass=versioneer.get_cmdclass(),\n packages=find_packages(),\n include_package_data=True,\n install_requires=[\n 'django >= 1.8.3, < 1.9.0',\n 'django-autoconfig >= 0.3.6, < 1.0.0',\n 'django-forms-bootstrap',\n 'django-js-reverse',\n 'eventlet',\n 'flask',\n 'flask-socketio',\n 'requests',\n 'six',\n 'pykube',\n 'hypothesis',\n 'flask-cors >= 3.0, < 3.1',\n 'psutil >= 5.4, < 5.5',\n ],\n tests_require=[\n 'django-setuptest',\n 'httmock',\n 'mock == 2.0.0',\n 'docker == 2.7.0',\n 'kubernetes == 5.0.0',\n 'PyYAML == 3.12',\n ],\n test_suite='setuptest.setuptest.SetupTestSuite',\n version=versioneer.get_version(),\n zip_safe=False,\n)\n", "path": "setup.py"}]} | 630 | 114 |
gh_patches_debug_1492 | rasdani/github-patches | git_diff | wright-group__WrightTools-590 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change __version__ to match pep 440
Specifically, when a branch is specified, it should use a plus sign instead of minus
https://www.python.org/dev/peps/pep-0440/#local-version-identifiers
https://github.com/wright-group/WrightTools/blob/490a4a3d6fb6f016e7033d661b553b72c2d86fcb/WrightTools/__version__.py#L33
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `WrightTools/__version__.py`
Content:
```
1 """Define WrightTools version."""
2
3
4 # --- import --------------------------------------------------------------------------------------
5
6
7 import os
8
9
10 # ---- define -------------------------------------------------------------------------------------
11
12
13 here = os.path.abspath(os.path.dirname(__file__))
14
15
16 __all__ = ['__version__', '__branch__']
17
18
19 # --- version -------------------------------------------------------------------------------------
20
21
22 # read from VERSION file
23 with open(os.path.join(os.path.dirname(here), 'VERSION')) as f:
24 __version__ = f.read().strip()
25
26
27 # add git branch, if appropriate
28 p = os.path.join(os.path.dirname(here), '.git', 'HEAD')
29 if os.path.isfile(p):
30 with open(p) as f:
31 __branch__ = f.readline().rstrip().split(r'/')[-1]
32 if __branch__ != 'master':
33 __version__ += '-' + __branch__
34 else:
35 __branch__ = None
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/WrightTools/__version__.py b/WrightTools/__version__.py
--- a/WrightTools/__version__.py
+++ b/WrightTools/__version__.py
@@ -30,6 +30,6 @@
with open(p) as f:
__branch__ = f.readline().rstrip().split(r'/')[-1]
if __branch__ != 'master':
- __version__ += '-' + __branch__
+ __version__ += '+' + __branch__
else:
__branch__ = None
| {"golden_diff": "diff --git a/WrightTools/__version__.py b/WrightTools/__version__.py\n--- a/WrightTools/__version__.py\n+++ b/WrightTools/__version__.py\n@@ -30,6 +30,6 @@\n with open(p) as f:\n __branch__ = f.readline().rstrip().split(r'/')[-1]\n if __branch__ != 'master':\n- __version__ += '-' + __branch__\n+ __version__ += '+' + __branch__\n else:\n __branch__ = None\n", "issue": "Change __version__ to match pep 440\nSpecifically, when a branch is specified, it should use a plus sign instead of minus\r\n\r\nhttps://www.python.org/dev/peps/pep-0440/#local-version-identifiers\r\n\r\nhttps://github.com/wright-group/WrightTools/blob/490a4a3d6fb6f016e7033d661b553b72c2d86fcb/WrightTools/__version__.py#L33\n", "before_files": [{"content": "\"\"\"Define WrightTools version.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport os\n\n\n# ---- define -------------------------------------------------------------------------------------\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\n__all__ = ['__version__', '__branch__']\n\n\n# --- version -------------------------------------------------------------------------------------\n\n\n# read from VERSION file\nwith open(os.path.join(os.path.dirname(here), 'VERSION')) as f:\n __version__ = f.read().strip()\n\n\n# add git branch, if appropriate\np = os.path.join(os.path.dirname(here), '.git', 'HEAD')\nif os.path.isfile(p):\n with open(p) as f:\n __branch__ = f.readline().rstrip().split(r'/')[-1]\n if __branch__ != 'master':\n __version__ += '-' + __branch__\nelse:\n __branch__ = None\n", "path": "WrightTools/__version__.py"}], "after_files": [{"content": "\"\"\"Define WrightTools version.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport os\n\n\n# ---- define -------------------------------------------------------------------------------------\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\n__all__ = ['__version__', '__branch__']\n\n\n# --- version -------------------------------------------------------------------------------------\n\n\n# read from VERSION file\nwith open(os.path.join(os.path.dirname(here), 'VERSION')) as f:\n __version__ = f.read().strip()\n\n\n# add git branch, if appropriate\np = os.path.join(os.path.dirname(here), '.git', 'HEAD')\nif os.path.isfile(p):\n with open(p) as f:\n __branch__ = f.readline().rstrip().split(r'/')[-1]\n if __branch__ != 'master':\n __version__ += '+' + __branch__\nelse:\n __branch__ = None\n", "path": "WrightTools/__version__.py"}]} | 621 | 117 |
gh_patches_debug_30334 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-1360 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WandbLogger cannot be used with 'ddp'
<!--
### Common bugs:
1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq)
-->
## 🐛 Bug
wandb modifies `init` such that a child process calling init returns None if the master process has called init. This seems to cause a bug with ddp, and results in rank zero having experiment = None, which crashes the program.
### To Reproduce
Can be reproduced with the basic MNIST gpu template, simply add a WandbLogger and pass 'ddp' as the distributed backend.
```
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 331, in ddp_train
self.run_pretrain_routine(model)
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 757, in run_pretrain_routine
self.logger.log_hyperparams(ref_model.hparams)
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/logging/base.py", line 14, in wrapped_fn
fn(self, *args, **kwargs)
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/logging/wandb.py", line 79, in log_hyperparams
self.experiment.config.update(params)
AttributeError: 'NoneType' object has no attribute 'config'
```
This occurs with the latest wandb version and with pytorch-lightning 0.6.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytorch_lightning/loggers/wandb.py`
Content:
```
1 r"""
2
3 .. _wandb:
4
5 WandbLogger
6 -------------
7 """
8 import os
9 from argparse import Namespace
10 from typing import Optional, List, Dict, Union, Any
11
12 import torch.nn as nn
13
14 try:
15 import wandb
16 from wandb.wandb_run import Run
17 except ImportError: # pragma: no-cover
18 raise ImportError('You want to use `wandb` logger which is not installed yet,' # pragma: no-cover
19 ' install it with `pip install wandb`.')
20
21 from pytorch_lightning.loggers.base import LightningLoggerBase, rank_zero_only
22
23
24 class WandbLogger(LightningLoggerBase):
25 """
26 Logger for `W&B <https://www.wandb.com/>`_.
27
28 Args:
29 name (str): display name for the run.
30 save_dir (str): path where data is saved.
31 offline (bool): run offline (data can be streamed later to wandb servers).
32 id or version (str): sets the version, mainly used to resume a previous run.
33 anonymous (bool): enables or explicitly disables anonymous logging.
34 project (str): the name of the project to which this run will belong.
35 tags (list of str): tags associated with this run.
36 log_model (bool): save checkpoints in wandb dir to upload on W&B servers.
37
38 Example
39 --------
40 .. code-block:: python
41
42 from pytorch_lightning.loggers import WandbLogger
43 from pytorch_lightning import Trainer
44
45 wandb_logger = WandbLogger()
46 trainer = Trainer(logger=wandb_logger)
47 """
48
49 def __init__(self, name: Optional[str] = None, save_dir: Optional[str] = None,
50 offline: bool = False, id: Optional[str] = None, anonymous: bool = False,
51 version: Optional[str] = None, project: Optional[str] = None,
52 tags: Optional[List[str]] = None, log_model: bool = False,
53 experiment=None, entity=None):
54 super().__init__()
55 self._name = name
56 self._save_dir = save_dir
57 self._anonymous = 'allow' if anonymous else None
58 self._id = version or id
59 self._tags = tags
60 self._project = project
61 self._experiment = experiment
62 self._offline = offline
63 self._entity = entity
64 self._log_model = log_model
65
66 def __getstate__(self):
67 state = self.__dict__.copy()
68 # cannot be pickled
69 state['_experiment'] = None
70 # args needed to reload correct experiment
71 state['_id'] = self.experiment.id
72 return state
73
74 @property
75 def experiment(self) -> Run:
76 r"""
77
78 Actual wandb object. To use wandb features do the following.
79
80 Example::
81
82 self.logger.experiment.some_wandb_function()
83
84 """
85 if self._experiment is None:
86 if self._offline:
87 os.environ['WANDB_MODE'] = 'dryrun'
88 self._experiment = wandb.init(
89 name=self._name, dir=self._save_dir, project=self._project, anonymous=self._anonymous,
90 id=self._id, resume='allow', tags=self._tags, entity=self._entity)
91 # save checkpoints in wandb dir to upload on W&B servers
92 if self._log_model:
93 self.save_dir = self._experiment.dir
94 return self._experiment
95
96 def watch(self, model: nn.Module, log: str = 'gradients', log_freq: int = 100):
97 wandb.watch(model, log=log, log_freq=log_freq)
98
99 @rank_zero_only
100 def log_hyperparams(self, params: Union[Dict[str, Any], Namespace]) -> None:
101 params = self._convert_params(params)
102 self.experiment.config.update(params)
103
104 @rank_zero_only
105 def log_metrics(self, metrics: Dict[str, float], step: Optional[int] = None) -> None:
106 if step is not None:
107 metrics['global_step'] = step
108 self.experiment.log(metrics)
109
110 @property
111 def name(self) -> str:
112 return self.experiment.project_name()
113
114 @property
115 def version(self) -> str:
116 return self.experiment.id
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pytorch_lightning/loggers/wandb.py b/pytorch_lightning/loggers/wandb.py
--- a/pytorch_lightning/loggers/wandb.py
+++ b/pytorch_lightning/loggers/wandb.py
@@ -65,10 +65,11 @@
def __getstate__(self):
state = self.__dict__.copy()
+ # args needed to reload correct experiment
+ state['_id'] = self._experiment.id if self._experiment is not None else None
+
# cannot be pickled
state['_experiment'] = None
- # args needed to reload correct experiment
- state['_id'] = self.experiment.id
return state
@property
@@ -87,7 +88,7 @@
os.environ['WANDB_MODE'] = 'dryrun'
self._experiment = wandb.init(
name=self._name, dir=self._save_dir, project=self._project, anonymous=self._anonymous,
- id=self._id, resume='allow', tags=self._tags, entity=self._entity)
+ reinit=True, id=self._id, resume='allow', tags=self._tags, entity=self._entity)
# save checkpoints in wandb dir to upload on W&B servers
if self._log_model:
self.save_dir = self._experiment.dir
@@ -109,8 +110,11 @@
@property
def name(self) -> str:
- return self.experiment.project_name()
+ # don't create an experiment if we don't have one
+ name = self._experiment.project_name() if self._experiment else None
+ return name
@property
def version(self) -> str:
- return self.experiment.id
+ # don't create an experiment if we don't have one
+ return self._experiment.id if self._experiment else None
| {"golden_diff": "diff --git a/pytorch_lightning/loggers/wandb.py b/pytorch_lightning/loggers/wandb.py\n--- a/pytorch_lightning/loggers/wandb.py\n+++ b/pytorch_lightning/loggers/wandb.py\n@@ -65,10 +65,11 @@\n \n def __getstate__(self):\n state = self.__dict__.copy()\n+ # args needed to reload correct experiment\n+ state['_id'] = self._experiment.id if self._experiment is not None else None\n+\n # cannot be pickled\n state['_experiment'] = None\n- # args needed to reload correct experiment\n- state['_id'] = self.experiment.id\n return state\n \n @property\n@@ -87,7 +88,7 @@\n os.environ['WANDB_MODE'] = 'dryrun'\n self._experiment = wandb.init(\n name=self._name, dir=self._save_dir, project=self._project, anonymous=self._anonymous,\n- id=self._id, resume='allow', tags=self._tags, entity=self._entity)\n+ reinit=True, id=self._id, resume='allow', tags=self._tags, entity=self._entity)\n # save checkpoints in wandb dir to upload on W&B servers\n if self._log_model:\n self.save_dir = self._experiment.dir\n@@ -109,8 +110,11 @@\n \n @property\n def name(self) -> str:\n- return self.experiment.project_name()\n+ # don't create an experiment if we don't have one\n+ name = self._experiment.project_name() if self._experiment else None\n+ return name\n \n @property\n def version(self) -> str:\n- return self.experiment.id\n+ # don't create an experiment if we don't have one\n+ return self._experiment.id if self._experiment else None\n", "issue": "WandbLogger cannot be used with 'ddp'\n<!-- \r\n### Common bugs:\r\n1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79). \r\n2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq) \r\n-->\r\n\r\n## \ud83d\udc1b Bug\r\n\r\nwandb modifies `init` such that a child process calling init returns None if the master process has called init. This seems to cause a bug with ddp, and results in rank zero having experiment = None, which crashes the program.\r\n\r\n### To Reproduce\r\n\r\nCan be reproduced with the basic MNIST gpu template, simply add a WandbLogger and pass 'ddp' as the distributed backend.\r\n```\r\n-- Process 0 terminated with the following error:\r\nTraceback (most recent call last):\r\n File \"/home/rmrao/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py\", line 19, in _wrap\r\n fn(i, *args)\r\n File \"/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py\", line 331, in ddp_train\r\n self.run_pretrain_routine(model)\r\n File \"/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py\", line 757, in run_pretrain_routine\r\n self.logger.log_hyperparams(ref_model.hparams)\r\n File \"/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/logging/base.py\", line 14, in wrapped_fn\r\n fn(self, *args, **kwargs)\r\n File \"/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/logging/wandb.py\", line 79, in log_hyperparams\r\n self.experiment.config.update(params)\r\nAttributeError: 'NoneType' object has no attribute 'config'\r\n```\r\n\r\nThis occurs with the latest wandb version and with pytorch-lightning 0.6.\r\n\n", "before_files": [{"content": "r\"\"\"\n\n.. 
_wandb:\n\nWandbLogger\n-------------\n\"\"\"\nimport os\nfrom argparse import Namespace\nfrom typing import Optional, List, Dict, Union, Any\n\nimport torch.nn as nn\n\ntry:\n import wandb\n from wandb.wandb_run import Run\nexcept ImportError: # pragma: no-cover\n raise ImportError('You want to use `wandb` logger which is not installed yet,' # pragma: no-cover\n ' install it with `pip install wandb`.')\n\nfrom pytorch_lightning.loggers.base import LightningLoggerBase, rank_zero_only\n\n\nclass WandbLogger(LightningLoggerBase):\n \"\"\"\n Logger for `W&B <https://www.wandb.com/>`_.\n\n Args:\n name (str): display name for the run.\n save_dir (str): path where data is saved.\n offline (bool): run offline (data can be streamed later to wandb servers).\n id or version (str): sets the version, mainly used to resume a previous run.\n anonymous (bool): enables or explicitly disables anonymous logging.\n project (str): the name of the project to which this run will belong.\n tags (list of str): tags associated with this run.\n log_model (bool): save checkpoints in wandb dir to upload on W&B servers.\n\n Example\n --------\n .. code-block:: python\n\n from pytorch_lightning.loggers import WandbLogger\n from pytorch_lightning import Trainer\n\n wandb_logger = WandbLogger()\n trainer = Trainer(logger=wandb_logger)\n \"\"\"\n\n def __init__(self, name: Optional[str] = None, save_dir: Optional[str] = None,\n offline: bool = False, id: Optional[str] = None, anonymous: bool = False,\n version: Optional[str] = None, project: Optional[str] = None,\n tags: Optional[List[str]] = None, log_model: bool = False,\n experiment=None, entity=None):\n super().__init__()\n self._name = name\n self._save_dir = save_dir\n self._anonymous = 'allow' if anonymous else None\n self._id = version or id\n self._tags = tags\n self._project = project\n self._experiment = experiment\n self._offline = offline\n self._entity = entity\n self._log_model = log_model\n\n def __getstate__(self):\n state = self.__dict__.copy()\n # cannot be pickled\n state['_experiment'] = None\n # args needed to reload correct experiment\n state['_id'] = self.experiment.id\n return state\n\n @property\n def experiment(self) -> Run:\n r\"\"\"\n\n Actual wandb object. To use wandb features do the following.\n\n Example::\n\n self.logger.experiment.some_wandb_function()\n\n \"\"\"\n if self._experiment is None:\n if self._offline:\n os.environ['WANDB_MODE'] = 'dryrun'\n self._experiment = wandb.init(\n name=self._name, dir=self._save_dir, project=self._project, anonymous=self._anonymous,\n id=self._id, resume='allow', tags=self._tags, entity=self._entity)\n # save checkpoints in wandb dir to upload on W&B servers\n if self._log_model:\n self.save_dir = self._experiment.dir\n return self._experiment\n\n def watch(self, model: nn.Module, log: str = 'gradients', log_freq: int = 100):\n wandb.watch(model, log=log, log_freq=log_freq)\n\n @rank_zero_only\n def log_hyperparams(self, params: Union[Dict[str, Any], Namespace]) -> None:\n params = self._convert_params(params)\n self.experiment.config.update(params)\n\n @rank_zero_only\n def log_metrics(self, metrics: Dict[str, float], step: Optional[int] = None) -> None:\n if step is not None:\n metrics['global_step'] = step\n self.experiment.log(metrics)\n\n @property\n def name(self) -> str:\n return self.experiment.project_name()\n\n @property\n def version(self) -> str:\n return self.experiment.id\n", "path": "pytorch_lightning/loggers/wandb.py"}], "after_files": [{"content": "r\"\"\"\n\n.. 
_wandb:\n\nWandbLogger\n-------------\n\"\"\"\nimport os\nfrom argparse import Namespace\nfrom typing import Optional, List, Dict, Union, Any\n\nimport torch.nn as nn\n\ntry:\n import wandb\n from wandb.wandb_run import Run\nexcept ImportError: # pragma: no-cover\n raise ImportError('You want to use `wandb` logger which is not installed yet,' # pragma: no-cover\n ' install it with `pip install wandb`.')\n\nfrom pytorch_lightning.loggers.base import LightningLoggerBase, rank_zero_only\n\n\nclass WandbLogger(LightningLoggerBase):\n \"\"\"\n Logger for `W&B <https://www.wandb.com/>`_.\n\n Args:\n name (str): display name for the run.\n save_dir (str): path where data is saved.\n offline (bool): run offline (data can be streamed later to wandb servers).\n id or version (str): sets the version, mainly used to resume a previous run.\n anonymous (bool): enables or explicitly disables anonymous logging.\n project (str): the name of the project to which this run will belong.\n tags (list of str): tags associated with this run.\n log_model (bool): save checkpoints in wandb dir to upload on W&B servers.\n\n Example\n --------\n .. code-block:: python\n\n from pytorch_lightning.loggers import WandbLogger\n from pytorch_lightning import Trainer\n\n wandb_logger = WandbLogger()\n trainer = Trainer(logger=wandb_logger)\n \"\"\"\n\n def __init__(self, name: Optional[str] = None, save_dir: Optional[str] = None,\n offline: bool = False, id: Optional[str] = None, anonymous: bool = False,\n version: Optional[str] = None, project: Optional[str] = None,\n tags: Optional[List[str]] = None, log_model: bool = False,\n experiment=None, entity=None):\n super().__init__()\n self._name = name\n self._save_dir = save_dir\n self._anonymous = 'allow' if anonymous else None\n self._id = version or id\n self._tags = tags\n self._project = project\n self._experiment = experiment\n self._offline = offline\n self._entity = entity\n self._log_model = log_model\n\n def __getstate__(self):\n state = self.__dict__.copy()\n # args needed to reload correct experiment\n state['_id'] = self._experiment.id if self._experiment is not None else None\n\n # cannot be pickled\n state['_experiment'] = None\n return state\n\n @property\n def experiment(self) -> Run:\n r\"\"\"\n\n Actual wandb object. 
To use wandb features do the following.\n\n Example::\n\n self.logger.experiment.some_wandb_function()\n\n \"\"\"\n if self._experiment is None:\n if self._offline:\n os.environ['WANDB_MODE'] = 'dryrun'\n self._experiment = wandb.init(\n name=self._name, dir=self._save_dir, project=self._project, anonymous=self._anonymous,\n reinit=True, id=self._id, resume='allow', tags=self._tags, entity=self._entity)\n # save checkpoints in wandb dir to upload on W&B servers\n if self._log_model:\n self.save_dir = self._experiment.dir\n return self._experiment\n\n def watch(self, model: nn.Module, log: str = 'gradients', log_freq: int = 100):\n wandb.watch(model, log=log, log_freq=log_freq)\n\n @rank_zero_only\n def log_hyperparams(self, params: Union[Dict[str, Any], Namespace]) -> None:\n params = self._convert_params(params)\n self.experiment.config.update(params)\n\n @rank_zero_only\n def log_metrics(self, metrics: Dict[str, float], step: Optional[int] = None) -> None:\n if step is not None:\n metrics['global_step'] = step\n self.experiment.log(metrics)\n\n @property\n def name(self) -> str:\n # don't create an experiment if we don't have one\n name = self._experiment.project_name() if self._experiment else None\n return name\n\n @property\n def version(self) -> str:\n # don't create an experiment if we don't have one\n return self._experiment.id if self._experiment else None\n", "path": "pytorch_lightning/loggers/wandb.py"}]} | 1,917 | 419 |
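Aside: the essence of the fix above is that `name`, `version`, and pickling must not force-create the experiment, because under `ddp` a child process can legitimately have none (`wandb.init` may return `None` there, and the patch also passes `reinit=True`). Below is a minimal sketch of that guarded-lazy pattern with a stand-in run factory; nothing here is the actual `WandbLogger` implementation.
```python
# Generic lazy-experiment holder: only `experiment` may create a run; everything
# else tolerates the run being absent.
class LazyExperimentLogger:
    def __init__(self, make_run=None):
        self._make_run = make_run  # stand-in for wandb.init (assumption)
        self._experiment = None

    @property
    def experiment(self):
        if self._experiment is None and self._make_run is not None:
            self._experiment = self._make_run()
        return self._experiment

    @property
    def version(self):
        # don't create an experiment just to report a version
        return self._experiment.id if self._experiment else None

    def __getstate__(self):
        state = self.__dict__.copy()
        state["_id"] = self._experiment.id if self._experiment is not None else None
        state["_experiment"] = None  # run handles cannot be pickled
        return state


logger = LazyExperimentLogger()
print(logger.version)  # None, and no experiment was created as a side effect
```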
gh_patches_debug_3450 | rasdani/github-patches | git_diff | astronomer__astro-sdk-176 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use standard AWS environment variables
**Context**
At the moment, Astro 0.6.x uses a custom environment variable `AIRFLOW__ASTRO__CONN_AWS_DEFAULT` to define AWS credentials. However, there are standard [AWS environment variables to define credentials](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#using-environment-variables).
**Acceptance criteria**
* Replace any occurrence of `AIRFLOW__ASTRO__CONN_AWS_DEFAULT` by `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/astro/utils/cloud_storage_creds.py`
Content:
```
1 import json
2 import os
3 from urllib import parse
4
5 from airflow.hooks.base import BaseHook
6
7 from astro.utils.dependencies import (
8 AwsBaseHook,
9 BotoSession,
10 GCSClient,
11 GCSHook,
12 google_service_account,
13 )
14
15
16 def parse_s3_env_var():
17 raw_data = (
18 os.environ["AIRFLOW__ASTRO__CONN_AWS_DEFAULT"]
19 .replace("%2F", "/")
20 .replace("aws://", "")
21 .replace("@", "")
22 .split(":")
23 )
24 return [parse.unquote(r) for r in raw_data]
25
26
27 def s3fs_creds(conn_id=None):
28 """Structure s3fs credentials from Airflow connection.
29 s3fs enables pandas to write to s3
30 """
31 if conn_id:
32 # The following line raises a friendly exception
33 BaseHook.get_connection(conn_id)
34 aws_hook = AwsBaseHook(conn_id, client_type="S3")
35 session = aws_hook.get_session()
36 else:
37 key, secret = parse_s3_env_var()
38 session = BotoSession(
39 aws_access_key_id=key,
40 aws_secret_access_key=secret,
41 )
42 return dict(client=session.client("s3"))
43
44
45 def gcs_client(conn_id=None):
46 """
47 get GCS credentials for storage.
48 """
49 if conn_id:
50 gcs_hook = GCSHook(conn_id)
51 client = gcs_hook.get_conn()
52 else:
53 client = GCSClient()
54
55 return dict(client=client)
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/astro/utils/cloud_storage_creds.py b/src/astro/utils/cloud_storage_creds.py
--- a/src/astro/utils/cloud_storage_creds.py
+++ b/src/astro/utils/cloud_storage_creds.py
@@ -14,14 +14,7 @@
def parse_s3_env_var():
- raw_data = (
- os.environ["AIRFLOW__ASTRO__CONN_AWS_DEFAULT"]
- .replace("%2F", "/")
- .replace("aws://", "")
- .replace("@", "")
- .split(":")
- )
- return [parse.unquote(r) for r in raw_data]
+ return os.environ["AWS_ACCESS_KEY_ID"], os.environ["AWS_SECRET_ACCESS_KEY"]
def s3fs_creds(conn_id=None):
| {"golden_diff": "diff --git a/src/astro/utils/cloud_storage_creds.py b/src/astro/utils/cloud_storage_creds.py\n--- a/src/astro/utils/cloud_storage_creds.py\n+++ b/src/astro/utils/cloud_storage_creds.py\n@@ -14,14 +14,7 @@\n \n \n def parse_s3_env_var():\n- raw_data = (\n- os.environ[\"AIRFLOW__ASTRO__CONN_AWS_DEFAULT\"]\n- .replace(\"%2F\", \"/\")\n- .replace(\"aws://\", \"\")\n- .replace(\"@\", \"\")\n- .split(\":\")\n- )\n- return [parse.unquote(r) for r in raw_data]\n+ return os.environ[\"AWS_ACCESS_KEY_ID\"], os.environ[\"AWS_SECRET_ACCESS_KEY\"]\n \n \n def s3fs_creds(conn_id=None):\n", "issue": "Use standard AWS environment variables\n**Context**\r\nAt the moment, Astro 0.6.x uses a custom environment variable `AIRFLOW__ASTRO__CONN_AWS_DEFAULT` to define AWS credentials. However, there are standard [AWS environment variables to define credentials](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#using-environment-variables).\r\n\r\n**Acceptance criteria**\r\n* Replace any occurrence of `AIRFLOW__ASTRO__CONN_AWS_DEFAULT` by `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`\n", "before_files": [{"content": "import json\nimport os\nfrom urllib import parse\n\nfrom airflow.hooks.base import BaseHook\n\nfrom astro.utils.dependencies import (\n AwsBaseHook,\n BotoSession,\n GCSClient,\n GCSHook,\n google_service_account,\n)\n\n\ndef parse_s3_env_var():\n raw_data = (\n os.environ[\"AIRFLOW__ASTRO__CONN_AWS_DEFAULT\"]\n .replace(\"%2F\", \"/\")\n .replace(\"aws://\", \"\")\n .replace(\"@\", \"\")\n .split(\":\")\n )\n return [parse.unquote(r) for r in raw_data]\n\n\ndef s3fs_creds(conn_id=None):\n \"\"\"Structure s3fs credentials from Airflow connection.\n s3fs enables pandas to write to s3\n \"\"\"\n if conn_id:\n # The following line raises a friendly exception\n BaseHook.get_connection(conn_id)\n aws_hook = AwsBaseHook(conn_id, client_type=\"S3\")\n session = aws_hook.get_session()\n else:\n key, secret = parse_s3_env_var()\n session = BotoSession(\n aws_access_key_id=key,\n aws_secret_access_key=secret,\n )\n return dict(client=session.client(\"s3\"))\n\n\ndef gcs_client(conn_id=None):\n \"\"\"\n get GCS credentials for storage.\n \"\"\"\n if conn_id:\n gcs_hook = GCSHook(conn_id)\n client = gcs_hook.get_conn()\n else:\n client = GCSClient()\n\n return dict(client=client)\n", "path": "src/astro/utils/cloud_storage_creds.py"}], "after_files": [{"content": "import json\nimport os\nfrom urllib import parse\n\nfrom airflow.hooks.base import BaseHook\n\nfrom astro.utils.dependencies import (\n AwsBaseHook,\n BotoSession,\n GCSClient,\n GCSHook,\n google_service_account,\n)\n\n\ndef parse_s3_env_var():\n return os.environ[\"AWS_ACCESS_KEY_ID\"], os.environ[\"AWS_SECRET_ACCESS_KEY\"]\n\n\ndef s3fs_creds(conn_id=None):\n \"\"\"Structure s3fs credentials from Airflow connection.\n s3fs enables pandas to write to s3\n \"\"\"\n if conn_id:\n # The following line raises a friendly exception\n BaseHook.get_connection(conn_id)\n aws_hook = AwsBaseHook(conn_id, client_type=\"S3\")\n session = aws_hook.get_session()\n else:\n key, secret = parse_s3_env_var()\n session = BotoSession(\n aws_access_key_id=key,\n aws_secret_access_key=secret,\n )\n return dict(client=session.client(\"s3\"))\n\n\ndef gcs_client(conn_id=None):\n \"\"\"\n get GCS credentials for storage.\n \"\"\"\n if conn_id:\n gcs_hook = GCSHook(conn_id)\n client = gcs_hook.get_conn()\n else:\n client = GCSClient()\n\n return dict(client=client)\n", "path": "src/astro/utils/cloud_storage_creds.py"}]} | 806 | 
171 |
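Aside: with the standard variables the patch above adopts, boto3 can also resolve credentials implicitly through its default chain, so the explicit `os.environ` lookups mainly serve to fail fast on missing values. The sketch below assumes `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are exported and `boto3` is installed; it is not repository code.
```python
import os

import boto3

# Explicit, mirroring the patched parse_s3_env_var(): raises KeyError if unset.
key, secret = os.environ["AWS_ACCESS_KEY_ID"], os.environ["AWS_SECRET_ACCESS_KEY"]
explicit_client = boto3.session.Session(
    aws_access_key_id=key, aws_secret_access_key=secret
).client("s3")

# Implicit: boto3's default credential chain reads the same variables itself.
implicit_client = boto3.session.Session().client("s3")
```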