problem_id (string, 18-22 chars) | source (string, 1 class) | task_type (string, 1 class) | in_source_id (string, 13-58 chars) | prompt (string, 1.1k-25.4k chars) | golden_diff (string, 145-5.13k chars) | verification_info (string, 582-39.1k chars) | num_tokens (int64, 271-4.1k) | num_tokens_diff (int64, 47-1.02k)
---|---|---|---|---|---|---|---|---|
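Each row pairs a model-facing `prompt` (issue text plus the relevant file contents) with a reference `golden_diff`, and repeats the issue, the patch, and the before/after file contents inside `verification_info` as an escaped JSON string. The sketch below is a minimal, non-authoritative way to load the dump and unpack one record; the Hub path `rasdani/github-patches` is taken from the `source` column, while the split name and the dataset's exact availability on the Hub are assumptions.

```python
import json

from datasets import load_dataset

# Hub path taken from the `source` column shown in each row; the split name
# ("train") is an assumption and may need adjusting.
ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["in_source_id"])      # e.g. gh_patches_debug_21692 open-mmlab__mmdetection-4303
print(row["num_tokens"], row["num_tokens_diff"])   # token counts for prompt and diff

# `verification_info` is stored as an escaped JSON string; its keys
# (golden_diff, issue, before_files, after_files) are visible in the rows below.
info = json.loads(row["verification_info"])
print(sorted(info))                                # ['after_files', 'before_files', 'golden_diff', 'issue']
for f in info["before_files"]:
    print(f["path"], len(f["content"]))            # buggy file(s) prior to the patch
print(row["golden_diff"])                          # reference patch in `git diff` format
```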
gh_patches_debug_21692 | rasdani/github-patches | git_diff | open-mmlab__mmdetection-4303 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Beta update assert error in dynamic rcnn
When the beta updates, new beta will be the median of the beta_history. It might be 0.0 but the beta used in loss function should be greater than 0.
The assert error occurred.
[https://github.com/open-mmlab/mmdetection/blob/5d07dda8719b0c67c11ad015410bc111e9f90200/mmdet/models/roi_heads/dynamic_roi_head.py#L144](url)
The code to fix it could be as follows:
`if np.abs(np.median(self.beta_history)) < 1e-15:
new_beta = self.bbox_head.loss_bbox.beta
else:
new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,
np.median(self.beta_history))`
I am willing to create a PR to fix it as well.
Beta update assert error in dynamic rcnn
When the beta updates, new beta will be the median of the beta_history. It might be 0.0 but the beta used in loss function should be greater than 0.
The assert error occurred.
[https://github.com/open-mmlab/mmdetection/blob/5d07dda8719b0c67c11ad015410bc111e9f90200/mmdet/models/roi_heads/dynamic_roi_head.py#L144](url)
The code to fix it could be as follows:
`if np.abs(np.median(self.beta_history)) < 1e-15:
new_beta = self.bbox_head.loss_bbox.beta
else:
new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,
np.median(self.beta_history))`
I am willing to create a PR to fix it as well.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmdet/models/roi_heads/dynamic_roi_head.py`
Content:
```
1 import numpy as np
2 import torch
3
4 from mmdet.core import bbox2roi
5 from mmdet.models.losses import SmoothL1Loss
6 from ..builder import HEADS
7 from .standard_roi_head import StandardRoIHead
8
9
10 @HEADS.register_module()
11 class DynamicRoIHead(StandardRoIHead):
12 """RoI head for `Dynamic R-CNN <https://arxiv.org/abs/2004.06002>`_."""
13
14 def __init__(self, **kwargs):
15 super(DynamicRoIHead, self).__init__(**kwargs)
16 assert isinstance(self.bbox_head.loss_bbox, SmoothL1Loss)
17 # the IoU history of the past `update_iter_interval` iterations
18 self.iou_history = []
19 # the beta history of the past `update_iter_interval` iterations
20 self.beta_history = []
21
22 def forward_train(self,
23 x,
24 img_metas,
25 proposal_list,
26 gt_bboxes,
27 gt_labels,
28 gt_bboxes_ignore=None,
29 gt_masks=None):
30 """Forward function for training.
31
32 Args:
33 x (list[Tensor]): list of multi-level img features.
34
35 img_metas (list[dict]): list of image info dict where each dict
36 has: 'img_shape', 'scale_factor', 'flip', and may also contain
37 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
38 For details on the values of these keys see
39 `mmdet/datasets/pipelines/formatting.py:Collect`.
40
41 proposals (list[Tensors]): list of region proposals.
42
43 gt_bboxes (list[Tensor]): each item are the truth boxes for each
44 image in [tl_x, tl_y, br_x, br_y] format.
45
46 gt_labels (list[Tensor]): class indices corresponding to each box
47
48 gt_bboxes_ignore (None | list[Tensor]): specify which bounding
49 boxes can be ignored when computing the loss.
50
51 gt_masks (None | Tensor) : true segmentation masks for each box
52 used if the architecture supports a segmentation task.
53
54 Returns:
55 dict[str, Tensor]: a dictionary of loss components
56 """
57 # assign gts and sample proposals
58 if self.with_bbox or self.with_mask:
59 num_imgs = len(img_metas)
60 if gt_bboxes_ignore is None:
61 gt_bboxes_ignore = [None for _ in range(num_imgs)]
62 sampling_results = []
63 cur_iou = []
64 for i in range(num_imgs):
65 assign_result = self.bbox_assigner.assign(
66 proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i],
67 gt_labels[i])
68 sampling_result = self.bbox_sampler.sample(
69 assign_result,
70 proposal_list[i],
71 gt_bboxes[i],
72 gt_labels[i],
73 feats=[lvl_feat[i][None] for lvl_feat in x])
74 # record the `iou_topk`-th largest IoU in an image
75 iou_topk = min(self.train_cfg.dynamic_rcnn.iou_topk,
76 len(assign_result.max_overlaps))
77 ious, _ = torch.topk(assign_result.max_overlaps, iou_topk)
78 cur_iou.append(ious[-1].item())
79 sampling_results.append(sampling_result)
80 # average the current IoUs over images
81 cur_iou = np.mean(cur_iou)
82 self.iou_history.append(cur_iou)
83
84 losses = dict()
85 # bbox head forward and loss
86 if self.with_bbox:
87 bbox_results = self._bbox_forward_train(x, sampling_results,
88 gt_bboxes, gt_labels,
89 img_metas)
90 losses.update(bbox_results['loss_bbox'])
91
92 # mask head forward and loss
93 if self.with_mask:
94 mask_results = self._mask_forward_train(x, sampling_results,
95 bbox_results['bbox_feats'],
96 gt_masks, img_metas)
97 losses.update(mask_results['loss_mask'])
98
99 # update IoU threshold and SmoothL1 beta
100 update_iter_interval = self.train_cfg.dynamic_rcnn.update_iter_interval
101 if len(self.iou_history) % update_iter_interval == 0:
102 new_iou_thr, new_beta = self.update_hyperparameters()
103
104 return losses
105
106 def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels,
107 img_metas):
108 num_imgs = len(img_metas)
109 rois = bbox2roi([res.bboxes for res in sampling_results])
110 bbox_results = self._bbox_forward(x, rois)
111
112 bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes,
113 gt_labels, self.train_cfg)
114 # record the `beta_topk`-th smallest target
115 # `bbox_targets[2]` and `bbox_targets[3]` stand for bbox_targets
116 # and bbox_weights, respectively
117 pos_inds = bbox_targets[3][:, 0].nonzero().squeeze(1)
118 num_pos = len(pos_inds)
119 cur_target = bbox_targets[2][pos_inds, :2].abs().mean(dim=1)
120 beta_topk = min(self.train_cfg.dynamic_rcnn.beta_topk * num_imgs,
121 num_pos)
122 cur_target = torch.kthvalue(cur_target, beta_topk)[0].item()
123 self.beta_history.append(cur_target)
124 loss_bbox = self.bbox_head.loss(bbox_results['cls_score'],
125 bbox_results['bbox_pred'], rois,
126 *bbox_targets)
127
128 bbox_results.update(loss_bbox=loss_bbox)
129 return bbox_results
130
131 def update_hyperparameters(self):
132 """Update hyperparameters like IoU thresholds for assigner and beta for
133 SmoothL1 loss based on the training statistics.
134
135 Returns:
136 tuple[float]: the updated ``iou_thr`` and ``beta``.
137 """
138 new_iou_thr = max(self.train_cfg.dynamic_rcnn.initial_iou,
139 np.mean(self.iou_history))
140 self.iou_history = []
141 self.bbox_assigner.pos_iou_thr = new_iou_thr
142 self.bbox_assigner.neg_iou_thr = new_iou_thr
143 self.bbox_assigner.min_pos_iou = new_iou_thr
144 new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,
145 np.median(self.beta_history))
146 self.beta_history = []
147 self.bbox_head.loss_bbox.beta = new_beta
148 return new_iou_thr, new_beta
149
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mmdet/models/roi_heads/dynamic_roi_head.py b/mmdet/models/roi_heads/dynamic_roi_head.py
--- a/mmdet/models/roi_heads/dynamic_roi_head.py
+++ b/mmdet/models/roi_heads/dynamic_roi_head.py
@@ -6,6 +6,8 @@
from ..builder import HEADS
from .standard_roi_head import StandardRoIHead
+EPS = 1e-15
+
@HEADS.register_module()
class DynamicRoIHead(StandardRoIHead):
@@ -141,8 +143,12 @@
self.bbox_assigner.pos_iou_thr = new_iou_thr
self.bbox_assigner.neg_iou_thr = new_iou_thr
self.bbox_assigner.min_pos_iou = new_iou_thr
- new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,
- np.median(self.beta_history))
+ if (np.median(self.beta_history) < EPS):
+ # avoid 0 or too small value for new_beta
+ new_beta = self.bbox_head.loss_bbox.beta
+ else:
+ new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,
+ np.median(self.beta_history))
self.beta_history = []
self.bbox_head.loss_bbox.beta = new_beta
return new_iou_thr, new_beta
|
{"golden_diff": "diff --git a/mmdet/models/roi_heads/dynamic_roi_head.py b/mmdet/models/roi_heads/dynamic_roi_head.py\n--- a/mmdet/models/roi_heads/dynamic_roi_head.py\n+++ b/mmdet/models/roi_heads/dynamic_roi_head.py\n@@ -6,6 +6,8 @@\n from ..builder import HEADS\n from .standard_roi_head import StandardRoIHead\n \n+EPS = 1e-15\n+\n \n @HEADS.register_module()\n class DynamicRoIHead(StandardRoIHead):\n@@ -141,8 +143,12 @@\n self.bbox_assigner.pos_iou_thr = new_iou_thr\n self.bbox_assigner.neg_iou_thr = new_iou_thr\n self.bbox_assigner.min_pos_iou = new_iou_thr\n- new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,\n- np.median(self.beta_history))\n+ if (np.median(self.beta_history) < EPS):\n+ # avoid 0 or too small value for new_beta\n+ new_beta = self.bbox_head.loss_bbox.beta\n+ else:\n+ new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,\n+ np.median(self.beta_history))\n self.beta_history = []\n self.bbox_head.loss_bbox.beta = new_beta\n return new_iou_thr, new_beta\n", "issue": "Beta update assert error in dynamic rcnn\nWhen the beta updates, new beta will be the median of the beta_history. It might be 0.0 but the beta used in loss function should be greater than 0.\r\nThe assert error occurred.\r\n[https://github.com/open-mmlab/mmdetection/blob/5d07dda8719b0c67c11ad015410bc111e9f90200/mmdet/models/roi_heads/dynamic_roi_head.py#L144](url)\r\nThe code to fix it could be as follows:\r\n`if np.abs(np.median(self.beta_history)) < 1e-15:\r\n new_beta = self.bbox_head.loss_bbox.beta\r\n else:\r\n new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,\r\n np.median(self.beta_history))`\r\nI am willing to create a PR to fix it as well.\nBeta update assert error in dynamic rcnn\nWhen the beta updates, new beta will be the median of the beta_history. 
It might be 0.0 but the beta used in loss function should be greater than 0.\r\nThe assert error occurred.\r\n[https://github.com/open-mmlab/mmdetection/blob/5d07dda8719b0c67c11ad015410bc111e9f90200/mmdet/models/roi_heads/dynamic_roi_head.py#L144](url)\r\nThe code to fix it could be as follows:\r\n`if np.abs(np.median(self.beta_history)) < 1e-15:\r\n new_beta = self.bbox_head.loss_bbox.beta\r\n else:\r\n new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,\r\n np.median(self.beta_history))`\r\nI am willing to create a PR to fix it as well.\n", "before_files": [{"content": "import numpy as np\nimport torch\n\nfrom mmdet.core import bbox2roi\nfrom mmdet.models.losses import SmoothL1Loss\nfrom ..builder import HEADS\nfrom .standard_roi_head import StandardRoIHead\n\n\[email protected]_module()\nclass DynamicRoIHead(StandardRoIHead):\n \"\"\"RoI head for `Dynamic R-CNN <https://arxiv.org/abs/2004.06002>`_.\"\"\"\n\n def __init__(self, **kwargs):\n super(DynamicRoIHead, self).__init__(**kwargs)\n assert isinstance(self.bbox_head.loss_bbox, SmoothL1Loss)\n # the IoU history of the past `update_iter_interval` iterations\n self.iou_history = []\n # the beta history of the past `update_iter_interval` iterations\n self.beta_history = []\n\n def forward_train(self,\n x,\n img_metas,\n proposal_list,\n gt_bboxes,\n gt_labels,\n gt_bboxes_ignore=None,\n gt_masks=None):\n \"\"\"Forward function for training.\n\n Args:\n x (list[Tensor]): list of multi-level img features.\n\n img_metas (list[dict]): list of image info dict where each dict\n has: 'img_shape', 'scale_factor', 'flip', and may also contain\n 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.\n For details on the values of these keys see\n `mmdet/datasets/pipelines/formatting.py:Collect`.\n\n proposals (list[Tensors]): list of region proposals.\n\n gt_bboxes (list[Tensor]): each item are the truth boxes for each\n image in [tl_x, tl_y, br_x, br_y] format.\n\n gt_labels (list[Tensor]): class indices corresponding to each box\n\n gt_bboxes_ignore (None | list[Tensor]): specify which bounding\n boxes can be ignored when computing the loss.\n\n gt_masks (None | Tensor) : true segmentation masks for each box\n used if the architecture supports a segmentation task.\n\n Returns:\n dict[str, Tensor]: a dictionary of loss components\n \"\"\"\n # assign gts and sample proposals\n if self.with_bbox or self.with_mask:\n num_imgs = len(img_metas)\n if gt_bboxes_ignore is None:\n gt_bboxes_ignore = [None for _ in range(num_imgs)]\n sampling_results = []\n cur_iou = []\n for i in range(num_imgs):\n assign_result = self.bbox_assigner.assign(\n proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i],\n gt_labels[i])\n sampling_result = self.bbox_sampler.sample(\n assign_result,\n proposal_list[i],\n gt_bboxes[i],\n gt_labels[i],\n feats=[lvl_feat[i][None] for lvl_feat in x])\n # record the `iou_topk`-th largest IoU in an image\n iou_topk = min(self.train_cfg.dynamic_rcnn.iou_topk,\n len(assign_result.max_overlaps))\n ious, _ = torch.topk(assign_result.max_overlaps, iou_topk)\n cur_iou.append(ious[-1].item())\n sampling_results.append(sampling_result)\n # average the current IoUs over images\n cur_iou = np.mean(cur_iou)\n self.iou_history.append(cur_iou)\n\n losses = dict()\n # bbox head forward and loss\n if self.with_bbox:\n bbox_results = self._bbox_forward_train(x, sampling_results,\n gt_bboxes, gt_labels,\n img_metas)\n losses.update(bbox_results['loss_bbox'])\n\n # mask head forward and loss\n if self.with_mask:\n mask_results = 
self._mask_forward_train(x, sampling_results,\n bbox_results['bbox_feats'],\n gt_masks, img_metas)\n losses.update(mask_results['loss_mask'])\n\n # update IoU threshold and SmoothL1 beta\n update_iter_interval = self.train_cfg.dynamic_rcnn.update_iter_interval\n if len(self.iou_history) % update_iter_interval == 0:\n new_iou_thr, new_beta = self.update_hyperparameters()\n\n return losses\n\n def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels,\n img_metas):\n num_imgs = len(img_metas)\n rois = bbox2roi([res.bboxes for res in sampling_results])\n bbox_results = self._bbox_forward(x, rois)\n\n bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes,\n gt_labels, self.train_cfg)\n # record the `beta_topk`-th smallest target\n # `bbox_targets[2]` and `bbox_targets[3]` stand for bbox_targets\n # and bbox_weights, respectively\n pos_inds = bbox_targets[3][:, 0].nonzero().squeeze(1)\n num_pos = len(pos_inds)\n cur_target = bbox_targets[2][pos_inds, :2].abs().mean(dim=1)\n beta_topk = min(self.train_cfg.dynamic_rcnn.beta_topk * num_imgs,\n num_pos)\n cur_target = torch.kthvalue(cur_target, beta_topk)[0].item()\n self.beta_history.append(cur_target)\n loss_bbox = self.bbox_head.loss(bbox_results['cls_score'],\n bbox_results['bbox_pred'], rois,\n *bbox_targets)\n\n bbox_results.update(loss_bbox=loss_bbox)\n return bbox_results\n\n def update_hyperparameters(self):\n \"\"\"Update hyperparameters like IoU thresholds for assigner and beta for\n SmoothL1 loss based on the training statistics.\n\n Returns:\n tuple[float]: the updated ``iou_thr`` and ``beta``.\n \"\"\"\n new_iou_thr = max(self.train_cfg.dynamic_rcnn.initial_iou,\n np.mean(self.iou_history))\n self.iou_history = []\n self.bbox_assigner.pos_iou_thr = new_iou_thr\n self.bbox_assigner.neg_iou_thr = new_iou_thr\n self.bbox_assigner.min_pos_iou = new_iou_thr\n new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,\n np.median(self.beta_history))\n self.beta_history = []\n self.bbox_head.loss_bbox.beta = new_beta\n return new_iou_thr, new_beta\n", "path": "mmdet/models/roi_heads/dynamic_roi_head.py"}], "after_files": [{"content": "import numpy as np\nimport torch\n\nfrom mmdet.core import bbox2roi\nfrom mmdet.models.losses import SmoothL1Loss\nfrom ..builder import HEADS\nfrom .standard_roi_head import StandardRoIHead\n\nEPS = 1e-15\n\n\[email protected]_module()\nclass DynamicRoIHead(StandardRoIHead):\n \"\"\"RoI head for `Dynamic R-CNN <https://arxiv.org/abs/2004.06002>`_.\"\"\"\n\n def __init__(self, **kwargs):\n super(DynamicRoIHead, self).__init__(**kwargs)\n assert isinstance(self.bbox_head.loss_bbox, SmoothL1Loss)\n # the IoU history of the past `update_iter_interval` iterations\n self.iou_history = []\n # the beta history of the past `update_iter_interval` iterations\n self.beta_history = []\n\n def forward_train(self,\n x,\n img_metas,\n proposal_list,\n gt_bboxes,\n gt_labels,\n gt_bboxes_ignore=None,\n gt_masks=None):\n \"\"\"Forward function for training.\n\n Args:\n x (list[Tensor]): list of multi-level img features.\n\n img_metas (list[dict]): list of image info dict where each dict\n has: 'img_shape', 'scale_factor', 'flip', and may also contain\n 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.\n For details on the values of these keys see\n `mmdet/datasets/pipelines/formatting.py:Collect`.\n\n proposals (list[Tensors]): list of region proposals.\n\n gt_bboxes (list[Tensor]): each item are the truth boxes for each\n image in [tl_x, tl_y, br_x, br_y] format.\n\n gt_labels 
(list[Tensor]): class indices corresponding to each box\n\n gt_bboxes_ignore (None | list[Tensor]): specify which bounding\n boxes can be ignored when computing the loss.\n\n gt_masks (None | Tensor) : true segmentation masks for each box\n used if the architecture supports a segmentation task.\n\n Returns:\n dict[str, Tensor]: a dictionary of loss components\n \"\"\"\n # assign gts and sample proposals\n if self.with_bbox or self.with_mask:\n num_imgs = len(img_metas)\n if gt_bboxes_ignore is None:\n gt_bboxes_ignore = [None for _ in range(num_imgs)]\n sampling_results = []\n cur_iou = []\n for i in range(num_imgs):\n assign_result = self.bbox_assigner.assign(\n proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i],\n gt_labels[i])\n sampling_result = self.bbox_sampler.sample(\n assign_result,\n proposal_list[i],\n gt_bboxes[i],\n gt_labels[i],\n feats=[lvl_feat[i][None] for lvl_feat in x])\n # record the `iou_topk`-th largest IoU in an image\n iou_topk = min(self.train_cfg.dynamic_rcnn.iou_topk,\n len(assign_result.max_overlaps))\n ious, _ = torch.topk(assign_result.max_overlaps, iou_topk)\n cur_iou.append(ious[-1].item())\n sampling_results.append(sampling_result)\n # average the current IoUs over images\n cur_iou = np.mean(cur_iou)\n self.iou_history.append(cur_iou)\n\n losses = dict()\n # bbox head forward and loss\n if self.with_bbox:\n bbox_results = self._bbox_forward_train(x, sampling_results,\n gt_bboxes, gt_labels,\n img_metas)\n losses.update(bbox_results['loss_bbox'])\n\n # mask head forward and loss\n if self.with_mask:\n mask_results = self._mask_forward_train(x, sampling_results,\n bbox_results['bbox_feats'],\n gt_masks, img_metas)\n losses.update(mask_results['loss_mask'])\n\n # update IoU threshold and SmoothL1 beta\n update_iter_interval = self.train_cfg.dynamic_rcnn.update_iter_interval\n if len(self.iou_history) % update_iter_interval == 0:\n new_iou_thr, new_beta = self.update_hyperparameters()\n\n return losses\n\n def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels,\n img_metas):\n num_imgs = len(img_metas)\n rois = bbox2roi([res.bboxes for res in sampling_results])\n bbox_results = self._bbox_forward(x, rois)\n\n bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes,\n gt_labels, self.train_cfg)\n # record the `beta_topk`-th smallest target\n # `bbox_targets[2]` and `bbox_targets[3]` stand for bbox_targets\n # and bbox_weights, respectively\n pos_inds = bbox_targets[3][:, 0].nonzero().squeeze(1)\n num_pos = len(pos_inds)\n cur_target = bbox_targets[2][pos_inds, :2].abs().mean(dim=1)\n beta_topk = min(self.train_cfg.dynamic_rcnn.beta_topk * num_imgs,\n num_pos)\n cur_target = torch.kthvalue(cur_target, beta_topk)[0].item()\n self.beta_history.append(cur_target)\n loss_bbox = self.bbox_head.loss(bbox_results['cls_score'],\n bbox_results['bbox_pred'], rois,\n *bbox_targets)\n\n bbox_results.update(loss_bbox=loss_bbox)\n return bbox_results\n\n def update_hyperparameters(self):\n \"\"\"Update hyperparameters like IoU thresholds for assigner and beta for\n SmoothL1 loss based on the training statistics.\n\n Returns:\n tuple[float]: the updated ``iou_thr`` and ``beta``.\n \"\"\"\n new_iou_thr = max(self.train_cfg.dynamic_rcnn.initial_iou,\n np.mean(self.iou_history))\n self.iou_history = []\n self.bbox_assigner.pos_iou_thr = new_iou_thr\n self.bbox_assigner.neg_iou_thr = new_iou_thr\n self.bbox_assigner.min_pos_iou = new_iou_thr\n if (np.median(self.beta_history) < EPS):\n # avoid 0 or too small value for new_beta\n new_beta = 
self.bbox_head.loss_bbox.beta\n else:\n new_beta = min(self.train_cfg.dynamic_rcnn.initial_beta,\n np.median(self.beta_history))\n self.beta_history = []\n self.bbox_head.loss_bbox.beta = new_beta\n return new_iou_thr, new_beta\n", "path": "mmdet/models/roi_heads/dynamic_roi_head.py"}]}
| 2,377 | 298 |
gh_patches_debug_31349 | rasdani/github-patches | git_diff | modoboa__modoboa-1823 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dnstools dkim TXT record evaluation fix
Description of the issue/feature this PR addresses:
The DKIM button was red, but my TXT record setted well. So i looked about the issue, and i founded the evaluation of the TXT record is not good. According to DNS standard, a TXT record what is longer than 256 character need to split to several strings: https://help.directadmin.com/item.php?id=552
Current behavior before PR:
I just merge the splitted parts together.
Desired behavior after PR is merged:
The DKIM button is now green as it should be.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `modoboa/dnstools/lib.py`
Content:
```
1 """dnstools library."""
2
3 import ipaddress
4
5 from django.utils.translation import ugettext as _
6
7 from modoboa.admin import lib as admin_lib
8
9 from . import constants
10
11
12 def get_spf_record(domain):
13 """Return SPF record for domain (if any)."""
14 records = admin_lib.get_dns_records(domain, "TXT")
15 if records is None:
16 return None
17 for record in records:
18 value = str(record).strip('"')
19 if value.startswith("v=spf1"):
20 return value
21 return None
22
23
24 def get_dkim_record(domain, selector):
25 """Return DKIM records form domain (if any)."""
26 name = "{}._domainkey.{}".format(selector, domain)
27 records = admin_lib.get_dns_records(name, "TXT")
28 if records is None:
29 return None
30 for record in records:
31 value = str(record).strip('"')
32 if value.startswith("v=DKIM1"):
33 return value
34 return None
35
36
37 def get_dmarc_record(domain):
38 """Return DMARC record for domain (if any)."""
39 name = "_dmarc.{}".format(domain)
40 records = admin_lib.get_dns_records(name, "TXT")
41 if records is None:
42 return None
43 for record in records:
44 value = str(record).strip('"')
45 if value.startswith("v=DMARC1"):
46 return value
47 return None
48
49
50 def _get_simple_record(name):
51 """We just want to know if name is declared."""
52 for rdtype in ["A", "CNAME", "AAAA"]:
53 records = admin_lib.get_dns_records(name, rdtype)
54 if records is not None:
55 break
56 else:
57 return None
58 for record in records:
59 value = str(record).strip('"')
60 break
61 return value
62
63
64 def get_autoconfig_record(domain):
65 """Return autoconfig record for domain (if any)."""
66 return _get_simple_record("autoconfig.{}".format(domain))
67
68
69 def get_autodiscover_record(domain):
70 """Return autodiscover record for domain (if any)."""
71 return _get_simple_record("autodiscover.{}".format(domain))
72
73
74 class DNSSyntaxError(Exception):
75 """Custom exception for DNS errors."""
76
77 pass
78
79
80 def check_spf_ip4(value):
81 """Check syntax of ip4 mechanism."""
82 parts = value.split(":")
83 if len(parts) != 2:
84 raise DNSSyntaxError(_("Wrong ip4 mechanism syntax"))
85 try:
86 ipaddress.ip_network(parts[1], False)
87 except ValueError:
88 raise DNSSyntaxError(_("Wrong IPv4 address format"))
89
90
91 def check_spf_ip6(value):
92 """Check syntax of ip6 mechanism."""
93 if not value.startswith("ip6:"):
94 raise DNSSyntaxError(_("Wrong ip6 mechanism syntax"))
95 value = value.replace("ip6:", "")
96 try:
97 ipaddress.ip_network(value, False)
98 except ValueError:
99 raise DNSSyntaxError(_("Wrong IPv6 address format"))
100
101
102 def _check_domain_and_mask(value, mechanism):
103 """Check for valid domain / mask."""
104 domain = None
105 mask = None
106 if ":" in value:
107 mechanism, domain = value.split(":")
108 if "/" in domain:
109 domain, mask = domain.split("/")
110 elif "/" in value:
111 mechanism, mask = value.split("/")
112 else:
113 raise DNSSyntaxError(
114 _("Invalid syntax for {} mechanism").format(mechanism))
115 if mask and (not mask.isdigit() or int(mask) > 32):
116 raise DNSSyntaxError(_("Invalid mask found {}").format(mask))
117
118
119 def check_spf_a(value):
120 """Check syntax of a mechanism."""
121 if value == "a":
122 return
123 _check_domain_and_mask(value, "a")
124
125
126 def check_spf_mx(value):
127 """Check syntax of mx mechanism."""
128 if value == "mx":
129 return
130 _check_domain_and_mask(value, "mx")
131
132
133 def _check_simple(value, mechanism):
134 """Simple check."""
135 if value == mechanism:
136 return
137 parts = value.split(":")
138 if len(parts) != 2:
139 raise DNSSyntaxError(
140 _("Invalid syntax for {} mechanism").format(mechanism))
141
142
143 def check_spf_ptr(value):
144 """Check syntax of ptr mechanism."""
145 _check_simple(value, "ptr")
146
147
148 def check_spf_exists(value):
149 """Check syntax of ptr mechanism."""
150 _check_simple(value, "exists")
151
152
153 def check_spf_include(value):
154 """Check syntax of include mechanism."""
155 _check_simple(value, "include")
156
157
158 def check_spf_syntax(record):
159 """Check if record has a valid SPF syntax."""
160 if not record.startswith("v=spf1"):
161 raise DNSSyntaxError(_("Not an SPF record"))
162 parts = record.split(" ")[1:]
163 modifiers = []
164 mechanisms = []
165 for part in parts:
166 if part == "":
167 continue
168 qualifier = None
169 if part[0] in ["+", "-", "~", "?"]:
170 qualifier = part[0]
171 part = part[1:]
172 if part == "all":
173 continue
174 for mechanism in constants.SPF_MECHANISMS:
175 if part.startswith(mechanism):
176 globals()["check_spf_{}".format(mechanism)](part)
177 mechanisms.append(mechanism)
178 break
179 else:
180 # Look for modifier
181 modifier = part.split("=")
182 if len(modifier) != 2:
183 raise DNSSyntaxError(_("Unknown mechanism {}").format(part))
184 if modifier[0] not in ["redirect", "exp"]:
185 raise DNSSyntaxError(_("Unknown modifier {}").format(
186 modifier[0]))
187 if modifier[0] in modifiers:
188 raise DNSSyntaxError(_("Duplicate modifier {} found").format(
189 modifier[0]))
190 modifiers.append(modifier[0])
191 if not len(mechanisms) and not len(modifiers):
192 raise DNSSyntaxError(_("No mechanism found"))
193 return None
194
195
196 def check_dkim_syntax(record):
197 """Check if record has a valid DKIM syntax."""
198 if not record.startswith("v=DKIM1"):
199 raise DNSSyntaxError(_("Not a valid DKIM record"))
200 key = None
201 for tag in record.split(";")[1:]:
202 tag = tag.strip(" ")
203 if tag == "":
204 continue
205 parts = tag.split("=", 1)
206 if len(parts) != 2:
207 raise DNSSyntaxError(_("Invalid tag {}").format(tag))
208 name = parts[0].strip(" ")
209 if name == "p":
210 key = "".join(part.strip('"') for part in parts[1].split(" "))
211 if key is None:
212 raise DNSSyntaxError(_("No key found in record"))
213 return key
214
215
216 def check_dmarc_tag_string_value(tag, value):
217 """Check if value is valid for tag."""
218 tdef = constants.DMARC_TAGS[tag]
219 error = _("Wrong value {} for tag {}").format(value, tag)
220 if "values" in tdef and value not in tdef["values"]:
221 raise DNSSyntaxError(error)
222 elif "regex" in tdef and tdef["regex"].match(value) is None:
223 raise DNSSyntaxError(error)
224
225
226 def check_dmarc_tag(tag, value):
227 """Check if tag is valid."""
228 tdef = constants.DMARC_TAGS[tag]
229 ttype = tdef.get("type", "string")
230 if ttype == "list":
231 for svalue in value.split(","):
232 check_dmarc_tag_string_value(tag, svalue)
233 elif ttype == "int":
234 error = _("Wrong value {} for tag {}:").format(value, tag)
235 try:
236 value = int(value)
237 except ValueError:
238 raise DNSSyntaxError(error + _(" not an integer"))
239 if "min_value" in tdef and value < tdef["min_value"]:
240 raise DNSSyntaxError(
241 error + _(" less than {}").format(tdef["min_value"]))
242 if "max_value" in tdef and value > tdef["max_value"]:
243 raise DNSSyntaxError(
244 error + _(" greater than {}").format(tdef["max_value"]))
245 else:
246 check_dmarc_tag_string_value(tag, value)
247
248
249 def check_dmarc_syntax(record):
250 """Check if record has a valid DMARC syntax."""
251 if not record.startswith("v=DMARC1"):
252 raise DNSSyntaxError(_("Not a valid DMARC record"))
253 tags = {}
254 for tag in record.split(";")[1:]:
255 if tag == "":
256 continue
257 tag = tag.strip(" ")
258 parts = tag.split("=")
259 if len(parts) != 2:
260 raise DNSSyntaxError(_("Invalid tag {}").format(tag))
261 name = parts[0].strip(" ")
262 if name not in constants.DMARC_TAGS:
263 raise DNSSyntaxError(_("Unknown tag {}").format(name))
264 value = parts[1].strip(" ")
265 check_dmarc_tag(name, value)
266 tags[name] = value
267 if "p" not in tags:
268 raise DNSSyntaxError(_("Missing required p tag"))
269
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/modoboa/dnstools/lib.py b/modoboa/dnstools/lib.py
--- a/modoboa/dnstools/lib.py
+++ b/modoboa/dnstools/lib.py
@@ -9,42 +9,36 @@
from . import constants
-def get_spf_record(domain):
- """Return SPF record for domain (if any)."""
- records = admin_lib.get_dns_records(domain, "TXT")
+def _get_record_type_value(records, rr_type):
if records is None:
return None
for record in records:
- value = str(record).strip('"')
- if value.startswith("v=spf1"):
+ # Multiple strings are separated by a space as described in:
+ # https://tools.ietf.org/html/rfc4408#section-3.1.3
+ value = str(record).replace('" "', '').strip('"')
+ if value.startswith('v={}'.format(rr_type)):
return value
return None
+def get_spf_record(domain):
+ """Return SPF record for domain (if any)."""
+ records = admin_lib.get_dns_records(domain, "TXT")
+ return _get_record_type_value(records, 'spf1')
+
+
def get_dkim_record(domain, selector):
"""Return DKIM records form domain (if any)."""
name = "{}._domainkey.{}".format(selector, domain)
records = admin_lib.get_dns_records(name, "TXT")
- if records is None:
- return None
- for record in records:
- value = str(record).strip('"')
- if value.startswith("v=DKIM1"):
- return value
- return None
+ return _get_record_type_value(records, 'DKIM1')
def get_dmarc_record(domain):
"""Return DMARC record for domain (if any)."""
name = "_dmarc.{}".format(domain)
records = admin_lib.get_dns_records(name, "TXT")
- if records is None:
- return None
- for record in records:
- value = str(record).strip('"')
- if value.startswith("v=DMARC1"):
- return value
- return None
+ return _get_record_type_value(records, 'DMARC1')
def _get_simple_record(name):
|
{"golden_diff": "diff --git a/modoboa/dnstools/lib.py b/modoboa/dnstools/lib.py\n--- a/modoboa/dnstools/lib.py\n+++ b/modoboa/dnstools/lib.py\n@@ -9,42 +9,36 @@\n from . import constants\n \n \n-def get_spf_record(domain):\n- \"\"\"Return SPF record for domain (if any).\"\"\"\n- records = admin_lib.get_dns_records(domain, \"TXT\")\n+def _get_record_type_value(records, rr_type):\n if records is None:\n return None\n for record in records:\n- value = str(record).strip('\"')\n- if value.startswith(\"v=spf1\"):\n+ # Multiple strings are separated by a space as described in:\n+ # https://tools.ietf.org/html/rfc4408#section-3.1.3\n+ value = str(record).replace('\" \"', '').strip('\"')\n+ if value.startswith('v={}'.format(rr_type)):\n return value\n return None\n \n \n+def get_spf_record(domain):\n+ \"\"\"Return SPF record for domain (if any).\"\"\"\n+ records = admin_lib.get_dns_records(domain, \"TXT\")\n+ return _get_record_type_value(records, 'spf1')\n+\n+\n def get_dkim_record(domain, selector):\n \"\"\"Return DKIM records form domain (if any).\"\"\"\n name = \"{}._domainkey.{}\".format(selector, domain)\n records = admin_lib.get_dns_records(name, \"TXT\")\n- if records is None:\n- return None\n- for record in records:\n- value = str(record).strip('\"')\n- if value.startswith(\"v=DKIM1\"):\n- return value\n- return None\n+ return _get_record_type_value(records, 'DKIM1')\n \n \n def get_dmarc_record(domain):\n \"\"\"Return DMARC record for domain (if any).\"\"\"\n name = \"_dmarc.{}\".format(domain)\n records = admin_lib.get_dns_records(name, \"TXT\")\n- if records is None:\n- return None\n- for record in records:\n- value = str(record).strip('\"')\n- if value.startswith(\"v=DMARC1\"):\n- return value\n- return None\n+ return _get_record_type_value(records, 'DMARC1')\n \n \n def _get_simple_record(name):\n", "issue": "dnstools dkim TXT record evaluation fix\nDescription of the issue/feature this PR addresses:\r\nThe DKIM button was red, but my TXT record setted well. So i looked about the issue, and i founded the evaluation of the TXT record is not good. According to DNS standard, a TXT record what is longer than 256 character need to split to several strings: https://help.directadmin.com/item.php?id=552\r\n\r\nCurrent behavior before PR:\r\nI just merge the splitted parts together.\r\n\r\nDesired behavior after PR is merged:\r\nThe DKIM button is now green as it should be.\n", "before_files": [{"content": "\"\"\"dnstools library.\"\"\"\n\nimport ipaddress\n\nfrom django.utils.translation import ugettext as _\n\nfrom modoboa.admin import lib as admin_lib\n\nfrom . 
import constants\n\n\ndef get_spf_record(domain):\n \"\"\"Return SPF record for domain (if any).\"\"\"\n records = admin_lib.get_dns_records(domain, \"TXT\")\n if records is None:\n return None\n for record in records:\n value = str(record).strip('\"')\n if value.startswith(\"v=spf1\"):\n return value\n return None\n\n\ndef get_dkim_record(domain, selector):\n \"\"\"Return DKIM records form domain (if any).\"\"\"\n name = \"{}._domainkey.{}\".format(selector, domain)\n records = admin_lib.get_dns_records(name, \"TXT\")\n if records is None:\n return None\n for record in records:\n value = str(record).strip('\"')\n if value.startswith(\"v=DKIM1\"):\n return value\n return None\n\n\ndef get_dmarc_record(domain):\n \"\"\"Return DMARC record for domain (if any).\"\"\"\n name = \"_dmarc.{}\".format(domain)\n records = admin_lib.get_dns_records(name, \"TXT\")\n if records is None:\n return None\n for record in records:\n value = str(record).strip('\"')\n if value.startswith(\"v=DMARC1\"):\n return value\n return None\n\n\ndef _get_simple_record(name):\n \"\"\"We just want to know if name is declared.\"\"\"\n for rdtype in [\"A\", \"CNAME\", \"AAAA\"]:\n records = admin_lib.get_dns_records(name, rdtype)\n if records is not None:\n break\n else:\n return None\n for record in records:\n value = str(record).strip('\"')\n break\n return value\n\n\ndef get_autoconfig_record(domain):\n \"\"\"Return autoconfig record for domain (if any).\"\"\"\n return _get_simple_record(\"autoconfig.{}\".format(domain))\n\n\ndef get_autodiscover_record(domain):\n \"\"\"Return autodiscover record for domain (if any).\"\"\"\n return _get_simple_record(\"autodiscover.{}\".format(domain))\n\n\nclass DNSSyntaxError(Exception):\n \"\"\"Custom exception for DNS errors.\"\"\"\n\n pass\n\n\ndef check_spf_ip4(value):\n \"\"\"Check syntax of ip4 mechanism.\"\"\"\n parts = value.split(\":\")\n if len(parts) != 2:\n raise DNSSyntaxError(_(\"Wrong ip4 mechanism syntax\"))\n try:\n ipaddress.ip_network(parts[1], False)\n except ValueError:\n raise DNSSyntaxError(_(\"Wrong IPv4 address format\"))\n\n\ndef check_spf_ip6(value):\n \"\"\"Check syntax of ip6 mechanism.\"\"\"\n if not value.startswith(\"ip6:\"):\n raise DNSSyntaxError(_(\"Wrong ip6 mechanism syntax\"))\n value = value.replace(\"ip6:\", \"\")\n try:\n ipaddress.ip_network(value, False)\n except ValueError:\n raise DNSSyntaxError(_(\"Wrong IPv6 address format\"))\n\n\ndef _check_domain_and_mask(value, mechanism):\n \"\"\"Check for valid domain / mask.\"\"\"\n domain = None\n mask = None\n if \":\" in value:\n mechanism, domain = value.split(\":\")\n if \"/\" in domain:\n domain, mask = domain.split(\"/\")\n elif \"/\" in value:\n mechanism, mask = value.split(\"/\")\n else:\n raise DNSSyntaxError(\n _(\"Invalid syntax for {} mechanism\").format(mechanism))\n if mask and (not mask.isdigit() or int(mask) > 32):\n raise DNSSyntaxError(_(\"Invalid mask found {}\").format(mask))\n\n\ndef check_spf_a(value):\n \"\"\"Check syntax of a mechanism.\"\"\"\n if value == \"a\":\n return\n _check_domain_and_mask(value, \"a\")\n\n\ndef check_spf_mx(value):\n \"\"\"Check syntax of mx mechanism.\"\"\"\n if value == \"mx\":\n return\n _check_domain_and_mask(value, \"mx\")\n\n\ndef _check_simple(value, mechanism):\n \"\"\"Simple check.\"\"\"\n if value == mechanism:\n return\n parts = value.split(\":\")\n if len(parts) != 2:\n raise DNSSyntaxError(\n _(\"Invalid syntax for {} mechanism\").format(mechanism))\n\n\ndef check_spf_ptr(value):\n \"\"\"Check syntax of ptr mechanism.\"\"\"\n 
_check_simple(value, \"ptr\")\n\n\ndef check_spf_exists(value):\n \"\"\"Check syntax of ptr mechanism.\"\"\"\n _check_simple(value, \"exists\")\n\n\ndef check_spf_include(value):\n \"\"\"Check syntax of include mechanism.\"\"\"\n _check_simple(value, \"include\")\n\n\ndef check_spf_syntax(record):\n \"\"\"Check if record has a valid SPF syntax.\"\"\"\n if not record.startswith(\"v=spf1\"):\n raise DNSSyntaxError(_(\"Not an SPF record\"))\n parts = record.split(\" \")[1:]\n modifiers = []\n mechanisms = []\n for part in parts:\n if part == \"\":\n continue\n qualifier = None\n if part[0] in [\"+\", \"-\", \"~\", \"?\"]:\n qualifier = part[0]\n part = part[1:]\n if part == \"all\":\n continue\n for mechanism in constants.SPF_MECHANISMS:\n if part.startswith(mechanism):\n globals()[\"check_spf_{}\".format(mechanism)](part)\n mechanisms.append(mechanism)\n break\n else:\n # Look for modifier\n modifier = part.split(\"=\")\n if len(modifier) != 2:\n raise DNSSyntaxError(_(\"Unknown mechanism {}\").format(part))\n if modifier[0] not in [\"redirect\", \"exp\"]:\n raise DNSSyntaxError(_(\"Unknown modifier {}\").format(\n modifier[0]))\n if modifier[0] in modifiers:\n raise DNSSyntaxError(_(\"Duplicate modifier {} found\").format(\n modifier[0]))\n modifiers.append(modifier[0])\n if not len(mechanisms) and not len(modifiers):\n raise DNSSyntaxError(_(\"No mechanism found\"))\n return None\n\n\ndef check_dkim_syntax(record):\n \"\"\"Check if record has a valid DKIM syntax.\"\"\"\n if not record.startswith(\"v=DKIM1\"):\n raise DNSSyntaxError(_(\"Not a valid DKIM record\"))\n key = None\n for tag in record.split(\";\")[1:]:\n tag = tag.strip(\" \")\n if tag == \"\":\n continue\n parts = tag.split(\"=\", 1)\n if len(parts) != 2:\n raise DNSSyntaxError(_(\"Invalid tag {}\").format(tag))\n name = parts[0].strip(\" \")\n if name == \"p\":\n key = \"\".join(part.strip('\"') for part in parts[1].split(\" \"))\n if key is None:\n raise DNSSyntaxError(_(\"No key found in record\"))\n return key\n\n\ndef check_dmarc_tag_string_value(tag, value):\n \"\"\"Check if value is valid for tag.\"\"\"\n tdef = constants.DMARC_TAGS[tag]\n error = _(\"Wrong value {} for tag {}\").format(value, tag)\n if \"values\" in tdef and value not in tdef[\"values\"]:\n raise DNSSyntaxError(error)\n elif \"regex\" in tdef and tdef[\"regex\"].match(value) is None:\n raise DNSSyntaxError(error)\n\n\ndef check_dmarc_tag(tag, value):\n \"\"\"Check if tag is valid.\"\"\"\n tdef = constants.DMARC_TAGS[tag]\n ttype = tdef.get(\"type\", \"string\")\n if ttype == \"list\":\n for svalue in value.split(\",\"):\n check_dmarc_tag_string_value(tag, svalue)\n elif ttype == \"int\":\n error = _(\"Wrong value {} for tag {}:\").format(value, tag)\n try:\n value = int(value)\n except ValueError:\n raise DNSSyntaxError(error + _(\" not an integer\"))\n if \"min_value\" in tdef and value < tdef[\"min_value\"]:\n raise DNSSyntaxError(\n error + _(\" less than {}\").format(tdef[\"min_value\"]))\n if \"max_value\" in tdef and value > tdef[\"max_value\"]:\n raise DNSSyntaxError(\n error + _(\" greater than {}\").format(tdef[\"max_value\"]))\n else:\n check_dmarc_tag_string_value(tag, value)\n\n\ndef check_dmarc_syntax(record):\n \"\"\"Check if record has a valid DMARC syntax.\"\"\"\n if not record.startswith(\"v=DMARC1\"):\n raise DNSSyntaxError(_(\"Not a valid DMARC record\"))\n tags = {}\n for tag in record.split(\";\")[1:]:\n if tag == \"\":\n continue\n tag = tag.strip(\" \")\n parts = tag.split(\"=\")\n if len(parts) != 2:\n raise 
DNSSyntaxError(_(\"Invalid tag {}\").format(tag))\n name = parts[0].strip(\" \")\n if name not in constants.DMARC_TAGS:\n raise DNSSyntaxError(_(\"Unknown tag {}\").format(name))\n value = parts[1].strip(\" \")\n check_dmarc_tag(name, value)\n tags[name] = value\n if \"p\" not in tags:\n raise DNSSyntaxError(_(\"Missing required p tag\"))\n", "path": "modoboa/dnstools/lib.py"}], "after_files": [{"content": "\"\"\"dnstools library.\"\"\"\n\nimport ipaddress\n\nfrom django.utils.translation import ugettext as _\n\nfrom modoboa.admin import lib as admin_lib\n\nfrom . import constants\n\n\ndef _get_record_type_value(records, rr_type):\n if records is None:\n return None\n for record in records:\n # Multiple strings are separated by a space as described in:\n # https://tools.ietf.org/html/rfc4408#section-3.1.3\n value = str(record).replace('\" \"', '').strip('\"')\n if value.startswith('v={}'.format(rr_type)):\n return value\n return None\n\n\ndef get_spf_record(domain):\n \"\"\"Return SPF record for domain (if any).\"\"\"\n records = admin_lib.get_dns_records(domain, \"TXT\")\n return _get_record_type_value(records, 'spf1')\n\n\ndef get_dkim_record(domain, selector):\n \"\"\"Return DKIM records form domain (if any).\"\"\"\n name = \"{}._domainkey.{}\".format(selector, domain)\n records = admin_lib.get_dns_records(name, \"TXT\")\n return _get_record_type_value(records, 'DKIM1')\n\n\ndef get_dmarc_record(domain):\n \"\"\"Return DMARC record for domain (if any).\"\"\"\n name = \"_dmarc.{}\".format(domain)\n records = admin_lib.get_dns_records(name, \"TXT\")\n return _get_record_type_value(records, 'DMARC1')\n\n\ndef _get_simple_record(name):\n \"\"\"We just want to know if name is declared.\"\"\"\n for rdtype in [\"A\", \"CNAME\", \"AAAA\"]:\n records = admin_lib.get_dns_records(name, rdtype)\n if records is not None:\n break\n else:\n return None\n for record in records:\n value = str(record).strip('\"')\n break\n return value\n\n\ndef get_autoconfig_record(domain):\n \"\"\"Return autoconfig record for domain (if any).\"\"\"\n return _get_simple_record(\"autoconfig.{}\".format(domain))\n\n\ndef get_autodiscover_record(domain):\n \"\"\"Return autodiscover record for domain (if any).\"\"\"\n return _get_simple_record(\"autodiscover.{}\".format(domain))\n\n\nclass DNSSyntaxError(Exception):\n \"\"\"Custom exception for DNS errors.\"\"\"\n\n pass\n\n\ndef check_spf_ip4(value):\n \"\"\"Check syntax of ip4 mechanism.\"\"\"\n parts = value.split(\":\")\n if len(parts) != 2:\n raise DNSSyntaxError(_(\"Wrong ip4 mechanism syntax\"))\n try:\n ipaddress.ip_network(parts[1], False)\n except ValueError:\n raise DNSSyntaxError(_(\"Wrong IPv4 address format\"))\n\n\ndef check_spf_ip6(value):\n \"\"\"Check syntax of ip6 mechanism.\"\"\"\n if not value.startswith(\"ip6:\"):\n raise DNSSyntaxError(_(\"Wrong ip6 mechanism syntax\"))\n value = value.replace(\"ip6:\", \"\")\n try:\n ipaddress.ip_network(value, False)\n except ValueError:\n raise DNSSyntaxError(_(\"Wrong IPv6 address format\"))\n\n\ndef _check_domain_and_mask(value, mechanism):\n \"\"\"Check for valid domain / mask.\"\"\"\n domain = None\n mask = None\n if \":\" in value:\n mechanism, domain = value.split(\":\")\n if \"/\" in domain:\n domain, mask = domain.split(\"/\")\n elif \"/\" in value:\n mechanism, mask = value.split(\"/\")\n else:\n raise DNSSyntaxError(\n _(\"Invalid syntax for {} mechanism\").format(mechanism))\n if mask and (not mask.isdigit() or int(mask) > 32):\n raise DNSSyntaxError(_(\"Invalid mask found {}\").format(mask))\n\n\ndef 
check_spf_a(value):\n \"\"\"Check syntax of a mechanism.\"\"\"\n if value == \"a\":\n return\n _check_domain_and_mask(value, \"a\")\n\n\ndef check_spf_mx(value):\n \"\"\"Check syntax of mx mechanism.\"\"\"\n if value == \"mx\":\n return\n _check_domain_and_mask(value, \"mx\")\n\n\ndef _check_simple(value, mechanism):\n \"\"\"Simple check.\"\"\"\n if value == mechanism:\n return\n parts = value.split(\":\")\n if len(parts) != 2:\n raise DNSSyntaxError(\n _(\"Invalid syntax for {} mechanism\").format(mechanism))\n\n\ndef check_spf_ptr(value):\n \"\"\"Check syntax of ptr mechanism.\"\"\"\n _check_simple(value, \"ptr\")\n\n\ndef check_spf_exists(value):\n \"\"\"Check syntax of ptr mechanism.\"\"\"\n _check_simple(value, \"exists\")\n\n\ndef check_spf_include(value):\n \"\"\"Check syntax of include mechanism.\"\"\"\n _check_simple(value, \"include\")\n\n\ndef check_spf_syntax(record):\n \"\"\"Check if record has a valid SPF syntax.\"\"\"\n if not record.startswith(\"v=spf1\"):\n raise DNSSyntaxError(_(\"Not an SPF record\"))\n parts = record.split(\" \")[1:]\n modifiers = []\n mechanisms = []\n for part in parts:\n if part == \"\":\n continue\n qualifier = None\n if part[0] in [\"+\", \"-\", \"~\", \"?\"]:\n qualifier = part[0]\n part = part[1:]\n if part == \"all\":\n continue\n for mechanism in constants.SPF_MECHANISMS:\n if part.startswith(mechanism):\n globals()[\"check_spf_{}\".format(mechanism)](part)\n mechanisms.append(mechanism)\n break\n else:\n # Look for modifier\n modifier = part.split(\"=\")\n if len(modifier) != 2:\n raise DNSSyntaxError(_(\"Unknown mechanism {}\").format(part))\n if modifier[0] not in [\"redirect\", \"exp\"]:\n raise DNSSyntaxError(_(\"Unknown modifier {}\").format(\n modifier[0]))\n if modifier[0] in modifiers:\n raise DNSSyntaxError(_(\"Duplicate modifier {} found\").format(\n modifier[0]))\n modifiers.append(modifier[0])\n if not len(mechanisms) and not len(modifiers):\n raise DNSSyntaxError(_(\"No mechanism found\"))\n return None\n\n\ndef check_dkim_syntax(record):\n \"\"\"Check if record has a valid DKIM syntax.\"\"\"\n if not record.startswith(\"v=DKIM1\"):\n raise DNSSyntaxError(_(\"Not a valid DKIM record\"))\n key = None\n for tag in record.split(\";\")[1:]:\n tag = tag.strip(\" \")\n if tag == \"\":\n continue\n parts = tag.split(\"=\", 1)\n if len(parts) != 2:\n raise DNSSyntaxError(_(\"Invalid tag {}\").format(tag))\n name = parts[0].strip(\" \")\n if name == \"p\":\n key = \"\".join(part.strip('\"') for part in parts[1].split(\" \"))\n if key is None:\n raise DNSSyntaxError(_(\"No key found in record\"))\n return key\n\n\ndef check_dmarc_tag_string_value(tag, value):\n \"\"\"Check if value is valid for tag.\"\"\"\n tdef = constants.DMARC_TAGS[tag]\n error = _(\"Wrong value {} for tag {}\").format(value, tag)\n if \"values\" in tdef and value not in tdef[\"values\"]:\n raise DNSSyntaxError(error)\n elif \"regex\" in tdef and tdef[\"regex\"].match(value) is None:\n raise DNSSyntaxError(error)\n\n\ndef check_dmarc_tag(tag, value):\n \"\"\"Check if tag is valid.\"\"\"\n tdef = constants.DMARC_TAGS[tag]\n ttype = tdef.get(\"type\", \"string\")\n if ttype == \"list\":\n for svalue in value.split(\",\"):\n check_dmarc_tag_string_value(tag, svalue)\n elif ttype == \"int\":\n error = _(\"Wrong value {} for tag {}:\").format(value, tag)\n try:\n value = int(value)\n except ValueError:\n raise DNSSyntaxError(error + _(\" not an integer\"))\n if \"min_value\" in tdef and value < tdef[\"min_value\"]:\n raise DNSSyntaxError(\n error + _(\" less than 
{}\").format(tdef[\"min_value\"]))\n if \"max_value\" in tdef and value > tdef[\"max_value\"]:\n raise DNSSyntaxError(\n error + _(\" greater than {}\").format(tdef[\"max_value\"]))\n else:\n check_dmarc_tag_string_value(tag, value)\n\n\ndef check_dmarc_syntax(record):\n \"\"\"Check if record has a valid DMARC syntax.\"\"\"\n if not record.startswith(\"v=DMARC1\"):\n raise DNSSyntaxError(_(\"Not a valid DMARC record\"))\n tags = {}\n for tag in record.split(\";\")[1:]:\n if tag == \"\":\n continue\n tag = tag.strip(\" \")\n parts = tag.split(\"=\")\n if len(parts) != 2:\n raise DNSSyntaxError(_(\"Invalid tag {}\").format(tag))\n name = parts[0].strip(\" \")\n if name not in constants.DMARC_TAGS:\n raise DNSSyntaxError(_(\"Unknown tag {}\").format(name))\n value = parts[1].strip(\" \")\n check_dmarc_tag(name, value)\n tags[name] = value\n if \"p\" not in tags:\n raise DNSSyntaxError(_(\"Missing required p tag\"))\n", "path": "modoboa/dnstools/lib.py"}]}
| 3,066 | 508 |
gh_patches_debug_11592 | rasdani/github-patches | git_diff | opsdroid__opsdroid-184 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
API.AI Error when network is disconnected
The following error should be caught and warned about cleanly.
```
ERROR asyncio: Task exception was never retrieved
future: <Task finished coro=<parse_apiai() done, defined at /Users/jacob/Projects/opsdroid/opsdroid/opsdroid/parsers/apiai.py:34> exception=ClientConnectorError(8, 'Cannot connect to host api.api.ai:443 ssl:True [nodename nor servname provided, or not known]')>
Traceback (most recent call last):
File "/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py", line 375, in connect
proto = yield from self._create_connection(req)
File "/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py", line 632, in _create_connection
_, proto = yield from self._create_direct_connection(req)
File "/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py", line 643, in _create_direct_connection
hosts = yield from self._resolve_host(req.url.raw_host, req.port)
File "/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py", line 615, in _resolve_host
self._resolver.resolve(host, port, family=self._family)
File "/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/resolver.py", line 30, in resolve
host, port, type=socket.SOCK_STREAM, family=family)
File "/opt/boxen/homebrew/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/concurrent/futures/thread.py", line 55, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/boxen/homebrew/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/socket.py", line 743, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/jacob/Projects/opsdroid/opsdroid/opsdroid/parsers/apiai.py", line 42, in parse_apiai
result = await call_apiai(message, config)
File "/Users/jacob/Projects/opsdroid/opsdroid/opsdroid/parsers/apiai.py", line 27, in call_apiai
headers=headers)
File "/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/client.py", line 621, in __await__
resp = yield from self._coro
File "/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/client.py", line 225, in _request
conn = yield from self._connector.connect(req)
File "/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py", line 380, in connect
.format(key, exc.strerror)) from exc
aiohttp.client_exceptions.ClientConnectorError: [Errno 8] Cannot connect to host api.api.ai:443 ssl:True [nodename nor servname provided, or not known]
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opsdroid/parsers/apiai.py`
Content:
```
1 """A helper function for parsing and executing api.ai skills."""
2
3 import logging
4 import json
5
6 import aiohttp
7
8
9 _LOGGER = logging.getLogger(__name__)
10
11
12 async def call_apiai(message, config):
13 """Call the api.ai api and return the response."""
14 async with aiohttp.ClientSession() as session:
15 payload = {
16 "v": "20150910",
17 "lang": "en",
18 "sessionId": message.connector.name,
19 "query": message.text
20 }
21 headers = {
22 "Authorization": "Bearer " + config['access-token'],
23 "Content-Type": "application/json"
24 }
25 resp = await session.post("https://api.api.ai/v1/query",
26 data=json.dumps(payload),
27 headers=headers)
28 result = await resp.json()
29 _LOGGER.debug("api.ai response - " + json.dumps(result))
30
31 return result
32
33
34 async def parse_apiai(opsdroid, message, config):
35 """Parse a message against all apiai skills."""
36 # pylint: disable=broad-except
37 # We want to catch all exceptions coming from a skill module and not
38 # halt the application. If a skill throws an exception it just doesn't
39 # give a response to the user, so an error response should be given.
40 if 'access-token' in config:
41
42 result = await call_apiai(message, config)
43
44 if result["status"]["code"] >= 300:
45 _LOGGER.error("api.ai error - " +
46 str(result["status"]["code"]) + " " +
47 result["status"]["errorType"])
48 return
49
50 if "min-score" in config and \
51 result["result"]["score"] < config["min-score"]:
52 _LOGGER.debug("api.ai score lower than min-score")
53 return
54
55 if result:
56 for skill in opsdroid.skills:
57
58 if "apiai_action" in skill or "apiai_intent" in skill:
59 if ("action" in result["result"] and
60 skill["apiai_action"] in
61 result["result"]["action"]) \
62 or ("intentName" in result["result"] and
63 skill["apiai_intent"] in
64 result["result"]["intentName"]):
65 message.apiai = result
66 try:
67 await skill["skill"](opsdroid, skill["config"],
68 message)
69 except Exception:
70 await message.respond(
71 "Whoops there has been an error")
72 await message.respond(
73 "Check the log for details")
74 _LOGGER.exception("Exception when parsing '" +
75 message.text +
76 "' against skill '" +
77 result["result"]["action"] + "'")
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opsdroid/parsers/apiai.py b/opsdroid/parsers/apiai.py
--- a/opsdroid/parsers/apiai.py
+++ b/opsdroid/parsers/apiai.py
@@ -38,8 +38,11 @@
# halt the application. If a skill throws an exception it just doesn't
# give a response to the user, so an error response should be given.
if 'access-token' in config:
-
- result = await call_apiai(message, config)
+ try:
+ result = await call_apiai(message, config)
+ except aiohttp.ClientOSError:
+ _LOGGER.error("No response from api.ai, check your network.")
+ return
if result["status"]["code"] >= 300:
_LOGGER.error("api.ai error - " +
|
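A quick way to sanity-check the patch above is a small regression test along these lines. This is only a sketch: the mocked `message`, `opsdroid`, and `config` objects are hypothetical stand-ins, and the module path and function signatures follow the file shown in the prompt.

```python
# Sketch of a regression test for the patched parser (assumed module layout:
# opsdroid.parsers.apiai, as in the file above). The mocks are stand-ins only.
import asyncio
from unittest import mock

import aiohttp

from opsdroid.parsers import apiai


async def network_failure_is_handled():
    message = mock.Mock(text="hello")
    opsdroid = mock.Mock(skills=[])
    # Simulate the DNS failure from the traceback: call_apiai raises a
    # ClientOSError (ClientConnectorError is one of its subclasses).
    with mock.patch.object(apiai, "call_apiai",
                           side_effect=aiohttp.ClientOSError("no network")):
        # With the patch applied, parse_apiai logs the error and returns early
        # instead of letting the exception escape the event loop.
        result = await apiai.parse_apiai(opsdroid, message, {"access-token": "abc"})
    assert result is None


asyncio.run(network_failure_is_handled())
```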
{"golden_diff": "diff --git a/opsdroid/parsers/apiai.py b/opsdroid/parsers/apiai.py\n--- a/opsdroid/parsers/apiai.py\n+++ b/opsdroid/parsers/apiai.py\n@@ -38,8 +38,11 @@\n # halt the application. If a skill throws an exception it just doesn't\n # give a response to the user, so an error response should be given.\n if 'access-token' in config:\n-\n- result = await call_apiai(message, config)\n+ try:\n+ result = await call_apiai(message, config)\n+ except aiohttp.ClientOSError:\n+ _LOGGER.error(\"No response from api.ai, check your network.\")\n+ return\n \n if result[\"status\"][\"code\"] >= 300:\n _LOGGER.error(\"api.ai error - \" +\n", "issue": "API.AI Error when network is disconnected\nThe following error should be caught and warned about cleanly.\r\n\r\n```\r\nERROR asyncio: Task exception was never retrieved\r\nfuture: <Task finished coro=<parse_apiai() done, defined at /Users/jacob/Projects/opsdroid/opsdroid/opsdroid/parsers/apiai.py:34> exception=ClientConnectorError(8, 'Cannot connect to host api.api.ai:443 ssl:True [nodename nor servname provided, or not known]')>\r\nTraceback (most recent call last):\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py\", line 375, in connect\r\n proto = yield from self._create_connection(req)\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py\", line 632, in _create_connection\r\n _, proto = yield from self._create_direct_connection(req)\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py\", line 643, in _create_direct_connection\r\n hosts = yield from self._resolve_host(req.url.raw_host, req.port)\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py\", line 615, in _resolve_host\r\n self._resolver.resolve(host, port, family=self._family)\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/resolver.py\", line 30, in resolve\r\n host, port, type=socket.SOCK_STREAM, family=family)\r\n File \"/opt/boxen/homebrew/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/concurrent/futures/thread.py\", line 55, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"/opt/boxen/homebrew/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/socket.py\", line 743, in getaddrinfo\r\n for res in _socket.getaddrinfo(host, port, family, type, proto, flags):\r\nsocket.gaierror: [Errno 8] nodename nor servname provided, or not known\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/opsdroid/parsers/apiai.py\", line 42, in parse_apiai\r\n result = await call_apiai(message, config)\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/opsdroid/parsers/apiai.py\", line 27, in call_apiai\r\n headers=headers)\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/client.py\", line 621, in __await__\r\n resp = yield from self._coro\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/client.py\", line 225, in _request\r\n conn = yield from self._connector.connect(req)\r\n File \"/Users/jacob/Projects/opsdroid/opsdroid/venv/lib/python3.6/site-packages/aiohttp/connector.py\", line 380, in connect\r\n .format(key, exc.strerror)) from 
exc\r\naiohttp.client_exceptions.ClientConnectorError: [Errno 8] Cannot connect to host api.api.ai:443 ssl:True [nodename nor servname provided, or not known]\r\n```\n", "before_files": [{"content": "\"\"\"A helper function for parsing and executing api.ai skills.\"\"\"\n\nimport logging\nimport json\n\nimport aiohttp\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\nasync def call_apiai(message, config):\n \"\"\"Call the api.ai api and return the response.\"\"\"\n async with aiohttp.ClientSession() as session:\n payload = {\n \"v\": \"20150910\",\n \"lang\": \"en\",\n \"sessionId\": message.connector.name,\n \"query\": message.text\n }\n headers = {\n \"Authorization\": \"Bearer \" + config['access-token'],\n \"Content-Type\": \"application/json\"\n }\n resp = await session.post(\"https://api.api.ai/v1/query\",\n data=json.dumps(payload),\n headers=headers)\n result = await resp.json()\n _LOGGER.debug(\"api.ai response - \" + json.dumps(result))\n\n return result\n\n\nasync def parse_apiai(opsdroid, message, config):\n \"\"\"Parse a message against all apiai skills.\"\"\"\n # pylint: disable=broad-except\n # We want to catch all exceptions coming from a skill module and not\n # halt the application. If a skill throws an exception it just doesn't\n # give a response to the user, so an error response should be given.\n if 'access-token' in config:\n\n result = await call_apiai(message, config)\n\n if result[\"status\"][\"code\"] >= 300:\n _LOGGER.error(\"api.ai error - \" +\n str(result[\"status\"][\"code\"]) + \" \" +\n result[\"status\"][\"errorType\"])\n return\n\n if \"min-score\" in config and \\\n result[\"result\"][\"score\"] < config[\"min-score\"]:\n _LOGGER.debug(\"api.ai score lower than min-score\")\n return\n\n if result:\n for skill in opsdroid.skills:\n\n if \"apiai_action\" in skill or \"apiai_intent\" in skill:\n if (\"action\" in result[\"result\"] and\n skill[\"apiai_action\"] in\n result[\"result\"][\"action\"]) \\\n or (\"intentName\" in result[\"result\"] and\n skill[\"apiai_intent\"] in\n result[\"result\"][\"intentName\"]):\n message.apiai = result\n try:\n await skill[\"skill\"](opsdroid, skill[\"config\"],\n message)\n except Exception:\n await message.respond(\n \"Whoops there has been an error\")\n await message.respond(\n \"Check the log for details\")\n _LOGGER.exception(\"Exception when parsing '\" +\n message.text +\n \"' against skill '\" +\n result[\"result\"][\"action\"] + \"'\")\n", "path": "opsdroid/parsers/apiai.py"}], "after_files": [{"content": "\"\"\"A helper function for parsing and executing api.ai skills.\"\"\"\n\nimport logging\nimport json\n\nimport aiohttp\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\nasync def call_apiai(message, config):\n \"\"\"Call the api.ai api and return the response.\"\"\"\n async with aiohttp.ClientSession() as session:\n payload = {\n \"v\": \"20150910\",\n \"lang\": \"en\",\n \"sessionId\": message.connector.name,\n \"query\": message.text\n }\n headers = {\n \"Authorization\": \"Bearer \" + config['access-token'],\n \"Content-Type\": \"application/json\"\n }\n resp = await session.post(\"https://api.api.ai/v1/query\",\n data=json.dumps(payload),\n headers=headers)\n result = await resp.json()\n _LOGGER.debug(\"api.ai response - \" + json.dumps(result))\n\n return result\n\n\nasync def parse_apiai(opsdroid, message, config):\n \"\"\"Parse a message against all apiai skills.\"\"\"\n # pylint: disable=broad-except\n # We want to catch all exceptions coming from a skill module and not\n # halt the application. 
If a skill throws an exception it just doesn't\n # give a response to the user, so an error response should be given.\n if 'access-token' in config:\n try:\n result = await call_apiai(message, config)\n except aiohttp.ClientOSError:\n _LOGGER.error(\"No response from api.ai, check your network.\")\n return\n\n if result[\"status\"][\"code\"] >= 300:\n _LOGGER.error(\"api.ai error - \" +\n str(result[\"status\"][\"code\"]) + \" \" +\n result[\"status\"][\"errorType\"])\n return\n\n if \"min-score\" in config and \\\n result[\"result\"][\"score\"] < config[\"min-score\"]:\n _LOGGER.debug(\"api.ai score lower than min-score\")\n return\n\n if result:\n for skill in opsdroid.skills:\n\n if \"apiai_action\" in skill or \"apiai_intent\" in skill:\n if (\"action\" in result[\"result\"] and\n skill[\"apiai_action\"] in\n result[\"result\"][\"action\"]) \\\n or (\"intentName\" in result[\"result\"] and\n skill[\"apiai_intent\"] in\n result[\"result\"][\"intentName\"]):\n message.apiai = result\n try:\n await skill[\"skill\"](opsdroid, skill[\"config\"],\n message)\n except Exception:\n await message.respond(\n \"Whoops there has been an error\")\n await message.respond(\n \"Check the log for details\")\n _LOGGER.exception(\"Exception when parsing '\" +\n message.text +\n \"' against skill '\" +\n result[\"result\"][\"action\"] + \"'\")\n", "path": "opsdroid/parsers/apiai.py"}]}
| 1,862 | 193 |
gh_patches_debug_910
|
rasdani/github-patches
|
git_diff
|
d2l-ai__d2l-vi-115
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
test
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utils.py`
Content:
```
1 # encoding=utf8
2 import codecs
3 import filecmp
4 import re
5 import sys
6 import argparse
7
8 # reload(sys)
9 # sys.setdefaultencoding('utf8')
10
11 BEGIN_BLOCK_COMMENT = '<!--\n'
12 END_BLOCK_COMMENT = '-->\n\n'
13 TRANSLATE_INDICATOR = '*dịch đoạn phía trên*'
14 HEADER_INDICATOR = ' *dịch tiêu đề phía trên*\n'
15 IMAGE_CAPTION_INDICATOR = '*dịch chú thích ảnh phía trên*'
16 START_FILE = '<!-- ===================== Bắt đầu dịch Phần 1 ==================== -->\n'
17 END_FILE = '<!-- ===================== Kết thúc dịch Phần 1 ==================== -->\n'
18 SUFIX_PATH = 'contributors_template_vn.md'
19
20 # Our special mark in markdown, e.g. :label:`chapter_intro`
21 MARK_RE_MD = re.compile(':([-\/\\._\w\d]+):`([\*-\/\\\._\w\d]+)`')
22
23 parser = argparse.ArgumentParser(description='Dịch Dive into Deep Learning')
24 parser.add_argument('--convert', type=str, help='path to md file')
25
26
27 def is_blank_line(line):
28 return line.strip() == ''
29
30
31 class MyLine(object):
32 def __init__(self, line_str, in_code_block):
33 self.line_str = line_str.replace(' -- ', ' \-\- ')
34 self.in_code_block = in_code_block
35 self.end_comment_if_next_line_blank = None
36
37 def process(self, file_writer, last_line):
38 if self.in_code_block:
39 file_writer.write(self.line_str)
40 else:
41 self._process(file_writer, last_line)
42 return self
43
44 def _process(self, file_writer, last_line):
45 raise NotImplementedError
46
47
48 class NormalLine(MyLine):
49 def __init__(self, line_str, in_code_block):
50 super(NormalLine, self).__init__(line_str, in_code_block)
51 self.end_comment_if_next_line_blank = True
52
53 def _process(self, file_writer, last_line):
54 if isinstance(last_line, BlankLine):
55 file_writer.write(BEGIN_BLOCK_COMMENT)
56 file_writer.write(self.line_str)
57
58
59 class BlankLine(MyLine):
60 def __init__(self, line_str, in_code_block):
61 super(BlankLine, self).__init__(line_str, in_code_block)
62 self.end_comment_if_next_line_blank = False
63
64 def _process(self, file_writer, last_line):
65 if last_line.end_comment_if_next_line_blank:
66 file_writer.write(END_BLOCK_COMMENT)
67 file_writer.write(TRANSLATE_INDICATOR)
68 file_writer.write('\n')
69 file_writer.write('\n')
70
71
72 class HeaderLine(MyLine):
73 def __init__(self, line_str, in_code_block):
74 super(HeaderLine, self).__init__(line_str, in_code_block)
75 self.end_comment_if_next_line_blank = False
76 self.heading = 0
77 cnt = 0
78 for char in self.line_str:
79 if char == '#':
80 cnt += 1
81 elif char == ' ':
82 self.heading = cnt
83 break
84 else:
85 assert False, self.line_str
86
87 def _process(self, file_writer, last_line):
88 assert isinstance(last_line, BlankLine),\
89 last_line.line_str
90 file_writer.write(BEGIN_BLOCK_COMMENT)
91 file_writer.write(self.line_str)
92 file_writer.write(END_BLOCK_COMMENT)
93 file_writer.write('#'*self.heading + HEADER_INDICATOR)
94
95
96 class ImageLine(MyLine):
97 def __init(self, line_str, in_code_block):
98 assert not in_code_block
99 super(ImageLine, self).__init__(line_str, in_code_block)
100
101 def _process(self, file_writer, last_line):
102 close_square_bracket_id = self.line_str.index(']')
103 assert self.line_str[close_square_bracket_id+1] == '(', self.line_str
104 # assert self.line_str.endswith(')'), self.line_str
105 file_writer.write(BEGIN_BLOCK_COMMENT)
106 file_writer.write(self.line_str)
107 file_writer.write(END_BLOCK_COMMENT)
108 file_writer.write(
109 '![' + IMAGE_CAPTION_INDICATOR + ']' + self.line_str[close_square_bracket_id+1:]
110 )
111
112
113 class CodeMarkerLine(MyLine):
114 def __init__(self, line_str, in_code_block):
115 super(CodeMarkerLine, self).__init__(line_str, in_code_block)
116 self.end_comment_if_next_line_blank = False
117
118 def _process(self, file_writer, last_line):
119 """ the print is printed in the super class"""
120 file_writer.write(self.line_str)
121
122
123
124 class MathLine(MyLine):
125 def __init__(self, line_str, in_code_block):
126 super(MathLine, self).__init__(line_str, in_code_block)
127 self.end_comment_if_next_line_blank = False
128
129 def _process(self, file_writer, last_line):
130 file_writer.write(self.line_str)
131 return self
132
133
134 class LabelLine(MyLine):
135 def __init__(self, line_str, in_code_block):
136 super(LabelLine, self).__init__(line_str, in_code_block)
137 self.end_comment_if_next_line_blank = False
138
139 def _process(self, file_writer, last_line):
140 # assert isinstance(last_line, HeaderLine) or isinstance(last_line, ImageLine), 'last line: {}\nthis_line: {}'.format(
141 # last_line.line_str, self.line_str
142 # )
143 file_writer.write(self.line_str)
144 # file_writer.write('\n')
145 return self
146
147
148 def block_comment(input_md, output_md, add_prefix_suffix=False):
149 last_line = BlankLine('', False)
150 in_code_block = False
151 with codecs.open(input_md, 'r', encoding='utf-8') as input_handle,\
152 codecs.open(output_md, 'w', encoding='utf-8') as output_handle,\
153 codecs.open(SUFIX_PATH, 'r', encoding='utf-8') as surfix_handle:
154 if add_prefix_suffix:
155 output_handle.write(START_FILE)
156 output_handle.write('\n')
157 for line_str in input_handle:
158 line_str = line_str.rstrip() + '\n'
159 line_str = line_str.replace(' -- ', ' \-\- ')
160 match = MARK_RE_MD.match(line_str)
161 if is_blank_line(line_str):
162 line_type = BlankLine
163 elif line_str.startswith('#'):
164 line_type = HeaderLine
165 elif line_str.startswith('!['):
166 line_type = ImageLine
167 elif line_str.startswith('$'):
168 line_type = MathLine
169 elif line_str.startswith('```'):
170 in_code_block = not in_code_block
171 line_type = CodeMarkerLine
172 elif match is not None and match[1] in ['label', 'eqlabel']:
173 line_type = LabelLine
174 else:
175 line_type = NormalLine
176
177 this_line = line_type(line_str, in_code_block)
178 last_line = this_line.process(output_handle, last_line)
179
180 assert in_code_block is False
181
182 # TODO: simplify 5 lines below
183 if isinstance(last_line, BlankLine) or isinstance(last_line, LabelLine)\
184 or isinstance(last_line, CodeMarkerLine) or isinstance(last_line, ImageLine):
185 print('skip')
186 else:
187 output_handle.write(END_BLOCK_COMMENT)
188 output_handle.write(TRANSLATE_INDICATOR)
189 if add_prefix_suffix:
190 output_handle.write('\n')
191 output_handle.write(END_FILE)
192 output_handle.write('\n')
193 for line in surfix_handle:
194 output_handle.write(line)
195
196
197 if __name__ == '__main__':
198 args = parser.parse_args()
199 input_md = args.convert
200 output_md = input_md[:-len('.md')] + '_vn.md'
201 block_comment(input_md, output_md, add_prefix_suffix=True)
202
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/utils.py b/utils.py
--- a/utils.py
+++ b/utils.py
@@ -5,9 +5,6 @@
import sys
import argparse
-# reload(sys)
-# sys.setdefaultencoding('utf8')
-
BEGIN_BLOCK_COMMENT = '<!--\n'
END_BLOCK_COMMENT = '-->\n\n'
TRANSLATE_INDICATOR = '*dịch đoạn phía trên*'
|
{"golden_diff": "diff --git a/utils.py b/utils.py\n--- a/utils.py\n+++ b/utils.py\n@@ -5,9 +5,6 @@\n import sys\n import argparse\n \n-# reload(sys)\n-# sys.setdefaultencoding('utf8')\n-\n BEGIN_BLOCK_COMMENT = '<!--\\n'\n END_BLOCK_COMMENT = '-->\\n\\n'\n TRANSLATE_INDICATOR = '*d\u1ecbch \u0111o\u1ea1n ph\u00eda tr\u00ean*'\n", "issue": "test\n\n", "before_files": [{"content": "# encoding=utf8\nimport codecs\nimport filecmp\nimport re\nimport sys\nimport argparse\n\n# reload(sys)\n# sys.setdefaultencoding('utf8')\n\nBEGIN_BLOCK_COMMENT = '<!--\\n'\nEND_BLOCK_COMMENT = '-->\\n\\n'\nTRANSLATE_INDICATOR = '*d\u1ecbch \u0111o\u1ea1n ph\u00eda tr\u00ean*'\nHEADER_INDICATOR = ' *d\u1ecbch ti\u00eau \u0111\u1ec1 ph\u00eda tr\u00ean*\\n'\nIMAGE_CAPTION_INDICATOR = '*d\u1ecbch ch\u00fa th\u00edch \u1ea3nh ph\u00eda tr\u00ean*'\nSTART_FILE = '<!-- ===================== B\u1eaft \u0111\u1ea7u d\u1ecbch Ph\u1ea7n 1 ==================== -->\\n'\nEND_FILE = '<!-- ===================== K\u1ebft th\u00fac d\u1ecbch Ph\u1ea7n 1 ==================== -->\\n'\nSUFIX_PATH = 'contributors_template_vn.md'\n\n# Our special mark in markdown, e.g. :label:`chapter_intro`\nMARK_RE_MD = re.compile(':([-\\/\\\\._\\w\\d]+):`([\\*-\\/\\\\\\._\\w\\d]+)`')\n\nparser = argparse.ArgumentParser(description='D\u1ecbch Dive into Deep Learning')\nparser.add_argument('--convert', type=str, help='path to md file')\n\n\ndef is_blank_line(line):\n return line.strip() == ''\n\n\nclass MyLine(object):\n def __init__(self, line_str, in_code_block):\n self.line_str = line_str.replace(' -- ', ' \\-\\- ')\n self.in_code_block = in_code_block\n self.end_comment_if_next_line_blank = None\n\n def process(self, file_writer, last_line):\n if self.in_code_block:\n file_writer.write(self.line_str)\n else:\n self._process(file_writer, last_line)\n return self\n\n def _process(self, file_writer, last_line):\n raise NotImplementedError\n\n\nclass NormalLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(NormalLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = True\n\n def _process(self, file_writer, last_line):\n if isinstance(last_line, BlankLine):\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n\n\nclass BlankLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(BlankLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n if last_line.end_comment_if_next_line_blank:\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write(TRANSLATE_INDICATOR)\n file_writer.write('\\n')\n file_writer.write('\\n')\n\n\nclass HeaderLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(HeaderLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n self.heading = 0\n cnt = 0\n for char in self.line_str:\n if char == '#':\n cnt += 1\n elif char == ' ':\n self.heading = cnt\n break\n else:\n assert False, self.line_str\n\n def _process(self, file_writer, last_line):\n assert isinstance(last_line, BlankLine),\\\n last_line.line_str\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write('#'*self.heading + HEADER_INDICATOR)\n\n\nclass ImageLine(MyLine):\n def __init(self, line_str, in_code_block):\n assert not in_code_block\n super(ImageLine, self).__init__(line_str, in_code_block)\n\n def _process(self, file_writer, last_line):\n close_square_bracket_id = 
self.line_str.index(']')\n assert self.line_str[close_square_bracket_id+1] == '(', self.line_str\n # assert self.line_str.endswith(')'), self.line_str\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write(\n '![' + IMAGE_CAPTION_INDICATOR + ']' + self.line_str[close_square_bracket_id+1:]\n )\n\n\nclass CodeMarkerLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(CodeMarkerLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n \"\"\" the print is printed in the super class\"\"\"\n file_writer.write(self.line_str)\n\n\n\nclass MathLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(MathLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n file_writer.write(self.line_str)\n return self\n\n\nclass LabelLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(LabelLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n # assert isinstance(last_line, HeaderLine) or isinstance(last_line, ImageLine), 'last line: {}\\nthis_line: {}'.format(\n # last_line.line_str, self.line_str\n # )\n file_writer.write(self.line_str)\n # file_writer.write('\\n')\n return self\n\n\ndef block_comment(input_md, output_md, add_prefix_suffix=False):\n last_line = BlankLine('', False)\n in_code_block = False\n with codecs.open(input_md, 'r', encoding='utf-8') as input_handle,\\\n codecs.open(output_md, 'w', encoding='utf-8') as output_handle,\\\n codecs.open(SUFIX_PATH, 'r', encoding='utf-8') as surfix_handle:\n if add_prefix_suffix:\n output_handle.write(START_FILE)\n output_handle.write('\\n')\n for line_str in input_handle:\n line_str = line_str.rstrip() + '\\n'\n line_str = line_str.replace(' -- ', ' \\-\\- ')\n match = MARK_RE_MD.match(line_str)\n if is_blank_line(line_str):\n line_type = BlankLine\n elif line_str.startswith('#'):\n line_type = HeaderLine\n elif line_str.startswith('!['):\n line_type = ImageLine\n elif line_str.startswith('$'):\n line_type = MathLine\n elif line_str.startswith('```'):\n in_code_block = not in_code_block\n line_type = CodeMarkerLine\n elif match is not None and match[1] in ['label', 'eqlabel']:\n line_type = LabelLine\n else:\n line_type = NormalLine\n\n this_line = line_type(line_str, in_code_block)\n last_line = this_line.process(output_handle, last_line)\n\n assert in_code_block is False\n\n # TODO: simplify 5 lines below\n if isinstance(last_line, BlankLine) or isinstance(last_line, LabelLine)\\\n or isinstance(last_line, CodeMarkerLine) or isinstance(last_line, ImageLine):\n print('skip')\n else:\n output_handle.write(END_BLOCK_COMMENT)\n output_handle.write(TRANSLATE_INDICATOR)\n if add_prefix_suffix:\n output_handle.write('\\n')\n output_handle.write(END_FILE)\n output_handle.write('\\n')\n for line in surfix_handle:\n output_handle.write(line)\n\n\nif __name__ == '__main__':\n args = parser.parse_args()\n input_md = args.convert\n output_md = input_md[:-len('.md')] + '_vn.md'\n block_comment(input_md, output_md, add_prefix_suffix=True)\n", "path": "utils.py"}], "after_files": [{"content": "# encoding=utf8\nimport codecs\nimport filecmp\nimport re\nimport sys\nimport argparse\n\nBEGIN_BLOCK_COMMENT = '<!--\\n'\nEND_BLOCK_COMMENT = '-->\\n\\n'\nTRANSLATE_INDICATOR = '*d\u1ecbch \u0111o\u1ea1n 
ph\u00eda tr\u00ean*'\nHEADER_INDICATOR = ' *d\u1ecbch ti\u00eau \u0111\u1ec1 ph\u00eda tr\u00ean*\\n'\nIMAGE_CAPTION_INDICATOR = '*d\u1ecbch ch\u00fa th\u00edch \u1ea3nh ph\u00eda tr\u00ean*'\nSTART_FILE = '<!-- ===================== B\u1eaft \u0111\u1ea7u d\u1ecbch Ph\u1ea7n 1 ==================== -->\\n'\nEND_FILE = '<!-- ===================== K\u1ebft th\u00fac d\u1ecbch Ph\u1ea7n 1 ==================== -->\\n'\nSUFIX_PATH = 'contributors_template_vn.md'\n\n# Our special mark in markdown, e.g. :label:`chapter_intro`\nMARK_RE_MD = re.compile(':([-\\/\\\\._\\w\\d]+):`([\\*-\\/\\\\\\._\\w\\d]+)`')\n\nparser = argparse.ArgumentParser(description='D\u1ecbch Dive into Deep Learning')\nparser.add_argument('--convert', type=str, help='path to md file')\n\n\ndef is_blank_line(line):\n return line.strip() == ''\n\n\nclass MyLine(object):\n def __init__(self, line_str, in_code_block):\n self.line_str = line_str.replace(' -- ', ' \\-\\- ')\n self.in_code_block = in_code_block\n self.end_comment_if_next_line_blank = None\n\n def process(self, file_writer, last_line):\n if self.in_code_block:\n file_writer.write(self.line_str)\n else:\n self._process(file_writer, last_line)\n return self\n\n def _process(self, file_writer, last_line):\n raise NotImplementedError\n\n\nclass NormalLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(NormalLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = True\n\n def _process(self, file_writer, last_line):\n if isinstance(last_line, BlankLine):\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n\n\nclass BlankLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(BlankLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n if last_line.end_comment_if_next_line_blank:\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write(TRANSLATE_INDICATOR)\n file_writer.write('\\n')\n file_writer.write('\\n')\n\n\nclass HeaderLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(HeaderLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n self.heading = 0\n cnt = 0\n for char in self.line_str:\n if char == '#':\n cnt += 1\n elif char == ' ':\n self.heading = cnt\n break\n else:\n assert False, self.line_str\n\n def _process(self, file_writer, last_line):\n assert isinstance(last_line, BlankLine),\\\n last_line.line_str\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write('#'*self.heading + HEADER_INDICATOR)\n\n\nclass ImageLine(MyLine):\n def __init(self, line_str, in_code_block):\n assert not in_code_block\n super(ImageLine, self).__init__(line_str, in_code_block)\n\n def _process(self, file_writer, last_line):\n close_square_bracket_id = self.line_str.index(']')\n assert self.line_str[close_square_bracket_id+1] == '(', self.line_str\n # assert self.line_str.endswith(')'), self.line_str\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write(\n '![' + IMAGE_CAPTION_INDICATOR + ']' + self.line_str[close_square_bracket_id+1:]\n )\n\n\nclass CodeMarkerLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(CodeMarkerLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n \"\"\" the print is printed 
in the super class\"\"\"\n file_writer.write(self.line_str)\n\n\n\nclass MathLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(MathLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n file_writer.write(self.line_str)\n return self\n\n\nclass LabelLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(LabelLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n # assert isinstance(last_line, HeaderLine) or isinstance(last_line, ImageLine), 'last line: {}\\nthis_line: {}'.format(\n # last_line.line_str, self.line_str\n # )\n file_writer.write(self.line_str)\n # file_writer.write('\\n')\n return self\n\n\ndef block_comment(input_md, output_md, add_prefix_suffix=False):\n last_line = BlankLine('', False)\n in_code_block = False\n with codecs.open(input_md, 'r', encoding='utf-8') as input_handle,\\\n codecs.open(output_md, 'w', encoding='utf-8') as output_handle,\\\n codecs.open(SUFIX_PATH, 'r', encoding='utf-8') as surfix_handle:\n if add_prefix_suffix:\n output_handle.write(START_FILE)\n output_handle.write('\\n')\n for line_str in input_handle:\n line_str = line_str.rstrip() + '\\n'\n line_str = line_str.replace(' -- ', ' \\-\\- ')\n match = MARK_RE_MD.match(line_str)\n if is_blank_line(line_str):\n line_type = BlankLine\n elif line_str.startswith('#'):\n line_type = HeaderLine\n elif line_str.startswith('!['):\n line_type = ImageLine\n elif line_str.startswith('$'):\n line_type = MathLine\n elif line_str.startswith('```'):\n in_code_block = not in_code_block\n line_type = CodeMarkerLine\n elif match is not None and match[1] in ['label', 'eqlabel']:\n line_type = LabelLine\n else:\n line_type = NormalLine\n\n this_line = line_type(line_str, in_code_block)\n last_line = this_line.process(output_handle, last_line)\n\n assert in_code_block is False\n\n # TODO: simplify 5 lines below\n if isinstance(last_line, BlankLine) or isinstance(last_line, LabelLine)\\\n or isinstance(last_line, CodeMarkerLine) or isinstance(last_line, ImageLine):\n print('skip')\n else:\n output_handle.write(END_BLOCK_COMMENT)\n output_handle.write(TRANSLATE_INDICATOR)\n if add_prefix_suffix:\n output_handle.write('\\n')\n output_handle.write(END_FILE)\n output_handle.write('\\n')\n for line in surfix_handle:\n output_handle.write(line)\n\n\nif __name__ == '__main__':\n args = parser.parse_args()\n input_md = args.convert\n output_md = input_md[:-len('.md')] + '_vn.md'\n block_comment(input_md, output_md, add_prefix_suffix=True)\n", "path": "utils.py"}]}
| 2,425 | 84 |
gh_patches_debug_9216
|
rasdani/github-patches
|
git_diff
|
interlegis__sapl-2102
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Public Hearings (Audiências Públicas) cannot be edited
When creating a Public Hearing and saving it, the metadata of the legislative matter (matéria legislativa) entered in the form is not shown.
When clicking Edit, only the title of the created hearing appears.
Thanks
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sapl/audiencia/forms.py`
Content:
```
1 from django import forms
2 from django.core.exceptions import ObjectDoesNotExist, ValidationError
3 from django.db import transaction
4 from django.utils.translation import ugettext_lazy as _
5 from sapl.audiencia.models import AudienciaPublica, TipoAudienciaPublica
6 from sapl.materia.models import MateriaLegislativa, TipoMateriaLegislativa
7 from sapl.utils import timezone
8
9 class AudienciaForm(forms.ModelForm):
10
11 data_atual = timezone.now()
12
13 tipo = forms.ModelChoiceField(required=True,
14 label='Tipo de Audiência Pública',
15 queryset=TipoAudienciaPublica.objects.all().order_by('nome'))
16
17 tipo_materia = forms.ModelChoiceField(
18 label=_('Tipo Matéria'),
19 required=True,
20 queryset=TipoMateriaLegislativa.objects.all(),
21 empty_label='Selecione',
22 )
23
24 numero_materia = forms.CharField(
25 label='Número Matéria', required=True)
26
27 ano_materia = forms.CharField(
28 label='Ano Matéria',
29 initial=int(data_atual.year),
30 required=True)
31
32 class Meta:
33 model = AudienciaPublica
34 fields = ['tipo', 'numero', 'nome',
35 'tema', 'data', 'hora_inicio', 'hora_fim',
36 'observacao', 'audiencia_cancelada', 'url_audio',
37 'url_video', 'upload_pauta', 'upload_ata',
38 'upload_anexo', 'tipo_materia', 'numero_materia',
39 'ano_materia']
40
41
42 def __init__(self, **kwargs):
43 super(AudienciaForm, self).__init__(**kwargs)
44
45 tipos = []
46
47 if not self.fields['tipo'].queryset:
48 tipos.append(TipoAudienciaPublica.objects.create(nome='Audiência Pública', tipo='A'))
49 tipos.append(TipoAudienciaPublica.objects.create(nome='Plebiscito', tipo='P'))
50 tipos.append(TipoAudienciaPublica.objects.create(nome='Referendo', tipo='R'))
51 tipos.append(TipoAudienciaPublica.objects.create(nome='Iniciativa Popular', tipo='I'))
52
53 for t in tipos:
54 t.save()
55
56
57 def clean(self):
58 cleaned_data = super(AudienciaForm, self).clean()
59 if not self.is_valid():
60 return cleaned_data
61
62 try:
63 materia = MateriaLegislativa.objects.get(
64 numero=self.cleaned_data['numero_materia'],
65 ano=self.cleaned_data['ano_materia'],
66 tipo=self.cleaned_data['tipo_materia'])
67 except ObjectDoesNotExist:
68 msg = _('A matéria a ser inclusa não existe no cadastro'
69 ' de matérias legislativas.')
70 raise ValidationError(msg)
71 else:
72 cleaned_data['materia'] = materia
73
74 if self.cleaned_data['hora_inicio'] and self.cleaned_data['hora_fim']:
75 if (self.cleaned_data['hora_fim'] <
76 self.cleaned_data['hora_inicio']):
77 msg = _('A hora de fim não pode ser anterior a hora de ínicio')
78 raise ValidationError(msg)
79
80 return self.cleaned_data
81
82 @transaction.atomic()
83 def save(self, commit=True):
84 audiencia = super(AudienciaForm, self).save(commit)
85 return audiencia
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sapl/audiencia/forms.py b/sapl/audiencia/forms.py
--- a/sapl/audiencia/forms.py
+++ b/sapl/audiencia/forms.py
@@ -77,9 +77,11 @@
msg = _('A hora de fim não pode ser anterior a hora de ínicio')
raise ValidationError(msg)
- return self.cleaned_data
+ return cleaned_data
@transaction.atomic()
def save(self, commit=True):
- audiencia = super(AudienciaForm, self).save(commit)
+ audiencia = super(AudienciaForm, self).save(False)
+ audiencia.materia = self.cleaned_data['materia']
+ audiencia.save()
return audiencia
\ No newline at end of file
|
{"golden_diff": "diff --git a/sapl/audiencia/forms.py b/sapl/audiencia/forms.py\n--- a/sapl/audiencia/forms.py\n+++ b/sapl/audiencia/forms.py\n@@ -77,9 +77,11 @@\n msg = _('A hora de fim n\u00e3o pode ser anterior a hora de \u00ednicio')\n raise ValidationError(msg)\n \n- return self.cleaned_data\n+ return cleaned_data\n \n @transaction.atomic()\n def save(self, commit=True):\n- audiencia = super(AudienciaForm, self).save(commit)\n+ audiencia = super(AudienciaForm, self).save(False)\n+ audiencia.materia = self.cleaned_data['materia']\n+ audiencia.save()\n return audiencia\n\\ No newline at end of file\n", "issue": "Audi\u00eancias P\u00fablicas sem possibilidade de Edi\u00e7\u00e3o\nAo criar uma Audi\u00eancia P\u00fablica e salva-la, n\u00e3o aparecem os metadados da mat\u00e9ria legislativa inseridas no preenchimento. \r\nAo clicar em Editar, s\u00f3 aparece o t\u00edtulo da audi\u00eancia criada.\r\ngrato\n", "before_files": [{"content": "from django import forms\nfrom django.core.exceptions import ObjectDoesNotExist, ValidationError\nfrom django.db import transaction\nfrom django.utils.translation import ugettext_lazy as _\nfrom sapl.audiencia.models import AudienciaPublica, TipoAudienciaPublica\nfrom sapl.materia.models import MateriaLegislativa, TipoMateriaLegislativa\nfrom sapl.utils import timezone\n\nclass AudienciaForm(forms.ModelForm):\n\n data_atual = timezone.now()\n\n tipo = forms.ModelChoiceField(required=True,\n label='Tipo de Audi\u00eancia P\u00fablica',\n queryset=TipoAudienciaPublica.objects.all().order_by('nome'))\n\n tipo_materia = forms.ModelChoiceField(\n label=_('Tipo Mat\u00e9ria'),\n required=True,\n queryset=TipoMateriaLegislativa.objects.all(),\n empty_label='Selecione',\n )\n\n numero_materia = forms.CharField(\n label='N\u00famero Mat\u00e9ria', required=True)\n\n ano_materia = forms.CharField(\n label='Ano Mat\u00e9ria',\n initial=int(data_atual.year),\n required=True)\n\n class Meta:\n model = AudienciaPublica\n fields = ['tipo', 'numero', 'nome',\n 'tema', 'data', 'hora_inicio', 'hora_fim',\n 'observacao', 'audiencia_cancelada', 'url_audio',\n 'url_video', 'upload_pauta', 'upload_ata',\n 'upload_anexo', 'tipo_materia', 'numero_materia',\n 'ano_materia']\n\n\n def __init__(self, **kwargs):\n super(AudienciaForm, self).__init__(**kwargs)\n\n tipos = []\n\n if not self.fields['tipo'].queryset:\n tipos.append(TipoAudienciaPublica.objects.create(nome='Audi\u00eancia P\u00fablica', tipo='A'))\n tipos.append(TipoAudienciaPublica.objects.create(nome='Plebiscito', tipo='P'))\n tipos.append(TipoAudienciaPublica.objects.create(nome='Referendo', tipo='R'))\n tipos.append(TipoAudienciaPublica.objects.create(nome='Iniciativa Popular', tipo='I'))\n\n for t in tipos:\n t.save()\n\n\n def clean(self):\n cleaned_data = super(AudienciaForm, self).clean()\n if not self.is_valid():\n return cleaned_data\n\n try:\n materia = MateriaLegislativa.objects.get(\n numero=self.cleaned_data['numero_materia'],\n ano=self.cleaned_data['ano_materia'],\n tipo=self.cleaned_data['tipo_materia'])\n except ObjectDoesNotExist:\n msg = _('A mat\u00e9ria a ser inclusa n\u00e3o existe no cadastro'\n ' de mat\u00e9rias legislativas.')\n raise ValidationError(msg)\n else:\n cleaned_data['materia'] = materia\n\n if self.cleaned_data['hora_inicio'] and self.cleaned_data['hora_fim']:\n if (self.cleaned_data['hora_fim'] <\n self.cleaned_data['hora_inicio']):\n msg = _('A hora de fim n\u00e3o pode ser anterior a hora de \u00ednicio')\n raise ValidationError(msg)\n\n return self.cleaned_data\n\n 
@transaction.atomic()\n def save(self, commit=True):\n audiencia = super(AudienciaForm, self).save(commit)\n return audiencia", "path": "sapl/audiencia/forms.py"}], "after_files": [{"content": "from django import forms\nfrom django.core.exceptions import ObjectDoesNotExist, ValidationError\nfrom django.db import transaction\nfrom django.utils.translation import ugettext_lazy as _\nfrom sapl.audiencia.models import AudienciaPublica, TipoAudienciaPublica\nfrom sapl.materia.models import MateriaLegislativa, TipoMateriaLegislativa\nfrom sapl.utils import timezone\n\nclass AudienciaForm(forms.ModelForm):\n\n data_atual = timezone.now()\n\n tipo = forms.ModelChoiceField(required=True,\n label='Tipo de Audi\u00eancia P\u00fablica',\n queryset=TipoAudienciaPublica.objects.all().order_by('nome'))\n\n tipo_materia = forms.ModelChoiceField(\n label=_('Tipo Mat\u00e9ria'),\n required=True,\n queryset=TipoMateriaLegislativa.objects.all(),\n empty_label='Selecione',\n )\n\n numero_materia = forms.CharField(\n label='N\u00famero Mat\u00e9ria', required=True)\n\n ano_materia = forms.CharField(\n label='Ano Mat\u00e9ria',\n initial=int(data_atual.year),\n required=True)\n\n class Meta:\n model = AudienciaPublica\n fields = ['tipo', 'numero', 'nome',\n 'tema', 'data', 'hora_inicio', 'hora_fim',\n 'observacao', 'audiencia_cancelada', 'url_audio',\n 'url_video', 'upload_pauta', 'upload_ata',\n 'upload_anexo', 'tipo_materia', 'numero_materia',\n 'ano_materia']\n\n\n def __init__(self, **kwargs):\n super(AudienciaForm, self).__init__(**kwargs)\n\n tipos = []\n\n if not self.fields['tipo'].queryset:\n tipos.append(TipoAudienciaPublica.objects.create(nome='Audi\u00eancia P\u00fablica', tipo='A'))\n tipos.append(TipoAudienciaPublica.objects.create(nome='Plebiscito', tipo='P'))\n tipos.append(TipoAudienciaPublica.objects.create(nome='Referendo', tipo='R'))\n tipos.append(TipoAudienciaPublica.objects.create(nome='Iniciativa Popular', tipo='I'))\n\n for t in tipos:\n t.save()\n\n\n def clean(self):\n cleaned_data = super(AudienciaForm, self).clean()\n if not self.is_valid():\n return cleaned_data\n\n try:\n materia = MateriaLegislativa.objects.get(\n numero=self.cleaned_data['numero_materia'],\n ano=self.cleaned_data['ano_materia'],\n tipo=self.cleaned_data['tipo_materia'])\n except ObjectDoesNotExist:\n msg = _('A mat\u00e9ria a ser inclusa n\u00e3o existe no cadastro'\n ' de mat\u00e9rias legislativas.')\n raise ValidationError(msg)\n else:\n cleaned_data['materia'] = materia\n\n if self.cleaned_data['hora_inicio'] and self.cleaned_data['hora_fim']:\n if (self.cleaned_data['hora_fim'] <\n self.cleaned_data['hora_inicio']):\n msg = _('A hora de fim n\u00e3o pode ser anterior a hora de \u00ednicio')\n raise ValidationError(msg)\n\n return cleaned_data\n\n @transaction.atomic()\n def save(self, commit=True):\n audiencia = super(AudienciaForm, self).save(False)\n audiencia.materia = self.cleaned_data['materia']\n audiencia.save()\n return audiencia", "path": "sapl/audiencia/forms.py"}]}
| 1,188 | 169 |
gh_patches_debug_43933
|
rasdani/github-patches
|
git_diff
|
huggingface__trl-566
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Potential bug of model output dim in reward trainer example
Hi!
I've been exploring the `examples/scripts/reward_trainer.py` and believe I've found a potential bug that I'd like to address.
## Potential bug.
To my understanding, the reward model should produce a scalar output. However, when we load the pre-trained model in the [example script](https://github.com/lvwerra/trl/blob/main/examples/scripts/reward_trainer.py#L75-L80), the model will be a binary classifier by default, with a 2-dim logit output:
```python
import torch
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
model = model.cpu()
with torch.no_grad():
out = model(**inputs)
print(out.logits.shape) # torch.Size([1, 2])
```
During training, this issue was hidden by [the loss computation part](https://github.com/lvwerra/trl/blob/main/trl/trainer/reward_trainer.py#L169-L176) of the reward trainer:
```python
rewards_chosen = model(
input_ids=inputs["input_ids_chosen"],
attention_mask=inputs["attention_mask_chosen"],
)[0]
rewards_rejected = model(
input_ids=inputs["input_ids_rejected"],
attention_mask=inputs["attention_mask_rejected"],
)[0]
loss = -torch.nn.functional.logsigmoid(rewards_chosen - rewards_rejected).mean()
```
There, `rewards_rejected` and `rewards_chosen` are [sample, 2] matrices, which I think should be [sample, 1] instead. The `.mean()` operator hides the issue because it still reduces everything to a scalar value.
Here is a [colab](https://colab.research.google.com/drive/1szzGC8H1DQ7329_idVnnOnPIPE6D7J4j?usp=sharing) that shows the dimensionality problem.
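
For a concrete illustration of how the scalar loss hides the shape problem, here is a minimal sketch with random tensors standing in for the model outputs:

```python
# Minimal sketch: random tensors stand in for the reward model's logits.
import torch

rewards_chosen = torch.randn(4, 2)    # [batch, 2] with the default num_labels=2
rewards_rejected = torch.randn(4, 2)  # both should arguably be [batch, 1]

loss = -torch.nn.functional.logsigmoid(rewards_chosen - rewards_rejected).mean()
print(loss.shape)  # torch.Size([]) -- still a scalar, so training runs silently
```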
## Proposed fix
To resolve this, in the model training example, we can use:
```python
model = AutoModelForSequenceClassification.from_pretrained(
script_args.model_name,
quantization_config=quantization_config,
device_map=device_map,
trust_remote_code=script_args.trust_remote_code,
num_labels=1 # <---- make sure that the model output is a scalar
)
```
I'll submit a pull request if it is OK.
In addition, while exploring `examples/scripts/reward_trainer.py`, I noticed a few other places that can be improved. For example, there are currently no validation steps that show the generalization performance of the reward model; the `log_with` option in `ScriptArguments` is currently not used anywhere and therefore has no effect. I'm happy to include the validation and fix the logging glitch in my pull request as well.
Tianlin
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/scripts/reward_trainer.py`
Content:
```
1 # coding=utf-8
2 # Copyright 2023 The HuggingFace Inc. team. All rights reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 from dataclasses import dataclass, field
16 from typing import Optional
17
18 from datasets import load_dataset
19 from peft import LoraConfig
20 from tqdm import tqdm
21 from transformers import (
22 AutoModelForSequenceClassification,
23 AutoTokenizer,
24 BitsAndBytesConfig,
25 HfArgumentParser,
26 TrainingArguments,
27 )
28
29 from trl import RewardTrainer
30
31
32 tqdm.pandas()
33
34
35 # Define and parse arguments.
36 @dataclass
37 class ScriptArguments:
38 """
39 The name of the Casual LM model we wish to fine with SFTTrainer
40 """
41
42 model_name: Optional[str] = field(default="facebook/opt-350m", metadata={"help": "the model name"})
43 dataset_name: Optional[str] = field(default="Anthropic/hh-rlhf", metadata={"help": "the model name"})
44 dataset_text_field: Optional[str] = field(default="text", metadata={"help": "the text field of the dataset"})
45 log_with: Optional[str] = field(default=None, metadata={"help": "use 'wandb' to log with wandb"})
46 learning_rate: Optional[float] = field(default=1.41e-5, metadata={"help": "the learning rate"})
47 batch_size: Optional[int] = field(default=64, metadata={"help": "the batch size"})
48 seq_length: Optional[int] = field(default=512, metadata={"help": "Input sequence length"})
49 gradient_accumulation_steps: Optional[int] = field(
50 default=16, metadata={"help": "the number of gradient accumulation steps"}
51 )
52 load_in_8bit: Optional[bool] = field(default=False, metadata={"help": "load the model in 8 bits precision"})
53 load_in_4bit: Optional[bool] = field(default=False, metadata={"help": "load the model in 4 bits precision"})
54 use_peft: Optional[bool] = field(default=False, metadata={"help": "Wether to use PEFT or not to train adapters"})
55 trust_remote_code: Optional[bool] = field(default=True, metadata={"help": "Enable `trust_remote_code`"})
56 output_dir: Optional[str] = field(default="output", metadata={"help": "the output directory"})
57
58
59 parser = HfArgumentParser(ScriptArguments)
60 script_args = parser.parse_args_into_dataclasses()[0]
61
62 # Step 1: Load the model
63 if script_args.load_in_8bit and script_args.load_in_4bit:
64 raise ValueError("You can't load the model in 8 bits and 4 bits at the same time")
65 elif script_args.load_in_8bit or script_args.load_in_4bit:
66 quantization_config = BitsAndBytesConfig(
67 load_in_8bit=script_args.load_in_8bit, load_in_4bit=script_args.load_in_4bit
68 )
69 # This means: fit the entire model on the GPU:0
70 device_map = {"": 0}
71 else:
72 device_map = None
73 quantization_config = None
74
75 model = AutoModelForSequenceClassification.from_pretrained(
76 script_args.model_name,
77 quantization_config=quantization_config,
78 device_map=device_map,
79 trust_remote_code=script_args.trust_remote_code,
80 )
81
82 # Step 2: Load the dataset and pre-process it
83 tokenizer = AutoTokenizer.from_pretrained(script_args.model_name)
84 dataset = load_dataset(script_args.dataset_name, split="train")
85 # Turn the dataset into pairs of post + summaries, where text_j is the preferred question + answer and text_k is the other.
86 # Then tokenize the dataset.
87 # Adapt this section to your needs for custom datasets
88
89
90 def preprocess_function(examples):
91 new_examples = {
92 "input_ids_chosen": [],
93 "attention_mask_chosen": [],
94 "input_ids_rejected": [],
95 "attention_mask_rejected": [],
96 }
97 for chosen, rejected in zip(examples["chosen"], examples["rejected"]):
98 tokenized_j = tokenizer(chosen, truncation=True)
99 tokenized_k = tokenizer(rejected, truncation=True)
100
101 new_examples["input_ids_chosen"].append(tokenized_j["input_ids"])
102 new_examples["attention_mask_chosen"].append(tokenized_j["attention_mask"])
103 new_examples["input_ids_rejected"].append(tokenized_k["input_ids"])
104 new_examples["attention_mask_rejected"].append(tokenized_k["attention_mask"])
105
106 return new_examples
107
108
109 # preprocess the dataset and filter out QAs that are longer than script_args.max_length
110 original_columns = dataset.column_names
111 train_dataset = dataset.map(
112 preprocess_function,
113 batched=True,
114 num_proc=4,
115 )
116 train_dataset = train_dataset.filter(
117 lambda x: len(x["input_ids_chosen"]) <= script_args.seq_length
118 and len(x["input_ids_rejected"]) <= script_args.seq_length
119 )
120
121
122 # Step 3: Define the training arguments
123 training_args = TrainingArguments(
124 output_dir=script_args.output_dir,
125 per_device_train_batch_size=script_args.batch_size,
126 gradient_accumulation_steps=script_args.gradient_accumulation_steps,
127 learning_rate=script_args.learning_rate,
128 )
129
130 # Step 4: Define the LoraConfig
131 if script_args.use_peft:
132 peft_config = LoraConfig(r=16, lora_alpha=16, bias="none", task_type="SEQ_CLS", modules_to_save=["scores"])
133 else:
134 peft_config = None
135
136 # Step 5: Define the Trainer
137 trainer = RewardTrainer(
138 model=model,
139 tokenizer=tokenizer,
140 args=training_args,
141 train_dataset=train_dataset,
142 peft_config=peft_config,
143 max_length=script_args.seq_length,
144 )
145
146 trainer.train()
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/scripts/reward_trainer.py b/examples/scripts/reward_trainer.py
--- a/examples/scripts/reward_trainer.py
+++ b/examples/scripts/reward_trainer.py
@@ -43,8 +43,13 @@
dataset_name: Optional[str] = field(default="Anthropic/hh-rlhf", metadata={"help": "the model name"})
dataset_text_field: Optional[str] = field(default="text", metadata={"help": "the text field of the dataset"})
log_with: Optional[str] = field(default=None, metadata={"help": "use 'wandb' to log with wandb"})
+ logging_steps: Optional[int] = field(default=500, metadata={"help": "the number of update steps between two logs"})
+ eval_split: Optional[str] = field(
+ default="none", metadata={"help": "the dataset split to evaluate on; default to 'none' (no evaluation)"}
+ )
learning_rate: Optional[float] = field(default=1.41e-5, metadata={"help": "the learning rate"})
batch_size: Optional[int] = field(default=64, metadata={"help": "the batch size"})
+ num_train_epochs: Optional[int] = field(default=1, metadata={"help": "the number of training epochs"})
seq_length: Optional[int] = field(default=512, metadata={"help": "Input sequence length"})
gradient_accumulation_steps: Optional[int] = field(
default=16, metadata={"help": "the number of gradient accumulation steps"}
@@ -77,11 +82,14 @@
quantization_config=quantization_config,
device_map=device_map,
trust_remote_code=script_args.trust_remote_code,
+ num_labels=1,
)
# Step 2: Load the dataset and pre-process it
tokenizer = AutoTokenizer.from_pretrained(script_args.model_name)
-dataset = load_dataset(script_args.dataset_name, split="train")
+train_dataset = load_dataset(script_args.dataset_name, split="train")
+
+
# Turn the dataset into pairs of post + summaries, where text_j is the preferred question + answer and text_k is the other.
# Then tokenize the dataset.
# Adapt this section to your needs for custom datasets
@@ -107,8 +115,7 @@
# preprocess the dataset and filter out QAs that are longer than script_args.max_length
-original_columns = dataset.column_names
-train_dataset = dataset.map(
+train_dataset = train_dataset.map(
preprocess_function,
batched=True,
num_proc=4,
@@ -118,13 +125,34 @@
and len(x["input_ids_rejected"]) <= script_args.seq_length
)
+if script_args.eval_split == "none":
+ eval_dataset = None
+else:
+ eval_dataset = load_dataset(script_args.dataset_name, split=script_args.eval_split)
+
+ eval_dataset = eval_dataset.map(
+ preprocess_function,
+ batched=True,
+ num_proc=4,
+ )
+ eval_dataset = eval_dataset.filter(
+ lambda x: len(x["input_ids_chosen"]) <= script_args.seq_length
+ and len(x["input_ids_rejected"]) <= script_args.seq_length
+ )
+
# Step 3: Define the training arguments
training_args = TrainingArguments(
output_dir=script_args.output_dir,
per_device_train_batch_size=script_args.batch_size,
+ num_train_epochs=script_args.num_train_epochs,
gradient_accumulation_steps=script_args.gradient_accumulation_steps,
learning_rate=script_args.learning_rate,
+ report_to="wandb" if script_args.log_with == "wandb" else "tensorboard",
+ remove_unused_columns=False,
+ optim="adamw_torch",
+ logging_steps=script_args.logging_steps,
+ evaluation_strategy="steps" if script_args.eval_split != "none" else "no",
)
# Step 4: Define the LoraConfig
@@ -139,6 +167,7 @@
tokenizer=tokenizer,
args=training_args,
train_dataset=train_dataset,
+ eval_dataset=eval_dataset,
peft_config=peft_config,
max_length=script_args.seq_length,
)
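
As a quick check on the patched configuration (a sketch only; the model name follows the script's default and the weights are fetched from the Hub), loading the classification head with `num_labels=1` yields a single reward value per sequence:

```python
# Sketch: with num_labels=1 the sequence classifier emits one logit per input.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForSequenceClassification.from_pretrained(
    "facebook/opt-350m", num_labels=1
)

with torch.no_grad():
    out = model(**tokenizer("Hello, my dog is cute", return_tensors="pt"))
print(out.logits.shape)  # expected: torch.Size([1, 1])
```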
|
{"golden_diff": "diff --git a/examples/scripts/reward_trainer.py b/examples/scripts/reward_trainer.py\n--- a/examples/scripts/reward_trainer.py\n+++ b/examples/scripts/reward_trainer.py\n@@ -43,8 +43,13 @@\n dataset_name: Optional[str] = field(default=\"Anthropic/hh-rlhf\", metadata={\"help\": \"the model name\"})\n dataset_text_field: Optional[str] = field(default=\"text\", metadata={\"help\": \"the text field of the dataset\"})\n log_with: Optional[str] = field(default=None, metadata={\"help\": \"use 'wandb' to log with wandb\"})\n+ logging_steps: Optional[int] = field(default=500, metadata={\"help\": \"the number of update steps between two logs\"})\n+ eval_split: Optional[str] = field(\n+ default=\"none\", metadata={\"help\": \"the dataset split to evaluate on; default to 'none' (no evaluation)\"}\n+ )\n learning_rate: Optional[float] = field(default=1.41e-5, metadata={\"help\": \"the learning rate\"})\n batch_size: Optional[int] = field(default=64, metadata={\"help\": \"the batch size\"})\n+ num_train_epochs: Optional[int] = field(default=1, metadata={\"help\": \"the number of training epochs\"})\n seq_length: Optional[int] = field(default=512, metadata={\"help\": \"Input sequence length\"})\n gradient_accumulation_steps: Optional[int] = field(\n default=16, metadata={\"help\": \"the number of gradient accumulation steps\"}\n@@ -77,11 +82,14 @@\n quantization_config=quantization_config,\n device_map=device_map,\n trust_remote_code=script_args.trust_remote_code,\n+ num_labels=1,\n )\n \n # Step 2: Load the dataset and pre-process it\n tokenizer = AutoTokenizer.from_pretrained(script_args.model_name)\n-dataset = load_dataset(script_args.dataset_name, split=\"train\")\n+train_dataset = load_dataset(script_args.dataset_name, split=\"train\")\n+\n+\n # Turn the dataset into pairs of post + summaries, where text_j is the preferred question + answer and text_k is the other.\n # Then tokenize the dataset.\n # Adapt this section to your needs for custom datasets\n@@ -107,8 +115,7 @@\n \n \n # preprocess the dataset and filter out QAs that are longer than script_args.max_length\n-original_columns = dataset.column_names\n-train_dataset = dataset.map(\n+train_dataset = train_dataset.map(\n preprocess_function,\n batched=True,\n num_proc=4,\n@@ -118,13 +125,34 @@\n and len(x[\"input_ids_rejected\"]) <= script_args.seq_length\n )\n \n+if script_args.eval_split == \"none\":\n+ eval_dataset = None\n+else:\n+ eval_dataset = load_dataset(script_args.dataset_name, split=script_args.eval_split)\n+\n+ eval_dataset = eval_dataset.map(\n+ preprocess_function,\n+ batched=True,\n+ num_proc=4,\n+ )\n+ eval_dataset = eval_dataset.filter(\n+ lambda x: len(x[\"input_ids_chosen\"]) <= script_args.seq_length\n+ and len(x[\"input_ids_rejected\"]) <= script_args.seq_length\n+ )\n+\n \n # Step 3: Define the training arguments\n training_args = TrainingArguments(\n output_dir=script_args.output_dir,\n per_device_train_batch_size=script_args.batch_size,\n+ num_train_epochs=script_args.num_train_epochs,\n gradient_accumulation_steps=script_args.gradient_accumulation_steps,\n learning_rate=script_args.learning_rate,\n+ report_to=\"wandb\" if script_args.log_with == \"wandb\" else \"tensorboard\",\n+ remove_unused_columns=False,\n+ optim=\"adamw_torch\",\n+ logging_steps=script_args.logging_steps,\n+ evaluation_strategy=\"steps\" if script_args.eval_split != \"none\" else \"no\",\n )\n \n # Step 4: Define the LoraConfig\n@@ -139,6 +167,7 @@\n tokenizer=tokenizer,\n args=training_args,\n train_dataset=train_dataset,\n+ 
eval_dataset=eval_dataset,\n peft_config=peft_config,\n max_length=script_args.seq_length,\n )\n", "issue": "Potential bug of model output dim in reward trainer example\nHi!\r\n\r\nI've been exploring the `examples/scripts/reward_trainer.py` and believe I've found a potential bug that I'd like to address.\r\n\r\n## Potential bug.\r\nTo my understanding, the reward model should produce a scalar output. However, when we load the pre-trained model in the [example script](https://github.com/lvwerra/trl/blob/main/examples/scripts/reward_trainer.py#L75-L80), the model will be a binary classifier by default, with a 2-dim logit output:\r\n\r\n```python\r\nimport torch\r\ninputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"pt\")\r\nmodel = model.cpu()\r\nwith torch.no_grad():\r\n out = model(**inputs)\r\nprint(out.logits.shape) # torch.Size([1, 2])\r\n```\r\n\r\nDuring training, this issue was hidden by the [the loss computation part](https://github.com/lvwerra/trl/blob/main/trl/trainer/reward_trainer.py#L169-L176) of the reward trainer:\r\n\r\n```python\r\n rewards_chosen = model(\r\n input_ids=inputs[\"input_ids_chosen\"],\r\n attention_mask=inputs[\"attention_mask_chosen\"],\r\n )[0]\r\n\r\n rewards_rejected = model(\r\n input_ids=inputs[\"input_ids_rejected\"],\r\n attention_mask=inputs[\"attention_mask_rejected\"],\r\n )[0]\r\n\r\n loss = -torch.nn.functional.logsigmoid(rewards_chosen - rewards_rejected).mean()\r\n```\r\nThere, the `rewards_rejected` and `rewards_chosen` are dim [sample, 2] matrices, which I think should be dim [sample, 1] instead. The `.mean()` operator hides the issue as it returns a scalar value.\r\n\r\nHere is a [colab](https://colab.research.google.com/drive/1szzGC8H1DQ7329_idVnnOnPIPE6D7J4j?usp=sharing) that shows the dimensionality problem.\r\n\r\n## Proposed fix\r\n\r\nTo resolve this, in the model training example, we can use:\r\n\r\n```python\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\r\n script_args.model_name,\r\n quantization_config=quantization_config,\r\n device_map=device_map,\r\n trust_remote_code=script_args.trust_remote_code,\r\n num_labels=1 # <---- make sure that the model output is a scalar\r\n)\r\n```\r\nI'll submit a pull request if it is OK. \r\n\r\nIn addition, while exploring `examples/scripts/reward_trainer.py`, I noticed a few other places that can be improved. For example, there is currently no validation steps that show the genearlization performance of the reward model; the `log_with` option in `ScriptArguments` is currently not being used anywhere and therefore has no effect. I'm happy to include the validation and fix the logging glitch in my pull request as well.\r\n\r\nTianlin\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2023 The HuggingFace Inc. team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom dataclasses import dataclass, field\nfrom typing import Optional\n\nfrom datasets import load_dataset\nfrom peft import LoraConfig\nfrom tqdm import tqdm\nfrom transformers import (\n AutoModelForSequenceClassification,\n AutoTokenizer,\n BitsAndBytesConfig,\n HfArgumentParser,\n TrainingArguments,\n)\n\nfrom trl import RewardTrainer\n\n\ntqdm.pandas()\n\n\n# Define and parse arguments.\n@dataclass\nclass ScriptArguments:\n \"\"\"\n The name of the Casual LM model we wish to fine with SFTTrainer\n \"\"\"\n\n model_name: Optional[str] = field(default=\"facebook/opt-350m\", metadata={\"help\": \"the model name\"})\n dataset_name: Optional[str] = field(default=\"Anthropic/hh-rlhf\", metadata={\"help\": \"the model name\"})\n dataset_text_field: Optional[str] = field(default=\"text\", metadata={\"help\": \"the text field of the dataset\"})\n log_with: Optional[str] = field(default=None, metadata={\"help\": \"use 'wandb' to log with wandb\"})\n learning_rate: Optional[float] = field(default=1.41e-5, metadata={\"help\": \"the learning rate\"})\n batch_size: Optional[int] = field(default=64, metadata={\"help\": \"the batch size\"})\n seq_length: Optional[int] = field(default=512, metadata={\"help\": \"Input sequence length\"})\n gradient_accumulation_steps: Optional[int] = field(\n default=16, metadata={\"help\": \"the number of gradient accumulation steps\"}\n )\n load_in_8bit: Optional[bool] = field(default=False, metadata={\"help\": \"load the model in 8 bits precision\"})\n load_in_4bit: Optional[bool] = field(default=False, metadata={\"help\": \"load the model in 4 bits precision\"})\n use_peft: Optional[bool] = field(default=False, metadata={\"help\": \"Wether to use PEFT or not to train adapters\"})\n trust_remote_code: Optional[bool] = field(default=True, metadata={\"help\": \"Enable `trust_remote_code`\"})\n output_dir: Optional[str] = field(default=\"output\", metadata={\"help\": \"the output directory\"})\n\n\nparser = HfArgumentParser(ScriptArguments)\nscript_args = parser.parse_args_into_dataclasses()[0]\n\n# Step 1: Load the model\nif script_args.load_in_8bit and script_args.load_in_4bit:\n raise ValueError(\"You can't load the model in 8 bits and 4 bits at the same time\")\nelif script_args.load_in_8bit or script_args.load_in_4bit:\n quantization_config = BitsAndBytesConfig(\n load_in_8bit=script_args.load_in_8bit, load_in_4bit=script_args.load_in_4bit\n )\n # This means: fit the entire model on the GPU:0\n device_map = {\"\": 0}\nelse:\n device_map = None\n quantization_config = None\n\nmodel = AutoModelForSequenceClassification.from_pretrained(\n script_args.model_name,\n quantization_config=quantization_config,\n device_map=device_map,\n trust_remote_code=script_args.trust_remote_code,\n)\n\n# Step 2: Load the dataset and pre-process it\ntokenizer = AutoTokenizer.from_pretrained(script_args.model_name)\ndataset = load_dataset(script_args.dataset_name, split=\"train\")\n# Turn the dataset into pairs of 
post + summaries, where text_j is the preferred question + answer and text_k is the other.\n# Then tokenize the dataset.\n# Adapt this section to your needs for custom datasets\n\n\ndef preprocess_function(examples):\n new_examples = {\n \"input_ids_chosen\": [],\n \"attention_mask_chosen\": [],\n \"input_ids_rejected\": [],\n \"attention_mask_rejected\": [],\n }\n for chosen, rejected in zip(examples[\"chosen\"], examples[\"rejected\"]):\n tokenized_j = tokenizer(chosen, truncation=True)\n tokenized_k = tokenizer(rejected, truncation=True)\n\n new_examples[\"input_ids_chosen\"].append(tokenized_j[\"input_ids\"])\n new_examples[\"attention_mask_chosen\"].append(tokenized_j[\"attention_mask\"])\n new_examples[\"input_ids_rejected\"].append(tokenized_k[\"input_ids\"])\n new_examples[\"attention_mask_rejected\"].append(tokenized_k[\"attention_mask\"])\n\n return new_examples\n\n\n# preprocess the dataset and filter out QAs that are longer than script_args.max_length\noriginal_columns = dataset.column_names\ntrain_dataset = dataset.map(\n preprocess_function,\n batched=True,\n num_proc=4,\n)\ntrain_dataset = train_dataset.filter(\n lambda x: len(x[\"input_ids_chosen\"]) <= script_args.seq_length\n and len(x[\"input_ids_rejected\"]) <= script_args.seq_length\n)\n\n\n# Step 3: Define the training arguments\ntraining_args = TrainingArguments(\n output_dir=script_args.output_dir,\n per_device_train_batch_size=script_args.batch_size,\n gradient_accumulation_steps=script_args.gradient_accumulation_steps,\n learning_rate=script_args.learning_rate,\n)\n\n# Step 4: Define the LoraConfig\nif script_args.use_peft:\n peft_config = LoraConfig(r=16, lora_alpha=16, bias=\"none\", task_type=\"SEQ_CLS\", modules_to_save=[\"scores\"])\nelse:\n peft_config = None\n\n# Step 5: Define the Trainer\ntrainer = RewardTrainer(\n model=model,\n tokenizer=tokenizer,\n args=training_args,\n train_dataset=train_dataset,\n peft_config=peft_config,\n max_length=script_args.seq_length,\n)\n\ntrainer.train()\n", "path": "examples/scripts/reward_trainer.py"}], "after_files": [{"content": "# coding=utf-8\n# Copyright 2023 The HuggingFace Inc. team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom dataclasses import dataclass, field\nfrom typing import Optional\n\nfrom datasets import load_dataset\nfrom peft import LoraConfig\nfrom tqdm import tqdm\nfrom transformers import (\n AutoModelForSequenceClassification,\n AutoTokenizer,\n BitsAndBytesConfig,\n HfArgumentParser,\n TrainingArguments,\n)\n\nfrom trl import RewardTrainer\n\n\ntqdm.pandas()\n\n\n# Define and parse arguments.\n@dataclass\nclass ScriptArguments:\n \"\"\"\n The name of the Casual LM model we wish to fine with SFTTrainer\n \"\"\"\n\n model_name: Optional[str] = field(default=\"facebook/opt-350m\", metadata={\"help\": \"the model name\"})\n dataset_name: Optional[str] = field(default=\"Anthropic/hh-rlhf\", metadata={\"help\": \"the model name\"})\n dataset_text_field: Optional[str] = field(default=\"text\", metadata={\"help\": \"the text field of the dataset\"})\n log_with: Optional[str] = field(default=None, metadata={\"help\": \"use 'wandb' to log with wandb\"})\n logging_steps: Optional[int] = field(default=500, metadata={\"help\": \"the number of update steps between two logs\"})\n eval_split: Optional[str] = field(\n default=\"none\", metadata={\"help\": \"the dataset split to evaluate on; default to 'none' (no evaluation)\"}\n )\n learning_rate: Optional[float] = field(default=1.41e-5, metadata={\"help\": \"the learning rate\"})\n batch_size: Optional[int] = field(default=64, metadata={\"help\": \"the batch size\"})\n num_train_epochs: Optional[int] = field(default=1, metadata={\"help\": \"the number of training epochs\"})\n seq_length: Optional[int] = field(default=512, metadata={\"help\": \"Input sequence length\"})\n gradient_accumulation_steps: Optional[int] = field(\n default=16, metadata={\"help\": \"the number of gradient accumulation steps\"}\n )\n load_in_8bit: Optional[bool] = field(default=False, metadata={\"help\": \"load the model in 8 bits precision\"})\n load_in_4bit: Optional[bool] = field(default=False, metadata={\"help\": \"load the model in 4 bits precision\"})\n use_peft: Optional[bool] = field(default=False, metadata={\"help\": \"Wether to use PEFT or not to train adapters\"})\n trust_remote_code: Optional[bool] = field(default=True, metadata={\"help\": \"Enable `trust_remote_code`\"})\n output_dir: Optional[str] = field(default=\"output\", metadata={\"help\": \"the output directory\"})\n\n\nparser = HfArgumentParser(ScriptArguments)\nscript_args = parser.parse_args_into_dataclasses()[0]\n\n# Step 1: Load the model\nif script_args.load_in_8bit and script_args.load_in_4bit:\n raise ValueError(\"You can't load the model in 8 bits and 4 bits at the same time\")\nelif script_args.load_in_8bit or script_args.load_in_4bit:\n quantization_config = BitsAndBytesConfig(\n load_in_8bit=script_args.load_in_8bit, load_in_4bit=script_args.load_in_4bit\n )\n # This means: fit the entire model on the GPU:0\n device_map = {\"\": 0}\nelse:\n device_map = None\n quantization_config = None\n\nmodel = 
AutoModelForSequenceClassification.from_pretrained(\n script_args.model_name,\n quantization_config=quantization_config,\n device_map=device_map,\n trust_remote_code=script_args.trust_remote_code,\n num_labels=1,\n)\n\n# Step 2: Load the dataset and pre-process it\ntokenizer = AutoTokenizer.from_pretrained(script_args.model_name)\ntrain_dataset = load_dataset(script_args.dataset_name, split=\"train\")\n\n\n# Turn the dataset into pairs of post + summaries, where text_j is the preferred question + answer and text_k is the other.\n# Then tokenize the dataset.\n# Adapt this section to your needs for custom datasets\n\n\ndef preprocess_function(examples):\n new_examples = {\n \"input_ids_chosen\": [],\n \"attention_mask_chosen\": [],\n \"input_ids_rejected\": [],\n \"attention_mask_rejected\": [],\n }\n for chosen, rejected in zip(examples[\"chosen\"], examples[\"rejected\"]):\n tokenized_j = tokenizer(chosen, truncation=True)\n tokenized_k = tokenizer(rejected, truncation=True)\n\n new_examples[\"input_ids_chosen\"].append(tokenized_j[\"input_ids\"])\n new_examples[\"attention_mask_chosen\"].append(tokenized_j[\"attention_mask\"])\n new_examples[\"input_ids_rejected\"].append(tokenized_k[\"input_ids\"])\n new_examples[\"attention_mask_rejected\"].append(tokenized_k[\"attention_mask\"])\n\n return new_examples\n\n\n# preprocess the dataset and filter out QAs that are longer than script_args.max_length\ntrain_dataset = train_dataset.map(\n preprocess_function,\n batched=True,\n num_proc=4,\n)\ntrain_dataset = train_dataset.filter(\n lambda x: len(x[\"input_ids_chosen\"]) <= script_args.seq_length\n and len(x[\"input_ids_rejected\"]) <= script_args.seq_length\n)\n\nif script_args.eval_split == \"none\":\n eval_dataset = None\nelse:\n eval_dataset = load_dataset(script_args.dataset_name, split=script_args.eval_split)\n\n eval_dataset = eval_dataset.map(\n preprocess_function,\n batched=True,\n num_proc=4,\n )\n eval_dataset = eval_dataset.filter(\n lambda x: len(x[\"input_ids_chosen\"]) <= script_args.seq_length\n and len(x[\"input_ids_rejected\"]) <= script_args.seq_length\n )\n\n\n# Step 3: Define the training arguments\ntraining_args = TrainingArguments(\n output_dir=script_args.output_dir,\n per_device_train_batch_size=script_args.batch_size,\n num_train_epochs=script_args.num_train_epochs,\n gradient_accumulation_steps=script_args.gradient_accumulation_steps,\n learning_rate=script_args.learning_rate,\n report_to=\"wandb\" if script_args.log_with == \"wandb\" else \"tensorboard\",\n remove_unused_columns=False,\n optim=\"adamw_torch\",\n logging_steps=script_args.logging_steps,\n evaluation_strategy=\"steps\" if script_args.eval_split != \"none\" else \"no\",\n)\n\n# Step 4: Define the LoraConfig\nif script_args.use_peft:\n peft_config = LoraConfig(r=16, lora_alpha=16, bias=\"none\", task_type=\"SEQ_CLS\", modules_to_save=[\"scores\"])\nelse:\n peft_config = None\n\n# Step 5: Define the Trainer\ntrainer = RewardTrainer(\n model=model,\n tokenizer=tokenizer,\n args=training_args,\n train_dataset=train_dataset,\n eval_dataset=eval_dataset,\n peft_config=peft_config,\n max_length=script_args.seq_length,\n)\n\ntrainer.train()\n", "path": "examples/scripts/reward_trainer.py"}]}
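The patch captured above loads the reward model with `num_labels=1` so that it emits a single scalar reward per sequence. A quick illustrative check of that effect, adapted from the snippet in the issue text (the model name is just the script's default; this is a sketch, not part of the original row):

```python
# Sketch: confirm that a sequence-classification model loaded with num_labels=1
# produces one logit per input instead of the default two.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "facebook/opt-350m"  # default model of the example script
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=1)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)
print(out.logits.shape)  # torch.Size([1, 1]) rather than torch.Size([1, 2])
```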
| 2,576 | 923 |
gh_patches_debug_4862 | rasdani/github-patches | git_diff | pre-commit__pre-commit-420 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Not working on macOS Sierra?
Attempting to utilize a collection of hooks from the default repo, I get the following:
```
An unexpected error has occurred: CalledProcessError: Command: ('/Users/amcgregor/Projects/marrow/.venv/bin/python3', '-m', 'virtualenv', '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')
Return code: 100
Expected return code: 0
Output:
Using base prefix '/usr/local/bin/../../../Library/Frameworks/Python.framework/Versions/3.5'
New python executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3
Also creating executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python
ERROR: The executable /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3 is not functioning
ERROR: It thinks sys.prefix is '/Library/Frameworks/Python.framework/Versions/3.5' (should be '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')
ERROR: virtualenv is not compatible with this system or executable
Errors: (none)
Traceback (most recent call last):
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/error_handler.py", line 47, in error_handler
yield
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/main.py", line 157, in main
return run(runner, args)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/commands/run.py", line 212, in run
return _run_hooks(repo_hooks, args, write, environ)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/staged_files_only.py", line 63, in staged_files_only
yield
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/commands/run.py", line 195, in run
repo_hooks = list(get_repo_hooks(runner))
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/commands/run.py", line 141, in get_repo_hooks
for repo in runner.repositories:
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/cached_property.py", line 26, in __get__
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/runner.py", line 47, in repositories
repository.require_installed()
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/repository.py", line 117, in require_installed
self.install()
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/repository.py", line 187, in install
self.additional_dependencies[language_name][language_version],
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/languages/python.py", line 78, in install_environment
('pip', 'install', '.') + additional_dependencies,
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/util.py", line 58, in clean_path_on_failure
yield
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/languages/python.py", line 74, in install_environment
repo_cmd_runner.run(venv_cmd)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/prefixed_command_runner.py", line 39, in run
return cmd_output(*replaced_cmd, __popen=self.__popen, **kwargs)
File "/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/util.py", line 189, in cmd_output
returncode, cmd, retcode, output=(stdout, stderr),
pre_commit.util.CalledProcessError: Command: ('/Users/amcgregor/Projects/marrow/.venv/bin/python3', '-m', 'virtualenv', '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')
Return code: 100
Expected return code: 0
Output:
Using base prefix '/usr/local/bin/../../../Library/Frameworks/Python.framework/Versions/3.5'
New python executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3
Also creating executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python
ERROR: The executable /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3 is not functioning
ERROR: It thinks sys.prefix is '/Library/Frameworks/Python.framework/Versions/3.5' (should be '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')
ERROR: virtualenv is not compatible with this system or executable
Errors: (none)
```
Using the Python.org-provided Python 3.5 installer package. I'm already within a virtual environment at the point of execution.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/python.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import contextlib
4 import distutils.spawn
5 import os
6 import sys
7
8 from pre_commit.envcontext import envcontext
9 from pre_commit.envcontext import UNSET
10 from pre_commit.envcontext import Var
11 from pre_commit.languages import helpers
12 from pre_commit.util import clean_path_on_failure
13 from pre_commit.xargs import xargs
14
15
16 ENVIRONMENT_DIR = 'py_env'
17
18
19 def bin_dir(venv):
20 """On windows there's a different directory for the virtualenv"""
21 bin_part = 'Scripts' if os.name == 'nt' else 'bin'
22 return os.path.join(venv, bin_part)
23
24
25 def get_env_patch(venv):
26 return (
27 ('PYTHONHOME', UNSET),
28 ('VIRTUAL_ENV', venv),
29 ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),
30 )
31
32
33 @contextlib.contextmanager
34 def in_env(repo_cmd_runner, language_version):
35 envdir = os.path.join(
36 repo_cmd_runner.prefix_dir,
37 helpers.environment_dir(ENVIRONMENT_DIR, language_version),
38 )
39 with envcontext(get_env_patch(envdir)):
40 yield
41
42
43 def norm_version(version):
44 if os.name == 'nt': # pragma: no cover (windows)
45 # Try looking up by name
46 if distutils.spawn.find_executable(version):
47 return version
48
49 # If it is in the form pythonx.x search in the default
50 # place on windows
51 if version.startswith('python'):
52 return r'C:\{}\python.exe'.format(version.replace('.', ''))
53
54 # Otherwise assume it is a path
55 return os.path.expanduser(version)
56
57
58 def install_environment(
59 repo_cmd_runner,
60 version='default',
61 additional_dependencies=(),
62 ):
63 additional_dependencies = tuple(additional_dependencies)
64 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
65
66 # Install a virtualenv
67 with clean_path_on_failure(repo_cmd_runner.path(directory)):
68 venv_cmd = [
69 sys.executable, '-m', 'virtualenv',
70 '{{prefix}}{}'.format(directory)
71 ]
72 if version != 'default':
73 venv_cmd.extend(['-p', norm_version(version)])
74 repo_cmd_runner.run(venv_cmd)
75 with in_env(repo_cmd_runner, version):
76 helpers.run_setup_cmd(
77 repo_cmd_runner,
78 ('pip', 'install', '.') + additional_dependencies,
79 )
80
81
82 def run_hook(repo_cmd_runner, hook, file_args):
83 with in_env(repo_cmd_runner, hook['language_version']):
84 return xargs((hook['entry'],) + tuple(hook['args']), file_args)
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -71,6 +71,8 @@
]
if version != 'default':
venv_cmd.extend(['-p', norm_version(version)])
+ else:
+ venv_cmd.extend(['-p', os.path.realpath(sys.executable)])
repo_cmd_runner.run(venv_cmd)
with in_env(repo_cmd_runner, version):
helpers.run_setup_cmd(
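In short, the patch makes pre-commit always hand virtualenv an explicit `-p` interpreter: the requested version when one is given, otherwise the resolved path of the interpreter running pre-commit. Judging from the traceback in the issue, the point of `os.path.realpath(sys.executable)` is to point virtualenv at the real underlying interpreter (the framework build on macOS) rather than a symlinked shim inside an existing virtualenv. A minimal, self-contained sketch of just the new fallback branch (the target path is illustrative):

```python
# Sketch of the patched fallback: when no language version is requested,
# resolve symlinks so virtualenv is given the real underlying interpreter.
import os
import sys

venv_cmd = [sys.executable, '-m', 'virtualenv', '/tmp/py_env-default']  # path illustrative
venv_cmd.extend(['-p', os.path.realpath(sys.executable)])
print(venv_cmd)
```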
|
{"golden_diff": "diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py\n--- a/pre_commit/languages/python.py\n+++ b/pre_commit/languages/python.py\n@@ -71,6 +71,8 @@\n ]\n if version != 'default':\n venv_cmd.extend(['-p', norm_version(version)])\n+ else:\n+ venv_cmd.extend(['-p', os.path.realpath(sys.executable)])\n repo_cmd_runner.run(venv_cmd)\n with in_env(repo_cmd_runner, version):\n helpers.run_setup_cmd(\n", "issue": "Not working on macOS Sierra?\nAttempting to utilize a collection of hooks from the default repo, I get the following:\n\n```\nAn unexpected error has occurred: CalledProcessError: Command: ('/Users/amcgregor/Projects/marrow/.venv/bin/python3', '-m', 'virtualenv', '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')\nReturn code: 100\nExpected return code: 0\nOutput: \n Using base prefix '/usr/local/bin/../../../Library/Frameworks/Python.framework/Versions/3.5'\n New python executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3\n Also creating executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python\n ERROR: The executable /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3 is not functioning\n ERROR: It thinks sys.prefix is '/Library/Frameworks/Python.framework/Versions/3.5' (should be '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')\n ERROR: virtualenv is not compatible with this system or executable\n\nErrors: (none)\n\nTraceback (most recent call last):\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/error_handler.py\", line 47, in error_handler\n yield\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/main.py\", line 157, in main\n return run(runner, args)\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/commands/run.py\", line 212, in run\n return _run_hooks(repo_hooks, args, write, environ)\n File \"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/staged_files_only.py\", line 63, in staged_files_only\n yield\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/commands/run.py\", line 195, in run\n repo_hooks = list(get_repo_hooks(runner))\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/commands/run.py\", line 141, in get_repo_hooks\n for repo in runner.repositories:\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/cached_property.py\", line 26, in __get__\n value = obj.__dict__[self.func.__name__] = self.func(obj)\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/runner.py\", line 47, in repositories\n repository.require_installed()\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/repository.py\", line 117, in require_installed\n self.install()\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/repository.py\", line 187, in install\n self.additional_dependencies[language_name][language_version],\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/languages/python.py\", line 78, in install_environment\n ('pip', 'install', '.') + additional_dependencies,\n File 
\"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/util.py\", line 58, in clean_path_on_failure\n yield\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/languages/python.py\", line 74, in install_environment\n repo_cmd_runner.run(venv_cmd)\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/prefixed_command_runner.py\", line 39, in run\n return cmd_output(*replaced_cmd, __popen=self.__popen, **kwargs)\n File \"/Users/amcgregor/Projects/marrow/.venv/lib/python3.5/site-packages/pre_commit/util.py\", line 189, in cmd_output\n returncode, cmd, retcode, output=(stdout, stderr),\npre_commit.util.CalledProcessError: Command: ('/Users/amcgregor/Projects/marrow/.venv/bin/python3', '-m', 'virtualenv', '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')\nReturn code: 100\nExpected return code: 0\nOutput: \n Using base prefix '/usr/local/bin/../../../Library/Frameworks/Python.framework/Versions/3.5'\n New python executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3\n Also creating executable in /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python\n ERROR: The executable /Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default/bin/python3 is not functioning\n ERROR: It thinks sys.prefix is '/Library/Frameworks/Python.framework/Versions/3.5' (should be '/Users/amcgregor/.pre-commit/repofu57ylaa/py_env-default')\n ERROR: virtualenv is not compatible with this system or executable\n\nErrors: (none)\n```\n\nUsing the Python.org-provided Python 3.5 installer package. 
I'm already within a virtual environment at the point of execution.\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport distutils.spawn\nimport os\nimport sys\n\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.xargs import xargs\n\n\nENVIRONMENT_DIR = 'py_env'\n\n\ndef bin_dir(venv):\n \"\"\"On windows there's a different directory for the virtualenv\"\"\"\n bin_part = 'Scripts' if os.name == 'nt' else 'bin'\n return os.path.join(venv, bin_part)\n\n\ndef get_env_patch(venv):\n return (\n ('PYTHONHOME', UNSET),\n ('VIRTUAL_ENV', venv),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\[email protected]\ndef in_env(repo_cmd_runner, language_version):\n envdir = os.path.join(\n repo_cmd_runner.prefix_dir,\n helpers.environment_dir(ENVIRONMENT_DIR, language_version),\n )\n with envcontext(get_env_patch(envdir)):\n yield\n\n\ndef norm_version(version):\n if os.name == 'nt': # pragma: no cover (windows)\n # Try looking up by name\n if distutils.spawn.find_executable(version):\n return version\n\n # If it is in the form pythonx.x search in the default\n # place on windows\n if version.startswith('python'):\n return r'C:\\{}\\python.exe'.format(version.replace('.', ''))\n\n # Otherwise assume it is a path\n return os.path.expanduser(version)\n\n\ndef install_environment(\n repo_cmd_runner,\n version='default',\n additional_dependencies=(),\n):\n additional_dependencies = tuple(additional_dependencies)\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n # Install a virtualenv\n with clean_path_on_failure(repo_cmd_runner.path(directory)):\n venv_cmd = [\n sys.executable, '-m', 'virtualenv',\n '{{prefix}}{}'.format(directory)\n ]\n if version != 'default':\n venv_cmd.extend(['-p', norm_version(version)])\n repo_cmd_runner.run(venv_cmd)\n with in_env(repo_cmd_runner, version):\n helpers.run_setup_cmd(\n repo_cmd_runner,\n ('pip', 'install', '.') + additional_dependencies,\n )\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n with in_env(repo_cmd_runner, hook['language_version']):\n return xargs((hook['entry'],) + tuple(hook['args']), file_args)\n", "path": "pre_commit/languages/python.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport distutils.spawn\nimport os\nimport sys\n\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.xargs import xargs\n\n\nENVIRONMENT_DIR = 'py_env'\n\n\ndef bin_dir(venv):\n \"\"\"On windows there's a different directory for the virtualenv\"\"\"\n bin_part = 'Scripts' if os.name == 'nt' else 'bin'\n return os.path.join(venv, bin_part)\n\n\ndef get_env_patch(venv):\n return (\n ('PYTHONHOME', UNSET),\n ('VIRTUAL_ENV', venv),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\[email protected]\ndef in_env(repo_cmd_runner, language_version):\n envdir = os.path.join(\n repo_cmd_runner.prefix_dir,\n helpers.environment_dir(ENVIRONMENT_DIR, language_version),\n )\n with envcontext(get_env_patch(envdir)):\n yield\n\n\ndef norm_version(version):\n if os.name == 'nt': # pragma: no cover (windows)\n # Try looking up by name\n if distutils.spawn.find_executable(version):\n 
return version\n\n # If it is in the form pythonx.x search in the default\n # place on windows\n if version.startswith('python'):\n return r'C:\\{}\\python.exe'.format(version.replace('.', ''))\n\n # Otherwise assume it is a path\n return os.path.expanduser(version)\n\n\ndef install_environment(\n repo_cmd_runner,\n version='default',\n additional_dependencies=(),\n):\n additional_dependencies = tuple(additional_dependencies)\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n # Install a virtualenv\n with clean_path_on_failure(repo_cmd_runner.path(directory)):\n venv_cmd = [\n sys.executable, '-m', 'virtualenv',\n '{{prefix}}{}'.format(directory)\n ]\n if version != 'default':\n venv_cmd.extend(['-p', norm_version(version)])\n else:\n venv_cmd.extend(['-p', os.path.realpath(sys.executable)])\n repo_cmd_runner.run(venv_cmd)\n with in_env(repo_cmd_runner, version):\n helpers.run_setup_cmd(\n repo_cmd_runner,\n ('pip', 'install', '.') + additional_dependencies,\n )\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n with in_env(repo_cmd_runner, hook['language_version']):\n return xargs((hook['entry'],) + tuple(hook['args']), file_args)\n", "path": "pre_commit/languages/python.py"}]}
| 2,346 | 117 |
gh_patches_debug_23913 | rasdani/github-patches | git_diff | rotki__rotki-1238 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Show all user trades/actions in a new page
## Problem Definition
We have the user data action but users can't actually explore it in any way in the UI. That's not good and we should definitely offer a way to display all actions/trades in a dedicated page.
That would mean showing the user's:
- Exchange trades
- Exchange deposit/withdrawals
- DEX trades (when we support them)
## Considerations
The number of visible actions will be limited whether the user has premium or not. We need to think a bit here about how to make this a premium feature while still keeping it available to normal users. At the moment the only idea is a limit on the number of trades/actions shown.
## Task
## Backend
The backend already has an API call for this. It's actually being used with a filter for the external location for the OTC trades.
## Frontend
The frontend needs a new dedicated page, "User actions" or something like that. There, all trades and deposits/withdrawals from all locations should be displayed. This is going to be a very long list in most cases, so we should think about optimizations and how to show the data in a visually appealing and also useful way.
I would imagine it would make sense to have different tables for trades and different ones for asset movements (deposits/withdrawals), etc. And perhaps a different component per location. Or simply offer filtering.
--- END ISSUE ---
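Following up on the Backend/Frontend notes above: the kind of per-location grouping such a page would need can be sketched with the `Trade` objects used elsewhere in this row's code (see `actions.py` below). This is purely illustrative and not rotki's actual API:

```python
# Illustrative only: group already-fetched Trade objects by their location so a
# frontend could render one table (or apply one filter) per exchange.
from collections import defaultdict

def group_trades_by_location(trades):
    grouped = defaultdict(list)
    for trade in trades:
        grouped[trade.location].append(trade)  # Trade.location is set per exchange
    return grouped
```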
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/data_faker/data_faker/actions.py`
Content:
```
1 import logging
2 import random
3 from typing import Tuple
4
5 from rotkehlchen.assets.asset import Asset
6 from rotkehlchen.balances.manual import ManuallyTrackedBalance
7 from rotkehlchen.constants.assets import A_BTC, A_EUR, A_USD, FIAT_CURRENCIES
8 from rotkehlchen.exchanges.data_structures import Trade, TradeType
9 from rotkehlchen.fval import FVal
10 from rotkehlchen.history import PriceHistorian
11 from rotkehlchen.serialization.deserialize import deserialize_location, pair_get_assets
12 from rotkehlchen.typing import Location, Timestamp, TradePair
13
14 STARTING_TIMESTAMP = 1464739200 # 01/06/2016
15 NUMBER_OF_TRADES = 5
16 STARTING_FUNDS = {A_EUR: FVal(100000), A_BTC: FVal(10)}
17
18 MAX_TRADE_DIFF_VARIANCE = 14400
19
20 ALLOWED_EXCHANGES = ['kraken', 'binance']
21 KRAKEN_PAIRS = [TradePair('ETH_EUR'), TradePair('BTC_EUR')]
22
23 MAX_TRADE_USD_VALUE = FVal(100)
24 MAX_FEE_USD_VALUE = 1
25
26
27 logger = logging.getLogger(__name__)
28
29
30 class ActionWriter():
31
32 def __init__(
33 self,
34 trades_number: int,
35 seconds_between_trades: int,
36 seconds_between_balance_save: int,
37 rotkehlchen,
38 fake_kraken,
39 fake_binance,
40 ):
41 self.seconds_between_trades = seconds_between_trades
42 self.seconds_between_balance_save = seconds_between_balance_save
43 self.trades_number = trades_number
44 self.current_ts = STARTING_TIMESTAMP
45 self.last_trade_ts = 0
46 self.last_balance_save_ts = 0
47 self.funds = STARTING_FUNDS
48 self.rotki = rotkehlchen
49 self.kraken = fake_kraken
50 self.binance = fake_binance
51
52 timestamp, _, _ = self.get_next_ts()
53 for asset, value in self.funds.items():
54 if asset.is_fiat():
55 self.rotki.data.db.add_manually_tracked_balances([ManuallyTrackedBalance(
56 asset=asset,
57 label=f'{asset.identifier} balance',
58 amount=value,
59 location=Location.BANKS,
60 tags=None,
61 )])
62 self.rotki.query_balances(requested_save_data=True, timestamp=timestamp)
63
64 # divide our starting funds between exchanges and keep a part out
65 divide_by = len(ALLOWED_EXCHANGES) + 1
66 for asset, value in self.funds.items():
67 amount = value / divide_by
68 for exchange in ALLOWED_EXCHANGES:
69 timestamp, _, _ = self.get_next_ts()
70
71 skip_exchange = asset in FIAT_CURRENCIES and exchange != 'kraken'
72
73 if not skip_exchange:
74 getattr(self, exchange).deposit(
75 asset=asset,
76 amount=amount,
77 time=timestamp,
78 )
79 if asset in FIAT_CURRENCIES:
80 self.rotki.data.db.add_manually_tracked_balances([ManuallyTrackedBalance(
81 asset=asset,
82 label=f'{asset.identifier} balance',
83 amount=value,
84 location=Location.BANKS,
85 tags=None,
86 )])
87
88 self.rotki.query_balances(requested_save_data=True, timestamp=timestamp)
89 self.last_balance_save_ts = timestamp
90
91 def maybe_save_balances(self, save_ts: Timestamp) -> None:
92 """Maybe Save all current balances in the fake user's DB at the current timestamp
93
94 If the save_ts is not after the time we save balances then nothing happens
95 """
96 if save_ts - self.last_balance_save_ts < self.seconds_between_balance_save:
97 return
98 self.rotki.query_balances(requested_save_data=True, timestamp=save_ts)
99 self.last_balance_save_ts = save_ts
100
101 def generate_history(self) -> None:
102 created_trades = 0
103 while created_trades <= self.trades_number:
104 current_ts, save_balances, make_trade = self.get_next_ts()
105
106 if make_trade:
107 self.create_action(created_trades, current_ts)
108 created_trades += 1
109
110 if save_balances:
111 self.maybe_save_balances(save_ts=current_ts)
112
113 @staticmethod
114 def query_historical_price(from_asset: Asset, to_asset: Asset, timestamp: Timestamp):
115 return PriceHistorian().query_historical_price(
116 from_asset=from_asset,
117 to_asset=to_asset,
118 timestamp=timestamp,
119 )
120
121 def increase_asset(self, asset: Asset, amount: FVal, exchange: str) -> None:
122 if asset not in self.funds:
123 self.funds[asset] = amount
124 else:
125 self.funds[asset] += amount
126
127 getattr(self, exchange).increase_asset(asset, amount)
128
129 def decrease_asset(self, asset: Asset, amount: FVal, exchange: str) -> None:
130 assert asset in self.funds, 'Asset should exist in funds'
131 assert amount <= self.funds[asset], 'We should have enough funds to decrease asset'
132 self.funds[asset] -= amount
133
134 getattr(self, exchange).decrease_asset(asset, amount)
135
136 def get_next_ts(self) -> Tuple[Timestamp, bool, bool]:
137 current_ts = self.current_ts
138 advance_by_secs = min(self.seconds_between_trades, self.seconds_between_balance_save)
139 secs_in_future = random.randint(
140 advance_by_secs,
141 advance_by_secs + MAX_TRADE_DIFF_VARIANCE,
142 )
143 self.current_ts += secs_in_future
144
145 save_balances = False
146 if self.current_ts - self.last_balance_save_ts >= self.seconds_between_balance_save:
147 save_balances = True
148
149 make_trade = False
150 if self.current_ts - self.last_trade_ts >= self.seconds_between_trades:
151 make_trade = True
152
153 return Timestamp(current_ts), save_balances, make_trade
154
155 def create_action(self, index: int, ts: Timestamp):
156 """Create a random trade action on a random exchange depending
157 on the funds that are available in that exchange"""
158 # choose an exchange at random
159 exchange_name = random.choice(ALLOWED_EXCHANGES)
160 exchange = getattr(self, exchange_name)
161 # choose a random pair at that exchange
162 pair = exchange.choose_pair(
163 timestamp=ts,
164 price_query=self.query_historical_price,
165 )
166 print(
167 f'Creating trade {index + 1} / {self.trades_number} in {exchange_name}'
168 f' for the pair: {pair} at timestamp {ts}',
169 )
170 # depending on our funds decide on what to do. Buy/sell
171 base, quote = pair_get_assets(pair)
172 if exchange.get_balance(base) is None:
173 action_type = TradeType.BUY
174 elif exchange.get_balance(quote) is None:
175 action_type = TradeType.SELL
176 else:
177 # TODO: trade the one we have most of
178 action_type = random.choice(list(TradeType))
179
180 # if we are buying we are going to spend from the quote asset
181 if action_type == TradeType.BUY:
182 spending_asset = quote
183 else: # selling spends from the base asset
184 spending_asset = base
185 # get a spending asset amount within our per-trade equivalent range and
186 # our available funds
187 spending_usd_rate = self.query_historical_price(spending_asset, A_USD, ts)
188 max_usd_in_spending_asset = spending_usd_rate * exchange.get_balance(spending_asset)
189 max_usd_equivalent_to_spend = min(max_usd_in_spending_asset, MAX_TRADE_USD_VALUE)
190 rate = self.query_historical_price(base, quote, ts)
191 usd_to_spend = FVal(random.uniform(0.01, float(max_usd_equivalent_to_spend)))
192 amount_in_spending_asset = usd_to_spend / spending_usd_rate
193 # if we are buying then the amount is the amount of asset we bought
194 if action_type == TradeType.BUY:
195 amount = amount_in_spending_asset / rate
196 # if we are selling the amount is the spending asset amount
197 else:
198 amount = amount_in_spending_asset
199
200 quote_asset_usd_rate = self.query_historical_price(quote, A_USD, ts)
201 fee_in_quote_currency = FVal(random.uniform(0, MAX_FEE_USD_VALUE)) / quote_asset_usd_rate
202
203 # create the trade
204 trade = Trade(
205 timestamp=ts,
206 location=deserialize_location(exchange_name),
207 pair=pair,
208 trade_type=action_type,
209 amount=amount,
210 rate=rate,
211 fee=fee_in_quote_currency,
212 fee_currency=quote,
213 link='',
214 notes='',
215 )
216 logger.info(f'Created trade: {trade}')
217
218 # Adjust our global and per exchange accounting
219 if action_type == TradeType.BUY:
220 # we buy so we increase our base asset by amount
221 self.increase_asset(base, amount, exchange_name)
222 # and decrease quote by amount * rate
223 self.decrease_asset(quote, amount * rate, exchange_name)
224 else:
225 # we sell so we increase our quote asset
226 self.increase_asset(quote, amount * rate, exchange_name)
227 # and decrease our base asset
228 self.decrease_asset(base, amount, exchange_name)
229
230 # finally add it to the exchange
231 exchange.append_trade(trade)
232
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tools/data_faker/data_faker/actions.py b/tools/data_faker/data_faker/actions.py
--- a/tools/data_faker/data_faker/actions.py
+++ b/tools/data_faker/data_faker/actions.py
@@ -79,7 +79,7 @@
if asset in FIAT_CURRENCIES:
self.rotki.data.db.add_manually_tracked_balances([ManuallyTrackedBalance(
asset=asset,
- label=f'{asset.identifier} balance',
+ label=f'{asset.identifier} balance {timestamp}',
amount=value,
location=Location.BANKS,
tags=None,
@@ -104,8 +104,11 @@
current_ts, save_balances, make_trade = self.get_next_ts()
if make_trade:
- self.create_action(created_trades, current_ts)
- created_trades += 1
+ try:
+ self.create_action(created_trades, current_ts)
+ created_trades += 1
+ except Exception as e:
+ logger.error(f'failed to create trade: {e}')
if save_balances:
self.maybe_save_balances(save_ts=current_ts)
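Note that the patch only logs and skips a trade that fails to be created, so the caller has to have logging configured to see those messages. A small sketch of how a driver might surface them (the `ActionWriter` construction with its rotkehlchen and fake-exchange arguments is omitted and left hypothetical):

```python
# Sketch: make the new "failed to create trade" errors visible when running the
# data faker.  Only the logging setup below is concrete; the driver lines are
# placeholders for however the ActionWriter is actually built.
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s %(name)s %(levelname)s %(message)s',
)

# writer = ActionWriter(...)   # hypothetical construction, see the class above
# writer.generate_history()    # failed trades are now logged and skipped, not fatal
```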
|
{"golden_diff": "diff --git a/tools/data_faker/data_faker/actions.py b/tools/data_faker/data_faker/actions.py\n--- a/tools/data_faker/data_faker/actions.py\n+++ b/tools/data_faker/data_faker/actions.py\n@@ -79,7 +79,7 @@\n if asset in FIAT_CURRENCIES:\n self.rotki.data.db.add_manually_tracked_balances([ManuallyTrackedBalance(\n asset=asset,\n- label=f'{asset.identifier} balance',\n+ label=f'{asset.identifier} balance {timestamp}',\n amount=value,\n location=Location.BANKS,\n tags=None,\n@@ -104,8 +104,11 @@\n current_ts, save_balances, make_trade = self.get_next_ts()\n \n if make_trade:\n- self.create_action(created_trades, current_ts)\n- created_trades += 1\n+ try:\n+ self.create_action(created_trades, current_ts)\n+ created_trades += 1\n+ except Exception as e:\n+ logger.error(f'failed to create trade: {e}')\n \n if save_balances:\n self.maybe_save_balances(save_ts=current_ts)\n", "issue": "Show all user trades/actions in a new page\n## Problem Definition\r\n\r\nWe have the user data action but users can't actually explore it in any way in the UI. That's not good and we should definitely offer a way to display all actions/trades in a dedicated page.\r\n\r\nThat would mean showing user's:\r\n- Exchange trades\r\n- Exchange deposit/withdrawals\r\n- DEX trades (when we support them)\r\n\r\n\r\n## Considerations\r\n\r\nNumber of visible actions will be limited whether user has premium or not. Need to think a bit here on how to make this a premium feature but also available to normal users. At the moment can only think of limit in the number of trades/actions shown.\r\n\r\n## Task\r\n\r\n## Backend\r\nThe backend already has an api call for this. It's actually being used with a filter for external location for the OTC trades.\r\n\r\n## Frontend\r\n\r\nThe frontend need a new dedicated page. User tactions or something like that. And there all trades, deposits/withdrawals of all locations should be displayed. This is going to be a very long list in most cases so we should think on optimizations and how to show the data in visually appealing and also useful way.\r\n\r\nI would imagine it would make sense to have different tables for trades, different for asset movements (deposit/withdrawals) etc. And perhaps a differen component per location. 
Or simply offer filtering.\r\n\n", "before_files": [{"content": "import logging\nimport random\nfrom typing import Tuple\n\nfrom rotkehlchen.assets.asset import Asset\nfrom rotkehlchen.balances.manual import ManuallyTrackedBalance\nfrom rotkehlchen.constants.assets import A_BTC, A_EUR, A_USD, FIAT_CURRENCIES\nfrom rotkehlchen.exchanges.data_structures import Trade, TradeType\nfrom rotkehlchen.fval import FVal\nfrom rotkehlchen.history import PriceHistorian\nfrom rotkehlchen.serialization.deserialize import deserialize_location, pair_get_assets\nfrom rotkehlchen.typing import Location, Timestamp, TradePair\n\nSTARTING_TIMESTAMP = 1464739200 # 01/06/2016\nNUMBER_OF_TRADES = 5\nSTARTING_FUNDS = {A_EUR: FVal(100000), A_BTC: FVal(10)}\n\nMAX_TRADE_DIFF_VARIANCE = 14400\n\nALLOWED_EXCHANGES = ['kraken', 'binance']\nKRAKEN_PAIRS = [TradePair('ETH_EUR'), TradePair('BTC_EUR')]\n\nMAX_TRADE_USD_VALUE = FVal(100)\nMAX_FEE_USD_VALUE = 1\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass ActionWriter():\n\n def __init__(\n self,\n trades_number: int,\n seconds_between_trades: int,\n seconds_between_balance_save: int,\n rotkehlchen,\n fake_kraken,\n fake_binance,\n ):\n self.seconds_between_trades = seconds_between_trades\n self.seconds_between_balance_save = seconds_between_balance_save\n self.trades_number = trades_number\n self.current_ts = STARTING_TIMESTAMP\n self.last_trade_ts = 0\n self.last_balance_save_ts = 0\n self.funds = STARTING_FUNDS\n self.rotki = rotkehlchen\n self.kraken = fake_kraken\n self.binance = fake_binance\n\n timestamp, _, _ = self.get_next_ts()\n for asset, value in self.funds.items():\n if asset.is_fiat():\n self.rotki.data.db.add_manually_tracked_balances([ManuallyTrackedBalance(\n asset=asset,\n label=f'{asset.identifier} balance',\n amount=value,\n location=Location.BANKS,\n tags=None,\n )])\n self.rotki.query_balances(requested_save_data=True, timestamp=timestamp)\n\n # divide our starting funds between exchanges and keep a part out\n divide_by = len(ALLOWED_EXCHANGES) + 1\n for asset, value in self.funds.items():\n amount = value / divide_by\n for exchange in ALLOWED_EXCHANGES:\n timestamp, _, _ = self.get_next_ts()\n\n skip_exchange = asset in FIAT_CURRENCIES and exchange != 'kraken'\n\n if not skip_exchange:\n getattr(self, exchange).deposit(\n asset=asset,\n amount=amount,\n time=timestamp,\n )\n if asset in FIAT_CURRENCIES:\n self.rotki.data.db.add_manually_tracked_balances([ManuallyTrackedBalance(\n asset=asset,\n label=f'{asset.identifier} balance',\n amount=value,\n location=Location.BANKS,\n tags=None,\n )])\n\n self.rotki.query_balances(requested_save_data=True, timestamp=timestamp)\n self.last_balance_save_ts = timestamp\n\n def maybe_save_balances(self, save_ts: Timestamp) -> None:\n \"\"\"Maybe Save all current balances in the fake user's DB at the current timestamp\n\n If the save_ts is not after the time we save balances then nothing happens\n \"\"\"\n if save_ts - self.last_balance_save_ts < self.seconds_between_balance_save:\n return\n self.rotki.query_balances(requested_save_data=True, timestamp=save_ts)\n self.last_balance_save_ts = save_ts\n\n def generate_history(self) -> None:\n created_trades = 0\n while created_trades <= self.trades_number:\n current_ts, save_balances, make_trade = self.get_next_ts()\n\n if make_trade:\n self.create_action(created_trades, current_ts)\n created_trades += 1\n\n if save_balances:\n self.maybe_save_balances(save_ts=current_ts)\n\n @staticmethod\n def query_historical_price(from_asset: Asset, to_asset: Asset, 
timestamp: Timestamp):\n return PriceHistorian().query_historical_price(\n from_asset=from_asset,\n to_asset=to_asset,\n timestamp=timestamp,\n )\n\n def increase_asset(self, asset: Asset, amount: FVal, exchange: str) -> None:\n if asset not in self.funds:\n self.funds[asset] = amount\n else:\n self.funds[asset] += amount\n\n getattr(self, exchange).increase_asset(asset, amount)\n\n def decrease_asset(self, asset: Asset, amount: FVal, exchange: str) -> None:\n assert asset in self.funds, 'Asset should exist in funds'\n assert amount <= self.funds[asset], 'We should have enough funds to decrease asset'\n self.funds[asset] -= amount\n\n getattr(self, exchange).decrease_asset(asset, amount)\n\n def get_next_ts(self) -> Tuple[Timestamp, bool, bool]:\n current_ts = self.current_ts\n advance_by_secs = min(self.seconds_between_trades, self.seconds_between_balance_save)\n secs_in_future = random.randint(\n advance_by_secs,\n advance_by_secs + MAX_TRADE_DIFF_VARIANCE,\n )\n self.current_ts += secs_in_future\n\n save_balances = False\n if self.current_ts - self.last_balance_save_ts >= self.seconds_between_balance_save:\n save_balances = True\n\n make_trade = False\n if self.current_ts - self.last_trade_ts >= self.seconds_between_trades:\n make_trade = True\n\n return Timestamp(current_ts), save_balances, make_trade\n\n def create_action(self, index: int, ts: Timestamp):\n \"\"\"Create a random trade action on a random exchange depending\n on the funds that are available in that exchange\"\"\"\n # choose an exchange at random\n exchange_name = random.choice(ALLOWED_EXCHANGES)\n exchange = getattr(self, exchange_name)\n # choose a random pair at that exchange\n pair = exchange.choose_pair(\n timestamp=ts,\n price_query=self.query_historical_price,\n )\n print(\n f'Creating trade {index + 1} / {self.trades_number} in {exchange_name}'\n f' for the pair: {pair} at timestamp {ts}',\n )\n # depending on our funds decide on what to do. 
Buy/sell\n base, quote = pair_get_assets(pair)\n if exchange.get_balance(base) is None:\n action_type = TradeType.BUY\n elif exchange.get_balance(quote) is None:\n action_type = TradeType.SELL\n else:\n # TODO: trade the one we have most of\n action_type = random.choice(list(TradeType))\n\n # if we are buying we are going to spend from the quote asset\n if action_type == TradeType.BUY:\n spending_asset = quote\n else: # selling spends from the base asset\n spending_asset = base\n # get a spending asset amount within our per-trade equivalent range and\n # our available funds\n spending_usd_rate = self.query_historical_price(spending_asset, A_USD, ts)\n max_usd_in_spending_asset = spending_usd_rate * exchange.get_balance(spending_asset)\n max_usd_equivalent_to_spend = min(max_usd_in_spending_asset, MAX_TRADE_USD_VALUE)\n rate = self.query_historical_price(base, quote, ts)\n usd_to_spend = FVal(random.uniform(0.01, float(max_usd_equivalent_to_spend)))\n amount_in_spending_asset = usd_to_spend / spending_usd_rate\n # if we are buying then the amount is the amount of asset we bought\n if action_type == TradeType.BUY:\n amount = amount_in_spending_asset / rate\n # if we are selling the amount is the spending asset amount\n else:\n amount = amount_in_spending_asset\n\n quote_asset_usd_rate = self.query_historical_price(quote, A_USD, ts)\n fee_in_quote_currency = FVal(random.uniform(0, MAX_FEE_USD_VALUE)) / quote_asset_usd_rate\n\n # create the trade\n trade = Trade(\n timestamp=ts,\n location=deserialize_location(exchange_name),\n pair=pair,\n trade_type=action_type,\n amount=amount,\n rate=rate,\n fee=fee_in_quote_currency,\n fee_currency=quote,\n link='',\n notes='',\n )\n logger.info(f'Created trade: {trade}')\n\n # Adjust our global and per exchange accounting\n if action_type == TradeType.BUY:\n # we buy so we increase our base asset by amount\n self.increase_asset(base, amount, exchange_name)\n # and decrease quote by amount * rate\n self.decrease_asset(quote, amount * rate, exchange_name)\n else:\n # we sell so we increase our quote asset\n self.increase_asset(quote, amount * rate, exchange_name)\n # and decrease our base asset\n self.decrease_asset(base, amount, exchange_name)\n\n # finally add it to the exchange\n exchange.append_trade(trade)\n", "path": "tools/data_faker/data_faker/actions.py"}], "after_files": [{"content": "import logging\nimport random\nfrom typing import Tuple\n\nfrom rotkehlchen.assets.asset import Asset\nfrom rotkehlchen.balances.manual import ManuallyTrackedBalance\nfrom rotkehlchen.constants.assets import A_BTC, A_EUR, A_USD, FIAT_CURRENCIES\nfrom rotkehlchen.exchanges.data_structures import Trade, TradeType\nfrom rotkehlchen.fval import FVal\nfrom rotkehlchen.history import PriceHistorian\nfrom rotkehlchen.serialization.deserialize import deserialize_location, pair_get_assets\nfrom rotkehlchen.typing import Location, Timestamp, TradePair\n\nSTARTING_TIMESTAMP = 1464739200 # 01/06/2016\nNUMBER_OF_TRADES = 5\nSTARTING_FUNDS = {A_EUR: FVal(100000), A_BTC: FVal(10)}\n\nMAX_TRADE_DIFF_VARIANCE = 14400\n\nALLOWED_EXCHANGES = ['kraken', 'binance']\nKRAKEN_PAIRS = [TradePair('ETH_EUR'), TradePair('BTC_EUR')]\n\nMAX_TRADE_USD_VALUE = FVal(100)\nMAX_FEE_USD_VALUE = 1\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass ActionWriter():\n\n def __init__(\n self,\n trades_number: int,\n seconds_between_trades: int,\n seconds_between_balance_save: int,\n rotkehlchen,\n fake_kraken,\n fake_binance,\n ):\n self.seconds_between_trades = seconds_between_trades\n 
self.seconds_between_balance_save = seconds_between_balance_save\n self.trades_number = trades_number\n self.current_ts = STARTING_TIMESTAMP\n self.last_trade_ts = 0\n self.last_balance_save_ts = 0\n self.funds = STARTING_FUNDS\n self.rotki = rotkehlchen\n self.kraken = fake_kraken\n self.binance = fake_binance\n\n timestamp, _, _ = self.get_next_ts()\n for asset, value in self.funds.items():\n if asset.is_fiat():\n self.rotki.data.db.add_manually_tracked_balances([ManuallyTrackedBalance(\n asset=asset,\n label=f'{asset.identifier} balance',\n amount=value,\n location=Location.BANKS,\n tags=None,\n )])\n self.rotki.query_balances(requested_save_data=True, timestamp=timestamp)\n\n # divide our starting funds between exchanges and keep a part out\n divide_by = len(ALLOWED_EXCHANGES) + 1\n for asset, value in self.funds.items():\n amount = value / divide_by\n for exchange in ALLOWED_EXCHANGES:\n timestamp, _, _ = self.get_next_ts()\n\n skip_exchange = asset in FIAT_CURRENCIES and exchange != 'kraken'\n\n if not skip_exchange:\n getattr(self, exchange).deposit(\n asset=asset,\n amount=amount,\n time=timestamp,\n )\n if asset in FIAT_CURRENCIES:\n self.rotki.data.db.add_manually_tracked_balances([ManuallyTrackedBalance(\n asset=asset,\n label=f'{asset.identifier} balance {timestamp}',\n amount=value,\n location=Location.BANKS,\n tags=None,\n )])\n\n self.rotki.query_balances(requested_save_data=True, timestamp=timestamp)\n self.last_balance_save_ts = timestamp\n\n def maybe_save_balances(self, save_ts: Timestamp) -> None:\n \"\"\"Maybe Save all current balances in the fake user's DB at the current timestamp\n\n If the save_ts is not after the time we save balances then nothing happens\n \"\"\"\n if save_ts - self.last_balance_save_ts < self.seconds_between_balance_save:\n return\n self.rotki.query_balances(requested_save_data=True, timestamp=save_ts)\n self.last_balance_save_ts = save_ts\n\n def generate_history(self) -> None:\n created_trades = 0\n while created_trades <= self.trades_number:\n current_ts, save_balances, make_trade = self.get_next_ts()\n\n if make_trade:\n try:\n self.create_action(created_trades, current_ts)\n created_trades += 1\n except Exception as e:\n logger.error(f'failed to create trade: {e}')\n\n if save_balances:\n self.maybe_save_balances(save_ts=current_ts)\n\n @staticmethod\n def query_historical_price(from_asset: Asset, to_asset: Asset, timestamp: Timestamp):\n return PriceHistorian().query_historical_price(\n from_asset=from_asset,\n to_asset=to_asset,\n timestamp=timestamp,\n )\n\n def increase_asset(self, asset: Asset, amount: FVal, exchange: str) -> None:\n if asset not in self.funds:\n self.funds[asset] = amount\n else:\n self.funds[asset] += amount\n\n getattr(self, exchange).increase_asset(asset, amount)\n\n def decrease_asset(self, asset: Asset, amount: FVal, exchange: str) -> None:\n assert asset in self.funds, 'Asset should exist in funds'\n assert amount <= self.funds[asset], 'We should have enough funds to decrease asset'\n self.funds[asset] -= amount\n\n getattr(self, exchange).decrease_asset(asset, amount)\n\n def get_next_ts(self) -> Tuple[Timestamp, bool, bool]:\n current_ts = self.current_ts\n advance_by_secs = min(self.seconds_between_trades, self.seconds_between_balance_save)\n secs_in_future = random.randint(\n advance_by_secs,\n advance_by_secs + MAX_TRADE_DIFF_VARIANCE,\n )\n self.current_ts += secs_in_future\n\n save_balances = False\n if self.current_ts - self.last_balance_save_ts >= self.seconds_between_balance_save:\n save_balances = 
True\n\n make_trade = False\n if self.current_ts - self.last_trade_ts >= self.seconds_between_trades:\n make_trade = True\n\n return Timestamp(current_ts), save_balances, make_trade\n\n def create_action(self, index: int, ts: Timestamp):\n \"\"\"Create a random trade action on a random exchange depending\n on the funds that are available in that exchange\"\"\"\n # choose an exchange at random\n exchange_name = random.choice(ALLOWED_EXCHANGES)\n exchange = getattr(self, exchange_name)\n # choose a random pair at that exchange\n pair = exchange.choose_pair(\n timestamp=ts,\n price_query=self.query_historical_price,\n )\n print(\n f'Creating trade {index + 1} / {self.trades_number} in {exchange_name}'\n f' for the pair: {pair} at timestamp {ts}',\n )\n # depending on our funds decide on what to do. Buy/sell\n base, quote = pair_get_assets(pair)\n if exchange.get_balance(base) is None:\n action_type = TradeType.BUY\n elif exchange.get_balance(quote) is None:\n action_type = TradeType.SELL\n else:\n # TODO: trade the one we have most of\n action_type = random.choice(list(TradeType))\n\n # if we are buying we are going to spend from the quote asset\n if action_type == TradeType.BUY:\n spending_asset = quote\n else: # selling spends from the base asset\n spending_asset = base\n # get a spending asset amount within our per-trade equivalent range and\n # our available funds\n spending_usd_rate = self.query_historical_price(spending_asset, A_USD, ts)\n max_usd_in_spending_asset = spending_usd_rate * exchange.get_balance(spending_asset)\n max_usd_equivalent_to_spend = min(max_usd_in_spending_asset, MAX_TRADE_USD_VALUE)\n rate = self.query_historical_price(base, quote, ts)\n usd_to_spend = FVal(random.uniform(0.01, float(max_usd_equivalent_to_spend)))\n amount_in_spending_asset = usd_to_spend / spending_usd_rate\n # if we are buying then the amount is the amount of asset we bought\n if action_type == TradeType.BUY:\n amount = amount_in_spending_asset / rate\n # if we are selling the amount is the spending asset amount\n else:\n amount = amount_in_spending_asset\n\n quote_asset_usd_rate = self.query_historical_price(quote, A_USD, ts)\n fee_in_quote_currency = FVal(random.uniform(0, MAX_FEE_USD_VALUE)) / quote_asset_usd_rate\n\n # create the trade\n trade = Trade(\n timestamp=ts,\n location=deserialize_location(exchange_name),\n pair=pair,\n trade_type=action_type,\n amount=amount,\n rate=rate,\n fee=fee_in_quote_currency,\n fee_currency=quote,\n link='',\n notes='',\n )\n logger.info(f'Created trade: {trade}')\n\n # Adjust our global and per exchange accounting\n if action_type == TradeType.BUY:\n # we buy so we increase our base asset by amount\n self.increase_asset(base, amount, exchange_name)\n # and decrease quote by amount * rate\n self.decrease_asset(quote, amount * rate, exchange_name)\n else:\n # we sell so we increase our quote asset\n self.increase_asset(quote, amount * rate, exchange_name)\n # and decrease our base asset\n self.decrease_asset(base, amount, exchange_name)\n\n # finally add it to the exchange\n exchange.append_trade(trade)\n", "path": "tools/data_faker/data_faker/actions.py"}]}
| 3,212 | 257 |
gh_patches_debug_15313
|
rasdani/github-patches
|
git_diff
|
SciTools__cartopy-1027
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Doc: gallery code links don't need to be underlined
As an example, https://2-5282596-gh.circle-artifacts.com/0/root/project/docs/build/html/gallery/miscellanea/star_shaped_boundary.html#sphx-glr-gallery-miscellanea-star-shaped-boundary-py has all of the functions that have links underlined. This adds visual noise that distracts from the message.
We want to keep the links, but define the CSS to avoid the underlining.
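For illustration only — one way this request could be wired up on the documentation side, not the change made in this repository. The stylesheet name and the CSS selector are assumptions, and `app.add_css_file` requires Sphinx 1.8+ (older releases spell it `app.add_stylesheet`):
```python
# Hypothetical sketch: have the docs' Sphinx extension register an extra
# stylesheet that switches off underlining for gallery code links.
def setup(app):
    # Sphinx >= 1.8; on older versions the equivalent call is app.add_stylesheet.
    app.add_css_file("theme_overrides.css")

# theme_overrides.css (assumed name) would then contain a rule along the lines of:
#   div.highlight a { text-decoration: none; }
```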
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/source/sphinxext/pre_sphinx_gallery.py`
Content:
```
1 """
2 Override sphinx_gallery's treatment of groups (folders) with cartopy's
3 ``__tags__`` semantics. This is tightly bound to the sphinx_gallery
4 implementation, hence the explicit version checking.
5
6 """
7 from collections import OrderedDict
8 import os.path
9 import shutil
10 import tempfile
11 import textwrap
12
13 import sphinx_gallery.gen_gallery
14 import sphinx_gallery.gen_rst
15 from sphinx_gallery.gen_rst import (
16 write_backreferences, extract_intro, _thumbnail_div,
17 generate_file_rst, sphinx_compatibility)
18
19
20 if sphinx_gallery.__version__ not in ['0.1.12']:
21 raise RuntimeError('not tested with this version of sphinx_gallery ({}). '
22 'Please modify this check, and validate sphinx_gallery'
23 ' behaves as expected.'
24 ''.format(sphinx_gallery.__version__))
25
26
27 GALLERY_HEADER = textwrap.dedent("""
28
29 Cartopy Gallery
30 ---------------
31
32 The following visual examples demonstrate some of the functionality of
33 Cartopy, particularly its matplotlib interface.
34
35 For a structured introduction to cartopy, including some of these
36 examples, see :ref:`getting-started-with-cartopy`.
37
38 """)
39
40
41 def example_groups(src_dir):
42 """Return a dictionary of {tag: [example filenames]} for the given dir."""
43
44 sorted_listdir = [fname for fname in sorted(os.listdir(src_dir))
45 if fname.endswith('.py') and not fname.startswith('_')]
46 tagged_examples = {}
47
48 for fname in sorted_listdir:
49 fpath = os.path.join(src_dir, fname)
50 __tags__ = []
51 with open(fpath, 'r') as fh:
52 for line in fh:
53 # Crudely remove the __tags__ line.
54 if line.startswith('__tags__ = '):
55 exec(line.strip(), locals(), globals())
56 for tag in __tags__:
57 tagged_examples.setdefault(tag, []).append(fname)
58 break
59 else:
60 tag = 'Miscellanea'
61 tagged_examples.setdefault(tag, []).append(fname)
62 return tagged_examples
63
64
65 def order_examples(tagged_examples):
66 """Order the tags and their examples."""
67 preferred_tag_order = ['Introductory',
68 'Lines and polygons',
69 'Scalar data',
70 'Vector data',
71 'Web services']
72
73 def sort_key(item):
74 tag = item[0]
75 try:
76 index = preferred_tag_order.index(tag)
77 except ValueError:
78 index = len(preferred_tag_order) + 1
79
80 return (index, tag.lower())
81 sorted_items = sorted(tagged_examples.items(), key=sort_key)
82 return OrderedDict(sorted_items)
83
84
85 def write_example(src_fpath, target_dir):
86 target_fpath = os.path.join(target_dir, os.path.basename(src_fpath))
87 with open(src_fpath, 'r') as fh:
88 with open(target_fpath, 'w') as fh_out:
89 for line in fh:
90 # Crudely remove the __tags__ line.
91 if line.startswith('__tags__ = '):
92 continue
93 fh_out.write(line)
94
95
96 def generate_dir_rst(src_dir, target_dir, gallery_conf, seen_backrefs):
97 """Generate the gallery reStructuredText for an example directory"""
98
99 fhindex = GALLERY_HEADER
100
101 if not os.path.exists(target_dir):
102 os.makedirs(target_dir)
103 tagged_examples = example_groups(src_dir)
104 tagged_examples = order_examples(tagged_examples)
105
106 computation_times = []
107 build_target_dir = os.path.relpath(target_dir, gallery_conf['src_dir'])
108
109 seen = set()
110 tmp_dir = tempfile.mkdtemp()
111
112 for tag, examples in tagged_examples.items():
113 sorted_listdir = examples
114
115 entries_text = []
116 iterator = sphinx_compatibility.status_iterator(
117 sorted_listdir,
118 'Generating gallery for %s ' % tag,
119 length=len(sorted_listdir))
120 for fname in iterator:
121 write_example(os.path.join(src_dir, fname), tmp_dir)
122 amount_of_code, time_elapsed = generate_file_rst(
123 fname, target_dir, tmp_dir, gallery_conf)
124
125 if fname not in seen:
126 seen.add(fname)
127 computation_times.append((time_elapsed, fname))
128
129 new_fname = os.path.join(src_dir, fname)
130 intro = extract_intro(new_fname)
131 this_entry = _thumbnail_div(build_target_dir, fname, intro) + textwrap.dedent("""
132
133 .. toctree::
134 :hidden:
135
136 /%s
137
138 """) % os.path.join(build_target_dir, fname[:-3]).replace(os.sep, '/') # noqa: E501
139
140 entries_text.append((amount_of_code, this_entry))
141
142 if gallery_conf['backreferences_dir']:
143 write_backreferences(seen_backrefs, gallery_conf,
144 target_dir, fname, intro)
145
146 # sort to have the smallest entries in the beginning
147 entries_text.sort()
148
149 fhindex += textwrap.dedent("""
150
151 {tag}
152 {tag_underline}
153
154 .. container:: gallery_images
155
156 """.format(tag=tag, tag_underline='-' * len(tag)))
157
158 for _, entry_text in entries_text:
159 fhindex += '\n '.join(entry_text.split('\n'))
160
161 # clear at the end of the section
162 fhindex += """.. raw:: html\n
163 <div style='clear:both'></div>\n\n"""
164
165 # Tidy up the temp directory
166 shutil.rmtree(tmp_dir)
167
168 return fhindex, computation_times
169
170
171 # Monkey-patch sphinx_gallery to handle cartopy's example format.
172 sphinx_gallery.gen_rst.generate_dir_rst = generate_dir_rst
173 sphinx_gallery.gen_gallery.generate_dir_rst = generate_dir_rst
174
175
176 def setup(app):
177 pass
178
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/source/sphinxext/pre_sphinx_gallery.py b/docs/source/sphinxext/pre_sphinx_gallery.py
--- a/docs/source/sphinxext/pre_sphinx_gallery.py
+++ b/docs/source/sphinxext/pre_sphinx_gallery.py
@@ -47,13 +47,12 @@
for fname in sorted_listdir:
fpath = os.path.join(src_dir, fname)
- __tags__ = []
with open(fpath, 'r') as fh:
for line in fh:
# Crudely remove the __tags__ line.
if line.startswith('__tags__ = '):
exec(line.strip(), locals(), globals())
- for tag in __tags__:
+ for tag in __tags__: # noqa:
tagged_examples.setdefault(tag, []).append(fname)
break
else:
|
{"golden_diff": "diff --git a/docs/source/sphinxext/pre_sphinx_gallery.py b/docs/source/sphinxext/pre_sphinx_gallery.py\n--- a/docs/source/sphinxext/pre_sphinx_gallery.py\n+++ b/docs/source/sphinxext/pre_sphinx_gallery.py\n@@ -47,13 +47,12 @@\n \n for fname in sorted_listdir:\n fpath = os.path.join(src_dir, fname)\n- __tags__ = []\n with open(fpath, 'r') as fh:\n for line in fh:\n # Crudely remove the __tags__ line.\n if line.startswith('__tags__ = '):\n exec(line.strip(), locals(), globals())\n- for tag in __tags__:\n+ for tag in __tags__: # noqa:\n tagged_examples.setdefault(tag, []).append(fname)\n break\n else:\n", "issue": "Doc: gallery code links don't need to be underlined\nAs an example, https://2-5282596-gh.circle-artifacts.com/0/root/project/docs/build/html/gallery/miscellanea/star_shaped_boundary.html#sphx-glr-gallery-miscellanea-star-shaped-boundary-py has all of the functions that have links underlined. This adds visual noise that distracts from the message.\r\n\r\nWe want to keep the links, but define the CSS to avoid the underlining.\n", "before_files": [{"content": "\"\"\"\nOverride sphinx_gallery's treatment of groups (folders) with cartopy's\n``__tags__`` semantics. This is tightly bound to the sphinx_gallery\nimplementation, hence the explicit version checking.\n\n\"\"\"\nfrom collections import OrderedDict\nimport os.path\nimport shutil\nimport tempfile\nimport textwrap\n\nimport sphinx_gallery.gen_gallery\nimport sphinx_gallery.gen_rst\nfrom sphinx_gallery.gen_rst import (\n write_backreferences, extract_intro, _thumbnail_div,\n generate_file_rst, sphinx_compatibility)\n\n\nif sphinx_gallery.__version__ not in ['0.1.12']:\n raise RuntimeError('not tested with this version of sphinx_gallery ({}). '\n 'Please modify this check, and validate sphinx_gallery'\n ' behaves as expected.'\n ''.format(sphinx_gallery.__version__))\n\n\nGALLERY_HEADER = textwrap.dedent(\"\"\"\n\n Cartopy Gallery\n ---------------\n\n The following visual examples demonstrate some of the functionality of\n Cartopy, particularly its matplotlib interface.\n\n For a structured introduction to cartopy, including some of these\n examples, see :ref:`getting-started-with-cartopy`.\n\n\"\"\")\n\n\ndef example_groups(src_dir):\n \"\"\"Return a dictionary of {tag: [example filenames]} for the given dir.\"\"\"\n\n sorted_listdir = [fname for fname in sorted(os.listdir(src_dir))\n if fname.endswith('.py') and not fname.startswith('_')]\n tagged_examples = {}\n\n for fname in sorted_listdir:\n fpath = os.path.join(src_dir, fname)\n __tags__ = []\n with open(fpath, 'r') as fh:\n for line in fh:\n # Crudely remove the __tags__ line.\n if line.startswith('__tags__ = '):\n exec(line.strip(), locals(), globals())\n for tag in __tags__:\n tagged_examples.setdefault(tag, []).append(fname)\n break\n else:\n tag = 'Miscellanea'\n tagged_examples.setdefault(tag, []).append(fname)\n return tagged_examples\n\n\ndef order_examples(tagged_examples):\n \"\"\"Order the tags and their examples.\"\"\"\n preferred_tag_order = ['Introductory',\n 'Lines and polygons',\n 'Scalar data',\n 'Vector data',\n 'Web services']\n\n def sort_key(item):\n tag = item[0]\n try:\n index = preferred_tag_order.index(tag)\n except ValueError:\n index = len(preferred_tag_order) + 1\n\n return (index, tag.lower())\n sorted_items = sorted(tagged_examples.items(), key=sort_key)\n return OrderedDict(sorted_items)\n\n\ndef write_example(src_fpath, target_dir):\n target_fpath = os.path.join(target_dir, os.path.basename(src_fpath))\n with open(src_fpath, 
'r') as fh:\n with open(target_fpath, 'w') as fh_out:\n for line in fh:\n # Crudely remove the __tags__ line.\n if line.startswith('__tags__ = '):\n continue\n fh_out.write(line)\n\n\ndef generate_dir_rst(src_dir, target_dir, gallery_conf, seen_backrefs):\n \"\"\"Generate the gallery reStructuredText for an example directory\"\"\"\n\n fhindex = GALLERY_HEADER\n\n if not os.path.exists(target_dir):\n os.makedirs(target_dir)\n tagged_examples = example_groups(src_dir)\n tagged_examples = order_examples(tagged_examples)\n\n computation_times = []\n build_target_dir = os.path.relpath(target_dir, gallery_conf['src_dir'])\n\n seen = set()\n tmp_dir = tempfile.mkdtemp()\n\n for tag, examples in tagged_examples.items():\n sorted_listdir = examples\n\n entries_text = []\n iterator = sphinx_compatibility.status_iterator(\n sorted_listdir,\n 'Generating gallery for %s ' % tag,\n length=len(sorted_listdir))\n for fname in iterator:\n write_example(os.path.join(src_dir, fname), tmp_dir)\n amount_of_code, time_elapsed = generate_file_rst(\n fname, target_dir, tmp_dir, gallery_conf)\n\n if fname not in seen:\n seen.add(fname)\n computation_times.append((time_elapsed, fname))\n\n new_fname = os.path.join(src_dir, fname)\n intro = extract_intro(new_fname)\n this_entry = _thumbnail_div(build_target_dir, fname, intro) + textwrap.dedent(\"\"\"\n\n .. toctree::\n :hidden:\n\n /%s\n\n \"\"\") % os.path.join(build_target_dir, fname[:-3]).replace(os.sep, '/') # noqa: E501\n\n entries_text.append((amount_of_code, this_entry))\n\n if gallery_conf['backreferences_dir']:\n write_backreferences(seen_backrefs, gallery_conf,\n target_dir, fname, intro)\n\n # sort to have the smallest entries in the beginning\n entries_text.sort()\n\n fhindex += textwrap.dedent(\"\"\"\n\n {tag}\n {tag_underline}\n\n .. container:: gallery_images\n\n \"\"\".format(tag=tag, tag_underline='-' * len(tag)))\n\n for _, entry_text in entries_text:\n fhindex += '\\n '.join(entry_text.split('\\n'))\n\n # clear at the end of the section\n fhindex += \"\"\".. raw:: html\\n\n <div style='clear:both'></div>\\n\\n\"\"\"\n\n # Tidy up the temp directory\n shutil.rmtree(tmp_dir)\n\n return fhindex, computation_times\n\n\n# Monkey-patch sphinx_gallery to handle cartopy's example format.\nsphinx_gallery.gen_rst.generate_dir_rst = generate_dir_rst\nsphinx_gallery.gen_gallery.generate_dir_rst = generate_dir_rst\n\n\ndef setup(app):\n pass\n", "path": "docs/source/sphinxext/pre_sphinx_gallery.py"}], "after_files": [{"content": "\"\"\"\nOverride sphinx_gallery's treatment of groups (folders) with cartopy's\n``__tags__`` semantics. This is tightly bound to the sphinx_gallery\nimplementation, hence the explicit version checking.\n\n\"\"\"\nfrom collections import OrderedDict\nimport os.path\nimport shutil\nimport tempfile\nimport textwrap\n\nimport sphinx_gallery.gen_gallery\nimport sphinx_gallery.gen_rst\nfrom sphinx_gallery.gen_rst import (\n write_backreferences, extract_intro, _thumbnail_div,\n generate_file_rst, sphinx_compatibility)\n\n\nif sphinx_gallery.__version__ not in ['0.1.12']:\n raise RuntimeError('not tested with this version of sphinx_gallery ({}). 
'\n 'Please modify this check, and validate sphinx_gallery'\n ' behaves as expected.'\n ''.format(sphinx_gallery.__version__))\n\n\nGALLERY_HEADER = textwrap.dedent(\"\"\"\n\n Cartopy Gallery\n ---------------\n\n The following visual examples demonstrate some of the functionality of\n Cartopy, particularly its matplotlib interface.\n\n For a structured introduction to cartopy, including some of these\n examples, see :ref:`getting-started-with-cartopy`.\n\n\"\"\")\n\n\ndef example_groups(src_dir):\n \"\"\"Return a dictionary of {tag: [example filenames]} for the given dir.\"\"\"\n\n sorted_listdir = [fname for fname in sorted(os.listdir(src_dir))\n if fname.endswith('.py') and not fname.startswith('_')]\n tagged_examples = {}\n\n for fname in sorted_listdir:\n fpath = os.path.join(src_dir, fname)\n with open(fpath, 'r') as fh:\n for line in fh:\n # Crudely remove the __tags__ line.\n if line.startswith('__tags__ = '):\n exec(line.strip(), locals(), globals())\n for tag in __tags__: # noqa:\n tagged_examples.setdefault(tag, []).append(fname)\n break\n else:\n tag = 'Miscellanea'\n tagged_examples.setdefault(tag, []).append(fname)\n return tagged_examples\n\n\ndef order_examples(tagged_examples):\n \"\"\"Order the tags and their examples.\"\"\"\n preferred_tag_order = ['Introductory',\n 'Lines and polygons',\n 'Scalar data',\n 'Vector data',\n 'Web services']\n\n def sort_key(item):\n tag = item[0]\n try:\n index = preferred_tag_order.index(tag)\n except ValueError:\n index = len(preferred_tag_order) + 1\n\n return (index, tag.lower())\n sorted_items = sorted(tagged_examples.items(), key=sort_key)\n return OrderedDict(sorted_items)\n\n\ndef write_example(src_fpath, target_dir):\n target_fpath = os.path.join(target_dir, os.path.basename(src_fpath))\n with open(src_fpath, 'r') as fh:\n with open(target_fpath, 'w') as fh_out:\n for line in fh:\n # Crudely remove the __tags__ line.\n if line.startswith('__tags__ = '):\n continue\n fh_out.write(line)\n\n\ndef generate_dir_rst(src_dir, target_dir, gallery_conf, seen_backrefs):\n \"\"\"Generate the gallery reStructuredText for an example directory\"\"\"\n\n fhindex = GALLERY_HEADER\n\n if not os.path.exists(target_dir):\n os.makedirs(target_dir)\n tagged_examples = example_groups(src_dir)\n tagged_examples = order_examples(tagged_examples)\n\n computation_times = []\n build_target_dir = os.path.relpath(target_dir, gallery_conf['src_dir'])\n\n seen = set()\n tmp_dir = tempfile.mkdtemp()\n\n for tag, examples in tagged_examples.items():\n sorted_listdir = examples\n\n entries_text = []\n iterator = sphinx_compatibility.status_iterator(\n sorted_listdir,\n 'Generating gallery for %s ' % tag,\n length=len(sorted_listdir))\n for fname in iterator:\n write_example(os.path.join(src_dir, fname), tmp_dir)\n amount_of_code, time_elapsed = generate_file_rst(\n fname, target_dir, tmp_dir, gallery_conf)\n\n if fname not in seen:\n seen.add(fname)\n computation_times.append((time_elapsed, fname))\n\n new_fname = os.path.join(src_dir, fname)\n intro = extract_intro(new_fname)\n this_entry = _thumbnail_div(build_target_dir, fname, intro) + textwrap.dedent(\"\"\"\n\n .. 
toctree::\n :hidden:\n\n /%s\n\n \"\"\") % os.path.join(build_target_dir, fname[:-3]).replace(os.sep, '/') # noqa: E501\n\n entries_text.append((amount_of_code, this_entry))\n\n if gallery_conf['backreferences_dir']:\n write_backreferences(seen_backrefs, gallery_conf,\n target_dir, fname, intro)\n\n # sort to have the smallest entries in the beginning\n entries_text.sort()\n\n fhindex += textwrap.dedent(\"\"\"\n\n {tag}\n {tag_underline}\n\n .. container:: gallery_images\n\n \"\"\".format(tag=tag, tag_underline='-' * len(tag)))\n\n for _, entry_text in entries_text:\n fhindex += '\\n '.join(entry_text.split('\\n'))\n\n # clear at the end of the section\n fhindex += \"\"\".. raw:: html\\n\n <div style='clear:both'></div>\\n\\n\"\"\"\n\n # Tidy up the temp directory\n shutil.rmtree(tmp_dir)\n\n return fhindex, computation_times\n\n\n# Monkey-patch sphinx_gallery to handle cartopy's example format.\nsphinx_gallery.gen_rst.generate_dir_rst = generate_dir_rst\nsphinx_gallery.gen_gallery.generate_dir_rst = generate_dir_rst\n\n\ndef setup(app):\n pass\n", "path": "docs/source/sphinxext/pre_sphinx_gallery.py"}]}
| 2,027 | 179 |
gh_patches_debug_33694
|
rasdani/github-patches
|
git_diff
|
kedro-org__kedro-1467
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Project templating using TemplatedConfigLoader raises ValidationError in 0.18.0
## Description
When following the guide for [Template configuration](https://kedro.readthedocs.io/en/stable/kedro_project_setup/configuration.html#template-configuration), Kedro 0.18.0 raises ValidationError as `kedro.config.templated_config.TemplatedConfigLoader` is not recognized as a subclass of `kedro.config.config.ConfigLoader`.
## Context
This bug hinders me from using the `TemplatedConfigLoader`, according to the documentation.
## Steps to Reproduce
1. Create new Kedro project
```
kedro new --starter=pandas-iris
cd new-kedro-project
```
2. Create and activate venv. Install requirements.
```
python -m venv my_venv
source my_venv/bin/activate
pip install -r src/requirements.txt
```
3. Open the `settings.py` file.
```
nano src/new_kedro_project/settings.py
```
Uncomment lines 28,29,31,32,33 as suggested in the guide. Expected config:
```
# Class that manages how configuration is loaded.
from kedro.config import TemplatedConfigLoader
CONFIG_LOADER_CLASS = TemplatedConfigLoader
# Keyword arguments to pass to the `CONFIG_LOADER_CLASS` constructor.
CONFIG_LOADER_ARGS = {
"globals_pattern": "*globals.yml",
}
```
4. Run the default pipeline
```
kedro run
```
## Expected Result
Default pipeline should run with global config keys overwritten.
## Actual Result
ValidationError is raised.
```
Traceback (most recent call last):
File "/usr/local/bin/kedro", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.9/site-packages/kedro/framework/cli/cli.py", line 205, in main
cli_collection = KedroCLI(project_path=Path.cwd())
File "/usr/local/lib/python3.9/site-packages/kedro/framework/cli/cli.py", line 114, in __init__
self._metadata = bootstrap_project(project_path)
File "/usr/local/lib/python3.9/site-packages/kedro/framework/startup.py", line 155, in bootstrap_project
configure_project(metadata.package_name)
File "/usr/local/lib/python3.9/site-packages/kedro/framework/project/__init__.py", line 166, in configure_project
settings.configure(settings_module)
File "/usr/local/lib/python3.9/site-packages/dynaconf/base.py", line 182, in configure
self._wrapped = Settings(settings_module=settings_module, **kwargs)
File "/usr/local/lib/python3.9/site-packages/dynaconf/base.py", line 235, in __init__
self.validators.validate(
File "/usr/local/lib/python3.9/site-packages/dynaconf/validator.py", line 417, in validate
validator.validate(self.settings, only=only, exclude=exclude)
File "/usr/local/lib/python3.9/site-packages/kedro/framework/project/__init__.py", line 35, in validate
raise ValidationError(
dynaconf.validator.ValidationError: Invalid value `kedro.config.templated_config.TemplatedConfigLoader` received for setting `CONFIG_LOADER_CLASS`. It must be a subclass of `kedro.config.config.ConfigLoader`.
```
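For illustration, a minimal self-contained sketch of why this check fails: in 0.18.0 both loaders derive from a common abstract base rather than from each other. The classes below are dummies standing in for the real ones in `kedro.config`, used only to mirror that hierarchy:
```python
# Stand-in classes mirroring the hierarchy implied by the error above; they are
# assumptions for illustration, not the real kedro implementations.
class AbstractConfigLoader: ...
class ConfigLoader(AbstractConfigLoader): ...
class TemplatedConfigLoader(AbstractConfigLoader): ...

# Effectively the test performed by _IsSubclassValidator, hence the error:
print(issubclass(TemplatedConfigLoader, ConfigLoader))          # False
# A check against the shared abstract parent would accept both loaders:
print(issubclass(TemplatedConfigLoader, AbstractConfigLoader))  # True
```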
## Your Environment
* Kedro version used (`pip show kedro` or `kedro -V`): `0.18.0`
* Python version used (`python -V`): `3.9.10`
* Operating system and version: `macOS 12.3`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kedro/framework/project/__init__.py`
Content:
```
1 """``kedro.framework.project`` module provides utitlity to
2 configure a Kedro project and access its settings."""
3 # pylint: disable=redefined-outer-name,unused-argument,global-statement
4 import importlib
5 import logging.config
6 import operator
7 from collections.abc import MutableMapping
8 from typing import Any, Dict, Optional
9
10 from dynaconf import LazySettings
11 from dynaconf.validator import ValidationError, Validator
12
13 from kedro.pipeline import Pipeline
14
15
16 def _get_default_class(class_import_path):
17 module, _, class_name = class_import_path.rpartition(".")
18
19 def validator_func(settings, validators):
20 return getattr(importlib.import_module(module), class_name)
21
22 return validator_func
23
24
25 class _IsSubclassValidator(Validator):
26 """A validator to check if the supplied setting value is a subclass of the default class"""
27
28 def validate(self, settings, *args, **kwargs):
29 super().validate(settings, *args, **kwargs)
30
31 default_class = self.default(settings, self)
32 for name in self.names:
33 setting_value = getattr(settings, name)
34 if not issubclass(setting_value, default_class):
35 raise ValidationError(
36 f"Invalid value `{setting_value.__module__}.{setting_value.__qualname__}` "
37 f"received for setting `{name}`. It must be a subclass of "
38 f"`{default_class.__module__}.{default_class.__qualname__}`."
39 )
40
41
42 class _ProjectSettings(LazySettings):
43 """Define all settings available for users to configure in Kedro,
44 along with their validation rules and default values.
45 Use Dynaconf's LazySettings as base.
46 """
47
48 _CONF_SOURCE = Validator("CONF_SOURCE", default="conf")
49 _HOOKS = Validator("HOOKS", default=tuple())
50 _CONTEXT_CLASS = _IsSubclassValidator(
51 "CONTEXT_CLASS",
52 default=_get_default_class("kedro.framework.context.KedroContext"),
53 )
54 _SESSION_STORE_CLASS = _IsSubclassValidator(
55 "SESSION_STORE_CLASS",
56 default=_get_default_class("kedro.framework.session.session.BaseSessionStore"),
57 )
58 _SESSION_STORE_ARGS = Validator("SESSION_STORE_ARGS", default={})
59 _DISABLE_HOOKS_FOR_PLUGINS = Validator("DISABLE_HOOKS_FOR_PLUGINS", default=tuple())
60 _CONFIG_LOADER_CLASS = _IsSubclassValidator(
61 "CONFIG_LOADER_CLASS", default=_get_default_class("kedro.config.ConfigLoader")
62 )
63 _CONFIG_LOADER_ARGS = Validator("CONFIG_LOADER_ARGS", default={})
64 _DATA_CATALOG_CLASS = _IsSubclassValidator(
65 "DATA_CATALOG_CLASS", default=_get_default_class("kedro.io.DataCatalog")
66 )
67
68 def __init__(self, *args, **kwargs):
69
70 kwargs.update(
71 validators=[
72 self._CONF_SOURCE,
73 self._HOOKS,
74 self._CONTEXT_CLASS,
75 self._SESSION_STORE_CLASS,
76 self._SESSION_STORE_ARGS,
77 self._DISABLE_HOOKS_FOR_PLUGINS,
78 self._CONFIG_LOADER_CLASS,
79 self._CONFIG_LOADER_ARGS,
80 self._DATA_CATALOG_CLASS,
81 ]
82 )
83 super().__init__(*args, **kwargs)
84
85
86 def _load_data_wrapper(func):
87 """Wrap a method in _ProjectPipelines so that data is loaded on first access.
88 Taking inspiration from dynaconf.utils.functional.new_method_proxy
89 """
90 # pylint: disable=protected-access
91 def inner(self, *args, **kwargs):
92 self._load_data()
93 return func(self._content, *args, **kwargs)
94
95 return inner
96
97
98 class _ProjectPipelines(MutableMapping):
99 """A read-only lazy dictionary-like object to hold the project pipelines.
100 On configure it will store the pipelines module.
101 On first data access, e.g. through __getitem__, it will load the registered pipelines and merge
102 them with pipelines defined from hooks.
103 """
104
105 def __init__(self) -> None:
106 self._pipelines_module: Optional[str] = None
107 self._is_data_loaded = False
108 self._content: Dict[str, Pipeline] = {}
109
110 @staticmethod
111 def _get_pipelines_registry_callable(pipelines_module: str):
112 module_obj = importlib.import_module(pipelines_module)
113 register_pipelines = getattr(module_obj, "register_pipelines")
114 return register_pipelines
115
116 def _load_data(self):
117 """Lazily read pipelines defined in the pipelines registry module"""
118
119 # If the pipelines dictionary has not been configured with a pipelines module
120 # or if data has been loaded
121 if self._pipelines_module is None or self._is_data_loaded:
122 return
123
124 register_pipelines = self._get_pipelines_registry_callable(
125 self._pipelines_module
126 )
127 project_pipelines = register_pipelines()
128
129 self._content = project_pipelines
130 self._is_data_loaded = True
131
132 def configure(self, pipelines_module: Optional[str] = None) -> None:
133 """Configure the pipelines_module to load the pipelines dictionary.
134 Reset the data loading state so that after every `configure` call,
135 data are reloaded.
136 """
137 self._pipelines_module = pipelines_module
138 self._is_data_loaded = False
139 self._content = {}
140
141 # Dict-like interface
142 __getitem__ = _load_data_wrapper(operator.getitem)
143 __setitem__ = _load_data_wrapper(operator.setitem)
144 __delitem__ = _load_data_wrapper(operator.delitem)
145 __iter__ = _load_data_wrapper(iter)
146 __len__ = _load_data_wrapper(len)
147
148 # Presentation methods
149 __repr__ = _load_data_wrapper(repr)
150 __str__ = _load_data_wrapper(str)
151
152
153 PACKAGE_NAME = None
154 LOGGING = None
155
156 settings = _ProjectSettings()
157
158 pipelines = _ProjectPipelines()
159
160
161 def configure_project(package_name: str):
162 """Configure a Kedro project by populating its settings with values
163 defined in user's settings.py and pipeline_registry.py.
164 """
165 settings_module = f"{package_name}.settings"
166 settings.configure(settings_module)
167
168 pipelines_module = f"{package_name}.pipeline_registry"
169 pipelines.configure(pipelines_module)
170
171 # Once the project is successfully configured once, store PACKAGE_NAME as a
172 # global variable to make it easily accessible. This is used by validate_settings()
173 # below, and also by ParallelRunner on Windows, as package_name is required every
174 # time a new subprocess is spawned.
175 global PACKAGE_NAME
176 PACKAGE_NAME = package_name
177
178
179 def configure_logging(logging_config: Dict[str, Any]) -> None:
180 """Configure logging to make it available as a global variable."""
181 logging.config.dictConfig(logging_config)
182 global LOGGING
183 LOGGING = logging_config
184
185
186 def validate_settings():
187 """Eagerly validate that the settings module is importable. This is desirable to
188 surface any syntax or import errors early. In particular, without eagerly importing
189 the settings module, dynaconf would silence any import error (e.g. missing
190 dependency, missing/mislabelled pipeline), and users would instead get a cryptic
191 error message ``Expected an instance of `ConfigLoader`, got `NoneType` instead``.
192 More info on the dynaconf issue: https://github.com/rochacbruno/dynaconf/issues/460
193 """
194 importlib.import_module(f"{PACKAGE_NAME}.settings")
195
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kedro/framework/project/__init__.py b/kedro/framework/project/__init__.py
--- a/kedro/framework/project/__init__.py
+++ b/kedro/framework/project/__init__.py
@@ -39,6 +39,35 @@
)
+class _HasSharedParentClassValidator(Validator):
+ """A validator to check that the parent of the default class is an ancestor of
+ the settings value."""
+
+ def validate(self, settings, *args, **kwargs):
+ super().validate(settings, *args, **kwargs)
+
+ default_class = self.default(settings, self)
+ for name in self.names:
+ setting_value = getattr(settings, name)
+ # In the case of ConfigLoader, default_class.mro() will be:
+ # [kedro.config.config.ConfigLoader,
+ # kedro.config.abstract_config.AbstractConfigLoader,
+ # abc.ABC,
+ # object]
+ # We pick out the direct parent and check if it's in any of the ancestors of
+ # the supplied setting_value. This assumes that the direct parent is
+ # the abstract class that must be inherited from.
+ # A more general check just for a shared ancestor would be:
+ # set(default_class.mro()) & set(setting_value.mro()) - {abc.ABC, object}
+ default_class_parent = default_class.mro()[1]
+ if default_class_parent not in setting_value.mro():
+ raise ValidationError(
+ f"Invalid value `{setting_value.__module__}.{setting_value.__qualname__}` "
+ f"received for setting `{name}`. It must be a subclass of "
+ f"`{default_class_parent.__module__}.{default_class_parent.__qualname__}`."
+ )
+
+
class _ProjectSettings(LazySettings):
"""Define all settings available for users to configure in Kedro,
along with their validation rules and default values.
@@ -57,7 +86,7 @@
)
_SESSION_STORE_ARGS = Validator("SESSION_STORE_ARGS", default={})
_DISABLE_HOOKS_FOR_PLUGINS = Validator("DISABLE_HOOKS_FOR_PLUGINS", default=tuple())
- _CONFIG_LOADER_CLASS = _IsSubclassValidator(
+ _CONFIG_LOADER_CLASS = _HasSharedParentClassValidator(
"CONFIG_LOADER_CLASS", default=_get_default_class("kedro.config.ConfigLoader")
)
_CONFIG_LOADER_ARGS = Validator("CONFIG_LOADER_ARGS", default={})
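A brief usage note on the validator added above, using stand-in classes (assumed, not the real `kedro.config` ones) to show what it now accepts — the direct parent in the default class's MRO only has to appear somewhere in the configured class's MRO:
```python
# Stand-in classes for illustration only.
class AbstractConfigLoader: ...
class ConfigLoader(AbstractConfigLoader): ...
class TemplatedConfigLoader(AbstractConfigLoader): ...

# ConfigLoader.mro() here is [ConfigLoader, AbstractConfigLoader, object],
# so the "direct parent" picked out by the validator is AbstractConfigLoader.
default_class_parent = ConfigLoader.mro()[1]
print(default_class_parent in TemplatedConfigLoader.mro())  # True -> validation passes
```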
|
{"golden_diff": "diff --git a/kedro/framework/project/__init__.py b/kedro/framework/project/__init__.py\n--- a/kedro/framework/project/__init__.py\n+++ b/kedro/framework/project/__init__.py\n@@ -39,6 +39,35 @@\n )\n \n \n+class _HasSharedParentClassValidator(Validator):\n+ \"\"\"A validator to check that the parent of the default class is an ancestor of\n+ the settings value.\"\"\"\n+\n+ def validate(self, settings, *args, **kwargs):\n+ super().validate(settings, *args, **kwargs)\n+\n+ default_class = self.default(settings, self)\n+ for name in self.names:\n+ setting_value = getattr(settings, name)\n+ # In the case of ConfigLoader, default_class.mro() will be:\n+ # [kedro.config.config.ConfigLoader,\n+ # kedro.config.abstract_config.AbstractConfigLoader,\n+ # abc.ABC,\n+ # object]\n+ # We pick out the direct parent and check if it's in any of the ancestors of\n+ # the supplied setting_value. This assumes that the direct parent is\n+ # the abstract class that must be inherited from.\n+ # A more general check just for a shared ancestor would be:\n+ # set(default_class.mro()) & set(setting_value.mro()) - {abc.ABC, object}\n+ default_class_parent = default_class.mro()[1]\n+ if default_class_parent not in setting_value.mro():\n+ raise ValidationError(\n+ f\"Invalid value `{setting_value.__module__}.{setting_value.__qualname__}` \"\n+ f\"received for setting `{name}`. It must be a subclass of \"\n+ f\"`{default_class_parent.__module__}.{default_class_parent.__qualname__}`.\"\n+ )\n+\n+\n class _ProjectSettings(LazySettings):\n \"\"\"Define all settings available for users to configure in Kedro,\n along with their validation rules and default values.\n@@ -57,7 +86,7 @@\n )\n _SESSION_STORE_ARGS = Validator(\"SESSION_STORE_ARGS\", default={})\n _DISABLE_HOOKS_FOR_PLUGINS = Validator(\"DISABLE_HOOKS_FOR_PLUGINS\", default=tuple())\n- _CONFIG_LOADER_CLASS = _IsSubclassValidator(\n+ _CONFIG_LOADER_CLASS = _HasSharedParentClassValidator(\n \"CONFIG_LOADER_CLASS\", default=_get_default_class(\"kedro.config.ConfigLoader\")\n )\n _CONFIG_LOADER_ARGS = Validator(\"CONFIG_LOADER_ARGS\", default={})\n", "issue": "Project templating using TemplatedConfigLoader raises ValidationError in 0.18.0\n## Description\r\nWhen following the guide for [Template configuration](https://kedro.readthedocs.io/en/stable/kedro_project_setup/configuration.html#template-configuration), Kedro 0.18.0 raises ValidationError as `kedro.config.templated_config.TemplatedConfigLoader` is not recognized as a subclass of `kedro.config.config.ConfigLoader`.\r\n\r\n## Context\r\nThis bug hinders me from using the `TemplatedConfigLoader`, according to the documentation.\r\n\r\n## Steps to Reproduce\r\n1. Create new Kedro project\r\n```\r\nkedro new --starter=pandas-iris\r\ncd new-kedro-project\r\n```\r\n2. Create and activate venv. Install requirements.\r\n```\r\npython -m venv my_venv\r\nsource my_venv/bin/activate\r\npip install -r src/requirements.txt\r\n```\r\n3. Open the `settings.py` file.\r\n```\r\nnano src/new_kedro_project/settings.py\r\n```\r\nUncomment lines 28,29,31,32,33 as suggested in the guide. Expected config:\r\n```\r\n# Class that manages how configuration is loaded.\r\nfrom kedro.config import TemplatedConfigLoader\r\nCONFIG_LOADER_CLASS = TemplatedConfigLoader\r\n# Keyword arguments to pass to the `CONFIG_LOADER_CLASS` constructor.\r\nCONFIG_LOADER_ARGS = {\r\n \"globals_pattern\": \"*globals.yml\",\r\n}\r\n```\r\n4. 
Run the default pipeline\r\n```\r\nkedro run\r\n```\r\n\r\n## Expected Result\r\nDefeault pipeline should run with global config keys overwritten.\r\n\r\n## Actual Result\r\nValidationError is raised.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/kedro\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.9/site-packages/kedro/framework/cli/cli.py\", line 205, in main\r\n cli_collection = KedroCLI(project_path=Path.cwd())\r\n File \"/usr/local/lib/python3.9/site-packages/kedro/framework/cli/cli.py\", line 114, in __init__\r\n self._metadata = bootstrap_project(project_path)\r\n File \"/usr/local/lib/python3.9/site-packages/kedro/framework/startup.py\", line 155, in bootstrap_project\r\n configure_project(metadata.package_name)\r\n File \"/usr/local/lib/python3.9/site-packages/kedro/framework/project/__init__.py\", line 166, in configure_project\r\n settings.configure(settings_module)\r\n File \"/usr/local/lib/python3.9/site-packages/dynaconf/base.py\", line 182, in configure\r\n self._wrapped = Settings(settings_module=settings_module, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/dynaconf/base.py\", line 235, in __init__\r\n self.validators.validate(\r\n File \"/usr/local/lib/python3.9/site-packages/dynaconf/validator.py\", line 417, in validate\r\n validator.validate(self.settings, only=only, exclude=exclude)\r\n File \"/usr/local/lib/python3.9/site-packages/kedro/framework/project/__init__.py\", line 35, in validate\r\n raise ValidationError(\r\ndynaconf.validator.ValidationError: Invalid value `kedro.config.templated_config.TemplatedConfigLoader` received for setting `CONFIG_LOADER_CLASS`. It must be a subclass of `kedro.config.config.ConfigLoader`.\r\n```\r\n\r\n## Your Environment\r\n* Kedro version used (`pip show kedro` or `kedro -V`): `0.18.0`\r\n* Python version used (`python -V`): `3.9.10`\r\n* Operating system and version: `macOS 12.3`\r\n\n", "before_files": [{"content": "\"\"\"``kedro.framework.project`` module provides utitlity to\nconfigure a Kedro project and access its settings.\"\"\"\n# pylint: disable=redefined-outer-name,unused-argument,global-statement\nimport importlib\nimport logging.config\nimport operator\nfrom collections.abc import MutableMapping\nfrom typing import Any, Dict, Optional\n\nfrom dynaconf import LazySettings\nfrom dynaconf.validator import ValidationError, Validator\n\nfrom kedro.pipeline import Pipeline\n\n\ndef _get_default_class(class_import_path):\n module, _, class_name = class_import_path.rpartition(\".\")\n\n def validator_func(settings, validators):\n return getattr(importlib.import_module(module), class_name)\n\n return validator_func\n\n\nclass _IsSubclassValidator(Validator):\n \"\"\"A validator to check if the supplied setting value is a subclass of the default class\"\"\"\n\n def validate(self, settings, *args, **kwargs):\n super().validate(settings, *args, **kwargs)\n\n default_class = self.default(settings, self)\n for name in self.names:\n setting_value = getattr(settings, name)\n if not issubclass(setting_value, default_class):\n raise ValidationError(\n f\"Invalid value `{setting_value.__module__}.{setting_value.__qualname__}` \"\n f\"received for setting `{name}`. 
It must be a subclass of \"\n f\"`{default_class.__module__}.{default_class.__qualname__}`.\"\n )\n\n\nclass _ProjectSettings(LazySettings):\n \"\"\"Define all settings available for users to configure in Kedro,\n along with their validation rules and default values.\n Use Dynaconf's LazySettings as base.\n \"\"\"\n\n _CONF_SOURCE = Validator(\"CONF_SOURCE\", default=\"conf\")\n _HOOKS = Validator(\"HOOKS\", default=tuple())\n _CONTEXT_CLASS = _IsSubclassValidator(\n \"CONTEXT_CLASS\",\n default=_get_default_class(\"kedro.framework.context.KedroContext\"),\n )\n _SESSION_STORE_CLASS = _IsSubclassValidator(\n \"SESSION_STORE_CLASS\",\n default=_get_default_class(\"kedro.framework.session.session.BaseSessionStore\"),\n )\n _SESSION_STORE_ARGS = Validator(\"SESSION_STORE_ARGS\", default={})\n _DISABLE_HOOKS_FOR_PLUGINS = Validator(\"DISABLE_HOOKS_FOR_PLUGINS\", default=tuple())\n _CONFIG_LOADER_CLASS = _IsSubclassValidator(\n \"CONFIG_LOADER_CLASS\", default=_get_default_class(\"kedro.config.ConfigLoader\")\n )\n _CONFIG_LOADER_ARGS = Validator(\"CONFIG_LOADER_ARGS\", default={})\n _DATA_CATALOG_CLASS = _IsSubclassValidator(\n \"DATA_CATALOG_CLASS\", default=_get_default_class(\"kedro.io.DataCatalog\")\n )\n\n def __init__(self, *args, **kwargs):\n\n kwargs.update(\n validators=[\n self._CONF_SOURCE,\n self._HOOKS,\n self._CONTEXT_CLASS,\n self._SESSION_STORE_CLASS,\n self._SESSION_STORE_ARGS,\n self._DISABLE_HOOKS_FOR_PLUGINS,\n self._CONFIG_LOADER_CLASS,\n self._CONFIG_LOADER_ARGS,\n self._DATA_CATALOG_CLASS,\n ]\n )\n super().__init__(*args, **kwargs)\n\n\ndef _load_data_wrapper(func):\n \"\"\"Wrap a method in _ProjectPipelines so that data is loaded on first access.\n Taking inspiration from dynaconf.utils.functional.new_method_proxy\n \"\"\"\n # pylint: disable=protected-access\n def inner(self, *args, **kwargs):\n self._load_data()\n return func(self._content, *args, **kwargs)\n\n return inner\n\n\nclass _ProjectPipelines(MutableMapping):\n \"\"\"A read-only lazy dictionary-like object to hold the project pipelines.\n On configure it will store the pipelines module.\n On first data access, e.g. 
through __getitem__, it will load the registered pipelines and merge\n them with pipelines defined from hooks.\n \"\"\"\n\n def __init__(self) -> None:\n self._pipelines_module: Optional[str] = None\n self._is_data_loaded = False\n self._content: Dict[str, Pipeline] = {}\n\n @staticmethod\n def _get_pipelines_registry_callable(pipelines_module: str):\n module_obj = importlib.import_module(pipelines_module)\n register_pipelines = getattr(module_obj, \"register_pipelines\")\n return register_pipelines\n\n def _load_data(self):\n \"\"\"Lazily read pipelines defined in the pipelines registry module\"\"\"\n\n # If the pipelines dictionary has not been configured with a pipelines module\n # or if data has been loaded\n if self._pipelines_module is None or self._is_data_loaded:\n return\n\n register_pipelines = self._get_pipelines_registry_callable(\n self._pipelines_module\n )\n project_pipelines = register_pipelines()\n\n self._content = project_pipelines\n self._is_data_loaded = True\n\n def configure(self, pipelines_module: Optional[str] = None) -> None:\n \"\"\"Configure the pipelines_module to load the pipelines dictionary.\n Reset the data loading state so that after every `configure` call,\n data are reloaded.\n \"\"\"\n self._pipelines_module = pipelines_module\n self._is_data_loaded = False\n self._content = {}\n\n # Dict-like interface\n __getitem__ = _load_data_wrapper(operator.getitem)\n __setitem__ = _load_data_wrapper(operator.setitem)\n __delitem__ = _load_data_wrapper(operator.delitem)\n __iter__ = _load_data_wrapper(iter)\n __len__ = _load_data_wrapper(len)\n\n # Presentation methods\n __repr__ = _load_data_wrapper(repr)\n __str__ = _load_data_wrapper(str)\n\n\nPACKAGE_NAME = None\nLOGGING = None\n\nsettings = _ProjectSettings()\n\npipelines = _ProjectPipelines()\n\n\ndef configure_project(package_name: str):\n \"\"\"Configure a Kedro project by populating its settings with values\n defined in user's settings.py and pipeline_registry.py.\n \"\"\"\n settings_module = f\"{package_name}.settings\"\n settings.configure(settings_module)\n\n pipelines_module = f\"{package_name}.pipeline_registry\"\n pipelines.configure(pipelines_module)\n\n # Once the project is successfully configured once, store PACKAGE_NAME as a\n # global variable to make it easily accessible. This is used by validate_settings()\n # below, and also by ParallelRunner on Windows, as package_name is required every\n # time a new subprocess is spawned.\n global PACKAGE_NAME\n PACKAGE_NAME = package_name\n\n\ndef configure_logging(logging_config: Dict[str, Any]) -> None:\n \"\"\"Configure logging to make it available as a global variable.\"\"\"\n logging.config.dictConfig(logging_config)\n global LOGGING\n LOGGING = logging_config\n\n\ndef validate_settings():\n \"\"\"Eagerly validate that the settings module is importable. This is desirable to\n surface any syntax or import errors early. In particular, without eagerly importing\n the settings module, dynaconf would silence any import error (e.g. 
missing\n dependency, missing/mislabelled pipeline), and users would instead get a cryptic\n error message ``Expected an instance of `ConfigLoader`, got `NoneType` instead``.\n More info on the dynaconf issue: https://github.com/rochacbruno/dynaconf/issues/460\n \"\"\"\n importlib.import_module(f\"{PACKAGE_NAME}.settings\")\n", "path": "kedro/framework/project/__init__.py"}], "after_files": [{"content": "\"\"\"``kedro.framework.project`` module provides utitlity to\nconfigure a Kedro project and access its settings.\"\"\"\n# pylint: disable=redefined-outer-name,unused-argument,global-statement\nimport importlib\nimport logging.config\nimport operator\nfrom collections.abc import MutableMapping\nfrom typing import Any, Dict, Optional\n\nfrom dynaconf import LazySettings\nfrom dynaconf.validator import ValidationError, Validator\n\nfrom kedro.pipeline import Pipeline\n\n\ndef _get_default_class(class_import_path):\n module, _, class_name = class_import_path.rpartition(\".\")\n\n def validator_func(settings, validators):\n return getattr(importlib.import_module(module), class_name)\n\n return validator_func\n\n\nclass _IsSubclassValidator(Validator):\n \"\"\"A validator to check if the supplied setting value is a subclass of the default class\"\"\"\n\n def validate(self, settings, *args, **kwargs):\n super().validate(settings, *args, **kwargs)\n\n default_class = self.default(settings, self)\n for name in self.names:\n setting_value = getattr(settings, name)\n if not issubclass(setting_value, default_class):\n raise ValidationError(\n f\"Invalid value `{setting_value.__module__}.{setting_value.__qualname__}` \"\n f\"received for setting `{name}`. It must be a subclass of \"\n f\"`{default_class.__module__}.{default_class.__qualname__}`.\"\n )\n\n\nclass _HasSharedParentClassValidator(Validator):\n \"\"\"A validator to check that the parent of the default class is an ancestor of\n the settings value.\"\"\"\n\n def validate(self, settings, *args, **kwargs):\n super().validate(settings, *args, **kwargs)\n\n default_class = self.default(settings, self)\n for name in self.names:\n setting_value = getattr(settings, name)\n # In the case of ConfigLoader, default_class.mro() will be:\n # [kedro.config.config.ConfigLoader,\n # kedro.config.abstract_config.AbstractConfigLoader,\n # abc.ABC,\n # object]\n # We pick out the direct parent and check if it's in any of the ancestors of\n # the supplied setting_value. This assumes that the direct parent is\n # the abstract class that must be inherited from.\n # A more general check just for a shared ancestor would be:\n # set(default_class.mro()) & set(setting_value.mro()) - {abc.ABC, object}\n default_class_parent = default_class.mro()[1]\n if default_class_parent not in setting_value.mro():\n raise ValidationError(\n f\"Invalid value `{setting_value.__module__}.{setting_value.__qualname__}` \"\n f\"received for setting `{name}`. 
It must be a subclass of \"\n f\"`{default_class_parent.__module__}.{default_class_parent.__qualname__}`.\"\n )\n\n\nclass _ProjectSettings(LazySettings):\n \"\"\"Define all settings available for users to configure in Kedro,\n along with their validation rules and default values.\n Use Dynaconf's LazySettings as base.\n \"\"\"\n\n _CONF_SOURCE = Validator(\"CONF_SOURCE\", default=\"conf\")\n _HOOKS = Validator(\"HOOKS\", default=tuple())\n _CONTEXT_CLASS = _IsSubclassValidator(\n \"CONTEXT_CLASS\",\n default=_get_default_class(\"kedro.framework.context.KedroContext\"),\n )\n _SESSION_STORE_CLASS = _IsSubclassValidator(\n \"SESSION_STORE_CLASS\",\n default=_get_default_class(\"kedro.framework.session.session.BaseSessionStore\"),\n )\n _SESSION_STORE_ARGS = Validator(\"SESSION_STORE_ARGS\", default={})\n _DISABLE_HOOKS_FOR_PLUGINS = Validator(\"DISABLE_HOOKS_FOR_PLUGINS\", default=tuple())\n _CONFIG_LOADER_CLASS = _HasSharedParentClassValidator(\n \"CONFIG_LOADER_CLASS\", default=_get_default_class(\"kedro.config.ConfigLoader\")\n )\n _CONFIG_LOADER_ARGS = Validator(\"CONFIG_LOADER_ARGS\", default={})\n _DATA_CATALOG_CLASS = _IsSubclassValidator(\n \"DATA_CATALOG_CLASS\", default=_get_default_class(\"kedro.io.DataCatalog\")\n )\n\n def __init__(self, *args, **kwargs):\n\n kwargs.update(\n validators=[\n self._CONF_SOURCE,\n self._HOOKS,\n self._CONTEXT_CLASS,\n self._SESSION_STORE_CLASS,\n self._SESSION_STORE_ARGS,\n self._DISABLE_HOOKS_FOR_PLUGINS,\n self._CONFIG_LOADER_CLASS,\n self._CONFIG_LOADER_ARGS,\n self._DATA_CATALOG_CLASS,\n ]\n )\n super().__init__(*args, **kwargs)\n\n\ndef _load_data_wrapper(func):\n \"\"\"Wrap a method in _ProjectPipelines so that data is loaded on first access.\n Taking inspiration from dynaconf.utils.functional.new_method_proxy\n \"\"\"\n # pylint: disable=protected-access\n def inner(self, *args, **kwargs):\n self._load_data()\n return func(self._content, *args, **kwargs)\n\n return inner\n\n\nclass _ProjectPipelines(MutableMapping):\n \"\"\"A read-only lazy dictionary-like object to hold the project pipelines.\n On configure it will store the pipelines module.\n On first data access, e.g. 
through __getitem__, it will load the registered pipelines and merge\n them with pipelines defined from hooks.\n \"\"\"\n\n def __init__(self) -> None:\n self._pipelines_module: Optional[str] = None\n self._is_data_loaded = False\n self._content: Dict[str, Pipeline] = {}\n\n @staticmethod\n def _get_pipelines_registry_callable(pipelines_module: str):\n module_obj = importlib.import_module(pipelines_module)\n register_pipelines = getattr(module_obj, \"register_pipelines\")\n return register_pipelines\n\n def _load_data(self):\n \"\"\"Lazily read pipelines defined in the pipelines registry module\"\"\"\n\n # If the pipelines dictionary has not been configured with a pipelines module\n # or if data has been loaded\n if self._pipelines_module is None or self._is_data_loaded:\n return\n\n register_pipelines = self._get_pipelines_registry_callable(\n self._pipelines_module\n )\n project_pipelines = register_pipelines()\n\n self._content = project_pipelines\n self._is_data_loaded = True\n\n def configure(self, pipelines_module: Optional[str] = None) -> None:\n \"\"\"Configure the pipelines_module to load the pipelines dictionary.\n Reset the data loading state so that after every `configure` call,\n data are reloaded.\n \"\"\"\n self._pipelines_module = pipelines_module\n self._is_data_loaded = False\n self._content = {}\n\n # Dict-like interface\n __getitem__ = _load_data_wrapper(operator.getitem)\n __setitem__ = _load_data_wrapper(operator.setitem)\n __delitem__ = _load_data_wrapper(operator.delitem)\n __iter__ = _load_data_wrapper(iter)\n __len__ = _load_data_wrapper(len)\n\n # Presentation methods\n __repr__ = _load_data_wrapper(repr)\n __str__ = _load_data_wrapper(str)\n\n\nPACKAGE_NAME = None\nLOGGING = None\n\nsettings = _ProjectSettings()\n\npipelines = _ProjectPipelines()\n\n\ndef configure_project(package_name: str):\n \"\"\"Configure a Kedro project by populating its settings with values\n defined in user's settings.py and pipeline_registry.py.\n \"\"\"\n settings_module = f\"{package_name}.settings\"\n settings.configure(settings_module)\n\n pipelines_module = f\"{package_name}.pipeline_registry\"\n pipelines.configure(pipelines_module)\n\n # Once the project is successfully configured once, store PACKAGE_NAME as a\n # global variable to make it easily accessible. This is used by validate_settings()\n # below, and also by ParallelRunner on Windows, as package_name is required every\n # time a new subprocess is spawned.\n global PACKAGE_NAME\n PACKAGE_NAME = package_name\n\n\ndef configure_logging(logging_config: Dict[str, Any]) -> None:\n \"\"\"Configure logging to make it available as a global variable.\"\"\"\n logging.config.dictConfig(logging_config)\n global LOGGING\n LOGGING = logging_config\n\n\ndef validate_settings():\n \"\"\"Eagerly validate that the settings module is importable. This is desirable to\n surface any syntax or import errors early. In particular, without eagerly importing\n the settings module, dynaconf would silence any import error (e.g. missing\n dependency, missing/mislabelled pipeline), and users would instead get a cryptic\n error message ``Expected an instance of `ConfigLoader`, got `NoneType` instead``.\n More info on the dynaconf issue: https://github.com/rochacbruno/dynaconf/issues/460\n \"\"\"\n importlib.import_module(f\"{PACKAGE_NAME}.settings\")\n", "path": "kedro/framework/project/__init__.py"}]}
| 3,139 | 546 |
gh_patches_debug_31210
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-lint-480
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
E202 (no leading zero for octal permission) False positive
# Issue Type
- Bug report
# Ansible and Ansible Lint details
- ansible-lint installation method: Ansible Galaxy use of ansible-lint
# Desired Behaviour
ansible-lint provides a correct reason for following the lint rule
# Actual Behaviour (Bug report only)
Please give some details of what is actually happening.
Include a [minimum complete verifiable example](http://stackoverflow.com/help/mcve)
with:
- playbook
- output of running ansible-lint
- if you're getting a stack trace, output of
`ansible-playbook --syntax-check playbook`
"Numeric file permissions without leading zero can behave in unexpected ways. See http://docs.ansible.com/ansible/file_module.html" - https://docs.ansible.com/ansible-lint/rules/default_rules.html
./galaxy/downloads/tmp2REDBx/tasks/main-tasks.yml:4: [E202] Octal file permissions must contain leading zero
```
# Lets create the configuration first...
# avoid locking ourself out.
- name: Create firewalld config directories
file:
dest: "{{ item }}"
state: directory
mode: "750"
with_items:
- /etc/firewalld
- /etc/firewalld/zones
notify: firewalld__reload
```
It is not correct that "750" is mis-interpreted. "750" is a string, and is correctly interpreted as the octal permission number `750` i.e. `rwxr-x---`.
Personally I have been using the string "750", rather than remember that YAML supports octal numeric literals using a leading zero.
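For illustration, here is how the rule's own regular expressions (quoted from `OctalPermissionsRule` further below) classify the two spellings — which is why the string "750" gets reported even though the module treats it as a valid octal mode:
```python
import re

# The same expressions as in OctalPermissionsRule below.
mode_regex = re.compile(r'^\s*[0-9]+\s*$')
valid_mode_regex = re.compile(r'^\s*0[0-7]{3,4}\s*$')

for mode in ("750", "0750"):
    flagged = bool(mode_regex.match(mode)) and not valid_mode_regex.match(mode)
    print(mode, "-> flagged (E202)" if flagged else "-> accepted")
# "750"  -> flagged (E202), although the quoted string is applied as octal 750 (rwxr-x---)
# "0750" -> accepted
```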
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansiblelint/rules/OctalPermissionsRule.py`
Content:
```
1 # Copyright (c) 2013-2014 Will Thames <[email protected]>
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to deal
5 # in the Software without restriction, including without limitation the rights
6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 # copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
19 # THE SOFTWARE.
20
21 from ansiblelint import AnsibleLintRule
22 import re
23 import six
24
25
26 class OctalPermissionsRule(AnsibleLintRule):
27 id = '202'
28 shortdesc = 'Octal file permissions must contain leading zero'
29 description = (
30 'Numeric file permissions without leading zero can behave '
31 'in unexpected ways. See '
32 'http://docs.ansible.com/ansible/file_module.html'
33 )
34 severity = 'VERY_HIGH'
35 tags = ['formatting', 'ANSIBLE0009']
36 version_added = 'historic'
37
38 _modules = ['assemble', 'copy', 'file', 'ini_file', 'lineinfile',
39 'replace', 'synchronize', 'template', 'unarchive']
40
41 mode_regex = re.compile(r'^\s*[0-9]+\s*$')
42 valid_mode_regex = re.compile(r'^\s*0[0-7]{3,4}\s*$')
43
44 def is_invalid_permission(self, mode):
45 # sensible file permission modes don't
46 # have write bit set when read bit is
47 # not set and don't have execute bit set
48 # when user execute bit is not set.
49 # also, user permissions are more generous than
50 # group permissions and user and group permissions
51 # are more generous than world permissions
52
53 other_write_without_read = (mode % 8 and mode % 8 < 4 and
54 not (mode % 8 == 1 and (mode >> 6) % 2 == 1))
55 group_write_without_read = ((mode >> 3) % 8 and (mode >> 3) % 8 < 4 and
56 not ((mode >> 3) % 8 == 1 and (mode >> 6) % 2 == 1))
57 user_write_without_read = ((mode >> 6) % 8 and (mode >> 6) % 8 < 4 and
58 not (mode >> 6) % 8 == 1)
59 other_more_generous_than_group = mode % 8 > (mode >> 3) % 8
60 other_more_generous_than_user = mode % 8 > (mode >> 6) % 8
61 group_more_generous_than_user = (mode >> 3) % 8 > (mode >> 6) % 8
62
63 return (other_write_without_read or
64 group_write_without_read or
65 user_write_without_read or
66 other_more_generous_than_group or
67 other_more_generous_than_user or
68 group_more_generous_than_user)
69
70 def matchtask(self, file, task):
71 if task["action"]["__ansible_module__"] in self._modules:
72 mode = task['action'].get('mode', None)
73 if isinstance(mode, six.string_types) and self.mode_regex.match(mode):
74 return not self.valid_mode_regex.match(mode)
75 if isinstance(mode, int):
76 return self.is_invalid_permission(mode)
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lib/ansiblelint/rules/OctalPermissionsRule.py b/lib/ansiblelint/rules/OctalPermissionsRule.py
--- a/lib/ansiblelint/rules/OctalPermissionsRule.py
+++ b/lib/ansiblelint/rules/OctalPermissionsRule.py
@@ -19,13 +19,12 @@
# THE SOFTWARE.
from ansiblelint import AnsibleLintRule
-import re
import six
class OctalPermissionsRule(AnsibleLintRule):
id = '202'
- shortdesc = 'Octal file permissions must contain leading zero'
+ shortdesc = 'Octal file permissions must contain leading zero or be a string'
description = (
'Numeric file permissions without leading zero can behave '
'in unexpected ways. See '
@@ -38,9 +37,6 @@
_modules = ['assemble', 'copy', 'file', 'ini_file', 'lineinfile',
'replace', 'synchronize', 'template', 'unarchive']
- mode_regex = re.compile(r'^\s*[0-9]+\s*$')
- valid_mode_regex = re.compile(r'^\s*0[0-7]{3,4}\s*$')
-
def is_invalid_permission(self, mode):
# sensible file permission modes don't
# have write bit set when read bit is
@@ -70,7 +66,9 @@
def matchtask(self, file, task):
if task["action"]["__ansible_module__"] in self._modules:
mode = task['action'].get('mode', None)
- if isinstance(mode, six.string_types) and self.mode_regex.match(mode):
- return not self.valid_mode_regex.match(mode)
+
+ if isinstance(mode, six.string_types):
+ return False
+
if isinstance(mode, int):
return self.is_invalid_permission(mode)
|
{"golden_diff": "diff --git a/lib/ansiblelint/rules/OctalPermissionsRule.py b/lib/ansiblelint/rules/OctalPermissionsRule.py\n--- a/lib/ansiblelint/rules/OctalPermissionsRule.py\n+++ b/lib/ansiblelint/rules/OctalPermissionsRule.py\n@@ -19,13 +19,12 @@\n # THE SOFTWARE.\n \n from ansiblelint import AnsibleLintRule\n-import re\n import six\n \n \n class OctalPermissionsRule(AnsibleLintRule):\n id = '202'\n- shortdesc = 'Octal file permissions must contain leading zero'\n+ shortdesc = 'Octal file permissions must contain leading zero or be a string'\n description = (\n 'Numeric file permissions without leading zero can behave '\n 'in unexpected ways. See '\n@@ -38,9 +37,6 @@\n _modules = ['assemble', 'copy', 'file', 'ini_file', 'lineinfile',\n 'replace', 'synchronize', 'template', 'unarchive']\n \n- mode_regex = re.compile(r'^\\s*[0-9]+\\s*$')\n- valid_mode_regex = re.compile(r'^\\s*0[0-7]{3,4}\\s*$')\n-\n def is_invalid_permission(self, mode):\n # sensible file permission modes don't\n # have write bit set when read bit is\n@@ -70,7 +66,9 @@\n def matchtask(self, file, task):\n if task[\"action\"][\"__ansible_module__\"] in self._modules:\n mode = task['action'].get('mode', None)\n- if isinstance(mode, six.string_types) and self.mode_regex.match(mode):\n- return not self.valid_mode_regex.match(mode)\n+\n+ if isinstance(mode, six.string_types):\n+ return False\n+\n if isinstance(mode, int):\n return self.is_invalid_permission(mode)\n", "issue": "E202 (no leading zero for octal permission) False positive\n# Issue Type\r\n- Bug report\r\n\r\n# Ansible and Ansible Lint details\r\n\r\n- ansible-lint installation method: Ansible Galaxy use of ansible-lint\r\n\r\n# Desired Behaviour\r\n\r\nansible-lint provides correct reason to following the lint\r\n\r\n# Actual Behaviour (Bug report only)\r\n\r\nPlease give some details of what is actually happening.\r\nInclude a [minimum complete verifiable example](http://stackoverflow.com/help/mcve)\r\nwith:\r\n- playbook\r\n- output of running ansible-lint\r\n- if you're getting a stack trace, output of\r\n `ansible-playbook --syntax-check playbook`\r\n\r\n\"Numeric file permissions without leading zero can behave in unexpected ways. See http://docs.ansible.com/ansible/file_module.html\" - https://docs.ansible.com/ansible-lint/rules/default_rules.html\r\n\r\n./galaxy/downloads/tmp2REDBx/tasks/main-tasks.yml:4: [E202] Octal file permissions must contain leading zero\r\n\r\n```\r\n# Lets create the configuration first...\r\n# avoid locking ourself out.\r\n\r\n- name: Create firewalld config directories\r\n file:\r\n dest: \"{{ item }}\"\r\n state: directory\r\n mode: \"750\"\r\n with_items:\r\n - /etc/firewalld\r\n - /etc/firewalld/zones\r\n notify: firewalld__reload\r\n```\r\n\r\nIt is not correct that \"750\" is mis-interpreted. \"750\" is a string, and is correctly interpreted as the octal permission number `750` i.e. 
`rwxr-x---`.\r\n\r\nPersonally I have been using the string \"750\", rather than remember that YAML supports octal numeric literals using a leading zero.\n", "before_files": [{"content": "# Copyright (c) 2013-2014 Will Thames <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n\nfrom ansiblelint import AnsibleLintRule\nimport re\nimport six\n\n\nclass OctalPermissionsRule(AnsibleLintRule):\n id = '202'\n shortdesc = 'Octal file permissions must contain leading zero'\n description = (\n 'Numeric file permissions without leading zero can behave '\n 'in unexpected ways. See '\n 'http://docs.ansible.com/ansible/file_module.html'\n )\n severity = 'VERY_HIGH'\n tags = ['formatting', 'ANSIBLE0009']\n version_added = 'historic'\n\n _modules = ['assemble', 'copy', 'file', 'ini_file', 'lineinfile',\n 'replace', 'synchronize', 'template', 'unarchive']\n\n mode_regex = re.compile(r'^\\s*[0-9]+\\s*$')\n valid_mode_regex = re.compile(r'^\\s*0[0-7]{3,4}\\s*$')\n\n def is_invalid_permission(self, mode):\n # sensible file permission modes don't\n # have write bit set when read bit is\n # not set and don't have execute bit set\n # when user execute bit is not set.\n # also, user permissions are more generous than\n # group permissions and user and group permissions\n # are more generous than world permissions\n\n other_write_without_read = (mode % 8 and mode % 8 < 4 and\n not (mode % 8 == 1 and (mode >> 6) % 2 == 1))\n group_write_without_read = ((mode >> 3) % 8 and (mode >> 3) % 8 < 4 and\n not ((mode >> 3) % 8 == 1 and (mode >> 6) % 2 == 1))\n user_write_without_read = ((mode >> 6) % 8 and (mode >> 6) % 8 < 4 and\n not (mode >> 6) % 8 == 1)\n other_more_generous_than_group = mode % 8 > (mode >> 3) % 8\n other_more_generous_than_user = mode % 8 > (mode >> 6) % 8\n group_more_generous_than_user = (mode >> 3) % 8 > (mode >> 6) % 8\n\n return (other_write_without_read or\n group_write_without_read or\n user_write_without_read or\n other_more_generous_than_group or\n other_more_generous_than_user or\n group_more_generous_than_user)\n\n def matchtask(self, file, task):\n if task[\"action\"][\"__ansible_module__\"] in self._modules:\n mode = task['action'].get('mode', None)\n if isinstance(mode, six.string_types) and self.mode_regex.match(mode):\n return not self.valid_mode_regex.match(mode)\n if isinstance(mode, int):\n return self.is_invalid_permission(mode)\n", "path": "lib/ansiblelint/rules/OctalPermissionsRule.py"}], "after_files": [{"content": "# Copyright (c) 2013-2014 Will 
Thames <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n\nfrom ansiblelint import AnsibleLintRule\nimport six\n\n\nclass OctalPermissionsRule(AnsibleLintRule):\n id = '202'\n shortdesc = 'Octal file permissions must contain leading zero or be a string'\n description = (\n 'Numeric file permissions without leading zero can behave '\n 'in unexpected ways. See '\n 'http://docs.ansible.com/ansible/file_module.html'\n )\n severity = 'VERY_HIGH'\n tags = ['formatting', 'ANSIBLE0009']\n version_added = 'historic'\n\n _modules = ['assemble', 'copy', 'file', 'ini_file', 'lineinfile',\n 'replace', 'synchronize', 'template', 'unarchive']\n\n def is_invalid_permission(self, mode):\n # sensible file permission modes don't\n # have write bit set when read bit is\n # not set and don't have execute bit set\n # when user execute bit is not set.\n # also, user permissions are more generous than\n # group permissions and user and group permissions\n # are more generous than world permissions\n\n other_write_without_read = (mode % 8 and mode % 8 < 4 and\n not (mode % 8 == 1 and (mode >> 6) % 2 == 1))\n group_write_without_read = ((mode >> 3) % 8 and (mode >> 3) % 8 < 4 and\n not ((mode >> 3) % 8 == 1 and (mode >> 6) % 2 == 1))\n user_write_without_read = ((mode >> 6) % 8 and (mode >> 6) % 8 < 4 and\n not (mode >> 6) % 8 == 1)\n other_more_generous_than_group = mode % 8 > (mode >> 3) % 8\n other_more_generous_than_user = mode % 8 > (mode >> 6) % 8\n group_more_generous_than_user = (mode >> 3) % 8 > (mode >> 6) % 8\n\n return (other_write_without_read or\n group_write_without_read or\n user_write_without_read or\n other_more_generous_than_group or\n other_more_generous_than_user or\n group_more_generous_than_user)\n\n def matchtask(self, file, task):\n if task[\"action\"][\"__ansible_module__\"] in self._modules:\n mode = task['action'].get('mode', None)\n\n if isinstance(mode, six.string_types):\n return False\n\n if isinstance(mode, int):\n return self.is_invalid_permission(mode)\n", "path": "lib/ansiblelint/rules/OctalPermissionsRule.py"}]}
| 1,661 | 398 |
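The ansible-lint record above comes down to how a YAML `mode` value reaches Ansible's file module: a quoted "750" is parsed as octal, while a bare 750 arrives as the decimal integer 750. A minimal standalone sketch of that difference (the `effective_mode` helper is hypothetical, not ansible-lint code):

```python
# Sketch: contrast a quoted YAML mode ("750") with an unquoted one (750).
# String modes are parsed as octal by the file module, while a bare YAML
# integer arrives as decimal -- which is why E202 now targets integers only.

def effective_mode(mode_value):
    """Hypothetical helper returning the permission bits that would be applied."""
    if isinstance(mode_value, str):
        return int(mode_value, 8)   # "750" -> 0o750 (rwxr-x---)
    return mode_value               # 750 (decimal) -> 0o1356, almost never intended

print(oct(effective_mode("750")))   # 0o750
print(oct(effective_mode(750)))     # 0o1356
```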
gh_patches_debug_32269 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-491 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add mypy plugin for strawberry.union
We have a function to create union types that also adds a name and description, which are used when generating the GraphQL schema [1].
Unfortunately MyPy complains when using the annotation, since it doesn't recognise it as a type; here's an example:
```python
ExampleUnion = strawberry.union(
"ExampleUnion",
types=(
ABC,
CDE,
),
)
@strawberry.mutation
def abc(phone_number: str) -> ExampleUnion:
# ...
```
Error:
```
Variable "api.mutations.abc.ExampleUnion" is not valid as a type
```
[1] This is why we can't use typing.Union directly, as we can't pass arguments to it
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/ext/mypy_plugin.py`
Content:
```
1 from typing import Callable, Optional
2
3 from mypy.plugin import AnalyzeTypeContext, ClassDefContext, Plugin
4 from mypy.plugins import dataclasses
5 from mypy.types import Type
6
7
8 def lazy_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:
9 type_name = ctx.type.args[0]
10 type_ = ctx.api.analyze_type(type_name)
11
12 return type_
13
14
15 def private_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:
16 type_name = ctx.type.args[0]
17 type_ = ctx.api.analyze_type(type_name)
18
19 return type_
20
21
22 class StrawberryPlugin(Plugin):
23 def get_type_analyze_hook(self, fullname: str):
24 if fullname == "strawberry.lazy_type.LazyType":
25 return lazy_type_analyze_callback
26
27 if any(
28 name in fullname
29 for name in {"strawberry.private.Private", "strawberry.Private"}
30 ):
31 return private_type_analyze_callback
32
33 return None
34
35 def get_class_decorator_hook(
36 self, fullname: str
37 ) -> Optional[Callable[[ClassDefContext], None]]:
38 if any(
39 strawberry_decorator in fullname
40 for strawberry_decorator in {
41 "strawberry.type",
42 "strawberry.federation.type",
43 "strawberry.input",
44 "strawberry.interface",
45 }
46 ):
47 return dataclasses.dataclass_class_maker_callback
48 return None
49
50
51 def plugin(version: str):
52 return StrawberryPlugin
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/strawberry/ext/mypy_plugin.py b/strawberry/ext/mypy_plugin.py
--- a/strawberry/ext/mypy_plugin.py
+++ b/strawberry/ext/mypy_plugin.py
@@ -1,8 +1,23 @@
from typing import Callable, Optional
-from mypy.plugin import AnalyzeTypeContext, ClassDefContext, Plugin
+from mypy.nodes import (
+ GDEF,
+ Expression,
+ IndexExpr,
+ NameExpr,
+ SymbolTableNode,
+ TupleExpr,
+ TypeAlias,
+)
+from mypy.plugin import (
+ AnalyzeTypeContext,
+ ClassDefContext,
+ DynamicClassDefContext,
+ Plugin,
+ SemanticAnalyzerPluginInterface,
+)
from mypy.plugins import dataclasses
-from mypy.types import Type
+from mypy.types import Type, UnionType
def lazy_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:
@@ -19,7 +34,48 @@
return type_
+def _get_type_for_expr(expr: Expression, api: SemanticAnalyzerPluginInterface):
+ if isinstance(expr, NameExpr):
+ return api.named_type(expr.name)
+
+ if isinstance(expr, IndexExpr):
+ type_ = _get_type_for_expr(expr.base, api)
+ type_.args = [_get_type_for_expr(expr.index, api)]
+
+ return type_
+
+ raise ValueError(f"Unsupported expression f{type(expr)}")
+
+
+def union_hook(ctx: DynamicClassDefContext) -> None:
+ types = ctx.call.args[1]
+
+ if isinstance(types, TupleExpr):
+ type_ = UnionType(tuple(_get_type_for_expr(x, ctx.api) for x in types.items))
+
+ type_alias = TypeAlias(
+ type_,
+ fullname=ctx.api.qualified_name(ctx.name),
+ line=ctx.call.line,
+ column=ctx.call.column,
+ )
+
+ ctx.api.add_symbol_table_node(
+ ctx.name, SymbolTableNode(GDEF, type_alias, plugin_generated=False)
+ )
+
+
class StrawberryPlugin(Plugin):
+ def get_dynamic_class_hook(
+ self, fullname: str
+ ) -> Optional[Callable[[DynamicClassDefContext], None]]:
+ # TODO: investigate why we need this instead of `strawberry.union.union` on CI
+ # we have the same issue in the other hooks
+ if "strawberry.union" in fullname:
+ return union_hook
+
+ return None
+
def get_type_analyze_hook(self, fullname: str):
if fullname == "strawberry.lazy_type.LazyType":
return lazy_type_analyze_callback
|
{"golden_diff": "diff --git a/strawberry/ext/mypy_plugin.py b/strawberry/ext/mypy_plugin.py\n--- a/strawberry/ext/mypy_plugin.py\n+++ b/strawberry/ext/mypy_plugin.py\n@@ -1,8 +1,23 @@\n from typing import Callable, Optional\n \n-from mypy.plugin import AnalyzeTypeContext, ClassDefContext, Plugin\n+from mypy.nodes import (\n+ GDEF,\n+ Expression,\n+ IndexExpr,\n+ NameExpr,\n+ SymbolTableNode,\n+ TupleExpr,\n+ TypeAlias,\n+)\n+from mypy.plugin import (\n+ AnalyzeTypeContext,\n+ ClassDefContext,\n+ DynamicClassDefContext,\n+ Plugin,\n+ SemanticAnalyzerPluginInterface,\n+)\n from mypy.plugins import dataclasses\n-from mypy.types import Type\n+from mypy.types import Type, UnionType\n \n \n def lazy_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:\n@@ -19,7 +34,48 @@\n return type_\n \n \n+def _get_type_for_expr(expr: Expression, api: SemanticAnalyzerPluginInterface):\n+ if isinstance(expr, NameExpr):\n+ return api.named_type(expr.name)\n+\n+ if isinstance(expr, IndexExpr):\n+ type_ = _get_type_for_expr(expr.base, api)\n+ type_.args = [_get_type_for_expr(expr.index, api)]\n+\n+ return type_\n+\n+ raise ValueError(f\"Unsupported expression f{type(expr)}\")\n+\n+\n+def union_hook(ctx: DynamicClassDefContext) -> None:\n+ types = ctx.call.args[1]\n+\n+ if isinstance(types, TupleExpr):\n+ type_ = UnionType(tuple(_get_type_for_expr(x, ctx.api) for x in types.items))\n+\n+ type_alias = TypeAlias(\n+ type_,\n+ fullname=ctx.api.qualified_name(ctx.name),\n+ line=ctx.call.line,\n+ column=ctx.call.column,\n+ )\n+\n+ ctx.api.add_symbol_table_node(\n+ ctx.name, SymbolTableNode(GDEF, type_alias, plugin_generated=False)\n+ )\n+\n+\n class StrawberryPlugin(Plugin):\n+ def get_dynamic_class_hook(\n+ self, fullname: str\n+ ) -> Optional[Callable[[DynamicClassDefContext], None]]:\n+ # TODO: investigate why we need this instead of `strawberry.union.union` on CI\n+ # we have the same issue in the other hooks\n+ if \"strawberry.union\" in fullname:\n+ return union_hook\n+\n+ return None\n+\n def get_type_analyze_hook(self, fullname: str):\n if fullname == \"strawberry.lazy_type.LazyType\":\n return lazy_type_analyze_callback\n", "issue": "Add mypy plugin for strawberry.union\nWe have a function to create union types and also add name and description that are used when generating the GraphQL schema[1].\r\n\r\nUnfortunately MyPy complains when using the annotation, since it doesn't recognise it as a type, here's an example:\r\n\r\n```python\r\nExampleUnion = strawberry.union(\r\n \"ExampleUnion\",\r\n types=(\r\n ABC,\r\n CDE,\r\n ),\r\n)\r\n\r\[email protected]\r\ndef abc(phone_number: str) -> ExampleUnion:\r\n # ...\r\n```\r\n\r\nError:\r\n\r\n```\r\nVariable \"api.mutations.abc.ExampleUnion\" is not valid as a type\r\n```\r\n\r\n[1] This is why we can't use typing.Union directly, as we can't pass arguments to it\n", "before_files": [{"content": "from typing import Callable, Optional\n\nfrom mypy.plugin import AnalyzeTypeContext, ClassDefContext, Plugin\nfrom mypy.plugins import dataclasses\nfrom mypy.types import Type\n\n\ndef lazy_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:\n type_name = ctx.type.args[0]\n type_ = ctx.api.analyze_type(type_name)\n\n return type_\n\n\ndef private_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:\n type_name = ctx.type.args[0]\n type_ = ctx.api.analyze_type(type_name)\n\n return type_\n\n\nclass StrawberryPlugin(Plugin):\n def get_type_analyze_hook(self, fullname: str):\n if fullname == \"strawberry.lazy_type.LazyType\":\n return lazy_type_analyze_callback\n\n 
if any(\n name in fullname\n for name in {\"strawberry.private.Private\", \"strawberry.Private\"}\n ):\n return private_type_analyze_callback\n\n return None\n\n def get_class_decorator_hook(\n self, fullname: str\n ) -> Optional[Callable[[ClassDefContext], None]]:\n if any(\n strawberry_decorator in fullname\n for strawberry_decorator in {\n \"strawberry.type\",\n \"strawberry.federation.type\",\n \"strawberry.input\",\n \"strawberry.interface\",\n }\n ):\n return dataclasses.dataclass_class_maker_callback\n return None\n\n\ndef plugin(version: str):\n return StrawberryPlugin\n", "path": "strawberry/ext/mypy_plugin.py"}], "after_files": [{"content": "from typing import Callable, Optional\n\nfrom mypy.nodes import (\n GDEF,\n Expression,\n IndexExpr,\n NameExpr,\n SymbolTableNode,\n TupleExpr,\n TypeAlias,\n)\nfrom mypy.plugin import (\n AnalyzeTypeContext,\n ClassDefContext,\n DynamicClassDefContext,\n Plugin,\n SemanticAnalyzerPluginInterface,\n)\nfrom mypy.plugins import dataclasses\nfrom mypy.types import Type, UnionType\n\n\ndef lazy_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:\n type_name = ctx.type.args[0]\n type_ = ctx.api.analyze_type(type_name)\n\n return type_\n\n\ndef private_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:\n type_name = ctx.type.args[0]\n type_ = ctx.api.analyze_type(type_name)\n\n return type_\n\n\ndef _get_type_for_expr(expr: Expression, api: SemanticAnalyzerPluginInterface):\n if isinstance(expr, NameExpr):\n return api.named_type(expr.name)\n\n if isinstance(expr, IndexExpr):\n type_ = _get_type_for_expr(expr.base, api)\n type_.args = [_get_type_for_expr(expr.index, api)]\n\n return type_\n\n raise ValueError(f\"Unsupported expression f{type(expr)}\")\n\n\ndef union_hook(ctx: DynamicClassDefContext) -> None:\n types = ctx.call.args[1]\n\n if isinstance(types, TupleExpr):\n type_ = UnionType(tuple(_get_type_for_expr(x, ctx.api) for x in types.items))\n\n type_alias = TypeAlias(\n type_,\n fullname=ctx.api.qualified_name(ctx.name),\n line=ctx.call.line,\n column=ctx.call.column,\n )\n\n ctx.api.add_symbol_table_node(\n ctx.name, SymbolTableNode(GDEF, type_alias, plugin_generated=False)\n )\n\n\nclass StrawberryPlugin(Plugin):\n def get_dynamic_class_hook(\n self, fullname: str\n ) -> Optional[Callable[[DynamicClassDefContext], None]]:\n # TODO: investigate why we need this instead of `strawberry.union.union` on CI\n # we have the same issue in the other hooks\n if \"strawberry.union\" in fullname:\n return union_hook\n\n return None\n\n def get_type_analyze_hook(self, fullname: str):\n if fullname == \"strawberry.lazy_type.LazyType\":\n return lazy_type_analyze_callback\n\n if any(\n name in fullname\n for name in {\"strawberry.private.Private\", \"strawberry.Private\"}\n ):\n return private_type_analyze_callback\n\n return None\n\n def get_class_decorator_hook(\n self, fullname: str\n ) -> Optional[Callable[[ClassDefContext], None]]:\n if any(\n strawberry_decorator in fullname\n for strawberry_decorator in {\n \"strawberry.type\",\n \"strawberry.federation.type\",\n \"strawberry.input\",\n \"strawberry.interface\",\n }\n ):\n return dataclasses.dataclass_class_maker_callback\n return None\n\n\ndef plugin(version: str):\n return StrawberryPlugin\n", "path": "strawberry/ext/mypy_plugin.py"}]}
| 835 | 594 |
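With the dynamic-class hook from the strawberry record above, the pattern from the issue type-checks under mypy. A minimal sketch of that usage (the `ABC`/`CDE` field definitions are invented for illustration, and it assumes mypy is configured to load `strawberry.ext.mypy_plugin`):

```python
# Assumes mypy.ini / setup.cfg contains: plugins = strawberry.ext.mypy_plugin
import strawberry


@strawberry.type
class ABC:
    phone_number: str


@strawberry.type
class CDE:
    code: int


# The plugin's union_hook turns this call into a real type alias for ABC | CDE.
ExampleUnion = strawberry.union("ExampleUnion", types=(ABC, CDE))


@strawberry.type
class Mutation:
    @strawberry.mutation
    def abc(self, phone_number: str) -> ExampleUnion:  # no longer "not valid as a type"
        return ABC(phone_number=phone_number)
```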
gh_patches_debug_4568 | rasdani/github-patches | git_diff | medtagger__MedTagger-466 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add information about Dataset to Tasks endpoint
## Current Behavior
Tasks endpoint returns list of all Tasks **without** information about their Datasets.
## Expected Behavior
Each Task returned by this endpoint should also contain information about its Dataset.
## Tasks to do
- [ ] Add Dataset key to the Task returned by Tasks endpoint.
## Additional comment
The Dataset key will be fine only if there is also an endpoint that returns all Datasets available in MedTagger.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/medtagger/api/tasks/serializers.py`
Content:
```
1 """Module responsible for storage of serializers used in Tasks endpoints."""
2 from flask_restplus import fields
3
4 from medtagger.api import api
5 from medtagger.definitions import LabelTool
6
7 out__label_tag = api.model('Label Tag model', {
8 'key': fields.String(),
9 'name': fields.String(),
10 'actions_ids': fields.List(fields.Integer(),
11 attribute=lambda label_tag: [action.id for action in label_tag.actions]),
12 'tools': fields.List(fields.String(), description='Available tools for Label Tag',
13 enum=[tool.name for tool in LabelTool],
14 attribute=lambda label_tag: [tool.name for tool in label_tag.tools]),
15 })
16
17 in__label_tag = api.model('Label Tag model', {
18 'key': fields.String(),
19 'name': fields.String(),
20 'actions_ids': fields.List(fields.Integer()),
21 'tools': fields.List(fields.String(), description='Available tools for Label Tag',
22 enum=[tool.name for tool in LabelTool]),
23 })
24
25 out__task = api.model('Task model', {
26 'key': fields.String(),
27 'name': fields.String(),
28 'image_path': fields.String(),
29 'tags': fields.List(fields.Nested(out__label_tag), attribute='available_tags'),
30 })
31
32 in__task = api.model('New Task model', {
33 'key': fields.String(),
34 'name': fields.String(),
35 'image_path': fields.String(),
36 'datasets_keys': fields.List(fields.String()),
37 'tags': fields.List(fields.Nested(in__label_tag), attribute='available_tags'),
38 })
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/backend/medtagger/api/tasks/serializers.py b/backend/medtagger/api/tasks/serializers.py
--- a/backend/medtagger/api/tasks/serializers.py
+++ b/backend/medtagger/api/tasks/serializers.py
@@ -27,6 +27,7 @@
'name': fields.String(),
'image_path': fields.String(),
'tags': fields.List(fields.Nested(out__label_tag), attribute='available_tags'),
+ 'datasets_keys': fields.List(fields.String(), attribute=lambda task: [dataset.key for dataset in task.datasets]),
})
in__task = api.model('New Task model', {
|
{"golden_diff": "diff --git a/backend/medtagger/api/tasks/serializers.py b/backend/medtagger/api/tasks/serializers.py\n--- a/backend/medtagger/api/tasks/serializers.py\n+++ b/backend/medtagger/api/tasks/serializers.py\n@@ -27,6 +27,7 @@\n 'name': fields.String(),\n 'image_path': fields.String(),\n 'tags': fields.List(fields.Nested(out__label_tag), attribute='available_tags'),\n+ 'datasets_keys': fields.List(fields.String(), attribute=lambda task: [dataset.key for dataset in task.datasets]),\n })\n \n in__task = api.model('New Task model', {\n", "issue": "Add information about Dataset to Tasks endpoint\n## Current Behavior\r\n\r\nTasks endpoint returns list of all Tasks **without** information about their Datasets.\r\n\r\n## Expected Behavior\r\n\r\nEach Task returned by this endpoint should also contain information about its Dataset.\r\n\r\n## Tasks to do\r\n\r\n- [ ] Add Dataset key to the Task returned by Tasks endpoint.\r\n\r\n## Additional comment\r\n\r\nDataset key will be fine only if there will be an endpoint that will return all Datasets available in MedTagger.\n", "before_files": [{"content": "\"\"\"Module responsible for storage of serializers used in Tasks endpoints.\"\"\"\nfrom flask_restplus import fields\n\nfrom medtagger.api import api\nfrom medtagger.definitions import LabelTool\n\nout__label_tag = api.model('Label Tag model', {\n 'key': fields.String(),\n 'name': fields.String(),\n 'actions_ids': fields.List(fields.Integer(),\n attribute=lambda label_tag: [action.id for action in label_tag.actions]),\n 'tools': fields.List(fields.String(), description='Available tools for Label Tag',\n enum=[tool.name for tool in LabelTool],\n attribute=lambda label_tag: [tool.name for tool in label_tag.tools]),\n})\n\nin__label_tag = api.model('Label Tag model', {\n 'key': fields.String(),\n 'name': fields.String(),\n 'actions_ids': fields.List(fields.Integer()),\n 'tools': fields.List(fields.String(), description='Available tools for Label Tag',\n enum=[tool.name for tool in LabelTool]),\n})\n\nout__task = api.model('Task model', {\n 'key': fields.String(),\n 'name': fields.String(),\n 'image_path': fields.String(),\n 'tags': fields.List(fields.Nested(out__label_tag), attribute='available_tags'),\n})\n\nin__task = api.model('New Task model', {\n 'key': fields.String(),\n 'name': fields.String(),\n 'image_path': fields.String(),\n 'datasets_keys': fields.List(fields.String()),\n 'tags': fields.List(fields.Nested(in__label_tag), attribute='available_tags'),\n})\n", "path": "backend/medtagger/api/tasks/serializers.py"}], "after_files": [{"content": "\"\"\"Module responsible for storage of serializers used in Tasks endpoints.\"\"\"\nfrom flask_restplus import fields\n\nfrom medtagger.api import api\nfrom medtagger.definitions import LabelTool\n\nout__label_tag = api.model('Label Tag model', {\n 'key': fields.String(),\n 'name': fields.String(),\n 'actions_ids': fields.List(fields.Integer(),\n attribute=lambda label_tag: [action.id for action in label_tag.actions]),\n 'tools': fields.List(fields.String(), description='Available tools for Label Tag',\n enum=[tool.name for tool in LabelTool],\n attribute=lambda label_tag: [tool.name for tool in label_tag.tools]),\n})\n\nin__label_tag = api.model('Label Tag model', {\n 'key': fields.String(),\n 'name': fields.String(),\n 'actions_ids': fields.List(fields.Integer()),\n 'tools': fields.List(fields.String(), description='Available tools for Label Tag',\n enum=[tool.name for tool in LabelTool]),\n})\n\nout__task = api.model('Task model', {\n 'key': 
fields.String(),\n 'name': fields.String(),\n 'image_path': fields.String(),\n 'tags': fields.List(fields.Nested(out__label_tag), attribute='available_tags'),\n 'datasets_keys': fields.List(fields.String(), attribute=lambda task: [dataset.key for dataset in task.datasets]),\n})\n\nin__task = api.model('New Task model', {\n 'key': fields.String(),\n 'name': fields.String(),\n 'image_path': fields.String(),\n 'datasets_keys': fields.List(fields.String()),\n 'tags': fields.List(fields.Nested(in__label_tag), attribute='available_tags'),\n})\n", "path": "backend/medtagger/api/tasks/serializers.py"}]}
| 750 | 140 |
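The MedTagger fix above relies on flask-restplus accepting a callable `attribute` that is resolved at marshalling time. A minimal sketch of that mechanism (the `Task`/`Dataset` stand-ins and key names are made up):

```python
from collections import namedtuple

from flask_restplus import fields, marshal

Dataset = namedtuple("Dataset", "key")
Task = namedtuple("Task", "key name datasets")

task_fields = {
    "key": fields.String(),
    "name": fields.String(),
    # Same pattern as the patched serializer: pull the keys out of task.datasets.
    "datasets_keys": fields.List(
        fields.String(),
        attribute=lambda task: [dataset.key for dataset in task.datasets],
    ),
}

task = Task("MARK_KIDNEYS", "Mark kidneys", [Dataset("KIDNEYS"), Dataset("LUNGS")])
print(dict(marshal(task, task_fields)))
# {'key': 'MARK_KIDNEYS', 'name': 'Mark kidneys', 'datasets_keys': ['KIDNEYS', 'LUNGS']}
```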
gh_patches_debug_36259 | rasdani/github-patches | git_diff | piskvorky__gensim-1833 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BM25 : Incorrect scoring function
https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/summarization/bm25.py
**Instead of "len(document)" it should be the length of the _index_ document of the corpus.**
```
def get_score(self, document, index, average_idf):
# in this line it should be the length of the index document in the corpus
score += (idf * self.f[index][word] * (PARAM_K1 + 1)
/ (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * len(document) / self.avgdl)))
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gensim/summarization/bm25.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html
5
6 """This module contains function of computing rank scores for documents in
7 corpus and helper class `BM25` used in calculations. Original alhorithm
8 descibed in [1]_, also you may check Wikipedia page [2]_.
9
10
11 .. [1] Robertson, Stephen; Zaragoza, Hugo (2009). The Probabilistic Relevance Framework: BM25 and Beyond,
12 http://www.staff.city.ac.uk/~sb317/papers/foundations_bm25_review.pdf
13 .. [2] Okapi BM25 on Wikipedia, https://en.wikipedia.org/wiki/Okapi_BM25
14
15
16
17 Examples
18 --------
19 >>> from gensim.summarization.bm25 import get_bm25_weights
20 >>> corpus = [
21 ... ["black", "cat", "white", "cat"],
22 ... ["cat", "outer", "space"],
23 ... ["wag", "dog"]
24 ... ]
25 >>> result = get_bm25_weights(corpus)
26
27
28 Data:
29 -----
30 .. data:: PARAM_K1 - Free smoothing parameter for BM25.
31 .. data:: PARAM_B - Free smoothing parameter for BM25.
32 .. data:: EPSILON - Constant used for negative idf of document in corpus.
33
34 """
35
36
37 import math
38 from six import iteritems
39 from six.moves import xrange
40
41
42 PARAM_K1 = 1.5
43 PARAM_B = 0.75
44 EPSILON = 0.25
45
46
47 class BM25(object):
48 """Implementation of Best Matching 25 ranking function.
49
50 Attributes
51 ----------
52 corpus_size : int
53 Size of corpus (number of documents).
54 avgdl : float
55 Average length of document in `corpus`.
56 corpus : list of list of str
57 Corpus of documents.
58 f : list of dicts of int
59 Dictionary with terms frequencies for each document in `corpus`. Words used as keys and frequencies as values.
60 df : dict
61 Dictionary with terms frequencies for whole `corpus`. Words used as keys and frequencies as values.
62 idf : dict
63 Dictionary with inversed terms frequencies for whole `corpus`. Words used as keys and frequencies as values.
64
65 """
66
67 def __init__(self, corpus):
68 """
69 Parameters
70 ----------
71 corpus : list of list of str
72 Given corpus.
73
74 """
75 self.corpus_size = len(corpus)
76 self.avgdl = sum(float(len(x)) for x in corpus) / self.corpus_size
77 self.corpus = corpus
78 self.f = []
79 self.df = {}
80 self.idf = {}
81 self.initialize()
82
83 def initialize(self):
84 """Calculates frequencies of terms in documents and in corpus. Also computes inverse document frequencies."""
85 for document in self.corpus:
86 frequencies = {}
87 for word in document:
88 if word not in frequencies:
89 frequencies[word] = 0
90 frequencies[word] += 1
91 self.f.append(frequencies)
92
93 for word, freq in iteritems(frequencies):
94 if word not in self.df:
95 self.df[word] = 0
96 self.df[word] += 1
97
98 for word, freq in iteritems(self.df):
99 self.idf[word] = math.log(self.corpus_size - freq + 0.5) - math.log(freq + 0.5)
100
101 def get_score(self, document, index, average_idf):
102 """Computes BM25 score of given `document` in relation to item of corpus selected by `index`.
103
104 Parameters
105 ----------
106 document : list of str
107 Document to be scored.
108 index : int
109 Index of document in corpus selected to score with `document`.
110 average_idf : float
111 Average idf in corpus.
112
113 Returns
114 -------
115 float
116 BM25 score.
117
118 """
119 score = 0
120 for word in document:
121 if word not in self.f[index]:
122 continue
123 idf = self.idf[word] if self.idf[word] >= 0 else EPSILON * average_idf
124 score += (idf * self.f[index][word] * (PARAM_K1 + 1)
125 / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * len(document) / self.avgdl)))
126 return score
127
128 def get_scores(self, document, average_idf):
129 """Computes and returns BM25 scores of given `document` in relation to
130 every item in corpus.
131
132 Parameters
133 ----------
134 document : list of str
135 Document to be scored.
136 average_idf : float
137 Average idf in corpus.
138
139 Returns
140 -------
141 list of float
142 BM25 scores.
143
144 """
145 scores = []
146 for index in xrange(self.corpus_size):
147 score = self.get_score(document, index, average_idf)
148 scores.append(score)
149 return scores
150
151
152 def get_bm25_weights(corpus):
153 """Returns BM25 scores (weights) of documents in corpus.
154 Each document has to be weighted with every document in given corpus.
155
156 Parameters
157 ----------
158 corpus : list of list of str
159 Corpus of documents.
160
161 Returns
162 -------
163 list of list of float
164 BM25 scores.
165
166 Examples
167 --------
168 >>> from gensim.summarization.bm25 import get_bm25_weights
169 >>> corpus = [
170 ... ["black", "cat", "white", "cat"],
171 ... ["cat", "outer", "space"],
172 ... ["wag", "dog"]
173 ... ]
174 >>> result = get_bm25_weights(corpus)
175
176 """
177 bm25 = BM25(corpus)
178 average_idf = sum(float(val) for val in bm25.idf.values()) / len(bm25.idf)
179
180 weights = []
181 for doc in corpus:
182 scores = bm25.get_scores(doc, average_idf)
183 weights.append(scores)
184
185 return weights
186
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gensim/summarization/bm25.py b/gensim/summarization/bm25.py
--- a/gensim/summarization/bm25.py
+++ b/gensim/summarization/bm25.py
@@ -4,7 +4,7 @@
# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html
"""This module contains function of computing rank scores for documents in
-corpus and helper class `BM25` used in calculations. Original alhorithm
+corpus and helper class `BM25` used in calculations. Original algorithm
descibed in [1]_, also you may check Wikipedia page [2]_.
@@ -61,7 +61,8 @@
Dictionary with terms frequencies for whole `corpus`. Words used as keys and frequencies as values.
idf : dict
Dictionary with inversed terms frequencies for whole `corpus`. Words used as keys and frequencies as values.
-
+ doc_len : list of int
+ List of document lengths.
"""
def __init__(self, corpus):
@@ -78,12 +79,14 @@
self.f = []
self.df = {}
self.idf = {}
+ self.doc_len = []
self.initialize()
def initialize(self):
"""Calculates frequencies of terms in documents and in corpus. Also computes inverse document frequencies."""
for document in self.corpus:
frequencies = {}
+ self.doc_len.append(len(document))
for word in document:
if word not in frequencies:
frequencies[word] = 0
@@ -122,7 +125,7 @@
continue
idf = self.idf[word] if self.idf[word] >= 0 else EPSILON * average_idf
score += (idf * self.f[index][word] * (PARAM_K1 + 1)
- / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * len(document) / self.avgdl)))
+ / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * self.doc_len[index] / self.avgdl)))
return score
def get_scores(self, document, average_idf):
|
{"golden_diff": "diff --git a/gensim/summarization/bm25.py b/gensim/summarization/bm25.py\n--- a/gensim/summarization/bm25.py\n+++ b/gensim/summarization/bm25.py\n@@ -4,7 +4,7 @@\n # Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n \n \"\"\"This module contains function of computing rank scores for documents in\n-corpus and helper class `BM25` used in calculations. Original alhorithm\n+corpus and helper class `BM25` used in calculations. Original algorithm\n descibed in [1]_, also you may check Wikipedia page [2]_.\n \n \n@@ -61,7 +61,8 @@\n Dictionary with terms frequencies for whole `corpus`. Words used as keys and frequencies as values.\n idf : dict\n Dictionary with inversed terms frequencies for whole `corpus`. Words used as keys and frequencies as values.\n-\n+ doc_len : list of int\n+ List of document lengths.\n \"\"\"\n \n def __init__(self, corpus):\n@@ -78,12 +79,14 @@\n self.f = []\n self.df = {}\n self.idf = {}\n+ self.doc_len = []\n self.initialize()\n \n def initialize(self):\n \"\"\"Calculates frequencies of terms in documents and in corpus. Also computes inverse document frequencies.\"\"\"\n for document in self.corpus:\n frequencies = {}\n+ self.doc_len.append(len(document))\n for word in document:\n if word not in frequencies:\n frequencies[word] = 0\n@@ -122,7 +125,7 @@\n continue\n idf = self.idf[word] if self.idf[word] >= 0 else EPSILON * average_idf\n score += (idf * self.f[index][word] * (PARAM_K1 + 1)\n- / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * len(document) / self.avgdl)))\n+ / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * self.doc_len[index] / self.avgdl)))\n return score\n \n def get_scores(self, document, average_idf):\n", "issue": "BM25 : Incorrect scoring function\nhttps://github.com/RaRe-Technologies/gensim/blob/develop/gensim/summarization/bm25.py\r\n**Instead of \"len(document)\" it should be the length of the _index_ document of the corpus.**\r\n \r\n```\r\n def get_score(self, document, index, average_idf): \r\n # in this line it should be the length of the index document in the corpus\r\n score += (idf * self.f[index][word] * (PARAM_K1 + 1)\r\n / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * len(document) / self.avgdl)))\r\n```\r\n\nBM25 : Incorrect scoring function\nhttps://github.com/RaRe-Technologies/gensim/blob/develop/gensim/summarization/bm25.py\r\n**Instead of \"len(document)\" it should be the length of the _index_ document of the corpus.**\r\n \r\n```\r\n def get_score(self, document, index, average_idf): \r\n # in this line it should be the length of the index document in the corpus\r\n score += (idf * self.f[index][word] * (PARAM_K1 + 1)\r\n / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * len(document) / self.avgdl)))\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\n\"\"\"This module contains function of computing rank scores for documents in\ncorpus and helper class `BM25` used in calculations. Original alhorithm\ndescibed in [1]_, also you may check Wikipedia page [2]_.\n\n\n.. [1] Robertson, Stephen; Zaragoza, Hugo (2009). The Probabilistic Relevance Framework: BM25 and Beyond,\n http://www.staff.city.ac.uk/~sb317/papers/foundations_bm25_review.pdf\n.. [2] Okapi BM25 on Wikipedia, https://en.wikipedia.org/wiki/Okapi_BM25\n\n\n\nExamples\n--------\n>>> from gensim.summarization.bm25 import get_bm25_weights\n>>> corpus = [\n... 
[\"black\", \"cat\", \"white\", \"cat\"],\n... [\"cat\", \"outer\", \"space\"],\n... [\"wag\", \"dog\"]\n... ]\n>>> result = get_bm25_weights(corpus)\n\n\nData:\n-----\n.. data:: PARAM_K1 - Free smoothing parameter for BM25.\n.. data:: PARAM_B - Free smoothing parameter for BM25.\n.. data:: EPSILON - Constant used for negative idf of document in corpus.\n\n\"\"\"\n\n\nimport math\nfrom six import iteritems\nfrom six.moves import xrange\n\n\nPARAM_K1 = 1.5\nPARAM_B = 0.75\nEPSILON = 0.25\n\n\nclass BM25(object):\n \"\"\"Implementation of Best Matching 25 ranking function.\n\n Attributes\n ----------\n corpus_size : int\n Size of corpus (number of documents).\n avgdl : float\n Average length of document in `corpus`.\n corpus : list of list of str\n Corpus of documents.\n f : list of dicts of int\n Dictionary with terms frequencies for each document in `corpus`. Words used as keys and frequencies as values.\n df : dict\n Dictionary with terms frequencies for whole `corpus`. Words used as keys and frequencies as values.\n idf : dict\n Dictionary with inversed terms frequencies for whole `corpus`. Words used as keys and frequencies as values.\n\n \"\"\"\n\n def __init__(self, corpus):\n \"\"\"\n Parameters\n ----------\n corpus : list of list of str\n Given corpus.\n\n \"\"\"\n self.corpus_size = len(corpus)\n self.avgdl = sum(float(len(x)) for x in corpus) / self.corpus_size\n self.corpus = corpus\n self.f = []\n self.df = {}\n self.idf = {}\n self.initialize()\n\n def initialize(self):\n \"\"\"Calculates frequencies of terms in documents and in corpus. Also computes inverse document frequencies.\"\"\"\n for document in self.corpus:\n frequencies = {}\n for word in document:\n if word not in frequencies:\n frequencies[word] = 0\n frequencies[word] += 1\n self.f.append(frequencies)\n\n for word, freq in iteritems(frequencies):\n if word not in self.df:\n self.df[word] = 0\n self.df[word] += 1\n\n for word, freq in iteritems(self.df):\n self.idf[word] = math.log(self.corpus_size - freq + 0.5) - math.log(freq + 0.5)\n\n def get_score(self, document, index, average_idf):\n \"\"\"Computes BM25 score of given `document` in relation to item of corpus selected by `index`.\n\n Parameters\n ----------\n document : list of str\n Document to be scored.\n index : int\n Index of document in corpus selected to score with `document`.\n average_idf : float\n Average idf in corpus.\n\n Returns\n -------\n float\n BM25 score.\n\n \"\"\"\n score = 0\n for word in document:\n if word not in self.f[index]:\n continue\n idf = self.idf[word] if self.idf[word] >= 0 else EPSILON * average_idf\n score += (idf * self.f[index][word] * (PARAM_K1 + 1)\n / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * len(document) / self.avgdl)))\n return score\n\n def get_scores(self, document, average_idf):\n \"\"\"Computes and returns BM25 scores of given `document` in relation to\n every item in corpus.\n\n Parameters\n ----------\n document : list of str\n Document to be scored.\n average_idf : float\n Average idf in corpus.\n\n Returns\n -------\n list of float\n BM25 scores.\n\n \"\"\"\n scores = []\n for index in xrange(self.corpus_size):\n score = self.get_score(document, index, average_idf)\n scores.append(score)\n return scores\n\n\ndef get_bm25_weights(corpus):\n \"\"\"Returns BM25 scores (weights) of documents in corpus.\n Each document has to be weighted with every document in given corpus.\n\n Parameters\n ----------\n corpus : list of list of str\n Corpus of documents.\n\n Returns\n -------\n list of list of 
float\n BM25 scores.\n\n Examples\n --------\n >>> from gensim.summarization.bm25 import get_bm25_weights\n >>> corpus = [\n ... [\"black\", \"cat\", \"white\", \"cat\"],\n ... [\"cat\", \"outer\", \"space\"],\n ... [\"wag\", \"dog\"]\n ... ]\n >>> result = get_bm25_weights(corpus)\n\n \"\"\"\n bm25 = BM25(corpus)\n average_idf = sum(float(val) for val in bm25.idf.values()) / len(bm25.idf)\n\n weights = []\n for doc in corpus:\n scores = bm25.get_scores(doc, average_idf)\n weights.append(scores)\n\n return weights\n", "path": "gensim/summarization/bm25.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\n\"\"\"This module contains function of computing rank scores for documents in\ncorpus and helper class `BM25` used in calculations. Original algorithm\ndescibed in [1]_, also you may check Wikipedia page [2]_.\n\n\n.. [1] Robertson, Stephen; Zaragoza, Hugo (2009). The Probabilistic Relevance Framework: BM25 and Beyond,\n http://www.staff.city.ac.uk/~sb317/papers/foundations_bm25_review.pdf\n.. [2] Okapi BM25 on Wikipedia, https://en.wikipedia.org/wiki/Okapi_BM25\n\n\n\nExamples\n--------\n>>> from gensim.summarization.bm25 import get_bm25_weights\n>>> corpus = [\n... [\"black\", \"cat\", \"white\", \"cat\"],\n... [\"cat\", \"outer\", \"space\"],\n... [\"wag\", \"dog\"]\n... ]\n>>> result = get_bm25_weights(corpus)\n\n\nData:\n-----\n.. data:: PARAM_K1 - Free smoothing parameter for BM25.\n.. data:: PARAM_B - Free smoothing parameter for BM25.\n.. data:: EPSILON - Constant used for negative idf of document in corpus.\n\n\"\"\"\n\n\nimport math\nfrom six import iteritems\nfrom six.moves import xrange\n\n\nPARAM_K1 = 1.5\nPARAM_B = 0.75\nEPSILON = 0.25\n\n\nclass BM25(object):\n \"\"\"Implementation of Best Matching 25 ranking function.\n\n Attributes\n ----------\n corpus_size : int\n Size of corpus (number of documents).\n avgdl : float\n Average length of document in `corpus`.\n corpus : list of list of str\n Corpus of documents.\n f : list of dicts of int\n Dictionary with terms frequencies for each document in `corpus`. Words used as keys and frequencies as values.\n df : dict\n Dictionary with terms frequencies for whole `corpus`. Words used as keys and frequencies as values.\n idf : dict\n Dictionary with inversed terms frequencies for whole `corpus`. Words used as keys and frequencies as values.\n doc_len : list of int\n List of document lengths.\n \"\"\"\n\n def __init__(self, corpus):\n \"\"\"\n Parameters\n ----------\n corpus : list of list of str\n Given corpus.\n\n \"\"\"\n self.corpus_size = len(corpus)\n self.avgdl = sum(float(len(x)) for x in corpus) / self.corpus_size\n self.corpus = corpus\n self.f = []\n self.df = {}\n self.idf = {}\n self.doc_len = []\n self.initialize()\n\n def initialize(self):\n \"\"\"Calculates frequencies of terms in documents and in corpus. 
Also computes inverse document frequencies.\"\"\"\n for document in self.corpus:\n frequencies = {}\n self.doc_len.append(len(document))\n for word in document:\n if word not in frequencies:\n frequencies[word] = 0\n frequencies[word] += 1\n self.f.append(frequencies)\n\n for word, freq in iteritems(frequencies):\n if word not in self.df:\n self.df[word] = 0\n self.df[word] += 1\n\n for word, freq in iteritems(self.df):\n self.idf[word] = math.log(self.corpus_size - freq + 0.5) - math.log(freq + 0.5)\n\n def get_score(self, document, index, average_idf):\n \"\"\"Computes BM25 score of given `document` in relation to item of corpus selected by `index`.\n\n Parameters\n ----------\n document : list of str\n Document to be scored.\n index : int\n Index of document in corpus selected to score with `document`.\n average_idf : float\n Average idf in corpus.\n\n Returns\n -------\n float\n BM25 score.\n\n \"\"\"\n score = 0\n for word in document:\n if word not in self.f[index]:\n continue\n idf = self.idf[word] if self.idf[word] >= 0 else EPSILON * average_idf\n score += (idf * self.f[index][word] * (PARAM_K1 + 1)\n / (self.f[index][word] + PARAM_K1 * (1 - PARAM_B + PARAM_B * self.doc_len[index] / self.avgdl)))\n return score\n\n def get_scores(self, document, average_idf):\n \"\"\"Computes and returns BM25 scores of given `document` in relation to\n every item in corpus.\n\n Parameters\n ----------\n document : list of str\n Document to be scored.\n average_idf : float\n Average idf in corpus.\n\n Returns\n -------\n list of float\n BM25 scores.\n\n \"\"\"\n scores = []\n for index in xrange(self.corpus_size):\n score = self.get_score(document, index, average_idf)\n scores.append(score)\n return scores\n\n\ndef get_bm25_weights(corpus):\n \"\"\"Returns BM25 scores (weights) of documents in corpus.\n Each document has to be weighted with every document in given corpus.\n\n Parameters\n ----------\n corpus : list of list of str\n Corpus of documents.\n\n Returns\n -------\n list of list of float\n BM25 scores.\n\n Examples\n --------\n >>> from gensim.summarization.bm25 import get_bm25_weights\n >>> corpus = [\n ... [\"black\", \"cat\", \"white\", \"cat\"],\n ... [\"cat\", \"outer\", \"space\"],\n ... [\"wag\", \"dog\"]\n ... ]\n >>> result = get_bm25_weights(corpus)\n\n \"\"\"\n bm25 = BM25(corpus)\n average_idf = sum(float(val) for val in bm25.idf.values()) / len(bm25.idf)\n\n weights = []\n for doc in corpus:\n scores = bm25.get_scores(doc, average_idf)\n weights.append(scores)\n\n return weights\n", "path": "gensim/summarization/bm25.py"}]}
| 2,345 | 504 |
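The gensim fix above changes only which document length feeds the BM25 denominator. A small standalone sketch (the idf value is a placeholder, not one gensim would compute) showing how the buggy and corrected normalisations diverge:

```python
PARAM_K1, PARAM_B = 1.5, 0.75

def bm25_term(tf, idf, doc_len, avgdl):
    # One term of the BM25 sum, mirroring BM25.get_score.
    return idf * tf * (PARAM_K1 + 1) / (tf + PARAM_K1 * (1 - PARAM_B + PARAM_B * doc_len / avgdl))

corpus = [["black", "cat", "white", "cat"], ["cat", "outer", "space"], ["wag", "dog"]]
avgdl = sum(len(doc) for doc in corpus) / len(corpus)  # 3.0
query = ["cat"]
index = 0                        # score the query against corpus[0]
tf = corpus[index].count("cat")  # 2
idf = 0.2                        # placeholder value, for illustration only

buggy = bm25_term(tf, idf, len(query), avgdl)          # normalised by the query length (1)
fixed = bm25_term(tf, idf, len(corpus[index]), avgdl)  # normalised by corpus[0]'s length (4)
print(buggy, fixed)  # the two scores differ, which is exactly the bug reported above
```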
gh_patches_debug_721 | rasdani/github-patches | git_diff | networkx__networkx-3958 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Misleading description in the doc
In this page
https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.structuralholes.effective_size.html
The description of *Return* is "Dictionary with nodes as keys and the constraint on the node as values."
But this is effective size. I think it should be "Dictionary with nodes as keys and the **effective size of** the node as values."
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `networkx/algorithms/structuralholes.py`
Content:
```
1 """Functions for computing measures of structural holes."""
2
3 import networkx as nx
4
5 __all__ = ['constraint', 'local_constraint', 'effective_size']
6
7
8 def mutual_weight(G, u, v, weight=None):
9 """Returns the sum of the weights of the edge from `u` to `v` and
10 the edge from `v` to `u` in `G`.
11
12 `weight` is the edge data key that represents the edge weight. If
13 the specified key is `None` or is not in the edge data for an edge,
14 that edge is assumed to have weight 1.
15
16 Pre-conditions: `u` and `v` must both be in `G`.
17
18 """
19 try:
20 a_uv = G[u][v].get(weight, 1)
21 except KeyError:
22 a_uv = 0
23 try:
24 a_vu = G[v][u].get(weight, 1)
25 except KeyError:
26 a_vu = 0
27 return a_uv + a_vu
28
29
30 def normalized_mutual_weight(G, u, v, norm=sum, weight=None):
31 """Returns normalized mutual weight of the edges from `u` to `v`
32 with respect to the mutual weights of the neighbors of `u` in `G`.
33
34 `norm` specifies how the normalization factor is computed. It must
35 be a function that takes a single argument and returns a number.
36 The argument will be an iterable of mutual weights
37 of pairs ``(u, w)``, where ``w`` ranges over each (in- and
38 out-)neighbor of ``u``. Commons values for `normalization` are
39 ``sum`` and ``max``.
40
41 `weight` can be ``None`` or a string, if None, all edge weights
42 are considered equal. Otherwise holds the name of the edge
43 attribute used as weight.
44
45 """
46 scale = norm(mutual_weight(G, u, w, weight)
47 for w in set(nx.all_neighbors(G, u)))
48 return 0 if scale == 0 else mutual_weight(G, u, v, weight) / scale
49
50
51 def effective_size(G, nodes=None, weight=None):
52 r"""Returns the effective size of all nodes in the graph ``G``.
53
54 The *effective size* of a node's ego network is based on the concept
55 of redundancy. A person's ego network has redundancy to the extent
56 that her contacts are connected to each other as well. The
57 nonredundant part of a person's relationships it's the effective
58 size of her ego network [1]_. Formally, the effective size of a
59 node $u$, denoted $e(u)$, is defined by
60
61 .. math::
62
63 e(u) = \sum_{v \in N(u) \setminus \{u\}}
64 \left(1 - \sum_{w \in N(v)} p_{uw} m_{vw}\right)
65
66 where $N(u)$ is the set of neighbors of $u$ and $p_{uw}$ is the
67 normalized mutual weight of the (directed or undirected) edges
68 joining $u$ and $v$, for each vertex $u$ and $v$ [1]_. And $m_{vw}$
69 is the mutual weight of $v$ and $w$ divided by $v$ highest mutual
70 weight with any of its neighbors. The *mutual weight* of $u$ and $v$
71 is the sum of the weights of edges joining them (edge weights are
72 assumed to be one if the graph is unweighted).
73
74 For the case of unweighted and undirected graphs, Borgatti proposed
75 a simplified formula to compute effective size [2]_
76
77 .. math::
78
79 e(u) = n - \frac{2t}{n}
80
81 where `t` is the number of ties in the ego network (not including
82 ties to ego) and `n` is the number of nodes (excluding ego).
83
84 Parameters
85 ----------
86 G : NetworkX graph
87 The graph containing ``v``. Directed graphs are treated like
88 undirected graphs when computing neighbors of ``v``.
89
90 nodes : container, optional
91 Container of nodes in the graph ``G`` to compute the effective size.
92 If None, the effective size of every node is computed.
93
94 weight : None or string, optional
95 If None, all edge weights are considered equal.
96 Otherwise holds the name of the edge attribute used as weight.
97
98 Returns
99 -------
100 dict
101 Dictionary with nodes as keys and the constraint on the node as values.
102
103 Notes
104 -----
105 Burt also defined the related concept of *efficiency* of a node's ego
106 network, which is its effective size divided by the degree of that
107 node [1]_. So you can easily compute efficiency:
108
109 >>> G = nx.DiGraph()
110 >>> G.add_edges_from([(0, 1), (0, 2), (1, 0), (2, 1)])
111 >>> esize = nx.effective_size(G)
112 >>> efficiency = {n: v / G.degree(n) for n, v in esize.items()}
113
114 See also
115 --------
116 constraint
117
118 References
119 ----------
120 .. [1] Burt, Ronald S.
121 *Structural Holes: The Social Structure of Competition.*
122 Cambridge: Harvard University Press, 1995.
123
124 .. [2] Borgatti, S.
125 "Structural Holes: Unpacking Burt's Redundancy Measures"
126 CONNECTIONS 20(1):35-38.
127 http://www.analytictech.com/connections/v20(1)/holes.htm
128
129 """
130 def redundancy(G, u, v, weight=None):
131 nmw = normalized_mutual_weight
132 r = sum(nmw(G, u, w, weight=weight) * nmw(G, v, w, norm=max, weight=weight)
133 for w in set(nx.all_neighbors(G, u)))
134 return 1 - r
135 effective_size = {}
136 if nodes is None:
137 nodes = G
138 # Use Borgatti's simplified formula for unweighted and undirected graphs
139 if not G.is_directed() and weight is None:
140 for v in nodes:
141 # Effective size is not defined for isolated nodes
142 if len(G[v]) == 0:
143 effective_size[v] = float('nan')
144 continue
145 E = nx.ego_graph(G, v, center=False, undirected=True)
146 effective_size[v] = len(E) - (2 * E.size()) / len(E)
147 else:
148 for v in nodes:
149 # Effective size is not defined for isolated nodes
150 if len(G[v]) == 0:
151 effective_size[v] = float('nan')
152 continue
153 effective_size[v] = sum(redundancy(G, v, u, weight)
154 for u in set(nx.all_neighbors(G, v)))
155 return effective_size
156
157
158 def constraint(G, nodes=None, weight=None):
159 r"""Returns the constraint on all nodes in the graph ``G``.
160
161 The *constraint* is a measure of the extent to which a node *v* is
162 invested in those nodes that are themselves invested in the
163 neighbors of *v*. Formally, the *constraint on v*, denoted `c(v)`,
164 is defined by
165
166 .. math::
167
168 c(v) = \sum_{w \in N(v) \setminus \{v\}} \ell(v, w)
169
170 where `N(v)` is the subset of the neighbors of `v` that are either
171 predecessors or successors of `v` and `\ell(v, w)` is the local
172 constraint on `v` with respect to `w` [1]_. For the definition of local
173 constraint, see :func:`local_constraint`.
174
175 Parameters
176 ----------
177 G : NetworkX graph
178 The graph containing ``v``. This can be either directed or undirected.
179
180 nodes : container, optional
181 Container of nodes in the graph ``G`` to compute the constraint. If
182 None, the constraint of every node is computed.
183
184 weight : None or string, optional
185 If None, all edge weights are considered equal.
186 Otherwise holds the name of the edge attribute used as weight.
187
188 Returns
189 -------
190 dict
191 Dictionary with nodes as keys and the constraint on the node as values.
192
193 See also
194 --------
195 local_constraint
196
197 References
198 ----------
199 .. [1] Burt, Ronald S.
200 "Structural holes and good ideas".
201 American Journal of Sociology (110): 349–399.
202
203 """
204 if nodes is None:
205 nodes = G
206 constraint = {}
207 for v in nodes:
208 # Constraint is not defined for isolated nodes
209 if len(G[v]) == 0:
210 constraint[v] = float('nan')
211 continue
212 constraint[v] = sum(local_constraint(G, v, n, weight)
213 for n in set(nx.all_neighbors(G, v)))
214 return constraint
215
216
217 def local_constraint(G, u, v, weight=None):
218 r"""Returns the local constraint on the node ``u`` with respect to
219 the node ``v`` in the graph ``G``.
220
221 Formally, the *local constraint on u with respect to v*, denoted
222 $\ell(v)$, is defined by
223
224 .. math::
225
226 \ell(u, v) = \left(p_{uv} + \sum_{w \in N(v)} p_{uw} p{wv}\right)^2,
227
228 where $N(v)$ is the set of neighbors of $v$ and $p_{uv}$ is the
229 normalized mutual weight of the (directed or undirected) edges
230 joining $u$ and $v$, for each vertex $u$ and $v$ [1]_. The *mutual
231 weight* of $u$ and $v$ is the sum of the weights of edges joining
232 them (edge weights are assumed to be one if the graph is
233 unweighted).
234
235 Parameters
236 ----------
237 G : NetworkX graph
238 The graph containing ``u`` and ``v``. This can be either
239 directed or undirected.
240
241 u : node
242 A node in the graph ``G``.
243
244 v : node
245 A node in the graph ``G``.
246
247 weight : None or string, optional
248 If None, all edge weights are considered equal.
249 Otherwise holds the name of the edge attribute used as weight.
250
251 Returns
252 -------
253 float
254 The constraint of the node ``v`` in the graph ``G``.
255
256 See also
257 --------
258 constraint
259
260 References
261 ----------
262 .. [1] Burt, Ronald S.
263 "Structural holes and good ideas".
264 American Journal of Sociology (110): 349–399.
265
266 """
267 nmw = normalized_mutual_weight
268 direct = nmw(G, u, v, weight=weight)
269 indirect = sum(nmw(G, u, w, weight=weight) * nmw(G, w, v, weight=weight)
270 for w in set(nx.all_neighbors(G, u)))
271 return (direct + indirect) ** 2
272
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/networkx/algorithms/structuralholes.py b/networkx/algorithms/structuralholes.py
--- a/networkx/algorithms/structuralholes.py
+++ b/networkx/algorithms/structuralholes.py
@@ -98,7 +98,7 @@
Returns
-------
dict
- Dictionary with nodes as keys and the constraint on the node as values.
+ Dictionary with nodes as keys and the effective size of the node as values.
Notes
-----
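
The patch above only rewords the docstring, so runtime behaviour is unchanged. For context, a minimal sketch of what the function returns, mirroring the example already present in the docstring:

```python
import networkx as nx

# Small directed graph taken from the effective_size docstring example.
G = nx.DiGraph()
G.add_edges_from([(0, 1), (0, 2), (1, 0), (2, 1)])

# Returns a dict mapping each node to its *effective size*, not its constraint.
esize = nx.effective_size(G)
constraint = nx.constraint(G)  # the related measure the old wording referred to
print(esize, constraint)
```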
|
{"golden_diff": "diff --git a/networkx/algorithms/structuralholes.py b/networkx/algorithms/structuralholes.py\n--- a/networkx/algorithms/structuralholes.py\n+++ b/networkx/algorithms/structuralholes.py\n@@ -98,7 +98,7 @@\n Returns\n -------\n dict\n- Dictionary with nodes as keys and the constraint on the node as values.\n+ Dictionary with nodes as keys and the effective size of the node as values.\n \n Notes\n -----\n", "issue": "Misleading description in the doc\nIn this page\r\nhttps://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.structuralholes.effective_size.html\r\nThe description of *Return* is \"Dictionary with nodes as keys and the constraint on the node as values.\"\r\nBut this is effective size. I think it should be \"Dictionary with nodes as keys and the **effective size of** the node as values.\"\n", "before_files": [{"content": "\"\"\"Functions for computing measures of structural holes.\"\"\"\n\nimport networkx as nx\n\n__all__ = ['constraint', 'local_constraint', 'effective_size']\n\n\ndef mutual_weight(G, u, v, weight=None):\n \"\"\"Returns the sum of the weights of the edge from `u` to `v` and\n the edge from `v` to `u` in `G`.\n\n `weight` is the edge data key that represents the edge weight. If\n the specified key is `None` or is not in the edge data for an edge,\n that edge is assumed to have weight 1.\n\n Pre-conditions: `u` and `v` must both be in `G`.\n\n \"\"\"\n try:\n a_uv = G[u][v].get(weight, 1)\n except KeyError:\n a_uv = 0\n try:\n a_vu = G[v][u].get(weight, 1)\n except KeyError:\n a_vu = 0\n return a_uv + a_vu\n\n\ndef normalized_mutual_weight(G, u, v, norm=sum, weight=None):\n \"\"\"Returns normalized mutual weight of the edges from `u` to `v`\n with respect to the mutual weights of the neighbors of `u` in `G`.\n\n `norm` specifies how the normalization factor is computed. It must\n be a function that takes a single argument and returns a number.\n The argument will be an iterable of mutual weights\n of pairs ``(u, w)``, where ``w`` ranges over each (in- and\n out-)neighbor of ``u``. Commons values for `normalization` are\n ``sum`` and ``max``.\n\n `weight` can be ``None`` or a string, if None, all edge weights\n are considered equal. Otherwise holds the name of the edge\n attribute used as weight.\n\n \"\"\"\n scale = norm(mutual_weight(G, u, w, weight)\n for w in set(nx.all_neighbors(G, u)))\n return 0 if scale == 0 else mutual_weight(G, u, v, weight) / scale\n\n\ndef effective_size(G, nodes=None, weight=None):\n r\"\"\"Returns the effective size of all nodes in the graph ``G``.\n\n The *effective size* of a node's ego network is based on the concept\n of redundancy. A person's ego network has redundancy to the extent\n that her contacts are connected to each other as well. The\n nonredundant part of a person's relationships it's the effective\n size of her ego network [1]_. Formally, the effective size of a\n node $u$, denoted $e(u)$, is defined by\n\n .. math::\n\n e(u) = \\sum_{v \\in N(u) \\setminus \\{u\\}}\n \\left(1 - \\sum_{w \\in N(v)} p_{uw} m_{vw}\\right)\n\n where $N(u)$ is the set of neighbors of $u$ and $p_{uw}$ is the\n normalized mutual weight of the (directed or undirected) edges\n joining $u$ and $v$, for each vertex $u$ and $v$ [1]_. And $m_{vw}$\n is the mutual weight of $v$ and $w$ divided by $v$ highest mutual\n weight with any of its neighbors. 
The *mutual weight* of $u$ and $v$\n is the sum of the weights of edges joining them (edge weights are\n assumed to be one if the graph is unweighted).\n\n For the case of unweighted and undirected graphs, Borgatti proposed\n a simplified formula to compute effective size [2]_\n\n .. math::\n\n e(u) = n - \\frac{2t}{n}\n\n where `t` is the number of ties in the ego network (not including\n ties to ego) and `n` is the number of nodes (excluding ego).\n\n Parameters\n ----------\n G : NetworkX graph\n The graph containing ``v``. Directed graphs are treated like\n undirected graphs when computing neighbors of ``v``.\n\n nodes : container, optional\n Container of nodes in the graph ``G`` to compute the effective size.\n If None, the effective size of every node is computed.\n\n weight : None or string, optional\n If None, all edge weights are considered equal.\n Otherwise holds the name of the edge attribute used as weight.\n\n Returns\n -------\n dict\n Dictionary with nodes as keys and the constraint on the node as values.\n\n Notes\n -----\n Burt also defined the related concept of *efficiency* of a node's ego\n network, which is its effective size divided by the degree of that\n node [1]_. So you can easily compute efficiency:\n\n >>> G = nx.DiGraph()\n >>> G.add_edges_from([(0, 1), (0, 2), (1, 0), (2, 1)])\n >>> esize = nx.effective_size(G)\n >>> efficiency = {n: v / G.degree(n) for n, v in esize.items()}\n\n See also\n --------\n constraint\n\n References\n ----------\n .. [1] Burt, Ronald S.\n *Structural Holes: The Social Structure of Competition.*\n Cambridge: Harvard University Press, 1995.\n\n .. [2] Borgatti, S.\n \"Structural Holes: Unpacking Burt's Redundancy Measures\"\n CONNECTIONS 20(1):35-38.\n http://www.analytictech.com/connections/v20(1)/holes.htm\n\n \"\"\"\n def redundancy(G, u, v, weight=None):\n nmw = normalized_mutual_weight\n r = sum(nmw(G, u, w, weight=weight) * nmw(G, v, w, norm=max, weight=weight)\n for w in set(nx.all_neighbors(G, u)))\n return 1 - r\n effective_size = {}\n if nodes is None:\n nodes = G\n # Use Borgatti's simplified formula for unweighted and undirected graphs\n if not G.is_directed() and weight is None:\n for v in nodes:\n # Effective size is not defined for isolated nodes\n if len(G[v]) == 0:\n effective_size[v] = float('nan')\n continue\n E = nx.ego_graph(G, v, center=False, undirected=True)\n effective_size[v] = len(E) - (2 * E.size()) / len(E)\n else:\n for v in nodes:\n # Effective size is not defined for isolated nodes\n if len(G[v]) == 0:\n effective_size[v] = float('nan')\n continue\n effective_size[v] = sum(redundancy(G, v, u, weight)\n for u in set(nx.all_neighbors(G, v)))\n return effective_size\n\n\ndef constraint(G, nodes=None, weight=None):\n r\"\"\"Returns the constraint on all nodes in the graph ``G``.\n\n The *constraint* is a measure of the extent to which a node *v* is\n invested in those nodes that are themselves invested in the\n neighbors of *v*. Formally, the *constraint on v*, denoted `c(v)`,\n is defined by\n\n .. math::\n\n c(v) = \\sum_{w \\in N(v) \\setminus \\{v\\}} \\ell(v, w)\n\n where `N(v)` is the subset of the neighbors of `v` that are either\n predecessors or successors of `v` and `\\ell(v, w)` is the local\n constraint on `v` with respect to `w` [1]_. For the definition of local\n constraint, see :func:`local_constraint`.\n\n Parameters\n ----------\n G : NetworkX graph\n The graph containing ``v``. 
This can be either directed or undirected.\n\n nodes : container, optional\n Container of nodes in the graph ``G`` to compute the constraint. If\n None, the constraint of every node is computed.\n\n weight : None or string, optional\n If None, all edge weights are considered equal.\n Otherwise holds the name of the edge attribute used as weight.\n\n Returns\n -------\n dict\n Dictionary with nodes as keys and the constraint on the node as values.\n\n See also\n --------\n local_constraint\n\n References\n ----------\n .. [1] Burt, Ronald S.\n \"Structural holes and good ideas\".\n American Journal of Sociology (110): 349\u2013399.\n\n \"\"\"\n if nodes is None:\n nodes = G\n constraint = {}\n for v in nodes:\n # Constraint is not defined for isolated nodes\n if len(G[v]) == 0:\n constraint[v] = float('nan')\n continue\n constraint[v] = sum(local_constraint(G, v, n, weight)\n for n in set(nx.all_neighbors(G, v)))\n return constraint\n\n\ndef local_constraint(G, u, v, weight=None):\n r\"\"\"Returns the local constraint on the node ``u`` with respect to\n the node ``v`` in the graph ``G``.\n\n Formally, the *local constraint on u with respect to v*, denoted\n $\\ell(v)$, is defined by\n\n .. math::\n\n \\ell(u, v) = \\left(p_{uv} + \\sum_{w \\in N(v)} p_{uw} p{wv}\\right)^2,\n\n where $N(v)$ is the set of neighbors of $v$ and $p_{uv}$ is the\n normalized mutual weight of the (directed or undirected) edges\n joining $u$ and $v$, for each vertex $u$ and $v$ [1]_. The *mutual\n weight* of $u$ and $v$ is the sum of the weights of edges joining\n them (edge weights are assumed to be one if the graph is\n unweighted).\n\n Parameters\n ----------\n G : NetworkX graph\n The graph containing ``u`` and ``v``. This can be either\n directed or undirected.\n\n u : node\n A node in the graph ``G``.\n\n v : node\n A node in the graph ``G``.\n\n weight : None or string, optional\n If None, all edge weights are considered equal.\n Otherwise holds the name of the edge attribute used as weight.\n\n Returns\n -------\n float\n The constraint of the node ``v`` in the graph ``G``.\n\n See also\n --------\n constraint\n\n References\n ----------\n .. [1] Burt, Ronald S.\n \"Structural holes and good ideas\".\n American Journal of Sociology (110): 349\u2013399.\n\n \"\"\"\n nmw = normalized_mutual_weight\n direct = nmw(G, u, v, weight=weight)\n indirect = sum(nmw(G, u, w, weight=weight) * nmw(G, w, v, weight=weight)\n for w in set(nx.all_neighbors(G, u)))\n return (direct + indirect) ** 2\n", "path": "networkx/algorithms/structuralholes.py"}], "after_files": [{"content": "\"\"\"Functions for computing measures of structural holes.\"\"\"\n\nimport networkx as nx\n\n__all__ = ['constraint', 'local_constraint', 'effective_size']\n\n\ndef mutual_weight(G, u, v, weight=None):\n \"\"\"Returns the sum of the weights of the edge from `u` to `v` and\n the edge from `v` to `u` in `G`.\n\n `weight` is the edge data key that represents the edge weight. 
If\n the specified key is `None` or is not in the edge data for an edge,\n that edge is assumed to have weight 1.\n\n Pre-conditions: `u` and `v` must both be in `G`.\n\n \"\"\"\n try:\n a_uv = G[u][v].get(weight, 1)\n except KeyError:\n a_uv = 0\n try:\n a_vu = G[v][u].get(weight, 1)\n except KeyError:\n a_vu = 0\n return a_uv + a_vu\n\n\ndef normalized_mutual_weight(G, u, v, norm=sum, weight=None):\n \"\"\"Returns normalized mutual weight of the edges from `u` to `v`\n with respect to the mutual weights of the neighbors of `u` in `G`.\n\n `norm` specifies how the normalization factor is computed. It must\n be a function that takes a single argument and returns a number.\n The argument will be an iterable of mutual weights\n of pairs ``(u, w)``, where ``w`` ranges over each (in- and\n out-)neighbor of ``u``. Commons values for `normalization` are\n ``sum`` and ``max``.\n\n `weight` can be ``None`` or a string, if None, all edge weights\n are considered equal. Otherwise holds the name of the edge\n attribute used as weight.\n\n \"\"\"\n scale = norm(mutual_weight(G, u, w, weight)\n for w in set(nx.all_neighbors(G, u)))\n return 0 if scale == 0 else mutual_weight(G, u, v, weight) / scale\n\n\ndef effective_size(G, nodes=None, weight=None):\n r\"\"\"Returns the effective size of all nodes in the graph ``G``.\n\n The *effective size* of a node's ego network is based on the concept\n of redundancy. A person's ego network has redundancy to the extent\n that her contacts are connected to each other as well. The\n nonredundant part of a person's relationships it's the effective\n size of her ego network [1]_. Formally, the effective size of a\n node $u$, denoted $e(u)$, is defined by\n\n .. math::\n\n e(u) = \\sum_{v \\in N(u) \\setminus \\{u\\}}\n \\left(1 - \\sum_{w \\in N(v)} p_{uw} m_{vw}\\right)\n\n where $N(u)$ is the set of neighbors of $u$ and $p_{uw}$ is the\n normalized mutual weight of the (directed or undirected) edges\n joining $u$ and $v$, for each vertex $u$ and $v$ [1]_. And $m_{vw}$\n is the mutual weight of $v$ and $w$ divided by $v$ highest mutual\n weight with any of its neighbors. The *mutual weight* of $u$ and $v$\n is the sum of the weights of edges joining them (edge weights are\n assumed to be one if the graph is unweighted).\n\n For the case of unweighted and undirected graphs, Borgatti proposed\n a simplified formula to compute effective size [2]_\n\n .. math::\n\n e(u) = n - \\frac{2t}{n}\n\n where `t` is the number of ties in the ego network (not including\n ties to ego) and `n` is the number of nodes (excluding ego).\n\n Parameters\n ----------\n G : NetworkX graph\n The graph containing ``v``. Directed graphs are treated like\n undirected graphs when computing neighbors of ``v``.\n\n nodes : container, optional\n Container of nodes in the graph ``G`` to compute the effective size.\n If None, the effective size of every node is computed.\n\n weight : None or string, optional\n If None, all edge weights are considered equal.\n Otherwise holds the name of the edge attribute used as weight.\n\n Returns\n -------\n dict\n Dictionary with nodes as keys and the effective size of the node as values.\n\n Notes\n -----\n Burt also defined the related concept of *efficiency* of a node's ego\n network, which is its effective size divided by the degree of that\n node [1]_. 
So you can easily compute efficiency:\n\n >>> G = nx.DiGraph()\n >>> G.add_edges_from([(0, 1), (0, 2), (1, 0), (2, 1)])\n >>> esize = nx.effective_size(G)\n >>> efficiency = {n: v / G.degree(n) for n, v in esize.items()}\n\n See also\n --------\n constraint\n\n References\n ----------\n .. [1] Burt, Ronald S.\n *Structural Holes: The Social Structure of Competition.*\n Cambridge: Harvard University Press, 1995.\n\n .. [2] Borgatti, S.\n \"Structural Holes: Unpacking Burt's Redundancy Measures\"\n CONNECTIONS 20(1):35-38.\n http://www.analytictech.com/connections/v20(1)/holes.htm\n\n \"\"\"\n def redundancy(G, u, v, weight=None):\n nmw = normalized_mutual_weight\n r = sum(nmw(G, u, w, weight=weight) * nmw(G, v, w, norm=max, weight=weight)\n for w in set(nx.all_neighbors(G, u)))\n return 1 - r\n effective_size = {}\n if nodes is None:\n nodes = G\n # Use Borgatti's simplified formula for unweighted and undirected graphs\n if not G.is_directed() and weight is None:\n for v in nodes:\n # Effective size is not defined for isolated nodes\n if len(G[v]) == 0:\n effective_size[v] = float('nan')\n continue\n E = nx.ego_graph(G, v, center=False, undirected=True)\n effective_size[v] = len(E) - (2 * E.size()) / len(E)\n else:\n for v in nodes:\n # Effective size is not defined for isolated nodes\n if len(G[v]) == 0:\n effective_size[v] = float('nan')\n continue\n effective_size[v] = sum(redundancy(G, v, u, weight)\n for u in set(nx.all_neighbors(G, v)))\n return effective_size\n\n\ndef constraint(G, nodes=None, weight=None):\n r\"\"\"Returns the constraint on all nodes in the graph ``G``.\n\n The *constraint* is a measure of the extent to which a node *v* is\n invested in those nodes that are themselves invested in the\n neighbors of *v*. Formally, the *constraint on v*, denoted `c(v)`,\n is defined by\n\n .. math::\n\n c(v) = \\sum_{w \\in N(v) \\setminus \\{v\\}} \\ell(v, w)\n\n where `N(v)` is the subset of the neighbors of `v` that are either\n predecessors or successors of `v` and `\\ell(v, w)` is the local\n constraint on `v` with respect to `w` [1]_. For the definition of local\n constraint, see :func:`local_constraint`.\n\n Parameters\n ----------\n G : NetworkX graph\n The graph containing ``v``. This can be either directed or undirected.\n\n nodes : container, optional\n Container of nodes in the graph ``G`` to compute the constraint. If\n None, the constraint of every node is computed.\n\n weight : None or string, optional\n If None, all edge weights are considered equal.\n Otherwise holds the name of the edge attribute used as weight.\n\n Returns\n -------\n dict\n Dictionary with nodes as keys and the constraint on the node as values.\n\n See also\n --------\n local_constraint\n\n References\n ----------\n .. [1] Burt, Ronald S.\n \"Structural holes and good ideas\".\n American Journal of Sociology (110): 349\u2013399.\n\n \"\"\"\n if nodes is None:\n nodes = G\n constraint = {}\n for v in nodes:\n # Constraint is not defined for isolated nodes\n if len(G[v]) == 0:\n constraint[v] = float('nan')\n continue\n constraint[v] = sum(local_constraint(G, v, n, weight)\n for n in set(nx.all_neighbors(G, v)))\n return constraint\n\n\ndef local_constraint(G, u, v, weight=None):\n r\"\"\"Returns the local constraint on the node ``u`` with respect to\n the node ``v`` in the graph ``G``.\n\n Formally, the *local constraint on u with respect to v*, denoted\n $\\ell(v)$, is defined by\n\n .. 
math::\n\n \\ell(u, v) = \\left(p_{uv} + \\sum_{w \\in N(v)} p_{uw} p{wv}\\right)^2,\n\n where $N(v)$ is the set of neighbors of $v$ and $p_{uv}$ is the\n normalized mutual weight of the (directed or undirected) edges\n joining $u$ and $v$, for each vertex $u$ and $v$ [1]_. The *mutual\n weight* of $u$ and $v$ is the sum of the weights of edges joining\n them (edge weights are assumed to be one if the graph is\n unweighted).\n\n Parameters\n ----------\n G : NetworkX graph\n The graph containing ``u`` and ``v``. This can be either\n directed or undirected.\n\n u : node\n A node in the graph ``G``.\n\n v : node\n A node in the graph ``G``.\n\n weight : None or string, optional\n If None, all edge weights are considered equal.\n Otherwise holds the name of the edge attribute used as weight.\n\n Returns\n -------\n float\n The constraint of the node ``v`` in the graph ``G``.\n\n See also\n --------\n constraint\n\n References\n ----------\n .. [1] Burt, Ronald S.\n \"Structural holes and good ideas\".\n American Journal of Sociology (110): 349\u2013399.\n\n \"\"\"\n nmw = normalized_mutual_weight\n direct = nmw(G, u, v, weight=weight)\n indirect = sum(nmw(G, u, w, weight=weight) * nmw(G, w, v, weight=weight)\n for w in set(nx.all_neighbors(G, u)))\n return (direct + indirect) ** 2\n", "path": "networkx/algorithms/structuralholes.py"}]}
| 3,527 | 103 |
gh_patches_debug_29120
|
rasdani/github-patches
|
git_diff
|
OCA__social-262
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[10.0] mail_sendgrid, Error: No module named simplejson
> File ".../addons/oca-social/mail_sendgrid/controllers/json_request.py", line 4, in <module>
> import simplejson
> ImportError: No module named simplejson
I put `simplejson` into the requirements.txt file and it solved the issue.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mail_sendgrid/controllers/json_request.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2016-2017 Compassion CH (http://www.compassion.ch)
3 # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
4 import simplejson
5
6 from odoo.http import JsonRequest, Root, Response
7
8 # Monkeypatch type of request rooter to use RESTJsonRequest
9 old_get_request = Root.get_request
10
11
12 def get_request(self, httprequest):
13 if (httprequest.mimetype == "application/json" and
14 httprequest.environ['PATH_INFO'].startswith('/mail')):
15 return RESTJsonRequest(httprequest)
16 return old_get_request(self, httprequest)
17
18
19 Root.get_request = get_request
20
21
22 class RESTJsonRequest(JsonRequest):
23 """ Special RestJson Handler to enable receiving lists in JSON
24 body
25 """
26 def __init__(self, *args):
27 try:
28 super(RESTJsonRequest, self).__init__(*args)
29 except AttributeError:
30 # The JSON may contain a list
31 self.params = dict()
32 self.context = dict(self.session.context)
33
34 def _json_response(self, result=None, error=None):
35 response = {}
36 if error is not None:
37 response['error'] = error
38 if result is not None:
39 response['result'] = result
40
41 mime = 'application/json'
42 body = simplejson.dumps(response)
43
44 return Response(
45 body, headers=[('Content-Type', mime),
46 ('Content-Length', len(body))])
47
```
Path: `mail_sendgrid/__manifest__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2015-2017 Compassion CH (http://www.compassion.ch)
3 # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
4 {
5 'name': 'SendGrid',
6 'version': '10.0.1.0.0',
7 'category': 'Social Network',
8 'author': 'Compassion CH, Odoo Community Association (OCA)',
9 'license': 'AGPL-3',
10 'website': 'https://github.com/OCA/social',
11 'depends': ['mail_tracking'],
12 'data': [
13 'security/ir.model.access.csv',
14 'views/sendgrid_email_view.xml',
15 'views/sendgrid_template_view.xml',
16 'views/mail_compose_message_view.xml',
17 'views/email_template_view.xml',
18 ],
19 'demo': [],
20 'installable': True,
21 'auto_install': False,
22 'external_dependencies': {
23 'python': ['sendgrid'],
24 },
25 }
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mail_sendgrid/__manifest__.py b/mail_sendgrid/__manifest__.py
--- a/mail_sendgrid/__manifest__.py
+++ b/mail_sendgrid/__manifest__.py
@@ -1,9 +1,9 @@
# -*- coding: utf-8 -*-
-# Copyright 2015-2017 Compassion CH (http://www.compassion.ch)
+# Copyright 2015-2018 Compassion CH (http://www.compassion.ch)
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
{
'name': 'SendGrid',
- 'version': '10.0.1.0.0',
+ 'version': '10.0.1.0.1',
'category': 'Social Network',
'author': 'Compassion CH, Odoo Community Association (OCA)',
'license': 'AGPL-3',
@@ -20,6 +20,6 @@
'installable': True,
'auto_install': False,
'external_dependencies': {
- 'python': ['sendgrid'],
+ 'python': ['sendgrid', 'simplejson'],
},
}
diff --git a/mail_sendgrid/controllers/json_request.py b/mail_sendgrid/controllers/json_request.py
--- a/mail_sendgrid/controllers/json_request.py
+++ b/mail_sendgrid/controllers/json_request.py
@@ -1,12 +1,18 @@
# -*- coding: utf-8 -*-
# Copyright 2016-2017 Compassion CH (http://www.compassion.ch)
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
-import simplejson
-
+import logging
from odoo.http import JsonRequest, Root, Response
# Monkeypatch type of request rooter to use RESTJsonRequest
old_get_request = Root.get_request
+_logger = logging.getLogger(__name__)
+
+try:
+ import simplejson
+except ImportError:
+ _logger.error("Please install simplejson tu use mail_sendgrid module")
+ _logger.debug("ImportError details:", exc_info=True)
def get_request(self, httprequest):
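
The guarded import introduced above is a common OCA pattern for optional Python dependencies: the library must still be listed under `external_dependencies` in the manifest, but importing the controller module must not blow up when the library is absent. A stripped-down sketch of the pattern outside of Odoo (the `None` fallback is an illustrative addition, not part of the patch):

```python
import logging

_logger = logging.getLogger(__name__)

try:
    import simplejson  # optional dependency, also declared in the manifest
except ImportError:
    _logger.error("Please install simplejson to use the mail_sendgrid module")
    _logger.debug("ImportError details:", exc_info=True)
    simplejson = None  # let callers detect and handle the missing module
```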
|
{"golden_diff": "diff --git a/mail_sendgrid/__manifest__.py b/mail_sendgrid/__manifest__.py\n--- a/mail_sendgrid/__manifest__.py\n+++ b/mail_sendgrid/__manifest__.py\n@@ -1,9 +1,9 @@\n # -*- coding: utf-8 -*-\n-# Copyright 2015-2017 Compassion CH (http://www.compassion.ch)\n+# Copyright 2015-2018 Compassion CH (http://www.compassion.ch)\n # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).\n {\n 'name': 'SendGrid',\n- 'version': '10.0.1.0.0',\n+ 'version': '10.0.1.0.1',\n 'category': 'Social Network',\n 'author': 'Compassion CH, Odoo Community Association (OCA)',\n 'license': 'AGPL-3',\n@@ -20,6 +20,6 @@\n 'installable': True,\n 'auto_install': False,\n 'external_dependencies': {\n- 'python': ['sendgrid'],\n+ 'python': ['sendgrid', 'simplejson'],\n },\n }\ndiff --git a/mail_sendgrid/controllers/json_request.py b/mail_sendgrid/controllers/json_request.py\n--- a/mail_sendgrid/controllers/json_request.py\n+++ b/mail_sendgrid/controllers/json_request.py\n@@ -1,12 +1,18 @@\n # -*- coding: utf-8 -*-\n # Copyright 2016-2017 Compassion CH (http://www.compassion.ch)\n # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).\n-import simplejson\n-\n+import logging\n from odoo.http import JsonRequest, Root, Response\n \n # Monkeypatch type of request rooter to use RESTJsonRequest\n old_get_request = Root.get_request\n+_logger = logging.getLogger(__name__)\n+\n+try:\n+ import simplejson\n+except ImportError:\n+ _logger.error(\"Please install simplejson tu use mail_sendgrid module\")\n+ _logger.debug(\"ImportError details:\", exc_info=True)\n \n \n def get_request(self, httprequest):\n", "issue": "[10.0] mail_sendrid, Error: No module named simplejson\n> File \".../addons/oca-social/mail_sendgrid/controllers/json_request.py\", line 4, in <module>\r\n> import simplejson\r\n> ImportError: No module named simplejson\r\n\r\nI put `simplejson` into file requirements.txt and it solved this issue.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2016-2017 Compassion CH (http://www.compassion.ch)\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).\nimport simplejson\n\nfrom odoo.http import JsonRequest, Root, Response\n\n# Monkeypatch type of request rooter to use RESTJsonRequest\nold_get_request = Root.get_request\n\n\ndef get_request(self, httprequest):\n if (httprequest.mimetype == \"application/json\" and\n httprequest.environ['PATH_INFO'].startswith('/mail')):\n return RESTJsonRequest(httprequest)\n return old_get_request(self, httprequest)\n\n\nRoot.get_request = get_request\n\n\nclass RESTJsonRequest(JsonRequest):\n \"\"\" Special RestJson Handler to enable receiving lists in JSON\n body\n \"\"\"\n def __init__(self, *args):\n try:\n super(RESTJsonRequest, self).__init__(*args)\n except AttributeError:\n # The JSON may contain a list\n self.params = dict()\n self.context = dict(self.session.context)\n\n def _json_response(self, result=None, error=None):\n response = {}\n if error is not None:\n response['error'] = error\n if result is not None:\n response['result'] = result\n\n mime = 'application/json'\n body = simplejson.dumps(response)\n\n return Response(\n body, headers=[('Content-Type', mime),\n ('Content-Length', len(body))])\n", "path": "mail_sendgrid/controllers/json_request.py"}, {"content": "# -*- coding: utf-8 -*-\n# Copyright 2015-2017 Compassion CH (http://www.compassion.ch)\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).\n{\n 'name': 'SendGrid',\n 'version': '10.0.1.0.0',\n 'category': 'Social Network',\n 'author': 
'Compassion CH, Odoo Community Association (OCA)',\n 'license': 'AGPL-3',\n 'website': 'https://github.com/OCA/social',\n 'depends': ['mail_tracking'],\n 'data': [\n 'security/ir.model.access.csv',\n 'views/sendgrid_email_view.xml',\n 'views/sendgrid_template_view.xml',\n 'views/mail_compose_message_view.xml',\n 'views/email_template_view.xml',\n ],\n 'demo': [],\n 'installable': True,\n 'auto_install': False,\n 'external_dependencies': {\n 'python': ['sendgrid'],\n },\n}\n", "path": "mail_sendgrid/__manifest__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2016-2017 Compassion CH (http://www.compassion.ch)\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).\nimport logging\nfrom odoo.http import JsonRequest, Root, Response\n\n# Monkeypatch type of request rooter to use RESTJsonRequest\nold_get_request = Root.get_request\n_logger = logging.getLogger(__name__)\n\ntry:\n import simplejson\nexcept ImportError:\n _logger.error(\"Please install simplejson tu use mail_sendgrid module\")\n _logger.debug(\"ImportError details:\", exc_info=True)\n\n\ndef get_request(self, httprequest):\n if (httprequest.mimetype == \"application/json\" and\n httprequest.environ['PATH_INFO'].startswith('/mail')):\n return RESTJsonRequest(httprequest)\n return old_get_request(self, httprequest)\n\n\nRoot.get_request = get_request\n\n\nclass RESTJsonRequest(JsonRequest):\n \"\"\" Special RestJson Handler to enable receiving lists in JSON\n body\n \"\"\"\n def __init__(self, *args):\n try:\n super(RESTJsonRequest, self).__init__(*args)\n except AttributeError:\n # The JSON may contain a list\n self.params = dict()\n self.context = dict(self.session.context)\n\n def _json_response(self, result=None, error=None):\n response = {}\n if error is not None:\n response['error'] = error\n if result is not None:\n response['result'] = result\n\n mime = 'application/json'\n body = simplejson.dumps(response)\n\n return Response(\n body, headers=[('Content-Type', mime),\n ('Content-Length', len(body))])\n", "path": "mail_sendgrid/controllers/json_request.py"}, {"content": "# -*- coding: utf-8 -*-\n# Copyright 2015-2018 Compassion CH (http://www.compassion.ch)\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).\n{\n 'name': 'SendGrid',\n 'version': '10.0.1.0.1',\n 'category': 'Social Network',\n 'author': 'Compassion CH, Odoo Community Association (OCA)',\n 'license': 'AGPL-3',\n 'website': 'https://github.com/OCA/social',\n 'depends': ['mail_tracking'],\n 'data': [\n 'security/ir.model.access.csv',\n 'views/sendgrid_email_view.xml',\n 'views/sendgrid_template_view.xml',\n 'views/mail_compose_message_view.xml',\n 'views/email_template_view.xml',\n ],\n 'demo': [],\n 'installable': True,\n 'auto_install': False,\n 'external_dependencies': {\n 'python': ['sendgrid', 'simplejson'],\n },\n}\n", "path": "mail_sendgrid/__manifest__.py"}]}
| 1,026 | 471 |
gh_patches_debug_27142
|
rasdani/github-patches
|
git_diff
|
Netflix__lemur-302
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Chain Certificate is not exporting
Hi Team,
While extracting the .p12 file (which was exported from Lemur), we cannot find the chain certificate in it. Could you please let us know if we need to perform any additional step to download the .p12 along with the chain?
Thanks,
Akash John
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lemur/plugins/lemur_openssl/plugin.py`
Content:
```
1 """
2 .. module: lemur.plugins.lemur_openssl.plugin
3 :platform: Unix
4 :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more
5 :license: Apache, see LICENSE for more details.
6
7 .. moduleauthor:: Kevin Glisson <[email protected]>
8 """
9 import subprocess
10
11 from flask import current_app
12
13 from lemur.utils import mktempfile, mktemppath
14 from lemur.plugins.bases import ExportPlugin
15 from lemur.plugins import lemur_openssl as openssl
16 from lemur.common.utils import get_psuedo_random_string
17
18
19 def run_process(command):
20 """
21 Runs a given command with pOpen and wraps some
22 error handling around it.
23 :param command:
24 :return:
25 """
26 p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
27 current_app.logger.debug(command)
28 stdout, stderr = p.communicate()
29
30 if p.returncode != 0:
31 current_app.logger.debug(" ".join(command))
32 current_app.logger.error(stderr)
33 raise Exception(stderr)
34
35
36 def create_pkcs12(cert, p12_tmp, key, alias, passphrase):
37 """
38 Creates a pkcs12 formated file.
39 :param cert:
40 :param jks_tmp:
41 :param key:
42 :param alias:
43 :param passphrase:
44 """
45 with mktempfile() as key_tmp:
46 with open(key_tmp, 'w') as f:
47 f.write(key)
48
49 # Create PKCS12 keystore from private key and public certificate
50 with mktempfile() as cert_tmp:
51 with open(cert_tmp, 'w') as f:
52 f.write(cert)
53
54 run_process([
55 "openssl",
56 "pkcs12",
57 "-export",
58 "-name", alias,
59 "-in", cert_tmp,
60 "-inkey", key_tmp,
61 "-out", p12_tmp,
62 "-password", "pass:{}".format(passphrase)
63 ])
64
65
66 class OpenSSLExportPlugin(ExportPlugin):
67 title = 'OpenSSL'
68 slug = 'openssl-export'
69 description = 'Is a loose interface to openssl and support various formats'
70 version = openssl.VERSION
71
72 author = 'Kevin Glisson'
73 author_url = 'https://github.com/netflix/lemur'
74
75 options = [
76 {
77 'name': 'type',
78 'type': 'select',
79 'required': True,
80 'available': ['PKCS12 (.p12)'],
81 'helpMessage': 'Choose the format you wish to export',
82 },
83 {
84 'name': 'passphrase',
85 'type': 'str',
86 'required': False,
87 'helpMessage': 'If no passphrase is given one will be generated for you, we highly recommend this. Minimum length is 8.',
88 'validation': ''
89 },
90 {
91 'name': 'alias',
92 'type': 'str',
93 'required': False,
94 'helpMessage': 'Enter the alias you wish to use for the keystore.',
95 }
96 ]
97
98 def export(self, body, chain, key, options, **kwargs):
99 """
100 Generates a Java Keystore or Truststore
101
102 :param key:
103 :param chain:
104 :param body:
105 :param options:
106 :param kwargs:
107 """
108 if self.get_option('passphrase', options):
109 passphrase = self.get_option('passphrase', options)
110 else:
111 passphrase = get_psuedo_random_string()
112
113 if self.get_option('alias', options):
114 alias = self.get_option('alias', options)
115 else:
116 alias = "blah"
117
118 type = self.get_option('type', options)
119
120 with mktemppath() as output_tmp:
121 if type == 'PKCS12 (.p12)':
122 create_pkcs12(body, output_tmp, key, alias, passphrase)
123 extension = "p12"
124 else:
125 raise Exception("Unable to export, unsupported type: {0}".format(type))
126
127 with open(output_tmp, 'rb') as f:
128 raw = f.read()
129
130 return extension, passphrase, raw
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lemur/plugins/lemur_openssl/plugin.py b/lemur/plugins/lemur_openssl/plugin.py
--- a/lemur/plugins/lemur_openssl/plugin.py
+++ b/lemur/plugins/lemur_openssl/plugin.py
@@ -33,11 +33,12 @@
raise Exception(stderr)
-def create_pkcs12(cert, p12_tmp, key, alias, passphrase):
+def create_pkcs12(cert, chain, p12_tmp, key, alias, passphrase):
"""
Creates a pkcs12 formated file.
:param cert:
- :param jks_tmp:
+ :param chain:
+ :param p12_tmp:
:param key:
:param alias:
:param passphrase:
@@ -49,7 +50,7 @@
# Create PKCS12 keystore from private key and public certificate
with mktempfile() as cert_tmp:
with open(cert_tmp, 'w') as f:
- f.write(cert)
+ f.writelines([cert + "\n", chain + "\n"])
run_process([
"openssl",
@@ -119,7 +120,7 @@
with mktemppath() as output_tmp:
if type == 'PKCS12 (.p12)':
- create_pkcs12(body, output_tmp, key, alias, passphrase)
+ create_pkcs12(body, chain, output_tmp, key, alias, passphrase)
extension = "p12"
else:
raise Exception("Unable to export, unsupported type: {0}".format(type))
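
With the chain now written into the temporary certificate file, the exported keystore bundles the intermediate certificates as well. A quick way to confirm this from the user's side (a sketch only; the file name and passphrase are placeholders, and the `openssl` CLI is assumed to be on the PATH):

```python
import subprocess

# Print every certificate bundled in the exported keystore; after the fix the
# end-entity certificate and the chain should both be listed.
subprocess.run([
    "openssl", "pkcs12",
    "-in", "export.p12",          # placeholder path to the exported file
    "-info", "-nokeys",
    "-passin", "pass:changeit",   # placeholder passphrase
], check=True)
```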
|
{"golden_diff": "diff --git a/lemur/plugins/lemur_openssl/plugin.py b/lemur/plugins/lemur_openssl/plugin.py\n--- a/lemur/plugins/lemur_openssl/plugin.py\n+++ b/lemur/plugins/lemur_openssl/plugin.py\n@@ -33,11 +33,12 @@\n raise Exception(stderr)\n \n \n-def create_pkcs12(cert, p12_tmp, key, alias, passphrase):\n+def create_pkcs12(cert, chain, p12_tmp, key, alias, passphrase):\n \"\"\"\n Creates a pkcs12 formated file.\n :param cert:\n- :param jks_tmp:\n+ :param chain:\n+ :param p12_tmp:\n :param key:\n :param alias:\n :param passphrase:\n@@ -49,7 +50,7 @@\n # Create PKCS12 keystore from private key and public certificate\n with mktempfile() as cert_tmp:\n with open(cert_tmp, 'w') as f:\n- f.write(cert)\n+ f.writelines([cert + \"\\n\", chain + \"\\n\"])\n \n run_process([\n \"openssl\",\n@@ -119,7 +120,7 @@\n \n with mktemppath() as output_tmp:\n if type == 'PKCS12 (.p12)':\n- create_pkcs12(body, output_tmp, key, alias, passphrase)\n+ create_pkcs12(body, chain, output_tmp, key, alias, passphrase)\n extension = \"p12\"\n else:\n raise Exception(\"Unable to export, unsupported type: {0}\".format(type))\n", "issue": "Chain Certificate is not exporting\nHi Team,\n\nWhile extracting .p12 formatted file (which was exported from Lemur) we cannot find the Chain file on the same. Could you please let us know if we need to perform any additional step to download the .p12 along with the chain. \n\nThanks,\nAkash John\n\n", "before_files": [{"content": "\"\"\"\n.. module: lemur.plugins.lemur_openssl.plugin\n :platform: Unix\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n\n.. moduleauthor:: Kevin Glisson <[email protected]>\n\"\"\"\nimport subprocess\n\nfrom flask import current_app\n\nfrom lemur.utils import mktempfile, mktemppath\nfrom lemur.plugins.bases import ExportPlugin\nfrom lemur.plugins import lemur_openssl as openssl\nfrom lemur.common.utils import get_psuedo_random_string\n\n\ndef run_process(command):\n \"\"\"\n Runs a given command with pOpen and wraps some\n error handling around it.\n :param command:\n :return:\n \"\"\"\n p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n current_app.logger.debug(command)\n stdout, stderr = p.communicate()\n\n if p.returncode != 0:\n current_app.logger.debug(\" \".join(command))\n current_app.logger.error(stderr)\n raise Exception(stderr)\n\n\ndef create_pkcs12(cert, p12_tmp, key, alias, passphrase):\n \"\"\"\n Creates a pkcs12 formated file.\n :param cert:\n :param jks_tmp:\n :param key:\n :param alias:\n :param passphrase:\n \"\"\"\n with mktempfile() as key_tmp:\n with open(key_tmp, 'w') as f:\n f.write(key)\n\n # Create PKCS12 keystore from private key and public certificate\n with mktempfile() as cert_tmp:\n with open(cert_tmp, 'w') as f:\n f.write(cert)\n\n run_process([\n \"openssl\",\n \"pkcs12\",\n \"-export\",\n \"-name\", alias,\n \"-in\", cert_tmp,\n \"-inkey\", key_tmp,\n \"-out\", p12_tmp,\n \"-password\", \"pass:{}\".format(passphrase)\n ])\n\n\nclass OpenSSLExportPlugin(ExportPlugin):\n title = 'OpenSSL'\n slug = 'openssl-export'\n description = 'Is a loose interface to openssl and support various formats'\n version = openssl.VERSION\n\n author = 'Kevin Glisson'\n author_url = 'https://github.com/netflix/lemur'\n\n options = [\n {\n 'name': 'type',\n 'type': 'select',\n 'required': True,\n 'available': ['PKCS12 (.p12)'],\n 'helpMessage': 'Choose the format you wish to export',\n },\n {\n 'name': 'passphrase',\n 'type': 'str',\n 'required': 
False,\n 'helpMessage': 'If no passphrase is given one will be generated for you, we highly recommend this. Minimum length is 8.',\n 'validation': ''\n },\n {\n 'name': 'alias',\n 'type': 'str',\n 'required': False,\n 'helpMessage': 'Enter the alias you wish to use for the keystore.',\n }\n ]\n\n def export(self, body, chain, key, options, **kwargs):\n \"\"\"\n Generates a Java Keystore or Truststore\n\n :param key:\n :param chain:\n :param body:\n :param options:\n :param kwargs:\n \"\"\"\n if self.get_option('passphrase', options):\n passphrase = self.get_option('passphrase', options)\n else:\n passphrase = get_psuedo_random_string()\n\n if self.get_option('alias', options):\n alias = self.get_option('alias', options)\n else:\n alias = \"blah\"\n\n type = self.get_option('type', options)\n\n with mktemppath() as output_tmp:\n if type == 'PKCS12 (.p12)':\n create_pkcs12(body, output_tmp, key, alias, passphrase)\n extension = \"p12\"\n else:\n raise Exception(\"Unable to export, unsupported type: {0}\".format(type))\n\n with open(output_tmp, 'rb') as f:\n raw = f.read()\n\n return extension, passphrase, raw\n", "path": "lemur/plugins/lemur_openssl/plugin.py"}], "after_files": [{"content": "\"\"\"\n.. module: lemur.plugins.lemur_openssl.plugin\n :platform: Unix\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n\n.. moduleauthor:: Kevin Glisson <[email protected]>\n\"\"\"\nimport subprocess\n\nfrom flask import current_app\n\nfrom lemur.utils import mktempfile, mktemppath\nfrom lemur.plugins.bases import ExportPlugin\nfrom lemur.plugins import lemur_openssl as openssl\nfrom lemur.common.utils import get_psuedo_random_string\n\n\ndef run_process(command):\n \"\"\"\n Runs a given command with pOpen and wraps some\n error handling around it.\n :param command:\n :return:\n \"\"\"\n p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n current_app.logger.debug(command)\n stdout, stderr = p.communicate()\n\n if p.returncode != 0:\n current_app.logger.debug(\" \".join(command))\n current_app.logger.error(stderr)\n raise Exception(stderr)\n\n\ndef create_pkcs12(cert, chain, p12_tmp, key, alias, passphrase):\n \"\"\"\n Creates a pkcs12 formated file.\n :param cert:\n :param chain:\n :param p12_tmp:\n :param key:\n :param alias:\n :param passphrase:\n \"\"\"\n with mktempfile() as key_tmp:\n with open(key_tmp, 'w') as f:\n f.write(key)\n\n # Create PKCS12 keystore from private key and public certificate\n with mktempfile() as cert_tmp:\n with open(cert_tmp, 'w') as f:\n f.writelines([cert + \"\\n\", chain + \"\\n\"])\n\n run_process([\n \"openssl\",\n \"pkcs12\",\n \"-export\",\n \"-name\", alias,\n \"-in\", cert_tmp,\n \"-inkey\", key_tmp,\n \"-out\", p12_tmp,\n \"-password\", \"pass:{}\".format(passphrase)\n ])\n\n\nclass OpenSSLExportPlugin(ExportPlugin):\n title = 'OpenSSL'\n slug = 'openssl-export'\n description = 'Is a loose interface to openssl and support various formats'\n version = openssl.VERSION\n\n author = 'Kevin Glisson'\n author_url = 'https://github.com/netflix/lemur'\n\n options = [\n {\n 'name': 'type',\n 'type': 'select',\n 'required': True,\n 'available': ['PKCS12 (.p12)'],\n 'helpMessage': 'Choose the format you wish to export',\n },\n {\n 'name': 'passphrase',\n 'type': 'str',\n 'required': False,\n 'helpMessage': 'If no passphrase is given one will be generated for you, we highly recommend this. 
Minimum length is 8.',\n 'validation': ''\n },\n {\n 'name': 'alias',\n 'type': 'str',\n 'required': False,\n 'helpMessage': 'Enter the alias you wish to use for the keystore.',\n }\n ]\n\n def export(self, body, chain, key, options, **kwargs):\n \"\"\"\n Generates a Java Keystore or Truststore\n\n :param key:\n :param chain:\n :param body:\n :param options:\n :param kwargs:\n \"\"\"\n if self.get_option('passphrase', options):\n passphrase = self.get_option('passphrase', options)\n else:\n passphrase = get_psuedo_random_string()\n\n if self.get_option('alias', options):\n alias = self.get_option('alias', options)\n else:\n alias = \"blah\"\n\n type = self.get_option('type', options)\n\n with mktemppath() as output_tmp:\n if type == 'PKCS12 (.p12)':\n create_pkcs12(body, chain, output_tmp, key, alias, passphrase)\n extension = \"p12\"\n else:\n raise Exception(\"Unable to export, unsupported type: {0}\".format(type))\n\n with open(output_tmp, 'rb') as f:\n raw = f.read()\n\n return extension, passphrase, raw\n", "path": "lemur/plugins/lemur_openssl/plugin.py"}]}
| 1,515 | 365 |
gh_patches_debug_4433
|
rasdani/github-patches
|
git_diff
|
liberapay__liberapay.com-1314
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug during payment modification
I tried to donate to this account: https://liberapay.com/lowtechlab/ after a previous promise on their Twitter account.
lowtechlab created its account on the platform, but even though it shows that I am donating to it, the actual number of patrons is still 0.
I tried to change the amount, since I know that with the new system we need to make a payment in advance, but I got a 500 error with the following error code: `fc677c3e6c54434ba77bcf1eac628a18`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `liberapay/utils/currencies.py`
Content:
```
1 from __future__ import absolute_import, division, print_function, unicode_literals
2
3 from collections import defaultdict, OrderedDict
4 from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP, ROUND_UP
5 from numbers import Number
6 import operator
7
8 from mangopay.exceptions import CurrencyMismatch
9 from mangopay.utils import Money
10 import requests
11 import xmltodict
12
13 from liberapay.constants import CURRENCIES, D_CENT, D_ZERO
14 from liberapay.website import website
15
16
17 def _convert(self, c, rounding=ROUND_HALF_UP):
18 if self.currency == c:
19 return self
20 if 'EUR' in (self.currency, c):
21 rate = website.currency_exchange_rates[(self.currency, c)]
22 else:
23 rate = (
24 website.currency_exchange_rates[(self.currency, 'EUR')] *
25 website.currency_exchange_rates[('EUR', c)]
26 )
27 amount = self.amount * rate
28 return Money(amount, c, rounding=rounding)
29
30 def _sum(cls, amounts, currency):
31 a = Money.ZEROS[currency].amount
32 for m in amounts:
33 if m.currency != currency:
34 raise CurrencyMismatch(m.currency, currency, 'sum')
35 a += m.amount
36 return cls(a, currency)
37
38 def _Money_init(self, amount=Decimal('0'), currency=None, rounding=None):
39 if not isinstance(amount, Decimal):
40 amount = Decimal(str(amount))
41 # Why `str(amount)`? Because:
42 # >>> Decimal(0.23)
43 # Decimal('0.2300000000000000099920072216264088638126850128173828125')
44 # >>> Decimal(str(0.23))
45 # Decimal('0.23')
46 if rounding is not None:
47 minimum = Money.MINIMUMS[currency].amount
48 amount = amount.quantize(minimum, rounding=rounding)
49 self.amount = amount
50 self.currency = currency
51
52 def _Money_eq(self, other):
53 if isinstance(other, self.__class__):
54 return self.__dict__ == other.__dict__
55 if isinstance(other, (Decimal, Number)):
56 return self.amount == other
57 if isinstance(other, MoneyBasket):
58 return other.__eq__(self)
59 return False
60
61 def _Money_round(self, rounding=ROUND_HALF_UP):
62 return Money(self.amount, self.currency, rounding=rounding)
63
64 class _Minimums(defaultdict):
65 def __missing__(self, currency):
66 exponent = website.db.one("SELECT get_currency_exponent(%s)", (currency,))
67 minimum = Money((D_CENT if exponent == 2 else Decimal(10) ** (-exponent)), currency)
68 self[currency] = minimum
69 return minimum
70
71 class _Zeros(defaultdict):
72 def __missing__(self, currency):
73 minimum = Money.MINIMUMS[currency].amount
74 zero = Money((D_ZERO if minimum is D_CENT else minimum - minimum), currency)
75 self[currency] = zero
76 return zero
77
78
79 Money.__init__ = _Money_init
80 Money.__nonzero__ = Money.__bool__
81 Money.__eq__ = _Money_eq
82 Money.__iter__ = lambda m: iter((m.amount, m.currency))
83 Money.__repr__ = lambda m: '<Money "%s">' % m
84 Money.__str__ = lambda m: '%(amount)s %(currency)s' % m.__dict__
85 Money.__unicode__ = Money.__str__
86 Money.convert = _convert
87 Money.minimum = lambda m: Money.MINIMUMS[m.currency]
88 Money.MINIMUMS = _Minimums()
89 Money.round = _Money_round
90 Money.round_down = lambda m: m.round(ROUND_DOWN)
91 Money.round_up = lambda m: m.round(ROUND_UP)
92 Money.sum = classmethod(_sum)
93 Money.zero = lambda m: Money.ZEROS[m.currency]
94 Money.ZEROS = _Zeros()
95
96
97 class MoneyBasket(object):
98
99 def __init__(self, *args, **decimals):
100 self.amounts = OrderedDict(
101 (currency, decimals.get(currency, Money.ZEROS[currency].amount))
102 for currency in CURRENCIES
103 )
104 for arg in args:
105 if isinstance(arg, Money):
106 self.amounts[arg.currency] += arg.amount
107 else:
108 for m in arg:
109 self.amounts[m.currency] += m.amount
110
111 def __getitem__(self, currency):
112 return Money(self.amounts[currency], currency)
113
114 def __iter__(self):
115 return (Money(amount, currency) for currency, amount in self.amounts.items())
116
117 def __eq__(self, other):
118 if isinstance(other, self.__class__):
119 return self.amounts == other.amounts
120 elif isinstance(other, Money):
121 return self.amounts == MoneyBasket(other).amounts
122 elif other == 0:
123 return all(v == 0 for v in self.amounts.values())
124 return False
125
126 def __ne__(self, other):
127 return not self.__eq__(other)
128
129 def _compare(self, op, other):
130 if isinstance(other, self.__class__):
131 return all(op(a, b) for a, b in zip(self.amounts.values(), other.amounts.values()))
132 elif isinstance(other, Money):
133 return op(self.amounts[other.currency], other.amount)
134 elif other == 0:
135 return any(op(v, 0) for v in self.amounts.values())
136 else:
137 raise TypeError(
138 "can't compare %r and %r" % (self.__class__, other.__class__)
139 )
140
141 def __ge__(self, other):
142 return self._compare(operator.ge, other)
143
144 def __gt__(self, other):
145 return self._compare(operator.gt, other)
146
147 def __add__(self, other):
148 if other is 0:
149 return self
150 r = self.__class__(**self.amounts)
151 if isinstance(other, self.__class__):
152 for currency, amount in other.amounts.items():
153 if currency in r.amounts:
154 r.amounts[currency] += amount
155 else:
156 r.amounts[currency] = amount
157 elif isinstance(other, Money):
158 currency = other.currency
159 if currency in r.amounts:
160 r.amounts[currency] += other.amount
161 else:
162 r.amounts[currency] = other.amount
163 elif other == 0:
164 return r
165 else:
166 raise TypeError(other)
167 return r
168
169 def __radd__(self, other):
170 return self.__add__(other)
171
172 def __sub__(self, other):
173 if other is 0:
174 return self
175 r = self.__class__(**self.amounts)
176 if isinstance(other, self.__class__):
177 for currency, v in other.amounts.items():
178 if currency in r.amounts:
179 r.amounts[currency] -= v
180 else:
181 r.amounts[currency] = -v
182 elif isinstance(other, Money):
183 currency = other.currency
184 if currency in r.amounts:
185 r.amounts[currency] -= other.amount
186 else:
187 r.amounts[currency] = -other.amount
188 else:
189 raise TypeError(other)
190 return r
191
192 def __repr__(self):
193 return '%s[%s]' % (
194 self.__class__.__name__,
195 ', '.join('%s %s' % (a, c) for c, a in self.amounts.items() if a)
196 )
197
198 def __bool__(self):
199 return any(v for v in self.amounts.values())
200
201 __nonzero__ = __bool__
202
203 def __setstate__(self, state):
204 """Backward-compatible unpickling
205
206 The original version of `MoneyBasket` stored `Money` objects in its
207 `__dict__`, whereas the current version stores `Decimal`s in the
208 `amounts` attribute.
209 """
210 if 'amounts' in state:
211 self.__dict__ = state
212 else:
213 self.amounts = {m.currency: m.amount for m in state.values()}
214
215 @property
216 def currencies_present(self):
217 return [k for k, v in self.amounts.items() if v > 0]
218
219 def fuzzy_sum(self, currency, rounding=ROUND_UP):
220 a = Money.ZEROS[currency].amount
221 fuzzy = False
222 for m in self:
223 if m.currency == currency:
224 a += m.amount
225 elif m.amount:
226 a += m.convert(currency, rounding=None).amount
227 fuzzy = True
228 r = Money(a, currency, rounding=rounding)
229 r.fuzzy = fuzzy
230 return r
231
232
233 def fetch_currency_exchange_rates(db):
234 currencies = set(db.one("SELECT array_to_json(enum_range(NULL::currency))"))
235 r = requests.get('https://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml')
236 rates = xmltodict.parse(r.text)['gesmes:Envelope']['Cube']['Cube']['Cube']
237 for fx in rates:
238 currency = fx['@currency']
239 if currency not in currencies:
240 continue
241 db.run("""
242 INSERT INTO currency_exchange_rates
243 (source_currency, target_currency, rate)
244 VALUES ('EUR', %(target)s, %(rate)s)
245 , (%(target)s, 'EUR', 1 / %(rate)s)
246 ON CONFLICT (source_currency, target_currency) DO UPDATE
247 SET rate = excluded.rate
248 """, dict(target=currency, rate=Decimal(fx['@rate'])))
249
250
251 def get_currency_exchange_rates(db):
252 r = {(r[0], r[1]): r[2] for r in db.all("SELECT * FROM currency_exchange_rates")}
253 if r:
254 return r
255 fetch_currency_exchange_rates(db)
256 return get_currency_exchange_rates(db)
257
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/liberapay/utils/currencies.py b/liberapay/utils/currencies.py
--- a/liberapay/utils/currencies.py
+++ b/liberapay/utils/currencies.py
@@ -51,7 +51,7 @@
def _Money_eq(self, other):
if isinstance(other, self.__class__):
- return self.__dict__ == other.__dict__
+ return self.amount == other.amount and self.currency == other.currency
if isinstance(other, (Decimal, Number)):
return self.amount == other
if isinstance(other, MoneyBasket):
|
{"golden_diff": "diff --git a/liberapay/utils/currencies.py b/liberapay/utils/currencies.py\n--- a/liberapay/utils/currencies.py\n+++ b/liberapay/utils/currencies.py\n@@ -51,7 +51,7 @@\n \n def _Money_eq(self, other):\n if isinstance(other, self.__class__):\n- return self.__dict__ == other.__dict__\n+ return self.amount == other.amount and self.currency == other.currency\n if isinstance(other, (Decimal, Number)):\n return self.amount == other\n if isinstance(other, MoneyBasket):\n", "issue": "Bug during payment modification\nI try to donate to this account: https://liberapay.com/lowtechlab/ after a previous promise on the twitter account.\r\n\r\nlowtechlab created its account on the plateform but if I see that I donate to it, the actual number of mecene is still 0.\r\n\r\nI tried to change the amount since I know that with the new system we need to do a payment in advance but I had a 500 error with the following error code: `fc677c3e6c54434ba77bcf1eac628a18`\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom collections import defaultdict, OrderedDict\nfrom decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP, ROUND_UP\nfrom numbers import Number\nimport operator\n\nfrom mangopay.exceptions import CurrencyMismatch\nfrom mangopay.utils import Money\nimport requests\nimport xmltodict\n\nfrom liberapay.constants import CURRENCIES, D_CENT, D_ZERO\nfrom liberapay.website import website\n\n\ndef _convert(self, c, rounding=ROUND_HALF_UP):\n if self.currency == c:\n return self\n if 'EUR' in (self.currency, c):\n rate = website.currency_exchange_rates[(self.currency, c)]\n else:\n rate = (\n website.currency_exchange_rates[(self.currency, 'EUR')] *\n website.currency_exchange_rates[('EUR', c)]\n )\n amount = self.amount * rate\n return Money(amount, c, rounding=rounding)\n\ndef _sum(cls, amounts, currency):\n a = Money.ZEROS[currency].amount\n for m in amounts:\n if m.currency != currency:\n raise CurrencyMismatch(m.currency, currency, 'sum')\n a += m.amount\n return cls(a, currency)\n\ndef _Money_init(self, amount=Decimal('0'), currency=None, rounding=None):\n if not isinstance(amount, Decimal):\n amount = Decimal(str(amount))\n # Why `str(amount)`? 
Because:\n # >>> Decimal(0.23)\n # Decimal('0.2300000000000000099920072216264088638126850128173828125')\n # >>> Decimal(str(0.23))\n # Decimal('0.23')\n if rounding is not None:\n minimum = Money.MINIMUMS[currency].amount\n amount = amount.quantize(minimum, rounding=rounding)\n self.amount = amount\n self.currency = currency\n\ndef _Money_eq(self, other):\n if isinstance(other, self.__class__):\n return self.__dict__ == other.__dict__\n if isinstance(other, (Decimal, Number)):\n return self.amount == other\n if isinstance(other, MoneyBasket):\n return other.__eq__(self)\n return False\n\ndef _Money_round(self, rounding=ROUND_HALF_UP):\n return Money(self.amount, self.currency, rounding=rounding)\n\nclass _Minimums(defaultdict):\n def __missing__(self, currency):\n exponent = website.db.one(\"SELECT get_currency_exponent(%s)\", (currency,))\n minimum = Money((D_CENT if exponent == 2 else Decimal(10) ** (-exponent)), currency)\n self[currency] = minimum\n return minimum\n\nclass _Zeros(defaultdict):\n def __missing__(self, currency):\n minimum = Money.MINIMUMS[currency].amount\n zero = Money((D_ZERO if minimum is D_CENT else minimum - minimum), currency)\n self[currency] = zero\n return zero\n\n\nMoney.__init__ = _Money_init\nMoney.__nonzero__ = Money.__bool__\nMoney.__eq__ = _Money_eq\nMoney.__iter__ = lambda m: iter((m.amount, m.currency))\nMoney.__repr__ = lambda m: '<Money \"%s\">' % m\nMoney.__str__ = lambda m: '%(amount)s %(currency)s' % m.__dict__\nMoney.__unicode__ = Money.__str__\nMoney.convert = _convert\nMoney.minimum = lambda m: Money.MINIMUMS[m.currency]\nMoney.MINIMUMS = _Minimums()\nMoney.round = _Money_round\nMoney.round_down = lambda m: m.round(ROUND_DOWN)\nMoney.round_up = lambda m: m.round(ROUND_UP)\nMoney.sum = classmethod(_sum)\nMoney.zero = lambda m: Money.ZEROS[m.currency]\nMoney.ZEROS = _Zeros()\n\n\nclass MoneyBasket(object):\n\n def __init__(self, *args, **decimals):\n self.amounts = OrderedDict(\n (currency, decimals.get(currency, Money.ZEROS[currency].amount))\n for currency in CURRENCIES\n )\n for arg in args:\n if isinstance(arg, Money):\n self.amounts[arg.currency] += arg.amount\n else:\n for m in arg:\n self.amounts[m.currency] += m.amount\n\n def __getitem__(self, currency):\n return Money(self.amounts[currency], currency)\n\n def __iter__(self):\n return (Money(amount, currency) for currency, amount in self.amounts.items())\n\n def __eq__(self, other):\n if isinstance(other, self.__class__):\n return self.amounts == other.amounts\n elif isinstance(other, Money):\n return self.amounts == MoneyBasket(other).amounts\n elif other == 0:\n return all(v == 0 for v in self.amounts.values())\n return False\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def _compare(self, op, other):\n if isinstance(other, self.__class__):\n return all(op(a, b) for a, b in zip(self.amounts.values(), other.amounts.values()))\n elif isinstance(other, Money):\n return op(self.amounts[other.currency], other.amount)\n elif other == 0:\n return any(op(v, 0) for v in self.amounts.values())\n else:\n raise TypeError(\n \"can't compare %r and %r\" % (self.__class__, other.__class__)\n )\n\n def __ge__(self, other):\n return self._compare(operator.ge, other)\n\n def __gt__(self, other):\n return self._compare(operator.gt, other)\n\n def __add__(self, other):\n if other is 0:\n return self\n r = self.__class__(**self.amounts)\n if isinstance(other, self.__class__):\n for currency, amount in other.amounts.items():\n if currency in r.amounts:\n r.amounts[currency] += amount\n 
else:\n r.amounts[currency] = amount\n elif isinstance(other, Money):\n currency = other.currency\n if currency in r.amounts:\n r.amounts[currency] += other.amount\n else:\n r.amounts[currency] = other.amount\n elif other == 0:\n return r\n else:\n raise TypeError(other)\n return r\n\n def __radd__(self, other):\n return self.__add__(other)\n\n def __sub__(self, other):\n if other is 0:\n return self\n r = self.__class__(**self.amounts)\n if isinstance(other, self.__class__):\n for currency, v in other.amounts.items():\n if currency in r.amounts:\n r.amounts[currency] -= v\n else:\n r.amounts[currency] = -v\n elif isinstance(other, Money):\n currency = other.currency\n if currency in r.amounts:\n r.amounts[currency] -= other.amount\n else:\n r.amounts[currency] = -other.amount\n else:\n raise TypeError(other)\n return r\n\n def __repr__(self):\n return '%s[%s]' % (\n self.__class__.__name__,\n ', '.join('%s %s' % (a, c) for c, a in self.amounts.items() if a)\n )\n\n def __bool__(self):\n return any(v for v in self.amounts.values())\n\n __nonzero__ = __bool__\n\n def __setstate__(self, state):\n \"\"\"Backward-compatible unpickling\n\n The original version of `MoneyBasket` stored `Money` objects in its\n `__dict__`, whereas the current version stores `Decimal`s in the\n `amounts` attribute.\n \"\"\"\n if 'amounts' in state:\n self.__dict__ = state\n else:\n self.amounts = {m.currency: m.amount for m in state.values()}\n\n @property\n def currencies_present(self):\n return [k for k, v in self.amounts.items() if v > 0]\n\n def fuzzy_sum(self, currency, rounding=ROUND_UP):\n a = Money.ZEROS[currency].amount\n fuzzy = False\n for m in self:\n if m.currency == currency:\n a += m.amount\n elif m.amount:\n a += m.convert(currency, rounding=None).amount\n fuzzy = True\n r = Money(a, currency, rounding=rounding)\n r.fuzzy = fuzzy\n return r\n\n\ndef fetch_currency_exchange_rates(db):\n currencies = set(db.one(\"SELECT array_to_json(enum_range(NULL::currency))\"))\n r = requests.get('https://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml')\n rates = xmltodict.parse(r.text)['gesmes:Envelope']['Cube']['Cube']['Cube']\n for fx in rates:\n currency = fx['@currency']\n if currency not in currencies:\n continue\n db.run(\"\"\"\n INSERT INTO currency_exchange_rates\n (source_currency, target_currency, rate)\n VALUES ('EUR', %(target)s, %(rate)s)\n , (%(target)s, 'EUR', 1 / %(rate)s)\n ON CONFLICT (source_currency, target_currency) DO UPDATE\n SET rate = excluded.rate\n \"\"\", dict(target=currency, rate=Decimal(fx['@rate'])))\n\n\ndef get_currency_exchange_rates(db):\n r = {(r[0], r[1]): r[2] for r in db.all(\"SELECT * FROM currency_exchange_rates\")}\n if r:\n return r\n fetch_currency_exchange_rates(db)\n return get_currency_exchange_rates(db)\n", "path": "liberapay/utils/currencies.py"}], "after_files": [{"content": "from __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom collections import defaultdict, OrderedDict\nfrom decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP, ROUND_UP\nfrom numbers import Number\nimport operator\n\nfrom mangopay.exceptions import CurrencyMismatch\nfrom mangopay.utils import Money\nimport requests\nimport xmltodict\n\nfrom liberapay.constants import CURRENCIES, D_CENT, D_ZERO\nfrom liberapay.website import website\n\n\ndef _convert(self, c, rounding=ROUND_HALF_UP):\n if self.currency == c:\n return self\n if 'EUR' in (self.currency, c):\n rate = website.currency_exchange_rates[(self.currency, c)]\n else:\n rate = (\n 
website.currency_exchange_rates[(self.currency, 'EUR')] *\n website.currency_exchange_rates[('EUR', c)]\n )\n amount = self.amount * rate\n return Money(amount, c, rounding=rounding)\n\ndef _sum(cls, amounts, currency):\n a = Money.ZEROS[currency].amount\n for m in amounts:\n if m.currency != currency:\n raise CurrencyMismatch(m.currency, currency, 'sum')\n a += m.amount\n return cls(a, currency)\n\ndef _Money_init(self, amount=Decimal('0'), currency=None, rounding=None):\n if not isinstance(amount, Decimal):\n amount = Decimal(str(amount))\n # Why `str(amount)`? Because:\n # >>> Decimal(0.23)\n # Decimal('0.2300000000000000099920072216264088638126850128173828125')\n # >>> Decimal(str(0.23))\n # Decimal('0.23')\n if rounding is not None:\n minimum = Money.MINIMUMS[currency].amount\n amount = amount.quantize(minimum, rounding=rounding)\n self.amount = amount\n self.currency = currency\n\ndef _Money_eq(self, other):\n if isinstance(other, self.__class__):\n return self.amount == other.amount and self.currency == other.currency\n if isinstance(other, (Decimal, Number)):\n return self.amount == other\n if isinstance(other, MoneyBasket):\n return other.__eq__(self)\n return False\n\ndef _Money_round(self, rounding=ROUND_HALF_UP):\n return Money(self.amount, self.currency, rounding=rounding)\n\nclass _Minimums(defaultdict):\n def __missing__(self, currency):\n exponent = website.db.one(\"SELECT get_currency_exponent(%s)\", (currency,))\n minimum = Money((D_CENT if exponent == 2 else Decimal(10) ** (-exponent)), currency)\n self[currency] = minimum\n return minimum\n\nclass _Zeros(defaultdict):\n def __missing__(self, currency):\n minimum = Money.MINIMUMS[currency].amount\n zero = Money((D_ZERO if minimum is D_CENT else minimum - minimum), currency)\n self[currency] = zero\n return zero\n\n\nMoney.__init__ = _Money_init\nMoney.__nonzero__ = Money.__bool__\nMoney.__eq__ = _Money_eq\nMoney.__iter__ = lambda m: iter((m.amount, m.currency))\nMoney.__repr__ = lambda m: '<Money \"%s\">' % m\nMoney.__str__ = lambda m: '%(amount)s %(currency)s' % m.__dict__\nMoney.__unicode__ = Money.__str__\nMoney.convert = _convert\nMoney.minimum = lambda m: Money.MINIMUMS[m.currency]\nMoney.MINIMUMS = _Minimums()\nMoney.round = _Money_round\nMoney.round_down = lambda m: m.round(ROUND_DOWN)\nMoney.round_up = lambda m: m.round(ROUND_UP)\nMoney.sum = classmethod(_sum)\nMoney.zero = lambda m: Money.ZEROS[m.currency]\nMoney.ZEROS = _Zeros()\n\n\nclass MoneyBasket(object):\n\n def __init__(self, *args, **decimals):\n self.amounts = OrderedDict(\n (currency, decimals.get(currency, Money.ZEROS[currency].amount))\n for currency in CURRENCIES\n )\n for arg in args:\n if isinstance(arg, Money):\n self.amounts[arg.currency] += arg.amount\n else:\n for m in arg:\n self.amounts[m.currency] += m.amount\n\n def __getitem__(self, currency):\n return Money(self.amounts[currency], currency)\n\n def __iter__(self):\n return (Money(amount, currency) for currency, amount in self.amounts.items())\n\n def __eq__(self, other):\n if isinstance(other, self.__class__):\n return self.amounts == other.amounts\n elif isinstance(other, Money):\n return self.amounts == MoneyBasket(other).amounts\n elif other == 0:\n return all(v == 0 for v in self.amounts.values())\n return False\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def _compare(self, op, other):\n if isinstance(other, self.__class__):\n return all(op(a, b) for a, b in zip(self.amounts.values(), other.amounts.values()))\n elif isinstance(other, Money):\n return 
op(self.amounts[other.currency], other.amount)\n elif other == 0:\n return any(op(v, 0) for v in self.amounts.values())\n else:\n raise TypeError(\n \"can't compare %r and %r\" % (self.__class__, other.__class__)\n )\n\n def __ge__(self, other):\n return self._compare(operator.ge, other)\n\n def __gt__(self, other):\n return self._compare(operator.gt, other)\n\n def __add__(self, other):\n if other is 0:\n return self\n r = self.__class__(**self.amounts)\n if isinstance(other, self.__class__):\n for currency, amount in other.amounts.items():\n if currency in r.amounts:\n r.amounts[currency] += amount\n else:\n r.amounts[currency] = amount\n elif isinstance(other, Money):\n currency = other.currency\n if currency in r.amounts:\n r.amounts[currency] += other.amount\n else:\n r.amounts[currency] = other.amount\n elif other == 0:\n return r\n else:\n raise TypeError(other)\n return r\n\n def __radd__(self, other):\n return self.__add__(other)\n\n def __sub__(self, other):\n if other is 0:\n return self\n r = self.__class__(**self.amounts)\n if isinstance(other, self.__class__):\n for currency, v in other.amounts.items():\n if currency in r.amounts:\n r.amounts[currency] -= v\n else:\n r.amounts[currency] = -v\n elif isinstance(other, Money):\n currency = other.currency\n if currency in r.amounts:\n r.amounts[currency] -= other.amount\n else:\n r.amounts[currency] = -other.amount\n else:\n raise TypeError(other)\n return r\n\n def __repr__(self):\n return '%s[%s]' % (\n self.__class__.__name__,\n ', '.join('%s %s' % (a, c) for c, a in self.amounts.items() if a)\n )\n\n def __bool__(self):\n return any(v for v in self.amounts.values())\n\n __nonzero__ = __bool__\n\n def __setstate__(self, state):\n \"\"\"Backward-compatible unpickling\n\n The original version of `MoneyBasket` stored `Money` objects in its\n `__dict__`, whereas the current version stores `Decimal`s in the\n `amounts` attribute.\n \"\"\"\n if 'amounts' in state:\n self.__dict__ = state\n else:\n self.amounts = {m.currency: m.amount for m in state.values()}\n\n @property\n def currencies_present(self):\n return [k for k, v in self.amounts.items() if v > 0]\n\n def fuzzy_sum(self, currency, rounding=ROUND_UP):\n a = Money.ZEROS[currency].amount\n fuzzy = False\n for m in self:\n if m.currency == currency:\n a += m.amount\n elif m.amount:\n a += m.convert(currency, rounding=None).amount\n fuzzy = True\n r = Money(a, currency, rounding=rounding)\n r.fuzzy = fuzzy\n return r\n\n\ndef fetch_currency_exchange_rates(db):\n currencies = set(db.one(\"SELECT array_to_json(enum_range(NULL::currency))\"))\n r = requests.get('https://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml')\n rates = xmltodict.parse(r.text)['gesmes:Envelope']['Cube']['Cube']['Cube']\n for fx in rates:\n currency = fx['@currency']\n if currency not in currencies:\n continue\n db.run(\"\"\"\n INSERT INTO currency_exchange_rates\n (source_currency, target_currency, rate)\n VALUES ('EUR', %(target)s, %(rate)s)\n , (%(target)s, 'EUR', 1 / %(rate)s)\n ON CONFLICT (source_currency, target_currency) DO UPDATE\n SET rate = excluded.rate\n \"\"\", dict(target=currency, rate=Decimal(fx['@rate'])))\n\n\ndef get_currency_exchange_rates(db):\n r = {(r[0], r[1]): r[2] for r in db.all(\"SELECT * FROM currency_exchange_rates\")}\n if r:\n return r\n fetch_currency_exchange_rates(db)\n return get_currency_exchange_rates(db)\n", "path": "liberapay/utils/currencies.py"}]}
| 3,195 | 127 |
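A side note on the liberapay fix above: the reason `__dict__`-based equality is fragile is that any extra instance attribute makes two otherwise-equal values compare unequal. The sketch below uses a hypothetical stand-in class (`FakeMoney`), not the real mangopay `Money`, and the `fuzzy` attribute mirrors the flag that `fuzzy_sum()` sets on its result.

```python
from decimal import Decimal


class FakeMoney:
    """Hypothetical stand-in for mangopay's Money, just enough to show the pitfall."""

    def __init__(self, amount, currency):
        self.amount = Decimal(str(amount))
        self.currency = currency

    def __eq__(self, other):
        # Old behaviour from the patched code: compare the whole instance dict.
        return self.__dict__ == other.__dict__


a = FakeMoney("0.23", "EUR")
b = FakeMoney("0.23", "EUR")
print(a == b)   # True

b.fuzzy = True  # an extra bookkeeping attribute, like the one fuzzy_sum() sets
print(a == b)   # False, even though amount and currency still match

# Comparing only the value-defining fields, as the golden diff does, stays stable:
print(a.amount == b.amount and a.currency == b.currency)  # True
```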
gh_patches_debug_1332
|
rasdani/github-patches
|
git_diff
|
SeldonIO__MLServer-1064
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
decode_args with tuple return value
I'm confused about how to use `decode_args()` when the model returns a tuple of, let's say, a numpy array.
If I have an inference function with the following signature
```python
import numpy as np
from mlserver.codecs.decorator import decode_args
def predict(input: np.ndarray) -> tuple[np.ndarray]:
# simple identity model
return (input, )
decode_args(predict)
```
I receive a
```
CodecNotFound: Codec not found for output field with type tuple[numpy.ndarray]
```
error message.
I think the issue lies in the following line that is used to unwrap the tuple signature and convert it to a list of types.
https://github.com/SeldonIO/MLServer/blob/9d7045cc3c50a14201733865ca0c3e8480ebd942/mlserver/codecs/decorator.py#L37
However, the line checks whether the type hint is a tuple instance, which is not true in the above case: `isinstance(tuple[np.ndarray], tuple) == False`
To make `decode_args` work with a tuple return signature, I have to change the function signature to
```python
def predict(input: np.ndarray) -> tuple([np.ndarray]):
```
which is an actual tuple instance and not a type hint. This looks like a bug to me.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mlserver/codecs/decorator.py`
Content:
```
1 from functools import wraps, partial
2 from typing import (
3 Any,
4 Callable,
5 Coroutine,
6 Dict,
7 List,
8 Optional,
9 Union,
10 Type,
11 Tuple,
12 get_origin,
13 get_args,
14 get_type_hints,
15 TYPE_CHECKING,
16 )
17
18
19 from ..types import InferenceRequest, InferenceResponse, ResponseOutput
20
21 from .base import RequestCodec, InputCodec, find_input_codec, find_request_codec
22 from .errors import InputsNotFound, OutputNotFound, CodecNotFound
23 from .utils import Codec
24
25 if TYPE_CHECKING:
26 from ..model import MLModel
27
28 PredictFunc = Callable[
29 ["MLModel", InferenceRequest], Coroutine[Any, Any, InferenceResponse]
30 ]
31
32
33 def _as_list(a: Optional[Union[Any, Tuple[Any]]]) -> List[Any]:
34 if a is None:
35 return []
36
37 if isinstance(a, tuple):
38 # Split into components
39 return list(a)
40
41 # Otherwise, assume it's a single element
42 return [a]
43
44
45 def _is_codec_type(c: Codec, t: Type) -> bool:
46 if issubclass(c, t): # type: ignore
47 return True
48
49 if isinstance(c, t):
50 return True
51
52 return False
53
54
55 _is_input_codec = partial(_is_codec_type, t=InputCodec)
56 _is_request_codec = partial(_is_codec_type, t=RequestCodec)
57
58
59 def _is_optional(t: Type) -> bool:
60 origin = get_origin(t)
61 if origin == Optional:
62 return True
63
64 if origin == Union:
65 # Cover case where Optional[a] is reported as Union[a, None]
66 args = get_args(t)
67 if len(args) == 2 and type(None) in args:
68 return True
69
70 return False
71
72
73 def _unwrap_optional(t: Type) -> Type:
74 args = get_args(t)
75 for arg in args:
76 if not isinstance(arg, type(None)):
77 return arg
78
79 return t
80
81
82 class SignatureCodec(RequestCodec):
83 """
84 Internal codec that knows how to map type hints to codecs.
85 """
86
87 # TODO: Should this receive the whole class as argument?
88 def __init__(self, predict: Callable):
89 self._predict = predict
90 self._input_codecs, self._output_codecs = self._get_codecs(predict)
91
92 def _get_codecs(self, pred: Callable) -> Tuple[Dict[str, Codec], List[Codec]]:
93 self._input_hints = self._get_type_hints(pred)
94 self._output_hints = _as_list(self._input_hints.pop("return", None))
95
96 input_codecs = {}
97 for name, type_hint in self._input_hints.items():
98 codec = self._find_codec(name=name, type_hint=type_hint, is_input=True)
99 # TODO: Consider metadata as well! (needs to be done at runtime)
100 input_codecs[name] = codec
101
102 output_codecs = []
103 for type_hint in self._output_hints:
104 # Try either as an input or as a request codec
105 codec = self._find_codec(name=None, type_hint=type_hint, is_input=False)
106 output_codecs.append(codec)
107
108 return input_codecs, output_codecs
109
110 def _get_type_hints(self, pred: Callable) -> Dict[str, Type]:
111 type_hints = get_type_hints(pred)
112 # For us, `typing.Optional` is just syntactic sugar, so let's ensure we
113 # unwrap it
114 for name, hint in type_hints.items():
115 if _is_optional(hint):
116 type_hints[name] = _unwrap_optional(hint)
117
118 return type_hints
119
120 def _find_codec(
121 self, name: Optional[str], type_hint: Type, is_input: bool = False
122 ) -> Codec:
123 codec = find_input_codec(type_hint=type_hint)
124 if codec is not None:
125 return codec
126
127 codec = find_request_codec(type_hint=type_hint) # type: ignore
128 if codec is not None:
129 return codec
130
131 raise CodecNotFound(name=name, payload_type=str(type_hint), is_input=is_input)
132
133 def decode_request( # type: ignore
134 self, request: InferenceRequest
135 ) -> Dict[str, Any]:
136 inputs = {}
137 extra_request_inputs = []
138 for request_input in request.inputs:
139 input_name = request_input.name
140 if input_name not in self._input_codecs:
141 # Aggregate extra request inputs to check later, as they could
142 # be part of aggregated request codecs (e.g. like dataframes)
143 extra_request_inputs.append(request_input)
144 continue
145
146 # Ensure matching codec is an input codec
147 codec = self._input_codecs[input_name]
148 if not _is_input_codec(codec):
149 raise CodecNotFound(name=input_name, is_input=True)
150
151 inputs[input_name] = codec.decode_input(request_input) # type: ignore
152
153 if extra_request_inputs:
154 request_codec = self._get_request_codec()
155 if not request_codec:
156 # If there are no request codecs that can aggregate all
157 # remaining inputs, raise an error
158 raise InputsNotFound(extra_request_inputs, self._input_codecs)
159
160 # We create a fake request built from the extra request inputs
161 name, codec = request_codec
162 extra_inputs = InferenceRequest(inputs=extra_request_inputs)
163 inputs[name] = codec.decode_request(extra_inputs)
164
165 return inputs
166
167 def _get_request_codec(self) -> Optional[Tuple[str, RequestCodec]]:
168 for name, codec in self._input_codecs.items():
169 if _is_request_codec(codec):
170 return name, codec # type: ignore
171
172 return None
173
174 def encode_response( # type: ignore
175 self, model_name: str, payload: Any, model_version: Optional[str] = None
176 ) -> InferenceResponse:
177 payloads = _as_list(payload)
178 outputs = []
179 for idx, payload in enumerate(payloads):
180 outputs += self._encode_outputs(idx, payload)
181
182 return InferenceResponse(
183 model_name=model_name, model_version=model_version, outputs=outputs
184 )
185
186 def _encode_outputs(self, idx: int, payload: Any) -> List[ResponseOutput]:
187 output_type = type(payload)
188 if idx >= len(self._output_codecs):
189 raise OutputNotFound(idx, output_type, self._output_hints)
190
191 # TODO: Fallback to encode_by_payload?
192 codec = self._output_codecs[idx]
193 if not codec.can_encode(payload):
194 raise OutputNotFound(idx, output_type, self._output_hints)
195
196 if _is_input_codec(codec):
197 # TODO: Check model metadata for output names
198 output_name = f"output-{idx}"
199 response_output = codec.encode_output( # type: ignore
200 name=output_name, payload=payload
201 )
202 return [response_output]
203
204 if _is_request_codec(codec):
205 # NOTE: We will ignore `model_name` and only grab the outputs
206 response = codec.encode_response( # type: ignore
207 model_name="", payload=payload
208 )
209 return response.outputs
210
211 return []
212
213
214 def decode_args(predict: Callable) -> PredictFunc:
215 codec = SignatureCodec(predict)
216
217 @wraps(predict)
218 async def _f(self: "MLModel", request: InferenceRequest) -> InferenceResponse:
219 inputs = codec.decode_request(request=request)
220
221 outputs = await predict(self, **inputs)
222
223 return codec.encode_response(
224 model_name=self.name, payload=outputs, model_version=self.version
225 )
226
227 return _f
228
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mlserver/codecs/decorator.py b/mlserver/codecs/decorator.py
--- a/mlserver/codecs/decorator.py
+++ b/mlserver/codecs/decorator.py
@@ -38,6 +38,10 @@
# Split into components
return list(a)
+ if get_origin(a) is tuple:
+ # Convert type arguments into list
+ return list(get_args(a))
+
# Otherwise, assume it's a single element
return [a]
|
{"golden_diff": "diff --git a/mlserver/codecs/decorator.py b/mlserver/codecs/decorator.py\n--- a/mlserver/codecs/decorator.py\n+++ b/mlserver/codecs/decorator.py\n@@ -38,6 +38,10 @@\n # Split into components\n return list(a)\n \n+ if get_origin(a) is tuple:\n+ # Convert type arguments into list\n+ return list(get_args(a))\n+\n # Otherwise, assume it's a single element\n return [a]\n", "issue": "decode_args with tuple return value\nI'm confused about how to use `decode_args()` when the model returns a tuple of, let's say, a numpy array.\r\n\r\nIf I have an inference function with the following signature\r\n\r\n```python\r\nimport numpy as np\r\nfrom mlserver.codecs.decorator import decode_args\r\n\r\ndef predict(input: np.ndarray) -> tuple[np.ndarray]:\r\n # simple identity model\r\n return (input, )\r\n\r\ndecode_args(predict)\r\n```\r\nI receive a\r\n\r\n```\r\nCodecNotFound: Codec not found for output field with type tuple[numpy.ndarray]\r\n```\r\n\r\nerror message.\r\n\r\nI think the issue lies in the following line that is used to unwrap the tuple signature and convert it to a list of types.\r\n\r\nhttps://github.com/SeldonIO/MLServer/blob/9d7045cc3c50a14201733865ca0c3e8480ebd942/mlserver/codecs/decorator.py#L37\r\n\r\nHowever the line checks if the type hint is a tuple instance, which is not true in the above case: `isinstance(tuple[np.ndarray], tuple) == False)`\r\n\r\nTo make `decode_args` work with a tuple return signature, I have to change the function signature to\r\n\r\n```python\r\ndef predict(input: np.ndarray) -> tuple([np.ndarray]):\r\n```\r\nwhich is an actual tuple instance and not a type hint. This looks like a bug to me. \n", "before_files": [{"content": "from functools import wraps, partial\nfrom typing import (\n Any,\n Callable,\n Coroutine,\n Dict,\n List,\n Optional,\n Union,\n Type,\n Tuple,\n get_origin,\n get_args,\n get_type_hints,\n TYPE_CHECKING,\n)\n\n\nfrom ..types import InferenceRequest, InferenceResponse, ResponseOutput\n\nfrom .base import RequestCodec, InputCodec, find_input_codec, find_request_codec\nfrom .errors import InputsNotFound, OutputNotFound, CodecNotFound\nfrom .utils import Codec\n\nif TYPE_CHECKING:\n from ..model import MLModel\n\nPredictFunc = Callable[\n [\"MLModel\", InferenceRequest], Coroutine[Any, Any, InferenceResponse]\n]\n\n\ndef _as_list(a: Optional[Union[Any, Tuple[Any]]]) -> List[Any]:\n if a is None:\n return []\n\n if isinstance(a, tuple):\n # Split into components\n return list(a)\n\n # Otherwise, assume it's a single element\n return [a]\n\n\ndef _is_codec_type(c: Codec, t: Type) -> bool:\n if issubclass(c, t): # type: ignore\n return True\n\n if isinstance(c, t):\n return True\n\n return False\n\n\n_is_input_codec = partial(_is_codec_type, t=InputCodec)\n_is_request_codec = partial(_is_codec_type, t=RequestCodec)\n\n\ndef _is_optional(t: Type) -> bool:\n origin = get_origin(t)\n if origin == Optional:\n return True\n\n if origin == Union:\n # Cover case where Optional[a] is reported as Union[a, None]\n args = get_args(t)\n if len(args) == 2 and type(None) in args:\n return True\n\n return False\n\n\ndef _unwrap_optional(t: Type) -> Type:\n args = get_args(t)\n for arg in args:\n if not isinstance(arg, type(None)):\n return arg\n\n return t\n\n\nclass SignatureCodec(RequestCodec):\n \"\"\"\n Internal codec that knows how to map type hints to codecs.\n \"\"\"\n\n # TODO: Should this receive the whole class as argument?\n def __init__(self, predict: Callable):\n self._predict = predict\n self._input_codecs, 
self._output_codecs = self._get_codecs(predict)\n\n def _get_codecs(self, pred: Callable) -> Tuple[Dict[str, Codec], List[Codec]]:\n self._input_hints = self._get_type_hints(pred)\n self._output_hints = _as_list(self._input_hints.pop(\"return\", None))\n\n input_codecs = {}\n for name, type_hint in self._input_hints.items():\n codec = self._find_codec(name=name, type_hint=type_hint, is_input=True)\n # TODO: Consider metadata as well! (needs to be done at runtime)\n input_codecs[name] = codec\n\n output_codecs = []\n for type_hint in self._output_hints:\n # Try either as an input or as a request codec\n codec = self._find_codec(name=None, type_hint=type_hint, is_input=False)\n output_codecs.append(codec)\n\n return input_codecs, output_codecs\n\n def _get_type_hints(self, pred: Callable) -> Dict[str, Type]:\n type_hints = get_type_hints(pred)\n # For us, `typing.Optional` is just syntactic sugar, so let's ensure we\n # unwrap it\n for name, hint in type_hints.items():\n if _is_optional(hint):\n type_hints[name] = _unwrap_optional(hint)\n\n return type_hints\n\n def _find_codec(\n self, name: Optional[str], type_hint: Type, is_input: bool = False\n ) -> Codec:\n codec = find_input_codec(type_hint=type_hint)\n if codec is not None:\n return codec\n\n codec = find_request_codec(type_hint=type_hint) # type: ignore\n if codec is not None:\n return codec\n\n raise CodecNotFound(name=name, payload_type=str(type_hint), is_input=is_input)\n\n def decode_request( # type: ignore\n self, request: InferenceRequest\n ) -> Dict[str, Any]:\n inputs = {}\n extra_request_inputs = []\n for request_input in request.inputs:\n input_name = request_input.name\n if input_name not in self._input_codecs:\n # Aggregate extra request inputs to check later, as they could\n # be part of aggregated request codecs (e.g. 
like dataframes)\n extra_request_inputs.append(request_input)\n continue\n\n # Ensure matching codec is an input codec\n codec = self._input_codecs[input_name]\n if not _is_input_codec(codec):\n raise CodecNotFound(name=input_name, is_input=True)\n\n inputs[input_name] = codec.decode_input(request_input) # type: ignore\n\n if extra_request_inputs:\n request_codec = self._get_request_codec()\n if not request_codec:\n # If there are no request codecs that can aggregate all\n # remaining inputs, raise an error\n raise InputsNotFound(extra_request_inputs, self._input_codecs)\n\n # We create a fake request built from the extra request inputs\n name, codec = request_codec\n extra_inputs = InferenceRequest(inputs=extra_request_inputs)\n inputs[name] = codec.decode_request(extra_inputs)\n\n return inputs\n\n def _get_request_codec(self) -> Optional[Tuple[str, RequestCodec]]:\n for name, codec in self._input_codecs.items():\n if _is_request_codec(codec):\n return name, codec # type: ignore\n\n return None\n\n def encode_response( # type: ignore\n self, model_name: str, payload: Any, model_version: Optional[str] = None\n ) -> InferenceResponse:\n payloads = _as_list(payload)\n outputs = []\n for idx, payload in enumerate(payloads):\n outputs += self._encode_outputs(idx, payload)\n\n return InferenceResponse(\n model_name=model_name, model_version=model_version, outputs=outputs\n )\n\n def _encode_outputs(self, idx: int, payload: Any) -> List[ResponseOutput]:\n output_type = type(payload)\n if idx >= len(self._output_codecs):\n raise OutputNotFound(idx, output_type, self._output_hints)\n\n # TODO: Fallback to encode_by_payload?\n codec = self._output_codecs[idx]\n if not codec.can_encode(payload):\n raise OutputNotFound(idx, output_type, self._output_hints)\n\n if _is_input_codec(codec):\n # TODO: Check model metadata for output names\n output_name = f\"output-{idx}\"\n response_output = codec.encode_output( # type: ignore\n name=output_name, payload=payload\n )\n return [response_output]\n\n if _is_request_codec(codec):\n # NOTE: We will ignore `model_name` and only grab the outputs\n response = codec.encode_response( # type: ignore\n model_name=\"\", payload=payload\n )\n return response.outputs\n\n return []\n\n\ndef decode_args(predict: Callable) -> PredictFunc:\n codec = SignatureCodec(predict)\n\n @wraps(predict)\n async def _f(self: \"MLModel\", request: InferenceRequest) -> InferenceResponse:\n inputs = codec.decode_request(request=request)\n\n outputs = await predict(self, **inputs)\n\n return codec.encode_response(\n model_name=self.name, payload=outputs, model_version=self.version\n )\n\n return _f\n", "path": "mlserver/codecs/decorator.py"}], "after_files": [{"content": "from functools import wraps, partial\nfrom typing import (\n Any,\n Callable,\n Coroutine,\n Dict,\n List,\n Optional,\n Union,\n Type,\n Tuple,\n get_origin,\n get_args,\n get_type_hints,\n TYPE_CHECKING,\n)\n\n\nfrom ..types import InferenceRequest, InferenceResponse, ResponseOutput\n\nfrom .base import RequestCodec, InputCodec, find_input_codec, find_request_codec\nfrom .errors import InputsNotFound, OutputNotFound, CodecNotFound\nfrom .utils import Codec\n\nif TYPE_CHECKING:\n from ..model import MLModel\n\nPredictFunc = Callable[\n [\"MLModel\", InferenceRequest], Coroutine[Any, Any, InferenceResponse]\n]\n\n\ndef _as_list(a: Optional[Union[Any, Tuple[Any]]]) -> List[Any]:\n if a is None:\n return []\n\n if isinstance(a, tuple):\n # Split into components\n return list(a)\n\n if get_origin(a) is tuple:\n # Convert 
type arguments into list\n return list(get_args(a))\n\n # Otherwise, assume it's a single element\n return [a]\n\n\ndef _is_codec_type(c: Codec, t: Type) -> bool:\n if issubclass(c, t): # type: ignore\n return True\n\n if isinstance(c, t):\n return True\n\n return False\n\n\n_is_input_codec = partial(_is_codec_type, t=InputCodec)\n_is_request_codec = partial(_is_codec_type, t=RequestCodec)\n\n\ndef _is_optional(t: Type) -> bool:\n origin = get_origin(t)\n if origin == Optional:\n return True\n\n if origin == Union:\n # Cover case where Optional[a] is reported as Union[a, None]\n args = get_args(t)\n if len(args) == 2 and type(None) in args:\n return True\n\n return False\n\n\ndef _unwrap_optional(t: Type) -> Type:\n args = get_args(t)\n for arg in args:\n if not isinstance(arg, type(None)):\n return arg\n\n return t\n\n\nclass SignatureCodec(RequestCodec):\n \"\"\"\n Internal codec that knows how to map type hints to codecs.\n \"\"\"\n\n # TODO: Should this receive the whole class as argument?\n def __init__(self, predict: Callable):\n self._predict = predict\n self._input_codecs, self._output_codecs = self._get_codecs(predict)\n\n def _get_codecs(self, pred: Callable) -> Tuple[Dict[str, Codec], List[Codec]]:\n self._input_hints = self._get_type_hints(pred)\n self._output_hints = _as_list(self._input_hints.pop(\"return\", None))\n\n input_codecs = {}\n for name, type_hint in self._input_hints.items():\n codec = self._find_codec(name=name, type_hint=type_hint, is_input=True)\n # TODO: Consider metadata as well! (needs to be done at runtime)\n input_codecs[name] = codec\n\n output_codecs = []\n for type_hint in self._output_hints:\n # Try either as an input or as a request codec\n codec = self._find_codec(name=None, type_hint=type_hint, is_input=False)\n output_codecs.append(codec)\n\n return input_codecs, output_codecs\n\n def _get_type_hints(self, pred: Callable) -> Dict[str, Type]:\n type_hints = get_type_hints(pred)\n # For us, `typing.Optional` is just syntactic sugar, so let's ensure we\n # unwrap it\n for name, hint in type_hints.items():\n if _is_optional(hint):\n type_hints[name] = _unwrap_optional(hint)\n\n return type_hints\n\n def _find_codec(\n self, name: Optional[str], type_hint: Type, is_input: bool = False\n ) -> Codec:\n codec = find_input_codec(type_hint=type_hint)\n if codec is not None:\n return codec\n\n codec = find_request_codec(type_hint=type_hint) # type: ignore\n if codec is not None:\n return codec\n\n raise CodecNotFound(name=name, payload_type=str(type_hint), is_input=is_input)\n\n def decode_request( # type: ignore\n self, request: InferenceRequest\n ) -> Dict[str, Any]:\n inputs = {}\n extra_request_inputs = []\n for request_input in request.inputs:\n input_name = request_input.name\n if input_name not in self._input_codecs:\n # Aggregate extra request inputs to check later, as they could\n # be part of aggregated request codecs (e.g. 
like dataframes)\n extra_request_inputs.append(request_input)\n continue\n\n # Ensure matching codec is an input codec\n codec = self._input_codecs[input_name]\n if not _is_input_codec(codec):\n raise CodecNotFound(name=input_name, is_input=True)\n\n inputs[input_name] = codec.decode_input(request_input) # type: ignore\n\n if extra_request_inputs:\n request_codec = self._get_request_codec()\n if not request_codec:\n # If there are no request codecs that can aggregate all\n # remaining inputs, raise an error\n raise InputsNotFound(extra_request_inputs, self._input_codecs)\n\n # We create a fake request built from the extra request inputs\n name, codec = request_codec\n extra_inputs = InferenceRequest(inputs=extra_request_inputs)\n inputs[name] = codec.decode_request(extra_inputs)\n\n return inputs\n\n def _get_request_codec(self) -> Optional[Tuple[str, RequestCodec]]:\n for name, codec in self._input_codecs.items():\n if _is_request_codec(codec):\n return name, codec # type: ignore\n\n return None\n\n def encode_response( # type: ignore\n self, model_name: str, payload: Any, model_version: Optional[str] = None\n ) -> InferenceResponse:\n payloads = _as_list(payload)\n outputs = []\n for idx, payload in enumerate(payloads):\n outputs += self._encode_outputs(idx, payload)\n\n return InferenceResponse(\n model_name=model_name, model_version=model_version, outputs=outputs\n )\n\n def _encode_outputs(self, idx: int, payload: Any) -> List[ResponseOutput]:\n output_type = type(payload)\n if idx >= len(self._output_codecs):\n raise OutputNotFound(idx, output_type, self._output_hints)\n\n # TODO: Fallback to encode_by_payload?\n codec = self._output_codecs[idx]\n if not codec.can_encode(payload):\n raise OutputNotFound(idx, output_type, self._output_hints)\n\n if _is_input_codec(codec):\n # TODO: Check model metadata for output names\n output_name = f\"output-{idx}\"\n response_output = codec.encode_output( # type: ignore\n name=output_name, payload=payload\n )\n return [response_output]\n\n if _is_request_codec(codec):\n # NOTE: We will ignore `model_name` and only grab the outputs\n response = codec.encode_response( # type: ignore\n model_name=\"\", payload=payload\n )\n return response.outputs\n\n return []\n\n\ndef decode_args(predict: Callable) -> PredictFunc:\n codec = SignatureCodec(predict)\n\n @wraps(predict)\n async def _f(self: \"MLModel\", request: InferenceRequest) -> InferenceResponse:\n inputs = codec.decode_request(request=request)\n\n outputs = await predict(self, **inputs)\n\n return codec.encode_response(\n model_name=self.name, payload=outputs, model_version=self.version\n )\n\n return _f\n", "path": "mlserver/codecs/decorator.py"}]}
| 2,801 | 113 |
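For reference, the distinction that the MLServer issue and patch above hinge on can be checked directly with the standard `typing` helpers. This is a minimal sketch, assuming Python 3.9+ for the built-in `tuple[...]` syntax; nothing here is MLServer-specific.

```python
from typing import Tuple, get_args, get_origin

import numpy as np

hint = tuple[np.ndarray]  # the return annotation from the issue

# A parameterised generic is not an instance of tuple, which is why the
# original isinstance() check in _as_list() never matched it:
print(isinstance(hint, tuple))                 # False

# get_origin()/get_args() see through the parameterisation instead:
print(get_origin(hint) is tuple)               # True
print(get_args(hint))                          # (<class 'numpy.ndarray'>,)

# The typing.Tuple spelling behaves the same way:
print(get_origin(Tuple[np.ndarray]) is tuple)  # True
```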
gh_patches_debug_21335
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-5189
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[CKV_AZURE_6] AKS API Server White Tests Failing with Correct Code
**Describe the issue**
[CKV_AZURE_6](https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/AKSApiServerAuthorizedIpRanges.py)
This check should trigger when an API Server whitelist IP isn't found in the TF code.
**Examples**
Please share an example code sample (in the IaC of your choice) + the expected outcomes.
Sample (Pre 3.39.0 Provider Version):
```
resource "azurerm_kubernetes_cluster" "aks_k2" {
name = var.cluster_name
location = azurerm_resource_group.rg_aks.location
resource_group_name = azurerm_resource_group.rg_aks.name
sku_tier = var.sku_tier
dns_prefix = var.dns_name
api_server_authorized_ip_ranges = [my_ip_list]
}
```
Sample (Post 3.39.0):
```
resource "azurerm_kubernetes_cluster" "aks_k2" {
name = var.cluster_name
location = azurerm_resource_group.rg_aks.location
resource_group_name = azurerm_resource_group.rg_aks.name
sku_tier = var.sku_tier
dns_prefix = var.dns_name
api_server_access_profile {
authorized_ip_ranges = [my_ip_list]
}
}
```
Both have the expected outcome of passing this test, as we list 4 IPs for whitelisting.
We are failing tests

**Version (please complete the following information):**
- Checkov Version: checkov-2.3.272
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import Any
4
5 from checkov.common.models.consts import ANY_VALUE
6 from checkov.common.models.enums import CheckCategories, CheckResult
7 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
8
9
10 class AKSApiServerAuthorizedIpRanges(BaseResourceValueCheck):
11 def __init__(self) -> None:
12 name = "Ensure AKS has an API Server Authorized IP Ranges enabled"
13 id = "CKV_AZURE_6"
14 supported_resources = ("azurerm_kubernetes_cluster",)
15 categories = (CheckCategories.KUBERNETES,)
16 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
17
18 def get_inspected_key(self) -> str:
19 return "api_server_authorized_ip_ranges/[0]"
20
21 def get_expected_value(self) -> Any:
22 return ANY_VALUE
23
24 def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:
25 # can't be set for private cluster
26 private_cluster_enabled = conf.get("private_cluster_enabled", [False])[0]
27 if private_cluster_enabled:
28 return CheckResult.PASSED
29 return super().scan_resource_conf(conf)
30
31
32 check = AKSApiServerAuthorizedIpRanges()
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py b/checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py
--- a/checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py
+++ b/checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py
@@ -16,7 +16,7 @@
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
def get_inspected_key(self) -> str:
- return "api_server_authorized_ip_ranges/[0]"
+ return "api_server_access_profile/[0]/authorized_ip_ranges/[0]"
def get_expected_value(self) -> Any:
return ANY_VALUE
@@ -26,6 +26,12 @@
private_cluster_enabled = conf.get("private_cluster_enabled", [False])[0]
if private_cluster_enabled:
return CheckResult.PASSED
+
+ # provider version <=3.38.0
+ api_server = conf.get("api_server_authorized_ip_ranges")
+ if api_server and isinstance(api_server, list) and api_server[0]:
+ return CheckResult.PASSED
+
return super().scan_resource_conf(conf)
|
{"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py b/checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py\n--- a/checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py\n+++ b/checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py\n@@ -16,7 +16,7 @@\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n def get_inspected_key(self) -> str:\n- return \"api_server_authorized_ip_ranges/[0]\"\n+ return \"api_server_access_profile/[0]/authorized_ip_ranges/[0]\"\n \n def get_expected_value(self) -> Any:\n return ANY_VALUE\n@@ -26,6 +26,12 @@\n private_cluster_enabled = conf.get(\"private_cluster_enabled\", [False])[0]\n if private_cluster_enabled:\n return CheckResult.PASSED\n+\n+ # provider version <=3.38.0\n+ api_server = conf.get(\"api_server_authorized_ip_ranges\")\n+ if api_server and isinstance(api_server, list) and api_server[0]:\n+ return CheckResult.PASSED\n+\n return super().scan_resource_conf(conf)\n", "issue": "[CKV_AZURE_6] AKS API Server White Tests Failing with Correct Code\n**Describe the issue**\r\n[CKV_AZURE_6](https://github.com/bridgecrewio/checkov/tree/master/checkov/arm/checks/resource/AKSApiServerAuthorizedIpRanges.py)\r\n\r\nThis check should trigger when an API Server whitelist IP isn't found in the TF code. \r\n\r\n**Examples**\r\nPlease share an example code sample (in the IaC of your choice) + the expected outcomes.\r\n\r\nSample (Pre 3.39.0 Provider Version):\r\n```\r\nresource \"azurerm_kubernetes_cluster\" \"aks_k2\" {\r\n name = var.cluster_name\r\n location = azurerm_resource_group.rg_aks.location\r\n resource_group_name = azurerm_resource_group.rg_aks.name\r\n sku_tier = var.sku_tier\r\n dns_prefix = var.dns_name\r\n api_server_authorized_ip_ranges = [my_ip_list]\r\n}\r\n```\r\nSample (Post 3.39.0):\r\n```\r\nresource \"azurerm_kubernetes_cluster\" \"aks_k2\" {\r\n name = var.cluster_name\r\n location = azurerm_resource_group.rg_aks.location\r\n resource_group_name = azurerm_resource_group.rg_aks.name\r\n sku_tier = var.sku_tier\r\n dns_prefix = var.dns_name\r\n api_server_access_profile {\r\n authorized_ip_ranges = [my_ip_list]\r\n }\r\n}\r\n```\r\n\r\nBoth have expected outcome of passing this test, as we list 4 IP's for whitelisting.\r\nWe are failing tests\r\n\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version: checkov-2.3.272\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nfrom checkov.common.models.consts import ANY_VALUE\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass AKSApiServerAuthorizedIpRanges(BaseResourceValueCheck):\n def __init__(self) -> None:\n name = \"Ensure AKS has an API Server Authorized IP Ranges enabled\"\n id = \"CKV_AZURE_6\"\n supported_resources = (\"azurerm_kubernetes_cluster\",)\n categories = (CheckCategories.KUBERNETES,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self) -> str:\n return \"api_server_authorized_ip_ranges/[0]\"\n\n def get_expected_value(self) -> Any:\n return ANY_VALUE\n\n def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:\n # can't be set for private cluster\n private_cluster_enabled = conf.get(\"private_cluster_enabled\", [False])[0]\n if 
private_cluster_enabled:\n return CheckResult.PASSED\n return super().scan_resource_conf(conf)\n\n\ncheck = AKSApiServerAuthorizedIpRanges()\n", "path": "checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nfrom checkov.common.models.consts import ANY_VALUE\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass AKSApiServerAuthorizedIpRanges(BaseResourceValueCheck):\n def __init__(self) -> None:\n name = \"Ensure AKS has an API Server Authorized IP Ranges enabled\"\n id = \"CKV_AZURE_6\"\n supported_resources = (\"azurerm_kubernetes_cluster\",)\n categories = (CheckCategories.KUBERNETES,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self) -> str:\n return \"api_server_access_profile/[0]/authorized_ip_ranges/[0]\"\n\n def get_expected_value(self) -> Any:\n return ANY_VALUE\n\n def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:\n # can't be set for private cluster\n private_cluster_enabled = conf.get(\"private_cluster_enabled\", [False])[0]\n if private_cluster_enabled:\n return CheckResult.PASSED\n\n # provider version <=3.38.0\n api_server = conf.get(\"api_server_authorized_ip_ranges\")\n if api_server and isinstance(api_server, list) and api_server[0]:\n return CheckResult.PASSED\n\n return super().scan_resource_conf(conf)\n\n\ncheck = AKSApiServerAuthorizedIpRanges()\n", "path": "checkov/terraform/checks/resource/azure/AKSApiServerAuthorizedIpRanges.py"}]}
| 1,029 | 281 |
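The shape of the checkov fix above, accepting either the pre-3.39.0 top-level attribute or the new nested block, can be sketched independently of checkov's base classes. The helper below is only an illustration under the assumption that the parsed conf wraps each value in a list, as the diff's `conf.get(...)[0]` usage suggests; it is not checkov's actual API.

```python
def has_authorized_ip_ranges(conf: dict) -> bool:
    """Return True if either AKS whitelist form carries at least one IP range."""
    # azurerm provider <= 3.38.0: top-level attribute
    old_style = conf.get("api_server_authorized_ip_ranges")
    if old_style and isinstance(old_style, list) and old_style[0]:
        return True

    # azurerm provider >= 3.39.0: nested api_server_access_profile block
    profile = conf.get("api_server_access_profile")
    if profile and isinstance(profile, list) and isinstance(profile[0], dict):
        ranges = profile[0].get("authorized_ip_ranges")
        if ranges and isinstance(ranges, list) and ranges[0]:
            return True

    return False


# Both samples from the issue would pass, and a cluster with no whitelist would not:
print(has_authorized_ip_ranges(
    {"api_server_authorized_ip_ranges": [["1.2.3.4/32"]]}))       # True
print(has_authorized_ip_ranges(
    {"api_server_access_profile": [{"authorized_ip_ranges": [["1.2.3.4/32"]]}]}))  # True
print(has_authorized_ip_ranges({"dns_prefix": ["example"]}))      # False
```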
gh_patches_debug_43784
|
rasdani/github-patches
|
git_diff
|
biolab__orange3-text-234
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WordEnrichment Nans & BoW Features Only
<!--
This is an issue template. Please fill in the relevant details in the
sections below.
-->
##### Text version
<!-- From menu _Options→Add-ons→Orange3-Text_ or code `orangecontrib.text.version.full_version` -->
0.2.4.dev
##### WordEnrichment issues:
* WE takes all features. It should only consider BoW features. If no BoW features are present, raise an error.
* WE doesn't work if there are NaNs in the sparse matrix. This probably cannot occur if we use only BoW features.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `orangecontrib/text/vectorization/base.py`
Content:
```
1 import numpy as np
2 from gensim import matutils
3 from gensim.corpora import Dictionary
4
5
6 class BaseVectorizer:
7 """Base class for vectorization objects. """
8 name = NotImplemented
9
10 def transform(self, corpus, copy=True, source_dict=None):
11 """Transforms a corpus to a new one with additional attributes. """
12 if copy:
13 corpus = corpus.copy()
14
15 if not len(corpus.dictionary):
16 return corpus
17 else:
18 return self._transform(corpus, source_dict)
19
20 def _transform(self, corpus, source_dict):
21 raise NotImplementedError
22
23 def report(self):
24 """Reports configuration items."""
25 raise NotImplementedError
26
27 @staticmethod
28 def add_features(corpus, X, dictionary, compute_values=None):
29 order = np.argsort([dictionary[i] for i in range(len(dictionary))])
30 compute_values = np.array(compute_values)[order]
31 corpus.extend_attributes(X[:, order],
32 feature_names=(dictionary[i] for i in order),
33 var_attrs={'hidden': True, 'skip-normalization': True},
34 compute_values=compute_values)
35 corpus.ngrams_corpus = matutils.Sparse2Corpus(X.T)
36
```
Path: `orangecontrib/text/vectorization/bagofwords.py`
Content:
```
1 from collections import OrderedDict
2 from functools import partial
3
4 import numpy as np
5 from gensim import corpora, models, matutils
6 from sklearn.preprocessing import normalize
7
8 from Orange.data.util import SharedComputeValue
9 from orangecontrib.text.vectorization.base import BaseVectorizer
10
11
12 class BoWPreprocessTransform:
13 """
14 Shared computation for transforming new data set into the classifiers's BoW domain.
15 This will run preprocessing as well as BoW transformation itself.
16 """
17 def __init__(self, preprocessor, bow_vectorizer, dictionary):
18 self.preprocessor = preprocessor
19 self.bow_vectorizer = bow_vectorizer
20 self.dictionary = dictionary
21
22 def __call__(self, new_corpus):
23 new_corpus = self.preprocessor(new_corpus)
24 bow_corpus = self.bow_vectorizer.transform(new_corpus, copy=True, source_dict=self.dictionary)
25 # store name to indices mapping so BoWComputeValue can run faster
26 bow_corpus.feature_name_to_index = {attr.name: i for i, attr in enumerate(bow_corpus.domain.attributes)}
27 return bow_corpus
28
29
30 class BoWComputeValue(SharedComputeValue):
31 """
32 Compute Value for Bow features. This enables applying a
33 classifier — that was trained on a BoW model — on new data.
34 """
35 def __init__(self, name, compute_shared):
36 super().__init__(compute_shared)
37 self.name = name
38
39 def compute(self, data, shared_data):
40 ind = shared_data.feature_name_to_index[self.name]
41 return shared_data.X[:, ind]
42
43
44 class BowVectorizer(BaseVectorizer):
45 name = 'BoW Vectorizer'
46
47 COUNT = 'Count'
48 BINARY = 'Binary'
49 SUBLINEAR = 'Sublinear'
50 NONE = '(None)'
51 IDF = 'IDF'
52 SMOOTH = 'Smooth IDF'
53 L1 = 'L1 (Sum of elements)'
54 L2 = 'L2 (Euclidean)'
55
56 wlocals = OrderedDict((
57 (COUNT, lambda tf: tf),
58 (BINARY, lambda tf: int(tf > 0)),
59 (SUBLINEAR, lambda tf: 1 + np.log(tf)),
60 ))
61
62 wglobals = OrderedDict((
63 (NONE, lambda df, N: 1),
64 (IDF, lambda df, N: np.log(N/df)),
65 (SMOOTH, lambda df, N: np.log(1 + N/df)),
66 ))
67
68 norms = OrderedDict((
69 (NONE, None),
70 (L1, partial(normalize, norm='l1')),
71 (L2, partial(normalize, norm='l2')),
72 ))
73
74 def __init__(self, norm=NONE, wlocal=COUNT, wglobal=NONE):
75 self.norm = norm
76 self.wlocal = wlocal
77 self.wglobal = wglobal
78
79 def _transform(self, corpus, source_dict=None):
80 temp_corpus = list(corpus.ngrams_iterator(' ', include_postags=True))
81 dic = corpora.Dictionary(temp_corpus, prune_at=None) if not source_dict else source_dict
82 temp_corpus = [dic.doc2bow(doc) for doc in temp_corpus]
83 model = models.TfidfModel(temp_corpus, normalize=False,
84 wlocal=self.wlocals[self.wlocal],
85 wglobal=self.wglobals[self.wglobal])
86
87 X = matutils.corpus2csc(model[temp_corpus], dtype=np.float, num_terms=len(dic)).T
88 norm = self.norms[self.norm]
89 if norm:
90 X = norm(X)
91
92 # set compute values
93 shared_cv = BoWPreprocessTransform(corpus.used_preprocessor, self, dic)
94 cv = [BoWComputeValue(dic[i], shared_cv) for i in range(len(dic))]
95
96 self.add_features(corpus, X, dic, cv)
97 return corpus
98
99 def report(self):
100 return (('Term Frequency', self.wlocal),
101 ('Document Frequency', self.wglobal),
102 ('Regularization', self.norm),)
103
```
Path: `orangecontrib/text/widgets/owwordenrichment.py`
Content:
```
1 import math
2
3 import numpy as np
4 from AnyQt.QtWidgets import QTreeWidget, QTreeView, QTreeWidgetItem
5
6 from Orange.data import Table
7 from Orange.widgets import gui
8 from Orange.widgets.settings import Setting
9 from Orange.widgets.widget import OWWidget, Msg
10 from orangecontrib.text.util import np_sp_sum
11 from orangecontrib.text.stats import false_discovery_rate, hypergeom_p_values
12
13
14 class OWWordEnrichment(OWWidget):
15 # Basic widget info
16 name = "Word Enrichment"
17 description = "Word enrichment analysis for selected documents."
18 icon = "icons/SetEnrichment.svg"
19 priority = 60
20
21 # Input/output
22 inputs = [("Selected Data", Table, "set_data_selected"),
23 ("Data", Table, "set_data"),]
24 want_main_area = True
25
26 class Error(OWWidget.Error):
27 no_words_overlap = Msg('No words overlap!')
28 empty_selection = Msg('Selected data is empty!')
29 all_selected = Msg('All examples can not be selected!')
30
31 # Settings
32 filter_by_p = Setting(False)
33 filter_p_value = Setting(0.01)
34 filter_by_fdr = Setting(True)
35 filter_fdr_value = Setting(0.2)
36
37 def __init__(self):
38 super().__init__()
39
40 # Init data
41 self.data = None
42 self.selected_data = None
43 self.selected_data_transformed = None # used for transforming the 'selected data' into the 'data' domain
44
45 self.words = []
46 self.p_values = []
47 self.fdr_values = []
48
49 # Info section
50 fbox = gui.widgetBox(self.controlArea, "Info")
51 self.info_all = gui.label(fbox, self, 'Cluster words:')
52 self.info_sel = gui.label(fbox, self, 'Selected words:')
53 self.info_fil = gui.label(fbox, self, 'After filtering:')
54
55 # Filtering settings
56 fbox = gui.widgetBox(self.controlArea, "Filter")
57 hbox = gui.widgetBox(fbox, orientation=0)
58
59 self.chb_p = gui.checkBox(hbox, self, "filter_by_p", "p-value",
60 callback=self.filter_and_display,
61 tooltip="Filter by word p-value")
62 self.spin_p = gui.doubleSpin(hbox, self, 'filter_p_value',
63 1e-4, 1, step=1e-4, labelWidth=15,
64 callback=self.filter_and_display,
65 callbackOnReturn=True,
66 tooltip="Max p-value for word")
67 self.spin_p.setEnabled(self.filter_by_p)
68
69 hbox = gui.widgetBox(fbox, orientation=0)
70 self.chb_fdr = gui.checkBox(hbox, self, "filter_by_fdr", "FDR",
71 callback=self.filter_and_display,
72 tooltip="Filter by word FDR")
73 self.spin_fdr = gui.doubleSpin(hbox, self, 'filter_fdr_value',
74 1e-4, 1, step=1e-4, labelWidth=15,
75 callback=self.filter_and_display,
76 callbackOnReturn=True,
77 tooltip="Max p-value for word")
78 self.spin_fdr.setEnabled(self.filter_by_fdr)
79 gui.rubber(self.controlArea)
80
81 # Word's list view
82 self.cols = ['Word', 'p-value', 'FDR']
83 self.sig_words = QTreeWidget()
84 self.sig_words.setColumnCount(len(self.cols))
85 self.sig_words.setHeaderLabels(self.cols)
86 self.sig_words.setSortingEnabled(True)
87 self.sig_words.setSelectionMode(QTreeView.ExtendedSelection)
88 self.sig_words.sortByColumn(2, 0) # 0 is ascending order
89 for i in range(len(self.cols)):
90 self.sig_words.resizeColumnToContents(i)
91 self.mainArea.layout().addWidget(self.sig_words)
92
93 def set_data(self, data=None):
94 self.data = data
95
96 def set_data_selected(self, data=None):
97 self.selected_data = data
98
99 def handleNewSignals(self):
100 self.check_data()
101
102 def check_data(self):
103 self.Error.clear()
104 if isinstance(self.data, Table) and \
105 isinstance(self.selected_data, Table):
106 if len(self.selected_data) == 0:
107 self.Error.empty_selection()
108 self.clear()
109 return
110
111 self.selected_data_transformed = Table.from_table(
112 self.data.domain, self.selected_data)
113
114 sum_X = np_sp_sum(self.selected_data_transformed.X)
115 if sum_X == 0 or math.isnan(sum_X):
116 self.Error.no_words_overlap()
117 self.clear()
118 elif len(self.data) == len(self.selected_data):
119 self.Error.all_selected()
120 self.clear()
121 else:
122 self.apply()
123 else:
124 self.clear()
125
126 def clear(self):
127 self.sig_words.clear()
128 self.info_all.setText('Cluster words:')
129 self.info_sel.setText('Selected words:')
130 self.info_fil.setText('After filtering:')
131
132 def filter_enabled(self, b):
133 self.chb_p.setEnabled(b)
134 self.chb_fdr.setEnabled(b)
135 self.spin_p.setEnabled(b)
136 self.spin_fdr.setEnabled(b)
137
138 def filter_and_display(self):
139 self.spin_p.setEnabled(self.filter_by_p)
140 self.spin_fdr.setEnabled(self.filter_by_fdr)
141 self.sig_words.clear()
142
143 if self.selected_data_transformed is None: # do nothing when no Data
144 return
145
146 count = 0
147 if self.words:
148 for word, pval, fval in zip(self.words, self.p_values, self.fdr_values):
149 if (not self.filter_by_p or pval <= self.filter_p_value) and \
150 (not self.filter_by_fdr or fval <= self.filter_fdr_value):
151 it = EATreeWidgetItem(word, pval, fval, self.sig_words)
152 self.sig_words.addTopLevelItem(it)
153 count += 1
154
155 for i in range(len(self.cols)):
156 self.sig_words.resizeColumnToContents(i)
157
158 self.info_all.setText('Cluster words: {}'.format(len(self.selected_data_transformed.domain.attributes)))
159 self.info_sel.setText('Selected words: {}'.format(np.count_nonzero(np_sp_sum(self.selected_data_transformed.X, axis=0))))
160 if not self.filter_by_p and not self.filter_by_fdr:
161 self.info_fil.setText('After filtering:')
162 self.info_fil.setEnabled(False)
163 else:
164 self.info_fil.setEnabled(True)
165 self.info_fil.setText('After filtering: {}'.format(count))
166
167 def progress(self, p):
168 self.progressBarSet(p)
169
170 def apply(self):
171 self.clear()
172 self.progressBarInit()
173 self.filter_enabled(False)
174
175 self.words = [i.name for i in self.selected_data_transformed.domain.attributes]
176 self.p_values = hypergeom_p_values(self.data.X,
177 self.selected_data_transformed.X,
178 callback=self.progress)
179 self.fdr_values = false_discovery_rate(self.p_values)
180 self.filter_and_display()
181 self.filter_enabled(True)
182 self.progressBarFinished()
183
184
185 fp = lambda score: "%0.5f" % score if score > 10e-3 else "%0.1e" % score
186 fpt = lambda score: "%0.9f" % score if score > 10e-3 else "%0.5e" % score
187
188
189 class EATreeWidgetItem(QTreeWidgetItem):
190 def __init__(self, word, p_value, f_value, parent):
191 super().__init__(parent)
192 self.data = [word, p_value, f_value]
193 self.setText(0, word)
194 self.setText(1, fp(p_value))
195 self.setToolTip(1, fpt(p_value))
196 self.setText(2, fp(f_value))
197 self.setToolTip(2, fpt(f_value))
198
199 def __lt__(self, other):
200 col = self.treeWidget().sortColumn()
201 return self.data[col] < other.data[col]
202
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/orangecontrib/text/vectorization/bagofwords.py b/orangecontrib/text/vectorization/bagofwords.py
--- a/orangecontrib/text/vectorization/bagofwords.py
+++ b/orangecontrib/text/vectorization/bagofwords.py
@@ -93,7 +93,7 @@
shared_cv = BoWPreprocessTransform(corpus.used_preprocessor, self, dic)
cv = [BoWComputeValue(dic[i], shared_cv) for i in range(len(dic))]
- self.add_features(corpus, X, dic, cv)
+ self.add_features(corpus, X, dic, cv, var_attrs={'bow-feature': True})
return corpus
def report(self):
diff --git a/orangecontrib/text/vectorization/base.py b/orangecontrib/text/vectorization/base.py
--- a/orangecontrib/text/vectorization/base.py
+++ b/orangecontrib/text/vectorization/base.py
@@ -25,11 +25,20 @@
raise NotImplementedError
@staticmethod
- def add_features(corpus, X, dictionary, compute_values=None):
+ def add_features(corpus, X, dictionary, compute_values=None, var_attrs=None):
order = np.argsort([dictionary[i] for i in range(len(dictionary))])
- compute_values = np.array(compute_values)[order]
+ if compute_values is not None:
+ compute_values = np.array(compute_values)[order]
+
+ variable_attrs = {
+ 'hidden': True,
+ 'skip-normalization': True,
+ }
+ if isinstance(var_attrs, dict):
+ variable_attrs.update(var_attrs)
+
corpus.extend_attributes(X[:, order],
feature_names=(dictionary[i] for i in order),
- var_attrs={'hidden': True, 'skip-normalization': True},
+ var_attrs=variable_attrs,
compute_values=compute_values)
corpus.ngrams_corpus = matutils.Sparse2Corpus(X.T)
diff --git a/orangecontrib/text/widgets/owwordenrichment.py b/orangecontrib/text/widgets/owwordenrichment.py
--- a/orangecontrib/text/widgets/owwordenrichment.py
+++ b/orangecontrib/text/widgets/owwordenrichment.py
@@ -1,12 +1,11 @@
-import math
-
import numpy as np
from AnyQt.QtWidgets import QTreeWidget, QTreeView, QTreeWidgetItem
-from Orange.data import Table
+from Orange.data import Table, Domain
from Orange.widgets import gui
from Orange.widgets.settings import Setting
from Orange.widgets.widget import OWWidget, Msg
+from orangecontrib.text import Corpus
from orangecontrib.text.util import np_sp_sum
from orangecontrib.text.stats import false_discovery_rate, hypergeom_p_values
@@ -24,6 +23,7 @@
want_main_area = True
class Error(OWWidget.Error):
+ no_bow_features = Msg('No bag-of-words features!')
no_words_overlap = Msg('No words overlap!')
empty_selection = Msg('Selected data is empty!')
all_selected = Msg('All examples can not be selected!')
@@ -99,6 +99,15 @@
def handleNewSignals(self):
self.check_data()
+ def get_bow_domain(self):
+ domain = self.data.domain
+ return Domain(
+ attributes=[a for a in domain.attributes
+ if a.attributes.get('bow-feature', False)],
+ class_vars=domain.class_vars,
+ metas=domain.metas,
+ source=domain)
+
def check_data(self):
self.Error.clear()
if isinstance(self.data, Table) and \
@@ -108,11 +117,16 @@
self.clear()
return
- self.selected_data_transformed = Table.from_table(
- self.data.domain, self.selected_data)
+ # keep only BoW features
+ bow_domain = self.get_bow_domain()
+ if len(bow_domain.attributes) == 0:
+ self.Error.no_bow_features()
+ self.clear()
+ return
+ self.data = Corpus.from_table(bow_domain, self.data)
+ self.selected_data_transformed = Corpus.from_table(bow_domain, self.selected_data)
- sum_X = np_sp_sum(self.selected_data_transformed.X)
- if sum_X == 0 or math.isnan(sum_X):
+ if np_sp_sum(self.selected_data_transformed.X) == 0:
self.Error.no_words_overlap()
self.clear()
elif len(self.data) == len(self.selected_data):
|
{"golden_diff": "diff --git a/orangecontrib/text/vectorization/bagofwords.py b/orangecontrib/text/vectorization/bagofwords.py\n--- a/orangecontrib/text/vectorization/bagofwords.py\n+++ b/orangecontrib/text/vectorization/bagofwords.py\n@@ -93,7 +93,7 @@\n shared_cv = BoWPreprocessTransform(corpus.used_preprocessor, self, dic)\n cv = [BoWComputeValue(dic[i], shared_cv) for i in range(len(dic))]\n \n- self.add_features(corpus, X, dic, cv)\n+ self.add_features(corpus, X, dic, cv, var_attrs={'bow-feature': True})\n return corpus\n \n def report(self):\ndiff --git a/orangecontrib/text/vectorization/base.py b/orangecontrib/text/vectorization/base.py\n--- a/orangecontrib/text/vectorization/base.py\n+++ b/orangecontrib/text/vectorization/base.py\n@@ -25,11 +25,20 @@\n raise NotImplementedError\n \n @staticmethod\n- def add_features(corpus, X, dictionary, compute_values=None):\n+ def add_features(corpus, X, dictionary, compute_values=None, var_attrs=None):\n order = np.argsort([dictionary[i] for i in range(len(dictionary))])\n- compute_values = np.array(compute_values)[order]\n+ if compute_values is not None:\n+ compute_values = np.array(compute_values)[order]\n+\n+ variable_attrs = {\n+ 'hidden': True,\n+ 'skip-normalization': True,\n+ }\n+ if isinstance(var_attrs, dict):\n+ variable_attrs.update(var_attrs)\n+\n corpus.extend_attributes(X[:, order],\n feature_names=(dictionary[i] for i in order),\n- var_attrs={'hidden': True, 'skip-normalization': True},\n+ var_attrs=variable_attrs,\n compute_values=compute_values)\n corpus.ngrams_corpus = matutils.Sparse2Corpus(X.T)\ndiff --git a/orangecontrib/text/widgets/owwordenrichment.py b/orangecontrib/text/widgets/owwordenrichment.py\n--- a/orangecontrib/text/widgets/owwordenrichment.py\n+++ b/orangecontrib/text/widgets/owwordenrichment.py\n@@ -1,12 +1,11 @@\n-import math\n-\n import numpy as np\n from AnyQt.QtWidgets import QTreeWidget, QTreeView, QTreeWidgetItem\n \n-from Orange.data import Table\n+from Orange.data import Table, Domain\n from Orange.widgets import gui\n from Orange.widgets.settings import Setting\n from Orange.widgets.widget import OWWidget, Msg\n+from orangecontrib.text import Corpus\n from orangecontrib.text.util import np_sp_sum\n from orangecontrib.text.stats import false_discovery_rate, hypergeom_p_values\n \n@@ -24,6 +23,7 @@\n want_main_area = True\n \n class Error(OWWidget.Error):\n+ no_bow_features = Msg('No bag-of-words features!')\n no_words_overlap = Msg('No words overlap!')\n empty_selection = Msg('Selected data is empty!')\n all_selected = Msg('All examples can not be selected!')\n@@ -99,6 +99,15 @@\n def handleNewSignals(self):\n self.check_data()\n \n+ def get_bow_domain(self):\n+ domain = self.data.domain\n+ return Domain(\n+ attributes=[a for a in domain.attributes\n+ if a.attributes.get('bow-feature', False)],\n+ class_vars=domain.class_vars,\n+ metas=domain.metas,\n+ source=domain)\n+\n def check_data(self):\n self.Error.clear()\n if isinstance(self.data, Table) and \\\n@@ -108,11 +117,16 @@\n self.clear()\n return\n \n- self.selected_data_transformed = Table.from_table(\n- self.data.domain, self.selected_data)\n+ # keep only BoW features\n+ bow_domain = self.get_bow_domain()\n+ if len(bow_domain.attributes) == 0:\n+ self.Error.no_bow_features()\n+ self.clear()\n+ return\n+ self.data = Corpus.from_table(bow_domain, self.data)\n+ self.selected_data_transformed = Corpus.from_table(bow_domain, self.selected_data)\n \n- sum_X = np_sp_sum(self.selected_data_transformed.X)\n- if sum_X == 0 or math.isnan(sum_X):\n+ if 
np_sp_sum(self.selected_data_transformed.X) == 0:\n self.Error.no_words_overlap()\n self.clear()\n elif len(self.data) == len(self.selected_data):\n", "issue": "WordEnrichment Nans & BoW Features Only\n<!--\r\nThis is an issue template. Please fill in the relevant details in the\r\nsections below.\r\n-->\r\n\r\n##### Text version\r\n<!-- From menu _Options\u2192Add-ons\u2192Orange3-Text_ or code `orangecontrib.text.version.full_version` -->\r\n0.2.4.dev\r\n\r\n##### WordEnrichment issues:\r\n* WE takes all features. It should only consider BoW features. If none BoW features present raise an error.\r\n* WE doesn't work if nans in sparse matrix. Probably cannot occur is we use only BoW features.\r\n\n", "before_files": [{"content": "import numpy as np\nfrom gensim import matutils\nfrom gensim.corpora import Dictionary\n\n\nclass BaseVectorizer:\n \"\"\"Base class for vectorization objects. \"\"\"\n name = NotImplemented\n\n def transform(self, corpus, copy=True, source_dict=None):\n \"\"\"Transforms a corpus to a new one with additional attributes. \"\"\"\n if copy:\n corpus = corpus.copy()\n\n if not len(corpus.dictionary):\n return corpus\n else:\n return self._transform(corpus, source_dict)\n\n def _transform(self, corpus, source_dict):\n raise NotImplementedError\n\n def report(self):\n \"\"\"Reports configuration items.\"\"\"\n raise NotImplementedError\n\n @staticmethod\n def add_features(corpus, X, dictionary, compute_values=None):\n order = np.argsort([dictionary[i] for i in range(len(dictionary))])\n compute_values = np.array(compute_values)[order]\n corpus.extend_attributes(X[:, order],\n feature_names=(dictionary[i] for i in order),\n var_attrs={'hidden': True, 'skip-normalization': True},\n compute_values=compute_values)\n corpus.ngrams_corpus = matutils.Sparse2Corpus(X.T)\n", "path": "orangecontrib/text/vectorization/base.py"}, {"content": "from collections import OrderedDict\nfrom functools import partial\n\nimport numpy as np\nfrom gensim import corpora, models, matutils\nfrom sklearn.preprocessing import normalize\n\nfrom Orange.data.util import SharedComputeValue\nfrom orangecontrib.text.vectorization.base import BaseVectorizer\n\n\nclass BoWPreprocessTransform:\n \"\"\"\n Shared computation for transforming new data set into the classifiers's BoW domain.\n This will run preprocessing as well as BoW transformation itself.\n \"\"\"\n def __init__(self, preprocessor, bow_vectorizer, dictionary):\n self.preprocessor = preprocessor\n self.bow_vectorizer = bow_vectorizer\n self.dictionary = dictionary\n\n def __call__(self, new_corpus):\n new_corpus = self.preprocessor(new_corpus)\n bow_corpus = self.bow_vectorizer.transform(new_corpus, copy=True, source_dict=self.dictionary)\n # store name to indices mapping so BoWComputeValue can run faster\n bow_corpus.feature_name_to_index = {attr.name: i for i, attr in enumerate(bow_corpus.domain.attributes)}\n return bow_corpus\n\n\nclass BoWComputeValue(SharedComputeValue):\n \"\"\"\n Compute Value for Bow features. 
This enables applying a\n classifier \u2014 that was trained on a BoW model \u2014 on new data.\n \"\"\"\n def __init__(self, name, compute_shared):\n super().__init__(compute_shared)\n self.name = name\n\n def compute(self, data, shared_data):\n ind = shared_data.feature_name_to_index[self.name]\n return shared_data.X[:, ind]\n\n\nclass BowVectorizer(BaseVectorizer):\n name = 'BoW Vectorizer'\n\n COUNT = 'Count'\n BINARY = 'Binary'\n SUBLINEAR = 'Sublinear'\n NONE = '(None)'\n IDF = 'IDF'\n SMOOTH = 'Smooth IDF'\n L1 = 'L1 (Sum of elements)'\n L2 = 'L2 (Euclidean)'\n\n wlocals = OrderedDict((\n (COUNT, lambda tf: tf),\n (BINARY, lambda tf: int(tf > 0)),\n (SUBLINEAR, lambda tf: 1 + np.log(tf)),\n ))\n\n wglobals = OrderedDict((\n (NONE, lambda df, N: 1),\n (IDF, lambda df, N: np.log(N/df)),\n (SMOOTH, lambda df, N: np.log(1 + N/df)),\n ))\n\n norms = OrderedDict((\n (NONE, None),\n (L1, partial(normalize, norm='l1')),\n (L2, partial(normalize, norm='l2')),\n ))\n\n def __init__(self, norm=NONE, wlocal=COUNT, wglobal=NONE):\n self.norm = norm\n self.wlocal = wlocal\n self.wglobal = wglobal\n\n def _transform(self, corpus, source_dict=None):\n temp_corpus = list(corpus.ngrams_iterator(' ', include_postags=True))\n dic = corpora.Dictionary(temp_corpus, prune_at=None) if not source_dict else source_dict\n temp_corpus = [dic.doc2bow(doc) for doc in temp_corpus]\n model = models.TfidfModel(temp_corpus, normalize=False,\n wlocal=self.wlocals[self.wlocal],\n wglobal=self.wglobals[self.wglobal])\n\n X = matutils.corpus2csc(model[temp_corpus], dtype=np.float, num_terms=len(dic)).T\n norm = self.norms[self.norm]\n if norm:\n X = norm(X)\n\n # set compute values\n shared_cv = BoWPreprocessTransform(corpus.used_preprocessor, self, dic)\n cv = [BoWComputeValue(dic[i], shared_cv) for i in range(len(dic))]\n\n self.add_features(corpus, X, dic, cv)\n return corpus\n\n def report(self):\n return (('Term Frequency', self.wlocal),\n ('Document Frequency', self.wglobal),\n ('Regularization', self.norm),)\n", "path": "orangecontrib/text/vectorization/bagofwords.py"}, {"content": "import math\n\nimport numpy as np\nfrom AnyQt.QtWidgets import QTreeWidget, QTreeView, QTreeWidgetItem\n\nfrom Orange.data import Table\nfrom Orange.widgets import gui\nfrom Orange.widgets.settings import Setting\nfrom Orange.widgets.widget import OWWidget, Msg\nfrom orangecontrib.text.util import np_sp_sum\nfrom orangecontrib.text.stats import false_discovery_rate, hypergeom_p_values\n\n\nclass OWWordEnrichment(OWWidget):\n # Basic widget info\n name = \"Word Enrichment\"\n description = \"Word enrichment analysis for selected documents.\"\n icon = \"icons/SetEnrichment.svg\"\n priority = 60\n\n # Input/output\n inputs = [(\"Selected Data\", Table, \"set_data_selected\"),\n (\"Data\", Table, \"set_data\"),]\n want_main_area = True\n\n class Error(OWWidget.Error):\n no_words_overlap = Msg('No words overlap!')\n empty_selection = Msg('Selected data is empty!')\n all_selected = Msg('All examples can not be selected!')\n\n # Settings\n filter_by_p = Setting(False)\n filter_p_value = Setting(0.01)\n filter_by_fdr = Setting(True)\n filter_fdr_value = Setting(0.2)\n\n def __init__(self):\n super().__init__()\n\n # Init data\n self.data = None\n self.selected_data = None\n self.selected_data_transformed = None # used for transforming the 'selected data' into the 'data' domain\n\n self.words = []\n self.p_values = []\n self.fdr_values = []\n\n # Info section\n fbox = gui.widgetBox(self.controlArea, \"Info\")\n self.info_all = gui.label(fbox, 
self, 'Cluster words:')\n self.info_sel = gui.label(fbox, self, 'Selected words:')\n self.info_fil = gui.label(fbox, self, 'After filtering:')\n\n # Filtering settings\n fbox = gui.widgetBox(self.controlArea, \"Filter\")\n hbox = gui.widgetBox(fbox, orientation=0)\n\n self.chb_p = gui.checkBox(hbox, self, \"filter_by_p\", \"p-value\",\n callback=self.filter_and_display,\n tooltip=\"Filter by word p-value\")\n self.spin_p = gui.doubleSpin(hbox, self, 'filter_p_value',\n 1e-4, 1, step=1e-4, labelWidth=15,\n callback=self.filter_and_display,\n callbackOnReturn=True,\n tooltip=\"Max p-value for word\")\n self.spin_p.setEnabled(self.filter_by_p)\n\n hbox = gui.widgetBox(fbox, orientation=0)\n self.chb_fdr = gui.checkBox(hbox, self, \"filter_by_fdr\", \"FDR\",\n callback=self.filter_and_display,\n tooltip=\"Filter by word FDR\")\n self.spin_fdr = gui.doubleSpin(hbox, self, 'filter_fdr_value',\n 1e-4, 1, step=1e-4, labelWidth=15,\n callback=self.filter_and_display,\n callbackOnReturn=True,\n tooltip=\"Max p-value for word\")\n self.spin_fdr.setEnabled(self.filter_by_fdr)\n gui.rubber(self.controlArea)\n\n # Word's list view\n self.cols = ['Word', 'p-value', 'FDR']\n self.sig_words = QTreeWidget()\n self.sig_words.setColumnCount(len(self.cols))\n self.sig_words.setHeaderLabels(self.cols)\n self.sig_words.setSortingEnabled(True)\n self.sig_words.setSelectionMode(QTreeView.ExtendedSelection)\n self.sig_words.sortByColumn(2, 0) # 0 is ascending order\n for i in range(len(self.cols)):\n self.sig_words.resizeColumnToContents(i)\n self.mainArea.layout().addWidget(self.sig_words)\n\n def set_data(self, data=None):\n self.data = data\n\n def set_data_selected(self, data=None):\n self.selected_data = data\n\n def handleNewSignals(self):\n self.check_data()\n\n def check_data(self):\n self.Error.clear()\n if isinstance(self.data, Table) and \\\n isinstance(self.selected_data, Table):\n if len(self.selected_data) == 0:\n self.Error.empty_selection()\n self.clear()\n return\n\n self.selected_data_transformed = Table.from_table(\n self.data.domain, self.selected_data)\n\n sum_X = np_sp_sum(self.selected_data_transformed.X)\n if sum_X == 0 or math.isnan(sum_X):\n self.Error.no_words_overlap()\n self.clear()\n elif len(self.data) == len(self.selected_data):\n self.Error.all_selected()\n self.clear()\n else:\n self.apply()\n else:\n self.clear()\n\n def clear(self):\n self.sig_words.clear()\n self.info_all.setText('Cluster words:')\n self.info_sel.setText('Selected words:')\n self.info_fil.setText('After filtering:')\n\n def filter_enabled(self, b):\n self.chb_p.setEnabled(b)\n self.chb_fdr.setEnabled(b)\n self.spin_p.setEnabled(b)\n self.spin_fdr.setEnabled(b)\n\n def filter_and_display(self):\n self.spin_p.setEnabled(self.filter_by_p)\n self.spin_fdr.setEnabled(self.filter_by_fdr)\n self.sig_words.clear()\n\n if self.selected_data_transformed is None: # do nothing when no Data\n return\n\n count = 0\n if self.words:\n for word, pval, fval in zip(self.words, self.p_values, self.fdr_values):\n if (not self.filter_by_p or pval <= self.filter_p_value) and \\\n (not self.filter_by_fdr or fval <= self.filter_fdr_value):\n it = EATreeWidgetItem(word, pval, fval, self.sig_words)\n self.sig_words.addTopLevelItem(it)\n count += 1\n\n for i in range(len(self.cols)):\n self.sig_words.resizeColumnToContents(i)\n\n self.info_all.setText('Cluster words: {}'.format(len(self.selected_data_transformed.domain.attributes)))\n self.info_sel.setText('Selected words: 
{}'.format(np.count_nonzero(np_sp_sum(self.selected_data_transformed.X, axis=0))))\n if not self.filter_by_p and not self.filter_by_fdr:\n self.info_fil.setText('After filtering:')\n self.info_fil.setEnabled(False)\n else:\n self.info_fil.setEnabled(True)\n self.info_fil.setText('After filtering: {}'.format(count))\n\n def progress(self, p):\n self.progressBarSet(p)\n\n def apply(self):\n self.clear()\n self.progressBarInit()\n self.filter_enabled(False)\n\n self.words = [i.name for i in self.selected_data_transformed.domain.attributes]\n self.p_values = hypergeom_p_values(self.data.X,\n self.selected_data_transformed.X,\n callback=self.progress)\n self.fdr_values = false_discovery_rate(self.p_values)\n self.filter_and_display()\n self.filter_enabled(True)\n self.progressBarFinished()\n\n\nfp = lambda score: \"%0.5f\" % score if score > 10e-3 else \"%0.1e\" % score\nfpt = lambda score: \"%0.9f\" % score if score > 10e-3 else \"%0.5e\" % score\n\n\nclass EATreeWidgetItem(QTreeWidgetItem):\n def __init__(self, word, p_value, f_value, parent):\n super().__init__(parent)\n self.data = [word, p_value, f_value]\n self.setText(0, word)\n self.setText(1, fp(p_value))\n self.setToolTip(1, fpt(p_value))\n self.setText(2, fp(f_value))\n self.setToolTip(2, fpt(f_value))\n\n def __lt__(self, other):\n col = self.treeWidget().sortColumn()\n return self.data[col] < other.data[col]\n", "path": "orangecontrib/text/widgets/owwordenrichment.py"}], "after_files": [{"content": "import numpy as np\nfrom gensim import matutils\nfrom gensim.corpora import Dictionary\n\n\nclass BaseVectorizer:\n \"\"\"Base class for vectorization objects. \"\"\"\n name = NotImplemented\n\n def transform(self, corpus, copy=True, source_dict=None):\n \"\"\"Transforms a corpus to a new one with additional attributes. 
\"\"\"\n if copy:\n corpus = corpus.copy()\n\n if not len(corpus.dictionary):\n return corpus\n else:\n return self._transform(corpus, source_dict)\n\n def _transform(self, corpus, source_dict):\n raise NotImplementedError\n\n def report(self):\n \"\"\"Reports configuration items.\"\"\"\n raise NotImplementedError\n\n @staticmethod\n def add_features(corpus, X, dictionary, compute_values=None, var_attrs=None):\n order = np.argsort([dictionary[i] for i in range(len(dictionary))])\n if compute_values is not None:\n compute_values = np.array(compute_values)[order]\n\n variable_attrs = {\n 'hidden': True,\n 'skip-normalization': True,\n }\n if isinstance(var_attrs, dict):\n variable_attrs.update(var_attrs)\n\n corpus.extend_attributes(X[:, order],\n feature_names=(dictionary[i] for i in order),\n var_attrs=variable_attrs,\n compute_values=compute_values)\n corpus.ngrams_corpus = matutils.Sparse2Corpus(X.T)\n", "path": "orangecontrib/text/vectorization/base.py"}, {"content": "from collections import OrderedDict\nfrom functools import partial\n\nimport numpy as np\nfrom gensim import corpora, models, matutils\nfrom sklearn.preprocessing import normalize\n\nfrom Orange.data.util import SharedComputeValue\nfrom orangecontrib.text.vectorization.base import BaseVectorizer\n\n\nclass BoWPreprocessTransform:\n \"\"\"\n Shared computation for transforming new data set into the classifiers's BoW domain.\n This will run preprocessing as well as BoW transformation itself.\n \"\"\"\n def __init__(self, preprocessor, bow_vectorizer, dictionary):\n self.preprocessor = preprocessor\n self.bow_vectorizer = bow_vectorizer\n self.dictionary = dictionary\n\n def __call__(self, new_corpus):\n new_corpus = self.preprocessor(new_corpus)\n bow_corpus = self.bow_vectorizer.transform(new_corpus, copy=True, source_dict=self.dictionary)\n # store name to indices mapping so BoWComputeValue can run faster\n bow_corpus.feature_name_to_index = {attr.name: i for i, attr in enumerate(bow_corpus.domain.attributes)}\n return bow_corpus\n\n\nclass BoWComputeValue(SharedComputeValue):\n \"\"\"\n Compute Value for Bow features. 
This enables applying a\n classifier \u2014 that was trained on a BoW model \u2014 on new data.\n \"\"\"\n def __init__(self, name, compute_shared):\n super().__init__(compute_shared)\n self.name = name\n\n def compute(self, data, shared_data):\n ind = shared_data.feature_name_to_index[self.name]\n return shared_data.X[:, ind]\n\n\nclass BowVectorizer(BaseVectorizer):\n name = 'BoW Vectorizer'\n\n COUNT = 'Count'\n BINARY = 'Binary'\n SUBLINEAR = 'Sublinear'\n NONE = '(None)'\n IDF = 'IDF'\n SMOOTH = 'Smooth IDF'\n L1 = 'L1 (Sum of elements)'\n L2 = 'L2 (Euclidean)'\n\n wlocals = OrderedDict((\n (COUNT, lambda tf: tf),\n (BINARY, lambda tf: int(tf > 0)),\n (SUBLINEAR, lambda tf: 1 + np.log(tf)),\n ))\n\n wglobals = OrderedDict((\n (NONE, lambda df, N: 1),\n (IDF, lambda df, N: np.log(N/df)),\n (SMOOTH, lambda df, N: np.log(1 + N/df)),\n ))\n\n norms = OrderedDict((\n (NONE, None),\n (L1, partial(normalize, norm='l1')),\n (L2, partial(normalize, norm='l2')),\n ))\n\n def __init__(self, norm=NONE, wlocal=COUNT, wglobal=NONE):\n self.norm = norm\n self.wlocal = wlocal\n self.wglobal = wglobal\n\n def _transform(self, corpus, source_dict=None):\n temp_corpus = list(corpus.ngrams_iterator(' ', include_postags=True))\n dic = corpora.Dictionary(temp_corpus, prune_at=None) if not source_dict else source_dict\n temp_corpus = [dic.doc2bow(doc) for doc in temp_corpus]\n model = models.TfidfModel(temp_corpus, normalize=False,\n wlocal=self.wlocals[self.wlocal],\n wglobal=self.wglobals[self.wglobal])\n\n X = matutils.corpus2csc(model[temp_corpus], dtype=np.float, num_terms=len(dic)).T\n norm = self.norms[self.norm]\n if norm:\n X = norm(X)\n\n # set compute values\n shared_cv = BoWPreprocessTransform(corpus.used_preprocessor, self, dic)\n cv = [BoWComputeValue(dic[i], shared_cv) for i in range(len(dic))]\n\n self.add_features(corpus, X, dic, cv, var_attrs={'bow-feature': True})\n return corpus\n\n def report(self):\n return (('Term Frequency', self.wlocal),\n ('Document Frequency', self.wglobal),\n ('Regularization', self.norm),)\n", "path": "orangecontrib/text/vectorization/bagofwords.py"}, {"content": "import numpy as np\nfrom AnyQt.QtWidgets import QTreeWidget, QTreeView, QTreeWidgetItem\n\nfrom Orange.data import Table, Domain\nfrom Orange.widgets import gui\nfrom Orange.widgets.settings import Setting\nfrom Orange.widgets.widget import OWWidget, Msg\nfrom orangecontrib.text import Corpus\nfrom orangecontrib.text.util import np_sp_sum\nfrom orangecontrib.text.stats import false_discovery_rate, hypergeom_p_values\n\n\nclass OWWordEnrichment(OWWidget):\n # Basic widget info\n name = \"Word Enrichment\"\n description = \"Word enrichment analysis for selected documents.\"\n icon = \"icons/SetEnrichment.svg\"\n priority = 60\n\n # Input/output\n inputs = [(\"Selected Data\", Table, \"set_data_selected\"),\n (\"Data\", Table, \"set_data\"),]\n want_main_area = True\n\n class Error(OWWidget.Error):\n no_bow_features = Msg('No bag-of-words features!')\n no_words_overlap = Msg('No words overlap!')\n empty_selection = Msg('Selected data is empty!')\n all_selected = Msg('All examples can not be selected!')\n\n # Settings\n filter_by_p = Setting(False)\n filter_p_value = Setting(0.01)\n filter_by_fdr = Setting(True)\n filter_fdr_value = Setting(0.2)\n\n def __init__(self):\n super().__init__()\n\n # Init data\n self.data = None\n self.selected_data = None\n self.selected_data_transformed = None # used for transforming the 'selected data' into the 'data' domain\n\n self.words = []\n self.p_values = []\n 
self.fdr_values = []\n\n # Info section\n fbox = gui.widgetBox(self.controlArea, \"Info\")\n self.info_all = gui.label(fbox, self, 'Cluster words:')\n self.info_sel = gui.label(fbox, self, 'Selected words:')\n self.info_fil = gui.label(fbox, self, 'After filtering:')\n\n # Filtering settings\n fbox = gui.widgetBox(self.controlArea, \"Filter\")\n hbox = gui.widgetBox(fbox, orientation=0)\n\n self.chb_p = gui.checkBox(hbox, self, \"filter_by_p\", \"p-value\",\n callback=self.filter_and_display,\n tooltip=\"Filter by word p-value\")\n self.spin_p = gui.doubleSpin(hbox, self, 'filter_p_value',\n 1e-4, 1, step=1e-4, labelWidth=15,\n callback=self.filter_and_display,\n callbackOnReturn=True,\n tooltip=\"Max p-value for word\")\n self.spin_p.setEnabled(self.filter_by_p)\n\n hbox = gui.widgetBox(fbox, orientation=0)\n self.chb_fdr = gui.checkBox(hbox, self, \"filter_by_fdr\", \"FDR\",\n callback=self.filter_and_display,\n tooltip=\"Filter by word FDR\")\n self.spin_fdr = gui.doubleSpin(hbox, self, 'filter_fdr_value',\n 1e-4, 1, step=1e-4, labelWidth=15,\n callback=self.filter_and_display,\n callbackOnReturn=True,\n tooltip=\"Max p-value for word\")\n self.spin_fdr.setEnabled(self.filter_by_fdr)\n gui.rubber(self.controlArea)\n\n # Word's list view\n self.cols = ['Word', 'p-value', 'FDR']\n self.sig_words = QTreeWidget()\n self.sig_words.setColumnCount(len(self.cols))\n self.sig_words.setHeaderLabels(self.cols)\n self.sig_words.setSortingEnabled(True)\n self.sig_words.setSelectionMode(QTreeView.ExtendedSelection)\n self.sig_words.sortByColumn(2, 0) # 0 is ascending order\n for i in range(len(self.cols)):\n self.sig_words.resizeColumnToContents(i)\n self.mainArea.layout().addWidget(self.sig_words)\n\n def set_data(self, data=None):\n self.data = data\n\n def set_data_selected(self, data=None):\n self.selected_data = data\n\n def handleNewSignals(self):\n self.check_data()\n\n def get_bow_domain(self):\n domain = self.data.domain\n return Domain(\n attributes=[a for a in domain.attributes\n if a.attributes.get('bow-feature', False)],\n class_vars=domain.class_vars,\n metas=domain.metas,\n source=domain)\n\n def check_data(self):\n self.Error.clear()\n if isinstance(self.data, Table) and \\\n isinstance(self.selected_data, Table):\n if len(self.selected_data) == 0:\n self.Error.empty_selection()\n self.clear()\n return\n\n # keep only BoW features\n bow_domain = self.get_bow_domain()\n if len(bow_domain.attributes) == 0:\n self.Error.no_bow_features()\n self.clear()\n return\n self.data = Corpus.from_table(bow_domain, self.data)\n self.selected_data_transformed = Corpus.from_table(bow_domain, self.selected_data)\n\n if np_sp_sum(self.selected_data_transformed.X) == 0:\n self.Error.no_words_overlap()\n self.clear()\n elif len(self.data) == len(self.selected_data):\n self.Error.all_selected()\n self.clear()\n else:\n self.apply()\n else:\n self.clear()\n\n def clear(self):\n self.sig_words.clear()\n self.info_all.setText('Cluster words:')\n self.info_sel.setText('Selected words:')\n self.info_fil.setText('After filtering:')\n\n def filter_enabled(self, b):\n self.chb_p.setEnabled(b)\n self.chb_fdr.setEnabled(b)\n self.spin_p.setEnabled(b)\n self.spin_fdr.setEnabled(b)\n\n def filter_and_display(self):\n self.spin_p.setEnabled(self.filter_by_p)\n self.spin_fdr.setEnabled(self.filter_by_fdr)\n self.sig_words.clear()\n\n if self.selected_data_transformed is None: # do nothing when no Data\n return\n\n count = 0\n if self.words:\n for word, pval, fval in zip(self.words, self.p_values, self.fdr_values):\n if 
(not self.filter_by_p or pval <= self.filter_p_value) and \\\n (not self.filter_by_fdr or fval <= self.filter_fdr_value):\n it = EATreeWidgetItem(word, pval, fval, self.sig_words)\n self.sig_words.addTopLevelItem(it)\n count += 1\n\n for i in range(len(self.cols)):\n self.sig_words.resizeColumnToContents(i)\n\n self.info_all.setText('Cluster words: {}'.format(len(self.selected_data_transformed.domain.attributes)))\n self.info_sel.setText('Selected words: {}'.format(np.count_nonzero(np_sp_sum(self.selected_data_transformed.X, axis=0))))\n if not self.filter_by_p and not self.filter_by_fdr:\n self.info_fil.setText('After filtering:')\n self.info_fil.setEnabled(False)\n else:\n self.info_fil.setEnabled(True)\n self.info_fil.setText('After filtering: {}'.format(count))\n\n def progress(self, p):\n self.progressBarSet(p)\n\n def apply(self):\n self.clear()\n self.progressBarInit()\n self.filter_enabled(False)\n\n self.words = [i.name for i in self.selected_data_transformed.domain.attributes]\n self.p_values = hypergeom_p_values(self.data.X,\n self.selected_data_transformed.X,\n callback=self.progress)\n self.fdr_values = false_discovery_rate(self.p_values)\n self.filter_and_display()\n self.filter_enabled(True)\n self.progressBarFinished()\n\n\nfp = lambda score: \"%0.5f\" % score if score > 10e-3 else \"%0.1e\" % score\nfpt = lambda score: \"%0.9f\" % score if score > 10e-3 else \"%0.5e\" % score\n\n\nclass EATreeWidgetItem(QTreeWidgetItem):\n def __init__(self, word, p_value, f_value, parent):\n super().__init__(parent)\n self.data = [word, p_value, f_value]\n self.setText(0, word)\n self.setText(1, fp(p_value))\n self.setToolTip(1, fpt(p_value))\n self.setText(2, fp(f_value))\n self.setToolTip(2, fpt(f_value))\n\n def __lt__(self, other):\n col = self.treeWidget().sortColumn()\n return self.data[col] < other.data[col]\n", "path": "orangecontrib/text/widgets/owwordenrichment.py"}]}
| 3,974 | 983 |
gh_patches_debug_48523
|
rasdani/github-patches
|
git_diff
|
meltano__meltano-6488
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support Node v16
Currently building the Meltano UI with Node 16 results in a crash while building libsass. [That library is deprecated](https://sass-lang.com/blog/libsass-is-deprecated), so we should switch to using Dart-sass instead.
CC @alexmarple
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/alembic_freeze.py`
Content:
```
1 #!/usr/bin/env python3
2
3 """Script to freeze the Meltano database - executed by the Makefile."""
4
5 from __future__ import annotations
6
7 from alembic.script import ScriptDirectory
8
9 from meltano.migrations import LOCK_PATH, MIGRATION_DIR
10
11 scripts = ScriptDirectory(str(MIGRATION_DIR))
12
13 with LOCK_PATH.open("w") as lock:
14 HEAD = scripts.get_current_head()
15 lock.write(HEAD)
16
17 print(f"Meltano database frozen at {HEAD}.")
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scripts/alembic_freeze.py b/scripts/alembic_freeze.py
--- a/scripts/alembic_freeze.py
+++ b/scripts/alembic_freeze.py
@@ -1,6 +1,6 @@
#!/usr/bin/env python3
-"""Script to freeze the Meltano database - executed by the Makefile."""
+"""Script to freeze the Meltano database - executed by GitHub CI."""
from __future__ import annotations
|
{"golden_diff": "diff --git a/scripts/alembic_freeze.py b/scripts/alembic_freeze.py\n--- a/scripts/alembic_freeze.py\n+++ b/scripts/alembic_freeze.py\n@@ -1,6 +1,6 @@\n #!/usr/bin/env python3\n \n-\"\"\"Script to freeze the Meltano database - executed by the Makefile.\"\"\"\n+\"\"\"Script to freeze the Meltano database - executed by GitHub CI.\"\"\"\n \n from __future__ import annotations\n", "issue": "Support Node v16\nCurrently building the Meltano UI with Node 16 results in a crash while building libsass. [That library is deprecated](https://sass-lang.com/blog/libsass-is-deprecated), so we should switch to using Dart-sass instead.\r\n\r\nCC @alexmarple \n", "before_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"Script to freeze the Meltano database - executed by the Makefile.\"\"\"\n\nfrom __future__ import annotations\n\nfrom alembic.script import ScriptDirectory\n\nfrom meltano.migrations import LOCK_PATH, MIGRATION_DIR\n\nscripts = ScriptDirectory(str(MIGRATION_DIR))\n\nwith LOCK_PATH.open(\"w\") as lock:\n HEAD = scripts.get_current_head()\n lock.write(HEAD)\n\nprint(f\"Meltano database frozen at {HEAD}.\")\n", "path": "scripts/alembic_freeze.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"Script to freeze the Meltano database - executed by GitHub CI.\"\"\"\n\nfrom __future__ import annotations\n\nfrom alembic.script import ScriptDirectory\n\nfrom meltano.migrations import LOCK_PATH, MIGRATION_DIR\n\nscripts = ScriptDirectory(str(MIGRATION_DIR))\n\nwith LOCK_PATH.open(\"w\") as lock:\n HEAD = scripts.get_current_head()\n lock.write(HEAD)\n\nprint(f\"Meltano database frozen at {HEAD}.\")\n", "path": "scripts/alembic_freeze.py"}]}
| 458 | 103 |
gh_patches_debug_25821
|
rasdani/github-patches
|
git_diff
|
sktime__sktime-439
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[DOC]add new software in Wiki
#### Describe the issue linked to the documentation
I cannot find LoudML in the Related Software wiki page: https://github.com/alan-turing-institute/sktime/wiki/related-software
<!--
Tell us about the confusion introduced in the documentation.
-->
Loud ML is an open source time series inference engine built on top of TensorFlow. It's useful to forecast data, detect outliers, and automate your process using future knowledge. Here is the link: https://github.com/regel/loudml
#### Suggest a potential alternative/fix
Could you add it to the list?
<!--
Tell us how we could improve the documentation in this regard.
-->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/source/conf.py`
Content:
```
1 #!/usr/bin/env python3
2 # -*- coding: utf-8 -*-
3 #
4 # Configuration file for the Sphinx documentation builder.
5 #
6 # This file does only contain a selection of the most common options. For a
7 # full list see the documentation:
8 # http://www.sphinx-doc.org/en/master/config
9
10 import os
11 import sys
12
13 import sktime
14
15 # -- Path setup --------------------------------------------------------------
16
17 # If extensions (or modules to document with autodoc) are in another directory,
18 # add these directories to sys.path here. If the directory is relative to the
19 # documentation root, use os.path.abspath to make it absolute, like shown here.
20
21 # When we build the docs on readthedocs, we build the package and want to
22 # use the built files in order for sphinx to be able to properly read the
23 # Cython files. Hence, we do not add the source code path to the system
24 # path.
25 ON_READTHEDOCS = os.environ.get("READTHEDOCS") == "True"
26 if not ON_READTHEDOCS:
27 sys.path.insert(0, os.path.abspath("../.."))
28
29 # -- Project information -----------------------------------------------------
30 PROJECT = "sktime"
31 COPYRIGHT = "2019 - 2020 (BSD-3-Clause License)"
32 AUTHOR = "sktime developers"
33
34 # The full version, including alpha/beta/rc tags
35 CURRENT_VERSION = f"v{sktime.__version__}"
36
37 # If on readthedocs, and we're building the latest version, update tag to generate
38 # correct links in notebooks
39 if ON_READTHEDOCS:
40 READTHEDOCS_VERSION = os.environ.get("READTHEDOCS_VERSION")
41 if READTHEDOCS_VERSION == "latest":
42 CURRENT_VERSION = "master"
43
44 # -- General configuration ---------------------------------------------------
45
46 # Add any Sphinx extension module names here, as strings. They can be
47 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
48 # ones.
49 extensions = [
50 "sphinx.ext.autodoc",
51 "sphinx.ext.autosummary",
52 "sphinx.ext.intersphinx",
53 "sphinx.ext.autosectionlabel",
54 "sphinx.ext.todo",
55 "sphinx.ext.mathjax",
56 # 'sphinx.ext.viewcode', # link to auto-generated source code files (rst)
57 "sphinx.ext.githubpages",
58 "sphinx.ext.linkcode", # link to GitHub source code via linkcode_resolve()
59 "sphinx.ext.napoleon",
60 "nbsphinx", # integrates example notebooks
61 "m2r2", # markdown rendering
62 ]
63
64 # Add any paths that contain templates here, relative to this directory.
65 templates_path = ["_templates"]
66
67 # The suffix(es) of source filenames.
68 # You can specify multiple suffix as a list of string:
69 source_suffix = {
70 ".rst": "restructuredtext",
71 ".md": "markdown",
72 }
73
74 # The master toctree document.
75 master_doc = "index"
76
77 # The language for content autogenerated by Sphinx. Refer to documentation
78 # for a list of supported languages.
79 #
80 # This is also used if you do content translation via gettext catalogs.
81 # Usually you set "language" from the command line for these cases.
82 language = None
83
84 # List of patterns, relative to source directory, that match files and
85 # directories to ignore when looking for source files.
86 # This pattern also affects html_static_path and html_extra_path.
87 exclude_patterns = ["_build", ".ipynb_checkpoints", "Thumbs.db", ".DS_Store"]
88
89 # The name of the Pygments (syntax highlighting) style to use.
90 pygments_style = "sphinx"
91
92 # see http://stackoverflow.com/q/12206334/562769
93 numpydoc_show_class_members = True
94 numpydoc_class_members_toctree = False
95
96 # generate autosummary even if no references
97 autosummary_generate = True
98 autodoc_default_flags = ["members", "inherited-members"]
99
100
101 def linkcode_resolve(domain, info):
102 def find_source():
103 # try to find the file and line number, based on code from numpy:
104 # https://github.com/numpy/numpy/blob/master/doc/source/conf.py#L286
105 obj = sys.modules[info["module"]]
106 for part in info["fullname"].split("."):
107 obj = getattr(obj, part)
108 import inspect
109 import os
110
111 fn = inspect.getsourcefile(obj)
112 fn = os.path.relpath(fn, start=os.path.dirname(sktime.__file__))
113 source, lineno = inspect.getsourcelines(obj)
114 return fn, lineno, lineno + len(source) - 1
115
116 if domain != "py" or not info["module"]:
117 return None
118 try:
119 filename = "sktime/%s#L%d-L%d" % find_source()
120 except Exception:
121 filename = info["module"].replace(".", "/") + ".py"
122 return "https://github.com/alan-turing-institute/sktime/blob/%s/%s" % (
123 CURRENT_VERSION,
124 filename,
125 )
126
127
128 # -- Options for HTML output -------------------------------------------------
129
130 # The theme to use for HTML and HTML Help pages. See the documentation for
131 # a list of builtin themes.
132
133 html_theme = "sphinx_rtd_theme"
134 # html_theme = 'bootstrap'
135
136 # Theme options are theme-specific and customize the look and feel of a theme
137 # further. For a list of options available for each theme, see the
138 # documentation.
139
140 html_theme_options = {
141 "prev_next_buttons_location": None,
142 }
143
144 html_favicon = "images/sktime-favicon.ico"
145
146 # Add any paths that contain custom static files (such as style sheets) here,
147 # relative to this directory. They are copied after the builtin static files,
148 # so a file named "default.css" will overwrite the builtin "default.css".
149 html_static_path = ["_static"]
150
151 # Custom sidebar templates, must be a dictionary that maps document names
152 # to template names.
153 #
154 # The default sidebars (for documents that don't match any pattern) are
155 # defined by theme itself. Builtin themes are using these templates by
156 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
157 # 'searchbox.html']``.
158 #
159 # html_sidebars = {}
160
161 html_show_sourcelink = False
162
163 # -- Options for HTMLHelp output ---------------------------------------------
164
165 # Output file base name for HTML help builder.
166 htmlhelp_basename = "sktimedoc"
167
168 # -- Options for LaTeX output ------------------------------------------------
169
170 latex_elements = {
171 # The paper size ('letterpaper' or 'a4paper').
172 # 'papersize': 'letterpaper',
173 # The font size ('10pt', '11pt' or '12pt').
174 # 'pointsize': '10pt',
175 # Additional stuff for the LaTeX preamble.
176 # 'preamble': '',
177 # Latex figure (float) alignment
178 # 'figure_align': 'htbp',
179 }
180
181 # Grouping the document tree into LaTeX files. List of tuples
182 # (source start file, target name, title,
183 # author, documentclass [howto, manual, or own class]).
184 latex_documents = [
185 (master_doc, "sktime.tex", "sktime Documentation", "sktime developers", "manual"),
186 ]
187
188 # -- Options for manual page output ------------------------------------------
189
190 # One entry per manual page. List of tuples
191 # (source start file, name, description, authors, manual section).
192 man_pages = [(master_doc, "sktime", "sktime Documentation", [AUTHOR], 1)]
193
194 # -- Options for Texinfo output ----------------------------------------------
195
196 # Grouping the document tree into Texinfo files. List of tuples
197 # (source start file, target name, title, author,
198 # dir menu entry, description, category)
199 texinfo_documents = [
200 (
201 master_doc,
202 "sktime",
203 "sktime Documentation",
204 AUTHOR,
205 "sktime",
206 "One line description of project.",
207 "Miscellaneous",
208 ),
209 ]
210
211
212 def setup(app):
213 def adds(pth):
214 print("Adding stylesheet: %s" % pth) # noqa: T001
215 app.add_css_file(pth)
216
217 adds("fields.css") # for parameters, etc.
218
219
220 # -- Extension configuration -------------------------------------------------
221
222 # -- Options for nbsphinx extension ---------------------------------------
223 nbsphinx_execute = "always" # whether or not to run notebooks
224 nbsphinx_allow_errors = False # False
225 nbsphinx_timeout = 600 # time out in secs, set to -1 to disable timeout
226
227 # add Binder launch buttom at the top
228 CURRENT_FILE = "{{ env.doc2path( env.docname, base=None) }}"
229
230 # make sure Binder points to latest stable release, not master
231 BINDER_URL = f"https://mybinder.org/v2/gh/alan-turing-institute/sktime/{CURRENT_VERSION}?filepath={CURRENT_FILE}" # noqa
232 nbsphinx_prolog = f"""
233 .. |binder| image:: https://mybinder.org/badge_logo.svg
234 .. _Binder: {BINDER_URL}
235
236 |Binder|_
237 """
238
239 # add link to original notebook at the bottom
240 NOTEBOOK_URL = f"https://github.com/alan-turing-institute/sktime/tree/{CURRENT_VERSION}/{CURRENT_FILE}" # noqa
241 nbsphinx_epilog = f"""
242 ----
243
244 Generated by nbsphinx_. The Jupyter notebook can be found here_.
245
246 .. _here: {NOTEBOOK_URL}
247 .. _nbsphinx: https://nbsphinx.readthedocs.io/
248 """
249
250 # -- Options for intersphinx extension ---------------------------------------
251
252 # Example configuration for intersphinx: refer to the Python standard library.
253 intersphinx_mapping = {"https://docs.python.org/": None}
254
255 # -- Options for _todo extension ----------------------------------------------
256 todo_include_todos = False
257
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -27,9 +27,9 @@
sys.path.insert(0, os.path.abspath("../.."))
# -- Project information -----------------------------------------------------
-PROJECT = "sktime"
-COPYRIGHT = "2019 - 2020 (BSD-3-Clause License)"
-AUTHOR = "sktime developers"
+project = "sktime"
+copyright = "2019 - 2020 (BSD-3-Clause License)"
+author = "sktime developers"
# The full version, including alpha/beta/rc tags
CURRENT_VERSION = f"v{sktime.__version__}"
@@ -189,7 +189,7 @@
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
-man_pages = [(master_doc, "sktime", "sktime Documentation", [AUTHOR], 1)]
+man_pages = [(master_doc, "sktime", "sktime Documentation", [author], 1)]
# -- Options for Texinfo output ----------------------------------------------
@@ -201,7 +201,7 @@
master_doc,
"sktime",
"sktime Documentation",
- AUTHOR,
+ author,
"sktime",
"One line description of project.",
"Miscellaneous",
|
{"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -27,9 +27,9 @@\n sys.path.insert(0, os.path.abspath(\"../..\"))\n \n # -- Project information -----------------------------------------------------\n-PROJECT = \"sktime\"\n-COPYRIGHT = \"2019 - 2020 (BSD-3-Clause License)\"\n-AUTHOR = \"sktime developers\"\n+project = \"sktime\"\n+copyright = \"2019 - 2020 (BSD-3-Clause License)\"\n+author = \"sktime developers\"\n \n # The full version, including alpha/beta/rc tags\n CURRENT_VERSION = f\"v{sktime.__version__}\"\n@@ -189,7 +189,7 @@\n \n # One entry per manual page. List of tuples\n # (source start file, name, description, authors, manual section).\n-man_pages = [(master_doc, \"sktime\", \"sktime Documentation\", [AUTHOR], 1)]\n+man_pages = [(master_doc, \"sktime\", \"sktime Documentation\", [author], 1)]\n \n # -- Options for Texinfo output ----------------------------------------------\n \n@@ -201,7 +201,7 @@\n master_doc,\n \"sktime\",\n \"sktime Documentation\",\n- AUTHOR,\n+ author,\n \"sktime\",\n \"One line description of project.\",\n \"Miscellaneous\",\n", "issue": "[DOC]add new software in Wiki\n#### Describe the issue linked to the documentation\r\nI cannot find the LoudML in the Related Software in wiki page.https://github.com/alan-turing-institute/sktime/wiki/related-software\r\n<!--\r\nTell us about the confusion introduced in the documentation.\r\n-->\r\nLoud ML is an open source time series inference engine built on top of TensorFlow. It's useful to forecast data, detect outliers, and automate your process using future knowledge. here is link\uff1a https://github.com/regel/loudml\r\n#### Suggest a potential alternative/fix\r\ncould you add it into the list?\r\n<!--\r\nTell us how we could improve the documentation in this regard.\r\n-->\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\nimport os\nimport sys\n\nimport sktime\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\n# When we build the docs on readthedocs, we build the package and want to\n# use the built files in order for sphinx to be able to properly read the\n# Cython files. 
Hence, we do not add the source code path to the system\n# path.\nON_READTHEDOCS = os.environ.get(\"READTHEDOCS\") == \"True\"\nif not ON_READTHEDOCS:\n sys.path.insert(0, os.path.abspath(\"../..\"))\n\n# -- Project information -----------------------------------------------------\nPROJECT = \"sktime\"\nCOPYRIGHT = \"2019 - 2020 (BSD-3-Clause License)\"\nAUTHOR = \"sktime developers\"\n\n# The full version, including alpha/beta/rc tags\nCURRENT_VERSION = f\"v{sktime.__version__}\"\n\n# If on readthedocs, and we're building the latest version, update tag to generate\n# correct links in notebooks\nif ON_READTHEDOCS:\n READTHEDOCS_VERSION = os.environ.get(\"READTHEDOCS_VERSION\")\n if READTHEDOCS_VERSION == \"latest\":\n CURRENT_VERSION = \"master\"\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.autosectionlabel\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.mathjax\",\n # 'sphinx.ext.viewcode', # link to auto-generated source code files (rst)\n \"sphinx.ext.githubpages\",\n \"sphinx.ext.linkcode\", # link to GitHub source code via linkcode_resolve()\n \"sphinx.ext.napoleon\",\n \"nbsphinx\", # integrates example notebooks\n \"m2r2\", # markdown rendering\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n \".md\": \"markdown\",\n}\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_build\", \".ipynb_checkpoints\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# see http://stackoverflow.com/q/12206334/562769\nnumpydoc_show_class_members = True\nnumpydoc_class_members_toctree = False\n\n# generate autosummary even if no references\nautosummary_generate = True\nautodoc_default_flags = [\"members\", \"inherited-members\"]\n\n\ndef linkcode_resolve(domain, info):\n def find_source():\n # try to find the file and line number, based on code from numpy:\n # https://github.com/numpy/numpy/blob/master/doc/source/conf.py#L286\n obj = sys.modules[info[\"module\"]]\n for part in info[\"fullname\"].split(\".\"):\n obj = getattr(obj, part)\n import inspect\n import os\n\n fn = inspect.getsourcefile(obj)\n fn = os.path.relpath(fn, start=os.path.dirname(sktime.__file__))\n source, lineno = inspect.getsourcelines(obj)\n return fn, lineno, lineno + len(source) - 1\n\n if domain != \"py\" or not info[\"module\"]:\n return None\n try:\n filename = \"sktime/%s#L%d-L%d\" % find_source()\n except Exception:\n filename = info[\"module\"].replace(\".\", \"/\") + \".py\"\n return \"https://github.com/alan-turing-institute/sktime/blob/%s/%s\" % (\n CURRENT_VERSION,\n filename,\n )\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n\nhtml_theme = \"sphinx_rtd_theme\"\n# html_theme = 'bootstrap'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n\nhtml_theme_options = {\n \"prev_next_buttons_location\": None,\n}\n\nhtml_favicon = \"images/sktime-favicon.ico\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\nhtml_show_sourcelink = False\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"sktimedoc\"\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n # Latex figure (float) alignment\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, \"sktime.tex\", \"sktime Documentation\", \"sktime developers\", \"manual\"),\n]\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"sktime\", \"sktime Documentation\", [AUTHOR], 1)]\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"sktime\",\n \"sktime Documentation\",\n AUTHOR,\n \"sktime\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\n\ndef setup(app):\n def adds(pth):\n print(\"Adding stylesheet: %s\" % pth) # noqa: T001\n app.add_css_file(pth)\n\n adds(\"fields.css\") # for parameters, etc.\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for nbsphinx extension ---------------------------------------\nnbsphinx_execute = \"always\" # whether or not to run notebooks\nnbsphinx_allow_errors = False # False\nnbsphinx_timeout = 600 # time out in secs, set to -1 to disable timeout\n\n# add Binder launch buttom at the top\nCURRENT_FILE = \"{{ env.doc2path( env.docname, base=None) }}\"\n\n# make sure Binder points to latest stable release, not master\nBINDER_URL = f\"https://mybinder.org/v2/gh/alan-turing-institute/sktime/{CURRENT_VERSION}?filepath={CURRENT_FILE}\" # noqa\nnbsphinx_prolog = f\"\"\"\n.. |binder| image:: https://mybinder.org/badge_logo.svg\n.. _Binder: {BINDER_URL}\n\n|Binder|_\n\"\"\"\n\n# add link to original notebook at the bottom\nNOTEBOOK_URL = f\"https://github.com/alan-turing-institute/sktime/tree/{CURRENT_VERSION}/{CURRENT_FILE}\" # noqa\nnbsphinx_epilog = f\"\"\"\n----\n\nGenerated by nbsphinx_. The Jupyter notebook can be found here_.\n\n.. _here: {NOTEBOOK_URL}\n.. _nbsphinx: https://nbsphinx.readthedocs.io/\n\"\"\"\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\"https://docs.python.org/\": None}\n\n# -- Options for _todo extension ----------------------------------------------\ntodo_include_todos = False\n", "path": "docs/source/conf.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\nimport os\nimport sys\n\nimport sktime\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\n# When we build the docs on readthedocs, we build the package and want to\n# use the built files in order for sphinx to be able to properly read the\n# Cython files. 
Hence, we do not add the source code path to the system\n# path.\nON_READTHEDOCS = os.environ.get(\"READTHEDOCS\") == \"True\"\nif not ON_READTHEDOCS:\n sys.path.insert(0, os.path.abspath(\"../..\"))\n\n# -- Project information -----------------------------------------------------\nproject = \"sktime\"\ncopyright = \"2019 - 2020 (BSD-3-Clause License)\"\nauthor = \"sktime developers\"\n\n# The full version, including alpha/beta/rc tags\nCURRENT_VERSION = f\"v{sktime.__version__}\"\n\n# If on readthedocs, and we're building the latest version, update tag to generate\n# correct links in notebooks\nif ON_READTHEDOCS:\n READTHEDOCS_VERSION = os.environ.get(\"READTHEDOCS_VERSION\")\n if READTHEDOCS_VERSION == \"latest\":\n CURRENT_VERSION = \"master\"\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.autosectionlabel\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.mathjax\",\n # 'sphinx.ext.viewcode', # link to auto-generated source code files (rst)\n \"sphinx.ext.githubpages\",\n \"sphinx.ext.linkcode\", # link to GitHub source code via linkcode_resolve()\n \"sphinx.ext.napoleon\",\n \"nbsphinx\", # integrates example notebooks\n \"m2r2\", # markdown rendering\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n \".md\": \"markdown\",\n}\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_build\", \".ipynb_checkpoints\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# see http://stackoverflow.com/q/12206334/562769\nnumpydoc_show_class_members = True\nnumpydoc_class_members_toctree = False\n\n# generate autosummary even if no references\nautosummary_generate = True\nautodoc_default_flags = [\"members\", \"inherited-members\"]\n\n\ndef linkcode_resolve(domain, info):\n def find_source():\n # try to find the file and line number, based on code from numpy:\n # https://github.com/numpy/numpy/blob/master/doc/source/conf.py#L286\n obj = sys.modules[info[\"module\"]]\n for part in info[\"fullname\"].split(\".\"):\n obj = getattr(obj, part)\n import inspect\n import os\n\n fn = inspect.getsourcefile(obj)\n fn = os.path.relpath(fn, start=os.path.dirname(sktime.__file__))\n source, lineno = inspect.getsourcelines(obj)\n return fn, lineno, lineno + len(source) - 1\n\n if domain != \"py\" or not info[\"module\"]:\n return None\n try:\n filename = \"sktime/%s#L%d-L%d\" % find_source()\n except Exception:\n filename = info[\"module\"].replace(\".\", \"/\") + \".py\"\n return \"https://github.com/alan-turing-institute/sktime/blob/%s/%s\" % (\n CURRENT_VERSION,\n filename,\n )\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n\nhtml_theme = \"sphinx_rtd_theme\"\n# html_theme = 'bootstrap'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n\nhtml_theme_options = {\n \"prev_next_buttons_location\": None,\n}\n\nhtml_favicon = \"images/sktime-favicon.ico\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\nhtml_show_sourcelink = False\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"sktimedoc\"\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n # Latex figure (float) alignment\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, \"sktime.tex\", \"sktime Documentation\", \"sktime developers\", \"manual\"),\n]\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"sktime\", \"sktime Documentation\", [author], 1)]\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"sktime\",\n \"sktime Documentation\",\n author,\n \"sktime\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\n\ndef setup(app):\n def adds(pth):\n print(\"Adding stylesheet: %s\" % pth) # noqa: T001\n app.add_css_file(pth)\n\n adds(\"fields.css\") # for parameters, etc.\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for nbsphinx extension ---------------------------------------\nnbsphinx_execute = \"always\" # whether or not to run notebooks\nnbsphinx_allow_errors = False # False\nnbsphinx_timeout = 600 # time out in secs, set to -1 to disable timeout\n\n# add Binder launch buttom at the top\nCURRENT_FILE = \"{{ env.doc2path( env.docname, base=None) }}\"\n\n# make sure Binder points to latest stable release, not master\nBINDER_URL = f\"https://mybinder.org/v2/gh/alan-turing-institute/sktime/{CURRENT_VERSION}?filepath={CURRENT_FILE}\" # noqa\nnbsphinx_prolog = f\"\"\"\n.. |binder| image:: https://mybinder.org/badge_logo.svg\n.. _Binder: {BINDER_URL}\n\n|Binder|_\n\"\"\"\n\n# add link to original notebook at the bottom\nNOTEBOOK_URL = f\"https://github.com/alan-turing-institute/sktime/tree/{CURRENT_VERSION}/{CURRENT_FILE}\" # noqa\nnbsphinx_epilog = f\"\"\"\n----\n\nGenerated by nbsphinx_. The Jupyter notebook can be found here_.\n\n.. _here: {NOTEBOOK_URL}\n.. _nbsphinx: https://nbsphinx.readthedocs.io/\n\"\"\"\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\"https://docs.python.org/\": None}\n\n# -- Options for _todo extension ----------------------------------------------\ntodo_include_todos = False\n", "path": "docs/source/conf.py"}]}
| 3,203 | 313 |
gh_patches_debug_2696
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-7080
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[CT-2225] [Bug] Suddenly getting ModuleNotFoundError: No module named 'pytz'
### Is this a new bug in dbt-core?
- [X] I believe this is a new bug in dbt-core
- [X] I have searched the existing issues, and I could not find an existing issue for this bug
### Current Behavior
I am installing dbt-bigquery with meltano (which installs it in an isolated *venv*).
Today when invoking `dbt deps` using `meltano invoke dbt-bigquery:deps` I am getting a stacktrace with
ModuleNotFoundError: No module named 'pytz'
### Expected Behavior
`pytz` should be found. I have noted that it is not included in the requirements. So while it's strange that it suddenly started failing, maybe it was more of an accident that it ever worked in the first place?
### Steps To Reproduce
With versions specified as
dbt-core~=1.3.0
dbt-bigquery~=1.3.0
invoking `dbt deps` should not throw a ModuleNotFoundError
### Relevant log output
```shell
Traceback (most recent call last):
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/bin/dbt", line 5, in <module>
from dbt.main import main
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/main.py", line 24, in <module>
import dbt.task.build as build_task
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/build.py", line 1, in <module>
from .run import RunTask, ModelRunner as run_model_runner
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/run.py", line 8, in <module>
from .compile import CompileRunner, CompileTask
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/compile.py", line 4, in <module>
from .runnable import GraphRunnableTask
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/runnable.py", line 11, in <module>
from .printer import (
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/printer.py", line 22, in <module>
from dbt.tracking import InvocationProcessor
File "/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/tracking.py", line 25, in <module>
import pytz
ModuleNotFoundError: No module named 'pytz'
```
### Environment
```markdown
- OS: Linux (fresh docker container inside virtual environment)
- Python: 3.9
- dbt: 1.3.1 (~=1.3.0)
```
### Which database adapter are you using with dbt?
other (mention it in "Additional Context")
### Additional Context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 7, 2):
6 print("Error: dbt does not support this version of Python.")
7 print("Please upgrade to Python 3.7.2 or higher.")
8 sys.exit(1)
9
10
11 from setuptools import setup
12
13 try:
14 from setuptools import find_namespace_packages
15 except ImportError:
16 # the user has a downlevel version of setuptools.
17 print("Error: dbt requires setuptools v40.1.0 or higher.")
18 print('Please upgrade setuptools with "pip install --upgrade setuptools" ' "and try again")
19 sys.exit(1)
20
21
22 this_directory = os.path.abspath(os.path.dirname(__file__))
23 with open(os.path.join(this_directory, "README.md")) as f:
24 long_description = f.read()
25
26
27 package_name = "dbt-core"
28 package_version = "1.2.4"
29 description = """With dbt, data analysts and engineers can build analytics \
30 the way engineers build applications."""
31
32
33 setup(
34 name=package_name,
35 version=package_version,
36 description=description,
37 long_description=long_description,
38 long_description_content_type="text/markdown",
39 author="dbt Labs",
40 author_email="[email protected]",
41 url="https://github.com/dbt-labs/dbt-core",
42 packages=find_namespace_packages(include=["dbt", "dbt.*"]),
43 include_package_data=True,
44 test_suite="test",
45 entry_points={
46 "console_scripts": [
47 "dbt = dbt.main:main",
48 ],
49 },
50 install_requires=[
51 "Jinja2==2.11.3",
52 "MarkupSafe>=0.23,<2.1",
53 "agate>=1.6,<1.6.4",
54 "click>=7.0,<9",
55 "colorama>=0.3.9,<0.4.6",
56 "hologram>=0.0.14,<=0.0.15",
57 "isodate>=0.6,<0.7",
58 "logbook>=1.5,<1.6",
59 "mashumaro==2.9",
60 "minimal-snowplow-tracker==0.0.2",
61 "networkx>=2.3,<2.8.1;python_version<'3.8'",
62 "networkx>=2.3,<3;python_version>='3.8'",
63 "packaging>=20.9,<22.0",
64 "sqlparse>=0.2.3,<0.5",
65 "dbt-extractor~=0.4.1",
66 "typing-extensions>=3.7.4",
67 "werkzeug>=1,<3",
68 # the following are all to match snowflake-connector-python
69 "requests<3.0.0",
70 "idna>=2.5,<4",
71 "cffi>=1.9,<2.0.0",
72 ],
73 zip_safe=False,
74 classifiers=[
75 "Development Status :: 5 - Production/Stable",
76 "License :: OSI Approved :: Apache Software License",
77 "Operating System :: Microsoft :: Windows",
78 "Operating System :: MacOS :: MacOS X",
79 "Operating System :: POSIX :: Linux",
80 "Programming Language :: Python :: 3.7",
81 "Programming Language :: Python :: 3.8",
82 "Programming Language :: Python :: 3.9",
83 "Programming Language :: Python :: 3.10",
84 ],
85 python_requires=">=3.7.2",
86 )
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/core/setup.py b/core/setup.py
--- a/core/setup.py
+++ b/core/setup.py
@@ -65,6 +65,7 @@
"dbt-extractor~=0.4.1",
"typing-extensions>=3.7.4",
"werkzeug>=1,<3",
+ "pytz>=2015.7",
# the following are all to match snowflake-connector-python
"requests<3.0.0",
"idna>=2.5,<4",
|
{"golden_diff": "diff --git a/core/setup.py b/core/setup.py\n--- a/core/setup.py\n+++ b/core/setup.py\n@@ -65,6 +65,7 @@\n \"dbt-extractor~=0.4.1\",\n \"typing-extensions>=3.7.4\",\n \"werkzeug>=1,<3\",\n+ \"pytz>=2015.7\",\n # the following are all to match snowflake-connector-python\n \"requests<3.0.0\",\n \"idna>=2.5,<4\",\n", "issue": "[CT-2225] [Bug] Suddenly getting ModuleNotFoundError: No module named 'pytz'\n### Is this a new bug in dbt-core?\n\n- [X] I believe this is a new bug in dbt-core\n- [X] I have searched the existing issues, and I could not find an existing issue for this bug\n\n### Current Behavior\n\nI am installing dbt-bigquery with meltano (which installs it in a isolated *venv*).\r\n\r\nToday when invoking `dbt deps` using `meltano invoke dbt-bigquery:deps` I am getting a stacktrace with \r\n\r\n ModuleNotFoundError: No module named 'pytz'\r\n\r\n\n\n### Expected Behavior\n\n`pytz` should be found. I have noted that it is not included in the requirements. So while it's strange that it suddenly started failing, maybe it was more of an accident that it ever worked in the first place?\n\n### Steps To Reproduce\n\nWith versions specified as\r\n\r\n dbt-core~=1.3.0\r\n dbt-bigquery~=1.3.0\r\n\r\ninvoking `dbt deps` should not throw a ModuleNotFoundError\n\n### Relevant log output\n\n```shell\nTraceback (most recent call last):\r\n File \"/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/bin/dbt\", line 5, in <module>\r\n from dbt.main import main\r\n File \"/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/main.py\", line 24, in <module>\r\n import dbt.task.build as build_task\r\n File \"/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/build.py\", line 1, in <module>\r\n from .run import RunTask, ModelRunner as run_model_runner\r\n File \"/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/run.py\", line 8, in <module>\r\n from .compile import CompileRunner, CompileTask\r\n File \"/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/compile.py\", line 4, in <module>\r\n from .runnable import GraphRunnableTask\r\n File \"/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/runnable.py\", line 11, in <module>\r\n from .printer import (\r\n File \"/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/task/printer.py\", line 22, in <module>\r\n from dbt.tracking import InvocationProcessor\r\n File \"/workspaces/elt/.meltano/transformers/dbt-bigquery/venv/lib/python3.9/site-packages/dbt/tracking.py\", line 25, in <module>\r\n import pytz\r\nModuleNotFoundError: No module named 'pytz'\n```\n\n\n### Environment\n\n```markdown\n- OS: Linux (fresh docker container inside virtual environment)\r\n- Python: 3.9\r\n- dbt: 1.3.1 (~=1.3.0)\n```\n\n\n### Which database adapter are you using with dbt?\n\nother (mention it in \"Additional Context\")\n\n### Additional Context\n\n_No response_\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 7, 2):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.7.2 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or 
higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.2.4\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\n \"dbt = dbt.main:main\",\n ],\n },\n install_requires=[\n \"Jinja2==2.11.3\",\n \"MarkupSafe>=0.23,<2.1\",\n \"agate>=1.6,<1.6.4\",\n \"click>=7.0,<9\",\n \"colorama>=0.3.9,<0.4.6\",\n \"hologram>=0.0.14,<=0.0.15\",\n \"isodate>=0.6,<0.7\",\n \"logbook>=1.5,<1.6\",\n \"mashumaro==2.9\",\n \"minimal-snowplow-tracker==0.0.2\",\n \"networkx>=2.3,<2.8.1;python_version<'3.8'\",\n \"networkx>=2.3,<3;python_version>='3.8'\",\n \"packaging>=20.9,<22.0\",\n \"sqlparse>=0.2.3,<0.5\",\n \"dbt-extractor~=0.4.1\",\n \"typing-extensions>=3.7.4\",\n \"werkzeug>=1,<3\",\n # the following are all to match snowflake-connector-python\n \"requests<3.0.0\",\n \"idna>=2.5,<4\",\n \"cffi>=1.9,<2.0.0\",\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n ],\n python_requires=\">=3.7.2\",\n)\n", "path": "core/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 7, 2):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.7.2 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.2.4\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\n \"dbt = dbt.main:main\",\n ],\n },\n 
install_requires=[\n \"Jinja2==2.11.3\",\n \"MarkupSafe>=0.23,<2.1\",\n \"agate>=1.6,<1.6.4\",\n \"click>=7.0,<9\",\n \"colorama>=0.3.9,<0.4.6\",\n \"hologram>=0.0.14,<=0.0.15\",\n \"isodate>=0.6,<0.7\",\n \"logbook>=1.5,<1.6\",\n \"mashumaro==2.9\",\n \"minimal-snowplow-tracker==0.0.2\",\n \"networkx>=2.3,<2.8.1;python_version<'3.8'\",\n \"networkx>=2.3,<3;python_version>='3.8'\",\n \"packaging>=20.9,<22.0\",\n \"sqlparse>=0.2.3,<0.5\",\n \"dbt-extractor~=0.4.1\",\n \"typing-extensions>=3.7.4\",\n \"werkzeug>=1,<3\",\n \"pytz>=2015.7\",\n # the following are all to match snowflake-connector-python\n \"requests<3.0.0\",\n \"idna>=2.5,<4\",\n \"cffi>=1.9,<2.0.0\",\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n ],\n python_requires=\">=3.7.2\",\n)\n", "path": "core/setup.py"}]}
| 1,940 | 118 |
gh_patches_debug_21129
|
rasdani/github-patches
|
git_diff
|
docker__docker-py-1626
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
images.build failing on container build output
The ImageCollection.build() method parses the `docker build` output to extract the name of the image being built by looking for the string "Successfully built {sha256}". Some commands in a Dockerfile can also produce this string, causing the build() method to return early with an incorrect image name.
Example output from a `RUN pip install cffi` command:
```
...
Building wheels for collected packages: cffi, pycparser
Running setup.py bdist_wheel for cffi
Stored in directory: /root/.cache/pip/wheels/ea/df/5c/7d11cd4eec7b94be3b0ec4ed0076e0f38846a22c7200576149
Running setup.py bdist_wheel for pycparser
Stored in directory: /root/.cache/pip/wheels/bc/07/7e/cd81c0b23417ebaed9eb2584c00bd539871d5309a7a2324953
Successfully built cffi pycparser
...
```
A partial fix is to make the regex used to search for the string more restrictive by adding start and end anchors. This can still result in some false positives.
A more complete fix only checks the last line of output for the "Success..." string.
PR in progress.
```
docker==2.3.0
docker-pycreds==0.2.1
Python 2.7.5
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-common-1.12.6-16.el7.x86_64
Go version: go1.7.4
Git commit: 3a094bd/1.12.6
Built: Tue Mar 21 13:30:59 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.1-ce
API version: 1.27
Package version:
Go version: go1.7.5
Git commit: c6d412e
Built: Fri Mar 24 00:00:50 2017
OS/Arch: linux/amd64
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/models/images.py`
Content:
```
1 import re
2
3 import six
4
5 from ..api import APIClient
6 from ..errors import BuildError
7 from ..utils.json_stream import json_stream
8 from .resource import Collection, Model
9
10
11 class Image(Model):
12 """
13 An image on the server.
14 """
15 def __repr__(self):
16 return "<%s: '%s'>" % (self.__class__.__name__, "', '".join(self.tags))
17
18 @property
19 def labels(self):
20 """
21 The labels of an image as dictionary.
22 """
23 result = self.attrs['Config'].get('Labels')
24 return result or {}
25
26 @property
27 def short_id(self):
28 """
29 The ID of the image truncated to 10 characters, plus the ``sha256:``
30 prefix.
31 """
32 if self.id.startswith('sha256:'):
33 return self.id[:17]
34 return self.id[:10]
35
36 @property
37 def tags(self):
38 """
39 The image's tags.
40 """
41 tags = self.attrs.get('RepoTags')
42 if tags is None:
43 tags = []
44 return [tag for tag in tags if tag != '<none>:<none>']
45
46 def history(self):
47 """
48 Show the history of an image.
49
50 Returns:
51 (str): The history of the image.
52
53 Raises:
54 :py:class:`docker.errors.APIError`
55 If the server returns an error.
56 """
57 return self.client.api.history(self.id)
58
59 def save(self):
60 """
61 Get a tarball of an image. Similar to the ``docker save`` command.
62
63 Returns:
64 (urllib3.response.HTTPResponse object): The response from the
65 daemon.
66
67 Raises:
68 :py:class:`docker.errors.APIError`
69 If the server returns an error.
70
71 Example:
72
73 >>> image = cli.images.get("fedora:latest")
74 >>> resp = image.save()
75 >>> f = open('/tmp/fedora-latest.tar', 'w')
76 >>> for chunk in resp.stream():
77 >>> f.write(chunk)
78 >>> f.close()
79 """
80 return self.client.api.get_image(self.id)
81
82 def tag(self, repository, tag=None, **kwargs):
83 """
84 Tag this image into a repository. Similar to the ``docker tag``
85 command.
86
87 Args:
88 repository (str): The repository to set for the tag
89 tag (str): The tag name
90 force (bool): Force
91
92 Raises:
93 :py:class:`docker.errors.APIError`
94 If the server returns an error.
95
96 Returns:
97 (bool): ``True`` if successful
98 """
99 self.client.api.tag(self.id, repository, tag=tag, **kwargs)
100
101
102 class ImageCollection(Collection):
103 model = Image
104
105 def build(self, **kwargs):
106 """
107 Build an image and return it. Similar to the ``docker build``
108 command. Either ``path`` or ``fileobj`` must be set.
109
110 If you have a tar file for the Docker build context (including a
111 Dockerfile) already, pass a readable file-like object to ``fileobj``
112 and also pass ``custom_context=True``. If the stream is compressed
113 also, set ``encoding`` to the correct value (e.g ``gzip``).
114
115 If you want to get the raw output of the build, use the
116 :py:meth:`~docker.api.build.BuildApiMixin.build` method in the
117 low-level API.
118
119 Args:
120 path (str): Path to the directory containing the Dockerfile
121 fileobj: A file object to use as the Dockerfile. (Or a file-like
122 object)
123 tag (str): A tag to add to the final image
124 quiet (bool): Whether to return the status
125 nocache (bool): Don't use the cache when set to ``True``
126 rm (bool): Remove intermediate containers. The ``docker build``
127 command now defaults to ``--rm=true``, but we have kept the old
128 default of `False` to preserve backward compatibility
129 stream (bool): *Deprecated for API version > 1.8 (always True)*.
130 Return a blocking generator you can iterate over to retrieve
131 build output as it happens
132 timeout (int): HTTP timeout
133 custom_context (bool): Optional if using ``fileobj``
134 encoding (str): The encoding for a stream. Set to ``gzip`` for
135 compressing
136 pull (bool): Downloads any updates to the FROM image in Dockerfiles
137 forcerm (bool): Always remove intermediate containers, even after
138 unsuccessful builds
139 dockerfile (str): path within the build context to the Dockerfile
140 buildargs (dict): A dictionary of build arguments
141 container_limits (dict): A dictionary of limits applied to each
142 container created by the build process. Valid keys:
143
144 - memory (int): set memory limit for build
145 - memswap (int): Total memory (memory + swap), -1 to disable
146 swap
147 - cpushares (int): CPU shares (relative weight)
148 - cpusetcpus (str): CPUs in which to allow execution, e.g.,
149 ``"0-3"``, ``"0,1"``
150 decode (bool): If set to ``True``, the returned stream will be
151 decoded into dicts on the fly. Default ``False``.
152 cache_from (list): A list of images used for build cache
153 resolution.
154 target (str): Name of the build-stage to build in a multi-stage
155 Dockerfile.
156
157 Returns:
158 (:py:class:`Image`): The built image.
159
160 Raises:
161 :py:class:`docker.errors.BuildError`
162 If there is an error during the build.
163 :py:class:`docker.errors.APIError`
164 If the server returns any other error.
165 ``TypeError``
166 If neither ``path`` nor ``fileobj`` is specified.
167 """
168 resp = self.client.api.build(**kwargs)
169 if isinstance(resp, six.string_types):
170 return self.get(resp)
171 last_event = None
172 for chunk in json_stream(resp):
173 if 'error' in chunk:
174 raise BuildError(chunk['error'])
175 if 'stream' in chunk:
176 match = re.search(
177 r'(Successfully built |sha256:)([0-9a-f]+)',
178 chunk['stream']
179 )
180 if match:
181 image_id = match.group(2)
182 return self.get(image_id)
183 last_event = chunk
184
185 raise BuildError(last_event or 'Unknown')
186
187 def get(self, name):
188 """
189 Gets an image.
190
191 Args:
192 name (str): The name of the image.
193
194 Returns:
195 (:py:class:`Image`): The image.
196
197 Raises:
198 :py:class:`docker.errors.ImageNotFound`
199 If the image does not exist.
200 :py:class:`docker.errors.APIError`
201 If the server returns an error.
202 """
203 return self.prepare_model(self.client.api.inspect_image(name))
204
205 def list(self, name=None, all=False, filters=None):
206 """
207 List images on the server.
208
209 Args:
210 name (str): Only show images belonging to the repository ``name``
211 all (bool): Show intermediate image layers. By default, these are
212 filtered out.
213 filters (dict): Filters to be processed on the image list.
214 Available filters:
215 - ``dangling`` (bool)
216 - ``label`` (str): format either ``key`` or ``key=value``
217
218 Returns:
219 (list of :py:class:`Image`): The images.
220
221 Raises:
222 :py:class:`docker.errors.APIError`
223 If the server returns an error.
224 """
225 resp = self.client.api.images(name=name, all=all, filters=filters)
226 return [self.prepare_model(r) for r in resp]
227
228 def load(self, data):
229 """
230 Load an image that was previously saved using
231 :py:meth:`~docker.models.images.Image.save` (or ``docker save``).
232 Similar to ``docker load``.
233
234 Args:
235 data (binary): Image data to be loaded.
236
237 Raises:
238 :py:class:`docker.errors.APIError`
239 If the server returns an error.
240 """
241 return self.client.api.load_image(data)
242
243 def pull(self, name, tag=None, **kwargs):
244 """
245 Pull an image of the given name and return it. Similar to the
246 ``docker pull`` command.
247
248 If you want to get the raw pull output, use the
249 :py:meth:`~docker.api.image.ImageApiMixin.pull` method in the
250 low-level API.
251
252 Args:
253 repository (str): The repository to pull
254 tag (str): The tag to pull
255 insecure_registry (bool): Use an insecure registry
256 auth_config (dict): Override the credentials that
257 :py:meth:`~docker.client.DockerClient.login` has set for
258 this request. ``auth_config`` should contain the ``username``
259 and ``password`` keys to be valid.
260
261 Returns:
262 (:py:class:`Image`): The image that has been pulled.
263
264 Raises:
265 :py:class:`docker.errors.APIError`
266 If the server returns an error.
267
268 Example:
269
270 >>> image = client.images.pull('busybox')
271 """
272 self.client.api.pull(name, tag=tag, **kwargs)
273 return self.get('{0}:{1}'.format(name, tag) if tag else name)
274
275 def push(self, repository, tag=None, **kwargs):
276 return self.client.api.push(repository, tag=tag, **kwargs)
277 push.__doc__ = APIClient.push.__doc__
278
279 def remove(self, *args, **kwargs):
280 self.client.api.remove_image(*args, **kwargs)
281 remove.__doc__ = APIClient.remove_image.__doc__
282
283 def search(self, *args, **kwargs):
284 return self.client.api.search(*args, **kwargs)
285 search.__doc__ = APIClient.search.__doc__
286
287 def prune(self, filters=None):
288 return self.client.api.prune_images(filters=filters)
289 prune.__doc__ = APIClient.prune_images.__doc__
290
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docker/models/images.py b/docker/models/images.py
--- a/docker/models/images.py
+++ b/docker/models/images.py
@@ -169,19 +169,20 @@
if isinstance(resp, six.string_types):
return self.get(resp)
last_event = None
+ image_id = None
for chunk in json_stream(resp):
if 'error' in chunk:
raise BuildError(chunk['error'])
if 'stream' in chunk:
match = re.search(
- r'(Successfully built |sha256:)([0-9a-f]+)',
+ r'(^Successfully built |sha256:)([0-9a-f]+)$',
chunk['stream']
)
if match:
image_id = match.group(2)
- return self.get(image_id)
last_event = chunk
-
+ if image_id:
+ return self.get(image_id)
raise BuildError(last_event or 'Unknown')
def get(self, name):
|
{"golden_diff": "diff --git a/docker/models/images.py b/docker/models/images.py\n--- a/docker/models/images.py\n+++ b/docker/models/images.py\n@@ -169,19 +169,20 @@\n if isinstance(resp, six.string_types):\n return self.get(resp)\n last_event = None\n+ image_id = None\n for chunk in json_stream(resp):\n if 'error' in chunk:\n raise BuildError(chunk['error'])\n if 'stream' in chunk:\n match = re.search(\n- r'(Successfully built |sha256:)([0-9a-f]+)',\n+ r'(^Successfully built |sha256:)([0-9a-f]+)$',\n chunk['stream']\n )\n if match:\n image_id = match.group(2)\n- return self.get(image_id)\n last_event = chunk\n-\n+ if image_id:\n+ return self.get(image_id)\n raise BuildError(last_event or 'Unknown')\n \n def get(self, name):\n", "issue": "images.build failing on container build output\nThe ImageCollection.build() method parses the `docker build` output to extract the name of the image being built by looking for the string \"Successfully built {sha256}\". Some commands in a Dockerfile can also produce this string causing the build() method to return early with an incorrect image name.\r\n\r\nExample output from a `RUN pip install cffi` command:\r\n```\r\n ...\r\nBuilding wheels for collected packages: cffi, pycparser\r\n Running setup.py bdist_wheel for cffi\r\n Stored in directory: /root/.cache/pip/wheels/ea/df/5c/7d11cd4eec7b94be3b0ec4ed0076e0f38846a22c7200576149\r\n Running setup.py bdist_wheel for pycparser\r\n Stored in directory: /root/.cache/pip/wheels/bc/07/7e/cd81c0b23417ebaed9eb2584c00bd539871d5309a7a2324953\r\nSuccessfully built cffi pycparser\r\n...\r\n```\r\n\r\nA partial fix is to make the regex used to search for the string more restrictive by adding start and end anchors. This can still result in some false positives.\r\nA more complete fix only checks the last line of output for the \"Success...\" string.\r\nPR in progress.\r\n\r\n```\r\ndocker==2.3.0\r\ndocker-pycreds==0.2.1\r\nPython 2.7.5\r\nClient:\r\n Version: 1.12.6\r\n API version: 1.24\r\n Package version: docker-common-1.12.6-16.el7.x86_64\r\n Go version: go1.7.4\r\n Git commit: 3a094bd/1.12.6\r\n Built: Tue Mar 21 13:30:59 2017\r\n OS/Arch: linux/amd64\r\n\r\nServer:\r\n Version: 17.03.1-ce\r\n API version: 1.27\r\n Package version: \r\n Go version: go1.7.5\r\n Git commit: c6d412e\r\n Built: Fri Mar 24 00:00:50 2017\r\n OS/Arch: linux/amd64\r\n```\n", "before_files": [{"content": "import re\n\nimport six\n\nfrom ..api import APIClient\nfrom ..errors import BuildError\nfrom ..utils.json_stream import json_stream\nfrom .resource import Collection, Model\n\n\nclass Image(Model):\n \"\"\"\n An image on the server.\n \"\"\"\n def __repr__(self):\n return \"<%s: '%s'>\" % (self.__class__.__name__, \"', '\".join(self.tags))\n\n @property\n def labels(self):\n \"\"\"\n The labels of an image as dictionary.\n \"\"\"\n result = self.attrs['Config'].get('Labels')\n return result or {}\n\n @property\n def short_id(self):\n \"\"\"\n The ID of the image truncated to 10 characters, plus the ``sha256:``\n prefix.\n \"\"\"\n if self.id.startswith('sha256:'):\n return self.id[:17]\n return self.id[:10]\n\n @property\n def tags(self):\n \"\"\"\n The image's tags.\n \"\"\"\n tags = self.attrs.get('RepoTags')\n if tags is None:\n tags = []\n return [tag for tag in tags if tag != '<none>:<none>']\n\n def history(self):\n \"\"\"\n Show the history of an image.\n\n Returns:\n (str): The history of the image.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return 
self.client.api.history(self.id)\n\n def save(self):\n \"\"\"\n Get a tarball of an image. Similar to the ``docker save`` command.\n\n Returns:\n (urllib3.response.HTTPResponse object): The response from the\n daemon.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> image = cli.images.get(\"fedora:latest\")\n >>> resp = image.save()\n >>> f = open('/tmp/fedora-latest.tar', 'w')\n >>> for chunk in resp.stream():\n >>> f.write(chunk)\n >>> f.close()\n \"\"\"\n return self.client.api.get_image(self.id)\n\n def tag(self, repository, tag=None, **kwargs):\n \"\"\"\n Tag this image into a repository. Similar to the ``docker tag``\n command.\n\n Args:\n repository (str): The repository to set for the tag\n tag (str): The tag name\n force (bool): Force\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Returns:\n (bool): ``True`` if successful\n \"\"\"\n self.client.api.tag(self.id, repository, tag=tag, **kwargs)\n\n\nclass ImageCollection(Collection):\n model = Image\n\n def build(self, **kwargs):\n \"\"\"\n Build an image and return it. Similar to the ``docker build``\n command. Either ``path`` or ``fileobj`` must be set.\n\n If you have a tar file for the Docker build context (including a\n Dockerfile) already, pass a readable file-like object to ``fileobj``\n and also pass ``custom_context=True``. If the stream is compressed\n also, set ``encoding`` to the correct value (e.g ``gzip``).\n\n If you want to get the raw output of the build, use the\n :py:meth:`~docker.api.build.BuildApiMixin.build` method in the\n low-level API.\n\n Args:\n path (str): Path to the directory containing the Dockerfile\n fileobj: A file object to use as the Dockerfile. (Or a file-like\n object)\n tag (str): A tag to add to the final image\n quiet (bool): Whether to return the status\n nocache (bool): Don't use the cache when set to ``True``\n rm (bool): Remove intermediate containers. The ``docker build``\n command now defaults to ``--rm=true``, but we have kept the old\n default of `False` to preserve backward compatibility\n stream (bool): *Deprecated for API version > 1.8 (always True)*.\n Return a blocking generator you can iterate over to retrieve\n build output as it happens\n timeout (int): HTTP timeout\n custom_context (bool): Optional if using ``fileobj``\n encoding (str): The encoding for a stream. Set to ``gzip`` for\n compressing\n pull (bool): Downloads any updates to the FROM image in Dockerfiles\n forcerm (bool): Always remove intermediate containers, even after\n unsuccessful builds\n dockerfile (str): path within the build context to the Dockerfile\n buildargs (dict): A dictionary of build arguments\n container_limits (dict): A dictionary of limits applied to each\n container created by the build process. Valid keys:\n\n - memory (int): set memory limit for build\n - memswap (int): Total memory (memory + swap), -1 to disable\n swap\n - cpushares (int): CPU shares (relative weight)\n - cpusetcpus (str): CPUs in which to allow execution, e.g.,\n ``\"0-3\"``, ``\"0,1\"``\n decode (bool): If set to ``True``, the returned stream will be\n decoded into dicts on the fly. 
Default ``False``.\n cache_from (list): A list of images used for build cache\n resolution.\n target (str): Name of the build-stage to build in a multi-stage\n Dockerfile.\n\n Returns:\n (:py:class:`Image`): The built image.\n\n Raises:\n :py:class:`docker.errors.BuildError`\n If there is an error during the build.\n :py:class:`docker.errors.APIError`\n If the server returns any other error.\n ``TypeError``\n If neither ``path`` nor ``fileobj`` is specified.\n \"\"\"\n resp = self.client.api.build(**kwargs)\n if isinstance(resp, six.string_types):\n return self.get(resp)\n last_event = None\n for chunk in json_stream(resp):\n if 'error' in chunk:\n raise BuildError(chunk['error'])\n if 'stream' in chunk:\n match = re.search(\n r'(Successfully built |sha256:)([0-9a-f]+)',\n chunk['stream']\n )\n if match:\n image_id = match.group(2)\n return self.get(image_id)\n last_event = chunk\n\n raise BuildError(last_event or 'Unknown')\n\n def get(self, name):\n \"\"\"\n Gets an image.\n\n Args:\n name (str): The name of the image.\n\n Returns:\n (:py:class:`Image`): The image.\n\n Raises:\n :py:class:`docker.errors.ImageNotFound`\n If the image does not exist.\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.prepare_model(self.client.api.inspect_image(name))\n\n def list(self, name=None, all=False, filters=None):\n \"\"\"\n List images on the server.\n\n Args:\n name (str): Only show images belonging to the repository ``name``\n all (bool): Show intermediate image layers. By default, these are\n filtered out.\n filters (dict): Filters to be processed on the image list.\n Available filters:\n - ``dangling`` (bool)\n - ``label`` (str): format either ``key`` or ``key=value``\n\n Returns:\n (list of :py:class:`Image`): The images.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n resp = self.client.api.images(name=name, all=all, filters=filters)\n return [self.prepare_model(r) for r in resp]\n\n def load(self, data):\n \"\"\"\n Load an image that was previously saved using\n :py:meth:`~docker.models.images.Image.save` (or ``docker save``).\n Similar to ``docker load``.\n\n Args:\n data (binary): Image data to be loaded.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.client.api.load_image(data)\n\n def pull(self, name, tag=None, **kwargs):\n \"\"\"\n Pull an image of the given name and return it. Similar to the\n ``docker pull`` command.\n\n If you want to get the raw pull output, use the\n :py:meth:`~docker.api.image.ImageApiMixin.pull` method in the\n low-level API.\n\n Args:\n repository (str): The repository to pull\n tag (str): The tag to pull\n insecure_registry (bool): Use an insecure registry\n auth_config (dict): Override the credentials that\n :py:meth:`~docker.client.DockerClient.login` has set for\n this request. 
``auth_config`` should contain the ``username``\n and ``password`` keys to be valid.\n\n Returns:\n (:py:class:`Image`): The image that has been pulled.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> image = client.images.pull('busybox')\n \"\"\"\n self.client.api.pull(name, tag=tag, **kwargs)\n return self.get('{0}:{1}'.format(name, tag) if tag else name)\n\n def push(self, repository, tag=None, **kwargs):\n return self.client.api.push(repository, tag=tag, **kwargs)\n push.__doc__ = APIClient.push.__doc__\n\n def remove(self, *args, **kwargs):\n self.client.api.remove_image(*args, **kwargs)\n remove.__doc__ = APIClient.remove_image.__doc__\n\n def search(self, *args, **kwargs):\n return self.client.api.search(*args, **kwargs)\n search.__doc__ = APIClient.search.__doc__\n\n def prune(self, filters=None):\n return self.client.api.prune_images(filters=filters)\n prune.__doc__ = APIClient.prune_images.__doc__\n", "path": "docker/models/images.py"}], "after_files": [{"content": "import re\n\nimport six\n\nfrom ..api import APIClient\nfrom ..errors import BuildError\nfrom ..utils.json_stream import json_stream\nfrom .resource import Collection, Model\n\n\nclass Image(Model):\n \"\"\"\n An image on the server.\n \"\"\"\n def __repr__(self):\n return \"<%s: '%s'>\" % (self.__class__.__name__, \"', '\".join(self.tags))\n\n @property\n def labels(self):\n \"\"\"\n The labels of an image as dictionary.\n \"\"\"\n result = self.attrs['Config'].get('Labels')\n return result or {}\n\n @property\n def short_id(self):\n \"\"\"\n The ID of the image truncated to 10 characters, plus the ``sha256:``\n prefix.\n \"\"\"\n if self.id.startswith('sha256:'):\n return self.id[:17]\n return self.id[:10]\n\n @property\n def tags(self):\n \"\"\"\n The image's tags.\n \"\"\"\n tags = self.attrs.get('RepoTags')\n if tags is None:\n tags = []\n return [tag for tag in tags if tag != '<none>:<none>']\n\n def history(self):\n \"\"\"\n Show the history of an image.\n\n Returns:\n (str): The history of the image.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.client.api.history(self.id)\n\n def save(self):\n \"\"\"\n Get a tarball of an image. Similar to the ``docker save`` command.\n\n Returns:\n (urllib3.response.HTTPResponse object): The response from the\n daemon.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> image = cli.images.get(\"fedora:latest\")\n >>> resp = image.save()\n >>> f = open('/tmp/fedora-latest.tar', 'w')\n >>> for chunk in resp.stream():\n >>> f.write(chunk)\n >>> f.close()\n \"\"\"\n return self.client.api.get_image(self.id)\n\n def tag(self, repository, tag=None, **kwargs):\n \"\"\"\n Tag this image into a repository. Similar to the ``docker tag``\n command.\n\n Args:\n repository (str): The repository to set for the tag\n tag (str): The tag name\n force (bool): Force\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Returns:\n (bool): ``True`` if successful\n \"\"\"\n self.client.api.tag(self.id, repository, tag=tag, **kwargs)\n\n\nclass ImageCollection(Collection):\n model = Image\n\n def build(self, **kwargs):\n \"\"\"\n Build an image and return it. Similar to the ``docker build``\n command. 
Either ``path`` or ``fileobj`` must be set.\n\n If you have a tar file for the Docker build context (including a\n Dockerfile) already, pass a readable file-like object to ``fileobj``\n and also pass ``custom_context=True``. If the stream is compressed\n also, set ``encoding`` to the correct value (e.g ``gzip``).\n\n If you want to get the raw output of the build, use the\n :py:meth:`~docker.api.build.BuildApiMixin.build` method in the\n low-level API.\n\n Args:\n path (str): Path to the directory containing the Dockerfile\n fileobj: A file object to use as the Dockerfile. (Or a file-like\n object)\n tag (str): A tag to add to the final image\n quiet (bool): Whether to return the status\n nocache (bool): Don't use the cache when set to ``True``\n rm (bool): Remove intermediate containers. The ``docker build``\n command now defaults to ``--rm=true``, but we have kept the old\n default of `False` to preserve backward compatibility\n stream (bool): *Deprecated for API version > 1.8 (always True)*.\n Return a blocking generator you can iterate over to retrieve\n build output as it happens\n timeout (int): HTTP timeout\n custom_context (bool): Optional if using ``fileobj``\n encoding (str): The encoding for a stream. Set to ``gzip`` for\n compressing\n pull (bool): Downloads any updates to the FROM image in Dockerfiles\n forcerm (bool): Always remove intermediate containers, even after\n unsuccessful builds\n dockerfile (str): path within the build context to the Dockerfile\n buildargs (dict): A dictionary of build arguments\n container_limits (dict): A dictionary of limits applied to each\n container created by the build process. Valid keys:\n\n - memory (int): set memory limit for build\n - memswap (int): Total memory (memory + swap), -1 to disable\n swap\n - cpushares (int): CPU shares (relative weight)\n - cpusetcpus (str): CPUs in which to allow execution, e.g.,\n ``\"0-3\"``, ``\"0,1\"``\n decode (bool): If set to ``True``, the returned stream will be\n decoded into dicts on the fly. 
Default ``False``.\n cache_from (list): A list of images used for build cache\n resolution.\n target (str): Name of the build-stage to build in a multi-stage\n Dockerfile.\n\n Returns:\n (:py:class:`Image`): The built image.\n\n Raises:\n :py:class:`docker.errors.BuildError`\n If there is an error during the build.\n :py:class:`docker.errors.APIError`\n If the server returns any other error.\n ``TypeError``\n If neither ``path`` nor ``fileobj`` is specified.\n \"\"\"\n resp = self.client.api.build(**kwargs)\n if isinstance(resp, six.string_types):\n return self.get(resp)\n last_event = None\n image_id = None\n for chunk in json_stream(resp):\n if 'error' in chunk:\n raise BuildError(chunk['error'])\n if 'stream' in chunk:\n match = re.search(\n r'(^Successfully built |sha256:)([0-9a-f]+)$',\n chunk['stream']\n )\n if match:\n image_id = match.group(2)\n last_event = chunk\n if image_id:\n return self.get(image_id)\n raise BuildError(last_event or 'Unknown')\n\n def get(self, name):\n \"\"\"\n Gets an image.\n\n Args:\n name (str): The name of the image.\n\n Returns:\n (:py:class:`Image`): The image.\n\n Raises:\n :py:class:`docker.errors.ImageNotFound`\n If the image does not exist.\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.prepare_model(self.client.api.inspect_image(name))\n\n def list(self, name=None, all=False, filters=None):\n \"\"\"\n List images on the server.\n\n Args:\n name (str): Only show images belonging to the repository ``name``\n all (bool): Show intermediate image layers. By default, these are\n filtered out.\n filters (dict): Filters to be processed on the image list.\n Available filters:\n - ``dangling`` (bool)\n - ``label`` (str): format either ``key`` or ``key=value``\n\n Returns:\n (list of :py:class:`Image`): The images.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n resp = self.client.api.images(name=name, all=all, filters=filters)\n return [self.prepare_model(r) for r in resp]\n\n def load(self, data):\n \"\"\"\n Load an image that was previously saved using\n :py:meth:`~docker.models.images.Image.save` (or ``docker save``).\n Similar to ``docker load``.\n\n Args:\n data (binary): Image data to be loaded.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.client.api.load_image(data)\n\n def pull(self, name, tag=None, **kwargs):\n \"\"\"\n Pull an image of the given name and return it. Similar to the\n ``docker pull`` command.\n\n If you want to get the raw pull output, use the\n :py:meth:`~docker.api.image.ImageApiMixin.pull` method in the\n low-level API.\n\n Args:\n repository (str): The repository to pull\n tag (str): The tag to pull\n insecure_registry (bool): Use an insecure registry\n auth_config (dict): Override the credentials that\n :py:meth:`~docker.client.DockerClient.login` has set for\n this request. 
``auth_config`` should contain the ``username``\n and ``password`` keys to be valid.\n\n Returns:\n (:py:class:`Image`): The image that has been pulled.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> image = client.images.pull('busybox')\n \"\"\"\n self.client.api.pull(name, tag=tag, **kwargs)\n return self.get('{0}:{1}'.format(name, tag) if tag else name)\n\n def push(self, repository, tag=None, **kwargs):\n return self.client.api.push(repository, tag=tag, **kwargs)\n push.__doc__ = APIClient.push.__doc__\n\n def remove(self, *args, **kwargs):\n self.client.api.remove_image(*args, **kwargs)\n remove.__doc__ = APIClient.remove_image.__doc__\n\n def search(self, *args, **kwargs):\n return self.client.api.search(*args, **kwargs)\n search.__doc__ = APIClient.search.__doc__\n\n def prune(self, filters=None):\n return self.client.api.prune_images(filters=filters)\n prune.__doc__ = APIClient.prune_images.__doc__\n", "path": "docker/models/images.py"}]}
| 3,750 | 221 |
gh_patches_debug_1373 | rasdani/github-patches | git_diff | ros__ros_comm-2007 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Rospy import *
Hi,
Doing
```python
from rospy import *
```
raises the following exception:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'rospy' has no attribute 'NodeProxy'
```
After some investigation, `NodeProxy` doesn't seem to exist anymore in the codebase. Simply removing it from the exports should do the trick.
--- END ISSUE ---
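A self-contained way to see this failure mode, without a ROS installation, is to build a throwaway module whose `__all__` advertises a name that was never defined; the module and attribute names below are invented for illustration.

```python
# Sketch of the failure mode: `from module import *` looks up every name in
# __all__, so a single stale entry breaks the whole wildcard import.
import types

fake_rospy = types.ModuleType("fake_rospy")
fake_rospy.MasterProxy = object                      # defined, imports fine
fake_rospy.__all__ = ["MasterProxy", "NodeProxy"]    # NodeProxy was never defined

try:
    # What `from fake_rospy import *` does under the hood for each listed name.
    exported = {name: getattr(fake_rospy, name) for name in fake_rospy.__all__}
except AttributeError as exc:
    print(exc)   # module 'fake_rospy' has no attribute 'NodeProxy'
```

Dropping the stale name from `__all__`, as the issue itself suggests, restores the wildcard import.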
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `clients/rospy/src/rospy/__init__.py`
Content:
```
1 # Software License Agreement (BSD License)
2 #
3 # Copyright (c) 2008, Willow Garage, Inc.
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions
8 # are met:
9 #
10 # * Redistributions of source code must retain the above copyright
11 # notice, this list of conditions and the following disclaimer.
12 # * Redistributions in binary form must reproduce the above
13 # copyright notice, this list of conditions and the following
14 # disclaimer in the documentation and/or other materials provided
15 # with the distribution.
16 # * Neither the name of Willow Garage, Inc. nor the names of its
17 # contributors may be used to endorse or promote products derived
18 # from this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
21 # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
22 # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
23 # FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
24 # COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
25 # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
26 # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
27 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
28 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
29 # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
30 # ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
31 # POSSIBILITY OF SUCH DAMAGE.
32 #
33 # Copyright (c) 2008, Willow Garage, Inc.
34 # Revision $Id$
35
36 """
37 ROS client library for Python.
38 See U{http://ros.org/wiki/rospy}
39 @author: Ken Conley (kwc)
40 """
41
42 # import symbols into rospy namespace
43 # NOTE: there are much better ways to configure python module
44 # dictionaries, but the rospy codebase isn't quite in shape for that
45 # yet
46
47 from std_msgs.msg import Header
48
49 from .client import spin, myargv, init_node, \
50 get_published_topics, \
51 wait_for_message, \
52 get_master, \
53 on_shutdown, \
54 get_param, get_param_cached, get_param_names, set_param, delete_param, has_param, search_param,\
55 DEBUG, INFO, WARN, ERROR, FATAL
56 from .timer import sleep, Rate, Timer
57 from .core import is_shutdown, signal_shutdown, \
58 get_node_uri, get_ros_root, \
59 logdebug, logwarn, loginfo, logout, logerr, logfatal, \
60 logdebug_throttle, logwarn_throttle, loginfo_throttle, logerr_throttle, logfatal_throttle, \
61 logdebug_throttle_identical, logwarn_throttle_identical, loginfo_throttle_identical, logerr_throttle_identical, logfatal_throttle_identical, \
62 logdebug_once, logwarn_once, loginfo_once, logerr_once, logfatal_once, \
63 parse_rosrpc_uri
64 from .exceptions import *
65 from .msg import AnyMsg
66 from .msproxy import MasterProxy
67 from .names import get_name, get_caller_id, get_namespace, resolve_name, remap_name
68 from .rostime import Time, Duration, get_rostime, get_time
69 from .service import ServiceException
70
71 # - use tcp ros implementation of services
72 from .impl.tcpros_service import Service, ServiceProxy, wait_for_service
73 from .topics import Message, SubscribeListener, Publisher, Subscriber
74
75 ## \defgroup validators Validators
76 ## \defgroup clientapi Client API
77
78 __all__ = [
79 'Header',
80 'spin',
81 'myargv',
82 'init_node',
83 'get_master',
84 'get_published_topics',
85 'wait_for_service',
86 'on_shutdown',
87 'get_param',
88 'get_param_cached',
89 'get_param_names',
90 'set_param',
91 'delete_param',
92 'has_param',
93 'search_param',
94 'sleep',
95 'Rate',
96 'DEBUG',
97 'INFO',
98 'WARN',
99 'ERROR',
100 'FATAL',
101 'is_shutdown',
102 'signal_shutdown',
103 'get_node_uri',
104 'get_ros_root',
105 'logdebug',
106 'logwarn', 'loginfo',
107 'logout', 'logerr', 'logfatal',
108 'logdebug_throttle',
109 'logwarn_throttle', 'loginfo_throttle',
110 'logerr_throttle', 'logfatal_throttle',
111 'logdebug_once',
112 'logwarn_once', 'loginfo_once',
113 'logerr_once', 'logfatal_once',
114 'parse_rosrpc_uri',
115 'MasterProxy',
116 'NodeProxy',
117 'ROSException',
118 'ROSSerializationException',
119 'ROSInitException',
120 'ROSInterruptException',
121 'ROSInternalException',
122 'TransportException',
123 'TransportTerminated',
124 'TransportInitError',
125 'AnyMsg', 'Message',
126 'get_name',
127 'get_caller_id',
128 'get_namespace',
129 'resolve_name',
130 'remap_name',
131 'Time', 'Duration', 'get_rostime', 'get_time',
132 'ServiceException',
133 'Service', 'ServiceProxy',
134 'SubscribeListener', 'Publisher', 'Subscriber',
135 ]
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/clients/rospy/src/rospy/__init__.py b/clients/rospy/src/rospy/__init__.py
--- a/clients/rospy/src/rospy/__init__.py
+++ b/clients/rospy/src/rospy/__init__.py
@@ -113,7 +113,6 @@
'logerr_once', 'logfatal_once',
'parse_rosrpc_uri',
'MasterProxy',
- 'NodeProxy',
'ROSException',
'ROSSerializationException',
'ROSInitException',
|
{"golden_diff": "diff --git a/clients/rospy/src/rospy/__init__.py b/clients/rospy/src/rospy/__init__.py\n--- a/clients/rospy/src/rospy/__init__.py\n+++ b/clients/rospy/src/rospy/__init__.py\n@@ -113,7 +113,6 @@\n 'logerr_once', 'logfatal_once',\n 'parse_rosrpc_uri',\n 'MasterProxy',\n- 'NodeProxy', \n 'ROSException',\n 'ROSSerializationException',\n 'ROSInitException',\n", "issue": "Rospy import *\nHi, \r\n\r\nDoing \r\n```python\r\nfrom rospy import *\r\n```\r\nraises the following exception : \r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nAttributeError: module 'rospy' has no attribute 'NodeProxy'\r\n```\r\nAfter some investigations, `NodeProxy` doesn't seem to exist anymore in the codebase. Simply removing it from the exports should do the trick.\r\n\n", "before_files": [{"content": "# Software License Agreement (BSD License)\n#\n# Copyright (c) 2008, Willow Garage, Inc.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following\n# disclaimer in the documentation and/or other materials provided\n# with the distribution.\n# * Neither the name of Willow Garage, Inc. nor the names of its\n# contributors may be used to endorse or promote products derived\n# from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS\n# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE\n# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN\n# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n# POSSIBILITY OF SUCH DAMAGE.\n#\n# Copyright (c) 2008, Willow Garage, Inc.\n# Revision $Id$\n\n\"\"\"\nROS client library for Python.\nSee U{http://ros.org/wiki/rospy}\n@author: Ken Conley (kwc)\n\"\"\"\n\n# import symbols into rospy namespace\n# NOTE: there are much better ways to configure python module\n# dictionaries, but the rospy codebase isn't quite in shape for that\n# yet\n\nfrom std_msgs.msg import Header\n\nfrom .client import spin, myargv, init_node, \\\n get_published_topics, \\\n wait_for_message, \\\n get_master, \\\n on_shutdown, \\\n get_param, get_param_cached, get_param_names, set_param, delete_param, has_param, search_param,\\\n DEBUG, INFO, WARN, ERROR, FATAL\nfrom .timer import sleep, Rate, Timer\nfrom .core import is_shutdown, signal_shutdown, \\\n get_node_uri, get_ros_root, \\\n logdebug, logwarn, loginfo, logout, logerr, logfatal, \\\n logdebug_throttle, logwarn_throttle, loginfo_throttle, logerr_throttle, logfatal_throttle, \\\n logdebug_throttle_identical, logwarn_throttle_identical, loginfo_throttle_identical, logerr_throttle_identical, logfatal_throttle_identical, \\\n logdebug_once, logwarn_once, loginfo_once, logerr_once, logfatal_once, \\\n parse_rosrpc_uri\nfrom .exceptions import *\nfrom .msg import AnyMsg\nfrom .msproxy import MasterProxy\nfrom .names import get_name, get_caller_id, get_namespace, resolve_name, remap_name\nfrom .rostime import Time, Duration, get_rostime, get_time\nfrom .service import ServiceException\n\n# - use tcp ros implementation of services\nfrom .impl.tcpros_service import Service, ServiceProxy, wait_for_service\nfrom .topics import Message, SubscribeListener, Publisher, Subscriber\n\n## \\defgroup validators Validators\n## \\defgroup clientapi Client API\n\n__all__ = [\n 'Header',\n 'spin',\n 'myargv',\n 'init_node',\n 'get_master',\n 'get_published_topics',\n 'wait_for_service',\n 'on_shutdown',\n 'get_param',\n 'get_param_cached',\n 'get_param_names',\n 'set_param',\n 'delete_param',\n 'has_param',\n 'search_param',\n 'sleep',\n 'Rate',\n 'DEBUG',\n 'INFO',\n 'WARN',\n 'ERROR',\n 'FATAL',\n 'is_shutdown',\n 'signal_shutdown',\n 'get_node_uri',\n 'get_ros_root',\n 'logdebug',\n 'logwarn', 'loginfo',\n 'logout', 'logerr', 'logfatal',\n 'logdebug_throttle',\n 'logwarn_throttle', 'loginfo_throttle',\n 'logerr_throttle', 'logfatal_throttle',\n 'logdebug_once',\n 'logwarn_once', 'loginfo_once',\n 'logerr_once', 'logfatal_once',\n 'parse_rosrpc_uri',\n 'MasterProxy',\n 'NodeProxy', \n 'ROSException',\n 'ROSSerializationException',\n 'ROSInitException',\n 'ROSInterruptException',\n 'ROSInternalException',\n 'TransportException',\n 'TransportTerminated',\n 'TransportInitError',\n 'AnyMsg', 'Message',\n 'get_name',\n 'get_caller_id',\n 'get_namespace',\n 'resolve_name',\n 'remap_name',\n 'Time', 'Duration', 'get_rostime', 'get_time',\n 'ServiceException', \n 'Service', 'ServiceProxy',\n 'SubscribeListener', 'Publisher', 'Subscriber',\n ]\n", "path": "clients/rospy/src/rospy/__init__.py"}], "after_files": [{"content": "# Software License 
Agreement (BSD License)\n#\n# Copyright (c) 2008, Willow Garage, Inc.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following\n# disclaimer in the documentation and/or other materials provided\n# with the distribution.\n# * Neither the name of Willow Garage, Inc. nor the names of its\n# contributors may be used to endorse or promote products derived\n# from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS\n# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE\n# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN\n# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n# POSSIBILITY OF SUCH DAMAGE.\n#\n# Copyright (c) 2008, Willow Garage, Inc.\n# Revision $Id$\n\n\"\"\"\nROS client library for Python.\nSee U{http://ros.org/wiki/rospy}\n@author: Ken Conley (kwc)\n\"\"\"\n\n# import symbols into rospy namespace\n# NOTE: there are much better ways to configure python module\n# dictionaries, but the rospy codebase isn't quite in shape for that\n# yet\n\nfrom std_msgs.msg import Header\n\nfrom .client import spin, myargv, init_node, \\\n get_published_topics, \\\n wait_for_message, \\\n get_master, \\\n on_shutdown, \\\n get_param, get_param_cached, get_param_names, set_param, delete_param, has_param, search_param,\\\n DEBUG, INFO, WARN, ERROR, FATAL\nfrom .timer import sleep, Rate, Timer\nfrom .core import is_shutdown, signal_shutdown, \\\n get_node_uri, get_ros_root, \\\n logdebug, logwarn, loginfo, logout, logerr, logfatal, \\\n logdebug_throttle, logwarn_throttle, loginfo_throttle, logerr_throttle, logfatal_throttle, \\\n logdebug_throttle_identical, logwarn_throttle_identical, loginfo_throttle_identical, logerr_throttle_identical, logfatal_throttle_identical, \\\n logdebug_once, logwarn_once, loginfo_once, logerr_once, logfatal_once, \\\n parse_rosrpc_uri\nfrom .exceptions import *\nfrom .msg import AnyMsg\nfrom .msproxy import MasterProxy\nfrom .names import get_name, get_caller_id, get_namespace, resolve_name, remap_name\nfrom .rostime import Time, Duration, get_rostime, get_time\nfrom .service import ServiceException\n\n# - use tcp ros implementation of services\nfrom .impl.tcpros_service import Service, ServiceProxy, wait_for_service\nfrom .topics import Message, SubscribeListener, Publisher, Subscriber\n\n## \\defgroup validators Validators\n## \\defgroup clientapi Client API\n\n__all__ = [\n 'Header',\n 'spin',\n 'myargv',\n 'init_node',\n 'get_master',\n 'get_published_topics',\n 'wait_for_service',\n 'on_shutdown',\n 'get_param',\n 'get_param_cached',\n 'get_param_names',\n 'set_param',\n 'delete_param',\n 
'has_param',\n 'search_param',\n 'sleep',\n 'Rate',\n 'DEBUG',\n 'INFO',\n 'WARN',\n 'ERROR',\n 'FATAL',\n 'is_shutdown',\n 'signal_shutdown',\n 'get_node_uri',\n 'get_ros_root',\n 'logdebug',\n 'logwarn', 'loginfo',\n 'logout', 'logerr', 'logfatal',\n 'logdebug_throttle',\n 'logwarn_throttle', 'loginfo_throttle',\n 'logerr_throttle', 'logfatal_throttle',\n 'logdebug_once',\n 'logwarn_once', 'loginfo_once',\n 'logerr_once', 'logfatal_once',\n 'parse_rosrpc_uri',\n 'MasterProxy',\n 'ROSException',\n 'ROSSerializationException',\n 'ROSInitException',\n 'ROSInterruptException',\n 'ROSInternalException',\n 'TransportException',\n 'TransportTerminated',\n 'TransportInitError',\n 'AnyMsg', 'Message',\n 'get_name',\n 'get_caller_id',\n 'get_namespace',\n 'resolve_name',\n 'remap_name',\n 'Time', 'Duration', 'get_rostime', 'get_time',\n 'ServiceException', \n 'Service', 'ServiceProxy',\n 'SubscribeListener', 'Publisher', 'Subscriber',\n ]\n", "path": "clients/rospy/src/rospy/__init__.py"}]}
| 1,795 | 125 |
gh_patches_debug_2776
|
rasdani/github-patches
|
git_diff
|
sunpy__sunpy-1505
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
EIT data fails with wcsaxes
The wcs information in the EIT header is not being identified as celestial axes by wcslib (inside astropy.wcs). This means that wcs is not detecting them as angular axes properly, so the set-to-arcsecond formatting step is failing.
--- END ISSUE ---
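For context, the celestial-axis test that wcslib applies can be reproduced with plain astropy; the header keywords below are illustrative stand-ins for an EIT-style header, not values read from a real file.

```python
# Sketch: wcslib only marks axes as celestial for CTYPEs it recognises
# (RA---/DEC--, HPLN-/HPLT-, ...). Legacy solar keywords fall through.
from astropy.io import fits
from astropy.wcs import WCS

header = fits.Header([
    ("CTYPE1", "SOLAR-X"), ("CTYPE2", "SOLAR-Y"),   # legacy EIT-style axis names
    ("CUNIT1", "arcsec"), ("CUNIT2", "arcsec"),
    ("CRPIX1", 512.5), ("CRPIX2", 512.5),
    ("CRVAL1", 0.0), ("CRVAL2", 0.0),
    ("CDELT1", 2.63), ("CDELT2", 2.63),
])

wcs = WCS(header)
print(wcs.has_celestial)   # False, so downstream angular (arcsec) tick formatting breaks
```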
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sunpy/visualization/wcsaxes_compat.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 Helpers and Functions to make WCSAxes work in SunPy
4 """
5 import warnings
6
7 import matplotlib.pyplot as plt
8
9 import astropy.units as u
10
11 try:
12 import wcsaxes
13 HAVE_WCSAXES = True
14
15 except ImportError:
16 HAVE_WCSAXES = False
17 warnings.warn("SunPy plotting is improved by installing the WCSAxes module: http://wcsaxes.readthedocs.org/en/latest/index.html")
18
19 FORCE_NO_WCSAXES = False
20
21 __all__ = ['HAVE_WCSAXES', 'is_wcsaxes', 'FORCE_NO_WCSAXES']
22
23 def is_wcsaxes(axes):
24 """
25 Test a matplotlib Axes object to see if it is an instance of WCSAxes
26
27 Parameters
28 ----------
29 axes : matplotlib Axes Object
30 Axes to test
31
32 Returns
33 -------
34 result : bool
35 Result of the test
36 """
37
38 if HAVE_WCSAXES and not FORCE_NO_WCSAXES:
39 return isinstance(axes, wcsaxes.WCSAxes)
40 else:
41 return False
42
43
44 def gca_wcs(wcs, fig=None):
45 """
46 Get the current axes, and return a WCSAxes if possible
47 """
48
49 if not fig:
50 fig = plt.gcf()
51
52 if not len(fig.get_axes()):
53 if HAVE_WCSAXES and not FORCE_NO_WCSAXES:
54 ax = plt.gca(projection=wcs)
55 else:
56 ax = plt.gca()
57
58 else:
59 ax = plt.gca()
60
61 return ax
62
63 def get_world_transform(axes):
64 if is_wcsaxes(axes):
65 transform = axes.get_transform('world')
66 else:
67 transform = axes.transData
68
69 return transform
70
71 def default_wcs_grid(axes):
72 """
73 Apply some default wcsaxes grid formatting
74 """
75 if not isinstance(axes, wcsaxes.WCSAxes):
76 raise TypeError("This axes is not a WCSAxes")
77
78 x = axes.coords[0]
79 y = axes.coords[1]
80
81 x.set_ticks(color='white')
82 y.set_ticks(color='white')
83
84 x.set_ticks_position('bl')
85 y.set_ticks_position('bl')
86
87 x.set_major_formatter('s.s')
88 y.set_major_formatter('s.s')
89
90 axes.coords.grid(color='white', alpha=0.6)
91
92 def wcsaxes_heliographic_overlay(axes):
93 """
94 Draw a heliographic overlay using wcsaxes
95 """
96 overlay = axes.get_coords_overlay('heliographicstonyhurst')
97
98 lon = overlay[0]
99 lat = overlay[1]
100
101 lon.coord_wrap = 180
102 lon.set_major_formatter('dd')
103
104 lon.set_axislabel('Solar Longitude')
105 lat.set_axislabel('Solar Latitude')
106
107 lon.set_ticks_position('tr')
108 lat.set_ticks_position('tr')
109
110 lon.set_ticks(spacing=10. * u.deg, color='white')
111 lat.set_ticks(spacing=10. * u.deg, color='white')
112
113 overlay.grid(color='white', alpha=0.5)
114
115 return overlay
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sunpy/visualization/wcsaxes_compat.py b/sunpy/visualization/wcsaxes_compat.py
--- a/sunpy/visualization/wcsaxes_compat.py
+++ b/sunpy/visualization/wcsaxes_compat.py
@@ -84,6 +84,11 @@
x.set_ticks_position('bl')
y.set_ticks_position('bl')
+ if x.coord_type != 'longitude':
+ x.set_coord_type('longitude', coord_wrap=180.)
+ if y.coord_type != 'latitude':
+ y.set_coord_type('latitude')
+
x.set_major_formatter('s.s')
y.set_major_formatter('s.s')
|
{"golden_diff": "diff --git a/sunpy/visualization/wcsaxes_compat.py b/sunpy/visualization/wcsaxes_compat.py\n--- a/sunpy/visualization/wcsaxes_compat.py\n+++ b/sunpy/visualization/wcsaxes_compat.py\n@@ -84,6 +84,11 @@\n x.set_ticks_position('bl')\n y.set_ticks_position('bl')\n \n+ if x.coord_type != 'longitude':\n+ x.set_coord_type('longitude', coord_wrap=180.)\n+ if y.coord_type != 'latitude':\n+ y.set_coord_type('latitude')\n+\n x.set_major_formatter('s.s')\n y.set_major_formatter('s.s')\n", "issue": "EIT data fails with wcsaxes\nThe wcs information in the EIT header is not being identified as celestial axes by wcslib (inside astropy.wcs). This means that wcs is not detecting them as angular axes properly so therefore the set to arcsecond formatting is failing.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nHelpers and Functions to make WCSAxes work in SunPy\n\"\"\"\nimport warnings\n\nimport matplotlib.pyplot as plt\n\nimport astropy.units as u\n\ntry:\n import wcsaxes\n HAVE_WCSAXES = True\n\nexcept ImportError:\n HAVE_WCSAXES = False\n warnings.warn(\"SunPy plotting is improved by installing the WCSAxes module: http://wcsaxes.readthedocs.org/en/latest/index.html\")\n\nFORCE_NO_WCSAXES = False\n\n__all__ = ['HAVE_WCSAXES', 'is_wcsaxes', 'FORCE_NO_WCSAXES']\n\ndef is_wcsaxes(axes):\n \"\"\"\n Test a matplotlib Axes object to see if it is an instance of WCSAxes\n\n Parameters\n ----------\n axes : matplotlib Axes Object\n Axes to test\n\n Returns\n -------\n result : bool\n Result of the test\n \"\"\"\n\n if HAVE_WCSAXES and not FORCE_NO_WCSAXES:\n return isinstance(axes, wcsaxes.WCSAxes)\n else:\n return False\n\n\ndef gca_wcs(wcs, fig=None):\n \"\"\"\n Get the current axes, and return a WCSAxes if possible\n \"\"\"\n\n if not fig:\n fig = plt.gcf()\n\n if not len(fig.get_axes()):\n if HAVE_WCSAXES and not FORCE_NO_WCSAXES:\n ax = plt.gca(projection=wcs)\n else:\n ax = plt.gca()\n\n else:\n ax = plt.gca()\n\n return ax\n\ndef get_world_transform(axes):\n if is_wcsaxes(axes):\n transform = axes.get_transform('world')\n else:\n transform = axes.transData\n\n return transform\n\ndef default_wcs_grid(axes):\n \"\"\"\n Apply some default wcsaxes grid formatting\n \"\"\"\n if not isinstance(axes, wcsaxes.WCSAxes):\n raise TypeError(\"This axes is not a WCSAxes\")\n\n x = axes.coords[0]\n y = axes.coords[1]\n\n x.set_ticks(color='white')\n y.set_ticks(color='white')\n\n x.set_ticks_position('bl')\n y.set_ticks_position('bl')\n\n x.set_major_formatter('s.s')\n y.set_major_formatter('s.s')\n\n axes.coords.grid(color='white', alpha=0.6)\n\ndef wcsaxes_heliographic_overlay(axes):\n \"\"\"\n Draw a heliographic overlay using wcsaxes\n \"\"\"\n overlay = axes.get_coords_overlay('heliographicstonyhurst')\n\n lon = overlay[0]\n lat = overlay[1]\n\n lon.coord_wrap = 180\n lon.set_major_formatter('dd')\n\n lon.set_axislabel('Solar Longitude')\n lat.set_axislabel('Solar Latitude')\n\n lon.set_ticks_position('tr')\n lat.set_ticks_position('tr')\n\n lon.set_ticks(spacing=10. * u.deg, color='white')\n lat.set_ticks(spacing=10. 
* u.deg, color='white')\n\n overlay.grid(color='white', alpha=0.5)\n\n return overlay\n", "path": "sunpy/visualization/wcsaxes_compat.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nHelpers and Functions to make WCSAxes work in SunPy\n\"\"\"\nimport warnings\n\nimport matplotlib.pyplot as plt\n\nimport astropy.units as u\n\ntry:\n import wcsaxes\n HAVE_WCSAXES = True\n\nexcept ImportError:\n HAVE_WCSAXES = False\n warnings.warn(\"SunPy plotting is improved by installing the WCSAxes module: http://wcsaxes.readthedocs.org/en/latest/index.html\")\n\nFORCE_NO_WCSAXES = False\n\n__all__ = ['HAVE_WCSAXES', 'is_wcsaxes', 'FORCE_NO_WCSAXES']\n\ndef is_wcsaxes(axes):\n \"\"\"\n Test a matplotlib Axes object to see if it is an instance of WCSAxes\n\n Parameters\n ----------\n axes : matplotlib Axes Object\n Axes to test\n\n Returns\n -------\n result : bool\n Result of the test\n \"\"\"\n\n if HAVE_WCSAXES and not FORCE_NO_WCSAXES:\n return isinstance(axes, wcsaxes.WCSAxes)\n else:\n return False\n\n\ndef gca_wcs(wcs, fig=None):\n \"\"\"\n Get the current axes, and return a WCSAxes if possible\n \"\"\"\n\n if not fig:\n fig = plt.gcf()\n\n if not len(fig.get_axes()):\n if HAVE_WCSAXES and not FORCE_NO_WCSAXES:\n ax = plt.gca(projection=wcs)\n else:\n ax = plt.gca()\n\n else:\n ax = plt.gca()\n\n return ax\n\ndef get_world_transform(axes):\n if is_wcsaxes(axes):\n transform = axes.get_transform('world')\n else:\n transform = axes.transData\n\n return transform\n\ndef default_wcs_grid(axes):\n \"\"\"\n Apply some default wcsaxes grid formatting\n \"\"\"\n if not isinstance(axes, wcsaxes.WCSAxes):\n raise TypeError(\"This axes is not a WCSAxes\")\n\n x = axes.coords[0]\n y = axes.coords[1]\n\n x.set_ticks(color='white')\n y.set_ticks(color='white')\n\n x.set_ticks_position('bl')\n y.set_ticks_position('bl')\n\n if x.coord_type != 'longitude':\n x.set_coord_type('longitude', coord_wrap=180.)\n if y.coord_type != 'latitude':\n y.set_coord_type('latitude')\n\n x.set_major_formatter('s.s')\n y.set_major_formatter('s.s')\n\n axes.coords.grid(color='white', alpha=0.6)\n\ndef wcsaxes_heliographic_overlay(axes):\n \"\"\"\n Draw a heliographic overlay using wcsaxes\n \"\"\"\n overlay = axes.get_coords_overlay('heliographicstonyhurst')\n\n lon = overlay[0]\n lat = overlay[1]\n\n lon.coord_wrap = 180\n lon.set_major_formatter('dd')\n\n lon.set_axislabel('Solar Longitude')\n lat.set_axislabel('Solar Latitude')\n\n lon.set_ticks_position('tr')\n lat.set_ticks_position('tr')\n\n lon.set_ticks(spacing=10. * u.deg, color='white')\n lat.set_ticks(spacing=10. * u.deg, color='white')\n\n overlay.grid(color='white', alpha=0.5)\n\n return overlay\n", "path": "sunpy/visualization/wcsaxes_compat.py"}]}
| 1,238 | 146 |
gh_patches_debug_41320 | rasdani/github-patches | git_diff | deepchecks__deepchecks-1528 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[FEAT] MixedDataTypes to display a few examples of each type
Current display:
[screenshot: current check display]
Desired - to see a few unique examples from each type. Currently one has to inspect the column in order to do it, e.g.:
[screenshot: unique values of the inspected column, grouped by type]
would want the check to display it by itself
--- END ISSUE ---
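A rough sketch of the per-type sampling being asked for, using plain pandas on a made-up column; the column name and values are illustrative only.

```python
# Split a mixed column into "parses as a number" vs "string-only" values and
# keep a handful of unique examples from each group for display.
import pandas as pd

column = pd.Series(["apple", "1.5", "banana", "3", "pear", "7", "2.0"])

def parses_as_number(value) -> bool:
    try:
        float(value)
        return True
    except ValueError:
        return False

numeric_mask = column.apply(parses_as_number)
number_examples = column[numeric_mask].unique()[:3].tolist()
string_examples = column[~numeric_mask].unique()[:3].tolist()

print(number_examples)   # ['1.5', '3', '7']
print(string_examples)   # ['apple', 'banana', 'pear']
```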
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deepchecks/tabular/checks/data_integrity/mixed_data_types.py`
Content:
```
1 # ----------------------------------------------------------------------------
2 # Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)
3 #
4 # This file is part of Deepchecks.
5 # Deepchecks is distributed under the terms of the GNU Affero General
6 # Public License (version 3 or later).
7 # You should have received a copy of the GNU Affero General Public License
8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.
9 # ----------------------------------------------------------------------------
10 #
11 """module contains Mixed Types check."""
12 from typing import List, Tuple, Union
13
14 import numpy as np
15 import pandas as pd
16
17 from deepchecks.core import CheckResult, ConditionCategory, ConditionResult
18 from deepchecks.tabular import Context, SingleDatasetCheck
19 from deepchecks.utils.dataframes import select_from_dataframe
20 from deepchecks.utils.features import N_TOP_MESSAGE, column_importance_sorter_df
21 from deepchecks.utils.strings import format_percent, is_string_column
22 from deepchecks.utils.typing import Hashable
23
24 __all__ = ['MixedDataTypes']
25
26
27 class MixedDataTypes(SingleDatasetCheck):
28 """Detect columns which contain a mix of numerical and string values.
29
30 Parameters
31 ----------
32 columns : Union[Hashable, List[Hashable]] , default: None
33 Columns to check, if none are given checks all columns
34 except ignored ones.
35 ignore_columns : Union[Hashable, List[Hashable]] , default: None
36 Columns to ignore, if none given checks based on columns
37 variable.
38 n_top_columns : int , optional
39 amount of columns to show ordered by feature importance (date, index, label are first)
40 """
41
42 def __init__(
43 self,
44 columns: Union[Hashable, List[Hashable], None] = None,
45 ignore_columns: Union[Hashable, List[Hashable], None] = None,
46 n_top_columns: int = 10,
47 **kwargs
48 ):
49 super().__init__(**kwargs)
50 self.columns = columns
51 self.ignore_columns = ignore_columns
52 self.n_top_columns = n_top_columns
53
54 def run_logic(self, context: Context, dataset_type: str = 'train') -> CheckResult:
55 """Run check.
56
57 Returns
58 -------
59 CheckResult
60 value is a dict where the key is the column name as key and the value is the ratio 'strings' and 'numbers'
61 for any column with mixed data types.
62 numbers will also include hidden numbers in string representation.
63 """
64 if dataset_type == 'train':
65 dataset = context.train
66 else:
67 dataset = context.test
68 features_importance = context.features_importance
69
70 df = select_from_dataframe(dataset.data, self.columns, self.ignore_columns)
71
72 # Result value: { Column Name: {string: pct, numbers: pct}}
73 display_dict = {}
74 result_dict = {}
75
76 for column_name in df.columns:
77 column_data = df[column_name].dropna()
78 mix = self._get_data_mix(column_data)
79 result_dict[column_name] = mix
80 if mix:
81 # Format percents for display
82 display_dict[column_name] = {k: format_percent(v) for k, v in mix.items()}
83
84 if display_dict:
85 df_graph = pd.DataFrame.from_dict(display_dict)
86 df_graph = column_importance_sorter_df(df_graph.T, dataset, features_importance,
87 self.n_top_columns).T
88 display = [N_TOP_MESSAGE % self.n_top_columns, df_graph]
89 else:
90 display = None
91
92 return CheckResult(result_dict, display=display)
93
94 @classmethod
95 def _get_data_mix(cls, column_data: pd.Series) -> dict:
96 if is_string_column(column_data):
97 return cls._check_mixed_percentage(column_data)
98 return {}
99
100 @classmethod
101 def _check_mixed_percentage(cls, column_data: pd.Series) -> dict:
102 total_rows = column_data.count()
103
104 def is_float(x) -> bool:
105 try:
106 float(x)
107 return True
108 except ValueError:
109 return False
110
111 nums = sum(column_data.apply(is_float))
112 if nums in (total_rows, 0):
113 return {}
114
115 # Then we've got a mix
116 nums_pct = nums / total_rows
117 strs_pct = (np.abs(nums - total_rows)) / total_rows
118
119 return {'strings': strs_pct, 'numbers': nums_pct}
120
121 def add_condition_rare_type_ratio_not_in_range(self, ratio_range: Tuple[float, float] = (0.01, 0.1)):
122 """Add condition - Whether the ratio of rarer data type (strings or numbers) is not in the "danger zone".
123
124 The "danger zone" represents the following logic - if the rarer data type is, for example, 30% of the data,
125 than the column is presumably supposed to contain both numbers and string values. If the rarer data type is,
126 for example, less than 1% of the data, than it's presumably a contamination, but a negligible one. In the range
127 between, there is a real chance that the rarer data type may represent a problem to model training and
128 inference.
129
130 Parameters
131 ----------
132 ratio_range : Tuple[float, float] , default: (0.01 , 0.1)
133 The range between which the ratio of rarer data type in the column is
134 considered a problem.
135 """
136 def condition(result):
137 no_mix_columns = []
138 failing_columns = []
139 for col, ratios in result.items():
140 # Columns without a mix contains empty dict for ratios
141 if not ratios:
142 no_mix_columns.append(col)
143 continue
144 rarer_ratio = min(ratios['strings'], ratios['numbers'])
145 if ratio_range[0] < rarer_ratio < ratio_range[1]:
146 failing_columns.append(col)
147 if failing_columns:
148 details = f'Found {len(failing_columns)} out of {len(result)} columns with non-negligible quantities ' \
149 f'of samples with a different data type from the majority of samples: {failing_columns}'
150 return ConditionResult(ConditionCategory.WARN, details)
151 details = f'{len(result)} columns passed: found {len(result) - len(no_mix_columns)} columns with ' \
152 f'negligible types mix, and {len(no_mix_columns)} columns without any types mix'
153 return ConditionResult(ConditionCategory.PASS, details)
154
155 name = f'Rare data types in column are either more than {format_percent(ratio_range[1])} or less ' \
156 f'than {format_percent(ratio_range[0])} of the data'
157 return self.add_condition(name, condition)
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/deepchecks/tabular/checks/data_integrity/mixed_data_types.py b/deepchecks/tabular/checks/data_integrity/mixed_data_types.py
--- a/deepchecks/tabular/checks/data_integrity/mixed_data_types.py
+++ b/deepchecks/tabular/checks/data_integrity/mixed_data_types.py
@@ -18,7 +18,7 @@
from deepchecks.tabular import Context, SingleDatasetCheck
from deepchecks.utils.dataframes import select_from_dataframe
from deepchecks.utils.features import N_TOP_MESSAGE, column_importance_sorter_df
-from deepchecks.utils.strings import format_percent, is_string_column
+from deepchecks.utils.strings import format_list, format_number, format_percent, get_ellipsis, is_string_column
from deepchecks.utils.typing import Hashable
__all__ = ['MixedDataTypes']
@@ -79,7 +79,13 @@
result_dict[column_name] = mix
if mix:
# Format percents for display
- display_dict[column_name] = {k: format_percent(v) for k, v in mix.items()}
+ formated_mix = {}
+ formated_mix['Strings'] = format_percent(mix['strings'])
+ formated_mix['Numbers'] = format_percent(mix['numbers'])
+ formated_mix['Strings examples'] = [get_ellipsis(strr, 15) for strr in mix['strings_examples']]
+ formated_mix['Numbers examples'] = '[' + format_list([format_number(float(num))
+ for num in mix['numbers_examples']]) + ']'
+ display_dict[column_name] = formated_mix
if display_dict:
df_graph = pd.DataFrame.from_dict(display_dict)
@@ -91,21 +97,26 @@
return CheckResult(result_dict, display=display)
- @classmethod
- def _get_data_mix(cls, column_data: pd.Series) -> dict:
+ def _get_data_mix(self, column_data: pd.Series) -> dict:
if is_string_column(column_data):
- return cls._check_mixed_percentage(column_data)
+ return self._check_mixed_percentage(column_data)
return {}
- @classmethod
- def _check_mixed_percentage(cls, column_data: pd.Series) -> dict:
+ def _check_mixed_percentage(self, column_data: pd.Series) -> dict:
total_rows = column_data.count()
+ numbers_in_col = set()
+ strings_in_col = set()
+
def is_float(x) -> bool:
try:
float(x)
+ if len(numbers_in_col) < 3:
+ numbers_in_col.add(x)
return True
except ValueError:
+ if len(strings_in_col) < 3:
+ strings_in_col.add(x)
return False
nums = sum(column_data.apply(is_float))
@@ -116,7 +127,8 @@
nums_pct = nums / total_rows
strs_pct = (np.abs(nums - total_rows)) / total_rows
- return {'strings': strs_pct, 'numbers': nums_pct}
+ return {'strings': strs_pct, 'numbers': nums_pct,
+ 'strings_examples': strings_in_col, 'numbers_examples': numbers_in_col}
def add_condition_rare_type_ratio_not_in_range(self, ratio_range: Tuple[float, float] = (0.01, 0.1)):
"""Add condition - Whether the ratio of rarer data type (strings or numbers) is not in the "danger zone".
|
{"golden_diff": "diff --git a/deepchecks/tabular/checks/data_integrity/mixed_data_types.py b/deepchecks/tabular/checks/data_integrity/mixed_data_types.py\n--- a/deepchecks/tabular/checks/data_integrity/mixed_data_types.py\n+++ b/deepchecks/tabular/checks/data_integrity/mixed_data_types.py\n@@ -18,7 +18,7 @@\n from deepchecks.tabular import Context, SingleDatasetCheck\n from deepchecks.utils.dataframes import select_from_dataframe\n from deepchecks.utils.features import N_TOP_MESSAGE, column_importance_sorter_df\n-from deepchecks.utils.strings import format_percent, is_string_column\n+from deepchecks.utils.strings import format_list, format_number, format_percent, get_ellipsis, is_string_column\n from deepchecks.utils.typing import Hashable\n \n __all__ = ['MixedDataTypes']\n@@ -79,7 +79,13 @@\n result_dict[column_name] = mix\n if mix:\n # Format percents for display\n- display_dict[column_name] = {k: format_percent(v) for k, v in mix.items()}\n+ formated_mix = {}\n+ formated_mix['Strings'] = format_percent(mix['strings'])\n+ formated_mix['Numbers'] = format_percent(mix['numbers'])\n+ formated_mix['Strings examples'] = [get_ellipsis(strr, 15) for strr in mix['strings_examples']]\n+ formated_mix['Numbers examples'] = '[' + format_list([format_number(float(num))\n+ for num in mix['numbers_examples']]) + ']'\n+ display_dict[column_name] = formated_mix\n \n if display_dict:\n df_graph = pd.DataFrame.from_dict(display_dict)\n@@ -91,21 +97,26 @@\n \n return CheckResult(result_dict, display=display)\n \n- @classmethod\n- def _get_data_mix(cls, column_data: pd.Series) -> dict:\n+ def _get_data_mix(self, column_data: pd.Series) -> dict:\n if is_string_column(column_data):\n- return cls._check_mixed_percentage(column_data)\n+ return self._check_mixed_percentage(column_data)\n return {}\n \n- @classmethod\n- def _check_mixed_percentage(cls, column_data: pd.Series) -> dict:\n+ def _check_mixed_percentage(self, column_data: pd.Series) -> dict:\n total_rows = column_data.count()\n \n+ numbers_in_col = set()\n+ strings_in_col = set()\n+\n def is_float(x) -> bool:\n try:\n float(x)\n+ if len(numbers_in_col) < 3:\n+ numbers_in_col.add(x)\n return True\n except ValueError:\n+ if len(strings_in_col) < 3:\n+ strings_in_col.add(x)\n return False\n \n nums = sum(column_data.apply(is_float))\n@@ -116,7 +127,8 @@\n nums_pct = nums / total_rows\n strs_pct = (np.abs(nums - total_rows)) / total_rows\n \n- return {'strings': strs_pct, 'numbers': nums_pct}\n+ return {'strings': strs_pct, 'numbers': nums_pct,\n+ 'strings_examples': strings_in_col, 'numbers_examples': numbers_in_col}\n \n def add_condition_rare_type_ratio_not_in_range(self, ratio_range: Tuple[float, float] = (0.01, 0.1)):\n \"\"\"Add condition - Whether the ratio of rarer data type (strings or numbers) is not in the \"danger zone\".\n", "issue": "[FEAT] MixedDataTypes to display a few examples of each type\nCurrent display:\r\n\r\n\r\nDesired - to see a few unique examples from each type. currently have to inspect column in order to do it, e.g.:\r\n\r\nwould want the check to display it by itself\r\n\n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. 
If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"module contains Mixed Types check.\"\"\"\nfrom typing import List, Tuple, Union\n\nimport numpy as np\nimport pandas as pd\n\nfrom deepchecks.core import CheckResult, ConditionCategory, ConditionResult\nfrom deepchecks.tabular import Context, SingleDatasetCheck\nfrom deepchecks.utils.dataframes import select_from_dataframe\nfrom deepchecks.utils.features import N_TOP_MESSAGE, column_importance_sorter_df\nfrom deepchecks.utils.strings import format_percent, is_string_column\nfrom deepchecks.utils.typing import Hashable\n\n__all__ = ['MixedDataTypes']\n\n\nclass MixedDataTypes(SingleDatasetCheck):\n \"\"\"Detect columns which contain a mix of numerical and string values.\n\n Parameters\n ----------\n columns : Union[Hashable, List[Hashable]] , default: None\n Columns to check, if none are given checks all columns\n except ignored ones.\n ignore_columns : Union[Hashable, List[Hashable]] , default: None\n Columns to ignore, if none given checks based on columns\n variable.\n n_top_columns : int , optional\n amount of columns to show ordered by feature importance (date, index, label are first)\n \"\"\"\n\n def __init__(\n self,\n columns: Union[Hashable, List[Hashable], None] = None,\n ignore_columns: Union[Hashable, List[Hashable], None] = None,\n n_top_columns: int = 10,\n **kwargs\n ):\n super().__init__(**kwargs)\n self.columns = columns\n self.ignore_columns = ignore_columns\n self.n_top_columns = n_top_columns\n\n def run_logic(self, context: Context, dataset_type: str = 'train') -> CheckResult:\n \"\"\"Run check.\n\n Returns\n -------\n CheckResult\n value is a dict where the key is the column name as key and the value is the ratio 'strings' and 'numbers'\n for any column with mixed data types.\n numbers will also include hidden numbers in string representation.\n \"\"\"\n if dataset_type == 'train':\n dataset = context.train\n else:\n dataset = context.test\n features_importance = context.features_importance\n\n df = select_from_dataframe(dataset.data, self.columns, self.ignore_columns)\n\n # Result value: { Column Name: {string: pct, numbers: pct}}\n display_dict = {}\n result_dict = {}\n\n for column_name in df.columns:\n column_data = df[column_name].dropna()\n mix = self._get_data_mix(column_data)\n result_dict[column_name] = mix\n if mix:\n # Format percents for display\n display_dict[column_name] = {k: format_percent(v) for k, v in mix.items()}\n\n if display_dict:\n df_graph = pd.DataFrame.from_dict(display_dict)\n df_graph = column_importance_sorter_df(df_graph.T, dataset, features_importance,\n self.n_top_columns).T\n display = [N_TOP_MESSAGE % self.n_top_columns, df_graph]\n else:\n display = None\n\n return CheckResult(result_dict, display=display)\n\n @classmethod\n def _get_data_mix(cls, column_data: pd.Series) -> dict:\n if is_string_column(column_data):\n return cls._check_mixed_percentage(column_data)\n return {}\n\n @classmethod\n def _check_mixed_percentage(cls, column_data: pd.Series) -> dict:\n total_rows = column_data.count()\n\n def is_float(x) -> bool:\n try:\n float(x)\n return True\n except ValueError:\n return False\n\n nums = sum(column_data.apply(is_float))\n if nums in (total_rows, 0):\n return {}\n\n # Then we've got a mix\n nums_pct = nums / total_rows\n strs_pct = (np.abs(nums - total_rows)) / total_rows\n\n return {'strings': strs_pct, 'numbers': nums_pct}\n\n def add_condition_rare_type_ratio_not_in_range(self, 
ratio_range: Tuple[float, float] = (0.01, 0.1)):\n \"\"\"Add condition - Whether the ratio of rarer data type (strings or numbers) is not in the \"danger zone\".\n\n The \"danger zone\" represents the following logic - if the rarer data type is, for example, 30% of the data,\n than the column is presumably supposed to contain both numbers and string values. If the rarer data type is,\n for example, less than 1% of the data, than it's presumably a contamination, but a negligible one. In the range\n between, there is a real chance that the rarer data type may represent a problem to model training and\n inference.\n\n Parameters\n ----------\n ratio_range : Tuple[float, float] , default: (0.01 , 0.1)\n The range between which the ratio of rarer data type in the column is\n considered a problem.\n \"\"\"\n def condition(result):\n no_mix_columns = []\n failing_columns = []\n for col, ratios in result.items():\n # Columns without a mix contains empty dict for ratios\n if not ratios:\n no_mix_columns.append(col)\n continue\n rarer_ratio = min(ratios['strings'], ratios['numbers'])\n if ratio_range[0] < rarer_ratio < ratio_range[1]:\n failing_columns.append(col)\n if failing_columns:\n details = f'Found {len(failing_columns)} out of {len(result)} columns with non-negligible quantities ' \\\n f'of samples with a different data type from the majority of samples: {failing_columns}'\n return ConditionResult(ConditionCategory.WARN, details)\n details = f'{len(result)} columns passed: found {len(result) - len(no_mix_columns)} columns with ' \\\n f'negligible types mix, and {len(no_mix_columns)} columns without any types mix'\n return ConditionResult(ConditionCategory.PASS, details)\n\n name = f'Rare data types in column are either more than {format_percent(ratio_range[1])} or less ' \\\n f'than {format_percent(ratio_range[0])} of the data'\n return self.add_condition(name, condition)\n", "path": "deepchecks/tabular/checks/data_integrity/mixed_data_types.py"}], "after_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. 
If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"module contains Mixed Types check.\"\"\"\nfrom typing import List, Tuple, Union\n\nimport numpy as np\nimport pandas as pd\n\nfrom deepchecks.core import CheckResult, ConditionCategory, ConditionResult\nfrom deepchecks.tabular import Context, SingleDatasetCheck\nfrom deepchecks.utils.dataframes import select_from_dataframe\nfrom deepchecks.utils.features import N_TOP_MESSAGE, column_importance_sorter_df\nfrom deepchecks.utils.strings import format_list, format_number, format_percent, get_ellipsis, is_string_column\nfrom deepchecks.utils.typing import Hashable\n\n__all__ = ['MixedDataTypes']\n\n\nclass MixedDataTypes(SingleDatasetCheck):\n \"\"\"Detect columns which contain a mix of numerical and string values.\n\n Parameters\n ----------\n columns : Union[Hashable, List[Hashable]] , default: None\n Columns to check, if none are given checks all columns\n except ignored ones.\n ignore_columns : Union[Hashable, List[Hashable]] , default: None\n Columns to ignore, if none given checks based on columns\n variable.\n n_top_columns : int , optional\n amount of columns to show ordered by feature importance (date, index, label are first)\n \"\"\"\n\n def __init__(\n self,\n columns: Union[Hashable, List[Hashable], None] = None,\n ignore_columns: Union[Hashable, List[Hashable], None] = None,\n n_top_columns: int = 10,\n **kwargs\n ):\n super().__init__(**kwargs)\n self.columns = columns\n self.ignore_columns = ignore_columns\n self.n_top_columns = n_top_columns\n\n def run_logic(self, context: Context, dataset_type: str = 'train') -> CheckResult:\n \"\"\"Run check.\n\n Returns\n -------\n CheckResult\n value is a dict where the key is the column name as key and the value is the ratio 'strings' and 'numbers'\n for any column with mixed data types.\n numbers will also include hidden numbers in string representation.\n \"\"\"\n if dataset_type == 'train':\n dataset = context.train\n else:\n dataset = context.test\n features_importance = context.features_importance\n\n df = select_from_dataframe(dataset.data, self.columns, self.ignore_columns)\n\n # Result value: { Column Name: {string: pct, numbers: pct}}\n display_dict = {}\n result_dict = {}\n\n for column_name in df.columns:\n column_data = df[column_name].dropna()\n mix = self._get_data_mix(column_data)\n result_dict[column_name] = mix\n if mix:\n # Format percents for display\n formated_mix = {}\n formated_mix['Strings'] = format_percent(mix['strings'])\n formated_mix['Numbers'] = format_percent(mix['numbers'])\n formated_mix['Strings examples'] = [get_ellipsis(strr, 15) for strr in mix['strings_examples']]\n formated_mix['Numbers examples'] = '[' + format_list([format_number(float(num))\n for num in mix['numbers_examples']]) + ']'\n display_dict[column_name] = formated_mix\n\n if display_dict:\n df_graph = pd.DataFrame.from_dict(display_dict)\n df_graph = column_importance_sorter_df(df_graph.T, dataset, features_importance,\n self.n_top_columns).T\n display = [N_TOP_MESSAGE % self.n_top_columns, df_graph]\n else:\n display = None\n\n return CheckResult(result_dict, display=display)\n\n def _get_data_mix(self, column_data: pd.Series) -> dict:\n if is_string_column(column_data):\n return self._check_mixed_percentage(column_data)\n return {}\n\n def _check_mixed_percentage(self, column_data: pd.Series) -> dict:\n total_rows = column_data.count()\n\n numbers_in_col = set()\n strings_in_col = set()\n\n def is_float(x) 
-> bool:\n try:\n float(x)\n if len(numbers_in_col) < 3:\n numbers_in_col.add(x)\n return True\n except ValueError:\n if len(strings_in_col) < 3:\n strings_in_col.add(x)\n return False\n\n nums = sum(column_data.apply(is_float))\n if nums in (total_rows, 0):\n return {}\n\n # Then we've got a mix\n nums_pct = nums / total_rows\n strs_pct = (np.abs(nums - total_rows)) / total_rows\n\n return {'strings': strs_pct, 'numbers': nums_pct,\n 'strings_examples': strings_in_col, 'numbers_examples': numbers_in_col}\n\n def add_condition_rare_type_ratio_not_in_range(self, ratio_range: Tuple[float, float] = (0.01, 0.1)):\n \"\"\"Add condition - Whether the ratio of rarer data type (strings or numbers) is not in the \"danger zone\".\n\n The \"danger zone\" represents the following logic - if the rarer data type is, for example, 30% of the data,\n than the column is presumably supposed to contain both numbers and string values. If the rarer data type is,\n for example, less than 1% of the data, than it's presumably a contamination, but a negligible one. In the range\n between, there is a real chance that the rarer data type may represent a problem to model training and\n inference.\n\n Parameters\n ----------\n ratio_range : Tuple[float, float] , default: (0.01 , 0.1)\n The range between which the ratio of rarer data type in the column is\n considered a problem.\n \"\"\"\n def condition(result):\n no_mix_columns = []\n failing_columns = []\n for col, ratios in result.items():\n # Columns without a mix contains empty dict for ratios\n if not ratios:\n no_mix_columns.append(col)\n continue\n rarer_ratio = min(ratios['strings'], ratios['numbers'])\n if ratio_range[0] < rarer_ratio < ratio_range[1]:\n failing_columns.append(col)\n if failing_columns:\n details = f'Found {len(failing_columns)} out of {len(result)} columns with non-negligible quantities ' \\\n f'of samples with a different data type from the majority of samples: {failing_columns}'\n return ConditionResult(ConditionCategory.WARN, details)\n details = f'{len(result)} columns passed: found {len(result) - len(no_mix_columns)} columns with ' \\\n f'negligible types mix, and {len(no_mix_columns)} columns without any types mix'\n return ConditionResult(ConditionCategory.PASS, details)\n\n name = f'Rare data types in column are either more than {format_percent(ratio_range[1])} or less ' \\\n f'than {format_percent(ratio_range[0])} of the data'\n return self.add_condition(name, condition)\n", "path": "deepchecks/tabular/checks/data_integrity/mixed_data_types.py"}]}
| 2,248 | 765 |
gh_patches_debug_29812
|
rasdani/github-patches
|
git_diff
|
pallets__werkzeug-1416
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
import collision between http.py and Python 3's http modules
Here is a (granted, obscure) way to reproduce this bug:
1. `cd` into the `werkzeug` directory (i.e. where `serving.py`, `http.py`, etc. are located).
2. start a Python interpreter
3. type `import test`. This works for Python 2.x, but in Python 3.x this is what happens:
```
(venv34) werkzeug $ python
Python 3.4.1 (default, Sep 6 2014, 18:06:58)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import test
Traceback (most recent call last):
File "/home/miguel/Documents/dev/flask/werkzeug/werkzeug/test.py", line 20, in <module>
from urllib2 import Request as U2Request
ImportError: No module named 'urllib2'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/miguel/Documents/dev/flask/werkzeug/werkzeug/http.py", line 26, in <module>
from urllib2 import parse_http_list as _parse_list_header
ImportError: No module named 'urllib2'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/miguel/Documents/dev/flask/werkzeug/werkzeug/test.py", line 22, in <module>
from urllib.request import Request as U2Request
File "/opt/python3.4/lib/python3.4/urllib/request.py", line 88, in <module>
import http.client
File "/home/miguel/Documents/dev/flask/werkzeug/werkzeug/http.py", line 28, in <module>
from urllib.request import parse_http_list as _parse_list_header
ImportError: cannot import name 'parse_http_list'
>>>
```
The problem is that Python 3's reorg of `httplib` into `http.xxx` modules causes an import collision between `http.client` and werkzeug's `http.py`, and `http.py` wins because its directory is in `sys.path`.
I agree that the example above is pretty obscure and unlikely to ever be a problem. A more likely example is when you start an application with the syntax `python -m werkzeug.serving -r module:app`. In this situation the process starts and then spawns a sub-process for the reloader. The sub-process is going to have the directory where `werkzeug.serving` is located in the `sys.path`, so the bug reproduces there as well.
Would you be open to rename `http.py`? Or do you have a better idea?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `werkzeug/_reloader.py`
Content:
```
1 import os
2 import sys
3 import time
4 import subprocess
5 import threading
6 from itertools import chain
7
8 from werkzeug._internal import _log
9 from werkzeug._compat import PY2, iteritems, text_type
10
11
12 def _iter_module_files():
13 """This iterates over all relevant Python files. It goes through all
14 loaded files from modules, all files in folders of already loaded modules
15 as well as all files reachable through a package.
16 """
17 # The list call is necessary on Python 3 in case the module
18 # dictionary modifies during iteration.
19 for module in list(sys.modules.values()):
20 if module is None:
21 continue
22 filename = getattr(module, '__file__', None)
23 if filename:
24 if os.path.isdir(filename) and \
25 os.path.exists(os.path.join(filename, "__init__.py")):
26 filename = os.path.join(filename, "__init__.py")
27
28 old = None
29 while not os.path.isfile(filename):
30 old = filename
31 filename = os.path.dirname(filename)
32 if filename == old:
33 break
34 else:
35 if filename[-4:] in ('.pyc', '.pyo'):
36 filename = filename[:-1]
37 yield filename
38
39
40 def _find_observable_paths(extra_files=None):
41 """Finds all paths that should be observed."""
42 rv = set(os.path.dirname(os.path.abspath(x))
43 if os.path.isfile(x) else os.path.abspath(x)
44 for x in sys.path)
45
46 for filename in extra_files or ():
47 rv.add(os.path.dirname(os.path.abspath(filename)))
48
49 for module in list(sys.modules.values()):
50 fn = getattr(module, '__file__', None)
51 if fn is None:
52 continue
53 fn = os.path.abspath(fn)
54 rv.add(os.path.dirname(fn))
55
56 return _find_common_roots(rv)
57
58
59 def _get_args_for_reloading():
60 """Returns the executable. This contains a workaround for windows
61 if the executable is incorrectly reported to not have the .exe
62 extension which can cause bugs on reloading. This also contains
63 a workaround for linux where the file is executable (possibly with
64 a program other than python)
65 """
66 rv = [sys.executable]
67 py_script = os.path.abspath(sys.argv[0])
68
69 if os.name == 'nt' and not os.path.exists(py_script) and \
70 os.path.exists(py_script + '.exe'):
71 py_script += '.exe'
72
73 windows_workaround = (
74 os.path.splitext(rv[0])[1] == '.exe'
75 and os.path.splitext(py_script)[1] == '.exe'
76 )
77 nix_workaround = os.path.isfile(py_script) and os.access(py_script, os.X_OK)
78
79 if windows_workaround or nix_workaround:
80 rv.pop(0)
81
82 rv.append(py_script)
83 rv.extend(sys.argv[1:])
84 return rv
85
86
87 def _find_common_roots(paths):
88 """Out of some paths it finds the common roots that need monitoring."""
89 paths = [x.split(os.path.sep) for x in paths]
90 root = {}
91 for chunks in sorted(paths, key=len, reverse=True):
92 node = root
93 for chunk in chunks:
94 node = node.setdefault(chunk, {})
95 node.clear()
96
97 rv = set()
98
99 def _walk(node, path):
100 for prefix, child in iteritems(node):
101 _walk(child, path + (prefix,))
102 if not node:
103 rv.add('/'.join(path))
104 _walk(root, ())
105 return rv
106
107
108 class ReloaderLoop(object):
109 name = None
110
111 # monkeypatched by testsuite. wrapping with `staticmethod` is required in
112 # case time.sleep has been replaced by a non-c function (e.g. by
113 # `eventlet.monkey_patch`) before we get here
114 _sleep = staticmethod(time.sleep)
115
116 def __init__(self, extra_files=None, interval=1):
117 self.extra_files = set(os.path.abspath(x)
118 for x in extra_files or ())
119 self.interval = interval
120
121 def run(self):
122 pass
123
124 def restart_with_reloader(self):
125 """Spawn a new Python interpreter with the same arguments as this one,
126 but running the reloader thread.
127 """
128 while 1:
129 _log('info', ' * Restarting with %s' % self.name)
130 args = _get_args_for_reloading()
131
132 # a weird bug on windows. sometimes unicode strings end up in the
133 # environment and subprocess.call does not like this, encode them
134 # to latin1 and continue.
135 if os.name == 'nt' and PY2:
136 new_environ = {}
137 for key, value in iteritems(os.environ):
138 if isinstance(key, text_type):
139 key = key.encode('iso-8859-1')
140 if isinstance(value, text_type):
141 value = value.encode('iso-8859-1')
142 new_environ[key] = value
143 else:
144 new_environ = os.environ.copy()
145
146 new_environ['WERKZEUG_RUN_MAIN'] = 'true'
147 exit_code = subprocess.call(args, env=new_environ,
148 close_fds=False)
149 if exit_code != 3:
150 return exit_code
151
152 def trigger_reload(self, filename):
153 self.log_reload(filename)
154 sys.exit(3)
155
156 def log_reload(self, filename):
157 filename = os.path.abspath(filename)
158 _log('info', ' * Detected change in %r, reloading' % filename)
159
160
161 class StatReloaderLoop(ReloaderLoop):
162 name = 'stat'
163
164 def run(self):
165 mtimes = {}
166 while 1:
167 for filename in chain(_iter_module_files(),
168 self.extra_files):
169 try:
170 mtime = os.stat(filename).st_mtime
171 except OSError:
172 continue
173
174 old_time = mtimes.get(filename)
175 if old_time is None:
176 mtimes[filename] = mtime
177 continue
178 elif mtime > old_time:
179 self.trigger_reload(filename)
180 self._sleep(self.interval)
181
182
183 class WatchdogReloaderLoop(ReloaderLoop):
184
185 def __init__(self, *args, **kwargs):
186 ReloaderLoop.__init__(self, *args, **kwargs)
187 from watchdog.observers import Observer
188 from watchdog.events import FileSystemEventHandler
189 self.observable_paths = set()
190
191 def _check_modification(filename):
192 if filename in self.extra_files:
193 self.trigger_reload(filename)
194 dirname = os.path.dirname(filename)
195 if dirname.startswith(tuple(self.observable_paths)):
196 if filename.endswith(('.pyc', '.pyo', '.py')):
197 self.trigger_reload(filename)
198
199 class _CustomHandler(FileSystemEventHandler):
200
201 def on_created(self, event):
202 _check_modification(event.src_path)
203
204 def on_modified(self, event):
205 _check_modification(event.src_path)
206
207 def on_moved(self, event):
208 _check_modification(event.src_path)
209 _check_modification(event.dest_path)
210
211 def on_deleted(self, event):
212 _check_modification(event.src_path)
213
214 reloader_name = Observer.__name__.lower()
215 if reloader_name.endswith('observer'):
216 reloader_name = reloader_name[:-8]
217 reloader_name += ' reloader'
218
219 self.name = reloader_name
220
221 self.observer_class = Observer
222 self.event_handler = _CustomHandler()
223 self.should_reload = False
224
225 def trigger_reload(self, filename):
226 # This is called inside an event handler, which means throwing
227 # SystemExit has no effect.
228 # https://github.com/gorakhargosh/watchdog/issues/294
229 self.should_reload = True
230 self.log_reload(filename)
231
232 def run(self):
233 watches = {}
234 observer = self.observer_class()
235 observer.start()
236
237 try:
238 while not self.should_reload:
239 to_delete = set(watches)
240 paths = _find_observable_paths(self.extra_files)
241 for path in paths:
242 if path not in watches:
243 try:
244 watches[path] = observer.schedule(
245 self.event_handler, path, recursive=True)
246 except OSError:
247 # Clear this path from list of watches We don't want
248 # the same error message showing again in the next
249 # iteration.
250 watches[path] = None
251 to_delete.discard(path)
252 for path in to_delete:
253 watch = watches.pop(path, None)
254 if watch is not None:
255 observer.unschedule(watch)
256 self.observable_paths = paths
257 self._sleep(self.interval)
258 finally:
259 observer.stop()
260 observer.join()
261
262 sys.exit(3)
263
264
265 reloader_loops = {
266 'stat': StatReloaderLoop,
267 'watchdog': WatchdogReloaderLoop,
268 }
269
270 try:
271 __import__('watchdog.observers')
272 except ImportError:
273 reloader_loops['auto'] = reloader_loops['stat']
274 else:
275 reloader_loops['auto'] = reloader_loops['watchdog']
276
277
278 def ensure_echo_on():
279 """Ensure that echo mode is enabled. Some tools such as PDB disable
280 it which causes usability issues after reload."""
281 # tcgetattr will fail if stdin isn't a tty
282 if not sys.stdin.isatty():
283 return
284 try:
285 import termios
286 except ImportError:
287 return
288 attributes = termios.tcgetattr(sys.stdin)
289 if not attributes[3] & termios.ECHO:
290 attributes[3] |= termios.ECHO
291 termios.tcsetattr(sys.stdin, termios.TCSANOW, attributes)
292
293
294 def run_with_reloader(main_func, extra_files=None, interval=1,
295 reloader_type='auto'):
296 """Run the given function in an independent python interpreter."""
297 import signal
298 reloader = reloader_loops[reloader_type](extra_files, interval)
299 signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))
300 try:
301 if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
302 ensure_echo_on()
303 t = threading.Thread(target=main_func, args=())
304 t.setDaemon(True)
305 t.start()
306 reloader.run()
307 else:
308 sys.exit(reloader.restart_with_reloader())
309 except KeyboardInterrupt:
310 pass
311
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/werkzeug/_reloader.py b/werkzeug/_reloader.py
--- a/werkzeug/_reloader.py
+++ b/werkzeug/_reloader.py
@@ -65,22 +65,47 @@
"""
rv = [sys.executable]
py_script = os.path.abspath(sys.argv[0])
+ args = sys.argv[1:]
+ # Need to look at main module to determine how it was executed.
+ __main__ = sys.modules["__main__"]
+
+ if __main__.__package__ is None:
+ # Executed a file, like "python app.py".
+ if os.name == "nt":
+ # Windows entry points have ".exe" extension and should be
+ # called directly.
+ if not os.path.exists(py_script) and os.path.exists(py_script + ".exe"):
+ py_script += ".exe"
+
+ if (
+ os.path.splitext(rv[0])[1] == ".exe"
+ and os.path.splitext(py_script)[1] == ".exe"
+ ):
+ rv.pop(0)
+
+ elif os.path.isfile(py_script) and os.access(py_script, os.X_OK):
+ # The file is marked as executable. Nix adds a wrapper that
+ # shouldn't be called with the Python executable.
+ rv.pop(0)
+
+ rv.append(py_script)
+ else:
+ # Executed a module, like "python -m werkzeug.serving".
+ if sys.argv[0] == "-m":
+ # Flask works around previous behavior by putting
+ # "-m flask" in sys.argv.
+ # TODO remove this once Flask no longer misbehaves
+ args = sys.argv
+ else:
+ py_module = __main__.__package__
+ name = os.path.splitext(os.path.basename(py_script))[0]
- if os.name == 'nt' and not os.path.exists(py_script) and \
- os.path.exists(py_script + '.exe'):
- py_script += '.exe'
-
- windows_workaround = (
- os.path.splitext(rv[0])[1] == '.exe'
- and os.path.splitext(py_script)[1] == '.exe'
- )
- nix_workaround = os.path.isfile(py_script) and os.access(py_script, os.X_OK)
+ if name != "__main__":
+ py_module += "." + name
- if windows_workaround or nix_workaround:
- rv.pop(0)
+ rv.extend(("-m", py_module.lstrip(".")))
- rv.append(py_script)
- rv.extend(sys.argv[1:])
+ rv.extend(args)
return rv
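
The crux of the fix above is that module-style invocations get relaunched with `-m <package.module>` rather than by script path, so the package directory never lands on `sys.path` in the reloader child. A rough standalone sketch of that decision logic (simplified, with assumed names, not the exact werkzeug code):

```python
import os
import sys


def build_restart_command():
    """Rebuild argv for the reloader subprocess (illustrative sketch)."""
    main_module = sys.modules["__main__"]
    script = os.path.abspath(sys.argv[0])

    if main_module.__package__ is None:
        # Started as "python app.py": re-run the script by path.
        return [sys.executable, script, *sys.argv[1:]]

    # Started as "python -m pkg.module": rebuild the dotted module name
    # so the child is also launched with -m and inherits a clean sys.path.
    module = main_module.__package__
    name = os.path.splitext(os.path.basename(script))[0]
    if name != "__main__":
        module += "." + name
    return [sys.executable, "-m", module.lstrip("."), *sys.argv[1:]]
```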
|
{"golden_diff": "diff --git a/werkzeug/_reloader.py b/werkzeug/_reloader.py\n--- a/werkzeug/_reloader.py\n+++ b/werkzeug/_reloader.py\n@@ -65,22 +65,47 @@\n \"\"\"\n rv = [sys.executable]\n py_script = os.path.abspath(sys.argv[0])\n+ args = sys.argv[1:]\n+ # Need to look at main module to determine how it was executed.\n+ __main__ = sys.modules[\"__main__\"]\n+\n+ if __main__.__package__ is None:\n+ # Executed a file, like \"python app.py\".\n+ if os.name == \"nt\":\n+ # Windows entry points have \".exe\" extension and should be\n+ # called directly.\n+ if not os.path.exists(py_script) and os.path.exists(py_script + \".exe\"):\n+ py_script += \".exe\"\n+\n+ if (\n+ os.path.splitext(rv[0])[1] == \".exe\"\n+ and os.path.splitext(py_script)[1] == \".exe\"\n+ ):\n+ rv.pop(0)\n+\n+ elif os.path.isfile(py_script) and os.access(py_script, os.X_OK):\n+ # The file is marked as executable. Nix adds a wrapper that\n+ # shouldn't be called with the Python executable.\n+ rv.pop(0)\n+\n+ rv.append(py_script)\n+ else:\n+ # Executed a module, like \"python -m werkzeug.serving\".\n+ if sys.argv[0] == \"-m\":\n+ # Flask works around previous behavior by putting\n+ # \"-m flask\" in sys.argv.\n+ # TODO remove this once Flask no longer misbehaves\n+ args = sys.argv\n+ else:\n+ py_module = __main__.__package__\n+ name = os.path.splitext(os.path.basename(py_script))[0]\n \n- if os.name == 'nt' and not os.path.exists(py_script) and \\\n- os.path.exists(py_script + '.exe'):\n- py_script += '.exe'\n-\n- windows_workaround = (\n- os.path.splitext(rv[0])[1] == '.exe'\n- and os.path.splitext(py_script)[1] == '.exe'\n- )\n- nix_workaround = os.path.isfile(py_script) and os.access(py_script, os.X_OK)\n+ if name != \"__main__\":\n+ py_module += \".\" + name\n \n- if windows_workaround or nix_workaround:\n- rv.pop(0)\n+ rv.extend((\"-m\", py_module.lstrip(\".\")))\n \n- rv.append(py_script)\n- rv.extend(sys.argv[1:])\n+ rv.extend(args)\n return rv\n", "issue": "import collision between http.py and Python 3's http modules\nHere is a (granted, obscure) way to reproduce this bug:\n1. `cd` into the `werkzeug` directory (i.e. where `serving.py`, `http.py`, etc. are located).\n2. start a Python interpreter\n3. type `import test`. 
This works for Python 2.x, but in Python 3.x this is what happens:\n \n ```\n (venv34) werkzeug $ python\n Python 3.4.1 (default, Sep 6 2014, 18:06:58) \n [GCC 4.8.2] on linux\n Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n >>> import test\n Traceback (most recent call last):\n File \"/home/miguel/Documents/dev/flask/werkzeug/werkzeug/test.py\", line 20, in <module>\n from urllib2 import Request as U2Request\n ImportError: No module named 'urllib2'\n \n During handling of the above exception, another exception occurred:\n \n Traceback (most recent call last):\n File \"/home/miguel/Documents/dev/flask/werkzeug/werkzeug/http.py\", line 26, in <module>\n from urllib2 import parse_http_list as _parse_list_header\n ImportError: No module named 'urllib2'\n \n During handling of the above exception, another exception occurred:\n \n Traceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/home/miguel/Documents/dev/flask/werkzeug/werkzeug/test.py\", line 22, in <module>\n from urllib.request import Request as U2Request\n File \"/opt/python3.4/lib/python3.4/urllib/request.py\", line 88, in <module>\n import http.client\n File \"/home/miguel/Documents/dev/flask/werkzeug/werkzeug/http.py\", line 28, in <module>\n from urllib.request import parse_http_list as _parse_list_header\n ImportError: cannot import name 'parse_http_list'\n >>> \n ```\n\nThe problem is that Python 3's reorg of `httplib` into `http.xxx` modules causes an import collision between `http.client` and werkzeug's `http.py`, and `http.py` wins because its directory is in `sys.path`.\n\nI agree that the example above is pretty obscure and unlike to ever be a problem. A more likely example is when you start an application with the syntax `python -m werkzeug.serving -r module:app`. In this situation the process starts and then spawns a sub-process for the reloader. The sub-process is going to have the directory where `werkzeug.serving` is located in the `sys.path`, so the bug reproduces there as well.\n\nWould you be open to rename `http.py`? Or do you have a better idea?\n\n", "before_files": [{"content": "import os\nimport sys\nimport time\nimport subprocess\nimport threading\nfrom itertools import chain\n\nfrom werkzeug._internal import _log\nfrom werkzeug._compat import PY2, iteritems, text_type\n\n\ndef _iter_module_files():\n \"\"\"This iterates over all relevant Python files. 
It goes through all\n loaded files from modules, all files in folders of already loaded modules\n as well as all files reachable through a package.\n \"\"\"\n # The list call is necessary on Python 3 in case the module\n # dictionary modifies during iteration.\n for module in list(sys.modules.values()):\n if module is None:\n continue\n filename = getattr(module, '__file__', None)\n if filename:\n if os.path.isdir(filename) and \\\n os.path.exists(os.path.join(filename, \"__init__.py\")):\n filename = os.path.join(filename, \"__init__.py\")\n\n old = None\n while not os.path.isfile(filename):\n old = filename\n filename = os.path.dirname(filename)\n if filename == old:\n break\n else:\n if filename[-4:] in ('.pyc', '.pyo'):\n filename = filename[:-1]\n yield filename\n\n\ndef _find_observable_paths(extra_files=None):\n \"\"\"Finds all paths that should be observed.\"\"\"\n rv = set(os.path.dirname(os.path.abspath(x))\n if os.path.isfile(x) else os.path.abspath(x)\n for x in sys.path)\n\n for filename in extra_files or ():\n rv.add(os.path.dirname(os.path.abspath(filename)))\n\n for module in list(sys.modules.values()):\n fn = getattr(module, '__file__', None)\n if fn is None:\n continue\n fn = os.path.abspath(fn)\n rv.add(os.path.dirname(fn))\n\n return _find_common_roots(rv)\n\n\ndef _get_args_for_reloading():\n \"\"\"Returns the executable. This contains a workaround for windows\n if the executable is incorrectly reported to not have the .exe\n extension which can cause bugs on reloading. This also contains\n a workaround for linux where the file is executable (possibly with\n a program other than python)\n \"\"\"\n rv = [sys.executable]\n py_script = os.path.abspath(sys.argv[0])\n\n if os.name == 'nt' and not os.path.exists(py_script) and \\\n os.path.exists(py_script + '.exe'):\n py_script += '.exe'\n\n windows_workaround = (\n os.path.splitext(rv[0])[1] == '.exe'\n and os.path.splitext(py_script)[1] == '.exe'\n )\n nix_workaround = os.path.isfile(py_script) and os.access(py_script, os.X_OK)\n\n if windows_workaround or nix_workaround:\n rv.pop(0)\n\n rv.append(py_script)\n rv.extend(sys.argv[1:])\n return rv\n\n\ndef _find_common_roots(paths):\n \"\"\"Out of some paths it finds the common roots that need monitoring.\"\"\"\n paths = [x.split(os.path.sep) for x in paths]\n root = {}\n for chunks in sorted(paths, key=len, reverse=True):\n node = root\n for chunk in chunks:\n node = node.setdefault(chunk, {})\n node.clear()\n\n rv = set()\n\n def _walk(node, path):\n for prefix, child in iteritems(node):\n _walk(child, path + (prefix,))\n if not node:\n rv.add('/'.join(path))\n _walk(root, ())\n return rv\n\n\nclass ReloaderLoop(object):\n name = None\n\n # monkeypatched by testsuite. wrapping with `staticmethod` is required in\n # case time.sleep has been replaced by a non-c function (e.g. by\n # `eventlet.monkey_patch`) before we get here\n _sleep = staticmethod(time.sleep)\n\n def __init__(self, extra_files=None, interval=1):\n self.extra_files = set(os.path.abspath(x)\n for x in extra_files or ())\n self.interval = interval\n\n def run(self):\n pass\n\n def restart_with_reloader(self):\n \"\"\"Spawn a new Python interpreter with the same arguments as this one,\n but running the reloader thread.\n \"\"\"\n while 1:\n _log('info', ' * Restarting with %s' % self.name)\n args = _get_args_for_reloading()\n\n # a weird bug on windows. 
sometimes unicode strings end up in the\n # environment and subprocess.call does not like this, encode them\n # to latin1 and continue.\n if os.name == 'nt' and PY2:\n new_environ = {}\n for key, value in iteritems(os.environ):\n if isinstance(key, text_type):\n key = key.encode('iso-8859-1')\n if isinstance(value, text_type):\n value = value.encode('iso-8859-1')\n new_environ[key] = value\n else:\n new_environ = os.environ.copy()\n\n new_environ['WERKZEUG_RUN_MAIN'] = 'true'\n exit_code = subprocess.call(args, env=new_environ,\n close_fds=False)\n if exit_code != 3:\n return exit_code\n\n def trigger_reload(self, filename):\n self.log_reload(filename)\n sys.exit(3)\n\n def log_reload(self, filename):\n filename = os.path.abspath(filename)\n _log('info', ' * Detected change in %r, reloading' % filename)\n\n\nclass StatReloaderLoop(ReloaderLoop):\n name = 'stat'\n\n def run(self):\n mtimes = {}\n while 1:\n for filename in chain(_iter_module_files(),\n self.extra_files):\n try:\n mtime = os.stat(filename).st_mtime\n except OSError:\n continue\n\n old_time = mtimes.get(filename)\n if old_time is None:\n mtimes[filename] = mtime\n continue\n elif mtime > old_time:\n self.trigger_reload(filename)\n self._sleep(self.interval)\n\n\nclass WatchdogReloaderLoop(ReloaderLoop):\n\n def __init__(self, *args, **kwargs):\n ReloaderLoop.__init__(self, *args, **kwargs)\n from watchdog.observers import Observer\n from watchdog.events import FileSystemEventHandler\n self.observable_paths = set()\n\n def _check_modification(filename):\n if filename in self.extra_files:\n self.trigger_reload(filename)\n dirname = os.path.dirname(filename)\n if dirname.startswith(tuple(self.observable_paths)):\n if filename.endswith(('.pyc', '.pyo', '.py')):\n self.trigger_reload(filename)\n\n class _CustomHandler(FileSystemEventHandler):\n\n def on_created(self, event):\n _check_modification(event.src_path)\n\n def on_modified(self, event):\n _check_modification(event.src_path)\n\n def on_moved(self, event):\n _check_modification(event.src_path)\n _check_modification(event.dest_path)\n\n def on_deleted(self, event):\n _check_modification(event.src_path)\n\n reloader_name = Observer.__name__.lower()\n if reloader_name.endswith('observer'):\n reloader_name = reloader_name[:-8]\n reloader_name += ' reloader'\n\n self.name = reloader_name\n\n self.observer_class = Observer\n self.event_handler = _CustomHandler()\n self.should_reload = False\n\n def trigger_reload(self, filename):\n # This is called inside an event handler, which means throwing\n # SystemExit has no effect.\n # https://github.com/gorakhargosh/watchdog/issues/294\n self.should_reload = True\n self.log_reload(filename)\n\n def run(self):\n watches = {}\n observer = self.observer_class()\n observer.start()\n\n try:\n while not self.should_reload:\n to_delete = set(watches)\n paths = _find_observable_paths(self.extra_files)\n for path in paths:\n if path not in watches:\n try:\n watches[path] = observer.schedule(\n self.event_handler, path, recursive=True)\n except OSError:\n # Clear this path from list of watches We don't want\n # the same error message showing again in the next\n # iteration.\n watches[path] = None\n to_delete.discard(path)\n for path in to_delete:\n watch = watches.pop(path, None)\n if watch is not None:\n observer.unschedule(watch)\n self.observable_paths = paths\n self._sleep(self.interval)\n finally:\n observer.stop()\n observer.join()\n\n sys.exit(3)\n\n\nreloader_loops = {\n 'stat': StatReloaderLoop,\n 'watchdog': 
WatchdogReloaderLoop,\n}\n\ntry:\n __import__('watchdog.observers')\nexcept ImportError:\n reloader_loops['auto'] = reloader_loops['stat']\nelse:\n reloader_loops['auto'] = reloader_loops['watchdog']\n\n\ndef ensure_echo_on():\n \"\"\"Ensure that echo mode is enabled. Some tools such as PDB disable\n it which causes usability issues after reload.\"\"\"\n # tcgetattr will fail if stdin isn't a tty\n if not sys.stdin.isatty():\n return\n try:\n import termios\n except ImportError:\n return\n attributes = termios.tcgetattr(sys.stdin)\n if not attributes[3] & termios.ECHO:\n attributes[3] |= termios.ECHO\n termios.tcsetattr(sys.stdin, termios.TCSANOW, attributes)\n\n\ndef run_with_reloader(main_func, extra_files=None, interval=1,\n reloader_type='auto'):\n \"\"\"Run the given function in an independent python interpreter.\"\"\"\n import signal\n reloader = reloader_loops[reloader_type](extra_files, interval)\n signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))\n try:\n if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':\n ensure_echo_on()\n t = threading.Thread(target=main_func, args=())\n t.setDaemon(True)\n t.start()\n reloader.run()\n else:\n sys.exit(reloader.restart_with_reloader())\n except KeyboardInterrupt:\n pass\n", "path": "werkzeug/_reloader.py"}], "after_files": [{"content": "import os\nimport sys\nimport time\nimport subprocess\nimport threading\nfrom itertools import chain\n\nfrom werkzeug._internal import _log\nfrom werkzeug._compat import PY2, iteritems, text_type\n\n\ndef _iter_module_files():\n \"\"\"This iterates over all relevant Python files. It goes through all\n loaded files from modules, all files in folders of already loaded modules\n as well as all files reachable through a package.\n \"\"\"\n # The list call is necessary on Python 3 in case the module\n # dictionary modifies during iteration.\n for module in list(sys.modules.values()):\n if module is None:\n continue\n filename = getattr(module, '__file__', None)\n if filename:\n if os.path.isdir(filename) and \\\n os.path.exists(os.path.join(filename, \"__init__.py\")):\n filename = os.path.join(filename, \"__init__.py\")\n\n old = None\n while not os.path.isfile(filename):\n old = filename\n filename = os.path.dirname(filename)\n if filename == old:\n break\n else:\n if filename[-4:] in ('.pyc', '.pyo'):\n filename = filename[:-1]\n yield filename\n\n\ndef _find_observable_paths(extra_files=None):\n \"\"\"Finds all paths that should be observed.\"\"\"\n rv = set(os.path.dirname(os.path.abspath(x))\n if os.path.isfile(x) else os.path.abspath(x)\n for x in sys.path)\n\n for filename in extra_files or ():\n rv.add(os.path.dirname(os.path.abspath(filename)))\n\n for module in list(sys.modules.values()):\n fn = getattr(module, '__file__', None)\n if fn is None:\n continue\n fn = os.path.abspath(fn)\n rv.add(os.path.dirname(fn))\n\n return _find_common_roots(rv)\n\n\ndef _get_args_for_reloading():\n \"\"\"Returns the executable. This contains a workaround for windows\n if the executable is incorrectly reported to not have the .exe\n extension which can cause bugs on reloading. 
This also contains\n a workaround for linux where the file is executable (possibly with\n a program other than python)\n \"\"\"\n rv = [sys.executable]\n py_script = os.path.abspath(sys.argv[0])\n args = sys.argv[1:]\n # Need to look at main module to determine how it was executed.\n __main__ = sys.modules[\"__main__\"]\n\n if __main__.__package__ is None:\n # Executed a file, like \"python app.py\".\n if os.name == \"nt\":\n # Windows entry points have \".exe\" extension and should be\n # called directly.\n if not os.path.exists(py_script) and os.path.exists(py_script + \".exe\"):\n py_script += \".exe\"\n\n if (\n os.path.splitext(rv[0])[1] == \".exe\"\n and os.path.splitext(py_script)[1] == \".exe\"\n ):\n rv.pop(0)\n\n elif os.path.isfile(py_script) and os.access(py_script, os.X_OK):\n # The file is marked as executable. Nix adds a wrapper that\n # shouldn't be called with the Python executable.\n rv.pop(0)\n\n rv.append(py_script)\n else:\n # Executed a module, like \"python -m werkzeug.serving\".\n if sys.argv[0] == \"-m\":\n # Flask works around previous behavior by putting\n # \"-m flask\" in sys.argv.\n # TODO remove this once Flask no longer misbehaves\n args = sys.argv\n else:\n py_module = __main__.__package__\n name = os.path.splitext(os.path.basename(py_script))[0]\n\n if name != \"__main__\":\n py_module += \".\" + name\n\n rv.extend((\"-m\", py_module.lstrip(\".\")))\n\n rv.extend(args)\n return rv\n\n\ndef _find_common_roots(paths):\n \"\"\"Out of some paths it finds the common roots that need monitoring.\"\"\"\n paths = [x.split(os.path.sep) for x in paths]\n root = {}\n for chunks in sorted(paths, key=len, reverse=True):\n node = root\n for chunk in chunks:\n node = node.setdefault(chunk, {})\n node.clear()\n\n rv = set()\n\n def _walk(node, path):\n for prefix, child in iteritems(node):\n _walk(child, path + (prefix,))\n if not node:\n rv.add('/'.join(path))\n _walk(root, ())\n return rv\n\n\nclass ReloaderLoop(object):\n name = None\n\n # monkeypatched by testsuite. wrapping with `staticmethod` is required in\n # case time.sleep has been replaced by a non-c function (e.g. by\n # `eventlet.monkey_patch`) before we get here\n _sleep = staticmethod(time.sleep)\n\n def __init__(self, extra_files=None, interval=1):\n self.extra_files = set(os.path.abspath(x)\n for x in extra_files or ())\n self.interval = interval\n\n def run(self):\n pass\n\n def restart_with_reloader(self):\n \"\"\"Spawn a new Python interpreter with the same arguments as this one,\n but running the reloader thread.\n \"\"\"\n while 1:\n _log('info', ' * Restarting with %s' % self.name)\n args = _get_args_for_reloading()\n\n # a weird bug on windows. 
sometimes unicode strings end up in the\n # environment and subprocess.call does not like this, encode them\n # to latin1 and continue.\n if os.name == 'nt' and PY2:\n new_environ = {}\n for key, value in iteritems(os.environ):\n if isinstance(key, text_type):\n key = key.encode('iso-8859-1')\n if isinstance(value, text_type):\n value = value.encode('iso-8859-1')\n new_environ[key] = value\n else:\n new_environ = os.environ.copy()\n\n new_environ['WERKZEUG_RUN_MAIN'] = 'true'\n exit_code = subprocess.call(args, env=new_environ,\n close_fds=False)\n if exit_code != 3:\n return exit_code\n\n def trigger_reload(self, filename):\n self.log_reload(filename)\n sys.exit(3)\n\n def log_reload(self, filename):\n filename = os.path.abspath(filename)\n _log('info', ' * Detected change in %r, reloading' % filename)\n\n\nclass StatReloaderLoop(ReloaderLoop):\n name = 'stat'\n\n def run(self):\n mtimes = {}\n while 1:\n for filename in chain(_iter_module_files(),\n self.extra_files):\n try:\n mtime = os.stat(filename).st_mtime\n except OSError:\n continue\n\n old_time = mtimes.get(filename)\n if old_time is None:\n mtimes[filename] = mtime\n continue\n elif mtime > old_time:\n self.trigger_reload(filename)\n self._sleep(self.interval)\n\n\nclass WatchdogReloaderLoop(ReloaderLoop):\n\n def __init__(self, *args, **kwargs):\n ReloaderLoop.__init__(self, *args, **kwargs)\n from watchdog.observers import Observer\n from watchdog.events import FileSystemEventHandler\n self.observable_paths = set()\n\n def _check_modification(filename):\n if filename in self.extra_files:\n self.trigger_reload(filename)\n dirname = os.path.dirname(filename)\n if dirname.startswith(tuple(self.observable_paths)):\n if filename.endswith(('.pyc', '.pyo', '.py')):\n self.trigger_reload(filename)\n\n class _CustomHandler(FileSystemEventHandler):\n\n def on_created(self, event):\n _check_modification(event.src_path)\n\n def on_modified(self, event):\n _check_modification(event.src_path)\n\n def on_moved(self, event):\n _check_modification(event.src_path)\n _check_modification(event.dest_path)\n\n def on_deleted(self, event):\n _check_modification(event.src_path)\n\n reloader_name = Observer.__name__.lower()\n if reloader_name.endswith('observer'):\n reloader_name = reloader_name[:-8]\n reloader_name += ' reloader'\n\n self.name = reloader_name\n\n self.observer_class = Observer\n self.event_handler = _CustomHandler()\n self.should_reload = False\n\n def trigger_reload(self, filename):\n # This is called inside an event handler, which means throwing\n # SystemExit has no effect.\n # https://github.com/gorakhargosh/watchdog/issues/294\n self.should_reload = True\n self.log_reload(filename)\n\n def run(self):\n watches = {}\n observer = self.observer_class()\n observer.start()\n\n try:\n while not self.should_reload:\n to_delete = set(watches)\n paths = _find_observable_paths(self.extra_files)\n for path in paths:\n if path not in watches:\n try:\n watches[path] = observer.schedule(\n self.event_handler, path, recursive=True)\n except OSError:\n # Clear this path from list of watches We don't want\n # the same error message showing again in the next\n # iteration.\n watches[path] = None\n to_delete.discard(path)\n for path in to_delete:\n watch = watches.pop(path, None)\n if watch is not None:\n observer.unschedule(watch)\n self.observable_paths = paths\n self._sleep(self.interval)\n finally:\n observer.stop()\n observer.join()\n\n sys.exit(3)\n\n\nreloader_loops = {\n 'stat': StatReloaderLoop,\n 'watchdog': 
WatchdogReloaderLoop,\n}\n\ntry:\n __import__('watchdog.observers')\nexcept ImportError:\n reloader_loops['auto'] = reloader_loops['stat']\nelse:\n reloader_loops['auto'] = reloader_loops['watchdog']\n\n\ndef ensure_echo_on():\n \"\"\"Ensure that echo mode is enabled. Some tools such as PDB disable\n it which causes usability issues after reload.\"\"\"\n # tcgetattr will fail if stdin isn't a tty\n if not sys.stdin.isatty():\n return\n try:\n import termios\n except ImportError:\n return\n attributes = termios.tcgetattr(sys.stdin)\n if not attributes[3] & termios.ECHO:\n attributes[3] |= termios.ECHO\n termios.tcsetattr(sys.stdin, termios.TCSANOW, attributes)\n\n\ndef run_with_reloader(main_func, extra_files=None, interval=1,\n reloader_type='auto'):\n \"\"\"Run the given function in an independent python interpreter.\"\"\"\n import signal\n reloader = reloader_loops[reloader_type](extra_files, interval)\n signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))\n try:\n if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':\n ensure_echo_on()\n t = threading.Thread(target=main_func, args=())\n t.setDaemon(True)\n t.start()\n reloader.run()\n else:\n sys.exit(reloader.restart_with_reloader())\n except KeyboardInterrupt:\n pass\n", "path": "werkzeug/_reloader.py"}]}
| 3,942 | 593 |
gh_patches_debug_29580
|
rasdani/github-patches
|
git_diff
|
mit-ll-responsible-ai__hydra-zen-175
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Inconsistent static analysis via pyright
```python
x = [1, 2, 3]
make_config(a=[1, 2, 3])
make_config(a=x) # pyright marks this as invalid based on our annotations
```
This seems like a pyright issue, but we might consider revising our annotations to accommodate it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/hydra_zen/typing/_implementations.py`
Content:
```
1 # Copyright (c) 2021 Massachusetts Institute of Technology
2 # SPDX-License-Identifier: MIT
3
4 from dataclasses import Field
5 from enum import Enum
6 from pathlib import Path
7 from typing import (
8 Any,
9 Callable,
10 Counter,
11 Deque,
12 Dict,
13 FrozenSet,
14 Generic,
15 List,
16 NewType,
17 Set,
18 Tuple,
19 TypeVar,
20 Union,
21 )
22
23 from omegaconf import DictConfig, ListConfig
24 from typing_extensions import Protocol, runtime_checkable
25
26 __all__ = [
27 "Just",
28 "Builds",
29 "PartialBuilds",
30 "Partial",
31 "Importable",
32 "SupportedPrimitive",
33 ]
34
35
36 _T = TypeVar("_T", covariant=True)
37
38
39 class Partial(Generic[_T]):
40 func: Callable[..., _T]
41 args: Tuple[Any, ...]
42 keywords: Dict[str, Any]
43
44 def __init__(
45 self, func: Callable[..., _T], *args: Any, **kwargs: Any
46 ) -> None: # pragma: no cover
47 ...
48
49 def __call__(self, *args: Any, **kwargs: Any) -> _T: # pragma: no cover
50 ...
51
52
53 InterpStr = NewType("InterpStr", str)
54
55 Importable = TypeVar("Importable")
56
57
58 class _DataClass(Protocol): # pragma: no cover
59 # doesn't provide __init__, __getattribute__, etc.
60 __dataclass_fields__: Dict[str, Field]
61
62
63 class DataClass(_DataClass, Protocol): # pragma: no cover
64 def __init__(self, *args, **kwargs) -> None:
65 ...
66
67 def __getattribute__(self, name: str) -> Any:
68 ...
69
70 def __setattr__(self, name: str, value: Any) -> None:
71 ...
72
73
74 @runtime_checkable
75 class Builds(DataClass, Protocol[_T]): # pragma: no cover
76
77 _target_: str
78
79
80 @runtime_checkable
81 class Just(Builds, Protocol[_T]): # pragma: no cover
82 path: str # interpolated string for importing obj
83 _target_: str = "hydra_zen.funcs.get_obj"
84
85
86 @runtime_checkable
87 class PartialBuilds(Builds, Protocol[_T]): # pragma: no cover
88 _target_: str = "hydra_zen.funcs.zen_processing"
89 _zen_target: str
90 _zen_partial: bool = True
91
92
93 @runtime_checkable
94 class HasTarget(Protocol): # pragma: no cover
95 _target_: str
96
97
98 @runtime_checkable
99 class HasPartialTarget(Protocol): # pragma: no cover
100 _zen_partial: bool = True
101
102
103 _HydraPrimitive = Union[
104 bool,
105 None,
106 int,
107 float,
108 str,
109 ]
110
111 _SupportedPrimitive = Union[
112 _HydraPrimitive,
113 ListConfig,
114 DictConfig,
115 type,
116 Callable,
117 Enum,
118 _DataClass,
119 complex,
120 Path,
121 range,
122 ]
123
124 SupportedPrimitive = Union[
125 _SupportedPrimitive,
126 Dict[_HydraPrimitive, "SupportedPrimitive"],
127 Counter[_HydraPrimitive],
128 Set["SupportedPrimitive"],
129 FrozenSet["SupportedPrimitive"],
130 Deque["SupportedPrimitive"],
131 List["SupportedPrimitive"],
132 Tuple["SupportedPrimitive", ...],
133 ]
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/hydra_zen/typing/_implementations.py b/src/hydra_zen/typing/_implementations.py
--- a/src/hydra_zen/typing/_implementations.py
+++ b/src/hydra_zen/typing/_implementations.py
@@ -7,21 +7,19 @@
from typing import (
Any,
Callable,
- Counter,
- Deque,
Dict,
FrozenSet,
Generic,
- List,
+ Mapping,
NewType,
- Set,
+ Sequence,
Tuple,
TypeVar,
Union,
)
from omegaconf import DictConfig, ListConfig
-from typing_extensions import Protocol, runtime_checkable
+from typing_extensions import Protocol, TypedDict, runtime_checkable
__all__ = [
"Just",
@@ -33,6 +31,10 @@
]
+class EmptyDict(TypedDict):
+ pass
+
+
_T = TypeVar("_T", covariant=True)
@@ -119,15 +121,21 @@
complex,
Path,
range,
+ set,
+ EmptyDict, # not covered by Mapping[..., ...]
]
SupportedPrimitive = Union[
_SupportedPrimitive,
- Dict[_HydraPrimitive, "SupportedPrimitive"],
- Counter[_HydraPrimitive],
- Set["SupportedPrimitive"],
FrozenSet["SupportedPrimitive"],
- Deque["SupportedPrimitive"],
- List["SupportedPrimitive"],
- Tuple["SupportedPrimitive", ...],
+ # Even thought this is redundant with Sequence, it seems to
+ # be needed for pyright to do proper checking of tuple contents
+ Tuple["SupportedPrimitive"],
+ # Mutable generic containers need to be invariant, so
+ # we have to settle for Sequence/Mapping. While this
+ # is overly permissive in terms of sequence-type, it
+ # at least affords quality checking of sequence content
+ Sequence["SupportedPrimitive"],
+ # Mapping is covariant only in value
+ Mapping[Any, "SupportedPrimitive"],
]
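
The invariance problem that motivates swapping `List`/`Dict`/`Set` for `Sequence`/`Mapping` can be reproduced in isolation; a small sketch with simplified types (not the real hydra-zen signatures):

```python
from typing import List, Sequence, Union

Primitive = Union[bool, None, int, float, str]


def takes_list(value: List[Primitive]) -> None:
    ...


def takes_sequence(value: Sequence[Primitive]) -> None:
    ...


x = [1, 2, 3]            # inferred by pyright as list[int]

takes_list([1, 2, 3])    # OK: the literal is inferred directly as List[Primitive]
takes_list(x)            # pyright error: List is invariant, list[int] is not List[Primitive]
takes_sequence(x)        # OK: Sequence is covariant, so list[int] is a Sequence[Primitive]
```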
|
{"golden_diff": "diff --git a/src/hydra_zen/typing/_implementations.py b/src/hydra_zen/typing/_implementations.py\n--- a/src/hydra_zen/typing/_implementations.py\n+++ b/src/hydra_zen/typing/_implementations.py\n@@ -7,21 +7,19 @@\n from typing import (\n Any,\n Callable,\n- Counter,\n- Deque,\n Dict,\n FrozenSet,\n Generic,\n- List,\n+ Mapping,\n NewType,\n- Set,\n+ Sequence,\n Tuple,\n TypeVar,\n Union,\n )\n \n from omegaconf import DictConfig, ListConfig\n-from typing_extensions import Protocol, runtime_checkable\n+from typing_extensions import Protocol, TypedDict, runtime_checkable\n \n __all__ = [\n \"Just\",\n@@ -33,6 +31,10 @@\n ]\n \n \n+class EmptyDict(TypedDict):\n+ pass\n+\n+\n _T = TypeVar(\"_T\", covariant=True)\n \n \n@@ -119,15 +121,21 @@\n complex,\n Path,\n range,\n+ set,\n+ EmptyDict, # not covered by Mapping[..., ...]\n ]\n \n SupportedPrimitive = Union[\n _SupportedPrimitive,\n- Dict[_HydraPrimitive, \"SupportedPrimitive\"],\n- Counter[_HydraPrimitive],\n- Set[\"SupportedPrimitive\"],\n FrozenSet[\"SupportedPrimitive\"],\n- Deque[\"SupportedPrimitive\"],\n- List[\"SupportedPrimitive\"],\n- Tuple[\"SupportedPrimitive\", ...],\n+ # Even thought this is redundant with Sequence, it seems to\n+ # be needed for pyright to do proper checking of tuple contents\n+ Tuple[\"SupportedPrimitive\"],\n+ # Mutable generic containers need to be invariant, so\n+ # we have to settle for Sequence/Mapping. While this\n+ # is overly permissive in terms of sequence-type, it\n+ # at least affords quality checking of sequence content\n+ Sequence[\"SupportedPrimitive\"],\n+ # Mapping is covariant only in value\n+ Mapping[Any, \"SupportedPrimitive\"],\n ]\n", "issue": "Inconsistent static analysis via pyright\n```python\r\nx = [1, 2, 3]\r\nmake_config(a=[1, 2, 3])\r\nmake_config(a=x) # pyright marks this as invalid based on our annotations\r\n```\r\n\r\nThis seems like a pyright issue, but we might consider revising annotations to accommodate\n", "before_files": [{"content": "# Copyright (c) 2021 Massachusetts Institute of Technology\n# SPDX-License-Identifier: MIT\n\nfrom dataclasses import Field\nfrom enum import Enum\nfrom pathlib import Path\nfrom typing import (\n Any,\n Callable,\n Counter,\n Deque,\n Dict,\n FrozenSet,\n Generic,\n List,\n NewType,\n Set,\n Tuple,\n TypeVar,\n Union,\n)\n\nfrom omegaconf import DictConfig, ListConfig\nfrom typing_extensions import Protocol, runtime_checkable\n\n__all__ = [\n \"Just\",\n \"Builds\",\n \"PartialBuilds\",\n \"Partial\",\n \"Importable\",\n \"SupportedPrimitive\",\n]\n\n\n_T = TypeVar(\"_T\", covariant=True)\n\n\nclass Partial(Generic[_T]):\n func: Callable[..., _T]\n args: Tuple[Any, ...]\n keywords: Dict[str, Any]\n\n def __init__(\n self, func: Callable[..., _T], *args: Any, **kwargs: Any\n ) -> None: # pragma: no cover\n ...\n\n def __call__(self, *args: Any, **kwargs: Any) -> _T: # pragma: no cover\n ...\n\n\nInterpStr = NewType(\"InterpStr\", str)\n\nImportable = TypeVar(\"Importable\")\n\n\nclass _DataClass(Protocol): # pragma: no cover\n # doesn't provide __init__, __getattribute__, etc.\n __dataclass_fields__: Dict[str, Field]\n\n\nclass DataClass(_DataClass, Protocol): # pragma: no cover\n def __init__(self, *args, **kwargs) -> None:\n ...\n\n def __getattribute__(self, name: str) -> Any:\n ...\n\n def __setattr__(self, name: str, value: Any) -> None:\n ...\n\n\n@runtime_checkable\nclass Builds(DataClass, Protocol[_T]): # pragma: no cover\n\n _target_: str\n\n\n@runtime_checkable\nclass Just(Builds, Protocol[_T]): # pragma: no cover\n 
path: str # interpolated string for importing obj\n _target_: str = \"hydra_zen.funcs.get_obj\"\n\n\n@runtime_checkable\nclass PartialBuilds(Builds, Protocol[_T]): # pragma: no cover\n _target_: str = \"hydra_zen.funcs.zen_processing\"\n _zen_target: str\n _zen_partial: bool = True\n\n\n@runtime_checkable\nclass HasTarget(Protocol): # pragma: no cover\n _target_: str\n\n\n@runtime_checkable\nclass HasPartialTarget(Protocol): # pragma: no cover\n _zen_partial: bool = True\n\n\n_HydraPrimitive = Union[\n bool,\n None,\n int,\n float,\n str,\n]\n\n_SupportedPrimitive = Union[\n _HydraPrimitive,\n ListConfig,\n DictConfig,\n type,\n Callable,\n Enum,\n _DataClass,\n complex,\n Path,\n range,\n]\n\nSupportedPrimitive = Union[\n _SupportedPrimitive,\n Dict[_HydraPrimitive, \"SupportedPrimitive\"],\n Counter[_HydraPrimitive],\n Set[\"SupportedPrimitive\"],\n FrozenSet[\"SupportedPrimitive\"],\n Deque[\"SupportedPrimitive\"],\n List[\"SupportedPrimitive\"],\n Tuple[\"SupportedPrimitive\", ...],\n]\n", "path": "src/hydra_zen/typing/_implementations.py"}], "after_files": [{"content": "# Copyright (c) 2021 Massachusetts Institute of Technology\n# SPDX-License-Identifier: MIT\n\nfrom dataclasses import Field\nfrom enum import Enum\nfrom pathlib import Path\nfrom typing import (\n Any,\n Callable,\n Dict,\n FrozenSet,\n Generic,\n Mapping,\n NewType,\n Sequence,\n Tuple,\n TypeVar,\n Union,\n)\n\nfrom omegaconf import DictConfig, ListConfig\nfrom typing_extensions import Protocol, TypedDict, runtime_checkable\n\n__all__ = [\n \"Just\",\n \"Builds\",\n \"PartialBuilds\",\n \"Partial\",\n \"Importable\",\n \"SupportedPrimitive\",\n]\n\n\nclass EmptyDict(TypedDict):\n pass\n\n\n_T = TypeVar(\"_T\", covariant=True)\n\n\nclass Partial(Generic[_T]):\n func: Callable[..., _T]\n args: Tuple[Any, ...]\n keywords: Dict[str, Any]\n\n def __init__(\n self, func: Callable[..., _T], *args: Any, **kwargs: Any\n ) -> None: # pragma: no cover\n ...\n\n def __call__(self, *args: Any, **kwargs: Any) -> _T: # pragma: no cover\n ...\n\n\nInterpStr = NewType(\"InterpStr\", str)\n\nImportable = TypeVar(\"Importable\")\n\n\nclass _DataClass(Protocol): # pragma: no cover\n # doesn't provide __init__, __getattribute__, etc.\n __dataclass_fields__: Dict[str, Field]\n\n\nclass DataClass(_DataClass, Protocol): # pragma: no cover\n def __init__(self, *args, **kwargs) -> None:\n ...\n\n def __getattribute__(self, name: str) -> Any:\n ...\n\n def __setattr__(self, name: str, value: Any) -> None:\n ...\n\n\n@runtime_checkable\nclass Builds(DataClass, Protocol[_T]): # pragma: no cover\n\n _target_: str\n\n\n@runtime_checkable\nclass Just(Builds, Protocol[_T]): # pragma: no cover\n path: str # interpolated string for importing obj\n _target_: str = \"hydra_zen.funcs.get_obj\"\n\n\n@runtime_checkable\nclass PartialBuilds(Builds, Protocol[_T]): # pragma: no cover\n _target_: str = \"hydra_zen.funcs.zen_processing\"\n _zen_target: str\n _zen_partial: bool = True\n\n\n@runtime_checkable\nclass HasTarget(Protocol): # pragma: no cover\n _target_: str\n\n\n@runtime_checkable\nclass HasPartialTarget(Protocol): # pragma: no cover\n _zen_partial: bool = True\n\n\n_HydraPrimitive = Union[\n bool,\n None,\n int,\n float,\n str,\n]\n\n_SupportedPrimitive = Union[\n _HydraPrimitive,\n ListConfig,\n DictConfig,\n type,\n Callable,\n Enum,\n _DataClass,\n complex,\n Path,\n range,\n set,\n EmptyDict, # not covered by Mapping[..., ...]\n]\n\nSupportedPrimitive = Union[\n _SupportedPrimitive,\n FrozenSet[\"SupportedPrimitive\"],\n # Even thought this is 
redundant with Sequence, it seems to\n # be needed for pyright to do proper checking of tuple contents\n Tuple[\"SupportedPrimitive\"],\n # Mutable generic containers need to be invariant, so\n # we have to settle for Sequence/Mapping. While this\n # is overly permissive in terms of sequence-type, it\n # at least affords quality checking of sequence content\n Sequence[\"SupportedPrimitive\"],\n # Mapping is covariant only in value\n Mapping[Any, \"SupportedPrimitive\"],\n]\n", "path": "src/hydra_zen/typing/_implementations.py"}]}
| 1,364 | 453 |
gh_patches_debug_44429
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-408
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove `activation` argument from AveragePrecision and ROC_AUC
I propose to remove the `activation` argument from `AveragePrecision` and `ROC_AUC`:
https://github.com/pytorch/ignite/blob/862ab0073e461a32bf5b53fa015f88c143ae9079/ignite/contrib/metrics/roc_auc.py#L23-L28
to be coherent with `Accuracy` and friends and the usage of `output_transform` to apply the binarization/activation:
https://github.com/pytorch/ignite/blob/862ab0073e461a32bf5b53fa015f88c143ae9079/ignite/metrics/accuracy.py#L81-L93
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/contrib/metrics/roc_auc.py`
Content:
```
1 from functools import partial
2 from ignite.metrics import EpochMetric
3
4
5 def roc_auc_compute_fn(y_preds, y_targets, activation=None):
6 try:
7 from sklearn.metrics import roc_auc_score
8 except ImportError:
9 raise RuntimeError("This contrib module requires sklearn to be installed.")
10
11 y_true = y_targets.numpy()
12 if activation is not None:
13 y_preds = activation(y_preds)
14 y_pred = y_preds.numpy()
15 return roc_auc_score(y_true, y_pred)
16
17
18 class ROC_AUC(EpochMetric):
19 """Computes Area Under the Receiver Operating Characteristic Curve (ROC AUC)
20 accumulating predictions and the ground-truth during an epoch and applying
21 `sklearn.metrics.roc_auc_score <http://scikit-learn.org/stable/modules/generated/
22 sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score>`_ .
23
24 Args:
25 activation (callable, optional): optional function to apply on prediction tensors,
26 e.g. `activation=torch.sigmoid` to transform logits.
27 output_transform (callable, optional): a callable that is used to transform the
28 :class:`~ignite.engine.Engine`'s `process_function`'s output into the
29 form expected by the metric. This can be useful if, for example, you have a multi-output model and
30 you want to compute the metric with respect to one of the outputs.
31
32 """
33 def __init__(self, activation=None, output_transform=lambda x: x):
34 super(ROC_AUC, self).__init__(partial(roc_auc_compute_fn, activation=activation),
35 output_transform=output_transform)
36
```
Path: `ignite/contrib/metrics/average_precision.py`
Content:
```
1 from functools import partial
2 from ignite.metrics import EpochMetric
3
4
5 def average_precision_compute_fn(y_preds, y_targets, activation=None):
6 try:
7 from sklearn.metrics import average_precision_score
8 except ImportError:
9 raise RuntimeError("This contrib module requires sklearn to be installed.")
10
11 y_true = y_targets.numpy()
12 if activation is not None:
13 y_preds = activation(y_preds)
14 y_pred = y_preds.numpy()
15 return average_precision_score(y_true, y_pred)
16
17
18 class AveragePrecision(EpochMetric):
19 """Computes Average Precision accumulating predictions and the ground-truth during an epoch
20 and applying `sklearn.metrics.average_precision_score <http://scikit-learn.org/stable/modules/generated/
21 sklearn.metrics.average_precision_score.html#sklearn.metrics.average_precision_score>`_ .
22
23 Args:
24 activation (callable, optional): optional function to apply on prediction tensors,
25 e.g. `activation=torch.sigmoid` to transform logits.
26 output_transform (callable, optional): a callable that is used to transform the
27 :class:`~ignite.engine.Engine`'s `process_function`'s output into the
28 form expected by the metric. This can be useful if, for example, you have a multi-output model and
29 you want to compute the metric with respect to one of the outputs.
30
31 """
32 def __init__(self, activation=None, output_transform=lambda x: x):
33 super(AveragePrecision, self).__init__(partial(average_precision_compute_fn, activation=activation),
34 output_transform=output_transform)
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ignite/contrib/metrics/average_precision.py b/ignite/contrib/metrics/average_precision.py
--- a/ignite/contrib/metrics/average_precision.py
+++ b/ignite/contrib/metrics/average_precision.py
@@ -1,16 +1,13 @@
-from functools import partial
from ignite.metrics import EpochMetric
-def average_precision_compute_fn(y_preds, y_targets, activation=None):
+def average_precision_compute_fn(y_preds, y_targets):
try:
from sklearn.metrics import average_precision_score
except ImportError:
raise RuntimeError("This contrib module requires sklearn to be installed.")
y_true = y_targets.numpy()
- if activation is not None:
- y_preds = activation(y_preds)
y_pred = y_preds.numpy()
return average_precision_score(y_true, y_pred)
@@ -21,14 +18,23 @@
sklearn.metrics.average_precision_score.html#sklearn.metrics.average_precision_score>`_ .
Args:
- activation (callable, optional): optional function to apply on prediction tensors,
- e.g. `activation=torch.sigmoid` to transform logits.
output_transform (callable, optional): a callable that is used to transform the
:class:`~ignite.engine.Engine`'s `process_function`'s output into the
form expected by the metric. This can be useful if, for example, you have a multi-output model and
you want to compute the metric with respect to one of the outputs.
+ AveragePrecision expects y to be comprised of 0's and 1's. y_pred must either be probability estimates or
+ confidence values. To apply an activation to y_pred, use output_transform as shown below:
+
+ .. code-block:: python
+
+ def activated_output_transform(output):
+ y_pred, y = output
+ y_pred = torch.softmax(y_pred)
+ return y_pred, y
+
+ avg_precision = AveragePrecision(activated_output_transform)
+
"""
def __init__(self, activation=None, output_transform=lambda x: x):
- super(AveragePrecision, self).__init__(partial(average_precision_compute_fn, activation=activation),
- output_transform=output_transform)
+ super(AveragePrecision, self).__init__(average_precision_compute_fn, output_transform=output_transform)
diff --git a/ignite/contrib/metrics/roc_auc.py b/ignite/contrib/metrics/roc_auc.py
--- a/ignite/contrib/metrics/roc_auc.py
+++ b/ignite/contrib/metrics/roc_auc.py
@@ -1,16 +1,13 @@
-from functools import partial
from ignite.metrics import EpochMetric
-def roc_auc_compute_fn(y_preds, y_targets, activation=None):
+def roc_auc_compute_fn(y_preds, y_targets):
try:
from sklearn.metrics import roc_auc_score
except ImportError:
raise RuntimeError("This contrib module requires sklearn to be installed.")
y_true = y_targets.numpy()
- if activation is not None:
- y_preds = activation(y_preds)
y_pred = y_preds.numpy()
return roc_auc_score(y_true, y_pred)
@@ -22,14 +19,23 @@
sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score>`_ .
Args:
- activation (callable, optional): optional function to apply on prediction tensors,
- e.g. `activation=torch.sigmoid` to transform logits.
output_transform (callable, optional): a callable that is used to transform the
:class:`~ignite.engine.Engine`'s `process_function`'s output into the
form expected by the metric. This can be useful if, for example, you have a multi-output model and
you want to compute the metric with respect to one of the outputs.
+ ROC_AUC expects y to be comprised of 0's and 1's. y_pred must either be probability estimates or confidence
+ values. To apply an activation to y_pred, use output_transform as shown below:
+
+ .. code-block:: python
+
+ def activated_output_transform(output):
+ y_pred, y = output
+ y_pred = torch.sigmoid(y_pred)
+ return y_pred, y
+
+ roc_auc = ROC_AUC(activated_output_transform)
+
"""
- def __init__(self, activation=None, output_transform=lambda x: x):
- super(ROC_AUC, self).__init__(partial(roc_auc_compute_fn, activation=activation),
- output_transform=output_transform)
+ def __init__(self, output_transform=lambda x: x):
+ super(ROC_AUC, self).__init__(roc_auc_compute_fn, output_transform=output_transform)
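As a usage note on the patch above: with the `activation` argument gone, callers apply activations through `output_transform` instead. A minimal sketch (assuming logits from a binary classifier; `activated_output_transform` is an illustrative name, not part of the library):

```python
import torch

from ignite.contrib.metrics import ROC_AUC


def activated_output_transform(output):
    # output is whatever the Engine's process_function returned
    y_pred, y = output
    return torch.sigmoid(y_pred), y  # logits -> probabilities


roc_auc = ROC_AUC(output_transform=activated_output_transform)
```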
|
{"golden_diff": "diff --git a/ignite/contrib/metrics/average_precision.py b/ignite/contrib/metrics/average_precision.py\n--- a/ignite/contrib/metrics/average_precision.py\n+++ b/ignite/contrib/metrics/average_precision.py\n@@ -1,16 +1,13 @@\n-from functools import partial\n from ignite.metrics import EpochMetric\n \n \n-def average_precision_compute_fn(y_preds, y_targets, activation=None):\n+def average_precision_compute_fn(y_preds, y_targets):\n try:\n from sklearn.metrics import average_precision_score\n except ImportError:\n raise RuntimeError(\"This contrib module requires sklearn to be installed.\")\n \n y_true = y_targets.numpy()\n- if activation is not None:\n- y_preds = activation(y_preds)\n y_pred = y_preds.numpy()\n return average_precision_score(y_true, y_pred)\n \n@@ -21,14 +18,23 @@\n sklearn.metrics.average_precision_score.html#sklearn.metrics.average_precision_score>`_ .\n \n Args:\n- activation (callable, optional): optional function to apply on prediction tensors,\n- e.g. `activation=torch.sigmoid` to transform logits.\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n \n+ AveragePrecision expects y to be comprised of 0's and 1's. y_pred must either be probability estimates or\n+ confidence values. To apply an activation to y_pred, use output_transform as shown below:\n+\n+ .. code-block:: python\n+\n+ def activated_output_transform(output):\n+ y_pred, y = output\n+ y_pred = torch.softmax(y_pred)\n+ return y_pred, y\n+\n+ avg_precision = AveragePrecision(activated_output_transform)\n+\n \"\"\"\n def __init__(self, activation=None, output_transform=lambda x: x):\n- super(AveragePrecision, self).__init__(partial(average_precision_compute_fn, activation=activation),\n- output_transform=output_transform)\n+ super(AveragePrecision, self).__init__(average_precision_compute_fn, output_transform=output_transform)\ndiff --git a/ignite/contrib/metrics/roc_auc.py b/ignite/contrib/metrics/roc_auc.py\n--- a/ignite/contrib/metrics/roc_auc.py\n+++ b/ignite/contrib/metrics/roc_auc.py\n@@ -1,16 +1,13 @@\n-from functools import partial\n from ignite.metrics import EpochMetric\n \n \n-def roc_auc_compute_fn(y_preds, y_targets, activation=None):\n+def roc_auc_compute_fn(y_preds, y_targets):\n try:\n from sklearn.metrics import roc_auc_score\n except ImportError:\n raise RuntimeError(\"This contrib module requires sklearn to be installed.\")\n \n y_true = y_targets.numpy()\n- if activation is not None:\n- y_preds = activation(y_preds)\n y_pred = y_preds.numpy()\n return roc_auc_score(y_true, y_pred)\n \n@@ -22,14 +19,23 @@\n sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score>`_ .\n \n Args:\n- activation (callable, optional): optional function to apply on prediction tensors,\n- e.g. `activation=torch.sigmoid` to transform logits.\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n \n+ ROC_AUC expects y to be comprised of 0's and 1's. y_pred must either be probability estimates or confidence\n+ values. 
To apply an activation to y_pred, use output_transform as shown below:\n+\n+ .. code-block:: python\n+\n+ def activated_output_transform(output):\n+ y_pred, y = output\n+ y_pred = torch.sigmoid(y_pred)\n+ return y_pred, y\n+\n+ roc_auc = ROC_AUC(activated_output_transform)\n+\n \"\"\"\n- def __init__(self, activation=None, output_transform=lambda x: x):\n- super(ROC_AUC, self).__init__(partial(roc_auc_compute_fn, activation=activation),\n- output_transform=output_transform)\n+ def __init__(self, output_transform=lambda x: x):\n+ super(ROC_AUC, self).__init__(roc_auc_compute_fn, output_transform=output_transform)\n", "issue": "Remove `activation` argument from AveragePrecision and ROC_AUC\nI propose to remove `activation` argument from `AveragePrecision` and `ROC_AUC`:\r\nhttps://github.com/pytorch/ignite/blob/862ab0073e461a32bf5b53fa015f88c143ae9079/ignite/contrib/metrics/roc_auc.py#L23-L28\r\nto be coherent with `Accuracy` et friends and the usage of `output_transform` to apply the binarization/activation :\r\nhttps://github.com/pytorch/ignite/blob/862ab0073e461a32bf5b53fa015f88c143ae9079/ignite/metrics/accuracy.py#L81-L93\r\n\r\n\n", "before_files": [{"content": "from functools import partial\nfrom ignite.metrics import EpochMetric\n\n\ndef roc_auc_compute_fn(y_preds, y_targets, activation=None):\n try:\n from sklearn.metrics import roc_auc_score\n except ImportError:\n raise RuntimeError(\"This contrib module requires sklearn to be installed.\")\n\n y_true = y_targets.numpy()\n if activation is not None:\n y_preds = activation(y_preds)\n y_pred = y_preds.numpy()\n return roc_auc_score(y_true, y_pred)\n\n\nclass ROC_AUC(EpochMetric):\n \"\"\"Computes Area Under the Receiver Operating Characteristic Curve (ROC AUC)\n accumulating predictions and the ground-truth during an epoch and applying\n `sklearn.metrics.roc_auc_score <http://scikit-learn.org/stable/modules/generated/\n sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score>`_ .\n\n Args:\n activation (callable, optional): optional function to apply on prediction tensors,\n e.g. `activation=torch.sigmoid` to transform logits.\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. 
This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n\n \"\"\"\n def __init__(self, activation=None, output_transform=lambda x: x):\n super(ROC_AUC, self).__init__(partial(roc_auc_compute_fn, activation=activation),\n output_transform=output_transform)\n", "path": "ignite/contrib/metrics/roc_auc.py"}, {"content": "from functools import partial\nfrom ignite.metrics import EpochMetric\n\n\ndef average_precision_compute_fn(y_preds, y_targets, activation=None):\n try:\n from sklearn.metrics import average_precision_score\n except ImportError:\n raise RuntimeError(\"This contrib module requires sklearn to be installed.\")\n\n y_true = y_targets.numpy()\n if activation is not None:\n y_preds = activation(y_preds)\n y_pred = y_preds.numpy()\n return average_precision_score(y_true, y_pred)\n\n\nclass AveragePrecision(EpochMetric):\n \"\"\"Computes Average Precision accumulating predictions and the ground-truth during an epoch\n and applying `sklearn.metrics.average_precision_score <http://scikit-learn.org/stable/modules/generated/\n sklearn.metrics.average_precision_score.html#sklearn.metrics.average_precision_score>`_ .\n\n Args:\n activation (callable, optional): optional function to apply on prediction tensors,\n e.g. `activation=torch.sigmoid` to transform logits.\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n\n \"\"\"\n def __init__(self, activation=None, output_transform=lambda x: x):\n super(AveragePrecision, self).__init__(partial(average_precision_compute_fn, activation=activation),\n output_transform=output_transform)\n", "path": "ignite/contrib/metrics/average_precision.py"}], "after_files": [{"content": "from ignite.metrics import EpochMetric\n\n\ndef roc_auc_compute_fn(y_preds, y_targets):\n try:\n from sklearn.metrics import roc_auc_score\n except ImportError:\n raise RuntimeError(\"This contrib module requires sklearn to be installed.\")\n\n y_true = y_targets.numpy()\n y_pred = y_preds.numpy()\n return roc_auc_score(y_true, y_pred)\n\n\nclass ROC_AUC(EpochMetric):\n \"\"\"Computes Area Under the Receiver Operating Characteristic Curve (ROC AUC)\n accumulating predictions and the ground-truth during an epoch and applying\n `sklearn.metrics.roc_auc_score <http://scikit-learn.org/stable/modules/generated/\n sklearn.metrics.roc_auc_score.html#sklearn.metrics.roc_auc_score>`_ .\n\n Args:\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n\n ROC_AUC expects y to be comprised of 0's and 1's. y_pred must either be probability estimates or confidence\n values. To apply an activation to y_pred, use output_transform as shown below:\n\n .. 
code-block:: python\n\n def activated_output_transform(output):\n y_pred, y = output\n y_pred = torch.sigmoid(y_pred)\n return y_pred, y\n\n roc_auc = ROC_AUC(activated_output_transform)\n\n \"\"\"\n def __init__(self, output_transform=lambda x: x):\n super(ROC_AUC, self).__init__(roc_auc_compute_fn, output_transform=output_transform)\n", "path": "ignite/contrib/metrics/roc_auc.py"}, {"content": "from ignite.metrics import EpochMetric\n\n\ndef average_precision_compute_fn(y_preds, y_targets):\n try:\n from sklearn.metrics import average_precision_score\n except ImportError:\n raise RuntimeError(\"This contrib module requires sklearn to be installed.\")\n\n y_true = y_targets.numpy()\n y_pred = y_preds.numpy()\n return average_precision_score(y_true, y_pred)\n\n\nclass AveragePrecision(EpochMetric):\n \"\"\"Computes Average Precision accumulating predictions and the ground-truth during an epoch\n and applying `sklearn.metrics.average_precision_score <http://scikit-learn.org/stable/modules/generated/\n sklearn.metrics.average_precision_score.html#sklearn.metrics.average_precision_score>`_ .\n\n Args:\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n\n AveragePrecision expects y to be comprised of 0's and 1's. y_pred must either be probability estimates or\n confidence values. To apply an activation to y_pred, use output_transform as shown below:\n\n .. code-block:: python\n\n def activated_output_transform(output):\n y_pred, y = output\n y_pred = torch.softmax(y_pred)\n return y_pred, y\n\n avg_precision = AveragePrecision(activated_output_transform)\n\n \"\"\"\n def __init__(self, activation=None, output_transform=lambda x: x):\n super(AveragePrecision, self).__init__(average_precision_compute_fn, output_transform=output_transform)\n", "path": "ignite/contrib/metrics/average_precision.py"}]}
| 1,254 | 1,012 |
gh_patches_debug_23849
|
rasdani/github-patches
|
git_diff
|
cloud-custodian__cloud-custodian-6943
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Key error when adding a policy statement to an SNS Topic
### Describe the bug
When adding/describing a statement to an SNS topic a key error is thrown when attempting to retrieve the `Sid` of the policy.
### What did you expect to happen?
A policy statement should be described and added without error
### Cloud Provider
Amazon Web Services (AWS)
### Cloud Custodian version and dependency information
```shell
Custodian: 0.9.13
Python: 3.8.11 (default, Jun 29 2021, 03:17:05)
[Clang 12.0.0 (clang-1200.0.32.29)]
Platform: posix.uname_result(sysname='Darwin', nodename='f8ffc250c2ee', release='20.6.0', version='Darwin Kernel Version 20.6.0: Mon Aug 30 06:12:21 PDT 2021; root:xnu-7195.141.6~3/RELEASE_X86_64', machine='x86_64')
Using venv: True
Docker: False
Installed:
PyJWT==1.7.1
PyYAML==5.4.1
adal==1.2.7
appdirs==1.4.4
applicationinsights==0.11.10
apscheduler==3.7.0
argcomplete==1.12.3
attrs==21.2.0
azure-common==1.1.27
azure-core==1.15.0
azure-cosmos==3.2.0
azure-cosmosdb-nspkg==2.0.2
azure-cosmosdb-table==1.0.6
azure-functions==1.7.2
azure-graphrbac==0.61.1
azure-identity==1.6.0
azure-keyvault==4.1.0
azure-keyvault-certificates==4.3.0
azure-keyvault-keys==4.4.0
azure-keyvault-secrets==4.3.0
azure-mgmt-apimanagement==1.0.0
azure-mgmt-applicationinsights==1.0.0
azure-mgmt-authorization==1.0.0
azure-mgmt-batch==15.0.0
azure-mgmt-cdn==10.0.0
azure-mgmt-cognitiveservices==11.0.0
azure-mgmt-compute==19.0.0
azure-mgmt-containerinstance==7.0.0
azure-mgmt-containerregistry==8.0.0b1
azure-mgmt-containerservice==15.1.0
azure-mgmt-core==1.2.2
azure-mgmt-cosmosdb==6.4.0
azure-mgmt-costmanagement==1.0.0
azure-mgmt-databricks==1.0.0b1
azure-mgmt-datafactory==1.1.0
azure-mgmt-datalake-store==1.0.0
azure-mgmt-dns==8.0.0b1
azure-mgmt-eventgrid==8.0.0
azure-mgmt-eventhub==8.0.0
azure-mgmt-frontdoor==1.0.0
azure-mgmt-hdinsight==7.0.0
azure-mgmt-iothub==1.0.0
azure-mgmt-keyvault==8.0.0
azure-mgmt-logic==9.0.0
azure-mgmt-managementgroups==1.0.0b1
azure-mgmt-monitor==2.0.0
azure-mgmt-msi==1.0.0
azure-mgmt-network==17.1.0
azure-mgmt-policyinsights==1.0.0
azure-mgmt-rdbms==8.1.0
azure-mgmt-redis==12.0.0
azure-mgmt-resource==16.1.0
azure-mgmt-resourcegraph==7.0.0
azure-mgmt-search==8.0.0
azure-mgmt-servicefabric==1.0.0
azure-mgmt-sql==1.0.0
azure-mgmt-storage==17.1.0
azure-mgmt-subscription==1.0.0
azure-mgmt-trafficmanager==0.51.0
azure-mgmt-web==2.0.0
azure-nspkg==3.0.2
azure-storage-blob==12.8.1
azure-storage-common==2.1.0
azure-storage-file==2.1.0
azure-storage-file-share==12.5.0
azure-storage-queue==12.1.6
boto3==1.17.102
botocore==1.20.102
c7n==0.9.13
cachetools==4.2.2
certifi==2021.5.30
cffi==1.14.5
chardet==4.0.0
click==7.1.2
cryptography==3.4.7
decorator==5.0.9
distlib==0.3.2
dogpile.cache==1.1.3
google-api-core==1.30.0
google-api-python-client==1.12.8
google-auth==1.32.0
google-auth-httplib2==0.1.0
google-cloud-core==1.7.1
google-cloud-logging==1.15.1
google-cloud-monitoring==0.34.0
google-cloud-storage==1.40.0
google-crc32c==1.1.2
google-resumable-media==1.3.1
googleapis-common-protos==1.53.0
httplib2==0.19.1
idna==2.10
importlib-metadata==4.6.0
iso8601==0.1.14
isodate==0.6.0
jmespath==0.10.0
jsonpatch==1.32
jsonpointer==2.1
jsonschema==3.2.0
keystoneauth1==4.3.1
kubernetes==10.0.1
mock==4.0.3
msal==1.12.0
msal-extensions==0.3.0
msrest==0.6.21
msrestazure==0.6.4
munch==2.5.0
netaddr==0.7.20
netifaces==0.11.0
oauthlib==3.1.1
openstacksdk==0.52.0
os-service-types==1.7.0
packaging==20.9
pbr==5.6.0
portalocker==1.7.1
protobuf==3.17.3
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
pyparsing==2.4.7
pyrsistent==0.18.0
python-dateutil==2.8.1
pytz==2021.1
pyyaml==5.4.1
ratelimiter==1.2.0.post0
requests==2.25.1
requests-oauthlib==1.3.0
requestsexceptions==1.4.0
retrying==1.3.3
rsa==4.7.2
s3transfer==0.4.2
setuptools==56.0.0
six==1.16.0
stevedore==3.3.0
tabulate==0.8.9
typing-extensions==3.10.0.0
tzlocal==2.1
uritemplate==3.0.1
urllib3==1.26.6
websocket-client==1.1.0
zipp==3.5.0
```
### Policy
```shell
- name: sns-encrypt-transit
resource: aws.sns
description: All SNS topics must be encrypted in transit
filters:
- not:
- type: has-statement
statements:
- Action:
- sns:GetTopicAttributes
- sns:SetTopicAttributes
- sns:AddPermission
- sns:RemovePermission
- sns:DeleteTopic
- sns:Subscribe
- sns:ListSubscriptionsByTopic
- sns:Publish
- sns:Receive
Condition:
Bool:
aws:SecureTransport: 'false'
Effect: Deny
Principal: '*'
actions:
- type: modify-policy
add-statements:
- Action:
- sns:GetTopicAttributes
- sns:SetTopicAttributes
- sns:AddPermission
- sns:RemovePermission
- sns:DeleteTopic
- sns:Subscribe
- sns:ListSubscriptionsByTopic
- sns:Publish
- sns:Receive
Condition:
Bool:
aws:SecureTransport: 'false'
Effect: Deny
Principal: '*'
Resource: '*'
Sid: DenyNotEncryptedInTrans
- type: notify
action_desc: 'ACTION Taken: Custodian has added the policy statement to enable encryption in transit.'
cc:
- [email protected]
from: [email protected]
template: default.html
to:
- [email protected]
- resource-owner
transport:
type: sns
topic: arn:aws:sns:us-east-1:0123456789:mailer
violation_desc: The following SNS Topic(s) do not have encryption in transit enabled.
```
### Relevant log/traceback output
```shell
2021-10-08 12:58:33,582 - custodian.output - ERROR - Error while executing policy
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/c7n/policy.py", line 322, in run
results = a.process(resources)
File "/usr/local/lib/python3.7/dist-packages/c7n/resources/sns.py", line 330, in process
new_policy, added = self.add_statements(new_policy)
File "/usr/local/lib/python3.7/dist-packages/c7n/actions/policy.py", line 100, in add_statements
current = {s['Sid']: s for s in policy_statements}
File "/usr/local/lib/python3.7/dist-packages/c7n/actions/policy.py", line 100, in
current = {s['Sid']: s for s in policy_statements}
KeyError: 'Sid'
```
### Extra information or context
_No response_
--- END ISSUE ---
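To make the reported failure concrete before diving into the files, a minimal reproduction sketch (the statement dict is hypothetical, not taken from the report):

```python
# SNS/IAM policy statements may legally omit "Sid", but the dict
# comprehension in add_statements indexes it unconditionally.
policy_statements = [
    {"Effect": "Deny", "Principal": "*", "Action": ["sns:Publish"], "Resource": "*"},
]

try:
    current = {s["Sid"]: s for s in policy_statements}
except KeyError as exc:
    print(f"KeyError: {exc}")  # -> KeyError: 'Sid'
```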
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `c7n/actions/policy.py`
Content:
```
1 # Copyright The Cloud Custodian Authors.
2 # SPDX-License-Identifier: Apache-2.0
3
4 from .core import BaseAction
5 from c7n import utils
6
7
8 class RemovePolicyBase(BaseAction):
9
10 schema = utils.type_schema(
11 'remove-statements',
12 required=['statement_ids'],
13 statement_ids={'oneOf': [
14 {'enum': ['matched', "*"]},
15 {'type': 'array', 'items': {'type': 'string'}}]})
16
17 def process_policy(self, policy, resource, matched_key):
18 statements = policy.get('Statement', [])
19 resource_statements = resource.get(matched_key, ())
20
21 return remove_statements(
22 self.data['statement_ids'], statements, resource_statements)
23
24
25 def remove_statements(match_ids, statements, matched=()):
26 found = []
27 for s in list(statements):
28 s_found = False
29 if match_ids == '*':
30 s_found = True
31 elif match_ids == 'matched':
32 if s in matched:
33 s_found = True
34 elif 'Sid' in s and s['Sid'] in match_ids:
35 s_found = True
36 if s_found:
37 found.append(s)
38 statements.remove(s)
39 if not found:
40 return None, found
41 return statements, found
42
43
44 class ModifyPolicyBase(BaseAction):
45 """Action to modify resource IAM policy statements.
46
47 Applies to all resources with embedded IAM Policies.
48
49 :example:
50
51 .. code-block:: yaml
52
53 policies:
54 - name: sns-yank-cross-account
55 resource: sns
56 filters:
57 - type: cross-account
58 actions:
59 - type: modify-policy
60 add-statements: [{
61 "Sid": "ReplaceWithMe",
62 "Effect": "Allow",
63 "Principal": "*",
64 "Action": ["SNS:GetTopicAttributes"],
65 "Resource": topic_arn,
66 }]
67 remove-statements: '*'
68 """
69
70 schema_alias = True
71 schema = utils.type_schema(
72 'modify-policy',
73 **{
74 'add-statements': {
75 'type': 'array',
76 'items': {'$ref': '#/definitions/iam-statement'},
77 },
78 'remove-statements': {
79 'type': ['array', 'string'],
80 'oneOf': [
81 {'enum': ['matched', '*']},
82 {'type': 'array', 'items': {'type': 'string'}}
83 ],
84 }
85 }
86 )
87
88 def __init__(self, data=None, manager=None):
89 if manager is not None:
90 config_args = {
91 'account_id': manager.config.account_id,
92 'region': manager.config.region
93 }
94 self.data = utils.format_string_values(data, **config_args)
95 else:
96 self.data = utils.format_string_values(data)
97 self.manager = manager
98
99 def add_statements(self, policy_statements):
100 current = {s['Sid']: s for s in policy_statements}
101 additional = {s['Sid']: s for s in self.data.get('add-statements', [])}
102 current.update(additional)
103 return list(current.values()), bool(additional)
104
105 def remove_statements(self, policy_statements, resource, matched_key):
106 statement_ids = self.data.get('remove-statements', [])
107 found = []
108 if len(statement_ids) == 0:
109 return policy_statements, found
110 resource_statements = resource.get(matched_key, ())
111 return remove_statements(
112 statement_ids, policy_statements, resource_statements)
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/c7n/actions/policy.py b/c7n/actions/policy.py
--- a/c7n/actions/policy.py
+++ b/c7n/actions/policy.py
@@ -1,5 +1,6 @@
# Copyright The Cloud Custodian Authors.
# SPDX-License-Identifier: Apache-2.0
+import hashlib
from .core import BaseAction
from c7n import utils
@@ -41,6 +42,11 @@
return statements, found
+def statement_id(s):
+ # for statements without a sid, use a checksum for identity
+ return hashlib.sha224(utils.dumps(s, indent=0).encode('utf8')).hexdigest()
+
+
class ModifyPolicyBase(BaseAction):
"""Action to modify resource IAM policy statements.
@@ -97,8 +103,8 @@
self.manager = manager
def add_statements(self, policy_statements):
- current = {s['Sid']: s for s in policy_statements}
- additional = {s['Sid']: s for s in self.data.get('add-statements', [])}
+ current = {s.get('Sid', statement_id(s)): s for s in policy_statements}
+ additional = {s.get('Sid', statement_id(s)): s for s in self.data.get('add-statements', [])}
current.update(additional)
return list(current.values()), bool(additional)
|
{"golden_diff": "diff --git a/c7n/actions/policy.py b/c7n/actions/policy.py\n--- a/c7n/actions/policy.py\n+++ b/c7n/actions/policy.py\n@@ -1,5 +1,6 @@\n # Copyright The Cloud Custodian Authors.\n # SPDX-License-Identifier: Apache-2.0\n+import hashlib\n \n from .core import BaseAction\n from c7n import utils\n@@ -41,6 +42,11 @@\n return statements, found\n \n \n+def statement_id(s):\n+ # for statements without a sid, use a checksum for identity\n+ return hashlib.sha224(utils.dumps(s, indent=0).encode('utf8')).hexdigest()\n+\n+\n class ModifyPolicyBase(BaseAction):\n \"\"\"Action to modify resource IAM policy statements.\n \n@@ -97,8 +103,8 @@\n self.manager = manager\n \n def add_statements(self, policy_statements):\n- current = {s['Sid']: s for s in policy_statements}\n- additional = {s['Sid']: s for s in self.data.get('add-statements', [])}\n+ current = {s.get('Sid', statement_id(s)): s for s in policy_statements}\n+ additional = {s.get('Sid', statement_id(s)): s for s in self.data.get('add-statements', [])}\n current.update(additional)\n return list(current.values()), bool(additional)\n", "issue": "Key error when adding a policy statement to an SNS Topic\n### Describe the bug\n\nWhen adding/describing a statement to an SNS topic a key error is thrown when attempting to retrieve the `Sid` of the policy.\n\n### What did you expect to happen?\n\nA policy statement should be described and added without error\n\n### Cloud Provider\n\nAmazon Web Services (AWS)\n\n### Cloud Custodian version and dependency information\n\n```shell\nCustodian: 0.9.13\r\nPython: 3.8.11 (default, Jun 29 2021, 03:17:05) \r\n [Clang 12.0.0 (clang-1200.0.32.29)]\r\nPlatform: posix.uname_result(sysname='Darwin', nodename='f8ffc250c2ee', release='20.6.0', version='Darwin Kernel Version 20.6.0: Mon Aug 30 06:12:21 PDT 2021; root:xnu-7195.141.6~3/RELEASE_X86_64', machine='x86_64')\r\nUsing venv: True\r\nDocker: False\r\nInstalled: 
\r\n\r\nPyJWT==1.7.1\r\nPyYAML==5.4.1\r\nadal==1.2.7\r\nappdirs==1.4.4\r\napplicationinsights==0.11.10\r\napscheduler==3.7.0\r\nargcomplete==1.12.3\r\nattrs==21.2.0\r\nazure-common==1.1.27\r\nazure-core==1.15.0\r\nazure-cosmos==3.2.0\r\nazure-cosmosdb-nspkg==2.0.2\r\nazure-cosmosdb-table==1.0.6\r\nazure-functions==1.7.2\r\nazure-graphrbac==0.61.1\r\nazure-identity==1.6.0\r\nazure-keyvault==4.1.0\r\nazure-keyvault-certificates==4.3.0\r\nazure-keyvault-keys==4.4.0\r\nazure-keyvault-secrets==4.3.0\r\nazure-mgmt-apimanagement==1.0.0\r\nazure-mgmt-applicationinsights==1.0.0\r\nazure-mgmt-authorization==1.0.0\r\nazure-mgmt-batch==15.0.0\r\nazure-mgmt-cdn==10.0.0\r\nazure-mgmt-cognitiveservices==11.0.0\r\nazure-mgmt-compute==19.0.0\r\nazure-mgmt-containerinstance==7.0.0\r\nazure-mgmt-containerregistry==8.0.0b1\r\nazure-mgmt-containerservice==15.1.0\r\nazure-mgmt-core==1.2.2\r\nazure-mgmt-cosmosdb==6.4.0\r\nazure-mgmt-costmanagement==1.0.0\r\nazure-mgmt-databricks==1.0.0b1\r\nazure-mgmt-datafactory==1.1.0\r\nazure-mgmt-datalake-store==1.0.0\r\nazure-mgmt-dns==8.0.0b1\r\nazure-mgmt-eventgrid==8.0.0\r\nazure-mgmt-eventhub==8.0.0\r\nazure-mgmt-frontdoor==1.0.0\r\nazure-mgmt-hdinsight==7.0.0\r\nazure-mgmt-iothub==1.0.0\r\nazure-mgmt-keyvault==8.0.0\r\nazure-mgmt-logic==9.0.0\r\nazure-mgmt-managementgroups==1.0.0b1\r\nazure-mgmt-monitor==2.0.0\r\nazure-mgmt-msi==1.0.0\r\nazure-mgmt-network==17.1.0\r\nazure-mgmt-policyinsights==1.0.0\r\nazure-mgmt-rdbms==8.1.0\r\nazure-mgmt-redis==12.0.0\r\nazure-mgmt-resource==16.1.0\r\nazure-mgmt-resourcegraph==7.0.0\r\nazure-mgmt-search==8.0.0\r\nazure-mgmt-servicefabric==1.0.0\r\nazure-mgmt-sql==1.0.0\r\nazure-mgmt-storage==17.1.0\r\nazure-mgmt-subscription==1.0.0\r\nazure-mgmt-trafficmanager==0.51.0\r\nazure-mgmt-web==2.0.0\r\nazure-nspkg==3.0.2\r\nazure-storage-blob==12.8.1\r\nazure-storage-common==2.1.0\r\nazure-storage-file==2.1.0\r\nazure-storage-file-share==12.5.0\r\nazure-storage-queue==12.1.6\r\nboto3==1.17.102\r\nbotocore==1.20.102\r\nc7n==0.9.13\r\ncachetools==4.2.2\r\ncertifi==2021.5.30\r\ncffi==1.14.5\r\nchardet==4.0.0\r\nclick==7.1.2\r\ncryptography==3.4.7\r\ndecorator==5.0.9\r\ndistlib==0.3.2\r\ndogpile.cache==1.1.3\r\ngoogle-api-core==1.30.0\r\ngoogle-api-python-client==1.12.8\r\ngoogle-auth==1.32.0\r\ngoogle-auth-httplib2==0.1.0\r\ngoogle-cloud-core==1.7.1\r\ngoogle-cloud-logging==1.15.1\r\ngoogle-cloud-monitoring==0.34.0\r\ngoogle-cloud-storage==1.40.0\r\ngoogle-crc32c==1.1.2\r\ngoogle-resumable-media==1.3.1\r\ngoogleapis-common-protos==1.53.0\r\nhttplib2==0.19.1\r\nidna==2.10\r\nimportlib-metadata==4.6.0\r\niso8601==0.1.14\r\nisodate==0.6.0\r\njmespath==0.10.0\r\njsonpatch==1.32\r\njsonpointer==2.1\r\njsonschema==3.2.0\r\nkeystoneauth1==4.3.1\r\nkubernetes==10.0.1\r\nmock==4.0.3\r\nmsal==1.12.0\r\nmsal-extensions==0.3.0\r\nmsrest==0.6.21\r\nmsrestazure==0.6.4\r\nmunch==2.5.0\r\nnetaddr==0.7.20\r\nnetifaces==0.11.0\r\noauthlib==3.1.1\r\nopenstacksdk==0.52.0\r\nos-service-types==1.7.0\r\npackaging==20.9\r\npbr==5.6.0\r\nportalocker==1.7.1\r\nprotobuf==3.17.3\r\npyasn1==0.4.8\r\npyasn1-modules==0.2.8\r\npycparser==2.20\r\npyparsing==2.4.7\r\npyrsistent==0.18.0\r\npython-dateutil==2.8.1\r\npytz==2021.1\r\npyyaml==5.4.1\r\nratelimiter==1.2.0.post0\r\nrequests==2.25.1\r\nrequests-oauthlib==1.3.0\r\nrequestsexceptions==1.4.0\r\nretrying==1.3.3\r\nrsa==4.7.2\r\ns3transfer==0.4.2\r\nsetuptools==56.0.0\r\nsix==1.16.0\r\nstevedore==3.3.0\r\ntabulate==0.8.9\r\ntyping-extensions==3.10.0.0\r\ntzlocal==2.1\r\nuritemplate==3.0.1\r\nurllib3==1.26.6\r\nwebsocket-clie
nt==1.1.0\r\nzipp==3.5.0\n```\n\n\n### Policy\n\n```shell\n- name: sns-encrypt-transit\r\n resource: aws.sns\r\n description: All SNS topics must be encrypted in transit\r\n filters:\r\n - not:\r\n - type: has-statement\r\n statements:\r\n - Action:\r\n - sns:GetTopicAttributes\r\n - sns:SetTopicAttributes\r\n - sns:AddPermission\r\n - sns:RemovePermission\r\n - sns:DeleteTopic\r\n - sns:Subscribe\r\n - sns:ListSubscriptionsByTopic\r\n - sns:Publish\r\n - sns:Receive\r\n Condition:\r\n Bool:\r\n aws:SecureTransport: 'false'\r\n Effect: Deny\r\n Principal: '*'\r\n actions:\r\n - type: modify-policy\r\n add-statements:\r\n - Action:\r\n - sns:GetTopicAttributes\r\n - sns:SetTopicAttributes\r\n - sns:AddPermission\r\n - sns:RemovePermission\r\n - sns:DeleteTopic\r\n - sns:Subscribe\r\n - sns:ListSubscriptionsByTopic\r\n - sns:Publish\r\n - sns:Receive\r\n Condition:\r\n Bool:\r\n aws:SecureTransport: 'false'\r\n Effect: Deny\r\n Principal: '*'\r\n Resource: '*'\r\n Sid: DenyNotEncryptedInTrans\r\n - type: notify\r\n action_desc: 'ACTION Taken: Custodian has added the policy statement to enable encryption in transit.'\r\n cc:\r\n - [email protected]\r\n from: [email protected]\r\n template: default.html\r\n to:\r\n - [email protected]\r\n - resource-owner\r\n transport:\r\n type: sns\r\n topic: arn:aws:sns:us-east-1:0123456789:mailer\r\n violation_desc: The following SNS Topic(s) do not have encryption in transit enabled.\n```\n\n\n### Relevant log/traceback output\n\n```shell\n2021-10-08 12:58:33,582 - custodian.output - ERROR - Error while executing policy\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/dist-packages/c7n/policy.py\", line 322, in run\r\n results = a.process(resources)\r\n File \"/usr/local/lib/python3.7/dist-packages/c7n/resources/sns.py\", line 330, in process\r\n new_policy, added = self.add_statements(new_policy)\r\n File \"/usr/local/lib/python3.7/dist-packages/c7n/actions/policy.py\", line 100, in add_statements\r\n current = {s['Sid']: s for s in policy_statements}\r\n File \"/usr/local/lib/python3.7/dist-packages/c7n/actions/policy.py\", line 100, in \r\n current = {s['Sid']: s for s in policy_statements}\r\nKeyError: 'Sid'\n```\n\n\n### Extra information or context\n\n_No response_\n", "before_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\n\nfrom .core import BaseAction\nfrom c7n import utils\n\n\nclass RemovePolicyBase(BaseAction):\n\n schema = utils.type_schema(\n 'remove-statements',\n required=['statement_ids'],\n statement_ids={'oneOf': [\n {'enum': ['matched', \"*\"]},\n {'type': 'array', 'items': {'type': 'string'}}]})\n\n def process_policy(self, policy, resource, matched_key):\n statements = policy.get('Statement', [])\n resource_statements = resource.get(matched_key, ())\n\n return remove_statements(\n self.data['statement_ids'], statements, resource_statements)\n\n\ndef remove_statements(match_ids, statements, matched=()):\n found = []\n for s in list(statements):\n s_found = False\n if match_ids == '*':\n s_found = True\n elif match_ids == 'matched':\n if s in matched:\n s_found = True\n elif 'Sid' in s and s['Sid'] in match_ids:\n s_found = True\n if s_found:\n found.append(s)\n statements.remove(s)\n if not found:\n return None, found\n return statements, found\n\n\nclass ModifyPolicyBase(BaseAction):\n \"\"\"Action to modify resource IAM policy statements.\n\n Applies to all resources with embedded IAM Policies.\n\n :example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: sns-yank-cross-account\n resource: sns\n filters:\n - type: cross-account\n actions:\n - type: modify-policy\n add-statements: [{\n \"Sid\": \"ReplaceWithMe\",\n \"Effect\": \"Allow\",\n \"Principal\": \"*\",\n \"Action\": [\"SNS:GetTopicAttributes\"],\n \"Resource\": topic_arn,\n }]\n remove-statements: '*'\n \"\"\"\n\n schema_alias = True\n schema = utils.type_schema(\n 'modify-policy',\n **{\n 'add-statements': {\n 'type': 'array',\n 'items': {'$ref': '#/definitions/iam-statement'},\n },\n 'remove-statements': {\n 'type': ['array', 'string'],\n 'oneOf': [\n {'enum': ['matched', '*']},\n {'type': 'array', 'items': {'type': 'string'}}\n ],\n }\n }\n )\n\n def __init__(self, data=None, manager=None):\n if manager is not None:\n config_args = {\n 'account_id': manager.config.account_id,\n 'region': manager.config.region\n }\n self.data = utils.format_string_values(data, **config_args)\n else:\n self.data = utils.format_string_values(data)\n self.manager = manager\n\n def add_statements(self, policy_statements):\n current = {s['Sid']: s for s in policy_statements}\n additional = {s['Sid']: s for s in self.data.get('add-statements', [])}\n current.update(additional)\n return list(current.values()), bool(additional)\n\n def remove_statements(self, policy_statements, resource, matched_key):\n statement_ids = self.data.get('remove-statements', [])\n found = []\n if len(statement_ids) == 0:\n return policy_statements, found\n resource_statements = resource.get(matched_key, ())\n return remove_statements(\n statement_ids, policy_statements, resource_statements)\n", "path": "c7n/actions/policy.py"}], "after_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nimport hashlib\n\nfrom .core import BaseAction\nfrom c7n import utils\n\n\nclass RemovePolicyBase(BaseAction):\n\n schema = utils.type_schema(\n 'remove-statements',\n required=['statement_ids'],\n statement_ids={'oneOf': [\n {'enum': ['matched', \"*\"]},\n {'type': 'array', 'items': {'type': 'string'}}]})\n\n def process_policy(self, policy, resource, matched_key):\n statements = policy.get('Statement', [])\n resource_statements = resource.get(matched_key, ())\n\n return remove_statements(\n self.data['statement_ids'], statements, resource_statements)\n\n\ndef remove_statements(match_ids, statements, matched=()):\n found = []\n for s in list(statements):\n s_found = False\n if match_ids == '*':\n s_found = True\n elif match_ids == 'matched':\n if s in matched:\n s_found = True\n elif 'Sid' in s and s['Sid'] in match_ids:\n s_found = True\n if s_found:\n found.append(s)\n statements.remove(s)\n if not found:\n return None, found\n return statements, found\n\n\ndef statement_id(s):\n # for statements without a sid, use a checksum for identity\n return hashlib.sha224(utils.dumps(s, indent=0).encode('utf8')).hexdigest()\n\n\nclass ModifyPolicyBase(BaseAction):\n \"\"\"Action to modify resource IAM policy statements.\n\n Applies to all resources with embedded IAM Policies.\n\n :example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: sns-yank-cross-account\n resource: sns\n filters:\n - type: cross-account\n actions:\n - type: modify-policy\n add-statements: [{\n \"Sid\": \"ReplaceWithMe\",\n \"Effect\": \"Allow\",\n \"Principal\": \"*\",\n \"Action\": [\"SNS:GetTopicAttributes\"],\n \"Resource\": topic_arn,\n }]\n remove-statements: '*'\n \"\"\"\n\n schema_alias = True\n schema = utils.type_schema(\n 'modify-policy',\n **{\n 'add-statements': {\n 'type': 'array',\n 'items': {'$ref': '#/definitions/iam-statement'},\n },\n 'remove-statements': {\n 'type': ['array', 'string'],\n 'oneOf': [\n {'enum': ['matched', '*']},\n {'type': 'array', 'items': {'type': 'string'}}\n ],\n }\n }\n )\n\n def __init__(self, data=None, manager=None):\n if manager is not None:\n config_args = {\n 'account_id': manager.config.account_id,\n 'region': manager.config.region\n }\n self.data = utils.format_string_values(data, **config_args)\n else:\n self.data = utils.format_string_values(data)\n self.manager = manager\n\n def add_statements(self, policy_statements):\n current = {s.get('Sid', statement_id(s)): s for s in policy_statements}\n additional = {s.get('Sid', statement_id(s)): s for s in self.data.get('add-statements', [])}\n current.update(additional)\n return list(current.values()), bool(additional)\n\n def remove_statements(self, policy_statements, resource, matched_key):\n statement_ids = self.data.get('remove-statements', [])\n found = []\n if len(statement_ids) == 0:\n return policy_statements, found\n resource_statements = resource.get(matched_key, ())\n return remove_statements(\n statement_ids, policy_statements, resource_statements)\n", "path": "c7n/actions/policy.py"}]}
| 3,696 | 307 |
gh_patches_debug_25375
|
rasdani/github-patches
|
git_diff
|
vllm-project__vllm-3177
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support Lora for qwen2
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vllm/model_executor/models/qwen2.py`
Content:
```
1 # coding=utf-8
2 # Adapted from
3 # https://github.com/huggingface/transformers/blob/v4.28.0/src/transformers/models/qwen2/modeling_qwen2.py
4 # Copyright 2024 The Qwen team.
5 # Copyright 2023 The vLLM team.
6 # Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
7 #
8 # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
9 # and OPT implementations in this library. It has been modified from its
10 # original forms to accommodate minor architectural differences compared
11 # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
12 #
13 # Licensed under the Apache License, Version 2.0 (the "License");
14 # you may not use this file except in compliance with the License.
15 # You may obtain a copy of the License at
16 #
17 # http://www.apache.org/licenses/LICENSE-2.0
18 #
19 # Unless required by applicable law or agreed to in writing, software
20 # distributed under the License is distributed on an "AS IS" BASIS,
21 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
22 # See the License for the specific language governing permissions and
23 # limitations under the License.
24 """Inference-only Qwen2 model compatible with HuggingFace weights."""
25 from typing import List, Optional, Tuple
26
27 import torch
28 from torch import nn
29 from transformers import Qwen2Config
30
31 from vllm.model_executor.input_metadata import InputMetadata
32 from vllm.model_executor.layers.activation import SiluAndMul
33 from vllm.model_executor.layers.attention import PagedAttention
34 from vllm.model_executor.layers.layernorm import RMSNorm
35 from vllm.model_executor.layers.linear import (LinearMethodBase,
36 MergedColumnParallelLinear,
37 QKVParallelLinear,
38 RowParallelLinear)
39 from vllm.model_executor.layers.rotary_embedding import get_rope
40 from vllm.model_executor.layers.sampler import Sampler
41 from vllm.model_executor.layers.vocab_parallel_embedding import (
42 VocabParallelEmbedding, ParallelLMHead)
43 from vllm.model_executor.parallel_utils.parallel_state import (
44 get_tensor_model_parallel_world_size)
45 from vllm.model_executor.sampling_metadata import SamplingMetadata
46 from vllm.model_executor.weight_utils import (default_weight_loader,
47 hf_model_weights_iterator)
48 from vllm.sequence import SamplerOutput
49
50 KVCache = Tuple[torch.Tensor, torch.Tensor]
51
52
53 class Qwen2MLP(nn.Module):
54
55 def __init__(
56 self,
57 hidden_size: int,
58 intermediate_size: int,
59 hidden_act: str,
60 linear_method: Optional[LinearMethodBase] = None,
61 ) -> None:
62 super().__init__()
63 self.gate_up_proj = MergedColumnParallelLinear(
64 hidden_size, [intermediate_size] * 2,
65 bias=False,
66 linear_method=linear_method)
67 self.down_proj = RowParallelLinear(intermediate_size,
68 hidden_size,
69 bias=False,
70 linear_method=linear_method)
71 if hidden_act != "silu":
72 raise ValueError(f"Unsupported activation: {hidden_act}. "
73 "Only silu is supported for now.")
74 self.act_fn = SiluAndMul()
75
76 def forward(self, x):
77 gate_up, _ = self.gate_up_proj(x)
78 x = self.act_fn(gate_up)
79 x, _ = self.down_proj(x)
80 return x
81
82
83 class Qwen2Attention(nn.Module):
84
85 def __init__(self,
86 hidden_size: int,
87 num_heads: int,
88 num_kv_heads: int,
89 max_position: int = 4096 * 32,
90 rope_theta: float = 10000,
91 use_sliding_window: bool = False,
92 linear_method: Optional[LinearMethodBase] = None,
93 sliding_window: Optional[int] = None) -> None:
94 super().__init__()
95 self.hidden_size = hidden_size
96 tp_size = get_tensor_model_parallel_world_size()
97 self.total_num_heads = num_heads
98 assert self.total_num_heads % tp_size == 0
99 self.num_heads = self.total_num_heads // tp_size
100 self.total_num_kv_heads = num_kv_heads
101 if self.total_num_kv_heads >= tp_size:
102 # Number of KV heads is greater than TP size, so we partition
103 # the KV heads across multiple tensor parallel GPUs.
104 assert self.total_num_kv_heads % tp_size == 0
105 else:
106 # Number of KV heads is less than TP size, so we replicate
107 # the KV heads across multiple tensor parallel GPUs.
108 assert tp_size % self.total_num_kv_heads == 0
109 self.num_kv_heads = max(1, self.total_num_kv_heads // tp_size)
110 self.head_dim = hidden_size // self.total_num_heads
111 self.q_size = self.num_heads * self.head_dim
112 self.kv_size = self.num_kv_heads * self.head_dim
113 self.scaling = self.head_dim**-0.5
114 self.rope_theta = rope_theta
115 self.sliding_window = sliding_window if use_sliding_window else None
116
117 self.qkv_proj = QKVParallelLinear(
118 hidden_size,
119 self.head_dim,
120 self.total_num_heads,
121 self.total_num_kv_heads,
122 bias=True,
123 linear_method=linear_method,
124 )
125 self.o_proj = RowParallelLinear(
126 self.total_num_heads * self.head_dim,
127 hidden_size,
128 bias=False,
129 linear_method=linear_method,
130 )
131
132 self.rotary_emb = get_rope(
133 self.head_dim,
134 rotary_dim=self.head_dim,
135 max_position=max_position,
136 base=self.rope_theta,
137 )
138 self.attn = PagedAttention(self.num_heads,
139 self.head_dim,
140 self.scaling,
141 num_kv_heads=self.num_kv_heads,
142 sliding_window=self.sliding_window)
143
144 def forward(
145 self,
146 positions: torch.Tensor,
147 hidden_states: torch.Tensor,
148 kv_cache: KVCache,
149 input_metadata: InputMetadata,
150 ) -> torch.Tensor:
151 qkv, _ = self.qkv_proj(hidden_states)
152 q, k, v = qkv.split([self.q_size, self.kv_size, self.kv_size], dim=-1)
153 q, k = self.rotary_emb(positions, q, k)
154 k_cache, v_cache = kv_cache
155 attn_output = self.attn(q, k, v, k_cache, v_cache, input_metadata)
156 output, _ = self.o_proj(attn_output)
157 return output
158
159
160 class Qwen2DecoderLayer(nn.Module):
161
162 def __init__(
163 self,
164 config: Qwen2Config,
165 layer_idx: int,
166 linear_method: Optional[LinearMethodBase] = None,
167 ) -> None:
168 super().__init__()
169 self.hidden_size = config.hidden_size
170 # Requires transformers > 4.32.0
171 rope_theta = getattr(config, "rope_theta", 1000000)
172 use_sliding_window = config.use_sliding_window and layer_idx < config.max_window_layers
173 self.self_attn = Qwen2Attention(
174 hidden_size=self.hidden_size,
175 num_heads=config.num_attention_heads,
176 max_position=config.max_position_embeddings,
177 num_kv_heads=config.num_key_value_heads,
178 rope_theta=rope_theta,
179 use_sliding_window=use_sliding_window,
180 linear_method=linear_method,
181 sliding_window=config.sliding_window)
182 self.mlp = Qwen2MLP(
183 hidden_size=self.hidden_size,
184 intermediate_size=config.intermediate_size,
185 hidden_act=config.hidden_act,
186 linear_method=linear_method,
187 )
188 self.input_layernorm = RMSNorm(config.hidden_size,
189 eps=config.rms_norm_eps)
190 self.post_attention_layernorm = RMSNorm(config.hidden_size,
191 eps=config.rms_norm_eps)
192
193 def forward(
194 self,
195 positions: torch.Tensor,
196 hidden_states: torch.Tensor,
197 kv_cache: KVCache,
198 input_metadata: InputMetadata,
199 residual: Optional[torch.Tensor],
200 ) -> Tuple[torch.Tensor, torch.Tensor]:
201 # Self Attention
202 if residual is None:
203 residual = hidden_states
204 hidden_states = self.input_layernorm(hidden_states)
205 else:
206 hidden_states, residual = self.input_layernorm(
207 hidden_states, residual)
208 hidden_states = self.self_attn(
209 positions=positions,
210 hidden_states=hidden_states,
211 kv_cache=kv_cache,
212 input_metadata=input_metadata,
213 )
214
215 # Fully Connected
216 hidden_states, residual = self.post_attention_layernorm(
217 hidden_states, residual)
218 hidden_states = self.mlp(hidden_states)
219 return hidden_states, residual
220
221
222 class Qwen2Model(nn.Module):
223
224 def __init__(
225 self,
226 config: Qwen2Config,
227 linear_method: Optional[LinearMethodBase] = None,
228 ) -> None:
229 super().__init__()
230 self.config = config
231 self.padding_idx = config.pad_token_id
232 self.vocab_size = config.vocab_size
233
234 self.embed_tokens = VocabParallelEmbedding(
235 config.vocab_size,
236 config.hidden_size,
237 )
238 self.layers = nn.ModuleList([
239 Qwen2DecoderLayer(config, layer_idx, linear_method)
240 for layer_idx in range(config.num_hidden_layers)
241 ])
242 self.norm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
243
244 def forward(
245 self,
246 input_ids: torch.Tensor,
247 positions: torch.Tensor,
248 kv_caches: List[KVCache],
249 input_metadata: InputMetadata,
250 ) -> torch.Tensor:
251 hidden_states = self.embed_tokens(input_ids)
252 residual = None
253 for i in range(len(self.layers)):
254 layer = self.layers[i]
255 hidden_states, residual = layer(
256 positions,
257 hidden_states,
258 kv_caches[i],
259 input_metadata,
260 residual,
261 )
262 hidden_states, _ = self.norm(hidden_states, residual)
263 return hidden_states
264
265
266 class Qwen2ForCausalLM(nn.Module):
267
268 def __init__(
269 self,
270 config: Qwen2Config,
271 linear_method: Optional[LinearMethodBase] = None,
272 ) -> None:
273 super().__init__()
274 self.config = config
275 self.linear_method = linear_method
276 self.model = Qwen2Model(config, linear_method)
277 self.lm_head = ParallelLMHead(config.vocab_size, config.hidden_size)
278 self.sampler = Sampler(config.vocab_size)
279
280 def forward(
281 self,
282 input_ids: torch.Tensor,
283 positions: torch.Tensor,
284 kv_caches: List[KVCache],
285 input_metadata: InputMetadata,
286 ) -> torch.Tensor:
287 hidden_states = self.model(input_ids, positions, kv_caches,
288 input_metadata)
289 return hidden_states
290
291 def sample(
292 self,
293 hidden_states: torch.Tensor,
294 sampling_metadata: SamplingMetadata,
295 ) -> Optional[SamplerOutput]:
296 next_tokens = self.sampler(self.lm_head.weight, hidden_states,
297 sampling_metadata)
298 return next_tokens
299
300 def load_weights(self,
301 model_name_or_path: str,
302 cache_dir: Optional[str] = None,
303 load_format: str = "auto",
304 revision: Optional[str] = None):
305 stacked_params_mapping = [
306 # (param_name, shard_name, shard_id)
307 ("qkv_proj", "q_proj", "q"),
308 ("qkv_proj", "k_proj", "k"),
309 ("qkv_proj", "v_proj", "v"),
310 ("gate_up_proj", "gate_proj", 0),
311 ("gate_up_proj", "up_proj", 1),
312 ]
313 params_dict = dict(self.named_parameters())
314 for name, loaded_weight in hf_model_weights_iterator(
315 model_name_or_path, cache_dir, load_format, revision):
316 if "rotary_emb.inv_freq" in name:
317 continue
318 for (param_name, weight_name, shard_id) in stacked_params_mapping:
319 if weight_name not in name:
320 continue
321 name = name.replace(weight_name, param_name)
322 # Skip loading extra bias for GPTQ models.
323 if name.endswith(".bias") and name not in params_dict:
324 continue
325 param = params_dict[name]
326 weight_loader = param.weight_loader
327 weight_loader(param, loaded_weight, shard_id)
328 break
329 else:
330 # Skip loading extra bias for GPTQ models.
331 if name.endswith(".bias") and name not in params_dict:
332 continue
333 param = params_dict[name]
334 weight_loader = getattr(param, "weight_loader",
335 default_weight_loader)
336 weight_loader(param, loaded_weight)
337
```
--- END FILES ---
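Before the patch-format instructions below, a hedged sketch of the class-level hooks vLLM's LoRA machinery expects a model class to declare (attribute names mirror the diff that follows; this is orientation, not the definitive change):

```python
# In the real change these attributes live on Qwen2ForCausalLM; they are
# shown standalone here only to illustrate the shape of the mapping.
packed_modules_mapping = {
    "qkv_proj": ["q_proj", "k_proj", "v_proj"],  # fused QKV projection
    "gate_up_proj": ["gate_proj", "up_proj"],    # fused MLP input projection
}
supported_lora_modules = ["qkv_proj", "o_proj", "gate_up_proj", "down_proj"]
embedding_modules = {}
embedding_padding_modules = []
```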
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/vllm/model_executor/models/qwen2.py b/vllm/model_executor/models/qwen2.py
--- a/vllm/model_executor/models/qwen2.py
+++ b/vllm/model_executor/models/qwen2.py
@@ -46,6 +46,7 @@
from vllm.model_executor.weight_utils import (default_weight_loader,
hf_model_weights_iterator)
from vllm.sequence import SamplerOutput
+from vllm.config import LoRAConfig
KVCache = Tuple[torch.Tensor, torch.Tensor]
@@ -264,12 +265,35 @@
class Qwen2ForCausalLM(nn.Module):
+ packed_modules_mapping = {
+ "qkv_proj": [
+ "q_proj",
+ "k_proj",
+ "v_proj",
+ ],
+ "gate_up_proj": [
+ "gate_proj",
+ "up_proj",
+ ],
+ }
+
+ # LoRA specific attributes
+ supported_lora_modules = [
+ "qkv_proj",
+ "o_proj",
+ "gate_up_proj",
+ "down_proj",
+ ]
+ embedding_modules = {}
+ embedding_padding_modules = []
def __init__(
self,
config: Qwen2Config,
linear_method: Optional[LinearMethodBase] = None,
+ lora_config: Optional[LoRAConfig] = None,
) -> None:
+ del lora_config
super().__init__()
self.config = config
self.linear_method = linear_method
|
{"golden_diff": "diff --git a/vllm/model_executor/models/qwen2.py b/vllm/model_executor/models/qwen2.py\n--- a/vllm/model_executor/models/qwen2.py\n+++ b/vllm/model_executor/models/qwen2.py\n@@ -46,6 +46,7 @@\n from vllm.model_executor.weight_utils import (default_weight_loader,\n hf_model_weights_iterator)\n from vllm.sequence import SamplerOutput\n+from vllm.config import LoRAConfig\n \n KVCache = Tuple[torch.Tensor, torch.Tensor]\n \n@@ -264,12 +265,35 @@\n \n \n class Qwen2ForCausalLM(nn.Module):\n+ packed_modules_mapping = {\n+ \"qkv_proj\": [\n+ \"q_proj\",\n+ \"k_proj\",\n+ \"v_proj\",\n+ ],\n+ \"gate_up_proj\": [\n+ \"gate_proj\",\n+ \"up_proj\",\n+ ],\n+ }\n+\n+ # LoRA specific attributes\n+ supported_lora_modules = [\n+ \"qkv_proj\",\n+ \"o_proj\",\n+ \"gate_up_proj\",\n+ \"down_proj\",\n+ ]\n+ embedding_modules = {}\n+ embedding_padding_modules = []\n \n def __init__(\n self,\n config: Qwen2Config,\n linear_method: Optional[LinearMethodBase] = None,\n+ lora_config: Optional[LoRAConfig] = None,\n ) -> None:\n+ del lora_config\n super().__init__()\n self.config = config\n self.linear_method = linear_method\n", "issue": "Support Lora for qwen2\n\n", "before_files": [{"content": "# coding=utf-8\n# Adapted from\n# https://github.com/huggingface/transformers/blob/v4.28.0/src/transformers/models/qwen2/modeling_qwen2.py\n# Copyright 2024 The Qwen team.\n# Copyright 2023 The vLLM team.\n# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.\n#\n# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX\n# and OPT implementations in this library. It has been modified from its\n# original forms to accommodate minor architectural differences compared\n# to GPT-NeoX and OPT used by the Meta AI team that trained the model.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Inference-only Qwen2 model compatible with HuggingFace weights.\"\"\"\nfrom typing import List, Optional, Tuple\n\nimport torch\nfrom torch import nn\nfrom transformers import Qwen2Config\n\nfrom vllm.model_executor.input_metadata import InputMetadata\nfrom vllm.model_executor.layers.activation import SiluAndMul\nfrom vllm.model_executor.layers.attention import PagedAttention\nfrom vllm.model_executor.layers.layernorm import RMSNorm\nfrom vllm.model_executor.layers.linear import (LinearMethodBase,\n MergedColumnParallelLinear,\n QKVParallelLinear,\n RowParallelLinear)\nfrom vllm.model_executor.layers.rotary_embedding import get_rope\nfrom vllm.model_executor.layers.sampler import Sampler\nfrom vllm.model_executor.layers.vocab_parallel_embedding import (\n VocabParallelEmbedding, ParallelLMHead)\nfrom vllm.model_executor.parallel_utils.parallel_state import (\n get_tensor_model_parallel_world_size)\nfrom vllm.model_executor.sampling_metadata import SamplingMetadata\nfrom vllm.model_executor.weight_utils import (default_weight_loader,\n hf_model_weights_iterator)\nfrom vllm.sequence import SamplerOutput\n\nKVCache = Tuple[torch.Tensor, torch.Tensor]\n\n\nclass Qwen2MLP(nn.Module):\n\n 
def __init__(\n self,\n hidden_size: int,\n intermediate_size: int,\n hidden_act: str,\n linear_method: Optional[LinearMethodBase] = None,\n ) -> None:\n super().__init__()\n self.gate_up_proj = MergedColumnParallelLinear(\n hidden_size, [intermediate_size] * 2,\n bias=False,\n linear_method=linear_method)\n self.down_proj = RowParallelLinear(intermediate_size,\n hidden_size,\n bias=False,\n linear_method=linear_method)\n if hidden_act != \"silu\":\n raise ValueError(f\"Unsupported activation: {hidden_act}. \"\n \"Only silu is supported for now.\")\n self.act_fn = SiluAndMul()\n\n def forward(self, x):\n gate_up, _ = self.gate_up_proj(x)\n x = self.act_fn(gate_up)\n x, _ = self.down_proj(x)\n return x\n\n\nclass Qwen2Attention(nn.Module):\n\n def __init__(self,\n hidden_size: int,\n num_heads: int,\n num_kv_heads: int,\n max_position: int = 4096 * 32,\n rope_theta: float = 10000,\n use_sliding_window: bool = False,\n linear_method: Optional[LinearMethodBase] = None,\n sliding_window: Optional[int] = None) -> None:\n super().__init__()\n self.hidden_size = hidden_size\n tp_size = get_tensor_model_parallel_world_size()\n self.total_num_heads = num_heads\n assert self.total_num_heads % tp_size == 0\n self.num_heads = self.total_num_heads // tp_size\n self.total_num_kv_heads = num_kv_heads\n if self.total_num_kv_heads >= tp_size:\n # Number of KV heads is greater than TP size, so we partition\n # the KV heads across multiple tensor parallel GPUs.\n assert self.total_num_kv_heads % tp_size == 0\n else:\n # Number of KV heads is less than TP size, so we replicate\n # the KV heads across multiple tensor parallel GPUs.\n assert tp_size % self.total_num_kv_heads == 0\n self.num_kv_heads = max(1, self.total_num_kv_heads // tp_size)\n self.head_dim = hidden_size // self.total_num_heads\n self.q_size = self.num_heads * self.head_dim\n self.kv_size = self.num_kv_heads * self.head_dim\n self.scaling = self.head_dim**-0.5\n self.rope_theta = rope_theta\n self.sliding_window = sliding_window if use_sliding_window else None\n\n self.qkv_proj = QKVParallelLinear(\n hidden_size,\n self.head_dim,\n self.total_num_heads,\n self.total_num_kv_heads,\n bias=True,\n linear_method=linear_method,\n )\n self.o_proj = RowParallelLinear(\n self.total_num_heads * self.head_dim,\n hidden_size,\n bias=False,\n linear_method=linear_method,\n )\n\n self.rotary_emb = get_rope(\n self.head_dim,\n rotary_dim=self.head_dim,\n max_position=max_position,\n base=self.rope_theta,\n )\n self.attn = PagedAttention(self.num_heads,\n self.head_dim,\n self.scaling,\n num_kv_heads=self.num_kv_heads,\n sliding_window=self.sliding_window)\n\n def forward(\n self,\n positions: torch.Tensor,\n hidden_states: torch.Tensor,\n kv_cache: KVCache,\n input_metadata: InputMetadata,\n ) -> torch.Tensor:\n qkv, _ = self.qkv_proj(hidden_states)\n q, k, v = qkv.split([self.q_size, self.kv_size, self.kv_size], dim=-1)\n q, k = self.rotary_emb(positions, q, k)\n k_cache, v_cache = kv_cache\n attn_output = self.attn(q, k, v, k_cache, v_cache, input_metadata)\n output, _ = self.o_proj(attn_output)\n return output\n\n\nclass Qwen2DecoderLayer(nn.Module):\n\n def __init__(\n self,\n config: Qwen2Config,\n layer_idx: int,\n linear_method: Optional[LinearMethodBase] = None,\n ) -> None:\n super().__init__()\n self.hidden_size = config.hidden_size\n # Requires transformers > 4.32.0\n rope_theta = getattr(config, \"rope_theta\", 1000000)\n use_sliding_window = config.use_sliding_window and layer_idx < config.max_window_layers\n self.self_attn = Qwen2Attention(\n 
hidden_size=self.hidden_size,\n num_heads=config.num_attention_heads,\n max_position=config.max_position_embeddings,\n num_kv_heads=config.num_key_value_heads,\n rope_theta=rope_theta,\n use_sliding_window=use_sliding_window,\n linear_method=linear_method,\n sliding_window=config.sliding_window)\n self.mlp = Qwen2MLP(\n hidden_size=self.hidden_size,\n intermediate_size=config.intermediate_size,\n hidden_act=config.hidden_act,\n linear_method=linear_method,\n )\n self.input_layernorm = RMSNorm(config.hidden_size,\n eps=config.rms_norm_eps)\n self.post_attention_layernorm = RMSNorm(config.hidden_size,\n eps=config.rms_norm_eps)\n\n def forward(\n self,\n positions: torch.Tensor,\n hidden_states: torch.Tensor,\n kv_cache: KVCache,\n input_metadata: InputMetadata,\n residual: Optional[torch.Tensor],\n ) -> Tuple[torch.Tensor, torch.Tensor]:\n # Self Attention\n if residual is None:\n residual = hidden_states\n hidden_states = self.input_layernorm(hidden_states)\n else:\n hidden_states, residual = self.input_layernorm(\n hidden_states, residual)\n hidden_states = self.self_attn(\n positions=positions,\n hidden_states=hidden_states,\n kv_cache=kv_cache,\n input_metadata=input_metadata,\n )\n\n # Fully Connected\n hidden_states, residual = self.post_attention_layernorm(\n hidden_states, residual)\n hidden_states = self.mlp(hidden_states)\n return hidden_states, residual\n\n\nclass Qwen2Model(nn.Module):\n\n def __init__(\n self,\n config: Qwen2Config,\n linear_method: Optional[LinearMethodBase] = None,\n ) -> None:\n super().__init__()\n self.config = config\n self.padding_idx = config.pad_token_id\n self.vocab_size = config.vocab_size\n\n self.embed_tokens = VocabParallelEmbedding(\n config.vocab_size,\n config.hidden_size,\n )\n self.layers = nn.ModuleList([\n Qwen2DecoderLayer(config, layer_idx, linear_method)\n for layer_idx in range(config.num_hidden_layers)\n ])\n self.norm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)\n\n def forward(\n self,\n input_ids: torch.Tensor,\n positions: torch.Tensor,\n kv_caches: List[KVCache],\n input_metadata: InputMetadata,\n ) -> torch.Tensor:\n hidden_states = self.embed_tokens(input_ids)\n residual = None\n for i in range(len(self.layers)):\n layer = self.layers[i]\n hidden_states, residual = layer(\n positions,\n hidden_states,\n kv_caches[i],\n input_metadata,\n residual,\n )\n hidden_states, _ = self.norm(hidden_states, residual)\n return hidden_states\n\n\nclass Qwen2ForCausalLM(nn.Module):\n\n def __init__(\n self,\n config: Qwen2Config,\n linear_method: Optional[LinearMethodBase] = None,\n ) -> None:\n super().__init__()\n self.config = config\n self.linear_method = linear_method\n self.model = Qwen2Model(config, linear_method)\n self.lm_head = ParallelLMHead(config.vocab_size, config.hidden_size)\n self.sampler = Sampler(config.vocab_size)\n\n def forward(\n self,\n input_ids: torch.Tensor,\n positions: torch.Tensor,\n kv_caches: List[KVCache],\n input_metadata: InputMetadata,\n ) -> torch.Tensor:\n hidden_states = self.model(input_ids, positions, kv_caches,\n input_metadata)\n return hidden_states\n\n def sample(\n self,\n hidden_states: torch.Tensor,\n sampling_metadata: SamplingMetadata,\n ) -> Optional[SamplerOutput]:\n next_tokens = self.sampler(self.lm_head.weight, hidden_states,\n sampling_metadata)\n return next_tokens\n\n def load_weights(self,\n model_name_or_path: str,\n cache_dir: Optional[str] = None,\n load_format: str = \"auto\",\n revision: Optional[str] = None):\n stacked_params_mapping = [\n # (param_name, shard_name, 
shard_id)\n (\"qkv_proj\", \"q_proj\", \"q\"),\n (\"qkv_proj\", \"k_proj\", \"k\"),\n (\"qkv_proj\", \"v_proj\", \"v\"),\n (\"gate_up_proj\", \"gate_proj\", 0),\n (\"gate_up_proj\", \"up_proj\", 1),\n ]\n params_dict = dict(self.named_parameters())\n for name, loaded_weight in hf_model_weights_iterator(\n model_name_or_path, cache_dir, load_format, revision):\n if \"rotary_emb.inv_freq\" in name:\n continue\n for (param_name, weight_name, shard_id) in stacked_params_mapping:\n if weight_name not in name:\n continue\n name = name.replace(weight_name, param_name)\n # Skip loading extra bias for GPTQ models.\n if name.endswith(\".bias\") and name not in params_dict:\n continue\n param = params_dict[name]\n weight_loader = param.weight_loader\n weight_loader(param, loaded_weight, shard_id)\n break\n else:\n # Skip loading extra bias for GPTQ models.\n if name.endswith(\".bias\") and name not in params_dict:\n continue\n param = params_dict[name]\n weight_loader = getattr(param, \"weight_loader\",\n default_weight_loader)\n weight_loader(param, loaded_weight)\n", "path": "vllm/model_executor/models/qwen2.py"}], "after_files": [{"content": "# coding=utf-8\n# Adapted from\n# https://github.com/huggingface/transformers/blob/v4.28.0/src/transformers/models/qwen2/modeling_qwen2.py\n# Copyright 2024 The Qwen team.\n# Copyright 2023 The vLLM team.\n# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.\n#\n# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX\n# and OPT implementations in this library. It has been modified from its\n# original forms to accommodate minor architectural differences compared\n# to GPT-NeoX and OPT used by the Meta AI team that trained the model.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Inference-only Qwen2 model compatible with HuggingFace weights.\"\"\"\nfrom typing import List, Optional, Tuple\n\nimport torch\nfrom torch import nn\nfrom transformers import Qwen2Config\n\nfrom vllm.model_executor.input_metadata import InputMetadata\nfrom vllm.model_executor.layers.activation import SiluAndMul\nfrom vllm.model_executor.layers.attention import PagedAttention\nfrom vllm.model_executor.layers.layernorm import RMSNorm\nfrom vllm.model_executor.layers.linear import (LinearMethodBase,\n MergedColumnParallelLinear,\n QKVParallelLinear,\n RowParallelLinear)\nfrom vllm.model_executor.layers.rotary_embedding import get_rope\nfrom vllm.model_executor.layers.sampler import Sampler\nfrom vllm.model_executor.layers.vocab_parallel_embedding import (\n VocabParallelEmbedding, ParallelLMHead)\nfrom vllm.model_executor.parallel_utils.parallel_state import (\n get_tensor_model_parallel_world_size)\nfrom vllm.model_executor.sampling_metadata import SamplingMetadata\nfrom vllm.model_executor.weight_utils import (default_weight_loader,\n hf_model_weights_iterator)\nfrom vllm.sequence import SamplerOutput\nfrom vllm.config import LoRAConfig\n\nKVCache = Tuple[torch.Tensor, torch.Tensor]\n\n\nclass Qwen2MLP(nn.Module):\n\n def __init__(\n 
self,\n hidden_size: int,\n intermediate_size: int,\n hidden_act: str,\n linear_method: Optional[LinearMethodBase] = None,\n ) -> None:\n super().__init__()\n self.gate_up_proj = MergedColumnParallelLinear(\n hidden_size, [intermediate_size] * 2,\n bias=False,\n linear_method=linear_method)\n self.down_proj = RowParallelLinear(intermediate_size,\n hidden_size,\n bias=False,\n linear_method=linear_method)\n if hidden_act != \"silu\":\n raise ValueError(f\"Unsupported activation: {hidden_act}. \"\n \"Only silu is supported for now.\")\n self.act_fn = SiluAndMul()\n\n def forward(self, x):\n gate_up, _ = self.gate_up_proj(x)\n x = self.act_fn(gate_up)\n x, _ = self.down_proj(x)\n return x\n\n\nclass Qwen2Attention(nn.Module):\n\n def __init__(self,\n hidden_size: int,\n num_heads: int,\n num_kv_heads: int,\n max_position: int = 4096 * 32,\n rope_theta: float = 10000,\n use_sliding_window: bool = False,\n linear_method: Optional[LinearMethodBase] = None,\n sliding_window: Optional[int] = None) -> None:\n super().__init__()\n self.hidden_size = hidden_size\n tp_size = get_tensor_model_parallel_world_size()\n self.total_num_heads = num_heads\n assert self.total_num_heads % tp_size == 0\n self.num_heads = self.total_num_heads // tp_size\n self.total_num_kv_heads = num_kv_heads\n if self.total_num_kv_heads >= tp_size:\n # Number of KV heads is greater than TP size, so we partition\n # the KV heads across multiple tensor parallel GPUs.\n assert self.total_num_kv_heads % tp_size == 0\n else:\n # Number of KV heads is less than TP size, so we replicate\n # the KV heads across multiple tensor parallel GPUs.\n assert tp_size % self.total_num_kv_heads == 0\n self.num_kv_heads = max(1, self.total_num_kv_heads // tp_size)\n self.head_dim = hidden_size // self.total_num_heads\n self.q_size = self.num_heads * self.head_dim\n self.kv_size = self.num_kv_heads * self.head_dim\n self.scaling = self.head_dim**-0.5\n self.rope_theta = rope_theta\n self.sliding_window = sliding_window if use_sliding_window else None\n\n self.qkv_proj = QKVParallelLinear(\n hidden_size,\n self.head_dim,\n self.total_num_heads,\n self.total_num_kv_heads,\n bias=True,\n linear_method=linear_method,\n )\n self.o_proj = RowParallelLinear(\n self.total_num_heads * self.head_dim,\n hidden_size,\n bias=False,\n linear_method=linear_method,\n )\n\n self.rotary_emb = get_rope(\n self.head_dim,\n rotary_dim=self.head_dim,\n max_position=max_position,\n base=self.rope_theta,\n )\n self.attn = PagedAttention(self.num_heads,\n self.head_dim,\n self.scaling,\n num_kv_heads=self.num_kv_heads,\n sliding_window=self.sliding_window)\n\n def forward(\n self,\n positions: torch.Tensor,\n hidden_states: torch.Tensor,\n kv_cache: KVCache,\n input_metadata: InputMetadata,\n ) -> torch.Tensor:\n qkv, _ = self.qkv_proj(hidden_states)\n q, k, v = qkv.split([self.q_size, self.kv_size, self.kv_size], dim=-1)\n q, k = self.rotary_emb(positions, q, k)\n k_cache, v_cache = kv_cache\n attn_output = self.attn(q, k, v, k_cache, v_cache, input_metadata)\n output, _ = self.o_proj(attn_output)\n return output\n\n\nclass Qwen2DecoderLayer(nn.Module):\n\n def __init__(\n self,\n config: Qwen2Config,\n layer_idx: int,\n linear_method: Optional[LinearMethodBase] = None,\n ) -> None:\n super().__init__()\n self.hidden_size = config.hidden_size\n # Requires transformers > 4.32.0\n rope_theta = getattr(config, \"rope_theta\", 1000000)\n use_sliding_window = config.use_sliding_window and layer_idx < config.max_window_layers\n self.self_attn = Qwen2Attention(\n 
hidden_size=self.hidden_size,\n num_heads=config.num_attention_heads,\n max_position=config.max_position_embeddings,\n num_kv_heads=config.num_key_value_heads,\n rope_theta=rope_theta,\n use_sliding_window=use_sliding_window,\n linear_method=linear_method,\n sliding_window=config.sliding_window)\n self.mlp = Qwen2MLP(\n hidden_size=self.hidden_size,\n intermediate_size=config.intermediate_size,\n hidden_act=config.hidden_act,\n linear_method=linear_method,\n )\n self.input_layernorm = RMSNorm(config.hidden_size,\n eps=config.rms_norm_eps)\n self.post_attention_layernorm = RMSNorm(config.hidden_size,\n eps=config.rms_norm_eps)\n\n def forward(\n self,\n positions: torch.Tensor,\n hidden_states: torch.Tensor,\n kv_cache: KVCache,\n input_metadata: InputMetadata,\n residual: Optional[torch.Tensor],\n ) -> Tuple[torch.Tensor, torch.Tensor]:\n # Self Attention\n if residual is None:\n residual = hidden_states\n hidden_states = self.input_layernorm(hidden_states)\n else:\n hidden_states, residual = self.input_layernorm(\n hidden_states, residual)\n hidden_states = self.self_attn(\n positions=positions,\n hidden_states=hidden_states,\n kv_cache=kv_cache,\n input_metadata=input_metadata,\n )\n\n # Fully Connected\n hidden_states, residual = self.post_attention_layernorm(\n hidden_states, residual)\n hidden_states = self.mlp(hidden_states)\n return hidden_states, residual\n\n\nclass Qwen2Model(nn.Module):\n\n def __init__(\n self,\n config: Qwen2Config,\n linear_method: Optional[LinearMethodBase] = None,\n ) -> None:\n super().__init__()\n self.config = config\n self.padding_idx = config.pad_token_id\n self.vocab_size = config.vocab_size\n\n self.embed_tokens = VocabParallelEmbedding(\n config.vocab_size,\n config.hidden_size,\n )\n self.layers = nn.ModuleList([\n Qwen2DecoderLayer(config, layer_idx, linear_method)\n for layer_idx in range(config.num_hidden_layers)\n ])\n self.norm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)\n\n def forward(\n self,\n input_ids: torch.Tensor,\n positions: torch.Tensor,\n kv_caches: List[KVCache],\n input_metadata: InputMetadata,\n ) -> torch.Tensor:\n hidden_states = self.embed_tokens(input_ids)\n residual = None\n for i in range(len(self.layers)):\n layer = self.layers[i]\n hidden_states, residual = layer(\n positions,\n hidden_states,\n kv_caches[i],\n input_metadata,\n residual,\n )\n hidden_states, _ = self.norm(hidden_states, residual)\n return hidden_states\n\n\nclass Qwen2ForCausalLM(nn.Module):\n packed_modules_mapping = {\n \"qkv_proj\": [\n \"q_proj\",\n \"k_proj\",\n \"v_proj\",\n ],\n \"gate_up_proj\": [\n \"gate_proj\",\n \"up_proj\",\n ],\n }\n\n # LoRA specific attributes\n supported_lora_modules = [\n \"qkv_proj\",\n \"o_proj\",\n \"gate_up_proj\",\n \"down_proj\",\n ]\n embedding_modules = {}\n embedding_padding_modules = []\n\n def __init__(\n self,\n config: Qwen2Config,\n linear_method: Optional[LinearMethodBase] = None,\n lora_config: Optional[LoRAConfig] = None,\n ) -> None:\n del lora_config\n super().__init__()\n self.config = config\n self.linear_method = linear_method\n self.model = Qwen2Model(config, linear_method)\n self.lm_head = ParallelLMHead(config.vocab_size, config.hidden_size)\n self.sampler = Sampler(config.vocab_size)\n\n def forward(\n self,\n input_ids: torch.Tensor,\n positions: torch.Tensor,\n kv_caches: List[KVCache],\n input_metadata: InputMetadata,\n ) -> torch.Tensor:\n hidden_states = self.model(input_ids, positions, kv_caches,\n input_metadata)\n return hidden_states\n\n def sample(\n self,\n hidden_states: 
torch.Tensor,\n sampling_metadata: SamplingMetadata,\n ) -> Optional[SamplerOutput]:\n next_tokens = self.sampler(self.lm_head.weight, hidden_states,\n sampling_metadata)\n return next_tokens\n\n def load_weights(self,\n model_name_or_path: str,\n cache_dir: Optional[str] = None,\n load_format: str = \"auto\",\n revision: Optional[str] = None):\n stacked_params_mapping = [\n # (param_name, shard_name, shard_id)\n (\"qkv_proj\", \"q_proj\", \"q\"),\n (\"qkv_proj\", \"k_proj\", \"k\"),\n (\"qkv_proj\", \"v_proj\", \"v\"),\n (\"gate_up_proj\", \"gate_proj\", 0),\n (\"gate_up_proj\", \"up_proj\", 1),\n ]\n params_dict = dict(self.named_parameters())\n for name, loaded_weight in hf_model_weights_iterator(\n model_name_or_path, cache_dir, load_format, revision):\n if \"rotary_emb.inv_freq\" in name:\n continue\n for (param_name, weight_name, shard_id) in stacked_params_mapping:\n if weight_name not in name:\n continue\n name = name.replace(weight_name, param_name)\n # Skip loading extra bias for GPTQ models.\n if name.endswith(\".bias\") and name not in params_dict:\n continue\n param = params_dict[name]\n weight_loader = param.weight_loader\n weight_loader(param, loaded_weight, shard_id)\n break\n else:\n # Skip loading extra bias for GPTQ models.\n if name.endswith(\".bias\") and name not in params_dict:\n continue\n param = params_dict[name]\n weight_loader = getattr(param, \"weight_loader\",\n default_weight_loader)\n weight_loader(param, loaded_weight)\n", "path": "vllm/model_executor/models/qwen2.py"}]}
| 3,909 | 341 |
gh_patches_debug_21131
|
rasdani/github-patches
|
git_diff
|
pypi__warehouse-2023
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Search by topic broken?
Browsing packages, then choosing the Internet / WWW/HTTP / Browsers topics gives no results:
https://pypi.org/search/?q=&o=&c=Topic+%3A%3A+Internet+%3A%3A+WWW%2FHTTP+%3A%3A+Browsers
There should be at least the [mechanoid package](https://pypi.org/project/mechanoid/)
Using Firefox 50.1.0 on Ubuntu 16.04
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/cli/search/reindex.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import binascii
14 import os
15
16 import click
17
18 from elasticsearch.helpers import parallel_bulk
19 from sqlalchemy.orm import lazyload, joinedload, load_only
20
21 from warehouse.cli.search import search
22 from warehouse.db import Session
23 from warehouse.packaging.models import Release, Project
24 from warehouse.packaging.search import Project as ProjectDocType
25 from warehouse.search import get_index
26 from warehouse.utils.db import windowed_query
27
28
29 def _project_docs(db):
30 releases = (
31 db.query(Release)
32 .options(load_only(
33 "summary", "description", "author",
34 "author_email", "maintainer", "maintainer_email",
35 "home_page", "download_url", "keywords", "platform",
36 "created"))
37 .options(lazyload("*"),
38 (joinedload(Release.project)
39 .load_only("normalized_name", "name")
40 .joinedload(Project.releases)
41 .load_only("version", "is_prerelease")),
42 joinedload(Release._classifiers).load_only("classifier"))
43 .distinct(Release.name)
44 .order_by(Release.name, Release._pypi_ordering.desc())
45 )
46 for release in windowed_query(releases, Release.name, 1000):
47 p = ProjectDocType.from_db(release)
48 p.full_clean()
49 yield p.to_dict(include_meta=True)
50
51
52 @search.command()
53 @click.pass_obj
54 def reindex(config, **kwargs):
55 """
56 Recreate the Search Index.
57 """
58 client = config.registry["elasticsearch.client"]
59 db = Session(bind=config.registry["sqlalchemy.engine"])
60 number_of_replicas = config.registry.get("elasticsearch.replicas", 0)
61 refresh_interval = config.registry.get("elasticsearch.interval", "1s")
62
63 # We use a randomly named index so that we can do a zero downtime reindex.
64 # Essentially we'll use a randomly named index which we will use until all
65 # of the data has been reindexed, at which point we'll point an alias at
66 # our randomly named index, and then delete the old randomly named index.
67
68 # Create the new index and associate all of our doc types with it.
69 index_base = config.registry["elasticsearch.index"]
70 random_token = binascii.hexlify(os.urandom(5)).decode("ascii")
71 new_index_name = "{}-{}".format(index_base, random_token)
72 doc_types = config.registry.get("search.doc_types", set())
73
74 # Create the new index with zero replicas and index refreshes disabled
75 # while we are bulk indexing.
76 new_index = get_index(
77 new_index_name,
78 doc_types,
79 using=client,
80 shards=config.registry.get("elasticsearch.shards", 1),
81 replicas=0,
82 interval="-1",
83 )
84
85 # From this point on, if any error occurs, we want to be able to delete our
86 # in progress index.
87 try:
88 db.execute("SET statement_timeout = '600s'")
89
90 for _ in parallel_bulk(client, _project_docs(db)):
91 pass
92 except:
93 new_index.delete()
94 raise
95 finally:
96 db.rollback()
97 db.close()
98
99 # Now that we've finished indexing all of our data we can optimize it and
100 # update the replicas and refresh intervals.
101 client.indices.forcemerge(index=new_index_name)
102 client.indices.put_settings(
103 index=new_index_name,
104 body={
105 "index": {
106 "number_of_replicas": number_of_replicas,
107 "refresh_interval": refresh_interval,
108 }
109 }
110 )
111
112 # Point the alias at our new randomly named index and delete the old index.
113 if client.indices.exists_alias(name=index_base):
114 to_delete = set()
115 actions = []
116 for name in client.indices.get_alias(name=index_base):
117 to_delete.add(name)
118 actions.append({"remove": {"index": name, "alias": index_base}})
119 actions.append({"add": {"index": new_index_name, "alias": index_base}})
120 client.indices.update_aliases({"actions": actions})
121 client.indices.delete(",".join(to_delete))
122 else:
123 client.indices.put_alias(name=index_base, index=new_index_name)
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/warehouse/cli/search/reindex.py b/warehouse/cli/search/reindex.py
--- a/warehouse/cli/search/reindex.py
+++ b/warehouse/cli/search/reindex.py
@@ -70,6 +70,7 @@
random_token = binascii.hexlify(os.urandom(5)).decode("ascii")
new_index_name = "{}-{}".format(index_base, random_token)
doc_types = config.registry.get("search.doc_types", set())
+ shards = config.registry.get("elasticsearch.shards", 1)
# Create the new index with zero replicas and index refreshes disabled
# while we are bulk indexing.
@@ -77,10 +78,11 @@
new_index_name,
doc_types,
using=client,
- shards=config.registry.get("elasticsearch.shards", 1),
+ shards=shards,
replicas=0,
interval="-1",
)
+ new_index.create(wait_for_active_shards=shards)
# From this point on, if any error occurs, we want to be able to delete our
# in progress index.
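
A plausible reading of this change: previously the index was only defined, so the first bulk request could auto-create it with dynamic mappings instead of the declared doc-type mappings, which would break classifier filtering; creating it explicitly (and waiting for its primary shards) guarantees the mappings exist before any document is indexed. A minimal sketch of the pattern, reusing the names from the patch above:

```python
# Sketch only: get_index, client, doc_types and shards as used in the patch above.
new_index = get_index(
    new_index_name,
    doc_types,
    using=client,
    shards=shards,
    replicas=0,
    interval="-1",
)
# Explicitly create the index so its mappings are in place before bulk indexing,
# blocking until `shards` primary shards are active.
new_index.create(wait_for_active_shards=shards)
```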
|
{"golden_diff": "diff --git a/warehouse/cli/search/reindex.py b/warehouse/cli/search/reindex.py\n--- a/warehouse/cli/search/reindex.py\n+++ b/warehouse/cli/search/reindex.py\n@@ -70,6 +70,7 @@\n random_token = binascii.hexlify(os.urandom(5)).decode(\"ascii\")\n new_index_name = \"{}-{}\".format(index_base, random_token)\n doc_types = config.registry.get(\"search.doc_types\", set())\n+ shards = config.registry.get(\"elasticsearch.shards\", 1)\n \n # Create the new index with zero replicas and index refreshes disabled\n # while we are bulk indexing.\n@@ -77,10 +78,11 @@\n new_index_name,\n doc_types,\n using=client,\n- shards=config.registry.get(\"elasticsearch.shards\", 1),\n+ shards=shards,\n replicas=0,\n interval=\"-1\",\n )\n+ new_index.create(wait_for_active_shards=shards)\n \n # From this point on, if any error occurs, we want to be able to delete our\n # in progress index.\n", "issue": "Search by topic broken ?\nBrowsing packages, then choosing the Internet / WWW9HTTP / Browser topics gives no result:\r\n\r\nhttps://pypi.org/search/?q=&o=&c=Topic+%3A%3A+Internet+%3A%3A+WWW%2FHTTP+%3A%3A+Browsers\r\n\r\nThere should be at least the [mechanoid package](https://pypi.org/project/mechanoid/)\r\n\r\nUsing firefox 50.1:0 on Ubuntu 16.04\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport binascii\nimport os\n\nimport click\n\nfrom elasticsearch.helpers import parallel_bulk\nfrom sqlalchemy.orm import lazyload, joinedload, load_only\n\nfrom warehouse.cli.search import search\nfrom warehouse.db import Session\nfrom warehouse.packaging.models import Release, Project\nfrom warehouse.packaging.search import Project as ProjectDocType\nfrom warehouse.search import get_index\nfrom warehouse.utils.db import windowed_query\n\n\ndef _project_docs(db):\n releases = (\n db.query(Release)\n .options(load_only(\n \"summary\", \"description\", \"author\",\n \"author_email\", \"maintainer\", \"maintainer_email\",\n \"home_page\", \"download_url\", \"keywords\", \"platform\",\n \"created\"))\n .options(lazyload(\"*\"),\n (joinedload(Release.project)\n .load_only(\"normalized_name\", \"name\")\n .joinedload(Project.releases)\n .load_only(\"version\", \"is_prerelease\")),\n joinedload(Release._classifiers).load_only(\"classifier\"))\n .distinct(Release.name)\n .order_by(Release.name, Release._pypi_ordering.desc())\n )\n for release in windowed_query(releases, Release.name, 1000):\n p = ProjectDocType.from_db(release)\n p.full_clean()\n yield p.to_dict(include_meta=True)\n\n\[email protected]()\[email protected]_obj\ndef reindex(config, **kwargs):\n \"\"\"\n Recreate the Search Index.\n \"\"\"\n client = config.registry[\"elasticsearch.client\"]\n db = Session(bind=config.registry[\"sqlalchemy.engine\"])\n number_of_replicas = config.registry.get(\"elasticsearch.replicas\", 0)\n refresh_interval = config.registry.get(\"elasticsearch.interval\", \"1s\")\n\n # We use a randomly named index so that we can do a zero downtime reindex.\n # Essentially we'll use a randomly named 
index which we will use until all\n # of the data has been reindexed, at which point we'll point an alias at\n # our randomly named index, and then delete the old randomly named index.\n\n # Create the new index and associate all of our doc types with it.\n index_base = config.registry[\"elasticsearch.index\"]\n random_token = binascii.hexlify(os.urandom(5)).decode(\"ascii\")\n new_index_name = \"{}-{}\".format(index_base, random_token)\n doc_types = config.registry.get(\"search.doc_types\", set())\n\n # Create the new index with zero replicas and index refreshes disabled\n # while we are bulk indexing.\n new_index = get_index(\n new_index_name,\n doc_types,\n using=client,\n shards=config.registry.get(\"elasticsearch.shards\", 1),\n replicas=0,\n interval=\"-1\",\n )\n\n # From this point on, if any error occurs, we want to be able to delete our\n # in progress index.\n try:\n db.execute(\"SET statement_timeout = '600s'\")\n\n for _ in parallel_bulk(client, _project_docs(db)):\n pass\n except:\n new_index.delete()\n raise\n finally:\n db.rollback()\n db.close()\n\n # Now that we've finished indexing all of our data we can optimize it and\n # update the replicas and refresh intervals.\n client.indices.forcemerge(index=new_index_name)\n client.indices.put_settings(\n index=new_index_name,\n body={\n \"index\": {\n \"number_of_replicas\": number_of_replicas,\n \"refresh_interval\": refresh_interval,\n }\n }\n )\n\n # Point the alias at our new randomly named index and delete the old index.\n if client.indices.exists_alias(name=index_base):\n to_delete = set()\n actions = []\n for name in client.indices.get_alias(name=index_base):\n to_delete.add(name)\n actions.append({\"remove\": {\"index\": name, \"alias\": index_base}})\n actions.append({\"add\": {\"index\": new_index_name, \"alias\": index_base}})\n client.indices.update_aliases({\"actions\": actions})\n client.indices.delete(\",\".join(to_delete))\n else:\n client.indices.put_alias(name=index_base, index=new_index_name)\n", "path": "warehouse/cli/search/reindex.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport binascii\nimport os\n\nimport click\n\nfrom elasticsearch.helpers import parallel_bulk\nfrom sqlalchemy.orm import lazyload, joinedload, load_only\n\nfrom warehouse.cli.search import search\nfrom warehouse.db import Session\nfrom warehouse.packaging.models import Release, Project\nfrom warehouse.packaging.search import Project as ProjectDocType\nfrom warehouse.search import get_index\nfrom warehouse.utils.db import windowed_query\n\n\ndef _project_docs(db):\n releases = (\n db.query(Release)\n .options(load_only(\n \"summary\", \"description\", \"author\",\n \"author_email\", \"maintainer\", \"maintainer_email\",\n \"home_page\", \"download_url\", \"keywords\", \"platform\",\n \"created\"))\n .options(lazyload(\"*\"),\n (joinedload(Release.project)\n .load_only(\"normalized_name\", \"name\")\n .joinedload(Project.releases)\n .load_only(\"version\", \"is_prerelease\")),\n 
joinedload(Release._classifiers).load_only(\"classifier\"))\n .distinct(Release.name)\n .order_by(Release.name, Release._pypi_ordering.desc())\n )\n for release in windowed_query(releases, Release.name, 1000):\n p = ProjectDocType.from_db(release)\n p.full_clean()\n yield p.to_dict(include_meta=True)\n\n\[email protected]()\[email protected]_obj\ndef reindex(config, **kwargs):\n \"\"\"\n Recreate the Search Index.\n \"\"\"\n client = config.registry[\"elasticsearch.client\"]\n db = Session(bind=config.registry[\"sqlalchemy.engine\"])\n number_of_replicas = config.registry.get(\"elasticsearch.replicas\", 0)\n refresh_interval = config.registry.get(\"elasticsearch.interval\", \"1s\")\n\n # We use a randomly named index so that we can do a zero downtime reindex.\n # Essentially we'll use a randomly named index which we will use until all\n # of the data has been reindexed, at which point we'll point an alias at\n # our randomly named index, and then delete the old randomly named index.\n\n # Create the new index and associate all of our doc types with it.\n index_base = config.registry[\"elasticsearch.index\"]\n random_token = binascii.hexlify(os.urandom(5)).decode(\"ascii\")\n new_index_name = \"{}-{}\".format(index_base, random_token)\n doc_types = config.registry.get(\"search.doc_types\", set())\n shards = config.registry.get(\"elasticsearch.shards\", 1)\n\n # Create the new index with zero replicas and index refreshes disabled\n # while we are bulk indexing.\n new_index = get_index(\n new_index_name,\n doc_types,\n using=client,\n shards=shards,\n replicas=0,\n interval=\"-1\",\n )\n new_index.create(wait_for_active_shards=shards)\n\n # From this point on, if any error occurs, we want to be able to delete our\n # in progress index.\n try:\n db.execute(\"SET statement_timeout = '600s'\")\n\n for _ in parallel_bulk(client, _project_docs(db)):\n pass\n except:\n new_index.delete()\n raise\n finally:\n db.rollback()\n db.close()\n\n # Now that we've finished indexing all of our data we can optimize it and\n # update the replicas and refresh intervals.\n client.indices.forcemerge(index=new_index_name)\n client.indices.put_settings(\n index=new_index_name,\n body={\n \"index\": {\n \"number_of_replicas\": number_of_replicas,\n \"refresh_interval\": refresh_interval,\n }\n }\n )\n\n # Point the alias at our new randomly named index and delete the old index.\n if client.indices.exists_alias(name=index_base):\n to_delete = set()\n actions = []\n for name in client.indices.get_alias(name=index_base):\n to_delete.add(name)\n actions.append({\"remove\": {\"index\": name, \"alias\": index_base}})\n actions.append({\"add\": {\"index\": new_index_name, \"alias\": index_base}})\n client.indices.update_aliases({\"actions\": actions})\n client.indices.delete(\",\".join(to_delete))\n else:\n client.indices.put_alias(name=index_base, index=new_index_name)\n", "path": "warehouse/cli/search/reindex.py"}]}
| 1,637 | 243 |
gh_patches_debug_29581
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-2709
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Food API: AttributeError: 'Event' object has no attribute 'member_registration'
Sentry Issue: [CONCREXIT-KG](https://sentry.io/organizations/thalia/issues/3768057031/?referrer=github_integration)
```
AttributeError: 'Event' object has no attribute 'member_registration'
(11 additional frame(s) were not displayed)
...
File "rest_framework/serializers.py", line 253, in data
self._data = self.to_representation(self.instance)
File "rest_framework/serializers.py", line 522, in to_representation
ret[field.field_name] = field.to_representation(attribute)
File "rest_framework/serializers.py", line 522, in to_representation
ret[field.field_name] = field.to_representation(attribute)
File "rest_framework/fields.py", line 1838, in to_representation
return method(value)
File "events/api/v2/serializers/event.py", line 83, in _registration_status
if self.context["request"].member and len(instance.member_registration) > 0:
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/pizzas/api/v2/views.py`
Content:
```
1 from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope
2 from rest_framework import filters as framework_filters
3 from rest_framework import status
4 from rest_framework.generics import (
5 CreateAPIView,
6 DestroyAPIView,
7 ListAPIView,
8 RetrieveAPIView,
9 UpdateAPIView,
10 get_object_or_404,
11 )
12 from rest_framework.response import Response
13
14 from payments.exceptions import PaymentError
15 from payments.services import delete_payment
16 from pizzas.api.v2 import filters
17 from pizzas.api.v2.serializers import (
18 FoodOrderCreateSerializer,
19 FoodOrderSerializer,
20 FoodOrderUpdateSerializer,
21 ProductSerializer,
22 )
23 from pizzas.api.v2.serializers.food_event import FoodEventSerializer
24 from pizzas.models import FoodEvent, FoodOrder, Product
25 from thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod
26
27
28 class FoodEventListView(ListAPIView):
29 """Returns an overview of all food events."""
30
31 serializer_class = FoodEventSerializer
32 queryset = FoodEvent.objects.all()
33 filter_backends = (
34 framework_filters.OrderingFilter,
35 filters.FoodEventDateFilterBackend,
36 )
37 ordering_fields = ("start", "end")
38 permission_classes = [
39 IsAuthenticatedOrTokenHasScope,
40 ]
41 required_scopes = ["food:read"]
42
43
44 class FoodEventDetailView(RetrieveAPIView):
45 """Returns one single food event."""
46
47 serializer_class = FoodEventSerializer
48 queryset = FoodEvent.objects.all()
49 permission_classes = [
50 IsAuthenticatedOrTokenHasScope,
51 ]
52 required_scopes = ["food:read"]
53
54
55 class FoodEventProductsListView(ListAPIView):
56 """Returns an overview of all products."""
57
58 serializer_class = ProductSerializer
59 queryset = Product.available_products.all()
60 filter_backends = (framework_filters.SearchFilter,)
61 search_fields = ("name",)
62 permission_classes = [
63 IsAuthenticatedOrTokenHasScope,
64 ]
65 required_scopes = ["food:read"]
66
67
68 class FoodEventOrderDetailView(
69 RetrieveAPIView, CreateAPIView, UpdateAPIView, DestroyAPIView
70 ):
71 """Returns details of a food order."""
72
73 permission_classes = [
74 IsAuthenticatedOrTokenHasScopeForMethod,
75 ]
76 required_scopes_per_method = {
77 "GET": ["food:read"],
78 "POST": ["food:order"],
79 "PUT": ["food:order"],
80 "PATCH": ["food:order"],
81 "DELETE": ["food:order"],
82 }
83
84 def get_serializer_class(self):
85 if self.request.method.lower() == "get":
86 return FoodOrderSerializer
87 if self.request.method.lower() == "post":
88 return FoodOrderCreateSerializer
89 return FoodOrderUpdateSerializer
90
91 def get_queryset(self):
92 return FoodOrder.objects.filter(food_event=self.food_event)
93
94 def get_object(self):
95 queryset = self.filter_queryset(self.get_queryset())
96 obj = get_object_or_404(queryset, member=self.request.member)
97
98 # May raise a permission denied
99 self.check_object_permissions(self.request, obj)
100
101 return obj
102
103 def dispatch(self, request, *args, **kwargs):
104 self.food_event = get_object_or_404(FoodEvent, pk=self.kwargs.get("pk"))
105 try:
106 return super().dispatch(request, *args, **kwargs)
107 except PaymentError as e:
108 return Response(
109 str(e),
110 status=status.HTTP_403_FORBIDDEN,
111 )
112
113 def update(self, request, *args, **kwargs):
114 instance = self.get_object()
115
116 if instance.payment:
117 delete_payment(instance, member=request.member, ignore_change_window=True)
118
119 super().update(request, *args, **kwargs)
120
121 return Response(
122 FoodOrderSerializer(instance, context=self.get_serializer_context()).data
123 )
124
125 def create(self, request, *args, **kwargs):
126 serializer = self.get_serializer(data=request.data)
127 serializer.is_valid(raise_exception=True)
128 self.perform_create(serializer)
129 return Response(
130 FoodOrderSerializer(
131 serializer.instance, context=self.get_serializer_context()
132 ).data,
133 status=status.HTTP_201_CREATED,
134 )
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/website/pizzas/api/v2/views.py b/website/pizzas/api/v2/views.py
--- a/website/pizzas/api/v2/views.py
+++ b/website/pizzas/api/v2/views.py
@@ -1,3 +1,5 @@
+from django.db.models import Prefetch
+
from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope
from rest_framework import filters as framework_filters
from rest_framework import status
@@ -11,6 +13,7 @@
)
from rest_framework.response import Response
+from events.models.event_registration import EventRegistration
from payments.exceptions import PaymentError
from payments.services import delete_payment
from pizzas.api.v2 import filters
@@ -45,12 +48,25 @@
"""Returns one single food event."""
serializer_class = FoodEventSerializer
- queryset = FoodEvent.objects.all()
permission_classes = [
IsAuthenticatedOrTokenHasScope,
]
required_scopes = ["food:read"]
+ def get_queryset(self):
+ events = FoodEvent.objects.all()
+ if self.request.member:
+ events = events.prefetch_related(
+ Prefetch(
+ "event__eventregistration_set",
+ to_attr="member_registration",
+ queryset=EventRegistration.objects.filter(
+ member=self.request.member
+ ).select_properties("queue_position"),
+ )
+ )
+ return events
+
class FoodEventProductsListView(ListAPIView):
"""Returns an overview of all products."""
|
{"golden_diff": "diff --git a/website/pizzas/api/v2/views.py b/website/pizzas/api/v2/views.py\n--- a/website/pizzas/api/v2/views.py\n+++ b/website/pizzas/api/v2/views.py\n@@ -1,3 +1,5 @@\n+from django.db.models import Prefetch\n+\n from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\n from rest_framework import filters as framework_filters\n from rest_framework import status\n@@ -11,6 +13,7 @@\n )\n from rest_framework.response import Response\n \n+from events.models.event_registration import EventRegistration\n from payments.exceptions import PaymentError\n from payments.services import delete_payment\n from pizzas.api.v2 import filters\n@@ -45,12 +48,25 @@\n \"\"\"Returns one single food event.\"\"\"\n \n serializer_class = FoodEventSerializer\n- queryset = FoodEvent.objects.all()\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n required_scopes = [\"food:read\"]\n \n+ def get_queryset(self):\n+ events = FoodEvent.objects.all()\n+ if self.request.member:\n+ events = events.prefetch_related(\n+ Prefetch(\n+ \"event__eventregistration_set\",\n+ to_attr=\"member_registration\",\n+ queryset=EventRegistration.objects.filter(\n+ member=self.request.member\n+ ).select_properties(\"queue_position\"),\n+ )\n+ )\n+ return events\n+\n \n class FoodEventProductsListView(ListAPIView):\n \"\"\"Returns an overview of all products.\"\"\"\n", "issue": "Food API: AttributeError: 'Event' object has no attribute 'member_registration'\nSentry Issue: [CONCREXIT-KG](https://sentry.io/organizations/thalia/issues/3768057031/?referrer=github_integration)\n\n```\nAttributeError: 'Event' object has no attribute 'member_registration'\n(11 additional frame(s) were not displayed)\n...\n File \"rest_framework/serializers.py\", line 253, in data\n self._data = self.to_representation(self.instance)\n File \"rest_framework/serializers.py\", line 522, in to_representation\n ret[field.field_name] = field.to_representation(attribute)\n File \"rest_framework/serializers.py\", line 522, in to_representation\n ret[field.field_name] = field.to_representation(attribute)\n File \"rest_framework/fields.py\", line 1838, in to_representation\n return method(value)\n File \"events/api/v2/serializers/event.py\", line 83, in _registration_status\n if self.context[\"request\"].member and len(instance.member_registration) > 0:\n```\n", "before_files": [{"content": "from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework import filters as framework_filters\nfrom rest_framework import status\nfrom rest_framework.generics import (\n CreateAPIView,\n DestroyAPIView,\n ListAPIView,\n RetrieveAPIView,\n UpdateAPIView,\n get_object_or_404,\n)\nfrom rest_framework.response import Response\n\nfrom payments.exceptions import PaymentError\nfrom payments.services import delete_payment\nfrom pizzas.api.v2 import filters\nfrom pizzas.api.v2.serializers import (\n FoodOrderCreateSerializer,\n FoodOrderSerializer,\n FoodOrderUpdateSerializer,\n ProductSerializer,\n)\nfrom pizzas.api.v2.serializers.food_event import FoodEventSerializer\nfrom pizzas.models import FoodEvent, FoodOrder, Product\nfrom thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod\n\n\nclass FoodEventListView(ListAPIView):\n \"\"\"Returns an overview of all food events.\"\"\"\n\n serializer_class = FoodEventSerializer\n queryset = FoodEvent.objects.all()\n filter_backends = (\n framework_filters.OrderingFilter,\n filters.FoodEventDateFilterBackend,\n )\n ordering_fields 
= (\"start\", \"end\")\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventDetailView(RetrieveAPIView):\n \"\"\"Returns one single food event.\"\"\"\n\n serializer_class = FoodEventSerializer\n queryset = FoodEvent.objects.all()\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventProductsListView(ListAPIView):\n \"\"\"Returns an overview of all products.\"\"\"\n\n serializer_class = ProductSerializer\n queryset = Product.available_products.all()\n filter_backends = (framework_filters.SearchFilter,)\n search_fields = (\"name\",)\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventOrderDetailView(\n RetrieveAPIView, CreateAPIView, UpdateAPIView, DestroyAPIView\n):\n \"\"\"Returns details of a food order.\"\"\"\n\n permission_classes = [\n IsAuthenticatedOrTokenHasScopeForMethod,\n ]\n required_scopes_per_method = {\n \"GET\": [\"food:read\"],\n \"POST\": [\"food:order\"],\n \"PUT\": [\"food:order\"],\n \"PATCH\": [\"food:order\"],\n \"DELETE\": [\"food:order\"],\n }\n\n def get_serializer_class(self):\n if self.request.method.lower() == \"get\":\n return FoodOrderSerializer\n if self.request.method.lower() == \"post\":\n return FoodOrderCreateSerializer\n return FoodOrderUpdateSerializer\n\n def get_queryset(self):\n return FoodOrder.objects.filter(food_event=self.food_event)\n\n def get_object(self):\n queryset = self.filter_queryset(self.get_queryset())\n obj = get_object_or_404(queryset, member=self.request.member)\n\n # May raise a permission denied\n self.check_object_permissions(self.request, obj)\n\n return obj\n\n def dispatch(self, request, *args, **kwargs):\n self.food_event = get_object_or_404(FoodEvent, pk=self.kwargs.get(\"pk\"))\n try:\n return super().dispatch(request, *args, **kwargs)\n except PaymentError as e:\n return Response(\n str(e),\n status=status.HTTP_403_FORBIDDEN,\n )\n\n def update(self, request, *args, **kwargs):\n instance = self.get_object()\n\n if instance.payment:\n delete_payment(instance, member=request.member, ignore_change_window=True)\n\n super().update(request, *args, **kwargs)\n\n return Response(\n FoodOrderSerializer(instance, context=self.get_serializer_context()).data\n )\n\n def create(self, request, *args, **kwargs):\n serializer = self.get_serializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n self.perform_create(serializer)\n return Response(\n FoodOrderSerializer(\n serializer.instance, context=self.get_serializer_context()\n ).data,\n status=status.HTTP_201_CREATED,\n )\n", "path": "website/pizzas/api/v2/views.py"}], "after_files": [{"content": "from django.db.models import Prefetch\n\nfrom oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework import filters as framework_filters\nfrom rest_framework import status\nfrom rest_framework.generics import (\n CreateAPIView,\n DestroyAPIView,\n ListAPIView,\n RetrieveAPIView,\n UpdateAPIView,\n get_object_or_404,\n)\nfrom rest_framework.response import Response\n\nfrom events.models.event_registration import EventRegistration\nfrom payments.exceptions import PaymentError\nfrom payments.services import delete_payment\nfrom pizzas.api.v2 import filters\nfrom pizzas.api.v2.serializers import (\n FoodOrderCreateSerializer,\n FoodOrderSerializer,\n FoodOrderUpdateSerializer,\n ProductSerializer,\n)\nfrom pizzas.api.v2.serializers.food_event 
import FoodEventSerializer\nfrom pizzas.models import FoodEvent, FoodOrder, Product\nfrom thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod\n\n\nclass FoodEventListView(ListAPIView):\n \"\"\"Returns an overview of all food events.\"\"\"\n\n serializer_class = FoodEventSerializer\n queryset = FoodEvent.objects.all()\n filter_backends = (\n framework_filters.OrderingFilter,\n filters.FoodEventDateFilterBackend,\n )\n ordering_fields = (\"start\", \"end\")\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventDetailView(RetrieveAPIView):\n \"\"\"Returns one single food event.\"\"\"\n\n serializer_class = FoodEventSerializer\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n required_scopes = [\"food:read\"]\n\n def get_queryset(self):\n events = FoodEvent.objects.all()\n if self.request.member:\n events = events.prefetch_related(\n Prefetch(\n \"event__eventregistration_set\",\n to_attr=\"member_registration\",\n queryset=EventRegistration.objects.filter(\n member=self.request.member\n ).select_properties(\"queue_position\"),\n )\n )\n return events\n\n\nclass FoodEventProductsListView(ListAPIView):\n \"\"\"Returns an overview of all products.\"\"\"\n\n serializer_class = ProductSerializer\n queryset = Product.available_products.all()\n filter_backends = (framework_filters.SearchFilter,)\n search_fields = (\"name\",)\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventOrderDetailView(\n RetrieveAPIView, CreateAPIView, UpdateAPIView, DestroyAPIView\n):\n \"\"\"Returns details of a food order.\"\"\"\n\n permission_classes = [\n IsAuthenticatedOrTokenHasScopeForMethod,\n ]\n required_scopes_per_method = {\n \"GET\": [\"food:read\"],\n \"POST\": [\"food:order\"],\n \"PUT\": [\"food:order\"],\n \"PATCH\": [\"food:order\"],\n \"DELETE\": [\"food:order\"],\n }\n\n def get_serializer_class(self):\n if self.request.method.lower() == \"get\":\n return FoodOrderSerializer\n if self.request.method.lower() == \"post\":\n return FoodOrderCreateSerializer\n return FoodOrderUpdateSerializer\n\n def get_queryset(self):\n return FoodOrder.objects.filter(food_event=self.food_event)\n\n def get_object(self):\n queryset = self.filter_queryset(self.get_queryset())\n obj = get_object_or_404(queryset, member=self.request.member)\n\n # May raise a permission denied\n self.check_object_permissions(self.request, obj)\n\n return obj\n\n def dispatch(self, request, *args, **kwargs):\n self.food_event = get_object_or_404(FoodEvent, pk=self.kwargs.get(\"pk\"))\n try:\n return super().dispatch(request, *args, **kwargs)\n except PaymentError as e:\n return Response(\n str(e),\n status=status.HTTP_403_FORBIDDEN,\n )\n\n def update(self, request, *args, **kwargs):\n instance = self.get_object()\n\n if instance.payment:\n delete_payment(instance, member=request.member, ignore_change_window=True)\n\n super().update(request, *args, **kwargs)\n\n return Response(\n FoodOrderSerializer(instance, context=self.get_serializer_context()).data\n )\n\n def create(self, request, *args, **kwargs):\n serializer = self.get_serializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n self.perform_create(serializer)\n return Response(\n FoodOrderSerializer(\n serializer.instance, context=self.get_serializer_context()\n ).data,\n status=status.HTTP_201_CREATED,\n )\n", "path": "website/pizzas/api/v2/views.py"}]}
| 1,675 | 325 |
gh_patches_debug_8269
|
rasdani/github-patches
|
git_diff
|
secdev__scapy-3616
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mistaken msglen in MQTTPublish message
### Brief description
I noticed that the StrLenField of value in MQTTPublish packets is defined as
```python
StrLenField("value", "",
length_from=lambda pkt: (pkt.underlayer.len -
pkt.length - 2)),
```
However, if the QoS of the publish message is 1 or 2, the msgid field is also present, so it should be pkt.underlayer.len - pkt.length - 4. I ran into this while trying to parse a packet that contains two MQTT Publish messages.

I tried to fix it with the code below; perhaps it could help.
```python
StrLenField("value", "",
            length_from=lambda pkt: pkt.underlayer.len - pkt.length - 2 if pkt.underlayer.QOS == 0
            else pkt.underlayer.len - pkt.length - 4)
```
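
As a quick sanity check on the arithmetic: the PUBLISH variable header is a 2-byte topic-length prefix, the topic itself, and, only for QoS 1 or 2, a 2-byte packet identifier, so the payload length falls out as below (a plain-Python sketch, not Scapy code):

```python
def publish_payload_len(remaining_len, topic_len, qos):
    # 2 bytes for the topic-length prefix, plus 2 bytes for msgid when QoS > 0
    overhead = 2 + (2 if qos in (1, 2) else 0)
    return remaining_len - topic_len - overhead

# topic b"a/b" (3 bytes), payload b"hello" (5 bytes), QoS 1:
# remaining length = 2 + 3 + 2 + 5 = 12  ->  12 - 3 - 4 = 5
assert publish_payload_len(12, 3, 1) == 5
```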
### Scapy version
2.4.5
### Python version
3.8
### Operating system
Win11
### Additional environment information
_No response_
### How to reproduce
Sniff a packet with two MQTT Publish messages......
### Actual result
_No response_
### Expected result
_No response_
### Related resources
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scapy/contrib/mqtt.py`
Content:
```
1 # This file is part of Scapy
2 # See http://www.secdev.org/projects/scapy for more information
3 # Copyright (C) Santiago Hernandez Ramos <[email protected]>
4 # This program is published under GPLv2 license
5
6 # scapy.contrib.description = Message Queuing Telemetry Transport (MQTT)
7 # scapy.contrib.status = loads
8
9 from scapy.packet import Packet, bind_layers
10 from scapy.fields import FieldLenField, BitEnumField, StrLenField, \
11 ShortField, ConditionalField, ByteEnumField, ByteField, PacketListField
12 from scapy.layers.inet import TCP
13 from scapy.error import Scapy_Exception
14 from scapy.compat import orb, chb
15 from scapy.volatile import RandNum
16 from scapy.config import conf
17
18
19 # CUSTOM FIELDS
20 # source: http://stackoverflow.com/a/43717630
21 class VariableFieldLenField(FieldLenField):
22 def addfield(self, pkt, s, val):
23 val = self.i2m(pkt, val)
24 data = []
25 while val:
26 if val > 127:
27 data.append(val & 127)
28 val //= 128
29 else:
30 data.append(val)
31 lastoffset = len(data) - 1
32 data = b"".join(chb(val | (0 if i == lastoffset else 128))
33 for i, val in enumerate(data))
34 return s + data
35 if len(data) > 3:
36 raise Scapy_Exception("%s: malformed length field" %
37 self.__class__.__name__)
38 # If val is None / 0
39 return s + b"\x00"
40
41 def getfield(self, pkt, s):
42 value = 0
43 for offset, curbyte in enumerate(s):
44 curbyte = orb(curbyte)
45 value += (curbyte & 127) * (128 ** offset)
46 if curbyte & 128 == 0:
47 return s[offset + 1:], value
48 if offset > 2:
49 raise Scapy_Exception("%s: malformed length field" %
50 self.__class__.__name__)
51
52 def randval(self):
53 return RandVariableFieldLen()
54
55
56 class RandVariableFieldLen(RandNum):
57 def __init__(self):
58 RandNum.__init__(self, 0, 268435455)
59
60
61 # LAYERS
62 CONTROL_PACKET_TYPE = {
63 1: 'CONNECT',
64 2: 'CONNACK',
65 3: 'PUBLISH',
66 4: 'PUBACK',
67 5: 'PUBREC',
68 6: 'PUBREL',
69 7: 'PUBCOMP',
70 8: 'SUBSCRIBE',
71 9: 'SUBACK',
72 10: 'UNSUBSCRIBE',
73 11: 'UNSUBACK',
74 12: 'PINGREQ',
75 13: 'PINGRESP',
76 14: 'DISCONNECT',
77 15: 'AUTH' # Added in v5.0
78 }
79
80
81 QOS_LEVEL = {
82 0: 'At most once delivery',
83 1: 'At least once delivery',
84 2: 'Exactly once delivery'
85 }
86
87
88 # source: http://stackoverflow.com/a/43722441
89 class MQTT(Packet):
90 name = "MQTT fixed header"
91 fields_desc = [
92 BitEnumField("type", 1, 4, CONTROL_PACKET_TYPE),
93 BitEnumField("DUP", 0, 1, {0: 'Disabled',
94 1: 'Enabled'}),
95 BitEnumField("QOS", 0, 2, QOS_LEVEL),
96 BitEnumField("RETAIN", 0, 1, {0: 'Disabled',
97 1: 'Enabled'}),
98 # Since the size of the len field depends on the next layer, we need
99 # to "cheat" with the length_of parameter and use adjust parameter to
100 # calculate the value.
101 VariableFieldLenField("len", None, length_of="len",
102 adjust=lambda pkt, x: len(pkt.payload),),
103 ]
104
105
106 PROTOCOL_LEVEL = {
107 3: 'v3.1',
108 4: 'v3.1.1',
109 5: 'v5.0'
110 }
111
112
113 class MQTTConnect(Packet):
114 name = "MQTT connect"
115 fields_desc = [
116 FieldLenField("length", None, length_of="protoname"),
117 StrLenField("protoname", "",
118 length_from=lambda pkt: pkt.length),
119 ByteEnumField("protolevel", 5, PROTOCOL_LEVEL),
120 BitEnumField("usernameflag", 0, 1, {0: 'Disabled',
121 1: 'Enabled'}),
122 BitEnumField("passwordflag", 0, 1, {0: 'Disabled',
123 1: 'Enabled'}),
124 BitEnumField("willretainflag", 0, 1, {0: 'Disabled',
125 1: 'Enabled'}),
126 BitEnumField("willQOSflag", 0, 2, QOS_LEVEL),
127 BitEnumField("willflag", 0, 1, {0: 'Disabled',
128 1: 'Enabled'}),
129 BitEnumField("cleansess", 0, 1, {0: 'Disabled',
130 1: 'Enabled'}),
131 BitEnumField("reserved", 0, 1, {0: 'Disabled',
132 1: 'Enabled'}),
133 ShortField("klive", 0),
134 FieldLenField("clientIdlen", None, length_of="clientId"),
135 StrLenField("clientId", "",
136 length_from=lambda pkt: pkt.clientIdlen),
137 # Payload with optional fields depending on the flags
138 ConditionalField(FieldLenField("wtoplen", None, length_of="willtopic"),
139 lambda pkt: pkt.willflag == 1),
140 ConditionalField(StrLenField("willtopic", "",
141 length_from=lambda pkt: pkt.wtoplen),
142 lambda pkt: pkt.willflag == 1),
143 ConditionalField(FieldLenField("wmsglen", None, length_of="willmsg"),
144 lambda pkt: pkt.willflag == 1),
145 ConditionalField(StrLenField("willmsg", "",
146 length_from=lambda pkt: pkt.wmsglen),
147 lambda pkt: pkt.willflag == 1),
148 ConditionalField(FieldLenField("userlen", None, length_of="username"),
149 lambda pkt: pkt.usernameflag == 1),
150 ConditionalField(StrLenField("username", "",
151 length_from=lambda pkt: pkt.userlen),
152 lambda pkt: pkt.usernameflag == 1),
153 ConditionalField(FieldLenField("passlen", None, length_of="password"),
154 lambda pkt: pkt.passwordflag == 1),
155 ConditionalField(StrLenField("password", "",
156 length_from=lambda pkt: pkt.passlen),
157 lambda pkt: pkt.passwordflag == 1),
158 ]
159
160
161 class MQTTDisconnect(Packet):
162 name = "MQTT disconnect"
163 fields_desc = []
164
165
166 RETURN_CODE = {
167 0: 'Connection Accepted',
168 1: 'Unacceptable protocol version',
169 2: 'Identifier rejected',
170 3: 'Server unavailable',
171 4: 'Bad username/password',
172 5: 'Not authorized'
173 }
174
175
176 class MQTTConnack(Packet):
177 name = "MQTT connack"
178 fields_desc = [
179 ByteField("sessPresentFlag", 0),
180 ByteEnumField("retcode", 0, RETURN_CODE),
181 # this package has not payload
182 ]
183
184
185 class MQTTPublish(Packet):
186 name = "MQTT publish"
187 fields_desc = [
188 FieldLenField("length", None, length_of="topic"),
189 StrLenField("topic", "",
190 length_from=lambda pkt: pkt.length),
191 ConditionalField(ShortField("msgid", None),
192 lambda pkt: (pkt.underlayer.QOS == 1 or
193 pkt.underlayer.QOS == 2)),
194 StrLenField("value", "",
195 length_from=lambda pkt: (pkt.underlayer.len -
196 pkt.length - 2)),
197 ]
198
199
200 class MQTTPuback(Packet):
201 name = "MQTT puback"
202 fields_desc = [
203 ShortField("msgid", None),
204 ]
205
206
207 class MQTTPubrec(Packet):
208 name = "MQTT pubrec"
209 fields_desc = [
210 ShortField("msgid", None),
211 ]
212
213
214 class MQTTPubrel(Packet):
215 name = "MQTT pubrel"
216 fields_desc = [
217 ShortField("msgid", None),
218 ]
219
220
221 class MQTTPubcomp(Packet):
222 name = "MQTT pubcomp"
223 fields_desc = [
224 ShortField("msgid", None),
225 ]
226
227
228 class MQTTTopic(Packet):
229 name = "MQTT topic"
230 fields_desc = [
231 FieldLenField("length", None, length_of="topic"),
232 StrLenField("topic", "", length_from=lambda pkt:pkt.length)
233 ]
234
235 def guess_payload_class(self, payload):
236 return conf.padding_layer
237
238
239 class MQTTTopicQOS(MQTTTopic):
240 fields_desc = MQTTTopic.fields_desc + [ByteEnumField("QOS", 0, QOS_LEVEL)]
241
242
243 class MQTTSubscribe(Packet):
244 name = "MQTT subscribe"
245 fields_desc = [
246 ShortField("msgid", None),
247 PacketListField("topics", [], pkt_cls=MQTTTopicQOS)
248 ]
249
250
251 ALLOWED_RETURN_CODE = {
252 0: 'Success',
253 1: 'Success',
254 2: 'Success',
255 128: 'Failure'
256 }
257
258
259 class MQTTSuback(Packet):
260 name = "MQTT suback"
261 fields_desc = [
262 ShortField("msgid", None),
263 ByteEnumField("retcode", None, ALLOWED_RETURN_CODE)
264 ]
265
266
267 class MQTTUnsubscribe(Packet):
268 name = "MQTT unsubscribe"
269 fields_desc = [
270 ShortField("msgid", None),
271 PacketListField("topics", [], pkt_cls=MQTTTopic)
272 ]
273
274
275 class MQTTUnsuback(Packet):
276 name = "MQTT unsuback"
277 fields_desc = [
278 ShortField("msgid", None)
279 ]
280
281
282 # LAYERS BINDINGS
283
284 bind_layers(TCP, MQTT, sport=1883)
285 bind_layers(TCP, MQTT, dport=1883)
286 bind_layers(MQTT, MQTTConnect, type=1)
287 bind_layers(MQTT, MQTTConnack, type=2)
288 bind_layers(MQTT, MQTTPublish, type=3)
289 bind_layers(MQTT, MQTTPuback, type=4)
290 bind_layers(MQTT, MQTTPubrec, type=5)
291 bind_layers(MQTT, MQTTPubrel, type=6)
292 bind_layers(MQTT, MQTTPubcomp, type=7)
293 bind_layers(MQTT, MQTTSubscribe, type=8)
294 bind_layers(MQTT, MQTTSuback, type=9)
295 bind_layers(MQTT, MQTTUnsubscribe, type=10)
296 bind_layers(MQTT, MQTTUnsuback, type=11)
297 bind_layers(MQTT, MQTTDisconnect, type=14)
298 bind_layers(MQTTConnect, MQTT)
299 bind_layers(MQTTConnack, MQTT)
300 bind_layers(MQTTPublish, MQTT)
301 bind_layers(MQTTPuback, MQTT)
302 bind_layers(MQTTPubrec, MQTT)
303 bind_layers(MQTTPubrel, MQTT)
304 bind_layers(MQTTPubcomp, MQTT)
305 bind_layers(MQTTSubscribe, MQTT)
306 bind_layers(MQTTSuback, MQTT)
307 bind_layers(MQTTUnsubscribe, MQTT)
308 bind_layers(MQTTUnsuback, MQTT)
309 bind_layers(MQTTDisconnect, MQTT)
310
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scapy/contrib/mqtt.py b/scapy/contrib/mqtt.py
--- a/scapy/contrib/mqtt.py
+++ b/scapy/contrib/mqtt.py
@@ -192,8 +192,9 @@
lambda pkt: (pkt.underlayer.QOS == 1 or
pkt.underlayer.QOS == 2)),
StrLenField("value", "",
- length_from=lambda pkt: (pkt.underlayer.len -
- pkt.length - 2)),
+ length_from=lambda pkt: pkt.underlayer.len - pkt.length - 2
+ if pkt.underlayer.QOS == 0 else
+ pkt.underlayer.len - pkt.length - 4)
]
|
{"golden_diff": "diff --git a/scapy/contrib/mqtt.py b/scapy/contrib/mqtt.py\n--- a/scapy/contrib/mqtt.py\n+++ b/scapy/contrib/mqtt.py\n@@ -192,8 +192,9 @@\n lambda pkt: (pkt.underlayer.QOS == 1 or\n pkt.underlayer.QOS == 2)),\n StrLenField(\"value\", \"\",\n- length_from=lambda pkt: (pkt.underlayer.len -\n- pkt.length - 2)),\n+ length_from=lambda pkt: pkt.underlayer.len - pkt.length - 2\n+ if pkt.underlayer.QOS == 0 else\n+ pkt.underlayer.len - pkt.length - 4)\n ]\n", "issue": "mistaken msglen in MQTTPublish message\n### Brief description\n\nI noticed that the StrLenField of value in MQTTPublish packets is defined as\r\n```python\r\nStrLenField(\"value\", \"\",\r\n length_from=lambda pkt: (pkt.underlayer.len -\r\n pkt.length - 2)),\r\n```\r\nHowever, if the QoS of the publish message is 1 or 2, because of the field of msgid, I guess it should be pkt.underlayer.len - pkt.length - 4, as I met this problem when I was trying to parse a packet that contains two MQTT Publish messages. \r\n\r\nI tried to fix it with codes as followed. Perhaps it could help. \r\n```python\r\nStrLenField(\"value\", \"\",\r\n length_from=lambda pkt: pkt.underlayer.len -pkt.length - 2 if pkt.underlayer.QOS == 0 \r\n else pkt.underlayer.len -pkt.length - 4)\r\n```\r\n\r\n\n\n### Scapy version\n\n2.4.5\n\n### Python version\n\n3.8\n\n### Operating system\n\nWin11\n\n### Additional environment information\n\n_No response_\n\n### How to reproduce\n\nSniff a packet with two MQTT Publish messages......\n\n### Actual result\n\n_No response_\n\n### Expected result\n\n_No response_\n\n### Related resources\n\n_No response_\n", "before_files": [{"content": "# This file is part of Scapy\n# See http://www.secdev.org/projects/scapy for more information\n# Copyright (C) Santiago Hernandez Ramos <[email protected]>\n# This program is published under GPLv2 license\n\n# scapy.contrib.description = Message Queuing Telemetry Transport (MQTT)\n# scapy.contrib.status = loads\n\nfrom scapy.packet import Packet, bind_layers\nfrom scapy.fields import FieldLenField, BitEnumField, StrLenField, \\\n ShortField, ConditionalField, ByteEnumField, ByteField, PacketListField\nfrom scapy.layers.inet import TCP\nfrom scapy.error import Scapy_Exception\nfrom scapy.compat import orb, chb\nfrom scapy.volatile import RandNum\nfrom scapy.config import conf\n\n\n# CUSTOM FIELDS\n# source: http://stackoverflow.com/a/43717630\nclass VariableFieldLenField(FieldLenField):\n def addfield(self, pkt, s, val):\n val = self.i2m(pkt, val)\n data = []\n while val:\n if val > 127:\n data.append(val & 127)\n val //= 128\n else:\n data.append(val)\n lastoffset = len(data) - 1\n data = b\"\".join(chb(val | (0 if i == lastoffset else 128))\n for i, val in enumerate(data))\n return s + data\n if len(data) > 3:\n raise Scapy_Exception(\"%s: malformed length field\" %\n self.__class__.__name__)\n # If val is None / 0\n return s + b\"\\x00\"\n\n def getfield(self, pkt, s):\n value = 0\n for offset, curbyte in enumerate(s):\n curbyte = orb(curbyte)\n value += (curbyte & 127) * (128 ** offset)\n if curbyte & 128 == 0:\n return s[offset + 1:], value\n if offset > 2:\n raise Scapy_Exception(\"%s: malformed length field\" %\n self.__class__.__name__)\n\n def randval(self):\n return RandVariableFieldLen()\n\n\nclass RandVariableFieldLen(RandNum):\n def __init__(self):\n RandNum.__init__(self, 0, 268435455)\n\n\n# LAYERS\nCONTROL_PACKET_TYPE = {\n 1: 'CONNECT',\n 2: 'CONNACK',\n 3: 'PUBLISH',\n 4: 'PUBACK',\n 5: 'PUBREC',\n 6: 'PUBREL',\n 7: 'PUBCOMP',\n 8: 'SUBSCRIBE',\n 9: 
'SUBACK',\n 10: 'UNSUBSCRIBE',\n 11: 'UNSUBACK',\n 12: 'PINGREQ',\n 13: 'PINGRESP',\n 14: 'DISCONNECT',\n 15: 'AUTH' # Added in v5.0\n}\n\n\nQOS_LEVEL = {\n 0: 'At most once delivery',\n 1: 'At least once delivery',\n 2: 'Exactly once delivery'\n}\n\n\n# source: http://stackoverflow.com/a/43722441\nclass MQTT(Packet):\n name = \"MQTT fixed header\"\n fields_desc = [\n BitEnumField(\"type\", 1, 4, CONTROL_PACKET_TYPE),\n BitEnumField(\"DUP\", 0, 1, {0: 'Disabled',\n 1: 'Enabled'}),\n BitEnumField(\"QOS\", 0, 2, QOS_LEVEL),\n BitEnumField(\"RETAIN\", 0, 1, {0: 'Disabled',\n 1: 'Enabled'}),\n # Since the size of the len field depends on the next layer, we need\n # to \"cheat\" with the length_of parameter and use adjust parameter to\n # calculate the value.\n VariableFieldLenField(\"len\", None, length_of=\"len\",\n adjust=lambda pkt, x: len(pkt.payload),),\n ]\n\n\nPROTOCOL_LEVEL = {\n 3: 'v3.1',\n 4: 'v3.1.1',\n 5: 'v5.0'\n}\n\n\nclass MQTTConnect(Packet):\n name = \"MQTT connect\"\n fields_desc = [\n FieldLenField(\"length\", None, length_of=\"protoname\"),\n StrLenField(\"protoname\", \"\",\n length_from=lambda pkt: pkt.length),\n ByteEnumField(\"protolevel\", 5, PROTOCOL_LEVEL),\n BitEnumField(\"usernameflag\", 0, 1, {0: 'Disabled',\n 1: 'Enabled'}),\n BitEnumField(\"passwordflag\", 0, 1, {0: 'Disabled',\n 1: 'Enabled'}),\n BitEnumField(\"willretainflag\", 0, 1, {0: 'Disabled',\n 1: 'Enabled'}),\n BitEnumField(\"willQOSflag\", 0, 2, QOS_LEVEL),\n BitEnumField(\"willflag\", 0, 1, {0: 'Disabled',\n 1: 'Enabled'}),\n BitEnumField(\"cleansess\", 0, 1, {0: 'Disabled',\n 1: 'Enabled'}),\n BitEnumField(\"reserved\", 0, 1, {0: 'Disabled',\n 1: 'Enabled'}),\n ShortField(\"klive\", 0),\n FieldLenField(\"clientIdlen\", None, length_of=\"clientId\"),\n StrLenField(\"clientId\", \"\",\n length_from=lambda pkt: pkt.clientIdlen),\n # Payload with optional fields depending on the flags\n ConditionalField(FieldLenField(\"wtoplen\", None, length_of=\"willtopic\"),\n lambda pkt: pkt.willflag == 1),\n ConditionalField(StrLenField(\"willtopic\", \"\",\n length_from=lambda pkt: pkt.wtoplen),\n lambda pkt: pkt.willflag == 1),\n ConditionalField(FieldLenField(\"wmsglen\", None, length_of=\"willmsg\"),\n lambda pkt: pkt.willflag == 1),\n ConditionalField(StrLenField(\"willmsg\", \"\",\n length_from=lambda pkt: pkt.wmsglen),\n lambda pkt: pkt.willflag == 1),\n ConditionalField(FieldLenField(\"userlen\", None, length_of=\"username\"),\n lambda pkt: pkt.usernameflag == 1),\n ConditionalField(StrLenField(\"username\", \"\",\n length_from=lambda pkt: pkt.userlen),\n lambda pkt: pkt.usernameflag == 1),\n ConditionalField(FieldLenField(\"passlen\", None, length_of=\"password\"),\n lambda pkt: pkt.passwordflag == 1),\n ConditionalField(StrLenField(\"password\", \"\",\n length_from=lambda pkt: pkt.passlen),\n lambda pkt: pkt.passwordflag == 1),\n ]\n\n\nclass MQTTDisconnect(Packet):\n name = \"MQTT disconnect\"\n fields_desc = []\n\n\nRETURN_CODE = {\n 0: 'Connection Accepted',\n 1: 'Unacceptable protocol version',\n 2: 'Identifier rejected',\n 3: 'Server unavailable',\n 4: 'Bad username/password',\n 5: 'Not authorized'\n}\n\n\nclass MQTTConnack(Packet):\n name = \"MQTT connack\"\n fields_desc = [\n ByteField(\"sessPresentFlag\", 0),\n ByteEnumField(\"retcode\", 0, RETURN_CODE),\n # this package has not payload\n ]\n\n\nclass MQTTPublish(Packet):\n name = \"MQTT publish\"\n fields_desc = [\n FieldLenField(\"length\", None, length_of=\"topic\"),\n StrLenField(\"topic\", \"\",\n length_from=lambda pkt: pkt.length),\n 
ConditionalField(ShortField(\"msgid\", None),\n lambda pkt: (pkt.underlayer.QOS == 1 or\n pkt.underlayer.QOS == 2)),\n StrLenField(\"value\", \"\",\n length_from=lambda pkt: (pkt.underlayer.len -\n pkt.length - 2)),\n ]\n\n\nclass MQTTPuback(Packet):\n name = \"MQTT puback\"\n fields_desc = [\n ShortField(\"msgid\", None),\n ]\n\n\nclass MQTTPubrec(Packet):\n name = \"MQTT pubrec\"\n fields_desc = [\n ShortField(\"msgid\", None),\n ]\n\n\nclass MQTTPubrel(Packet):\n name = \"MQTT pubrel\"\n fields_desc = [\n ShortField(\"msgid\", None),\n ]\n\n\nclass MQTTPubcomp(Packet):\n name = \"MQTT pubcomp\"\n fields_desc = [\n ShortField(\"msgid\", None),\n ]\n\n\nclass MQTTTopic(Packet):\n name = \"MQTT topic\"\n fields_desc = [\n FieldLenField(\"length\", None, length_of=\"topic\"),\n StrLenField(\"topic\", \"\", length_from=lambda pkt:pkt.length)\n ]\n\n def guess_payload_class(self, payload):\n return conf.padding_layer\n\n\nclass MQTTTopicQOS(MQTTTopic):\n fields_desc = MQTTTopic.fields_desc + [ByteEnumField(\"QOS\", 0, QOS_LEVEL)]\n\n\nclass MQTTSubscribe(Packet):\n name = \"MQTT subscribe\"\n fields_desc = [\n ShortField(\"msgid\", None),\n PacketListField(\"topics\", [], pkt_cls=MQTTTopicQOS)\n ]\n\n\nALLOWED_RETURN_CODE = {\n 0: 'Success',\n 1: 'Success',\n 2: 'Success',\n 128: 'Failure'\n}\n\n\nclass MQTTSuback(Packet):\n name = \"MQTT suback\"\n fields_desc = [\n ShortField(\"msgid\", None),\n ByteEnumField(\"retcode\", None, ALLOWED_RETURN_CODE)\n ]\n\n\nclass MQTTUnsubscribe(Packet):\n name = \"MQTT unsubscribe\"\n fields_desc = [\n ShortField(\"msgid\", None),\n PacketListField(\"topics\", [], pkt_cls=MQTTTopic)\n ]\n\n\nclass MQTTUnsuback(Packet):\n name = \"MQTT unsuback\"\n fields_desc = [\n ShortField(\"msgid\", None)\n ]\n\n\n# LAYERS BINDINGS\n\nbind_layers(TCP, MQTT, sport=1883)\nbind_layers(TCP, MQTT, dport=1883)\nbind_layers(MQTT, MQTTConnect, type=1)\nbind_layers(MQTT, MQTTConnack, type=2)\nbind_layers(MQTT, MQTTPublish, type=3)\nbind_layers(MQTT, MQTTPuback, type=4)\nbind_layers(MQTT, MQTTPubrec, type=5)\nbind_layers(MQTT, MQTTPubrel, type=6)\nbind_layers(MQTT, MQTTPubcomp, type=7)\nbind_layers(MQTT, MQTTSubscribe, type=8)\nbind_layers(MQTT, MQTTSuback, type=9)\nbind_layers(MQTT, MQTTUnsubscribe, type=10)\nbind_layers(MQTT, MQTTUnsuback, type=11)\nbind_layers(MQTT, MQTTDisconnect, type=14)\nbind_layers(MQTTConnect, MQTT)\nbind_layers(MQTTConnack, MQTT)\nbind_layers(MQTTPublish, MQTT)\nbind_layers(MQTTPuback, MQTT)\nbind_layers(MQTTPubrec, MQTT)\nbind_layers(MQTTPubrel, MQTT)\nbind_layers(MQTTPubcomp, MQTT)\nbind_layers(MQTTSubscribe, MQTT)\nbind_layers(MQTTSuback, MQTT)\nbind_layers(MQTTUnsubscribe, MQTT)\nbind_layers(MQTTUnsuback, MQTT)\nbind_layers(MQTTDisconnect, MQTT)\n", "path": "scapy/contrib/mqtt.py"}], "after_files": [{"content": "# This file is part of Scapy\n# See http://www.secdev.org/projects/scapy for more information\n# Copyright (C) Santiago Hernandez Ramos <[email protected]>\n# This program is published under GPLv2 license\n\n# scapy.contrib.description = Message Queuing Telemetry Transport (MQTT)\n# scapy.contrib.status = loads\n\nfrom scapy.packet import Packet, bind_layers\nfrom scapy.fields import FieldLenField, BitEnumField, StrLenField, \\\n ShortField, ConditionalField, ByteEnumField, ByteField, PacketListField\nfrom scapy.layers.inet import TCP\nfrom scapy.error import Scapy_Exception\nfrom scapy.compat import orb, chb\nfrom scapy.volatile import RandNum\nfrom scapy.config import conf\n\n\n# CUSTOM FIELDS\n# source: 
http://stackoverflow.com/a/43717630\nclass VariableFieldLenField(FieldLenField):\n def addfield(self, pkt, s, val):\n val = self.i2m(pkt, val)\n data = []\n while val:\n if val > 127:\n data.append(val & 127)\n val //= 128\n else:\n data.append(val)\n lastoffset = len(data) - 1\n data = b\"\".join(chb(val | (0 if i == lastoffset else 128))\n for i, val in enumerate(data))\n return s + data\n if len(data) > 3:\n raise Scapy_Exception(\"%s: malformed length field\" %\n self.__class__.__name__)\n # If val is None / 0\n return s + b\"\\x00\"\n\n def getfield(self, pkt, s):\n value = 0\n for offset, curbyte in enumerate(s):\n curbyte = orb(curbyte)\n value += (curbyte & 127) * (128 ** offset)\n if curbyte & 128 == 0:\n return s[offset + 1:], value\n if offset > 2:\n raise Scapy_Exception(\"%s: malformed length field\" %\n self.__class__.__name__)\n\n def randval(self):\n return RandVariableFieldLen()\n\n\nclass RandVariableFieldLen(RandNum):\n def __init__(self):\n RandNum.__init__(self, 0, 268435455)\n\n\n# LAYERS\nCONTROL_PACKET_TYPE = {\n 1: 'CONNECT',\n 2: 'CONNACK',\n 3: 'PUBLISH',\n 4: 'PUBACK',\n 5: 'PUBREC',\n 6: 'PUBREL',\n 7: 'PUBCOMP',\n 8: 'SUBSCRIBE',\n 9: 'SUBACK',\n 10: 'UNSUBSCRIBE',\n 11: 'UNSUBACK',\n 12: 'PINGREQ',\n 13: 'PINGRESP',\n 14: 'DISCONNECT',\n 15: 'AUTH' # Added in v5.0\n}\n\n\nQOS_LEVEL = {\n 0: 'At most once delivery',\n 1: 'At least once delivery',\n 2: 'Exactly once delivery'\n}\n\n\n# source: http://stackoverflow.com/a/43722441\nclass MQTT(Packet):\n name = \"MQTT fixed header\"\n fields_desc = [\n BitEnumField(\"type\", 1, 4, CONTROL_PACKET_TYPE),\n BitEnumField(\"DUP\", 0, 1, {0: 'Disabled',\n 1: 'Enabled'}),\n BitEnumField(\"QOS\", 0, 2, QOS_LEVEL),\n BitEnumField(\"RETAIN\", 0, 1, {0: 'Disabled',\n 1: 'Enabled'}),\n # Since the size of the len field depends on the next layer, we need\n # to \"cheat\" with the length_of parameter and use adjust parameter to\n # calculate the value.\n VariableFieldLenField(\"len\", None, length_of=\"len\",\n adjust=lambda pkt, x: len(pkt.payload),),\n ]\n\n\nPROTOCOL_LEVEL = {\n 3: 'v3.1',\n 4: 'v3.1.1',\n 5: 'v5.0'\n}\n\n\nclass MQTTConnect(Packet):\n name = \"MQTT connect\"\n fields_desc = [\n FieldLenField(\"length\", None, length_of=\"protoname\"),\n StrLenField(\"protoname\", \"\",\n length_from=lambda pkt: pkt.length),\n ByteEnumField(\"protolevel\", 5, PROTOCOL_LEVEL),\n BitEnumField(\"usernameflag\", 0, 1, {0: 'Disabled',\n 1: 'Enabled'}),\n BitEnumField(\"passwordflag\", 0, 1, {0: 'Disabled',\n 1: 'Enabled'}),\n BitEnumField(\"willretainflag\", 0, 1, {0: 'Disabled',\n 1: 'Enabled'}),\n BitEnumField(\"willQOSflag\", 0, 2, QOS_LEVEL),\n BitEnumField(\"willflag\", 0, 1, {0: 'Disabled',\n 1: 'Enabled'}),\n BitEnumField(\"cleansess\", 0, 1, {0: 'Disabled',\n 1: 'Enabled'}),\n BitEnumField(\"reserved\", 0, 1, {0: 'Disabled',\n 1: 'Enabled'}),\n ShortField(\"klive\", 0),\n FieldLenField(\"clientIdlen\", None, length_of=\"clientId\"),\n StrLenField(\"clientId\", \"\",\n length_from=lambda pkt: pkt.clientIdlen),\n # Payload with optional fields depending on the flags\n ConditionalField(FieldLenField(\"wtoplen\", None, length_of=\"willtopic\"),\n lambda pkt: pkt.willflag == 1),\n ConditionalField(StrLenField(\"willtopic\", \"\",\n length_from=lambda pkt: pkt.wtoplen),\n lambda pkt: pkt.willflag == 1),\n ConditionalField(FieldLenField(\"wmsglen\", None, length_of=\"willmsg\"),\n lambda pkt: pkt.willflag == 1),\n ConditionalField(StrLenField(\"willmsg\", \"\",\n length_from=lambda pkt: pkt.wmsglen),\n lambda pkt: pkt.willflag 
== 1),\n ConditionalField(FieldLenField(\"userlen\", None, length_of=\"username\"),\n lambda pkt: pkt.usernameflag == 1),\n ConditionalField(StrLenField(\"username\", \"\",\n length_from=lambda pkt: pkt.userlen),\n lambda pkt: pkt.usernameflag == 1),\n ConditionalField(FieldLenField(\"passlen\", None, length_of=\"password\"),\n lambda pkt: pkt.passwordflag == 1),\n ConditionalField(StrLenField(\"password\", \"\",\n length_from=lambda pkt: pkt.passlen),\n lambda pkt: pkt.passwordflag == 1),\n ]\n\n\nclass MQTTDisconnect(Packet):\n name = \"MQTT disconnect\"\n fields_desc = []\n\n\nRETURN_CODE = {\n 0: 'Connection Accepted',\n 1: 'Unacceptable protocol version',\n 2: 'Identifier rejected',\n 3: 'Server unavailable',\n 4: 'Bad username/password',\n 5: 'Not authorized'\n}\n\n\nclass MQTTConnack(Packet):\n name = \"MQTT connack\"\n fields_desc = [\n ByteField(\"sessPresentFlag\", 0),\n ByteEnumField(\"retcode\", 0, RETURN_CODE),\n # this package has not payload\n ]\n\n\nclass MQTTPublish(Packet):\n name = \"MQTT publish\"\n fields_desc = [\n FieldLenField(\"length\", None, length_of=\"topic\"),\n StrLenField(\"topic\", \"\",\n length_from=lambda pkt: pkt.length),\n ConditionalField(ShortField(\"msgid\", None),\n lambda pkt: (pkt.underlayer.QOS == 1 or\n pkt.underlayer.QOS == 2)),\n StrLenField(\"value\", \"\",\n length_from=lambda pkt: pkt.underlayer.len - pkt.length - 2\n if pkt.underlayer.QOS == 0 else\n pkt.underlayer.len - pkt.length - 4)\n ]\n\n\nclass MQTTPuback(Packet):\n name = \"MQTT puback\"\n fields_desc = [\n ShortField(\"msgid\", None),\n ]\n\n\nclass MQTTPubrec(Packet):\n name = \"MQTT pubrec\"\n fields_desc = [\n ShortField(\"msgid\", None),\n ]\n\n\nclass MQTTPubrel(Packet):\n name = \"MQTT pubrel\"\n fields_desc = [\n ShortField(\"msgid\", None),\n ]\n\n\nclass MQTTPubcomp(Packet):\n name = \"MQTT pubcomp\"\n fields_desc = [\n ShortField(\"msgid\", None),\n ]\n\n\nclass MQTTTopic(Packet):\n name = \"MQTT topic\"\n fields_desc = [\n FieldLenField(\"length\", None, length_of=\"topic\"),\n StrLenField(\"topic\", \"\", length_from=lambda pkt:pkt.length)\n ]\n\n def guess_payload_class(self, payload):\n return conf.padding_layer\n\n\nclass MQTTTopicQOS(MQTTTopic):\n fields_desc = MQTTTopic.fields_desc + [ByteEnumField(\"QOS\", 0, QOS_LEVEL)]\n\n\nclass MQTTSubscribe(Packet):\n name = \"MQTT subscribe\"\n fields_desc = [\n ShortField(\"msgid\", None),\n PacketListField(\"topics\", [], pkt_cls=MQTTTopicQOS)\n ]\n\n\nALLOWED_RETURN_CODE = {\n 0: 'Success',\n 1: 'Success',\n 2: 'Success',\n 128: 'Failure'\n}\n\n\nclass MQTTSuback(Packet):\n name = \"MQTT suback\"\n fields_desc = [\n ShortField(\"msgid\", None),\n ByteEnumField(\"retcode\", None, ALLOWED_RETURN_CODE)\n ]\n\n\nclass MQTTUnsubscribe(Packet):\n name = \"MQTT unsubscribe\"\n fields_desc = [\n ShortField(\"msgid\", None),\n PacketListField(\"topics\", [], pkt_cls=MQTTTopic)\n ]\n\n\nclass MQTTUnsuback(Packet):\n name = \"MQTT unsuback\"\n fields_desc = [\n ShortField(\"msgid\", None)\n ]\n\n\n# LAYERS BINDINGS\n\nbind_layers(TCP, MQTT, sport=1883)\nbind_layers(TCP, MQTT, dport=1883)\nbind_layers(MQTT, MQTTConnect, type=1)\nbind_layers(MQTT, MQTTConnack, type=2)\nbind_layers(MQTT, MQTTPublish, type=3)\nbind_layers(MQTT, MQTTPuback, type=4)\nbind_layers(MQTT, MQTTPubrec, type=5)\nbind_layers(MQTT, MQTTPubrel, type=6)\nbind_layers(MQTT, MQTTPubcomp, type=7)\nbind_layers(MQTT, MQTTSubscribe, type=8)\nbind_layers(MQTT, MQTTSuback, type=9)\nbind_layers(MQTT, MQTTUnsubscribe, type=10)\nbind_layers(MQTT, MQTTUnsuback, 
type=11)\nbind_layers(MQTT, MQTTDisconnect, type=14)\nbind_layers(MQTTConnect, MQTT)\nbind_layers(MQTTConnack, MQTT)\nbind_layers(MQTTPublish, MQTT)\nbind_layers(MQTTPuback, MQTT)\nbind_layers(MQTTPubrec, MQTT)\nbind_layers(MQTTPubrel, MQTT)\nbind_layers(MQTTPubcomp, MQTT)\nbind_layers(MQTTSubscribe, MQTT)\nbind_layers(MQTTSuback, MQTT)\nbind_layers(MQTTUnsubscribe, MQTT)\nbind_layers(MQTTUnsuback, MQTT)\nbind_layers(MQTTDisconnect, MQTT)\n", "path": "scapy/contrib/mqtt.py"}]}
| 3,973 | 160 |
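
The scapy patch above only changes how the PUBLISH `value` length is computed, so it can be exercised with a short build/dissect round trip. A minimal sketch, assuming scapy with the mqtt contrib module is importable; the class and field names come from the file listed in the prompt, and the two back-to-back messages reproduce the "two PUBLISH messages in one segment" situation from the issue report.

```python
# Sketch: two QoS-1 PUBLISH messages in one byte stream. With the old
# "len - length - 2" formula the first value field swallowed two bytes of
# the second message's fixed header; with the patch both dissect cleanly.
from scapy.contrib.mqtt import MQTT, MQTTPublish

one = MQTT(QOS=1) / MQTTPublish(topic=b"t/1", msgid=1, value=b"hello")
wire = bytes(one) * 2                        # back-to-back messages, as in one TCP segment

parsed = MQTT(wire)
assert parsed[MQTTPublish].value == b"hello"              # first message, exact length
assert parsed.getlayer(MQTTPublish, 2).value == b"hello"  # second message parsed intact
```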
gh_patches_debug_19964
|
rasdani/github-patches
|
git_diff
|
matrix-org__synapse-6151
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Uploading a too-large file: wrong error code
When uploading a too-large file, synapse responds with a 413 (the correct status code) but with errcode M_UNKNOWN. According to the [spec](https://matrix.org/docs/spec/client_server/latest#post-matrix-media-r0-upload), it should be "M_TOO_LARGE"
Received responseCode: 413
Received responseBody:
`{"errcode":"M_UNKNOWN","error":"Upload request body is too large"}`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `synapse/rest/media/v1/upload_resource.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2014-2016 OpenMarket Ltd
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import logging
17
18 from twisted.web.server import NOT_DONE_YET
19
20 from synapse.api.errors import SynapseError
21 from synapse.http.server import (
22 DirectServeResource,
23 respond_with_json,
24 wrap_json_request_handler,
25 )
26 from synapse.http.servlet import parse_string
27
28 logger = logging.getLogger(__name__)
29
30
31 class UploadResource(DirectServeResource):
32 isLeaf = True
33
34 def __init__(self, hs, media_repo):
35 super().__init__()
36
37 self.media_repo = media_repo
38 self.filepaths = media_repo.filepaths
39 self.store = hs.get_datastore()
40 self.clock = hs.get_clock()
41 self.server_name = hs.hostname
42 self.auth = hs.get_auth()
43 self.max_upload_size = hs.config.max_upload_size
44 self.clock = hs.get_clock()
45
46 def render_OPTIONS(self, request):
47 respond_with_json(request, 200, {}, send_cors=True)
48 return NOT_DONE_YET
49
50 @wrap_json_request_handler
51 async def _async_render_POST(self, request):
52 requester = await self.auth.get_user_by_req(request)
53 # TODO: The checks here are a bit late. The content will have
54 # already been uploaded to a tmp file at this point
55 content_length = request.getHeader(b"Content-Length").decode("ascii")
56 if content_length is None:
57 raise SynapseError(msg="Request must specify a Content-Length", code=400)
58 if int(content_length) > self.max_upload_size:
59 raise SynapseError(msg="Upload request body is too large", code=413)
60
61 upload_name = parse_string(request, b"filename", encoding=None)
62 if upload_name:
63 try:
64 upload_name = upload_name.decode("utf8")
65 except UnicodeDecodeError:
66 raise SynapseError(
67 msg="Invalid UTF-8 filename parameter: %r" % (upload_name), code=400
68 )
69
70 headers = request.requestHeaders
71
72 if headers.hasHeader(b"Content-Type"):
73 media_type = headers.getRawHeaders(b"Content-Type")[0].decode("ascii")
74 else:
75 raise SynapseError(msg="Upload request missing 'Content-Type'", code=400)
76
77 # if headers.hasHeader(b"Content-Disposition"):
78 # disposition = headers.getRawHeaders(b"Content-Disposition")[0]
79 # TODO(markjh): parse content-dispostion
80
81 content_uri = await self.media_repo.create_content(
82 media_type, upload_name, request.content, content_length, requester.user
83 )
84
85 logger.info("Uploaded content with URI %r", content_uri)
86
87 respond_with_json(request, 200, {"content_uri": content_uri}, send_cors=True)
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/synapse/rest/media/v1/upload_resource.py b/synapse/rest/media/v1/upload_resource.py
--- a/synapse/rest/media/v1/upload_resource.py
+++ b/synapse/rest/media/v1/upload_resource.py
@@ -17,7 +17,7 @@
from twisted.web.server import NOT_DONE_YET
-from synapse.api.errors import SynapseError
+from synapse.api.errors import Codes, SynapseError
from synapse.http.server import (
DirectServeResource,
respond_with_json,
@@ -56,7 +56,11 @@
if content_length is None:
raise SynapseError(msg="Request must specify a Content-Length", code=400)
if int(content_length) > self.max_upload_size:
- raise SynapseError(msg="Upload request body is too large", code=413)
+ raise SynapseError(
+ msg="Upload request body is too large",
+ code=413,
+ errcode=Codes.TOO_LARGE,
+ )
upload_name = parse_string(request, b"filename", encoding=None)
if upload_name:
|
{"golden_diff": "diff --git a/synapse/rest/media/v1/upload_resource.py b/synapse/rest/media/v1/upload_resource.py\n--- a/synapse/rest/media/v1/upload_resource.py\n+++ b/synapse/rest/media/v1/upload_resource.py\n@@ -17,7 +17,7 @@\n \n from twisted.web.server import NOT_DONE_YET\n \n-from synapse.api.errors import SynapseError\n+from synapse.api.errors import Codes, SynapseError\n from synapse.http.server import (\n DirectServeResource,\n respond_with_json,\n@@ -56,7 +56,11 @@\n if content_length is None:\n raise SynapseError(msg=\"Request must specify a Content-Length\", code=400)\n if int(content_length) > self.max_upload_size:\n- raise SynapseError(msg=\"Upload request body is too large\", code=413)\n+ raise SynapseError(\n+ msg=\"Upload request body is too large\",\n+ code=413,\n+ errcode=Codes.TOO_LARGE,\n+ )\n \n upload_name = parse_string(request, b\"filename\", encoding=None)\n if upload_name:\n", "issue": "Uploading a too large file: wrong error code\nWhen uploading a too large file, synapse responds with a 413 (OK) but with errcode M_UNKNOWN. According to the [spec](https://matrix.org/docs/spec/client_server/latest#post-matrix-media-r0-upload), it should be \"M_TOO_LARGE\"\r\n\r\nReceived responseCode: 413\r\nReceived responseBody:\r\n`{\"errcode\":\"M_UNKNOWN\",\"error\":\"Upload request body is too large\"}`\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2014-2016 OpenMarket Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\n\nfrom twisted.web.server import NOT_DONE_YET\n\nfrom synapse.api.errors import SynapseError\nfrom synapse.http.server import (\n DirectServeResource,\n respond_with_json,\n wrap_json_request_handler,\n)\nfrom synapse.http.servlet import parse_string\n\nlogger = logging.getLogger(__name__)\n\n\nclass UploadResource(DirectServeResource):\n isLeaf = True\n\n def __init__(self, hs, media_repo):\n super().__init__()\n\n self.media_repo = media_repo\n self.filepaths = media_repo.filepaths\n self.store = hs.get_datastore()\n self.clock = hs.get_clock()\n self.server_name = hs.hostname\n self.auth = hs.get_auth()\n self.max_upload_size = hs.config.max_upload_size\n self.clock = hs.get_clock()\n\n def render_OPTIONS(self, request):\n respond_with_json(request, 200, {}, send_cors=True)\n return NOT_DONE_YET\n\n @wrap_json_request_handler\n async def _async_render_POST(self, request):\n requester = await self.auth.get_user_by_req(request)\n # TODO: The checks here are a bit late. 
The content will have\n # already been uploaded to a tmp file at this point\n content_length = request.getHeader(b\"Content-Length\").decode(\"ascii\")\n if content_length is None:\n raise SynapseError(msg=\"Request must specify a Content-Length\", code=400)\n if int(content_length) > self.max_upload_size:\n raise SynapseError(msg=\"Upload request body is too large\", code=413)\n\n upload_name = parse_string(request, b\"filename\", encoding=None)\n if upload_name:\n try:\n upload_name = upload_name.decode(\"utf8\")\n except UnicodeDecodeError:\n raise SynapseError(\n msg=\"Invalid UTF-8 filename parameter: %r\" % (upload_name), code=400\n )\n\n headers = request.requestHeaders\n\n if headers.hasHeader(b\"Content-Type\"):\n media_type = headers.getRawHeaders(b\"Content-Type\")[0].decode(\"ascii\")\n else:\n raise SynapseError(msg=\"Upload request missing 'Content-Type'\", code=400)\n\n # if headers.hasHeader(b\"Content-Disposition\"):\n # disposition = headers.getRawHeaders(b\"Content-Disposition\")[0]\n # TODO(markjh): parse content-dispostion\n\n content_uri = await self.media_repo.create_content(\n media_type, upload_name, request.content, content_length, requester.user\n )\n\n logger.info(\"Uploaded content with URI %r\", content_uri)\n\n respond_with_json(request, 200, {\"content_uri\": content_uri}, send_cors=True)\n", "path": "synapse/rest/media/v1/upload_resource.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2014-2016 OpenMarket Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\n\nfrom twisted.web.server import NOT_DONE_YET\n\nfrom synapse.api.errors import Codes, SynapseError\nfrom synapse.http.server import (\n DirectServeResource,\n respond_with_json,\n wrap_json_request_handler,\n)\nfrom synapse.http.servlet import parse_string\n\nlogger = logging.getLogger(__name__)\n\n\nclass UploadResource(DirectServeResource):\n isLeaf = True\n\n def __init__(self, hs, media_repo):\n super().__init__()\n\n self.media_repo = media_repo\n self.filepaths = media_repo.filepaths\n self.store = hs.get_datastore()\n self.clock = hs.get_clock()\n self.server_name = hs.hostname\n self.auth = hs.get_auth()\n self.max_upload_size = hs.config.max_upload_size\n self.clock = hs.get_clock()\n\n def render_OPTIONS(self, request):\n respond_with_json(request, 200, {}, send_cors=True)\n return NOT_DONE_YET\n\n @wrap_json_request_handler\n async def _async_render_POST(self, request):\n requester = await self.auth.get_user_by_req(request)\n # TODO: The checks here are a bit late. 
The content will have\n # already been uploaded to a tmp file at this point\n content_length = request.getHeader(b\"Content-Length\").decode(\"ascii\")\n if content_length is None:\n raise SynapseError(msg=\"Request must specify a Content-Length\", code=400)\n if int(content_length) > self.max_upload_size:\n raise SynapseError(\n msg=\"Upload request body is too large\",\n code=413,\n errcode=Codes.TOO_LARGE,\n )\n\n upload_name = parse_string(request, b\"filename\", encoding=None)\n if upload_name:\n try:\n upload_name = upload_name.decode(\"utf8\")\n except UnicodeDecodeError:\n raise SynapseError(\n msg=\"Invalid UTF-8 filename parameter: %r\" % (upload_name), code=400\n )\n\n headers = request.requestHeaders\n\n if headers.hasHeader(b\"Content-Type\"):\n media_type = headers.getRawHeaders(b\"Content-Type\")[0].decode(\"ascii\")\n else:\n raise SynapseError(msg=\"Upload request missing 'Content-Type'\", code=400)\n\n # if headers.hasHeader(b\"Content-Disposition\"):\n # disposition = headers.getRawHeaders(b\"Content-Disposition\")[0]\n # TODO(markjh): parse content-dispostion\n\n content_uri = await self.media_repo.create_content(\n media_type, upload_name, request.content, content_length, requester.user\n )\n\n logger.info(\"Uploaded content with URI %r\", content_uri)\n\n respond_with_json(request, 200, {\"content_uri\": content_uri}, send_cors=True)\n", "path": "synapse/rest/media/v1/upload_resource.py"}]}
| 1,258 | 248 |
gh_patches_debug_5760
|
rasdani/github-patches
|
git_diff
|
NVIDIA__NVFlare-363
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Users are not warned when running poc command
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nvflare/lighter/poc.py`
Content:
```
1 # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import argparse
16 import os
17 import pathlib
18 import shutil
19
20
21 def clone_client(num_clients: int):
22 current_path = os.getcwd()
23 poc_folder = os.path.join(current_path, "poc")
24 src_folder = os.path.join(poc_folder, "client")
25 for index in range(1, num_clients + 1):
26 dst_folder = os.path.join(poc_folder, f"site-{index}")
27 shutil.copytree(src_folder, dst_folder)
28 start_sh = open(os.path.join(dst_folder, "startup", "start.sh"), "rt")
29 content = start_sh.read()
30 start_sh.close()
31 content = content.replace("NNN", f"{index}")
32 with open(os.path.join(dst_folder, "startup", "start.sh"), "wt") as f:
33 f.write(content)
34 shutil.rmtree(src_folder)
35
36
37 def main():
38 parser = argparse.ArgumentParser()
39 parser.add_argument("-n", "--num_clients", type=int, default=1, help="number of client folders to create")
40
41 args = parser.parse_args()
42
43 file_dir_path = pathlib.Path(__file__).parent.absolute()
44 poc_zip_path = file_dir_path.parent / "poc.zip"
45 poc_folder_path = file_dir_path.parent / "poc"
46 answer = input("This will delete poc folder in current directory and create a new one. Is it OK to proceed? (y/N) ")
47 if answer.strip().upper() == "Y":
48 dest_poc_folder = os.path.join(os.getcwd(), "poc")
49 shutil.rmtree(dest_poc_folder, ignore_errors=True)
50 try:
51 shutil.unpack_archive(poc_zip_path)
52 except shutil.ReadError:
53 print(f"poc.zip not found at {poc_zip_path}, try to use template poc folder")
54 try:
55 shutil.copytree(poc_folder_path, dest_poc_folder)
56 except BaseException:
57 print(f"Unable to copy poc folder from {poc_folder_path}. Exit")
58 exit(1)
59 for root, dirs, files in os.walk(dest_poc_folder):
60 for file in files:
61 if file.endswith(".sh"):
62 os.chmod(os.path.join(root, file), 0o755)
63 clone_client(args.num_clients)
64 print("Successfully creating poc folder. Please read poc/Readme.rst for user guide.")
65
66
67 if __name__ == "__main__":
68 main()
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nvflare/lighter/poc.py b/nvflare/lighter/poc.py
--- a/nvflare/lighter/poc.py
+++ b/nvflare/lighter/poc.py
@@ -62,6 +62,7 @@
os.chmod(os.path.join(root, file), 0o755)
clone_client(args.num_clients)
print("Successfully creating poc folder. Please read poc/Readme.rst for user guide.")
+ print("\n\nWARNING:\n******* Files generated by this poc command are NOT intended for production environments.")
if __name__ == "__main__":
|
{"golden_diff": "diff --git a/nvflare/lighter/poc.py b/nvflare/lighter/poc.py\n--- a/nvflare/lighter/poc.py\n+++ b/nvflare/lighter/poc.py\n@@ -62,6 +62,7 @@\n os.chmod(os.path.join(root, file), 0o755)\n clone_client(args.num_clients)\n print(\"Successfully creating poc folder. Please read poc/Readme.rst for user guide.\")\n+ print(\"\\n\\nWARNING:\\n******* Files generated by this poc command are NOT intended for production environments.\")\n \n \n if __name__ == \"__main__\":\n", "issue": "Users are not warned when running poc command\n\n", "before_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nimport os\nimport pathlib\nimport shutil\n\n\ndef clone_client(num_clients: int):\n current_path = os.getcwd()\n poc_folder = os.path.join(current_path, \"poc\")\n src_folder = os.path.join(poc_folder, \"client\")\n for index in range(1, num_clients + 1):\n dst_folder = os.path.join(poc_folder, f\"site-{index}\")\n shutil.copytree(src_folder, dst_folder)\n start_sh = open(os.path.join(dst_folder, \"startup\", \"start.sh\"), \"rt\")\n content = start_sh.read()\n start_sh.close()\n content = content.replace(\"NNN\", f\"{index}\")\n with open(os.path.join(dst_folder, \"startup\", \"start.sh\"), \"wt\") as f:\n f.write(content)\n shutil.rmtree(src_folder)\n\n\ndef main():\n parser = argparse.ArgumentParser()\n parser.add_argument(\"-n\", \"--num_clients\", type=int, default=1, help=\"number of client folders to create\")\n\n args = parser.parse_args()\n\n file_dir_path = pathlib.Path(__file__).parent.absolute()\n poc_zip_path = file_dir_path.parent / \"poc.zip\"\n poc_folder_path = file_dir_path.parent / \"poc\"\n answer = input(\"This will delete poc folder in current directory and create a new one. Is it OK to proceed? (y/N) \")\n if answer.strip().upper() == \"Y\":\n dest_poc_folder = os.path.join(os.getcwd(), \"poc\")\n shutil.rmtree(dest_poc_folder, ignore_errors=True)\n try:\n shutil.unpack_archive(poc_zip_path)\n except shutil.ReadError:\n print(f\"poc.zip not found at {poc_zip_path}, try to use template poc folder\")\n try:\n shutil.copytree(poc_folder_path, dest_poc_folder)\n except BaseException:\n print(f\"Unable to copy poc folder from {poc_folder_path}. Exit\")\n exit(1)\n for root, dirs, files in os.walk(dest_poc_folder):\n for file in files:\n if file.endswith(\".sh\"):\n os.chmod(os.path.join(root, file), 0o755)\n clone_client(args.num_clients)\n print(\"Successfully creating poc folder. Please read poc/Readme.rst for user guide.\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "nvflare/lighter/poc.py"}], "after_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nimport os\nimport pathlib\nimport shutil\n\n\ndef clone_client(num_clients: int):\n current_path = os.getcwd()\n poc_folder = os.path.join(current_path, \"poc\")\n src_folder = os.path.join(poc_folder, \"client\")\n for index in range(1, num_clients + 1):\n dst_folder = os.path.join(poc_folder, f\"site-{index}\")\n shutil.copytree(src_folder, dst_folder)\n start_sh = open(os.path.join(dst_folder, \"startup\", \"start.sh\"), \"rt\")\n content = start_sh.read()\n start_sh.close()\n content = content.replace(\"NNN\", f\"{index}\")\n with open(os.path.join(dst_folder, \"startup\", \"start.sh\"), \"wt\") as f:\n f.write(content)\n shutil.rmtree(src_folder)\n\n\ndef main():\n parser = argparse.ArgumentParser()\n parser.add_argument(\"-n\", \"--num_clients\", type=int, default=1, help=\"number of client folders to create\")\n\n args = parser.parse_args()\n\n file_dir_path = pathlib.Path(__file__).parent.absolute()\n poc_zip_path = file_dir_path.parent / \"poc.zip\"\n poc_folder_path = file_dir_path.parent / \"poc\"\n answer = input(\"This will delete poc folder in current directory and create a new one. Is it OK to proceed? (y/N) \")\n if answer.strip().upper() == \"Y\":\n dest_poc_folder = os.path.join(os.getcwd(), \"poc\")\n shutil.rmtree(dest_poc_folder, ignore_errors=True)\n try:\n shutil.unpack_archive(poc_zip_path)\n except shutil.ReadError:\n print(f\"poc.zip not found at {poc_zip_path}, try to use template poc folder\")\n try:\n shutil.copytree(poc_folder_path, dest_poc_folder)\n except BaseException:\n print(f\"Unable to copy poc folder from {poc_folder_path}. Exit\")\n exit(1)\n for root, dirs, files in os.walk(dest_poc_folder):\n for file in files:\n if file.endswith(\".sh\"):\n os.chmod(os.path.join(root, file), 0o755)\n clone_client(args.num_clients)\n print(\"Successfully creating poc folder. Please read poc/Readme.rst for user guide.\")\n print(\"\\n\\nWARNING:\\n******* Files generated by this poc command are NOT intended for production environments.\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "nvflare/lighter/poc.py"}]}
| 1,053 | 134 |
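
The nvflare patch above only adds a console warning, which lends itself to a stdout-capture check. A hypothetical pytest sketch, not part of the project's test suite; the patching of `shutil`, `os.walk`, and `clone_client` is only there to keep the real file operations from running.

```python
# Hypothetical pytest sketch: run poc.main() with the prompt answered "y"
# and assert that the new production warning reaches stdout.
from unittest import mock

from nvflare.lighter import poc


def test_poc_prints_production_warning(capsys, monkeypatch, tmp_path):
    monkeypatch.chdir(tmp_path)
    monkeypatch.setattr("builtins.input", lambda _prompt: "y")
    monkeypatch.setattr("sys.argv", ["poc", "-n", "1"])
    with mock.patch.object(poc.shutil, "rmtree"), \
            mock.patch.object(poc.shutil, "unpack_archive"), \
            mock.patch.object(poc.os, "walk", return_value=[]), \
            mock.patch.object(poc, "clone_client"):
        poc.main()
    out = capsys.readouterr().out
    assert "NOT intended for production" in out
```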
gh_patches_debug_14571
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-1811
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
wagtailsearch.views.frontend needs updating to use non-deprecated search method
The view currently uses Page.search, which is deprecated in Wagtail 1.2; it should be updated to use the QuerySet-based mechanism instead.
https://github.com/torchbox/wagtail/blob/master/wagtail/wagtailsearch/views/frontend.py#L41
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/wagtailsearch/views/frontend.py`
Content:
```
1 from django.conf import settings
2 from django.shortcuts import render
3 from django.http import JsonResponse
4 from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
5
6 from wagtail.wagtailcore import models
7 from wagtail.wagtailsearch.models import Query
8
9
10 def search(
11 request,
12 template=None,
13 template_ajax=None,
14 results_per_page=10,
15 use_json=False,
16 json_attrs=['title', 'url'],
17 show_unpublished=False,
18 search_title_only=False,
19 extra_filters={},
20 path=None):
21
22 # Get default templates
23 if template is None:
24 if hasattr(settings, 'WAGTAILSEARCH_RESULTS_TEMPLATE'):
25 template = settings.WAGTAILSEARCH_RESULTS_TEMPLATE
26 else:
27 template = 'wagtailsearch/search_results.html'
28
29 if template_ajax is None:
30 if hasattr(settings, 'WAGTAILSEARCH_RESULTS_TEMPLATE_AJAX'):
31 template_ajax = settings.WAGTAILSEARCH_RESULTS_TEMPLATE_AJAX
32 else:
33 template_ajax = template
34
35 # Get query string and page from GET paramters
36 query_string = request.GET.get('q', '')
37 page = request.GET.get('page', request.GET.get('p', 1))
38
39 # Search
40 if query_string != '':
41 search_results = models.Page.search(
42 query_string,
43 show_unpublished=show_unpublished,
44 search_title_only=search_title_only,
45 extra_filters=extra_filters,
46 path=path if path else request.site.root_page.path
47 )
48
49 # Get query object
50 query = Query.get(query_string)
51
52 # Add hit
53 query.add_hit()
54
55 # Pagination
56 paginator = Paginator(search_results, results_per_page)
57 try:
58 search_results = paginator.page(page)
59 except PageNotAnInteger:
60 search_results = paginator.page(1)
61 except EmptyPage:
62 search_results = paginator.page(paginator.num_pages)
63 else:
64 query = None
65 search_results = None
66
67 if use_json:
68 # Return a json response
69 if search_results:
70 search_results_json = []
71 for result in search_results:
72 result_specific = result.specific
73
74 search_results_json.append(dict(
75 (attr, getattr(result_specific, attr))
76 for attr in json_attrs
77 if hasattr(result_specific, attr)
78 ))
79
80 return JsonResponse(search_results_json, safe=False)
81 else:
82 return JsonResponse([], safe=False)
83 else: # Render a template
84 if request.is_ajax() and template_ajax:
85 template = template_ajax
86
87 return render(request, template, dict(
88 query_string=query_string,
89 search_results=search_results,
90 is_ajax=request.is_ajax(),
91 query=query
92 ))
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wagtail/wagtailsearch/views/frontend.py b/wagtail/wagtailsearch/views/frontend.py
--- a/wagtail/wagtailsearch/views/frontend.py
+++ b/wagtail/wagtailsearch/views/frontend.py
@@ -38,13 +38,18 @@
# Search
if query_string != '':
- search_results = models.Page.search(
- query_string,
- show_unpublished=show_unpublished,
- search_title_only=search_title_only,
- extra_filters=extra_filters,
- path=path if path else request.site.root_page.path
- )
+ pages = models.Page.objects.filter(path__startswith=(path or request.site.root_page.path))
+
+ if not show_unpublished:
+ pages = pages.live()
+
+ if extra_filters:
+ pages = pages.filter(**extra_filters)
+
+ if search_title_only:
+ search_results = pages.search(query_string, fields=['title'])
+ else:
+ search_results = pages.search(query_string)
# Get query object
query = Query.get(query_string)
|
{"golden_diff": "diff --git a/wagtail/wagtailsearch/views/frontend.py b/wagtail/wagtailsearch/views/frontend.py\n--- a/wagtail/wagtailsearch/views/frontend.py\n+++ b/wagtail/wagtailsearch/views/frontend.py\n@@ -38,13 +38,18 @@\n \n # Search\n if query_string != '':\n- search_results = models.Page.search(\n- query_string,\n- show_unpublished=show_unpublished,\n- search_title_only=search_title_only,\n- extra_filters=extra_filters,\n- path=path if path else request.site.root_page.path\n- )\n+ pages = models.Page.objects.filter(path__startswith=(path or request.site.root_page.path))\n+\n+ if not show_unpublished:\n+ pages = pages.live()\n+\n+ if extra_filters:\n+ pages = pages.filter(**extra_filters)\n+\n+ if search_title_only:\n+ search_results = pages.search(query_string, fields=['title'])\n+ else:\n+ search_results = pages.search(query_string)\n \n # Get query object\n query = Query.get(query_string)\n", "issue": "wagtailsearch.views.frontend needs updating to use non-deprecated search method\nCurrently using Page.search, which is deprecated in Wagtail 1.2 - should be updated to use the QuerySet-based mechanism instead.\n\nhttps://github.com/torchbox/wagtail/blob/master/wagtail/wagtailsearch/views/frontend.py#L41\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.shortcuts import render\nfrom django.http import JsonResponse\nfrom django.core.paginator import Paginator, EmptyPage, PageNotAnInteger\n\nfrom wagtail.wagtailcore import models\nfrom wagtail.wagtailsearch.models import Query\n\n\ndef search(\n request,\n template=None,\n template_ajax=None,\n results_per_page=10,\n use_json=False,\n json_attrs=['title', 'url'],\n show_unpublished=False,\n search_title_only=False,\n extra_filters={},\n path=None):\n\n # Get default templates\n if template is None:\n if hasattr(settings, 'WAGTAILSEARCH_RESULTS_TEMPLATE'):\n template = settings.WAGTAILSEARCH_RESULTS_TEMPLATE\n else:\n template = 'wagtailsearch/search_results.html'\n\n if template_ajax is None:\n if hasattr(settings, 'WAGTAILSEARCH_RESULTS_TEMPLATE_AJAX'):\n template_ajax = settings.WAGTAILSEARCH_RESULTS_TEMPLATE_AJAX\n else:\n template_ajax = template\n\n # Get query string and page from GET paramters\n query_string = request.GET.get('q', '')\n page = request.GET.get('page', request.GET.get('p', 1))\n\n # Search\n if query_string != '':\n search_results = models.Page.search(\n query_string,\n show_unpublished=show_unpublished,\n search_title_only=search_title_only,\n extra_filters=extra_filters,\n path=path if path else request.site.root_page.path\n )\n\n # Get query object\n query = Query.get(query_string)\n\n # Add hit\n query.add_hit()\n\n # Pagination\n paginator = Paginator(search_results, results_per_page)\n try:\n search_results = paginator.page(page)\n except PageNotAnInteger:\n search_results = paginator.page(1)\n except EmptyPage:\n search_results = paginator.page(paginator.num_pages)\n else:\n query = None\n search_results = None\n\n if use_json:\n # Return a json response\n if search_results:\n search_results_json = []\n for result in search_results:\n result_specific = result.specific\n\n search_results_json.append(dict(\n (attr, getattr(result_specific, attr))\n for attr in json_attrs\n if hasattr(result_specific, attr)\n ))\n\n return JsonResponse(search_results_json, safe=False)\n else:\n return JsonResponse([], safe=False)\n else: # Render a template\n if request.is_ajax() and template_ajax:\n template = template_ajax\n\n return render(request, template, dict(\n 
query_string=query_string,\n search_results=search_results,\n is_ajax=request.is_ajax(),\n query=query\n ))\n", "path": "wagtail/wagtailsearch/views/frontend.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.shortcuts import render\nfrom django.http import JsonResponse\nfrom django.core.paginator import Paginator, EmptyPage, PageNotAnInteger\n\nfrom wagtail.wagtailcore import models\nfrom wagtail.wagtailsearch.models import Query\n\n\ndef search(\n request,\n template=None,\n template_ajax=None,\n results_per_page=10,\n use_json=False,\n json_attrs=['title', 'url'],\n show_unpublished=False,\n search_title_only=False,\n extra_filters={},\n path=None):\n\n # Get default templates\n if template is None:\n if hasattr(settings, 'WAGTAILSEARCH_RESULTS_TEMPLATE'):\n template = settings.WAGTAILSEARCH_RESULTS_TEMPLATE\n else:\n template = 'wagtailsearch/search_results.html'\n\n if template_ajax is None:\n if hasattr(settings, 'WAGTAILSEARCH_RESULTS_TEMPLATE_AJAX'):\n template_ajax = settings.WAGTAILSEARCH_RESULTS_TEMPLATE_AJAX\n else:\n template_ajax = template\n\n # Get query string and page from GET paramters\n query_string = request.GET.get('q', '')\n page = request.GET.get('page', request.GET.get('p', 1))\n\n # Search\n if query_string != '':\n pages = models.Page.objects.filter(path__startswith=(path or request.site.root_page.path))\n\n if not show_unpublished:\n pages = pages.live()\n\n if extra_filters:\n pages = pages.filter(**extra_filters)\n\n if search_title_only:\n search_results = pages.search(query_string, fields=['title'])\n else:\n search_results = pages.search(query_string)\n\n # Get query object\n query = Query.get(query_string)\n\n # Add hit\n query.add_hit()\n\n # Pagination\n paginator = Paginator(search_results, results_per_page)\n try:\n search_results = paginator.page(page)\n except PageNotAnInteger:\n search_results = paginator.page(1)\n except EmptyPage:\n search_results = paginator.page(paginator.num_pages)\n else:\n query = None\n search_results = None\n\n if use_json:\n # Return a json response\n if search_results:\n search_results_json = []\n for result in search_results:\n result_specific = result.specific\n\n search_results_json.append(dict(\n (attr, getattr(result_specific, attr))\n for attr in json_attrs\n if hasattr(result_specific, attr)\n ))\n\n return JsonResponse(search_results_json, safe=False)\n else:\n return JsonResponse([], safe=False)\n else: # Render a template\n if request.is_ajax() and template_ajax:\n template = template_ajax\n\n return render(request, template, dict(\n query_string=query_string,\n search_results=search_results,\n is_ajax=request.is_ajax(),\n query=query\n ))\n", "path": "wagtail/wagtailsearch/views/frontend.py"}]}
| 1,084 | 238 |
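
The wagtail patch above replaces the deprecated `Page.search` call with the QuerySet-based API. A minimal sketch of that API as the patched view builds it up, assuming a configured Wagtail 1.x project with a default `Site`; the `site_root` lookup stands in for `request.site.root_page`.

```python
# Sketch of the QuerySet-based search used by the patched view.
from wagtail.wagtailcore.models import Page, Site

site_root = Site.objects.get(is_default_site=True).root_page  # stand-in for request.site.root_page

pages = Page.objects.filter(path__startswith=site_root.path).live()   # published pages only
results = pages.search("hello world")                                 # all indexed fields
title_only = pages.search("hello world", fields=["title"])            # the search_title_only branch
```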
gh_patches_debug_61667
|
rasdani/github-patches
|
git_diff
|
pallets__click-2714
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docs wrongly links PRs and Issues to flask

Environment:
- Python version: N/A
- Click version: N/A
Docs wrongly links PRs and Issues to flask

Environment:
- Python version: N/A
- Click version: N/A
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 from pallets_sphinx_themes import get_version
2 from pallets_sphinx_themes import ProjectLink
3
4 # Project --------------------------------------------------------------
5
6 project = "Click"
7 copyright = "2014 Pallets"
8 author = "Pallets"
9 release, version = get_version("Click")
10
11 # General --------------------------------------------------------------
12
13 default_role = "code"
14 extensions = [
15 "sphinx.ext.autodoc",
16 "sphinx.ext.extlinks",
17 "sphinx.ext.intersphinx",
18 "sphinx_tabs.tabs",
19 "sphinxcontrib.log_cabinet",
20 "pallets_sphinx_themes",
21 ]
22 autodoc_member_order = "bysource"
23 autodoc_typehints = "description"
24 autodoc_preserve_defaults = True
25 extlinks = {
26 "issue": ("https://github.com/pallets/flask/issues/%s", "#%s"),
27 "pr": ("https://github.com/pallets/flask/pull/%s", "#%s"),
28 }
29 intersphinx_mapping = {
30 "python": ("https://docs.python.org/3/", None),
31 }
32
33 # HTML -----------------------------------------------------------------
34
35 html_theme = "click"
36 html_theme_options = {"index_sidebar_logo": False}
37 html_context = {
38 "project_links": [
39 ProjectLink("Donate", "https://palletsprojects.com/donate"),
40 ProjectLink("PyPI Releases", "https://pypi.org/project/click/"),
41 ProjectLink("Source Code", "https://github.com/pallets/click/"),
42 ProjectLink("Issue Tracker", "https://github.com/pallets/click/issues/"),
43 ProjectLink("Chat", "https://discord.gg/pallets"),
44 ]
45 }
46 html_sidebars = {
47 "index": ["project.html", "localtoc.html", "searchbox.html", "ethicalads.html"],
48 "**": ["localtoc.html", "relations.html", "searchbox.html", "ethicalads.html"],
49 }
50 singlehtml_sidebars = {"index": ["project.html", "localtoc.html", "ethicalads.html"]}
51 html_static_path = ["_static"]
52 html_favicon = "_static/click-icon.png"
53 html_logo = "_static/click-logo-sidebar.png"
54 html_title = f"Click Documentation ({version})"
55 html_show_sourcelink = False
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -23,8 +23,8 @@
autodoc_typehints = "description"
autodoc_preserve_defaults = True
extlinks = {
- "issue": ("https://github.com/pallets/flask/issues/%s", "#%s"),
- "pr": ("https://github.com/pallets/flask/pull/%s", "#%s"),
+ "issue": ("https://github.com/pallets/click/issues/%s", "#%s"),
+ "pr": ("https://github.com/pallets/click/pull/%s", "#%s"),
}
intersphinx_mapping = {
"python": ("https://docs.python.org/3/", None),
|
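The patch above only retargets the two `extlinks` shortcuts from the Flask repository to the Click repository. A minimal excerpt of what the corrected `docs/conf.py` ends up containing is sketched below (every other setting from the file is omitted; the rendering behaviour noted in the comments assumes the standard `sphinx.ext.extlinks` extension, which the file already loads):

```python
# docs/conf.py (excerpt) -- issue/PR shortcuts now point at pallets/click
extensions = [
    "sphinx.ext.extlinks",  # one of several extensions enabled in the real file
]

extlinks = {
    # :issue:`123` renders as "#123" and links to the Click issue tracker
    "issue": ("https://github.com/pallets/click/issues/%s", "#%s"),
    # :pr:`456` renders as "#456" and links to a Click pull request
    "pr": ("https://github.com/pallets/click/pull/%s", "#%s"),
}
```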
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -23,8 +23,8 @@\n autodoc_typehints = \"description\"\n autodoc_preserve_defaults = True\n extlinks = {\n- \"issue\": (\"https://github.com/pallets/flask/issues/%s\", \"#%s\"),\n- \"pr\": (\"https://github.com/pallets/flask/pull/%s\", \"#%s\"),\n+ \"issue\": (\"https://github.com/pallets/click/issues/%s\", \"#%s\"),\n+ \"pr\": (\"https://github.com/pallets/click/pull/%s\", \"#%s\"),\n }\n intersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3/\", None),\n", "issue": "Docs wrongly links PRs and Issues to flask\n\r\n\r\nEnvironment:\r\n\r\n- Python version: N/A\r\n- Click version: N/A\r\n\nDocs wrongly links PRs and Issues to flask\n\r\n\r\nEnvironment:\r\n\r\n- Python version: N/A\r\n- Click version: N/A\r\n\n", "before_files": [{"content": "from pallets_sphinx_themes import get_version\nfrom pallets_sphinx_themes import ProjectLink\n\n# Project --------------------------------------------------------------\n\nproject = \"Click\"\ncopyright = \"2014 Pallets\"\nauthor = \"Pallets\"\nrelease, version = get_version(\"Click\")\n\n# General --------------------------------------------------------------\n\ndefault_role = \"code\"\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.extlinks\",\n \"sphinx.ext.intersphinx\",\n \"sphinx_tabs.tabs\",\n \"sphinxcontrib.log_cabinet\",\n \"pallets_sphinx_themes\",\n]\nautodoc_member_order = \"bysource\"\nautodoc_typehints = \"description\"\nautodoc_preserve_defaults = True\nextlinks = {\n \"issue\": (\"https://github.com/pallets/flask/issues/%s\", \"#%s\"),\n \"pr\": (\"https://github.com/pallets/flask/pull/%s\", \"#%s\"),\n}\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3/\", None),\n}\n\n# HTML -----------------------------------------------------------------\n\nhtml_theme = \"click\"\nhtml_theme_options = {\"index_sidebar_logo\": False}\nhtml_context = {\n \"project_links\": [\n ProjectLink(\"Donate\", \"https://palletsprojects.com/donate\"),\n ProjectLink(\"PyPI Releases\", \"https://pypi.org/project/click/\"),\n ProjectLink(\"Source Code\", \"https://github.com/pallets/click/\"),\n ProjectLink(\"Issue Tracker\", \"https://github.com/pallets/click/issues/\"),\n ProjectLink(\"Chat\", \"https://discord.gg/pallets\"),\n ]\n}\nhtml_sidebars = {\n \"index\": [\"project.html\", \"localtoc.html\", \"searchbox.html\", \"ethicalads.html\"],\n \"**\": [\"localtoc.html\", \"relations.html\", \"searchbox.html\", \"ethicalads.html\"],\n}\nsinglehtml_sidebars = {\"index\": [\"project.html\", \"localtoc.html\", \"ethicalads.html\"]}\nhtml_static_path = [\"_static\"]\nhtml_favicon = \"_static/click-icon.png\"\nhtml_logo = \"_static/click-logo-sidebar.png\"\nhtml_title = f\"Click Documentation ({version})\"\nhtml_show_sourcelink = False\n", "path": "docs/conf.py"}], "after_files": [{"content": "from pallets_sphinx_themes import get_version\nfrom pallets_sphinx_themes import ProjectLink\n\n# Project --------------------------------------------------------------\n\nproject = \"Click\"\ncopyright = \"2014 Pallets\"\nauthor = \"Pallets\"\nrelease, version = get_version(\"Click\")\n\n# General --------------------------------------------------------------\n\ndefault_role = \"code\"\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.extlinks\",\n \"sphinx.ext.intersphinx\",\n \"sphinx_tabs.tabs\",\n \"sphinxcontrib.log_cabinet\",\n \"pallets_sphinx_themes\",\n]\nautodoc_member_order = \"bysource\"\nautodoc_typehints = 
\"description\"\nautodoc_preserve_defaults = True\nextlinks = {\n \"issue\": (\"https://github.com/pallets/click/issues/%s\", \"#%s\"),\n \"pr\": (\"https://github.com/pallets/click/pull/%s\", \"#%s\"),\n}\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3/\", None),\n}\n\n# HTML -----------------------------------------------------------------\n\nhtml_theme = \"click\"\nhtml_theme_options = {\"index_sidebar_logo\": False}\nhtml_context = {\n \"project_links\": [\n ProjectLink(\"Donate\", \"https://palletsprojects.com/donate\"),\n ProjectLink(\"PyPI Releases\", \"https://pypi.org/project/click/\"),\n ProjectLink(\"Source Code\", \"https://github.com/pallets/click/\"),\n ProjectLink(\"Issue Tracker\", \"https://github.com/pallets/click/issues/\"),\n ProjectLink(\"Chat\", \"https://discord.gg/pallets\"),\n ]\n}\nhtml_sidebars = {\n \"index\": [\"project.html\", \"localtoc.html\", \"searchbox.html\", \"ethicalads.html\"],\n \"**\": [\"localtoc.html\", \"relations.html\", \"searchbox.html\", \"ethicalads.html\"],\n}\nsinglehtml_sidebars = {\"index\": [\"project.html\", \"localtoc.html\", \"ethicalads.html\"]}\nhtml_static_path = [\"_static\"]\nhtml_favicon = \"_static/click-icon.png\"\nhtml_logo = \"_static/click-logo-sidebar.png\"\nhtml_title = f\"Click Documentation ({version})\"\nhtml_show_sourcelink = False\n", "path": "docs/conf.py"}]}
| 998 | 173 |
gh_patches_debug_1188
|
rasdani/github-patches
|
git_diff
|
spack__spack-18268
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Installation issue: dbus (missing libsm dependency)
<!-- Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue "Installation issue: <name-of-the-package>".
2. Provide the information required below.
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively! -->
I am trying to install visit, and I am hitting an error when it tries to install dbus. This appears
to be due to dbus depending on libSM (and through that libuuid), but not declaring that dependency in Spack. So in my build of visit, the libuuid dependency is picked up and set to use the spack installed libuuid via some other package visit depends on, but dbus ends up using the system installed libSM, and there is a mismatch between the two. But the dbus
package should not be linking against system libSM.
### Steps to reproduce the issue
I am trying to install visit, and I am hitting an error when it tries to install dbus. This appears
to be
spack install [email protected]%[email protected] ^[email protected]
eventually aborts with
CCLD dbus-run-session
/lib/../lib64/libSM.so: undefined reference to `uuid_unparse_lower@UUID_1.0'
/lib/../lib64/libSM.so: undefined reference to `uuid_generate@UUID_1.0'
collect2: error: ld returned 1 exit status
Error appears due to the attempt to link the system /lib64/libSM.so
### Information on your system
spack debug report
* **Spack:** 0.14.2
* **Python:** 2.7.16
* **Platform:** linux-rhel7-broadwell
### Additional information
[spack-build-env.txt](https://github.com/spack/spack/files/5125717/spack-build-env.txt)
[spack-build-out.txt](https://github.com/spack/spack/files/5125718/spack-build-out.txt)
No maintainers for dbus
### General information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [x ] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [x] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers
- [x ] I have uploaded the build log and environment files
- [ x] I have searched the issues of this repo and believe this is not a duplicate
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `var/spack/repos/builtin/packages/dbus/package.py`
Content:
```
1 # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack import *
7
8
9 class Dbus(Package):
10 """D-Bus is a message bus system, a simple way for applications to
11 talk to one another. D-Bus supplies both a system daemon (for
12 events such new hardware device printer queue ) and a
13 per-user-login-session daemon (for general IPC needs among user
14 applications). Also, the message bus is built on top of a
15 general one-to-one message passing framework, which can be used
16 by any two applications to communicate directly (without going
17 through the message bus daemon)."""
18
19 homepage = "http://dbus.freedesktop.org/"
20 url = "http://dbus.freedesktop.org/releases/dbus/dbus-1.8.8.tar.gz"
21
22 version('1.12.8', sha256='e2dc99e7338303393b6663a98320aba6a63421bcdaaf571c8022f815e5896eb3')
23 version('1.11.2', sha256='5abc4c57686fa82669ad0039830788f9b03fdc4fff487f0ccf6c9d56ba2645c9')
24 version('1.9.0', sha256='38ebc695b5cbbd239e0f149aa5d5395f0051a0fec1b74f21ff2921b22a31c171')
25 version('1.8.8', sha256='dfab263649a979d0fff64a30cac374891a8e9940350e41f3bbd7679af32bd1fd')
26 version('1.8.6', sha256='eded83ca007b719f32761e60fd8b9ffd0f5796a4caf455b01b5a5ef740ebd23f')
27 version('1.8.4', sha256='3ef63dc8d0111042071ee7f7bafa0650c6ce2d7be957ef0b7ec269495a651ff8')
28 version('1.8.2', sha256='5689f7411165adc953f37974e276a3028db94447c76e8dd92efe910c6d3bae08')
29
30 depends_on('pkgconfig', type='build')
31 depends_on('expat')
32 depends_on('glib')
33
34 def install(self, spec, prefix):
35 configure(
36 "--prefix=%s" % prefix,
37 "--disable-systemd",
38 "--disable-launchd")
39 make()
40 make("install")
41
42 # dbus needs a machine id generated after install
43 dbus_uuidgen = Executable(join_path(prefix.bin, 'dbus-uuidgen'))
44 dbus_uuidgen('--ensure')
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/var/spack/repos/builtin/packages/dbus/package.py b/var/spack/repos/builtin/packages/dbus/package.py
--- a/var/spack/repos/builtin/packages/dbus/package.py
+++ b/var/spack/repos/builtin/packages/dbus/package.py
@@ -30,6 +30,7 @@
depends_on('pkgconfig', type='build')
depends_on('expat')
depends_on('glib')
+ depends_on('libsm')
def install(self, spec, prefix):
configure(
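The whole fix is the single `depends_on('libsm')` declaration: once libSM is a declared dependency, Spack provides its own libSM (which in turn uses Spack's libuuid) in the dbus build environment, so the link step should no longer fall back to the system `/lib64/libSM.so` that references a mismatched libuuid. A sketch of the recipe after the change (versions and the `install()` method omitted for brevity):

```python
# var/spack/repos/builtin/packages/dbus/package.py -- abridged sketch of the patched recipe
from spack import *


class Dbus(Package):
    """D-Bus message bus system (docstring abridged)."""

    homepage = "http://dbus.freedesktop.org/"
    url = "http://dbus.freedesktop.org/releases/dbus/dbus-1.8.8.tar.gz"

    depends_on('pkgconfig', type='build')
    depends_on('expat')
    depends_on('glib')
    depends_on('libsm')  # new: link against Spack's libSM/libuuid instead of the system copies
```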
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/dbus/package.py b/var/spack/repos/builtin/packages/dbus/package.py\n--- a/var/spack/repos/builtin/packages/dbus/package.py\n+++ b/var/spack/repos/builtin/packages/dbus/package.py\n@@ -30,6 +30,7 @@\n depends_on('pkgconfig', type='build')\n depends_on('expat')\n depends_on('glib')\n+ depends_on('libsm')\n \n def install(self, spec, prefix):\n configure(\n", "issue": "Installation issue: dbus (missing libsm dependency)\n\r\n<!-- Thanks for taking the time to report this build failure. To proceed with the report please:\r\n\r\n1. Title the issue \"Installation issue: <name-of-the-package>\".\r\n2. Provide the information required below.\r\n\r\nWe encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively! -->\r\n\r\nI am trying to install visit, and I am hitting an error when it tries to install dbus. This appears\r\nto be due to dbus depending on libSM (and through that libuuid), but not declaring that dependency in Spack. So in my build of visit, the libuuid dependency is picked up and set to use the spack installed libuuid via some other package visit depends on, but dbus ends up using the system installed libSM, and there is a mismatch between the two. But the dbus\r\npackage should not be linking against system libSM.\r\n\r\n### Steps to reproduce the issue\r\nI am trying to install visit, and I am hitting an error when it tries to install dbus. This appears\r\nto be \r\nspack install [email protected]%[email protected] ^[email protected]\r\neventually aborts with\r\n CCLD dbus-run-session\r\n/lib/../lib64/libSM.so: undefined reference to `uuid_unparse_lower@UUID_1.0'\r\n/lib/../lib64/libSM.so: undefined reference to `uuid_generate@UUID_1.0'\r\ncollect2: error: ld returned 1 exit status\r\n\r\nError appears due to the attempt to link the system /lib64/libSM.so\r\n\r\n### Information on your system\r\nspack debug report\r\n* **Spack:** 0.14.2\r\n* **Python:** 2.7.16\r\n* **Platform:** linux-rhel7-broadwell\r\n\r\n### Additional information\r\n[spack-build-env.txt](https://github.com/spack/spack/files/5125717/spack-build-env.txt)\r\n[spack-build-out.txt](https://github.com/spack/spack/files/5125718/spack-build-out.txt)\r\n\r\nNo maintainers for dbus\r\n\r\n### General information\r\n\r\n<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->\r\n- [x ] I have run `spack debug report` and reported the version of Spack/Python/Platform\r\n- [x] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers\r\n- [x ] I have uploaded the build log and environment files\r\n- [ x] I have searched the issues of this repo and believe this is not a duplicate\r\n\n", "before_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass Dbus(Package):\n \"\"\"D-Bus is a message bus system, a simple way for applications to\n talk to one another. D-Bus supplies both a system daemon (for\n events such new hardware device printer queue ) and a\n per-user-login-session daemon (for general IPC needs among user\n applications). 
Also, the message bus is built on top of a\n general one-to-one message passing framework, which can be used\n by any two applications to communicate directly (without going\n through the message bus daemon).\"\"\"\n\n homepage = \"http://dbus.freedesktop.org/\"\n url = \"http://dbus.freedesktop.org/releases/dbus/dbus-1.8.8.tar.gz\"\n\n version('1.12.8', sha256='e2dc99e7338303393b6663a98320aba6a63421bcdaaf571c8022f815e5896eb3')\n version('1.11.2', sha256='5abc4c57686fa82669ad0039830788f9b03fdc4fff487f0ccf6c9d56ba2645c9')\n version('1.9.0', sha256='38ebc695b5cbbd239e0f149aa5d5395f0051a0fec1b74f21ff2921b22a31c171')\n version('1.8.8', sha256='dfab263649a979d0fff64a30cac374891a8e9940350e41f3bbd7679af32bd1fd')\n version('1.8.6', sha256='eded83ca007b719f32761e60fd8b9ffd0f5796a4caf455b01b5a5ef740ebd23f')\n version('1.8.4', sha256='3ef63dc8d0111042071ee7f7bafa0650c6ce2d7be957ef0b7ec269495a651ff8')\n version('1.8.2', sha256='5689f7411165adc953f37974e276a3028db94447c76e8dd92efe910c6d3bae08')\n\n depends_on('pkgconfig', type='build')\n depends_on('expat')\n depends_on('glib')\n\n def install(self, spec, prefix):\n configure(\n \"--prefix=%s\" % prefix,\n \"--disable-systemd\",\n \"--disable-launchd\")\n make()\n make(\"install\")\n\n # dbus needs a machine id generated after install\n dbus_uuidgen = Executable(join_path(prefix.bin, 'dbus-uuidgen'))\n dbus_uuidgen('--ensure')\n", "path": "var/spack/repos/builtin/packages/dbus/package.py"}], "after_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass Dbus(Package):\n \"\"\"D-Bus is a message bus system, a simple way for applications to\n talk to one another. D-Bus supplies both a system daemon (for\n events such new hardware device printer queue ) and a\n per-user-login-session daemon (for general IPC needs among user\n applications). Also, the message bus is built on top of a\n general one-to-one message passing framework, which can be used\n by any two applications to communicate directly (without going\n through the message bus daemon).\"\"\"\n\n homepage = \"http://dbus.freedesktop.org/\"\n url = \"http://dbus.freedesktop.org/releases/dbus/dbus-1.8.8.tar.gz\"\n\n version('1.12.8', sha256='e2dc99e7338303393b6663a98320aba6a63421bcdaaf571c8022f815e5896eb3')\n version('1.11.2', sha256='5abc4c57686fa82669ad0039830788f9b03fdc4fff487f0ccf6c9d56ba2645c9')\n version('1.9.0', sha256='38ebc695b5cbbd239e0f149aa5d5395f0051a0fec1b74f21ff2921b22a31c171')\n version('1.8.8', sha256='dfab263649a979d0fff64a30cac374891a8e9940350e41f3bbd7679af32bd1fd')\n version('1.8.6', sha256='eded83ca007b719f32761e60fd8b9ffd0f5796a4caf455b01b5a5ef740ebd23f')\n version('1.8.4', sha256='3ef63dc8d0111042071ee7f7bafa0650c6ce2d7be957ef0b7ec269495a651ff8')\n version('1.8.2', sha256='5689f7411165adc953f37974e276a3028db94447c76e8dd92efe910c6d3bae08')\n\n depends_on('pkgconfig', type='build')\n depends_on('expat')\n depends_on('glib')\n depends_on('libsm')\n\n def install(self, spec, prefix):\n configure(\n \"--prefix=%s\" % prefix,\n \"--disable-systemd\",\n \"--disable-launchd\")\n make()\n make(\"install\")\n\n # dbus needs a machine id generated after install\n dbus_uuidgen = Executable(join_path(prefix.bin, 'dbus-uuidgen'))\n dbus_uuidgen('--ensure')\n", "path": "var/spack/repos/builtin/packages/dbus/package.py"}]}
| 1,756 | 114 |
gh_patches_debug_8953
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-3600
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Guidelines not displayed when check fails
I am crafting some custom rules and am unable to get the guidelines section to print out when the rule fails.
When I run the out-of-the box rules, guidelines prints. How do I get my custom rules to generate the guideline so that my users can access url?
```
metadata:
name: "Ensure instance profile is configured - https://test.example.com/EC2/MK_AWS_EC2_2.md?at=refs%2Fheads%2Fdevelop"
guidelines: "https://test.example.com/EC2/MK_AWS_EC2_2.md?at=refs%2Fheads%2Fdevelop"
id: "CKV_AWS_TF_EC2_2"
category: "IAM"
severity: "TBD"
scope:
provider: "aws"
definition:
cond_type: "attribute"
resource_types:
- "aws_instance"
attribute: "iam_instance_profile"
operator: "exists"
```
**Version (please complete the following information):**
- Checkov Version [2.0.1065]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/common/checks_infra/checks_parser.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import Dict, Any, List, Optional, Type, TYPE_CHECKING
4
5 from checkov.common.checks_infra.solvers import (
6 EqualsAttributeSolver,
7 NotEqualsAttributeSolver,
8 RegexMatchAttributeSolver,
9 NotRegexMatchAttributeSolver,
10 ExistsAttributeSolver,
11 AnyResourceSolver,
12 ContainsAttributeSolver,
13 NotExistsAttributeSolver,
14 WithinAttributeSolver,
15 NotContainsAttributeSolver,
16 StartingWithAttributeSolver,
17 NotStartingWithAttributeSolver,
18 EndingWithAttributeSolver,
19 NotEndingWithAttributeSolver,
20 AndSolver,
21 OrSolver,
22 NotSolver,
23 ConnectionExistsSolver,
24 ConnectionNotExistsSolver,
25 AndConnectionSolver,
26 OrConnectionSolver,
27 WithinFilterSolver,
28 GreaterThanAttributeSolver,
29 GreaterThanOrEqualAttributeSolver,
30 LessThanAttributeSolver,
31 LessThanOrEqualAttributeSolver,
32 SubsetAttributeSolver,
33 NotSubsetAttributeSolver,
34 IsEmptyAttributeSolver,
35 IsNotEmptyAttributeSolver,
36 LengthEqualsAttributeSolver,
37 LengthNotEqualsAttributeSolver,
38 LengthGreaterThanAttributeSolver,
39 LengthLessThanAttributeSolver,
40 LengthLessThanOrEqualAttributeSolver,
41 LengthGreaterThanOrEqualAttributeSolver,
42 IsTrueAttributeSolver,
43 IsFalseAttributeSolver,
44 IntersectsAttributeSolver,
45 NotIntersectsAttributeSolver
46 )
47 from checkov.common.checks_infra.solvers.connections_solvers.connection_one_exists_solver import \
48 ConnectionOneExistsSolver
49 from checkov.common.graph.checks_infra.base_check import BaseGraphCheck
50 from checkov.common.graph.checks_infra.base_parser import BaseGraphCheckParser
51 from checkov.common.graph.checks_infra.enums import SolverType
52 from checkov.common.graph.checks_infra.solvers.base_solver import BaseSolver
53 from checkov.common.util.type_forcers import force_list
54
55 if TYPE_CHECKING:
56 from checkov.common.checks_infra.solvers.attribute_solvers.base_attribute_solver import BaseAttributeSolver
57 from checkov.common.checks_infra.solvers.complex_solvers.base_complex_solver import BaseComplexSolver
58 from checkov.common.checks_infra.solvers.connections_solvers.base_connection_solver import BaseConnectionSolver
59 from checkov.common.checks_infra.solvers.connections_solvers.complex_connection_solver import ComplexConnectionSolver
60 from checkov.common.checks_infra.solvers.filter_solvers.base_filter_solver import BaseFilterSolver
61
62
63 operators_to_attributes_solver_classes: dict[str, Type[BaseAttributeSolver]] = {
64 "equals": EqualsAttributeSolver,
65 "not_equals": NotEqualsAttributeSolver,
66 "regex_match": RegexMatchAttributeSolver,
67 "not_regex_match": NotRegexMatchAttributeSolver,
68 "exists": ExistsAttributeSolver,
69 "any": AnyResourceSolver,
70 "contains": ContainsAttributeSolver,
71 "not_exists": NotExistsAttributeSolver,
72 "within": WithinAttributeSolver,
73 "not_contains": NotContainsAttributeSolver,
74 "starting_with": StartingWithAttributeSolver,
75 "not_starting_with": NotStartingWithAttributeSolver,
76 "ending_with": EndingWithAttributeSolver,
77 "not_ending_with": NotEndingWithAttributeSolver,
78 "greater_than": GreaterThanAttributeSolver,
79 "greater_than_or_equal": GreaterThanOrEqualAttributeSolver,
80 "less_than": LessThanAttributeSolver,
81 "less_than_or_equal": LessThanOrEqualAttributeSolver,
82 "subset": SubsetAttributeSolver,
83 "not_subset": NotSubsetAttributeSolver,
84 "is_empty": IsEmptyAttributeSolver,
85 "is_not_empty": IsNotEmptyAttributeSolver,
86 "length_equals": LengthEqualsAttributeSolver,
87 "length_not_equals": LengthNotEqualsAttributeSolver,
88 "length_greater_than": LengthGreaterThanAttributeSolver,
89 "length_greater_than_or_equal": LengthGreaterThanOrEqualAttributeSolver,
90 "length_less_than": LengthLessThanAttributeSolver,
91 "length_less_than_or_equal": LengthLessThanOrEqualAttributeSolver,
92 "is_true": IsTrueAttributeSolver,
93 "is_false": IsFalseAttributeSolver,
94 "intersects": IntersectsAttributeSolver,
95 "not_intersects": NotIntersectsAttributeSolver
96 }
97
98 operators_to_complex_solver_classes: dict[str, Type[BaseComplexSolver]] = {
99 "and": AndSolver,
100 "or": OrSolver,
101 "not": NotSolver,
102 }
103
104 operator_to_connection_solver_classes: dict[str, Type[BaseConnectionSolver]] = {
105 "exists": ConnectionExistsSolver,
106 "one_exists": ConnectionOneExistsSolver,
107 "not_exists": ConnectionNotExistsSolver
108 }
109
110 operator_to_complex_connection_solver_classes: dict[str, Type[ComplexConnectionSolver]] = {
111 "and": AndConnectionSolver,
112 "or": OrConnectionSolver,
113 }
114
115 operator_to_filter_solver_classes: dict[str, Type[BaseFilterSolver]] = {
116 "within": WithinFilterSolver,
117 }
118
119 condition_type_to_solver_type = {
120 "": SolverType.ATTRIBUTE,
121 "attribute": SolverType.ATTRIBUTE,
122 "connection": SolverType.CONNECTION,
123 "filter": SolverType.FILTER,
124 }
125
126 JSONPATH_PREFIX = "jsonpath_"
127
128
129 class NXGraphCheckParser(BaseGraphCheckParser):
130 def parse_raw_check(self, raw_check: Dict[str, Dict[str, Any]], **kwargs: Any) -> BaseGraphCheck:
131 policy_definition = raw_check.get("definition", {})
132 check = self._parse_raw_check(policy_definition, kwargs.get("resources_types"))
133 check.id = raw_check.get("metadata", {}).get("id", "")
134 check.name = raw_check.get("metadata", {}).get("name", "")
135 check.category = raw_check.get("metadata", {}).get("category", "")
136 check.frameworks = raw_check.get("metadata", {}).get("frameworks", [])
137 solver = self.get_check_solver(check)
138 check.set_solver(solver)
139
140 return check
141
142 def _parse_raw_check(self, raw_check: Dict[str, Any], resources_types: Optional[List[str]]) -> BaseGraphCheck:
143 check = BaseGraphCheck()
144 complex_operator = get_complex_operator(raw_check)
145 if complex_operator:
146 check.type = SolverType.COMPLEX
147 check.operator = complex_operator
148 sub_solvers = raw_check.get(complex_operator, [])
149
150 # this allows flexibility for specifying the child conditions, and makes "not" more intuitive by
151 # not requiring an actual list
152 if isinstance(sub_solvers, dict):
153 sub_solvers = [sub_solvers]
154
155 for sub_solver in sub_solvers:
156 check.sub_checks.append(self._parse_raw_check(sub_solver, resources_types))
157 resources_types_of_sub_solvers = [
158 force_list(q.resource_types) for q in check.sub_checks if q is not None and q.resource_types is not None
159 ]
160 check.resource_types = list(set(sum(resources_types_of_sub_solvers, [])))
161 if any(q.type in [SolverType.CONNECTION, SolverType.COMPLEX_CONNECTION] for q in check.sub_checks):
162 check.type = SolverType.COMPLEX_CONNECTION
163
164 else:
165 resource_type = raw_check.get("resource_types", [])
166 if (
167 not resource_type
168 or (isinstance(resource_type, str) and resource_type.lower() == "all")
169 or (isinstance(resource_type, list) and resource_type[0].lower() == "all")
170 ):
171 check.resource_types = resources_types or []
172 else:
173 check.resource_types = resource_type
174
175 connected_resources_type = raw_check.get("connected_resource_types", [])
176 if connected_resources_type == ["All"] or connected_resources_type == "all":
177 check.connected_resources_types = resources_types or []
178 else:
179 check.connected_resources_types = connected_resources_type
180
181 condition_type = raw_check.get("cond_type", "")
182 check.type = condition_type_to_solver_type.get(condition_type)
183 if condition_type == "":
184 check.operator = "any"
185 else:
186 check.operator = raw_check.get("operator", "")
187 check.attribute = raw_check.get("attribute")
188 check.attribute_value = raw_check.get("value")
189
190 return check
191
192 @staticmethod
193 def get_solver_type_method(check: BaseGraphCheck) -> Optional[BaseAttributeSolver]:
194 check.is_jsonpath_check = check.operator.startswith(JSONPATH_PREFIX)
195 if check.is_jsonpath_check:
196 solver = check.operator.replace(JSONPATH_PREFIX, '')
197 else:
198 solver = check.operator
199
200 return operators_to_attributes_solver_classes.get(solver, lambda *args: None)(
201 check.resource_types, check.attribute, check.attribute_value, check.is_jsonpath_check
202 )
203
204 def get_check_solver(self, check: BaseGraphCheck) -> BaseSolver:
205 sub_solvers: List[BaseSolver] = []
206 if check.sub_checks:
207 sub_solvers = []
208 for sub_solver in check.sub_checks:
209 sub_solvers.append(self.get_check_solver(sub_solver))
210
211 type_to_solver = {
212 SolverType.COMPLEX_CONNECTION: operator_to_complex_connection_solver_classes.get(
213 check.operator, lambda *args: None
214 )(sub_solvers, check.operator),
215 SolverType.COMPLEX: operators_to_complex_solver_classes.get(check.operator, lambda *args: None)(
216 sub_solvers, check.resource_types
217 ),
218 SolverType.ATTRIBUTE: self.get_solver_type_method(check),
219 SolverType.CONNECTION: operator_to_connection_solver_classes.get(check.operator, lambda *args: None)(
220 check.resource_types, check.connected_resources_types
221 ),
222 SolverType.FILTER: operator_to_filter_solver_classes.get(check.operator, lambda *args: None)(
223 check.resource_types, check.attribute, check.attribute_value
224 ),
225 }
226
227 solver = type_to_solver.get(check.type) # type:ignore[arg-type] # if not str will return None
228 if not solver:
229 raise NotImplementedError(f"solver type {check.type} with operator {check.operator} is not supported")
230 return solver
231
232
233 def get_complex_operator(raw_check: Dict[str, Any]) -> Optional[str]:
234 for operator in operators_to_complex_solver_classes.keys():
235 if raw_check.get(operator):
236 return operator
237 return None
238
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/common/checks_infra/checks_parser.py b/checkov/common/checks_infra/checks_parser.py
--- a/checkov/common/checks_infra/checks_parser.py
+++ b/checkov/common/checks_infra/checks_parser.py
@@ -134,6 +134,7 @@
check.name = raw_check.get("metadata", {}).get("name", "")
check.category = raw_check.get("metadata", {}).get("category", "")
check.frameworks = raw_check.get("metadata", {}).get("frameworks", [])
+ check.guideline = raw_check.get("metadata", {}).get("guideline")
solver = self.get_check_solver(check)
check.set_solver(solver)
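One detail worth noting: the patched parser reads the singular `guideline` key from the rule metadata, while the custom rule quoted in the issue spells the key `guidelines`, so the spelling presumably still has to match for the URL to be picked up. A small self-contained illustration of the lookup the patch adds (metadata values copied from the issue; the rest of the rule is left out):

```python
# Mirrors the lookup the patch adds to NXGraphCheckParser.parse_raw_check().
raw_check = {
    "metadata": {
        "id": "CKV_AWS_TF_EC2_2",
        "category": "IAM",
        # singular key -- a key spelled "guidelines" would not be found by this lookup
        "guideline": "https://test.example.com/EC2/MK_AWS_EC2_2.md?at=refs%2Fheads%2Fdevelop",
    },
}

guideline = raw_check.get("metadata", {}).get("guideline")
print(guideline)  # the URL above; None if the metadata only defines "guidelines"
```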
|
{"golden_diff": "diff --git a/checkov/common/checks_infra/checks_parser.py b/checkov/common/checks_infra/checks_parser.py\n--- a/checkov/common/checks_infra/checks_parser.py\n+++ b/checkov/common/checks_infra/checks_parser.py\n@@ -134,6 +134,7 @@\n check.name = raw_check.get(\"metadata\", {}).get(\"name\", \"\")\n check.category = raw_check.get(\"metadata\", {}).get(\"category\", \"\")\n check.frameworks = raw_check.get(\"metadata\", {}).get(\"frameworks\", [])\n+ check.guideline = raw_check.get(\"metadata\", {}).get(\"guideline\")\n solver = self.get_check_solver(check)\n check.set_solver(solver)\n", "issue": "Guidelines not displayed when check fails \nI am crafting some custom rules and am unable to get the guidelines section to print out when the rule fails. \r\nWhen I run the out-of-the box rules, guidelines prints. How do I get my custom rules to generate the guideline so that my users can access url? \r\n\r\n```\r\nmetadata:\r\n name: \"Ensure instance profile is configured - https://test.example.com/EC2/MK_AWS_EC2_2.md?at=refs%2Fheads%2Fdevelop\"\r\n guidelines: \"https://test.example.com/EC2/MK_AWS_EC2_2.md?at=refs%2Fheads%2Fdevelop\"\r\n id: \"CKV_AWS_TF_EC2_2\"\r\n category: \"IAM\"\r\n severity: \"TBD\"\r\nscope:\r\nprovider: \"aws\"\r\ndefinition:\r\n cond_type: \"attribute\"\r\n resource_types:\r\n - \"aws_instance\"\r\n attribute: \"iam_instance_profile\"\r\n operator: \"exists\"\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version [2.0.1065]\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Dict, Any, List, Optional, Type, TYPE_CHECKING\n\nfrom checkov.common.checks_infra.solvers import (\n EqualsAttributeSolver,\n NotEqualsAttributeSolver,\n RegexMatchAttributeSolver,\n NotRegexMatchAttributeSolver,\n ExistsAttributeSolver,\n AnyResourceSolver,\n ContainsAttributeSolver,\n NotExistsAttributeSolver,\n WithinAttributeSolver,\n NotContainsAttributeSolver,\n StartingWithAttributeSolver,\n NotStartingWithAttributeSolver,\n EndingWithAttributeSolver,\n NotEndingWithAttributeSolver,\n AndSolver,\n OrSolver,\n NotSolver,\n ConnectionExistsSolver,\n ConnectionNotExistsSolver,\n AndConnectionSolver,\n OrConnectionSolver,\n WithinFilterSolver,\n GreaterThanAttributeSolver,\n GreaterThanOrEqualAttributeSolver,\n LessThanAttributeSolver,\n LessThanOrEqualAttributeSolver,\n SubsetAttributeSolver,\n NotSubsetAttributeSolver,\n IsEmptyAttributeSolver,\n IsNotEmptyAttributeSolver,\n LengthEqualsAttributeSolver,\n LengthNotEqualsAttributeSolver,\n LengthGreaterThanAttributeSolver,\n LengthLessThanAttributeSolver,\n LengthLessThanOrEqualAttributeSolver,\n LengthGreaterThanOrEqualAttributeSolver,\n IsTrueAttributeSolver,\n IsFalseAttributeSolver,\n IntersectsAttributeSolver,\n NotIntersectsAttributeSolver\n)\nfrom checkov.common.checks_infra.solvers.connections_solvers.connection_one_exists_solver import \\\n ConnectionOneExistsSolver\nfrom checkov.common.graph.checks_infra.base_check import BaseGraphCheck\nfrom checkov.common.graph.checks_infra.base_parser import BaseGraphCheckParser\nfrom checkov.common.graph.checks_infra.enums import SolverType\nfrom checkov.common.graph.checks_infra.solvers.base_solver import BaseSolver\nfrom checkov.common.util.type_forcers import force_list\n\nif TYPE_CHECKING:\n from checkov.common.checks_infra.solvers.attribute_solvers.base_attribute_solver import BaseAttributeSolver\n from checkov.common.checks_infra.solvers.complex_solvers.base_complex_solver import 
BaseComplexSolver\n from checkov.common.checks_infra.solvers.connections_solvers.base_connection_solver import BaseConnectionSolver\n from checkov.common.checks_infra.solvers.connections_solvers.complex_connection_solver import ComplexConnectionSolver\n from checkov.common.checks_infra.solvers.filter_solvers.base_filter_solver import BaseFilterSolver\n\n\noperators_to_attributes_solver_classes: dict[str, Type[BaseAttributeSolver]] = {\n \"equals\": EqualsAttributeSolver,\n \"not_equals\": NotEqualsAttributeSolver,\n \"regex_match\": RegexMatchAttributeSolver,\n \"not_regex_match\": NotRegexMatchAttributeSolver,\n \"exists\": ExistsAttributeSolver,\n \"any\": AnyResourceSolver,\n \"contains\": ContainsAttributeSolver,\n \"not_exists\": NotExistsAttributeSolver,\n \"within\": WithinAttributeSolver,\n \"not_contains\": NotContainsAttributeSolver,\n \"starting_with\": StartingWithAttributeSolver,\n \"not_starting_with\": NotStartingWithAttributeSolver,\n \"ending_with\": EndingWithAttributeSolver,\n \"not_ending_with\": NotEndingWithAttributeSolver,\n \"greater_than\": GreaterThanAttributeSolver,\n \"greater_than_or_equal\": GreaterThanOrEqualAttributeSolver,\n \"less_than\": LessThanAttributeSolver,\n \"less_than_or_equal\": LessThanOrEqualAttributeSolver,\n \"subset\": SubsetAttributeSolver,\n \"not_subset\": NotSubsetAttributeSolver,\n \"is_empty\": IsEmptyAttributeSolver,\n \"is_not_empty\": IsNotEmptyAttributeSolver,\n \"length_equals\": LengthEqualsAttributeSolver,\n \"length_not_equals\": LengthNotEqualsAttributeSolver,\n \"length_greater_than\": LengthGreaterThanAttributeSolver,\n \"length_greater_than_or_equal\": LengthGreaterThanOrEqualAttributeSolver,\n \"length_less_than\": LengthLessThanAttributeSolver,\n \"length_less_than_or_equal\": LengthLessThanOrEqualAttributeSolver,\n \"is_true\": IsTrueAttributeSolver,\n \"is_false\": IsFalseAttributeSolver,\n \"intersects\": IntersectsAttributeSolver,\n \"not_intersects\": NotIntersectsAttributeSolver\n}\n\noperators_to_complex_solver_classes: dict[str, Type[BaseComplexSolver]] = {\n \"and\": AndSolver,\n \"or\": OrSolver,\n \"not\": NotSolver,\n}\n\noperator_to_connection_solver_classes: dict[str, Type[BaseConnectionSolver]] = {\n \"exists\": ConnectionExistsSolver,\n \"one_exists\": ConnectionOneExistsSolver,\n \"not_exists\": ConnectionNotExistsSolver\n}\n\noperator_to_complex_connection_solver_classes: dict[str, Type[ComplexConnectionSolver]] = {\n \"and\": AndConnectionSolver,\n \"or\": OrConnectionSolver,\n}\n\noperator_to_filter_solver_classes: dict[str, Type[BaseFilterSolver]] = {\n \"within\": WithinFilterSolver,\n}\n\ncondition_type_to_solver_type = {\n \"\": SolverType.ATTRIBUTE,\n \"attribute\": SolverType.ATTRIBUTE,\n \"connection\": SolverType.CONNECTION,\n \"filter\": SolverType.FILTER,\n}\n\nJSONPATH_PREFIX = \"jsonpath_\"\n\n\nclass NXGraphCheckParser(BaseGraphCheckParser):\n def parse_raw_check(self, raw_check: Dict[str, Dict[str, Any]], **kwargs: Any) -> BaseGraphCheck:\n policy_definition = raw_check.get(\"definition\", {})\n check = self._parse_raw_check(policy_definition, kwargs.get(\"resources_types\"))\n check.id = raw_check.get(\"metadata\", {}).get(\"id\", \"\")\n check.name = raw_check.get(\"metadata\", {}).get(\"name\", \"\")\n check.category = raw_check.get(\"metadata\", {}).get(\"category\", \"\")\n check.frameworks = raw_check.get(\"metadata\", {}).get(\"frameworks\", [])\n solver = self.get_check_solver(check)\n check.set_solver(solver)\n\n return check\n\n def _parse_raw_check(self, raw_check: Dict[str, 
Any], resources_types: Optional[List[str]]) -> BaseGraphCheck:\n check = BaseGraphCheck()\n complex_operator = get_complex_operator(raw_check)\n if complex_operator:\n check.type = SolverType.COMPLEX\n check.operator = complex_operator\n sub_solvers = raw_check.get(complex_operator, [])\n\n # this allows flexibility for specifying the child conditions, and makes \"not\" more intuitive by\n # not requiring an actual list\n if isinstance(sub_solvers, dict):\n sub_solvers = [sub_solvers]\n\n for sub_solver in sub_solvers:\n check.sub_checks.append(self._parse_raw_check(sub_solver, resources_types))\n resources_types_of_sub_solvers = [\n force_list(q.resource_types) for q in check.sub_checks if q is not None and q.resource_types is not None\n ]\n check.resource_types = list(set(sum(resources_types_of_sub_solvers, [])))\n if any(q.type in [SolverType.CONNECTION, SolverType.COMPLEX_CONNECTION] for q in check.sub_checks):\n check.type = SolverType.COMPLEX_CONNECTION\n\n else:\n resource_type = raw_check.get(\"resource_types\", [])\n if (\n not resource_type\n or (isinstance(resource_type, str) and resource_type.lower() == \"all\")\n or (isinstance(resource_type, list) and resource_type[0].lower() == \"all\")\n ):\n check.resource_types = resources_types or []\n else:\n check.resource_types = resource_type\n\n connected_resources_type = raw_check.get(\"connected_resource_types\", [])\n if connected_resources_type == [\"All\"] or connected_resources_type == \"all\":\n check.connected_resources_types = resources_types or []\n else:\n check.connected_resources_types = connected_resources_type\n\n condition_type = raw_check.get(\"cond_type\", \"\")\n check.type = condition_type_to_solver_type.get(condition_type)\n if condition_type == \"\":\n check.operator = \"any\"\n else:\n check.operator = raw_check.get(\"operator\", \"\")\n check.attribute = raw_check.get(\"attribute\")\n check.attribute_value = raw_check.get(\"value\")\n\n return check\n\n @staticmethod\n def get_solver_type_method(check: BaseGraphCheck) -> Optional[BaseAttributeSolver]:\n check.is_jsonpath_check = check.operator.startswith(JSONPATH_PREFIX)\n if check.is_jsonpath_check:\n solver = check.operator.replace(JSONPATH_PREFIX, '')\n else:\n solver = check.operator\n\n return operators_to_attributes_solver_classes.get(solver, lambda *args: None)(\n check.resource_types, check.attribute, check.attribute_value, check.is_jsonpath_check\n )\n\n def get_check_solver(self, check: BaseGraphCheck) -> BaseSolver:\n sub_solvers: List[BaseSolver] = []\n if check.sub_checks:\n sub_solvers = []\n for sub_solver in check.sub_checks:\n sub_solvers.append(self.get_check_solver(sub_solver))\n\n type_to_solver = {\n SolverType.COMPLEX_CONNECTION: operator_to_complex_connection_solver_classes.get(\n check.operator, lambda *args: None\n )(sub_solvers, check.operator),\n SolverType.COMPLEX: operators_to_complex_solver_classes.get(check.operator, lambda *args: None)(\n sub_solvers, check.resource_types\n ),\n SolverType.ATTRIBUTE: self.get_solver_type_method(check),\n SolverType.CONNECTION: operator_to_connection_solver_classes.get(check.operator, lambda *args: None)(\n check.resource_types, check.connected_resources_types\n ),\n SolverType.FILTER: operator_to_filter_solver_classes.get(check.operator, lambda *args: None)(\n check.resource_types, check.attribute, check.attribute_value\n ),\n }\n\n solver = type_to_solver.get(check.type) # type:ignore[arg-type] # if not str will return None\n if not solver:\n raise NotImplementedError(f\"solver type 
{check.type} with operator {check.operator} is not supported\")\n return solver\n\n\ndef get_complex_operator(raw_check: Dict[str, Any]) -> Optional[str]:\n for operator in operators_to_complex_solver_classes.keys():\n if raw_check.get(operator):\n return operator\n return None\n", "path": "checkov/common/checks_infra/checks_parser.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import Dict, Any, List, Optional, Type, TYPE_CHECKING\n\nfrom checkov.common.checks_infra.solvers import (\n EqualsAttributeSolver,\n NotEqualsAttributeSolver,\n RegexMatchAttributeSolver,\n NotRegexMatchAttributeSolver,\n ExistsAttributeSolver,\n AnyResourceSolver,\n ContainsAttributeSolver,\n NotExistsAttributeSolver,\n WithinAttributeSolver,\n NotContainsAttributeSolver,\n StartingWithAttributeSolver,\n NotStartingWithAttributeSolver,\n EndingWithAttributeSolver,\n NotEndingWithAttributeSolver,\n AndSolver,\n OrSolver,\n NotSolver,\n ConnectionExistsSolver,\n ConnectionNotExistsSolver,\n AndConnectionSolver,\n OrConnectionSolver,\n WithinFilterSolver,\n GreaterThanAttributeSolver,\n GreaterThanOrEqualAttributeSolver,\n LessThanAttributeSolver,\n LessThanOrEqualAttributeSolver,\n SubsetAttributeSolver,\n NotSubsetAttributeSolver,\n IsEmptyAttributeSolver,\n IsNotEmptyAttributeSolver,\n LengthEqualsAttributeSolver,\n LengthNotEqualsAttributeSolver,\n LengthGreaterThanAttributeSolver,\n LengthLessThanAttributeSolver,\n LengthLessThanOrEqualAttributeSolver,\n LengthGreaterThanOrEqualAttributeSolver,\n IsTrueAttributeSolver,\n IsFalseAttributeSolver,\n IntersectsAttributeSolver,\n NotIntersectsAttributeSolver\n)\nfrom checkov.common.checks_infra.solvers.connections_solvers.connection_one_exists_solver import \\\n ConnectionOneExistsSolver\nfrom checkov.common.graph.checks_infra.base_check import BaseGraphCheck\nfrom checkov.common.graph.checks_infra.base_parser import BaseGraphCheckParser\nfrom checkov.common.graph.checks_infra.enums import SolverType\nfrom checkov.common.graph.checks_infra.solvers.base_solver import BaseSolver\nfrom checkov.common.util.type_forcers import force_list\n\nif TYPE_CHECKING:\n from checkov.common.checks_infra.solvers.attribute_solvers.base_attribute_solver import BaseAttributeSolver\n from checkov.common.checks_infra.solvers.complex_solvers.base_complex_solver import BaseComplexSolver\n from checkov.common.checks_infra.solvers.connections_solvers.base_connection_solver import BaseConnectionSolver\n from checkov.common.checks_infra.solvers.connections_solvers.complex_connection_solver import ComplexConnectionSolver\n from checkov.common.checks_infra.solvers.filter_solvers.base_filter_solver import BaseFilterSolver\n\n\noperators_to_attributes_solver_classes: dict[str, Type[BaseAttributeSolver]] = {\n \"equals\": EqualsAttributeSolver,\n \"not_equals\": NotEqualsAttributeSolver,\n \"regex_match\": RegexMatchAttributeSolver,\n \"not_regex_match\": NotRegexMatchAttributeSolver,\n \"exists\": ExistsAttributeSolver,\n \"any\": AnyResourceSolver,\n \"contains\": ContainsAttributeSolver,\n \"not_exists\": NotExistsAttributeSolver,\n \"within\": WithinAttributeSolver,\n \"not_contains\": NotContainsAttributeSolver,\n \"starting_with\": StartingWithAttributeSolver,\n \"not_starting_with\": NotStartingWithAttributeSolver,\n \"ending_with\": EndingWithAttributeSolver,\n \"not_ending_with\": NotEndingWithAttributeSolver,\n \"greater_than\": GreaterThanAttributeSolver,\n \"greater_than_or_equal\": GreaterThanOrEqualAttributeSolver,\n \"less_than\": 
LessThanAttributeSolver,\n \"less_than_or_equal\": LessThanOrEqualAttributeSolver,\n \"subset\": SubsetAttributeSolver,\n \"not_subset\": NotSubsetAttributeSolver,\n \"is_empty\": IsEmptyAttributeSolver,\n \"is_not_empty\": IsNotEmptyAttributeSolver,\n \"length_equals\": LengthEqualsAttributeSolver,\n \"length_not_equals\": LengthNotEqualsAttributeSolver,\n \"length_greater_than\": LengthGreaterThanAttributeSolver,\n \"length_greater_than_or_equal\": LengthGreaterThanOrEqualAttributeSolver,\n \"length_less_than\": LengthLessThanAttributeSolver,\n \"length_less_than_or_equal\": LengthLessThanOrEqualAttributeSolver,\n \"is_true\": IsTrueAttributeSolver,\n \"is_false\": IsFalseAttributeSolver,\n \"intersects\": IntersectsAttributeSolver,\n \"not_intersects\": NotIntersectsAttributeSolver\n}\n\noperators_to_complex_solver_classes: dict[str, Type[BaseComplexSolver]] = {\n \"and\": AndSolver,\n \"or\": OrSolver,\n \"not\": NotSolver,\n}\n\noperator_to_connection_solver_classes: dict[str, Type[BaseConnectionSolver]] = {\n \"exists\": ConnectionExistsSolver,\n \"one_exists\": ConnectionOneExistsSolver,\n \"not_exists\": ConnectionNotExistsSolver\n}\n\noperator_to_complex_connection_solver_classes: dict[str, Type[ComplexConnectionSolver]] = {\n \"and\": AndConnectionSolver,\n \"or\": OrConnectionSolver,\n}\n\noperator_to_filter_solver_classes: dict[str, Type[BaseFilterSolver]] = {\n \"within\": WithinFilterSolver,\n}\n\ncondition_type_to_solver_type = {\n \"\": SolverType.ATTRIBUTE,\n \"attribute\": SolverType.ATTRIBUTE,\n \"connection\": SolverType.CONNECTION,\n \"filter\": SolverType.FILTER,\n}\n\nJSONPATH_PREFIX = \"jsonpath_\"\n\n\nclass NXGraphCheckParser(BaseGraphCheckParser):\n def parse_raw_check(self, raw_check: Dict[str, Dict[str, Any]], **kwargs: Any) -> BaseGraphCheck:\n policy_definition = raw_check.get(\"definition\", {})\n check = self._parse_raw_check(policy_definition, kwargs.get(\"resources_types\"))\n check.id = raw_check.get(\"metadata\", {}).get(\"id\", \"\")\n check.name = raw_check.get(\"metadata\", {}).get(\"name\", \"\")\n check.category = raw_check.get(\"metadata\", {}).get(\"category\", \"\")\n check.frameworks = raw_check.get(\"metadata\", {}).get(\"frameworks\", [])\n check.guideline = raw_check.get(\"metadata\", {}).get(\"guideline\")\n solver = self.get_check_solver(check)\n check.set_solver(solver)\n\n return check\n\n def _parse_raw_check(self, raw_check: Dict[str, Any], resources_types: Optional[List[str]]) -> BaseGraphCheck:\n check = BaseGraphCheck()\n complex_operator = get_complex_operator(raw_check)\n if complex_operator:\n check.type = SolverType.COMPLEX\n check.operator = complex_operator\n sub_solvers = raw_check.get(complex_operator, [])\n\n # this allows flexibility for specifying the child conditions, and makes \"not\" more intuitive by\n # not requiring an actual list\n if isinstance(sub_solvers, dict):\n sub_solvers = [sub_solvers]\n\n for sub_solver in sub_solvers:\n check.sub_checks.append(self._parse_raw_check(sub_solver, resources_types))\n resources_types_of_sub_solvers = [\n force_list(q.resource_types) for q in check.sub_checks if q is not None and q.resource_types is not None\n ]\n check.resource_types = list(set(sum(resources_types_of_sub_solvers, [])))\n if any(q.type in [SolverType.CONNECTION, SolverType.COMPLEX_CONNECTION] for q in check.sub_checks):\n check.type = SolverType.COMPLEX_CONNECTION\n\n else:\n resource_type = raw_check.get(\"resource_types\", [])\n if (\n not resource_type\n or (isinstance(resource_type, str) and 
resource_type.lower() == \"all\")\n or (isinstance(resource_type, list) and resource_type[0].lower() == \"all\")\n ):\n check.resource_types = resources_types or []\n else:\n check.resource_types = resource_type\n\n connected_resources_type = raw_check.get(\"connected_resource_types\", [])\n if connected_resources_type == [\"All\"] or connected_resources_type == \"all\":\n check.connected_resources_types = resources_types or []\n else:\n check.connected_resources_types = connected_resources_type\n\n condition_type = raw_check.get(\"cond_type\", \"\")\n check.type = condition_type_to_solver_type.get(condition_type)\n if condition_type == \"\":\n check.operator = \"any\"\n else:\n check.operator = raw_check.get(\"operator\", \"\")\n check.attribute = raw_check.get(\"attribute\")\n check.attribute_value = raw_check.get(\"value\")\n\n return check\n\n @staticmethod\n def get_solver_type_method(check: BaseGraphCheck) -> Optional[BaseAttributeSolver]:\n check.is_jsonpath_check = check.operator.startswith(JSONPATH_PREFIX)\n if check.is_jsonpath_check:\n solver = check.operator.replace(JSONPATH_PREFIX, '')\n else:\n solver = check.operator\n\n return operators_to_attributes_solver_classes.get(solver, lambda *args: None)(\n check.resource_types, check.attribute, check.attribute_value, check.is_jsonpath_check\n )\n\n def get_check_solver(self, check: BaseGraphCheck) -> BaseSolver:\n sub_solvers: List[BaseSolver] = []\n if check.sub_checks:\n sub_solvers = []\n for sub_solver in check.sub_checks:\n sub_solvers.append(self.get_check_solver(sub_solver))\n\n type_to_solver = {\n SolverType.COMPLEX_CONNECTION: operator_to_complex_connection_solver_classes.get(\n check.operator, lambda *args: None\n )(sub_solvers, check.operator),\n SolverType.COMPLEX: operators_to_complex_solver_classes.get(check.operator, lambda *args: None)(\n sub_solvers, check.resource_types\n ),\n SolverType.ATTRIBUTE: self.get_solver_type_method(check),\n SolverType.CONNECTION: operator_to_connection_solver_classes.get(check.operator, lambda *args: None)(\n check.resource_types, check.connected_resources_types\n ),\n SolverType.FILTER: operator_to_filter_solver_classes.get(check.operator, lambda *args: None)(\n check.resource_types, check.attribute, check.attribute_value\n ),\n }\n\n solver = type_to_solver.get(check.type) # type:ignore[arg-type] # if not str will return None\n if not solver:\n raise NotImplementedError(f\"solver type {check.type} with operator {check.operator} is not supported\")\n return solver\n\n\ndef get_complex_operator(raw_check: Dict[str, Any]) -> Optional[str]:\n for operator in operators_to_complex_solver_classes.keys():\n if raw_check.get(operator):\n return operator\n return None\n", "path": "checkov/common/checks_infra/checks_parser.py"}]}
| 3,212 | 154 |
gh_patches_debug_23335
|
rasdani/github-patches
|
git_diff
|
Cog-Creators__Red-DiscordBot-2919
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Mod] Bot doesn't record name changes correctly
# Other bugs
#### What were you trying to do?
Check past names using `[p]names` command
#### What were you expecting to happen?
Get past names
#### What actually happened?
I didn't get any names, because we're checking for it in wrong event (`on_member_update` instead of `on_user_update`).
#### How can we reproduce this issue?
1. Load mod cog
2. Change username
3. Use `[p]names` on yourself.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redbot/cogs/mod/events.py`
Content:
```
1 import logging
2 from datetime import datetime
3 from collections import defaultdict, deque
4
5 import discord
6 from redbot.core import i18n, modlog, commands
7 from redbot.core.utils.mod import is_mod_or_superior
8 from .abc import MixinMeta
9
10 _ = i18n.Translator("Mod", __file__)
11 log = logging.getLogger("red.mod")
12
13
14 class Events(MixinMeta):
15 """
16 This is a mixin for the core mod cog
17 Has a bunch of things split off to here.
18 """
19
20 async def check_duplicates(self, message):
21 guild = message.guild
22 author = message.author
23
24 guild_cache = self.cache.get(guild.id, None)
25 if guild_cache is None:
26 repeats = await self.settings.guild(guild).delete_repeats()
27 if repeats == -1:
28 return False
29 guild_cache = self.cache[guild.id] = defaultdict(lambda: deque(maxlen=repeats))
30
31 if not message.content:
32 return False
33
34 guild_cache[author].append(message.content)
35 msgs = guild_cache[author]
36 if len(msgs) == msgs.maxlen and len(set(msgs)) == 1:
37 try:
38 await message.delete()
39 return True
40 except discord.HTTPException:
41 pass
42 return False
43
44 async def check_mention_spam(self, message):
45 guild = message.guild
46 author = message.author
47
48 max_mentions = await self.settings.guild(guild).ban_mention_spam()
49 if max_mentions:
50 mentions = set(message.mentions)
51 if len(mentions) >= max_mentions:
52 try:
53 await guild.ban(author, reason=_("Mention spam (Autoban)"))
54 except discord.HTTPException:
55 log.info(
56 "Failed to ban member for mention spam in server {}.".format(guild.id)
57 )
58 else:
59 try:
60 await modlog.create_case(
61 self.bot,
62 guild,
63 message.created_at,
64 "ban",
65 author,
66 guild.me,
67 _("Mention spam (Autoban)"),
68 until=None,
69 channel=None,
70 )
71 except RuntimeError as e:
72 print(e)
73 return False
74 return True
75 return False
76
77 @commands.Cog.listener()
78 async def on_message(self, message):
79 author = message.author
80 if message.guild is None or self.bot.user == author:
81 return
82 valid_user = isinstance(author, discord.Member) and not author.bot
83 if not valid_user:
84 return
85
86 # Bots and mods or superior are ignored from the filter
87 mod_or_superior = await is_mod_or_superior(self.bot, obj=author)
88 if mod_or_superior:
89 return
90 # As are anyone configured to be
91 if await self.bot.is_automod_immune(message):
92 return
93 deleted = await self.check_duplicates(message)
94 if not deleted:
95 await self.check_mention_spam(message)
96
97 @commands.Cog.listener()
98 async def on_member_update(self, before: discord.Member, after: discord.Member):
99 if before.name != after.name:
100 async with self.settings.user(before).past_names() as name_list:
101 while None in name_list: # clean out null entries from a bug
102 name_list.remove(None)
103 if after.name in name_list:
104 # Ensure order is maintained without duplicates occuring
105 name_list.remove(after.name)
106 name_list.append(after.name)
107 while len(name_list) > 20:
108 name_list.pop(0)
109
110 if before.nick != after.nick and after.nick is not None:
111 async with self.settings.member(before).past_nicks() as nick_list:
112 while None in nick_list: # clean out null entries from a bug
113 nick_list.remove(None)
114 if after.nick in nick_list:
115 nick_list.remove(after.nick)
116 nick_list.append(after.nick)
117 while len(nick_list) > 20:
118 nick_list.pop(0)
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/redbot/cogs/mod/events.py b/redbot/cogs/mod/events.py
--- a/redbot/cogs/mod/events.py
+++ b/redbot/cogs/mod/events.py
@@ -95,7 +95,7 @@
await self.check_mention_spam(message)
@commands.Cog.listener()
- async def on_member_update(self, before: discord.Member, after: discord.Member):
+ async def on_user_update(self, before: discord.User, after: discord.User):
if before.name != after.name:
async with self.settings.user(before).past_names() as name_list:
while None in name_list: # clean out null entries from a bug
@@ -107,6 +107,8 @@
while len(name_list) > 20:
name_list.pop(0)
+ @commands.Cog.listener()
+ async def on_member_update(self, before: discord.Member, after: discord.Member):
if before.nick != after.nick and after.nick is not None:
async with self.settings.member(before).past_nicks() as nick_list:
while None in nick_list: # clean out null entries from a bug
|
{"golden_diff": "diff --git a/redbot/cogs/mod/events.py b/redbot/cogs/mod/events.py\n--- a/redbot/cogs/mod/events.py\n+++ b/redbot/cogs/mod/events.py\n@@ -95,7 +95,7 @@\n await self.check_mention_spam(message)\n \n @commands.Cog.listener()\n- async def on_member_update(self, before: discord.Member, after: discord.Member):\n+ async def on_user_update(self, before: discord.User, after: discord.User):\n if before.name != after.name:\n async with self.settings.user(before).past_names() as name_list:\n while None in name_list: # clean out null entries from a bug\n@@ -107,6 +107,8 @@\n while len(name_list) > 20:\n name_list.pop(0)\n \n+ @commands.Cog.listener()\n+ async def on_member_update(self, before: discord.Member, after: discord.Member):\n if before.nick != after.nick and after.nick is not None:\n async with self.settings.member(before).past_nicks() as nick_list:\n while None in nick_list: # clean out null entries from a bug\n", "issue": "[Mod] Bot doesn't record name changes correctly\n# Other bugs\r\n\r\n#### What were you trying to do?\r\n\r\nCheck past names using `[p]names` command\r\n\r\n#### What were you expecting to happen?\r\n\r\nGet past names\r\n\r\n#### What actually happened?\r\n\r\nI didn't get any names, because we're checking for it in wrong event (`on_member_update` instead of `on_user_update`).\r\n\r\n#### How can we reproduce this issue?\r\n\r\n1. Load mod cog\r\n2. Change username\r\n3. Use `[p]names` on yourself.\r\n\n", "before_files": [{"content": "import logging\nfrom datetime import datetime\nfrom collections import defaultdict, deque\n\nimport discord\nfrom redbot.core import i18n, modlog, commands\nfrom redbot.core.utils.mod import is_mod_or_superior\nfrom .abc import MixinMeta\n\n_ = i18n.Translator(\"Mod\", __file__)\nlog = logging.getLogger(\"red.mod\")\n\n\nclass Events(MixinMeta):\n \"\"\"\n This is a mixin for the core mod cog\n Has a bunch of things split off to here.\n \"\"\"\n\n async def check_duplicates(self, message):\n guild = message.guild\n author = message.author\n\n guild_cache = self.cache.get(guild.id, None)\n if guild_cache is None:\n repeats = await self.settings.guild(guild).delete_repeats()\n if repeats == -1:\n return False\n guild_cache = self.cache[guild.id] = defaultdict(lambda: deque(maxlen=repeats))\n\n if not message.content:\n return False\n\n guild_cache[author].append(message.content)\n msgs = guild_cache[author]\n if len(msgs) == msgs.maxlen and len(set(msgs)) == 1:\n try:\n await message.delete()\n return True\n except discord.HTTPException:\n pass\n return False\n\n async def check_mention_spam(self, message):\n guild = message.guild\n author = message.author\n\n max_mentions = await self.settings.guild(guild).ban_mention_spam()\n if max_mentions:\n mentions = set(message.mentions)\n if len(mentions) >= max_mentions:\n try:\n await guild.ban(author, reason=_(\"Mention spam (Autoban)\"))\n except discord.HTTPException:\n log.info(\n \"Failed to ban member for mention spam in server {}.\".format(guild.id)\n )\n else:\n try:\n await modlog.create_case(\n self.bot,\n guild,\n message.created_at,\n \"ban\",\n author,\n guild.me,\n _(\"Mention spam (Autoban)\"),\n until=None,\n channel=None,\n )\n except RuntimeError as e:\n print(e)\n return False\n return True\n return False\n\n @commands.Cog.listener()\n async def on_message(self, message):\n author = message.author\n if message.guild is None or self.bot.user == author:\n return\n valid_user = isinstance(author, discord.Member) and not author.bot\n if not valid_user:\n return\n\n 
# Bots and mods or superior are ignored from the filter\n mod_or_superior = await is_mod_or_superior(self.bot, obj=author)\n if mod_or_superior:\n return\n # As are anyone configured to be\n if await self.bot.is_automod_immune(message):\n return\n deleted = await self.check_duplicates(message)\n if not deleted:\n await self.check_mention_spam(message)\n\n @commands.Cog.listener()\n async def on_member_update(self, before: discord.Member, after: discord.Member):\n if before.name != after.name:\n async with self.settings.user(before).past_names() as name_list:\n while None in name_list: # clean out null entries from a bug\n name_list.remove(None)\n if after.name in name_list:\n # Ensure order is maintained without duplicates occuring\n name_list.remove(after.name)\n name_list.append(after.name)\n while len(name_list) > 20:\n name_list.pop(0)\n\n if before.nick != after.nick and after.nick is not None:\n async with self.settings.member(before).past_nicks() as nick_list:\n while None in nick_list: # clean out null entries from a bug\n nick_list.remove(None)\n if after.nick in nick_list:\n nick_list.remove(after.nick)\n nick_list.append(after.nick)\n while len(nick_list) > 20:\n nick_list.pop(0)\n", "path": "redbot/cogs/mod/events.py"}], "after_files": [{"content": "import logging\nfrom datetime import datetime\nfrom collections import defaultdict, deque\n\nimport discord\nfrom redbot.core import i18n, modlog, commands\nfrom redbot.core.utils.mod import is_mod_or_superior\nfrom .abc import MixinMeta\n\n_ = i18n.Translator(\"Mod\", __file__)\nlog = logging.getLogger(\"red.mod\")\n\n\nclass Events(MixinMeta):\n \"\"\"\n This is a mixin for the core mod cog\n Has a bunch of things split off to here.\n \"\"\"\n\n async def check_duplicates(self, message):\n guild = message.guild\n author = message.author\n\n guild_cache = self.cache.get(guild.id, None)\n if guild_cache is None:\n repeats = await self.settings.guild(guild).delete_repeats()\n if repeats == -1:\n return False\n guild_cache = self.cache[guild.id] = defaultdict(lambda: deque(maxlen=repeats))\n\n if not message.content:\n return False\n\n guild_cache[author].append(message.content)\n msgs = guild_cache[author]\n if len(msgs) == msgs.maxlen and len(set(msgs)) == 1:\n try:\n await message.delete()\n return True\n except discord.HTTPException:\n pass\n return False\n\n async def check_mention_spam(self, message):\n guild = message.guild\n author = message.author\n\n max_mentions = await self.settings.guild(guild).ban_mention_spam()\n if max_mentions:\n mentions = set(message.mentions)\n if len(mentions) >= max_mentions:\n try:\n await guild.ban(author, reason=_(\"Mention spam (Autoban)\"))\n except discord.HTTPException:\n log.info(\n \"Failed to ban member for mention spam in server {}.\".format(guild.id)\n )\n else:\n try:\n await modlog.create_case(\n self.bot,\n guild,\n message.created_at,\n \"ban\",\n author,\n guild.me,\n _(\"Mention spam (Autoban)\"),\n until=None,\n channel=None,\n )\n except RuntimeError as e:\n print(e)\n return False\n return True\n return False\n\n @commands.Cog.listener()\n async def on_message(self, message):\n author = message.author\n if message.guild is None or self.bot.user == author:\n return\n valid_user = isinstance(author, discord.Member) and not author.bot\n if not valid_user:\n return\n\n # Bots and mods or superior are ignored from the filter\n mod_or_superior = await is_mod_or_superior(self.bot, obj=author)\n if mod_or_superior:\n return\n # As are anyone configured to be\n if await 
self.bot.is_automod_immune(message):\n return\n deleted = await self.check_duplicates(message)\n if not deleted:\n await self.check_mention_spam(message)\n\n @commands.Cog.listener()\n async def on_user_update(self, before: discord.User, after: discord.User):\n if before.name != after.name:\n async with self.settings.user(before).past_names() as name_list:\n while None in name_list: # clean out null entries from a bug\n name_list.remove(None)\n if after.name in name_list:\n # Ensure order is maintained without duplicates occuring\n name_list.remove(after.name)\n name_list.append(after.name)\n while len(name_list) > 20:\n name_list.pop(0)\n\n @commands.Cog.listener()\n async def on_member_update(self, before: discord.Member, after: discord.Member):\n if before.nick != after.nick and after.nick is not None:\n async with self.settings.member(before).past_nicks() as nick_list:\n while None in nick_list: # clean out null entries from a bug\n nick_list.remove(None)\n if after.nick in nick_list:\n nick_list.remove(after.nick)\n nick_list.append(after.nick)\n while len(nick_list) > 20:\n nick_list.pop(0)\n", "path": "redbot/cogs/mod/events.py"}]}
| 1,470 | 255 |
gh_patches_debug_38252
|
rasdani/github-patches
|
git_diff
|
doccano__doccano-1261
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No way to restrict text classification labels to exactly one label to assign
Most classification tasks require exactly one label for each instance. This is also true for most text classification tasks, for example with sentiment classificaiton, and the possible labels negative, neutral, positive, each instance should receive one of the three labels, assigning e.g. both neutral and positive would make not sense.
Yet the text classification task in doccano still does not allow to restrict assignment to a single label, annotators are free to assign as many labels as they want, including all of them!
This limits the use of doccano for text classification tasks rather severely. The option to allow for any number of labels (0 to all of them) would still be good to have for multilabel classification tasks (e.g. assigning topics), but that is a much rarer annotation task in general.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/views/annotation.py`
Content:
```
1 from django.shortcuts import get_object_or_404
2 from rest_framework import generics, status
3 from rest_framework.exceptions import ValidationError
4 from rest_framework.permissions import IsAuthenticated
5 from rest_framework.response import Response
6 from rest_framework.views import APIView
7
8 from ..models import Document, Project
9 from ..permissions import (IsAnnotationApprover, IsInProjectOrAdmin,
10 IsOwnAnnotation, IsProjectAdmin)
11 from ..serializers import ApproverSerializer
12
13
14 class AnnotationList(generics.ListCreateAPIView):
15 pagination_class = None
16 permission_classes = [IsAuthenticated & IsInProjectOrAdmin]
17 swagger_schema = None
18
19 def get_serializer_class(self):
20 project = get_object_or_404(Project, pk=self.kwargs['project_id'])
21 self.serializer_class = project.get_annotation_serializer()
22 return self.serializer_class
23
24 def get_queryset(self):
25 project = get_object_or_404(Project, pk=self.kwargs['project_id'])
26 model = project.get_annotation_class()
27 queryset = model.objects.filter(document=self.kwargs['doc_id'])
28 if not project.collaborative_annotation:
29 queryset = queryset.filter(user=self.request.user)
30 return queryset
31
32 def create(self, request, *args, **kwargs):
33 self.check_single_class_classification(self.kwargs['project_id'], self.kwargs['doc_id'], request.user)
34 request.data['document'] = self.kwargs['doc_id']
35 return super().create(request, args, kwargs)
36
37 def perform_create(self, serializer):
38 serializer.save(document_id=self.kwargs['doc_id'], user=self.request.user)
39
40 def delete(self, request, *args, **kwargs):
41 queryset = self.get_queryset()
42 queryset.all().delete()
43 return Response(status=status.HTTP_204_NO_CONTENT)
44
45 @staticmethod
46 def check_single_class_classification(project_id, doc_id, user):
47 project = get_object_or_404(Project, pk=project_id)
48 if not project.single_class_classification:
49 return
50
51 model = project.get_annotation_class()
52 annotations = model.objects.filter(document_id=doc_id)
53 if not project.collaborative_annotation:
54 annotations = annotations.filter(user=user)
55
56 if annotations.exists():
57 raise ValidationError('requested to create duplicate annotation for single-class-classification project')
58
59
60 class AnnotationDetail(generics.RetrieveUpdateDestroyAPIView):
61 lookup_url_kwarg = 'annotation_id'
62 swagger_schema = None
63
64 def get_permissions(self):
65 project = get_object_or_404(Project, pk=self.kwargs['project_id'])
66 if project.collaborative_annotation:
67 self.permission_classes = [IsAuthenticated & IsInProjectOrAdmin]
68 else:
69 self.permission_classes = [IsAuthenticated & IsInProjectOrAdmin & IsOwnAnnotation]
70 return super().get_permissions()
71
72 def get_serializer_class(self):
73 project = get_object_or_404(Project, pk=self.kwargs['project_id'])
74 self.serializer_class = project.get_annotation_serializer()
75 return self.serializer_class
76
77 def get_queryset(self):
78 project = get_object_or_404(Project, pk=self.kwargs['project_id'])
79 model = project.get_annotation_class()
80 self.queryset = model.objects.all()
81 return self.queryset
82
83
84 class ApproveLabelsAPI(APIView):
85 permission_classes = [IsAuthenticated & (IsAnnotationApprover | IsProjectAdmin)]
86
87 def post(self, request, *args, **kwargs):
88 approved = self.request.data.get('approved', True)
89 document = get_object_or_404(Document, pk=self.kwargs['doc_id'])
90 document.annotations_approved_by = self.request.user if approved else None
91 document.save()
92 return Response(ApproverSerializer(document).data)
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/api/views/annotation.py b/app/api/views/annotation.py
--- a/app/api/views/annotation.py
+++ b/app/api/views/annotation.py
@@ -1,6 +1,5 @@
from django.shortcuts import get_object_or_404
from rest_framework import generics, status
-from rest_framework.exceptions import ValidationError
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.views import APIView
@@ -16,21 +15,24 @@
permission_classes = [IsAuthenticated & IsInProjectOrAdmin]
swagger_schema = None
+ @property
+ def project(self):
+ return get_object_or_404(Project, pk=self.kwargs['project_id'])
+
def get_serializer_class(self):
- project = get_object_or_404(Project, pk=self.kwargs['project_id'])
- self.serializer_class = project.get_annotation_serializer()
+ self.serializer_class = self.project.get_annotation_serializer()
return self.serializer_class
def get_queryset(self):
- project = get_object_or_404(Project, pk=self.kwargs['project_id'])
- model = project.get_annotation_class()
+ model = self.project.get_annotation_class()
queryset = model.objects.filter(document=self.kwargs['doc_id'])
- if not project.collaborative_annotation:
+ if not self.project.collaborative_annotation:
queryset = queryset.filter(user=self.request.user)
return queryset
def create(self, request, *args, **kwargs):
- self.check_single_class_classification(self.kwargs['project_id'], self.kwargs['doc_id'], request.user)
+ if self.project.single_class_classification:
+ self.get_queryset().delete()
request.data['document'] = self.kwargs['doc_id']
return super().create(request, args, kwargs)
@@ -42,20 +44,6 @@
queryset.all().delete()
return Response(status=status.HTTP_204_NO_CONTENT)
- @staticmethod
- def check_single_class_classification(project_id, doc_id, user):
- project = get_object_or_404(Project, pk=project_id)
- if not project.single_class_classification:
- return
-
- model = project.get_annotation_class()
- annotations = model.objects.filter(document_id=doc_id)
- if not project.collaborative_annotation:
- annotations = annotations.filter(user=user)
-
- if annotations.exists():
- raise ValidationError('requested to create duplicate annotation for single-class-classification project')
-
class AnnotationDetail(generics.RetrieveUpdateDestroyAPIView):
lookup_url_kwarg = 'annotation_id'
|
{"golden_diff": "diff --git a/app/api/views/annotation.py b/app/api/views/annotation.py\n--- a/app/api/views/annotation.py\n+++ b/app/api/views/annotation.py\n@@ -1,6 +1,5 @@\n from django.shortcuts import get_object_or_404\n from rest_framework import generics, status\n-from rest_framework.exceptions import ValidationError\n from rest_framework.permissions import IsAuthenticated\n from rest_framework.response import Response\n from rest_framework.views import APIView\n@@ -16,21 +15,24 @@\n permission_classes = [IsAuthenticated & IsInProjectOrAdmin]\n swagger_schema = None\n \n+ @property\n+ def project(self):\n+ return get_object_or_404(Project, pk=self.kwargs['project_id'])\n+\n def get_serializer_class(self):\n- project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n- self.serializer_class = project.get_annotation_serializer()\n+ self.serializer_class = self.project.get_annotation_serializer()\n return self.serializer_class\n \n def get_queryset(self):\n- project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n- model = project.get_annotation_class()\n+ model = self.project.get_annotation_class()\n queryset = model.objects.filter(document=self.kwargs['doc_id'])\n- if not project.collaborative_annotation:\n+ if not self.project.collaborative_annotation:\n queryset = queryset.filter(user=self.request.user)\n return queryset\n \n def create(self, request, *args, **kwargs):\n- self.check_single_class_classification(self.kwargs['project_id'], self.kwargs['doc_id'], request.user)\n+ if self.project.single_class_classification:\n+ self.get_queryset().delete()\n request.data['document'] = self.kwargs['doc_id']\n return super().create(request, args, kwargs)\n \n@@ -42,20 +44,6 @@\n queryset.all().delete()\n return Response(status=status.HTTP_204_NO_CONTENT)\n \n- @staticmethod\n- def check_single_class_classification(project_id, doc_id, user):\n- project = get_object_or_404(Project, pk=project_id)\n- if not project.single_class_classification:\n- return\n-\n- model = project.get_annotation_class()\n- annotations = model.objects.filter(document_id=doc_id)\n- if not project.collaborative_annotation:\n- annotations = annotations.filter(user=user)\n-\n- if annotations.exists():\n- raise ValidationError('requested to create duplicate annotation for single-class-classification project')\n-\n \n class AnnotationDetail(generics.RetrieveUpdateDestroyAPIView):\n lookup_url_kwarg = 'annotation_id'\n", "issue": "No way to restrict text classification labels to exactly one label to assign\nMost classification tasks require exactly one label for each instance. This is also true for most text classification tasks, for example with sentiment classificaiton, and the possible labels negative, neutral, positive, each instance should receive one of the three labels, assigning e.g. both neutral and positive would make not sense.\r\n\r\nYet the text classification task in doccano still does not allow to restrict assignment to a single label, annotators are free to assign as many labels as they want, including all of them!\r\n\r\nThis limits the use of doccano for text classification tasks rather severely. The option to allow for any number of labels (0 to all of them) would still be good to have for multilabel classification tasks (e.g. assigning topics), but that is a much rarer annotation task in general. 
\n", "before_files": [{"content": "from django.shortcuts import get_object_or_404\nfrom rest_framework import generics, status\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom ..models import Document, Project\nfrom ..permissions import (IsAnnotationApprover, IsInProjectOrAdmin,\n IsOwnAnnotation, IsProjectAdmin)\nfrom ..serializers import ApproverSerializer\n\n\nclass AnnotationList(generics.ListCreateAPIView):\n pagination_class = None\n permission_classes = [IsAuthenticated & IsInProjectOrAdmin]\n swagger_schema = None\n\n def get_serializer_class(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n self.serializer_class = project.get_annotation_serializer()\n return self.serializer_class\n\n def get_queryset(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n model = project.get_annotation_class()\n queryset = model.objects.filter(document=self.kwargs['doc_id'])\n if not project.collaborative_annotation:\n queryset = queryset.filter(user=self.request.user)\n return queryset\n\n def create(self, request, *args, **kwargs):\n self.check_single_class_classification(self.kwargs['project_id'], self.kwargs['doc_id'], request.user)\n request.data['document'] = self.kwargs['doc_id']\n return super().create(request, args, kwargs)\n\n def perform_create(self, serializer):\n serializer.save(document_id=self.kwargs['doc_id'], user=self.request.user)\n\n def delete(self, request, *args, **kwargs):\n queryset = self.get_queryset()\n queryset.all().delete()\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n @staticmethod\n def check_single_class_classification(project_id, doc_id, user):\n project = get_object_or_404(Project, pk=project_id)\n if not project.single_class_classification:\n return\n\n model = project.get_annotation_class()\n annotations = model.objects.filter(document_id=doc_id)\n if not project.collaborative_annotation:\n annotations = annotations.filter(user=user)\n\n if annotations.exists():\n raise ValidationError('requested to create duplicate annotation for single-class-classification project')\n\n\nclass AnnotationDetail(generics.RetrieveUpdateDestroyAPIView):\n lookup_url_kwarg = 'annotation_id'\n swagger_schema = None\n\n def get_permissions(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n if project.collaborative_annotation:\n self.permission_classes = [IsAuthenticated & IsInProjectOrAdmin]\n else:\n self.permission_classes = [IsAuthenticated & IsInProjectOrAdmin & IsOwnAnnotation]\n return super().get_permissions()\n\n def get_serializer_class(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n self.serializer_class = project.get_annotation_serializer()\n return self.serializer_class\n\n def get_queryset(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n model = project.get_annotation_class()\n self.queryset = model.objects.all()\n return self.queryset\n\n\nclass ApproveLabelsAPI(APIView):\n permission_classes = [IsAuthenticated & (IsAnnotationApprover | IsProjectAdmin)]\n\n def post(self, request, *args, **kwargs):\n approved = self.request.data.get('approved', True)\n document = get_object_or_404(Document, pk=self.kwargs['doc_id'])\n document.annotations_approved_by = self.request.user if approved else None\n document.save()\n return Response(ApproverSerializer(document).data)\n", 
"path": "app/api/views/annotation.py"}], "after_files": [{"content": "from django.shortcuts import get_object_or_404\nfrom rest_framework import generics, status\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom ..models import Document, Project\nfrom ..permissions import (IsAnnotationApprover, IsInProjectOrAdmin,\n IsOwnAnnotation, IsProjectAdmin)\nfrom ..serializers import ApproverSerializer\n\n\nclass AnnotationList(generics.ListCreateAPIView):\n pagination_class = None\n permission_classes = [IsAuthenticated & IsInProjectOrAdmin]\n swagger_schema = None\n\n @property\n def project(self):\n return get_object_or_404(Project, pk=self.kwargs['project_id'])\n\n def get_serializer_class(self):\n self.serializer_class = self.project.get_annotation_serializer()\n return self.serializer_class\n\n def get_queryset(self):\n model = self.project.get_annotation_class()\n queryset = model.objects.filter(document=self.kwargs['doc_id'])\n if not self.project.collaborative_annotation:\n queryset = queryset.filter(user=self.request.user)\n return queryset\n\n def create(self, request, *args, **kwargs):\n if self.project.single_class_classification:\n self.get_queryset().delete()\n request.data['document'] = self.kwargs['doc_id']\n return super().create(request, args, kwargs)\n\n def perform_create(self, serializer):\n serializer.save(document_id=self.kwargs['doc_id'], user=self.request.user)\n\n def delete(self, request, *args, **kwargs):\n queryset = self.get_queryset()\n queryset.all().delete()\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n\nclass AnnotationDetail(generics.RetrieveUpdateDestroyAPIView):\n lookup_url_kwarg = 'annotation_id'\n swagger_schema = None\n\n def get_permissions(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n if project.collaborative_annotation:\n self.permission_classes = [IsAuthenticated & IsInProjectOrAdmin]\n else:\n self.permission_classes = [IsAuthenticated & IsInProjectOrAdmin & IsOwnAnnotation]\n return super().get_permissions()\n\n def get_serializer_class(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n self.serializer_class = project.get_annotation_serializer()\n return self.serializer_class\n\n def get_queryset(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n model = project.get_annotation_class()\n self.queryset = model.objects.all()\n return self.queryset\n\n\nclass ApproveLabelsAPI(APIView):\n permission_classes = [IsAuthenticated & (IsAnnotationApprover | IsProjectAdmin)]\n\n def post(self, request, *args, **kwargs):\n approved = self.request.data.get('approved', True)\n document = get_object_or_404(Document, pk=self.kwargs['doc_id'])\n document.annotations_approved_by = self.request.user if approved else None\n document.save()\n return Response(ApproverSerializer(document).data)\n", "path": "app/api/views/annotation.py"}]}
| 1,392 | 561 |
gh_patches_debug_380
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-1931
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SSO base template should extend the base template
## What kind of an issue is this?
- [x] Bug report
## What is the expected behaviour?
The template for the SSO app should extend the base template so we don't have to maintain multiple base templates.
## What is the current behaviour?
It's a custom template, which looks copy/pasted from the base template.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/sso/views.py`
Content:
```
1 # -*- encoding: utf-8 -*-
2
3 import logging
4
5 from django.contrib.auth.decorators import login_required
6 from django.shortcuts import render
7 from oauth2_provider.views.base import AuthorizationView as DefaultAuthorizationView # flake8: noqa
8 from oauth2_provider.views.base import RevokeTokenView, TokenView
9
10 _log = logging.getLogger('SSO')
11
12
13 @login_required
14 def index(request):
15 """
16 This is the main SSO view
17 """
18
19 context = {}
20
21 return render(request, 'sso/index.html', context)
22
23
24 class AuthorizationView(DefaultAuthorizationView):
25 template_name = 'sso/authorize.html'
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/apps/sso/views.py b/apps/sso/views.py
--- a/apps/sso/views.py
+++ b/apps/sso/views.py
@@ -18,7 +18,7 @@
context = {}
- return render(request, 'sso/index.html', context)
+ return render(request, 'sso/authorize.html', context)
class AuthorizationView(DefaultAuthorizationView):
|
{"golden_diff": "diff --git a/apps/sso/views.py b/apps/sso/views.py\n--- a/apps/sso/views.py\n+++ b/apps/sso/views.py\n@@ -18,7 +18,7 @@\n \n context = {}\n \n- return render(request, 'sso/index.html', context)\n+ return render(request, 'sso/authorize.html', context)\n \n \n class AuthorizationView(DefaultAuthorizationView):\n", "issue": "SSO base template should extend the base template\n## What kind of an issue is this?\r\n\r\n- [x] Bug report\r\n\r\n\r\n## What is the expected behaviour?\r\n\r\nThe template for the SSO app should extend the base template so we don't have to maintain multiple base templates.\r\n\r\n\r\n## What is the current behaviour?\r\n\r\nIt's a custom template, which looks copy/pasted from the base template.\r\n\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\n\nimport logging\n\nfrom django.contrib.auth.decorators import login_required\nfrom django.shortcuts import render\nfrom oauth2_provider.views.base import AuthorizationView as DefaultAuthorizationView # flake8: noqa\nfrom oauth2_provider.views.base import RevokeTokenView, TokenView\n\n_log = logging.getLogger('SSO')\n\n\n@login_required\ndef index(request):\n \"\"\"\n This is the main SSO view\n \"\"\"\n\n context = {}\n\n return render(request, 'sso/index.html', context)\n\n\nclass AuthorizationView(DefaultAuthorizationView):\n template_name = 'sso/authorize.html'\n", "path": "apps/sso/views.py"}], "after_files": [{"content": "# -*- encoding: utf-8 -*-\n\nimport logging\n\nfrom django.contrib.auth.decorators import login_required\nfrom django.shortcuts import render\nfrom oauth2_provider.views.base import AuthorizationView as DefaultAuthorizationView # flake8: noqa\nfrom oauth2_provider.views.base import RevokeTokenView, TokenView\n\n_log = logging.getLogger('SSO')\n\n\n@login_required\ndef index(request):\n \"\"\"\n This is the main SSO view\n \"\"\"\n\n context = {}\n\n return render(request, 'sso/authorize.html', context)\n\n\nclass AuthorizationView(DefaultAuthorizationView):\n template_name = 'sso/authorize.html'\n", "path": "apps/sso/views.py"}]}
| 523 | 87 |
gh_patches_debug_9208
|
rasdani/github-patches
|
git_diff
|
plotly__dash-1643
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing classifiers for Python 3.8/3.9
Python 3.8 and 3.9 are missing from the classifiers in `setup.py`:
https://github.com/plotly/dash/blob/358c5089c929b2e99996f9d4ee6ec634f65437fe/setup.py#L55-L63
But there is no restriction to 3.7 or below in `python_requires`:
https://github.com/plotly/dash/blob/358c5089c929b2e99996f9d4ee6ec634f65437fe/setup.py#L29
Anecdotally, I've been using Dash on Python 3.9 with no issues. Reporting as an issue as I can't see any mention of Python 3.8 and 3.9 compatibility so far!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import io
2 from setuptools import setup, find_packages
3
4 main_ns = {}
5 exec(open("dash/version.py").read(), main_ns) # pylint: disable=exec-used
6
7
8 def read_req_file(req_type):
9 with open("requires-{}.txt".format(req_type)) as fp:
10 requires = (line.strip() for line in fp)
11 return [req for req in requires if req and not req.startswith("#")]
12
13
14 setup(
15 name="dash",
16 version=main_ns["__version__"],
17 author="Chris Parmer",
18 author_email="[email protected]",
19 packages=find_packages(exclude=["tests*"]),
20 include_package_data=True,
21 license="MIT",
22 description=(
23 "A Python framework for building reactive web-apps. "
24 "Developed by Plotly."
25 ),
26 long_description=io.open("README.md", encoding="utf-8").read(),
27 long_description_content_type="text/markdown",
28 install_requires=read_req_file("install"),
29 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*",
30 extras_require={
31 "dev": read_req_file("dev"),
32 "testing": read_req_file("testing"),
33 },
34 entry_points={
35 "console_scripts": [
36 "dash-generate-components = "
37 "dash.development.component_generator:cli",
38 "renderer = dash.development.build_process:renderer",
39 ],
40 "pytest11": ["dash = dash.testing.plugin"],
41 },
42 url="https://plotly.com/dash",
43 classifiers=[
44 "Development Status :: 5 - Production/Stable",
45 "Environment :: Web Environment",
46 "Framework :: Dash",
47 "Framework :: Flask",
48 "Intended Audience :: Developers",
49 "Intended Audience :: Education",
50 "Intended Audience :: Financial and Insurance Industry",
51 "Intended Audience :: Healthcare Industry",
52 "Intended Audience :: Manufacturing",
53 "Intended Audience :: Science/Research",
54 "License :: OSI Approved :: MIT License",
55 "Programming Language :: Python",
56 "Programming Language :: Python :: 2",
57 "Programming Language :: Python :: 2.7",
58 "Programming Language :: Python :: 3",
59 "Programming Language :: Python :: 3.3",
60 "Programming Language :: Python :: 3.4",
61 "Programming Language :: Python :: 3.5",
62 "Programming Language :: Python :: 3.6",
63 "Programming Language :: Python :: 3.7",
64 "Topic :: Database :: Front-Ends",
65 "Topic :: Office/Business :: Financial :: Spreadsheet",
66 "Topic :: Scientific/Engineering :: Visualization",
67 "Topic :: Software Development :: Libraries :: Application Frameworks",
68 "Topic :: Software Development :: Widget Sets",
69 ],
70 )
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -61,6 +61,8 @@
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
"Topic :: Database :: Front-Ends",
"Topic :: Office/Business :: Financial :: Spreadsheet",
"Topic :: Scientific/Engineering :: Visualization",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -61,6 +61,8 @@\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n+ \"Programming Language :: Python :: 3.8\",\n+ \"Programming Language :: Python :: 3.9\",\n \"Topic :: Database :: Front-Ends\",\n \"Topic :: Office/Business :: Financial :: Spreadsheet\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n", "issue": "Missing classifiers for Python 3.8/3.9\nPython 3.8 and 3.9 are missing from the classifiers in `setup.py`:\r\n\r\nhttps://github.com/plotly/dash/blob/358c5089c929b2e99996f9d4ee6ec634f65437fe/setup.py#L55-L63\r\n\r\nBut there is no restriction to 3.7 or below in `python_requires`:\r\nhttps://github.com/plotly/dash/blob/358c5089c929b2e99996f9d4ee6ec634f65437fe/setup.py#L29\r\n\r\nAnecdotally, I've been using Dash on Python 3.9 with no issues. Reporting as an issue as I can't see any mention of Python 3.8 and 3.9 compatibility so far!\n", "before_files": [{"content": "import io\nfrom setuptools import setup, find_packages\n\nmain_ns = {}\nexec(open(\"dash/version.py\").read(), main_ns) # pylint: disable=exec-used\n\n\ndef read_req_file(req_type):\n with open(\"requires-{}.txt\".format(req_type)) as fp:\n requires = (line.strip() for line in fp)\n return [req for req in requires if req and not req.startswith(\"#\")]\n\n\nsetup(\n name=\"dash\",\n version=main_ns[\"__version__\"],\n author=\"Chris Parmer\",\n author_email=\"[email protected]\",\n packages=find_packages(exclude=[\"tests*\"]),\n include_package_data=True,\n license=\"MIT\",\n description=(\n \"A Python framework for building reactive web-apps. \"\n \"Developed by Plotly.\"\n ),\n long_description=io.open(\"README.md\", encoding=\"utf-8\").read(),\n long_description_content_type=\"text/markdown\",\n install_requires=read_req_file(\"install\"),\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*\",\n extras_require={\n \"dev\": read_req_file(\"dev\"),\n \"testing\": read_req_file(\"testing\"),\n },\n entry_points={\n \"console_scripts\": [\n \"dash-generate-components = \"\n \"dash.development.component_generator:cli\",\n \"renderer = dash.development.build_process:renderer\",\n ],\n \"pytest11\": [\"dash = dash.testing.plugin\"],\n },\n url=\"https://plotly.com/dash\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Dash\",\n \"Framework :: Flask\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Financial and Insurance Industry\",\n \"Intended Audience :: Healthcare Industry\",\n \"Intended Audience :: Manufacturing\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Database :: Front-Ends\",\n \"Topic :: Office/Business :: Financial :: Spreadsheet\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n \"Topic :: Software Development :: Libraries :: Application Frameworks\",\n \"Topic :: Software Development :: Widget Sets\",\n ],\n)\n", "path": "setup.py"}], 
"after_files": [{"content": "import io\nfrom setuptools import setup, find_packages\n\nmain_ns = {}\nexec(open(\"dash/version.py\").read(), main_ns) # pylint: disable=exec-used\n\n\ndef read_req_file(req_type):\n with open(\"requires-{}.txt\".format(req_type)) as fp:\n requires = (line.strip() for line in fp)\n return [req for req in requires if req and not req.startswith(\"#\")]\n\n\nsetup(\n name=\"dash\",\n version=main_ns[\"__version__\"],\n author=\"Chris Parmer\",\n author_email=\"[email protected]\",\n packages=find_packages(exclude=[\"tests*\"]),\n include_package_data=True,\n license=\"MIT\",\n description=(\n \"A Python framework for building reactive web-apps. \"\n \"Developed by Plotly.\"\n ),\n long_description=io.open(\"README.md\", encoding=\"utf-8\").read(),\n long_description_content_type=\"text/markdown\",\n install_requires=read_req_file(\"install\"),\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*\",\n extras_require={\n \"dev\": read_req_file(\"dev\"),\n \"testing\": read_req_file(\"testing\"),\n },\n entry_points={\n \"console_scripts\": [\n \"dash-generate-components = \"\n \"dash.development.component_generator:cli\",\n \"renderer = dash.development.build_process:renderer\",\n ],\n \"pytest11\": [\"dash = dash.testing.plugin\"],\n },\n url=\"https://plotly.com/dash\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Dash\",\n \"Framework :: Flask\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Financial and Insurance Industry\",\n \"Intended Audience :: Healthcare Industry\",\n \"Intended Audience :: Manufacturing\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Database :: Front-Ends\",\n \"Topic :: Office/Business :: Financial :: Spreadsheet\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n \"Topic :: Software Development :: Libraries :: Application Frameworks\",\n \"Topic :: Software Development :: Widget Sets\",\n ],\n)\n", "path": "setup.py"}]}
| 1,186 | 127 |
gh_patches_debug_10156
|
rasdani/github-patches
|
git_diff
|
jazzband__pip-tools-1075
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove "(line X)" annotations from output
#### What's the problem this feature will solve?
I love the new source annotations in 4.5.0 from #1058. But the line number annotations cause unnecessary diff churn. For example on a project I'm working on, I added a couple requirements that are alphabetically early, and it means *every* requirement afterwards has a new line number:
<img width="395" alt="Screenshot 2020-02-25 at 15 22 19" src="https://user-images.githubusercontent.com/857609/75261303-b6d72580-57e2-11ea-99ef-dcef368bb755.png">
Adding a single requirement with no dependencies can become a 100 line diff 😢
Equally sorting one's `requirements.in` would make diff noise just from changing the line numbers.
#### Describe the solution you'd like
Remove the line numbers from the output.
#### Alternative Solutions
A flag? I prefer sensible defaults though.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `piptools/writer.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import os
4 from itertools import chain
5
6 import six
7
8 from .click import unstyle
9 from .logging import log
10 from .utils import (
11 UNSAFE_PACKAGES,
12 comment,
13 dedup,
14 format_requirement,
15 get_compile_command,
16 key_from_ireq,
17 )
18
19 MESSAGE_UNHASHED_PACKAGE = comment(
20 "# WARNING: pip install will require the following package to be hashed."
21 "\n# Consider using a hashable URL like "
22 "https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip"
23 )
24
25 MESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(
26 "# WARNING: The following packages were not pinned, but pip requires them to be"
27 "\n# pinned when the requirements file includes hashes. "
28 "Consider using the --allow-unsafe flag."
29 )
30
31 MESSAGE_UNSAFE_PACKAGES = comment(
32 "# The following packages are considered to be unsafe in a requirements file:"
33 )
34
35 MESSAGE_UNINSTALLABLE = (
36 "The generated requirements file may be rejected by pip install. "
37 "See # WARNING lines for details."
38 )
39
40
41 def _comes_from_as_string(ireq):
42 if isinstance(ireq.comes_from, six.string_types):
43 return ireq.comes_from
44 return key_from_ireq(ireq.comes_from)
45
46
47 class OutputWriter(object):
48 def __init__(
49 self,
50 src_files,
51 dst_file,
52 click_ctx,
53 dry_run,
54 emit_header,
55 emit_index,
56 emit_trusted_host,
57 annotate,
58 generate_hashes,
59 default_index_url,
60 index_urls,
61 trusted_hosts,
62 format_control,
63 allow_unsafe,
64 find_links,
65 emit_find_links,
66 ):
67 self.src_files = src_files
68 self.dst_file = dst_file
69 self.click_ctx = click_ctx
70 self.dry_run = dry_run
71 self.emit_header = emit_header
72 self.emit_index = emit_index
73 self.emit_trusted_host = emit_trusted_host
74 self.annotate = annotate
75 self.generate_hashes = generate_hashes
76 self.default_index_url = default_index_url
77 self.index_urls = index_urls
78 self.trusted_hosts = trusted_hosts
79 self.format_control = format_control
80 self.allow_unsafe = allow_unsafe
81 self.find_links = find_links
82 self.emit_find_links = emit_find_links
83
84 def _sort_key(self, ireq):
85 return (not ireq.editable, str(ireq.req).lower())
86
87 def write_header(self):
88 if self.emit_header:
89 yield comment("#")
90 yield comment("# This file is autogenerated by pip-compile")
91 yield comment("# To update, run:")
92 yield comment("#")
93 compile_command = os.environ.get(
94 "CUSTOM_COMPILE_COMMAND"
95 ) or get_compile_command(self.click_ctx)
96 yield comment("# {}".format(compile_command))
97 yield comment("#")
98
99 def write_index_options(self):
100 if self.emit_index:
101 for index, index_url in enumerate(dedup(self.index_urls)):
102 if index_url.rstrip("/") == self.default_index_url:
103 continue
104 flag = "--index-url" if index == 0 else "--extra-index-url"
105 yield "{} {}".format(flag, index_url)
106
107 def write_trusted_hosts(self):
108 if self.emit_trusted_host:
109 for trusted_host in dedup(self.trusted_hosts):
110 yield "--trusted-host {}".format(trusted_host)
111
112 def write_format_controls(self):
113 for nb in dedup(self.format_control.no_binary):
114 yield "--no-binary {}".format(nb)
115 for ob in dedup(self.format_control.only_binary):
116 yield "--only-binary {}".format(ob)
117
118 def write_find_links(self):
119 if self.emit_find_links:
120 for find_link in dedup(self.find_links):
121 yield "--find-links {}".format(find_link)
122
123 def write_flags(self):
124 emitted = False
125 for line in chain(
126 self.write_index_options(),
127 self.write_find_links(),
128 self.write_trusted_hosts(),
129 self.write_format_controls(),
130 ):
131 emitted = True
132 yield line
133 if emitted:
134 yield ""
135
136 def _iter_lines(self, results, unsafe_requirements=None, markers=None, hashes=None):
137 # default values
138 unsafe_requirements = unsafe_requirements or []
139 markers = markers or {}
140 hashes = hashes or {}
141
142 # Check for unhashed or unpinned packages if at least one package does have
143 # hashes, which will trigger pip install's --require-hashes mode.
144 warn_uninstallable = False
145 has_hashes = hashes and any(hash for hash in hashes.values())
146
147 yielded = False
148
149 for line in self.write_header():
150 yield line
151 yielded = True
152 for line in self.write_flags():
153 yield line
154 yielded = True
155
156 unsafe_requirements = (
157 {r for r in results if r.name in UNSAFE_PACKAGES}
158 if not unsafe_requirements
159 else unsafe_requirements
160 )
161 packages = {r for r in results if r.name not in UNSAFE_PACKAGES}
162
163 if packages:
164 packages = sorted(packages, key=self._sort_key)
165 for ireq in packages:
166 if has_hashes and not hashes.get(ireq):
167 yield MESSAGE_UNHASHED_PACKAGE
168 warn_uninstallable = True
169 line = self._format_requirement(
170 ireq, markers.get(key_from_ireq(ireq)), hashes=hashes
171 )
172 yield line
173 yielded = True
174
175 if unsafe_requirements:
176 unsafe_requirements = sorted(unsafe_requirements, key=self._sort_key)
177 yield ""
178 yielded = True
179 if has_hashes and not self.allow_unsafe:
180 yield MESSAGE_UNSAFE_PACKAGES_UNPINNED
181 warn_uninstallable = True
182 else:
183 yield MESSAGE_UNSAFE_PACKAGES
184
185 for ireq in unsafe_requirements:
186 ireq_key = key_from_ireq(ireq)
187 if not self.allow_unsafe:
188 yield comment("# {}".format(ireq_key))
189 else:
190 line = self._format_requirement(
191 ireq, marker=markers.get(ireq_key), hashes=hashes
192 )
193 yield line
194
195 # Yield even when there's no real content, so that blank files are written
196 if not yielded:
197 yield ""
198
199 if warn_uninstallable:
200 log.warning(MESSAGE_UNINSTALLABLE)
201
202 def write(self, results, unsafe_requirements, markers, hashes):
203
204 for line in self._iter_lines(results, unsafe_requirements, markers, hashes):
205 log.info(line)
206 if not self.dry_run:
207 self.dst_file.write(unstyle(line).encode("utf-8"))
208 self.dst_file.write(os.linesep.encode("utf-8"))
209
210 def _format_requirement(self, ireq, marker=None, hashes=None):
211 ireq_hashes = (hashes if hashes is not None else {}).get(ireq)
212
213 line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)
214
215 if not self.annotate:
216 return line
217
218 # Annotate what packages or reqs-ins this package is required by
219 required_by = set()
220 if hasattr(ireq, "_source_ireqs"):
221 required_by |= {
222 _comes_from_as_string(src_ireq)
223 for src_ireq in ireq._source_ireqs
224 if src_ireq.comes_from
225 }
226 elif ireq.comes_from:
227 required_by.add(_comes_from_as_string(ireq))
228 if required_by:
229 annotation = ", ".join(sorted(required_by))
230 line = "{:24}{}{}".format(
231 line,
232 " \\\n " if ireq_hashes else " ",
233 comment("# via " + annotation),
234 )
235 return line
236
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/piptools/writer.py b/piptools/writer.py
--- a/piptools/writer.py
+++ b/piptools/writer.py
@@ -1,6 +1,7 @@
from __future__ import unicode_literals
import os
+import re
from itertools import chain
import six
@@ -38,9 +39,12 @@
)
+strip_comes_from_line_re = re.compile(r" \(line \d+\)$")
+
+
def _comes_from_as_string(ireq):
if isinstance(ireq.comes_from, six.string_types):
- return ireq.comes_from
+ return strip_comes_from_line_re.sub("", ireq.comes_from)
return key_from_ireq(ireq.comes_from)
|
{"golden_diff": "diff --git a/piptools/writer.py b/piptools/writer.py\n--- a/piptools/writer.py\n+++ b/piptools/writer.py\n@@ -1,6 +1,7 @@\n from __future__ import unicode_literals\n \n import os\n+import re\n from itertools import chain\n \n import six\n@@ -38,9 +39,12 @@\n )\n \n \n+strip_comes_from_line_re = re.compile(r\" \\(line \\d+\\)$\")\n+\n+\n def _comes_from_as_string(ireq):\n if isinstance(ireq.comes_from, six.string_types):\n- return ireq.comes_from\n+ return strip_comes_from_line_re.sub(\"\", ireq.comes_from)\n return key_from_ireq(ireq.comes_from)\n", "issue": "Remove \"(line X)\" annotations from output\n#### What's the problem this feature will solve?\r\n\r\nI love the new source annotations in 4.5.0 from #1058. But the line number annotations cause unnecessary diff churn. For example on a project I'm working on, I added a couple requirements that are alphabetically early, and it means *every* requirement afterwards has a new line number:\r\n\r\n<img width=\"395\" alt=\"Screenshot 2020-02-25 at 15 22 19\" src=\"https://user-images.githubusercontent.com/857609/75261303-b6d72580-57e2-11ea-99ef-dcef368bb755.png\">\r\n\r\nAdding a single requirement with no dependencies can become a 100 line diff \ud83d\ude22\r\n\r\nEqually sorting one's `requirements.in` would make diff noise just from changing the line numbers.\r\n \r\n#### Describe the solution you'd like\r\n\r\nRemove the line numbers from the output.\r\n\r\n#### Alternative Solutions\r\n\r\nA flag? I prefer sensible defaults though.\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport os\nfrom itertools import chain\n\nimport six\n\nfrom .click import unstyle\nfrom .logging import log\nfrom .utils import (\n UNSAFE_PACKAGES,\n comment,\n dedup,\n format_requirement,\n get_compile_command,\n key_from_ireq,\n)\n\nMESSAGE_UNHASHED_PACKAGE = comment(\n \"# WARNING: pip install will require the following package to be hashed.\"\n \"\\n# Consider using a hashable URL like \"\n \"https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip\"\n)\n\nMESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(\n \"# WARNING: The following packages were not pinned, but pip requires them to be\"\n \"\\n# pinned when the requirements file includes hashes. \"\n \"Consider using the --allow-unsafe flag.\"\n)\n\nMESSAGE_UNSAFE_PACKAGES = comment(\n \"# The following packages are considered to be unsafe in a requirements file:\"\n)\n\nMESSAGE_UNINSTALLABLE = (\n \"The generated requirements file may be rejected by pip install. 
\"\n \"See # WARNING lines for details.\"\n)\n\n\ndef _comes_from_as_string(ireq):\n if isinstance(ireq.comes_from, six.string_types):\n return ireq.comes_from\n return key_from_ireq(ireq.comes_from)\n\n\nclass OutputWriter(object):\n def __init__(\n self,\n src_files,\n dst_file,\n click_ctx,\n dry_run,\n emit_header,\n emit_index,\n emit_trusted_host,\n annotate,\n generate_hashes,\n default_index_url,\n index_urls,\n trusted_hosts,\n format_control,\n allow_unsafe,\n find_links,\n emit_find_links,\n ):\n self.src_files = src_files\n self.dst_file = dst_file\n self.click_ctx = click_ctx\n self.dry_run = dry_run\n self.emit_header = emit_header\n self.emit_index = emit_index\n self.emit_trusted_host = emit_trusted_host\n self.annotate = annotate\n self.generate_hashes = generate_hashes\n self.default_index_url = default_index_url\n self.index_urls = index_urls\n self.trusted_hosts = trusted_hosts\n self.format_control = format_control\n self.allow_unsafe = allow_unsafe\n self.find_links = find_links\n self.emit_find_links = emit_find_links\n\n def _sort_key(self, ireq):\n return (not ireq.editable, str(ireq.req).lower())\n\n def write_header(self):\n if self.emit_header:\n yield comment(\"#\")\n yield comment(\"# This file is autogenerated by pip-compile\")\n yield comment(\"# To update, run:\")\n yield comment(\"#\")\n compile_command = os.environ.get(\n \"CUSTOM_COMPILE_COMMAND\"\n ) or get_compile_command(self.click_ctx)\n yield comment(\"# {}\".format(compile_command))\n yield comment(\"#\")\n\n def write_index_options(self):\n if self.emit_index:\n for index, index_url in enumerate(dedup(self.index_urls)):\n if index_url.rstrip(\"/\") == self.default_index_url:\n continue\n flag = \"--index-url\" if index == 0 else \"--extra-index-url\"\n yield \"{} {}\".format(flag, index_url)\n\n def write_trusted_hosts(self):\n if self.emit_trusted_host:\n for trusted_host in dedup(self.trusted_hosts):\n yield \"--trusted-host {}\".format(trusted_host)\n\n def write_format_controls(self):\n for nb in dedup(self.format_control.no_binary):\n yield \"--no-binary {}\".format(nb)\n for ob in dedup(self.format_control.only_binary):\n yield \"--only-binary {}\".format(ob)\n\n def write_find_links(self):\n if self.emit_find_links:\n for find_link in dedup(self.find_links):\n yield \"--find-links {}\".format(find_link)\n\n def write_flags(self):\n emitted = False\n for line in chain(\n self.write_index_options(),\n self.write_find_links(),\n self.write_trusted_hosts(),\n self.write_format_controls(),\n ):\n emitted = True\n yield line\n if emitted:\n yield \"\"\n\n def _iter_lines(self, results, unsafe_requirements=None, markers=None, hashes=None):\n # default values\n unsafe_requirements = unsafe_requirements or []\n markers = markers or {}\n hashes = hashes or {}\n\n # Check for unhashed or unpinned packages if at least one package does have\n # hashes, which will trigger pip install's --require-hashes mode.\n warn_uninstallable = False\n has_hashes = hashes and any(hash for hash in hashes.values())\n\n yielded = False\n\n for line in self.write_header():\n yield line\n yielded = True\n for line in self.write_flags():\n yield line\n yielded = True\n\n unsafe_requirements = (\n {r for r in results if r.name in UNSAFE_PACKAGES}\n if not unsafe_requirements\n else unsafe_requirements\n )\n packages = {r for r in results if r.name not in UNSAFE_PACKAGES}\n\n if packages:\n packages = sorted(packages, key=self._sort_key)\n for ireq in packages:\n if has_hashes and not hashes.get(ireq):\n yield 
MESSAGE_UNHASHED_PACKAGE\n warn_uninstallable = True\n line = self._format_requirement(\n ireq, markers.get(key_from_ireq(ireq)), hashes=hashes\n )\n yield line\n yielded = True\n\n if unsafe_requirements:\n unsafe_requirements = sorted(unsafe_requirements, key=self._sort_key)\n yield \"\"\n yielded = True\n if has_hashes and not self.allow_unsafe:\n yield MESSAGE_UNSAFE_PACKAGES_UNPINNED\n warn_uninstallable = True\n else:\n yield MESSAGE_UNSAFE_PACKAGES\n\n for ireq in unsafe_requirements:\n ireq_key = key_from_ireq(ireq)\n if not self.allow_unsafe:\n yield comment(\"# {}\".format(ireq_key))\n else:\n line = self._format_requirement(\n ireq, marker=markers.get(ireq_key), hashes=hashes\n )\n yield line\n\n # Yield even when there's no real content, so that blank files are written\n if not yielded:\n yield \"\"\n\n if warn_uninstallable:\n log.warning(MESSAGE_UNINSTALLABLE)\n\n def write(self, results, unsafe_requirements, markers, hashes):\n\n for line in self._iter_lines(results, unsafe_requirements, markers, hashes):\n log.info(line)\n if not self.dry_run:\n self.dst_file.write(unstyle(line).encode(\"utf-8\"))\n self.dst_file.write(os.linesep.encode(\"utf-8\"))\n\n def _format_requirement(self, ireq, marker=None, hashes=None):\n ireq_hashes = (hashes if hashes is not None else {}).get(ireq)\n\n line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)\n\n if not self.annotate:\n return line\n\n # Annotate what packages or reqs-ins this package is required by\n required_by = set()\n if hasattr(ireq, \"_source_ireqs\"):\n required_by |= {\n _comes_from_as_string(src_ireq)\n for src_ireq in ireq._source_ireqs\n if src_ireq.comes_from\n }\n elif ireq.comes_from:\n required_by.add(_comes_from_as_string(ireq))\n if required_by:\n annotation = \", \".join(sorted(required_by))\n line = \"{:24}{}{}\".format(\n line,\n \" \\\\\\n \" if ireq_hashes else \" \",\n comment(\"# via \" + annotation),\n )\n return line\n", "path": "piptools/writer.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport os\nimport re\nfrom itertools import chain\n\nimport six\n\nfrom .click import unstyle\nfrom .logging import log\nfrom .utils import (\n UNSAFE_PACKAGES,\n comment,\n dedup,\n format_requirement,\n get_compile_command,\n key_from_ireq,\n)\n\nMESSAGE_UNHASHED_PACKAGE = comment(\n \"# WARNING: pip install will require the following package to be hashed.\"\n \"\\n# Consider using a hashable URL like \"\n \"https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip\"\n)\n\nMESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(\n \"# WARNING: The following packages were not pinned, but pip requires them to be\"\n \"\\n# pinned when the requirements file includes hashes. \"\n \"Consider using the --allow-unsafe flag.\"\n)\n\nMESSAGE_UNSAFE_PACKAGES = comment(\n \"# The following packages are considered to be unsafe in a requirements file:\"\n)\n\nMESSAGE_UNINSTALLABLE = (\n \"The generated requirements file may be rejected by pip install. 
\"\n \"See # WARNING lines for details.\"\n)\n\n\nstrip_comes_from_line_re = re.compile(r\" \\(line \\d+\\)$\")\n\n\ndef _comes_from_as_string(ireq):\n if isinstance(ireq.comes_from, six.string_types):\n return strip_comes_from_line_re.sub(\"\", ireq.comes_from)\n return key_from_ireq(ireq.comes_from)\n\n\nclass OutputWriter(object):\n def __init__(\n self,\n src_files,\n dst_file,\n click_ctx,\n dry_run,\n emit_header,\n emit_index,\n emit_trusted_host,\n annotate,\n generate_hashes,\n default_index_url,\n index_urls,\n trusted_hosts,\n format_control,\n allow_unsafe,\n find_links,\n emit_find_links,\n ):\n self.src_files = src_files\n self.dst_file = dst_file\n self.click_ctx = click_ctx\n self.dry_run = dry_run\n self.emit_header = emit_header\n self.emit_index = emit_index\n self.emit_trusted_host = emit_trusted_host\n self.annotate = annotate\n self.generate_hashes = generate_hashes\n self.default_index_url = default_index_url\n self.index_urls = index_urls\n self.trusted_hosts = trusted_hosts\n self.format_control = format_control\n self.allow_unsafe = allow_unsafe\n self.find_links = find_links\n self.emit_find_links = emit_find_links\n\n def _sort_key(self, ireq):\n return (not ireq.editable, str(ireq.req).lower())\n\n def write_header(self):\n if self.emit_header:\n yield comment(\"#\")\n yield comment(\"# This file is autogenerated by pip-compile\")\n yield comment(\"# To update, run:\")\n yield comment(\"#\")\n compile_command = os.environ.get(\n \"CUSTOM_COMPILE_COMMAND\"\n ) or get_compile_command(self.click_ctx)\n yield comment(\"# {}\".format(compile_command))\n yield comment(\"#\")\n\n def write_index_options(self):\n if self.emit_index:\n for index, index_url in enumerate(dedup(self.index_urls)):\n if index_url.rstrip(\"/\") == self.default_index_url:\n continue\n flag = \"--index-url\" if index == 0 else \"--extra-index-url\"\n yield \"{} {}\".format(flag, index_url)\n\n def write_trusted_hosts(self):\n if self.emit_trusted_host:\n for trusted_host in dedup(self.trusted_hosts):\n yield \"--trusted-host {}\".format(trusted_host)\n\n def write_format_controls(self):\n for nb in dedup(self.format_control.no_binary):\n yield \"--no-binary {}\".format(nb)\n for ob in dedup(self.format_control.only_binary):\n yield \"--only-binary {}\".format(ob)\n\n def write_find_links(self):\n if self.emit_find_links:\n for find_link in dedup(self.find_links):\n yield \"--find-links {}\".format(find_link)\n\n def write_flags(self):\n emitted = False\n for line in chain(\n self.write_index_options(),\n self.write_find_links(),\n self.write_trusted_hosts(),\n self.write_format_controls(),\n ):\n emitted = True\n yield line\n if emitted:\n yield \"\"\n\n def _iter_lines(self, results, unsafe_requirements=None, markers=None, hashes=None):\n # default values\n unsafe_requirements = unsafe_requirements or []\n markers = markers or {}\n hashes = hashes or {}\n\n # Check for unhashed or unpinned packages if at least one package does have\n # hashes, which will trigger pip install's --require-hashes mode.\n warn_uninstallable = False\n has_hashes = hashes and any(hash for hash in hashes.values())\n\n yielded = False\n\n for line in self.write_header():\n yield line\n yielded = True\n for line in self.write_flags():\n yield line\n yielded = True\n\n unsafe_requirements = (\n {r for r in results if r.name in UNSAFE_PACKAGES}\n if not unsafe_requirements\n else unsafe_requirements\n )\n packages = {r for r in results if r.name not in UNSAFE_PACKAGES}\n\n if packages:\n packages = sorted(packages, 
key=self._sort_key)\n for ireq in packages:\n if has_hashes and not hashes.get(ireq):\n yield MESSAGE_UNHASHED_PACKAGE\n warn_uninstallable = True\n line = self._format_requirement(\n ireq, markers.get(key_from_ireq(ireq)), hashes=hashes\n )\n yield line\n yielded = True\n\n if unsafe_requirements:\n unsafe_requirements = sorted(unsafe_requirements, key=self._sort_key)\n yield \"\"\n yielded = True\n if has_hashes and not self.allow_unsafe:\n yield MESSAGE_UNSAFE_PACKAGES_UNPINNED\n warn_uninstallable = True\n else:\n yield MESSAGE_UNSAFE_PACKAGES\n\n for ireq in unsafe_requirements:\n ireq_key = key_from_ireq(ireq)\n if not self.allow_unsafe:\n yield comment(\"# {}\".format(ireq_key))\n else:\n line = self._format_requirement(\n ireq, marker=markers.get(ireq_key), hashes=hashes\n )\n yield line\n\n # Yield even when there's no real content, so that blank files are written\n if not yielded:\n yield \"\"\n\n if warn_uninstallable:\n log.warning(MESSAGE_UNINSTALLABLE)\n\n def write(self, results, unsafe_requirements, markers, hashes):\n\n for line in self._iter_lines(results, unsafe_requirements, markers, hashes):\n log.info(line)\n if not self.dry_run:\n self.dst_file.write(unstyle(line).encode(\"utf-8\"))\n self.dst_file.write(os.linesep.encode(\"utf-8\"))\n\n def _format_requirement(self, ireq, marker=None, hashes=None):\n ireq_hashes = (hashes if hashes is not None else {}).get(ireq)\n\n line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)\n\n if not self.annotate:\n return line\n\n # Annotate what packages or reqs-ins this package is required by\n required_by = set()\n if hasattr(ireq, \"_source_ireqs\"):\n required_by |= {\n _comes_from_as_string(src_ireq)\n for src_ireq in ireq._source_ireqs\n if src_ireq.comes_from\n }\n elif ireq.comes_from:\n required_by.add(_comes_from_as_string(ireq))\n if required_by:\n annotation = \", \".join(sorted(required_by))\n line = \"{:24}{}{}\".format(\n line,\n \" \\\\\\n \" if ireq_hashes else \" \",\n comment(\"# via \" + annotation),\n )\n return line\n", "path": "piptools/writer.py"}]}
| 2,775 | 169 |
gh_patches_debug_56501 | rasdani/github-patches | git_diff | canonical__microk8s-2478 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[dashboard] should be exposed via ingress
When running microk8s on the server, rather than doing port forwarding it should be possible to access the dashboard via ingress (similar to kubeflow dashboard)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/wrappers/status.py`
Content:
```
1 #!/usr/bin/python3
2 import os
3 import argparse
4
5 from common.utils import (
6 exit_if_no_permission,
7 exit_if_stopped,
8 is_cluster_locked,
9 is_ha_enabled,
10 get_dqlite_info,
11 wait_for_ready,
12 is_cluster_ready,
13 get_available_addons,
14 get_current_arch,
15 get_addon_by_name,
16 kubectl_get,
17 kubectl_get_clusterroles,
18 )
19
20
21 def is_enabled(addon, item):
22 if addon in item:
23 return True
24 else:
25 filepath = os.path.expandvars(addon)
26 return os.path.isfile(filepath)
27
28 return False
29
30
31 def print_short(isReady, enabled_addons, disabled_addons):
32 if isReady:
33 print("microk8s is running")
34 print("addons:")
35 if enabled_addons and len(enabled_addons) > 0:
36 for enabled in enabled_addons:
37 print("{}: enabled".format(enabled["name"]))
38 if disabled_addons and len(disabled_addons) > 0:
39 for disabled in disabled_addons:
40 print("{}: disabled".format(disabled["name"]))
41 else:
42 print("microk8s is not running. Use microk8s inspect for a deeper inspection.")
43
44
45 def print_pretty(isReady, enabled_addons, disabled_addons):
46 console_formatter = "{:>3} {:<20} # {}"
47 if isReady:
48 print("microk8s is running")
49 if not is_ha_enabled():
50 print("high-availability: no")
51 else:
52 info = get_dqlite_info()
53 if ha_cluster_formed(info):
54 print("high-availability: yes")
55 else:
56 print("high-availability: no")
57
58 masters = "none"
59 standby = "none"
60 for node in info:
61 if node[1] == "voter":
62 if masters == "none":
63 masters = "{}".format(node[0])
64 else:
65 masters = "{} {}".format(masters, node[0])
66 if node[1] == "standby":
67 if standby == "none":
68 standby = "{}".format(node[0])
69 else:
70 standby = "{} {}".format(standby, node[0])
71
72 print("{:>2}{} {}".format("", "datastore master nodes:", masters))
73 print("{:>2}{} {}".format("", "datastore standby nodes:", standby))
74
75 print("addons:")
76 if enabled_addons and len(enabled_addons) > 0:
77 print("{:>2}{}".format("", "enabled:"))
78 for enabled in enabled_addons:
79 print(console_formatter.format("", enabled["name"], enabled["description"]))
80 if disabled_addons and len(disabled_addons) > 0:
81 print("{:>2}{}".format("", "disabled:"))
82 for disabled in disabled_addons:
83 print(console_formatter.format("", disabled["name"], disabled["description"]))
84 else:
85 print("microk8s is not running. Use microk8s inspect for a deeper inspection.")
86
87
88 def print_short_yaml(isReady, enabled_addons, disabled_addons):
89 print("microk8s:")
90 print("{:>2}{} {}".format("", "running:", isReady))
91
92 if isReady:
93 print("addons:")
94 for enabled in enabled_addons:
95 print(" {}: enabled".format(enabled["name"]))
96
97 for disabled in disabled_addons:
98 print(" {}: disabled".format(disabled["name"]))
99 else:
100 print(
101 "{:>2}{} {}".format(
102 "",
103 "message:",
104 "microk8s is not running. Use microk8s inspect for a deeper inspection.",
105 )
106 )
107
108
109 def print_yaml(isReady, enabled_addons, disabled_addons):
110 print("microk8s:")
111 print("{:>2}{} {}".format("", "running:", isReady))
112
113 print("{:>2}".format("high-availability:"))
114 ha_enabled = is_ha_enabled()
115 print("{:>2}{} {}".format("", "enabled:", ha_enabled))
116 if ha_enabled:
117 info = get_dqlite_info()
118 print("{:>2}{}".format("", "nodes:"))
119 for node in info:
120 print("{:>6}address: {:<1}".format("- ", node[0]))
121 print("{:>6}role: {:<1}".format("", node[1]))
122
123 if isReady:
124 print("{:>2}".format("addons:"))
125 for enabled in enabled_addons:
126 print("{:>4}name: {:<1}".format("- ", enabled["name"]))
127 print("{:>4}description: {:<1}".format("", enabled["description"]))
128 print("{:>4}version: {:<1}".format("", enabled["version"]))
129 print("{:>4}status: enabled".format(""))
130
131 for disabled in disabled_addons:
132 print("{:>4}name: {:<1}".format("- ", disabled["name"]))
133 print("{:>4}description: {:<1}".format("", disabled["description"]))
134 print("{:>4}version: {:<1}".format("", disabled["version"]))
135 print("{:>4}status: disabled".format(""))
136 else:
137 print(
138 "{:>2}{} {}".format(
139 "",
140 "message:",
141 "microk8s is not running. Use microk8s inspect for a deeper inspection.",
142 )
143 )
144
145
146 def print_addon_status(enabled):
147 if len(enabled) > 0:
148 print("enabled")
149 else:
150 print("disabled")
151
152
153 def get_status(available_addons, isReady):
154 enabled = []
155 disabled = []
156 if isReady:
157 kube_output = kubectl_get("all")
158 cluster_output = kubectl_get_clusterroles()
159 kube_output = kube_output + cluster_output
160 for addon in available_addons:
161 found = False
162 for row in kube_output.split("\n"):
163 if is_enabled(addon["check_status"], row):
164 enabled.append(addon)
165 found = True
166 break
167 if not found:
168 disabled.append(addon)
169
170 return enabled, disabled
171
172
173 def ha_cluster_formed(info):
174 voters = 0
175 for node in info:
176 if node[1] == "voter":
177 voters += 1
178 ha_formed = False
179 if voters > 2:
180 ha_formed = True
181 return ha_formed
182
183
184 if __name__ == "__main__":
185 exit_if_no_permission()
186 exit_if_stopped()
187 is_cluster_locked()
188
189 # initiate the parser with a description
190 parser = argparse.ArgumentParser(
191 description="Microk8s cluster status check.", prog="microk8s status"
192 )
193 parser.add_argument(
194 "--format",
195 help="print cluster and addon status, output can be in yaml, pretty or short",
196 default="pretty",
197 choices={"pretty", "yaml", "short"},
198 )
199 parser.add_argument(
200 "-w", "--wait-ready", action="store_true", help="wait until the cluster is in ready state"
201 )
202 parser.add_argument(
203 "-t",
204 "--timeout",
205 help="specify a timeout in seconds when waiting for the cluster to be ready.",
206 type=int,
207 default=0,
208 )
209 parser.add_argument("-a", "--addon", help="check the status of an addon.", default="all")
210 parser.add_argument(
211 "--yaml", action="store_true", help="DEPRECATED, use '--format yaml' instead"
212 )
213
214 # read arguments from the command line
215 args = parser.parse_args()
216
217 wait_ready = args.wait_ready
218 timeout = args.timeout
219 yaml_short = args.yaml
220
221 if wait_ready:
222 isReady = wait_for_ready(timeout)
223 else:
224 isReady = is_cluster_ready()
225
226 available_addons = get_available_addons(get_current_arch())
227
228 if args.addon != "all":
229 available_addons = get_addon_by_name(available_addons, args.addon)
230
231 enabled, disabled = get_status(available_addons, isReady)
232
233 if args.addon != "all":
234 print_addon_status(enabled)
235 else:
236 if args.format == "yaml":
237 print_yaml(isReady, enabled, disabled)
238 elif args.format == "short":
239 print_short(isReady, enabled, disabled)
240 else:
241 if yaml_short:
242 print_short_yaml(isReady, enabled, disabled)
243 else:
244 print_pretty(isReady, enabled, disabled)
245
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scripts/wrappers/status.py b/scripts/wrappers/status.py
--- a/scripts/wrappers/status.py
+++ b/scripts/wrappers/status.py
@@ -154,7 +154,8 @@
enabled = []
disabled = []
if isReady:
- kube_output = kubectl_get("all")
+ # 'all' does not include ingress
+ kube_output = kubectl_get("all,ingress")
cluster_output = kubectl_get_clusterroles()
kube_output = kube_output + cluster_output
for addon in available_addons:
|
{"golden_diff": "diff --git a/scripts/wrappers/status.py b/scripts/wrappers/status.py\n--- a/scripts/wrappers/status.py\n+++ b/scripts/wrappers/status.py\n@@ -154,7 +154,8 @@\n enabled = []\n disabled = []\n if isReady:\n- kube_output = kubectl_get(\"all\")\n+ # 'all' does not include ingress\n+ kube_output = kubectl_get(\"all,ingress\")\n cluster_output = kubectl_get_clusterroles()\n kube_output = kube_output + cluster_output\n for addon in available_addons:\n", "issue": "[dashboard] should be exposed via ingress\nWhen running microk8s on the server, rather than doing port forwarding it should be possible to access the dashboard via ingress (similar to kubeflow dashboard)\n", "before_files": [{"content": "#!/usr/bin/python3\nimport os\nimport argparse\n\nfrom common.utils import (\n exit_if_no_permission,\n exit_if_stopped,\n is_cluster_locked,\n is_ha_enabled,\n get_dqlite_info,\n wait_for_ready,\n is_cluster_ready,\n get_available_addons,\n get_current_arch,\n get_addon_by_name,\n kubectl_get,\n kubectl_get_clusterroles,\n)\n\n\ndef is_enabled(addon, item):\n if addon in item:\n return True\n else:\n filepath = os.path.expandvars(addon)\n return os.path.isfile(filepath)\n\n return False\n\n\ndef print_short(isReady, enabled_addons, disabled_addons):\n if isReady:\n print(\"microk8s is running\")\n print(\"addons:\")\n if enabled_addons and len(enabled_addons) > 0:\n for enabled in enabled_addons:\n print(\"{}: enabled\".format(enabled[\"name\"]))\n if disabled_addons and len(disabled_addons) > 0:\n for disabled in disabled_addons:\n print(\"{}: disabled\".format(disabled[\"name\"]))\n else:\n print(\"microk8s is not running. Use microk8s inspect for a deeper inspection.\")\n\n\ndef print_pretty(isReady, enabled_addons, disabled_addons):\n console_formatter = \"{:>3} {:<20} # {}\"\n if isReady:\n print(\"microk8s is running\")\n if not is_ha_enabled():\n print(\"high-availability: no\")\n else:\n info = get_dqlite_info()\n if ha_cluster_formed(info):\n print(\"high-availability: yes\")\n else:\n print(\"high-availability: no\")\n\n masters = \"none\"\n standby = \"none\"\n for node in info:\n if node[1] == \"voter\":\n if masters == \"none\":\n masters = \"{}\".format(node[0])\n else:\n masters = \"{} {}\".format(masters, node[0])\n if node[1] == \"standby\":\n if standby == \"none\":\n standby = \"{}\".format(node[0])\n else:\n standby = \"{} {}\".format(standby, node[0])\n\n print(\"{:>2}{} {}\".format(\"\", \"datastore master nodes:\", masters))\n print(\"{:>2}{} {}\".format(\"\", \"datastore standby nodes:\", standby))\n\n print(\"addons:\")\n if enabled_addons and len(enabled_addons) > 0:\n print(\"{:>2}{}\".format(\"\", \"enabled:\"))\n for enabled in enabled_addons:\n print(console_formatter.format(\"\", enabled[\"name\"], enabled[\"description\"]))\n if disabled_addons and len(disabled_addons) > 0:\n print(\"{:>2}{}\".format(\"\", \"disabled:\"))\n for disabled in disabled_addons:\n print(console_formatter.format(\"\", disabled[\"name\"], disabled[\"description\"]))\n else:\n print(\"microk8s is not running. 
Use microk8s inspect for a deeper inspection.\")\n\n\ndef print_short_yaml(isReady, enabled_addons, disabled_addons):\n print(\"microk8s:\")\n print(\"{:>2}{} {}\".format(\"\", \"running:\", isReady))\n\n if isReady:\n print(\"addons:\")\n for enabled in enabled_addons:\n print(\" {}: enabled\".format(enabled[\"name\"]))\n\n for disabled in disabled_addons:\n print(\" {}: disabled\".format(disabled[\"name\"]))\n else:\n print(\n \"{:>2}{} {}\".format(\n \"\",\n \"message:\",\n \"microk8s is not running. Use microk8s inspect for a deeper inspection.\",\n )\n )\n\n\ndef print_yaml(isReady, enabled_addons, disabled_addons):\n print(\"microk8s:\")\n print(\"{:>2}{} {}\".format(\"\", \"running:\", isReady))\n\n print(\"{:>2}\".format(\"high-availability:\"))\n ha_enabled = is_ha_enabled()\n print(\"{:>2}{} {}\".format(\"\", \"enabled:\", ha_enabled))\n if ha_enabled:\n info = get_dqlite_info()\n print(\"{:>2}{}\".format(\"\", \"nodes:\"))\n for node in info:\n print(\"{:>6}address: {:<1}\".format(\"- \", node[0]))\n print(\"{:>6}role: {:<1}\".format(\"\", node[1]))\n\n if isReady:\n print(\"{:>2}\".format(\"addons:\"))\n for enabled in enabled_addons:\n print(\"{:>4}name: {:<1}\".format(\"- \", enabled[\"name\"]))\n print(\"{:>4}description: {:<1}\".format(\"\", enabled[\"description\"]))\n print(\"{:>4}version: {:<1}\".format(\"\", enabled[\"version\"]))\n print(\"{:>4}status: enabled\".format(\"\"))\n\n for disabled in disabled_addons:\n print(\"{:>4}name: {:<1}\".format(\"- \", disabled[\"name\"]))\n print(\"{:>4}description: {:<1}\".format(\"\", disabled[\"description\"]))\n print(\"{:>4}version: {:<1}\".format(\"\", disabled[\"version\"]))\n print(\"{:>4}status: disabled\".format(\"\"))\n else:\n print(\n \"{:>2}{} {}\".format(\n \"\",\n \"message:\",\n \"microk8s is not running. 
Use microk8s inspect for a deeper inspection.\",\n )\n )\n\n\ndef print_addon_status(enabled):\n if len(enabled) > 0:\n print(\"enabled\")\n else:\n print(\"disabled\")\n\n\ndef get_status(available_addons, isReady):\n enabled = []\n disabled = []\n if isReady:\n kube_output = kubectl_get(\"all\")\n cluster_output = kubectl_get_clusterroles()\n kube_output = kube_output + cluster_output\n for addon in available_addons:\n found = False\n for row in kube_output.split(\"\\n\"):\n if is_enabled(addon[\"check_status\"], row):\n enabled.append(addon)\n found = True\n break\n if not found:\n disabled.append(addon)\n\n return enabled, disabled\n\n\ndef ha_cluster_formed(info):\n voters = 0\n for node in info:\n if node[1] == \"voter\":\n voters += 1\n ha_formed = False\n if voters > 2:\n ha_formed = True\n return ha_formed\n\n\nif __name__ == \"__main__\":\n exit_if_no_permission()\n exit_if_stopped()\n is_cluster_locked()\n\n # initiate the parser with a description\n parser = argparse.ArgumentParser(\n description=\"Microk8s cluster status check.\", prog=\"microk8s status\"\n )\n parser.add_argument(\n \"--format\",\n help=\"print cluster and addon status, output can be in yaml, pretty or short\",\n default=\"pretty\",\n choices={\"pretty\", \"yaml\", \"short\"},\n )\n parser.add_argument(\n \"-w\", \"--wait-ready\", action=\"store_true\", help=\"wait until the cluster is in ready state\"\n )\n parser.add_argument(\n \"-t\",\n \"--timeout\",\n help=\"specify a timeout in seconds when waiting for the cluster to be ready.\",\n type=int,\n default=0,\n )\n parser.add_argument(\"-a\", \"--addon\", help=\"check the status of an addon.\", default=\"all\")\n parser.add_argument(\n \"--yaml\", action=\"store_true\", help=\"DEPRECATED, use '--format yaml' instead\"\n )\n\n # read arguments from the command line\n args = parser.parse_args()\n\n wait_ready = args.wait_ready\n timeout = args.timeout\n yaml_short = args.yaml\n\n if wait_ready:\n isReady = wait_for_ready(timeout)\n else:\n isReady = is_cluster_ready()\n\n available_addons = get_available_addons(get_current_arch())\n\n if args.addon != \"all\":\n available_addons = get_addon_by_name(available_addons, args.addon)\n\n enabled, disabled = get_status(available_addons, isReady)\n\n if args.addon != \"all\":\n print_addon_status(enabled)\n else:\n if args.format == \"yaml\":\n print_yaml(isReady, enabled, disabled)\n elif args.format == \"short\":\n print_short(isReady, enabled, disabled)\n else:\n if yaml_short:\n print_short_yaml(isReady, enabled, disabled)\n else:\n print_pretty(isReady, enabled, disabled)\n", "path": "scripts/wrappers/status.py"}], "after_files": [{"content": "#!/usr/bin/python3\nimport os\nimport argparse\n\nfrom common.utils import (\n exit_if_no_permission,\n exit_if_stopped,\n is_cluster_locked,\n is_ha_enabled,\n get_dqlite_info,\n wait_for_ready,\n is_cluster_ready,\n get_available_addons,\n get_current_arch,\n get_addon_by_name,\n kubectl_get,\n kubectl_get_clusterroles,\n)\n\n\ndef is_enabled(addon, item):\n if addon in item:\n return True\n else:\n filepath = os.path.expandvars(addon)\n return os.path.isfile(filepath)\n\n return False\n\n\ndef print_short(isReady, enabled_addons, disabled_addons):\n if isReady:\n print(\"microk8s is running\")\n print(\"addons:\")\n if enabled_addons and len(enabled_addons) > 0:\n for enabled in enabled_addons:\n print(\"{}: enabled\".format(enabled[\"name\"]))\n if disabled_addons and len(disabled_addons) > 0:\n for disabled in disabled_addons:\n print(\"{}: 
disabled\".format(disabled[\"name\"]))\n else:\n print(\"microk8s is not running. Use microk8s inspect for a deeper inspection.\")\n\n\ndef print_pretty(isReady, enabled_addons, disabled_addons):\n console_formatter = \"{:>3} {:<20} # {}\"\n if isReady:\n print(\"microk8s is running\")\n if not is_ha_enabled():\n print(\"high-availability: no\")\n else:\n info = get_dqlite_info()\n if ha_cluster_formed(info):\n print(\"high-availability: yes\")\n else:\n print(\"high-availability: no\")\n\n masters = \"none\"\n standby = \"none\"\n for node in info:\n if node[1] == \"voter\":\n if masters == \"none\":\n masters = \"{}\".format(node[0])\n else:\n masters = \"{} {}\".format(masters, node[0])\n if node[1] == \"standby\":\n if standby == \"none\":\n standby = \"{}\".format(node[0])\n else:\n standby = \"{} {}\".format(standby, node[0])\n\n print(\"{:>2}{} {}\".format(\"\", \"datastore master nodes:\", masters))\n print(\"{:>2}{} {}\".format(\"\", \"datastore standby nodes:\", standby))\n\n print(\"addons:\")\n if enabled_addons and len(enabled_addons) > 0:\n print(\"{:>2}{}\".format(\"\", \"enabled:\"))\n for enabled in enabled_addons:\n print(console_formatter.format(\"\", enabled[\"name\"], enabled[\"description\"]))\n if disabled_addons and len(disabled_addons) > 0:\n print(\"{:>2}{}\".format(\"\", \"disabled:\"))\n for disabled in disabled_addons:\n print(console_formatter.format(\"\", disabled[\"name\"], disabled[\"description\"]))\n else:\n print(\"microk8s is not running. Use microk8s inspect for a deeper inspection.\")\n\n\ndef print_short_yaml(isReady, enabled_addons, disabled_addons):\n print(\"microk8s:\")\n print(\"{:>2}{} {}\".format(\"\", \"running:\", isReady))\n\n if isReady:\n print(\"addons:\")\n for enabled in enabled_addons:\n print(\" {}: enabled\".format(enabled[\"name\"]))\n\n for disabled in disabled_addons:\n print(\" {}: disabled\".format(disabled[\"name\"]))\n else:\n print(\n \"{:>2}{} {}\".format(\n \"\",\n \"message:\",\n \"microk8s is not running. Use microk8s inspect for a deeper inspection.\",\n )\n )\n\n\ndef print_yaml(isReady, enabled_addons, disabled_addons):\n print(\"microk8s:\")\n print(\"{:>2}{} {}\".format(\"\", \"running:\", isReady))\n\n print(\"{:>2}\".format(\"high-availability:\"))\n ha_enabled = is_ha_enabled()\n print(\"{:>2}{} {}\".format(\"\", \"enabled:\", ha_enabled))\n if ha_enabled:\n info = get_dqlite_info()\n print(\"{:>2}{}\".format(\"\", \"nodes:\"))\n for node in info:\n print(\"{:>6}address: {:<1}\".format(\"- \", node[0]))\n print(\"{:>6}role: {:<1}\".format(\"\", node[1]))\n\n if isReady:\n print(\"{:>2}\".format(\"addons:\"))\n for enabled in enabled_addons:\n print(\"{:>4}name: {:<1}\".format(\"- \", enabled[\"name\"]))\n print(\"{:>4}description: {:<1}\".format(\"\", enabled[\"description\"]))\n print(\"{:>4}version: {:<1}\".format(\"\", enabled[\"version\"]))\n print(\"{:>4}status: enabled\".format(\"\"))\n\n for disabled in disabled_addons:\n print(\"{:>4}name: {:<1}\".format(\"- \", disabled[\"name\"]))\n print(\"{:>4}description: {:<1}\".format(\"\", disabled[\"description\"]))\n print(\"{:>4}version: {:<1}\".format(\"\", disabled[\"version\"]))\n print(\"{:>4}status: disabled\".format(\"\"))\n else:\n print(\n \"{:>2}{} {}\".format(\n \"\",\n \"message:\",\n \"microk8s is not running. 
Use microk8s inspect for a deeper inspection.\",\n )\n )\n\n\ndef print_addon_status(enabled):\n if len(enabled) > 0:\n print(\"enabled\")\n else:\n print(\"disabled\")\n\n\ndef get_status(available_addons, isReady):\n enabled = []\n disabled = []\n if isReady:\n # 'all' does not include ingress\n kube_output = kubectl_get(\"all,ingress\")\n cluster_output = kubectl_get_clusterroles()\n kube_output = kube_output + cluster_output\n for addon in available_addons:\n found = False\n for row in kube_output.split(\"\\n\"):\n if is_enabled(addon[\"check_status\"], row):\n enabled.append(addon)\n found = True\n break\n if not found:\n disabled.append(addon)\n\n return enabled, disabled\n\n\ndef ha_cluster_formed(info):\n voters = 0\n for node in info:\n if node[1] == \"voter\":\n voters += 1\n ha_formed = False\n if voters > 2:\n ha_formed = True\n return ha_formed\n\n\nif __name__ == \"__main__\":\n exit_if_no_permission()\n exit_if_stopped()\n is_cluster_locked()\n\n # initiate the parser with a description\n parser = argparse.ArgumentParser(\n description=\"Microk8s cluster status check.\", prog=\"microk8s status\"\n )\n parser.add_argument(\n \"--format\",\n help=\"print cluster and addon status, output can be in yaml, pretty or short\",\n default=\"pretty\",\n choices={\"pretty\", \"yaml\", \"short\"},\n )\n parser.add_argument(\n \"-w\", \"--wait-ready\", action=\"store_true\", help=\"wait until the cluster is in ready state\"\n )\n parser.add_argument(\n \"-t\",\n \"--timeout\",\n help=\"specify a timeout in seconds when waiting for the cluster to be ready.\",\n type=int,\n default=0,\n )\n parser.add_argument(\"-a\", \"--addon\", help=\"check the status of an addon.\", default=\"all\")\n parser.add_argument(\n \"--yaml\", action=\"store_true\", help=\"DEPRECATED, use '--format yaml' instead\"\n )\n\n # read arguments from the command line\n args = parser.parse_args()\n\n wait_ready = args.wait_ready\n timeout = args.timeout\n yaml_short = args.yaml\n\n if wait_ready:\n isReady = wait_for_ready(timeout)\n else:\n isReady = is_cluster_ready()\n\n available_addons = get_available_addons(get_current_arch())\n\n if args.addon != \"all\":\n available_addons = get_addon_by_name(available_addons, args.addon)\n\n enabled, disabled = get_status(available_addons, isReady)\n\n if args.addon != \"all\":\n print_addon_status(enabled)\n else:\n if args.format == \"yaml\":\n print_yaml(isReady, enabled, disabled)\n elif args.format == \"short\":\n print_short(isReady, enabled, disabled)\n else:\n if yaml_short:\n print_short_yaml(isReady, enabled, disabled)\n else:\n print_pretty(isReady, enabled, disabled)\n", "path": "scripts/wrappers/status.py"}]}
| 2,748 | 127 |
gh_patches_debug_35755 | rasdani/github-patches | git_diff | ipython__ipython-3527 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
prune unused names from parallel.error
<del>The error classes defined in parallel.error have no real reason to be defined separately from the core, and their presence elsewhere can cause shenanigans to be required if they are used in core, as in #2043.</del>
parallel.error is a leftover from the old kernel code, and should probably be aggressively pruned of unused exception classes <del>when they are merged into core.error</del>.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `IPython/core/error.py`
Content:
```
1 # encoding: utf-8
2 """
3 Global exception classes for IPython.core.
4
5 Authors:
6
7 * Brian Granger
8 * Fernando Perez
9 * Min Ragan-Kelley
10
11 Notes
12 -----
13 """
14
15 #-----------------------------------------------------------------------------
16 # Copyright (C) 2008-2011 The IPython Development Team
17 #
18 # Distributed under the terms of the BSD License. The full license is in
19 # the file COPYING, distributed as part of this software.
20 #-----------------------------------------------------------------------------
21
22 #-----------------------------------------------------------------------------
23 # Imports
24 #-----------------------------------------------------------------------------
25
26 #-----------------------------------------------------------------------------
27 # Exception classes
28 #-----------------------------------------------------------------------------
29
30 class IPythonCoreError(Exception):
31 pass
32
33
34 class TryNext(IPythonCoreError):
35 """Try next hook exception.
36
37 Raise this in your hook function to indicate that the next hook handler
38 should be used to handle the operation.
39 """
40
41 class UsageError(IPythonCoreError):
42 """Error in magic function arguments, etc.
43
44 Something that probably won't warrant a full traceback, but should
45 nevertheless interrupt a macro / batch file.
46 """
47
48 class StdinNotImplementedError(IPythonCoreError, NotImplementedError):
49 """raw_input was requested in a context where it is not supported
50
51 For use in IPython kernels, where only some frontends may support
52 stdin requests.
53 """
54
```
Path: `IPython/parallel/error.py`
Content:
```
1 # encoding: utf-8
2
3 """Classes and functions for kernel related errors and exceptions.
4
5 Inheritance diagram:
6
7 .. inheritance-diagram:: IPython.parallel.error
8 :parts: 3
9
10 Authors:
11
12 * Brian Granger
13 * Min RK
14 """
15 from __future__ import print_function
16
17 import sys
18 import traceback
19
20 __docformat__ = "restructuredtext en"
21
22 # Tell nose to skip this module
23 __test__ = {}
24
25 #-------------------------------------------------------------------------------
26 # Copyright (C) 2008-2011 The IPython Development Team
27 #
28 # Distributed under the terms of the BSD License. The full license is in
29 # the file COPYING, distributed as part of this software.
30 #-------------------------------------------------------------------------------
31
32 #-------------------------------------------------------------------------------
33 # Error classes
34 #-------------------------------------------------------------------------------
35 class IPythonError(Exception):
36 """Base exception that all of our exceptions inherit from.
37
38 This can be raised by code that doesn't have any more specific
39 information."""
40
41 pass
42
43 # Exceptions associated with the controller objects
44 class ControllerError(IPythonError): pass
45
46 class ControllerCreationError(ControllerError): pass
47
48
49 # Exceptions associated with the Engines
50 class EngineError(IPythonError): pass
51
52 class EngineCreationError(EngineError): pass
53
54 class KernelError(IPythonError):
55 pass
56
57 class NotDefined(KernelError):
58 def __init__(self, name):
59 self.name = name
60 self.args = (name,)
61
62 def __repr__(self):
63 return '<NotDefined: %s>' % self.name
64
65 __str__ = __repr__
66
67
68 class QueueCleared(KernelError):
69 pass
70
71
72 class IdInUse(KernelError):
73 pass
74
75
76 class ProtocolError(KernelError):
77 pass
78
79
80 class ConnectionError(KernelError):
81 pass
82
83
84 class InvalidEngineID(KernelError):
85 pass
86
87
88 class NoEnginesRegistered(KernelError):
89 pass
90
91
92 class InvalidClientID(KernelError):
93 pass
94
95
96 class InvalidDeferredID(KernelError):
97 pass
98
99
100 class SerializationError(KernelError):
101 pass
102
103
104 class MessageSizeError(KernelError):
105 pass
106
107
108 class PBMessageSizeError(MessageSizeError):
109 pass
110
111
112 class ResultNotCompleted(KernelError):
113 pass
114
115
116 class ResultAlreadyRetrieved(KernelError):
117 pass
118
119 class ClientError(KernelError):
120 pass
121
122
123 class TaskAborted(KernelError):
124 pass
125
126
127 class TaskTimeout(KernelError):
128 pass
129
130
131 class NotAPendingResult(KernelError):
132 pass
133
134
135 class UnpickleableException(KernelError):
136 pass
137
138
139 class AbortedPendingDeferredError(KernelError):
140 pass
141
142
143 class InvalidProperty(KernelError):
144 pass
145
146
147 class MissingBlockArgument(KernelError):
148 pass
149
150
151 class StopLocalExecution(KernelError):
152 pass
153
154
155 class SecurityError(KernelError):
156 pass
157
158
159 class FileTimeoutError(KernelError):
160 pass
161
162 class TimeoutError(KernelError):
163 pass
164
165 class UnmetDependency(KernelError):
166 pass
167
168 class ImpossibleDependency(UnmetDependency):
169 pass
170
171 class DependencyTimeout(ImpossibleDependency):
172 pass
173
174 class InvalidDependency(ImpossibleDependency):
175 pass
176
177 class RemoteError(KernelError):
178 """Error raised elsewhere"""
179 ename=None
180 evalue=None
181 traceback=None
182 engine_info=None
183
184 def __init__(self, ename, evalue, traceback, engine_info=None):
185 self.ename=ename
186 self.evalue=evalue
187 self.traceback=traceback
188 self.engine_info=engine_info or {}
189 self.args=(ename, evalue)
190
191 def __repr__(self):
192 engineid = self.engine_info.get('engine_id', ' ')
193 return "<Remote[%s]:%s(%s)>"%(engineid, self.ename, self.evalue)
194
195 def __str__(self):
196 return "%s(%s)" % (self.ename, self.evalue)
197
198 def render_traceback(self):
199 """render traceback to a list of lines"""
200 return (self.traceback or "No traceback available").splitlines()
201
202 def _render_traceback_(self):
203 """Special method for custom tracebacks within IPython.
204
205 This will be called by IPython instead of displaying the local traceback.
206
207 It should return a traceback rendered as a list of lines.
208 """
209 return self.render_traceback()
210
211 def print_traceback(self, excid=None):
212 """print my traceback"""
213 print('\n'.join(self.render_traceback()))
214
215
216
217
218 class TaskRejectError(KernelError):
219 """Exception to raise when a task should be rejected by an engine.
220
221 This exception can be used to allow a task running on an engine to test
222 if the engine (or the user's namespace on the engine) has the needed
223 task dependencies. If not, the task should raise this exception. For
224 the task to be retried on another engine, the task should be created
225 with the `retries` argument > 1.
226
227 The advantage of this approach over our older properties system is that
228 tasks have full access to the user's namespace on the engines and the
229 properties don't have to be managed or tested by the controller.
230 """
231
232
233 class CompositeError(RemoteError):
234 """Error for representing possibly multiple errors on engines"""
235 tb_limit = 4 # limit on how many tracebacks to draw
236
237 def __init__(self, message, elist):
238 Exception.__init__(self, *(message, elist))
239 # Don't use pack_exception because it will conflict with the .message
240 # attribute that is being deprecated in 2.6 and beyond.
241 self.msg = message
242 self.elist = elist
243 self.args = [ e[0] for e in elist ]
244
245 def _get_engine_str(self, ei):
246 if not ei:
247 return '[Engine Exception]'
248 else:
249 return '[%s:%s]: ' % (ei['engine_id'], ei['method'])
250
251 def _get_traceback(self, ev):
252 try:
253 tb = ev._ipython_traceback_text
254 except AttributeError:
255 return 'No traceback available'
256 else:
257 return tb
258
259 def __str__(self):
260 s = str(self.msg)
261 for en, ev, etb, ei in self.elist[:self.tb_limit]:
262 engine_str = self._get_engine_str(ei)
263 s = s + '\n' + engine_str + en + ': ' + str(ev)
264 if len(self.elist) > self.tb_limit:
265 s = s + '\n.... %i more exceptions ...' % (len(self.elist) - self.tb_limit)
266 return s
267
268 def __repr__(self):
269 return "CompositeError(%i)" % len(self.elist)
270
271 def render_traceback(self, excid=None):
272 """render one or all of my tracebacks to a list of lines"""
273 lines = []
274 if excid is None:
275 for (en,ev,etb,ei) in self.elist[:self.tb_limit]:
276 lines.append(self._get_engine_str(ei))
277 lines.extend((etb or 'No traceback available').splitlines())
278 lines.append('')
279 if len(self.elist) > self.tb_limit:
280 lines.append(
281 '... %i more exceptions ...' % (len(self.elist) - self.tb_limit)
282 )
283 else:
284 try:
285 en,ev,etb,ei = self.elist[excid]
286 except:
287 raise IndexError("an exception with index %i does not exist"%excid)
288 else:
289 lines.append(self._get_engine_str(ei))
290 lines.extend((etb or 'No traceback available').splitlines())
291
292 return lines
293
294 def print_traceback(self, excid=None):
295 print('\n'.join(self.render_traceback(excid)))
296
297 def raise_exception(self, excid=0):
298 try:
299 en,ev,etb,ei = self.elist[excid]
300 except:
301 raise IndexError("an exception with index %i does not exist"%excid)
302 else:
303 raise RemoteError(en, ev, etb, ei)
304
305
306 def collect_exceptions(rdict_or_list, method='unspecified'):
307 """check a result dict for errors, and raise CompositeError if any exist.
308 Passthrough otherwise."""
309 elist = []
310 if isinstance(rdict_or_list, dict):
311 rlist = rdict_or_list.values()
312 else:
313 rlist = rdict_or_list
314 for r in rlist:
315 if isinstance(r, RemoteError):
316 en, ev, etb, ei = r.ename, r.evalue, r.traceback, r.engine_info
317 # Sometimes we could have CompositeError in our list. Just take
318 # the errors out of them and put them in our new list. This
319 # has the effect of flattening lists of CompositeErrors into one
320 # CompositeError
321 if en=='CompositeError':
322 for e in ev.elist:
323 elist.append(e)
324 else:
325 elist.append((en, ev, etb, ei))
326 if len(elist)==0:
327 return rdict_or_list
328 else:
329 msg = "one or more exceptions from call to method: %s" % (method)
330 # This silliness is needed so the debugger has access to the exception
331 # instance (e in this case)
332 try:
333 raise CompositeError(msg, elist)
334 except CompositeError as e:
335 raise e
336
337 def wrap_exception(engine_info={}):
338 etype, evalue, tb = sys.exc_info()
339 stb = traceback.format_exception(etype, evalue, tb)
340 exc_content = {
341 'status' : 'error',
342 'traceback' : stb,
343 'ename' : unicode(etype.__name__),
344 'evalue' : unicode(evalue),
345 'engine_info' : engine_info
346 }
347 return exc_content
348
349 def unwrap_exception(content):
350 err = RemoteError(content['ename'], content['evalue'],
351 ''.join(content['traceback']),
352 content.get('engine_info', {}))
353 return err
354
355
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/IPython/core/error.py b/IPython/core/error.py
--- a/IPython/core/error.py
+++ b/IPython/core/error.py
@@ -13,7 +13,7 @@
"""
#-----------------------------------------------------------------------------
-# Copyright (C) 2008-2011 The IPython Development Team
+# Copyright (C) 2008 The IPython Development Team
#
# Distributed under the terms of the BSD License. The full license is in
# the file COPYING, distributed as part of this software.
diff --git a/IPython/parallel/error.py b/IPython/parallel/error.py
--- a/IPython/parallel/error.py
+++ b/IPython/parallel/error.py
@@ -40,125 +40,19 @@
pass
-# Exceptions associated with the controller objects
-class ControllerError(IPythonError): pass
-
-class ControllerCreationError(ControllerError): pass
-
-
-# Exceptions associated with the Engines
-class EngineError(IPythonError): pass
-
-class EngineCreationError(EngineError): pass
-
class KernelError(IPythonError):
pass
-class NotDefined(KernelError):
- def __init__(self, name):
- self.name = name
- self.args = (name,)
-
- def __repr__(self):
- return '<NotDefined: %s>' % self.name
-
- __str__ = __repr__
-
-
-class QueueCleared(KernelError):
- pass
-
-
-class IdInUse(KernelError):
- pass
-
-
-class ProtocolError(KernelError):
- pass
-
-
-class ConnectionError(KernelError):
- pass
-
-
-class InvalidEngineID(KernelError):
- pass
class NoEnginesRegistered(KernelError):
pass
-class InvalidClientID(KernelError):
- pass
-
-
-class InvalidDeferredID(KernelError):
- pass
-
-
-class SerializationError(KernelError):
- pass
-
-
-class MessageSizeError(KernelError):
- pass
-
-
-class PBMessageSizeError(MessageSizeError):
- pass
-
-
-class ResultNotCompleted(KernelError):
- pass
-
-
-class ResultAlreadyRetrieved(KernelError):
- pass
-
-class ClientError(KernelError):
- pass
-
-
class TaskAborted(KernelError):
pass
-class TaskTimeout(KernelError):
- pass
-
-
-class NotAPendingResult(KernelError):
- pass
-
-
-class UnpickleableException(KernelError):
- pass
-
-
-class AbortedPendingDeferredError(KernelError):
- pass
-
-
-class InvalidProperty(KernelError):
- pass
-
-
-class MissingBlockArgument(KernelError):
- pass
-
-
-class StopLocalExecution(KernelError):
- pass
-
-
-class SecurityError(KernelError):
- pass
-
-
-class FileTimeoutError(KernelError):
- pass
-
class TimeoutError(KernelError):
pass
|
{"golden_diff": "diff --git a/IPython/core/error.py b/IPython/core/error.py\n--- a/IPython/core/error.py\n+++ b/IPython/core/error.py\n@@ -13,7 +13,7 @@\n \"\"\"\n \n #-----------------------------------------------------------------------------\n-# Copyright (C) 2008-2011 The IPython Development Team\n+# Copyright (C) 2008 The IPython Development Team\n #\n # Distributed under the terms of the BSD License. The full license is in\n # the file COPYING, distributed as part of this software.\ndiff --git a/IPython/parallel/error.py b/IPython/parallel/error.py\n--- a/IPython/parallel/error.py\n+++ b/IPython/parallel/error.py\n@@ -40,125 +40,19 @@\n \n pass\n \n-# Exceptions associated with the controller objects\n-class ControllerError(IPythonError): pass\n-\n-class ControllerCreationError(ControllerError): pass\n-\n-\n-# Exceptions associated with the Engines\n-class EngineError(IPythonError): pass\n-\n-class EngineCreationError(EngineError): pass\n-\n class KernelError(IPythonError):\n pass\n \n-class NotDefined(KernelError):\n- def __init__(self, name):\n- self.name = name\n- self.args = (name,)\n-\n- def __repr__(self):\n- return '<NotDefined: %s>' % self.name\n-\n- __str__ = __repr__\n-\n-\n-class QueueCleared(KernelError):\n- pass\n-\n-\n-class IdInUse(KernelError):\n- pass\n-\n-\n-class ProtocolError(KernelError):\n- pass\n-\n-\n-class ConnectionError(KernelError):\n- pass\n-\n-\n-class InvalidEngineID(KernelError):\n- pass\n \n \n class NoEnginesRegistered(KernelError):\n pass\n \n \n-class InvalidClientID(KernelError):\n- pass\n-\n-\n-class InvalidDeferredID(KernelError):\n- pass\n-\n-\n-class SerializationError(KernelError):\n- pass\n-\n-\n-class MessageSizeError(KernelError):\n- pass\n-\n-\n-class PBMessageSizeError(MessageSizeError):\n- pass\n-\n-\n-class ResultNotCompleted(KernelError):\n- pass\n-\n-\n-class ResultAlreadyRetrieved(KernelError):\n- pass\n-\n-class ClientError(KernelError):\n- pass\n-\n-\n class TaskAborted(KernelError):\n pass\n \n \n-class TaskTimeout(KernelError):\n- pass\n-\n-\n-class NotAPendingResult(KernelError):\n- pass\n-\n-\n-class UnpickleableException(KernelError):\n- pass\n-\n-\n-class AbortedPendingDeferredError(KernelError):\n- pass\n-\n-\n-class InvalidProperty(KernelError):\n- pass\n-\n-\n-class MissingBlockArgument(KernelError):\n- pass\n-\n-\n-class StopLocalExecution(KernelError):\n- pass\n-\n-\n-class SecurityError(KernelError):\n- pass\n-\n-\n-class FileTimeoutError(KernelError):\n- pass\n-\n class TimeoutError(KernelError):\n pass\n", "issue": "prune unused names from parallel.error\n<del>The error classes defined in parallel.error have no real reason to be defined separately from the core, and their presence elsewhere can cause shenanigans to be required if they are used in core, as in #2043.</del>\n\n\nparallel.error is a leftover from the old kernel code, and should probably be aggressively pruned of unused exception classes <del>when they are merged into core.error</del>.\n\n", "before_files": [{"content": "# encoding: utf-8\n\"\"\"\nGlobal exception classes for IPython.core.\n\nAuthors:\n\n* Brian Granger\n* Fernando Perez\n* Min Ragan-Kelley\n\nNotes\n-----\n\"\"\"\n\n#-----------------------------------------------------------------------------\n# Copyright (C) 2008-2011 The IPython Development Team\n#\n# Distributed under the terms of the BSD License. 
The full license is in\n# the file COPYING, distributed as part of this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Exception classes\n#-----------------------------------------------------------------------------\n\nclass IPythonCoreError(Exception):\n pass\n\n\nclass TryNext(IPythonCoreError):\n \"\"\"Try next hook exception.\n\n Raise this in your hook function to indicate that the next hook handler\n should be used to handle the operation.\n \"\"\"\n\nclass UsageError(IPythonCoreError):\n \"\"\"Error in magic function arguments, etc.\n\n Something that probably won't warrant a full traceback, but should\n nevertheless interrupt a macro / batch file.\n \"\"\"\n\nclass StdinNotImplementedError(IPythonCoreError, NotImplementedError):\n \"\"\"raw_input was requested in a context where it is not supported\n\n For use in IPython kernels, where only some frontends may support\n stdin requests.\n \"\"\"\n", "path": "IPython/core/error.py"}, {"content": "# encoding: utf-8\n\n\"\"\"Classes and functions for kernel related errors and exceptions.\n\nInheritance diagram:\n\n.. inheritance-diagram:: IPython.parallel.error\n :parts: 3\n\nAuthors:\n\n* Brian Granger\n* Min RK\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\nimport traceback\n\n__docformat__ = \"restructuredtext en\"\n\n# Tell nose to skip this module\n__test__ = {}\n\n#-------------------------------------------------------------------------------\n# Copyright (C) 2008-2011 The IPython Development Team\n#\n# Distributed under the terms of the BSD License. 
The full license is in\n# the file COPYING, distributed as part of this software.\n#-------------------------------------------------------------------------------\n\n#-------------------------------------------------------------------------------\n# Error classes\n#-------------------------------------------------------------------------------\nclass IPythonError(Exception):\n \"\"\"Base exception that all of our exceptions inherit from.\n\n This can be raised by code that doesn't have any more specific\n information.\"\"\"\n\n pass\n\n# Exceptions associated with the controller objects\nclass ControllerError(IPythonError): pass\n\nclass ControllerCreationError(ControllerError): pass\n\n\n# Exceptions associated with the Engines\nclass EngineError(IPythonError): pass\n\nclass EngineCreationError(EngineError): pass\n\nclass KernelError(IPythonError):\n pass\n\nclass NotDefined(KernelError):\n def __init__(self, name):\n self.name = name\n self.args = (name,)\n\n def __repr__(self):\n return '<NotDefined: %s>' % self.name\n\n __str__ = __repr__\n\n\nclass QueueCleared(KernelError):\n pass\n\n\nclass IdInUse(KernelError):\n pass\n\n\nclass ProtocolError(KernelError):\n pass\n\n\nclass ConnectionError(KernelError):\n pass\n\n\nclass InvalidEngineID(KernelError):\n pass\n\n\nclass NoEnginesRegistered(KernelError):\n pass\n\n\nclass InvalidClientID(KernelError):\n pass\n\n\nclass InvalidDeferredID(KernelError):\n pass\n\n\nclass SerializationError(KernelError):\n pass\n\n\nclass MessageSizeError(KernelError):\n pass\n\n\nclass PBMessageSizeError(MessageSizeError):\n pass\n\n\nclass ResultNotCompleted(KernelError):\n pass\n\n\nclass ResultAlreadyRetrieved(KernelError):\n pass\n\nclass ClientError(KernelError):\n pass\n\n\nclass TaskAborted(KernelError):\n pass\n\n\nclass TaskTimeout(KernelError):\n pass\n\n\nclass NotAPendingResult(KernelError):\n pass\n\n\nclass UnpickleableException(KernelError):\n pass\n\n\nclass AbortedPendingDeferredError(KernelError):\n pass\n\n\nclass InvalidProperty(KernelError):\n pass\n\n\nclass MissingBlockArgument(KernelError):\n pass\n\n\nclass StopLocalExecution(KernelError):\n pass\n\n\nclass SecurityError(KernelError):\n pass\n\n\nclass FileTimeoutError(KernelError):\n pass\n\nclass TimeoutError(KernelError):\n pass\n\nclass UnmetDependency(KernelError):\n pass\n\nclass ImpossibleDependency(UnmetDependency):\n pass\n\nclass DependencyTimeout(ImpossibleDependency):\n pass\n\nclass InvalidDependency(ImpossibleDependency):\n pass\n\nclass RemoteError(KernelError):\n \"\"\"Error raised elsewhere\"\"\"\n ename=None\n evalue=None\n traceback=None\n engine_info=None\n\n def __init__(self, ename, evalue, traceback, engine_info=None):\n self.ename=ename\n self.evalue=evalue\n self.traceback=traceback\n self.engine_info=engine_info or {}\n self.args=(ename, evalue)\n\n def __repr__(self):\n engineid = self.engine_info.get('engine_id', ' ')\n return \"<Remote[%s]:%s(%s)>\"%(engineid, self.ename, self.evalue)\n\n def __str__(self):\n return \"%s(%s)\" % (self.ename, self.evalue)\n \n def render_traceback(self):\n \"\"\"render traceback to a list of lines\"\"\"\n return (self.traceback or \"No traceback available\").splitlines()\n \n def _render_traceback_(self):\n \"\"\"Special method for custom tracebacks within IPython.\n \n This will be called by IPython instead of displaying the local traceback.\n \n It should return a traceback rendered as a list of lines.\n \"\"\"\n return self.render_traceback()\n \n def print_traceback(self, excid=None):\n \"\"\"print my 
traceback\"\"\"\n print('\\n'.join(self.render_traceback()))\n\n \n\n\nclass TaskRejectError(KernelError):\n \"\"\"Exception to raise when a task should be rejected by an engine.\n\n This exception can be used to allow a task running on an engine to test\n if the engine (or the user's namespace on the engine) has the needed\n task dependencies. If not, the task should raise this exception. For\n the task to be retried on another engine, the task should be created\n with the `retries` argument > 1.\n\n The advantage of this approach over our older properties system is that\n tasks have full access to the user's namespace on the engines and the\n properties don't have to be managed or tested by the controller.\n \"\"\"\n\n\nclass CompositeError(RemoteError):\n \"\"\"Error for representing possibly multiple errors on engines\"\"\"\n tb_limit = 4 # limit on how many tracebacks to draw\n \n def __init__(self, message, elist):\n Exception.__init__(self, *(message, elist))\n # Don't use pack_exception because it will conflict with the .message\n # attribute that is being deprecated in 2.6 and beyond.\n self.msg = message\n self.elist = elist\n self.args = [ e[0] for e in elist ]\n\n def _get_engine_str(self, ei):\n if not ei:\n return '[Engine Exception]'\n else:\n return '[%s:%s]: ' % (ei['engine_id'], ei['method'])\n\n def _get_traceback(self, ev):\n try:\n tb = ev._ipython_traceback_text\n except AttributeError:\n return 'No traceback available'\n else:\n return tb\n\n def __str__(self):\n s = str(self.msg)\n for en, ev, etb, ei in self.elist[:self.tb_limit]:\n engine_str = self._get_engine_str(ei)\n s = s + '\\n' + engine_str + en + ': ' + str(ev)\n if len(self.elist) > self.tb_limit:\n s = s + '\\n.... %i more exceptions ...' % (len(self.elist) - self.tb_limit)\n return s\n\n def __repr__(self):\n return \"CompositeError(%i)\" % len(self.elist)\n \n def render_traceback(self, excid=None):\n \"\"\"render one or all of my tracebacks to a list of lines\"\"\"\n lines = []\n if excid is None:\n for (en,ev,etb,ei) in self.elist[:self.tb_limit]:\n lines.append(self._get_engine_str(ei))\n lines.extend((etb or 'No traceback available').splitlines())\n lines.append('')\n if len(self.elist) > self.tb_limit:\n lines.append(\n '... %i more exceptions ...' % (len(self.elist) - self.tb_limit)\n )\n else:\n try:\n en,ev,etb,ei = self.elist[excid]\n except:\n raise IndexError(\"an exception with index %i does not exist\"%excid)\n else:\n lines.append(self._get_engine_str(ei))\n lines.extend((etb or 'No traceback available').splitlines())\n \n return lines\n \n def print_traceback(self, excid=None):\n print('\\n'.join(self.render_traceback(excid)))\n\n def raise_exception(self, excid=0):\n try:\n en,ev,etb,ei = self.elist[excid]\n except:\n raise IndexError(\"an exception with index %i does not exist\"%excid)\n else:\n raise RemoteError(en, ev, etb, ei)\n\n\ndef collect_exceptions(rdict_or_list, method='unspecified'):\n \"\"\"check a result dict for errors, and raise CompositeError if any exist.\n Passthrough otherwise.\"\"\"\n elist = []\n if isinstance(rdict_or_list, dict):\n rlist = rdict_or_list.values()\n else:\n rlist = rdict_or_list\n for r in rlist:\n if isinstance(r, RemoteError):\n en, ev, etb, ei = r.ename, r.evalue, r.traceback, r.engine_info\n # Sometimes we could have CompositeError in our list. Just take\n # the errors out of them and put them in our new list. 
This\n # has the effect of flattening lists of CompositeErrors into one\n # CompositeError\n if en=='CompositeError':\n for e in ev.elist:\n elist.append(e)\n else:\n elist.append((en, ev, etb, ei))\n if len(elist)==0:\n return rdict_or_list\n else:\n msg = \"one or more exceptions from call to method: %s\" % (method)\n # This silliness is needed so the debugger has access to the exception\n # instance (e in this case)\n try:\n raise CompositeError(msg, elist)\n except CompositeError as e:\n raise e\n\ndef wrap_exception(engine_info={}):\n etype, evalue, tb = sys.exc_info()\n stb = traceback.format_exception(etype, evalue, tb)\n exc_content = {\n 'status' : 'error',\n 'traceback' : stb,\n 'ename' : unicode(etype.__name__),\n 'evalue' : unicode(evalue),\n 'engine_info' : engine_info\n }\n return exc_content\n\ndef unwrap_exception(content):\n err = RemoteError(content['ename'], content['evalue'],\n ''.join(content['traceback']),\n content.get('engine_info', {}))\n return err\n\n", "path": "IPython/parallel/error.py"}], "after_files": [{"content": "# encoding: utf-8\n\"\"\"\nGlobal exception classes for IPython.core.\n\nAuthors:\n\n* Brian Granger\n* Fernando Perez\n* Min Ragan-Kelley\n\nNotes\n-----\n\"\"\"\n\n#-----------------------------------------------------------------------------\n# Copyright (C) 2008 The IPython Development Team\n#\n# Distributed under the terms of the BSD License. The full license is in\n# the file COPYING, distributed as part of this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Exception classes\n#-----------------------------------------------------------------------------\n\nclass IPythonCoreError(Exception):\n pass\n\n\nclass TryNext(IPythonCoreError):\n \"\"\"Try next hook exception.\n\n Raise this in your hook function to indicate that the next hook handler\n should be used to handle the operation.\n \"\"\"\n\nclass UsageError(IPythonCoreError):\n \"\"\"Error in magic function arguments, etc.\n\n Something that probably won't warrant a full traceback, but should\n nevertheless interrupt a macro / batch file.\n \"\"\"\n\nclass StdinNotImplementedError(IPythonCoreError, NotImplementedError):\n \"\"\"raw_input was requested in a context where it is not supported\n\n For use in IPython kernels, where only some frontends may support\n stdin requests.\n \"\"\"\n", "path": "IPython/core/error.py"}, {"content": "# encoding: utf-8\n\n\"\"\"Classes and functions for kernel related errors and exceptions.\n\nInheritance diagram:\n\n.. inheritance-diagram:: IPython.parallel.error\n :parts: 3\n\nAuthors:\n\n* Brian Granger\n* Min RK\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\nimport traceback\n\n__docformat__ = \"restructuredtext en\"\n\n# Tell nose to skip this module\n__test__ = {}\n\n#-------------------------------------------------------------------------------\n# Copyright (C) 2008-2011 The IPython Development Team\n#\n# Distributed under the terms of the BSD License. 
The full license is in\n# the file COPYING, distributed as part of this software.\n#-------------------------------------------------------------------------------\n\n#-------------------------------------------------------------------------------\n# Error classes\n#-------------------------------------------------------------------------------\nclass IPythonError(Exception):\n \"\"\"Base exception that all of our exceptions inherit from.\n\n This can be raised by code that doesn't have any more specific\n information.\"\"\"\n\n pass\n\nclass KernelError(IPythonError):\n pass\n\n\n\nclass NoEnginesRegistered(KernelError):\n pass\n\n\nclass TaskAborted(KernelError):\n pass\n\n\nclass TimeoutError(KernelError):\n pass\n\nclass UnmetDependency(KernelError):\n pass\n\nclass ImpossibleDependency(UnmetDependency):\n pass\n\nclass DependencyTimeout(ImpossibleDependency):\n pass\n\nclass InvalidDependency(ImpossibleDependency):\n pass\n\nclass RemoteError(KernelError):\n \"\"\"Error raised elsewhere\"\"\"\n ename=None\n evalue=None\n traceback=None\n engine_info=None\n\n def __init__(self, ename, evalue, traceback, engine_info=None):\n self.ename=ename\n self.evalue=evalue\n self.traceback=traceback\n self.engine_info=engine_info or {}\n self.args=(ename, evalue)\n\n def __repr__(self):\n engineid = self.engine_info.get('engine_id', ' ')\n return \"<Remote[%s]:%s(%s)>\"%(engineid, self.ename, self.evalue)\n\n def __str__(self):\n return \"%s(%s)\" % (self.ename, self.evalue)\n \n def render_traceback(self):\n \"\"\"render traceback to a list of lines\"\"\"\n return (self.traceback or \"No traceback available\").splitlines()\n \n def _render_traceback_(self):\n \"\"\"Special method for custom tracebacks within IPython.\n \n This will be called by IPython instead of displaying the local traceback.\n \n It should return a traceback rendered as a list of lines.\n \"\"\"\n return self.render_traceback()\n \n def print_traceback(self, excid=None):\n \"\"\"print my traceback\"\"\"\n print('\\n'.join(self.render_traceback()))\n\n \n\n\nclass TaskRejectError(KernelError):\n \"\"\"Exception to raise when a task should be rejected by an engine.\n\n This exception can be used to allow a task running on an engine to test\n if the engine (or the user's namespace on the engine) has the needed\n task dependencies. If not, the task should raise this exception. 
For\n the task to be retried on another engine, the task should be created\n with the `retries` argument > 1.\n\n The advantage of this approach over our older properties system is that\n tasks have full access to the user's namespace on the engines and the\n properties don't have to be managed or tested by the controller.\n \"\"\"\n\n\nclass CompositeError(RemoteError):\n \"\"\"Error for representing possibly multiple errors on engines\"\"\"\n tb_limit = 4 # limit on how many tracebacks to draw\n \n def __init__(self, message, elist):\n Exception.__init__(self, *(message, elist))\n # Don't use pack_exception because it will conflict with the .message\n # attribute that is being deprecated in 2.6 and beyond.\n self.msg = message\n self.elist = elist\n self.args = [ e[0] for e in elist ]\n\n def _get_engine_str(self, ei):\n if not ei:\n return '[Engine Exception]'\n else:\n return '[%s:%s]: ' % (ei['engine_id'], ei['method'])\n\n def _get_traceback(self, ev):\n try:\n tb = ev._ipython_traceback_text\n except AttributeError:\n return 'No traceback available'\n else:\n return tb\n\n def __str__(self):\n s = str(self.msg)\n for en, ev, etb, ei in self.elist[:self.tb_limit]:\n engine_str = self._get_engine_str(ei)\n s = s + '\\n' + engine_str + en + ': ' + str(ev)\n if len(self.elist) > self.tb_limit:\n s = s + '\\n.... %i more exceptions ...' % (len(self.elist) - self.tb_limit)\n return s\n\n def __repr__(self):\n return \"CompositeError(%i)\" % len(self.elist)\n \n def render_traceback(self, excid=None):\n \"\"\"render one or all of my tracebacks to a list of lines\"\"\"\n lines = []\n if excid is None:\n for (en,ev,etb,ei) in self.elist[:self.tb_limit]:\n lines.append(self._get_engine_str(ei))\n lines.extend((etb or 'No traceback available').splitlines())\n lines.append('')\n if len(self.elist) > self.tb_limit:\n lines.append(\n '... %i more exceptions ...' % (len(self.elist) - self.tb_limit)\n )\n else:\n try:\n en,ev,etb,ei = self.elist[excid]\n except:\n raise IndexError(\"an exception with index %i does not exist\"%excid)\n else:\n lines.append(self._get_engine_str(ei))\n lines.extend((etb or 'No traceback available').splitlines())\n \n return lines\n \n def print_traceback(self, excid=None):\n print('\\n'.join(self.render_traceback(excid)))\n\n def raise_exception(self, excid=0):\n try:\n en,ev,etb,ei = self.elist[excid]\n except:\n raise IndexError(\"an exception with index %i does not exist\"%excid)\n else:\n raise RemoteError(en, ev, etb, ei)\n\n\ndef collect_exceptions(rdict_or_list, method='unspecified'):\n \"\"\"check a result dict for errors, and raise CompositeError if any exist.\n Passthrough otherwise.\"\"\"\n elist = []\n if isinstance(rdict_or_list, dict):\n rlist = rdict_or_list.values()\n else:\n rlist = rdict_or_list\n for r in rlist:\n if isinstance(r, RemoteError):\n en, ev, etb, ei = r.ename, r.evalue, r.traceback, r.engine_info\n # Sometimes we could have CompositeError in our list. Just take\n # the errors out of them and put them in our new list. 
This\n # has the effect of flattening lists of CompositeErrors into one\n # CompositeError\n if en=='CompositeError':\n for e in ev.elist:\n elist.append(e)\n else:\n elist.append((en, ev, etb, ei))\n if len(elist)==0:\n return rdict_or_list\n else:\n msg = \"one or more exceptions from call to method: %s\" % (method)\n # This silliness is needed so the debugger has access to the exception\n # instance (e in this case)\n try:\n raise CompositeError(msg, elist)\n except CompositeError as e:\n raise e\n\ndef wrap_exception(engine_info={}):\n etype, evalue, tb = sys.exc_info()\n stb = traceback.format_exception(etype, evalue, tb)\n exc_content = {\n 'status' : 'error',\n 'traceback' : stb,\n 'ename' : unicode(etype.__name__),\n 'evalue' : unicode(evalue),\n 'engine_info' : engine_info\n }\n return exc_content\n\ndef unwrap_exception(content):\n err = RemoteError(content['ename'], content['evalue'],\n ''.join(content['traceback']),\n content.get('engine_info', {}))\n return err\n\n", "path": "IPython/parallel/error.py"}]}
| 3,922 | 652 |
gh_patches_debug_13492 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2642 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider sheetz is broken
During the global build at 2021-08-11-14-42-19, spider **sheetz** failed with **526 features** and **1 error**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-08-11-14-42-19/logs/sheetz.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-11-14-42-19/output/sheetz.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-11-14-42-19/output/sheetz.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/sheetz.py`
Content:
```
1 import json
2 import re
3 import scrapy
4 from locations.items import GeojsonPointItem
5
6
7 class SheetzSpider(scrapy.Spider):
8 name = "sheetz"
9 item_attributes = {'brand': "Sheetz"}
10 allowed_domains = ["orderz.sheetz.com"]
11 start_urls = (
12 "https://orderz.sheetz.com/sas/store",
13 )
14
15 def parse(self, response):
16 stores = json.loads(response.body_as_unicode())
17
18 for store in stores:
19 properties = {
20 'addr_full': store['address'],
21 'city': store['city'],
22 'state': store['state'],
23 'postcode': store['zip'],
24 'ref': store['storeNumber'],
25 'phone': store.get('phone'),
26 'website': 'https://orderz.sheetz.com/#/main/location/store/'+store['storeNumber'],
27 'lat': float(store['latitude']),
28 'lon': float(store['longitude']),
29 'opening_hours': '24/7' if store['open24x7'] else None,
30 'extras': {
31 'amenity:chargingstation': store['evCharger'],
32 'amenity:fuel': True,
33 'atm': store['atm'],
34 'car_wash': store['carWash'],
35 'fax': store['fax'] if 'fax' in store else None,
36 'fuel:diesel': store['diesel'],
37 'fuel:e15': store['e15'],
38 'fuel:e85': store['e85'],
39 'fuel:kerosene': store['kerosene'],
40 'fuel:propane': store['propane'],
41 }
42 }
43
44 yield GeojsonPointItem(**properties)
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/spiders/sheetz.py b/locations/spiders/sheetz.py
--- a/locations/spiders/sheetz.py
+++ b/locations/spiders/sheetz.py
@@ -24,8 +24,8 @@
'ref': store['storeNumber'],
'phone': store.get('phone'),
'website': 'https://orderz.sheetz.com/#/main/location/store/'+store['storeNumber'],
- 'lat': float(store['latitude']),
- 'lon': float(store['longitude']),
+ 'lat': store['latitude'],
+ 'lon': store['longitude'],
'opening_hours': '24/7' if store['open24x7'] else None,
'extras': {
'amenity:chargingstation': store['evCharger'],
|
{"golden_diff": "diff --git a/locations/spiders/sheetz.py b/locations/spiders/sheetz.py\n--- a/locations/spiders/sheetz.py\n+++ b/locations/spiders/sheetz.py\n@@ -24,8 +24,8 @@\n 'ref': store['storeNumber'],\n 'phone': store.get('phone'),\n 'website': 'https://orderz.sheetz.com/#/main/location/store/'+store['storeNumber'],\n- 'lat': float(store['latitude']),\n- 'lon': float(store['longitude']),\n+ 'lat': store['latitude'],\n+ 'lon': store['longitude'],\n 'opening_hours': '24/7' if store['open24x7'] else None,\n 'extras': {\n 'amenity:chargingstation': store['evCharger'],\n", "issue": "Spider sheetz is broken\nDuring the global build at 2021-08-11-14-42-19, spider **sheetz** failed with **526 features** and **1 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-08-11-14-42-19/logs/sheetz.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-11-14-42-19/output/sheetz.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-11-14-42-19/output/sheetz.geojson))\n", "before_files": [{"content": "import json\nimport re\nimport scrapy\nfrom locations.items import GeojsonPointItem\n\n\nclass SheetzSpider(scrapy.Spider):\n name = \"sheetz\"\n item_attributes = {'brand': \"Sheetz\"}\n allowed_domains = [\"orderz.sheetz.com\"]\n start_urls = (\n \"https://orderz.sheetz.com/sas/store\",\n )\n\n def parse(self, response):\n stores = json.loads(response.body_as_unicode())\n\n for store in stores:\n properties = {\n 'addr_full': store['address'],\n 'city': store['city'],\n 'state': store['state'],\n 'postcode': store['zip'],\n 'ref': store['storeNumber'],\n 'phone': store.get('phone'),\n 'website': 'https://orderz.sheetz.com/#/main/location/store/'+store['storeNumber'],\n 'lat': float(store['latitude']),\n 'lon': float(store['longitude']),\n 'opening_hours': '24/7' if store['open24x7'] else None,\n 'extras': {\n 'amenity:chargingstation': store['evCharger'],\n 'amenity:fuel': True,\n 'atm': store['atm'],\n 'car_wash': store['carWash'],\n 'fax': store['fax'] if 'fax' in store else None,\n 'fuel:diesel': store['diesel'],\n 'fuel:e15': store['e15'],\n 'fuel:e85': store['e85'],\n 'fuel:kerosene': store['kerosene'],\n 'fuel:propane': store['propane'],\n }\n }\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/sheetz.py"}], "after_files": [{"content": "import json\nimport re\nimport scrapy\nfrom locations.items import GeojsonPointItem\n\n\nclass SheetzSpider(scrapy.Spider):\n name = \"sheetz\"\n item_attributes = {'brand': \"Sheetz\"}\n allowed_domains = [\"orderz.sheetz.com\"]\n start_urls = (\n \"https://orderz.sheetz.com/sas/store\",\n )\n\n def parse(self, response):\n stores = json.loads(response.body_as_unicode())\n\n for store in stores:\n properties = {\n 'addr_full': store['address'],\n 'city': store['city'],\n 'state': store['state'],\n 'postcode': store['zip'],\n 'ref': store['storeNumber'],\n 'phone': store.get('phone'),\n 'website': 'https://orderz.sheetz.com/#/main/location/store/'+store['storeNumber'],\n 'lat': store['latitude'],\n 'lon': store['longitude'],\n 'opening_hours': '24/7' if store['open24x7'] else None,\n 'extras': {\n 'amenity:chargingstation': store['evCharger'],\n 'amenity:fuel': True,\n 'atm': store['atm'],\n 'car_wash': store['carWash'],\n 'fax': store['fax'] if 'fax' in store else None,\n 'fuel:diesel': store['diesel'],\n 'fuel:e15': store['e15'],\n 'fuel:e85': store['e85'],\n 'fuel:kerosene': store['kerosene'],\n 'fuel:propane': store['propane'],\n }\n }\n\n yield 
GeojsonPointItem(**properties)\n", "path": "locations/spiders/sheetz.py"}]}
| 892 | 177 |
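Editor's note on the record above: the patch passes the API's `latitude`/`longitude` values through unchanged instead of coercing them with `float()`, which is the call that can blow up when a store record carries a missing or null coordinate (the single error in the build log is consistent with that, though the log itself is not reproduced here). A minimal, self-contained sketch of the suspected failure mode and a defensive alternative; the `store` dict below is hypothetical, not taken from the Sheetz API:

```python
# Hypothetical store record illustrating the suspected failure mode: a null coordinate.
store = {"storeNumber": "100", "latitude": None, "longitude": "-77.03"}


def safe_float(value):
    """Return float(value) when possible, otherwise None instead of raising."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return None


# float(store["latitude"]) would raise TypeError here; the helper degrades to None,
# which downstream code can then handle or skip like any other optional field.
print(safe_float(store["latitude"]))   # None
print(safe_float(store["longitude"]))  # -77.03
```

The merged fix takes the even simpler route of dropping the coercion entirely, matching how the other fields are passed straight from the API response.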
gh_patches_debug_5235 | rasdani/github-patches | git_diff | rasterio__rasterio-618 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
YCbCr JPEG-in-TIFF breaks rio-info
One creates a YCbCr JPEG-in-TIFF with GDAL using `photometric=YCbCr` and `compress=JPEG` options. But reading the TIFF's tags to get the compression method returns "YCbCr JPEG", a value that's not in `rasterio.enums.Compression`.
Reference: http://www.gdal.org/frmt_gtiff.html
Solution: normalize "YCbCr JPEG" to "JPEG" and add source color space to rio-info's output.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rasterio/enums.py`
Content:
```
1
2 from enum import Enum, IntEnum
3
4
5 class ColorInterp(IntEnum):
6 undefined=0
7 grey=1
8 gray=1
9 palette=2
10 red=3
11 green=4
12 blue=5
13 alpha=6
14 hue=7
15 saturation=8
16 lightness=9
17 cyan=10
18 magenta=11
19 yellow=12
20 black=13
21
22
23 class Resampling(Enum):
24 nearest='NEAREST'
25 gauss='GAUSS'
26 cubic='CUBIC'
27 average='AVERAGE'
28 mode='MODE'
29 average_magphase='AVERAGE_MAGPHASE'
30 none='NONE'
31
32
33 class Compression(Enum):
34 jpeg='JPEG'
35 lzw='LZW'
36 packbits='PACKBITS'
37 deflate='DEFLATE'
38 ccittrle='CCITTRLE'
39 ccittfax3='CCITTFAX3'
40 ccittfax4='CCITTFAX4'
41 lzma='LZMA'
42 none='NONE'
43
44
45 class Interleaving(Enum):
46 pixel='PIXEL'
47 line='LINE'
48 band='BAND'
49
50
51 class MaskFlags(IntEnum):
52 all_valid=1
53 per_dataset=2
54 alpha=4
55 nodata=8
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rasterio/enums.py b/rasterio/enums.py
--- a/rasterio/enums.py
+++ b/rasterio/enums.py
@@ -18,6 +18,9 @@
magenta=11
yellow=12
black=13
+ Y=14
+ Cb=15
+ Cr=16
class Resampling(Enum):
@@ -53,3 +56,14 @@
per_dataset=2
alpha=4
nodata=8
+
+
+class PhotometricInterp(Enum):
+ black='MINISBLACK'
+ white='MINISWHITE'
+ rgb='RGB'
+ cmyk='CMYK'
+ ycbcr='YCbCr'
+ cielab='CIELAB'
+ icclab='ICCLAB'
+ itulab='ITULAB'
|
{"golden_diff": "diff --git a/rasterio/enums.py b/rasterio/enums.py\n--- a/rasterio/enums.py\n+++ b/rasterio/enums.py\n@@ -18,6 +18,9 @@\n magenta=11\n yellow=12\n black=13\n+ Y=14\n+ Cb=15\n+ Cr=16\n \n \n class Resampling(Enum):\n@@ -53,3 +56,14 @@\n per_dataset=2\n alpha=4\n nodata=8\n+\n+\n+class PhotometricInterp(Enum):\n+ black='MINISBLACK'\n+ white='MINISWHITE'\n+ rgb='RGB'\n+ cmyk='CMYK'\n+ ycbcr='YCbCr'\n+ cielab='CIELAB'\n+ icclab='ICCLAB'\n+ itulab='ITULAB'\n", "issue": "YCbCr JPEG-in-TIFF breaks rio-info\nOne creates a YCbCr JPEG-in-TIFF with GDAL using `photometric=YCbCr` and `compress=JPEG` options. But reading the TIFFs tags to get the compression method returns \"YCbCr JPEG\", a value that's not in `rasterio.enums.Compression`.\n\nReference: http://www.gdal.org/frmt_gtiff.html\n\nSolution: normalize \"YCbCr JPEG\" to \"JPEG\" and add source color space to rio-info's output.\n\n", "before_files": [{"content": "\nfrom enum import Enum, IntEnum\n\n\nclass ColorInterp(IntEnum):\n undefined=0\n grey=1\n gray=1\n palette=2\n red=3\n green=4\n blue=5\n alpha=6\n hue=7\n saturation=8\n lightness=9\n cyan=10\n magenta=11\n yellow=12\n black=13\n\n\nclass Resampling(Enum):\n nearest='NEAREST'\n gauss='GAUSS'\n cubic='CUBIC'\n average='AVERAGE'\n mode='MODE'\n average_magphase='AVERAGE_MAGPHASE'\n none='NONE'\n\n\nclass Compression(Enum):\n jpeg='JPEG'\n lzw='LZW'\n packbits='PACKBITS'\n deflate='DEFLATE'\n ccittrle='CCITTRLE'\n ccittfax3='CCITTFAX3'\n ccittfax4='CCITTFAX4'\n lzma='LZMA'\n none='NONE'\n\n\nclass Interleaving(Enum):\n pixel='PIXEL'\n line='LINE'\n band='BAND'\n\n\nclass MaskFlags(IntEnum):\n all_valid=1\n per_dataset=2\n alpha=4\n nodata=8\n", "path": "rasterio/enums.py"}], "after_files": [{"content": "\nfrom enum import Enum, IntEnum\n\n\nclass ColorInterp(IntEnum):\n undefined=0\n grey=1\n gray=1\n palette=2\n red=3\n green=4\n blue=5\n alpha=6\n hue=7\n saturation=8\n lightness=9\n cyan=10\n magenta=11\n yellow=12\n black=13\n Y=14\n Cb=15\n Cr=16\n\n\nclass Resampling(Enum):\n nearest='NEAREST'\n gauss='GAUSS'\n cubic='CUBIC'\n average='AVERAGE'\n mode='MODE'\n average_magphase='AVERAGE_MAGPHASE'\n none='NONE'\n\n\nclass Compression(Enum):\n jpeg='JPEG'\n lzw='LZW'\n packbits='PACKBITS'\n deflate='DEFLATE'\n ccittrle='CCITTRLE'\n ccittfax3='CCITTFAX3'\n ccittfax4='CCITTFAX4'\n lzma='LZMA'\n none='NONE'\n\n\nclass Interleaving(Enum):\n pixel='PIXEL'\n line='LINE'\n band='BAND'\n\n\nclass MaskFlags(IntEnum):\n all_valid=1\n per_dataset=2\n alpha=4\n nodata=8\n\n\nclass PhotometricInterp(Enum):\n black='MINISBLACK'\n white='MINISWHITE'\n rgb='RGB'\n cmyk='CMYK'\n ycbcr='YCbCr'\n cielab='CIELAB'\n icclab='ICCLAB'\n itulab='ITULAB'\n", "path": "rasterio/enums.py"}]}
| 762 | 207 |
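Editor's note on the record above: the merged patch extends the enums (a new `PhotometricInterp` plus the `Y`/`Cb`/`Cr` color interpretations) rather than rewriting the tag value, so mapping GDAL's combined "YCbCr JPEG" string back onto `Compression.jpeg` is left to the caller. A hedged sketch of that normalization step as the issue proposes; the trimmed `Compression` enum below only restates members shown earlier and is not rasterio's full definition:

```python
from enum import Enum


class Compression(Enum):
    """Trimmed copy of rasterio.enums.Compression, for illustration only."""
    jpeg = 'JPEG'
    lzw = 'LZW'
    deflate = 'DEFLATE'
    none = 'NONE'


def normalize_compression(tag_value):
    """Map a GDAL COMPRESSION tag such as 'YCbCr JPEG' onto the enum.

    For JPEG-in-TIFF created with PHOTOMETRIC=YCBCR, GDAL reports the color
    space and the codec together; only the trailing token names the codec,
    so the photometric part can be tracked separately (e.g. via the new
    PhotometricInterp enum).
    """
    token = tag_value.split()[-1] if tag_value else 'NONE'
    return Compression(token)


print(normalize_compression('YCbCr JPEG'))  # Compression.jpeg
print(normalize_compression('LZW'))         # Compression.lzw
```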
gh_patches_debug_40210 | rasdani/github-patches | git_diff | cal-itp__benefits-664 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Engineering: Front-end - Eligibility Start page design: Redesign to a single-column design. Add media list.
- For both Mobile and Desktop, redesign the page to a two-column vertical design.
- Redesign the 3-icon content page to be a 2-icon page and add the new bullet points of copy.
- Establish Mobile/Tablet width margins
- Make sure H1 font size is correct
<img width="1512" alt="image" src="https://user-images.githubusercontent.com/3673236/172735521-c7b6760b-832d-448e-b45e-20f217debccb.png">
<img width="453" alt="image" src="https://user-images.githubusercontent.com/3673236/172735548-a9c6ed1f-23fd-4904-8e9f-629f6bf2eba1.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `benefits/eligibility/views.py`
Content:
```
1 """
2 The eligibility application: view definitions for the eligibility verification flow.
3 """
4 from django.conf import settings
5 from django.contrib import messages
6 from django.shortcuts import redirect
7 from django.template.response import TemplateResponse
8 from django.urls import reverse
9 from django.utils.decorators import decorator_from_middleware
10 from django.utils.translation import pgettext, gettext as _
11
12 from benefits.core import recaptcha, session, viewmodels
13 from benefits.core.middleware import AgencySessionRequired, LoginRequired, RateLimit, VerifierSessionRequired
14 from benefits.core.models import EligibilityVerifier
15 from benefits.core.views import PageTemplateResponse
16 from . import analytics, api, forms
17
18
19 @decorator_from_middleware(AgencySessionRequired)
20 def index(request):
21 """View handler for the eligibility verifier selection form."""
22
23 session.update(request, eligibility_types=[], origin=reverse("eligibility:index"))
24 agency = session.agency(request)
25
26 eligibility_start = reverse("eligibility:start")
27
28 page = viewmodels.Page(
29 title=_("eligibility.pages.index.title"),
30 content_title=_("eligibility.pages.index.content_title"),
31 forms=forms.EligibilityVerifierSelectionForm(agency=agency),
32 )
33
34 if request.method == "POST":
35 form = forms.EligibilityVerifierSelectionForm(data=request.POST, agency=agency)
36
37 if form.is_valid():
38 verifier_id = form.cleaned_data.get("verifier")
39 verifier = EligibilityVerifier.objects.get(id=verifier_id)
40 session.update(request, verifier=verifier)
41
42 response = redirect(eligibility_start)
43 else:
44 # form was not valid, allow for correction/resubmission
45 page.forms = [form]
46 response = PageTemplateResponse(request, page)
47 else:
48 if agency.eligibility_verifiers.count() == 1:
49 verifier = agency.eligibility_verifiers.first()
50 session.update(request, verifier=verifier)
51 response = redirect(eligibility_start)
52 else:
53 response = PageTemplateResponse(request, page)
54
55 return response
56
57
58 @decorator_from_middleware(AgencySessionRequired)
59 @decorator_from_middleware(VerifierSessionRequired)
60 def start(request):
61 """View handler for the eligibility verification getting started screen."""
62
63 session.update(request, eligibility_types=[], origin=reverse("eligibility:start"))
64 verifier = session.verifier(request)
65
66 button = viewmodels.Button.primary(text=_("eligibility.buttons.continue"), url=reverse("eligibility:confirm"))
67
68 payment_options_link = f"{reverse('core:help')}#payment-options"
69 media = [
70 dict(
71 icon=viewmodels.Icon("idcardcheck", pgettext("image alt text", "core.icons.idcardcheck")),
72 heading=_(verifier.start_item_name),
73 details=_(verifier.start_item_description),
74 ),
75 dict(
76 icon=viewmodels.Icon("bankcardcheck", pgettext("image alt text", "core.icons.bankcardcheck")),
77 heading=_("eligibility.pages.start.bankcard.title"),
78 details=_("eligibility.pages.start.bankcard.text"),
79 links=[
80 viewmodels.Button.link(
81 classes="btn-text btn-link",
82 text=_("eligibility.pages.start.bankcard.button[0].link"),
83 url=payment_options_link,
84 ),
85 viewmodels.Button.link(
86 classes="btn-text btn-link",
87 text=_("eligibility.pages.start.bankcard.button[1].link"),
88 url=payment_options_link,
89 ),
90 ],
91 ),
92 ]
93
94 if verifier.requires_authentication:
95 if settings.OAUTH_CLIENT_NAME is None:
96 raise Exception("EligibilityVerifier requires authentication, but OAUTH_CLIENT_NAME is None")
97
98 oauth_help_link = f"{reverse('core:help')}#login-gov"
99
100 media.insert(
101 0,
102 dict(
103 icon=viewmodels.Icon("idscreencheck", pgettext("image alt text", "core.icons.idscreencheck")),
104 heading=_("eligibility.pages.start.oauth.heading"),
105 details=_("eligibility.pages.start.oauth.details"),
106 links=[
107 viewmodels.Button.link(
108 classes="btn-text btn-link",
109 text=_("eligibility.pages.start.oauth.link_text"),
110 url=oauth_help_link,
111 rel="noopener noreferrer",
112 )
113 ],
114 ),
115 )
116
117 if not session.logged_in(request):
118 button = viewmodels.Button.login(
119 label=_(verifier.auth_provider.sign_in_button_label),
120 text="",
121 url=reverse("oauth:login"),
122 )
123
124 page = viewmodels.Page(
125 title=_("eligibility.pages.start.title"),
126 noimage=True,
127 paragraphs=[_(verifier.start_blurb)],
128 button=button,
129 )
130
131 ctx = page.context_dict()
132 ctx["title"] = _(verifier.start_content_title)
133 ctx["media"] = media
134
135 return TemplateResponse(request, "eligibility/start.html", ctx)
136
137
138 @decorator_from_middleware(AgencySessionRequired)
139 @decorator_from_middleware(LoginRequired)
140 @decorator_from_middleware(RateLimit)
141 @decorator_from_middleware(VerifierSessionRequired)
142 def confirm(request):
143 """View handler for the eligibility verification form."""
144
145 template = "eligibility/confirm.html"
146 verifier = session.verifier(request)
147
148 page = viewmodels.Page(
149 title=_(verifier.form_title),
150 content_title=_(verifier.form_content_title),
151 paragraphs=[_(verifier.form_blurb)],
152 form=forms.EligibilityVerificationForm(auto_id=True, label_suffix="", verifier=verifier),
153 classes="text-lg-center",
154 )
155
156 # POST form submission, process form data
157 if request.method == "POST":
158 analytics.started_eligibility(request)
159
160 form = forms.EligibilityVerificationForm(data=request.POST, verifier=verifier)
161
162 # form was not valid, allow for correction/resubmission
163 if not form.is_valid():
164 if recaptcha.has_error(form):
165 messages.error(request, "Recaptcha failed. Please try again.")
166
167 page.forms = [form]
168 return TemplateResponse(request, template, page.context_dict())
169
170 # form is valid, make Eligibility Verification request to get the verified types
171 verified_types = api.get_verified_types(request, form)
172
173 # form was not valid, allow for correction/resubmission
174 if verified_types is None:
175 analytics.returned_error(request, form.errors)
176 page.forms = [form]
177 return TemplateResponse(request, template, page.context_dict())
178 # no types were verified
179 elif len(verified_types) == 0:
180 return unverified(request)
181 # type(s) were verified
182 else:
183 return verified(request, verified_types)
184
185 # GET from an already verified user, no need to verify again
186 elif session.eligible(request):
187 eligibility = session.eligibility(request)
188 return verified(request, [eligibility.name])
189 # GET from an unverified user, present the form
190 else:
191 return TemplateResponse(request, template, page.context_dict())
192
193
194 @decorator_from_middleware(AgencySessionRequired)
195 @decorator_from_middleware(LoginRequired)
196 def verified(request, verified_types):
197 """View handler for the verified eligibility page."""
198
199 analytics.returned_success(request)
200
201 session.update(request, eligibility_types=verified_types)
202
203 return redirect("enrollment:index")
204
205
206 @decorator_from_middleware(AgencySessionRequired)
207 @decorator_from_middleware(LoginRequired)
208 @decorator_from_middleware(VerifierSessionRequired)
209 def unverified(request):
210 """View handler for the unverified eligibility page."""
211
212 analytics.returned_fail(request)
213
214 # tel: link to agency phone number
215 agency = session.agency(request)
216 buttons = viewmodels.Button.agency_contact_links(agency)
217 buttons.append(viewmodels.Button.home(request, _("core.buttons.retry")))
218
219 verifier = session.verifier(request)
220
221 page = viewmodels.Page(
222 title=_(verifier.unverified_title),
223 classes="with-agency-links",
224 content_title=_(verifier.unverified_content_title),
225 icon=viewmodels.Icon("idcardquestion", pgettext("image alt text", "core.icons.idcardquestion")),
226 paragraphs=[_(verifier.unverified_blurb), _("eligibility.pages.unverified.p[1]")],
227 buttons=buttons,
228 )
229
230 return TemplateResponse(request, "eligibility/unverified.html", page.context_dict())
231
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/benefits/eligibility/views.py b/benefits/eligibility/views.py
--- a/benefits/eligibility/views.py
+++ b/benefits/eligibility/views.py
@@ -67,11 +67,6 @@
payment_options_link = f"{reverse('core:help')}#payment-options"
media = [
- dict(
- icon=viewmodels.Icon("idcardcheck", pgettext("image alt text", "core.icons.idcardcheck")),
- heading=_(verifier.start_item_name),
- details=_(verifier.start_item_description),
- ),
dict(
icon=viewmodels.Icon("bankcardcheck", pgettext("image alt text", "core.icons.bankcardcheck")),
heading=_("eligibility.pages.start.bankcard.title"),
@@ -96,11 +91,12 @@
raise Exception("EligibilityVerifier requires authentication, but OAUTH_CLIENT_NAME is None")
oauth_help_link = f"{reverse('core:help')}#login-gov"
+ oauth_help_more_link = f"{reverse('core:help')}#login-gov-verify-items"
media.insert(
0,
dict(
- icon=viewmodels.Icon("idscreencheck", pgettext("image alt text", "core.icons.idscreencheck")),
+ icon=viewmodels.Icon("idcardcheck", pgettext("image alt text", "core.icons.idcardcheck")),
heading=_("eligibility.pages.start.oauth.heading"),
details=_("eligibility.pages.start.oauth.details"),
links=[
@@ -109,18 +105,38 @@
text=_("eligibility.pages.start.oauth.link_text"),
url=oauth_help_link,
rel="noopener noreferrer",
- )
+ ),
+ viewmodels.Button.link(
+ classes="btn-text btn-link",
+ text=_("eligibility.pages.start.oauth.link_text[2]"),
+ url=oauth_help_more_link,
+ rel="noopener noreferrer",
+ ),
+ ],
+ bullets=[
+ _("eligibility.pages.start.oauth.required_items[0]"),
+ _("eligibility.pages.start.oauth.required_items[1]"),
+ _("eligibility.pages.start.oauth.required_items[2]"),
],
),
)
if not session.logged_in(request):
button = viewmodels.Button.login(
- label=_(verifier.auth_provider.sign_in_button_label),
- text="",
+ text=_(verifier.auth_provider.sign_in_button_label),
url=reverse("oauth:login"),
)
+ else:
+ media.insert(
+ 0,
+ dict(
+ icon=viewmodels.Icon("idcardcheck", pgettext("image alt text", "core.icons.idcardcheck")),
+ heading=_(verifier.start_item_name),
+ details=_(verifier.start_item_description),
+ ),
+ )
+
page = viewmodels.Page(
title=_("eligibility.pages.start.title"),
noimage=True,
|
{"golden_diff": "diff --git a/benefits/eligibility/views.py b/benefits/eligibility/views.py\n--- a/benefits/eligibility/views.py\n+++ b/benefits/eligibility/views.py\n@@ -67,11 +67,6 @@\n \n payment_options_link = f\"{reverse('core:help')}#payment-options\"\n media = [\n- dict(\n- icon=viewmodels.Icon(\"idcardcheck\", pgettext(\"image alt text\", \"core.icons.idcardcheck\")),\n- heading=_(verifier.start_item_name),\n- details=_(verifier.start_item_description),\n- ),\n dict(\n icon=viewmodels.Icon(\"bankcardcheck\", pgettext(\"image alt text\", \"core.icons.bankcardcheck\")),\n heading=_(\"eligibility.pages.start.bankcard.title\"),\n@@ -96,11 +91,12 @@\n raise Exception(\"EligibilityVerifier requires authentication, but OAUTH_CLIENT_NAME is None\")\n \n oauth_help_link = f\"{reverse('core:help')}#login-gov\"\n+ oauth_help_more_link = f\"{reverse('core:help')}#login-gov-verify-items\"\n \n media.insert(\n 0,\n dict(\n- icon=viewmodels.Icon(\"idscreencheck\", pgettext(\"image alt text\", \"core.icons.idscreencheck\")),\n+ icon=viewmodels.Icon(\"idcardcheck\", pgettext(\"image alt text\", \"core.icons.idcardcheck\")),\n heading=_(\"eligibility.pages.start.oauth.heading\"),\n details=_(\"eligibility.pages.start.oauth.details\"),\n links=[\n@@ -109,18 +105,38 @@\n text=_(\"eligibility.pages.start.oauth.link_text\"),\n url=oauth_help_link,\n rel=\"noopener noreferrer\",\n- )\n+ ),\n+ viewmodels.Button.link(\n+ classes=\"btn-text btn-link\",\n+ text=_(\"eligibility.pages.start.oauth.link_text[2]\"),\n+ url=oauth_help_more_link,\n+ rel=\"noopener noreferrer\",\n+ ),\n+ ],\n+ bullets=[\n+ _(\"eligibility.pages.start.oauth.required_items[0]\"),\n+ _(\"eligibility.pages.start.oauth.required_items[1]\"),\n+ _(\"eligibility.pages.start.oauth.required_items[2]\"),\n ],\n ),\n )\n \n if not session.logged_in(request):\n button = viewmodels.Button.login(\n- label=_(verifier.auth_provider.sign_in_button_label),\n- text=\"\",\n+ text=_(verifier.auth_provider.sign_in_button_label),\n url=reverse(\"oauth:login\"),\n )\n \n+ else:\n+ media.insert(\n+ 0,\n+ dict(\n+ icon=viewmodels.Icon(\"idcardcheck\", pgettext(\"image alt text\", \"core.icons.idcardcheck\")),\n+ heading=_(verifier.start_item_name),\n+ details=_(verifier.start_item_description),\n+ ),\n+ )\n+\n page = viewmodels.Page(\n title=_(\"eligibility.pages.start.title\"),\n noimage=True,\n", "issue": "Engineering: Front-end - Eligibility Start page design: Redesign to a single-column design. Add media list.\n- For both Mobile and Desktop, redesign the page to a two-column vertical design. 
\r\n- Redesign the 3-icon content page to be a 2-icon page and add the new bullet points of copy.\r\n- Establish Mobile/Tablet width margins\r\n- Make sure H1 font size is correct\r\n<img width=\"1512\" alt=\"image\" src=\"https://user-images.githubusercontent.com/3673236/172735521-c7b6760b-832d-448e-b45e-20f217debccb.png\">\r\n<img width=\"453\" alt=\"image\" src=\"https://user-images.githubusercontent.com/3673236/172735548-a9c6ed1f-23fd-4904-8e9f-629f6bf2eba1.png\">\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nThe eligibility application: view definitions for the eligibility verification flow.\n\"\"\"\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.shortcuts import redirect\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.decorators import decorator_from_middleware\nfrom django.utils.translation import pgettext, gettext as _\n\nfrom benefits.core import recaptcha, session, viewmodels\nfrom benefits.core.middleware import AgencySessionRequired, LoginRequired, RateLimit, VerifierSessionRequired\nfrom benefits.core.models import EligibilityVerifier\nfrom benefits.core.views import PageTemplateResponse\nfrom . import analytics, api, forms\n\n\n@decorator_from_middleware(AgencySessionRequired)\ndef index(request):\n \"\"\"View handler for the eligibility verifier selection form.\"\"\"\n\n session.update(request, eligibility_types=[], origin=reverse(\"eligibility:index\"))\n agency = session.agency(request)\n\n eligibility_start = reverse(\"eligibility:start\")\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.index.title\"),\n content_title=_(\"eligibility.pages.index.content_title\"),\n forms=forms.EligibilityVerifierSelectionForm(agency=agency),\n )\n\n if request.method == \"POST\":\n form = forms.EligibilityVerifierSelectionForm(data=request.POST, agency=agency)\n\n if form.is_valid():\n verifier_id = form.cleaned_data.get(\"verifier\")\n verifier = EligibilityVerifier.objects.get(id=verifier_id)\n session.update(request, verifier=verifier)\n\n response = redirect(eligibility_start)\n else:\n # form was not valid, allow for correction/resubmission\n page.forms = [form]\n response = PageTemplateResponse(request, page)\n else:\n if agency.eligibility_verifiers.count() == 1:\n verifier = agency.eligibility_verifiers.first()\n session.update(request, verifier=verifier)\n response = redirect(eligibility_start)\n else:\n response = PageTemplateResponse(request, page)\n\n return response\n\n\n@decorator_from_middleware(AgencySessionRequired)\n@decorator_from_middleware(VerifierSessionRequired)\ndef start(request):\n \"\"\"View handler for the eligibility verification getting started screen.\"\"\"\n\n session.update(request, eligibility_types=[], origin=reverse(\"eligibility:start\"))\n verifier = session.verifier(request)\n\n button = viewmodels.Button.primary(text=_(\"eligibility.buttons.continue\"), url=reverse(\"eligibility:confirm\"))\n\n payment_options_link = f\"{reverse('core:help')}#payment-options\"\n media = [\n dict(\n icon=viewmodels.Icon(\"idcardcheck\", pgettext(\"image alt text\", \"core.icons.idcardcheck\")),\n heading=_(verifier.start_item_name),\n details=_(verifier.start_item_description),\n ),\n dict(\n icon=viewmodels.Icon(\"bankcardcheck\", pgettext(\"image alt text\", \"core.icons.bankcardcheck\")),\n heading=_(\"eligibility.pages.start.bankcard.title\"),\n details=_(\"eligibility.pages.start.bankcard.text\"),\n links=[\n viewmodels.Button.link(\n classes=\"btn-text 
btn-link\",\n text=_(\"eligibility.pages.start.bankcard.button[0].link\"),\n url=payment_options_link,\n ),\n viewmodels.Button.link(\n classes=\"btn-text btn-link\",\n text=_(\"eligibility.pages.start.bankcard.button[1].link\"),\n url=payment_options_link,\n ),\n ],\n ),\n ]\n\n if verifier.requires_authentication:\n if settings.OAUTH_CLIENT_NAME is None:\n raise Exception(\"EligibilityVerifier requires authentication, but OAUTH_CLIENT_NAME is None\")\n\n oauth_help_link = f\"{reverse('core:help')}#login-gov\"\n\n media.insert(\n 0,\n dict(\n icon=viewmodels.Icon(\"idscreencheck\", pgettext(\"image alt text\", \"core.icons.idscreencheck\")),\n heading=_(\"eligibility.pages.start.oauth.heading\"),\n details=_(\"eligibility.pages.start.oauth.details\"),\n links=[\n viewmodels.Button.link(\n classes=\"btn-text btn-link\",\n text=_(\"eligibility.pages.start.oauth.link_text\"),\n url=oauth_help_link,\n rel=\"noopener noreferrer\",\n )\n ],\n ),\n )\n\n if not session.logged_in(request):\n button = viewmodels.Button.login(\n label=_(verifier.auth_provider.sign_in_button_label),\n text=\"\",\n url=reverse(\"oauth:login\"),\n )\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.start.title\"),\n noimage=True,\n paragraphs=[_(verifier.start_blurb)],\n button=button,\n )\n\n ctx = page.context_dict()\n ctx[\"title\"] = _(verifier.start_content_title)\n ctx[\"media\"] = media\n\n return TemplateResponse(request, \"eligibility/start.html\", ctx)\n\n\n@decorator_from_middleware(AgencySessionRequired)\n@decorator_from_middleware(LoginRequired)\n@decorator_from_middleware(RateLimit)\n@decorator_from_middleware(VerifierSessionRequired)\ndef confirm(request):\n \"\"\"View handler for the eligibility verification form.\"\"\"\n\n template = \"eligibility/confirm.html\"\n verifier = session.verifier(request)\n\n page = viewmodels.Page(\n title=_(verifier.form_title),\n content_title=_(verifier.form_content_title),\n paragraphs=[_(verifier.form_blurb)],\n form=forms.EligibilityVerificationForm(auto_id=True, label_suffix=\"\", verifier=verifier),\n classes=\"text-lg-center\",\n )\n\n # POST form submission, process form data\n if request.method == \"POST\":\n analytics.started_eligibility(request)\n\n form = forms.EligibilityVerificationForm(data=request.POST, verifier=verifier)\n\n # form was not valid, allow for correction/resubmission\n if not form.is_valid():\n if recaptcha.has_error(form):\n messages.error(request, \"Recaptcha failed. 
Please try again.\")\n\n page.forms = [form]\n return TemplateResponse(request, template, page.context_dict())\n\n # form is valid, make Eligibility Verification request to get the verified types\n verified_types = api.get_verified_types(request, form)\n\n # form was not valid, allow for correction/resubmission\n if verified_types is None:\n analytics.returned_error(request, form.errors)\n page.forms = [form]\n return TemplateResponse(request, template, page.context_dict())\n # no types were verified\n elif len(verified_types) == 0:\n return unverified(request)\n # type(s) were verified\n else:\n return verified(request, verified_types)\n\n # GET from an already verified user, no need to verify again\n elif session.eligible(request):\n eligibility = session.eligibility(request)\n return verified(request, [eligibility.name])\n # GET from an unverified user, present the form\n else:\n return TemplateResponse(request, template, page.context_dict())\n\n\n@decorator_from_middleware(AgencySessionRequired)\n@decorator_from_middleware(LoginRequired)\ndef verified(request, verified_types):\n \"\"\"View handler for the verified eligibility page.\"\"\"\n\n analytics.returned_success(request)\n\n session.update(request, eligibility_types=verified_types)\n\n return redirect(\"enrollment:index\")\n\n\n@decorator_from_middleware(AgencySessionRequired)\n@decorator_from_middleware(LoginRequired)\n@decorator_from_middleware(VerifierSessionRequired)\ndef unverified(request):\n \"\"\"View handler for the unverified eligibility page.\"\"\"\n\n analytics.returned_fail(request)\n\n # tel: link to agency phone number\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n buttons.append(viewmodels.Button.home(request, _(\"core.buttons.retry\")))\n\n verifier = session.verifier(request)\n\n page = viewmodels.Page(\n title=_(verifier.unverified_title),\n classes=\"with-agency-links\",\n content_title=_(verifier.unverified_content_title),\n icon=viewmodels.Icon(\"idcardquestion\", pgettext(\"image alt text\", \"core.icons.idcardquestion\")),\n paragraphs=[_(verifier.unverified_blurb), _(\"eligibility.pages.unverified.p[1]\")],\n buttons=buttons,\n )\n\n return TemplateResponse(request, \"eligibility/unverified.html\", page.context_dict())\n", "path": "benefits/eligibility/views.py"}], "after_files": [{"content": "\"\"\"\nThe eligibility application: view definitions for the eligibility verification flow.\n\"\"\"\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.shortcuts import redirect\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.decorators import decorator_from_middleware\nfrom django.utils.translation import pgettext, gettext as _\n\nfrom benefits.core import recaptcha, session, viewmodels\nfrom benefits.core.middleware import AgencySessionRequired, LoginRequired, RateLimit, VerifierSessionRequired\nfrom benefits.core.models import EligibilityVerifier\nfrom benefits.core.views import PageTemplateResponse\nfrom . 
import analytics, api, forms\n\n\n@decorator_from_middleware(AgencySessionRequired)\ndef index(request):\n \"\"\"View handler for the eligibility verifier selection form.\"\"\"\n\n session.update(request, eligibility_types=[], origin=reverse(\"eligibility:index\"))\n agency = session.agency(request)\n\n eligibility_start = reverse(\"eligibility:start\")\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.index.title\"),\n content_title=_(\"eligibility.pages.index.content_title\"),\n forms=forms.EligibilityVerifierSelectionForm(agency=agency),\n )\n\n if request.method == \"POST\":\n form = forms.EligibilityVerifierSelectionForm(data=request.POST, agency=agency)\n\n if form.is_valid():\n verifier_id = form.cleaned_data.get(\"verifier\")\n verifier = EligibilityVerifier.objects.get(id=verifier_id)\n session.update(request, verifier=verifier)\n\n response = redirect(eligibility_start)\n else:\n # form was not valid, allow for correction/resubmission\n page.forms = [form]\n response = PageTemplateResponse(request, page)\n else:\n if agency.eligibility_verifiers.count() == 1:\n verifier = agency.eligibility_verifiers.first()\n session.update(request, verifier=verifier)\n response = redirect(eligibility_start)\n else:\n response = PageTemplateResponse(request, page)\n\n return response\n\n\n@decorator_from_middleware(AgencySessionRequired)\n@decorator_from_middleware(VerifierSessionRequired)\ndef start(request):\n \"\"\"View handler for the eligibility verification getting started screen.\"\"\"\n\n session.update(request, eligibility_types=[], origin=reverse(\"eligibility:start\"))\n verifier = session.verifier(request)\n\n button = viewmodels.Button.primary(text=_(\"eligibility.buttons.continue\"), url=reverse(\"eligibility:confirm\"))\n\n payment_options_link = f\"{reverse('core:help')}#payment-options\"\n media = [\n dict(\n icon=viewmodels.Icon(\"bankcardcheck\", pgettext(\"image alt text\", \"core.icons.bankcardcheck\")),\n heading=_(\"eligibility.pages.start.bankcard.title\"),\n details=_(\"eligibility.pages.start.bankcard.text\"),\n links=[\n viewmodels.Button.link(\n classes=\"btn-text btn-link\",\n text=_(\"eligibility.pages.start.bankcard.button[0].link\"),\n url=payment_options_link,\n ),\n viewmodels.Button.link(\n classes=\"btn-text btn-link\",\n text=_(\"eligibility.pages.start.bankcard.button[1].link\"),\n url=payment_options_link,\n ),\n ],\n ),\n ]\n\n if verifier.requires_authentication:\n if settings.OAUTH_CLIENT_NAME is None:\n raise Exception(\"EligibilityVerifier requires authentication, but OAUTH_CLIENT_NAME is None\")\n\n oauth_help_link = f\"{reverse('core:help')}#login-gov\"\n oauth_help_more_link = f\"{reverse('core:help')}#login-gov-verify-items\"\n\n media.insert(\n 0,\n dict(\n icon=viewmodels.Icon(\"idcardcheck\", pgettext(\"image alt text\", \"core.icons.idcardcheck\")),\n heading=_(\"eligibility.pages.start.oauth.heading\"),\n details=_(\"eligibility.pages.start.oauth.details\"),\n links=[\n viewmodels.Button.link(\n classes=\"btn-text btn-link\",\n text=_(\"eligibility.pages.start.oauth.link_text\"),\n url=oauth_help_link,\n rel=\"noopener noreferrer\",\n ),\n viewmodels.Button.link(\n classes=\"btn-text btn-link\",\n text=_(\"eligibility.pages.start.oauth.link_text[2]\"),\n url=oauth_help_more_link,\n rel=\"noopener noreferrer\",\n ),\n ],\n bullets=[\n _(\"eligibility.pages.start.oauth.required_items[0]\"),\n _(\"eligibility.pages.start.oauth.required_items[1]\"),\n _(\"eligibility.pages.start.oauth.required_items[2]\"),\n ],\n ),\n )\n\n if not 
session.logged_in(request):\n button = viewmodels.Button.login(\n text=_(verifier.auth_provider.sign_in_button_label),\n url=reverse(\"oauth:login\"),\n )\n\n else:\n media.insert(\n 0,\n dict(\n icon=viewmodels.Icon(\"idcardcheck\", pgettext(\"image alt text\", \"core.icons.idcardcheck\")),\n heading=_(verifier.start_item_name),\n details=_(verifier.start_item_description),\n ),\n )\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.start.title\"),\n noimage=True,\n paragraphs=[_(verifier.start_blurb)],\n button=button,\n )\n\n ctx = page.context_dict()\n ctx[\"title\"] = _(verifier.start_content_title)\n ctx[\"media\"] = media\n\n return TemplateResponse(request, \"eligibility/start.html\", ctx)\n\n\n@decorator_from_middleware(AgencySessionRequired)\n@decorator_from_middleware(LoginRequired)\n@decorator_from_middleware(RateLimit)\n@decorator_from_middleware(VerifierSessionRequired)\ndef confirm(request):\n \"\"\"View handler for the eligibility verification form.\"\"\"\n\n template = \"eligibility/confirm.html\"\n verifier = session.verifier(request)\n\n page = viewmodels.Page(\n title=_(verifier.form_title),\n content_title=_(verifier.form_content_title),\n paragraphs=[_(verifier.form_blurb)],\n form=forms.EligibilityVerificationForm(auto_id=True, label_suffix=\"\", verifier=verifier),\n classes=\"text-lg-center\",\n )\n\n # POST form submission, process form data\n if request.method == \"POST\":\n analytics.started_eligibility(request)\n\n form = forms.EligibilityVerificationForm(data=request.POST, verifier=verifier)\n\n # form was not valid, allow for correction/resubmission\n if not form.is_valid():\n if recaptcha.has_error(form):\n messages.error(request, \"Recaptcha failed. Please try again.\")\n\n page.forms = [form]\n return TemplateResponse(request, template, page.context_dict())\n\n # form is valid, make Eligibility Verification request to get the verified types\n verified_types = api.get_verified_types(request, form)\n\n # form was not valid, allow for correction/resubmission\n if verified_types is None:\n analytics.returned_error(request, form.errors)\n page.forms = [form]\n return TemplateResponse(request, template, page.context_dict())\n # no types were verified\n elif len(verified_types) == 0:\n return unverified(request)\n # type(s) were verified\n else:\n return verified(request, verified_types)\n\n # GET from an already verified user, no need to verify again\n elif session.eligible(request):\n eligibility = session.eligibility(request)\n return verified(request, [eligibility.name])\n # GET from an unverified user, present the form\n else:\n return TemplateResponse(request, template, page.context_dict())\n\n\n@decorator_from_middleware(AgencySessionRequired)\n@decorator_from_middleware(LoginRequired)\ndef verified(request, verified_types):\n \"\"\"View handler for the verified eligibility page.\"\"\"\n\n analytics.returned_success(request)\n\n session.update(request, eligibility_types=verified_types)\n\n return redirect(\"enrollment:index\")\n\n\n@decorator_from_middleware(AgencySessionRequired)\n@decorator_from_middleware(LoginRequired)\n@decorator_from_middleware(VerifierSessionRequired)\ndef unverified(request):\n \"\"\"View handler for the unverified eligibility page.\"\"\"\n\n analytics.returned_fail(request)\n\n # tel: link to agency phone number\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n buttons.append(viewmodels.Button.home(request, _(\"core.buttons.retry\")))\n\n verifier = session.verifier(request)\n\n 
page = viewmodels.Page(\n title=_(verifier.unverified_title),\n classes=\"with-agency-links\",\n content_title=_(verifier.unverified_content_title),\n icon=viewmodels.Icon(\"idcardquestion\", pgettext(\"image alt text\", \"core.icons.idcardquestion\")),\n paragraphs=[_(verifier.unverified_blurb), _(\"eligibility.pages.unverified.p[1]\")],\n buttons=buttons,\n )\n\n return TemplateResponse(request, \"eligibility/unverified.html\", page.context_dict())\n", "path": "benefits/eligibility/views.py"}]}
| 2,807 | 637 |
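Editor's note on the record above: read as a whole, the diff stops showing the agency ID-card media item unconditionally. For verifiers that require authentication, the Login.gov item (now carrying two help links and three required-item bullets) leads the list, and the ID-card item only reappears once the user is logged in. A condensed, hypothetical sketch of that branching, with placeholder strings standing in for the translated copy and viewmodels:

```python
def build_media(requires_authentication, logged_in):
    """Condensed illustration of the media-list ordering after the patch."""
    media = [{"heading": "Bank card check"}]  # always present
    if requires_authentication:
        # Login.gov item is inserted first for verifiers that need authentication.
        media.insert(0, {"heading": "Login.gov",
                         "bullets": ["required item 0", "required item 1", "required item 2"]})
        if logged_in:
            # The verifier's own start item only reappears for signed-in users.
            media.insert(0, {"heading": "Verifier start item"})
    return media


print([m["heading"] for m in build_media(True, False)])
# ['Login.gov', 'Bank card check']
print([m["heading"] for m in build_media(True, True)])
# ['Verifier start item', 'Login.gov', 'Bank card check']
```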
gh_patches_debug_38569 | rasdani/github-patches | git_diff | pre-commit__pre-commit-600 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"Locking pre-commit directory" should only print if waiting for a lock
Otherwise this is just useless console noise
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/store.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import contextlib
4 import io
5 import logging
6 import os.path
7 import sqlite3
8 import tempfile
9
10 from cached_property import cached_property
11
12 import pre_commit.constants as C
13 from pre_commit import file_lock
14 from pre_commit.prefixed_command_runner import PrefixedCommandRunner
15 from pre_commit.util import clean_path_on_failure
16 from pre_commit.util import cmd_output
17 from pre_commit.util import copy_tree_to_path
18 from pre_commit.util import cwd
19 from pre_commit.util import no_git_env
20 from pre_commit.util import resource_filename
21
22
23 logger = logging.getLogger('pre_commit')
24
25
26 def _get_default_directory():
27 """Returns the default directory for the Store. This is intentionally
28 underscored to indicate that `Store.get_default_directory` is the intended
29 way to get this information. This is also done so
30 `Store.get_default_directory` can be mocked in tests and
31 `_get_default_directory` can be tested.
32 """
33 return os.environ.get(
34 'PRE_COMMIT_HOME',
35 os.path.join(os.path.expanduser('~'), '.pre-commit'),
36 )
37
38
39 class Store(object):
40 get_default_directory = staticmethod(_get_default_directory)
41 __created = False
42
43 def __init__(self, directory=None):
44 if directory is None:
45 directory = self.get_default_directory()
46
47 self.directory = directory
48
49 @contextlib.contextmanager
50 def exclusive_lock(self, quiet=False):
51 if not quiet:
52 logger.info('Locking pre-commit directory')
53 with file_lock.lock(os.path.join(self.directory, '.lock')):
54 yield
55
56 def _write_readme(self):
57 with io.open(os.path.join(self.directory, 'README'), 'w') as readme:
58 readme.write(
59 'This directory is maintained by the pre-commit project.\n'
60 'Learn more: https://github.com/pre-commit/pre-commit\n',
61 )
62
63 def _write_sqlite_db(self):
64 # To avoid a race where someone ^Cs between db creation and execution
65 # of the CREATE TABLE statement
66 fd, tmpfile = tempfile.mkstemp(dir=self.directory)
67 # We'll be managing this file ourselves
68 os.close(fd)
69 # sqlite doesn't close its fd with its contextmanager >.<
70 # contextlib.closing fixes this.
71 # See: http://stackoverflow.com/a/28032829/812183
72 with contextlib.closing(sqlite3.connect(tmpfile)) as db:
73 db.executescript(
74 'CREATE TABLE repos ('
75 ' repo CHAR(255) NOT NULL,'
76 ' ref CHAR(255) NOT NULL,'
77 ' path CHAR(255) NOT NULL,'
78 ' PRIMARY KEY (repo, ref)'
79 ');',
80 )
81
82 # Atomic file move
83 os.rename(tmpfile, self.db_path)
84
85 def _create(self):
86 if not os.path.exists(self.directory):
87 os.makedirs(self.directory)
88 self._write_readme()
89
90 if os.path.exists(self.db_path):
91 return
92 with self.exclusive_lock(quiet=True):
93 # Another process may have already completed this work
94 if os.path.exists(self.db_path): # pragma: no cover (race)
95 return
96 self._write_sqlite_db()
97
98 def require_created(self):
99 """Require the pre-commit file store to be created."""
100 if not self.__created:
101 self._create()
102 self.__created = True
103
104 def _new_repo(self, repo, ref, make_strategy):
105 self.require_created()
106
107 def _get_result():
108 # Check if we already exist
109 with sqlite3.connect(self.db_path) as db:
110 result = db.execute(
111 'SELECT path FROM repos WHERE repo = ? AND ref = ?',
112 [repo, ref],
113 ).fetchone()
114 if result:
115 return result[0]
116
117 result = _get_result()
118 if result:
119 return result
120 with self.exclusive_lock():
121 # Another process may have already completed this work
122 result = _get_result()
123 if result: # pragma: no cover (race)
124 return result
125
126 logger.info('Initializing environment for {}.'.format(repo))
127
128 directory = tempfile.mkdtemp(prefix='repo', dir=self.directory)
129 with clean_path_on_failure(directory):
130 make_strategy(directory)
131
132 # Update our db with the created repo
133 with sqlite3.connect(self.db_path) as db:
134 db.execute(
135 'INSERT INTO repos (repo, ref, path) VALUES (?, ?, ?)',
136 [repo, ref, directory],
137 )
138 return directory
139
140 def clone(self, repo, ref):
141 """Clone the given url and checkout the specific ref."""
142 def clone_strategy(directory):
143 cmd_output(
144 'git', 'clone', '--no-checkout', repo, directory,
145 env=no_git_env(),
146 )
147 with cwd(directory):
148 cmd_output('git', 'reset', ref, '--hard', env=no_git_env())
149 cmd_output(
150 'git', 'submodule', 'update', '--init', '--recursive',
151 env=no_git_env(),
152 )
153
154 return self._new_repo(repo, ref, clone_strategy)
155
156 def make_local(self, deps):
157 def make_local_strategy(directory):
158 copy_tree_to_path(resource_filename('empty_template'), directory)
159 return self._new_repo(
160 'local:{}'.format(','.join(sorted(deps))), C.LOCAL_REPO_VERSION,
161 make_local_strategy,
162 )
163
164 @cached_property
165 def cmd_runner(self):
166 return PrefixedCommandRunner(self.directory)
167
168 @cached_property
169 def db_path(self):
170 return os.path.join(self.directory, 'db.db')
171
```
Path: `pre_commit/file_lock.py`
Content:
```
1 import contextlib
2 import errno
3
4
5 try: # pragma: no cover (windows)
6 import msvcrt
7
8 # https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/locking
9
10 # on windows we lock "regions" of files, we don't care about the actual
11 # byte region so we'll just pick *some* number here.
12 _region = 0xffff
13
14 @contextlib.contextmanager
15 def _locked(fileno):
16 while True:
17 try:
18 msvcrt.locking(fileno, msvcrt.LK_LOCK, _region)
19 except OSError as e:
20 # Locking violation. Returned when the _LK_LOCK or _LK_RLCK
21 # flag is specified and the file cannot be locked after 10
22 # attempts.
23 if e.errno != errno.EDEADLOCK:
24 raise
25 else:
26 break
27
28 try:
29 yield
30 finally:
31 # From cursory testing, it seems to get unlocked when the file is
32 # closed so this may not be necessary.
33 # The documentation however states:
34 # "Regions should be locked only briefly and should be unlocked
35 # before closing a file or exiting the program."
36 msvcrt.locking(fileno, msvcrt.LK_UNLCK, _region)
37 except ImportError: # pragma: no cover (posix)
38 import fcntl
39
40 @contextlib.contextmanager
41 def _locked(fileno):
42 fcntl.flock(fileno, fcntl.LOCK_EX)
43 try:
44 yield
45 finally:
46 fcntl.flock(fileno, fcntl.LOCK_UN)
47
48
49 @contextlib.contextmanager
50 def lock(path):
51 with open(path, 'a+') as f:
52 with _locked(f.fileno()):
53 yield
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pre_commit/file_lock.py b/pre_commit/file_lock.py
--- a/pre_commit/file_lock.py
+++ b/pre_commit/file_lock.py
@@ -12,18 +12,22 @@
_region = 0xffff
@contextlib.contextmanager
- def _locked(fileno):
- while True:
- try:
- msvcrt.locking(fileno, msvcrt.LK_LOCK, _region)
- except OSError as e:
- # Locking violation. Returned when the _LK_LOCK or _LK_RLCK
- # flag is specified and the file cannot be locked after 10
- # attempts.
- if e.errno != errno.EDEADLOCK:
- raise
- else:
- break
+ def _locked(fileno, blocked_cb):
+ try:
+ msvcrt.locking(fileno, msvcrt.LK_NBLCK, _region)
+ except IOError:
+ blocked_cb()
+ while True:
+ try:
+ msvcrt.locking(fileno, msvcrt.LK_LOCK, _region)
+ except IOError as e:
+ # Locking violation. Returned when the _LK_LOCK or _LK_RLCK
+ # flag is specified and the file cannot be locked after 10
+ # attempts.
+ if e.errno != errno.EDEADLOCK:
+ raise
+ else:
+ break
try:
yield
@@ -38,8 +42,12 @@
import fcntl
@contextlib.contextmanager
- def _locked(fileno):
- fcntl.flock(fileno, fcntl.LOCK_EX)
+ def _locked(fileno, blocked_cb):
+ try:
+ fcntl.flock(fileno, fcntl.LOCK_EX | fcntl.LOCK_NB)
+ except IOError:
+ blocked_cb()
+ fcntl.flock(fileno, fcntl.LOCK_EX)
try:
yield
finally:
@@ -47,7 +55,7 @@
@contextlib.contextmanager
-def lock(path):
+def lock(path, blocked_cb):
with open(path, 'a+') as f:
- with _locked(f.fileno()):
+ with _locked(f.fileno(), blocked_cb):
yield
diff --git a/pre_commit/store.py b/pre_commit/store.py
--- a/pre_commit/store.py
+++ b/pre_commit/store.py
@@ -47,10 +47,11 @@
self.directory = directory
@contextlib.contextmanager
- def exclusive_lock(self, quiet=False):
- if not quiet:
+ def exclusive_lock(self):
+ def blocked_cb(): # pragma: no cover (tests are single-process)
logger.info('Locking pre-commit directory')
- with file_lock.lock(os.path.join(self.directory, '.lock')):
+
+ with file_lock.lock(os.path.join(self.directory, '.lock'), blocked_cb):
yield
def _write_readme(self):
@@ -89,7 +90,7 @@
if os.path.exists(self.db_path):
return
- with self.exclusive_lock(quiet=True):
+ with self.exclusive_lock():
# Another process may have already completed this work
if os.path.exists(self.db_path): # pragma: no cover (race)
return
|
{"golden_diff": "diff --git a/pre_commit/file_lock.py b/pre_commit/file_lock.py\n--- a/pre_commit/file_lock.py\n+++ b/pre_commit/file_lock.py\n@@ -12,18 +12,22 @@\n _region = 0xffff\n \n @contextlib.contextmanager\n- def _locked(fileno):\n- while True:\n- try:\n- msvcrt.locking(fileno, msvcrt.LK_LOCK, _region)\n- except OSError as e:\n- # Locking violation. Returned when the _LK_LOCK or _LK_RLCK\n- # flag is specified and the file cannot be locked after 10\n- # attempts.\n- if e.errno != errno.EDEADLOCK:\n- raise\n- else:\n- break\n+ def _locked(fileno, blocked_cb):\n+ try:\n+ msvcrt.locking(fileno, msvcrt.LK_NBLCK, _region)\n+ except IOError:\n+ blocked_cb()\n+ while True:\n+ try:\n+ msvcrt.locking(fileno, msvcrt.LK_LOCK, _region)\n+ except IOError as e:\n+ # Locking violation. Returned when the _LK_LOCK or _LK_RLCK\n+ # flag is specified and the file cannot be locked after 10\n+ # attempts.\n+ if e.errno != errno.EDEADLOCK:\n+ raise\n+ else:\n+ break\n \n try:\n yield\n@@ -38,8 +42,12 @@\n import fcntl\n \n @contextlib.contextmanager\n- def _locked(fileno):\n- fcntl.flock(fileno, fcntl.LOCK_EX)\n+ def _locked(fileno, blocked_cb):\n+ try:\n+ fcntl.flock(fileno, fcntl.LOCK_EX | fcntl.LOCK_NB)\n+ except IOError:\n+ blocked_cb()\n+ fcntl.flock(fileno, fcntl.LOCK_EX)\n try:\n yield\n finally:\n@@ -47,7 +55,7 @@\n \n \n @contextlib.contextmanager\n-def lock(path):\n+def lock(path, blocked_cb):\n with open(path, 'a+') as f:\n- with _locked(f.fileno()):\n+ with _locked(f.fileno(), blocked_cb):\n yield\ndiff --git a/pre_commit/store.py b/pre_commit/store.py\n--- a/pre_commit/store.py\n+++ b/pre_commit/store.py\n@@ -47,10 +47,11 @@\n self.directory = directory\n \n @contextlib.contextmanager\n- def exclusive_lock(self, quiet=False):\n- if not quiet:\n+ def exclusive_lock(self):\n+ def blocked_cb(): # pragma: no cover (tests are single-process)\n logger.info('Locking pre-commit directory')\n- with file_lock.lock(os.path.join(self.directory, '.lock')):\n+\n+ with file_lock.lock(os.path.join(self.directory, '.lock'), blocked_cb):\n yield\n \n def _write_readme(self):\n@@ -89,7 +90,7 @@\n \n if os.path.exists(self.db_path):\n return\n- with self.exclusive_lock(quiet=True):\n+ with self.exclusive_lock():\n # Another process may have already completed this work\n if os.path.exists(self.db_path): # pragma: no cover (race)\n return\n", "issue": "\"Locking pre-commit directory\" should only print if waiting for a lock\nOtherwise this is just useless console noise\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport io\nimport logging\nimport os.path\nimport sqlite3\nimport tempfile\n\nfrom cached_property import cached_property\n\nimport pre_commit.constants as C\nfrom pre_commit import file_lock\nfrom pre_commit.prefixed_command_runner import PrefixedCommandRunner\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import copy_tree_to_path\nfrom pre_commit.util import cwd\nfrom pre_commit.util import no_git_env\nfrom pre_commit.util import resource_filename\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef _get_default_directory():\n \"\"\"Returns the default directory for the Store. This is intentionally\n underscored to indicate that `Store.get_default_directory` is the intended\n way to get this information. 
This is also done so\n `Store.get_default_directory` can be mocked in tests and\n `_get_default_directory` can be tested.\n \"\"\"\n return os.environ.get(\n 'PRE_COMMIT_HOME',\n os.path.join(os.path.expanduser('~'), '.pre-commit'),\n )\n\n\nclass Store(object):\n get_default_directory = staticmethod(_get_default_directory)\n __created = False\n\n def __init__(self, directory=None):\n if directory is None:\n directory = self.get_default_directory()\n\n self.directory = directory\n\n @contextlib.contextmanager\n def exclusive_lock(self, quiet=False):\n if not quiet:\n logger.info('Locking pre-commit directory')\n with file_lock.lock(os.path.join(self.directory, '.lock')):\n yield\n\n def _write_readme(self):\n with io.open(os.path.join(self.directory, 'README'), 'w') as readme:\n readme.write(\n 'This directory is maintained by the pre-commit project.\\n'\n 'Learn more: https://github.com/pre-commit/pre-commit\\n',\n )\n\n def _write_sqlite_db(self):\n # To avoid a race where someone ^Cs between db creation and execution\n # of the CREATE TABLE statement\n fd, tmpfile = tempfile.mkstemp(dir=self.directory)\n # We'll be managing this file ourselves\n os.close(fd)\n # sqlite doesn't close its fd with its contextmanager >.<\n # contextlib.closing fixes this.\n # See: http://stackoverflow.com/a/28032829/812183\n with contextlib.closing(sqlite3.connect(tmpfile)) as db:\n db.executescript(\n 'CREATE TABLE repos ('\n ' repo CHAR(255) NOT NULL,'\n ' ref CHAR(255) NOT NULL,'\n ' path CHAR(255) NOT NULL,'\n ' PRIMARY KEY (repo, ref)'\n ');',\n )\n\n # Atomic file move\n os.rename(tmpfile, self.db_path)\n\n def _create(self):\n if not os.path.exists(self.directory):\n os.makedirs(self.directory)\n self._write_readme()\n\n if os.path.exists(self.db_path):\n return\n with self.exclusive_lock(quiet=True):\n # Another process may have already completed this work\n if os.path.exists(self.db_path): # pragma: no cover (race)\n return\n self._write_sqlite_db()\n\n def require_created(self):\n \"\"\"Require the pre-commit file store to be created.\"\"\"\n if not self.__created:\n self._create()\n self.__created = True\n\n def _new_repo(self, repo, ref, make_strategy):\n self.require_created()\n\n def _get_result():\n # Check if we already exist\n with sqlite3.connect(self.db_path) as db:\n result = db.execute(\n 'SELECT path FROM repos WHERE repo = ? 
AND ref = ?',\n [repo, ref],\n ).fetchone()\n if result:\n return result[0]\n\n result = _get_result()\n if result:\n return result\n with self.exclusive_lock():\n # Another process may have already completed this work\n result = _get_result()\n if result: # pragma: no cover (race)\n return result\n\n logger.info('Initializing environment for {}.'.format(repo))\n\n directory = tempfile.mkdtemp(prefix='repo', dir=self.directory)\n with clean_path_on_failure(directory):\n make_strategy(directory)\n\n # Update our db with the created repo\n with sqlite3.connect(self.db_path) as db:\n db.execute(\n 'INSERT INTO repos (repo, ref, path) VALUES (?, ?, ?)',\n [repo, ref, directory],\n )\n return directory\n\n def clone(self, repo, ref):\n \"\"\"Clone the given url and checkout the specific ref.\"\"\"\n def clone_strategy(directory):\n cmd_output(\n 'git', 'clone', '--no-checkout', repo, directory,\n env=no_git_env(),\n )\n with cwd(directory):\n cmd_output('git', 'reset', ref, '--hard', env=no_git_env())\n cmd_output(\n 'git', 'submodule', 'update', '--init', '--recursive',\n env=no_git_env(),\n )\n\n return self._new_repo(repo, ref, clone_strategy)\n\n def make_local(self, deps):\n def make_local_strategy(directory):\n copy_tree_to_path(resource_filename('empty_template'), directory)\n return self._new_repo(\n 'local:{}'.format(','.join(sorted(deps))), C.LOCAL_REPO_VERSION,\n make_local_strategy,\n )\n\n @cached_property\n def cmd_runner(self):\n return PrefixedCommandRunner(self.directory)\n\n @cached_property\n def db_path(self):\n return os.path.join(self.directory, 'db.db')\n", "path": "pre_commit/store.py"}, {"content": "import contextlib\nimport errno\n\n\ntry: # pragma: no cover (windows)\n import msvcrt\n\n # https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/locking\n\n # on windows we lock \"regions\" of files, we don't care about the actual\n # byte region so we'll just pick *some* number here.\n _region = 0xffff\n\n @contextlib.contextmanager\n def _locked(fileno):\n while True:\n try:\n msvcrt.locking(fileno, msvcrt.LK_LOCK, _region)\n except OSError as e:\n # Locking violation. 
Returned when the _LK_LOCK or _LK_RLCK\n # flag is specified and the file cannot be locked after 10\n # attempts.\n if e.errno != errno.EDEADLOCK:\n raise\n else:\n break\n\n try:\n yield\n finally:\n # From cursory testing, it seems to get unlocked when the file is\n # closed so this may not be necessary.\n # The documentation however states:\n # \"Regions should be locked only briefly and should be unlocked\n # before closing a file or exiting the program.\"\n msvcrt.locking(fileno, msvcrt.LK_UNLCK, _region)\nexcept ImportError: # pragma: no cover (posix)\n import fcntl\n\n @contextlib.contextmanager\n def _locked(fileno):\n fcntl.flock(fileno, fcntl.LOCK_EX)\n try:\n yield\n finally:\n fcntl.flock(fileno, fcntl.LOCK_UN)\n\n\[email protected]\ndef lock(path):\n with open(path, 'a+') as f:\n with _locked(f.fileno()):\n yield\n", "path": "pre_commit/file_lock.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport io\nimport logging\nimport os.path\nimport sqlite3\nimport tempfile\n\nfrom cached_property import cached_property\n\nimport pre_commit.constants as C\nfrom pre_commit import file_lock\nfrom pre_commit.prefixed_command_runner import PrefixedCommandRunner\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import copy_tree_to_path\nfrom pre_commit.util import cwd\nfrom pre_commit.util import no_git_env\nfrom pre_commit.util import resource_filename\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef _get_default_directory():\n \"\"\"Returns the default directory for the Store. This is intentionally\n underscored to indicate that `Store.get_default_directory` is the intended\n way to get this information. This is also done so\n `Store.get_default_directory` can be mocked in tests and\n `_get_default_directory` can be tested.\n \"\"\"\n return os.environ.get(\n 'PRE_COMMIT_HOME',\n os.path.join(os.path.expanduser('~'), '.pre-commit'),\n )\n\n\nclass Store(object):\n get_default_directory = staticmethod(_get_default_directory)\n __created = False\n\n def __init__(self, directory=None):\n if directory is None:\n directory = self.get_default_directory()\n\n self.directory = directory\n\n @contextlib.contextmanager\n def exclusive_lock(self):\n def blocked_cb(): # pragma: no cover (tests are single-process)\n logger.info('Locking pre-commit directory')\n\n with file_lock.lock(os.path.join(self.directory, '.lock'), blocked_cb):\n yield\n\n def _write_readme(self):\n with io.open(os.path.join(self.directory, 'README'), 'w') as readme:\n readme.write(\n 'This directory is maintained by the pre-commit project.\\n'\n 'Learn more: https://github.com/pre-commit/pre-commit\\n',\n )\n\n def _write_sqlite_db(self):\n # To avoid a race where someone ^Cs between db creation and execution\n # of the CREATE TABLE statement\n fd, tmpfile = tempfile.mkstemp(dir=self.directory)\n # We'll be managing this file ourselves\n os.close(fd)\n # sqlite doesn't close its fd with its contextmanager >.<\n # contextlib.closing fixes this.\n # See: http://stackoverflow.com/a/28032829/812183\n with contextlib.closing(sqlite3.connect(tmpfile)) as db:\n db.executescript(\n 'CREATE TABLE repos ('\n ' repo CHAR(255) NOT NULL,'\n ' ref CHAR(255) NOT NULL,'\n ' path CHAR(255) NOT NULL,'\n ' PRIMARY KEY (repo, ref)'\n ');',\n )\n\n # Atomic file move\n os.rename(tmpfile, self.db_path)\n\n def _create(self):\n if not os.path.exists(self.directory):\n os.makedirs(self.directory)\n self._write_readme()\n\n if 
os.path.exists(self.db_path):\n return\n with self.exclusive_lock():\n # Another process may have already completed this work\n if os.path.exists(self.db_path): # pragma: no cover (race)\n return\n self._write_sqlite_db()\n\n def require_created(self):\n \"\"\"Require the pre-commit file store to be created.\"\"\"\n if not self.__created:\n self._create()\n self.__created = True\n\n def _new_repo(self, repo, ref, make_strategy):\n self.require_created()\n\n def _get_result():\n # Check if we already exist\n with sqlite3.connect(self.db_path) as db:\n result = db.execute(\n 'SELECT path FROM repos WHERE repo = ? AND ref = ?',\n [repo, ref],\n ).fetchone()\n if result:\n return result[0]\n\n result = _get_result()\n if result:\n return result\n with self.exclusive_lock():\n # Another process may have already completed this work\n result = _get_result()\n if result: # pragma: no cover (race)\n return result\n\n logger.info('Initializing environment for {}.'.format(repo))\n\n directory = tempfile.mkdtemp(prefix='repo', dir=self.directory)\n with clean_path_on_failure(directory):\n make_strategy(directory)\n\n # Update our db with the created repo\n with sqlite3.connect(self.db_path) as db:\n db.execute(\n 'INSERT INTO repos (repo, ref, path) VALUES (?, ?, ?)',\n [repo, ref, directory],\n )\n return directory\n\n def clone(self, repo, ref):\n \"\"\"Clone the given url and checkout the specific ref.\"\"\"\n def clone_strategy(directory):\n cmd_output(\n 'git', 'clone', '--no-checkout', repo, directory,\n env=no_git_env(),\n )\n with cwd(directory):\n cmd_output('git', 'reset', ref, '--hard', env=no_git_env())\n cmd_output(\n 'git', 'submodule', 'update', '--init', '--recursive',\n env=no_git_env(),\n )\n\n return self._new_repo(repo, ref, clone_strategy)\n\n def make_local(self, deps):\n def make_local_strategy(directory):\n copy_tree_to_path(resource_filename('empty_template'), directory)\n return self._new_repo(\n 'local:{}'.format(','.join(sorted(deps))), C.LOCAL_REPO_VERSION,\n make_local_strategy,\n )\n\n @cached_property\n def cmd_runner(self):\n return PrefixedCommandRunner(self.directory)\n\n @cached_property\n def db_path(self):\n return os.path.join(self.directory, 'db.db')\n", "path": "pre_commit/store.py"}, {"content": "import contextlib\nimport errno\n\n\ntry: # pragma: no cover (windows)\n import msvcrt\n\n # https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/locking\n\n # on windows we lock \"regions\" of files, we don't care about the actual\n # byte region so we'll just pick *some* number here.\n _region = 0xffff\n\n @contextlib.contextmanager\n def _locked(fileno, blocked_cb):\n try:\n msvcrt.locking(fileno, msvcrt.LK_NBLCK, _region)\n except IOError:\n blocked_cb()\n while True:\n try:\n msvcrt.locking(fileno, msvcrt.LK_LOCK, _region)\n except IOError as e:\n # Locking violation. 
Returned when the _LK_LOCK or _LK_RLCK\n # flag is specified and the file cannot be locked after 10\n # attempts.\n if e.errno != errno.EDEADLOCK:\n raise\n else:\n break\n\n try:\n yield\n finally:\n # From cursory testing, it seems to get unlocked when the file is\n # closed so this may not be necessary.\n # The documentation however states:\n # \"Regions should be locked only briefly and should be unlocked\n # before closing a file or exiting the program.\"\n msvcrt.locking(fileno, msvcrt.LK_UNLCK, _region)\nexcept ImportError: # pragma: no cover (posix)\n import fcntl\n\n @contextlib.contextmanager\n def _locked(fileno, blocked_cb):\n try:\n fcntl.flock(fileno, fcntl.LOCK_EX | fcntl.LOCK_NB)\n except IOError:\n blocked_cb()\n fcntl.flock(fileno, fcntl.LOCK_EX)\n try:\n yield\n finally:\n fcntl.flock(fileno, fcntl.LOCK_UN)\n\n\[email protected]\ndef lock(path, blocked_cb):\n with open(path, 'a+') as f:\n with _locked(f.fileno(), blocked_cb):\n yield\n", "path": "pre_commit/file_lock.py"}]}
| 2,416 | 739 |
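For reference, a minimal POSIX-only sketch of the lock behaviour introduced by the patch above: the lock is first tried non-blocking, and the callback (which logs "Locking pre-commit directory") only fires when the process actually has to wait. The standalone `lock` helper below is an illustrative re-implementation of the patched `pre_commit/file_lock.py` branch, not the library code itself; the Windows branch in the patch does the same dance with `msvcrt`.

```python
import contextlib
import fcntl
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger('pre_commit')


@contextlib.contextmanager
def lock(path, blocked_cb):
    with open(path, 'a+') as f:
        try:
            # Non-blocking attempt first; raises if another process holds the lock.
            fcntl.flock(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
        except IOError:
            blocked_cb()  # only now do we announce that we are waiting
            fcntl.flock(f.fileno(), fcntl.LOCK_EX)
        try:
            yield
        finally:
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)


# Mirrors Store.exclusive_lock from the patch: silent in the common,
# uncontended case.
with lock('/tmp/pre-commit-demo.lock',
          lambda: logger.info('Locking pre-commit directory')):
    pass
```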
gh_patches_debug_21465
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-2785
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Custom environment metadata in JSON artifacts
### Describe the feature
Include custom environment-sourced metadata in the JSON artifacts.
Inside the `metadata` field (or at the root if #2761 is not done yet), dbt should add a field - tentatively `env`. This field should be a dictionary, populated from the contents of dbt's environment that start with `DBT_ENV_CUSTOM_ENV_`. The keys should be the environment variables without the prefix, and the values the values. For example:
DBT_ENV_CUSTOM_ENV_run_id=1 DBT_ENV_CUSTOM_ENV_run_env=ci dbt run
will result in a manifest with `{"env": {"run_id": "1", "run_env": "ci"}}` in its metadata field.
See [this comment](https://github.com/fishtown-analytics/dbt/issues/2671#issuecomment-685892719) for a tiny bit of context.
Split off from #2671
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/dbt/contracts/util.py`
Content:
```
1 import dataclasses
2 from datetime import datetime
3 from typing import (
4 List, Tuple, ClassVar, Type, TypeVar, Dict, Any, Optional
5 )
6
7 from dbt.clients.system import write_json, read_json
8 from dbt.exceptions import (
9 IncompatibleSchemaException,
10 InternalException,
11 RuntimeException,
12 )
13 from dbt.version import __version__
14 from dbt.tracking import get_invocation_id
15 from hologram import JsonSchemaMixin
16
17 MacroKey = Tuple[str, str]
18 SourceKey = Tuple[str, str]
19
20
21 def list_str() -> List[str]:
22 """Mypy gets upset about stuff like:
23
24 from dataclasses import dataclass, field
25 from typing import Optional, List
26
27 @dataclass
28 class Foo:
29 x: Optional[List[str]] = field(default_factory=list)
30
31
32 Because `list` could be any kind of list, I guess
33 """
34 return []
35
36
37 class Replaceable:
38 def replace(self, **kwargs):
39 return dataclasses.replace(self, **kwargs)
40
41
42 class Mergeable(Replaceable):
43 def merged(self, *args):
44 """Perform a shallow merge, where the last non-None write wins. This is
45 intended to merge dataclasses that are a collection of optional values.
46 """
47 replacements = {}
48 cls = type(self)
49 for arg in args:
50 for field in dataclasses.fields(cls):
51 value = getattr(arg, field.name)
52 if value is not None:
53 replacements[field.name] = value
54
55 return self.replace(**replacements)
56
57
58 class Writable:
59 def write(self, path: str, omit_none: bool = False):
60 write_json(path, self.to_dict(omit_none=omit_none)) # type: ignore
61
62
63 class AdditionalPropertiesMixin:
64 """Make this class an extensible property.
65
66 The underlying class definition must include a type definition for a field
67 named '_extra' that is of type `Dict[str, Any]`.
68 """
69 ADDITIONAL_PROPERTIES = True
70
71 @classmethod
72 def from_dict(cls, data, validate=True):
73 self = super().from_dict(data=data, validate=validate)
74 keys = self.to_dict(validate=False, omit_none=False)
75 for key, value in data.items():
76 if key not in keys:
77 self.extra[key] = value
78 return self
79
80 def to_dict(self, omit_none=True, validate=False):
81 data = super().to_dict(omit_none=omit_none, validate=validate)
82 data.update(self.extra)
83 return data
84
85 def replace(self, **kwargs):
86 dct = self.to_dict(omit_none=False, validate=False)
87 dct.update(kwargs)
88 return self.from_dict(dct)
89
90 @property
91 def extra(self):
92 return self._extra
93
94
95 class Readable:
96 @classmethod
97 def read(cls, path: str):
98 try:
99 data = read_json(path)
100 except (EnvironmentError, ValueError) as exc:
101 raise RuntimeException(
102 f'Could not read {cls.__name__} at "{path}" as JSON: {exc}'
103 ) from exc
104
105 return cls.from_dict(data) # type: ignore
106
107
108 BASE_SCHEMAS_URL = 'https://schemas.getdbt.com/dbt/{name}/v{version}.json'
109
110
111 @dataclasses.dataclass
112 class SchemaVersion:
113 name: str
114 version: int
115
116 def __str__(self) -> str:
117 return BASE_SCHEMAS_URL.format(
118 name=self.name,
119 version=self.version,
120 )
121
122
123 SCHEMA_VERSION_KEY = 'dbt_schema_version'
124
125
126 @dataclasses.dataclass
127 class BaseArtifactMetadata(JsonSchemaMixin):
128 dbt_schema_version: str
129 dbt_version: str = __version__
130 generated_at: datetime = dataclasses.field(
131 default_factory=datetime.utcnow
132 )
133 invocation_id: Optional[str] = dataclasses.field(
134 default_factory=get_invocation_id
135 )
136
137
138 def schema_version(name: str, version: int):
139 def inner(cls: Type[VersionedSchema]):
140 cls.dbt_schema_version = SchemaVersion(
141 name=name,
142 version=version,
143 )
144 return cls
145 return inner
146
147
148 @dataclasses.dataclass
149 class VersionedSchema(JsonSchemaMixin):
150 dbt_schema_version: ClassVar[SchemaVersion]
151
152 @classmethod
153 def json_schema(cls, embeddable: bool = False) -> Dict[str, Any]:
154 result = super().json_schema(embeddable=embeddable)
155 if not embeddable:
156 result['$id'] = str(cls.dbt_schema_version)
157 return result
158
159
160 T = TypeVar('T', bound='ArtifactMixin')
161
162
163 # metadata should really be a Generic[T_M] where T_M is a TypeVar bound to
164 # BaseArtifactMetadata. Unfortunately this isn't possible due to a mypy issue:
165 # https://github.com/python/mypy/issues/7520
166 @dataclasses.dataclass(init=False)
167 class ArtifactMixin(VersionedSchema, Writable, Readable):
168 metadata: BaseArtifactMetadata
169
170 @classmethod
171 def from_dict(
172 cls: Type[T], data: Dict[str, Any], validate: bool = True
173 ) -> T:
174 if cls.dbt_schema_version is None:
175 raise InternalException(
176 'Cannot call from_dict with no schema version!'
177 )
178
179 if validate:
180 expected = str(cls.dbt_schema_version)
181 found = data.get('metadata', {}).get(SCHEMA_VERSION_KEY)
182 if found != expected:
183 raise IncompatibleSchemaException(expected, found)
184
185 return super().from_dict(data=data, validate=validate)
186
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/core/dbt/contracts/util.py b/core/dbt/contracts/util.py
--- a/core/dbt/contracts/util.py
+++ b/core/dbt/contracts/util.py
@@ -1,4 +1,5 @@
import dataclasses
+import os
from datetime import datetime
from typing import (
List, Tuple, ClassVar, Type, TypeVar, Dict, Any, Optional
@@ -123,6 +124,16 @@
SCHEMA_VERSION_KEY = 'dbt_schema_version'
+METADATA_ENV_PREFIX = 'DBT_ENV_CUSTOM_ENV_'
+
+
+def get_metadata_env() -> Dict[str, str]:
+ return {
+ k[len(METADATA_ENV_PREFIX):]: v for k, v in os.environ.items()
+ if k.startswith(METADATA_ENV_PREFIX)
+ }
+
+
@dataclasses.dataclass
class BaseArtifactMetadata(JsonSchemaMixin):
dbt_schema_version: str
@@ -133,6 +144,7 @@
invocation_id: Optional[str] = dataclasses.field(
default_factory=get_invocation_id
)
+ env: Dict[str, str] = dataclasses.field(default_factory=get_metadata_env)
def schema_version(name: str, version: int):
|
{"golden_diff": "diff --git a/core/dbt/contracts/util.py b/core/dbt/contracts/util.py\n--- a/core/dbt/contracts/util.py\n+++ b/core/dbt/contracts/util.py\n@@ -1,4 +1,5 @@\n import dataclasses\n+import os\n from datetime import datetime\n from typing import (\n List, Tuple, ClassVar, Type, TypeVar, Dict, Any, Optional\n@@ -123,6 +124,16 @@\n SCHEMA_VERSION_KEY = 'dbt_schema_version'\n \n \n+METADATA_ENV_PREFIX = 'DBT_ENV_CUSTOM_ENV_'\n+\n+\n+def get_metadata_env() -> Dict[str, str]:\n+ return {\n+ k[len(METADATA_ENV_PREFIX):]: v for k, v in os.environ.items()\n+ if k.startswith(METADATA_ENV_PREFIX)\n+ }\n+\n+\n @dataclasses.dataclass\n class BaseArtifactMetadata(JsonSchemaMixin):\n dbt_schema_version: str\n@@ -133,6 +144,7 @@\n invocation_id: Optional[str] = dataclasses.field(\n default_factory=get_invocation_id\n )\n+ env: Dict[str, str] = dataclasses.field(default_factory=get_metadata_env)\n \n \n def schema_version(name: str, version: int):\n", "issue": "Custom environment metadata in JSON artifacts\n### Describe the feature\r\nInclude custom environment-sourced metadata in the JSON artifacts.\r\n\r\nInside the `metadata` field (or at the root if #2761 is not done yet), dbt should add a field - tentatively `env`. This field should be a dictionary, populated from the contents of dbt's environment that start with `DBT_ENV_CUSTOM_ENV_`. The keys should be the environment variables without the prefix, and the values the values. For example:\r\nDBT_ENV_CUSTOM_ENV_run_id=1 DBT_ENV_CUSTOM_ENV_run_env=ci dbt run\r\nwill result in a manifest with `{\"env\": {\"run_id\": \"1\", \"run_env\": \"ci\"}}` in its metadata field.\r\n\r\nSee [this comment](https://github.com/fishtown-analytics/dbt/issues/2671#issuecomment-685892719) for a tiny bit of context.\r\n\r\nSplit off from #2671 \r\n\n", "before_files": [{"content": "import dataclasses\nfrom datetime import datetime\nfrom typing import (\n List, Tuple, ClassVar, Type, TypeVar, Dict, Any, Optional\n)\n\nfrom dbt.clients.system import write_json, read_json\nfrom dbt.exceptions import (\n IncompatibleSchemaException,\n InternalException,\n RuntimeException,\n)\nfrom dbt.version import __version__\nfrom dbt.tracking import get_invocation_id\nfrom hologram import JsonSchemaMixin\n\nMacroKey = Tuple[str, str]\nSourceKey = Tuple[str, str]\n\n\ndef list_str() -> List[str]:\n \"\"\"Mypy gets upset about stuff like:\n\n from dataclasses import dataclass, field\n from typing import Optional, List\n\n @dataclass\n class Foo:\n x: Optional[List[str]] = field(default_factory=list)\n\n\n Because `list` could be any kind of list, I guess\n \"\"\"\n return []\n\n\nclass Replaceable:\n def replace(self, **kwargs):\n return dataclasses.replace(self, **kwargs)\n\n\nclass Mergeable(Replaceable):\n def merged(self, *args):\n \"\"\"Perform a shallow merge, where the last non-None write wins. 
This is\n intended to merge dataclasses that are a collection of optional values.\n \"\"\"\n replacements = {}\n cls = type(self)\n for arg in args:\n for field in dataclasses.fields(cls):\n value = getattr(arg, field.name)\n if value is not None:\n replacements[field.name] = value\n\n return self.replace(**replacements)\n\n\nclass Writable:\n def write(self, path: str, omit_none: bool = False):\n write_json(path, self.to_dict(omit_none=omit_none)) # type: ignore\n\n\nclass AdditionalPropertiesMixin:\n \"\"\"Make this class an extensible property.\n\n The underlying class definition must include a type definition for a field\n named '_extra' that is of type `Dict[str, Any]`.\n \"\"\"\n ADDITIONAL_PROPERTIES = True\n\n @classmethod\n def from_dict(cls, data, validate=True):\n self = super().from_dict(data=data, validate=validate)\n keys = self.to_dict(validate=False, omit_none=False)\n for key, value in data.items():\n if key not in keys:\n self.extra[key] = value\n return self\n\n def to_dict(self, omit_none=True, validate=False):\n data = super().to_dict(omit_none=omit_none, validate=validate)\n data.update(self.extra)\n return data\n\n def replace(self, **kwargs):\n dct = self.to_dict(omit_none=False, validate=False)\n dct.update(kwargs)\n return self.from_dict(dct)\n\n @property\n def extra(self):\n return self._extra\n\n\nclass Readable:\n @classmethod\n def read(cls, path: str):\n try:\n data = read_json(path)\n except (EnvironmentError, ValueError) as exc:\n raise RuntimeException(\n f'Could not read {cls.__name__} at \"{path}\" as JSON: {exc}'\n ) from exc\n\n return cls.from_dict(data) # type: ignore\n\n\nBASE_SCHEMAS_URL = 'https://schemas.getdbt.com/dbt/{name}/v{version}.json'\n\n\[email protected]\nclass SchemaVersion:\n name: str\n version: int\n\n def __str__(self) -> str:\n return BASE_SCHEMAS_URL.format(\n name=self.name,\n version=self.version,\n )\n\n\nSCHEMA_VERSION_KEY = 'dbt_schema_version'\n\n\[email protected]\nclass BaseArtifactMetadata(JsonSchemaMixin):\n dbt_schema_version: str\n dbt_version: str = __version__\n generated_at: datetime = dataclasses.field(\n default_factory=datetime.utcnow\n )\n invocation_id: Optional[str] = dataclasses.field(\n default_factory=get_invocation_id\n )\n\n\ndef schema_version(name: str, version: int):\n def inner(cls: Type[VersionedSchema]):\n cls.dbt_schema_version = SchemaVersion(\n name=name,\n version=version,\n )\n return cls\n return inner\n\n\[email protected]\nclass VersionedSchema(JsonSchemaMixin):\n dbt_schema_version: ClassVar[SchemaVersion]\n\n @classmethod\n def json_schema(cls, embeddable: bool = False) -> Dict[str, Any]:\n result = super().json_schema(embeddable=embeddable)\n if not embeddable:\n result['$id'] = str(cls.dbt_schema_version)\n return result\n\n\nT = TypeVar('T', bound='ArtifactMixin')\n\n\n# metadata should really be a Generic[T_M] where T_M is a TypeVar bound to\n# BaseArtifactMetadata. 
Unfortunately this isn't possible due to a mypy issue:\n# https://github.com/python/mypy/issues/7520\[email protected](init=False)\nclass ArtifactMixin(VersionedSchema, Writable, Readable):\n metadata: BaseArtifactMetadata\n\n @classmethod\n def from_dict(\n cls: Type[T], data: Dict[str, Any], validate: bool = True\n ) -> T:\n if cls.dbt_schema_version is None:\n raise InternalException(\n 'Cannot call from_dict with no schema version!'\n )\n\n if validate:\n expected = str(cls.dbt_schema_version)\n found = data.get('metadata', {}).get(SCHEMA_VERSION_KEY)\n if found != expected:\n raise IncompatibleSchemaException(expected, found)\n\n return super().from_dict(data=data, validate=validate)\n", "path": "core/dbt/contracts/util.py"}], "after_files": [{"content": "import dataclasses\nimport os\nfrom datetime import datetime\nfrom typing import (\n List, Tuple, ClassVar, Type, TypeVar, Dict, Any, Optional\n)\n\nfrom dbt.clients.system import write_json, read_json\nfrom dbt.exceptions import (\n IncompatibleSchemaException,\n InternalException,\n RuntimeException,\n)\nfrom dbt.version import __version__\nfrom dbt.tracking import get_invocation_id\nfrom hologram import JsonSchemaMixin\n\nMacroKey = Tuple[str, str]\nSourceKey = Tuple[str, str]\n\n\ndef list_str() -> List[str]:\n \"\"\"Mypy gets upset about stuff like:\n\n from dataclasses import dataclass, field\n from typing import Optional, List\n\n @dataclass\n class Foo:\n x: Optional[List[str]] = field(default_factory=list)\n\n\n Because `list` could be any kind of list, I guess\n \"\"\"\n return []\n\n\nclass Replaceable:\n def replace(self, **kwargs):\n return dataclasses.replace(self, **kwargs)\n\n\nclass Mergeable(Replaceable):\n def merged(self, *args):\n \"\"\"Perform a shallow merge, where the last non-None write wins. 
This is\n intended to merge dataclasses that are a collection of optional values.\n \"\"\"\n replacements = {}\n cls = type(self)\n for arg in args:\n for field in dataclasses.fields(cls):\n value = getattr(arg, field.name)\n if value is not None:\n replacements[field.name] = value\n\n return self.replace(**replacements)\n\n\nclass Writable:\n def write(self, path: str, omit_none: bool = False):\n write_json(path, self.to_dict(omit_none=omit_none)) # type: ignore\n\n\nclass AdditionalPropertiesMixin:\n \"\"\"Make this class an extensible property.\n\n The underlying class definition must include a type definition for a field\n named '_extra' that is of type `Dict[str, Any]`.\n \"\"\"\n ADDITIONAL_PROPERTIES = True\n\n @classmethod\n def from_dict(cls, data, validate=True):\n self = super().from_dict(data=data, validate=validate)\n keys = self.to_dict(validate=False, omit_none=False)\n for key, value in data.items():\n if key not in keys:\n self.extra[key] = value\n return self\n\n def to_dict(self, omit_none=True, validate=False):\n data = super().to_dict(omit_none=omit_none, validate=validate)\n data.update(self.extra)\n return data\n\n def replace(self, **kwargs):\n dct = self.to_dict(omit_none=False, validate=False)\n dct.update(kwargs)\n return self.from_dict(dct)\n\n @property\n def extra(self):\n return self._extra\n\n\nclass Readable:\n @classmethod\n def read(cls, path: str):\n try:\n data = read_json(path)\n except (EnvironmentError, ValueError) as exc:\n raise RuntimeException(\n f'Could not read {cls.__name__} at \"{path}\" as JSON: {exc}'\n ) from exc\n\n return cls.from_dict(data) # type: ignore\n\n\nBASE_SCHEMAS_URL = 'https://schemas.getdbt.com/dbt/{name}/v{version}.json'\n\n\[email protected]\nclass SchemaVersion:\n name: str\n version: int\n\n def __str__(self) -> str:\n return BASE_SCHEMAS_URL.format(\n name=self.name,\n version=self.version,\n )\n\n\nSCHEMA_VERSION_KEY = 'dbt_schema_version'\n\n\nMETADATA_ENV_PREFIX = 'DBT_ENV_CUSTOM_ENV_'\n\n\ndef get_metadata_env() -> Dict[str, str]:\n return {\n k[len(METADATA_ENV_PREFIX):]: v for k, v in os.environ.items()\n if k.startswith(METADATA_ENV_PREFIX)\n }\n\n\[email protected]\nclass BaseArtifactMetadata(JsonSchemaMixin):\n dbt_schema_version: str\n dbt_version: str = __version__\n generated_at: datetime = dataclasses.field(\n default_factory=datetime.utcnow\n )\n invocation_id: Optional[str] = dataclasses.field(\n default_factory=get_invocation_id\n )\n env: Dict[str, str] = dataclasses.field(default_factory=get_metadata_env)\n\n\ndef schema_version(name: str, version: int):\n def inner(cls: Type[VersionedSchema]):\n cls.dbt_schema_version = SchemaVersion(\n name=name,\n version=version,\n )\n return cls\n return inner\n\n\[email protected]\nclass VersionedSchema(JsonSchemaMixin):\n dbt_schema_version: ClassVar[SchemaVersion]\n\n @classmethod\n def json_schema(cls, embeddable: bool = False) -> Dict[str, Any]:\n result = super().json_schema(embeddable=embeddable)\n if not embeddable:\n result['$id'] = str(cls.dbt_schema_version)\n return result\n\n\nT = TypeVar('T', bound='ArtifactMixin')\n\n\n# metadata should really be a Generic[T_M] where T_M is a TypeVar bound to\n# BaseArtifactMetadata. 
Unfortunately this isn't possible due to a mypy issue:\n# https://github.com/python/mypy/issues/7520\[email protected](init=False)\nclass ArtifactMixin(VersionedSchema, Writable, Readable):\n metadata: BaseArtifactMetadata\n\n @classmethod\n def from_dict(\n cls: Type[T], data: Dict[str, Any], validate: bool = True\n ) -> T:\n if cls.dbt_schema_version is None:\n raise InternalException(\n 'Cannot call from_dict with no schema version!'\n )\n\n if validate:\n expected = str(cls.dbt_schema_version)\n found = data.get('metadata', {}).get(SCHEMA_VERSION_KEY)\n if found != expected:\n raise IncompatibleSchemaException(expected, found)\n\n return super().from_dict(data=data, validate=validate)\n", "path": "core/dbt/contracts/util.py"}]}
| 2,132 | 271 |
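For reference, the core of the fix above is small enough to sketch standalone. This is an illustrative copy of the `get_metadata_env` helper from the patch, plus a check that reproduces the example given in the issue text (the environment assignments here exist only for the demo):

```python
import os
from typing import Dict

METADATA_ENV_PREFIX = 'DBT_ENV_CUSTOM_ENV_'


def get_metadata_env() -> Dict[str, str]:
    # Keep only the custom-prefixed variables, with the prefix stripped.
    return {
        k[len(METADATA_ENV_PREFIX):]: v
        for k, v in os.environ.items()
        if k.startswith(METADATA_ENV_PREFIX)
    }


os.environ['DBT_ENV_CUSTOM_ENV_run_id'] = '1'
os.environ['DBT_ENV_CUSTOM_ENV_run_env'] = 'ci'
assert get_metadata_env() == {'run_id': '1', 'run_env': 'ci'}
```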
gh_patches_debug_19931
|
rasdani/github-patches
|
git_diff
|
CiviWiki__OpenCiviWiki-1089
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Migration of frontend_views
### Idea summary
Elaboration of issue #1070
### Further details
As mentioned in issue #1070, we need to migrate from
```py
url(r"^")
```
to
```py
path()
```
And for frontend_views the following points need to be kept in mind
- [ ] usage of django.urls module for path and include
- [ ] Including the following views using the include method
- [ ] About_view
- [ ] support_us_view
- [ ] how_it_works_view
- [ ] user_profile
- [ ] issue_thread
- [ ] base_view
- [ ] civi2csv
- [ ] use path for all of the above mentioned urls
- [ ] Use the same name for reverse match.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `project/frontend_views/urls.py`
Content:
```
1 from django.conf.urls import url
2 from . import views as v
3
4 urlpatterns = [
5 url(r"^about$", v.about_view, name="about"),
6 url(r"^support_us$", v.support_us_view, name="support us"),
7 url(r"^howitworks$", v.how_it_works_view, name="how it works"),
8 url(r"^profile/(?P<username>[a-zA-Z0-9-_]*)$", v.user_profile, name="profile"),
9 url(r"^profile/rep/(?P<username>\d+)$", v.user_profile, name="profile"),
10 url(r"^thread/(?P<thread_id>\w+)$", v.issue_thread, name="issue thread"),
11 url(r"^profile$", v.user_profile, name="default_profile"),
12 url(r"^$", v.base_view, name="base"),
13 url(r"^thread/(?P<thread_id>\w+)/csv$", v.civi2csv, name="civi2csv"),
14 ]
15
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/project/frontend_views/urls.py b/project/frontend_views/urls.py
--- a/project/frontend_views/urls.py
+++ b/project/frontend_views/urls.py
@@ -1,14 +1,13 @@
-from django.conf.urls import url
-from . import views as v
+from django.urls import path
+from frontend_views import views
urlpatterns = [
- url(r"^about$", v.about_view, name="about"),
- url(r"^support_us$", v.support_us_view, name="support us"),
- url(r"^howitworks$", v.how_it_works_view, name="how it works"),
- url(r"^profile/(?P<username>[a-zA-Z0-9-_]*)$", v.user_profile, name="profile"),
- url(r"^profile/rep/(?P<username>\d+)$", v.user_profile, name="profile"),
- url(r"^thread/(?P<thread_id>\w+)$", v.issue_thread, name="issue thread"),
- url(r"^profile$", v.user_profile, name="default_profile"),
- url(r"^$", v.base_view, name="base"),
- url(r"^thread/(?P<thread_id>\w+)/csv$", v.civi2csv, name="civi2csv"),
+ path("about/", views.about_view, name="about"),
+ path("support_us/", views.support_us_view, name="support us"),
+ path("howitworks/", views.how_it_works_view, name="how it works"),
+ path("profile/<str:username>/", views.user_profile, name="profile"),
+ path("thread/<int:thread_id>/", views.issue_thread, name="issue thread"),
+ path("profile/", views.user_profile, name="default_profile"),
+ path("", views.base_view, name="base"),
+ path("thread/<int:thread_id>/csv/", views.civi2csv, name="civi2csv"),
]
|
{"golden_diff": "diff --git a/project/frontend_views/urls.py b/project/frontend_views/urls.py\n--- a/project/frontend_views/urls.py\n+++ b/project/frontend_views/urls.py\n@@ -1,14 +1,13 @@\n-from django.conf.urls import url\n-from . import views as v\n+from django.urls import path\n+from frontend_views import views\n \n urlpatterns = [\n- url(r\"^about$\", v.about_view, name=\"about\"),\n- url(r\"^support_us$\", v.support_us_view, name=\"support us\"),\n- url(r\"^howitworks$\", v.how_it_works_view, name=\"how it works\"),\n- url(r\"^profile/(?P<username>[a-zA-Z0-9-_]*)$\", v.user_profile, name=\"profile\"),\n- url(r\"^profile/rep/(?P<username>\\d+)$\", v.user_profile, name=\"profile\"),\n- url(r\"^thread/(?P<thread_id>\\w+)$\", v.issue_thread, name=\"issue thread\"),\n- url(r\"^profile$\", v.user_profile, name=\"default_profile\"),\n- url(r\"^$\", v.base_view, name=\"base\"),\n- url(r\"^thread/(?P<thread_id>\\w+)/csv$\", v.civi2csv, name=\"civi2csv\"),\n+ path(\"about/\", views.about_view, name=\"about\"),\n+ path(\"support_us/\", views.support_us_view, name=\"support us\"),\n+ path(\"howitworks/\", views.how_it_works_view, name=\"how it works\"),\n+ path(\"profile/<str:username>/\", views.user_profile, name=\"profile\"),\n+ path(\"thread/<int:thread_id>/\", views.issue_thread, name=\"issue thread\"),\n+ path(\"profile/\", views.user_profile, name=\"default_profile\"),\n+ path(\"\", views.base_view, name=\"base\"),\n+ path(\"thread/<int:thread_id>/csv/\", views.civi2csv, name=\"civi2csv\"),\n ]\n", "issue": "Migration of frontend_views \n### Idea summary\n\nElaboration of issue #1070\n\n### Further details\n\nAs mentioned in issue #1070, we need to migrate from\r\n```py\r\nurl(r\"^\")\r\n```\r\nto \r\n```py\r\npath()\r\n```\r\nAnd for frontend_views the following points need to be kept in mind\r\n- [ ] usage of django.urls module for path and include\r\n- [ ] Including the following views using the include method\r\n - [ ] About_view\r\n - [ ] support_us_view\r\n - [ ] how_it_works_view\r\n - [ ] user_profile\r\n - [ ] issue_thread\r\n - [ ] base_view\r\n - [ ] civi2csv\r\n - [ ] use path for all of the above mentioned urls\r\n - [ ] Use the same name for reverse match. \n", "before_files": [{"content": "from django.conf.urls import url\nfrom . 
import views as v\n\nurlpatterns = [\n url(r\"^about$\", v.about_view, name=\"about\"),\n url(r\"^support_us$\", v.support_us_view, name=\"support us\"),\n url(r\"^howitworks$\", v.how_it_works_view, name=\"how it works\"),\n url(r\"^profile/(?P<username>[a-zA-Z0-9-_]*)$\", v.user_profile, name=\"profile\"),\n url(r\"^profile/rep/(?P<username>\\d+)$\", v.user_profile, name=\"profile\"),\n url(r\"^thread/(?P<thread_id>\\w+)$\", v.issue_thread, name=\"issue thread\"),\n url(r\"^profile$\", v.user_profile, name=\"default_profile\"),\n url(r\"^$\", v.base_view, name=\"base\"),\n url(r\"^thread/(?P<thread_id>\\w+)/csv$\", v.civi2csv, name=\"civi2csv\"),\n]\n", "path": "project/frontend_views/urls.py"}], "after_files": [{"content": "from django.urls import path\nfrom frontend_views import views\n\nurlpatterns = [\n path(\"about/\", views.about_view, name=\"about\"),\n path(\"support_us/\", views.support_us_view, name=\"support us\"),\n path(\"howitworks/\", views.how_it_works_view, name=\"how it works\"),\n path(\"profile/<str:username>/\", views.user_profile, name=\"profile\"),\n path(\"thread/<int:thread_id>/\", views.issue_thread, name=\"issue thread\"),\n path(\"profile/\", views.user_profile, name=\"default_profile\"),\n path(\"\", views.base_view, name=\"base\"),\n path(\"thread/<int:thread_id>/csv/\", views.civi2csv, name=\"civi2csv\"),\n]\n", "path": "project/frontend_views/urls.py"}]}
| 663 | 419 |
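For reference, a self-contained sketch of the `url()` → `path()` conversion pattern the patch above applies: the named regex group becomes a path converter, and `reverse()` keeps working against the route name. The settings/urlconf scaffolding and the stand-in view exist only to make the snippet runnable outside CiviWiki.

```python
import django
from django.conf import settings
from django.urls import path, re_path, reverse

settings.configure(ROOT_URLCONF=__name__)
django.setup()


def user_profile(request, username=None):
    ...  # stand-in view, not the CiviWiki implementation


urlpatterns = [
    # old style: url()/re_path() with a named regex group
    re_path(r"^profile/(?P<username>[a-zA-Z0-9-_]*)$", user_profile, name="profile_old"),
    # new style used in the patch: path() with a converter and trailing slash
    path("profile/<str:username>/", user_profile, name="profile"),
]

assert reverse("profile", kwargs={"username": "alice"}) == "/profile/alice/"
```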
gh_patches_debug_39915
|
rasdani/github-patches
|
git_diff
|
rasterio__rasterio-800
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Switch sense of rio-warp's --bounds from src to dst
For 1.0, let's harmonize rio-warp's `--dimensions`, `--res`, `--bounds`, and `--like` options so they all specify the output/destination raster. This will mean a breaking change for `--bounds`. I apologize for not catching the interface discrepancy earlier, everyone.
Plan:
- raise a [FutureWarning](https://docs.python.org/2/library/exceptions.html#exceptions.FutureWarning) starting with the next release explaining the upcoming change to rio-warp's `--bounds` option.
- add a `--x-dst-bounds` option to impl the corrected feature.
- add a `--src-bounds` option to preserve the existing feature.
- prevent the GDAL faults (crashes and non-returning calls) that are possible when bad destination bounds are passed.
Sound good, @brendan-ward @perrygeo?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rasterio/rio/warp.py`
Content:
```
1 import logging
2 from math import ceil
3 import warnings
4
5 import click
6 from cligj import files_inout_arg, format_opt
7
8 from .helpers import resolve_inout
9 from . import options
10 import rasterio
11 from rasterio.crs import CRS
12 from rasterio.errors import CRSError
13 from rasterio.transform import Affine
14 from rasterio.warp import (
15 reproject, Resampling, calculate_default_transform, transform_bounds)
16
17
18 # Improper usage of rio-warp can lead to accidental creation of
19 # extremely large datasets. We'll put a hard limit on the size of
20 # datasets and raise a usage error if the limits are exceeded.
21 MAX_OUTPUT_WIDTH = 100000
22 MAX_OUTPUT_HEIGHT = 100000
23
24
25 def bounds_handler(ctx, param, value):
26 """Warn about future usage changes."""
27 if value:
28 click.echo(
29 "Future Warning: "
30 "the semantics of the `--bounds` option will change in Rasterio "
31 "version 1.0 from bounds of the source dataset to bounds of the "
32 "destination dataset.", err=True)
33 return value
34
35
36 def x_dst_bounds_handler(ctx, param, value):
37 """Warn about future usage changes."""
38 if value:
39 click.echo(
40 "Future Warning: "
41 "the `--x-dst-bounds` option will be removed in Rasterio version "
42 "1.0 in favor of `--bounds`.", err=True)
43 return value
44
45
46 @click.command(short_help='Warp a raster dataset.')
47 @files_inout_arg
48 @options.output_opt
49 @format_opt
50 @click.option(
51 '--like',
52 type=click.Path(exists=True),
53 help='Raster dataset to use as a template for obtaining affine '
54 'transform (bounds and resolution), and crs.')
55 @click.option('--dst-crs', default=None,
56 help='Target coordinate reference system.')
57 @options.dimensions_opt
58 @click.option(
59 '--src-bounds',
60 nargs=4, type=float, default=None,
61 help="Determine output extent from source bounds: left bottom right top "
62 "(note: for future backwards compatibility in 1.0).")
63 @click.option(
64 '--x-dst-bounds',
65 nargs=4, type=float, default=None, callback=x_dst_bounds_handler,
66 help="Set output extent from bounding values: left bottom right top "
67 "(note: this option will be removed in 1.0).")
68 @click.option(
69 '--bounds',
70 nargs=4, type=float, default=None, callback=bounds_handler,
71 help="Determine output extent from source bounds: left bottom right top "
72 "(note: the semantics of this option will change to those of "
73 "`--x-dst-bounds` in version 1.0).")
74 @options.resolution_opt
75 @click.option('--resampling', type=click.Choice([r.name for r in Resampling]),
76 default='nearest', help="Resampling method.",
77 show_default=True)
78 @click.option('--src-nodata', default=None, show_default=True,
79 type=float, help="Manually override source nodata")
80 @click.option('--dst-nodata', default=None, show_default=True,
81 type=float, help="Manually override destination nodata")
82 @click.option('--threads', type=int, default=1,
83 help='Number of processing threads.')
84 @click.option('--check-invert-proj', type=bool, default=True,
85 help='Constrain output to valid coordinate region in dst-crs')
86 @options.force_overwrite_opt
87 @options.creation_options
88 @click.pass_context
89 def warp(ctx, files, output, driver, like, dst_crs, dimensions, src_bounds,
90 x_dst_bounds, bounds, res, resampling, src_nodata, dst_nodata, threads, check_invert_proj,
91 force_overwrite, creation_options):
92 """
93 Warp a raster dataset.
94
95 If a template raster is provided using the --like option, the
96 coordinate reference system, affine transform, and dimensions of
97 that raster will be used for the output. In this case --dst-crs,
98 --bounds, --res, and --dimensions options are ignored.
99
100 \b
101 $ rio warp input.tif output.tif --like template.tif
102
103 The output coordinate reference system may be either a PROJ.4 or
104 EPSG:nnnn string,
105
106 \b
107 --dst-crs EPSG:4326
108 --dst-crs '+proj=longlat +ellps=WGS84 +datum=WGS84'
109
110 or a JSON text-encoded PROJ.4 object.
111
112 \b
113 --dst-crs '{"proj": "utm", "zone": 18, ...}'
114
115 If --dimensions are provided, --res and --bounds are ignored.
116 Resolution is calculated based on the relationship between the
117 raster bounds in the target coordinate system and the dimensions,
118 and may produce rectangular rather than square pixels.
119
120 \b
121 $ rio warp input.tif output.tif --dimensions 100 200 \\
122 > --dst-crs EPSG:4326
123
124 If --bounds are provided, --res is required if --dst-crs is provided
125 (defaults to source raster resolution otherwise).
126
127 \b
128 $ rio warp input.tif output.tif \\
129 > --bounds -78 22 -76 24 --res 0.1 --dst-crs EPSG:4326
130
131 """
132 verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1
133
134 output, files = resolve_inout(
135 files=files, output=output, force_overwrite=force_overwrite)
136
137 resampling = Resampling[resampling] # get integer code for method
138
139 if not len(res):
140 # Click sets this as an empty tuple if not provided
141 res = None
142 else:
143 # Expand one value to two if needed
144 res = (res[0], res[0]) if len(res) == 1 else res
145
146 with rasterio.Env(CPL_DEBUG=verbosity > 2,
147 CHECK_WITH_INVERT_PROJ=check_invert_proj):
148 with rasterio.open(files[0]) as src:
149 l, b, r, t = src.bounds
150 out_kwargs = src.meta.copy()
151 out_kwargs['driver'] = driver
152
153 # Sort out the bounds options.
154 src_bounds = bounds or src_bounds
155 dst_bounds = x_dst_bounds
156 if src_bounds and dst_bounds:
157 raise click.BadParameter(
158 "Source and destination bounds may not be specified "
159 "simultaneously.")
160
161 if like:
162 with rasterio.open(like) as template_ds:
163 dst_crs = template_ds.crs
164 dst_transform = template_ds.affine
165 dst_height = template_ds.height
166 dst_width = template_ds.width
167
168 elif dst_crs is not None:
169 try:
170 dst_crs = CRS.from_string(dst_crs)
171 except ValueError as err:
172 raise click.BadParameter(
173 str(err), param='dst_crs', param_hint='dst_crs')
174
175 if dimensions:
176 # Calculate resolution appropriate for dimensions
177 # in target.
178 dst_width, dst_height = dimensions
179 try:
180 xmin, ymin, xmax, ymax = transform_bounds(
181 src.crs, dst_crs, *src.bounds)
182 except CRSError as err:
183 raise click.BadParameter(
184 str(err), param='dst_crs', param_hint='dst_crs')
185 dst_transform = Affine(
186 (xmax - xmin) / float(dst_width),
187 0, xmin, 0,
188 (ymin - ymax) / float(dst_height),
189 ymax
190 )
191
192 elif src_bounds or dst_bounds:
193 if not res:
194 raise click.BadParameter(
195 "Required when using --bounds.",
196 param='res', param_hint='res')
197
198 if src_bounds:
199 try:
200 xmin, ymin, xmax, ymax = transform_bounds(
201 src.crs, dst_crs, *src_bounds)
202 except CRSError as err:
203 raise click.BadParameter(
204 str(err), param='dst_crs',
205 param_hint='dst_crs')
206 else:
207 xmin, ymin, xmax, ymax = dst_bounds
208
209 dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)
210 dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)
211 dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)
212
213 else:
214 try:
215 dst_transform, dst_width, dst_height = calculate_default_transform(
216 src.crs, dst_crs, src.width, src.height,
217 *src.bounds, resolution=res)
218 except CRSError as err:
219 raise click.BadParameter(
220 str(err), param='dst_crs', param_hint='dst_crs')
221 elif dimensions:
222 # Same projection, different dimensions, calculate resolution.
223 dst_crs = src.crs
224 dst_width, dst_height = dimensions
225 dst_transform = Affine(
226 (r - l) / float(dst_width),
227 0, l, 0,
228 (b - t) / float(dst_height),
229 t
230 )
231
232 elif src_bounds or dst_bounds:
233 # Same projection, different dimensions and possibly
234 # different resolution.
235 if not res:
236 res = (src.affine.a, -src.affine.e)
237
238 dst_crs = src.crs
239 xmin, ymin, xmax, ymax = (src_bounds or dst_bounds)
240 dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)
241 dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)
242 dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)
243
244 elif res:
245 # Same projection, different resolution.
246 dst_crs = src.crs
247 dst_transform = Affine(res[0], 0, l, 0, -res[1], t)
248 dst_width = max(int(ceil((r - l) / res[0])), 1)
249 dst_height = max(int(ceil((t - b) / res[1])), 1)
250
251 else:
252 dst_crs = src.crs
253 dst_transform = src.affine
254 dst_width = src.width
255 dst_height = src.height
256
257 # If src_nodata is not None, update the dst metadata NODATA
258 # value to src_nodata (will be overridden by dst_nodata if it is not None
259 if src_nodata is not None:
260 # Update the dst nodata value
261 out_kwargs.update({
262 'nodata': src_nodata
263 })
264
265 # Validate a manually set destination NODATA value
266 # against the input datatype.
267 if dst_nodata is not None:
268 if src_nodata is None and src.meta['nodata'] is None:
269 raise click.BadParameter(
270 "--src-nodata must be provided because dst-nodata is not None")
271 else:
272 # Update the dst nodata value
273 out_kwargs.update({
274 'nodata': dst_nodata
275 })
276
277 # When the bounds option is misused, extreme values of
278 # destination width and height may result.
279 if (dst_width < 0 or dst_height < 0 or
280 dst_width > MAX_OUTPUT_WIDTH or
281 dst_height > MAX_OUTPUT_HEIGHT):
282 raise click.BadParameter(
283 "Invalid output dimensions: {0}.".format(
284 (dst_width, dst_height)))
285
286 out_kwargs.update({
287 'crs': dst_crs,
288 'transform': dst_transform,
289 'affine': dst_transform,
290 'width': dst_width,
291 'height': dst_height
292 })
293
294 out_kwargs.update(**creation_options)
295
296 with rasterio.open(output, 'w', **out_kwargs) as dst:
297 for i in range(1, src.count + 1):
298
299 reproject(
300 source=rasterio.band(src, i),
301 destination=rasterio.band(dst, i),
302 src_transform=src.affine,
303 src_crs=src.crs,
304 src_nodata=src_nodata,
305 dst_transform=out_kwargs['transform'],
306 dst_crs=out_kwargs['crs'],
307 dst_nodata=dst_nodata,
308 resampling=resampling,
309 num_threads=threads)
310
```
--- END FILES ---
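The resolution/dimension arithmetic described in the docstring above can be reproduced in a few lines. This is only an illustrative sketch with made-up bounds and resolution values (it is not part of the rasterio sources); it uses the `Affine` constructor from the `affine` package that rasterio builds on:

```python
from math import ceil

from affine import Affine  # rasterio re-exports this as rasterio.transform.Affine

# Hypothetical destination bounds (left, bottom, right, top) and resolution.
xmin, ymin, xmax, ymax = -78.0, 22.0, -76.0, 24.0
res = (0.1, 0.1)

# Same arithmetic as warp(): a pixel grid anchored at the top-left corner,
# with width/height rounded up so the requested bounds are fully covered.
dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)
dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)
dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)

print(dst_transform, dst_width, dst_height)  # a 20 x 20 pixel grid for these values
```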
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rasterio/rio/warp.py b/rasterio/rio/warp.py
--- a/rasterio/rio/warp.py
+++ b/rasterio/rio/warp.py
@@ -22,27 +22,6 @@
MAX_OUTPUT_HEIGHT = 100000
-def bounds_handler(ctx, param, value):
- """Warn about future usage changes."""
- if value:
- click.echo(
- "Future Warning: "
- "the semantics of the `--bounds` option will change in Rasterio "
- "version 1.0 from bounds of the source dataset to bounds of the "
- "destination dataset.", err=True)
- return value
-
-
-def x_dst_bounds_handler(ctx, param, value):
- """Warn about future usage changes."""
- if value:
- click.echo(
- "Future Warning: "
- "the `--x-dst-bounds` option will be removed in Rasterio version "
- "1.0 in favor of `--bounds`.", err=True)
- return value
-
-
@click.command(short_help='Warp a raster dataset.')
@files_inout_arg
@options.output_opt
@@ -59,18 +38,10 @@
'--src-bounds',
nargs=4, type=float, default=None,
help="Determine output extent from source bounds: left bottom right top "
- "(note: for future backwards compatibility in 1.0).")
+ ". Cannot be used with destination --bounds")
@click.option(
- '--x-dst-bounds',
- nargs=4, type=float, default=None, callback=x_dst_bounds_handler,
- help="Set output extent from bounding values: left bottom right top "
- "(note: this option will be removed in 1.0).")
[email protected](
- '--bounds',
- nargs=4, type=float, default=None, callback=bounds_handler,
- help="Determine output extent from source bounds: left bottom right top "
- "(note: the semantics of this option will change to those of "
- "`--x-dst-bounds` in version 1.0).")
+ '--bounds', '--dst-bounds', nargs=4, type=float, default=None,
+ help="Determine output extent from destination bounds: left bottom right top")
@options.resolution_opt
@click.option('--resampling', type=click.Choice([r.name for r in Resampling]),
default='nearest', help="Resampling method.",
@@ -87,7 +58,7 @@
@options.creation_options
@click.pass_context
def warp(ctx, files, output, driver, like, dst_crs, dimensions, src_bounds,
- x_dst_bounds, bounds, res, resampling, src_nodata, dst_nodata, threads, check_invert_proj,
+ dst_bounds, res, resampling, src_nodata, dst_nodata, threads, check_invert_proj,
force_overwrite, creation_options):
"""
Warp a raster dataset.
@@ -151,11 +122,9 @@
out_kwargs['driver'] = driver
# Sort out the bounds options.
- src_bounds = bounds or src_bounds
- dst_bounds = x_dst_bounds
if src_bounds and dst_bounds:
raise click.BadParameter(
- "Source and destination bounds may not be specified "
+ "--src-bounds and destination --bounds may not be specified "
"simultaneously.")
if like:
|
{"golden_diff": "diff --git a/rasterio/rio/warp.py b/rasterio/rio/warp.py\n--- a/rasterio/rio/warp.py\n+++ b/rasterio/rio/warp.py\n@@ -22,27 +22,6 @@\n MAX_OUTPUT_HEIGHT = 100000\n \n \n-def bounds_handler(ctx, param, value):\n- \"\"\"Warn about future usage changes.\"\"\"\n- if value:\n- click.echo(\n- \"Future Warning: \"\n- \"the semantics of the `--bounds` option will change in Rasterio \"\n- \"version 1.0 from bounds of the source dataset to bounds of the \"\n- \"destination dataset.\", err=True)\n- return value\n-\n-\n-def x_dst_bounds_handler(ctx, param, value):\n- \"\"\"Warn about future usage changes.\"\"\"\n- if value:\n- click.echo(\n- \"Future Warning: \"\n- \"the `--x-dst-bounds` option will be removed in Rasterio version \"\n- \"1.0 in favor of `--bounds`.\", err=True)\n- return value\n-\n-\n @click.command(short_help='Warp a raster dataset.')\n @files_inout_arg\n @options.output_opt\n@@ -59,18 +38,10 @@\n '--src-bounds',\n nargs=4, type=float, default=None,\n help=\"Determine output extent from source bounds: left bottom right top \"\n- \"(note: for future backwards compatibility in 1.0).\")\n+ \". Cannot be used with destination --bounds\")\n @click.option(\n- '--x-dst-bounds',\n- nargs=4, type=float, default=None, callback=x_dst_bounds_handler,\n- help=\"Set output extent from bounding values: left bottom right top \"\n- \"(note: this option will be removed in 1.0).\")\[email protected](\n- '--bounds',\n- nargs=4, type=float, default=None, callback=bounds_handler,\n- help=\"Determine output extent from source bounds: left bottom right top \"\n- \"(note: the semantics of this option will change to those of \"\n- \"`--x-dst-bounds` in version 1.0).\")\n+ '--bounds', '--dst-bounds', nargs=4, type=float, default=None,\n+ help=\"Determine output extent from destination bounds: left bottom right top\")\n @options.resolution_opt\n @click.option('--resampling', type=click.Choice([r.name for r in Resampling]),\n default='nearest', help=\"Resampling method.\",\n@@ -87,7 +58,7 @@\n @options.creation_options\n @click.pass_context\n def warp(ctx, files, output, driver, like, dst_crs, dimensions, src_bounds,\n- x_dst_bounds, bounds, res, resampling, src_nodata, dst_nodata, threads, check_invert_proj,\n+ dst_bounds, res, resampling, src_nodata, dst_nodata, threads, check_invert_proj,\n force_overwrite, creation_options):\n \"\"\"\n Warp a raster dataset.\n@@ -151,11 +122,9 @@\n out_kwargs['driver'] = driver\n \n # Sort out the bounds options.\n- src_bounds = bounds or src_bounds\n- dst_bounds = x_dst_bounds\n if src_bounds and dst_bounds:\n raise click.BadParameter(\n- \"Source and destination bounds may not be specified \"\n+ \"--src-bounds and destination --bounds may not be specified \"\n \"simultaneously.\")\n \n if like:\n", "issue": "Switch sense of rio-warp's --bounds from src to dst\nFor 1.0, let's harmonize rio-warp's `--dimensions`, `--res`, `--bounds`, and `--like` options so they all specify the output/destination raster. This will mean a breaking change for `--bounds`. 
I apologize for not catching the interface discrepancy earlier, everyone.\n\nPlan: \n- raise a [FutureWarning](https://docs.python.org/2/library/exceptions.html#exceptions.FutureWarning) starting with the next release explaining the upcoming change to rio-warp's `--bounds` option.\n- add a `--x-dst-bounds` option to impl the corrected feature.\n- add a `--src-bounds` option to preserve the existing feature.\n- prevent the GDAL faults (crashes and non-returning calls) that are possible when bad destination bounds are passed.\n\nSound good, @brendan-ward @perrygeo?\n\n", "before_files": [{"content": "import logging\nfrom math import ceil\nimport warnings\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nfrom .helpers import resolve_inout\nfrom . import options\nimport rasterio\nfrom rasterio.crs import CRS\nfrom rasterio.errors import CRSError\nfrom rasterio.transform import Affine\nfrom rasterio.warp import (\n reproject, Resampling, calculate_default_transform, transform_bounds)\n\n\n# Improper usage of rio-warp can lead to accidental creation of\n# extremely large datasets. We'll put a hard limit on the size of\n# datasets and raise a usage error if the limits are exceeded.\nMAX_OUTPUT_WIDTH = 100000\nMAX_OUTPUT_HEIGHT = 100000\n\n\ndef bounds_handler(ctx, param, value):\n \"\"\"Warn about future usage changes.\"\"\"\n if value:\n click.echo(\n \"Future Warning: \"\n \"the semantics of the `--bounds` option will change in Rasterio \"\n \"version 1.0 from bounds of the source dataset to bounds of the \"\n \"destination dataset.\", err=True)\n return value\n\n\ndef x_dst_bounds_handler(ctx, param, value):\n \"\"\"Warn about future usage changes.\"\"\"\n if value:\n click.echo(\n \"Future Warning: \"\n \"the `--x-dst-bounds` option will be removed in Rasterio version \"\n \"1.0 in favor of `--bounds`.\", err=True)\n return value\n\n\[email protected](short_help='Warp a raster dataset.')\n@files_inout_arg\[email protected]_opt\n@format_opt\[email protected](\n '--like',\n type=click.Path(exists=True),\n help='Raster dataset to use as a template for obtaining affine '\n 'transform (bounds and resolution), and crs.')\[email protected]('--dst-crs', default=None,\n help='Target coordinate reference system.')\[email protected]_opt\[email protected](\n '--src-bounds',\n nargs=4, type=float, default=None,\n help=\"Determine output extent from source bounds: left bottom right top \"\n \"(note: for future backwards compatibility in 1.0).\")\[email protected](\n '--x-dst-bounds',\n nargs=4, type=float, default=None, callback=x_dst_bounds_handler,\n help=\"Set output extent from bounding values: left bottom right top \"\n \"(note: this option will be removed in 1.0).\")\[email protected](\n '--bounds',\n nargs=4, type=float, default=None, callback=bounds_handler,\n help=\"Determine output extent from source bounds: left bottom right top \"\n \"(note: the semantics of this option will change to those of \"\n \"`--x-dst-bounds` in version 1.0).\")\[email protected]_opt\[email protected]('--resampling', type=click.Choice([r.name for r in Resampling]),\n default='nearest', help=\"Resampling method.\",\n show_default=True)\[email protected]('--src-nodata', default=None, show_default=True,\n type=float, help=\"Manually override source nodata\")\[email protected]('--dst-nodata', default=None, show_default=True,\n type=float, help=\"Manually override destination nodata\")\[email protected]('--threads', type=int, default=1,\n help='Number of processing threads.')\[email 
protected]('--check-invert-proj', type=bool, default=True,\n help='Constrain output to valid coordinate region in dst-crs')\[email protected]_overwrite_opt\[email protected]_options\[email protected]_context\ndef warp(ctx, files, output, driver, like, dst_crs, dimensions, src_bounds,\n x_dst_bounds, bounds, res, resampling, src_nodata, dst_nodata, threads, check_invert_proj,\n force_overwrite, creation_options):\n \"\"\"\n Warp a raster dataset.\n\n If a template raster is provided using the --like option, the\n coordinate reference system, affine transform, and dimensions of\n that raster will be used for the output. In this case --dst-crs,\n --bounds, --res, and --dimensions options are ignored.\n\n \\b\n $ rio warp input.tif output.tif --like template.tif\n\n The output coordinate reference system may be either a PROJ.4 or\n EPSG:nnnn string,\n\n \\b\n --dst-crs EPSG:4326\n --dst-crs '+proj=longlat +ellps=WGS84 +datum=WGS84'\n\n or a JSON text-encoded PROJ.4 object.\n\n \\b\n --dst-crs '{\"proj\": \"utm\", \"zone\": 18, ...}'\n\n If --dimensions are provided, --res and --bounds are ignored.\n Resolution is calculated based on the relationship between the\n raster bounds in the target coordinate system and the dimensions,\n and may produce rectangular rather than square pixels.\n\n \\b\n $ rio warp input.tif output.tif --dimensions 100 200 \\\\\n > --dst-crs EPSG:4326\n\n If --bounds are provided, --res is required if --dst-crs is provided\n (defaults to source raster resolution otherwise).\n\n \\b\n $ rio warp input.tif output.tif \\\\\n > --bounds -78 22 -76 24 --res 0.1 --dst-crs EPSG:4326\n\n \"\"\"\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1\n\n output, files = resolve_inout(\n files=files, output=output, force_overwrite=force_overwrite)\n\n resampling = Resampling[resampling] # get integer code for method\n\n if not len(res):\n # Click sets this as an empty tuple if not provided\n res = None\n else:\n # Expand one value to two if needed\n res = (res[0], res[0]) if len(res) == 1 else res\n\n with rasterio.Env(CPL_DEBUG=verbosity > 2,\n CHECK_WITH_INVERT_PROJ=check_invert_proj):\n with rasterio.open(files[0]) as src:\n l, b, r, t = src.bounds\n out_kwargs = src.meta.copy()\n out_kwargs['driver'] = driver\n\n # Sort out the bounds options.\n src_bounds = bounds or src_bounds\n dst_bounds = x_dst_bounds\n if src_bounds and dst_bounds:\n raise click.BadParameter(\n \"Source and destination bounds may not be specified \"\n \"simultaneously.\")\n\n if like:\n with rasterio.open(like) as template_ds:\n dst_crs = template_ds.crs\n dst_transform = template_ds.affine\n dst_height = template_ds.height\n dst_width = template_ds.width\n\n elif dst_crs is not None:\n try:\n dst_crs = CRS.from_string(dst_crs)\n except ValueError as err:\n raise click.BadParameter(\n str(err), param='dst_crs', param_hint='dst_crs')\n\n if dimensions:\n # Calculate resolution appropriate for dimensions\n # in target.\n dst_width, dst_height = dimensions\n try:\n xmin, ymin, xmax, ymax = transform_bounds(\n src.crs, dst_crs, *src.bounds)\n except CRSError as err:\n raise click.BadParameter(\n str(err), param='dst_crs', param_hint='dst_crs')\n dst_transform = Affine(\n (xmax - xmin) / float(dst_width),\n 0, xmin, 0,\n (ymin - ymax) / float(dst_height),\n ymax\n )\n\n elif src_bounds or dst_bounds:\n if not res:\n raise click.BadParameter(\n \"Required when using --bounds.\",\n param='res', param_hint='res')\n\n if src_bounds:\n try:\n xmin, ymin, xmax, ymax = transform_bounds(\n src.crs, dst_crs, 
*src_bounds)\n except CRSError as err:\n raise click.BadParameter(\n str(err), param='dst_crs',\n param_hint='dst_crs')\n else:\n xmin, ymin, xmax, ymax = dst_bounds\n\n dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)\n dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)\n dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)\n\n else:\n try:\n dst_transform, dst_width, dst_height = calculate_default_transform(\n src.crs, dst_crs, src.width, src.height,\n *src.bounds, resolution=res)\n except CRSError as err:\n raise click.BadParameter(\n str(err), param='dst_crs', param_hint='dst_crs')\n elif dimensions:\n # Same projection, different dimensions, calculate resolution.\n dst_crs = src.crs\n dst_width, dst_height = dimensions\n dst_transform = Affine(\n (r - l) / float(dst_width),\n 0, l, 0,\n (b - t) / float(dst_height),\n t\n )\n\n elif src_bounds or dst_bounds:\n # Same projection, different dimensions and possibly\n # different resolution.\n if not res:\n res = (src.affine.a, -src.affine.e)\n\n dst_crs = src.crs\n xmin, ymin, xmax, ymax = (src_bounds or dst_bounds)\n dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)\n dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)\n dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)\n\n elif res:\n # Same projection, different resolution.\n dst_crs = src.crs\n dst_transform = Affine(res[0], 0, l, 0, -res[1], t)\n dst_width = max(int(ceil((r - l) / res[0])), 1)\n dst_height = max(int(ceil((t - b) / res[1])), 1)\n\n else:\n dst_crs = src.crs\n dst_transform = src.affine\n dst_width = src.width\n dst_height = src.height\n\n # If src_nodata is not None, update the dst metadata NODATA\n # value to src_nodata (will be overridden by dst_nodata if it is not None\n if src_nodata is not None:\n # Update the dst nodata value\n out_kwargs.update({\n 'nodata': src_nodata\n })\n\n # Validate a manually set destination NODATA value\n # against the input datatype.\n if dst_nodata is not None:\n if src_nodata is None and src.meta['nodata'] is None:\n raise click.BadParameter(\n \"--src-nodata must be provided because dst-nodata is not None\")\n else:\n # Update the dst nodata value\n out_kwargs.update({\n 'nodata': dst_nodata\n })\n\n # When the bounds option is misused, extreme values of\n # destination width and height may result.\n if (dst_width < 0 or dst_height < 0 or\n dst_width > MAX_OUTPUT_WIDTH or\n dst_height > MAX_OUTPUT_HEIGHT):\n raise click.BadParameter(\n \"Invalid output dimensions: {0}.\".format(\n (dst_width, dst_height)))\n\n out_kwargs.update({\n 'crs': dst_crs,\n 'transform': dst_transform,\n 'affine': dst_transform,\n 'width': dst_width,\n 'height': dst_height\n })\n\n out_kwargs.update(**creation_options)\n\n with rasterio.open(output, 'w', **out_kwargs) as dst:\n for i in range(1, src.count + 1):\n\n reproject(\n source=rasterio.band(src, i),\n destination=rasterio.band(dst, i),\n src_transform=src.affine,\n src_crs=src.crs,\n src_nodata=src_nodata,\n dst_transform=out_kwargs['transform'],\n dst_crs=out_kwargs['crs'],\n dst_nodata=dst_nodata,\n resampling=resampling,\n num_threads=threads)\n", "path": "rasterio/rio/warp.py"}], "after_files": [{"content": "import logging\nfrom math import ceil\nimport warnings\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nfrom .helpers import resolve_inout\nfrom . 
import options\nimport rasterio\nfrom rasterio.crs import CRS\nfrom rasterio.errors import CRSError\nfrom rasterio.transform import Affine\nfrom rasterio.warp import (\n reproject, Resampling, calculate_default_transform, transform_bounds)\n\n\n# Improper usage of rio-warp can lead to accidental creation of\n# extremely large datasets. We'll put a hard limit on the size of\n# datasets and raise a usage error if the limits are exceeded.\nMAX_OUTPUT_WIDTH = 100000\nMAX_OUTPUT_HEIGHT = 100000\n\n\[email protected](short_help='Warp a raster dataset.')\n@files_inout_arg\[email protected]_opt\n@format_opt\[email protected](\n '--like',\n type=click.Path(exists=True),\n help='Raster dataset to use as a template for obtaining affine '\n 'transform (bounds and resolution), and crs.')\[email protected]('--dst-crs', default=None,\n help='Target coordinate reference system.')\[email protected]_opt\[email protected](\n '--src-bounds',\n nargs=4, type=float, default=None,\n help=\"Determine output extent from source bounds: left bottom right top \"\n \". Cannot be used with destination --bounds\")\[email protected](\n '--bounds', '--dst-bounds', nargs=4, type=float, default=None,\n help=\"Determine output extent from destination bounds: left bottom right top\")\[email protected]_opt\[email protected]('--resampling', type=click.Choice([r.name for r in Resampling]),\n default='nearest', help=\"Resampling method.\",\n show_default=True)\[email protected]('--src-nodata', default=None, show_default=True,\n type=float, help=\"Manually override source nodata\")\[email protected]('--dst-nodata', default=None, show_default=True,\n type=float, help=\"Manually override destination nodata\")\[email protected]('--threads', type=int, default=1,\n help='Number of processing threads.')\[email protected]('--check-invert-proj', type=bool, default=True,\n help='Constrain output to valid coordinate region in dst-crs')\[email protected]_overwrite_opt\[email protected]_options\[email protected]_context\ndef warp(ctx, files, output, driver, like, dst_crs, dimensions, src_bounds,\n dst_bounds, res, resampling, src_nodata, dst_nodata, threads, check_invert_proj,\n force_overwrite, creation_options):\n \"\"\"\n Warp a raster dataset.\n\n If a template raster is provided using the --like option, the\n coordinate reference system, affine transform, and dimensions of\n that raster will be used for the output. 
In this case --dst-crs,\n --bounds, --res, and --dimensions options are ignored.\n\n \\b\n $ rio warp input.tif output.tif --like template.tif\n\n The output coordinate reference system may be either a PROJ.4 or\n EPSG:nnnn string,\n\n \\b\n --dst-crs EPSG:4326\n --dst-crs '+proj=longlat +ellps=WGS84 +datum=WGS84'\n\n or a JSON text-encoded PROJ.4 object.\n\n \\b\n --dst-crs '{\"proj\": \"utm\", \"zone\": 18, ...}'\n\n If --dimensions are provided, --res and --bounds are ignored.\n Resolution is calculated based on the relationship between the\n raster bounds in the target coordinate system and the dimensions,\n and may produce rectangular rather than square pixels.\n\n \\b\n $ rio warp input.tif output.tif --dimensions 100 200 \\\\\n > --dst-crs EPSG:4326\n\n If --bounds are provided, --res is required if --dst-crs is provided\n (defaults to source raster resolution otherwise).\n\n \\b\n $ rio warp input.tif output.tif \\\\\n > --bounds -78 22 -76 24 --res 0.1 --dst-crs EPSG:4326\n\n \"\"\"\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1\n\n output, files = resolve_inout(\n files=files, output=output, force_overwrite=force_overwrite)\n\n resampling = Resampling[resampling] # get integer code for method\n\n if not len(res):\n # Click sets this as an empty tuple if not provided\n res = None\n else:\n # Expand one value to two if needed\n res = (res[0], res[0]) if len(res) == 1 else res\n\n with rasterio.Env(CPL_DEBUG=verbosity > 2,\n CHECK_WITH_INVERT_PROJ=check_invert_proj):\n with rasterio.open(files[0]) as src:\n l, b, r, t = src.bounds\n out_kwargs = src.meta.copy()\n out_kwargs['driver'] = driver\n\n # Sort out the bounds options.\n if src_bounds and dst_bounds:\n raise click.BadParameter(\n \"--src-bounds and destination --bounds may not be specified \"\n \"simultaneously.\")\n\n if like:\n with rasterio.open(like) as template_ds:\n dst_crs = template_ds.crs\n dst_transform = template_ds.affine\n dst_height = template_ds.height\n dst_width = template_ds.width\n\n elif dst_crs is not None:\n try:\n dst_crs = CRS.from_string(dst_crs)\n except ValueError as err:\n raise click.BadParameter(\n str(err), param='dst_crs', param_hint='dst_crs')\n\n if dimensions:\n # Calculate resolution appropriate for dimensions\n # in target.\n dst_width, dst_height = dimensions\n try:\n xmin, ymin, xmax, ymax = transform_bounds(\n src.crs, dst_crs, *src.bounds)\n except CRSError as err:\n raise click.BadParameter(\n str(err), param='dst_crs', param_hint='dst_crs')\n dst_transform = Affine(\n (xmax - xmin) / float(dst_width),\n 0, xmin, 0,\n (ymin - ymax) / float(dst_height),\n ymax\n )\n\n elif src_bounds or dst_bounds:\n if not res:\n raise click.BadParameter(\n \"Required when using --bounds.\",\n param='res', param_hint='res')\n\n if src_bounds:\n try:\n xmin, ymin, xmax, ymax = transform_bounds(\n src.crs, dst_crs, *src_bounds)\n except CRSError as err:\n raise click.BadParameter(\n str(err), param='dst_crs',\n param_hint='dst_crs')\n else:\n xmin, ymin, xmax, ymax = dst_bounds\n\n dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)\n dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)\n dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)\n\n else:\n try:\n dst_transform, dst_width, dst_height = calculate_default_transform(\n src.crs, dst_crs, src.width, src.height,\n *src.bounds, resolution=res)\n except CRSError as err:\n raise click.BadParameter(\n str(err), param='dst_crs', param_hint='dst_crs')\n elif dimensions:\n # Same projection, different dimensions, calculate 
resolution.\n dst_crs = src.crs\n dst_width, dst_height = dimensions\n dst_transform = Affine(\n (r - l) / float(dst_width),\n 0, l, 0,\n (b - t) / float(dst_height),\n t\n )\n\n elif src_bounds or dst_bounds:\n # Same projection, different dimensions and possibly\n # different resolution.\n if not res:\n res = (src.affine.a, -src.affine.e)\n\n dst_crs = src.crs\n xmin, ymin, xmax, ymax = (src_bounds or dst_bounds)\n dst_transform = Affine(res[0], 0, xmin, 0, -res[1], ymax)\n dst_width = max(int(ceil((xmax - xmin) / res[0])), 1)\n dst_height = max(int(ceil((ymax - ymin) / res[1])), 1)\n\n elif res:\n # Same projection, different resolution.\n dst_crs = src.crs\n dst_transform = Affine(res[0], 0, l, 0, -res[1], t)\n dst_width = max(int(ceil((r - l) / res[0])), 1)\n dst_height = max(int(ceil((t - b) / res[1])), 1)\n\n else:\n dst_crs = src.crs\n dst_transform = src.affine\n dst_width = src.width\n dst_height = src.height\n\n # If src_nodata is not None, update the dst metadata NODATA\n # value to src_nodata (will be overridden by dst_nodata if it is not None\n if src_nodata is not None:\n # Update the dst nodata value\n out_kwargs.update({\n 'nodata': src_nodata\n })\n\n # Validate a manually set destination NODATA value\n # against the input datatype.\n if dst_nodata is not None:\n if src_nodata is None and src.meta['nodata'] is None:\n raise click.BadParameter(\n \"--src-nodata must be provided because dst-nodata is not None\")\n else:\n # Update the dst nodata value\n out_kwargs.update({\n 'nodata': dst_nodata\n })\n\n # When the bounds option is misused, extreme values of\n # destination width and height may result.\n if (dst_width < 0 or dst_height < 0 or\n dst_width > MAX_OUTPUT_WIDTH or\n dst_height > MAX_OUTPUT_HEIGHT):\n raise click.BadParameter(\n \"Invalid output dimensions: {0}.\".format(\n (dst_width, dst_height)))\n\n out_kwargs.update({\n 'crs': dst_crs,\n 'transform': dst_transform,\n 'affine': dst_transform,\n 'width': dst_width,\n 'height': dst_height\n })\n\n out_kwargs.update(**creation_options)\n\n with rasterio.open(output, 'w', **out_kwargs) as dst:\n for i in range(1, src.count + 1):\n\n reproject(\n source=rasterio.band(src, i),\n destination=rasterio.band(dst, i),\n src_transform=src.affine,\n src_crs=src.crs,\n src_nodata=src_nodata,\n dst_transform=out_kwargs['transform'],\n dst_crs=out_kwargs['crs'],\n dst_nodata=dst_nodata,\n resampling=resampling,\n num_threads=threads)\n", "path": "rasterio/rio/warp.py"}]}
| 3,976 | 770 |
gh_patches_debug_37451
|
rasdani/github-patches
|
git_diff
|
great-expectations__great_expectations-2142
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error in slack_notification Action
Hi Guys,
I have 3 pipelines with great expectations, but one pipeline started to give some errors:
`Error running action with name notify_slack
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/validation_operators.py", line 384, in _run_actions
payload=batch_actions_results,
File "/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/actions.py", line 55, in run
**kwargs,
File "/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/actions.py", line 170, in _run
validation_result_suite, data_docs_pages, self.notify_with
File "/usr/local/lib/python3.7/site-packages/great_expectations/render/renderer/slack_renderer.py", line 101, in render
report_element = self._get_report_element(docs_link)
File "/usr/local/lib/python3.7/site-packages/great_expectations/render/renderer/slack_renderer.py", line 155, in _get_report_element
if "file:///" in docs_link:
TypeError: argument of type 'NoneType' is not iterable
Traceback (most recent call last):
File "fact_demand_coverage_datalake_datasets_job.py", line 90, in <module>
expectation_suite=expectation_suite_name_for_validations
File "/mnt1/yarn/usercache/hadoop/appcache/application_1603978914184_5458/container_1603978914184_5458_01_000001/data_quality.zip/data_quality/ge_validation.py", line 174, in validate
File "/usr/local/lib/python3.7/site-packages/great_expectations/core/usage_statistics/usage_statistics.py", line 257, in usage_statistics_wrapped_method
result = func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/great_expectations/data_context/data_context.py", line 1022, in run_validation_operator
**kwargs,
File "/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/validation_operators.py", line 334, in run
run_id,
File "/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/validation_operators.py", line 399, in _run_actions
raise e
File "/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/validation_operators.py", line 384, in _run_actions
payload=batch_actions_results,
File "/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/actions.py", line 55, in run
**kwargs,
File "/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/actions.py", line 170, in _run
validation_result_suite, data_docs_pages, self.notify_with
File "/usr/local/lib/python3.7/site-packages/great_expectations/render/renderer/slack_renderer.py", line 101, in render
report_element = self._get_report_element(docs_link)
File "/usr/local/lib/python3.7/site-packages/great_expectations/render/renderer/slack_renderer.py", line 155, in _get_report_element
if "file:///" in docs_link:
TypeError: argument of type 'NoneType' is not iterable`
The strangest thing about this error is that it doesn't happen every day; yesterday, for example, it worked fine.
--- END ISSUE ---
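The `TypeError` in the traceback comes from running a substring membership test against `None`. A minimal reproduction (hypothetical site name and value, not taken from the reporter's pipeline) and the kind of guard that avoids it:

```python
# data_docs_pages maps a data-docs site name to its URL; an unbuilt or
# misconfigured site can yield None for that URL.
data_docs_pages = {"local_site": None}
docs_link = data_docs_pages["local_site"]

try:
    "file:///" in docs_link  # same membership test as in slack_renderer.py
except TypeError as err:
    print(err)  # argument of type 'NoneType' is not iterable

# Checking for None before the membership test avoids the crash:
if docs_link and "file:///" in docs_link:
    print("render a file:// style link")
```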
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `great_expectations/render/renderer/slack_renderer.py`
Content:
```
1 import logging
2
3 from great_expectations.exceptions import InvalidKeyError
4
5 logger = logging.getLogger(__name__)
6
7 from ...core.id_dict import BatchKwargs
8 from .renderer import Renderer
9
10
11 class SlackRenderer(Renderer):
12 def __init__(self):
13 super().__init__()
14
15 def render(
16 self, validation_result=None, data_docs_pages=None, notify_with=None,
17 ):
18 default_text = (
19 "No validation occurred. Please ensure you passed a validation_result."
20 )
21 status = "Failed :x:"
22
23 title_block = {
24 "type": "section",
25 "text": {"type": "mrkdwn", "text": default_text,},
26 }
27
28 query = {
29 "blocks": [title_block],
30 # this abbreviated root level "text" will show up in the notification and not the message
31 "text": default_text,
32 }
33
34 if validation_result:
35 expectation_suite_name = validation_result.meta.get(
36 "expectation_suite_name", "__no_expectation_suite_name__"
37 )
38
39 if "batch_kwargs" in validation_result.meta:
40 data_asset_name = validation_result.meta["batch_kwargs"].get(
41 "data_asset_name", "__no_data_asset_name__"
42 )
43 else:
44 data_asset_name = "__no_data_asset_name__"
45
46 n_checks_succeeded = validation_result.statistics["successful_expectations"]
47 n_checks = validation_result.statistics["evaluated_expectations"]
48 run_id = validation_result.meta.get("run_id", "__no_run_id__")
49 batch_id = BatchKwargs(
50 validation_result.meta.get("batch_kwargs", {})
51 ).to_id()
52 check_details_text = (
53 f"*{n_checks_succeeded}* of *{n_checks}* expectations were met"
54 )
55
56 if validation_result.success:
57 status = "Success :tada:"
58
59 summary_text = f"""*Batch Validation Status*: {status}
60 *Expectation suite name*: `{expectation_suite_name}`
61 *Data asset name*: `{data_asset_name}`
62 *Run ID*: `{run_id}`
63 *Batch ID*: `{batch_id}`
64 *Summary*: {check_details_text}"""
65 query["blocks"][0]["text"]["text"] = summary_text
66 # this abbreviated root level "text" will show up in the notification and not the message
67 query["text"] = f"{expectation_suite_name}: {status}"
68
69 if data_docs_pages:
70 if notify_with is not None:
71 for docs_link_key in notify_with:
72 if docs_link_key in data_docs_pages.keys():
73 docs_link = data_docs_pages[docs_link_key]
74 report_element = self._get_report_element(docs_link)
75 else:
76 logger.critical(
77 f"*ERROR*: Slack is trying to provide a link to the following DataDocs: `{str(docs_link_key)}`, but it is not configured under `data_docs_sites` in the `great_expectations.yml`\n"
78 )
79 report_element = {
80 "type": "section",
81 "text": {
82 "type": "mrkdwn",
83 "text": f"*ERROR*: Slack is trying to provide a link to the following DataDocs: `{str(docs_link_key)}`, but it is not configured under `data_docs_sites` in the `great_expectations.yml`\n",
84 },
85 }
86 if report_element:
87 query["blocks"].append(report_element)
88 else:
89 for docs_link_key in data_docs_pages.keys():
90 if docs_link_key == "class":
91 continue
92 docs_link = data_docs_pages[docs_link_key]
93 report_element = self._get_report_element(docs_link)
94 if report_element:
95 query["blocks"].append(report_element)
96
97 if "result_reference" in validation_result.meta:
98 result_reference = validation_result.meta["result_reference"]
99 report_element = {
100 "type": "section",
101 "text": {
102 "type": "mrkdwn",
103 "text": f"- *Validation Report*: {result_reference}",
104 },
105 }
106 query["blocks"].append(report_element)
107
108 if "dataset_reference" in validation_result.meta:
109 dataset_reference = validation_result.meta["dataset_reference"]
110 dataset_element = {
111 "type": "section",
112 "text": {
113 "type": "mrkdwn",
114 "text": f"- *Validation data asset*: {dataset_reference}",
115 },
116 }
117 query["blocks"].append(dataset_element)
118
119 custom_blocks = self._custom_blocks(evr=validation_result)
120 if custom_blocks:
121 query["blocks"].append(custom_blocks)
122
123 documentation_url = "https://docs.greatexpectations.io/en/latest/guides/tutorials/getting_started/set_up_data_docs.html"
124 footer_section = {
125 "type": "context",
126 "elements": [
127 {
128 "type": "mrkdwn",
129 "text": f"Learn how to review validation results in Data Docs: {documentation_url}",
130 }
131 ],
132 }
133
134 divider_block = {"type": "divider"}
135 query["blocks"].append(divider_block)
136 query["blocks"].append(footer_section)
137 return query
138
139 def _custom_blocks(self, evr):
140 return None
141
142 def _get_report_element(self, docs_link):
143 if docs_link is None:
144 logger.warn("No docs link found. Skipping data docs link in slack message.")
145 return
146
147 if "file://" in docs_link:
148 # handle special case since Slack does not render these links
149 report_element = {
150 "type": "section",
151 "text": {
152 "type": "mrkdwn",
153 "text": f"*DataDocs* can be found here: `{docs_link}` \n (Please copy and paste link into a browser to view)\n",
154 },
155 }
156 else:
157 report_element = {
158 "type": "section",
159 "text": {
160 "type": "mrkdwn",
161 "text": f"*DataDocs* can be found here: <{docs_link}|{docs_link}>",
162 },
163 }
164 return report_element
165
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/great_expectations/render/renderer/slack_renderer.py b/great_expectations/render/renderer/slack_renderer.py
--- a/great_expectations/render/renderer/slack_renderer.py
+++ b/great_expectations/render/renderer/slack_renderer.py
@@ -116,10 +116,6 @@
}
query["blocks"].append(dataset_element)
- custom_blocks = self._custom_blocks(evr=validation_result)
- if custom_blocks:
- query["blocks"].append(custom_blocks)
-
documentation_url = "https://docs.greatexpectations.io/en/latest/guides/tutorials/getting_started/set_up_data_docs.html"
footer_section = {
"type": "context",
@@ -136,29 +132,36 @@
query["blocks"].append(footer_section)
return query
- def _custom_blocks(self, evr):
- return None
-
def _get_report_element(self, docs_link):
- if docs_link is None:
- logger.warn("No docs link found. Skipping data docs link in slack message.")
- return
-
- if "file://" in docs_link:
- # handle special case since Slack does not render these links
- report_element = {
- "type": "section",
- "text": {
- "type": "mrkdwn",
- "text": f"*DataDocs* can be found here: `{docs_link}` \n (Please copy and paste link into a browser to view)\n",
- },
- }
+ report_element = None
+ if docs_link:
+ try:
+ if "file://" in docs_link:
+ # handle special case since Slack does not render these links
+ report_element = {
+ "type": "section",
+ "text": {
+ "type": "mrkdwn",
+ "text": f"*DataDocs* can be found here: `{docs_link}` \n (Please copy and paste link into a browser to view)\n",
+ },
+ }
+ else:
+ report_element = {
+ "type": "section",
+ "text": {
+ "type": "mrkdwn",
+ "text": f"*DataDocs* can be found here: <{docs_link}|{docs_link}>",
+ },
+ }
+ except Exception as e:
+ logger.warning(
+ f"""SlackRenderer had a problem with generating the docs link.
+ link used to generate the docs link is: {docs_link} and is of type: {type(docs_link)}.
+ Error: {e}"""
+ )
+ return
else:
- report_element = {
- "type": "section",
- "text": {
- "type": "mrkdwn",
- "text": f"*DataDocs* can be found here: <{docs_link}|{docs_link}>",
- },
- }
+ logger.warning(
+ "No docs link found. Skipping data docs link in Slack message."
+ )
return report_element
|
{"golden_diff": "diff --git a/great_expectations/render/renderer/slack_renderer.py b/great_expectations/render/renderer/slack_renderer.py\n--- a/great_expectations/render/renderer/slack_renderer.py\n+++ b/great_expectations/render/renderer/slack_renderer.py\n@@ -116,10 +116,6 @@\n }\n query[\"blocks\"].append(dataset_element)\n \n- custom_blocks = self._custom_blocks(evr=validation_result)\n- if custom_blocks:\n- query[\"blocks\"].append(custom_blocks)\n-\n documentation_url = \"https://docs.greatexpectations.io/en/latest/guides/tutorials/getting_started/set_up_data_docs.html\"\n footer_section = {\n \"type\": \"context\",\n@@ -136,29 +132,36 @@\n query[\"blocks\"].append(footer_section)\n return query\n \n- def _custom_blocks(self, evr):\n- return None\n-\n def _get_report_element(self, docs_link):\n- if docs_link is None:\n- logger.warn(\"No docs link found. Skipping data docs link in slack message.\")\n- return\n-\n- if \"file://\" in docs_link:\n- # handle special case since Slack does not render these links\n- report_element = {\n- \"type\": \"section\",\n- \"text\": {\n- \"type\": \"mrkdwn\",\n- \"text\": f\"*DataDocs* can be found here: `{docs_link}` \\n (Please copy and paste link into a browser to view)\\n\",\n- },\n- }\n+ report_element = None\n+ if docs_link:\n+ try:\n+ if \"file://\" in docs_link:\n+ # handle special case since Slack does not render these links\n+ report_element = {\n+ \"type\": \"section\",\n+ \"text\": {\n+ \"type\": \"mrkdwn\",\n+ \"text\": f\"*DataDocs* can be found here: `{docs_link}` \\n (Please copy and paste link into a browser to view)\\n\",\n+ },\n+ }\n+ else:\n+ report_element = {\n+ \"type\": \"section\",\n+ \"text\": {\n+ \"type\": \"mrkdwn\",\n+ \"text\": f\"*DataDocs* can be found here: <{docs_link}|{docs_link}>\",\n+ },\n+ }\n+ except Exception as e:\n+ logger.warning(\n+ f\"\"\"SlackRenderer had a problem with generating the docs link.\n+ link used to generate the docs link is: {docs_link} and is of type: {type(docs_link)}.\n+ Error: {e}\"\"\"\n+ )\n+ return\n else:\n- report_element = {\n- \"type\": \"section\",\n- \"text\": {\n- \"type\": \"mrkdwn\",\n- \"text\": f\"*DataDocs* can be found here: <{docs_link}|{docs_link}>\",\n- },\n- }\n+ logger.warning(\n+ \"No docs link found. 
Skipping data docs link in Slack message.\"\n+ )\n return report_element\n", "issue": "Error in slack_notification Action\nHi Guys,\r\nI have 3 pipelines with great expectations, but one pipeline started to give some errors:\r\n\r\n`Error running action with name notify_slack\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/validation_operators.py\", line 384, in _run_actions\r\n payload=batch_actions_results,\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/actions.py\", line 55, in run\r\n **kwargs,\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/actions.py\", line 170, in _run\r\n validation_result_suite, data_docs_pages, self.notify_with\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/render/renderer/slack_renderer.py\", line 101, in render\r\n report_element = self._get_report_element(docs_link)\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/render/renderer/slack_renderer.py\", line 155, in _get_report_element\r\n if \"file:///\" in docs_link:\r\nTypeError: argument of type 'NoneType' is not iterable\r\nTraceback (most recent call last):\r\n File \"fact_demand_coverage_datalake_datasets_job.py\", line 90, in <module>\r\n expectation_suite=expectation_suite_name_for_validations\r\n File \"/mnt1/yarn/usercache/hadoop/appcache/application_1603978914184_5458/container_1603978914184_5458_01_000001/data_quality.zip/data_quality/ge_validation.py\", line 174, in validate\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/core/usage_statistics/usage_statistics.py\", line 257, in usage_statistics_wrapped_method\r\n result = func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/data_context/data_context.py\", line 1022, in run_validation_operator\r\n **kwargs,\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/validation_operators.py\", line 334, in run\r\n run_id,\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/validation_operators.py\", line 399, in _run_actions\r\n raise e\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/validation_operators.py\", line 384, in _run_actions\r\n payload=batch_actions_results,\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/actions.py\", line 55, in run\r\n **kwargs,\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/validation_operators/actions.py\", line 170, in _run\r\n validation_result_suite, data_docs_pages, self.notify_with\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/render/renderer/slack_renderer.py\", line 101, in render\r\n report_element = self._get_report_element(docs_link)\r\n File \"/usr/local/lib/python3.7/site-packages/great_expectations/render/renderer/slack_renderer.py\", line 155, in _get_report_element\r\n if \"file:///\" in docs_link:\r\nTypeError: argument of type 'NoneType' is not iterable`\r\n\r\nThe stranger thing with this error, the error doesn't happen every day, for example yesterday it worked fine.\n", "before_files": [{"content": "import logging\n\nfrom great_expectations.exceptions import InvalidKeyError\n\nlogger = logging.getLogger(__name__)\n\nfrom ...core.id_dict import BatchKwargs\nfrom .renderer import Renderer\n\n\nclass SlackRenderer(Renderer):\n def 
__init__(self):\n super().__init__()\n\n def render(\n self, validation_result=None, data_docs_pages=None, notify_with=None,\n ):\n default_text = (\n \"No validation occurred. Please ensure you passed a validation_result.\"\n )\n status = \"Failed :x:\"\n\n title_block = {\n \"type\": \"section\",\n \"text\": {\"type\": \"mrkdwn\", \"text\": default_text,},\n }\n\n query = {\n \"blocks\": [title_block],\n # this abbreviated root level \"text\" will show up in the notification and not the message\n \"text\": default_text,\n }\n\n if validation_result:\n expectation_suite_name = validation_result.meta.get(\n \"expectation_suite_name\", \"__no_expectation_suite_name__\"\n )\n\n if \"batch_kwargs\" in validation_result.meta:\n data_asset_name = validation_result.meta[\"batch_kwargs\"].get(\n \"data_asset_name\", \"__no_data_asset_name__\"\n )\n else:\n data_asset_name = \"__no_data_asset_name__\"\n\n n_checks_succeeded = validation_result.statistics[\"successful_expectations\"]\n n_checks = validation_result.statistics[\"evaluated_expectations\"]\n run_id = validation_result.meta.get(\"run_id\", \"__no_run_id__\")\n batch_id = BatchKwargs(\n validation_result.meta.get(\"batch_kwargs\", {})\n ).to_id()\n check_details_text = (\n f\"*{n_checks_succeeded}* of *{n_checks}* expectations were met\"\n )\n\n if validation_result.success:\n status = \"Success :tada:\"\n\n summary_text = f\"\"\"*Batch Validation Status*: {status}\n*Expectation suite name*: `{expectation_suite_name}`\n*Data asset name*: `{data_asset_name}`\n*Run ID*: `{run_id}`\n*Batch ID*: `{batch_id}`\n*Summary*: {check_details_text}\"\"\"\n query[\"blocks\"][0][\"text\"][\"text\"] = summary_text\n # this abbreviated root level \"text\" will show up in the notification and not the message\n query[\"text\"] = f\"{expectation_suite_name}: {status}\"\n\n if data_docs_pages:\n if notify_with is not None:\n for docs_link_key in notify_with:\n if docs_link_key in data_docs_pages.keys():\n docs_link = data_docs_pages[docs_link_key]\n report_element = self._get_report_element(docs_link)\n else:\n logger.critical(\n f\"*ERROR*: Slack is trying to provide a link to the following DataDocs: `{str(docs_link_key)}`, but it is not configured under `data_docs_sites` in the `great_expectations.yml`\\n\"\n )\n report_element = {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": f\"*ERROR*: Slack is trying to provide a link to the following DataDocs: `{str(docs_link_key)}`, but it is not configured under `data_docs_sites` in the `great_expectations.yml`\\n\",\n },\n }\n if report_element:\n query[\"blocks\"].append(report_element)\n else:\n for docs_link_key in data_docs_pages.keys():\n if docs_link_key == \"class\":\n continue\n docs_link = data_docs_pages[docs_link_key]\n report_element = self._get_report_element(docs_link)\n if report_element:\n query[\"blocks\"].append(report_element)\n\n if \"result_reference\" in validation_result.meta:\n result_reference = validation_result.meta[\"result_reference\"]\n report_element = {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": f\"- *Validation Report*: {result_reference}\",\n },\n }\n query[\"blocks\"].append(report_element)\n\n if \"dataset_reference\" in validation_result.meta:\n dataset_reference = validation_result.meta[\"dataset_reference\"]\n dataset_element = {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": f\"- *Validation data asset*: {dataset_reference}\",\n },\n }\n query[\"blocks\"].append(dataset_element)\n\n 
custom_blocks = self._custom_blocks(evr=validation_result)\n if custom_blocks:\n query[\"blocks\"].append(custom_blocks)\n\n documentation_url = \"https://docs.greatexpectations.io/en/latest/guides/tutorials/getting_started/set_up_data_docs.html\"\n footer_section = {\n \"type\": \"context\",\n \"elements\": [\n {\n \"type\": \"mrkdwn\",\n \"text\": f\"Learn how to review validation results in Data Docs: {documentation_url}\",\n }\n ],\n }\n\n divider_block = {\"type\": \"divider\"}\n query[\"blocks\"].append(divider_block)\n query[\"blocks\"].append(footer_section)\n return query\n\n def _custom_blocks(self, evr):\n return None\n\n def _get_report_element(self, docs_link):\n if docs_link is None:\n logger.warn(\"No docs link found. Skipping data docs link in slack message.\")\n return\n\n if \"file://\" in docs_link:\n # handle special case since Slack does not render these links\n report_element = {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": f\"*DataDocs* can be found here: `{docs_link}` \\n (Please copy and paste link into a browser to view)\\n\",\n },\n }\n else:\n report_element = {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": f\"*DataDocs* can be found here: <{docs_link}|{docs_link}>\",\n },\n }\n return report_element\n", "path": "great_expectations/render/renderer/slack_renderer.py"}], "after_files": [{"content": "import logging\n\nfrom great_expectations.exceptions import InvalidKeyError\n\nlogger = logging.getLogger(__name__)\n\nfrom ...core.id_dict import BatchKwargs\nfrom .renderer import Renderer\n\n\nclass SlackRenderer(Renderer):\n def __init__(self):\n super().__init__()\n\n def render(\n self, validation_result=None, data_docs_pages=None, notify_with=None,\n ):\n default_text = (\n \"No validation occurred. 
Please ensure you passed a validation_result.\"\n )\n status = \"Failed :x:\"\n\n title_block = {\n \"type\": \"section\",\n \"text\": {\"type\": \"mrkdwn\", \"text\": default_text,},\n }\n\n query = {\n \"blocks\": [title_block],\n # this abbreviated root level \"text\" will show up in the notification and not the message\n \"text\": default_text,\n }\n\n if validation_result:\n expectation_suite_name = validation_result.meta.get(\n \"expectation_suite_name\", \"__no_expectation_suite_name__\"\n )\n\n if \"batch_kwargs\" in validation_result.meta:\n data_asset_name = validation_result.meta[\"batch_kwargs\"].get(\n \"data_asset_name\", \"__no_data_asset_name__\"\n )\n else:\n data_asset_name = \"__no_data_asset_name__\"\n\n n_checks_succeeded = validation_result.statistics[\"successful_expectations\"]\n n_checks = validation_result.statistics[\"evaluated_expectations\"]\n run_id = validation_result.meta.get(\"run_id\", \"__no_run_id__\")\n batch_id = BatchKwargs(\n validation_result.meta.get(\"batch_kwargs\", {})\n ).to_id()\n check_details_text = (\n f\"*{n_checks_succeeded}* of *{n_checks}* expectations were met\"\n )\n\n if validation_result.success:\n status = \"Success :tada:\"\n\n summary_text = f\"\"\"*Batch Validation Status*: {status}\n*Expectation suite name*: `{expectation_suite_name}`\n*Data asset name*: `{data_asset_name}`\n*Run ID*: `{run_id}`\n*Batch ID*: `{batch_id}`\n*Summary*: {check_details_text}\"\"\"\n query[\"blocks\"][0][\"text\"][\"text\"] = summary_text\n # this abbreviated root level \"text\" will show up in the notification and not the message\n query[\"text\"] = f\"{expectation_suite_name}: {status}\"\n\n if data_docs_pages:\n if notify_with is not None:\n for docs_link_key in notify_with:\n if docs_link_key in data_docs_pages.keys():\n docs_link = data_docs_pages[docs_link_key]\n report_element = self._get_report_element(docs_link)\n else:\n logger.critical(\n f\"*ERROR*: Slack is trying to provide a link to the following DataDocs: `{str(docs_link_key)}`, but it is not configured under `data_docs_sites` in the `great_expectations.yml`\\n\"\n )\n report_element = {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": f\"*ERROR*: Slack is trying to provide a link to the following DataDocs: `{str(docs_link_key)}`, but it is not configured under `data_docs_sites` in the `great_expectations.yml`\\n\",\n },\n }\n if report_element:\n query[\"blocks\"].append(report_element)\n else:\n for docs_link_key in data_docs_pages.keys():\n if docs_link_key == \"class\":\n continue\n docs_link = data_docs_pages[docs_link_key]\n report_element = self._get_report_element(docs_link)\n if report_element:\n query[\"blocks\"].append(report_element)\n\n if \"result_reference\" in validation_result.meta:\n result_reference = validation_result.meta[\"result_reference\"]\n report_element = {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": f\"- *Validation Report*: {result_reference}\",\n },\n }\n query[\"blocks\"].append(report_element)\n\n if \"dataset_reference\" in validation_result.meta:\n dataset_reference = validation_result.meta[\"dataset_reference\"]\n dataset_element = {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": f\"- *Validation data asset*: {dataset_reference}\",\n },\n }\n query[\"blocks\"].append(dataset_element)\n\n documentation_url = \"https://docs.greatexpectations.io/en/latest/guides/tutorials/getting_started/set_up_data_docs.html\"\n footer_section = {\n \"type\": \"context\",\n 
\"elements\": [\n {\n \"type\": \"mrkdwn\",\n \"text\": f\"Learn how to review validation results in Data Docs: {documentation_url}\",\n }\n ],\n }\n\n divider_block = {\"type\": \"divider\"}\n query[\"blocks\"].append(divider_block)\n query[\"blocks\"].append(footer_section)\n return query\n\n def _get_report_element(self, docs_link):\n report_element = None\n if docs_link:\n try:\n if \"file://\" in docs_link:\n # handle special case since Slack does not render these links\n report_element = {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": f\"*DataDocs* can be found here: `{docs_link}` \\n (Please copy and paste link into a browser to view)\\n\",\n },\n }\n else:\n report_element = {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": f\"*DataDocs* can be found here: <{docs_link}|{docs_link}>\",\n },\n }\n except Exception as e:\n logger.warning(\n f\"\"\"SlackRenderer had a problem with generating the docs link.\n link used to generate the docs link is: {docs_link} and is of type: {type(docs_link)}.\n Error: {e}\"\"\"\n )\n return\n else:\n logger.warning(\n \"No docs link found. Skipping data docs link in Slack message.\"\n )\n return report_element\n", "path": "great_expectations/render/renderer/slack_renderer.py"}]}
| 2,723 | 675 |
gh_patches_debug_8920
|
rasdani/github-patches
|
git_diff
|
networkx__networkx-3378
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
2.3rc1 drops python 2.7 support in a minor version
People using `NetworkX~=2.2` as their pip version specifier, but still on Python 2.7, will get this message now that 2.3rc1 is out:
```
NetworkX requires Python 3.5 or later (2.7 detected).
```
This happens with no changes to their code.
Would dropping Python 2.7 support be enough of a change to necessitate a new major version of NetworkX?
--- END ISSUE ---
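The usual way a project drops an old interpreter without breaking `pip install` for users of that interpreter is the `python_requires` packaging metadata, which pip (9.0+) consults when choosing a release. A minimal sketch with a hypothetical package name (not NetworkX's actual setup.py):

```python
from setuptools import setup

setup(
    name="example-package",
    version="2.3",
    # pip reads this as the Requires-Python metadata and skips releases the
    # running interpreter cannot satisfy, so a Python 2.7 user installing
    # "example-package~=2.2" keeps resolving to the latest 2.2.x release.
    python_requires=">=3.5",
)
```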
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `networkx/release.py`
Content:
```
1 """Release data for NetworkX.
2
3 When NetworkX is imported a number of steps are followed to determine
4 the version information.
5
6 1) If the release is not a development release (dev=False), then version
7 information is read from version.py, a file containing statically
8 defined version information. This file should exist on every
9 downloadable release of NetworkX since setup.py creates it during
10 packaging/installation. However, version.py might not exist if one
11 is running NetworkX from the mercurial repository. In the event that
12 version.py does not exist, then no vcs information will be available.
13
14 2) If the release is a development release, then version information
15 is read dynamically, when possible. If no dynamic information can be
16 read, then an attempt is made to read the information from version.py.
17 If version.py does not exist, then no vcs information will be available.
18
19 Clarification:
20 version.py is created only by setup.py
21
22 When setup.py creates version.py, it does so before packaging/installation.
23 So the created file is included in the source distribution. When a user
24 downloads a tar.gz file and extracts the files, the files will not be in a
25 live version control repository. So when the user runs setup.py to install
26 NetworkX, we must make sure write_versionfile() does not overwrite the
27 revision information contained in the version.py that was included in the
28 tar.gz file. This is why write_versionfile() includes an early escape.
29
30 """
31
32 # Copyright (C) 2004-2019 by
33 # Aric Hagberg <[email protected]>
34 # Dan Schult <[email protected]>
35 # Pieter Swart <[email protected]>
36 # All rights reserved.
37 # BSD license.
38
39 from __future__ import absolute_import
40
41 import os
42 import sys
43 import time
44 import datetime
45
46 basedir = os.path.abspath(os.path.split(__file__)[0])
47
48
49 def write_versionfile():
50 """Creates a static file containing version information."""
51 versionfile = os.path.join(basedir, 'version.py')
52
53 text = '''"""
54 Version information for NetworkX, created during installation.
55
56 Do not add this file to the repository.
57
58 """
59
60 import datetime
61
62 version = %(version)r
63 date = %(date)r
64
65 # Was NetworkX built from a development version? If so, remember that the major
66 # and minor versions reference the "target" (rather than "current") release.
67 dev = %(dev)r
68
69 # Format: (name, major, min, revision)
70 version_info = %(version_info)r
71
72 # Format: a 'datetime.datetime' instance
73 date_info = %(date_info)r
74
75 # Format: (vcs, vcs_tuple)
76 vcs_info = %(vcs_info)r
77
78 '''
79
80 # Try to update all information
81 date, date_info, version, version_info, vcs_info = get_info(dynamic=True)
82
83 def writefile():
84 fh = open(versionfile, 'w')
85 subs = {
86 'dev': dev,
87 'version': version,
88 'version_info': version_info,
89 'date': date,
90 'date_info': date_info,
91 'vcs_info': vcs_info
92 }
93 fh.write(text % subs)
94 fh.close()
95
96 if vcs_info[0] == 'mercurial':
97 # Then, we want to update version.py.
98 writefile()
99 else:
100 if os.path.isfile(versionfile):
101 # This is *good*, and the most likely place users will be when
102 # running setup.py. We do not want to overwrite version.py.
103 # Grab the version so that setup can use it.
104 # sys.path.insert(0, basedir)
105 from version import version
106 # del sys.path[0]
107 else:
108 # This is *bad*. It means the user might have a tarball that
109 # does not include version.py. Let this error raise so we can
110 # fix the tarball.
111 # raise Exception('version.py not found!')
112
113 # We no longer require that prepared tarballs include a version.py
114 # So we use the possibly trunctated value from get_info()
115 # Then we write a new file.
116 writefile()
117
118 return version
119
120
121 def get_revision():
122 """Returns revision and vcs information, dynamically obtained."""
123 vcs, revision, tag = None, None, None
124
125 gitdir = os.path.join(basedir, '..', '.git')
126
127 if os.path.isdir(gitdir):
128 vcs = 'git'
129 # For now, we are not bothering with revision and tag.
130
131 vcs_info = (vcs, (revision, tag))
132
133 return revision, vcs_info
134
135
136 def get_info(dynamic=True):
137 # Date information
138 date_info = datetime.datetime.utcfromtimestamp(int(os.environ.get('SOURCE_DATE_EPOCH', time.time())))
139 date = time.asctime(date_info.timetuple())
140
141 revision, version, version_info, vcs_info = None, None, None, None
142
143 import_failed = False
144 dynamic_failed = False
145
146 if dynamic:
147 revision, vcs_info = get_revision()
148 if revision is None:
149 dynamic_failed = True
150
151 if dynamic_failed or not dynamic:
152 # This is where most final releases of NetworkX will be.
153 # All info should come from version.py. If it does not exist, then
154 # no vcs information will be provided.
155 # sys.path.insert(0, basedir)
156 try:
157 from version import date, date_info, version, version_info, vcs_info
158 except ImportError:
159 import_failed = True
160 vcs_info = (None, (None, None))
161 else:
162 revision = vcs_info[1][0]
163 #del sys.path[0]
164
165 if import_failed or (dynamic and not dynamic_failed):
166 # We are here if:
167 # we failed to determine static versioning info, or
168 # we successfully obtained dynamic revision info
169 version = ''.join([str(major), '.', str(minor)])
170 if dev:
171 version += '.dev_' + date_info.strftime("%Y%m%d%H%M%S")
172 version_info = (name, major, minor, revision)
173
174 return date, date_info, version, version_info, vcs_info
175
176
177 # Version information
178 name = 'networkx'
179 major = "2"
180 minor = "3rc2"
181
182
183 # Declare current release as a development release.
184 # Change to False before tagging a release; then change back.
185 dev = True
186
187
188 description = "Python package for creating and manipulating graphs and networks"
189
190 long_description = \
191 """
192 NetworkX is a Python package for the creation, manipulation, and
193 study of the structure, dynamics, and functions of complex networks.
194
195 """
196 license = 'BSD'
197 authors = {'Hagberg': ('Aric Hagberg', '[email protected]'),
198 'Schult': ('Dan Schult', '[email protected]'),
199 'Swart': ('Pieter Swart', '[email protected]')}
200 maintainer = "NetworkX Developers"
201 maintainer_email = "[email protected]"
202 url = 'http://networkx.github.io/'
203 download_url = 'https://pypi.python.org/pypi/networkx/'
204 platforms = ['Linux', 'Mac OSX', 'Windows', 'Unix']
205 keywords = ['Networks', 'Graph Theory', 'Mathematics',
206 'network', 'graph', 'discrete mathematics', 'math']
207 classifiers = [
208 'Development Status :: 5 - Production/Stable',
209 'Intended Audience :: Developers',
210 'Intended Audience :: Science/Research',
211 'License :: OSI Approved :: BSD License',
212 'Operating System :: OS Independent',
213 'Programming Language :: Python :: 3',
214 'Programming Language :: Python :: 3.5',
215 'Programming Language :: Python :: 3.6',
216 'Programming Language :: Python :: 3.7',
217 'Topic :: Software Development :: Libraries :: Python Modules',
218 'Topic :: Scientific/Engineering :: Bio-Informatics',
219 'Topic :: Scientific/Engineering :: Information Analysis',
220 'Topic :: Scientific/Engineering :: Mathematics',
221 'Topic :: Scientific/Engineering :: Physics']
222
223 date, date_info, version, version_info, vcs_info = get_info()
224
225 if __name__ == '__main__':
226 # Write versionfile for nightly snapshots.
227 write_versionfile()
228
```
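One detail from `release.py` that is easy to miss when reading the whole file: for a development checkout, `get_info()` builds the version string from the `major`/`minor` constants plus a timestamp suffix, which is why an unreleased checkout already reports a 2.3 pre-release version. The following is a stripped-down sketch of that construction, with the constants copied from the listing and a hard-coded stand-in for the build time:

```python
# Stripped-down sketch of the dev-version construction in networkx/release.py.
import datetime

name = "networkx"
major = "2"
minor = "3rc2"
dev = True

date_info = datetime.datetime(2019, 3, 1, 12, 0, 0)  # stand-in for the build time
version = "".join([str(major), ".", str(minor)])
if dev:
    # Development builds get a timestamp suffix, e.g. "2.3rc2.dev_20190301120000".
    version += ".dev_" + date_info.strftime("%Y%m%d%H%M%S")

print(version)
```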
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/networkx/release.py b/networkx/release.py
--- a/networkx/release.py
+++ b/networkx/release.py
@@ -214,6 +214,7 @@
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
+ 'Programming Language :: Python :: 3 :: Only',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: Scientific/Engineering :: Bio-Informatics',
'Topic :: Scientific/Engineering :: Information Analysis',
|
{"golden_diff": "diff --git a/networkx/release.py b/networkx/release.py\n--- a/networkx/release.py\n+++ b/networkx/release.py\n@@ -214,6 +214,7 @@\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n+ 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Scientific/Engineering :: Bio-Informatics',\n 'Topic :: Scientific/Engineering :: Information Analysis',\n", "issue": "2.3rc1 drops python 2.7 support in a minor version \nPeople using `NetworkX~=2.2` as their version specifier to pip, but still on Python 2.7 will get this message now that 2.3rc1 is out:\r\n\r\n```\r\nNetworkX requires Python 3.5 or later (2.7 detected).\r\n```\r\n\r\nThis happens with no changes to their code. \r\n\r\nWould dropping Python 2.7 support be enough of a change to necessitate a new major version of NetworkX?\n", "before_files": [{"content": "\"\"\"Release data for NetworkX.\n\nWhen NetworkX is imported a number of steps are followed to determine\nthe version information.\n\n 1) If the release is not a development release (dev=False), then version\n information is read from version.py, a file containing statically\n defined version information. This file should exist on every\n downloadable release of NetworkX since setup.py creates it during\n packaging/installation. However, version.py might not exist if one\n is running NetworkX from the mercurial repository. In the event that\n version.py does not exist, then no vcs information will be available.\n\n 2) If the release is a development release, then version information\n is read dynamically, when possible. If no dynamic information can be\n read, then an attempt is made to read the information from version.py.\n If version.py does not exist, then no vcs information will be available.\n\nClarification:\n version.py is created only by setup.py\n\nWhen setup.py creates version.py, it does so before packaging/installation.\nSo the created file is included in the source distribution. When a user\ndownloads a tar.gz file and extracts the files, the files will not be in a\nlive version control repository. So when the user runs setup.py to install\nNetworkX, we must make sure write_versionfile() does not overwrite the\nrevision information contained in the version.py that was included in the\ntar.gz file. This is why write_versionfile() includes an early escape.\n\n\"\"\"\n\n# Copyright (C) 2004-2019 by\n# Aric Hagberg <[email protected]>\n# Dan Schult <[email protected]>\n# Pieter Swart <[email protected]>\n# All rights reserved.\n# BSD license.\n\nfrom __future__ import absolute_import\n\nimport os\nimport sys\nimport time\nimport datetime\n\nbasedir = os.path.abspath(os.path.split(__file__)[0])\n\n\ndef write_versionfile():\n \"\"\"Creates a static file containing version information.\"\"\"\n versionfile = os.path.join(basedir, 'version.py')\n\n text = '''\"\"\"\nVersion information for NetworkX, created during installation.\n\nDo not add this file to the repository.\n\n\"\"\"\n\nimport datetime\n\nversion = %(version)r\ndate = %(date)r\n\n# Was NetworkX built from a development version? 
If so, remember that the major\n# and minor versions reference the \"target\" (rather than \"current\") release.\ndev = %(dev)r\n\n# Format: (name, major, min, revision)\nversion_info = %(version_info)r\n\n# Format: a 'datetime.datetime' instance\ndate_info = %(date_info)r\n\n# Format: (vcs, vcs_tuple)\nvcs_info = %(vcs_info)r\n\n'''\n\n # Try to update all information\n date, date_info, version, version_info, vcs_info = get_info(dynamic=True)\n\n def writefile():\n fh = open(versionfile, 'w')\n subs = {\n 'dev': dev,\n 'version': version,\n 'version_info': version_info,\n 'date': date,\n 'date_info': date_info,\n 'vcs_info': vcs_info\n }\n fh.write(text % subs)\n fh.close()\n\n if vcs_info[0] == 'mercurial':\n # Then, we want to update version.py.\n writefile()\n else:\n if os.path.isfile(versionfile):\n # This is *good*, and the most likely place users will be when\n # running setup.py. We do not want to overwrite version.py.\n # Grab the version so that setup can use it.\n # sys.path.insert(0, basedir)\n from version import version\n # del sys.path[0]\n else:\n # This is *bad*. It means the user might have a tarball that\n # does not include version.py. Let this error raise so we can\n # fix the tarball.\n # raise Exception('version.py not found!')\n\n # We no longer require that prepared tarballs include a version.py\n # So we use the possibly trunctated value from get_info()\n # Then we write a new file.\n writefile()\n\n return version\n\n\ndef get_revision():\n \"\"\"Returns revision and vcs information, dynamically obtained.\"\"\"\n vcs, revision, tag = None, None, None\n\n gitdir = os.path.join(basedir, '..', '.git')\n\n if os.path.isdir(gitdir):\n vcs = 'git'\n # For now, we are not bothering with revision and tag.\n\n vcs_info = (vcs, (revision, tag))\n\n return revision, vcs_info\n\n\ndef get_info(dynamic=True):\n # Date information\n date_info = datetime.datetime.utcfromtimestamp(int(os.environ.get('SOURCE_DATE_EPOCH', time.time())))\n date = time.asctime(date_info.timetuple())\n\n revision, version, version_info, vcs_info = None, None, None, None\n\n import_failed = False\n dynamic_failed = False\n\n if dynamic:\n revision, vcs_info = get_revision()\n if revision is None:\n dynamic_failed = True\n\n if dynamic_failed or not dynamic:\n # This is where most final releases of NetworkX will be.\n # All info should come from version.py. 
If it does not exist, then\n # no vcs information will be provided.\n # sys.path.insert(0, basedir)\n try:\n from version import date, date_info, version, version_info, vcs_info\n except ImportError:\n import_failed = True\n vcs_info = (None, (None, None))\n else:\n revision = vcs_info[1][0]\n #del sys.path[0]\n\n if import_failed or (dynamic and not dynamic_failed):\n # We are here if:\n # we failed to determine static versioning info, or\n # we successfully obtained dynamic revision info\n version = ''.join([str(major), '.', str(minor)])\n if dev:\n version += '.dev_' + date_info.strftime(\"%Y%m%d%H%M%S\")\n version_info = (name, major, minor, revision)\n\n return date, date_info, version, version_info, vcs_info\n\n\n# Version information\nname = 'networkx'\nmajor = \"2\"\nminor = \"3rc2\"\n\n\n# Declare current release as a development release.\n# Change to False before tagging a release; then change back.\ndev = True\n\n\ndescription = \"Python package for creating and manipulating graphs and networks\"\n\nlong_description = \\\n \"\"\"\nNetworkX is a Python package for the creation, manipulation, and\nstudy of the structure, dynamics, and functions of complex networks.\n\n\"\"\"\nlicense = 'BSD'\nauthors = {'Hagberg': ('Aric Hagberg', '[email protected]'),\n 'Schult': ('Dan Schult', '[email protected]'),\n 'Swart': ('Pieter Swart', '[email protected]')}\nmaintainer = \"NetworkX Developers\"\nmaintainer_email = \"[email protected]\"\nurl = 'http://networkx.github.io/'\ndownload_url = 'https://pypi.python.org/pypi/networkx/'\nplatforms = ['Linux', 'Mac OSX', 'Windows', 'Unix']\nkeywords = ['Networks', 'Graph Theory', 'Mathematics',\n 'network', 'graph', 'discrete mathematics', 'math']\nclassifiers = [\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Scientific/Engineering :: Bio-Informatics',\n 'Topic :: Scientific/Engineering :: Information Analysis',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Scientific/Engineering :: Physics']\n\ndate, date_info, version, version_info, vcs_info = get_info()\n\nif __name__ == '__main__':\n # Write versionfile for nightly snapshots.\n write_versionfile()\n", "path": "networkx/release.py"}], "after_files": [{"content": "\"\"\"Release data for NetworkX.\n\nWhen NetworkX is imported a number of steps are followed to determine\nthe version information.\n\n 1) If the release is not a development release (dev=False), then version\n information is read from version.py, a file containing statically\n defined version information. This file should exist on every\n downloadable release of NetworkX since setup.py creates it during\n packaging/installation. However, version.py might not exist if one\n is running NetworkX from the mercurial repository. In the event that\n version.py does not exist, then no vcs information will be available.\n\n 2) If the release is a development release, then version information\n is read dynamically, when possible. 
If no dynamic information can be\n read, then an attempt is made to read the information from version.py.\n If version.py does not exist, then no vcs information will be available.\n\nClarification:\n version.py is created only by setup.py\n\nWhen setup.py creates version.py, it does so before packaging/installation.\nSo the created file is included in the source distribution. When a user\ndownloads a tar.gz file and extracts the files, the files will not be in a\nlive version control repository. So when the user runs setup.py to install\nNetworkX, we must make sure write_versionfile() does not overwrite the\nrevision information contained in the version.py that was included in the\ntar.gz file. This is why write_versionfile() includes an early escape.\n\n\"\"\"\n\n# Copyright (C) 2004-2019 by\n# Aric Hagberg <[email protected]>\n# Dan Schult <[email protected]>\n# Pieter Swart <[email protected]>\n# All rights reserved.\n# BSD license.\n\nfrom __future__ import absolute_import\n\nimport os\nimport sys\nimport time\nimport datetime\n\nbasedir = os.path.abspath(os.path.split(__file__)[0])\n\n\ndef write_versionfile():\n \"\"\"Creates a static file containing version information.\"\"\"\n versionfile = os.path.join(basedir, 'version.py')\n\n text = '''\"\"\"\nVersion information for NetworkX, created during installation.\n\nDo not add this file to the repository.\n\n\"\"\"\n\nimport datetime\n\nversion = %(version)r\ndate = %(date)r\n\n# Was NetworkX built from a development version? If so, remember that the major\n# and minor versions reference the \"target\" (rather than \"current\") release.\ndev = %(dev)r\n\n# Format: (name, major, min, revision)\nversion_info = %(version_info)r\n\n# Format: a 'datetime.datetime' instance\ndate_info = %(date_info)r\n\n# Format: (vcs, vcs_tuple)\nvcs_info = %(vcs_info)r\n\n'''\n\n # Try to update all information\n date, date_info, version, version_info, vcs_info = get_info(dynamic=True)\n\n def writefile():\n fh = open(versionfile, 'w')\n subs = {\n 'dev': dev,\n 'version': version,\n 'version_info': version_info,\n 'date': date,\n 'date_info': date_info,\n 'vcs_info': vcs_info\n }\n fh.write(text % subs)\n fh.close()\n\n if vcs_info[0] == 'mercurial':\n # Then, we want to update version.py.\n writefile()\n else:\n if os.path.isfile(versionfile):\n # This is *good*, and the most likely place users will be when\n # running setup.py. We do not want to overwrite version.py.\n # Grab the version so that setup can use it.\n # sys.path.insert(0, basedir)\n from version import version\n # del sys.path[0]\n else:\n # This is *bad*. It means the user might have a tarball that\n # does not include version.py. 
Let this error raise so we can\n # fix the tarball.\n # raise Exception('version.py not found!')\n\n # We no longer require that prepared tarballs include a version.py\n # So we use the possibly trunctated value from get_info()\n # Then we write a new file.\n writefile()\n\n return version\n\n\ndef get_revision():\n \"\"\"Returns revision and vcs information, dynamically obtained.\"\"\"\n vcs, revision, tag = None, None, None\n\n gitdir = os.path.join(basedir, '..', '.git')\n\n if os.path.isdir(gitdir):\n vcs = 'git'\n # For now, we are not bothering with revision and tag.\n\n vcs_info = (vcs, (revision, tag))\n\n return revision, vcs_info\n\n\ndef get_info(dynamic=True):\n # Date information\n date_info = datetime.datetime.utcfromtimestamp(int(os.environ.get('SOURCE_DATE_EPOCH', time.time())))\n date = time.asctime(date_info.timetuple())\n\n revision, version, version_info, vcs_info = None, None, None, None\n\n import_failed = False\n dynamic_failed = False\n\n if dynamic:\n revision, vcs_info = get_revision()\n if revision is None:\n dynamic_failed = True\n\n if dynamic_failed or not dynamic:\n # This is where most final releases of NetworkX will be.\n # All info should come from version.py. If it does not exist, then\n # no vcs information will be provided.\n # sys.path.insert(0, basedir)\n try:\n from version import date, date_info, version, version_info, vcs_info\n except ImportError:\n import_failed = True\n vcs_info = (None, (None, None))\n else:\n revision = vcs_info[1][0]\n #del sys.path[0]\n\n if import_failed or (dynamic and not dynamic_failed):\n # We are here if:\n # we failed to determine static versioning info, or\n # we successfully obtained dynamic revision info\n version = ''.join([str(major), '.', str(minor)])\n if dev:\n version += '.dev_' + date_info.strftime(\"%Y%m%d%H%M%S\")\n version_info = (name, major, minor, revision)\n\n return date, date_info, version, version_info, vcs_info\n\n\n# Version information\nname = 'networkx'\nmajor = \"2\"\nminor = \"3rc2\"\n\n\n# Declare current release as a development release.\n# Change to False before tagging a release; then change back.\ndev = True\n\n\ndescription = \"Python package for creating and manipulating graphs and networks\"\n\nlong_description = \\\n \"\"\"\nNetworkX is a Python package for the creation, manipulation, and\nstudy of the structure, dynamics, and functions of complex networks.\n\n\"\"\"\nlicense = 'BSD'\nauthors = {'Hagberg': ('Aric Hagberg', '[email protected]'),\n 'Schult': ('Dan Schult', '[email protected]'),\n 'Swart': ('Pieter Swart', '[email protected]')}\nmaintainer = \"NetworkX Developers\"\nmaintainer_email = \"[email protected]\"\nurl = 'http://networkx.github.io/'\ndownload_url = 'https://pypi.python.org/pypi/networkx/'\nplatforms = ['Linux', 'Mac OSX', 'Windows', 'Unix']\nkeywords = ['Networks', 'Graph Theory', 'Mathematics',\n 'network', 'graph', 'discrete mathematics', 'math']\nclassifiers = [\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Scientific/Engineering :: Bio-Informatics',\n 'Topic :: Scientific/Engineering :: Information 
Analysis',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Scientific/Engineering :: Physics']\n\ndate, date_info, version, version_info, vcs_info = get_info()\n\nif __name__ == '__main__':\n # Write versionfile for nightly snapshots.\n write_versionfile()\n", "path": "networkx/release.py"}]}
| 2,793 | 127 |
gh_patches_debug_7530
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-3780
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`ExponentialHistogram` is missing the `to_json` method
The [`ExponentialHistogram`](https://github.com/open-telemetry/opentelemetry-python/blob/3dfe2249cc4a203bf24578483b192fec7266596b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py#L93-L102) data point type was [somewhat recently added](https://github.com/open-telemetry/opentelemetry-python/pull/2964), and does not work with the `ConsoleMetricExporter` because it is missing a `to_json` method (which seems to be defined on the other data point types).
This is the stack trace I get locally:
```
Exception while exporting metrics 'ExponentialHistogram' object has no attribute 'to_json'
Traceback (most recent call last):
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/export/__init__.py", line 523, in _receive_metrics
self._exporter.export(
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/export/__init__.py", line 163, in export
self.out.write(self.formatter(metrics_data))
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/export/__init__.py", line 143, in <lambda>
] = lambda metrics_data: metrics_data.to_json()
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py", line 252, in to_json
"resource_metrics": [
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py", line 253, in <listcomp>
loads(resource_metrics.to_json(indent=indent))
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py", line 233, in to_json
"scope_metrics": [
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py", line 234, in <listcomp>
loads(scope_metrics.to_json(indent=indent))
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py", line 211, in to_json
"metrics": [
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py", line 212, in <listcomp>
loads(metric.to_json(indent=indent))
File "/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py", line 193, in to_json
"data": loads(self.data.to_json(indent=indent)),
AttributeError: 'ExponentialHistogram' object has no attribute 'to_json'
```
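For reference, every other data container in `point.py` follows the same serialization pattern, so the missing piece is presumably a `to_json` that mirrors `Histogram.to_json`. The class below is only a simplified stand-in used to illustrate that pattern; the real SDK field types (`ExponentialHistogramDataPoint`, `AggregationTemporality`) are replaced with generic placeholders here.

```python
# Illustrative sketch only: a simplified stand-in showing the to_json pattern
# that the other metric data types in point.py already implement.
from dataclasses import dataclass
from json import dumps, loads
from typing import Sequence


@dataclass(frozen=True)
class ExponentialHistogramSketch:
    # Placeholders for the real SDK field types.
    data_points: Sequence[object]  # objects exposing a to_json(indent=...) method
    aggregation_temporality: int

    def to_json(self, indent=4) -> str:
        # Mirrors Histogram.to_json: serialize each data point, then wrap the
        # list together with the aggregation temporality into one JSON object.
        return dumps(
            {
                "data_points": [
                    loads(data_point.to_json(indent=indent))
                    for data_point in self.data_points
                ],
                "aggregation_temporality": self.aggregation_temporality,
            },
            indent=indent,
        )
```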
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # pylint: disable=unused-import
16
17 from dataclasses import asdict, dataclass
18 from json import dumps, loads
19 from typing import Optional, Sequence, Union
20
21 # This kind of import is needed to avoid Sphinx errors.
22 import opentelemetry.sdk.metrics._internal
23 from opentelemetry.sdk.resources import Resource
24 from opentelemetry.sdk.util.instrumentation import InstrumentationScope
25 from opentelemetry.util.types import Attributes
26
27
28 @dataclass(frozen=True)
29 class NumberDataPoint:
30 """Single data point in a timeseries that describes the time-varying scalar
31 value of a metric.
32 """
33
34 attributes: Attributes
35 start_time_unix_nano: int
36 time_unix_nano: int
37 value: Union[int, float]
38
39 def to_json(self, indent=4) -> str:
40 return dumps(asdict(self), indent=indent)
41
42
43 @dataclass(frozen=True)
44 class HistogramDataPoint:
45 """Single data point in a timeseries that describes the time-varying scalar
46 value of a metric.
47 """
48
49 attributes: Attributes
50 start_time_unix_nano: int
51 time_unix_nano: int
52 count: int
53 sum: Union[int, float]
54 bucket_counts: Sequence[int]
55 explicit_bounds: Sequence[float]
56 min: float
57 max: float
58
59 def to_json(self, indent=4) -> str:
60 return dumps(asdict(self), indent=indent)
61
62
63 @dataclass(frozen=True)
64 class Buckets:
65 offset: int
66 bucket_counts: Sequence[int]
67
68
69 @dataclass(frozen=True)
70 class ExponentialHistogramDataPoint:
71 """Single data point in a timeseries whose boundaries are defined by an
72 exponential function. This timeseries describes the time-varying scalar
73 value of a metric.
74 """
75
76 attributes: Attributes
77 start_time_unix_nano: int
78 time_unix_nano: int
79 count: int
80 sum: Union[int, float]
81 scale: int
82 zero_count: int
83 positive: Buckets
84 negative: Buckets
85 flags: int
86 min: float
87 max: float
88
89 def to_json(self, indent=4) -> str:
90 return dumps(asdict(self), indent=indent)
91
92
93 @dataclass(frozen=True)
94 class ExponentialHistogram:
95 """Represents the type of a metric that is calculated by aggregating as an
96 ExponentialHistogram of all reported measurements over a time interval.
97 """
98
99 data_points: Sequence[ExponentialHistogramDataPoint]
100 aggregation_temporality: (
101 "opentelemetry.sdk.metrics.export.AggregationTemporality"
102 )
103
104
105 @dataclass(frozen=True)
106 class Sum:
107 """Represents the type of a scalar metric that is calculated as a sum of
108 all reported measurements over a time interval."""
109
110 data_points: Sequence[NumberDataPoint]
111 aggregation_temporality: (
112 "opentelemetry.sdk.metrics.export.AggregationTemporality"
113 )
114 is_monotonic: bool
115
116 def to_json(self, indent=4) -> str:
117 return dumps(
118 {
119 "data_points": [
120 loads(data_point.to_json(indent=indent))
121 for data_point in self.data_points
122 ],
123 "aggregation_temporality": self.aggregation_temporality,
124 "is_monotonic": self.is_monotonic,
125 },
126 indent=indent,
127 )
128
129
130 @dataclass(frozen=True)
131 class Gauge:
132 """Represents the type of a scalar metric that always exports the current
133 value for every data point. It should be used for an unknown
134 aggregation."""
135
136 data_points: Sequence[NumberDataPoint]
137
138 def to_json(self, indent=4) -> str:
139 return dumps(
140 {
141 "data_points": [
142 loads(data_point.to_json(indent=indent))
143 for data_point in self.data_points
144 ],
145 },
146 indent=indent,
147 )
148
149
150 @dataclass(frozen=True)
151 class Histogram:
152 """Represents the type of a metric that is calculated by aggregating as a
153 histogram of all reported measurements over a time interval."""
154
155 data_points: Sequence[HistogramDataPoint]
156 aggregation_temporality: (
157 "opentelemetry.sdk.metrics.export.AggregationTemporality"
158 )
159
160 def to_json(self, indent=4) -> str:
161 return dumps(
162 {
163 "data_points": [
164 loads(data_point.to_json(indent=indent))
165 for data_point in self.data_points
166 ],
167 "aggregation_temporality": self.aggregation_temporality,
168 },
169 indent=indent,
170 )
171
172
173 # pylint: disable=invalid-name
174 DataT = Union[Sum, Gauge, Histogram]
175 DataPointT = Union[NumberDataPoint, HistogramDataPoint]
176
177
178 @dataclass(frozen=True)
179 class Metric:
180 """Represents a metric point in the OpenTelemetry data model to be
181 exported."""
182
183 name: str
184 description: Optional[str]
185 unit: Optional[str]
186 data: DataT
187
188 def to_json(self, indent=4) -> str:
189 return dumps(
190 {
191 "name": self.name,
192 "description": self.description or "",
193 "unit": self.unit or "",
194 "data": loads(self.data.to_json(indent=indent)),
195 },
196 indent=indent,
197 )
198
199
200 @dataclass(frozen=True)
201 class ScopeMetrics:
202 """A collection of Metrics produced by a scope"""
203
204 scope: InstrumentationScope
205 metrics: Sequence[Metric]
206 schema_url: str
207
208 def to_json(self, indent=4) -> str:
209 return dumps(
210 {
211 "scope": loads(self.scope.to_json(indent=indent)),
212 "metrics": [
213 loads(metric.to_json(indent=indent))
214 for metric in self.metrics
215 ],
216 "schema_url": self.schema_url,
217 },
218 indent=indent,
219 )
220
221
222 @dataclass(frozen=True)
223 class ResourceMetrics:
224 """A collection of ScopeMetrics from a Resource"""
225
226 resource: Resource
227 scope_metrics: Sequence[ScopeMetrics]
228 schema_url: str
229
230 def to_json(self, indent=4) -> str:
231 return dumps(
232 {
233 "resource": loads(self.resource.to_json(indent=indent)),
234 "scope_metrics": [
235 loads(scope_metrics.to_json(indent=indent))
236 for scope_metrics in self.scope_metrics
237 ],
238 "schema_url": self.schema_url,
239 },
240 indent=indent,
241 )
242
243
244 @dataclass(frozen=True)
245 class MetricsData:
246 """An array of ResourceMetrics"""
247
248 resource_metrics: Sequence[ResourceMetrics]
249
250 def to_json(self, indent=4) -> str:
251 return dumps(
252 {
253 "resource_metrics": [
254 loads(resource_metrics.to_json(indent=indent))
255 for resource_metrics in self.resource_metrics
256 ]
257 },
258 indent=indent,
259 )
260
```
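An aside on the file above: each container's `to_json` re-parses its children with `loads(child.to_json(...))` so that the outer `dumps` emits nested JSON objects rather than escaped strings. The toy classes below (names invented for the demo) illustrate that round-trip in isolation:

```python
# Toy illustration of the dumps/loads round-trip used throughout point.py.
from dataclasses import asdict, dataclass
from json import dumps, loads
from typing import Sequence


@dataclass(frozen=True)
class PointDemo:
    value: float

    def to_json(self, indent=4) -> str:
        return dumps(asdict(self), indent=indent)


@dataclass(frozen=True)
class ContainerDemo:
    points: Sequence[PointDemo]

    def to_json(self, indent=4) -> str:
        # loads(...) turns each child's JSON string back into a dict, so the
        # final dumps() emits nested objects instead of quoted JSON strings.
        return dumps(
            {"points": [loads(p.to_json(indent=indent)) for p in self.points]},
            indent=indent,
        )


print(ContainerDemo(points=[PointDemo(1.0), PointDemo(2.5)]).to_json())
```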
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py
--- a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py
@@ -101,6 +101,18 @@
"opentelemetry.sdk.metrics.export.AggregationTemporality"
)
+ def to_json(self, indent=4) -> str:
+ return dumps(
+ {
+ "data_points": [
+ loads(data_point.to_json(indent=indent))
+ for data_point in self.data_points
+ ],
+ "aggregation_temporality": self.aggregation_temporality,
+ },
+ indent=indent,
+ )
+
@dataclass(frozen=True)
class Sum:
|
{"golden_diff": "diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py\n--- a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py\n+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py\n@@ -101,6 +101,18 @@\n \"opentelemetry.sdk.metrics.export.AggregationTemporality\"\n )\n \n+ def to_json(self, indent=4) -> str:\n+ return dumps(\n+ {\n+ \"data_points\": [\n+ loads(data_point.to_json(indent=indent))\n+ for data_point in self.data_points\n+ ],\n+ \"aggregation_temporality\": self.aggregation_temporality,\n+ },\n+ indent=indent,\n+ )\n+\n \n @dataclass(frozen=True)\n class Sum:\n", "issue": "`ExponentialHistogram` is missing the `to_json` method\nThe [`ExponentialHistorgram`](https://github.com/open-telemetry/opentelemetry-python/blob/3dfe2249cc4a203bf24578483b192fec7266596b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py#L93-L102) data point type was [somewhat recently added](https://github.com/open-telemetry/opentelemetry-python/pull/2964), and does not work with the `ConsoleMetricExporter` because it is missing a `to_json` method (which seems to be defined on the other data point types).\r\n\r\nThis is the stack trace I get locally:\r\n\r\n```\r\nException while exporting metrics 'ExponentialHistogram' object has no attribute 'to_json'\r\nTraceback (most recent call last):\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/export/__init__.py\", line 523, in _receive_metrics\r\n self._exporter.export(\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/export/__init__.py\", line 163, in export\r\n self.out.write(self.formatter(metrics_data))\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/export/__init__.py\", line 143, in <lambda>\r\n ] = lambda metrics_data: metrics_data.to_json()\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py\", line 252, in to_json\r\n \"resource_metrics\": [\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py\", line 253, in <listcomp>\r\n loads(resource_metrics.to_json(indent=indent))\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py\", line 233, in to_json\r\n \"scope_metrics\": [\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py\", line 234, in <listcomp>\r\n loads(scope_metrics.to_json(indent=indent))\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py\", line 211, in to_json\r\n \"metrics\": [\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py\", line 212, in <listcomp>\r\n loads(metric.to_json(indent=indent))\r\n File \"/home/closeio/venv/lib/python3.10/site-packages/opentelemetry/sdk/metrics/_internal/point.py\", line 193, in to_json\r\n \"data\": loads(self.data.to_json(indent=indent)),\r\nAttributeError: 'ExponentialHistogram' object has no attribute 'to_json'\r\n```\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# 
http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint: disable=unused-import\n\nfrom dataclasses import asdict, dataclass\nfrom json import dumps, loads\nfrom typing import Optional, Sequence, Union\n\n# This kind of import is needed to avoid Sphinx errors.\nimport opentelemetry.sdk.metrics._internal\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.util.instrumentation import InstrumentationScope\nfrom opentelemetry.util.types import Attributes\n\n\n@dataclass(frozen=True)\nclass NumberDataPoint:\n \"\"\"Single data point in a timeseries that describes the time-varying scalar\n value of a metric.\n \"\"\"\n\n attributes: Attributes\n start_time_unix_nano: int\n time_unix_nano: int\n value: Union[int, float]\n\n def to_json(self, indent=4) -> str:\n return dumps(asdict(self), indent=indent)\n\n\n@dataclass(frozen=True)\nclass HistogramDataPoint:\n \"\"\"Single data point in a timeseries that describes the time-varying scalar\n value of a metric.\n \"\"\"\n\n attributes: Attributes\n start_time_unix_nano: int\n time_unix_nano: int\n count: int\n sum: Union[int, float]\n bucket_counts: Sequence[int]\n explicit_bounds: Sequence[float]\n min: float\n max: float\n\n def to_json(self, indent=4) -> str:\n return dumps(asdict(self), indent=indent)\n\n\n@dataclass(frozen=True)\nclass Buckets:\n offset: int\n bucket_counts: Sequence[int]\n\n\n@dataclass(frozen=True)\nclass ExponentialHistogramDataPoint:\n \"\"\"Single data point in a timeseries whose boundaries are defined by an\n exponential function. This timeseries describes the time-varying scalar\n value of a metric.\n \"\"\"\n\n attributes: Attributes\n start_time_unix_nano: int\n time_unix_nano: int\n count: int\n sum: Union[int, float]\n scale: int\n zero_count: int\n positive: Buckets\n negative: Buckets\n flags: int\n min: float\n max: float\n\n def to_json(self, indent=4) -> str:\n return dumps(asdict(self), indent=indent)\n\n\n@dataclass(frozen=True)\nclass ExponentialHistogram:\n \"\"\"Represents the type of a metric that is calculated by aggregating as an\n ExponentialHistogram of all reported measurements over a time interval.\n \"\"\"\n\n data_points: Sequence[ExponentialHistogramDataPoint]\n aggregation_temporality: (\n \"opentelemetry.sdk.metrics.export.AggregationTemporality\"\n )\n\n\n@dataclass(frozen=True)\nclass Sum:\n \"\"\"Represents the type of a scalar metric that is calculated as a sum of\n all reported measurements over a time interval.\"\"\"\n\n data_points: Sequence[NumberDataPoint]\n aggregation_temporality: (\n \"opentelemetry.sdk.metrics.export.AggregationTemporality\"\n )\n is_monotonic: bool\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"data_points\": [\n loads(data_point.to_json(indent=indent))\n for data_point in self.data_points\n ],\n \"aggregation_temporality\": self.aggregation_temporality,\n \"is_monotonic\": self.is_monotonic,\n },\n indent=indent,\n )\n\n\n@dataclass(frozen=True)\nclass Gauge:\n \"\"\"Represents the type of a scalar metric that always exports the current\n value for every data point. 
It should be used for an unknown\n aggregation.\"\"\"\n\n data_points: Sequence[NumberDataPoint]\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"data_points\": [\n loads(data_point.to_json(indent=indent))\n for data_point in self.data_points\n ],\n },\n indent=indent,\n )\n\n\n@dataclass(frozen=True)\nclass Histogram:\n \"\"\"Represents the type of a metric that is calculated by aggregating as a\n histogram of all reported measurements over a time interval.\"\"\"\n\n data_points: Sequence[HistogramDataPoint]\n aggregation_temporality: (\n \"opentelemetry.sdk.metrics.export.AggregationTemporality\"\n )\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"data_points\": [\n loads(data_point.to_json(indent=indent))\n for data_point in self.data_points\n ],\n \"aggregation_temporality\": self.aggregation_temporality,\n },\n indent=indent,\n )\n\n\n# pylint: disable=invalid-name\nDataT = Union[Sum, Gauge, Histogram]\nDataPointT = Union[NumberDataPoint, HistogramDataPoint]\n\n\n@dataclass(frozen=True)\nclass Metric:\n \"\"\"Represents a metric point in the OpenTelemetry data model to be\n exported.\"\"\"\n\n name: str\n description: Optional[str]\n unit: Optional[str]\n data: DataT\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"name\": self.name,\n \"description\": self.description or \"\",\n \"unit\": self.unit or \"\",\n \"data\": loads(self.data.to_json(indent=indent)),\n },\n indent=indent,\n )\n\n\n@dataclass(frozen=True)\nclass ScopeMetrics:\n \"\"\"A collection of Metrics produced by a scope\"\"\"\n\n scope: InstrumentationScope\n metrics: Sequence[Metric]\n schema_url: str\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"scope\": loads(self.scope.to_json(indent=indent)),\n \"metrics\": [\n loads(metric.to_json(indent=indent))\n for metric in self.metrics\n ],\n \"schema_url\": self.schema_url,\n },\n indent=indent,\n )\n\n\n@dataclass(frozen=True)\nclass ResourceMetrics:\n \"\"\"A collection of ScopeMetrics from a Resource\"\"\"\n\n resource: Resource\n scope_metrics: Sequence[ScopeMetrics]\n schema_url: str\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"resource\": loads(self.resource.to_json(indent=indent)),\n \"scope_metrics\": [\n loads(scope_metrics.to_json(indent=indent))\n for scope_metrics in self.scope_metrics\n ],\n \"schema_url\": self.schema_url,\n },\n indent=indent,\n )\n\n\n@dataclass(frozen=True)\nclass MetricsData:\n \"\"\"An array of ResourceMetrics\"\"\"\n\n resource_metrics: Sequence[ResourceMetrics]\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"resource_metrics\": [\n loads(resource_metrics.to_json(indent=indent))\n for resource_metrics in self.resource_metrics\n ]\n },\n indent=indent,\n )\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint: disable=unused-import\n\nfrom dataclasses import asdict, dataclass\nfrom json 
import dumps, loads\nfrom typing import Optional, Sequence, Union\n\n# This kind of import is needed to avoid Sphinx errors.\nimport opentelemetry.sdk.metrics._internal\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.util.instrumentation import InstrumentationScope\nfrom opentelemetry.util.types import Attributes\n\n\n@dataclass(frozen=True)\nclass NumberDataPoint:\n \"\"\"Single data point in a timeseries that describes the time-varying scalar\n value of a metric.\n \"\"\"\n\n attributes: Attributes\n start_time_unix_nano: int\n time_unix_nano: int\n value: Union[int, float]\n\n def to_json(self, indent=4) -> str:\n return dumps(asdict(self), indent=indent)\n\n\n@dataclass(frozen=True)\nclass HistogramDataPoint:\n \"\"\"Single data point in a timeseries that describes the time-varying scalar\n value of a metric.\n \"\"\"\n\n attributes: Attributes\n start_time_unix_nano: int\n time_unix_nano: int\n count: int\n sum: Union[int, float]\n bucket_counts: Sequence[int]\n explicit_bounds: Sequence[float]\n min: float\n max: float\n\n def to_json(self, indent=4) -> str:\n return dumps(asdict(self), indent=indent)\n\n\n@dataclass(frozen=True)\nclass Buckets:\n offset: int\n bucket_counts: Sequence[int]\n\n\n@dataclass(frozen=True)\nclass ExponentialHistogramDataPoint:\n \"\"\"Single data point in a timeseries whose boundaries are defined by an\n exponential function. This timeseries describes the time-varying scalar\n value of a metric.\n \"\"\"\n\n attributes: Attributes\n start_time_unix_nano: int\n time_unix_nano: int\n count: int\n sum: Union[int, float]\n scale: int\n zero_count: int\n positive: Buckets\n negative: Buckets\n flags: int\n min: float\n max: float\n\n def to_json(self, indent=4) -> str:\n return dumps(asdict(self), indent=indent)\n\n\n@dataclass(frozen=True)\nclass ExponentialHistogram:\n \"\"\"Represents the type of a metric that is calculated by aggregating as an\n ExponentialHistogram of all reported measurements over a time interval.\n \"\"\"\n\n data_points: Sequence[ExponentialHistogramDataPoint]\n aggregation_temporality: (\n \"opentelemetry.sdk.metrics.export.AggregationTemporality\"\n )\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"data_points\": [\n loads(data_point.to_json(indent=indent))\n for data_point in self.data_points\n ],\n \"aggregation_temporality\": self.aggregation_temporality,\n },\n indent=indent,\n )\n\n\n@dataclass(frozen=True)\nclass Sum:\n \"\"\"Represents the type of a scalar metric that is calculated as a sum of\n all reported measurements over a time interval.\"\"\"\n\n data_points: Sequence[NumberDataPoint]\n aggregation_temporality: (\n \"opentelemetry.sdk.metrics.export.AggregationTemporality\"\n )\n is_monotonic: bool\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"data_points\": [\n loads(data_point.to_json(indent=indent))\n for data_point in self.data_points\n ],\n \"aggregation_temporality\": self.aggregation_temporality,\n \"is_monotonic\": self.is_monotonic,\n },\n indent=indent,\n )\n\n\n@dataclass(frozen=True)\nclass Gauge:\n \"\"\"Represents the type of a scalar metric that always exports the current\n value for every data point. 
It should be used for an unknown\n aggregation.\"\"\"\n\n data_points: Sequence[NumberDataPoint]\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"data_points\": [\n loads(data_point.to_json(indent=indent))\n for data_point in self.data_points\n ],\n },\n indent=indent,\n )\n\n\n@dataclass(frozen=True)\nclass Histogram:\n \"\"\"Represents the type of a metric that is calculated by aggregating as a\n histogram of all reported measurements over a time interval.\"\"\"\n\n data_points: Sequence[HistogramDataPoint]\n aggregation_temporality: (\n \"opentelemetry.sdk.metrics.export.AggregationTemporality\"\n )\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"data_points\": [\n loads(data_point.to_json(indent=indent))\n for data_point in self.data_points\n ],\n \"aggregation_temporality\": self.aggregation_temporality,\n },\n indent=indent,\n )\n\n\n# pylint: disable=invalid-name\nDataT = Union[Sum, Gauge, Histogram]\nDataPointT = Union[NumberDataPoint, HistogramDataPoint]\n\n\n@dataclass(frozen=True)\nclass Metric:\n \"\"\"Represents a metric point in the OpenTelemetry data model to be\n exported.\"\"\"\n\n name: str\n description: Optional[str]\n unit: Optional[str]\n data: DataT\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"name\": self.name,\n \"description\": self.description or \"\",\n \"unit\": self.unit or \"\",\n \"data\": loads(self.data.to_json(indent=indent)),\n },\n indent=indent,\n )\n\n\n@dataclass(frozen=True)\nclass ScopeMetrics:\n \"\"\"A collection of Metrics produced by a scope\"\"\"\n\n scope: InstrumentationScope\n metrics: Sequence[Metric]\n schema_url: str\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"scope\": loads(self.scope.to_json(indent=indent)),\n \"metrics\": [\n loads(metric.to_json(indent=indent))\n for metric in self.metrics\n ],\n \"schema_url\": self.schema_url,\n },\n indent=indent,\n )\n\n\n@dataclass(frozen=True)\nclass ResourceMetrics:\n \"\"\"A collection of ScopeMetrics from a Resource\"\"\"\n\n resource: Resource\n scope_metrics: Sequence[ScopeMetrics]\n schema_url: str\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"resource\": loads(self.resource.to_json(indent=indent)),\n \"scope_metrics\": [\n loads(scope_metrics.to_json(indent=indent))\n for scope_metrics in self.scope_metrics\n ],\n \"schema_url\": self.schema_url,\n },\n indent=indent,\n )\n\n\n@dataclass(frozen=True)\nclass MetricsData:\n \"\"\"An array of ResourceMetrics\"\"\"\n\n resource_metrics: Sequence[ResourceMetrics]\n\n def to_json(self, indent=4) -> str:\n return dumps(\n {\n \"resource_metrics\": [\n loads(resource_metrics.to_json(indent=indent))\n for resource_metrics in self.resource_metrics\n ]\n },\n indent=indent,\n )\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py"}]}
| 3,250 | 201 |
gh_patches_debug_16841
|
rasdani/github-patches
|
git_diff
|
cornellius-gp__gpytorch-761
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] PeriodicKernel returns error with BernoulliLikelihood
# 🐛 Bug
When running the simple GPClassificationModel example (as in https://gpytorch.readthedocs.io/en/latest/examples/02_Simple_GP_Classification/Simple_GP_Classification.html) with the RBFKernel changed to a PeriodicKernel, the `likelihood(model(test_x))` line raises a RuntimeError:
`RuntimeError: The kernel LazyEvaluatedKernelTensor is not equipped to handle and diag. Expected size torch.Size([101]). Got size torch.Size([1, 101])`
This error does not occur when changing the RBFKernel to a LinearKernel, or when changing the BernoulliLikelihood to GaussianLikelihood.
## To reproduce
```
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
train_x = torch.linspace(0, 1, 10)
train_y = torch.sign(torch.cos(train_x * (4 * math.pi))).add(1).div(2)
from gpytorch.models import AbstractVariationalGP
from gpytorch.variational import CholeskyVariationalDistribution
from gpytorch.variational import VariationalStrategy
class GPClassificationModel(AbstractVariationalGP):
def __init__(self, train_x):
variational_distribution = CholeskyVariationalDistribution(train_x.size(0))
variational_strategy = VariationalStrategy(self, train_x, variational_distribution)
super(GPClassificationModel, self).__init__(variational_strategy)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
latent_pred = gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
return latent_pred
model = GPClassificationModel(train_x)
likelihood = gpytorch.likelihoods.BernoulliLikelihood()
from gpytorch.mlls.variational_elbo import VariationalELBO
model.train()
likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = VariationalELBO(likelihood, model, train_y.numel())
training_iter = 50
for i in range(training_iter):
optimizer.zero_grad()
output = model(train_x)
loss = -mll(output, train_y)
loss.backward()
optimizer.step()
model.eval()
likelihood.eval()
with torch.no_grad():
test_x = torch.linspace(0, 1, 101)
observed_pred = likelihood(model(test_x))
print('Reached the end of the script, successfully calculated likelihood')
```
** Stack trace/error message **
```
RuntimeError: The kernel LazyEvaluatedKernelTensor is not equipped to handle and diag. Expected size torch.Size([101]). Got size torch.Size([1, 101])
```
## Expected Behavior
The expected behavior is that the model runs with PeriodicKernel, not only with RBFKernel or LinearKernel.
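The failure points at `PeriodicKernel.forward`, which does not accept the `diag` keyword that the prediction path passes down, so a diagonal request falls through to a full kernel evaluation of the wrong shape. Below is a sketch of the fix direction; it is meant to be read as a replacement `forward` method on `PeriodicKernel`, shown with its imports so the definition is self-contained, not as a standalone script.

```python
import math

import torch


# Sketch of a diag-aware PeriodicKernel.forward; intended as a method on
# gpytorch.kernels.PeriodicKernel rather than a free function.
def forward(self, x1, x2, diag=False, **params):
    x1_ = x1.div(self.period_length)
    x2_ = x2.div(self.period_length)
    # Forward diag so covar_dist returns per-row distances instead of the
    # full n x n distance matrix when only the diagonal is needed.
    diff = self.covar_dist(x1_, x2_, diag=diag, **params)
    res = torch.sin(diff.mul(math.pi)).pow(2).mul(-2 / self.lengthscale).exp_()
    # Same post-processing as before, but also squeeze when a diagonal was
    # requested, since the diag result comes back as 1 x n for unbatched inputs.
    if diff.ndimension() == 2 or diag:
        res = res.squeeze(0)
    return res
```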
## System information
**Please complete the following information:**
- GPyTorch Version: 0.3.2
- PyTorch Version: 1.1.0
- Computer OS: macOS
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gpytorch/kernels/periodic_kernel.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import math
4 import torch
5 from .kernel import Kernel
6 from ..constraints import Positive
7
8
9 class PeriodicKernel(Kernel):
10 r""" Computes a covariance matrix based on the periodic kernel
11 between inputs :math:`\mathbf{x_1}` and :math:`\mathbf{x_2}`:
12
13 .. math::
14
15 \begin{equation*}
16 k_{\text{Periodic}}(\mathbf{x_1}, \mathbf{x_2}) = \exp \left(
17 \frac{2 \sin^2 \left( \pi \Vert \mathbf{x_1} - \mathbf{x_2} \Vert_1 / p \right) }
18 { \ell^2 } \right)
19 \end{equation*}
20
21 where
22
23 * :math:`p` is the periord length parameter.
24 * :math:`\ell` is a lengthscale parameter.
25
26 .. note::
27
28 This kernel does not have an `outputscale` parameter. To add a scaling parameter,
29 decorate this kernel with a :class:`gpytorch.kernels.ScaleKernel`.
30
31 .. note::
32
33 This kernel does not have an ARD lengthscale option.
34
35 Args:
36 :attr:`batch_shape` (torch.Size, optional):
37 Set this if you want a separate lengthscale for each
38 batch of input data. It should be `b` if :attr:`x1` is a `b x n x d` tensor. Default: `torch.Size([])`.
39 :attr:`active_dims` (tuple of ints, optional):
40 Set this if you want to compute the covariance of only a few input dimensions. The ints
41 corresponds to the indices of the dimensions. Default: `None`.
42 :attr:`period_length_prior` (Prior, optional):
43 Set this if you want to apply a prior to the period length parameter. Default: `None`.
44 :attr:`lengthscale_prior` (Prior, optional):
45 Set this if you want to apply a prior to the lengthscale parameter. Default: `None`.
46 :attr:`lengthscale_constraint` (Constraint, optional):
47 Set this if you want to apply a constraint to the value of the lengthscale. Default: `Positive`.
48 :attr:`period_length_constraint` (Constraint, optional):
49 Set this if you want to apply a constraint to the value of the period length. Default: `Positive`.
50 :attr:`eps` (float):
51 The minimum value that the lengthscale/period length can take
52 (prevents divide by zero errors). Default: `1e-6`.
53
54 Attributes:
55 :attr:`lengthscale` (Tensor):
56 The lengthscale parameter. Size = `*batch_shape x 1 x 1`.
57 :attr:`period_length` (Tensor):
58 The period length parameter. Size = `*batch_shape x 1 x 1`.
59
60 Example:
61 >>> x = torch.randn(10, 5)
62 >>> # Non-batch: Simple option
63 >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel())
64 >>>
65 >>> batch_x = torch.randn(2, 10, 5)
66 >>> # Batch: Simple option
67 >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel())
68 >>> # Batch: different lengthscale for each batch
69 >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel(batch_size=2))
70 >>> covar = covar_module(x) # Output: LazyVariable of size (2 x 10 x 10)
71 """
72
73 def __init__(self, period_length_prior=None, period_length_constraint=None, **kwargs):
74 super(PeriodicKernel, self).__init__(has_lengthscale=True, **kwargs)
75 if period_length_constraint is None:
76 period_length_constraint = Positive()
77
78 self.register_parameter(
79 name="raw_period_length",
80 parameter=torch.nn.Parameter(torch.zeros(*self.batch_shape, 1, 1)))
81
82 if period_length_prior is not None:
83 self.register_prior(
84 "period_length_prior",
85 period_length_prior,
86 lambda: self.period_length,
87 lambda v: self._set_period_length(v),
88 )
89
90 self.register_constraint("raw_period_length", period_length_constraint)
91
92 @property
93 def period_length(self):
94 return self.raw_period_length_constraint.transform(self.raw_period_length)
95
96 @period_length.setter
97 def period_length(self, value):
98 self._set_period_length(value)
99
100 def _set_period_length(self, value):
101 if not torch.is_tensor(value):
102 value = torch.as_tensor(value).to(self.raw_period_length)
103 self.initialize(raw_period_length=self.raw_period_length_constraint.inverse_transform(value))
104
105 def forward(self, x1, x2, **params):
106 x1_ = x1.div(self.period_length)
107 x2_ = x2.div(self.period_length)
108 diff = self.covar_dist(x1_, x2_, **params)
109 res = torch.sin(diff.mul(math.pi)).pow(2).mul(-2 / self.lengthscale).exp_()
110 if diff.ndimension() == 2:
111 res = res.squeeze(0)
112 return res
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gpytorch/kernels/periodic_kernel.py b/gpytorch/kernels/periodic_kernel.py
--- a/gpytorch/kernels/periodic_kernel.py
+++ b/gpytorch/kernels/periodic_kernel.py
@@ -102,11 +102,11 @@
value = torch.as_tensor(value).to(self.raw_period_length)
self.initialize(raw_period_length=self.raw_period_length_constraint.inverse_transform(value))
- def forward(self, x1, x2, **params):
+ def forward(self, x1, x2, diag=False, **params):
x1_ = x1.div(self.period_length)
x2_ = x2.div(self.period_length)
- diff = self.covar_dist(x1_, x2_, **params)
+ diff = self.covar_dist(x1_, x2_, diag=diag, **params)
res = torch.sin(diff.mul(math.pi)).pow(2).mul(-2 / self.lengthscale).exp_()
- if diff.ndimension() == 2:
+ if diff.ndimension() == 2 or diag:
res = res.squeeze(0)
return res
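For readers following the patch, a small self-contained PyTorch sketch may help (illustrative only: the helper name and test values are made up, and GPyTorch is bypassed entirely). It shows why the `diag` flag has to reach the distance computation: the diagonal case needs a 1-D tensor of paired distances, not the full pairwise matrix behind the size mismatch in the issue.

```python
import math
import torch

def periodic_covariance(x1, x2, lengthscale, period, diag=False):
    # Distances in "period units": either every pair (n x m) or only paired rows (n,).
    if diag:
        dist = (x1 / period - x2 / period).abs().sum(-1)    # shape (n,)
    else:
        dist = torch.cdist(x1 / period, x2 / period, p=1)   # shape (n, m)
    # Same exponent as the file's forward(): exp(-2 * sin^2(pi * d) / lengthscale)
    return torch.sin(dist * math.pi).pow(2).mul(-2.0 / lengthscale).exp()

x1 = torch.linspace(0, 1, 5).unsqueeze(-1)     # 5 points, 1 feature
x2 = torch.linspace(0.1, 1.1, 5).unsqueeze(-1)
full = periodic_covariance(x1, x2, lengthscale=1.0, period=0.5)             # (5, 5)
diag = periodic_covariance(x1, x2, lengthscale=1.0, period=0.5, diag=True)  # (5,)
print(torch.allclose(full.diagonal(), diag))  # True: the diag path matches the matrix diagonal
```

In the patched `forward()`, `covar_dist(..., diag=diag)` plays the role of the `diag` branch above, and the added `or diag` squeeze keeps the result 1-D, the `torch.Size([101])` that the error message in the issue asks for.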
|
{"golden_diff": "diff --git a/gpytorch/kernels/periodic_kernel.py b/gpytorch/kernels/periodic_kernel.py\n--- a/gpytorch/kernels/periodic_kernel.py\n+++ b/gpytorch/kernels/periodic_kernel.py\n@@ -102,11 +102,11 @@\n value = torch.as_tensor(value).to(self.raw_period_length)\n self.initialize(raw_period_length=self.raw_period_length_constraint.inverse_transform(value))\n \n- def forward(self, x1, x2, **params):\n+ def forward(self, x1, x2, diag=False, **params):\n x1_ = x1.div(self.period_length)\n x2_ = x2.div(self.period_length)\n- diff = self.covar_dist(x1_, x2_, **params)\n+ diff = self.covar_dist(x1_, x2_, diag=diag, **params)\n res = torch.sin(diff.mul(math.pi)).pow(2).mul(-2 / self.lengthscale).exp_()\n- if diff.ndimension() == 2:\n+ if diff.ndimension() == 2 or diag:\n res = res.squeeze(0)\n return res\n", "issue": "[Bug] PeriodicKernel returns error with BernoulliLikelihood \n# \ud83d\udc1b Bug\r\n\r\nWhen running the simple GPClassificationModel example (as in https://gpytorch.readthedocs.io/en/latest/examples/02_Simple_GP_Classification/Simple_GP_Classification.html), and changing the RBFKernel to a PeriodicKernel, the `likelihood(model(test_x))` line returns a RuntimeError:\r\n\r\n`RuntimeError: The kernel LazyEvaluatedKernelTensor is not equipped to handle and diag. Expected size torch.Size([101]). Got size torch.Size([1, 101])`\r\n\r\nThis error does not occur when changing the RBFKernel to a LinearKernel, or when changing the BernoulliLikelihood to GaussianLikelihood.\r\n\r\n## To reproduce\r\n\r\n```\r\nimport math\r\nimport torch\r\nimport gpytorch\r\nfrom matplotlib import pyplot as plt\r\n\r\ntrain_x = torch.linspace(0, 1, 10)\r\ntrain_y = torch.sign(torch.cos(train_x * (4 * math.pi))).add(1).div(2)\r\n\r\nfrom gpytorch.models import AbstractVariationalGP\r\nfrom gpytorch.variational import CholeskyVariationalDistribution\r\nfrom gpytorch.variational import VariationalStrategy\r\n\r\nclass GPClassificationModel(AbstractVariationalGP):\r\n def __init__(self, train_x):\r\n variational_distribution = CholeskyVariationalDistribution(train_x.size(0))\r\n variational_strategy = VariationalStrategy(self, train_x, variational_distribution)\r\n super(GPClassificationModel, self).__init__(variational_strategy)\r\n self.mean_module = gpytorch.means.ConstantMean()\r\n self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel())\r\n\r\n def forward(self, x):\r\n mean_x = self.mean_module(x)\r\n covar_x = self.covar_module(x)\r\n latent_pred = gpytorch.distributions.MultivariateNormal(mean_x, covar_x)\r\n return latent_pred\r\n\r\n\r\nmodel = GPClassificationModel(train_x)\r\nlikelihood = gpytorch.likelihoods.BernoulliLikelihood()\r\n\r\nfrom gpytorch.mlls.variational_elbo import VariationalELBO\r\n\r\nmodel.train()\r\nlikelihood.train()\r\n\r\noptimizer = torch.optim.Adam(model.parameters(), lr=0.1)\r\n\r\nmll = VariationalELBO(likelihood, model, train_y.numel())\r\n\r\ntraining_iter = 50\r\nfor i in range(training_iter):\r\n optimizer.zero_grad()\r\n output = model(train_x)\r\n loss = -mll(output, train_y)\r\n loss.backward()\r\n optimizer.step()\r\n\r\nmodel.eval()\r\nlikelihood.eval()\r\n\r\nwith torch.no_grad():\r\n test_x = torch.linspace(0, 1, 101)\r\n observed_pred = likelihood(model(test_x))\r\n\r\nprint('Reached the end of the script, successfully calculated likelihood')\r\n```\r\n\r\n** Stack trace/error message **\r\n```\r\nRuntimeError: The kernel LazyEvaluatedKernelTensor is not equipped to handle and diag. Expected size torch.Size([101]). 
Got size torch.Size([1, 101])\r\n```\r\n\r\n## Expected Behavior\r\n\r\nThe expected behavior is that the model will run with PeriodicKernel, and not only RBFKernel or LinearKernel.\r\n\r\n## System information\r\n\r\n**Please complete the following information:**\r\n- <!-- GPyTorch Version (run `print(gpytorch.__version__)` -->0.3.2\r\n- <!-- PyTorch Version (run `print(torch.__version__)` -->1.1.0\r\n- <!-- Computer OS -->macOS\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport math\nimport torch\nfrom .kernel import Kernel\nfrom ..constraints import Positive\n\n\nclass PeriodicKernel(Kernel):\n r\"\"\" Computes a covariance matrix based on the periodic kernel\n between inputs :math:`\\mathbf{x_1}` and :math:`\\mathbf{x_2}`:\n\n .. math::\n\n \\begin{equation*}\n k_{\\text{Periodic}}(\\mathbf{x_1}, \\mathbf{x_2}) = \\exp \\left(\n \\frac{2 \\sin^2 \\left( \\pi \\Vert \\mathbf{x_1} - \\mathbf{x_2} \\Vert_1 / p \\right) }\n { \\ell^2 } \\right)\n \\end{equation*}\n\n where\n\n * :math:`p` is the periord length parameter.\n * :math:`\\ell` is a lengthscale parameter.\n\n .. note::\n\n This kernel does not have an `outputscale` parameter. To add a scaling parameter,\n decorate this kernel with a :class:`gpytorch.kernels.ScaleKernel`.\n\n .. note::\n\n This kernel does not have an ARD lengthscale option.\n\n Args:\n :attr:`batch_shape` (torch.Size, optional):\n Set this if you want a separate lengthscale for each\n batch of input data. It should be `b` if :attr:`x1` is a `b x n x d` tensor. Default: `torch.Size([])`.\n :attr:`active_dims` (tuple of ints, optional):\n Set this if you want to compute the covariance of only a few input dimensions. The ints\n corresponds to the indices of the dimensions. Default: `None`.\n :attr:`period_length_prior` (Prior, optional):\n Set this if you want to apply a prior to the period length parameter. Default: `None`.\n :attr:`lengthscale_prior` (Prior, optional):\n Set this if you want to apply a prior to the lengthscale parameter. Default: `None`.\n :attr:`lengthscale_constraint` (Constraint, optional):\n Set this if you want to apply a constraint to the value of the lengthscale. Default: `Positive`.\n :attr:`period_length_constraint` (Constraint, optional):\n Set this if you want to apply a constraint to the value of the period length. Default: `Positive`.\n :attr:`eps` (float):\n The minimum value that the lengthscale/period length can take\n (prevents divide by zero errors). Default: `1e-6`.\n\n Attributes:\n :attr:`lengthscale` (Tensor):\n The lengthscale parameter. Size = `*batch_shape x 1 x 1`.\n :attr:`period_length` (Tensor):\n The period length parameter. 
Size = `*batch_shape x 1 x 1`.\n\n Example:\n >>> x = torch.randn(10, 5)\n >>> # Non-batch: Simple option\n >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel())\n >>>\n >>> batch_x = torch.randn(2, 10, 5)\n >>> # Batch: Simple option\n >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel())\n >>> # Batch: different lengthscale for each batch\n >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel(batch_size=2))\n >>> covar = covar_module(x) # Output: LazyVariable of size (2 x 10 x 10)\n \"\"\"\n\n def __init__(self, period_length_prior=None, period_length_constraint=None, **kwargs):\n super(PeriodicKernel, self).__init__(has_lengthscale=True, **kwargs)\n if period_length_constraint is None:\n period_length_constraint = Positive()\n\n self.register_parameter(\n name=\"raw_period_length\",\n parameter=torch.nn.Parameter(torch.zeros(*self.batch_shape, 1, 1)))\n\n if period_length_prior is not None:\n self.register_prior(\n \"period_length_prior\",\n period_length_prior,\n lambda: self.period_length,\n lambda v: self._set_period_length(v),\n )\n\n self.register_constraint(\"raw_period_length\", period_length_constraint)\n\n @property\n def period_length(self):\n return self.raw_period_length_constraint.transform(self.raw_period_length)\n\n @period_length.setter\n def period_length(self, value):\n self._set_period_length(value)\n\n def _set_period_length(self, value):\n if not torch.is_tensor(value):\n value = torch.as_tensor(value).to(self.raw_period_length)\n self.initialize(raw_period_length=self.raw_period_length_constraint.inverse_transform(value))\n\n def forward(self, x1, x2, **params):\n x1_ = x1.div(self.period_length)\n x2_ = x2.div(self.period_length)\n diff = self.covar_dist(x1_, x2_, **params)\n res = torch.sin(diff.mul(math.pi)).pow(2).mul(-2 / self.lengthscale).exp_()\n if diff.ndimension() == 2:\n res = res.squeeze(0)\n return res\n", "path": "gpytorch/kernels/periodic_kernel.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport math\nimport torch\nfrom .kernel import Kernel\nfrom ..constraints import Positive\n\n\nclass PeriodicKernel(Kernel):\n r\"\"\" Computes a covariance matrix based on the periodic kernel\n between inputs :math:`\\mathbf{x_1}` and :math:`\\mathbf{x_2}`:\n\n .. math::\n\n \\begin{equation*}\n k_{\\text{Periodic}}(\\mathbf{x_1}, \\mathbf{x_2}) = \\exp \\left(\n \\frac{2 \\sin^2 \\left( \\pi \\Vert \\mathbf{x_1} - \\mathbf{x_2} \\Vert_1 / p \\right) }\n { \\ell^2 } \\right)\n \\end{equation*}\n\n where\n\n * :math:`p` is the periord length parameter.\n * :math:`\\ell` is a lengthscale parameter.\n\n .. note::\n\n This kernel does not have an `outputscale` parameter. To add a scaling parameter,\n decorate this kernel with a :class:`gpytorch.kernels.ScaleKernel`.\n\n .. note::\n\n This kernel does not have an ARD lengthscale option.\n\n Args:\n :attr:`batch_shape` (torch.Size, optional):\n Set this if you want a separate lengthscale for each\n batch of input data. It should be `b` if :attr:`x1` is a `b x n x d` tensor. Default: `torch.Size([])`.\n :attr:`active_dims` (tuple of ints, optional):\n Set this if you want to compute the covariance of only a few input dimensions. The ints\n corresponds to the indices of the dimensions. Default: `None`.\n :attr:`period_length_prior` (Prior, optional):\n Set this if you want to apply a prior to the period length parameter. 
Default: `None`.\n :attr:`lengthscale_prior` (Prior, optional):\n Set this if you want to apply a prior to the lengthscale parameter. Default: `None`.\n :attr:`lengthscale_constraint` (Constraint, optional):\n Set this if you want to apply a constraint to the value of the lengthscale. Default: `Positive`.\n :attr:`period_length_constraint` (Constraint, optional):\n Set this if you want to apply a constraint to the value of the period length. Default: `Positive`.\n :attr:`eps` (float):\n The minimum value that the lengthscale/period length can take\n (prevents divide by zero errors). Default: `1e-6`.\n\n Attributes:\n :attr:`lengthscale` (Tensor):\n The lengthscale parameter. Size = `*batch_shape x 1 x 1`.\n :attr:`period_length` (Tensor):\n The period length parameter. Size = `*batch_shape x 1 x 1`.\n\n Example:\n >>> x = torch.randn(10, 5)\n >>> # Non-batch: Simple option\n >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel())\n >>>\n >>> batch_x = torch.randn(2, 10, 5)\n >>> # Batch: Simple option\n >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel())\n >>> # Batch: different lengthscale for each batch\n >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel(batch_size=2))\n >>> covar = covar_module(x) # Output: LazyVariable of size (2 x 10 x 10)\n \"\"\"\n\n def __init__(self, period_length_prior=None, period_length_constraint=None, **kwargs):\n super(PeriodicKernel, self).__init__(has_lengthscale=True, **kwargs)\n if period_length_constraint is None:\n period_length_constraint = Positive()\n\n self.register_parameter(\n name=\"raw_period_length\",\n parameter=torch.nn.Parameter(torch.zeros(*self.batch_shape, 1, 1)))\n\n if period_length_prior is not None:\n self.register_prior(\n \"period_length_prior\",\n period_length_prior,\n lambda: self.period_length,\n lambda v: self._set_period_length(v),\n )\n\n self.register_constraint(\"raw_period_length\", period_length_constraint)\n\n @property\n def period_length(self):\n return self.raw_period_length_constraint.transform(self.raw_period_length)\n\n @period_length.setter\n def period_length(self, value):\n self._set_period_length(value)\n\n def _set_period_length(self, value):\n if not torch.is_tensor(value):\n value = torch.as_tensor(value).to(self.raw_period_length)\n self.initialize(raw_period_length=self.raw_period_length_constraint.inverse_transform(value))\n\n def forward(self, x1, x2, diag=False, **params):\n x1_ = x1.div(self.period_length)\n x2_ = x2.div(self.period_length)\n diff = self.covar_dist(x1_, x2_, diag=diag, **params)\n res = torch.sin(diff.mul(math.pi)).pow(2).mul(-2 / self.lengthscale).exp_()\n if diff.ndimension() == 2 or diag:\n res = res.squeeze(0)\n return res\n", "path": "gpytorch/kernels/periodic_kernel.py"}]}
| 2,374 | 255 |
gh_patches_debug_12460
|
rasdani/github-patches
|
git_diff
|
pypa__virtualenv-1591
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`.dist-info.virtualenv` marker files break importlib.metadata in app-data seeded venvs
Since virtualenv 20.0.0, dist-info directories in virtual environments seeded using the `app-data` method are accompanied by empty marker files with the same name as the directory plus a `.virtualenv` suffix:
https://github.com/pypa/virtualenv/blob/c8ac632cd40bfb014515151a02d1ea55d47ff467/src/virtualenv/seed/via_app_data/pip_install/base.py#L73-L76
This breaks `importlib.metadata`, which picks these files up and attempts to create `importlib.metadata.Distribution` instances for them. As a result, something like `importlib.metadata.version("pip")` will return `None`.
The following session was run in a virtual environment created by virtualenv 20.0.2 inside a `python:3.8.1` Docker image:
```python
Python 3.8.1 (default, Feb 2 2020, 08:37:37)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import importlib.metadata
>>> for d in importlib.metadata.distributions():
... name = d.metadata["name"]
... version = d.metadata["version"]
... path = d._path
... print(f"{path=} {name=} {version=}")
...
path=PosixPath('/venv/lib/python3.8/site-packages/pip-20.0.2.dist-info.virtualenv') name=None version=None
path=PosixPath('/venv/lib/python3.8/site-packages/wheel-0.34.2.dist-info.virtualenv') name=None version=None
path=PosixPath('/venv/lib/python3.8/site-packages/pip-20.0.2.dist-info') name='pip' version='20.0.2'
path=PosixPath('/venv/lib/python3.8/site-packages/wheel-0.34.2.dist-info') name='wheel' version='0.34.2'
path=PosixPath('/venv/lib/python3.8/site-packages/setuptools-45.2.0.dist-info') name='setuptools' version='45.2.0'
path=PosixPath('/venv/lib/python3.8/site-packages/setuptools-45.2.0.dist-info.virtualenv') name=None version=None
>>> d = importlib.metadata.distribution("pip")
>>> d.metadata["name"]
>>> importlib.metadata.version("pip")
>>>
```
This may in fact be an issue in `importlib.metadata`. I'm reporting it here in the assumption that virtualenv may be able to fix or work around this issue faster, and that it would need to work around the issue anyway for Python 3.8.0 and 3.8.1.
The issue also causes Poetry to break when invoked inside such virtual environments, for example in Nox and tox: https://github.com/python-poetry/poetry/issues/1953. This is currently affecting projects that use Poetry in their CI pipelines.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/virtualenv/seed/via_app_data/pip_install/base.py`
Content:
```
1 from __future__ import absolute_import, unicode_literals
2
3 import logging
4 import os
5 import re
6 import shutil
7 import zipfile
8 from abc import ABCMeta, abstractmethod
9 from tempfile import mkdtemp
10
11 import six
12 from six import PY3
13
14 from virtualenv.util import ConfigParser
15 from virtualenv.util.path import Path
16
17
18 @six.add_metaclass(ABCMeta)
19 class PipInstall(object):
20 def __init__(self, wheel, creator, image_folder):
21 self._wheel = wheel
22 self._creator = creator
23 self._image_dir = image_folder
24 self._extracted = False
25 self.__dist_info = None
26 self._console_entry_points = None
27
28 @abstractmethod
29 def _sync(self, src, dst):
30 raise NotImplementedError
31
32 def install(self):
33 self._extracted = True
34 # sync image
35 for filename in self._image_dir.iterdir():
36 into = self._creator.purelib / filename.name
37 if into.exists():
38 if into.is_dir() and not into.is_symlink():
39 shutil.rmtree(str(into))
40 else:
41 into.unlink()
42 self._sync(filename, into)
43 # generate console executables
44 consoles = set()
45 script_dir = self._creator.script_dir
46 for name, module in self._console_scripts.items():
47 consoles.update(self._create_console_entry_point(name, module, script_dir))
48 logging.debug("generated console scripts %s", " ".join(i.name for i in consoles))
49
50 def build_image(self):
51 # 1. first extract the wheel
52 logging.debug("build install image to %s of %s", self._image_dir, self._wheel.name)
53 with zipfile.ZipFile(str(self._wheel)) as zip_ref:
54 zip_ref.extractall(str(self._image_dir))
55 self._extracted = True
56 # 2. now add additional files not present in the package
57 new_files = self._generate_new_files()
58 # 3. finally fix the records file
59 self._fix_records(new_files)
60
61 def _records_text(self, files):
62 record_data = "\n".join(
63 "{},,".format(os.path.relpath(six.ensure_text(str(rec)), six.ensure_text(str(self._image_dir))))
64 for rec in files
65 )
66 return record_data
67
68 def _generate_new_files(self):
69 new_files = set()
70 installer = self._dist_info / "INSTALLER"
71 installer.write_text("pip\n")
72 new_files.add(installer)
73 # inject a no-op root element, as workaround for bug added
74 # by https://github.com/pypa/pip/commit/c7ae06c79#r35523722
75 marker = self._image_dir / "{}.virtualenv".format(self._dist_info.name)
76 marker.write_text("")
77 new_files.add(marker)
78 folder = mkdtemp()
79 try:
80 to_folder = Path(folder)
81 rel = os.path.relpath(
82 six.ensure_text(str(self._creator.script_dir)), six.ensure_text(str(self._creator.purelib))
83 )
84 for name, module in self._console_scripts.items():
85 new_files.update(
86 Path(os.path.normpath(six.ensure_text(str(self._image_dir / rel / i.name))))
87 for i in self._create_console_entry_point(name, module, to_folder)
88 )
89 finally:
90 shutil.rmtree(folder, ignore_errors=True)
91 return new_files
92
93 @property
94 def _dist_info(self):
95 if self._extracted is False:
96 return None # pragma: no cover
97 if self.__dist_info is None:
98 for filename in self._image_dir.iterdir():
99 if filename.suffix == ".dist-info":
100 self.__dist_info = filename
101 break
102 else:
103 raise RuntimeError("no dist info") # pragma: no cover
104 return self.__dist_info
105
106 @abstractmethod
107 def _fix_records(self, extra_record_data):
108 raise NotImplementedError
109
110 @property
111 def _console_scripts(self):
112 if self._extracted is False:
113 return None # pragma: no cover
114 if self._console_entry_points is None:
115 self._console_entry_points = {}
116 entry_points = self._dist_info / "entry_points.txt"
117 if entry_points.exists():
118 parser = ConfigParser.ConfigParser()
119 with entry_points.open() as file_handler:
120 reader = getattr(parser, "read_file" if PY3 else "readfp")
121 reader(file_handler)
122 if "console_scripts" in parser.sections():
123 for name, value in parser.items("console_scripts"):
124 match = re.match(r"(.*?)-?\d\.?\d*", name)
125 if match:
126 name = match.groups(1)[0]
127 self._console_entry_points[name] = value
128 return self._console_entry_points
129
130 def _create_console_entry_point(self, name, value, to_folder):
131 result = []
132 from distlib.scripts import ScriptMaker
133
134 maker = ScriptMaker(None, str(to_folder))
135 maker.clobber = True # overwrite
136 maker.variants = {"", "X", "X.Y"} # create all variants
137 maker.set_mode = True # ensure they are executable
138 maker.executable = str(self._creator.exe)
139 specification = "{} = {}".format(name, value)
140 new_files = maker.make(specification)
141 result.extend(Path(i) for i in new_files)
142 return result
143
144 def clear(self):
145 if self._image_dir.exists():
146 shutil.rmtree(six.ensure_text(str(self._image_dir)))
147
148 def has_image(self):
149 return self._image_dir.exists() and next(self._image_dir.iterdir()) is not None
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/virtualenv/seed/via_app_data/pip_install/base.py b/src/virtualenv/seed/via_app_data/pip_install/base.py
--- a/src/virtualenv/seed/via_app_data/pip_install/base.py
+++ b/src/virtualenv/seed/via_app_data/pip_install/base.py
@@ -70,9 +70,8 @@
installer = self._dist_info / "INSTALLER"
installer.write_text("pip\n")
new_files.add(installer)
- # inject a no-op root element, as workaround for bug added
- # by https://github.com/pypa/pip/commit/c7ae06c79#r35523722
- marker = self._image_dir / "{}.virtualenv".format(self._dist_info.name)
+ # inject a no-op root element, as workaround for bug in https://github.com/pypa/pip/issues/7226
+ marker = self._image_dir / "{}.virtualenv".format(self._dist_info.stem)
marker.write_text("")
new_files.add(marker)
folder = mkdtemp()
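A quick self-contained illustration (not part of the patch) of what the one-word change buys: `pathlib`'s `stem` drops only the final suffix, so the marker file's name no longer contains `.dist-info` and stops looking like the metadata directories that the interpreter session in the issue shows being picked up.

```python
from pathlib import PurePath

dist_info = PurePath("pip-20.0.2.dist-info")
print(dist_info.name)  # 'pip-20.0.2.dist-info' -> marker 'pip-20.0.2.dist-info.virtualenv'
print(dist_info.stem)  # 'pip-20.0.2'           -> marker 'pip-20.0.2.virtualenv'
```

The dots inside the version number are untouched because `stem` strips only the last suffix.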
|
{"golden_diff": "diff --git a/src/virtualenv/seed/via_app_data/pip_install/base.py b/src/virtualenv/seed/via_app_data/pip_install/base.py\n--- a/src/virtualenv/seed/via_app_data/pip_install/base.py\n+++ b/src/virtualenv/seed/via_app_data/pip_install/base.py\n@@ -70,9 +70,8 @@\n installer = self._dist_info / \"INSTALLER\"\n installer.write_text(\"pip\\n\")\n new_files.add(installer)\n- # inject a no-op root element, as workaround for bug added\n- # by https://github.com/pypa/pip/commit/c7ae06c79#r35523722\n- marker = self._image_dir / \"{}.virtualenv\".format(self._dist_info.name)\n+ # inject a no-op root element, as workaround for bug in https://github.com/pypa/pip/issues/7226\n+ marker = self._image_dir / \"{}.virtualenv\".format(self._dist_info.stem)\n marker.write_text(\"\")\n new_files.add(marker)\n folder = mkdtemp()\n", "issue": "`.dist-info.virtualenv` marker files break importlib.metadata in app-data seeded venvs\nSince virtualenv 20.0.0, dist-info directories in virtual environments seeded using the `app-data` method are accompanied by empty marker files with the same name as the directory plus a `.virtualenv` suffix:\r\n\r\nhttps://github.com/pypa/virtualenv/blob/c8ac632cd40bfb014515151a02d1ea55d47ff467/src/virtualenv/seed/via_app_data/pip_install/base.py#L73-L76\r\n\r\nThis breaks `importlib.metadata`, which picks these files up and attempts to create `importlib.metadata.Distribution` instances for them. As a result, something like `importlib.metadata.version(\"pip\")` will return `None`.\r\n\r\nThe following session was run in a virtual environment created by virtualenv 20.0.2 inside a `python:3.8.1` Docker image:\r\n\r\n```python\r\nPython 3.8.1 (default, Feb 2 2020, 08:37:37)\r\n[GCC 8.3.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import importlib.metadata\r\n>>> for d in importlib.metadata.distributions():\r\n... name = d.metadata[\"name\"]\r\n... version = d.metadata[\"version\"]\r\n... path = d._path\r\n... print(f\"{path=} {name=} {version=}\")\r\n...\r\npath=PosixPath('/venv/lib/python3.8/site-packages/pip-20.0.2.dist-info.virtualenv') name=None version=None\r\npath=PosixPath('/venv/lib/python3.8/site-packages/wheel-0.34.2.dist-info.virtualenv') name=None version=None\r\npath=PosixPath('/venv/lib/python3.8/site-packages/pip-20.0.2.dist-info') name='pip' version='20.0.2'\r\npath=PosixPath('/venv/lib/python3.8/site-packages/wheel-0.34.2.dist-info') name='wheel' version='0.34.2'\r\npath=PosixPath('/venv/lib/python3.8/site-packages/setuptools-45.2.0.dist-info') name='setuptools' version='45.2.0'\r\npath=PosixPath('/venv/lib/python3.8/site-packages/setuptools-45.2.0.dist-info.virtualenv') name=None version=None\r\n>>> d = importlib.metadata.distribution(\"pip\")\r\n>>> d.metadata[\"name\"]\r\n>>> importlib.metadata.version(\"pip\")\r\n>>>\r\n```\r\n\r\nThis may in fact be an issue in `importlib.metadata`. I'm reporting it here in the assumption that virtualenv may be able to fix or work around this issue faster, and that it would need to work around the issue anyway for Python 3.8.0 and 3.8.1.\r\n\r\nThe issue also causes Poetry to break when invoked inside such virtual environments, for example in Nox and tox: https://github.com/python-poetry/poetry/issues/1953. 
This is currently affecting projects that use Poetry in their CI pipelines.\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nimport logging\nimport os\nimport re\nimport shutil\nimport zipfile\nfrom abc import ABCMeta, abstractmethod\nfrom tempfile import mkdtemp\n\nimport six\nfrom six import PY3\n\nfrom virtualenv.util import ConfigParser\nfrom virtualenv.util.path import Path\n\n\[email protected]_metaclass(ABCMeta)\nclass PipInstall(object):\n def __init__(self, wheel, creator, image_folder):\n self._wheel = wheel\n self._creator = creator\n self._image_dir = image_folder\n self._extracted = False\n self.__dist_info = None\n self._console_entry_points = None\n\n @abstractmethod\n def _sync(self, src, dst):\n raise NotImplementedError\n\n def install(self):\n self._extracted = True\n # sync image\n for filename in self._image_dir.iterdir():\n into = self._creator.purelib / filename.name\n if into.exists():\n if into.is_dir() and not into.is_symlink():\n shutil.rmtree(str(into))\n else:\n into.unlink()\n self._sync(filename, into)\n # generate console executables\n consoles = set()\n script_dir = self._creator.script_dir\n for name, module in self._console_scripts.items():\n consoles.update(self._create_console_entry_point(name, module, script_dir))\n logging.debug(\"generated console scripts %s\", \" \".join(i.name for i in consoles))\n\n def build_image(self):\n # 1. first extract the wheel\n logging.debug(\"build install image to %s of %s\", self._image_dir, self._wheel.name)\n with zipfile.ZipFile(str(self._wheel)) as zip_ref:\n zip_ref.extractall(str(self._image_dir))\n self._extracted = True\n # 2. now add additional files not present in the package\n new_files = self._generate_new_files()\n # 3. 
finally fix the records file\n self._fix_records(new_files)\n\n def _records_text(self, files):\n record_data = \"\\n\".join(\n \"{},,\".format(os.path.relpath(six.ensure_text(str(rec)), six.ensure_text(str(self._image_dir))))\n for rec in files\n )\n return record_data\n\n def _generate_new_files(self):\n new_files = set()\n installer = self._dist_info / \"INSTALLER\"\n installer.write_text(\"pip\\n\")\n new_files.add(installer)\n # inject a no-op root element, as workaround for bug added\n # by https://github.com/pypa/pip/commit/c7ae06c79#r35523722\n marker = self._image_dir / \"{}.virtualenv\".format(self._dist_info.name)\n marker.write_text(\"\")\n new_files.add(marker)\n folder = mkdtemp()\n try:\n to_folder = Path(folder)\n rel = os.path.relpath(\n six.ensure_text(str(self._creator.script_dir)), six.ensure_text(str(self._creator.purelib))\n )\n for name, module in self._console_scripts.items():\n new_files.update(\n Path(os.path.normpath(six.ensure_text(str(self._image_dir / rel / i.name))))\n for i in self._create_console_entry_point(name, module, to_folder)\n )\n finally:\n shutil.rmtree(folder, ignore_errors=True)\n return new_files\n\n @property\n def _dist_info(self):\n if self._extracted is False:\n return None # pragma: no cover\n if self.__dist_info is None:\n for filename in self._image_dir.iterdir():\n if filename.suffix == \".dist-info\":\n self.__dist_info = filename\n break\n else:\n raise RuntimeError(\"no dist info\") # pragma: no cover\n return self.__dist_info\n\n @abstractmethod\n def _fix_records(self, extra_record_data):\n raise NotImplementedError\n\n @property\n def _console_scripts(self):\n if self._extracted is False:\n return None # pragma: no cover\n if self._console_entry_points is None:\n self._console_entry_points = {}\n entry_points = self._dist_info / \"entry_points.txt\"\n if entry_points.exists():\n parser = ConfigParser.ConfigParser()\n with entry_points.open() as file_handler:\n reader = getattr(parser, \"read_file\" if PY3 else \"readfp\")\n reader(file_handler)\n if \"console_scripts\" in parser.sections():\n for name, value in parser.items(\"console_scripts\"):\n match = re.match(r\"(.*?)-?\\d\\.?\\d*\", name)\n if match:\n name = match.groups(1)[0]\n self._console_entry_points[name] = value\n return self._console_entry_points\n\n def _create_console_entry_point(self, name, value, to_folder):\n result = []\n from distlib.scripts import ScriptMaker\n\n maker = ScriptMaker(None, str(to_folder))\n maker.clobber = True # overwrite\n maker.variants = {\"\", \"X\", \"X.Y\"} # create all variants\n maker.set_mode = True # ensure they are executable\n maker.executable = str(self._creator.exe)\n specification = \"{} = {}\".format(name, value)\n new_files = maker.make(specification)\n result.extend(Path(i) for i in new_files)\n return result\n\n def clear(self):\n if self._image_dir.exists():\n shutil.rmtree(six.ensure_text(str(self._image_dir)))\n\n def has_image(self):\n return self._image_dir.exists() and next(self._image_dir.iterdir()) is not None\n", "path": "src/virtualenv/seed/via_app_data/pip_install/base.py"}], "after_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nimport logging\nimport os\nimport re\nimport shutil\nimport zipfile\nfrom abc import ABCMeta, abstractmethod\nfrom tempfile import mkdtemp\n\nimport six\nfrom six import PY3\n\nfrom virtualenv.util import ConfigParser\nfrom virtualenv.util.path import Path\n\n\[email protected]_metaclass(ABCMeta)\nclass PipInstall(object):\n def __init__(self, wheel, 
creator, image_folder):\n self._wheel = wheel\n self._creator = creator\n self._image_dir = image_folder\n self._extracted = False\n self.__dist_info = None\n self._console_entry_points = None\n\n @abstractmethod\n def _sync(self, src, dst):\n raise NotImplementedError\n\n def install(self):\n self._extracted = True\n # sync image\n for filename in self._image_dir.iterdir():\n into = self._creator.purelib / filename.name\n if into.exists():\n if into.is_dir() and not into.is_symlink():\n shutil.rmtree(str(into))\n else:\n into.unlink()\n self._sync(filename, into)\n # generate console executables\n consoles = set()\n script_dir = self._creator.script_dir\n for name, module in self._console_scripts.items():\n consoles.update(self._create_console_entry_point(name, module, script_dir))\n logging.debug(\"generated console scripts %s\", \" \".join(i.name for i in consoles))\n\n def build_image(self):\n # 1. first extract the wheel\n logging.debug(\"build install image to %s of %s\", self._image_dir, self._wheel.name)\n with zipfile.ZipFile(str(self._wheel)) as zip_ref:\n zip_ref.extractall(str(self._image_dir))\n self._extracted = True\n # 2. now add additional files not present in the package\n new_files = self._generate_new_files()\n # 3. finally fix the records file\n self._fix_records(new_files)\n\n def _records_text(self, files):\n record_data = \"\\n\".join(\n \"{},,\".format(os.path.relpath(six.ensure_text(str(rec)), six.ensure_text(str(self._image_dir))))\n for rec in files\n )\n return record_data\n\n def _generate_new_files(self):\n new_files = set()\n installer = self._dist_info / \"INSTALLER\"\n installer.write_text(\"pip\\n\")\n new_files.add(installer)\n # inject a no-op root element, as workaround for bug in https://github.com/pypa/pip/issues/7226\n marker = self._image_dir / \"{}.virtualenv\".format(self._dist_info.stem)\n marker.write_text(\"\")\n new_files.add(marker)\n folder = mkdtemp()\n try:\n to_folder = Path(folder)\n rel = os.path.relpath(\n six.ensure_text(str(self._creator.script_dir)), six.ensure_text(str(self._creator.purelib))\n )\n for name, module in self._console_scripts.items():\n new_files.update(\n Path(os.path.normpath(six.ensure_text(str(self._image_dir / rel / i.name))))\n for i in self._create_console_entry_point(name, module, to_folder)\n )\n finally:\n shutil.rmtree(folder, ignore_errors=True)\n return new_files\n\n @property\n def _dist_info(self):\n if self._extracted is False:\n return None # pragma: no cover\n if self.__dist_info is None:\n for filename in self._image_dir.iterdir():\n if filename.suffix == \".dist-info\":\n self.__dist_info = filename\n break\n else:\n raise RuntimeError(\"no dist info\") # pragma: no cover\n return self.__dist_info\n\n @abstractmethod\n def _fix_records(self, extra_record_data):\n raise NotImplementedError\n\n @property\n def _console_scripts(self):\n if self._extracted is False:\n return None # pragma: no cover\n if self._console_entry_points is None:\n self._console_entry_points = {}\n entry_points = self._dist_info / \"entry_points.txt\"\n if entry_points.exists():\n parser = ConfigParser.ConfigParser()\n with entry_points.open() as file_handler:\n reader = getattr(parser, \"read_file\" if PY3 else \"readfp\")\n reader(file_handler)\n if \"console_scripts\" in parser.sections():\n for name, value in parser.items(\"console_scripts\"):\n match = re.match(r\"(.*?)-?\\d\\.?\\d*\", name)\n if match:\n name = match.groups(1)[0]\n self._console_entry_points[name] = value\n return self._console_entry_points\n\n def 
_create_console_entry_point(self, name, value, to_folder):\n result = []\n from distlib.scripts import ScriptMaker\n\n maker = ScriptMaker(None, str(to_folder))\n maker.clobber = True # overwrite\n maker.variants = {\"\", \"X\", \"X.Y\"} # create all variants\n maker.set_mode = True # ensure they are executable\n maker.executable = str(self._creator.exe)\n specification = \"{} = {}\".format(name, value)\n new_files = maker.make(specification)\n result.extend(Path(i) for i in new_files)\n return result\n\n def clear(self):\n if self._image_dir.exists():\n shutil.rmtree(six.ensure_text(str(self._image_dir)))\n\n def has_image(self):\n return self._image_dir.exists() and next(self._image_dir.iterdir()) is not None\n", "path": "src/virtualenv/seed/via_app_data/pip_install/base.py"}]}
| 2,544 | 252 |
gh_patches_debug_36414
|
rasdani/github-patches
|
git_diff
|
facebookresearch__hydra-1695
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
strict flag was removed from the compose API without a prior deprecation
Strict config composition functionality has become the default in Hydra 1.0 (See https://hydra.cc/docs/upgrades/0.11_to_1.0/strict_mode_flag_deprecated).
This flag was completely removed in Hydra 1.1.0.
Unfortunately, the Compose API strict flag was not deprecated and was thus an avoidable breaking change.
A followup PR will re-introduce the strict flag to the Compose API as a deprecated flag. That flag will be removed in the major version of Hydra.
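As an illustration of the breakage (a sketch, not from the Hydra docs; the config directory and file name are assumed), code written against the old signature now fails at call time instead of composing with a warning:

```python
from hydra import compose, initialize

with initialize(config_path="conf"):                   # assumes a conf/config.yaml exists
    cfg = compose(config_name="config", strict=False)  # on Hydra 1.1.0 this raises:
    # TypeError: compose() got an unexpected keyword argument 'strict'
```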
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hydra/experimental/compose.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 # DEPRECATED: remove in 1.2
3 import warnings
4 from typing import List, Optional
5
6 from omegaconf import DictConfig
7
8
9 def compose(
10 config_name: Optional[str] = None,
11 overrides: List[str] = [],
12 return_hydra_config: bool = False,
13 ) -> DictConfig:
14 from hydra import compose as real_compose
15
16 warnings.warn(
17 category=UserWarning,
18 message="hydra.experimental.compose() is no longer experimental."
19 " Use hydra.compose()",
20 )
21 return real_compose(
22 config_name=config_name,
23 overrides=overrides,
24 return_hydra_config=return_hydra_config,
25 )
26
```
Path: `hydra/compose.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from typing import List, Optional
3
4 from omegaconf import DictConfig, open_dict
5
6 from hydra.core.global_hydra import GlobalHydra
7 from hydra.types import RunMode
8
9
10 def compose(
11 config_name: Optional[str] = None,
12 overrides: List[str] = [],
13 return_hydra_config: bool = False,
14 ) -> DictConfig:
15 """
16 :param config_name: the name of the config
17 (usually the file name without the .yaml extension)
18 :param overrides: list of overrides for config file
19 :param return_hydra_config: True to return the hydra config node in the result
20 :return: the composed config
21 """
22 assert (
23 GlobalHydra().is_initialized()
24 ), "GlobalHydra is not initialized, use @hydra.main() or call one of the hydra initialization methods first"
25
26 gh = GlobalHydra.instance()
27 assert gh.hydra is not None
28 cfg = gh.hydra.compose_config(
29 config_name=config_name,
30 overrides=overrides,
31 run_mode=RunMode.RUN,
32 from_shell=False,
33 with_log_configuration=False,
34 )
35 assert isinstance(cfg, DictConfig)
36
37 if not return_hydra_config:
38 if "hydra" in cfg:
39 with open_dict(cfg):
40 del cfg["hydra"]
41 return cfg
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/hydra/compose.py b/hydra/compose.py
--- a/hydra/compose.py
+++ b/hydra/compose.py
@@ -1,7 +1,9 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
+import warnings
+from textwrap import dedent
from typing import List, Optional
-from omegaconf import DictConfig, open_dict
+from omegaconf import DictConfig, OmegaConf, open_dict
from hydra.core.global_hydra import GlobalHydra
from hydra.types import RunMode
@@ -11,12 +13,14 @@
config_name: Optional[str] = None,
overrides: List[str] = [],
return_hydra_config: bool = False,
+ strict: Optional[bool] = None,
) -> DictConfig:
"""
:param config_name: the name of the config
(usually the file name without the .yaml extension)
:param overrides: list of overrides for config file
:param return_hydra_config: True to return the hydra config node in the result
+ :param strict: DEPRECATED. If true, returned config has struct mode disabled.
:return: the composed config
"""
assert (
@@ -38,4 +42,18 @@
if "hydra" in cfg:
with open_dict(cfg):
del cfg["hydra"]
+
+ if strict is not None:
+ # DEPRECATED: remove in 1.2
+ warnings.warn(
+ dedent(
+ """\
+
+ The strict flag in the compose API is deprecated and will be removed in the next version of Hydra.
+ See https://hydra.cc/docs/upgrades/0.11_to_1.0/strict_mode_flag_deprecated for more info.
+ """
+ )
+ )
+ OmegaConf.set_struct(cfg, strict)
+
return cfg
diff --git a/hydra/experimental/compose.py b/hydra/experimental/compose.py
--- a/hydra/experimental/compose.py
+++ b/hydra/experimental/compose.py
@@ -10,6 +10,7 @@
config_name: Optional[str] = None,
overrides: List[str] = [],
return_hydra_config: bool = False,
+ strict: Optional[bool] = None,
) -> DictConfig:
from hydra import compose as real_compose
@@ -22,4 +23,5 @@
config_name=config_name,
overrides=overrides,
return_hydra_config=return_hydra_config,
+ strict=strict,
)
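Some context for the patch above (a self-contained sketch, not Hydra code; the config values are invented): the reinstated `strict` value is simply forwarded to `OmegaConf.set_struct`, and struct mode is what decides whether unknown keys may be added to the composed config.

```python
from omegaconf import OmegaConf

cfg = OmegaConf.create({"db": {"host": "localhost"}})
OmegaConf.set_struct(cfg, True)   # what compose(..., strict=...) forwards to in the patch
try:
    cfg.db.port = 3306            # unknown key is rejected while struct mode is on
except Exception as err:          # OmegaConf raises a ConfigAttributeError here
    print(type(err).__name__, err)

OmegaConf.set_struct(cfg, False)
cfg.db.port = 3306                # allowed once struct mode is off
```

Because the deprecation is a plain `warnings.warn`, test suites migrating off the flag can turn it into a hard failure with `warnings.simplefilter("error")`.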
|
{"golden_diff": "diff --git a/hydra/compose.py b/hydra/compose.py\n--- a/hydra/compose.py\n+++ b/hydra/compose.py\n@@ -1,7 +1,9 @@\n # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n+import warnings\n+from textwrap import dedent\n from typing import List, Optional\n \n-from omegaconf import DictConfig, open_dict\n+from omegaconf import DictConfig, OmegaConf, open_dict\n \n from hydra.core.global_hydra import GlobalHydra\n from hydra.types import RunMode\n@@ -11,12 +13,14 @@\n config_name: Optional[str] = None,\n overrides: List[str] = [],\n return_hydra_config: bool = False,\n+ strict: Optional[bool] = None,\n ) -> DictConfig:\n \"\"\"\n :param config_name: the name of the config\n (usually the file name without the .yaml extension)\n :param overrides: list of overrides for config file\n :param return_hydra_config: True to return the hydra config node in the result\n+ :param strict: DEPRECATED. If true, returned config has struct mode disabled.\n :return: the composed config\n \"\"\"\n assert (\n@@ -38,4 +42,18 @@\n if \"hydra\" in cfg:\n with open_dict(cfg):\n del cfg[\"hydra\"]\n+\n+ if strict is not None:\n+ # DEPRECATED: remove in 1.2\n+ warnings.warn(\n+ dedent(\n+ \"\"\"\\\n+\n+ The strict flag in the compose API is deprecated and will be removed in the next version of Hydra.\n+ See https://hydra.cc/docs/upgrades/0.11_to_1.0/strict_mode_flag_deprecated for more info.\n+ \"\"\"\n+ )\n+ )\n+ OmegaConf.set_struct(cfg, strict)\n+\n return cfg\ndiff --git a/hydra/experimental/compose.py b/hydra/experimental/compose.py\n--- a/hydra/experimental/compose.py\n+++ b/hydra/experimental/compose.py\n@@ -10,6 +10,7 @@\n config_name: Optional[str] = None,\n overrides: List[str] = [],\n return_hydra_config: bool = False,\n+ strict: Optional[bool] = None,\n ) -> DictConfig:\n from hydra import compose as real_compose\n \n@@ -22,4 +23,5 @@\n config_name=config_name,\n overrides=overrides,\n return_hydra_config=return_hydra_config,\n+ strict=strict,\n )\n", "issue": "strict flag was removed from the compose API without a prior deprecation\nStrict config composition functionality has become the default in Hydra 1.0 (See https://hydra.cc/docs/upgrades/0.11_to_1.0/strict_mode_flag_deprecated).\r\n\r\nThis flag was completely removed in Hydra 1.1.0.\r\nUnfortunately, the Compose API strict flag was not deprecated and was thus an avoidable breaking change.\r\n\r\nA followup PR will re-introduce the strict flag to the Compose API as a deprecated flag. That flag will be removed in the major version of Hydra.\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# DEPRECATED: remove in 1.2\nimport warnings\nfrom typing import List, Optional\n\nfrom omegaconf import DictConfig\n\n\ndef compose(\n config_name: Optional[str] = None,\n overrides: List[str] = [],\n return_hydra_config: bool = False,\n) -> DictConfig:\n from hydra import compose as real_compose\n\n warnings.warn(\n category=UserWarning,\n message=\"hydra.experimental.compose() is no longer experimental.\"\n \" Use hydra.compose()\",\n )\n return real_compose(\n config_name=config_name,\n overrides=overrides,\n return_hydra_config=return_hydra_config,\n )\n", "path": "hydra/experimental/compose.py"}, {"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nfrom typing import List, Optional\n\nfrom omegaconf import DictConfig, open_dict\n\nfrom hydra.core.global_hydra import GlobalHydra\nfrom hydra.types import RunMode\n\n\ndef compose(\n config_name: Optional[str] = None,\n overrides: List[str] = [],\n return_hydra_config: bool = False,\n) -> DictConfig:\n \"\"\"\n :param config_name: the name of the config\n (usually the file name without the .yaml extension)\n :param overrides: list of overrides for config file\n :param return_hydra_config: True to return the hydra config node in the result\n :return: the composed config\n \"\"\"\n assert (\n GlobalHydra().is_initialized()\n ), \"GlobalHydra is not initialized, use @hydra.main() or call one of the hydra initialization methods first\"\n\n gh = GlobalHydra.instance()\n assert gh.hydra is not None\n cfg = gh.hydra.compose_config(\n config_name=config_name,\n overrides=overrides,\n run_mode=RunMode.RUN,\n from_shell=False,\n with_log_configuration=False,\n )\n assert isinstance(cfg, DictConfig)\n\n if not return_hydra_config:\n if \"hydra\" in cfg:\n with open_dict(cfg):\n del cfg[\"hydra\"]\n return cfg\n", "path": "hydra/compose.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# DEPRECATED: remove in 1.2\nimport warnings\nfrom typing import List, Optional\n\nfrom omegaconf import DictConfig\n\n\ndef compose(\n config_name: Optional[str] = None,\n overrides: List[str] = [],\n return_hydra_config: bool = False,\n strict: Optional[bool] = None,\n) -> DictConfig:\n from hydra import compose as real_compose\n\n warnings.warn(\n category=UserWarning,\n message=\"hydra.experimental.compose() is no longer experimental.\"\n \" Use hydra.compose()\",\n )\n return real_compose(\n config_name=config_name,\n overrides=overrides,\n return_hydra_config=return_hydra_config,\n strict=strict,\n )\n", "path": "hydra/experimental/compose.py"}, {"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport warnings\nfrom textwrap import dedent\nfrom typing import List, Optional\n\nfrom omegaconf import DictConfig, OmegaConf, open_dict\n\nfrom hydra.core.global_hydra import GlobalHydra\nfrom hydra.types import RunMode\n\n\ndef compose(\n config_name: Optional[str] = None,\n overrides: List[str] = [],\n return_hydra_config: bool = False,\n strict: Optional[bool] = None,\n) -> DictConfig:\n \"\"\"\n :param config_name: the name of the config\n (usually the file name without the .yaml extension)\n :param overrides: list of overrides for config file\n :param return_hydra_config: True to return the hydra config node in the result\n :param strict: DEPRECATED. 
If true, returned config has struct mode disabled.\n :return: the composed config\n \"\"\"\n assert (\n GlobalHydra().is_initialized()\n ), \"GlobalHydra is not initialized, use @hydra.main() or call one of the hydra initialization methods first\"\n\n gh = GlobalHydra.instance()\n assert gh.hydra is not None\n cfg = gh.hydra.compose_config(\n config_name=config_name,\n overrides=overrides,\n run_mode=RunMode.RUN,\n from_shell=False,\n with_log_configuration=False,\n )\n assert isinstance(cfg, DictConfig)\n\n if not return_hydra_config:\n if \"hydra\" in cfg:\n with open_dict(cfg):\n del cfg[\"hydra\"]\n\n if strict is not None:\n # DEPRECATED: remove in 1.2\n warnings.warn(\n dedent(\n \"\"\"\\\n\n The strict flag in the compose API is deprecated and will be removed in the next version of Hydra.\n See https://hydra.cc/docs/upgrades/0.11_to_1.0/strict_mode_flag_deprecated for more info.\n \"\"\"\n )\n )\n OmegaConf.set_struct(cfg, strict)\n\n return cfg\n", "path": "hydra/compose.py"}]}
| 985 | 582 |
gh_patches_debug_22393
|
rasdani/github-patches
|
git_diff
|
pyload__pyload-1508
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[SkipRev] broken - cannot import name SkipDownload
SkipRev plugin failed to import with following error:
Errore durante l'importazione SkipRev: cannot import name SkipDownload (in English: "Error while importing SkipRev: cannot import name SkipDownload")
I already tried following, without success: http://forum.pyload.org/viewtopic.php?f=7&t=4335
Debian GNU/Linux 7.8 (wheezy) x64
Python 2.7.3
pyLoad 0.4.9
SkipRev 0.30
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `module/plugins/hooks/SkipRev.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import re
4 import urllib
5 import urlparse
6
7 from types import MethodType
8
9 from module.PyFile import PyFile
10 from module.plugins.internal.Hook import Hook
11 from module.plugins.internal.Plugin import SkipDownload
12
13
14 class SkipRev(Hook):
15 __name__ = "SkipRev"
16 __type__ = "hook"
17 __version__ = "0.30"
18
19 __config__ = [("mode" , "Auto;Manual", "Choose recovery archives to skip" , "Auto"),
20 ("revtokeep", "int" , "Number of recovery archives to keep for package", 0 )]
21
22 __description__ = """Skip recovery archives (.rev)"""
23 __license__ = "GPLv3"
24 __authors__ = [("Walter Purcaro", "[email protected]")]
25
26
27 interval = 0 #@TODO: Remove in 0.4.10
28
29
30 def setup(self):
31 self.info = {} #@TODO: Remove in 0.4.10
32
33
34 @staticmethod
35 def _setup(self):
36 self.pyfile.plugin._setup()
37 if self.pyfile.hasStatus("skipped"):
38 raise SkipDownload(self.pyfile.statusname or self.pyfile.pluginname)
39
40
41 def _name(self, pyfile):
42 if hasattr(pyfile.pluginmodule, "getInfo"): #@NOTE: getInfo is deprecated in 0.4.10
43 return pyfile.pluginmodule.getInfo([pyfile.url]).next()[0]
44 else:
45 self.logWarning("Unable to grab file name")
46 return urlparse.urlparse(urllib.unquote(pyfile.url)).path.split('/')[-1]
47
48
49 def _pyfile(self, link):
50 return PyFile(self.core.files,
51 link.fid,
52 link.url,
53 link.name,
54 link.size,
55 link.status,
56 link.error,
57 link.plugin,
58 link.packageID,
59 link.order)
60
61
62 def downloadPreparing(self, pyfile):
63 name = self._name(pyfile)
64
65 if pyfile.statusname is _("unskipped") or not name.endswith(".rev") or not ".part" in name:
66 return
67
68 revtokeep = -1 if self.getConfig('mode') == "Auto" else self.getConfig('revtokeep')
69
70 if revtokeep:
71 status_list = (1, 4, 8, 9, 14) if revtokeep < 0 else (1, 3, 4, 8, 9, 14)
72 pyname = re.compile(r'%s\.part\d+\.rev$' % name.rsplit('.', 2)[0].replace('.', '\.'))
73
74 queued = [True for link in self.core.api.getPackageData(pyfile.package().id).links \
75 if link.status not in status_list and pyname.match(link.name)].count(True)
76
77 if not queued or queued < revtokeep: #: keep one rev at least in auto mode
78 return
79
80 pyfile.setCustomStatus("SkipRev", "skipped")
81
82 if not hasattr(pyfile.plugin, "_setup"):
83 # Work-around: inject status checker inside the preprocessing routine of the plugin
84 pyfile.plugin._setup = pyfile.plugin.setup
85 pyfile.plugin.setup = MethodType(self._setup, pyfile.plugin)
86
87
88 def downloadFailed(self, pyfile):
89 #: Check if pyfile is still "failed",
90 # maybe might has been restarted in meantime
91 if pyfile.status != 8 or pyfile.name.rsplit('.', 1)[-1].strip() not in ("rar", "rev"):
92 return
93
94 revtokeep = -1 if self.getConfig('mode') == "Auto" else self.getConfig('revtokeep')
95
96 if not revtokeep:
97 return
98
99 pyname = re.compile(r'%s\.part\d+\.rev$' % pyfile.name.rsplit('.', 2)[0].replace('.', '\.'))
100
101 for link in self.core.api.getPackageData(pyfile.package().id).links:
102 if link.status is 4 and pyname.match(link.name):
103 pylink = self._pyfile(link)
104
105 if revtokeep > -1 or pyfile.name.endswith(".rev"):
106 pylink.setStatus("queued")
107 else:
108 pylink.setCustomStatus(_("unskipped"), "queued")
109
110 self.core.files.save()
111 pylink.release()
112 return
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/module/plugins/hooks/SkipRev.py b/module/plugins/hooks/SkipRev.py
--- a/module/plugins/hooks/SkipRev.py
+++ b/module/plugins/hooks/SkipRev.py
@@ -8,13 +8,13 @@
from module.PyFile import PyFile
from module.plugins.internal.Hook import Hook
-from module.plugins.internal.Plugin import SkipDownload
+from module.plugins.internal.Plugin import Skip
class SkipRev(Hook):
__name__ = "SkipRev"
__type__ = "hook"
- __version__ = "0.30"
+ __version__ = "0.31"
__config__ = [("mode" , "Auto;Manual", "Choose recovery archives to skip" , "Auto"),
("revtokeep", "int" , "Number of recovery archives to keep for package", 0 )]
@@ -35,7 +35,7 @@
def _setup(self):
self.pyfile.plugin._setup()
if self.pyfile.hasStatus("skipped"):
- raise SkipDownload(self.pyfile.statusname or self.pyfile.pluginname)
+ raise Skip(self.pyfile.statusname or self.pyfile.pluginname)
def _name(self, pyfile):
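As an aside (purely hypothetical code, not what pyLoad ships): renames such as `SkipDownload` to `Skip` in `Plugin.py` can keep older third-party hooks importable by leaving a deprecated alias behind. The golden patch instead updates the in-tree importer, which is the simpler fix here.

```python
# Hypothetical excerpt of module/plugins/internal/Plugin.py (illustration only)
class Skip(Exception):
    """Raised by a plugin to skip the current download."""

# Deprecated alias: keeps `from module.plugins.internal.Plugin import SkipDownload`
# working while out-of-tree hooks migrate to the new name.
SkipDownload = Skip
```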
|
{"golden_diff": "diff --git a/module/plugins/hooks/SkipRev.py b/module/plugins/hooks/SkipRev.py\n--- a/module/plugins/hooks/SkipRev.py\n+++ b/module/plugins/hooks/SkipRev.py\n@@ -8,13 +8,13 @@\n \n from module.PyFile import PyFile\n from module.plugins.internal.Hook import Hook\n-from module.plugins.internal.Plugin import SkipDownload\n+from module.plugins.internal.Plugin import Skip\n \n \n class SkipRev(Hook):\n __name__ = \"SkipRev\"\n __type__ = \"hook\"\n- __version__ = \"0.30\"\n+ __version__ = \"0.31\"\n \n __config__ = [(\"mode\" , \"Auto;Manual\", \"Choose recovery archives to skip\" , \"Auto\"),\n (\"revtokeep\", \"int\" , \"Number of recovery archives to keep for package\", 0 )]\n@@ -35,7 +35,7 @@\n def _setup(self):\n self.pyfile.plugin._setup()\n if self.pyfile.hasStatus(\"skipped\"):\n- raise SkipDownload(self.pyfile.statusname or self.pyfile.pluginname)\n+ raise Skip(self.pyfile.statusname or self.pyfile.pluginname)\n \n \n def _name(self, pyfile):\n", "issue": "[SkipRev] broken - cannot import name SkipDownload\nSkipRev plugin failed to import with following error:\nErrore durante l'importazione SkipRev: cannot import name SkipDownload\n\nI already tried following, without success: http://forum.pyload.org/viewtopic.php?f=7&t=4335\n\nDebian GNU/Linux 7.8 (wheezy) x64\nPython 2.7.3\npyLoad 0.4.9\nSkipRev 0.30\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport re\nimport urllib\nimport urlparse\n\nfrom types import MethodType\n\nfrom module.PyFile import PyFile\nfrom module.plugins.internal.Hook import Hook\nfrom module.plugins.internal.Plugin import SkipDownload\n\n\nclass SkipRev(Hook):\n __name__ = \"SkipRev\"\n __type__ = \"hook\"\n __version__ = \"0.30\"\n\n __config__ = [(\"mode\" , \"Auto;Manual\", \"Choose recovery archives to skip\" , \"Auto\"),\n (\"revtokeep\", \"int\" , \"Number of recovery archives to keep for package\", 0 )]\n\n __description__ = \"\"\"Skip recovery archives (.rev)\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"Walter Purcaro\", \"[email protected]\")]\n\n\n interval = 0 #@TODO: Remove in 0.4.10\n\n\n def setup(self):\n self.info = {} #@TODO: Remove in 0.4.10\n\n\n @staticmethod\n def _setup(self):\n self.pyfile.plugin._setup()\n if self.pyfile.hasStatus(\"skipped\"):\n raise SkipDownload(self.pyfile.statusname or self.pyfile.pluginname)\n\n\n def _name(self, pyfile):\n if hasattr(pyfile.pluginmodule, \"getInfo\"): #@NOTE: getInfo is deprecated in 0.4.10\n return pyfile.pluginmodule.getInfo([pyfile.url]).next()[0]\n else:\n self.logWarning(\"Unable to grab file name\")\n return urlparse.urlparse(urllib.unquote(pyfile.url)).path.split('/')[-1]\n\n\n def _pyfile(self, link):\n return PyFile(self.core.files,\n link.fid,\n link.url,\n link.name,\n link.size,\n link.status,\n link.error,\n link.plugin,\n link.packageID,\n link.order)\n\n\n def downloadPreparing(self, pyfile):\n name = self._name(pyfile)\n\n if pyfile.statusname is _(\"unskipped\") or not name.endswith(\".rev\") or not \".part\" in name:\n return\n\n revtokeep = -1 if self.getConfig('mode') == \"Auto\" else self.getConfig('revtokeep')\n\n if revtokeep:\n status_list = (1, 4, 8, 9, 14) if revtokeep < 0 else (1, 3, 4, 8, 9, 14)\n pyname = re.compile(r'%s\\.part\\d+\\.rev$' % name.rsplit('.', 2)[0].replace('.', '\\.'))\n\n queued = [True for link in self.core.api.getPackageData(pyfile.package().id).links \\\n if link.status not in status_list and pyname.match(link.name)].count(True)\n\n if not queued or queued < revtokeep: #: keep one rev at least in auto mode\n 
return\n\n pyfile.setCustomStatus(\"SkipRev\", \"skipped\")\n\n if not hasattr(pyfile.plugin, \"_setup\"):\n # Work-around: inject status checker inside the preprocessing routine of the plugin\n pyfile.plugin._setup = pyfile.plugin.setup\n pyfile.plugin.setup = MethodType(self._setup, pyfile.plugin)\n\n\n def downloadFailed(self, pyfile):\n #: Check if pyfile is still \"failed\",\n # maybe might has been restarted in meantime\n if pyfile.status != 8 or pyfile.name.rsplit('.', 1)[-1].strip() not in (\"rar\", \"rev\"):\n return\n\n revtokeep = -1 if self.getConfig('mode') == \"Auto\" else self.getConfig('revtokeep')\n\n if not revtokeep:\n return\n\n pyname = re.compile(r'%s\\.part\\d+\\.rev$' % pyfile.name.rsplit('.', 2)[0].replace('.', '\\.'))\n\n for link in self.core.api.getPackageData(pyfile.package().id).links:\n if link.status is 4 and pyname.match(link.name):\n pylink = self._pyfile(link)\n\n if revtokeep > -1 or pyfile.name.endswith(\".rev\"):\n pylink.setStatus(\"queued\")\n else:\n pylink.setCustomStatus(_(\"unskipped\"), \"queued\")\n\n self.core.files.save()\n pylink.release()\n return\n", "path": "module/plugins/hooks/SkipRev.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport re\nimport urllib\nimport urlparse\n\nfrom types import MethodType\n\nfrom module.PyFile import PyFile\nfrom module.plugins.internal.Hook import Hook\nfrom module.plugins.internal.Plugin import Skip\n\n\nclass SkipRev(Hook):\n __name__ = \"SkipRev\"\n __type__ = \"hook\"\n __version__ = \"0.31\"\n\n __config__ = [(\"mode\" , \"Auto;Manual\", \"Choose recovery archives to skip\" , \"Auto\"),\n (\"revtokeep\", \"int\" , \"Number of recovery archives to keep for package\", 0 )]\n\n __description__ = \"\"\"Skip recovery archives (.rev)\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"Walter Purcaro\", \"[email protected]\")]\n\n\n interval = 0 #@TODO: Remove in 0.4.10\n\n\n def setup(self):\n self.info = {} #@TODO: Remove in 0.4.10\n\n\n @staticmethod\n def _setup(self):\n self.pyfile.plugin._setup()\n if self.pyfile.hasStatus(\"skipped\"):\n raise Skip(self.pyfile.statusname or self.pyfile.pluginname)\n\n\n def _name(self, pyfile):\n if hasattr(pyfile.pluginmodule, \"getInfo\"): #@NOTE: getInfo is deprecated in 0.4.10\n return pyfile.pluginmodule.getInfo([pyfile.url]).next()[0]\n else:\n self.logWarning(\"Unable to grab file name\")\n return urlparse.urlparse(urllib.unquote(pyfile.url)).path.split('/')[-1]\n\n\n def _pyfile(self, link):\n return PyFile(self.core.files,\n link.fid,\n link.url,\n link.name,\n link.size,\n link.status,\n link.error,\n link.plugin,\n link.packageID,\n link.order)\n\n\n def downloadPreparing(self, pyfile):\n name = self._name(pyfile)\n\n if pyfile.statusname is _(\"unskipped\") or not name.endswith(\".rev\") or not \".part\" in name:\n return\n\n revtokeep = -1 if self.getConfig('mode') == \"Auto\" else self.getConfig('revtokeep')\n\n if revtokeep:\n status_list = (1, 4, 8, 9, 14) if revtokeep < 0 else (1, 3, 4, 8, 9, 14)\n pyname = re.compile(r'%s\\.part\\d+\\.rev$' % name.rsplit('.', 2)[0].replace('.', '\\.'))\n\n queued = [True for link in self.core.api.getPackageData(pyfile.package().id).links \\\n if link.status not in status_list and pyname.match(link.name)].count(True)\n\n if not queued or queued < revtokeep: #: keep one rev at least in auto mode\n return\n\n pyfile.setCustomStatus(\"SkipRev\", \"skipped\")\n\n if not hasattr(pyfile.plugin, \"_setup\"):\n # Work-around: inject status checker inside the preprocessing routine of the plugin\n 
pyfile.plugin._setup = pyfile.plugin.setup\n pyfile.plugin.setup = MethodType(self._setup, pyfile.plugin)\n\n\n def downloadFailed(self, pyfile):\n #: Check if pyfile is still \"failed\",\n # maybe might has been restarted in meantime\n if pyfile.status != 8 or pyfile.name.rsplit('.', 1)[-1].strip() not in (\"rar\", \"rev\"):\n return\n\n revtokeep = -1 if self.getConfig('mode') == \"Auto\" else self.getConfig('revtokeep')\n\n if not revtokeep:\n return\n\n pyname = re.compile(r'%s\\.part\\d+\\.rev$' % pyfile.name.rsplit('.', 2)[0].replace('.', '\\.'))\n\n for link in self.core.api.getPackageData(pyfile.package().id).links:\n if link.status is 4 and pyname.match(link.name):\n pylink = self._pyfile(link)\n\n if revtokeep > -1 or pyfile.name.endswith(\".rev\"):\n pylink.setStatus(\"queued\")\n else:\n pylink.setCustomStatus(_(\"unskipped\"), \"queued\")\n\n self.core.files.save()\n pylink.release()\n return\n", "path": "module/plugins/hooks/SkipRev.py"}]}
| 1,566 | 272 |
gh_patches_debug_19087
|
rasdani/github-patches
|
git_diff
|
cloudtools__troposphere-869
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Validation error in CodeCommit Trigger
It appears that the validation for the CodeCommit Trigger class does not currently allow the use of any intrinsic functions for the `Events` property, and only accepts a list of hard coded values.
https://github.com/cloudtools/troposphere/blob/45582eb1d21a6cc9cfa608f626d8acbf0317f37f/troposphere/codecommit.py#L18-L32
We are trying to allow for the dynamic selection of the values with a parameter, but encounter errors when attempting to use a `Ref`. A snippet of our trigger definition, and the error encountered is below:
```
repo_trigger1 = codecommit.Trigger(
Name = Ref(trigger_1_name),
CustomData = Ref(trigger_1_custom_data),
DestinationArn = Ref(trigger_1_destination_arn),
Branches = Ref(trigger_1_branches),
Events = Ref(trigger_1_events),
)
```
We are able to successfully generate the template when changing the `Events` assignment to:
```Events = ["all"],```
I believe we just need to check if the value is one of the Helper functions before iterating through the events. I will try to get a fix pushed up for review.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `troposphere/codecommit.py`
Content:
```
1 # Copyright (c) 2016, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSObject, AWSProperty
7
8
9 class Trigger(AWSProperty):
10 props = {
11 'Branches': ([basestring], False),
12 'CustomData': (basestring, False),
13 'DestinationArn': (basestring, False),
14 'Events': ([basestring], False),
15 'Name': (basestring, False),
16 }
17
18 def validate(self):
19 valid = [
20 'all',
21 'createReference',
22 'deleteReference',
23 'updateReference',
24 ]
25 events = self.properties.get('Events')
26 if events:
27 if 'all' in events and len(events) != 1:
28 raise ValueError('Trigger events: all must be used alone')
29 else:
30 for e in events:
31 if e not in valid:
32 raise ValueError('Trigger: invalid event %s' % e)
33
34
35 class Repository(AWSObject):
36 resource_type = "AWS::CodeCommit::Repository"
37
38 props = {
39 'RepositoryDescription': (basestring, False),
40 'RepositoryName': (basestring, True),
41 'Triggers': ([Trigger], False),
42 }
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/troposphere/codecommit.py b/troposphere/codecommit.py
--- a/troposphere/codecommit.py
+++ b/troposphere/codecommit.py
@@ -3,7 +3,7 @@
#
# See LICENSE file for full license.
-from . import AWSObject, AWSProperty
+from . import AWSHelperFn, AWSObject, AWSProperty
class Trigger(AWSProperty):
@@ -23,12 +23,12 @@
'updateReference',
]
events = self.properties.get('Events')
- if events:
+ if events and not isinstance(events, AWSHelperFn):
if 'all' in events and len(events) != 1:
raise ValueError('Trigger events: all must be used alone')
else:
for e in events:
- if e not in valid:
+ if e not in valid and not isinstance(e, AWSHelperFn):
raise ValueError('Trigger: invalid event %s' % e)
|
{"golden_diff": "diff --git a/troposphere/codecommit.py b/troposphere/codecommit.py\n--- a/troposphere/codecommit.py\n+++ b/troposphere/codecommit.py\n@@ -3,7 +3,7 @@\n #\n # See LICENSE file for full license.\n \n-from . import AWSObject, AWSProperty\n+from . import AWSHelperFn, AWSObject, AWSProperty\n \n \n class Trigger(AWSProperty):\n@@ -23,12 +23,12 @@\n 'updateReference',\n ]\n events = self.properties.get('Events')\n- if events:\n+ if events and not isinstance(events, AWSHelperFn):\n if 'all' in events and len(events) != 1:\n raise ValueError('Trigger events: all must be used alone')\n else:\n for e in events:\n- if e not in valid:\n+ if e not in valid and not isinstance(e, AWSHelperFn):\n raise ValueError('Trigger: invalid event %s' % e)\n", "issue": "Validation error in CodeCommit Trigger\nIt appears that the validation for the CodeCommit Trigger class does not currently allow the use of any intrinsic functions for the `Events` property, and only accepts a list of hard coded values.\r\n\r\nhttps://github.com/cloudtools/troposphere/blob/45582eb1d21a6cc9cfa608f626d8acbf0317f37f/troposphere/codecommit.py#L18-L32\r\n\r\nWe are trying to allow for the dynamic selection of the values with a parameter, but encounter errors when attempting to use a `Ref`. A snippet of our trigger definition, and the error encountered is below:\r\n\r\n```\r\nrepo_trigger1 = codecommit.Trigger(\r\n Name = Ref(trigger_1_name),\r\n CustomData = Ref(trigger_1_custom_data),\r\n DestinationArn = Ref(trigger_1_destination_arn),\r\n Branches = Ref(trigger_1_branches),\r\n Events = Ref(trigger_1_events),\r\n )\r\n```\r\nWe are able to successfully generate the template when changing the `Events` assignment to:\r\n\r\n```Events = [\"all\"],```\r\n\r\nI believe we just need to check if the value is one of the Helper functions before iterating through the events. I will try to get a fix pushed up for review.\n", "before_files": [{"content": "# Copyright (c) 2016, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSObject, AWSProperty\n\n\nclass Trigger(AWSProperty):\n props = {\n 'Branches': ([basestring], False),\n 'CustomData': (basestring, False),\n 'DestinationArn': (basestring, False),\n 'Events': ([basestring], False),\n 'Name': (basestring, False),\n }\n\n def validate(self):\n valid = [\n 'all',\n 'createReference',\n 'deleteReference',\n 'updateReference',\n ]\n events = self.properties.get('Events')\n if events:\n if 'all' in events and len(events) != 1:\n raise ValueError('Trigger events: all must be used alone')\n else:\n for e in events:\n if e not in valid:\n raise ValueError('Trigger: invalid event %s' % e)\n\n\nclass Repository(AWSObject):\n resource_type = \"AWS::CodeCommit::Repository\"\n\n props = {\n 'RepositoryDescription': (basestring, False),\n 'RepositoryName': (basestring, True),\n 'Triggers': ([Trigger], False),\n }\n", "path": "troposphere/codecommit.py"}], "after_files": [{"content": "# Copyright (c) 2016, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . 
import AWSHelperFn, AWSObject, AWSProperty\n\n\nclass Trigger(AWSProperty):\n props = {\n 'Branches': ([basestring], False),\n 'CustomData': (basestring, False),\n 'DestinationArn': (basestring, False),\n 'Events': ([basestring], False),\n 'Name': (basestring, False),\n }\n\n def validate(self):\n valid = [\n 'all',\n 'createReference',\n 'deleteReference',\n 'updateReference',\n ]\n events = self.properties.get('Events')\n if events and not isinstance(events, AWSHelperFn):\n if 'all' in events and len(events) != 1:\n raise ValueError('Trigger events: all must be used alone')\n else:\n for e in events:\n if e not in valid and not isinstance(e, AWSHelperFn):\n raise ValueError('Trigger: invalid event %s' % e)\n\n\nclass Repository(AWSObject):\n resource_type = \"AWS::CodeCommit::Repository\"\n\n props = {\n 'RepositoryDescription': (basestring, False),\n 'RepositoryName': (basestring, True),\n 'Triggers': ([Trigger], False),\n }\n", "path": "troposphere/codecommit.py"}]}
| 884 | 209 |
gh_patches_debug_6298
|
rasdani/github-patches
|
git_diff
|
vispy__vispy-1389
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Camera API documentation missing
I could not find a list of available cameras in the docs:
http://vispy.org/scene.html?highlight=cameras#module-vispy.scene.cameras
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vispy/scene/cameras/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) Vispy Development Team. All Rights Reserved.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4 """
5 Cameras are responsible for determining which part of a scene is displayed
6 in a viewbox and for handling user input to change the view.
7
8 Several Camera subclasses are available to customize the projection of the
9 scene such as 3D perspective and orthographic projections, 2D
10 scale/translation, and other specialty cameras. A variety of user interaction
11 styles are available for each camera including arcball, turntable,
12 first-person, and pan/zoom interactions.
13
14 Internally, Cameras work by setting the transform of a SubScene object such
15 that a certain part of the scene is mapped to the bounding rectangle of the
16 ViewBox.
17 """
18 from ._base import make_camera # noqa
19 from .base_camera import BaseCamera # noqa
20 from .panzoom import PanZoomCamera # noqa
21 from .arcball import ArcballCamera # noqa
22 from .turntable import TurntableCamera # noqa
23 from .fly import FlyCamera # noqa
24 from .magnify import MagnifyCamera, Magnify1DCamera # noqa
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/vispy/scene/cameras/__init__.py b/vispy/scene/cameras/__init__.py
--- a/vispy/scene/cameras/__init__.py
+++ b/vispy/scene/cameras/__init__.py
@@ -15,6 +15,9 @@
that a certain part of the scene is mapped to the bounding rectangle of the
ViewBox.
"""
+__all__ = ['ArcballCamera', 'BaseCamera', 'FlyCamera', 'MagnifyCamera',
+ 'Magnify1DCamera', 'PanZoomCamera', 'TurntableCamera']
+
from ._base import make_camera # noqa
from .base_camera import BaseCamera # noqa
from .panzoom import PanZoomCamera # noqa
|
{"golden_diff": "diff --git a/vispy/scene/cameras/__init__.py b/vispy/scene/cameras/__init__.py\n--- a/vispy/scene/cameras/__init__.py\n+++ b/vispy/scene/cameras/__init__.py\n@@ -15,6 +15,9 @@\n that a certain part of the scene is mapped to the bounding rectangle of the \n ViewBox.\n \"\"\"\n+__all__ = ['ArcballCamera', 'BaseCamera', 'FlyCamera', 'MagnifyCamera',\n+ 'Magnify1DCamera', 'PanZoomCamera', 'TurntableCamera']\n+\n from ._base import make_camera # noqa\n from .base_camera import BaseCamera # noqa\n from .panzoom import PanZoomCamera # noqa\n", "issue": "Camera API documentation missing\nI could not find a list of available cameras in the docs:\n\nhttp://vispy.org/scene.html?highlight=cameras#module-vispy.scene.cameras\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\"\"\"\nCameras are responsible for determining which part of a scene is displayed\nin a viewbox and for handling user input to change the view.\n\nSeveral Camera subclasses are available to customize the projection of the \nscene such as 3D perspective and orthographic projections, 2D \nscale/translation, and other specialty cameras. A variety of user interaction\nstyles are available for each camera including arcball, turntable, \nfirst-person, and pan/zoom interactions.\n\nInternally, Cameras work by setting the transform of a SubScene object such \nthat a certain part of the scene is mapped to the bounding rectangle of the \nViewBox.\n\"\"\"\nfrom ._base import make_camera # noqa\nfrom .base_camera import BaseCamera # noqa\nfrom .panzoom import PanZoomCamera # noqa\nfrom .arcball import ArcballCamera # noqa\nfrom .turntable import TurntableCamera # noqa\nfrom .fly import FlyCamera # noqa\nfrom .magnify import MagnifyCamera, Magnify1DCamera # noqa\n", "path": "vispy/scene/cameras/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2015, Vispy Development Team.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\"\"\"\nCameras are responsible for determining which part of a scene is displayed\nin a viewbox and for handling user input to change the view.\n\nSeveral Camera subclasses are available to customize the projection of the \nscene such as 3D perspective and orthographic projections, 2D \nscale/translation, and other specialty cameras. A variety of user interaction\nstyles are available for each camera including arcball, turntable, \nfirst-person, and pan/zoom interactions.\n\nInternally, Cameras work by setting the transform of a SubScene object such \nthat a certain part of the scene is mapped to the bounding rectangle of the \nViewBox.\n\"\"\"\n__all__ = ['ArcballCamera', 'BaseCamera', 'FlyCamera', 'MagnifyCamera',\n 'Magnify1DCamera', 'PanZoomCamera', 'TurntableCamera']\n\nfrom ._base import make_camera # noqa\nfrom .base_camera import BaseCamera # noqa\nfrom .panzoom import PanZoomCamera # noqa\nfrom .arcball import ArcballCamera # noqa\nfrom .turntable import TurntableCamera # noqa\nfrom .fly import FlyCamera # noqa\nfrom .magnify import MagnifyCamera, Magnify1DCamera # noqa\n", "path": "vispy/scene/cameras/__init__.py"}]}
| 602 | 168 |
gh_patches_debug_22907
|
rasdani/github-patches
|
git_diff
|
fedora-infra__bodhi-974
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The non-NULL comment text migration needs to be inserted before 4df1fcd59050
This migration was developed against the develop branch, but I had forgotten that there was already a migration in the develop branch that isn't on the 2.2 branch. We'll need to change both migrations so that the new one is inserted before the prior one:
```
[vagrant@localhost vagrant]$ alembic upgrade head
WARNING:fedmsg.crypto.x509:Crypto disabled ImportError('No module named M2Crypto',)
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
/usr/lib/python2.7/site-packages/alembic/util/messaging.py:69: UserWarning: Revision 4df1fcd59050 referenced from 4df1fcd59050 -> 37f38ddc4c8d (head), Do not allow NULL values in the text column of the comments table. is not present
warnings.warn(msg)
Traceback (most recent call last):
File "/usr/bin/alembic", line 9, in <module>
load_entry_point('alembic==0.8.3', 'console_scripts', 'alembic')()
File "/usr/lib/python2.7/site-packages/alembic/config.py", line 450, in main
CommandLine(prog=prog).main(argv=argv)
File "/usr/lib/python2.7/site-packages/alembic/config.py", line 444, in main
self.run_cmd(cfg, options)
File "/usr/lib/python2.7/site-packages/alembic/config.py", line 427, in run_cmd
**dict((k, getattr(options, k)) for k in kwarg)
File "/usr/lib/python2.7/site-packages/alembic/command.py", line 174, in upgrade
script.run_env()
File "/usr/lib/python2.7/site-packages/alembic/script/base.py", line 397, in run_env
util.load_python_file(self.dir, 'env.py')
File "/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 81, in load_python_file
module = load_module_py(module_id, path)
File "/usr/lib/python2.7/site-packages/alembic/util/compat.py", line 79, in load_module_py
mod = imp.load_source(module_id, path, fp)
File "alembic/env.py", line 72, in <module>
run_migrations_online()
File "alembic/env.py", line 65, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/usr/lib/python2.7/site-packages/alembic/runtime/environment.py", line 797, in run_migrations
self.get_context().run_migrations(**kw)
File "/usr/lib/python2.7/site-packages/alembic/runtime/migration.py", line 303, in run_migrations
for step in self._migrations_fn(heads, self):
File "/usr/lib/python2.7/site-packages/alembic/command.py", line 163, in upgrade
return script._upgrade_revs(revision, rev)
File "/usr/lib/python2.7/site-packages/alembic/script/base.py", line 310, in _upgrade_revs
revs = list(revs)
File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 610, in _iterate_revisions
requested_lowers = self.get_revisions(lower)
File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 299, in get_revisions
return sum([self.get_revisions(id_elem) for id_elem in id_], ())
File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 301, in get_revisions
resolved_id, branch_label = self._resolve_revision_number(id_)
File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 423, in _resolve_revision_number
self._revision_map
File "/usr/lib/python2.7/site-packages/alembic/util/langhelpers.py", line 241, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 151, in _revision_map
down_revision = map_[downrev]
KeyError: '4df1fcd59050'
```
It needs to be possible to vagrant up on the 2.2 branch and then advance to the develop branch.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `alembic/versions/4df1fcd59050_.py`
Content:
```
1 """Add the new pending_signing_tag column to the releases table.
2
3 Revision ID: 4df1fcd59050
4 Revises: 3c72757fa59e
5 Create Date: 2016-09-16 18:51:19.514301
6
7 """
8 from alembic import op
9 import sqlalchemy as sa
10
11
12 # revision identifiers, used by Alembic.
13 revision = '4df1fcd59050'
14 down_revision = '3c72757fa59e'
15
16
17 def upgrade():
18 op.add_column('releases',
19 sa.Column('pending_signing_tag', sa.UnicodeText(), server_default='',
20 nullable=False))
21 # We only used the server_default to stop the new column from being NULL. Let's now drop the
22 # server default.
23 op.alter_column('releases', 'pending_signing_tag', server_default=None)
24
25
26 def downgrade():
27 op.drop_column('releases', 'pending_signing_tag')
28
```
Path: `alembic/versions/37f38ddc4c8d_.py`
Content:
```
1 """Do not allow NULL values in the text column of the comments table.
2
3 Revision ID: 37f38ddc4c8d
4 Revises: 4df1fcd59050
5 Create Date: 2016-09-21 19:51:04.946521
6
7 """
8
9 from alembic import op
10 import sqlalchemy as sa
11
12
13 # revision identifiers, used by Alembic.
14 revision = '37f38ddc4c8d'
15 down_revision = '4df1fcd59050'
16
17
18 def upgrade():
19 """
20 We will need to set all existing NULL comments to "", then change the column to disallow NULL comments.
21 """
22 # Build a fake mini version of the comments table so we can form an UPDATE statement.
23 comments = sa.sql.table('comments', sa.sql.column('text', sa.UnicodeText))
24 # Set existing NULL comments to "".
25 op.execute(comments.update().where(comments.c.text==None).values({'text': op.inline_literal('')}))
26
27 # Disallow new NULL comments.
28 op.alter_column('comments', 'text', existing_type=sa.TEXT(), nullable=False)
29
30
31 def downgrade():
32 op.alter_column('comments', 'text', existing_type=sa.TEXT(), nullable=True)
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/alembic/versions/37f38ddc4c8d_.py b/alembic/versions/37f38ddc4c8d_.py
--- a/alembic/versions/37f38ddc4c8d_.py
+++ b/alembic/versions/37f38ddc4c8d_.py
@@ -1,7 +1,7 @@
"""Do not allow NULL values in the text column of the comments table.
Revision ID: 37f38ddc4c8d
-Revises: 4df1fcd59050
+Revises: 3c72757fa59e
Create Date: 2016-09-21 19:51:04.946521
"""
@@ -12,7 +12,7 @@
# revision identifiers, used by Alembic.
revision = '37f38ddc4c8d'
-down_revision = '4df1fcd59050'
+down_revision = '3c72757fa59e'
def upgrade():
diff --git a/alembic/versions/4df1fcd59050_.py b/alembic/versions/4df1fcd59050_.py
--- a/alembic/versions/4df1fcd59050_.py
+++ b/alembic/versions/4df1fcd59050_.py
@@ -1,7 +1,7 @@
"""Add the new pending_signing_tag column to the releases table.
Revision ID: 4df1fcd59050
-Revises: 3c72757fa59e
+Revises: 37f38ddc4c8d
Create Date: 2016-09-16 18:51:19.514301
"""
@@ -11,7 +11,7 @@
# revision identifiers, used by Alembic.
revision = '4df1fcd59050'
-down_revision = '3c72757fa59e'
+down_revision = '37f38ddc4c8d'
def upgrade():
|
{"golden_diff": "diff --git a/alembic/versions/37f38ddc4c8d_.py b/alembic/versions/37f38ddc4c8d_.py\n--- a/alembic/versions/37f38ddc4c8d_.py\n+++ b/alembic/versions/37f38ddc4c8d_.py\n@@ -1,7 +1,7 @@\n \"\"\"Do not allow NULL values in the text column of the comments table.\n \n Revision ID: 37f38ddc4c8d\n-Revises: 4df1fcd59050\n+Revises: 3c72757fa59e\n Create Date: 2016-09-21 19:51:04.946521\n \n \"\"\"\n@@ -12,7 +12,7 @@\n \n # revision identifiers, used by Alembic.\n revision = '37f38ddc4c8d'\n-down_revision = '4df1fcd59050'\n+down_revision = '3c72757fa59e'\n \n \n def upgrade():\ndiff --git a/alembic/versions/4df1fcd59050_.py b/alembic/versions/4df1fcd59050_.py\n--- a/alembic/versions/4df1fcd59050_.py\n+++ b/alembic/versions/4df1fcd59050_.py\n@@ -1,7 +1,7 @@\n \"\"\"Add the new pending_signing_tag column to the releases table.\n \n Revision ID: 4df1fcd59050\n-Revises: 3c72757fa59e\n+Revises: 37f38ddc4c8d\n Create Date: 2016-09-16 18:51:19.514301\n \n \"\"\"\n@@ -11,7 +11,7 @@\n \n # revision identifiers, used by Alembic.\n revision = '4df1fcd59050'\n-down_revision = '3c72757fa59e'\n+down_revision = '37f38ddc4c8d'\n \n \n def upgrade():\n", "issue": "The non-NULL comment text migration needs to be inserted before 4df1fcd59050\nThis migration was developed against the develop branch, but I had forgotten that there was already a migration in the develop branch that isn't on the 2.2 branch. We'll need to change both migrations so that the new one is inserted before the prior one:\n\n```\n[vagrant@localhost vagrant]$ alembic upgrade head\nWARNING:fedmsg.crypto.x509:Crypto disabled ImportError('No module named M2Crypto',)\nINFO [alembic.runtime.migration] Context impl PostgresqlImpl.\nINFO [alembic.runtime.migration] Will assume transactional DDL.\n/usr/lib/python2.7/site-packages/alembic/util/messaging.py:69: UserWarning: Revision 4df1fcd59050 referenced from 4df1fcd59050 -> 37f38ddc4c8d (head), Do not allow NULL values in the text column of the comments table. 
is not present\n warnings.warn(msg)\nTraceback (most recent call last):\n File \"/usr/bin/alembic\", line 9, in <module>\n load_entry_point('alembic==0.8.3', 'console_scripts', 'alembic')()\n File \"/usr/lib/python2.7/site-packages/alembic/config.py\", line 450, in main\n CommandLine(prog=prog).main(argv=argv)\n File \"/usr/lib/python2.7/site-packages/alembic/config.py\", line 444, in main\n self.run_cmd(cfg, options)\n File \"/usr/lib/python2.7/site-packages/alembic/config.py\", line 427, in run_cmd\n **dict((k, getattr(options, k)) for k in kwarg)\n File \"/usr/lib/python2.7/site-packages/alembic/command.py\", line 174, in upgrade\n script.run_env()\n File \"/usr/lib/python2.7/site-packages/alembic/script/base.py\", line 397, in run_env\n util.load_python_file(self.dir, 'env.py')\n File \"/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py\", line 81, in load_python_file\n module = load_module_py(module_id, path)\n File \"/usr/lib/python2.7/site-packages/alembic/util/compat.py\", line 79, in load_module_py\n mod = imp.load_source(module_id, path, fp)\n File \"alembic/env.py\", line 72, in <module>\n run_migrations_online()\n File \"alembic/env.py\", line 65, in run_migrations_online\n context.run_migrations()\n File \"<string>\", line 8, in run_migrations\n File \"/usr/lib/python2.7/site-packages/alembic/runtime/environment.py\", line 797, in run_migrations\n self.get_context().run_migrations(**kw)\n File \"/usr/lib/python2.7/site-packages/alembic/runtime/migration.py\", line 303, in run_migrations\n for step in self._migrations_fn(heads, self):\n File \"/usr/lib/python2.7/site-packages/alembic/command.py\", line 163, in upgrade\n return script._upgrade_revs(revision, rev)\n File \"/usr/lib/python2.7/site-packages/alembic/script/base.py\", line 310, in _upgrade_revs\n revs = list(revs)\n File \"/usr/lib/python2.7/site-packages/alembic/script/revision.py\", line 610, in _iterate_revisions\n requested_lowers = self.get_revisions(lower)\n File \"/usr/lib/python2.7/site-packages/alembic/script/revision.py\", line 299, in get_revisions\n return sum([self.get_revisions(id_elem) for id_elem in id_], ())\n File \"/usr/lib/python2.7/site-packages/alembic/script/revision.py\", line 301, in get_revisions\n resolved_id, branch_label = self._resolve_revision_number(id_)\n File \"/usr/lib/python2.7/site-packages/alembic/script/revision.py\", line 423, in _resolve_revision_number\n self._revision_map\n File \"/usr/lib/python2.7/site-packages/alembic/util/langhelpers.py\", line 241, in __get__\n obj.__dict__[self.__name__] = result = self.fget(obj)\n File \"/usr/lib/python2.7/site-packages/alembic/script/revision.py\", line 151, in _revision_map\n down_revision = map_[downrev]\nKeyError: '4df1fcd59050'\n```\n\nIt needs to be possible to vagrant up on the 2.2 branch and then advance to the develop branch.\n\n", "before_files": [{"content": "\"\"\"Add the new pending_signing_tag column to the releases table.\n\nRevision ID: 4df1fcd59050\nRevises: 3c72757fa59e\nCreate Date: 2016-09-16 18:51:19.514301\n\n\"\"\"\nfrom alembic import op\nimport sqlalchemy as sa\n\n\n# revision identifiers, used by Alembic.\nrevision = '4df1fcd59050'\ndown_revision = '3c72757fa59e'\n\n\ndef upgrade():\n op.add_column('releases',\n sa.Column('pending_signing_tag', sa.UnicodeText(), server_default='',\n nullable=False))\n # We only used the server_default to stop the new column from being NULL. 
Let's now drop the\n # server default.\n op.alter_column('releases', 'pending_signing_tag', server_default=None)\n\n\ndef downgrade():\n op.drop_column('releases', 'pending_signing_tag')\n", "path": "alembic/versions/4df1fcd59050_.py"}, {"content": "\"\"\"Do not allow NULL values in the text column of the comments table.\n\nRevision ID: 37f38ddc4c8d\nRevises: 4df1fcd59050\nCreate Date: 2016-09-21 19:51:04.946521\n\n\"\"\"\n\nfrom alembic import op\nimport sqlalchemy as sa\n\n\n# revision identifiers, used by Alembic.\nrevision = '37f38ddc4c8d'\ndown_revision = '4df1fcd59050'\n\n\ndef upgrade():\n \"\"\"\n We will need to set all existing NULL comments to \"\", then change the column to disallow NULL comments.\n \"\"\"\n # Build a fake mini version of the comments table so we can form an UPDATE statement.\n comments = sa.sql.table('comments', sa.sql.column('text', sa.UnicodeText))\n # Set existing NULL comments to \"\".\n op.execute(comments.update().where(comments.c.text==None).values({'text': op.inline_literal('')}))\n\n # Disallow new NULL comments.\n op.alter_column('comments', 'text', existing_type=sa.TEXT(), nullable=False)\n\n\ndef downgrade():\n op.alter_column('comments', 'text', existing_type=sa.TEXT(), nullable=True)\n", "path": "alembic/versions/37f38ddc4c8d_.py"}], "after_files": [{"content": "\"\"\"Add the new pending_signing_tag column to the releases table.\n\nRevision ID: 4df1fcd59050\nRevises: 37f38ddc4c8d\nCreate Date: 2016-09-16 18:51:19.514301\n\n\"\"\"\nfrom alembic import op\nimport sqlalchemy as sa\n\n\n# revision identifiers, used by Alembic.\nrevision = '4df1fcd59050'\ndown_revision = '37f38ddc4c8d'\n\n\ndef upgrade():\n op.add_column('releases',\n sa.Column('pending_signing_tag', sa.UnicodeText(), server_default='',\n nullable=False))\n # We only used the server_default to stop the new column from being NULL. Let's now drop the\n # server default.\n op.alter_column('releases', 'pending_signing_tag', server_default=None)\n\n\ndef downgrade():\n op.drop_column('releases', 'pending_signing_tag')\n", "path": "alembic/versions/4df1fcd59050_.py"}, {"content": "\"\"\"Do not allow NULL values in the text column of the comments table.\n\nRevision ID: 37f38ddc4c8d\nRevises: 3c72757fa59e\nCreate Date: 2016-09-21 19:51:04.946521\n\n\"\"\"\n\nfrom alembic import op\nimport sqlalchemy as sa\n\n\n# revision identifiers, used by Alembic.\nrevision = '37f38ddc4c8d'\ndown_revision = '3c72757fa59e'\n\n\ndef upgrade():\n \"\"\"\n We will need to set all existing NULL comments to \"\", then change the column to disallow NULL comments.\n \"\"\"\n # Build a fake mini version of the comments table so we can form an UPDATE statement.\n comments = sa.sql.table('comments', sa.sql.column('text', sa.UnicodeText))\n # Set existing NULL comments to \"\".\n op.execute(comments.update().where(comments.c.text==None).values({'text': op.inline_literal('')}))\n\n # Disallow new NULL comments.\n op.alter_column('comments', 'text', existing_type=sa.TEXT(), nullable=False)\n\n\ndef downgrade():\n op.alter_column('comments', 'text', existing_type=sa.TEXT(), nullable=True)\n", "path": "alembic/versions/37f38ddc4c8d_.py"}]}
| 2,024 | 543 |
gh_patches_debug_34075
|
rasdani/github-patches
|
git_diff
|
scikit-hep__awkward-2102
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
behaviors with decorated mixin methods cannot be pickled
### Version of Awkward Array
main
### Description and code to reproduce
The mixin machinery introduces a closure, which `pickle` can't serialise.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/awkward/behaviors/mixins.py`
Content:
```
1 # BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE
2
3
4 import sys
5
6 import awkward as ak
7
8
9 def mixin_class(registry, name=None):
10 """
11 Args:
12 registry (dict): The destination behavior mapping registry. Typically,
13 this would be the global registry #ak.behavior, but one may wish
14 to register methods in an alternative way.
15 name (str): The name to assign to the behaviour class.
16
17 This decorator can be used to register a behavior mixin class.
18
19 Any inherited behaviors will automatically be made available to the decorated
20 class.
21
22 See the "Mixin decorators" section of #ak.behavior for further details.
23 """
24
25 def register(cls):
26 cls_name = cls.__name__
27 if name is None:
28 behavior_name = cls_name
29 else:
30 behavior_name = name
31
32 record = type(
33 cls_name + "Record",
34 (cls, ak.highlevel.Record),
35 {"__module__": cls.__module__},
36 )
37 setattr(sys.modules[cls.__module__], cls_name + "Record", record)
38 registry[behavior_name] = record
39 array = type(
40 cls_name + "Array",
41 (cls, ak.highlevel.Array),
42 {"__module__": cls.__module__},
43 )
44 setattr(sys.modules[cls.__module__], cls_name + "Array", array)
45 registry["*", behavior_name] = array
46 for basecls in cls.mro():
47 for method in basecls.__dict__.values():
48 if hasattr(method, "_awkward_mixin"):
49 ufunc, rhs, transpose = method._awkward_mixin
50 if rhs is None:
51 registry.setdefault((ufunc, behavior_name), method)
52 continue
53 for rhs_name in list(rhs) + [behavior_name]:
54 registry.setdefault((ufunc, behavior_name, rhs_name), method)
55 if transpose is not None and rhs_name != behavior_name:
56 registry.setdefault(
57 (ufunc, rhs_name, behavior_name), transpose
58 )
59 if basecls.__name__ in rhs:
60 rhs.add(behavior_name)
61 return cls
62
63 return register
64
65
66 def mixin_class_method(ufunc, rhs=None, *, transpose=True):
67 """
68 Args:
69 ufunc (numpy.ufunc): A universal function (or NEP18 callable) that is
70 hooked in Awkward Array, i.e. it can be the first argument of a behavior.
71 rhs (Set[type] or None): Set of right-hand side argument types, optional
72 if wrapping a unary function. The left-hand side is expected to
73 always be `self` of the parent class.
74 transpose (bool): If true, automatically create a transpose signature
75 (only makes sense for binary ufuncs).
76
77 This decorator can be used to register a mixin class method.
78
79 Using this decorator ensures that derived classes that are declared with the
80 #ak.mixin_class decorator will also have the behaviors that this class has.
81 """
82
83 def register(method):
84 if not isinstance(rhs, (set, type(None))):
85 raise ak._errors.wrap_error(
86 ValueError("expected a set of right-hand-side argument types")
87 )
88 if transpose and rhs is not None:
89
90 def transposed(left, right):
91 return method(right, left)
92
93 # make a copy of rhs, we will edit it later
94 method._awkward_mixin = (ufunc, set(rhs), transposed)
95 else:
96 method._awkward_mixin = (ufunc, rhs, None)
97 return method
98
99 return register
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/awkward/behaviors/mixins.py b/src/awkward/behaviors/mixins.py
--- a/src/awkward/behaviors/mixins.py
+++ b/src/awkward/behaviors/mixins.py
@@ -1,6 +1,7 @@
# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE
+import functools
import sys
import awkward as ak
@@ -63,6 +64,10 @@
return register
+def _call_transposed(func, left, right):
+ return func(right, left)
+
+
def mixin_class_method(ufunc, rhs=None, *, transpose=True):
"""
Args:
@@ -70,7 +75,8 @@
hooked in Awkward Array, i.e. it can be the first argument of a behavior.
rhs (Set[type] or None): Set of right-hand side argument types, optional
if wrapping a unary function. The left-hand side is expected to
- always be `self` of the parent class.
+ always be `self` of the parent class. The current class is implicitly
+ included in this set.
transpose (bool): If true, automatically create a transpose signature
(only makes sense for binary ufuncs).
@@ -86,12 +92,13 @@
ValueError("expected a set of right-hand-side argument types")
)
if transpose and rhs is not None:
-
- def transposed(left, right):
- return method(right, left)
-
# make a copy of rhs, we will edit it later
- method._awkward_mixin = (ufunc, set(rhs), transposed)
+ # use partial & a module-scoped function so that this is pickleable
+ method._awkward_mixin = (
+ ufunc,
+ set(rhs),
+ functools.partial(_call_transposed, method),
+ )
else:
method._awkward_mixin = (ufunc, rhs, None)
return method
|
{"golden_diff": "diff --git a/src/awkward/behaviors/mixins.py b/src/awkward/behaviors/mixins.py\n--- a/src/awkward/behaviors/mixins.py\n+++ b/src/awkward/behaviors/mixins.py\n@@ -1,6 +1,7 @@\n # BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n \n \n+import functools\n import sys\n \n import awkward as ak\n@@ -63,6 +64,10 @@\n return register\n \n \n+def _call_transposed(func, left, right):\n+ return func(right, left)\n+\n+\n def mixin_class_method(ufunc, rhs=None, *, transpose=True):\n \"\"\"\n Args:\n@@ -70,7 +75,8 @@\n hooked in Awkward Array, i.e. it can be the first argument of a behavior.\n rhs (Set[type] or None): Set of right-hand side argument types, optional\n if wrapping a unary function. The left-hand side is expected to\n- always be `self` of the parent class.\n+ always be `self` of the parent class. The current class is implicitly\n+ included in this set.\n transpose (bool): If true, automatically create a transpose signature\n (only makes sense for binary ufuncs).\n \n@@ -86,12 +92,13 @@\n ValueError(\"expected a set of right-hand-side argument types\")\n )\n if transpose and rhs is not None:\n-\n- def transposed(left, right):\n- return method(right, left)\n-\n # make a copy of rhs, we will edit it later\n- method._awkward_mixin = (ufunc, set(rhs), transposed)\n+ # use partial & a module-scoped function so that this is pickleable\n+ method._awkward_mixin = (\n+ ufunc,\n+ set(rhs),\n+ functools.partial(_call_transposed, method),\n+ )\n else:\n method._awkward_mixin = (ufunc, rhs, None)\n return method\n", "issue": "behaviors with decorated mixin methods cannot be pickled\n### Version of Awkward Array\n\nmain\n\n### Description and code to reproduce\n\nThe mixin machinery introduces a closure, which `pickle` can't serialise.\n", "before_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n\n\nimport sys\n\nimport awkward as ak\n\n\ndef mixin_class(registry, name=None):\n \"\"\"\n Args:\n registry (dict): The destination behavior mapping registry. 
Typically,\n this would be the global registry #ak.behavior, but one may wish\n to register methods in an alternative way.\n name (str): The name to assign to the behaviour class.\n\n This decorator can be used to register a behavior mixin class.\n\n Any inherited behaviors will automatically be made available to the decorated\n class.\n\n See the \"Mixin decorators\" section of #ak.behavior for further details.\n \"\"\"\n\n def register(cls):\n cls_name = cls.__name__\n if name is None:\n behavior_name = cls_name\n else:\n behavior_name = name\n\n record = type(\n cls_name + \"Record\",\n (cls, ak.highlevel.Record),\n {\"__module__\": cls.__module__},\n )\n setattr(sys.modules[cls.__module__], cls_name + \"Record\", record)\n registry[behavior_name] = record\n array = type(\n cls_name + \"Array\",\n (cls, ak.highlevel.Array),\n {\"__module__\": cls.__module__},\n )\n setattr(sys.modules[cls.__module__], cls_name + \"Array\", array)\n registry[\"*\", behavior_name] = array\n for basecls in cls.mro():\n for method in basecls.__dict__.values():\n if hasattr(method, \"_awkward_mixin\"):\n ufunc, rhs, transpose = method._awkward_mixin\n if rhs is None:\n registry.setdefault((ufunc, behavior_name), method)\n continue\n for rhs_name in list(rhs) + [behavior_name]:\n registry.setdefault((ufunc, behavior_name, rhs_name), method)\n if transpose is not None and rhs_name != behavior_name:\n registry.setdefault(\n (ufunc, rhs_name, behavior_name), transpose\n )\n if basecls.__name__ in rhs:\n rhs.add(behavior_name)\n return cls\n\n return register\n\n\ndef mixin_class_method(ufunc, rhs=None, *, transpose=True):\n \"\"\"\n Args:\n ufunc (numpy.ufunc): A universal function (or NEP18 callable) that is\n hooked in Awkward Array, i.e. it can be the first argument of a behavior.\n rhs (Set[type] or None): Set of right-hand side argument types, optional\n if wrapping a unary function. The left-hand side is expected to\n always be `self` of the parent class.\n transpose (bool): If true, automatically create a transpose signature\n (only makes sense for binary ufuncs).\n\n This decorator can be used to register a mixin class method.\n\n Using this decorator ensures that derived classes that are declared with the\n #ak.mixin_class decorator will also have the behaviors that this class has.\n \"\"\"\n\n def register(method):\n if not isinstance(rhs, (set, type(None))):\n raise ak._errors.wrap_error(\n ValueError(\"expected a set of right-hand-side argument types\")\n )\n if transpose and rhs is not None:\n\n def transposed(left, right):\n return method(right, left)\n\n # make a copy of rhs, we will edit it later\n method._awkward_mixin = (ufunc, set(rhs), transposed)\n else:\n method._awkward_mixin = (ufunc, rhs, None)\n return method\n\n return register\n", "path": "src/awkward/behaviors/mixins.py"}], "after_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n\n\nimport functools\nimport sys\n\nimport awkward as ak\n\n\ndef mixin_class(registry, name=None):\n \"\"\"\n Args:\n registry (dict): The destination behavior mapping registry. 
Typically,\n this would be the global registry #ak.behavior, but one may wish\n to register methods in an alternative way.\n name (str): The name to assign to the behaviour class.\n\n This decorator can be used to register a behavior mixin class.\n\n Any inherited behaviors will automatically be made available to the decorated\n class.\n\n See the \"Mixin decorators\" section of #ak.behavior for further details.\n \"\"\"\n\n def register(cls):\n cls_name = cls.__name__\n if name is None:\n behavior_name = cls_name\n else:\n behavior_name = name\n\n record = type(\n cls_name + \"Record\",\n (cls, ak.highlevel.Record),\n {\"__module__\": cls.__module__},\n )\n setattr(sys.modules[cls.__module__], cls_name + \"Record\", record)\n registry[behavior_name] = record\n array = type(\n cls_name + \"Array\",\n (cls, ak.highlevel.Array),\n {\"__module__\": cls.__module__},\n )\n setattr(sys.modules[cls.__module__], cls_name + \"Array\", array)\n registry[\"*\", behavior_name] = array\n for basecls in cls.mro():\n for method in basecls.__dict__.values():\n if hasattr(method, \"_awkward_mixin\"):\n ufunc, rhs, transpose = method._awkward_mixin\n if rhs is None:\n registry.setdefault((ufunc, behavior_name), method)\n continue\n for rhs_name in list(rhs) + [behavior_name]:\n registry.setdefault((ufunc, behavior_name, rhs_name), method)\n if transpose is not None and rhs_name != behavior_name:\n registry.setdefault(\n (ufunc, rhs_name, behavior_name), transpose\n )\n if basecls.__name__ in rhs:\n rhs.add(behavior_name)\n return cls\n\n return register\n\n\ndef _call_transposed(func, left, right):\n return func(right, left)\n\n\ndef mixin_class_method(ufunc, rhs=None, *, transpose=True):\n \"\"\"\n Args:\n ufunc (numpy.ufunc): A universal function (or NEP18 callable) that is\n hooked in Awkward Array, i.e. it can be the first argument of a behavior.\n rhs (Set[type] or None): Set of right-hand side argument types, optional\n if wrapping a unary function. The left-hand side is expected to\n always be `self` of the parent class. The current class is implicitly\n included in this set.\n transpose (bool): If true, automatically create a transpose signature\n (only makes sense for binary ufuncs).\n\n This decorator can be used to register a mixin class method.\n\n Using this decorator ensures that derived classes that are declared with the\n #ak.mixin_class decorator will also have the behaviors that this class has.\n \"\"\"\n\n def register(method):\n if not isinstance(rhs, (set, type(None))):\n raise ak._errors.wrap_error(\n ValueError(\"expected a set of right-hand-side argument types\")\n )\n if transpose and rhs is not None:\n # make a copy of rhs, we will edit it later\n # use partial & a module-scoped function so that this is pickleable\n method._awkward_mixin = (\n ufunc,\n set(rhs),\n functools.partial(_call_transposed, method),\n )\n else:\n method._awkward_mixin = (ufunc, rhs, None)\n return method\n\n return register\n", "path": "src/awkward/behaviors/mixins.py"}]}
| 1,280 | 460 |
gh_patches_debug_18294
|
rasdani/github-patches
|
git_diff
|
dask__distributed-6839
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix profile on msgpack 1.0.6/Windows
- Closes #8212
- Iterates on #3455
- Twin of #8214
Make sure that profile doesn't reach the point where sizeof() starts raising (at depth 140).
This is problematic with msgpack 1.0.6 on Windows because its max recursion depth fell from 512 to 256.
The remaining exception in the unit tests is fixed by #8214
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `distributed/deploy/local.py`
Content:
```
1 from __future__ import annotations
2
3 import logging
4 import math
5 import warnings
6
7 import toolz
8
9 from dask.system import CPU_COUNT
10 from dask.widgets import get_template
11
12 from distributed.deploy.spec import SpecCluster
13 from distributed.deploy.utils import nprocesses_nthreads
14 from distributed.nanny import Nanny
15 from distributed.scheduler import Scheduler
16 from distributed.security import Security
17 from distributed.worker import Worker
18 from distributed.worker_memory import parse_memory_limit
19
20 logger = logging.getLogger(__name__)
21
22
23 class LocalCluster(SpecCluster):
24 """Create local Scheduler and Workers
25
26 This creates a "cluster" of a scheduler and workers running on the local
27 machine.
28
29 Parameters
30 ----------
31 n_workers: int
32 Number of workers to start
33 processes: bool
34 Whether to use processes (True) or threads (False). Defaults to True, unless
35 worker_class=Worker, in which case it defaults to False.
36 threads_per_worker: int
37 Number of threads per each worker
38 scheduler_port: int
39 Port of the scheduler. 8786 by default, use 0 to choose a random port
40 silence_logs: logging level
41 Level of logs to print out to stdout. ``logging.WARN`` by default.
42 Use a falsey value like False or None for no change.
43 host: string
44 Host address on which the scheduler will listen, defaults to only localhost
45 ip: string
46 Deprecated. See ``host`` above.
47 dashboard_address: str
48 Address on which to listen for the Bokeh diagnostics server like
49 'localhost:8787' or '0.0.0.0:8787'. Defaults to ':8787'.
50 Set to ``None`` to disable the dashboard.
51 Use ':0' for a random port.
52 worker_dashboard_address: str
53 Address on which to listen for the Bokeh worker diagnostics server like
54 'localhost:8787' or '0.0.0.0:8787'. Defaults to None which disables the dashboard.
55 Use ':0' for a random port.
56 diagnostics_port: int
57 Deprecated. See dashboard_address.
58 asynchronous: bool (False by default)
59 Set to True if using this cluster within async/await functions or within
60 Tornado gen.coroutines. This should remain False for normal use.
61 blocked_handlers: List[str]
62 A list of strings specifying a blocklist of handlers to disallow on the
63 Scheduler, like ``['feed', 'run_function']``
64 service_kwargs: Dict[str, Dict]
65 Extra keywords to hand to the running services
66 security : Security or bool, optional
67 Configures communication security in this cluster. Can be a security
68 object, or True. If True, temporary self-signed credentials will
69 be created automatically.
70 protocol: str (optional)
71 Protocol to use like ``tcp://``, ``tls://``, ``inproc://``
72 This defaults to sensible choice given other keyword arguments like
73 ``processes`` and ``security``
74 interface: str (optional)
75 Network interface to use. Defaults to lo/localhost
76 worker_class: Worker
77 Worker class used to instantiate workers from. Defaults to Worker if
78 processes=False and Nanny if processes=True or omitted.
79 **worker_kwargs:
80 Extra worker arguments. Any additional keyword arguments will be passed
81 to the ``Worker`` class constructor.
82
83 Examples
84 --------
85 >>> cluster = LocalCluster() # Create a local cluster # doctest: +SKIP
86 >>> cluster # doctest: +SKIP
87 LocalCluster("127.0.0.1:8786", workers=8, threads=8)
88
89 >>> c = Client(cluster) # connect to local cluster # doctest: +SKIP
90
91 Scale the cluster to three workers
92
93 >>> cluster.scale(3) # doctest: +SKIP
94
95 Pass extra keyword arguments to Bokeh
96
97 >>> LocalCluster(service_kwargs={'dashboard': {'prefix': '/foo'}}) # doctest: +SKIP
98 """
99
100 def __init__(
101 self,
102 name=None,
103 n_workers=None,
104 threads_per_worker=None,
105 processes=None,
106 loop=None,
107 start=None,
108 host=None,
109 ip=None,
110 scheduler_port=0,
111 silence_logs=logging.WARN,
112 dashboard_address=":8787",
113 worker_dashboard_address=None,
114 diagnostics_port=None,
115 services=None,
116 worker_services=None,
117 service_kwargs=None,
118 asynchronous=False,
119 security=None,
120 protocol=None,
121 blocked_handlers=None,
122 interface=None,
123 worker_class=None,
124 scheduler_kwargs=None,
125 scheduler_sync_interval=1,
126 **worker_kwargs,
127 ):
128 if ip is not None:
129 # In the future we should warn users about this move
130 # warnings.warn("The ip keyword has been moved to host")
131 host = ip
132
133 if diagnostics_port is not None:
134 warnings.warn(
135 "diagnostics_port has been deprecated. "
136 "Please use `dashboard_address=` instead"
137 )
138 dashboard_address = diagnostics_port
139
140 if threads_per_worker == 0:
141 warnings.warn(
142 "Setting `threads_per_worker` to 0 has been deprecated. "
143 "Please set to None or to a specific int."
144 )
145 threads_per_worker = None
146
147 if "dashboard" in worker_kwargs:
148 warnings.warn(
149 "Setting `dashboard` is discouraged. "
150 "Please set `dashboard_address` to affect the scheduler (more common) "
151 "and `worker_dashboard_address` for the worker (less common)."
152 )
153
154 if processes is None:
155 processes = worker_class is None or issubclass(worker_class, Nanny)
156 if worker_class is None:
157 worker_class = Nanny if processes else Worker
158
159 self.status = None
160 self.processes = processes
161
162 if security is None:
163 # Falsey values load the default configuration
164 security = Security()
165 elif security is True:
166 # True indicates self-signed temporary credentials should be used
167 security = Security.temporary()
168 elif not isinstance(security, Security):
169 raise TypeError("security must be a Security object")
170
171 if protocol is None:
172 if host and "://" in host:
173 protocol = host.split("://")[0]
174 elif security and security.require_encryption:
175 protocol = "tls://"
176 elif not self.processes and not scheduler_port:
177 protocol = "inproc://"
178 else:
179 protocol = "tcp://"
180 if not protocol.endswith("://"):
181 protocol = protocol + "://"
182
183 if host is None and not protocol.startswith("inproc") and not interface:
184 host = "127.0.0.1"
185
186 services = services or {}
187 worker_services = worker_services or {}
188 if n_workers is None and threads_per_worker is None:
189 if processes:
190 n_workers, threads_per_worker = nprocesses_nthreads()
191 else:
192 n_workers = 1
193 threads_per_worker = CPU_COUNT
194 if n_workers is None and threads_per_worker is not None:
195 n_workers = max(1, CPU_COUNT // threads_per_worker) if processes else 1
196 if n_workers and threads_per_worker is None:
197 # Overcommit threads per worker, rather than undercommit
198 threads_per_worker = max(1, int(math.ceil(CPU_COUNT / n_workers)))
199 if n_workers and "memory_limit" not in worker_kwargs:
200 worker_kwargs["memory_limit"] = parse_memory_limit("auto", 1, n_workers)
201
202 worker_kwargs.update(
203 {
204 "host": host,
205 "nthreads": threads_per_worker,
206 "services": worker_services,
207 "dashboard_address": worker_dashboard_address,
208 "dashboard": worker_dashboard_address is not None,
209 "interface": interface,
210 "protocol": protocol,
211 "security": security,
212 "silence_logs": silence_logs,
213 }
214 )
215
216 scheduler = {
217 "cls": Scheduler,
218 "options": toolz.merge(
219 dict(
220 host=host,
221 services=services,
222 service_kwargs=service_kwargs,
223 security=security,
224 port=scheduler_port,
225 interface=interface,
226 protocol=protocol,
227 dashboard=dashboard_address is not None,
228 dashboard_address=dashboard_address,
229 blocked_handlers=blocked_handlers,
230 ),
231 scheduler_kwargs or {},
232 ),
233 }
234
235 worker = {"cls": worker_class, "options": worker_kwargs}
236 workers = {i: worker for i in range(n_workers)}
237
238 super().__init__(
239 name=name,
240 scheduler=scheduler,
241 workers=workers,
242 worker=worker,
243 loop=loop,
244 asynchronous=asynchronous,
245 silence_logs=silence_logs,
246 security=security,
247 scheduler_sync_interval=scheduler_sync_interval,
248 )
249
250 def start_worker(self, *args, **kwargs):
251 raise NotImplementedError(
252 "The `cluster.start_worker` function has been removed. "
253 "Please see the `cluster.scale` method instead."
254 )
255
256 def _repr_html_(self, cluster_status=None):
257 cluster_status = get_template("local_cluster.html.j2").render(
258 status=self.status.name,
259 processes=self.processes,
260 cluster_status=cluster_status,
261 )
262 return super()._repr_html_(cluster_status=cluster_status)
263
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/distributed/deploy/local.py b/distributed/deploy/local.py
--- a/distributed/deploy/local.py
+++ b/distributed/deploy/local.py
@@ -30,6 +30,19 @@
----------
n_workers: int
Number of workers to start
+ memory_limit: str, float, int, or None, default "auto"
+ Sets the memory limit *per worker*.
+
+ Notes regarding argument data type:
+
+ * If None or 0, no limit is applied.
+ * If "auto", the total system memory is split evenly between the workers.
+ * If a float, that fraction of the system memory is used *per worker*.
+ * If a string giving a number of bytes (like ``"1GiB"``), that amount is used *per worker*.
+ * If an int, that number of bytes is used *per worker*.
+
+ Note that the limit will only be enforced when ``processes=True``, and the limit is only
+ enforced on a best-effort basis — it's still possible for workers to exceed this limit.
processes: bool
Whether to use processes (True) or threads (False). Defaults to True, unless
worker_class=Worker, in which case it defaults to False.
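A minimal usage sketch of the `memory_limit` semantics documented in this patch, assuming a working `dask.distributed` install; the worker count and the 1 GiB figure are illustrative choices, not values taken from the diff:

```python
from dask.distributed import Client, LocalCluster

# Each worker is capped independently; enforcement is best-effort and
# only applies when processes=True (Nanny-managed workers).
cluster = LocalCluster(
    n_workers=2,
    threads_per_worker=1,
    processes=True,
    memory_limit="1GiB",  # also accepts "auto", a float fraction, an int, or None
)
client = Client(cluster)
```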
|
{"golden_diff": "diff --git a/distributed/deploy/local.py b/distributed/deploy/local.py\n--- a/distributed/deploy/local.py\n+++ b/distributed/deploy/local.py\n@@ -30,6 +30,19 @@\n ----------\n n_workers: int\n Number of workers to start\n+ memory_limit: str, float, int, or None, default \"auto\"\n+ Sets the memory limit *per worker*.\n+\n+ Notes regarding argument data type:\n+\n+ * If None or 0, no limit is applied.\n+ * If \"auto\", the total system memory is split evenly between the workers.\n+ * If a float, that fraction of the system memory is used *per worker*.\n+ * If a string giving a number of bytes (like ``\"1GiB\"``), that amount is used *per worker*.\n+ * If an int, that number of bytes is used *per worker*.\n+\n+ Note that the limit will only be enforced when ``processes=True``, and the limit is only\n+ enforced on a best-effort basis \u2014 it's still possible for workers to exceed this limit.\n processes: bool\n Whether to use processes (True) or threads (False). Defaults to True, unless\n worker_class=Worker, in which case it defaults to False.\n", "issue": "Fix profile on msgpack 1.0.6/Windows\n- Closes #8212\r\n- Iterates on #3455\r\n- Twin of #8214\r\n\r\nMake sure that profile doesn't reach the point where sizeof() starts raising (at depth 140).\r\nThis is problematic with msgpack 1.0.6 on Windows because its max recursion depth fell from 512 to 256.\r\n\r\nThe remaining exception in the unit tests is fixed by #8214\n", "before_files": [{"content": "from __future__ import annotations\n\nimport logging\nimport math\nimport warnings\n\nimport toolz\n\nfrom dask.system import CPU_COUNT\nfrom dask.widgets import get_template\n\nfrom distributed.deploy.spec import SpecCluster\nfrom distributed.deploy.utils import nprocesses_nthreads\nfrom distributed.nanny import Nanny\nfrom distributed.scheduler import Scheduler\nfrom distributed.security import Security\nfrom distributed.worker import Worker\nfrom distributed.worker_memory import parse_memory_limit\n\nlogger = logging.getLogger(__name__)\n\n\nclass LocalCluster(SpecCluster):\n \"\"\"Create local Scheduler and Workers\n\n This creates a \"cluster\" of a scheduler and workers running on the local\n machine.\n\n Parameters\n ----------\n n_workers: int\n Number of workers to start\n processes: bool\n Whether to use processes (True) or threads (False). Defaults to True, unless\n worker_class=Worker, in which case it defaults to False.\n threads_per_worker: int\n Number of threads per each worker\n scheduler_port: int\n Port of the scheduler. 8786 by default, use 0 to choose a random port\n silence_logs: logging level\n Level of logs to print out to stdout. ``logging.WARN`` by default.\n Use a falsey value like False or None for no change.\n host: string\n Host address on which the scheduler will listen, defaults to only localhost\n ip: string\n Deprecated. See ``host`` above.\n dashboard_address: str\n Address on which to listen for the Bokeh diagnostics server like\n 'localhost:8787' or '0.0.0.0:8787'. Defaults to ':8787'.\n Set to ``None`` to disable the dashboard.\n Use ':0' for a random port.\n worker_dashboard_address: str\n Address on which to listen for the Bokeh worker diagnostics server like\n 'localhost:8787' or '0.0.0.0:8787'. Defaults to None which disables the dashboard.\n Use ':0' for a random port.\n diagnostics_port: int\n Deprecated. See dashboard_address.\n asynchronous: bool (False by default)\n Set to True if using this cluster within async/await functions or within\n Tornado gen.coroutines. 
This should remain False for normal use.\n blocked_handlers: List[str]\n A list of strings specifying a blocklist of handlers to disallow on the\n Scheduler, like ``['feed', 'run_function']``\n service_kwargs: Dict[str, Dict]\n Extra keywords to hand to the running services\n security : Security or bool, optional\n Configures communication security in this cluster. Can be a security\n object, or True. If True, temporary self-signed credentials will\n be created automatically.\n protocol: str (optional)\n Protocol to use like ``tcp://``, ``tls://``, ``inproc://``\n This defaults to sensible choice given other keyword arguments like\n ``processes`` and ``security``\n interface: str (optional)\n Network interface to use. Defaults to lo/localhost\n worker_class: Worker\n Worker class used to instantiate workers from. Defaults to Worker if\n processes=False and Nanny if processes=True or omitted.\n **worker_kwargs:\n Extra worker arguments. Any additional keyword arguments will be passed\n to the ``Worker`` class constructor.\n\n Examples\n --------\n >>> cluster = LocalCluster() # Create a local cluster # doctest: +SKIP\n >>> cluster # doctest: +SKIP\n LocalCluster(\"127.0.0.1:8786\", workers=8, threads=8)\n\n >>> c = Client(cluster) # connect to local cluster # doctest: +SKIP\n\n Scale the cluster to three workers\n\n >>> cluster.scale(3) # doctest: +SKIP\n\n Pass extra keyword arguments to Bokeh\n\n >>> LocalCluster(service_kwargs={'dashboard': {'prefix': '/foo'}}) # doctest: +SKIP\n \"\"\"\n\n def __init__(\n self,\n name=None,\n n_workers=None,\n threads_per_worker=None,\n processes=None,\n loop=None,\n start=None,\n host=None,\n ip=None,\n scheduler_port=0,\n silence_logs=logging.WARN,\n dashboard_address=\":8787\",\n worker_dashboard_address=None,\n diagnostics_port=None,\n services=None,\n worker_services=None,\n service_kwargs=None,\n asynchronous=False,\n security=None,\n protocol=None,\n blocked_handlers=None,\n interface=None,\n worker_class=None,\n scheduler_kwargs=None,\n scheduler_sync_interval=1,\n **worker_kwargs,\n ):\n if ip is not None:\n # In the future we should warn users about this move\n # warnings.warn(\"The ip keyword has been moved to host\")\n host = ip\n\n if diagnostics_port is not None:\n warnings.warn(\n \"diagnostics_port has been deprecated. \"\n \"Please use `dashboard_address=` instead\"\n )\n dashboard_address = diagnostics_port\n\n if threads_per_worker == 0:\n warnings.warn(\n \"Setting `threads_per_worker` to 0 has been deprecated. \"\n \"Please set to None or to a specific int.\"\n )\n threads_per_worker = None\n\n if \"dashboard\" in worker_kwargs:\n warnings.warn(\n \"Setting `dashboard` is discouraged. 
\"\n \"Please set `dashboard_address` to affect the scheduler (more common) \"\n \"and `worker_dashboard_address` for the worker (less common).\"\n )\n\n if processes is None:\n processes = worker_class is None or issubclass(worker_class, Nanny)\n if worker_class is None:\n worker_class = Nanny if processes else Worker\n\n self.status = None\n self.processes = processes\n\n if security is None:\n # Falsey values load the default configuration\n security = Security()\n elif security is True:\n # True indicates self-signed temporary credentials should be used\n security = Security.temporary()\n elif not isinstance(security, Security):\n raise TypeError(\"security must be a Security object\")\n\n if protocol is None:\n if host and \"://\" in host:\n protocol = host.split(\"://\")[0]\n elif security and security.require_encryption:\n protocol = \"tls://\"\n elif not self.processes and not scheduler_port:\n protocol = \"inproc://\"\n else:\n protocol = \"tcp://\"\n if not protocol.endswith(\"://\"):\n protocol = protocol + \"://\"\n\n if host is None and not protocol.startswith(\"inproc\") and not interface:\n host = \"127.0.0.1\"\n\n services = services or {}\n worker_services = worker_services or {}\n if n_workers is None and threads_per_worker is None:\n if processes:\n n_workers, threads_per_worker = nprocesses_nthreads()\n else:\n n_workers = 1\n threads_per_worker = CPU_COUNT\n if n_workers is None and threads_per_worker is not None:\n n_workers = max(1, CPU_COUNT // threads_per_worker) if processes else 1\n if n_workers and threads_per_worker is None:\n # Overcommit threads per worker, rather than undercommit\n threads_per_worker = max(1, int(math.ceil(CPU_COUNT / n_workers)))\n if n_workers and \"memory_limit\" not in worker_kwargs:\n worker_kwargs[\"memory_limit\"] = parse_memory_limit(\"auto\", 1, n_workers)\n\n worker_kwargs.update(\n {\n \"host\": host,\n \"nthreads\": threads_per_worker,\n \"services\": worker_services,\n \"dashboard_address\": worker_dashboard_address,\n \"dashboard\": worker_dashboard_address is not None,\n \"interface\": interface,\n \"protocol\": protocol,\n \"security\": security,\n \"silence_logs\": silence_logs,\n }\n )\n\n scheduler = {\n \"cls\": Scheduler,\n \"options\": toolz.merge(\n dict(\n host=host,\n services=services,\n service_kwargs=service_kwargs,\n security=security,\n port=scheduler_port,\n interface=interface,\n protocol=protocol,\n dashboard=dashboard_address is not None,\n dashboard_address=dashboard_address,\n blocked_handlers=blocked_handlers,\n ),\n scheduler_kwargs or {},\n ),\n }\n\n worker = {\"cls\": worker_class, \"options\": worker_kwargs}\n workers = {i: worker for i in range(n_workers)}\n\n super().__init__(\n name=name,\n scheduler=scheduler,\n workers=workers,\n worker=worker,\n loop=loop,\n asynchronous=asynchronous,\n silence_logs=silence_logs,\n security=security,\n scheduler_sync_interval=scheduler_sync_interval,\n )\n\n def start_worker(self, *args, **kwargs):\n raise NotImplementedError(\n \"The `cluster.start_worker` function has been removed. 
\"\n \"Please see the `cluster.scale` method instead.\"\n )\n\n def _repr_html_(self, cluster_status=None):\n cluster_status = get_template(\"local_cluster.html.j2\").render(\n status=self.status.name,\n processes=self.processes,\n cluster_status=cluster_status,\n )\n return super()._repr_html_(cluster_status=cluster_status)\n", "path": "distributed/deploy/local.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport logging\nimport math\nimport warnings\n\nimport toolz\n\nfrom dask.system import CPU_COUNT\nfrom dask.widgets import get_template\n\nfrom distributed.deploy.spec import SpecCluster\nfrom distributed.deploy.utils import nprocesses_nthreads\nfrom distributed.nanny import Nanny\nfrom distributed.scheduler import Scheduler\nfrom distributed.security import Security\nfrom distributed.worker import Worker\nfrom distributed.worker_memory import parse_memory_limit\n\nlogger = logging.getLogger(__name__)\n\n\nclass LocalCluster(SpecCluster):\n \"\"\"Create local Scheduler and Workers\n\n This creates a \"cluster\" of a scheduler and workers running on the local\n machine.\n\n Parameters\n ----------\n n_workers: int\n Number of workers to start\n memory_limit: str, float, int, or None, default \"auto\"\n Sets the memory limit *per worker*.\n\n Notes regarding argument data type:\n\n * If None or 0, no limit is applied.\n * If \"auto\", the total system memory is split evenly between the workers.\n * If a float, that fraction of the system memory is used *per worker*.\n * If a string giving a number of bytes (like ``\"1GiB\"``), that amount is used *per worker*.\n * If an int, that number of bytes is used *per worker*.\n\n Note that the limit will only be enforced when ``processes=True``, and the limit is only\n enforced on a best-effort basis \u2014 it's still possible for workers to exceed this limit.\n processes: bool\n Whether to use processes (True) or threads (False). Defaults to True, unless\n worker_class=Worker, in which case it defaults to False.\n threads_per_worker: int\n Number of threads per each worker\n scheduler_port: int\n Port of the scheduler. 8786 by default, use 0 to choose a random port\n silence_logs: logging level\n Level of logs to print out to stdout. ``logging.WARN`` by default.\n Use a falsey value like False or None for no change.\n host: string\n Host address on which the scheduler will listen, defaults to only localhost\n ip: string\n Deprecated. See ``host`` above.\n dashboard_address: str\n Address on which to listen for the Bokeh diagnostics server like\n 'localhost:8787' or '0.0.0.0:8787'. Defaults to ':8787'.\n Set to ``None`` to disable the dashboard.\n Use ':0' for a random port.\n worker_dashboard_address: str\n Address on which to listen for the Bokeh worker diagnostics server like\n 'localhost:8787' or '0.0.0.0:8787'. Defaults to None which disables the dashboard.\n Use ':0' for a random port.\n diagnostics_port: int\n Deprecated. See dashboard_address.\n asynchronous: bool (False by default)\n Set to True if using this cluster within async/await functions or within\n Tornado gen.coroutines. This should remain False for normal use.\n blocked_handlers: List[str]\n A list of strings specifying a blocklist of handlers to disallow on the\n Scheduler, like ``['feed', 'run_function']``\n service_kwargs: Dict[str, Dict]\n Extra keywords to hand to the running services\n security : Security or bool, optional\n Configures communication security in this cluster. Can be a security\n object, or True. 
If True, temporary self-signed credentials will\n be created automatically.\n protocol: str (optional)\n Protocol to use like ``tcp://``, ``tls://``, ``inproc://``\n This defaults to sensible choice given other keyword arguments like\n ``processes`` and ``security``\n interface: str (optional)\n Network interface to use. Defaults to lo/localhost\n worker_class: Worker\n Worker class used to instantiate workers from. Defaults to Worker if\n processes=False and Nanny if processes=True or omitted.\n **worker_kwargs:\n Extra worker arguments. Any additional keyword arguments will be passed\n to the ``Worker`` class constructor.\n\n Examples\n --------\n >>> cluster = LocalCluster() # Create a local cluster # doctest: +SKIP\n >>> cluster # doctest: +SKIP\n LocalCluster(\"127.0.0.1:8786\", workers=8, threads=8)\n\n >>> c = Client(cluster) # connect to local cluster # doctest: +SKIP\n\n Scale the cluster to three workers\n\n >>> cluster.scale(3) # doctest: +SKIP\n\n Pass extra keyword arguments to Bokeh\n\n >>> LocalCluster(service_kwargs={'dashboard': {'prefix': '/foo'}}) # doctest: +SKIP\n \"\"\"\n\n def __init__(\n self,\n name=None,\n n_workers=None,\n threads_per_worker=None,\n processes=None,\n loop=None,\n start=None,\n host=None,\n ip=None,\n scheduler_port=0,\n silence_logs=logging.WARN,\n dashboard_address=\":8787\",\n worker_dashboard_address=None,\n diagnostics_port=None,\n services=None,\n worker_services=None,\n service_kwargs=None,\n asynchronous=False,\n security=None,\n protocol=None,\n blocked_handlers=None,\n interface=None,\n worker_class=None,\n scheduler_kwargs=None,\n scheduler_sync_interval=1,\n **worker_kwargs,\n ):\n if ip is not None:\n # In the future we should warn users about this move\n # warnings.warn(\"The ip keyword has been moved to host\")\n host = ip\n\n if diagnostics_port is not None:\n warnings.warn(\n \"diagnostics_port has been deprecated. \"\n \"Please use `dashboard_address=` instead\"\n )\n dashboard_address = diagnostics_port\n\n if threads_per_worker == 0:\n warnings.warn(\n \"Setting `threads_per_worker` to 0 has been deprecated. \"\n \"Please set to None or to a specific int.\"\n )\n threads_per_worker = None\n\n if \"dashboard\" in worker_kwargs:\n warnings.warn(\n \"Setting `dashboard` is discouraged. 
\"\n \"Please set `dashboard_address` to affect the scheduler (more common) \"\n \"and `worker_dashboard_address` for the worker (less common).\"\n )\n\n if processes is None:\n processes = worker_class is None or issubclass(worker_class, Nanny)\n if worker_class is None:\n worker_class = Nanny if processes else Worker\n\n self.status = None\n self.processes = processes\n\n if security is None:\n # Falsey values load the default configuration\n security = Security()\n elif security is True:\n # True indicates self-signed temporary credentials should be used\n security = Security.temporary()\n elif not isinstance(security, Security):\n raise TypeError(\"security must be a Security object\")\n\n if protocol is None:\n if host and \"://\" in host:\n protocol = host.split(\"://\")[0]\n elif security and security.require_encryption:\n protocol = \"tls://\"\n elif not self.processes and not scheduler_port:\n protocol = \"inproc://\"\n else:\n protocol = \"tcp://\"\n if not protocol.endswith(\"://\"):\n protocol = protocol + \"://\"\n\n if host is None and not protocol.startswith(\"inproc\") and not interface:\n host = \"127.0.0.1\"\n\n services = services or {}\n worker_services = worker_services or {}\n if n_workers is None and threads_per_worker is None:\n if processes:\n n_workers, threads_per_worker = nprocesses_nthreads()\n else:\n n_workers = 1\n threads_per_worker = CPU_COUNT\n if n_workers is None and threads_per_worker is not None:\n n_workers = max(1, CPU_COUNT // threads_per_worker) if processes else 1\n if n_workers and threads_per_worker is None:\n # Overcommit threads per worker, rather than undercommit\n threads_per_worker = max(1, int(math.ceil(CPU_COUNT / n_workers)))\n if n_workers and \"memory_limit\" not in worker_kwargs:\n worker_kwargs[\"memory_limit\"] = parse_memory_limit(\"auto\", 1, n_workers)\n\n worker_kwargs.update(\n {\n \"host\": host,\n \"nthreads\": threads_per_worker,\n \"services\": worker_services,\n \"dashboard_address\": worker_dashboard_address,\n \"dashboard\": worker_dashboard_address is not None,\n \"interface\": interface,\n \"protocol\": protocol,\n \"security\": security,\n \"silence_logs\": silence_logs,\n }\n )\n\n scheduler = {\n \"cls\": Scheduler,\n \"options\": toolz.merge(\n dict(\n host=host,\n services=services,\n service_kwargs=service_kwargs,\n security=security,\n port=scheduler_port,\n interface=interface,\n protocol=protocol,\n dashboard=dashboard_address is not None,\n dashboard_address=dashboard_address,\n blocked_handlers=blocked_handlers,\n ),\n scheduler_kwargs or {},\n ),\n }\n\n worker = {\"cls\": worker_class, \"options\": worker_kwargs}\n workers = {i: worker for i in range(n_workers)}\n\n super().__init__(\n name=name,\n scheduler=scheduler,\n workers=workers,\n worker=worker,\n loop=loop,\n asynchronous=asynchronous,\n silence_logs=silence_logs,\n security=security,\n scheduler_sync_interval=scheduler_sync_interval,\n )\n\n def start_worker(self, *args, **kwargs):\n raise NotImplementedError(\n \"The `cluster.start_worker` function has been removed. \"\n \"Please see the `cluster.scale` method instead.\"\n )\n\n def _repr_html_(self, cluster_status=None):\n cluster_status = get_template(\"local_cluster.html.j2\").render(\n status=self.status.name,\n processes=self.processes,\n cluster_status=cluster_status,\n )\n return super()._repr_html_(cluster_status=cluster_status)\n", "path": "distributed/deploy/local.py"}]}
| 3,052 | 286 |
gh_patches_debug_14398
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-3803
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
/related_project/ issue
Attempted to POST the following data:
{"relation":"1","project":"8012","related_iati_id":"23232"}
The response is 400:
`related_project: ["This field is required."]`
However, we want to allow the selection of an external project:
<img width="1440" alt="Screenshot 2019-08-16 10 16 11" src="https://user-images.githubusercontent.com/1336477/63150230-effe3400-c00e-11e9-8aff-dc162686751a.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rest/serializers/related_project.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7
8 from akvo.rsr.models import RelatedProject
9
10 from .rsr_serializer import BaseRSRSerializer
11
12 from rest_framework import serializers
13
14
15 class RelatedProjectRawSerializer(BaseRSRSerializer):
16
17 class Meta:
18 model = RelatedProject
19 fields = '__all__'
20
21
22 class RelatedProjectSerializer(RelatedProjectRawSerializer):
23
24 related_project_name = serializers.ReadOnlyField(source='related_project.title')
25 related_project_show_link = serializers.ReadOnlyField()
26 relation_label = serializers.ReadOnlyField(source='iati_relation_unicode')
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/akvo/rest/serializers/related_project.py b/akvo/rest/serializers/related_project.py
--- a/akvo/rest/serializers/related_project.py
+++ b/akvo/rest/serializers/related_project.py
@@ -5,7 +5,7 @@
# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
-from akvo.rsr.models import RelatedProject
+from akvo.rsr.models import RelatedProject, Project
from .rsr_serializer import BaseRSRSerializer
@@ -14,6 +14,9 @@
class RelatedProjectRawSerializer(BaseRSRSerializer):
+ related_project = serializers.PrimaryKeyRelatedField(
+ allow_null=True, queryset=Project.objects.all(), required=False, default=None)
+
class Meta:
model = RelatedProject
fields = '__all__'
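For reference, a hedged sketch of the request this change is meant to let through. The JSON payload is the one quoted in the issue; the host, API path, and token are placeholders, not Akvo RSR's actual values:

```python
import requests

response = requests.post(
    "https://rsr.example.org/rest/v1/related_project/",  # hypothetical endpoint
    json={"relation": "1", "project": "8012", "related_iati_id": "23232"},
    headers={"Authorization": "Token <your-api-key>"},   # hypothetical credentials
)
# With related_project now optional (required=False, allow_null=True),
# this payload should no longer be rejected with "This field is required."
response.raise_for_status()
```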
|
{"golden_diff": "diff --git a/akvo/rest/serializers/related_project.py b/akvo/rest/serializers/related_project.py\n--- a/akvo/rest/serializers/related_project.py\n+++ b/akvo/rest/serializers/related_project.py\n@@ -5,7 +5,7 @@\n # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n \n \n-from akvo.rsr.models import RelatedProject\n+from akvo.rsr.models import RelatedProject, Project\n \n from .rsr_serializer import BaseRSRSerializer\n \n@@ -14,6 +14,9 @@\n \n class RelatedProjectRawSerializer(BaseRSRSerializer):\n \n+ related_project = serializers.PrimaryKeyRelatedField(\n+ allow_null=True, queryset=Project.objects.all(), required=False, default=None)\n+\n class Meta:\n model = RelatedProject\n fields = '__all__'\n", "issue": "/related_project/ issue\nAttempted to POST the following data:\r\n{\"relation\":\"1\",\"project\":\"8012\",\"related_iati_id\":\"23232\"}\r\n\r\nThe response is 400:\r\n`related_project: [\"This field is required.\"]`\r\n\r\nHowever, we want to allow a selection of external project:\r\n<img width=\"1440\" alt=\"Screenshot 2019-08-16 10 16 11\" src=\"https://user-images.githubusercontent.com/1336477/63150230-effe3400-c00e-11e9-8aff-dc162686751a.png\">\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom akvo.rsr.models import RelatedProject\n\nfrom .rsr_serializer import BaseRSRSerializer\n\nfrom rest_framework import serializers\n\n\nclass RelatedProjectRawSerializer(BaseRSRSerializer):\n\n class Meta:\n model = RelatedProject\n fields = '__all__'\n\n\nclass RelatedProjectSerializer(RelatedProjectRawSerializer):\n\n related_project_name = serializers.ReadOnlyField(source='related_project.title')\n related_project_show_link = serializers.ReadOnlyField()\n relation_label = serializers.ReadOnlyField(source='iati_relation_unicode')\n", "path": "akvo/rest/serializers/related_project.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom akvo.rsr.models import RelatedProject, Project\n\nfrom .rsr_serializer import BaseRSRSerializer\n\nfrom rest_framework import serializers\n\n\nclass RelatedProjectRawSerializer(BaseRSRSerializer):\n\n related_project = serializers.PrimaryKeyRelatedField(\n allow_null=True, queryset=Project.objects.all(), required=False, default=None)\n\n class Meta:\n model = RelatedProject\n fields = '__all__'\n\n\nclass RelatedProjectSerializer(RelatedProjectRawSerializer):\n\n related_project_name = serializers.ReadOnlyField(source='related_project.title')\n related_project_show_link = serializers.ReadOnlyField()\n relation_label = serializers.ReadOnlyField(source='iati_relation_unicode')\n", "path": "akvo/rest/serializers/related_project.py"}]}
| 651 | 193 |
gh_patches_debug_15838
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-4730
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AppRegistryNotReady("Apps aren't loaded yet.")
### After updating wagtail from 2.1 to 2.2 I get an AppRegistryNotReady error when migrating.
Removing `from .sitemap_generator import Sitemap` from `wagtail/contrib/sitemaps/__init__.py` fixed the problem.
Maybe it has something to do with the User model; I use a custom one.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/contrib/sitemaps/sitemap_generator.py`
Content:
```
1 import warnings
2
3 from django.contrib.sitemaps import Sitemap as DjangoSitemap
4
5 from wagtail.core.models import Site
6 from wagtail.core.utils import accepts_kwarg
7 from wagtail.utils.deprecation import RemovedInWagtail24Warning
8
9
10 class Sitemap(DjangoSitemap):
11
12 def __init__(self, request=None):
13 self.request = request
14
15 def location(self, obj):
16 return obj.get_full_url(self.request)
17
18 def lastmod(self, obj):
19 # fall back on latest_revision_created_at if last_published_at is null
20 # (for backwards compatibility from before last_published_at was added)
21 return (obj.last_published_at or obj.latest_revision_created_at)
22
23 def get_wagtail_site(self):
24 site = getattr(self.request, 'site', None)
25 if site is None:
26 return Site.objects.select_related(
27 'root_page'
28 ).get(is_default_site=True)
29 return site
30
31 def items(self):
32 return (
33 self.get_wagtail_site()
34 .root_page
35 .get_descendants(inclusive=True)
36 .live()
37 .public()
38 .order_by('path')
39 .specific())
40
41 def _urls(self, page, protocol, domain):
42 urls = []
43 last_mods = set()
44
45 for item in self.paginator.page(page).object_list:
46
47 if not accepts_kwarg(item.get_sitemap_urls, 'request'):
48 warnings.warn(
49 "%s.get_sitemap_urls() must be updated to accept an optional "
50 "'request' keyword argument" % type(item).__name__,
51 category=RemovedInWagtail24Warning)
52
53 url_info_items = item.get_sitemap_urls()
54 else:
55 url_info_items = item.get_sitemap_urls(self.request)
56
57 for url_info in url_info_items:
58 urls.append(url_info)
59 last_mods.add(url_info.get('lastmod'))
60
61 # last_mods might be empty if the whole site is private
62 if last_mods and None not in last_mods:
63 self.latest_lastmod = max(last_mods)
64 return urls
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wagtail/contrib/sitemaps/sitemap_generator.py b/wagtail/contrib/sitemaps/sitemap_generator.py
--- a/wagtail/contrib/sitemaps/sitemap_generator.py
+++ b/wagtail/contrib/sitemaps/sitemap_generator.py
@@ -2,7 +2,6 @@
from django.contrib.sitemaps import Sitemap as DjangoSitemap
-from wagtail.core.models import Site
from wagtail.core.utils import accepts_kwarg
from wagtail.utils.deprecation import RemovedInWagtail24Warning
@@ -23,6 +22,7 @@
def get_wagtail_site(self):
site = getattr(self.request, 'site', None)
if site is None:
+ from wagtail.core.models import Site
return Site.objects.select_related(
'root_page'
).get(is_default_site=True)
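The patch resolves the error by deferring the `Site` import until the method actually runs, at which point Django's app registry is guaranteed to be populated. A small sketch of that general pattern, using placeholder app and model names rather than Wagtail's:

```python
# Importing ORM models at module level can raise AppRegistryNotReady when the
# module is pulled in before django.setup() finishes (e.g. from an AppConfig).

def get_default_site():
    # Deferred import: resolved only at call time, after apps are loaded.
    from myapp.models import Site  # placeholder app and model
    return Site.objects.get(is_default=True)
```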
|
{"golden_diff": "diff --git a/wagtail/contrib/sitemaps/sitemap_generator.py b/wagtail/contrib/sitemaps/sitemap_generator.py\n--- a/wagtail/contrib/sitemaps/sitemap_generator.py\n+++ b/wagtail/contrib/sitemaps/sitemap_generator.py\n@@ -2,7 +2,6 @@\n \n from django.contrib.sitemaps import Sitemap as DjangoSitemap\n \n-from wagtail.core.models import Site\n from wagtail.core.utils import accepts_kwarg\n from wagtail.utils.deprecation import RemovedInWagtail24Warning\n \n@@ -23,6 +22,7 @@\n def get_wagtail_site(self):\n site = getattr(self.request, 'site', None)\n if site is None:\n+ from wagtail.core.models import Site\n return Site.objects.select_related(\n 'root_page'\n ).get(is_default_site=True)\n", "issue": "AppRegistryNotReady(\"Apps aren't loaded yet.\")\n### After updating wagtail from 2.1 to 2.2 I get an AppRegistryNotReady error when migrating.\r\n\r\nRemoving **from .sitemap_generator import Sitemap**\r\nfrom **wagtail/contrib/sitemaps/__init__.py**\r\nfixed the problem.\r\n\r\nMaybe it has something to do with the User model; I use a custom one.\n", "before_files": [{"content": "import warnings\n\nfrom django.contrib.sitemaps import Sitemap as DjangoSitemap\n\nfrom wagtail.core.models import Site\nfrom wagtail.core.utils import accepts_kwarg\nfrom wagtail.utils.deprecation import RemovedInWagtail24Warning\n\n\nclass Sitemap(DjangoSitemap):\n\n def __init__(self, request=None):\n self.request = request\n\n def location(self, obj):\n return obj.get_full_url(self.request)\n\n def lastmod(self, obj):\n # fall back on latest_revision_created_at if last_published_at is null\n # (for backwards compatibility from before last_published_at was added)\n return (obj.last_published_at or obj.latest_revision_created_at)\n\n def get_wagtail_site(self):\n site = getattr(self.request, 'site', None)\n if site is None:\n return Site.objects.select_related(\n 'root_page'\n ).get(is_default_site=True)\n return site\n\n def items(self):\n return (\n self.get_wagtail_site()\n .root_page\n .get_descendants(inclusive=True)\n .live()\n .public()\n .order_by('path')\n .specific())\n\n def _urls(self, page, protocol, domain):\n urls = []\n last_mods = set()\n\n for item in self.paginator.page(page).object_list:\n\n if not accepts_kwarg(item.get_sitemap_urls, 'request'):\n warnings.warn(\n \"%s.get_sitemap_urls() must be updated to accept an optional \"\n \"'request' keyword argument\" % type(item).__name__,\n category=RemovedInWagtail24Warning)\n\n url_info_items = item.get_sitemap_urls()\n else:\n url_info_items = item.get_sitemap_urls(self.request)\n\n for url_info in url_info_items:\n urls.append(url_info)\n last_mods.add(url_info.get('lastmod'))\n\n # last_mods might be empty if the whole site is private\n if last_mods and None not in last_mods:\n self.latest_lastmod = max(last_mods)\n return urls\n", "path": "wagtail/contrib/sitemaps/sitemap_generator.py"}], "after_files": [{"content": "import warnings\n\nfrom django.contrib.sitemaps import Sitemap as DjangoSitemap\n\nfrom wagtail.core.utils import accepts_kwarg\nfrom wagtail.utils.deprecation import RemovedInWagtail24Warning\n\n\nclass Sitemap(DjangoSitemap):\n\n def __init__(self, request=None):\n self.request = request\n\n def location(self, obj):\n return obj.get_full_url(self.request)\n\n def lastmod(self, obj):\n # fall back on latest_revision_created_at if last_published_at is null\n # (for backwards compatibility from before last_published_at was added)\n return (obj.last_published_at or obj.latest_revision_created_at)\n\n def 
get_wagtail_site(self):\n site = getattr(self.request, 'site', None)\n if site is None:\n from wagtail.core.models import Site\n return Site.objects.select_related(\n 'root_page'\n ).get(is_default_site=True)\n return site\n\n def items(self):\n return (\n self.get_wagtail_site()\n .root_page\n .get_descendants(inclusive=True)\n .live()\n .public()\n .order_by('path')\n .specific())\n\n def _urls(self, page, protocol, domain):\n urls = []\n last_mods = set()\n\n for item in self.paginator.page(page).object_list:\n\n if not accepts_kwarg(item.get_sitemap_urls, 'request'):\n warnings.warn(\n \"%s.get_sitemap_urls() must be updated to accept an optional \"\n \"'request' keyword argument\" % type(item).__name__,\n category=RemovedInWagtail24Warning)\n\n url_info_items = item.get_sitemap_urls()\n else:\n url_info_items = item.get_sitemap_urls(self.request)\n\n for url_info in url_info_items:\n urls.append(url_info)\n last_mods.add(url_info.get('lastmod'))\n\n # last_mods might be empty if the whole site is private\n if last_mods and None not in last_mods:\n self.latest_lastmod = max(last_mods)\n return urls\n", "path": "wagtail/contrib/sitemaps/sitemap_generator.py"}]}
| 920 | 188 |
gh_patches_debug_28337
|
rasdani/github-patches
|
git_diff
|
vyperlang__vyper-1968
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Malformed data when passing a tuple or struct that contains an array
### Version Information
* vyper Version (output of `vyper --version`): `0.1.0-beta16` (current master)
* OS: linux
* Python Version: 3.7
### What's your issue about?
As reported by @michwill on Gitter, passing a struct between functions, where the struct contains an array, results in malformed data. With the following contract:
```python
struct A:
many: uint256[4]
one: uint256
@private
@constant
def _foo(_many: uint256[4], _one: uint256) -> A:
return A({many: _many, one: _one})
@public
@constant
def foo() -> A:
return self._foo([1, 2, 3, 4], 5)
@public
@constant
def bar() -> (uint256[4], uint256):
out: A = self._foo([1, 2, 3, 4], 5)
return out.many, out.one
```
Here is the output of each public function:
```python
>>> contract.foo()
((1, 2, 5, 0), 0)
>>> contract.bar()
((0, 0, 5, 4), 3)
```
Observations:
1. Generating the struct within the public function and returning it, or a component of it, works as expected.
2. Passing the struct between public functions of two different contracts works as expected.
3. Structs that do not contain arrays work as expected.
4. Removing the `@constant` modifier has no effect.
### How can it be fixed?
It looks like an off-by-one (or 2?) error to me, but I'm unsure how to fix it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vyper/parser/self_call.py`
Content:
```
1 import itertools
2
3 from vyper.exceptions import (
4 ConstancyViolation,
5 StructureException,
6 TypeMismatch,
7 )
8 from vyper.parser.lll_node import LLLnode
9 from vyper.parser.parser_utils import getpos, pack_arguments
10 from vyper.signatures.function_signature import FunctionSignature
11 from vyper.types import (
12 BaseType,
13 ByteArrayLike,
14 ListType,
15 TupleLike,
16 get_size_of_type,
17 get_static_size_of_type,
18 has_dynamic_data,
19 )
20
21
22 def call_lookup_specs(stmt_expr, context):
23 from vyper.parser.expr import Expr
24
25 method_name = stmt_expr.func.attr
26
27 if len(stmt_expr.keywords):
28 raise TypeMismatch(
29 "Cannot use keyword arguments in calls to functions via 'self'",
30 stmt_expr,
31 )
32 expr_args = [
33 Expr(arg, context).lll_node
34 for arg in stmt_expr.args
35 ]
36
37 sig = FunctionSignature.lookup_sig(
38 context.sigs,
39 method_name,
40 expr_args,
41 stmt_expr,
42 context,
43 )
44
45 return method_name, expr_args, sig
46
47
48 def make_call(stmt_expr, context):
49 method_name, _, sig = call_lookup_specs(stmt_expr, context)
50
51 if context.is_constant() and not sig.const:
52 raise ConstancyViolation(
53 f"May not call non-constant function '{method_name}' within {context.pp_constancy()}.",
54 getpos(stmt_expr)
55 )
56
57 if not sig.private:
58 raise StructureException("Cannot call public functions via 'self'", stmt_expr)
59
60 return call_self_private(stmt_expr, context, sig)
61
62
63 def call_make_placeholder(stmt_expr, context, sig):
64 if sig.output_type is None:
65 return 0, 0, 0
66
67 output_placeholder = context.new_placeholder(typ=sig.output_type)
68 out_size = get_size_of_type(sig.output_type) * 32
69 returner = output_placeholder
70
71 if not sig.private and isinstance(sig.output_type, ByteArrayLike):
72 returner = output_placeholder + 32
73
74 return output_placeholder, returner, out_size
75
76
77 def call_self_private(stmt_expr, context, sig):
78 # ** Private Call **
79 # Steps:
80 # (x) push current local variables
81 # (x) push arguments
82 # (x) push jumpdest (callback ptr)
83 # (x) jump to label
84 # (x) pop return values
85 # (x) pop local variables
86
87 method_name, expr_args, sig = call_lookup_specs(stmt_expr, context)
88 pre_init = []
89 pop_local_vars = []
90 push_local_vars = []
91 pop_return_values = []
92 push_args = []
93
94 # Push local variables.
95 var_slots = [
96 (v.pos, v.size) for name, v in context.vars.items()
97 if v.location == 'memory'
98 ]
99 if var_slots:
100 var_slots.sort(key=lambda x: x[0])
101 mem_from, mem_to = var_slots[0][0], var_slots[-1][0] + var_slots[-1][1] * 32
102
103 i_placeholder = context.new_placeholder(BaseType('uint256'))
104 local_save_ident = f"_{stmt_expr.lineno}_{stmt_expr.col_offset}"
105 push_loop_label = 'save_locals_start' + local_save_ident
106 pop_loop_label = 'restore_locals_start' + local_save_ident
107
108 if mem_to - mem_from > 320:
109 push_local_vars = [
110 ['mstore', i_placeholder, mem_from],
111 ['label', push_loop_label],
112 ['mload', ['mload', i_placeholder]],
113 ['mstore', i_placeholder, ['add', ['mload', i_placeholder], 32]],
114 ['if', ['lt', ['mload', i_placeholder], mem_to],
115 ['goto', push_loop_label]]
116 ]
117 pop_local_vars = [
118 ['mstore', i_placeholder, mem_to - 32],
119 ['label', pop_loop_label],
120 ['mstore', ['mload', i_placeholder], 'pass'],
121 ['mstore', i_placeholder, ['sub', ['mload', i_placeholder], 32]],
122 ['if', ['ge', ['mload', i_placeholder], mem_from],
123 ['goto', pop_loop_label]]
124 ]
125 else:
126 push_local_vars = [['mload', pos] for pos in range(mem_from, mem_to, 32)]
127 pop_local_vars = [['mstore', pos, 'pass'] for pos in range(mem_to-32, mem_from-32, -32)]
128
129 # Push Arguments
130 if expr_args:
131 inargs, inargsize, arg_pos = pack_arguments(
132 sig,
133 expr_args,
134 context,
135 stmt_expr,
136 return_placeholder=False,
137 )
138 push_args += [inargs] # copy arguments first, to not mess up the push/pop sequencing.
139
140 static_arg_size = 32 * sum(
141 [get_static_size_of_type(arg.typ)
142 for arg in expr_args])
143 static_pos = int(arg_pos + static_arg_size)
144 needs_dyn_section = any(
145 [has_dynamic_data(arg.typ)
146 for arg in expr_args])
147
148 if needs_dyn_section:
149 ident = f'push_args_{sig.method_id}_{stmt_expr.lineno}_{stmt_expr.col_offset}'
150 start_label = ident + '_start'
151 end_label = ident + '_end'
152 i_placeholder = context.new_placeholder(BaseType('uint256'))
153
154 # Calculate copy start position.
155 # Given | static | dynamic | section in memory,
156 # copy backwards so the values are in order on the stack.
157 # We calculate i, the end of the whole encoded part
158 # (i.e. the starting index for copy)
159 # by taking ceil32(len<arg>) + offset<arg> + arg_pos
160 # for the last dynamic argument and arg_pos is the start
161 # the whole argument section.
162 idx = 0
163 for arg in expr_args:
164 if isinstance(arg.typ, ByteArrayLike):
165 last_idx = idx
166 idx += get_static_size_of_type(arg.typ)
167 push_args += [
168 ['with', 'offset', ['mload', arg_pos + last_idx * 32],
169 ['with', 'len_pos', ['add', arg_pos, 'offset'],
170 ['with', 'len_value', ['mload', 'len_pos'],
171 ['mstore', i_placeholder,
172 ['add', 'len_pos', ['ceil32', 'len_value']]]]]]
173 ]
174 # loop from end of dynamic section to start of dynamic section,
175 # pushing each element onto the stack.
176 push_args += [
177
178 ['label', start_label],
179 ['if', ['lt', ['mload', i_placeholder], static_pos],
180 ['goto', end_label]],
181 ['mload', ['mload', i_placeholder]],
182 ['mstore', i_placeholder, ['sub', ['mload', i_placeholder], 32]], # decrease i
183 ['goto', start_label],
184 ['label', end_label]
185 ]
186
187 # push static section
188 push_args += [
189 ['mload', pos] for pos in reversed(range(arg_pos, static_pos, 32))
190 ]
191 elif sig.args:
192 raise StructureException(
193 f"Wrong number of args for: {sig.name} (0 args given, expected {len(sig.args)})",
194 stmt_expr
195 )
196
197 # Jump to function label.
198 jump_to_func = [
199 ['add', ['pc'], 6], # set callback pointer.
200 ['goto', f'priv_{sig.method_id}'],
201 ['jumpdest'],
202 ]
203
204 # Pop return values.
205 returner = [0]
206 if sig.output_type:
207 output_placeholder, returner, output_size = call_make_placeholder(stmt_expr, context, sig)
208 if output_size > 0:
209 dynamic_offsets = []
210 if isinstance(sig.output_type, (BaseType, ListType)):
211 pop_return_values = [
212 ['mstore', ['add', output_placeholder, pos], 'pass']
213 for pos in range(0, output_size, 32)
214 ]
215 elif isinstance(sig.output_type, ByteArrayLike):
216 dynamic_offsets = [(0, sig.output_type)]
217 pop_return_values = [
218 ['pop', 'pass'],
219 ]
220 elif isinstance(sig.output_type, TupleLike):
221 static_offset = 0
222 pop_return_values = []
223 for out_type in sig.output_type.members:
224 if isinstance(out_type, ByteArrayLike):
225 pop_return_values.append(
226 ['mstore', ['add', output_placeholder, static_offset], 'pass']
227 )
228 dynamic_offsets.append(
229 (['mload', ['add', output_placeholder, static_offset]], out_type)
230 )
231 else:
232 pop_return_values.append(
233 ['mstore', ['add', output_placeholder, static_offset], 'pass']
234 )
235 static_offset += 32
236
237 # append dynamic unpacker.
238 dyn_idx = 0
239 for in_memory_offset, _out_type in dynamic_offsets:
240 ident = f"{stmt_expr.lineno}_{stmt_expr.col_offset}_arg_{dyn_idx}"
241 dyn_idx += 1
242 start_label = 'dyn_unpack_start_' + ident
243 end_label = 'dyn_unpack_end_' + ident
244 i_placeholder = context.new_placeholder(typ=BaseType('uint256'))
245 begin_pos = ['add', output_placeholder, in_memory_offset]
246 # loop until length.
247 o = LLLnode.from_list(
248 ['seq_unchecked',
249 ['mstore', begin_pos, 'pass'], # get len
250 ['mstore', i_placeholder, 0],
251 ['label', start_label],
252 [ # break
253 'if',
254 ['ge', ['mload', i_placeholder], ['ceil32', ['mload', begin_pos]]],
255 ['goto', end_label]
256 ],
257 [ # pop into correct memory slot.
258 'mstore',
259 ['add', ['add', begin_pos, 32], ['mload', i_placeholder]],
260 'pass',
261 ],
262 # increment i
263 ['mstore', i_placeholder, ['add', 32, ['mload', i_placeholder]]],
264 ['goto', start_label],
265 ['label', end_label]],
266 typ=None, annotation='dynamic unpacker', pos=getpos(stmt_expr))
267 pop_return_values.append(o)
268
269 call_body = list(itertools.chain(
270 ['seq_unchecked'],
271 pre_init,
272 push_local_vars,
273 push_args,
274 jump_to_func,
275 pop_return_values,
276 pop_local_vars,
277 [returner],
278 ))
279 # If we have no return, we need to pop off
280 pop_returner_call_body = ['pop', call_body] if sig.output_type is None else call_body
281
282 o = LLLnode.from_list(
283 pop_returner_call_body,
284 typ=sig.output_type,
285 location='memory',
286 pos=getpos(stmt_expr),
287 annotation=f'Internal Call: {method_name}',
288 add_gas_estimate=sig.gas
289 )
290 o.gas += sig.gas
291 return o
292
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/vyper/parser/self_call.py b/vyper/parser/self_call.py
--- a/vyper/parser/self_call.py
+++ b/vyper/parser/self_call.py
@@ -220,19 +220,22 @@
elif isinstance(sig.output_type, TupleLike):
static_offset = 0
pop_return_values = []
- for out_type in sig.output_type.members:
- if isinstance(out_type, ByteArrayLike):
+ for name, typ in sig.output_type.tuple_items():
+ if isinstance(typ, ByteArrayLike):
pop_return_values.append(
['mstore', ['add', output_placeholder, static_offset], 'pass']
)
dynamic_offsets.append(
- (['mload', ['add', output_placeholder, static_offset]], out_type)
+ (['mload', ['add', output_placeholder, static_offset]], name)
)
+ static_offset += 32
else:
- pop_return_values.append(
- ['mstore', ['add', output_placeholder, static_offset], 'pass']
- )
- static_offset += 32
+ member_output_size = get_size_of_type(typ) * 32
+ pop_return_values.extend([
+ ['mstore', ['add', output_placeholder, pos], 'pass']
+ for pos in range(static_offset, static_offset + member_output_size, 32)
+ ])
+ static_offset += member_output_size
# append dynamic unpacker.
dyn_idx = 0
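The core of the fix is the offset bookkeeping: when unpacking a tuple-like return value, each member must advance `static_offset` by its full size in 32-byte words, whereas the old code always advanced by a single word. A small pure-Python sketch of the corrected accounting (the word counts are illustrative):

```python
def static_offsets(member_sizes_in_words):
    """Byte offset of each tuple member within the return buffer."""
    offsets, offset = [], 0
    for words in member_sizes_in_words:
        offsets.append(offset)
        offset += words * 32  # old code added a flat 32 here, regardless of member size
    return offsets

# struct A { many: uint256[4], one: uint256 } -> members of 4 words and 1 word
assert static_offsets([4, 1]) == [0, 128]  # the buggy accounting gave [0, 32]
```

This is consistent with the corrupted outputs quoted in the issue, where the scalar value 5 shows up inside the array's slots.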
|
{"golden_diff": "diff --git a/vyper/parser/self_call.py b/vyper/parser/self_call.py\n--- a/vyper/parser/self_call.py\n+++ b/vyper/parser/self_call.py\n@@ -220,19 +220,22 @@\n elif isinstance(sig.output_type, TupleLike):\n static_offset = 0\n pop_return_values = []\n- for out_type in sig.output_type.members:\n- if isinstance(out_type, ByteArrayLike):\n+ for name, typ in sig.output_type.tuple_items():\n+ if isinstance(typ, ByteArrayLike):\n pop_return_values.append(\n ['mstore', ['add', output_placeholder, static_offset], 'pass']\n )\n dynamic_offsets.append(\n- (['mload', ['add', output_placeholder, static_offset]], out_type)\n+ (['mload', ['add', output_placeholder, static_offset]], name)\n )\n+ static_offset += 32\n else:\n- pop_return_values.append(\n- ['mstore', ['add', output_placeholder, static_offset], 'pass']\n- )\n- static_offset += 32\n+ member_output_size = get_size_of_type(typ) * 32\n+ pop_return_values.extend([\n+ ['mstore', ['add', output_placeholder, pos], 'pass']\n+ for pos in range(static_offset, static_offset + member_output_size, 32)\n+ ])\n+ static_offset += member_output_size\n \n # append dynamic unpacker.\n dyn_idx = 0\n", "issue": "Malformed data when passing a tuple or struct that contains an array\n### Version Information\r\n\r\n* vyper Version (output of `vyper --version`): `0.1.0-beta16` (current master)\r\n* OS: linux\r\n* Python Version: 3.7\r\n\r\n### What's your issue about?\r\n\r\nAs reported by @michwill on Gitter, passing a struct between functions, where the struct contains an array, results in malformed data. WIth the following contract:\r\n\r\n```python\r\nstruct A:\r\n many: uint256[4]\r\n one: uint256\r\n\r\n@private\r\n@constant\r\ndef _foo(_many: uint256[4], _one: uint256) -> A:\r\n return A({many: _many, one: _one})\r\n\r\n@public\r\n@constant\r\ndef foo() -> A:\r\n return self._foo([1, 2, 3, 4], 5)\r\n\r\n@public\r\n@constant\r\ndef bar() -> (uint256[4], uint256):\r\n out: A = self._foo([1, 2, 3, 4], 5)\r\n return out.many, out.one\r\n```\r\n\r\nHere is the output of each public function:\r\n\r\n```python\r\n>>> contract.foo()\r\n((1, 2, 5, 0), 0)\r\n>>> contract.bar()\r\n((0, 0, 5, 4), 3)\r\n```\r\n\r\nObservations:\r\n1. Generating the struct within the public function and returning it, or a component of it, works as expected.\r\n2. Passing the struct between public functions of two different contracts works as expected.\r\n3. Structs that do not contain arrays work as expected.\r\n4. Removing the `@constant` modifier has no effect.\r\n\r\n### How can it be fixed?\r\nIt looks like an off-by-one (or 2?) 
error to me, but I'm unsure how to fix it.\n", "before_files": [{"content": "import itertools\n\nfrom vyper.exceptions import (\n ConstancyViolation,\n StructureException,\n TypeMismatch,\n)\nfrom vyper.parser.lll_node import LLLnode\nfrom vyper.parser.parser_utils import getpos, pack_arguments\nfrom vyper.signatures.function_signature import FunctionSignature\nfrom vyper.types import (\n BaseType,\n ByteArrayLike,\n ListType,\n TupleLike,\n get_size_of_type,\n get_static_size_of_type,\n has_dynamic_data,\n)\n\n\ndef call_lookup_specs(stmt_expr, context):\n from vyper.parser.expr import Expr\n\n method_name = stmt_expr.func.attr\n\n if len(stmt_expr.keywords):\n raise TypeMismatch(\n \"Cannot use keyword arguments in calls to functions via 'self'\",\n stmt_expr,\n )\n expr_args = [\n Expr(arg, context).lll_node\n for arg in stmt_expr.args\n ]\n\n sig = FunctionSignature.lookup_sig(\n context.sigs,\n method_name,\n expr_args,\n stmt_expr,\n context,\n )\n\n return method_name, expr_args, sig\n\n\ndef make_call(stmt_expr, context):\n method_name, _, sig = call_lookup_specs(stmt_expr, context)\n\n if context.is_constant() and not sig.const:\n raise ConstancyViolation(\n f\"May not call non-constant function '{method_name}' within {context.pp_constancy()}.\",\n getpos(stmt_expr)\n )\n\n if not sig.private:\n raise StructureException(\"Cannot call public functions via 'self'\", stmt_expr)\n\n return call_self_private(stmt_expr, context, sig)\n\n\ndef call_make_placeholder(stmt_expr, context, sig):\n if sig.output_type is None:\n return 0, 0, 0\n\n output_placeholder = context.new_placeholder(typ=sig.output_type)\n out_size = get_size_of_type(sig.output_type) * 32\n returner = output_placeholder\n\n if not sig.private and isinstance(sig.output_type, ByteArrayLike):\n returner = output_placeholder + 32\n\n return output_placeholder, returner, out_size\n\n\ndef call_self_private(stmt_expr, context, sig):\n # ** Private Call **\n # Steps:\n # (x) push current local variables\n # (x) push arguments\n # (x) push jumpdest (callback ptr)\n # (x) jump to label\n # (x) pop return values\n # (x) pop local variables\n\n method_name, expr_args, sig = call_lookup_specs(stmt_expr, context)\n pre_init = []\n pop_local_vars = []\n push_local_vars = []\n pop_return_values = []\n push_args = []\n\n # Push local variables.\n var_slots = [\n (v.pos, v.size) for name, v in context.vars.items()\n if v.location == 'memory'\n ]\n if var_slots:\n var_slots.sort(key=lambda x: x[0])\n mem_from, mem_to = var_slots[0][0], var_slots[-1][0] + var_slots[-1][1] * 32\n\n i_placeholder = context.new_placeholder(BaseType('uint256'))\n local_save_ident = f\"_{stmt_expr.lineno}_{stmt_expr.col_offset}\"\n push_loop_label = 'save_locals_start' + local_save_ident\n pop_loop_label = 'restore_locals_start' + local_save_ident\n\n if mem_to - mem_from > 320:\n push_local_vars = [\n ['mstore', i_placeholder, mem_from],\n ['label', push_loop_label],\n ['mload', ['mload', i_placeholder]],\n ['mstore', i_placeholder, ['add', ['mload', i_placeholder], 32]],\n ['if', ['lt', ['mload', i_placeholder], mem_to],\n ['goto', push_loop_label]]\n ]\n pop_local_vars = [\n ['mstore', i_placeholder, mem_to - 32],\n ['label', pop_loop_label],\n ['mstore', ['mload', i_placeholder], 'pass'],\n ['mstore', i_placeholder, ['sub', ['mload', i_placeholder], 32]],\n ['if', ['ge', ['mload', i_placeholder], mem_from],\n ['goto', pop_loop_label]]\n ]\n else:\n push_local_vars = [['mload', pos] for pos in range(mem_from, mem_to, 32)]\n pop_local_vars = [['mstore', pos, 
'pass'] for pos in range(mem_to-32, mem_from-32, -32)]\n\n # Push Arguments\n if expr_args:\n inargs, inargsize, arg_pos = pack_arguments(\n sig,\n expr_args,\n context,\n stmt_expr,\n return_placeholder=False,\n )\n push_args += [inargs] # copy arguments first, to not mess up the push/pop sequencing.\n\n static_arg_size = 32 * sum(\n [get_static_size_of_type(arg.typ)\n for arg in expr_args])\n static_pos = int(arg_pos + static_arg_size)\n needs_dyn_section = any(\n [has_dynamic_data(arg.typ)\n for arg in expr_args])\n\n if needs_dyn_section:\n ident = f'push_args_{sig.method_id}_{stmt_expr.lineno}_{stmt_expr.col_offset}'\n start_label = ident + '_start'\n end_label = ident + '_end'\n i_placeholder = context.new_placeholder(BaseType('uint256'))\n\n # Calculate copy start position.\n # Given | static | dynamic | section in memory,\n # copy backwards so the values are in order on the stack.\n # We calculate i, the end of the whole encoded part\n # (i.e. the starting index for copy)\n # by taking ceil32(len<arg>) + offset<arg> + arg_pos\n # for the last dynamic argument and arg_pos is the start\n # the whole argument section.\n idx = 0\n for arg in expr_args:\n if isinstance(arg.typ, ByteArrayLike):\n last_idx = idx\n idx += get_static_size_of_type(arg.typ)\n push_args += [\n ['with', 'offset', ['mload', arg_pos + last_idx * 32],\n ['with', 'len_pos', ['add', arg_pos, 'offset'],\n ['with', 'len_value', ['mload', 'len_pos'],\n ['mstore', i_placeholder,\n ['add', 'len_pos', ['ceil32', 'len_value']]]]]]\n ]\n # loop from end of dynamic section to start of dynamic section,\n # pushing each element onto the stack.\n push_args += [\n\n ['label', start_label],\n ['if', ['lt', ['mload', i_placeholder], static_pos],\n ['goto', end_label]],\n ['mload', ['mload', i_placeholder]],\n ['mstore', i_placeholder, ['sub', ['mload', i_placeholder], 32]], # decrease i\n ['goto', start_label],\n ['label', end_label]\n ]\n\n # push static section\n push_args += [\n ['mload', pos] for pos in reversed(range(arg_pos, static_pos, 32))\n ]\n elif sig.args:\n raise StructureException(\n f\"Wrong number of args for: {sig.name} (0 args given, expected {len(sig.args)})\",\n stmt_expr\n )\n\n # Jump to function label.\n jump_to_func = [\n ['add', ['pc'], 6], # set callback pointer.\n ['goto', f'priv_{sig.method_id}'],\n ['jumpdest'],\n ]\n\n # Pop return values.\n returner = [0]\n if sig.output_type:\n output_placeholder, returner, output_size = call_make_placeholder(stmt_expr, context, sig)\n if output_size > 0:\n dynamic_offsets = []\n if isinstance(sig.output_type, (BaseType, ListType)):\n pop_return_values = [\n ['mstore', ['add', output_placeholder, pos], 'pass']\n for pos in range(0, output_size, 32)\n ]\n elif isinstance(sig.output_type, ByteArrayLike):\n dynamic_offsets = [(0, sig.output_type)]\n pop_return_values = [\n ['pop', 'pass'],\n ]\n elif isinstance(sig.output_type, TupleLike):\n static_offset = 0\n pop_return_values = []\n for out_type in sig.output_type.members:\n if isinstance(out_type, ByteArrayLike):\n pop_return_values.append(\n ['mstore', ['add', output_placeholder, static_offset], 'pass']\n )\n dynamic_offsets.append(\n (['mload', ['add', output_placeholder, static_offset]], out_type)\n )\n else:\n pop_return_values.append(\n ['mstore', ['add', output_placeholder, static_offset], 'pass']\n )\n static_offset += 32\n\n # append dynamic unpacker.\n dyn_idx = 0\n for in_memory_offset, _out_type in dynamic_offsets:\n ident = f\"{stmt_expr.lineno}_{stmt_expr.col_offset}_arg_{dyn_idx}\"\n dyn_idx += 1\n 
start_label = 'dyn_unpack_start_' + ident\n end_label = 'dyn_unpack_end_' + ident\n i_placeholder = context.new_placeholder(typ=BaseType('uint256'))\n begin_pos = ['add', output_placeholder, in_memory_offset]\n # loop until length.\n o = LLLnode.from_list(\n ['seq_unchecked',\n ['mstore', begin_pos, 'pass'], # get len\n ['mstore', i_placeholder, 0],\n ['label', start_label],\n [ # break\n 'if',\n ['ge', ['mload', i_placeholder], ['ceil32', ['mload', begin_pos]]],\n ['goto', end_label]\n ],\n [ # pop into correct memory slot.\n 'mstore',\n ['add', ['add', begin_pos, 32], ['mload', i_placeholder]],\n 'pass',\n ],\n # increment i\n ['mstore', i_placeholder, ['add', 32, ['mload', i_placeholder]]],\n ['goto', start_label],\n ['label', end_label]],\n typ=None, annotation='dynamic unpacker', pos=getpos(stmt_expr))\n pop_return_values.append(o)\n\n call_body = list(itertools.chain(\n ['seq_unchecked'],\n pre_init,\n push_local_vars,\n push_args,\n jump_to_func,\n pop_return_values,\n pop_local_vars,\n [returner],\n ))\n # If we have no return, we need to pop off\n pop_returner_call_body = ['pop', call_body] if sig.output_type is None else call_body\n\n o = LLLnode.from_list(\n pop_returner_call_body,\n typ=sig.output_type,\n location='memory',\n pos=getpos(stmt_expr),\n annotation=f'Internal Call: {method_name}',\n add_gas_estimate=sig.gas\n )\n o.gas += sig.gas\n return o\n", "path": "vyper/parser/self_call.py"}], "after_files": [{"content": "import itertools\n\nfrom vyper.exceptions import (\n ConstancyViolation,\n StructureException,\n TypeMismatch,\n)\nfrom vyper.parser.lll_node import LLLnode\nfrom vyper.parser.parser_utils import getpos, pack_arguments\nfrom vyper.signatures.function_signature import FunctionSignature\nfrom vyper.types import (\n BaseType,\n ByteArrayLike,\n ListType,\n TupleLike,\n get_size_of_type,\n get_static_size_of_type,\n has_dynamic_data,\n)\n\n\ndef call_lookup_specs(stmt_expr, context):\n from vyper.parser.expr import Expr\n\n method_name = stmt_expr.func.attr\n\n if len(stmt_expr.keywords):\n raise TypeMismatch(\n \"Cannot use keyword arguments in calls to functions via 'self'\",\n stmt_expr,\n )\n expr_args = [\n Expr(arg, context).lll_node\n for arg in stmt_expr.args\n ]\n\n sig = FunctionSignature.lookup_sig(\n context.sigs,\n method_name,\n expr_args,\n stmt_expr,\n context,\n )\n\n return method_name, expr_args, sig\n\n\ndef make_call(stmt_expr, context):\n method_name, _, sig = call_lookup_specs(stmt_expr, context)\n\n if context.is_constant() and not sig.const:\n raise ConstancyViolation(\n f\"May not call non-constant function '{method_name}' within {context.pp_constancy()}.\",\n getpos(stmt_expr)\n )\n\n if not sig.private:\n raise StructureException(\"Cannot call public functions via 'self'\", stmt_expr)\n\n return call_self_private(stmt_expr, context, sig)\n\n\ndef call_make_placeholder(stmt_expr, context, sig):\n if sig.output_type is None:\n return 0, 0, 0\n\n output_placeholder = context.new_placeholder(typ=sig.output_type)\n out_size = get_size_of_type(sig.output_type) * 32\n returner = output_placeholder\n\n if not sig.private and isinstance(sig.output_type, ByteArrayLike):\n returner = output_placeholder + 32\n\n return output_placeholder, returner, out_size\n\n\ndef call_self_private(stmt_expr, context, sig):\n # ** Private Call **\n # Steps:\n # (x) push current local variables\n # (x) push arguments\n # (x) push jumpdest (callback ptr)\n # (x) jump to label\n # (x) pop return values\n # (x) pop local variables\n\n method_name, expr_args, sig = 
call_lookup_specs(stmt_expr, context)\n pre_init = []\n pop_local_vars = []\n push_local_vars = []\n pop_return_values = []\n push_args = []\n\n # Push local variables.\n var_slots = [\n (v.pos, v.size) for name, v in context.vars.items()\n if v.location == 'memory'\n ]\n if var_slots:\n var_slots.sort(key=lambda x: x[0])\n mem_from, mem_to = var_slots[0][0], var_slots[-1][0] + var_slots[-1][1] * 32\n\n i_placeholder = context.new_placeholder(BaseType('uint256'))\n local_save_ident = f\"_{stmt_expr.lineno}_{stmt_expr.col_offset}\"\n push_loop_label = 'save_locals_start' + local_save_ident\n pop_loop_label = 'restore_locals_start' + local_save_ident\n\n if mem_to - mem_from > 320:\n push_local_vars = [\n ['mstore', i_placeholder, mem_from],\n ['label', push_loop_label],\n ['mload', ['mload', i_placeholder]],\n ['mstore', i_placeholder, ['add', ['mload', i_placeholder], 32]],\n ['if', ['lt', ['mload', i_placeholder], mem_to],\n ['goto', push_loop_label]]\n ]\n pop_local_vars = [\n ['mstore', i_placeholder, mem_to - 32],\n ['label', pop_loop_label],\n ['mstore', ['mload', i_placeholder], 'pass'],\n ['mstore', i_placeholder, ['sub', ['mload', i_placeholder], 32]],\n ['if', ['ge', ['mload', i_placeholder], mem_from],\n ['goto', pop_loop_label]]\n ]\n else:\n push_local_vars = [['mload', pos] for pos in range(mem_from, mem_to, 32)]\n pop_local_vars = [['mstore', pos, 'pass'] for pos in range(mem_to-32, mem_from-32, -32)]\n\n # Push Arguments\n if expr_args:\n inargs, inargsize, arg_pos = pack_arguments(\n sig,\n expr_args,\n context,\n stmt_expr,\n return_placeholder=False,\n )\n push_args += [inargs] # copy arguments first, to not mess up the push/pop sequencing.\n\n static_arg_size = 32 * sum(\n [get_static_size_of_type(arg.typ)\n for arg in expr_args])\n static_pos = int(arg_pos + static_arg_size)\n needs_dyn_section = any(\n [has_dynamic_data(arg.typ)\n for arg in expr_args])\n\n if needs_dyn_section:\n ident = f'push_args_{sig.method_id}_{stmt_expr.lineno}_{stmt_expr.col_offset}'\n start_label = ident + '_start'\n end_label = ident + '_end'\n i_placeholder = context.new_placeholder(BaseType('uint256'))\n\n # Calculate copy start position.\n # Given | static | dynamic | section in memory,\n # copy backwards so the values are in order on the stack.\n # We calculate i, the end of the whole encoded part\n # (i.e. 
the starting index for copy)\n # by taking ceil32(len<arg>) + offset<arg> + arg_pos\n # for the last dynamic argument and arg_pos is the start\n # the whole argument section.\n idx = 0\n for arg in expr_args:\n if isinstance(arg.typ, ByteArrayLike):\n last_idx = idx\n idx += get_static_size_of_type(arg.typ)\n push_args += [\n ['with', 'offset', ['mload', arg_pos + last_idx * 32],\n ['with', 'len_pos', ['add', arg_pos, 'offset'],\n ['with', 'len_value', ['mload', 'len_pos'],\n ['mstore', i_placeholder,\n ['add', 'len_pos', ['ceil32', 'len_value']]]]]]\n ]\n # loop from end of dynamic section to start of dynamic section,\n # pushing each element onto the stack.\n push_args += [\n\n ['label', start_label],\n ['if', ['lt', ['mload', i_placeholder], static_pos],\n ['goto', end_label]],\n ['mload', ['mload', i_placeholder]],\n ['mstore', i_placeholder, ['sub', ['mload', i_placeholder], 32]], # decrease i\n ['goto', start_label],\n ['label', end_label]\n ]\n\n # push static section\n push_args += [\n ['mload', pos] for pos in reversed(range(arg_pos, static_pos, 32))\n ]\n elif sig.args:\n raise StructureException(\n f\"Wrong number of args for: {sig.name} (0 args given, expected {len(sig.args)})\",\n stmt_expr\n )\n\n # Jump to function label.\n jump_to_func = [\n ['add', ['pc'], 6], # set callback pointer.\n ['goto', f'priv_{sig.method_id}'],\n ['jumpdest'],\n ]\n\n # Pop return values.\n returner = [0]\n if sig.output_type:\n output_placeholder, returner, output_size = call_make_placeholder(stmt_expr, context, sig)\n if output_size > 0:\n dynamic_offsets = []\n if isinstance(sig.output_type, (BaseType, ListType)):\n pop_return_values = [\n ['mstore', ['add', output_placeholder, pos], 'pass']\n for pos in range(0, output_size, 32)\n ]\n elif isinstance(sig.output_type, ByteArrayLike):\n dynamic_offsets = [(0, sig.output_type)]\n pop_return_values = [\n ['pop', 'pass'],\n ]\n elif isinstance(sig.output_type, TupleLike):\n static_offset = 0\n pop_return_values = []\n for name, typ in sig.output_type.tuple_items():\n if isinstance(typ, ByteArrayLike):\n pop_return_values.append(\n ['mstore', ['add', output_placeholder, static_offset], 'pass']\n )\n dynamic_offsets.append(\n (['mload', ['add', output_placeholder, static_offset]], name)\n )\n static_offset += 32\n else:\n member_output_size = get_size_of_type(typ) * 32\n pop_return_values.extend([\n ['mstore', ['add', output_placeholder, pos], 'pass']\n for pos in range(static_offset, static_offset + member_output_size, 32)\n ])\n static_offset += member_output_size\n\n # append dynamic unpacker.\n dyn_idx = 0\n for in_memory_offset, _out_type in dynamic_offsets:\n ident = f\"{stmt_expr.lineno}_{stmt_expr.col_offset}_arg_{dyn_idx}\"\n dyn_idx += 1\n start_label = 'dyn_unpack_start_' + ident\n end_label = 'dyn_unpack_end_' + ident\n i_placeholder = context.new_placeholder(typ=BaseType('uint256'))\n begin_pos = ['add', output_placeholder, in_memory_offset]\n # loop until length.\n o = LLLnode.from_list(\n ['seq_unchecked',\n ['mstore', begin_pos, 'pass'], # get len\n ['mstore', i_placeholder, 0],\n ['label', start_label],\n [ # break\n 'if',\n ['ge', ['mload', i_placeholder], ['ceil32', ['mload', begin_pos]]],\n ['goto', end_label]\n ],\n [ # pop into correct memory slot.\n 'mstore',\n ['add', ['add', begin_pos, 32], ['mload', i_placeholder]],\n 'pass',\n ],\n # increment i\n ['mstore', i_placeholder, ['add', 32, ['mload', i_placeholder]]],\n ['goto', start_label],\n ['label', end_label]],\n typ=None, annotation='dynamic unpacker', 
pos=getpos(stmt_expr))\n pop_return_values.append(o)\n\n call_body = list(itertools.chain(\n ['seq_unchecked'],\n pre_init,\n push_local_vars,\n push_args,\n jump_to_func,\n pop_return_values,\n pop_local_vars,\n [returner],\n ))\n # If we have no return, we need to pop off\n pop_returner_call_body = ['pop', call_body] if sig.output_type is None else call_body\n\n o = LLLnode.from_list(\n pop_returner_call_body,\n typ=sig.output_type,\n location='memory',\n pos=getpos(stmt_expr),\n annotation=f'Internal Call: {method_name}',\n add_gas_estimate=sig.gas\n )\n o.gas += sig.gas\n return o\n", "path": "vyper/parser/self_call.py"}]}
| 3,832 | 324 |
gh_patches_debug_14105
|
rasdani/github-patches
|
git_diff
|
interactions-py__interactions.py-1169
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Cannot pop from Cache during dispatch.
### Describe the bug.
When running functions that lead to delete events, for example `channel.purge`, the cache encounters a `KeyError` and kills the whole process.
This is caused by the implementation of #482
### List the steps.
n/A
### What you expected.
n/A
### What you saw.
```
Websocket have raised an exception, closing.
Traceback (most recent call last):
File "C:\Users\\Desktop\PycharmProjects\library\interactions\client\bot.py", line 440, in _login
await self._websocket.run()
File "C:\Users\\Desktop\PycharmProjects\library\interactions\api\gateway\client.py", line 279, in run
await self._handle_stream(msg)
File "C:\Users\\Desktop\PycharmProjects\library\interactions\api\gateway\client.py", line 332, in _handle_stream
self._dispatch_event(event, data)
File "C:\Users\\Desktop\PycharmProjects\library\interactions\api\gateway\client.py", line 562, in _dispatch_event
_message_cache.pop(message_id)
File "C:\Users\\Desktop\PycharmProjects\library\interactions\api\cache.py", line 131, in pop
return self.values.pop(key, default)
File "C:\Users\\Desktop\PycharmProjects\library\interactions\utils\dict_caches.py", line 39, in __getitem__
self.move_to_end(key)
KeyError: Snowflake(1040316644695756912)
Process finished with exit code 0
```
### What version of the library did you use?
unstable
### Version specification
The unstable unstable version
### Code of Conduct
- [X] I agree to follow the contribution requirements.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `interactions/utils/dict_caches.py`
Content:
```
1 from collections import OrderedDict
2 from typing import Generic, TypeVar
3
4 __all__ = ("FIFODict", "LRUDict")
5
6 _KT = TypeVar("_KT")
7 _VT = TypeVar("_VT")
8
9
10 class FIFODict(OrderedDict, Generic[_KT, _VT]):
11 """A dictionary that removes the old keys if over the item limit"""
12
13 def __init__(self, *args, max_items: int = float("inf"), **kwargs):
14 if max_items < 0:
15 raise RuntimeError("You cannot set max_items to negative numbers.")
16
17 super().__init__(*args, **kwargs)
18 self._max_items = max_items
19
20 def __setitem__(self, key: _KT, value: _VT):
21 super().__setitem__(key, value)
22
23 # Prevent buildup over time
24 while len(self) > self._max_items:
25 del self[next(iter(self))]
26
27
28 class LRUDict(OrderedDict, Generic[_KT, _VT]):
29 """A dictionary that removes the value that was the least recently used if over the item limit"""
30
31 def __init__(self, *args, max_items: int = float("inf"), **kwargs):
32 if max_items < 0:
33 raise RuntimeError("You cannot set max_items to negative numbers.")
34
35 super().__init__(*args, **kwargs)
36 self._max_items = max_items
37
38 def __getitem__(self, key: _KT) -> _VT:
39 self.move_to_end(key)
40 return super().__getitem__(key)
41
42 def __setitem__(self, key: _KT, value: _VT):
43 super().__setitem__(key, value)
44
45 # Prevent buildup over time
46 while len(self) > self._max_items:
47 del self[next(iter(self))]
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/interactions/utils/dict_caches.py b/interactions/utils/dict_caches.py
--- a/interactions/utils/dict_caches.py
+++ b/interactions/utils/dict_caches.py
@@ -1,6 +1,8 @@
from collections import OrderedDict
from typing import Generic, TypeVar
+from .missing import MISSING
+
__all__ = ("FIFODict", "LRUDict")
_KT = TypeVar("_KT")
@@ -45,3 +47,14 @@
# Prevent buildup over time
while len(self) > self._max_items:
del self[next(iter(self))]
+
+ __marker = object()
+
+ def pop(self, key: _KT, default: _VT = __marker) -> _VT:
+ if key in self:
+ result = self[key]
+ del self[key]
+ return result
+ if default is MISSING:
+ raise KeyError(key)
+ return default
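
For context on why this patch resolves the traceback above: the inherited `pop(key, default)` ends up routing through the overridden `__getitem__` (as the traceback shows), and `__getitem__` calls `move_to_end(key)` before checking that the key exists, so a missing key raises `KeyError` even when a default is supplied. The sketch below is a trimmed, self-contained illustration of the patched `pop`; the `max_items` handling is omitted and `MISSING` is stubbed with a local sentinel, since `interactions.utils.missing` is not part of this snippet.

```python
from collections import OrderedDict

MISSING = object()  # stand-in for interactions.utils.missing.MISSING in this sketch


class LRUDict(OrderedDict):
    """Trimmed LRU dict illustrating the patched ``pop`` (max_items logic omitted)."""

    def __getitem__(self, key):
        self.move_to_end(key)  # raises KeyError when the key is absent
        return super().__getitem__(key)

    __marker = object()

    def pop(self, key, default=__marker):
        # Only touch self[key] when the key is actually present, so a missing
        # key with an explicit default no longer bubbles up a KeyError.
        if key in self:
            result = self[key]
            del self[key]
            return result
        if default is MISSING:
            raise KeyError(key)
        return default


cache = LRUDict()
cache["a"] = 1
print(cache.pop("a"))           # -> 1
print(cache.pop("missing", 0))  # -> 0 instead of raising KeyError
```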
|
{"golden_diff": "diff --git a/interactions/utils/dict_caches.py b/interactions/utils/dict_caches.py\n--- a/interactions/utils/dict_caches.py\n+++ b/interactions/utils/dict_caches.py\n@@ -1,6 +1,8 @@\n from collections import OrderedDict\n from typing import Generic, TypeVar\n \n+from .missing import MISSING\n+\n __all__ = (\"FIFODict\", \"LRUDict\")\n \n _KT = TypeVar(\"_KT\")\n@@ -45,3 +47,14 @@\n # Prevent buildup over time\n while len(self) > self._max_items:\n del self[next(iter(self))]\n+\n+ __marker = object()\n+\n+ def pop(self, key: _KT, default: _VT = __marker) -> _VT:\n+ if key in self:\n+ result = self[key]\n+ del self[key]\n+ return result\n+ if default is MISSING:\n+ raise KeyError(key)\n+ return default\n", "issue": "[BUG] Cannot pop from Cache during dispatch.\n### Describe the bug.\r\n\r\nWhen running functions leading to delete events, for example `channel.purge` the cache encounters a KeyError and kills the whole Process.\r\n\r\nThis is caused by the implementation of #482\r\n\r\n### List the steps.\r\n\r\nn/A\r\n\r\n### What you expected.\r\n\r\nn/A\r\n\r\n### What you saw.\r\n```\r\nWebsocket have raised an exception, closing.\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\client\\bot.py\", line 440, in _login\r\n await self._websocket.run()\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 279, in run\r\n await self._handle_stream(msg)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 332, in _handle_stream\r\n self._dispatch_event(event, data)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 562, in _dispatch_event\r\n _message_cache.pop(message_id)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\cache.py\", line 131, in pop\r\n return self.values.pop(key, default)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\utils\\dict_caches.py\", line 39, in __getitem__\r\n self.move_to_end(key)\r\nKeyError: Snowflake(1040316644695756912)\r\n\r\nProcess finished with exit code 0\r\n```\r\n\r\n### What version of the library did you use?\r\n\r\nunstable\r\n\r\n### Version specification\r\n\r\nThe unstable unstable version\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the contribution requirements.\n[BUG] Cannot pop from Cache during dispatch.\n### Describe the bug.\r\n\r\nWhen running functions leading to delete events, for example `channel.purge` the cache encounters a KeyError and kills the whole Process.\r\n\r\nThis is caused by the implementation of #482\r\n\r\n### List the steps.\r\n\r\nn/A\r\n\r\n### What you expected.\r\n\r\nn/A\r\n\r\n### What you saw.\r\n```\r\nWebsocket have raised an exception, closing.\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\client\\bot.py\", line 440, in _login\r\n await self._websocket.run()\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 279, in run\r\n await self._handle_stream(msg)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 332, in _handle_stream\r\n self._dispatch_event(event, data)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 562, in _dispatch_event\r\n _message_cache.pop(message_id)\r\n File 
\"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\cache.py\", line 131, in pop\r\n return self.values.pop(key, default)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\utils\\dict_caches.py\", line 39, in __getitem__\r\n self.move_to_end(key)\r\nKeyError: Snowflake(1040316644695756912)\r\n\r\nProcess finished with exit code 0\r\n```\r\n\r\n### What version of the library did you use?\r\n\r\nunstable\r\n\r\n### Version specification\r\n\r\nThe unstable unstable version\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the contribution requirements.\n", "before_files": [{"content": "from collections import OrderedDict\nfrom typing import Generic, TypeVar\n\n__all__ = (\"FIFODict\", \"LRUDict\")\n\n_KT = TypeVar(\"_KT\")\n_VT = TypeVar(\"_VT\")\n\n\nclass FIFODict(OrderedDict, Generic[_KT, _VT]):\n \"\"\"A dictionary that removes the old keys if over the item limit\"\"\"\n\n def __init__(self, *args, max_items: int = float(\"inf\"), **kwargs):\n if max_items < 0:\n raise RuntimeError(\"You cannot set max_items to negative numbers.\")\n\n super().__init__(*args, **kwargs)\n self._max_items = max_items\n\n def __setitem__(self, key: _KT, value: _VT):\n super().__setitem__(key, value)\n\n # Prevent buildup over time\n while len(self) > self._max_items:\n del self[next(iter(self))]\n\n\nclass LRUDict(OrderedDict, Generic[_KT, _VT]):\n \"\"\"A dictionary that removes the value that was the least recently used if over the item limit\"\"\"\n\n def __init__(self, *args, max_items: int = float(\"inf\"), **kwargs):\n if max_items < 0:\n raise RuntimeError(\"You cannot set max_items to negative numbers.\")\n\n super().__init__(*args, **kwargs)\n self._max_items = max_items\n\n def __getitem__(self, key: _KT) -> _VT:\n self.move_to_end(key)\n return super().__getitem__(key)\n\n def __setitem__(self, key: _KT, value: _VT):\n super().__setitem__(key, value)\n\n # Prevent buildup over time\n while len(self) > self._max_items:\n del self[next(iter(self))]\n", "path": "interactions/utils/dict_caches.py"}], "after_files": [{"content": "from collections import OrderedDict\nfrom typing import Generic, TypeVar\n\nfrom .missing import MISSING\n\n__all__ = (\"FIFODict\", \"LRUDict\")\n\n_KT = TypeVar(\"_KT\")\n_VT = TypeVar(\"_VT\")\n\n\nclass FIFODict(OrderedDict, Generic[_KT, _VT]):\n \"\"\"A dictionary that removes the old keys if over the item limit\"\"\"\n\n def __init__(self, *args, max_items: int = float(\"inf\"), **kwargs):\n if max_items < 0:\n raise RuntimeError(\"You cannot set max_items to negative numbers.\")\n\n super().__init__(*args, **kwargs)\n self._max_items = max_items\n\n def __setitem__(self, key: _KT, value: _VT):\n super().__setitem__(key, value)\n\n # Prevent buildup over time\n while len(self) > self._max_items:\n del self[next(iter(self))]\n\n\nclass LRUDict(OrderedDict, Generic[_KT, _VT]):\n \"\"\"A dictionary that removes the value that was the least recently used if over the item limit\"\"\"\n\n def __init__(self, *args, max_items: int = float(\"inf\"), **kwargs):\n if max_items < 0:\n raise RuntimeError(\"You cannot set max_items to negative numbers.\")\n\n super().__init__(*args, **kwargs)\n self._max_items = max_items\n\n def __getitem__(self, key: _KT) -> _VT:\n self.move_to_end(key)\n return super().__getitem__(key)\n\n def __setitem__(self, key: _KT, value: _VT):\n super().__setitem__(key, value)\n\n # Prevent buildup over time\n while len(self) > self._max_items:\n del self[next(iter(self))]\n\n __marker = object()\n\n def 
pop(self, key: _KT, default: _VT = __marker) -> _VT:\n if key in self:\n result = self[key]\n del self[key]\n return result\n if default is MISSING:\n raise KeyError(key)\n return default\n", "path": "interactions/utils/dict_caches.py"}]}
| 1,579 | 216 |
gh_patches_debug_9077
|
rasdani/github-patches
|
git_diff
|
keras-team__keras-nlp-760
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Simplify docstring example of `TokenAndPositionEmbedding`
The current [docstring example](https://github.com/keras-team/keras-nlp/blob/3a951a9f5442b77a0c474bb8b61406f65c7ef924/keras_nlp/layers/token_and_position_embedding.py#L45) of `TokenAndPositionEmbedding` is a bit more complicated than needed and should better match newer examples like those in [`keras_nlp.models.BertBackbone`](https://github.com/keras-team/keras-nlp/blob/3a951a9f5442b77a0c474bb8b61406f65c7ef924/keras_nlp/models/bert/bert_backbone.py#L67).
Make the following changes to the docstring example:
* Create a real input of the correct shape using `tf.ones`
* Inline constant values like `"seq_length"` directly into the function calls
Be sure to check that your example runs on colab or your local machine!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras_nlp/layers/token_and_position_embedding.py`
Content:
```
1 # Copyright 2023 The KerasNLP Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Creates an Embedding Layer and adds Positional Embeddings"""
16
17 from tensorflow import keras
18
19 import keras_nlp.layers
20 from keras_nlp.utils.keras_utils import clone_initializer
21
22
23 @keras.utils.register_keras_serializable(package="keras_nlp")
24 class TokenAndPositionEmbedding(keras.layers.Layer):
25 """A layer which sums a token and position embedding.
26
27 Token and position embeddings are ways of representing words and their order
28 in a sentence. This layer creates a `keras.layers.Embedding` token embedding
29 and a `keras_nlp.layers.PositionEmbedding` position embedding and sums their
30 output when called. This layer assumes that the last dimension in the input
31 corresponds to the sequence dimension.
32
33 Args:
34 vocabulary_size: The size of the vocabulary.
35 sequence_length: The maximum length of input sequence
36 embedding_dim: The output dimension of the embedding layer
37 embeddings_initializer: The initializer to use for the Embedding
38 Layers
39 mask_zero: Boolean, whether or not the input value 0 is a special
40 "padding" value that should be masked out.
41 This is useful when using recurrent layers which may take variable
42 length input. If this is True, then all subsequent layers in the
43 model need to support masking or an exception will be raised.
44 If mask_zero` is set to True, as a consequence, index 0 cannot be
45 used in the vocabulary
46 (input_dim should equal size of vocabulary + 1).
47
48 Examples:
49 ```python
50 seq_length = 50
51 vocab_size = 5000
52 embed_dim = 128
53 inputs = keras.Input(shape=(seq_length,))
54 embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(
55 vocabulary_size=vocab_size,
56 sequence_length=seq_length,
57 embedding_dim=embed_dim,
58 )
59 outputs = embedding_layer(inputs)
60 ```
61 """
62
63 def __init__(
64 self,
65 vocabulary_size,
66 sequence_length,
67 embedding_dim,
68 embeddings_initializer="glorot_uniform",
69 mask_zero=False,
70 **kwargs
71 ):
72 super().__init__(**kwargs)
73 if vocabulary_size is None:
74 raise ValueError(
75 "`vocabulary_size` must be an Integer, received `None`."
76 )
77 if sequence_length is None:
78 raise ValueError(
79 "`sequence_length` must be an Integer, received `None`."
80 )
81 if embedding_dim is None:
82 raise ValueError(
83 "`embedding_dim` must be an Integer, received `None`."
84 )
85 self.vocabulary_size = int(vocabulary_size)
86 self.sequence_length = int(sequence_length)
87 self.embedding_dim = int(embedding_dim)
88 self.embeddings_initializer = keras.initializers.get(
89 embeddings_initializer
90 )
91 self.token_embedding = keras.layers.Embedding(
92 vocabulary_size,
93 embedding_dim,
94 embeddings_initializer=clone_initializer(
95 self.embeddings_initializer
96 ),
97 mask_zero=mask_zero,
98 name="token_embedding"
99 + str(keras.backend.get_uid("token_embedding")),
100 )
101 self.position_embedding = keras_nlp.layers.PositionEmbedding(
102 sequence_length=sequence_length,
103 initializer=clone_initializer(self.embeddings_initializer),
104 name="position_embedding"
105 + str(keras.backend.get_uid("position_embedding")),
106 )
107 self.supports_masking = self.token_embedding.supports_masking
108
109 def get_config(self):
110 config = super().get_config()
111 config.update(
112 {
113 "vocabulary_size": self.vocabulary_size,
114 "sequence_length": self.sequence_length,
115 "embedding_dim": self.embedding_dim,
116 "embeddings_initializer": keras.initializers.serialize(
117 self.embeddings_initializer
118 ),
119 "mask_zero": self.token_embedding.mask_zero,
120 },
121 )
122 return config
123
124 def call(self, inputs):
125 embedded_tokens = self.token_embedding(inputs)
126 embedded_positions = self.position_embedding(embedded_tokens)
127 outputs = embedded_tokens + embedded_positions
128 return outputs
129
130 def compute_mask(self, inputs, mask=None):
131 return self.token_embedding.compute_mask(inputs, mask=mask)
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/keras_nlp/layers/token_and_position_embedding.py b/keras_nlp/layers/token_and_position_embedding.py
--- a/keras_nlp/layers/token_and_position_embedding.py
+++ b/keras_nlp/layers/token_and_position_embedding.py
@@ -47,14 +47,11 @@
Examples:
```python
- seq_length = 50
- vocab_size = 5000
- embed_dim = 128
- inputs = keras.Input(shape=(seq_length,))
+ inputs = tf.ones(shape=(1, 50), dtype=tf.int64)
embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(
- vocabulary_size=vocab_size,
- sequence_length=seq_length,
- embedding_dim=embed_dim,
+ vocabulary_size=10_000,
+ sequence_length=50,
+ embedding_dim=128,
)
outputs = embedding_layer(inputs)
```
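
As a quick sanity check of the simplified docstring, the snippet below mirrors the new example as a standalone script (it assumes TensorFlow and `keras_nlp` are installed; the summed token-plus-position embedding should come out with shape `(1, 50, 128)`):

```python
import tensorflow as tf

import keras_nlp

# Dummy token IDs shaped (batch_size, sequence_length), as in the new example.
inputs = tf.ones(shape=(1, 50), dtype=tf.int64)
embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(
    vocabulary_size=10_000,
    sequence_length=50,
    embedding_dim=128,
)
outputs = embedding_layer(inputs)
print(outputs.shape)  # expected: (1, 50, 128)
```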
|
{"golden_diff": "diff --git a/keras_nlp/layers/token_and_position_embedding.py b/keras_nlp/layers/token_and_position_embedding.py\n--- a/keras_nlp/layers/token_and_position_embedding.py\n+++ b/keras_nlp/layers/token_and_position_embedding.py\n@@ -47,14 +47,11 @@\n \n Examples:\n ```python\n- seq_length = 50\n- vocab_size = 5000\n- embed_dim = 128\n- inputs = keras.Input(shape=(seq_length,))\n+ inputs = tf.ones(shape=(1, 50), dtype=tf.int64)\n embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(\n- vocabulary_size=vocab_size,\n- sequence_length=seq_length,\n- embedding_dim=embed_dim,\n+ vocabulary_size=10_000,\n+ sequence_length=50,\n+ embedding_dim=128,\n )\n outputs = embedding_layer(inputs)\n ```\n", "issue": "Simplify docstring example of `TokenAndPositionEmbedding`\nThe current [docstring example](https://github.com/keras-team/keras-nlp/blob/3a951a9f5442b77a0c474bb8b61406f65c7ef924/keras_nlp/layers/token_and_position_embedding.py#L45) of `TokenAndPositionEmbedding` is a bit more complicated than needed and should better match newer examples like those in [`keras_nlp.models.BertBackbone`](https://github.com/keras-team/keras-nlp/blob/3a951a9f5442b77a0c474bb8b61406f65c7ef924/keras_nlp/models/bert/bert_backbone.py#L67).\r\n\r\nMake the following changes to the docstring example:\r\n* Create a real input of the correct shape using `tf.ones`\r\n* Inline constant values like `\"seq_length\"` directly into the function calls\r\n\r\nBe sure to check that your example runs on colab or your local machine!\n", "before_files": [{"content": "# Copyright 2023 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Creates an Embedding Layer and adds Positional Embeddings\"\"\"\n\nfrom tensorflow import keras\n\nimport keras_nlp.layers\nfrom keras_nlp.utils.keras_utils import clone_initializer\n\n\[email protected]_keras_serializable(package=\"keras_nlp\")\nclass TokenAndPositionEmbedding(keras.layers.Layer):\n \"\"\"A layer which sums a token and position embedding.\n\n Token and position embeddings are ways of representing words and their order\n in a sentence. This layer creates a `keras.layers.Embedding` token embedding\n and a `keras_nlp.layers.PositionEmbedding` position embedding and sums their\n output when called. This layer assumes that the last dimension in the input\n corresponds to the sequence dimension.\n\n Args:\n vocabulary_size: The size of the vocabulary.\n sequence_length: The maximum length of input sequence\n embedding_dim: The output dimension of the embedding layer\n embeddings_initializer: The initializer to use for the Embedding\n Layers\n mask_zero: Boolean, whether or not the input value 0 is a special\n \"padding\" value that should be masked out.\n This is useful when using recurrent layers which may take variable\n length input. 
If this is True, then all subsequent layers in the\n model need to support masking or an exception will be raised.\n If mask_zero` is set to True, as a consequence, index 0 cannot be\n used in the vocabulary\n (input_dim should equal size of vocabulary + 1).\n\n Examples:\n ```python\n seq_length = 50\n vocab_size = 5000\n embed_dim = 128\n inputs = keras.Input(shape=(seq_length,))\n embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(\n vocabulary_size=vocab_size,\n sequence_length=seq_length,\n embedding_dim=embed_dim,\n )\n outputs = embedding_layer(inputs)\n ```\n \"\"\"\n\n def __init__(\n self,\n vocabulary_size,\n sequence_length,\n embedding_dim,\n embeddings_initializer=\"glorot_uniform\",\n mask_zero=False,\n **kwargs\n ):\n super().__init__(**kwargs)\n if vocabulary_size is None:\n raise ValueError(\n \"`vocabulary_size` must be an Integer, received `None`.\"\n )\n if sequence_length is None:\n raise ValueError(\n \"`sequence_length` must be an Integer, received `None`.\"\n )\n if embedding_dim is None:\n raise ValueError(\n \"`embedding_dim` must be an Integer, received `None`.\"\n )\n self.vocabulary_size = int(vocabulary_size)\n self.sequence_length = int(sequence_length)\n self.embedding_dim = int(embedding_dim)\n self.embeddings_initializer = keras.initializers.get(\n embeddings_initializer\n )\n self.token_embedding = keras.layers.Embedding(\n vocabulary_size,\n embedding_dim,\n embeddings_initializer=clone_initializer(\n self.embeddings_initializer\n ),\n mask_zero=mask_zero,\n name=\"token_embedding\"\n + str(keras.backend.get_uid(\"token_embedding\")),\n )\n self.position_embedding = keras_nlp.layers.PositionEmbedding(\n sequence_length=sequence_length,\n initializer=clone_initializer(self.embeddings_initializer),\n name=\"position_embedding\"\n + str(keras.backend.get_uid(\"position_embedding\")),\n )\n self.supports_masking = self.token_embedding.supports_masking\n\n def get_config(self):\n config = super().get_config()\n config.update(\n {\n \"vocabulary_size\": self.vocabulary_size,\n \"sequence_length\": self.sequence_length,\n \"embedding_dim\": self.embedding_dim,\n \"embeddings_initializer\": keras.initializers.serialize(\n self.embeddings_initializer\n ),\n \"mask_zero\": self.token_embedding.mask_zero,\n },\n )\n return config\n\n def call(self, inputs):\n embedded_tokens = self.token_embedding(inputs)\n embedded_positions = self.position_embedding(embedded_tokens)\n outputs = embedded_tokens + embedded_positions\n return outputs\n\n def compute_mask(self, inputs, mask=None):\n return self.token_embedding.compute_mask(inputs, mask=mask)\n", "path": "keras_nlp/layers/token_and_position_embedding.py"}], "after_files": [{"content": "# Copyright 2023 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Creates an Embedding Layer and adds Positional Embeddings\"\"\"\n\nfrom tensorflow import keras\n\nimport keras_nlp.layers\nfrom keras_nlp.utils.keras_utils import clone_initializer\n\n\[email 
protected]_keras_serializable(package=\"keras_nlp\")\nclass TokenAndPositionEmbedding(keras.layers.Layer):\n \"\"\"A layer which sums a token and position embedding.\n\n Token and position embeddings are ways of representing words and their order\n in a sentence. This layer creates a `keras.layers.Embedding` token embedding\n and a `keras_nlp.layers.PositionEmbedding` position embedding and sums their\n output when called. This layer assumes that the last dimension in the input\n corresponds to the sequence dimension.\n\n Args:\n vocabulary_size: The size of the vocabulary.\n sequence_length: The maximum length of input sequence\n embedding_dim: The output dimension of the embedding layer\n embeddings_initializer: The initializer to use for the Embedding\n Layers\n mask_zero: Boolean, whether or not the input value 0 is a special\n \"padding\" value that should be masked out.\n This is useful when using recurrent layers which may take variable\n length input. If this is True, then all subsequent layers in the\n model need to support masking or an exception will be raised.\n If mask_zero` is set to True, as a consequence, index 0 cannot be\n used in the vocabulary\n (input_dim should equal size of vocabulary + 1).\n\n Examples:\n ```python\n inputs = tf.ones(shape=(1, 50), dtype=tf.int64)\n embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(\n vocabulary_size=10_000,\n sequence_length=50,\n embedding_dim=128,\n )\n outputs = embedding_layer(inputs)\n ```\n \"\"\"\n\n def __init__(\n self,\n vocabulary_size,\n sequence_length,\n embedding_dim,\n embeddings_initializer=\"glorot_uniform\",\n mask_zero=False,\n **kwargs\n ):\n super().__init__(**kwargs)\n if vocabulary_size is None:\n raise ValueError(\n \"`vocabulary_size` must be an Integer, received `None`.\"\n )\n if sequence_length is None:\n raise ValueError(\n \"`sequence_length` must be an Integer, received `None`.\"\n )\n if embedding_dim is None:\n raise ValueError(\n \"`embedding_dim` must be an Integer, received `None`.\"\n )\n self.vocabulary_size = int(vocabulary_size)\n self.sequence_length = int(sequence_length)\n self.embedding_dim = int(embedding_dim)\n self.embeddings_initializer = keras.initializers.get(\n embeddings_initializer\n )\n self.token_embedding = keras.layers.Embedding(\n vocabulary_size,\n embedding_dim,\n embeddings_initializer=clone_initializer(\n self.embeddings_initializer\n ),\n mask_zero=mask_zero,\n name=\"token_embedding\"\n + str(keras.backend.get_uid(\"token_embedding\")),\n )\n self.position_embedding = keras_nlp.layers.PositionEmbedding(\n sequence_length=sequence_length,\n initializer=clone_initializer(self.embeddings_initializer),\n name=\"position_embedding\"\n + str(keras.backend.get_uid(\"position_embedding\")),\n )\n self.supports_masking = self.token_embedding.supports_masking\n\n def get_config(self):\n config = super().get_config()\n config.update(\n {\n \"vocabulary_size\": self.vocabulary_size,\n \"sequence_length\": self.sequence_length,\n \"embedding_dim\": self.embedding_dim,\n \"embeddings_initializer\": keras.initializers.serialize(\n self.embeddings_initializer\n ),\n \"mask_zero\": self.token_embedding.mask_zero,\n },\n )\n return config\n\n def call(self, inputs):\n embedded_tokens = self.token_embedding(inputs)\n embedded_positions = self.position_embedding(embedded_tokens)\n outputs = embedded_tokens + embedded_positions\n return outputs\n\n def compute_mask(self, inputs, mask=None):\n return self.token_embedding.compute_mask(inputs, mask=mask)\n", "path": 
"keras_nlp/layers/token_and_position_embedding.py"}]}
| 1,810 | 221 |
gh_patches_debug_41055
|
rasdani/github-patches
|
git_diff
|
deepset-ai__haystack-7897
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow Chat Generators to connect to Answer Builder
**Is your feature request related to a problem? Please describe.**
There is no current way for a `ChatGenerator` to directly connect to `AnswerBuilder()` despite similar functionality to a regular Generator.
**Describe the solution you'd like**
We can extract the text from a `ChatMessage` and use it as a `str` as expected in the code for AnswerBuilder.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
ChatGenerators that rely on non-OpenAI models require a specific chat format to be applied to properly differentiate between system/user/assistant messages. This is done manually (and is prone to errors) in regular generators. This is an example I pulled from a Haystack notebook:
```
prompt_template = """
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
Using the information contained in the context, give a comprehensive answer to the question.
If the answer cannot be deduced from the context, do not give an answer.
Context:
{% for doc in documents %}
{{ doc.content }} URL:{{ doc.meta['url'] }}
{% endfor %};
Question: {{query}}<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
"""
prompt_builder = PromptBuilder(template=prompt_template)
```
This can be accomplished by using a `ChatGenerator` as follows:
```
system_message = ChatMessage.from_system(
"""
Read the context provided and answer the question if possible.If you can not form an answer from the context, reply with "Nah".
Context:
{% for doc in documents %}
{{ doc.content }}
{% endfor %};
"""
)
user_message = ChatMessage.from_user("query: {{query}}")
assistent_message = ChatMessage.from_assistant("Answer: ")
```
We can then add the output to an `AnswerBuilder` to complete a pipeline ensuring that the proper template is applied.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `haystack/components/builders/answer_builder.py`
Content:
```
1 # SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>
2 #
3 # SPDX-License-Identifier: Apache-2.0
4
5 import re
6 from typing import Any, Dict, List, Optional
7
8 from haystack import Document, GeneratedAnswer, component, logging
9
10 logger = logging.getLogger(__name__)
11
12
13 @component
14 class AnswerBuilder:
15 """
16 Takes a query and the replies a Generator returns as input and parses them into GeneratedAnswer objects.
17
18 Optionally, it also takes Documents and metadata from the Generator as inputs to enrich the GeneratedAnswer objects.
19
20 Usage example:
21 ```python
22 from haystack.components.builders import AnswerBuilder
23
24 builder = AnswerBuilder(pattern="Answer: (.*)")
25 builder.run(query="What's the answer?", replies=["This is an argument. Answer: This is the answer."])
26 ```
27 """
28
29 def __init__(self, pattern: Optional[str] = None, reference_pattern: Optional[str] = None):
30 """
31 Creates an instance of the AnswerBuilder component.
32
33 :param pattern:
34 The regular expression pattern to use to extract the answer text from the generator output.
35 If not specified, the whole string is used as the answer. The regular expression can have at
36 most one capture group. If a capture group is present, the text matched by the capture group
37 is used as the answer. If no capture group is present, the whole match is used as the answer.
38 Examples:
39 `[^\\n]+$` finds "this is an answer" in a string "this is an argument.\\nthis is an answer".
40 `Answer: (.*)` finds "this is an answer" in a string "this is an argument. Answer: this is an answer".
41
42 :param reference_pattern:
43 The regular expression pattern to use for parsing the document references.
44 We assume that references are specified as indices of the input documents and that
45 indices start at 1.
46 Example: `\\[(\\d+)\\]` finds "1" in a string "this is an answer[1]".
47 If not specified, no parsing is done, and all documents are referenced.
48 """
49 if pattern:
50 AnswerBuilder._check_num_groups_in_regex(pattern)
51
52 self.pattern = pattern
53 self.reference_pattern = reference_pattern
54
55 @component.output_types(answers=List[GeneratedAnswer])
56 def run(
57 self,
58 query: str,
59 replies: List[str],
60 meta: Optional[List[Dict[str, Any]]] = None,
61 documents: Optional[List[Document]] = None,
62 pattern: Optional[str] = None,
63 reference_pattern: Optional[str] = None,
64 ):
65 """
66 Turns the output of a Generator into `Answer` objects using regular expressions.
67
68 :param query:
69 The query used in the prompts for the Generator.
70 :param replies:
71 The output of the Generator.
72 :param meta:
73 The metadata returned by the Generator. If not specified, the generated answer will contain no metadata.
74 :param documents:
75 The documents used as input to the Generator. If `documents` are specified, they are added to the `Answer`
76 objects. If both `documents` and `reference_pattern` are specified, the documents referenced in the
77 Generator output are extracted from the input documents and added to the `Answer` objects.
78 :param pattern:
79 The regular expression pattern to use to extract the answer text from the generator output.
80 If not specified, the whole string is used as the answer. The regular expression can have at
81 most one capture group. If a capture group is present, the text matched by the capture group
82 is used as the answer. If no capture group is present, the whole match is used as the answer.
83 Examples:
84 `[^\\n]+$` finds "this is an answer" in a string "this is an argument.\\nthis is an answer".
85 `Answer: (.*)` finds "this is an answer" in a string
86 "this is an argument. Answer: this is an answer".
87 :param reference_pattern:
88 The regular expression pattern to use for parsing the document references.
89 We assume that references are specified as indices of the input documents and that indices start at 1.
90 Example: `\\[(\\d+)\\]` finds "1" in a string "this is an answer[1]".
91 If not specified, no parsing is done, and all documents are referenced.
92
93 :returns: A dictionary with the following keys:
94 - `answers`: The answers obtained from the output of the generator
95 """
96 if not meta:
97 meta = [{}] * len(replies)
98 elif len(replies) != len(meta):
99 raise ValueError(f"Number of replies ({len(replies)}), and metadata ({len(meta)}) must match.")
100
101 if pattern:
102 AnswerBuilder._check_num_groups_in_regex(pattern)
103
104 pattern = pattern or self.pattern
105 reference_pattern = reference_pattern or self.reference_pattern
106
107 all_answers = []
108 for reply, metadata in zip(replies, meta):
109 referenced_docs = []
110 if documents:
111 reference_idxs = []
112 if reference_pattern:
113 reference_idxs = AnswerBuilder._extract_reference_idxs(reply, reference_pattern)
114 else:
115 reference_idxs = [doc_idx for doc_idx, _ in enumerate(documents)]
116
117 for idx in reference_idxs:
118 try:
119 referenced_docs.append(documents[idx])
120 except IndexError:
121 logger.warning(
122 "Document index '{index}' referenced in Generator output is out of range. ", index=idx + 1
123 )
124
125 answer_string = AnswerBuilder._extract_answer_string(reply, pattern)
126 answer = GeneratedAnswer(data=answer_string, query=query, documents=referenced_docs, meta=metadata)
127 all_answers.append(answer)
128
129 return {"answers": all_answers}
130
131 @staticmethod
132 def _extract_answer_string(reply: str, pattern: Optional[str] = None) -> str:
133 """
134 Extract the answer string from the generator output using the specified pattern.
135
136 If no pattern is specified, the whole string is used as the answer.
137
138 :param reply:
139 The output of the Generator. A string.
140 :param pattern:
141 The regular expression pattern to use to extract the answer text from the generator output.
142 """
143 if pattern is None:
144 return reply
145
146 if match := re.search(pattern, reply):
147 # No capture group in pattern -> use the whole match as answer
148 if not match.lastindex:
149 return match.group(0)
150 # One capture group in pattern -> use the capture group as answer
151 return match.group(1)
152 return ""
153
154 @staticmethod
155 def _extract_reference_idxs(reply: str, reference_pattern: str) -> List[int]:
156 document_idxs = re.findall(reference_pattern, reply)
157 return [int(idx) - 1 for idx in document_idxs]
158
159 @staticmethod
160 def _check_num_groups_in_regex(pattern: str):
161 num_groups = re.compile(pattern).groups
162 if num_groups > 1:
163 raise ValueError(
164 f"Pattern '{pattern}' contains multiple capture groups. "
165 f"Please specify a pattern with at most one capture group."
166 )
167
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/haystack/components/builders/answer_builder.py b/haystack/components/builders/answer_builder.py
--- a/haystack/components/builders/answer_builder.py
+++ b/haystack/components/builders/answer_builder.py
@@ -3,9 +3,10 @@
# SPDX-License-Identifier: Apache-2.0
import re
-from typing import Any, Dict, List, Optional
+from typing import Any, Dict, List, Optional, Union
from haystack import Document, GeneratedAnswer, component, logging
+from haystack.dataclasses.chat_message import ChatMessage
logger = logging.getLogger(__name__)
@@ -56,7 +57,7 @@
def run(
self,
query: str,
- replies: List[str],
+ replies: Union[List[str], List[ChatMessage]],
meta: Optional[List[Dict[str, Any]]] = None,
documents: Optional[List[Document]] = None,
pattern: Optional[str] = None,
@@ -68,7 +69,7 @@
:param query:
The query used in the prompts for the Generator.
:param replies:
- The output of the Generator.
+ The output of the Generator. Can be a list of strings or a list of ChatMessage objects.
:param meta:
The metadata returned by the Generator. If not specified, the generated answer will contain no metadata.
:param documents:
@@ -103,14 +104,15 @@
pattern = pattern or self.pattern
reference_pattern = reference_pattern or self.reference_pattern
-
all_answers = []
for reply, metadata in zip(replies, meta):
+ # Extract content from ChatMessage objects if reply is a ChatMessages, else use the string as is
+ extracted_reply: str = reply.content if isinstance(reply, ChatMessage) else reply # type: ignore
+ extracted_metadata = reply.meta if isinstance(reply, ChatMessage) else metadata
referenced_docs = []
if documents:
- reference_idxs = []
if reference_pattern:
- reference_idxs = AnswerBuilder._extract_reference_idxs(reply, reference_pattern)
+ reference_idxs = AnswerBuilder._extract_reference_idxs(extracted_reply, reference_pattern)
else:
reference_idxs = [doc_idx for doc_idx, _ in enumerate(documents)]
@@ -122,8 +124,10 @@
"Document index '{index}' referenced in Generator output is out of range. ", index=idx + 1
)
- answer_string = AnswerBuilder._extract_answer_string(reply, pattern)
- answer = GeneratedAnswer(data=answer_string, query=query, documents=referenced_docs, meta=metadata)
+ answer_string = AnswerBuilder._extract_answer_string(extracted_reply, pattern)
+ answer = GeneratedAnswer(
+ data=answer_string, query=query, documents=referenced_docs, meta=extracted_metadata
+ )
all_answers.append(answer)
return {"answers": all_answers}
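
A minimal usage sketch of the patched component, showing a `ChatMessage` reply flowing straight into `AnswerBuilder` (the reply text here is made up for illustration; metadata is taken from the message itself):

```python
from haystack.components.builders import AnswerBuilder
from haystack.dataclasses.chat_message import ChatMessage

builder = AnswerBuilder(pattern="Answer: (.*)")

# Replies can now be ChatMessage objects (e.g. from a ChatGenerator) as well as strings.
reply = ChatMessage.from_assistant("This is an argument. Answer: This is the answer.")
result = builder.run(query="What's the answer?", replies=[reply])

answer = result["answers"][0]
print(answer.data)  # -> "This is the answer."
print(answer.meta)  # -> metadata carried over from the ChatMessage
```

In a pipeline this means the `replies` output of a chat generator can be connected to `answer_builder.replies` directly, without a converter component in between.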
|
{"golden_diff": "diff --git a/haystack/components/builders/answer_builder.py b/haystack/components/builders/answer_builder.py\n--- a/haystack/components/builders/answer_builder.py\n+++ b/haystack/components/builders/answer_builder.py\n@@ -3,9 +3,10 @@\n # SPDX-License-Identifier: Apache-2.0\n \n import re\n-from typing import Any, Dict, List, Optional\n+from typing import Any, Dict, List, Optional, Union\n \n from haystack import Document, GeneratedAnswer, component, logging\n+from haystack.dataclasses.chat_message import ChatMessage\n \n logger = logging.getLogger(__name__)\n \n@@ -56,7 +57,7 @@\n def run(\n self,\n query: str,\n- replies: List[str],\n+ replies: Union[List[str], List[ChatMessage]],\n meta: Optional[List[Dict[str, Any]]] = None,\n documents: Optional[List[Document]] = None,\n pattern: Optional[str] = None,\n@@ -68,7 +69,7 @@\n :param query:\n The query used in the prompts for the Generator.\n :param replies:\n- The output of the Generator.\n+ The output of the Generator. Can be a list of strings or a list of ChatMessage objects.\n :param meta:\n The metadata returned by the Generator. If not specified, the generated answer will contain no metadata.\n :param documents:\n@@ -103,14 +104,15 @@\n \n pattern = pattern or self.pattern\n reference_pattern = reference_pattern or self.reference_pattern\n-\n all_answers = []\n for reply, metadata in zip(replies, meta):\n+ # Extract content from ChatMessage objects if reply is a ChatMessages, else use the string as is\n+ extracted_reply: str = reply.content if isinstance(reply, ChatMessage) else reply # type: ignore\n+ extracted_metadata = reply.meta if isinstance(reply, ChatMessage) else metadata\n referenced_docs = []\n if documents:\n- reference_idxs = []\n if reference_pattern:\n- reference_idxs = AnswerBuilder._extract_reference_idxs(reply, reference_pattern)\n+ reference_idxs = AnswerBuilder._extract_reference_idxs(extracted_reply, reference_pattern)\n else:\n reference_idxs = [doc_idx for doc_idx, _ in enumerate(documents)]\n \n@@ -122,8 +124,10 @@\n \"Document index '{index}' referenced in Generator output is out of range. \", index=idx + 1\n )\n \n- answer_string = AnswerBuilder._extract_answer_string(reply, pattern)\n- answer = GeneratedAnswer(data=answer_string, query=query, documents=referenced_docs, meta=metadata)\n+ answer_string = AnswerBuilder._extract_answer_string(extracted_reply, pattern)\n+ answer = GeneratedAnswer(\n+ data=answer_string, query=query, documents=referenced_docs, meta=extracted_metadata\n+ )\n all_answers.append(answer)\n \n return {\"answers\": all_answers}\n", "issue": "Allow Chat Generators to connect to Answer Builder\n**Is your feature request related to a problem? Please describe.**\r\nThere is no current way for a `ChatGenerator` to directly connect to `AnswerBuilder()` despite similar functionality to a regular Generator.\r\n\r\n**Describe the solution you'd like**\r\nWe can extract the text from a `ChatMessage` and use it as a `str` as expected in the code for AnswerBuilder.\r\n\r\n**Describe alternatives you've considered**\r\nA clear and concise description of any alternative solutions or features you've considered.\r\n\r\n**Additional context**\r\nChatGenerators that rely on non-open ai models require a specific chat format to be applied to properly differentiate between system/user/assistant messages. This is done manually (and prone to errors) in regular generators. 
This is an example I pulled from a Haystack notebook:\r\n\r\n```\r\nprompt_template = \"\"\"\r\n<|begin_of_text|><|start_header_id|>user<|end_header_id|>\r\n\r\n\r\nUsing the information contained in the context, give a comprehensive answer to the question.\r\nIf the answer cannot be deduced from the context, do not give an answer.\r\n\r\nContext:\r\n {% for doc in documents %}\r\n {{ doc.content }} URL:{{ doc.meta['url'] }}\r\n {% endfor %};\r\n Question: {{query}}<|eot_id|>\r\n\r\n<|start_header_id|>assistant<|end_header_id|>\r\n\r\n\r\n\"\"\"\r\nprompt_builder = PromptBuilder(template=prompt_template)\r\n```\r\n\r\nThis can be accomplished by using a `ChatGenerator` as follows:\r\n\r\n```\r\nsystem_message = ChatMessage.from_system(\r\n \"\"\"\r\n Read the context provided and answer the question if possible.If you can not form an answer from the context, reply with \"Nah\".\r\n Context:\r\n {% for doc in documents %}\r\n {{ doc.content }}\r\n {% endfor %};\r\n \"\"\"\r\n)\r\nuser_message = ChatMessage.from_user(\"query: {{query}}\")\r\nassistent_message = ChatMessage.from_assistant(\"Answer: \")\r\n```\r\n\r\nWe can then add the output to an `AnswerBuilder` to complete a pipeline ensuring that the proper template is applied.\n", "before_files": [{"content": "# SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>\n#\n# SPDX-License-Identifier: Apache-2.0\n\nimport re\nfrom typing import Any, Dict, List, Optional\n\nfrom haystack import Document, GeneratedAnswer, component, logging\n\nlogger = logging.getLogger(__name__)\n\n\n@component\nclass AnswerBuilder:\n \"\"\"\n Takes a query and the replies a Generator returns as input and parses them into GeneratedAnswer objects.\n\n Optionally, it also takes Documents and metadata from the Generator as inputs to enrich the GeneratedAnswer objects.\n\n Usage example:\n ```python\n from haystack.components.builders import AnswerBuilder\n\n builder = AnswerBuilder(pattern=\"Answer: (.*)\")\n builder.run(query=\"What's the answer?\", replies=[\"This is an argument. Answer: This is the answer.\"])\n ```\n \"\"\"\n\n def __init__(self, pattern: Optional[str] = None, reference_pattern: Optional[str] = None):\n \"\"\"\n Creates an instance of the AnswerBuilder component.\n\n :param pattern:\n The regular expression pattern to use to extract the answer text from the generator output.\n If not specified, the whole string is used as the answer. The regular expression can have at\n most one capture group. If a capture group is present, the text matched by the capture group\n is used as the answer. If no capture group is present, the whole match is used as the answer.\n Examples:\n `[^\\\\n]+$` finds \"this is an answer\" in a string \"this is an argument.\\\\nthis is an answer\".\n `Answer: (.*)` finds \"this is an answer\" in a string \"this is an argument. 
Answer: this is an answer\".\n\n :param reference_pattern:\n The regular expression pattern to use for parsing the document references.\n We assume that references are specified as indices of the input documents and that\n indices start at 1.\n Example: `\\\\[(\\\\d+)\\\\]` finds \"1\" in a string \"this is an answer[1]\".\n If not specified, no parsing is done, and all documents are referenced.\n \"\"\"\n if pattern:\n AnswerBuilder._check_num_groups_in_regex(pattern)\n\n self.pattern = pattern\n self.reference_pattern = reference_pattern\n\n @component.output_types(answers=List[GeneratedAnswer])\n def run(\n self,\n query: str,\n replies: List[str],\n meta: Optional[List[Dict[str, Any]]] = None,\n documents: Optional[List[Document]] = None,\n pattern: Optional[str] = None,\n reference_pattern: Optional[str] = None,\n ):\n \"\"\"\n Turns the output of a Generator into `Answer` objects using regular expressions.\n\n :param query:\n The query used in the prompts for the Generator.\n :param replies:\n The output of the Generator.\n :param meta:\n The metadata returned by the Generator. If not specified, the generated answer will contain no metadata.\n :param documents:\n The documents used as input to the Generator. If `documents` are specified, they are added to the `Answer`\n objects. If both `documents` and `reference_pattern` are specified, the documents referenced in the\n Generator output are extracted from the input documents and added to the `Answer` objects.\n :param pattern:\n The regular expression pattern to use to extract the answer text from the generator output.\n If not specified, the whole string is used as the answer. The regular expression can have at\n most one capture group. If a capture group is present, the text matched by the capture group\n is used as the answer. If no capture group is present, the whole match is used as the answer.\n Examples:\n `[^\\\\n]+$` finds \"this is an answer\" in a string \"this is an argument.\\\\nthis is an answer\".\n `Answer: (.*)` finds \"this is an answer\" in a string\n \"this is an argument. Answer: this is an answer\".\n :param reference_pattern:\n The regular expression pattern to use for parsing the document references.\n We assume that references are specified as indices of the input documents and that indices start at 1.\n Example: `\\\\[(\\\\d+)\\\\]` finds \"1\" in a string \"this is an answer[1]\".\n If not specified, no parsing is done, and all documents are referenced.\n\n :returns: A dictionary with the following keys:\n - `answers`: The answers obtained from the output of the generator\n \"\"\"\n if not meta:\n meta = [{}] * len(replies)\n elif len(replies) != len(meta):\n raise ValueError(f\"Number of replies ({len(replies)}), and metadata ({len(meta)}) must match.\")\n\n if pattern:\n AnswerBuilder._check_num_groups_in_regex(pattern)\n\n pattern = pattern or self.pattern\n reference_pattern = reference_pattern or self.reference_pattern\n\n all_answers = []\n for reply, metadata in zip(replies, meta):\n referenced_docs = []\n if documents:\n reference_idxs = []\n if reference_pattern:\n reference_idxs = AnswerBuilder._extract_reference_idxs(reply, reference_pattern)\n else:\n reference_idxs = [doc_idx for doc_idx, _ in enumerate(documents)]\n\n for idx in reference_idxs:\n try:\n referenced_docs.append(documents[idx])\n except IndexError:\n logger.warning(\n \"Document index '{index}' referenced in Generator output is out of range. 
\", index=idx + 1\n )\n\n answer_string = AnswerBuilder._extract_answer_string(reply, pattern)\n answer = GeneratedAnswer(data=answer_string, query=query, documents=referenced_docs, meta=metadata)\n all_answers.append(answer)\n\n return {\"answers\": all_answers}\n\n @staticmethod\n def _extract_answer_string(reply: str, pattern: Optional[str] = None) -> str:\n \"\"\"\n Extract the answer string from the generator output using the specified pattern.\n\n If no pattern is specified, the whole string is used as the answer.\n\n :param reply:\n The output of the Generator. A string.\n :param pattern:\n The regular expression pattern to use to extract the answer text from the generator output.\n \"\"\"\n if pattern is None:\n return reply\n\n if match := re.search(pattern, reply):\n # No capture group in pattern -> use the whole match as answer\n if not match.lastindex:\n return match.group(0)\n # One capture group in pattern -> use the capture group as answer\n return match.group(1)\n return \"\"\n\n @staticmethod\n def _extract_reference_idxs(reply: str, reference_pattern: str) -> List[int]:\n document_idxs = re.findall(reference_pattern, reply)\n return [int(idx) - 1 for idx in document_idxs]\n\n @staticmethod\n def _check_num_groups_in_regex(pattern: str):\n num_groups = re.compile(pattern).groups\n if num_groups > 1:\n raise ValueError(\n f\"Pattern '{pattern}' contains multiple capture groups. \"\n f\"Please specify a pattern with at most one capture group.\"\n )\n", "path": "haystack/components/builders/answer_builder.py"}], "after_files": [{"content": "# SPDX-FileCopyrightText: 2022-present deepset GmbH <[email protected]>\n#\n# SPDX-License-Identifier: Apache-2.0\n\nimport re\nfrom typing import Any, Dict, List, Optional, Union\n\nfrom haystack import Document, GeneratedAnswer, component, logging\nfrom haystack.dataclasses.chat_message import ChatMessage\n\nlogger = logging.getLogger(__name__)\n\n\n@component\nclass AnswerBuilder:\n \"\"\"\n Takes a query and the replies a Generator returns as input and parses them into GeneratedAnswer objects.\n\n Optionally, it also takes Documents and metadata from the Generator as inputs to enrich the GeneratedAnswer objects.\n\n Usage example:\n ```python\n from haystack.components.builders import AnswerBuilder\n\n builder = AnswerBuilder(pattern=\"Answer: (.*)\")\n builder.run(query=\"What's the answer?\", replies=[\"This is an argument. Answer: This is the answer.\"])\n ```\n \"\"\"\n\n def __init__(self, pattern: Optional[str] = None, reference_pattern: Optional[str] = None):\n \"\"\"\n Creates an instance of the AnswerBuilder component.\n\n :param pattern:\n The regular expression pattern to use to extract the answer text from the generator output.\n If not specified, the whole string is used as the answer. The regular expression can have at\n most one capture group. If a capture group is present, the text matched by the capture group\n is used as the answer. If no capture group is present, the whole match is used as the answer.\n Examples:\n `[^\\\\n]+$` finds \"this is an answer\" in a string \"this is an argument.\\\\nthis is an answer\".\n `Answer: (.*)` finds \"this is an answer\" in a string \"this is an argument. 
Answer: this is an answer\".\n\n :param reference_pattern:\n The regular expression pattern to use for parsing the document references.\n We assume that references are specified as indices of the input documents and that\n indices start at 1.\n Example: `\\\\[(\\\\d+)\\\\]` finds \"1\" in a string \"this is an answer[1]\".\n If not specified, no parsing is done, and all documents are referenced.\n \"\"\"\n if pattern:\n AnswerBuilder._check_num_groups_in_regex(pattern)\n\n self.pattern = pattern\n self.reference_pattern = reference_pattern\n\n @component.output_types(answers=List[GeneratedAnswer])\n def run(\n self,\n query: str,\n replies: Union[List[str], List[ChatMessage]],\n meta: Optional[List[Dict[str, Any]]] = None,\n documents: Optional[List[Document]] = None,\n pattern: Optional[str] = None,\n reference_pattern: Optional[str] = None,\n ):\n \"\"\"\n Turns the output of a Generator into `Answer` objects using regular expressions.\n\n :param query:\n The query used in the prompts for the Generator.\n :param replies:\n The output of the Generator. Can be a list of strings or a list of ChatMessage objects.\n :param meta:\n The metadata returned by the Generator. If not specified, the generated answer will contain no metadata.\n :param documents:\n The documents used as input to the Generator. If `documents` are specified, they are added to the `Answer`\n objects. If both `documents` and `reference_pattern` are specified, the documents referenced in the\n Generator output are extracted from the input documents and added to the `Answer` objects.\n :param pattern:\n The regular expression pattern to use to extract the answer text from the generator output.\n If not specified, the whole string is used as the answer. The regular expression can have at\n most one capture group. If a capture group is present, the text matched by the capture group\n is used as the answer. If no capture group is present, the whole match is used as the answer.\n Examples:\n `[^\\\\n]+$` finds \"this is an answer\" in a string \"this is an argument.\\\\nthis is an answer\".\n `Answer: (.*)` finds \"this is an answer\" in a string\n \"this is an argument. 
Answer: this is an answer\".\n :param reference_pattern:\n The regular expression pattern to use for parsing the document references.\n We assume that references are specified as indices of the input documents and that indices start at 1.\n Example: `\\\\[(\\\\d+)\\\\]` finds \"1\" in a string \"this is an answer[1]\".\n If not specified, no parsing is done, and all documents are referenced.\n\n :returns: A dictionary with the following keys:\n - `answers`: The answers obtained from the output of the generator\n \"\"\"\n if not meta:\n meta = [{}] * len(replies)\n elif len(replies) != len(meta):\n raise ValueError(f\"Number of replies ({len(replies)}), and metadata ({len(meta)}) must match.\")\n\n if pattern:\n AnswerBuilder._check_num_groups_in_regex(pattern)\n\n pattern = pattern or self.pattern\n reference_pattern = reference_pattern or self.reference_pattern\n all_answers = []\n for reply, metadata in zip(replies, meta):\n # Extract content from ChatMessage objects if reply is a ChatMessages, else use the string as is\n extracted_reply: str = reply.content if isinstance(reply, ChatMessage) else reply # type: ignore\n extracted_metadata = reply.meta if isinstance(reply, ChatMessage) else metadata\n referenced_docs = []\n if documents:\n if reference_pattern:\n reference_idxs = AnswerBuilder._extract_reference_idxs(extracted_reply, reference_pattern)\n else:\n reference_idxs = [doc_idx for doc_idx, _ in enumerate(documents)]\n\n for idx in reference_idxs:\n try:\n referenced_docs.append(documents[idx])\n except IndexError:\n logger.warning(\n \"Document index '{index}' referenced in Generator output is out of range. \", index=idx + 1\n )\n\n answer_string = AnswerBuilder._extract_answer_string(extracted_reply, pattern)\n answer = GeneratedAnswer(\n data=answer_string, query=query, documents=referenced_docs, meta=extracted_metadata\n )\n all_answers.append(answer)\n\n return {\"answers\": all_answers}\n\n @staticmethod\n def _extract_answer_string(reply: str, pattern: Optional[str] = None) -> str:\n \"\"\"\n Extract the answer string from the generator output using the specified pattern.\n\n If no pattern is specified, the whole string is used as the answer.\n\n :param reply:\n The output of the Generator. A string.\n :param pattern:\n The regular expression pattern to use to extract the answer text from the generator output.\n \"\"\"\n if pattern is None:\n return reply\n\n if match := re.search(pattern, reply):\n # No capture group in pattern -> use the whole match as answer\n if not match.lastindex:\n return match.group(0)\n # One capture group in pattern -> use the capture group as answer\n return match.group(1)\n return \"\"\n\n @staticmethod\n def _extract_reference_idxs(reply: str, reference_pattern: str) -> List[int]:\n document_idxs = re.findall(reference_pattern, reply)\n return [int(idx) - 1 for idx in document_idxs]\n\n @staticmethod\n def _check_num_groups_in_regex(pattern: str):\n num_groups = re.compile(pattern).groups\n if num_groups > 1:\n raise ValueError(\n f\"Pattern '{pattern}' contains multiple capture groups. \"\n f\"Please specify a pattern with at most one capture group.\"\n )\n", "path": "haystack/components/builders/answer_builder.py"}]}
| 2,618 | 643 |
gh_patches_debug_12713
|
rasdani/github-patches
|
git_diff
|
pypa__pip-3037
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
installing from a wheel puts platform files into purelib, ignoring WHEEL Root-Is-Pure: False
When installing a package with C extensions, the platform specific shared libraries should be placed in the 'platform library' install path install_platlib. On 64 bit linux, this would be /usr/lib64/python2.7/<...>
pip 7.0.3 incorrectly installs these platform specific shared binaries to the 'pure library' install path install_purelib.
This only happens when installing from a wheel file. The wheel file contains a WHEEL configuration file which indicates the package is not pure (Root-Is-Pure: False), but pip appears to ignore that.
For example, if I were to:
pip install python-qpid-proton
pip builds the extensions, generates a wheel in the cache, and installs from that wheel.
the shared libraries generated by the extension are placed here:
$ pip show python-qpid-proton
Metadata-Version: 2.0
Name: python-qpid-proton
Version: 0.9.1.1
Summary: An AMQP based messaging library.
Home-page: http://qpid.apache.org/proton/
Author: Apache Qpid
Author-email: [email protected]
License: Apache Software License
Location: /usr/lib/python2.7/site-packages <----------------------------
However, if I disable the cache (which skips the wheel building/install) the shared objects are correctly installed:
$ sudo pip --no-cache-dir install python-qpid-proton
$ pip show python-qpid-proton
Metadata-Version: 1.1
Name: python-qpid-proton
Version: 0.9.1.1
Summary: An AMQP based messaging library.
Home-page: http://qpid.apache.org/proton/
Author: Apache Qpid
Author-email: [email protected]
License: Apache Software License
Location: /usr/lib64/python2.7/site-packages <-------------------------------------------
Requires:
It appears that the install_platlib value in the distribution scheme is being incorrectly overridden here
https://github.com/pypa/pip/blob/develop/pip/locations.py#L196
The Distribution instance created here:
https://github.com/pypa/pip/blob/develop/pip/locations.py#L181
is a dummy distribution in the case of a wheel. It is not the actual distribution as defined by the package's setup.py file. As such, it will not have any extensions configured.
But when finalize_options is run on the dummy install instance, distutils checks to see if 'install_lib' is unset, and _if there are no extensions_ it sets install_lib to the value of install_purelib.
This means install_lib will never be set to platlib, even if the package contains binary extensions.
So when the installation path for a platform library is picked here:
https://github.com/pypa/pip/blob/develop/pip/wheel.py#L248
it has been incorrectly set to the pure lib directory.
https://github.com/pypa/pip/blob/develop/pip/wheel.py#L193
--- END ISSUE ---
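As an aside, the distutils behaviour described above is easy to observe directly. The sketch below is purely illustrative (the `check_scheme` helper and its printed labels are invented here, and it assumes a classic `distutils` is importable); it builds the same kind of dummy `Distribution` that `pip.locations.distutils_scheme` builds and shows `install_lib` collapsing onto the pure-library path when the dummy has no extensions:

```python
# Illustrative sketch, not pip code: reproduce how a dummy Distribution
# with no ext_modules resolves its install paths after finalize_options().
from __future__ import print_function

from distutils.dist import Distribution


def check_scheme(dist_name):
    d = Distribution({"name": dist_name})
    d.parse_config_files()
    i = d.get_command_obj("install", create=True)
    i.finalize_options()
    # With no extensions on the dummy distribution, distutils copies
    # install_purelib into install_lib, never install_platlib.
    print("install_lib     :", i.install_lib)
    print("install_purelib :", i.install_purelib)
    print("install_platlib :", i.install_platlib)
    # The override only makes sense when the user asked for it explicitly:
    print("install_lib set in config:",
          "install_lib" in d.get_option_dict("install"))


if __name__ == "__main__":
    check_scheme("python-qpid-proton")
```

Because the dummy distribution never has extensions, checking `d.get_option_dict('install')` for an explicit `install_lib` (rather than trusting `i.install_lib` after `finalize_options()`) is what distinguishes a deliberate setup.cfg override from this default collapse.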
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pip/locations.py`
Content:
```
1 """Locations where we look for configs, install stuff, etc"""
2 from __future__ import absolute_import
3
4 import getpass
5 import os
6 import os.path
7 import site
8 import sys
9
10 from distutils import sysconfig
11 from distutils.command.install import install, SCHEME_KEYS # noqa
12
13 from pip.compat import WINDOWS
14 from pip.utils import appdirs
15
16
17 # CA Bundle Locations
18 CA_BUNDLE_PATHS = [
19 # Debian/Ubuntu/Gentoo etc.
20 "/etc/ssl/certs/ca-certificates.crt",
21
22 # Fedora/RHEL
23 "/etc/pki/tls/certs/ca-bundle.crt",
24
25 # OpenSUSE
26 "/etc/ssl/ca-bundle.pem",
27
28 # OpenBSD
29 "/etc/ssl/cert.pem",
30
31 # FreeBSD/DragonFly
32 "/usr/local/share/certs/ca-root-nss.crt",
33
34 # Homebrew on OSX
35 "/usr/local/etc/openssl/cert.pem",
36 ]
37
38 # Attempt to locate a CA Bundle that we can pass into requests, we have a list
39 # of possible ones from various systems. If we cannot find one then we'll set
40 # this to None so that we default to whatever requests is setup to handle.
41 #
42 # Note to Downstream: If you wish to disable this autodetection and simply use
43 # whatever requests does (likely you've already patched
44 # requests.certs.where()) then simply edit this line so
45 # that it reads ``CA_BUNDLE_PATH = None``.
46 CA_BUNDLE_PATH = next((x for x in CA_BUNDLE_PATHS if os.path.exists(x)), None)
47
48
49 # Application Directories
50 USER_CACHE_DIR = appdirs.user_cache_dir("pip")
51
52
53 DELETE_MARKER_MESSAGE = '''\
54 This file is placed here by pip to indicate the source was put
55 here by pip.
56
57 Once this package is successfully installed this source code will be
58 deleted (unless you remove this file).
59 '''
60 PIP_DELETE_MARKER_FILENAME = 'pip-delete-this-directory.txt'
61
62
63 def write_delete_marker_file(directory):
64 """
65 Write the pip delete marker file into this directory.
66 """
67 filepath = os.path.join(directory, PIP_DELETE_MARKER_FILENAME)
68 with open(filepath, 'w') as marker_fp:
69 marker_fp.write(DELETE_MARKER_MESSAGE)
70
71
72 def running_under_virtualenv():
73 """
74 Return True if we're running inside a virtualenv, False otherwise.
75
76 """
77 if hasattr(sys, 'real_prefix'):
78 return True
79 elif sys.prefix != getattr(sys, "base_prefix", sys.prefix):
80 return True
81
82 return False
83
84
85 def virtualenv_no_global():
86 """
87 Return True if in a venv and no system site packages.
88 """
89 # this mirrors the logic in virtualenv.py for locating the
90 # no-global-site-packages.txt file
91 site_mod_dir = os.path.dirname(os.path.abspath(site.__file__))
92 no_global_file = os.path.join(site_mod_dir, 'no-global-site-packages.txt')
93 if running_under_virtualenv() and os.path.isfile(no_global_file):
94 return True
95
96
97 def __get_username():
98 """ Returns the effective username of the current process. """
99 if WINDOWS:
100 return getpass.getuser()
101 import pwd
102 return pwd.getpwuid(os.geteuid()).pw_name
103
104
105 if running_under_virtualenv():
106 src_prefix = os.path.join(sys.prefix, 'src')
107 else:
108 # FIXME: keep src in cwd for now (it is not a temporary folder)
109 try:
110 src_prefix = os.path.join(os.getcwd(), 'src')
111 except OSError:
112 # In case the current working directory has been renamed or deleted
113 sys.exit(
114 "The folder you are executing pip from can no longer be found."
115 )
116
117 # under Mac OS X + virtualenv sys.prefix is not properly resolved
118 # it is something like /path/to/python/bin/..
119 # Note: using realpath due to tmp dirs on OSX being symlinks
120 src_prefix = os.path.abspath(src_prefix)
121
122 # FIXME doesn't account for venv linked to global site-packages
123
124 site_packages = sysconfig.get_python_lib()
125 user_site = site.USER_SITE
126 user_dir = os.path.expanduser('~')
127 if WINDOWS:
128 bin_py = os.path.join(sys.prefix, 'Scripts')
129 bin_user = os.path.join(user_site, 'Scripts')
130 # buildout uses 'bin' on Windows too?
131 if not os.path.exists(bin_py):
132 bin_py = os.path.join(sys.prefix, 'bin')
133 bin_user = os.path.join(user_site, 'bin')
134
135 config_basename = 'pip.ini'
136
137 legacy_storage_dir = os.path.join(user_dir, 'pip')
138 legacy_config_file = os.path.join(
139 legacy_storage_dir,
140 config_basename,
141 )
142 else:
143 bin_py = os.path.join(sys.prefix, 'bin')
144 bin_user = os.path.join(user_site, 'bin')
145
146 config_basename = 'pip.conf'
147
148 legacy_storage_dir = os.path.join(user_dir, '.pip')
149 legacy_config_file = os.path.join(
150 legacy_storage_dir,
151 config_basename,
152 )
153
154 # Forcing to use /usr/local/bin for standard Mac OS X framework installs
155 # Also log to ~/Library/Logs/ for use with the Console.app log viewer
156 if sys.platform[:6] == 'darwin' and sys.prefix[:16] == '/System/Library/':
157 bin_py = '/usr/local/bin'
158
159 site_config_files = [
160 os.path.join(path, config_basename)
161 for path in appdirs.site_config_dirs('pip')
162 ]
163
164
165 def distutils_scheme(dist_name, user=False, home=None, root=None,
166 isolated=False):
167 """
168 Return a distutils install scheme
169 """
170 from distutils.dist import Distribution
171
172 scheme = {}
173
174 if isolated:
175 extra_dist_args = {"script_args": ["--no-user-cfg"]}
176 else:
177 extra_dist_args = {}
178 dist_args = {'name': dist_name}
179 dist_args.update(extra_dist_args)
180
181 d = Distribution(dist_args)
182 d.parse_config_files()
183 i = d.get_command_obj('install', create=True)
184 # NOTE: setting user or home has the side-effect of creating the home dir
185 # or user base for installations during finalize_options()
186 # ideally, we'd prefer a scheme class that has no side-effects.
187 i.user = user or i.user
188 if user:
189 i.prefix = ""
190 i.home = home or i.home
191 i.root = root or i.root
192 i.finalize_options()
193 for key in SCHEME_KEYS:
194 scheme[key] = getattr(i, 'install_' + key)
195
196 if i.install_lib is not None:
197 # install_lib takes precedence over purelib and platlib
198 scheme.update(dict(purelib=i.install_lib, platlib=i.install_lib))
199
200 if running_under_virtualenv():
201 scheme['headers'] = os.path.join(
202 sys.prefix,
203 'include',
204 'site',
205 'python' + sys.version[:3],
206 dist_name,
207 )
208
209 if root is not None:
210 scheme["headers"] = os.path.join(
211 root,
212 os.path.abspath(scheme["headers"])[1:],
213 )
214
215 return scheme
216
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pip/locations.py b/pip/locations.py
--- a/pip/locations.py
+++ b/pip/locations.py
@@ -237,8 +237,12 @@
for key in SCHEME_KEYS:
scheme[key] = getattr(i, 'install_' + key)
- if i.install_lib is not None:
- # install_lib takes precedence over purelib and platlib
+ # install_lib specified in setup.cfg should install *everything*
+ # into there (i.e. it takes precedence over both purelib and
+ # platlib). Note, i.install_lib is *always* set after
+ # finalize_options(); we only want to override here if the user
+ # has explicitly requested it hence going back to the config
+ if 'install_lib' in d.get_option_dict('install'):
scheme.update(dict(purelib=i.install_lib, platlib=i.install_lib))
if running_under_virtualenv():
|
{"golden_diff": "diff --git a/pip/locations.py b/pip/locations.py\n--- a/pip/locations.py\n+++ b/pip/locations.py\n@@ -237,8 +237,12 @@\n for key in SCHEME_KEYS:\n scheme[key] = getattr(i, 'install_' + key)\n \n- if i.install_lib is not None:\n- # install_lib takes precedence over purelib and platlib\n+ # install_lib specified in setup.cfg should install *everything*\n+ # into there (i.e. it takes precedence over both purelib and\n+ # platlib). Note, i.install_lib is *always* set after\n+ # finalize_options(); we only want to override here if the user\n+ # has explicitly requested it hence going back to the config\n+ if 'install_lib' in d.get_option_dict('install'):\n scheme.update(dict(purelib=i.install_lib, platlib=i.install_lib))\n \n if running_under_virtualenv():\n", "issue": "installing from a wheel puts platform files into purelib, ignoring WHEEL Root-Is-Pure: False\nWhen installing a package with C extensions, the platform specific shared libraries should be placed in the 'platform library' install path install_platlib. On 64 bit linux, this would be /usr/lib64/python2.7/<...>\n\npip 7.0.3 incorrectly installs these platform specific shared binaries to the 'pure library' install path install_purelib.\n\nThis only happens when installing from a wheel file. The wheel file contains a WHEEL configuration file which indicates the package is not pure (Root-Is-Pure: False), but pip appears to ignore that.\n\nFor example, if I were to:\n\n pip install python-qpid-proton\n\npip builds the extensions, generates a wheel in the cache, and installs from that wheel.\nthe shared libraries generated by the extension are placed here:\n## $ pip show python-qpid-proton\n\nMetadata-Version: 2.0\nName: python-qpid-proton\nVersion: 0.9.1.1\nSummary: An AMQP based messaging library.\nHome-page: http://qpid.apache.org/proton/\nAuthor: Apache Qpid\nAuthor-email: [email protected]\nLicense: Apache Software License\nLocation: /usr/lib/python2.7/site-packages <----------------------------\n\nHowever, if I disable the cache (which skips the wheel building/install) the shared objects are correctly installed:\n\n$ sudo pip --no-cache-dir install python-qpid-proton\n## $ pip show python-qpid-proton\n\nMetadata-Version: 1.1\nName: python-qpid-proton\nVersion: 0.9.1.1\nSummary: An AMQP based messaging library.\nHome-page: http://qpid.apache.org/proton/\nAuthor: Apache Qpid\nAuthor-email: [email protected]\nLicense: Apache Software License\nLocation: /usr/lib64/python2.7/site-packages <-------------------------------------------\nRequires: \n\nIt appears that the install_platlib value in the distribution scheme is being incorrectly overridden here\n\nhttps://github.com/pypa/pip/blob/develop/pip/locations.py#L196\n\nThe Distribution instance created here:\nhttps://github.com/pypa/pip/blob/develop/pip/locations.py#L181\n\nis a dummy distribution in the case of a wheel. It is not the actual distribution as defined by the packages setup.py file. As such, it will not have any extensions configured. 
\n\nBut when finalize_options is run on the dummy install instance, distutils checks to see if 'install_lib' is unset, and _if there are no extensions_ it set install_lib to the value of install_purelib.\n\nWhich means install_lib will never be set to platlib, even if the package contains binary extensions.\nSo when the installation path for a platform library is picked here:\n\nhttps://github.com/pypa/pip/blob/develop/pip/wheel.py#L248\n\nit has been incorrectly set to the pure lib directory.\n\nhttps://github.com/pypa/pip/blob/develop/pip/wheel.py#L193\n\n", "before_files": [{"content": "\"\"\"Locations where we look for configs, install stuff, etc\"\"\"\nfrom __future__ import absolute_import\n\nimport getpass\nimport os\nimport os.path\nimport site\nimport sys\n\nfrom distutils import sysconfig\nfrom distutils.command.install import install, SCHEME_KEYS # noqa\n\nfrom pip.compat import WINDOWS\nfrom pip.utils import appdirs\n\n\n# CA Bundle Locations\nCA_BUNDLE_PATHS = [\n # Debian/Ubuntu/Gentoo etc.\n \"/etc/ssl/certs/ca-certificates.crt\",\n\n # Fedora/RHEL\n \"/etc/pki/tls/certs/ca-bundle.crt\",\n\n # OpenSUSE\n \"/etc/ssl/ca-bundle.pem\",\n\n # OpenBSD\n \"/etc/ssl/cert.pem\",\n\n # FreeBSD/DragonFly\n \"/usr/local/share/certs/ca-root-nss.crt\",\n\n # Homebrew on OSX\n \"/usr/local/etc/openssl/cert.pem\",\n]\n\n# Attempt to locate a CA Bundle that we can pass into requests, we have a list\n# of possible ones from various systems. If we cannot find one then we'll set\n# this to None so that we default to whatever requests is setup to handle.\n#\n# Note to Downstream: If you wish to disable this autodetection and simply use\n# whatever requests does (likely you've already patched\n# requests.certs.where()) then simply edit this line so\n# that it reads ``CA_BUNDLE_PATH = None``.\nCA_BUNDLE_PATH = next((x for x in CA_BUNDLE_PATHS if os.path.exists(x)), None)\n\n\n# Application Directories\nUSER_CACHE_DIR = appdirs.user_cache_dir(\"pip\")\n\n\nDELETE_MARKER_MESSAGE = '''\\\nThis file is placed here by pip to indicate the source was put\nhere by pip.\n\nOnce this package is successfully installed this source code will be\ndeleted (unless you remove this file).\n'''\nPIP_DELETE_MARKER_FILENAME = 'pip-delete-this-directory.txt'\n\n\ndef write_delete_marker_file(directory):\n \"\"\"\n Write the pip delete marker file into this directory.\n \"\"\"\n filepath = os.path.join(directory, PIP_DELETE_MARKER_FILENAME)\n with open(filepath, 'w') as marker_fp:\n marker_fp.write(DELETE_MARKER_MESSAGE)\n\n\ndef running_under_virtualenv():\n \"\"\"\n Return True if we're running inside a virtualenv, False otherwise.\n\n \"\"\"\n if hasattr(sys, 'real_prefix'):\n return True\n elif sys.prefix != getattr(sys, \"base_prefix\", sys.prefix):\n return True\n\n return False\n\n\ndef virtualenv_no_global():\n \"\"\"\n Return True if in a venv and no system site packages.\n \"\"\"\n # this mirrors the logic in virtualenv.py for locating the\n # no-global-site-packages.txt file\n site_mod_dir = os.path.dirname(os.path.abspath(site.__file__))\n no_global_file = os.path.join(site_mod_dir, 'no-global-site-packages.txt')\n if running_under_virtualenv() and os.path.isfile(no_global_file):\n return True\n\n\ndef __get_username():\n \"\"\" Returns the effective username of the current process. 
\"\"\"\n if WINDOWS:\n return getpass.getuser()\n import pwd\n return pwd.getpwuid(os.geteuid()).pw_name\n\n\nif running_under_virtualenv():\n src_prefix = os.path.join(sys.prefix, 'src')\nelse:\n # FIXME: keep src in cwd for now (it is not a temporary folder)\n try:\n src_prefix = os.path.join(os.getcwd(), 'src')\n except OSError:\n # In case the current working directory has been renamed or deleted\n sys.exit(\n \"The folder you are executing pip from can no longer be found.\"\n )\n\n# under Mac OS X + virtualenv sys.prefix is not properly resolved\n# it is something like /path/to/python/bin/..\n# Note: using realpath due to tmp dirs on OSX being symlinks\nsrc_prefix = os.path.abspath(src_prefix)\n\n# FIXME doesn't account for venv linked to global site-packages\n\nsite_packages = sysconfig.get_python_lib()\nuser_site = site.USER_SITE\nuser_dir = os.path.expanduser('~')\nif WINDOWS:\n bin_py = os.path.join(sys.prefix, 'Scripts')\n bin_user = os.path.join(user_site, 'Scripts')\n # buildout uses 'bin' on Windows too?\n if not os.path.exists(bin_py):\n bin_py = os.path.join(sys.prefix, 'bin')\n bin_user = os.path.join(user_site, 'bin')\n\n config_basename = 'pip.ini'\n\n legacy_storage_dir = os.path.join(user_dir, 'pip')\n legacy_config_file = os.path.join(\n legacy_storage_dir,\n config_basename,\n )\nelse:\n bin_py = os.path.join(sys.prefix, 'bin')\n bin_user = os.path.join(user_site, 'bin')\n\n config_basename = 'pip.conf'\n\n legacy_storage_dir = os.path.join(user_dir, '.pip')\n legacy_config_file = os.path.join(\n legacy_storage_dir,\n config_basename,\n )\n\n # Forcing to use /usr/local/bin for standard Mac OS X framework installs\n # Also log to ~/Library/Logs/ for use with the Console.app log viewer\n if sys.platform[:6] == 'darwin' and sys.prefix[:16] == '/System/Library/':\n bin_py = '/usr/local/bin'\n\nsite_config_files = [\n os.path.join(path, config_basename)\n for path in appdirs.site_config_dirs('pip')\n]\n\n\ndef distutils_scheme(dist_name, user=False, home=None, root=None,\n isolated=False):\n \"\"\"\n Return a distutils install scheme\n \"\"\"\n from distutils.dist import Distribution\n\n scheme = {}\n\n if isolated:\n extra_dist_args = {\"script_args\": [\"--no-user-cfg\"]}\n else:\n extra_dist_args = {}\n dist_args = {'name': dist_name}\n dist_args.update(extra_dist_args)\n\n d = Distribution(dist_args)\n d.parse_config_files()\n i = d.get_command_obj('install', create=True)\n # NOTE: setting user or home has the side-effect of creating the home dir\n # or user base for installations during finalize_options()\n # ideally, we'd prefer a scheme class that has no side-effects.\n i.user = user or i.user\n if user:\n i.prefix = \"\"\n i.home = home or i.home\n i.root = root or i.root\n i.finalize_options()\n for key in SCHEME_KEYS:\n scheme[key] = getattr(i, 'install_' + key)\n\n if i.install_lib is not None:\n # install_lib takes precedence over purelib and platlib\n scheme.update(dict(purelib=i.install_lib, platlib=i.install_lib))\n\n if running_under_virtualenv():\n scheme['headers'] = os.path.join(\n sys.prefix,\n 'include',\n 'site',\n 'python' + sys.version[:3],\n dist_name,\n )\n\n if root is not None:\n scheme[\"headers\"] = os.path.join(\n root,\n os.path.abspath(scheme[\"headers\"])[1:],\n )\n\n return scheme\n", "path": "pip/locations.py"}], "after_files": [{"content": "\"\"\"Locations where we look for configs, install stuff, etc\"\"\"\nfrom __future__ import absolute_import\n\nimport getpass\nimport os\nimport os.path\nimport site\nimport sys\nimport 
tempfile\n\nfrom distutils import sysconfig\nfrom distutils.command.install import install, SCHEME_KEYS\n\nfrom pip.compat import get_path_uid, WINDOWS\nfrom pip.utils import appdirs\nfrom pip import exceptions\n\n\n# Hack for flake8\ninstall\n\n\n# CA Bundle Locations\nCA_BUNDLE_PATHS = [\n # Debian/Ubuntu/Gentoo etc.\n \"/etc/ssl/certs/ca-certificates.crt\",\n\n # Fedora/RHEL\n \"/etc/pki/tls/certs/ca-bundle.crt\",\n\n # OpenSUSE\n \"/etc/ssl/ca-bundle.pem\",\n\n # OpenBSD\n \"/etc/ssl/cert.pem\",\n\n # FreeBSD/DragonFly\n \"/usr/local/share/certs/ca-root-nss.crt\",\n\n # Homebrew on OSX\n \"/usr/local/etc/openssl/cert.pem\",\n]\n\n# Attempt to locate a CA Bundle that we can pass into requests, we have a list\n# of possible ones from various systems. If we cannot find one then we'll set\n# this to None so that we default to whatever requests is setup to handle.\n#\n# Note to Downstream: If you wish to disable this autodetection and simply use\n# whatever requests does (likely you've already patched\n# requests.certs.where()) then simply edit this line so\n# that it reads ``CA_BUNDLE_PATH = None``.\nCA_BUNDLE_PATH = next((x for x in CA_BUNDLE_PATHS if os.path.exists(x)), None)\n\n\n# Application Directories\nUSER_CACHE_DIR = appdirs.user_cache_dir(\"pip\")\n\n\nDELETE_MARKER_MESSAGE = '''\\\nThis file is placed here by pip to indicate the source was put\nhere by pip.\n\nOnce this package is successfully installed this source code will be\ndeleted (unless you remove this file).\n'''\nPIP_DELETE_MARKER_FILENAME = 'pip-delete-this-directory.txt'\n\n\ndef write_delete_marker_file(directory):\n \"\"\"\n Write the pip delete marker file into this directory.\n \"\"\"\n filepath = os.path.join(directory, PIP_DELETE_MARKER_FILENAME)\n with open(filepath, 'w') as marker_fp:\n marker_fp.write(DELETE_MARKER_MESSAGE)\n\n\ndef running_under_virtualenv():\n \"\"\"\n Return True if we're running inside a virtualenv, False otherwise.\n\n \"\"\"\n if hasattr(sys, 'real_prefix'):\n return True\n elif sys.prefix != getattr(sys, \"base_prefix\", sys.prefix):\n return True\n\n return False\n\n\ndef virtualenv_no_global():\n \"\"\"\n Return True if in a venv and no system site packages.\n \"\"\"\n # this mirrors the logic in virtualenv.py for locating the\n # no-global-site-packages.txt file\n site_mod_dir = os.path.dirname(os.path.abspath(site.__file__))\n no_global_file = os.path.join(site_mod_dir, 'no-global-site-packages.txt')\n if running_under_virtualenv() and os.path.isfile(no_global_file):\n return True\n\n\ndef __get_username():\n \"\"\" Returns the effective username of the current process. 
\"\"\"\n if WINDOWS:\n return getpass.getuser()\n import pwd\n return pwd.getpwuid(os.geteuid()).pw_name\n\n\ndef _get_build_prefix():\n \"\"\" Returns a safe build_prefix \"\"\"\n path = os.path.join(\n tempfile.gettempdir(),\n 'pip_build_%s' % __get_username().replace(' ', '_')\n )\n if WINDOWS:\n \"\"\" on windows(tested on 7) temp dirs are isolated \"\"\"\n return path\n try:\n os.mkdir(path)\n write_delete_marker_file(path)\n except OSError:\n file_uid = None\n try:\n # raises OSError for symlinks\n # https://github.com/pypa/pip/pull/935#discussion_r5307003\n file_uid = get_path_uid(path)\n except OSError:\n file_uid = None\n\n if file_uid != os.geteuid():\n msg = (\n \"The temporary folder for building (%s) is either not owned by\"\n \" you, or is a symlink.\" % path\n )\n print(msg)\n print(\n \"pip will not work until the temporary folder is either \"\n \"deleted or is a real directory owned by your user account.\"\n )\n raise exceptions.InstallationError(msg)\n return path\n\nif running_under_virtualenv():\n build_prefix = os.path.join(sys.prefix, 'build')\n src_prefix = os.path.join(sys.prefix, 'src')\nelse:\n # Note: intentionally NOT using mkdtemp\n # See https://github.com/pypa/pip/issues/906 for plan to move to mkdtemp\n build_prefix = _get_build_prefix()\n\n # FIXME: keep src in cwd for now (it is not a temporary folder)\n try:\n src_prefix = os.path.join(os.getcwd(), 'src')\n except OSError:\n # In case the current working directory has been renamed or deleted\n sys.exit(\n \"The folder you are executing pip from can no longer be found.\"\n )\n\n# under Mac OS X + virtualenv sys.prefix is not properly resolved\n# it is something like /path/to/python/bin/..\n# Note: using realpath due to tmp dirs on OSX being symlinks\nbuild_prefix = os.path.abspath(os.path.realpath(build_prefix))\nsrc_prefix = os.path.abspath(src_prefix)\n\n# FIXME doesn't account for venv linked to global site-packages\n\nsite_packages = sysconfig.get_python_lib()\nuser_site = site.USER_SITE\nuser_dir = os.path.expanduser('~')\nif WINDOWS:\n bin_py = os.path.join(sys.prefix, 'Scripts')\n bin_user = os.path.join(user_site, 'Scripts')\n # buildout uses 'bin' on Windows too?\n if not os.path.exists(bin_py):\n bin_py = os.path.join(sys.prefix, 'bin')\n bin_user = os.path.join(user_site, 'bin')\n\n config_basename = 'pip.ini'\n\n legacy_storage_dir = os.path.join(user_dir, 'pip')\n legacy_config_file = os.path.join(\n legacy_storage_dir,\n config_basename,\n )\nelse:\n bin_py = os.path.join(sys.prefix, 'bin')\n bin_user = os.path.join(user_site, 'bin')\n\n config_basename = 'pip.conf'\n\n legacy_storage_dir = os.path.join(user_dir, '.pip')\n legacy_config_file = os.path.join(\n legacy_storage_dir,\n config_basename,\n )\n\n # Forcing to use /usr/local/bin for standard Mac OS X framework installs\n # Also log to ~/Library/Logs/ for use with the Console.app log viewer\n if sys.platform[:6] == 'darwin' and sys.prefix[:16] == '/System/Library/':\n bin_py = '/usr/local/bin'\n\nsite_config_files = [\n os.path.join(path, config_basename)\n for path in appdirs.site_config_dirs('pip')\n]\n\n\ndef distutils_scheme(dist_name, user=False, home=None, root=None,\n isolated=False):\n \"\"\"\n Return a distutils install scheme\n \"\"\"\n from distutils.dist import Distribution\n\n scheme = {}\n\n if isolated:\n extra_dist_args = {\"script_args\": [\"--no-user-cfg\"]}\n else:\n extra_dist_args = {}\n dist_args = {'name': dist_name}\n dist_args.update(extra_dist_args)\n\n d = Distribution(dist_args)\n d.parse_config_files()\n i 
= d.get_command_obj('install', create=True)\n # NOTE: setting user or home has the side-effect of creating the home dir\n # or user base for installations during finalize_options()\n # ideally, we'd prefer a scheme class that has no side-effects.\n i.user = user or i.user\n i.home = home or i.home\n i.root = root or i.root\n i.finalize_options()\n for key in SCHEME_KEYS:\n scheme[key] = getattr(i, 'install_' + key)\n\n # install_lib specified in setup.cfg should install *everything*\n # into there (i.e. it takes precedence over both purelib and\n # platlib). Note, i.install_lib is *always* set after\n # finalize_options(); we only want to override here if the user\n # has explicitly requested it hence going back to the config\n if 'install_lib' in d.get_option_dict('install'):\n scheme.update(dict(purelib=i.install_lib, platlib=i.install_lib))\n\n if running_under_virtualenv():\n scheme['headers'] = os.path.join(\n sys.prefix,\n 'include',\n 'site',\n 'python' + sys.version[:3],\n dist_name,\n )\n\n if root is not None:\n scheme[\"headers\"] = os.path.join(\n root,\n os.path.abspath(scheme[\"headers\"])[1:],\n )\n\n return scheme\n", "path": "pip/locations.py"}]}
| 3,004 | 215 |
gh_patches_debug_21734
|
rasdani/github-patches
|
git_diff
|
marshmallow-code__webargs-99
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Refactor tests
The tests can be DRY'd up significantly. I suggest the following:
- [ ] Implement the same HTTP API across all the supported frameworks
- [ ] Use webtest to test endpoints; create a base test class that tests common functionality
--- END ISSUE ---
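One possible shape for the second bullet is sketched below. The `create_app()` hook, the `/echo` route, and the class name are hypothetical (they are not taken from the existing webargs tests), and the sketch assumes `webtest` plus a pytest runner:

```python
# Hypothetical sketch of a shared WebTest-backed base class; endpoint names
# and the create_app() hook are invented for illustration only.
import webtest


class CommonTestCase(object):
    """Common assertions; one subclass per supported framework supplies the app."""

    def create_app(self):
        raise NotImplementedError("return the framework-specific WSGI app")

    def setup_method(self, method):
        self.testapp = webtest.TestApp(self.create_app())

    def test_parse_querystring(self):
        assert self.testapp.get("/echo?name=Fred").json == {"name": "Fred"}

    def test_parse_json_body(self):
        res = self.testapp.post_json("/echo", {"name": "Fred"})
        assert res.json == {"name": "Fred"}
```

Each framework module would then only need to define `create_app()`, keeping the HTTP-level assertions in one place.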
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tasks.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import os
3 import sys
4 import webbrowser
5
6 from invoke import task, run
7
8 docs_dir = 'docs'
9 build_dir = os.path.join(docs_dir, '_build')
10
11 @task
12 def test(coverage=False, browse=False):
13 flake()
14 import pytest
15 args = []
16 if coverage:
17 args.extend(['--cov=webargs', '--cov-report=term', '--cov-report=html'])
18
19 if sys.version_info < (3, 4, 1):
20 args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttp')))
21 retcode = pytest.main(args)
22 if coverage and browse:
23 webbrowser.open_new_tab(os.path.join('htmlcov', 'index.html'))
24 sys.exit(retcode)
25
26 @task
27 def flake():
28 """Run flake8 on codebase."""
29 cmd = 'flake8 .'
30 if sys.version_info < (3, 4, 1):
31 excludes = [
32 os.path.join('tests', 'test_aiohttp'),
33 os.path.join('webargs', 'async.py'),
34 os.path.join('webargs', 'aiohttpparser.py'),
35 os.path.join('examples', 'annotations_example.py'),
36 'build',
37 ]
38 cmd += ' --exclude={0}'.format(','.join(excludes))
39 run(cmd, echo=True)
40
41 @task
42 def clean():
43 run("rm -rf build")
44 run("rm -rf dist")
45 run("rm -rf webargs.egg-info")
46 clean_docs()
47 print("Cleaned up.")
48
49 @task
50 def readme(browse=False):
51 run('rst2html.py README.rst > README.html')
52 if browse:
53 webbrowser.open_new_tab('README.html')
54
55 @task
56 def clean_docs():
57 run("rm -rf %s" % build_dir)
58
59 @task
60 def browse_docs():
61 path = os.path.join(build_dir, 'index.html')
62 webbrowser.open_new_tab(path)
63
64 @task
65 def docs(clean=False, browse=False, watch=False):
66 """Build the docs."""
67 if clean:
68 clean_docs()
69 run("sphinx-build %s %s" % (docs_dir, build_dir), echo=True)
70 if browse:
71 browse_docs()
72 if watch:
73 watch_docs()
74
75 @task
76 def watch_docs():
77 """Run build the docs when a file changes."""
78 try:
79 import sphinx_autobuild # noqa
80 except ImportError:
81 print('ERROR: watch task requires the sphinx_autobuild package.')
82 print('Install it with:')
83 print(' pip install sphinx-autobuild')
84 sys.exit(1)
85 run('sphinx-autobuild {0} {1} --watch {2}'.format(
86 docs_dir, build_dir, 'webargs'), echo=True, pty=True)
87
88 @task
89 def publish(test=False):
90 """Publish to the cheeseshop."""
91 clean()
92 if test:
93 run('python setup.py register -r test sdist bdist_wheel', echo=True)
94 run('twine upload dist/* -r test', echo=True)
95 else:
96 run('python setup.py register sdist bdist_wheel', echo=True)
97 run('twine upload dist/*', echo=True)
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tasks.py b/tasks.py
--- a/tasks.py
+++ b/tasks.py
@@ -17,7 +17,7 @@
args.extend(['--cov=webargs', '--cov-report=term', '--cov-report=html'])
if sys.version_info < (3, 4, 1):
- args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttp')))
+ args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttpparser.py')))
retcode = pytest.main(args)
if coverage and browse:
webbrowser.open_new_tab(os.path.join('htmlcov', 'index.html'))
@@ -29,7 +29,8 @@
cmd = 'flake8 .'
if sys.version_info < (3, 4, 1):
excludes = [
- os.path.join('tests', 'test_aiohttp'),
+ os.path.join('tests', 'apps', 'aiohttp_app.py'),
+ os.path.join('tests', 'test_aiohttparser.py'),
os.path.join('webargs', 'async.py'),
os.path.join('webargs', 'aiohttpparser.py'),
os.path.join('examples', 'annotations_example.py'),
|
{"golden_diff": "diff --git a/tasks.py b/tasks.py\n--- a/tasks.py\n+++ b/tasks.py\n@@ -17,7 +17,7 @@\n args.extend(['--cov=webargs', '--cov-report=term', '--cov-report=html'])\n \n if sys.version_info < (3, 4, 1):\n- args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttp')))\n+ args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttpparser.py')))\n retcode = pytest.main(args)\n if coverage and browse:\n webbrowser.open_new_tab(os.path.join('htmlcov', 'index.html'))\n@@ -29,7 +29,8 @@\n cmd = 'flake8 .'\n if sys.version_info < (3, 4, 1):\n excludes = [\n- os.path.join('tests', 'test_aiohttp'),\n+ os.path.join('tests', 'apps', 'aiohttp_app.py'),\n+ os.path.join('tests', 'test_aiohttparser.py'),\n os.path.join('webargs', 'async.py'),\n os.path.join('webargs', 'aiohttpparser.py'),\n os.path.join('examples', 'annotations_example.py'),\n", "issue": "Refactor tests\nThe tests can be DRY'd up significantly. I suggest the following:\n- [ ] Implement the same HTTP API across all the supported frameworks\n- [ ] Use webtest to test endpoints; create a base test class that tests common functionality\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport os\nimport sys\nimport webbrowser\n\nfrom invoke import task, run\n\ndocs_dir = 'docs'\nbuild_dir = os.path.join(docs_dir, '_build')\n\n@task\ndef test(coverage=False, browse=False):\n flake()\n import pytest\n args = []\n if coverage:\n args.extend(['--cov=webargs', '--cov-report=term', '--cov-report=html'])\n\n if sys.version_info < (3, 4, 1):\n args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttp')))\n retcode = pytest.main(args)\n if coverage and browse:\n webbrowser.open_new_tab(os.path.join('htmlcov', 'index.html'))\n sys.exit(retcode)\n\n@task\ndef flake():\n \"\"\"Run flake8 on codebase.\"\"\"\n cmd = 'flake8 .'\n if sys.version_info < (3, 4, 1):\n excludes = [\n os.path.join('tests', 'test_aiohttp'),\n os.path.join('webargs', 'async.py'),\n os.path.join('webargs', 'aiohttpparser.py'),\n os.path.join('examples', 'annotations_example.py'),\n 'build',\n ]\n cmd += ' --exclude={0}'.format(','.join(excludes))\n run(cmd, echo=True)\n\n@task\ndef clean():\n run(\"rm -rf build\")\n run(\"rm -rf dist\")\n run(\"rm -rf webargs.egg-info\")\n clean_docs()\n print(\"Cleaned up.\")\n\n@task\ndef readme(browse=False):\n run('rst2html.py README.rst > README.html')\n if browse:\n webbrowser.open_new_tab('README.html')\n\n@task\ndef clean_docs():\n run(\"rm -rf %s\" % build_dir)\n\n@task\ndef browse_docs():\n path = os.path.join(build_dir, 'index.html')\n webbrowser.open_new_tab(path)\n\n@task\ndef docs(clean=False, browse=False, watch=False):\n \"\"\"Build the docs.\"\"\"\n if clean:\n clean_docs()\n run(\"sphinx-build %s %s\" % (docs_dir, build_dir), echo=True)\n if browse:\n browse_docs()\n if watch:\n watch_docs()\n\n@task\ndef watch_docs():\n \"\"\"Run build the docs when a file changes.\"\"\"\n try:\n import sphinx_autobuild # noqa\n except ImportError:\n print('ERROR: watch task requires the sphinx_autobuild package.')\n print('Install it with:')\n print(' pip install sphinx-autobuild')\n sys.exit(1)\n run('sphinx-autobuild {0} {1} --watch {2}'.format(\n docs_dir, build_dir, 'webargs'), echo=True, pty=True)\n\n@task\ndef publish(test=False):\n \"\"\"Publish to the cheeseshop.\"\"\"\n clean()\n if test:\n run('python setup.py register -r test sdist bdist_wheel', echo=True)\n run('twine upload dist/* -r test', echo=True)\n else:\n run('python setup.py register sdist bdist_wheel', echo=True)\n 
run('twine upload dist/*', echo=True)\n", "path": "tasks.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport os\nimport sys\nimport webbrowser\n\nfrom invoke import task, run\n\ndocs_dir = 'docs'\nbuild_dir = os.path.join(docs_dir, '_build')\n\n@task\ndef test(coverage=False, browse=False):\n flake()\n import pytest\n args = []\n if coverage:\n args.extend(['--cov=webargs', '--cov-report=term', '--cov-report=html'])\n\n if sys.version_info < (3, 4, 1):\n args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttpparser.py')))\n retcode = pytest.main(args)\n if coverage and browse:\n webbrowser.open_new_tab(os.path.join('htmlcov', 'index.html'))\n sys.exit(retcode)\n\n@task\ndef flake():\n \"\"\"Run flake8 on codebase.\"\"\"\n cmd = 'flake8 .'\n if sys.version_info < (3, 4, 1):\n excludes = [\n os.path.join('tests', 'apps', 'aiohttp_app.py'),\n os.path.join('tests', 'test_aiohttparser.py'),\n os.path.join('webargs', 'async.py'),\n os.path.join('webargs', 'aiohttpparser.py'),\n os.path.join('examples', 'annotations_example.py'),\n 'build',\n ]\n cmd += ' --exclude={0}'.format(','.join(excludes))\n run(cmd, echo=True)\n\n@task\ndef clean():\n run(\"rm -rf build\")\n run(\"rm -rf dist\")\n run(\"rm -rf webargs.egg-info\")\n clean_docs()\n print(\"Cleaned up.\")\n\n@task\ndef readme(browse=False):\n run('rst2html.py README.rst > README.html')\n if browse:\n webbrowser.open_new_tab('README.html')\n\n@task\ndef clean_docs():\n run(\"rm -rf %s\" % build_dir)\n\n@task\ndef browse_docs():\n path = os.path.join(build_dir, 'index.html')\n webbrowser.open_new_tab(path)\n\n@task\ndef docs(clean=False, browse=False, watch=False):\n \"\"\"Build the docs.\"\"\"\n if clean:\n clean_docs()\n run(\"sphinx-build %s %s\" % (docs_dir, build_dir), echo=True)\n if browse:\n browse_docs()\n if watch:\n watch_docs()\n\n@task\ndef watch_docs():\n \"\"\"Run build the docs when a file changes.\"\"\"\n try:\n import sphinx_autobuild # noqa\n except ImportError:\n print('ERROR: watch task requires the sphinx_autobuild package.')\n print('Install it with:')\n print(' pip install sphinx-autobuild')\n sys.exit(1)\n run('sphinx-autobuild {0} {1} --watch {2}'.format(\n docs_dir, build_dir, 'webargs'), echo=True, pty=True)\n\n@task\ndef publish(test=False):\n \"\"\"Publish to the cheeseshop.\"\"\"\n clean()\n if test:\n run('python setup.py register -r test sdist bdist_wheel', echo=True)\n run('twine upload dist/* -r test', echo=True)\n else:\n run('python setup.py register sdist bdist_wheel', echo=True)\n run('twine upload dist/*', echo=True)\n", "path": "tasks.py"}]}
| 1,201 | 275 |
gh_patches_debug_26599
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-347
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Flask integration wrong User IP behind proxy
When Sentry reports bugs, I get the wrong user IP.
I'm using a UWSGI and NGINX stack. I forward X-REAL-IP from NGINX to UWSGI. flask.request.remote_addr shows the correct user IP address, but sentry-sdk version 0.7.10 shows my server's IP address as the user IP.
Would you please assist?
--- END ISSUE ---
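For context, the WSGI helper shown further below only consults `HTTP_X_FORWARDED_FOR` before falling back to `REMOTE_ADDR`, so an `X-Real-IP` header set by NGINX is never consulted. A proxy-aware lookup could look like the following sketch (the function name and the header ordering are assumptions for illustration, not necessarily what sentry-sdk ships):

```python
# Illustrative sketch only; not the exact logic used by sentry-sdk.
def client_ip_from_environ(environ):
    """Best-effort client IP for event payloads; not for security decisions."""
    forwarded_for = environ.get("HTTP_X_FORWARDED_FOR")
    if forwarded_for:
        # X-Forwarded-For can be a comma-separated chain; take the client end.
        return forwarded_for.split(",")[0].strip()
    real_ip = environ.get("HTTP_X_REAL_IP")
    if real_ip:
        return real_ip.strip()
    return environ.get("REMOTE_ADDR")


if __name__ == "__main__":
    environ = {"HTTP_X_REAL_IP": "203.0.113.7", "REMOTE_ADDR": "10.0.0.2"}
    print(client_ip_from_environ(environ))  # -> 203.0.113.7
```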
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/wsgi.py`
Content:
```
1 import sys
2
3 from sentry_sdk.hub import Hub, _should_send_default_pii
4 from sentry_sdk.utils import capture_internal_exceptions, event_from_exception
5 from sentry_sdk._compat import PY2, reraise
6 from sentry_sdk.integrations._wsgi_common import _filter_headers
7
8 if False:
9 from typing import Callable
10 from typing import Dict
11 from typing import List
12 from typing import Iterator
13 from typing import Any
14 from typing import Tuple
15 from typing import Optional
16
17 from sentry_sdk.utils import ExcInfo
18
19
20 if PY2:
21
22 def wsgi_decoding_dance(s, charset="utf-8", errors="replace"):
23 # type: (str, str, str) -> str
24 return s.decode(charset, errors)
25
26
27 else:
28
29 def wsgi_decoding_dance(s, charset="utf-8", errors="replace"):
30 # type: (str, str, str) -> str
31 return s.encode("latin1").decode(charset, errors)
32
33
34 def get_host(environ):
35 # type: (Dict[str, str]) -> str
36 """Return the host for the given WSGI environment. Yanked from Werkzeug."""
37 if environ.get("HTTP_HOST"):
38 rv = environ["HTTP_HOST"]
39 if environ["wsgi.url_scheme"] == "http" and rv.endswith(":80"):
40 rv = rv[:-3]
41 elif environ["wsgi.url_scheme"] == "https" and rv.endswith(":443"):
42 rv = rv[:-4]
43 elif environ.get("SERVER_NAME"):
44 rv = environ["SERVER_NAME"]
45 if (environ["wsgi.url_scheme"], environ["SERVER_PORT"]) not in (
46 ("https", "443"),
47 ("http", "80"),
48 ):
49 rv += ":" + environ["SERVER_PORT"]
50 else:
51 # In spite of the WSGI spec, SERVER_NAME might not be present.
52 rv = "unknown"
53
54 return rv
55
56
57 def get_request_url(environ):
58 # type: (Dict[str, str]) -> str
59 """Return the absolute URL without query string for the given WSGI
60 environment."""
61 return "%s://%s/%s" % (
62 environ.get("wsgi.url_scheme"),
63 get_host(environ),
64 wsgi_decoding_dance(environ.get("PATH_INFO") or "").lstrip("/"),
65 )
66
67
68 class SentryWsgiMiddleware(object):
69 __slots__ = ("app",)
70
71 def __init__(self, app):
72 # type: (Callable) -> None
73 self.app = app
74
75 def __call__(self, environ, start_response):
76 # type: (Dict[str, str], Callable) -> _ScopedResponse
77 hub = Hub(Hub.current)
78
79 with hub:
80 with capture_internal_exceptions():
81 with hub.configure_scope() as scope:
82 scope.clear_breadcrumbs()
83 scope._name = "wsgi"
84 scope.add_event_processor(_make_wsgi_event_processor(environ))
85
86 try:
87 rv = self.app(environ, start_response)
88 except Exception:
89 reraise(*_capture_exception(hub))
90
91 return _ScopedResponse(hub, rv)
92
93
94 def _get_environ(environ):
95 # type: (Dict[str, str]) -> Iterator[Tuple[str, str]]
96 """
97 Returns our whitelisted environment variables.
98 """
99 keys = ("SERVER_NAME", "SERVER_PORT")
100 if _should_send_default_pii():
101 keys += ("REMOTE_ADDR",) # type: ignore
102
103 for key in keys:
104 if key in environ:
105 yield key, environ[key]
106
107
108 # `get_headers` comes from `werkzeug.datastructures.EnvironHeaders`
109 #
110 # We need this function because Django does not give us a "pure" http header
111 # dict. So we might as well use it for all WSGI integrations.
112 def _get_headers(environ):
113 # type: (Dict[str, str]) -> Iterator[Tuple[str, str]]
114 """
115 Returns only proper HTTP headers.
116
117 """
118 for key, value in environ.items():
119 key = str(key)
120 if key.startswith("HTTP_") and key not in (
121 "HTTP_CONTENT_TYPE",
122 "HTTP_CONTENT_LENGTH",
123 ):
124 yield key[5:].replace("_", "-").title(), value
125 elif key in ("CONTENT_TYPE", "CONTENT_LENGTH"):
126 yield key.replace("_", "-").title(), value
127
128
129 def get_client_ip(environ):
130 # type: (Dict[str, str]) -> Optional[Any]
131 """
132 Naively yank the first IP address in an X-Forwarded-For header
133 and assume this is correct.
134
135 Note: Don't use this in security sensitive situations since this
136 value may be forged from a client.
137 """
138 try:
139 return environ["HTTP_X_FORWARDED_FOR"].split(",")[0].strip()
140 except (KeyError, IndexError):
141 return environ.get("REMOTE_ADDR")
142
143
144 def _capture_exception(hub):
145 # type: (Hub) -> ExcInfo
146 # Check client here as it might have been unset while streaming response
147 if hub.client is not None:
148 exc_info = sys.exc_info()
149 event, hint = event_from_exception(
150 exc_info,
151 client_options=hub.client.options,
152 mechanism={"type": "wsgi", "handled": False},
153 )
154 hub.capture_event(event, hint=hint)
155 return exc_info
156
157
158 class _ScopedResponse(object):
159 __slots__ = ("_response", "_hub")
160
161 def __init__(self, hub, response):
162 # type: (Hub, List[bytes]) -> None
163 self._hub = hub
164 self._response = response
165
166 def __iter__(self):
167 # type: () -> Iterator[bytes]
168 iterator = iter(self._response)
169
170 while True:
171 with self._hub:
172 try:
173 chunk = next(iterator)
174 except StopIteration:
175 break
176 except Exception:
177 reraise(*_capture_exception(self._hub))
178
179 yield chunk
180
181 def close(self):
182 with self._hub:
183 try:
184 self._response.close()
185 except AttributeError:
186 pass
187 except Exception:
188 reraise(*_capture_exception(self._hub))
189
190
191 def _make_wsgi_event_processor(environ):
192 # type: (Dict[str, str]) -> Callable
193 # It's a bit unfortunate that we have to extract and parse the request data
194 # from the environ so eagerly, but there are a few good reasons for this.
195 #
196 # We might be in a situation where the scope/hub never gets torn down
197 # properly. In that case we will have an unnecessary strong reference to
198 # all objects in the environ (some of which may take a lot of memory) when
199 # we're really just interested in a few of them.
200 #
201 # Keeping the environment around for longer than the request lifecycle is
202 # also not necessarily something uWSGI can deal with:
203 # https://github.com/unbit/uwsgi/issues/1950
204
205 client_ip = get_client_ip(environ)
206 request_url = get_request_url(environ)
207 query_string = environ.get("QUERY_STRING")
208 method = environ.get("REQUEST_METHOD")
209 env = dict(_get_environ(environ))
210 headers = _filter_headers(dict(_get_headers(environ)))
211
212 def event_processor(event, hint):
213 # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]
214 with capture_internal_exceptions():
215 # if the code below fails halfway through we at least have some data
216 request_info = event.setdefault("request", {})
217
218 if _should_send_default_pii():
219 user_info = event.setdefault("user", {})
220 user_info["ip_address"] = client_ip
221
222 request_info["url"] = request_url
223 request_info["query_string"] = query_string
224 request_info["method"] = method
225 request_info["env"] = env
226 request_info["headers"] = headers
227
228 return event
229
230 return event_processor
231
```
--- END FILES ---
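Editorial aside: the middleware in the file above is designed to wrap a plain WSGI callable. A minimal, illustrative setup is sketched below; the DSN and the tiny app are placeholders, not part of the report or the repository.
```python
# Illustrative only: wiring SentryWsgiMiddleware around a bare WSGI app.
import sentry_sdk
from sentry_sdk.integrations.wsgi import SentryWsgiMiddleware

def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

# Placeholder DSN; a real project key goes here.
sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0")
application = SentryWsgiMiddleware(application)
```
Each request then runs inside its own Hub, which is what lets the event processor in the file above attach per-request data such as the client IP.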
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sentry_sdk/integrations/wsgi.py b/sentry_sdk/integrations/wsgi.py
--- a/sentry_sdk/integrations/wsgi.py
+++ b/sentry_sdk/integrations/wsgi.py
@@ -96,9 +96,10 @@
"""
Returns our whitelisted environment variables.
"""
- keys = ("SERVER_NAME", "SERVER_PORT")
+ keys = ["SERVER_NAME", "SERVER_PORT"]
if _should_send_default_pii():
- keys += ("REMOTE_ADDR",) # type: ignore
+ # Add all three headers here to make debugging of proxy setup easier.
+ keys += ["REMOTE_ADDR", "HTTP_X_FORWARDED_FOR", "HTTP_X_REAL_IP"]
for key in keys:
if key in environ:
@@ -129,16 +130,21 @@
def get_client_ip(environ):
# type: (Dict[str, str]) -> Optional[Any]
"""
- Naively yank the first IP address in an X-Forwarded-For header
- and assume this is correct.
-
- Note: Don't use this in security sensitive situations since this
- value may be forged from a client.
+ Infer the user IP address from various headers. This cannot be used in
+ security sensitive situations since the value may be forged from a client,
+ but it's good enough for the event payload.
"""
try:
return environ["HTTP_X_FORWARDED_FOR"].split(",")[0].strip()
except (KeyError, IndexError):
- return environ.get("REMOTE_ADDR")
+ pass
+
+ try:
+ return environ["HTTP_X_REAL_IP"]
+ except KeyError:
+ pass
+
+ return environ.get("REMOTE_ADDR")
def _capture_exception(hub):
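A minimal sketch of the header precedence this patch introduces follows; the helper name and sample environ are illustrative, and the patched function returns the first X-Forwarded-For entry unconditionally whenever that header is present.
```python
# Sketch: same lookup order as the patched get_client_ip.
def infer_client_ip(environ):
    forwarded = environ.get("HTTP_X_FORWARDED_FOR")
    if forwarded:
        # First hop in the comma-separated chain is the original client.
        return forwarded.split(",")[0].strip()
    real_ip = environ.get("HTTP_X_REAL_IP")
    if real_ip:
        # Header commonly set by nginx, as in the reporter's proxy setup.
        return real_ip
    # Fall back to the socket peer address.
    return environ.get("REMOTE_ADDR")

print(infer_client_ip({"HTTP_X_REAL_IP": "203.0.113.7", "REMOTE_ADDR": "10.0.0.1"}))
# -> 203.0.113.7
```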
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/wsgi.py b/sentry_sdk/integrations/wsgi.py\n--- a/sentry_sdk/integrations/wsgi.py\n+++ b/sentry_sdk/integrations/wsgi.py\n@@ -96,9 +96,10 @@\n \"\"\"\n Returns our whitelisted environment variables.\n \"\"\"\n- keys = (\"SERVER_NAME\", \"SERVER_PORT\")\n+ keys = [\"SERVER_NAME\", \"SERVER_PORT\"]\n if _should_send_default_pii():\n- keys += (\"REMOTE_ADDR\",) # type: ignore\n+ # Add all three headers here to make debugging of proxy setup easier.\n+ keys += [\"REMOTE_ADDR\", \"HTTP_X_FORWARDED_FOR\", \"HTTP_X_REAL_IP\"]\n \n for key in keys:\n if key in environ:\n@@ -129,16 +130,21 @@\n def get_client_ip(environ):\n # type: (Dict[str, str]) -> Optional[Any]\n \"\"\"\n- Naively yank the first IP address in an X-Forwarded-For header\n- and assume this is correct.\n-\n- Note: Don't use this in security sensitive situations since this\n- value may be forged from a client.\n+ Infer the user IP address from various headers. This cannot be used in\n+ security sensitive situations since the value may be forged from a client,\n+ but it's good enough for the event payload.\n \"\"\"\n try:\n return environ[\"HTTP_X_FORWARDED_FOR\"].split(\",\")[0].strip()\n except (KeyError, IndexError):\n- return environ.get(\"REMOTE_ADDR\")\n+ pass\n+\n+ try:\n+ return environ[\"HTTP_X_REAL_IP\"]\n+ except KeyError:\n+ pass\n+\n+ return environ.get(\"REMOTE_ADDR\")\n \n \n def _capture_exception(hub):\n", "issue": "Flask integration wrong User IP behind proxy\nWhen sentry report bugs I get wrong users IP. \r\n\r\nI'm using UWSGI and NGINX stack. I forward X-REAL-IP from NGINX to UWSGI. Also flask.request.remote_addr shows correct user IP address but sentry-sdk version 0.7.10 shows my server IP address as user IP.\r\n\r\nWould you please assist?\n", "before_files": [{"content": "import sys\n\nfrom sentry_sdk.hub import Hub, _should_send_default_pii\nfrom sentry_sdk.utils import capture_internal_exceptions, event_from_exception\nfrom sentry_sdk._compat import PY2, reraise\nfrom sentry_sdk.integrations._wsgi_common import _filter_headers\n\nif False:\n from typing import Callable\n from typing import Dict\n from typing import List\n from typing import Iterator\n from typing import Any\n from typing import Tuple\n from typing import Optional\n\n from sentry_sdk.utils import ExcInfo\n\n\nif PY2:\n\n def wsgi_decoding_dance(s, charset=\"utf-8\", errors=\"replace\"):\n # type: (str, str, str) -> str\n return s.decode(charset, errors)\n\n\nelse:\n\n def wsgi_decoding_dance(s, charset=\"utf-8\", errors=\"replace\"):\n # type: (str, str, str) -> str\n return s.encode(\"latin1\").decode(charset, errors)\n\n\ndef get_host(environ):\n # type: (Dict[str, str]) -> str\n \"\"\"Return the host for the given WSGI environment. 
Yanked from Werkzeug.\"\"\"\n if environ.get(\"HTTP_HOST\"):\n rv = environ[\"HTTP_HOST\"]\n if environ[\"wsgi.url_scheme\"] == \"http\" and rv.endswith(\":80\"):\n rv = rv[:-3]\n elif environ[\"wsgi.url_scheme\"] == \"https\" and rv.endswith(\":443\"):\n rv = rv[:-4]\n elif environ.get(\"SERVER_NAME\"):\n rv = environ[\"SERVER_NAME\"]\n if (environ[\"wsgi.url_scheme\"], environ[\"SERVER_PORT\"]) not in (\n (\"https\", \"443\"),\n (\"http\", \"80\"),\n ):\n rv += \":\" + environ[\"SERVER_PORT\"]\n else:\n # In spite of the WSGI spec, SERVER_NAME might not be present.\n rv = \"unknown\"\n\n return rv\n\n\ndef get_request_url(environ):\n # type: (Dict[str, str]) -> str\n \"\"\"Return the absolute URL without query string for the given WSGI\n environment.\"\"\"\n return \"%s://%s/%s\" % (\n environ.get(\"wsgi.url_scheme\"),\n get_host(environ),\n wsgi_decoding_dance(environ.get(\"PATH_INFO\") or \"\").lstrip(\"/\"),\n )\n\n\nclass SentryWsgiMiddleware(object):\n __slots__ = (\"app\",)\n\n def __init__(self, app):\n # type: (Callable) -> None\n self.app = app\n\n def __call__(self, environ, start_response):\n # type: (Dict[str, str], Callable) -> _ScopedResponse\n hub = Hub(Hub.current)\n\n with hub:\n with capture_internal_exceptions():\n with hub.configure_scope() as scope:\n scope.clear_breadcrumbs()\n scope._name = \"wsgi\"\n scope.add_event_processor(_make_wsgi_event_processor(environ))\n\n try:\n rv = self.app(environ, start_response)\n except Exception:\n reraise(*_capture_exception(hub))\n\n return _ScopedResponse(hub, rv)\n\n\ndef _get_environ(environ):\n # type: (Dict[str, str]) -> Iterator[Tuple[str, str]]\n \"\"\"\n Returns our whitelisted environment variables.\n \"\"\"\n keys = (\"SERVER_NAME\", \"SERVER_PORT\")\n if _should_send_default_pii():\n keys += (\"REMOTE_ADDR\",) # type: ignore\n\n for key in keys:\n if key in environ:\n yield key, environ[key]\n\n\n# `get_headers` comes from `werkzeug.datastructures.EnvironHeaders`\n#\n# We need this function because Django does not give us a \"pure\" http header\n# dict. 
So we might as well use it for all WSGI integrations.\ndef _get_headers(environ):\n # type: (Dict[str, str]) -> Iterator[Tuple[str, str]]\n \"\"\"\n Returns only proper HTTP headers.\n\n \"\"\"\n for key, value in environ.items():\n key = str(key)\n if key.startswith(\"HTTP_\") and key not in (\n \"HTTP_CONTENT_TYPE\",\n \"HTTP_CONTENT_LENGTH\",\n ):\n yield key[5:].replace(\"_\", \"-\").title(), value\n elif key in (\"CONTENT_TYPE\", \"CONTENT_LENGTH\"):\n yield key.replace(\"_\", \"-\").title(), value\n\n\ndef get_client_ip(environ):\n # type: (Dict[str, str]) -> Optional[Any]\n \"\"\"\n Naively yank the first IP address in an X-Forwarded-For header\n and assume this is correct.\n\n Note: Don't use this in security sensitive situations since this\n value may be forged from a client.\n \"\"\"\n try:\n return environ[\"HTTP_X_FORWARDED_FOR\"].split(\",\")[0].strip()\n except (KeyError, IndexError):\n return environ.get(\"REMOTE_ADDR\")\n\n\ndef _capture_exception(hub):\n # type: (Hub) -> ExcInfo\n # Check client here as it might have been unset while streaming response\n if hub.client is not None:\n exc_info = sys.exc_info()\n event, hint = event_from_exception(\n exc_info,\n client_options=hub.client.options,\n mechanism={\"type\": \"wsgi\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n return exc_info\n\n\nclass _ScopedResponse(object):\n __slots__ = (\"_response\", \"_hub\")\n\n def __init__(self, hub, response):\n # type: (Hub, List[bytes]) -> None\n self._hub = hub\n self._response = response\n\n def __iter__(self):\n # type: () -> Iterator[bytes]\n iterator = iter(self._response)\n\n while True:\n with self._hub:\n try:\n chunk = next(iterator)\n except StopIteration:\n break\n except Exception:\n reraise(*_capture_exception(self._hub))\n\n yield chunk\n\n def close(self):\n with self._hub:\n try:\n self._response.close()\n except AttributeError:\n pass\n except Exception:\n reraise(*_capture_exception(self._hub))\n\n\ndef _make_wsgi_event_processor(environ):\n # type: (Dict[str, str]) -> Callable\n # It's a bit unfortunate that we have to extract and parse the request data\n # from the environ so eagerly, but there are a few good reasons for this.\n #\n # We might be in a situation where the scope/hub never gets torn down\n # properly. 
In that case we will have an unnecessary strong reference to\n # all objects in the environ (some of which may take a lot of memory) when\n # we're really just interested in a few of them.\n #\n # Keeping the environment around for longer than the request lifecycle is\n # also not necessarily something uWSGI can deal with:\n # https://github.com/unbit/uwsgi/issues/1950\n\n client_ip = get_client_ip(environ)\n request_url = get_request_url(environ)\n query_string = environ.get(\"QUERY_STRING\")\n method = environ.get(\"REQUEST_METHOD\")\n env = dict(_get_environ(environ))\n headers = _filter_headers(dict(_get_headers(environ)))\n\n def event_processor(event, hint):\n # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]\n with capture_internal_exceptions():\n # if the code below fails halfway through we at least have some data\n request_info = event.setdefault(\"request\", {})\n\n if _should_send_default_pii():\n user_info = event.setdefault(\"user\", {})\n user_info[\"ip_address\"] = client_ip\n\n request_info[\"url\"] = request_url\n request_info[\"query_string\"] = query_string\n request_info[\"method\"] = method\n request_info[\"env\"] = env\n request_info[\"headers\"] = headers\n\n return event\n\n return event_processor\n", "path": "sentry_sdk/integrations/wsgi.py"}], "after_files": [{"content": "import sys\n\nfrom sentry_sdk.hub import Hub, _should_send_default_pii\nfrom sentry_sdk.utils import capture_internal_exceptions, event_from_exception\nfrom sentry_sdk._compat import PY2, reraise\nfrom sentry_sdk.integrations._wsgi_common import _filter_headers\n\nif False:\n from typing import Callable\n from typing import Dict\n from typing import List\n from typing import Iterator\n from typing import Any\n from typing import Tuple\n from typing import Optional\n\n from sentry_sdk.utils import ExcInfo\n\n\nif PY2:\n\n def wsgi_decoding_dance(s, charset=\"utf-8\", errors=\"replace\"):\n # type: (str, str, str) -> str\n return s.decode(charset, errors)\n\n\nelse:\n\n def wsgi_decoding_dance(s, charset=\"utf-8\", errors=\"replace\"):\n # type: (str, str, str) -> str\n return s.encode(\"latin1\").decode(charset, errors)\n\n\ndef get_host(environ):\n # type: (Dict[str, str]) -> str\n \"\"\"Return the host for the given WSGI environment. 
Yanked from Werkzeug.\"\"\"\n if environ.get(\"HTTP_HOST\"):\n rv = environ[\"HTTP_HOST\"]\n if environ[\"wsgi.url_scheme\"] == \"http\" and rv.endswith(\":80\"):\n rv = rv[:-3]\n elif environ[\"wsgi.url_scheme\"] == \"https\" and rv.endswith(\":443\"):\n rv = rv[:-4]\n elif environ.get(\"SERVER_NAME\"):\n rv = environ[\"SERVER_NAME\"]\n if (environ[\"wsgi.url_scheme\"], environ[\"SERVER_PORT\"]) not in (\n (\"https\", \"443\"),\n (\"http\", \"80\"),\n ):\n rv += \":\" + environ[\"SERVER_PORT\"]\n else:\n # In spite of the WSGI spec, SERVER_NAME might not be present.\n rv = \"unknown\"\n\n return rv\n\n\ndef get_request_url(environ):\n # type: (Dict[str, str]) -> str\n \"\"\"Return the absolute URL without query string for the given WSGI\n environment.\"\"\"\n return \"%s://%s/%s\" % (\n environ.get(\"wsgi.url_scheme\"),\n get_host(environ),\n wsgi_decoding_dance(environ.get(\"PATH_INFO\") or \"\").lstrip(\"/\"),\n )\n\n\nclass SentryWsgiMiddleware(object):\n __slots__ = (\"app\",)\n\n def __init__(self, app):\n # type: (Callable) -> None\n self.app = app\n\n def __call__(self, environ, start_response):\n # type: (Dict[str, str], Callable) -> _ScopedResponse\n hub = Hub(Hub.current)\n\n with hub:\n with capture_internal_exceptions():\n with hub.configure_scope() as scope:\n scope.clear_breadcrumbs()\n scope._name = \"wsgi\"\n scope.add_event_processor(_make_wsgi_event_processor(environ))\n\n try:\n rv = self.app(environ, start_response)\n except Exception:\n reraise(*_capture_exception(hub))\n\n return _ScopedResponse(hub, rv)\n\n\ndef _get_environ(environ):\n # type: (Dict[str, str]) -> Iterator[Tuple[str, str]]\n \"\"\"\n Returns our whitelisted environment variables.\n \"\"\"\n keys = [\"SERVER_NAME\", \"SERVER_PORT\"]\n if _should_send_default_pii():\n # Add all three headers here to make debugging of proxy setup easier.\n keys += [\"REMOTE_ADDR\", \"HTTP_X_FORWARDED_FOR\", \"HTTP_X_REAL_IP\"]\n\n for key in keys:\n if key in environ:\n yield key, environ[key]\n\n\n# `get_headers` comes from `werkzeug.datastructures.EnvironHeaders`\n#\n# We need this function because Django does not give us a \"pure\" http header\n# dict. So we might as well use it for all WSGI integrations.\ndef _get_headers(environ):\n # type: (Dict[str, str]) -> Iterator[Tuple[str, str]]\n \"\"\"\n Returns only proper HTTP headers.\n\n \"\"\"\n for key, value in environ.items():\n key = str(key)\n if key.startswith(\"HTTP_\") and key not in (\n \"HTTP_CONTENT_TYPE\",\n \"HTTP_CONTENT_LENGTH\",\n ):\n yield key[5:].replace(\"_\", \"-\").title(), value\n elif key in (\"CONTENT_TYPE\", \"CONTENT_LENGTH\"):\n yield key.replace(\"_\", \"-\").title(), value\n\n\ndef get_client_ip(environ):\n # type: (Dict[str, str]) -> Optional[Any]\n \"\"\"\n Infer the user IP address from various headers. 
This cannot be used in\n security sensitive situations since the value may be forged from a client,\n but it's good enough for the event payload.\n \"\"\"\n try:\n return environ[\"HTTP_X_FORWARDED_FOR\"].split(\",\")[0].strip()\n except (KeyError, IndexError):\n pass\n\n try:\n return environ[\"HTTP_X_REAL_IP\"]\n except KeyError:\n pass\n\n return environ.get(\"REMOTE_ADDR\")\n\n\ndef _capture_exception(hub):\n # type: (Hub) -> ExcInfo\n # Check client here as it might have been unset while streaming response\n if hub.client is not None:\n exc_info = sys.exc_info()\n event, hint = event_from_exception(\n exc_info,\n client_options=hub.client.options,\n mechanism={\"type\": \"wsgi\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n return exc_info\n\n\nclass _ScopedResponse(object):\n __slots__ = (\"_response\", \"_hub\")\n\n def __init__(self, hub, response):\n # type: (Hub, List[bytes]) -> None\n self._hub = hub\n self._response = response\n\n def __iter__(self):\n # type: () -> Iterator[bytes]\n iterator = iter(self._response)\n\n while True:\n with self._hub:\n try:\n chunk = next(iterator)\n except StopIteration:\n break\n except Exception:\n reraise(*_capture_exception(self._hub))\n\n yield chunk\n\n def close(self):\n with self._hub:\n try:\n self._response.close()\n except AttributeError:\n pass\n except Exception:\n reraise(*_capture_exception(self._hub))\n\n\ndef _make_wsgi_event_processor(environ):\n # type: (Dict[str, str]) -> Callable\n # It's a bit unfortunate that we have to extract and parse the request data\n # from the environ so eagerly, but there are a few good reasons for this.\n #\n # We might be in a situation where the scope/hub never gets torn down\n # properly. In that case we will have an unnecessary strong reference to\n # all objects in the environ (some of which may take a lot of memory) when\n # we're really just interested in a few of them.\n #\n # Keeping the environment around for longer than the request lifecycle is\n # also not necessarily something uWSGI can deal with:\n # https://github.com/unbit/uwsgi/issues/1950\n\n client_ip = get_client_ip(environ)\n request_url = get_request_url(environ)\n query_string = environ.get(\"QUERY_STRING\")\n method = environ.get(\"REQUEST_METHOD\")\n env = dict(_get_environ(environ))\n headers = _filter_headers(dict(_get_headers(environ)))\n\n def event_processor(event, hint):\n # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]\n with capture_internal_exceptions():\n # if the code below fails halfway through we at least have some data\n request_info = event.setdefault(\"request\", {})\n\n if _should_send_default_pii():\n user_info = event.setdefault(\"user\", {})\n user_info[\"ip_address\"] = client_ip\n\n request_info[\"url\"] = request_url\n request_info[\"query_string\"] = query_string\n request_info[\"method\"] = method\n request_info[\"env\"] = env\n request_info[\"headers\"] = headers\n\n return event\n\n return event_processor\n", "path": "sentry_sdk/integrations/wsgi.py"}]}
| 2,681 | 394 |
gh_patches_debug_12492
|
rasdani/github-patches
|
git_diff
|
TheAlgorithms__Python-9161
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
issue with permute_recursive
### What would you like to share?
Your code looks mostly correct, but there's one issue in the `permute_recursive` function due to the modification of the `nums` list. Lists in Python are mutable, and when you use `nums.pop(0)`, it modifies the original `nums` list. This can lead to incorrect results and even an infinite loop.
To fix this, you should pass a copy of the `nums` list to the recursive function. Here's the corrected `permute_recursive` function:
```
def permute_recursive(nums: list[int]) -> list[list[int]]:
"""
Return all permutations.
>>> permute_recursive([1, 2, 3])
[[3, 2, 1], [2, 3, 1], [1, 3, 2], [3, 1, 2], [2, 1, 3], [1, 2, 3]]
"""
result: list[list[int]] = []
if len(nums) == 0:
return [[]]
for _ in range(len(nums)):
n = nums.pop(0)
permutations = permute_recursive(nums[:]) # Make a copy of nums
for perm in permutations:
perm.append(n)
result.extend(permutations)
nums.append(n)
return result
```
With this modification, your code should work correctly for both `permute_recursive` and `permute_backtrack`.
### Additional information
_No response_
--- END ISSUE ---
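As a quick illustration of the aliasing problem the issue describes (hypothetical snippet, not taken from the repository):
```python
# Two names, one list object: mutating through either name is visible to both.
nums = [1, 2, 3]
alias = nums            # no copy is made here
alias.pop(0)
print(nums)             # [2, 3] -- the caller's list changed too

# A shallow copy decouples the callee from the caller's list.
copied = nums[:]        # equivalently nums.copy()
copied.pop(0)
print(nums)             # [2, 3] -- unchanged this time
```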
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `data_structures/arrays/permutations.py`
Content:
```
1 def permute_recursive(nums: list[int]) -> list[list[int]]:
2 """
3 Return all permutations.
4
5 >>> permute_recursive([1, 2, 3])
6 [[3, 2, 1], [2, 3, 1], [1, 3, 2], [3, 1, 2], [2, 1, 3], [1, 2, 3]]
7 """
8 result: list[list[int]] = []
9 if len(nums) == 0:
10 return [[]]
11 for _ in range(len(nums)):
12 n = nums.pop(0)
13 permutations = permute_recursive(nums)
14 for perm in permutations:
15 perm.append(n)
16 result.extend(permutations)
17 nums.append(n)
18 return result
19
20
21 def permute_backtrack(nums: list[int]) -> list[list[int]]:
22 """
23 Return all permutations of the given list.
24
25 >>> permute_backtrack([1, 2, 3])
26 [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 2, 1], [3, 1, 2]]
27 """
28
29 def backtrack(start: int) -> None:
30 if start == len(nums) - 1:
31 output.append(nums[:])
32 else:
33 for i in range(start, len(nums)):
34 nums[start], nums[i] = nums[i], nums[start]
35 backtrack(start + 1)
36 nums[start], nums[i] = nums[i], nums[start] # backtrack
37
38 output: list[list[int]] = []
39 backtrack(0)
40 return output
41
42
43 if __name__ == "__main__":
44 import doctest
45
46 res = permute_backtrack([1, 2, 3])
47 print(res)
48 doctest.testmod()
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/data_structures/arrays/permutations.py b/data_structures/arrays/permutations.py
--- a/data_structures/arrays/permutations.py
+++ b/data_structures/arrays/permutations.py
@@ -10,7 +10,7 @@
return [[]]
for _ in range(len(nums)):
n = nums.pop(0)
- permutations = permute_recursive(nums)
+ permutations = permute_recursive(nums.copy())
for perm in permutations:
perm.append(n)
result.extend(permutations)
@@ -43,6 +43,6 @@
if __name__ == "__main__":
import doctest
- res = permute_backtrack([1, 2, 3])
- print(res)
+ result = permute_backtrack([1, 2, 3])
+ print(result)
doctest.testmod()
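A small consistency check one could run against the patched module (sketch; assumes the repository root is on `sys.path` so the file is importable as a module):
```python
# Sketch: both implementations should produce the same 3! = 6 permutations,
# even though they emit them in different orders.
from data_structures.arrays.permutations import permute_backtrack, permute_recursive

data = [1, 2, 3]
recursive = {tuple(p) for p in permute_recursive(list(data))}
backtrack = {tuple(p) for p in permute_backtrack(list(data))}
assert recursive == backtrack
assert len(recursive) == 6
print("both implementations agree")
```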
|
{"golden_diff": "diff --git a/data_structures/arrays/permutations.py b/data_structures/arrays/permutations.py\n--- a/data_structures/arrays/permutations.py\n+++ b/data_structures/arrays/permutations.py\n@@ -10,7 +10,7 @@\n return [[]]\n for _ in range(len(nums)):\n n = nums.pop(0)\n- permutations = permute_recursive(nums)\n+ permutations = permute_recursive(nums.copy())\n for perm in permutations:\n perm.append(n)\n result.extend(permutations)\n@@ -43,6 +43,6 @@\n if __name__ == \"__main__\":\n import doctest\n \n- res = permute_backtrack([1, 2, 3])\n- print(res)\n+ result = permute_backtrack([1, 2, 3])\n+ print(result)\n doctest.testmod()\n", "issue": "issue with permute_recursive\n### What would you like to share?\n\nYour code looks mostly correct, but there's one issue in the `permute_recursive` function due to the modification of the `nums` list. Lists in Python are mutable, and when you use `nums.pop(0)`, it modifies the original `nums` list. This can lead to incorrect results and even an infinite loop.\r\n\r\nTo fix this, you should pass a copy of the `nums` list to the recursive function. Here's the corrected `permute_recursive` function:\r\n\r\n\r\ndef permute_recursive(nums: list[int]) -> list[list[int]]:\r\n \"\"\"\r\n Return all permutations.\r\n\r\n >>> permute_recursive([1, 2, 3])\r\n [[3, 2, 1], [2, 3, 1], [1, 3, 2], [3, 1, 2], [2, 1, 3], [1, 2, 3]]\r\n \"\"\"\r\n result: list[list[int]] = []\r\n if len(nums) == 0:\r\n return [[]]\r\n for _ in range(len(nums)):\r\n n = nums.pop(0)\r\n permutations = permute_recursive(nums[:]) # Make a copy of nums\r\n for perm in permutations:\r\n perm.append(n)\r\n result.extend(permutations)\r\n nums.append(n)\r\n return result\r\n```\r\n\r\nWith this modification, your code should work correctly for both `permute_recursive` and `permute_backtrack`.\n\n### Additional information\n\n_No response_\n", "before_files": [{"content": "def permute_recursive(nums: list[int]) -> list[list[int]]:\n \"\"\"\n Return all permutations.\n\n >>> permute_recursive([1, 2, 3])\n [[3, 2, 1], [2, 3, 1], [1, 3, 2], [3, 1, 2], [2, 1, 3], [1, 2, 3]]\n \"\"\"\n result: list[list[int]] = []\n if len(nums) == 0:\n return [[]]\n for _ in range(len(nums)):\n n = nums.pop(0)\n permutations = permute_recursive(nums)\n for perm in permutations:\n perm.append(n)\n result.extend(permutations)\n nums.append(n)\n return result\n\n\ndef permute_backtrack(nums: list[int]) -> list[list[int]]:\n \"\"\"\n Return all permutations of the given list.\n\n >>> permute_backtrack([1, 2, 3])\n [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 2, 1], [3, 1, 2]]\n \"\"\"\n\n def backtrack(start: int) -> None:\n if start == len(nums) - 1:\n output.append(nums[:])\n else:\n for i in range(start, len(nums)):\n nums[start], nums[i] = nums[i], nums[start]\n backtrack(start + 1)\n nums[start], nums[i] = nums[i], nums[start] # backtrack\n\n output: list[list[int]] = []\n backtrack(0)\n return output\n\n\nif __name__ == \"__main__\":\n import doctest\n\n res = permute_backtrack([1, 2, 3])\n print(res)\n doctest.testmod()\n", "path": "data_structures/arrays/permutations.py"}], "after_files": [{"content": "def permute_recursive(nums: list[int]) -> list[list[int]]:\n \"\"\"\n Return all permutations.\n\n >>> permute_recursive([1, 2, 3])\n [[3, 2, 1], [2, 3, 1], [1, 3, 2], [3, 1, 2], [2, 1, 3], [1, 2, 3]]\n \"\"\"\n result: list[list[int]] = []\n if len(nums) == 0:\n return [[]]\n for _ in range(len(nums)):\n n = nums.pop(0)\n permutations = permute_recursive(nums.copy())\n for perm in permutations:\n 
perm.append(n)\n result.extend(permutations)\n nums.append(n)\n return result\n\n\ndef permute_backtrack(nums: list[int]) -> list[list[int]]:\n \"\"\"\n Return all permutations of the given list.\n\n >>> permute_backtrack([1, 2, 3])\n [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 2, 1], [3, 1, 2]]\n \"\"\"\n\n def backtrack(start: int) -> None:\n if start == len(nums) - 1:\n output.append(nums[:])\n else:\n for i in range(start, len(nums)):\n nums[start], nums[i] = nums[i], nums[start]\n backtrack(start + 1)\n nums[start], nums[i] = nums[i], nums[start] # backtrack\n\n output: list[list[int]] = []\n backtrack(0)\n return output\n\n\nif __name__ == \"__main__\":\n import doctest\n\n result = permute_backtrack([1, 2, 3])\n print(result)\n doctest.testmod()\n", "path": "data_structures/arrays/permutations.py"}]}
| 1,078 | 187 |
gh_patches_debug_39293
|
rasdani/github-patches
|
git_diff
|
cookiecutter__cookiecutter-451
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Choices in Context
Add support for choices in a context as suggested in #441.
I noticed that we are using custom methods for retrieving the user input and decided to let `click.prompt` handle the Python2/Python3 compatibility matters. This PR requires thorough reviews and testing before merging it as I touched some core functions and I want to be certain I didn't break anything :exclamation:
The CI might not detect these errors as most of the `read_response` calls are still `monkeypatched`. I'm afraid we need to do some manual testing to be 100% positive that everything is working as expected :sweat:
I implemented several tests for the new functionality. :smiley:
Please let me know your thoughts and I'll be happy to update this PR.
--- END ISSUE ---
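The PR description above does not show the final choice-prompting code, so purely as an illustration of the direction it describes (the function name, wording, and option handling here are assumptions), a list-valued context entry could be read with click's built-in `Choice` type:
```python
# Hypothetical sketch of prompting for a choice variable via click.
import click

def read_user_choice(var_name, options):
    # click validates the answer against `options` and re-prompts on bad input.
    return click.prompt(
        "Select {}".format(var_name),
        type=click.Choice([str(opt) for opt in options]),
        default=str(options[0]),
    )

# Example (interactive): license = read_user_choice("license", ["MIT", "BSD-3", "GPLv3"])
```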
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cookiecutter/compat.py`
Content:
```
1 import os
2 import sys
3
4 PY3 = sys.version_info[0] == 3
5 OLD_PY2 = sys.version_info[:2] < (2, 7)
6
7 if PY3: # pragma: no cover
8 input_str = 'builtins.input'
9 iteritems = lambda d: iter(d.items())
10 from unittest.mock import patch
11 from io import StringIO
12
13 def read_response(prompt=''):
14 """
15 Prompt the user for a response.
16
17 Prints the given prompt (which should be a Unicode string),
18 and returns the text entered by the user as a Unicode string.
19
20 :param prompt: A Unicode string that is presented to the user.
21 """
22 # The Python 3 input function does exactly what we want
23 return input(prompt)
24
25 else: # pragma: no cover
26 from __builtin__ import raw_input
27 input = raw_input
28 input_str = '__builtin__.raw_input'
29 iteritems = lambda d: d.iteritems()
30 from mock import patch
31 from cStringIO import StringIO
32
33 def read_response(prompt=''):
34 """
35 Prompt the user for a response.
36
37 Prints the given prompt (which should be a Unicode string),
38 and returns the text entered by the user as a Unicode string.
39
40 :param prompt: A Unicode string that is presented to the user.
41 """
42 # For Python 2, raw_input takes a byte string argument for the prompt.
43 # This must be encoded using the encoding used by sys.stdout.
44 # The result is a byte string encoding using sys.stdin.encoding.
45 # However, if the program is not being run interactively, sys.stdout
46 # and sys.stdin may not have encoding attributes.
47 # In that case we don't print a prompt (stdin/out isn't interactive,
48 # so prompting is pointless), and we assume the returned data is
49 # encoded using sys.getdefaultencoding(). This may not be right,
50 # but it's likely the best we can do.
51 # Isn't Python 2 encoding support wonderful? :-)
52 if sys.stdout.encoding:
53 prompt = prompt.encode(sys.stdout.encoding)
54 else:
55 prompt = ''
56 enc = sys.stdin.encoding or sys.getdefaultencoding()
57 return raw_input(prompt).decode(enc)
58
59
60 if PY3: # Forced testing
61
62 from shutil import which
63
64 else: # Forced testing
65
66 def is_exe(program):
67 """
68 Returns whether or not a file is an executable.
69 """
70 return os.path.isfile(program) and os.access(program, os.X_OK)
71
72 def which(cmd, mode=os.F_OK | os.X_OK, path=None):
73 """Given a command, mode, and a PATH string, return the path which
74 conforms to the given mode on the PATH, or None if there is no such
75 file.
76 `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result
77 of os.environ.get("PATH"), or can be overridden with a custom search
78 path.
79
80 Note: This function was backported from the Python 3 source code.
81 """
82 # Check that a given file can be accessed with the correct mode.
83 # Additionally check that `file` is not a directory, as on Windows
84 # directories pass the os.access check.
85 def _access_check(fn, mode):
86 return (os.path.exists(fn) and os.access(fn, mode) and
87 not os.path.isdir(fn))
88
89 # If we're given a path with a directory part, look it up directly
90 # rather than referring to PATH directories. This includes checking
91 # relative to the current directory, e.g. ./script
92 if os.path.dirname(cmd):
93 if _access_check(cmd, mode):
94 return cmd
95 return None
96
97 if path is None:
98 path = os.environ.get("PATH", os.defpath)
99 if not path:
100 return None
101 path = path.split(os.pathsep)
102
103 if sys.platform == "win32":
104 # The current directory takes precedence on Windows.
105 if os.curdir not in path:
106 path.insert(0, os.curdir)
107
108 # PATHEXT is necessary to check on Windows.
109 pathext = os.environ.get("PATHEXT", "").split(os.pathsep)
110 # See if the given file matches any of the expected path
111 # extensions. This will allow us to short circuit when given
112 # "python.exe". If it does match, only test that one, otherwise we
113 # have to try others.
114 if any(cmd.lower().endswith(ext.lower()) for ext in pathext):
115 files = [cmd]
116 else:
117 files = [cmd + ext for ext in pathext]
118 else:
119 # On other platforms you don't have things like PATHEXT to tell you
120 # what file suffixes are executable, so just pass on cmd as-is.
121 files = [cmd]
122
123 seen = set()
124 for dir in path:
125 normdir = os.path.normcase(dir)
126 if normdir not in seen:
127 seen.add(normdir)
128 for thefile in files:
129 name = os.path.join(dir, thefile)
130 if _access_check(name, mode):
131 return name
132 return None
133
134
135 def is_string(obj):
136 """Determine if an object is a string."""
137 return isinstance(obj, str if PY3 else basestring)
138
139
140 _hush_pyflakes = (patch, StringIO, which)
141
```
Path: `cookiecutter/prompt.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 cookiecutter.prompt
6 ---------------------
7
8 Functions for prompting the user for project info.
9 """
10
11 from __future__ import unicode_literals
12 import sys
13
14 from .compat import iteritems, read_response, is_string
15 from jinja2.environment import Environment
16
17
18 def prompt_for_config(context, no_input=False):
19 """
20 Prompts the user to enter new config, using context as a source for the
21 field names and sample values.
22
23 :param no_input: Prompt the user at command line for manual configuration?
24 """
25 cookiecutter_dict = {}
26 env = Environment()
27
28 for key, raw in iteritems(context['cookiecutter']):
29 if key.startswith('_'):
30 cookiecutter_dict[key] = raw
31 continue
32
33 raw = raw if is_string(raw) else str(raw)
34 val = env.from_string(raw).render(cookiecutter=cookiecutter_dict)
35
36 if not no_input:
37 prompt = '{0} (default is "{1}")? '.format(key, val)
38
39 new_val = read_response(prompt).strip()
40
41 if new_val != '':
42 val = new_val
43
44 cookiecutter_dict[key] = val
45 return cookiecutter_dict
46
47
48 def query_yes_no(question, default='yes'):
49 """
50 Ask a yes/no question via `read_response()` and return their answer.
51
52 :param question: A string that is presented to the user.
53 :param default: The presumed answer if the user just hits <Enter>.
54 It must be "yes" (the default), "no" or None (meaning
55 an answer is required of the user).
56
57 The "answer" return value is one of "yes" or "no".
58
59 Adapted from
60 http://stackoverflow.com/questions/3041986/python-command-line-yes-no-input
61 http://code.activestate.com/recipes/577058/
62
63 """
64 valid = {'yes': True, 'y': True, 'ye': True, 'no': False, 'n': False}
65 if default is None:
66 prompt = ' [y/n] '
67 elif default == 'yes':
68 prompt = ' [Y/n] '
69 elif default == 'no':
70 prompt = ' [y/N] '
71 else:
72 raise ValueError('Invalid default answer: "{0}"'.format(default))
73
74 while True:
75 sys.stdout.write(question + prompt)
76 choice = read_response().lower()
77
78 if default is not None and choice == '':
79 return valid[default]
80 elif choice in valid:
81 return valid[choice]
82 else:
83 sys.stdout.write('Please respond with "yes" or "no" '
84 '(or "y" or "n").\n')
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cookiecutter/compat.py b/cookiecutter/compat.py
--- a/cookiecutter/compat.py
+++ b/cookiecutter/compat.py
@@ -4,23 +4,13 @@
PY3 = sys.version_info[0] == 3
OLD_PY2 = sys.version_info[:2] < (2, 7)
+
if PY3: # pragma: no cover
input_str = 'builtins.input'
iteritems = lambda d: iter(d.items())
from unittest.mock import patch
from io import StringIO
- def read_response(prompt=''):
- """
- Prompt the user for a response.
-
- Prints the given prompt (which should be a Unicode string),
- and returns the text entered by the user as a Unicode string.
-
- :param prompt: A Unicode string that is presented to the user.
- """
- # The Python 3 input function does exactly what we want
- return input(prompt)
else: # pragma: no cover
from __builtin__ import raw_input
@@ -30,32 +20,6 @@
from mock import patch
from cStringIO import StringIO
- def read_response(prompt=''):
- """
- Prompt the user for a response.
-
- Prints the given prompt (which should be a Unicode string),
- and returns the text entered by the user as a Unicode string.
-
- :param prompt: A Unicode string that is presented to the user.
- """
- # For Python 2, raw_input takes a byte string argument for the prompt.
- # This must be encoded using the encoding used by sys.stdout.
- # The result is a byte string encoding using sys.stdin.encoding.
- # However, if the program is not being run interactively, sys.stdout
- # and sys.stdin may not have encoding attributes.
- # In that case we don't print a prompt (stdin/out isn't interactive,
- # so prompting is pointless), and we assume the returned data is
- # encoded using sys.getdefaultencoding(). This may not be right,
- # but it's likely the best we can do.
- # Isn't Python 2 encoding support wonderful? :-)
- if sys.stdout.encoding:
- prompt = prompt.encode(sys.stdout.encoding)
- else:
- prompt = ''
- enc = sys.stdin.encoding or sys.getdefaultencoding()
- return raw_input(prompt).decode(enc)
-
if PY3: # Forced testing
diff --git a/cookiecutter/prompt.py b/cookiecutter/prompt.py
--- a/cookiecutter/prompt.py
+++ b/cookiecutter/prompt.py
@@ -11,10 +11,29 @@
from __future__ import unicode_literals
import sys
-from .compat import iteritems, read_response, is_string
+import click
+
+from .compat import iteritems, is_string
from jinja2.environment import Environment
+def read_response(prompt=''):
+ """Prompt the user and return the entered value or an empty string.
+
+ :param str prompt: Text to display to the user
+ """
+ # Please see http://click.pocoo.org/4/api/#click.prompt
+ # default: use an empty string if no input happens
+ # prompt_suffix: do not add a suffix to the prompt
+ # show_default: the prompt is expected to mention the default itself
+ return click.prompt(
+ prompt,
+ default='',
+ prompt_suffix='',
+ show_default=False,
+ )
+
+
def prompt_for_config(context, no_input=False):
"""
Prompts the user to enter new config, using context as a source for the
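With the click-based helper above, existing call sites keep working unchanged; a brief interactive sketch (the prompt text is illustrative only):
```python
# Sketch: the patched helper delegates all input handling to click.prompt.
from cookiecutter.prompt import read_response

answer = read_response('full_name (default is "Your Name")? ')
# Pressing <Enter> returns '' because default='' is passed to click.prompt,
# so prompt_for_config's "keep the rendered default" branch still applies.
print(repr(answer))
```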
|
{"golden_diff": "diff --git a/cookiecutter/compat.py b/cookiecutter/compat.py\n--- a/cookiecutter/compat.py\n+++ b/cookiecutter/compat.py\n@@ -4,23 +4,13 @@\n PY3 = sys.version_info[0] == 3\n OLD_PY2 = sys.version_info[:2] < (2, 7)\n \n+\n if PY3: # pragma: no cover\n input_str = 'builtins.input'\n iteritems = lambda d: iter(d.items())\n from unittest.mock import patch\n from io import StringIO\n \n- def read_response(prompt=''):\n- \"\"\"\n- Prompt the user for a response.\n-\n- Prints the given prompt (which should be a Unicode string),\n- and returns the text entered by the user as a Unicode string.\n-\n- :param prompt: A Unicode string that is presented to the user.\n- \"\"\"\n- # The Python 3 input function does exactly what we want\n- return input(prompt)\n \n else: # pragma: no cover\n from __builtin__ import raw_input\n@@ -30,32 +20,6 @@\n from mock import patch\n from cStringIO import StringIO\n \n- def read_response(prompt=''):\n- \"\"\"\n- Prompt the user for a response.\n-\n- Prints the given prompt (which should be a Unicode string),\n- and returns the text entered by the user as a Unicode string.\n-\n- :param prompt: A Unicode string that is presented to the user.\n- \"\"\"\n- # For Python 2, raw_input takes a byte string argument for the prompt.\n- # This must be encoded using the encoding used by sys.stdout.\n- # The result is a byte string encoding using sys.stdin.encoding.\n- # However, if the program is not being run interactively, sys.stdout\n- # and sys.stdin may not have encoding attributes.\n- # In that case we don't print a prompt (stdin/out isn't interactive,\n- # so prompting is pointless), and we assume the returned data is\n- # encoded using sys.getdefaultencoding(). This may not be right,\n- # but it's likely the best we can do.\n- # Isn't Python 2 encoding support wonderful? :-)\n- if sys.stdout.encoding:\n- prompt = prompt.encode(sys.stdout.encoding)\n- else:\n- prompt = ''\n- enc = sys.stdin.encoding or sys.getdefaultencoding()\n- return raw_input(prompt).decode(enc)\n-\n \n if PY3: # Forced testing\n \ndiff --git a/cookiecutter/prompt.py b/cookiecutter/prompt.py\n--- a/cookiecutter/prompt.py\n+++ b/cookiecutter/prompt.py\n@@ -11,10 +11,29 @@\n from __future__ import unicode_literals\n import sys\n \n-from .compat import iteritems, read_response, is_string\n+import click\n+\n+from .compat import iteritems, is_string\n from jinja2.environment import Environment\n \n \n+def read_response(prompt=''):\n+ \"\"\"Prompt the user and return the entered value or an empty string.\n+\n+ :param str prompt: Text to display to the user\n+ \"\"\"\n+ # Please see http://click.pocoo.org/4/api/#click.prompt\n+ # default: use an empty string if no input happens\n+ # prompt_suffix: do not add a suffix to the prompt\n+ # show_default: the prompt is expected to mention the default itself\n+ return click.prompt(\n+ prompt,\n+ default='',\n+ prompt_suffix='',\n+ show_default=False,\n+ )\n+\n+\n def prompt_for_config(context, no_input=False):\n \"\"\"\n Prompts the user to enter new config, using context as a source for the\n", "issue": "Choices in Context\nAdd support for choices in a context as suggested in #441.\n\nI noticed that we are using custom methods for retrieving the user input and decided to let `click.prompt` handle the Python2/Python3 compatibility matters. 
This PR requires thorough reviews and testing before merging it as I touched some core functions and I want to be certain I didn't break anything :exclamation: \n\nThe CI might not detect these errors as most of the `read_response` calls are still `monkeypatched`. I'm afraid we need to do some manual testing to be 100% positive that everyting is working as expected :sweat: \n\nI implemented several tests for the new functionality. :smiley: \n\nPlease let me know your thoughts and I'll be happy to update this PR.\n\n", "before_files": [{"content": "import os\nimport sys\n\nPY3 = sys.version_info[0] == 3\nOLD_PY2 = sys.version_info[:2] < (2, 7)\n\nif PY3: # pragma: no cover\n input_str = 'builtins.input'\n iteritems = lambda d: iter(d.items())\n from unittest.mock import patch\n from io import StringIO\n\n def read_response(prompt=''):\n \"\"\"\n Prompt the user for a response.\n\n Prints the given prompt (which should be a Unicode string),\n and returns the text entered by the user as a Unicode string.\n\n :param prompt: A Unicode string that is presented to the user.\n \"\"\"\n # The Python 3 input function does exactly what we want\n return input(prompt)\n\nelse: # pragma: no cover\n from __builtin__ import raw_input\n input = raw_input\n input_str = '__builtin__.raw_input'\n iteritems = lambda d: d.iteritems()\n from mock import patch\n from cStringIO import StringIO\n\n def read_response(prompt=''):\n \"\"\"\n Prompt the user for a response.\n\n Prints the given prompt (which should be a Unicode string),\n and returns the text entered by the user as a Unicode string.\n\n :param prompt: A Unicode string that is presented to the user.\n \"\"\"\n # For Python 2, raw_input takes a byte string argument for the prompt.\n # This must be encoded using the encoding used by sys.stdout.\n # The result is a byte string encoding using sys.stdin.encoding.\n # However, if the program is not being run interactively, sys.stdout\n # and sys.stdin may not have encoding attributes.\n # In that case we don't print a prompt (stdin/out isn't interactive,\n # so prompting is pointless), and we assume the returned data is\n # encoded using sys.getdefaultencoding(). This may not be right,\n # but it's likely the best we can do.\n # Isn't Python 2 encoding support wonderful? :-)\n if sys.stdout.encoding:\n prompt = prompt.encode(sys.stdout.encoding)\n else:\n prompt = ''\n enc = sys.stdin.encoding or sys.getdefaultencoding()\n return raw_input(prompt).decode(enc)\n\n\nif PY3: # Forced testing\n\n from shutil import which\n\nelse: # Forced testing\n\n def is_exe(program):\n \"\"\"\n Returns whether or not a file is an executable.\n \"\"\"\n return os.path.isfile(program) and os.access(program, os.X_OK)\n\n def which(cmd, mode=os.F_OK | os.X_OK, path=None):\n \"\"\"Given a command, mode, and a PATH string, return the path which\n conforms to the given mode on the PATH, or None if there is no such\n file.\n `mode` defaults to os.F_OK | os.X_OK. 
`path` defaults to the result\n of os.environ.get(\"PATH\"), or can be overridden with a custom search\n path.\n\n Note: This function was backported from the Python 3 source code.\n \"\"\"\n # Check that a given file can be accessed with the correct mode.\n # Additionally check that `file` is not a directory, as on Windows\n # directories pass the os.access check.\n def _access_check(fn, mode):\n return (os.path.exists(fn) and os.access(fn, mode) and\n not os.path.isdir(fn))\n\n # If we're given a path with a directory part, look it up directly\n # rather than referring to PATH directories. This includes checking\n # relative to the current directory, e.g. ./script\n if os.path.dirname(cmd):\n if _access_check(cmd, mode):\n return cmd\n return None\n\n if path is None:\n path = os.environ.get(\"PATH\", os.defpath)\n if not path:\n return None\n path = path.split(os.pathsep)\n\n if sys.platform == \"win32\":\n # The current directory takes precedence on Windows.\n if os.curdir not in path:\n path.insert(0, os.curdir)\n\n # PATHEXT is necessary to check on Windows.\n pathext = os.environ.get(\"PATHEXT\", \"\").split(os.pathsep)\n # See if the given file matches any of the expected path\n # extensions. This will allow us to short circuit when given\n # \"python.exe\". If it does match, only test that one, otherwise we\n # have to try others.\n if any(cmd.lower().endswith(ext.lower()) for ext in pathext):\n files = [cmd]\n else:\n files = [cmd + ext for ext in pathext]\n else:\n # On other platforms you don't have things like PATHEXT to tell you\n # what file suffixes are executable, so just pass on cmd as-is.\n files = [cmd]\n\n seen = set()\n for dir in path:\n normdir = os.path.normcase(dir)\n if normdir not in seen:\n seen.add(normdir)\n for thefile in files:\n name = os.path.join(dir, thefile)\n if _access_check(name, mode):\n return name\n return None\n\n\ndef is_string(obj):\n \"\"\"Determine if an object is a string.\"\"\"\n return isinstance(obj, str if PY3 else basestring)\n\n\n_hush_pyflakes = (patch, StringIO, which)\n", "path": "cookiecutter/compat.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.prompt\n---------------------\n\nFunctions for prompting the user for project info.\n\"\"\"\n\nfrom __future__ import unicode_literals\nimport sys\n\nfrom .compat import iteritems, read_response, is_string\nfrom jinja2.environment import Environment\n\n\ndef prompt_for_config(context, no_input=False):\n \"\"\"\n Prompts the user to enter new config, using context as a source for the\n field names and sample values.\n\n :param no_input: Prompt the user at command line for manual configuration?\n \"\"\"\n cookiecutter_dict = {}\n env = Environment()\n\n for key, raw in iteritems(context['cookiecutter']):\n if key.startswith('_'):\n cookiecutter_dict[key] = raw\n continue\n\n raw = raw if is_string(raw) else str(raw)\n val = env.from_string(raw).render(cookiecutter=cookiecutter_dict)\n\n if not no_input:\n prompt = '{0} (default is \"{1}\")? 
'.format(key, val)\n\n new_val = read_response(prompt).strip()\n\n if new_val != '':\n val = new_val\n\n cookiecutter_dict[key] = val\n return cookiecutter_dict\n\n\ndef query_yes_no(question, default='yes'):\n \"\"\"\n Ask a yes/no question via `read_response()` and return their answer.\n\n :param question: A string that is presented to the user.\n :param default: The presumed answer if the user just hits <Enter>.\n It must be \"yes\" (the default), \"no\" or None (meaning\n an answer is required of the user).\n\n The \"answer\" return value is one of \"yes\" or \"no\".\n\n Adapted from\n http://stackoverflow.com/questions/3041986/python-command-line-yes-no-input\n http://code.activestate.com/recipes/577058/\n\n \"\"\"\n valid = {'yes': True, 'y': True, 'ye': True, 'no': False, 'n': False}\n if default is None:\n prompt = ' [y/n] '\n elif default == 'yes':\n prompt = ' [Y/n] '\n elif default == 'no':\n prompt = ' [y/N] '\n else:\n raise ValueError('Invalid default answer: \"{0}\"'.format(default))\n\n while True:\n sys.stdout.write(question + prompt)\n choice = read_response().lower()\n\n if default is not None and choice == '':\n return valid[default]\n elif choice in valid:\n return valid[choice]\n else:\n sys.stdout.write('Please respond with \"yes\" or \"no\" '\n '(or \"y\" or \"n\").\\n')\n", "path": "cookiecutter/prompt.py"}], "after_files": [{"content": "import os\nimport sys\n\nPY3 = sys.version_info[0] == 3\nOLD_PY2 = sys.version_info[:2] < (2, 7)\n\n\nif PY3: # pragma: no cover\n input_str = 'builtins.input'\n iteritems = lambda d: iter(d.items())\n from unittest.mock import patch\n from io import StringIO\n\n\nelse: # pragma: no cover\n from __builtin__ import raw_input\n input = raw_input\n input_str = '__builtin__.raw_input'\n iteritems = lambda d: d.iteritems()\n from mock import patch\n from cStringIO import StringIO\n\n\nif PY3: # Forced testing\n\n from shutil import which\n\nelse: # Forced testing\n\n def is_exe(program):\n \"\"\"\n Returns whether or not a file is an executable.\n \"\"\"\n return os.path.isfile(program) and os.access(program, os.X_OK)\n\n def which(cmd, mode=os.F_OK | os.X_OK, path=None):\n \"\"\"Given a command, mode, and a PATH string, return the path which\n conforms to the given mode on the PATH, or None if there is no such\n file.\n `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result\n of os.environ.get(\"PATH\"), or can be overridden with a custom search\n path.\n\n Note: This function was backported from the Python 3 source code.\n \"\"\"\n # Check that a given file can be accessed with the correct mode.\n # Additionally check that `file` is not a directory, as on Windows\n # directories pass the os.access check.\n def _access_check(fn, mode):\n return (os.path.exists(fn) and os.access(fn, mode) and\n not os.path.isdir(fn))\n\n # If we're given a path with a directory part, look it up directly\n # rather than referring to PATH directories. This includes checking\n # relative to the current directory, e.g. 
./script\n if os.path.dirname(cmd):\n if _access_check(cmd, mode):\n return cmd\n return None\n\n if path is None:\n path = os.environ.get(\"PATH\", os.defpath)\n if not path:\n return None\n path = path.split(os.pathsep)\n\n if sys.platform == \"win32\":\n # The current directory takes precedence on Windows.\n if os.curdir not in path:\n path.insert(0, os.curdir)\n\n # PATHEXT is necessary to check on Windows.\n pathext = os.environ.get(\"PATHEXT\", \"\").split(os.pathsep)\n # See if the given file matches any of the expected path\n # extensions. This will allow us to short circuit when given\n # \"python.exe\". If it does match, only test that one, otherwise we\n # have to try others.\n if any(cmd.lower().endswith(ext.lower()) for ext in pathext):\n files = [cmd]\n else:\n files = [cmd + ext for ext in pathext]\n else:\n # On other platforms you don't have things like PATHEXT to tell you\n # what file suffixes are executable, so just pass on cmd as-is.\n files = [cmd]\n\n seen = set()\n for dir in path:\n normdir = os.path.normcase(dir)\n if normdir not in seen:\n seen.add(normdir)\n for thefile in files:\n name = os.path.join(dir, thefile)\n if _access_check(name, mode):\n return name\n return None\n\n\ndef is_string(obj):\n \"\"\"Determine if an object is a string.\"\"\"\n return isinstance(obj, str if PY3 else basestring)\n\n\n_hush_pyflakes = (patch, StringIO, which)\n", "path": "cookiecutter/compat.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.prompt\n---------------------\n\nFunctions for prompting the user for project info.\n\"\"\"\n\nfrom __future__ import unicode_literals\nimport sys\n\nimport click\n\nfrom .compat import iteritems, is_string\nfrom jinja2.environment import Environment\n\n\ndef read_response(prompt=''):\n \"\"\"Prompt the user and return the entered value or an empty string.\n\n :param str prompt: Text to display to the user\n \"\"\"\n # Please see http://click.pocoo.org/4/api/#click.prompt\n # default: use an empty string if no input happens\n # prompt_suffix: do not add a suffix to the prompt\n # show_default: the prompt is expected to mention the default itself\n return click.prompt(\n prompt,\n default='',\n prompt_suffix='',\n show_default=False,\n )\n\n\ndef prompt_for_config(context, no_input=False):\n \"\"\"\n Prompts the user to enter new config, using context as a source for the\n field names and sample values.\n\n :param no_input: Prompt the user at command line for manual configuration?\n \"\"\"\n cookiecutter_dict = {}\n env = Environment()\n\n for key, raw in iteritems(context['cookiecutter']):\n if key.startswith('_'):\n cookiecutter_dict[key] = raw\n continue\n\n raw = raw if is_string(raw) else str(raw)\n val = env.from_string(raw).render(cookiecutter=cookiecutter_dict)\n\n if not no_input:\n prompt = '{0} (default is \"{1}\")? 
'.format(key, val)\n\n new_val = read_response(prompt).strip()\n\n if new_val != '':\n val = new_val\n\n cookiecutter_dict[key] = val\n return cookiecutter_dict\n\n\ndef query_yes_no(question, default='yes'):\n \"\"\"\n Ask a yes/no question via `read_response()` and return their answer.\n\n :param question: A string that is presented to the user.\n :param default: The presumed answer if the user just hits <Enter>.\n It must be \"yes\" (the default), \"no\" or None (meaning\n an answer is required of the user).\n\n The \"answer\" return value is one of \"yes\" or \"no\".\n\n Adapted from\n http://stackoverflow.com/questions/3041986/python-command-line-yes-no-input\n http://code.activestate.com/recipes/577058/\n\n \"\"\"\n valid = {'yes': True, 'y': True, 'ye': True, 'no': False, 'n': False}\n if default is None:\n prompt = ' [y/n] '\n elif default == 'yes':\n prompt = ' [Y/n] '\n elif default == 'no':\n prompt = ' [y/N] '\n else:\n raise ValueError('Invalid default answer: \"{0}\"'.format(default))\n\n while True:\n sys.stdout.write(question + prompt)\n choice = read_response().lower()\n\n if default is not None and choice == '':\n return valid[default]\n elif choice in valid:\n return valid[choice]\n else:\n sys.stdout.write('Please respond with \"yes\" or \"no\" '\n '(or \"y\" or \"n\").\\n')\n", "path": "cookiecutter/prompt.py"}]}
| 2,688 | 817 |
gh_patches_debug_7516
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-624
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
commit-msg stage does not work
Everything works as expected when running just `pre-commit install`; the hooks work.
But when running `pre-commit install -t commit-msg`, an `IOError` happens, since the template could not be found.
Here's the detailed information.
## Env
- `python2.7`
- `pipenv 7.3.7`
- `pre-commit 1.1.1`
Actually tested with both `python2` and `python3`.
## Configuration
```yaml
- repo: local
hooks:
- id: gitlint
name: gitlint
entry: "bash -c 'gitlint lint'"
language: system
stages: [commit-msg]
- id: pytest
name: pytest
entry: "bash -c 'python manage.py test'"
language: system
- id: safety
name: safety
entry: "bash -c 'safety check'"
language: system
```
## Output
```
» pre-commit install -t commit-msg
Running in migration mode with existing hooks at /Users/sobolev/Desktop/test/.git/hooks/commit-msg.legacy
Use -f to use only pre-commit.
An unexpected error has occurred: IOError: [Errno 2] No such file or directory: '/Users/sobolev/.virtualenvs/test-p4WySO70/lib/python2.7/site-packages/pre_commit/resources/commit-msg-tmpl'
Check the log at /Users/sobolev/.cache/pre-commit/pre-commit.log
```
When I do `ls /Users/sobolev/.virtualenvs/test-p4WySO70/lib/python2.7/site-packages/pre_commit/resources/commit-msg-tmpl` that's what I see:
```
(test-p4WySO70) ~/Desktop/test master ✗ ✚ 2 ⚡
» ls /Users/sobolev/.virtualenvs/test-p4WySO70/lib/python2.7/site-packages/pre_commit/resources
empty_template pre-push-tmpl ruby-build.tar.gz
hook-tmpl rbenv.tar.gz ruby-download.tar.gz
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import find_packages
2 from setuptools import setup
3
4
5 setup(
6 name='pre_commit',
7 description=(
8 'A framework for managing and maintaining multi-language pre-commit '
9 'hooks.'
10 ),
11 url='https://github.com/pre-commit/pre-commit',
12 version='1.1.1',
13
14 author='Anthony Sottile',
15 author_email='[email protected]',
16
17 platforms='linux',
18 classifiers=[
19 'License :: OSI Approved :: MIT License',
20 'Programming Language :: Python :: 2',
21 'Programming Language :: Python :: 2.7',
22 'Programming Language :: Python :: 3',
23 'Programming Language :: Python :: 3.5',
24 'Programming Language :: Python :: 3.6',
25 'Programming Language :: Python :: Implementation :: CPython',
26 'Programming Language :: Python :: Implementation :: PyPy',
27 ],
28
29 packages=find_packages(exclude=('tests*', 'testing*')),
30 package_data={
31 'pre_commit': [
32 'resources/hook-tmpl',
33 'resources/pre-push-tmpl',
34 'resources/rbenv.tar.gz',
35 'resources/ruby-build.tar.gz',
36 'resources/ruby-download.tar.gz',
37 'resources/empty_template/*',
38 'resources/empty_template/.npmignore',
39 ],
40 },
41 install_requires=[
42 'aspy.yaml',
43 'cached-property',
44 'identify>=1.0.0',
45 'nodeenv>=0.11.1',
46 'pyyaml',
47 'six',
48 'virtualenv',
49 ],
50 entry_points={
51 'console_scripts': [
52 'pre-commit = pre_commit.main:main',
53 'pre-commit-validate-config = pre_commit.clientlib:validate_config_main', # noqa
54 'pre-commit-validate-manifest = pre_commit.clientlib:validate_manifest_main', # noqa
55 ],
56 },
57 )
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -29,11 +29,7 @@
packages=find_packages(exclude=('tests*', 'testing*')),
package_data={
'pre_commit': [
- 'resources/hook-tmpl',
- 'resources/pre-push-tmpl',
- 'resources/rbenv.tar.gz',
- 'resources/ruby-build.tar.gz',
- 'resources/ruby-download.tar.gz',
+ 'resources/*',
'resources/empty_template/*',
'resources/empty_template/.npmignore',
],
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -29,11 +29,7 @@\n packages=find_packages(exclude=('tests*', 'testing*')),\n package_data={\n 'pre_commit': [\n- 'resources/hook-tmpl',\n- 'resources/pre-push-tmpl',\n- 'resources/rbenv.tar.gz',\n- 'resources/ruby-build.tar.gz',\n- 'resources/ruby-download.tar.gz',\n+ 'resources/*',\n 'resources/empty_template/*',\n 'resources/empty_template/.npmignore',\n ],\n", "issue": "commit-msg stage does not work\nEverything works as expected when running just `pre-commit install`, then hooks work.\r\nBut when running `pre-commit install -t commit-msg` `IOError` happens, since template could not be found.\r\n\r\nHere's the detailed information.\r\n\r\n## Env\r\n\r\n- `python2.7`\r\n- `pipenv 7.3.7`\r\n- `pre-commit 1.1.1`\r\n\r\nActually tested with both `python2` and `python3`.\r\n\r\n## Configuration\r\n\r\n```yaml\r\n- repo: local\r\n hooks:\r\n - id: gitlint\r\n name: gitlint\r\n entry: \"bash -c 'gitlint lint'\"\r\n language: system\r\n stages: [commit-msg]\r\n\r\n - id: pytest\r\n name: pytest\r\n entry: \"bash -c 'python manage.py test'\"\r\n language: system\r\n\r\n - id: safety\r\n name: safety\r\n entry: \"bash -c 'safety check'\"\r\n language: system\r\n```\r\n\r\n## Output\r\n\r\n```\r\n\u00bb pre-commit install -t commit-msg\r\nRunning in migration mode with existing hooks at /Users/sobolev/Desktop/test/.git/hooks/commit-msg.legacy\r\nUse -f to use only pre-commit.\r\nAn unexpected error has occurred: IOError: [Errno 2] No such file or directory: '/Users/sobolev/.virtualenvs/test-p4WySO70/lib/python2.7/site-packages/pre_commit/resources/commit-msg-tmpl'\r\nCheck the log at /Users/sobolev/.cache/pre-commit/pre-commit.log\r\n```\r\n\r\nWhen I do `ls /Users/sobolev/.virtualenvs/test-p4WySO70/lib/python2.7/site-packages/pre_commit/resources/commit-msg-tmpl` that's what is see:\r\n\r\n```\r\n(test-p4WySO70) ~/Desktop/test master \u2717 \u271a 2 \u26a1\r\n\u00bb ls /Users/sobolev/.virtualenvs/test-p4WySO70/lib/python2.7/site-packages/pre_commit/resources \r\nempty_template pre-push-tmpl ruby-build.tar.gz\r\nhook-tmpl rbenv.tar.gz ruby-download.tar.gz\r\n```\n", "before_files": [{"content": "from setuptools import find_packages\nfrom setuptools import setup\n\n\nsetup(\n name='pre_commit',\n description=(\n 'A framework for managing and maintaining multi-language pre-commit '\n 'hooks.'\n ),\n url='https://github.com/pre-commit/pre-commit',\n version='1.1.1',\n\n author='Anthony Sottile',\n author_email='[email protected]',\n\n platforms='linux',\n classifiers=[\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n\n packages=find_packages(exclude=('tests*', 'testing*')),\n package_data={\n 'pre_commit': [\n 'resources/hook-tmpl',\n 'resources/pre-push-tmpl',\n 'resources/rbenv.tar.gz',\n 'resources/ruby-build.tar.gz',\n 'resources/ruby-download.tar.gz',\n 'resources/empty_template/*',\n 'resources/empty_template/.npmignore',\n ],\n },\n install_requires=[\n 'aspy.yaml',\n 'cached-property',\n 'identify>=1.0.0',\n 'nodeenv>=0.11.1',\n 'pyyaml',\n 'six',\n 'virtualenv',\n ],\n entry_points={\n 'console_scripts': [\n 'pre-commit = pre_commit.main:main',\n 'pre-commit-validate-config 
= pre_commit.clientlib:validate_config_main', # noqa\n 'pre-commit-validate-manifest = pre_commit.clientlib:validate_manifest_main', # noqa\n ],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import find_packages\nfrom setuptools import setup\n\n\nsetup(\n name='pre_commit',\n description=(\n 'A framework for managing and maintaining multi-language pre-commit '\n 'hooks.'\n ),\n url='https://github.com/pre-commit/pre-commit',\n version='1.1.1',\n\n author='Anthony Sottile',\n author_email='[email protected]',\n\n platforms='linux',\n classifiers=[\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n\n packages=find_packages(exclude=('tests*', 'testing*')),\n package_data={\n 'pre_commit': [\n 'resources/*',\n 'resources/empty_template/*',\n 'resources/empty_template/.npmignore',\n ],\n },\n install_requires=[\n 'aspy.yaml',\n 'cached-property',\n 'identify>=1.0.0',\n 'nodeenv>=0.11.1',\n 'pyyaml',\n 'six',\n 'virtualenv',\n ],\n entry_points={\n 'console_scripts': [\n 'pre-commit = pre_commit.main:main',\n 'pre-commit-validate-config = pre_commit.clientlib:validate_config_main', # noqa\n 'pre-commit-validate-manifest = pre_commit.clientlib:validate_manifest_main', # noqa\n ],\n },\n)\n", "path": "setup.py"}]}
| 1,223 | 130 |
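The patch above broadens `package_data` to `resources/*` so that newly added templates such as `commit-msg-tmpl` ship with the installed package. As a rough way to check that a bundled data file is actually present at runtime (assuming `pre_commit` is installed; the resource names are taken from the issue), one can load it through `pkgutil`:

```python
# Illustrative check only; not part of the patch above.
# pkgutil.get_data() reads a file bundled inside an installed package and raises
# the same kind of error (an OSError/IOError) that the hook installer hit when a
# template was not packaged.
import pkgutil

def resource_exists(package, resource):
    """Return True if `resource` can be loaded from the installed `package`."""
    try:
        return pkgutil.get_data(package, resource) is not None
    except OSError:
        return False

if __name__ == "__main__":
    print(resource_exists("pre_commit", "resources/hook-tmpl"))
    print(resource_exists("pre_commit", "resources/commit-msg-tmpl"))
```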
gh_patches_debug_3561
|
rasdani/github-patches
|
git_diff
|
deepchecks__deepchecks-1101
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] some randomness occurring in tabular sample
**Describe the bug**
The results are not the same even when setting random states
**To Reproduce**
Run model error / performance report a couple of times
**Expected behavior**
Same results
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 Quickstart in 5 minutes
4 ***********************
5 In order to run your first Deepchecks Suite all you need to have is the data
6 and model that you wish to validate. More specifically, you need:
7
8 * Your train and test data (in Pandas DataFrames or Numpy Arrays)
9 * (optional) A `supported model </user-guide/tabular/supported_models.html>`__ (including XGBoost,
10 scikit-learn models, and many more). Required for running checks that need the
11 model's predictions for running.
12
13 To run your first suite on your data and model, you need only a few lines of
14 code, that start here: `Define a Dataset Object <#define-a-dataset-object>`__.
15
16 # If you don't have deepchecks installed yet:
17
18 .. code:: python
19
20 # If you don't have deepchecks installed yet:
21 import sys
22 !{sys.executable} -m pip install deepchecks -U --quiet #--user
23
24 """
25
26 #%%
27 # Load Data, Split Train-Val, and Train a Simple Model
28 # ====================================================
29 # For the purpose of this guide we'll use the simple iris dataset and train a
30 # simple random forest model for multiclass classification:
31
32 # General imports
33 import pandas as pd
34 import numpy as np
35
36 from deepchecks.tabular.datasets.classification import iris
37 from sklearn.ensemble import RandomForestClassifier
38 from sklearn.model_selection import train_test_split
39
40 # Load Data
41 iris_df = iris.load_data(data_format='Dataframe', as_train_test=False)
42 label_col = 'target'
43 df_train, df_test = train_test_split(iris_df, stratify=iris_df[label_col], random_state=0)
44
45 # Train Model
46 rf_clf = RandomForestClassifier()
47 rf_clf.fit(df_train.drop(label_col, axis=1), df_train[label_col]);
48
49 #%%
50 # Define a Dataset Object
51 # =======================
52 # Initialize the Dataset object, stating the relevant metadata about the dataset
53 # (e.g. the name for the label column)
54 #
55 # Check out the Dataset's attributes to see which additional special columns can be
56 # declared and used (e.g. date column, index column).
57
58 from deepchecks.tabular import Dataset
59
60 # We explicitly state that this dataset has no categorical features, otherwise they will be automatically inferred
61 # If the dataset has categorical features, the best practice is to pass a list with their names
62
63 ds_train = Dataset(df_train, label=label_col, cat_features=[])
64 ds_test = Dataset(df_test, label=label_col, cat_features=[])
65
66 #%%
67 # Run a Deepchecks Suite
68 # ======================
69 # Run the full suite
70 # ------------------
71 # Use the ``full_suite`` that is a collection of (most of) the prebuilt checks.
72 #
73 # Check out the `when should you use </getting-started/when_should_you_use.html>`__
74 # deepchecks guide for some more info about the existing suites and when to use them.
75
76 from deepchecks.tabular.suites import full_suite
77
78 suite = full_suite()
79
80 #%%
81
82 suite.run(train_dataset=ds_train, test_dataset=ds_test, model=rf_clf)
83
84 #%%
85 # Run the integrity suite
86 # -----------------------
87 # If you still haven't started modeling and just have a single dataset, you
88 # can use the ``single_dataset_integrity``:
89
90 from deepchecks.tabular.suites import single_dataset_integrity
91
92 integ_suite = single_dataset_integrity()
93 integ_suite.run(ds_train)
94
95 #%%
96 # Run a Deepchecks Check
97 # ======================
98 # If you want to run a specific check, you can just import it and run it directly.
99 #
100 # Check out the `Check tabular examples </examples/index.html>`__ in
101 # the examples or the `API Reference </api/index.html>`__ for more info about the
102 # existing checks and their parameters.
103
104 from deepchecks.tabular.checks import TrainTestLabelDrift
105
106 #%%
107
108 check = TrainTestLabelDrift()
109 result = check.run(ds_train, ds_test)
110 result
111
112 #%%
113 # %%
114 # and also inspect the result value which has a check-dependant structure:
115
116 result.value
117
118 #%%
119 # Edit an Existing Suite
120 # ======================
121 # Inspect suite and remove condition
122 # ----------------------------------
123 # We can see that the single feature contribution failed, both for test and for
124 # train. Since this is a very simple dataset with few features and this behavior
125 # is not necessarily problematic, we will remove the existing conditions for the PPS
126
127 # Lets first print the suite to find the conditions that we want to change:
128
129 suite
130
131 #%%
132
133 # now we can use the check's index and the condition's number to remove it:
134 print(suite[6])
135 suite[6].remove_condition(0)
136
137 #%%
138
139 # print and see that the condition was removed
140 suite[6]
141
142 #%%
143 # If we now re-run the suite, all of the existing conditions will pass.
144 #
145 # *Note: the check we manipulated will still run as part of the Suite, however
146 # it won't appear in the Conditions Summary since it no longer has any
147 # conditions defined on it. You can still see its display results in the
148 # Additional Outputs section*
149 #
150 # **For more info about working with conditions, see the detailed configuring
151 # conditions guide.**
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py b/docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py
--- a/docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py
+++ b/docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py
@@ -43,7 +43,7 @@
df_train, df_test = train_test_split(iris_df, stratify=iris_df[label_col], random_state=0)
# Train Model
-rf_clf = RandomForestClassifier()
+rf_clf = RandomForestClassifier(random_state=0)
rf_clf.fit(df_train.drop(label_col, axis=1), df_train[label_col]);
#%%
|
{"golden_diff": "diff --git a/docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py b/docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py\n--- a/docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py\n+++ b/docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py\n@@ -43,7 +43,7 @@\n df_train, df_test = train_test_split(iris_df, stratify=iris_df[label_col], random_state=0)\n \n # Train Model\n-rf_clf = RandomForestClassifier()\n+rf_clf = RandomForestClassifier(random_state=0)\n rf_clf.fit(df_train.drop(label_col, axis=1), df_train[label_col]);\n \n #%%\n", "issue": "[BUG] some randomness occurring in tabular sample\n**Describe the bug**\r\nThe results are not the same even when setting random states\r\n\r\n**To Reproduce**\r\nRun model error / performance report a couple of times\r\n\r\n**Expected behavior**\r\nSame results\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nQuickstart in 5 minutes\n***********************\nIn order to run your first Deepchecks Suite all you need to have is the data\nand model that you wish to validate. More specifically, you need:\n\n* Your train and test data (in Pandas DataFrames or Numpy Arrays)\n* (optional) A `supported model </user-guide/tabular/supported_models.html>`__ (including XGBoost,\n scikit-learn models, and many more). Required for running checks that need the\n model's predictions for running.\n\nTo run your first suite on your data and model, you need only a few lines of\ncode, that start here: `Define a Dataset Object <#define-a-dataset-object>`__.\n\n# If you don't have deepchecks installed yet:\n\n.. code:: python\n\n # If you don't have deepchecks installed yet:\n import sys\n !{sys.executable} -m pip install deepchecks -U --quiet #--user\n\n\"\"\"\n\n#%%\n# Load Data, Split Train-Val, and Train a Simple Model\n# ====================================================\n# For the purpose of this guide we'll use the simple iris dataset and train a\n# simple random forest model for multiclass classification:\n\n# General imports\nimport pandas as pd\nimport numpy as np\n\nfrom deepchecks.tabular.datasets.classification import iris\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\n\n# Load Data\niris_df = iris.load_data(data_format='Dataframe', as_train_test=False)\nlabel_col = 'target'\ndf_train, df_test = train_test_split(iris_df, stratify=iris_df[label_col], random_state=0)\n\n# Train Model\nrf_clf = RandomForestClassifier()\nrf_clf.fit(df_train.drop(label_col, axis=1), df_train[label_col]);\n\n#%%\n# Define a Dataset Object\n# =======================\n# Initialize the Dataset object, stating the relevant metadata about the dataset\n# (e.g. the name for the label column)\n#\n# Check out the Dataset's attributes to see which additional special columns can be\n# declared and used (e.g. 
date column, index column).\n\nfrom deepchecks.tabular import Dataset\n\n# We explicitly state that this dataset has no categorical features, otherwise they will be automatically inferred\n# If the dataset has categorical features, the best practice is to pass a list with their names\n\nds_train = Dataset(df_train, label=label_col, cat_features=[])\nds_test = Dataset(df_test, label=label_col, cat_features=[])\n\n#%%\n# Run a Deepchecks Suite\n# ======================\n# Run the full suite\n# ------------------\n# Use the ``full_suite`` that is a collection of (most of) the prebuilt checks.\n#\n# Check out the `when should you use </getting-started/when_should_you_use.html>`__\n# deepchecks guide for some more info about the existing suites and when to use them.\n\nfrom deepchecks.tabular.suites import full_suite\n\nsuite = full_suite()\n\n#%%\n\nsuite.run(train_dataset=ds_train, test_dataset=ds_test, model=rf_clf)\n\n#%%\n# Run the integrity suite\n# -----------------------\n# If you still haven't started modeling and just have a single dataset, you\n# can use the ``single_dataset_integrity``:\n\nfrom deepchecks.tabular.suites import single_dataset_integrity\n\ninteg_suite = single_dataset_integrity()\ninteg_suite.run(ds_train)\n\n#%%\n# Run a Deepchecks Check\n# ======================\n# If you want to run a specific check, you can just import it and run it directly.\n#\n# Check out the `Check tabular examples </examples/index.html>`__ in\n# the examples or the `API Reference </api/index.html>`__ for more info about the\n# existing checks and their parameters.\n\nfrom deepchecks.tabular.checks import TrainTestLabelDrift\n\n#%%\n\ncheck = TrainTestLabelDrift()\nresult = check.run(ds_train, ds_test)\nresult\n\n#%%\n# %%\n# and also inspect the result value which has a check-dependant structure:\n\nresult.value\n\n#%%\n# Edit an Existing Suite\n# ======================\n# Inspect suite and remove condition\n# ----------------------------------\n# We can see that the single feature contribution failed, both for test and for\n# train. Since this is a very simple dataset with few features and this behavior\n# is not necessarily problematic, we will remove the existing conditions for the PPS\n\n# Lets first print the suite to find the conditions that we want to change:\n\nsuite\n\n#%%\n\n# now we can use the check's index and the condition's number to remove it:\nprint(suite[6])\nsuite[6].remove_condition(0)\n\n#%%\n\n# print and see that the condition was removed\nsuite[6]\n\n#%%\n# If we now re-run the suite, all of the existing conditions will pass.\n#\n# *Note: the check we manipulated will still run as part of the Suite, however\n# it won't appear in the Conditions Summary since it no longer has any\n# conditions defined on it. You can still see its display results in the \n# Additional Outputs section*\n#\n# **For more info about working with conditions, see the detailed configuring \n# conditions guide.**", "path": "docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nQuickstart in 5 minutes\n***********************\nIn order to run your first Deepchecks Suite all you need to have is the data\nand model that you wish to validate. More specifically, you need:\n\n* Your train and test data (in Pandas DataFrames or Numpy Arrays)\n* (optional) A `supported model </user-guide/tabular/supported_models.html>`__ (including XGBoost,\n scikit-learn models, and many more). 
Required for running checks that need the\n model's predictions for running.\n\nTo run your first suite on your data and model, you need only a few lines of\ncode, that start here: `Define a Dataset Object <#define-a-dataset-object>`__.\n\n# If you don't have deepchecks installed yet:\n\n.. code:: python\n\n # If you don't have deepchecks installed yet:\n import sys\n !{sys.executable} -m pip install deepchecks -U --quiet #--user\n\n\"\"\"\n\n#%%\n# Load Data, Split Train-Val, and Train a Simple Model\n# ====================================================\n# For the purpose of this guide we'll use the simple iris dataset and train a\n# simple random forest model for multiclass classification:\n\n# General imports\nimport pandas as pd\nimport numpy as np\n\nfrom deepchecks.tabular.datasets.classification import iris\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\n\n# Load Data\niris_df = iris.load_data(data_format='Dataframe', as_train_test=False)\nlabel_col = 'target'\ndf_train, df_test = train_test_split(iris_df, stratify=iris_df[label_col], random_state=0)\n\n# Train Model\nrf_clf = RandomForestClassifier(random_state=0)\nrf_clf.fit(df_train.drop(label_col, axis=1), df_train[label_col]);\n\n#%%\n# Define a Dataset Object\n# =======================\n# Initialize the Dataset object, stating the relevant metadata about the dataset\n# (e.g. the name for the label column)\n#\n# Check out the Dataset's attributes to see which additional special columns can be\n# declared and used (e.g. date column, index column).\n\nfrom deepchecks.tabular import Dataset\n\n# We explicitly state that this dataset has no categorical features, otherwise they will be automatically inferred\n# If the dataset has categorical features, the best practice is to pass a list with their names\n\nds_train = Dataset(df_train, label=label_col, cat_features=[])\nds_test = Dataset(df_test, label=label_col, cat_features=[])\n\n#%%\n# Run a Deepchecks Suite\n# ======================\n# Run the full suite\n# ------------------\n# Use the ``full_suite`` that is a collection of (most of) the prebuilt checks.\n#\n# Check out the `when should you use </getting-started/when_should_you_use.html>`__\n# deepchecks guide for some more info about the existing suites and when to use them.\n\nfrom deepchecks.tabular.suites import full_suite\n\nsuite = full_suite()\n\n#%%\n\nsuite.run(train_dataset=ds_train, test_dataset=ds_test, model=rf_clf)\n\n#%%\n# Run the integrity suite\n# -----------------------\n# If you still haven't started modeling and just have a single dataset, you\n# can use the ``single_dataset_integrity``:\n\nfrom deepchecks.tabular.suites import single_dataset_integrity\n\ninteg_suite = single_dataset_integrity()\ninteg_suite.run(ds_train)\n\n#%%\n# Run a Deepchecks Check\n# ======================\n# If you want to run a specific check, you can just import it and run it directly.\n#\n# Check out the `Check tabular examples </examples/index.html>`__ in\n# the examples or the `API Reference </api/index.html>`__ for more info about the\n# existing checks and their parameters.\n\nfrom deepchecks.tabular.checks import TrainTestLabelDrift\n\n#%%\n\ncheck = TrainTestLabelDrift()\nresult = check.run(ds_train, ds_test)\nresult\n\n#%%\n# %%\n# and also inspect the result value which has a check-dependant structure:\n\nresult.value\n\n#%%\n# Edit an Existing Suite\n# ======================\n# Inspect suite and remove condition\n# ----------------------------------\n# We can 
see that the single feature contribution failed, both for test and for\n# train. Since this is a very simple dataset with few features and this behavior\n# is not necessarily problematic, we will remove the existing conditions for the PPS\n\n# Lets first print the suite to find the conditions that we want to change:\n\nsuite\n\n#%%\n\n# now we can use the check's index and the condition's number to remove it:\nprint(suite[6])\nsuite[6].remove_condition(0)\n\n#%%\n\n# print and see that the condition was removed\nsuite[6]\n\n#%%\n# If we now re-run the suite, all of the existing conditions will pass.\n#\n# *Note: the check we manipulated will still run as part of the Suite, however\n# it won't appear in the Conditions Summary since it no longer has any\n# conditions defined on it. You can still see its display results in the \n# Additional Outputs section*\n#\n# **For more info about working with conditions, see the detailed configuring \n# conditions guide.**", "path": "docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py"}]}
| 1,794 | 157 |
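The patch above pins `random_state=0` on the tutorial's `RandomForestClassifier`, which is what makes repeated runs of the suites deterministic. A small sketch of the underlying point, using scikit-learn only (no deepchecks APIs are assumed):

```python
# Reproducibility sketch (assumes scikit-learn is installed).
# Two forests trained with the same random_state produce identical predictions;
# without a fixed seed, bootstrap and feature subsampling differ between runs.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

def train_predict(seed):
    clf = RandomForestClassifier(random_state=seed)
    clf.fit(X, y)
    return clf.predict(X)

print("identical with a fixed seed:", bool((train_predict(0) == train_predict(0)).all()))
```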
gh_patches_debug_4293
|
rasdani/github-patches
|
git_diff
|
encode__uvicorn-1005
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WebSocket headers encoding problem
### Checklist
<!-- Please make sure you check all these items before submitting your bug report. -->
- [x] The bug is reproducible against the latest release and/or `master`.
- [x] There are no similar issues or pull requests to fix it yet.
### Describe the bug
<!-- A clear and concise description of what the bug is. -->
If there is a websocket connection with a non-ASCII character in a header, the server will explode. The problem seems to come from here:
https://github.com/encode/uvicorn/blob/9dc5a43209fe081ba3e74135189252bfddf75587/uvicorn/protocols/websockets/websockets_impl.py#L102
Also, I checked the [ASGI doc](https://buildmedia.readthedocs.org/media/pdf/asgi/latest/asgi.pdf) and in section 2.2.5 it states:
- These are byte strings of the exact byte sequences sent by the client/to be sent by the server. While
modern HTTP standards say that headers should be ASCII, older ones did not and allowed a wider range of
characters. Frameworks/applications should decode headers as they deem appropriate.
So, I think uvicorn should allow non-ASCII characters in headers. It is the only reason that we can't use uvicorn in production yet.
### To reproduce
<!-- Provide a *minimal* example with steps to reproduce the bug locally.
Run a websocket server. Then make a websocket request to it with a non-ASCII header.
```bash
websocat -H "X-Test: Abraão" ws://localhost:5000/ws
```
NOTE: try to keep any external dependencies *at an absolute minimum* .
In other words, remove anything that doesn't make the bug go away.
-->
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Doesn't explode.
### Actual behavior
<!-- A clear and concise description of what actually happens. -->
Explodes with an exception.
### Debugging material
<!-- Any tracebacks, screenshots, etc. that can help understanding the problem.
NOTE:
- Please list tracebacks in full (don't truncate them).
- If relevant, consider turning on DEBUG or TRACE logs for additional details (see the Logging section on https://www.uvicorn.org/settings/ specifically the `log-level` flag).
- Consider using `<details>` to make tracebacks/logs collapsible if they're very large (see https://gist.github.com/ericclemmons/b146fe5da72ca1f706b2ef72a20ac39d).
-->
### Environment
- OS / Python / Uvicorn version: just run `uvicorn --version`
- The exact command you're running uvicorn with, all flags you passed included. If you run it with gunicorn please do the same. If there is a reverse-proxy involved and you cannot reproduce without it please give the minimal config of it to reproduce.
### Additional context
<!-- Any additional information that can help understanding the problem.
Eg. linked issues, or a description of what you were trying to achieve. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `uvicorn/protocols/websockets/websockets_impl.py`
Content:
```
1 import asyncio
2 import http
3 import logging
4 from urllib.parse import unquote
5
6 import websockets
7 from websockets.extensions.permessage_deflate import ServerPerMessageDeflateFactory
8
9 from uvicorn.protocols.utils import get_local_addr, get_remote_addr, is_ssl
10
11
12 class Server:
13 closing = False
14
15 def register(self, ws):
16 pass
17
18 def unregister(self, ws):
19 pass
20
21 def is_serving(self):
22 return not self.closing
23
24
25 class WebSocketProtocol(websockets.WebSocketServerProtocol):
26 def __init__(self, config, server_state, _loop=None):
27 if not config.loaded:
28 config.load()
29
30 self.config = config
31 self.app = config.loaded_app
32 self.loop = _loop or asyncio.get_event_loop()
33 self.logger = logging.getLogger("uvicorn.error")
34 self.root_path = config.root_path
35
36 # Shared server state
37 self.connections = server_state.connections
38 self.tasks = server_state.tasks
39
40 # Connection state
41 self.transport = None
42 self.server = None
43 self.client = None
44 self.scheme = None
45
46 # Connection events
47 self.scope = None
48 self.handshake_started_event = asyncio.Event()
49 self.handshake_completed_event = asyncio.Event()
50 self.closed_event = asyncio.Event()
51 self.initial_response = None
52 self.connect_sent = False
53 self.accepted_subprotocol = None
54 self.transfer_data_task = None
55
56 self.ws_server = Server()
57
58 super().__init__(
59 ws_handler=self.ws_handler,
60 ws_server=self.ws_server,
61 extensions=[ServerPerMessageDeflateFactory()],
62 )
63
64 def connection_made(self, transport):
65 self.connections.add(self)
66 self.transport = transport
67 self.server = get_local_addr(transport)
68 self.client = get_remote_addr(transport)
69 self.scheme = "wss" if is_ssl(transport) else "ws"
70 super().connection_made(transport)
71
72 def connection_lost(self, exc):
73 self.connections.remove(self)
74 self.handshake_completed_event.set()
75 super().connection_lost(exc)
76
77 def shutdown(self):
78 self.ws_server.closing = True
79 self.transport.close()
80
81 def on_task_complete(self, task):
82 self.tasks.discard(task)
83
84 async def process_request(self, path, headers):
85 """
86 This hook is called to determine if the websocket should return
87 an HTTP response and close.
88
89 Our behavior here is to start the ASGI application, and then wait
90 for either `accept` or `close` in order to determine if we should
91 close the connection.
92 """
93 path_portion, _, query_string = path.partition("?")
94
95 websockets.handshake.check_request(headers)
96
97 subprotocols = []
98 for header in headers.get_all("Sec-WebSocket-Protocol"):
99 subprotocols.extend([token.strip() for token in header.split(",")])
100
101 asgi_headers = [
102 (name.encode("ascii"), value.encode("ascii"))
103 for name, value in headers.raw_items()
104 ]
105
106 self.scope = {
107 "type": "websocket",
108 "asgi": {"version": self.config.asgi_version, "spec_version": "2.1"},
109 "scheme": self.scheme,
110 "server": self.server,
111 "client": self.client,
112 "root_path": self.root_path,
113 "path": unquote(path_portion),
114 "raw_path": path_portion,
115 "query_string": query_string.encode("ascii"),
116 "headers": asgi_headers,
117 "subprotocols": subprotocols,
118 }
119 task = self.loop.create_task(self.run_asgi())
120 task.add_done_callback(self.on_task_complete)
121 self.tasks.add(task)
122 await self.handshake_started_event.wait()
123 return self.initial_response
124
125 def process_subprotocol(self, headers, available_subprotocols):
126 """
127 We override the standard 'process_subprotocol' behavior here so that
128 we return whatever subprotocol is sent in the 'accept' message.
129 """
130 return self.accepted_subprotocol
131
132 def send_500_response(self):
133 msg = b"Internal Server Error"
134 content = [
135 b"HTTP/1.1 500 Internal Server Error\r\n"
136 b"content-type: text/plain; charset=utf-8\r\n",
137 b"content-length: " + str(len(msg)).encode("ascii") + b"\r\n",
138 b"connection: close\r\n",
139 b"\r\n",
140 msg,
141 ]
142 self.transport.write(b"".join(content))
143 # Allow handler task to terminate cleanly, as websockets doesn't cancel it by
144 # itself (see https://github.com/encode/uvicorn/issues/920)
145 self.handshake_started_event.set()
146
147 async def ws_handler(self, protocol, path):
148 """
149 This is the main handler function for the 'websockets' implementation
150 to call into. We just wait for close then return, and instead allow
151 'send' and 'receive' events to drive the flow.
152 """
153 self.handshake_completed_event.set()
154 await self.closed_event.wait()
155
156 async def run_asgi(self):
157 """
158 Wrapper around the ASGI callable, handling exceptions and unexpected
159 termination states.
160 """
161 try:
162 result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
163 except BaseException as exc:
164 self.closed_event.set()
165 msg = "Exception in ASGI application\n"
166 self.logger.error(msg, exc_info=exc)
167 if not self.handshake_started_event.is_set():
168 self.send_500_response()
169 else:
170 await self.handshake_completed_event.wait()
171 self.transport.close()
172 else:
173 self.closed_event.set()
174 if not self.handshake_started_event.is_set():
175 msg = "ASGI callable returned without sending handshake."
176 self.logger.error(msg)
177 self.send_500_response()
178 self.transport.close()
179 elif result is not None:
180 msg = "ASGI callable should return None, but returned '%s'."
181 self.logger.error(msg, result)
182 await self.handshake_completed_event.wait()
183 self.transport.close()
184
185 async def asgi_send(self, message):
186 message_type = message["type"]
187
188 if not self.handshake_started_event.is_set():
189 if message_type == "websocket.accept":
190 self.logger.info(
191 '%s - "WebSocket %s" [accepted]',
192 self.scope["client"],
193 self.scope["root_path"] + self.scope["path"],
194 )
195 self.initial_response = None
196 self.accepted_subprotocol = message.get("subprotocol")
197 self.handshake_started_event.set()
198
199 elif message_type == "websocket.close":
200 self.logger.info(
201 '%s - "WebSocket %s" 403',
202 self.scope["client"],
203 self.scope["root_path"] + self.scope["path"],
204 )
205 self.initial_response = (http.HTTPStatus.FORBIDDEN, [], b"")
206 self.handshake_started_event.set()
207 self.closed_event.set()
208
209 else:
210 msg = (
211 "Expected ASGI message 'websocket.accept' or 'websocket.close', "
212 "but got '%s'."
213 )
214 raise RuntimeError(msg % message_type)
215
216 elif not self.closed_event.is_set():
217 await self.handshake_completed_event.wait()
218
219 if message_type == "websocket.send":
220 bytes_data = message.get("bytes")
221 text_data = message.get("text")
222 data = text_data if bytes_data is None else bytes_data
223 await self.send(data)
224
225 elif message_type == "websocket.close":
226 code = message.get("code", 1000)
227 reason = message.get("reason", "")
228 await self.close(code, reason)
229 self.closed_event.set()
230
231 else:
232 msg = (
233 "Expected ASGI message 'websocket.send' or 'websocket.close',"
234 " but got '%s'."
235 )
236 raise RuntimeError(msg % message_type)
237
238 else:
239 msg = "Unexpected ASGI message '%s', after sending 'websocket.close'."
240 raise RuntimeError(msg % message_type)
241
242 async def asgi_receive(self):
243 if not self.connect_sent:
244 self.connect_sent = True
245 return {"type": "websocket.connect"}
246
247 await self.handshake_completed_event.wait()
248 try:
249 await self.ensure_open()
250 data = await self.recv()
251 except websockets.ConnectionClosed as exc:
252 return {"type": "websocket.disconnect", "code": exc.code}
253
254 msg = {"type": "websocket.receive"}
255
256 if isinstance(data, str):
257 msg["text"] = data
258 else:
259 msg["bytes"] = data
260
261 return msg
262
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/uvicorn/protocols/websockets/websockets_impl.py b/uvicorn/protocols/websockets/websockets_impl.py
--- a/uvicorn/protocols/websockets/websockets_impl.py
+++ b/uvicorn/protocols/websockets/websockets_impl.py
@@ -99,7 +99,7 @@
subprotocols.extend([token.strip() for token in header.split(",")])
asgi_headers = [
- (name.encode("ascii"), value.encode("ascii"))
+ (name.encode("ascii"), value.encode("ascii", errors="surrogateescape"))
for name, value in headers.raw_items()
]
|
{"golden_diff": "diff --git a/uvicorn/protocols/websockets/websockets_impl.py b/uvicorn/protocols/websockets/websockets_impl.py\n--- a/uvicorn/protocols/websockets/websockets_impl.py\n+++ b/uvicorn/protocols/websockets/websockets_impl.py\n@@ -99,7 +99,7 @@\n subprotocols.extend([token.strip() for token in header.split(\",\")])\n \n asgi_headers = [\n- (name.encode(\"ascii\"), value.encode(\"ascii\"))\n+ (name.encode(\"ascii\"), value.encode(\"ascii\", errors=\"surrogateescape\"))\n for name, value in headers.raw_items()\n ]\n", "issue": "WebSocket headers encoding problem\n### Checklist\r\n\r\n<!-- Please make sure you check all these items before submitting your bug report. -->\r\n\r\n- [x] The bug is reproducible against the latest release and/or `master`.\r\n- [x] There are no similar issues or pull requests to fix it yet.\r\n\r\n### Describe the bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\nIf there is a websocket connection with a non-ascii char in header, the server will explode. The problem seems to be from here:\r\nhttps://github.com/encode/uvicorn/blob/9dc5a43209fe081ba3e74135189252bfddf75587/uvicorn/protocols/websockets/websockets_impl.py#L102\r\nAlso, I checked the [ASGI doc](https://buildmedia.readthedocs.org/media/pdf/asgi/latest/asgi.pdf) and in section 2.2.5 it states:\r\n- These are byte strings of the exact byte sequences sent by the client/to be sent by the server. While\r\nmodern HTTP standards say that headers should be ASCII, older ones did not and allowed a wider range of\r\ncharacters. Frameworks/applications should decode headers as they deem appropriate.\r\n\r\nSo, I think uvicorn should allow for non-ascii chars in header. It is definitely the only reason that we can't use uvicorn in production yet.\r\n\r\n\r\n### To reproduce\r\n\r\n<!-- Provide a *minimal* example with steps to reproduce the bug locally.\r\nRun a websocket server. Then do a ws request with non-unicode header to it.\r\n```bash\r\nwebsocat -H \"X-Test: Abra\u00e3o\" ws://localhost:5000/ws\r\n```\r\n\r\nNOTE: try to keep any external dependencies *at an absolute minimum* .\r\nIn other words, remove anything that doesn't make the bug go away.\r\n\r\n-->\r\n\r\n### Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nDoesn't explode.\r\n\r\n### Actual behavior\r\n\r\n<!-- A clear and concise description of what actually happens. -->\r\nExplodes with an exception.\r\n\r\n### Debugging material\r\n\r\n<!-- Any tracebacks, screenshots, etc. that can help understanding the problem.\r\n\r\nNOTE:\r\n- Please list tracebacks in full (don't truncate them).\r\n- If relevant, consider turning on DEBUG or TRACE logs for additional details (see the Logging section on https://www.uvicorn.org/settings/ specifically the `log-level` flag).\r\n- Consider using `<details>` to make tracebacks/logs collapsible if they're very large (see https://gist.github.com/ericclemmons/b146fe5da72ca1f706b2ef72a20ac39d).\r\n-->\r\n\r\n### Environment\r\n\r\n- OS / Python / Uvicorn version: just run `uvicorn --version`\r\n- The exact command you're running uvicorn with, all flags you passed included. If you run it with gunicorn please do the same. If there is a reverse-proxy involved and you cannot reproduce without it please give the minimal config of it to reproduce.\r\n\r\n### Additional context\r\n\r\n<!-- Any additional information that can help understanding the problem.\r\n\r\nEg. linked issues, or a description of what you were trying to achieve. 
-->\n", "before_files": [{"content": "import asyncio\nimport http\nimport logging\nfrom urllib.parse import unquote\n\nimport websockets\nfrom websockets.extensions.permessage_deflate import ServerPerMessageDeflateFactory\n\nfrom uvicorn.protocols.utils import get_local_addr, get_remote_addr, is_ssl\n\n\nclass Server:\n closing = False\n\n def register(self, ws):\n pass\n\n def unregister(self, ws):\n pass\n\n def is_serving(self):\n return not self.closing\n\n\nclass WebSocketProtocol(websockets.WebSocketServerProtocol):\n def __init__(self, config, server_state, _loop=None):\n if not config.loaded:\n config.load()\n\n self.config = config\n self.app = config.loaded_app\n self.loop = _loop or asyncio.get_event_loop()\n self.logger = logging.getLogger(\"uvicorn.error\")\n self.root_path = config.root_path\n\n # Shared server state\n self.connections = server_state.connections\n self.tasks = server_state.tasks\n\n # Connection state\n self.transport = None\n self.server = None\n self.client = None\n self.scheme = None\n\n # Connection events\n self.scope = None\n self.handshake_started_event = asyncio.Event()\n self.handshake_completed_event = asyncio.Event()\n self.closed_event = asyncio.Event()\n self.initial_response = None\n self.connect_sent = False\n self.accepted_subprotocol = None\n self.transfer_data_task = None\n\n self.ws_server = Server()\n\n super().__init__(\n ws_handler=self.ws_handler,\n ws_server=self.ws_server,\n extensions=[ServerPerMessageDeflateFactory()],\n )\n\n def connection_made(self, transport):\n self.connections.add(self)\n self.transport = transport\n self.server = get_local_addr(transport)\n self.client = get_remote_addr(transport)\n self.scheme = \"wss\" if is_ssl(transport) else \"ws\"\n super().connection_made(transport)\n\n def connection_lost(self, exc):\n self.connections.remove(self)\n self.handshake_completed_event.set()\n super().connection_lost(exc)\n\n def shutdown(self):\n self.ws_server.closing = True\n self.transport.close()\n\n def on_task_complete(self, task):\n self.tasks.discard(task)\n\n async def process_request(self, path, headers):\n \"\"\"\n This hook is called to determine if the websocket should return\n an HTTP response and close.\n\n Our behavior here is to start the ASGI application, and then wait\n for either `accept` or `close` in order to determine if we should\n close the connection.\n \"\"\"\n path_portion, _, query_string = path.partition(\"?\")\n\n websockets.handshake.check_request(headers)\n\n subprotocols = []\n for header in headers.get_all(\"Sec-WebSocket-Protocol\"):\n subprotocols.extend([token.strip() for token in header.split(\",\")])\n\n asgi_headers = [\n (name.encode(\"ascii\"), value.encode(\"ascii\"))\n for name, value in headers.raw_items()\n ]\n\n self.scope = {\n \"type\": \"websocket\",\n \"asgi\": {\"version\": self.config.asgi_version, \"spec_version\": \"2.1\"},\n \"scheme\": self.scheme,\n \"server\": self.server,\n \"client\": self.client,\n \"root_path\": self.root_path,\n \"path\": unquote(path_portion),\n \"raw_path\": path_portion,\n \"query_string\": query_string.encode(\"ascii\"),\n \"headers\": asgi_headers,\n \"subprotocols\": subprotocols,\n }\n task = self.loop.create_task(self.run_asgi())\n task.add_done_callback(self.on_task_complete)\n self.tasks.add(task)\n await self.handshake_started_event.wait()\n return self.initial_response\n\n def process_subprotocol(self, headers, available_subprotocols):\n \"\"\"\n We override the standard 'process_subprotocol' behavior here so that\n we return 
whatever subprotocol is sent in the 'accept' message.\n \"\"\"\n return self.accepted_subprotocol\n\n def send_500_response(self):\n msg = b\"Internal Server Error\"\n content = [\n b\"HTTP/1.1 500 Internal Server Error\\r\\n\"\n b\"content-type: text/plain; charset=utf-8\\r\\n\",\n b\"content-length: \" + str(len(msg)).encode(\"ascii\") + b\"\\r\\n\",\n b\"connection: close\\r\\n\",\n b\"\\r\\n\",\n msg,\n ]\n self.transport.write(b\"\".join(content))\n # Allow handler task to terminate cleanly, as websockets doesn't cancel it by\n # itself (see https://github.com/encode/uvicorn/issues/920)\n self.handshake_started_event.set()\n\n async def ws_handler(self, protocol, path):\n \"\"\"\n This is the main handler function for the 'websockets' implementation\n to call into. We just wait for close then return, and instead allow\n 'send' and 'receive' events to drive the flow.\n \"\"\"\n self.handshake_completed_event.set()\n await self.closed_event.wait()\n\n async def run_asgi(self):\n \"\"\"\n Wrapper around the ASGI callable, handling exceptions and unexpected\n termination states.\n \"\"\"\n try:\n result = await self.app(self.scope, self.asgi_receive, self.asgi_send)\n except BaseException as exc:\n self.closed_event.set()\n msg = \"Exception in ASGI application\\n\"\n self.logger.error(msg, exc_info=exc)\n if not self.handshake_started_event.is_set():\n self.send_500_response()\n else:\n await self.handshake_completed_event.wait()\n self.transport.close()\n else:\n self.closed_event.set()\n if not self.handshake_started_event.is_set():\n msg = \"ASGI callable returned without sending handshake.\"\n self.logger.error(msg)\n self.send_500_response()\n self.transport.close()\n elif result is not None:\n msg = \"ASGI callable should return None, but returned '%s'.\"\n self.logger.error(msg, result)\n await self.handshake_completed_event.wait()\n self.transport.close()\n\n async def asgi_send(self, message):\n message_type = message[\"type\"]\n\n if not self.handshake_started_event.is_set():\n if message_type == \"websocket.accept\":\n self.logger.info(\n '%s - \"WebSocket %s\" [accepted]',\n self.scope[\"client\"],\n self.scope[\"root_path\"] + self.scope[\"path\"],\n )\n self.initial_response = None\n self.accepted_subprotocol = message.get(\"subprotocol\")\n self.handshake_started_event.set()\n\n elif message_type == \"websocket.close\":\n self.logger.info(\n '%s - \"WebSocket %s\" 403',\n self.scope[\"client\"],\n self.scope[\"root_path\"] + self.scope[\"path\"],\n )\n self.initial_response = (http.HTTPStatus.FORBIDDEN, [], b\"\")\n self.handshake_started_event.set()\n self.closed_event.set()\n\n else:\n msg = (\n \"Expected ASGI message 'websocket.accept' or 'websocket.close', \"\n \"but got '%s'.\"\n )\n raise RuntimeError(msg % message_type)\n\n elif not self.closed_event.is_set():\n await self.handshake_completed_event.wait()\n\n if message_type == \"websocket.send\":\n bytes_data = message.get(\"bytes\")\n text_data = message.get(\"text\")\n data = text_data if bytes_data is None else bytes_data\n await self.send(data)\n\n elif message_type == \"websocket.close\":\n code = message.get(\"code\", 1000)\n reason = message.get(\"reason\", \"\")\n await self.close(code, reason)\n self.closed_event.set()\n\n else:\n msg = (\n \"Expected ASGI message 'websocket.send' or 'websocket.close',\"\n \" but got '%s'.\"\n )\n raise RuntimeError(msg % message_type)\n\n else:\n msg = \"Unexpected ASGI message '%s', after sending 'websocket.close'.\"\n raise RuntimeError(msg % message_type)\n\n async 
def asgi_receive(self):\n if not self.connect_sent:\n self.connect_sent = True\n return {\"type\": \"websocket.connect\"}\n\n await self.handshake_completed_event.wait()\n try:\n await self.ensure_open()\n data = await self.recv()\n except websockets.ConnectionClosed as exc:\n return {\"type\": \"websocket.disconnect\", \"code\": exc.code}\n\n msg = {\"type\": \"websocket.receive\"}\n\n if isinstance(data, str):\n msg[\"text\"] = data\n else:\n msg[\"bytes\"] = data\n\n return msg\n", "path": "uvicorn/protocols/websockets/websockets_impl.py"}], "after_files": [{"content": "import asyncio\nimport http\nimport logging\nfrom urllib.parse import unquote\n\nimport websockets\nfrom websockets.extensions.permessage_deflate import ServerPerMessageDeflateFactory\n\nfrom uvicorn.protocols.utils import get_local_addr, get_remote_addr, is_ssl\n\n\nclass Server:\n closing = False\n\n def register(self, ws):\n pass\n\n def unregister(self, ws):\n pass\n\n def is_serving(self):\n return not self.closing\n\n\nclass WebSocketProtocol(websockets.WebSocketServerProtocol):\n def __init__(self, config, server_state, _loop=None):\n if not config.loaded:\n config.load()\n\n self.config = config\n self.app = config.loaded_app\n self.loop = _loop or asyncio.get_event_loop()\n self.logger = logging.getLogger(\"uvicorn.error\")\n self.root_path = config.root_path\n\n # Shared server state\n self.connections = server_state.connections\n self.tasks = server_state.tasks\n\n # Connection state\n self.transport = None\n self.server = None\n self.client = None\n self.scheme = None\n\n # Connection events\n self.scope = None\n self.handshake_started_event = asyncio.Event()\n self.handshake_completed_event = asyncio.Event()\n self.closed_event = asyncio.Event()\n self.initial_response = None\n self.connect_sent = False\n self.accepted_subprotocol = None\n self.transfer_data_task = None\n\n self.ws_server = Server()\n\n super().__init__(\n ws_handler=self.ws_handler,\n ws_server=self.ws_server,\n extensions=[ServerPerMessageDeflateFactory()],\n )\n\n def connection_made(self, transport):\n self.connections.add(self)\n self.transport = transport\n self.server = get_local_addr(transport)\n self.client = get_remote_addr(transport)\n self.scheme = \"wss\" if is_ssl(transport) else \"ws\"\n super().connection_made(transport)\n\n def connection_lost(self, exc):\n self.connections.remove(self)\n self.handshake_completed_event.set()\n super().connection_lost(exc)\n\n def shutdown(self):\n self.ws_server.closing = True\n self.transport.close()\n\n def on_task_complete(self, task):\n self.tasks.discard(task)\n\n async def process_request(self, path, headers):\n \"\"\"\n This hook is called to determine if the websocket should return\n an HTTP response and close.\n\n Our behavior here is to start the ASGI application, and then wait\n for either `accept` or `close` in order to determine if we should\n close the connection.\n \"\"\"\n path_portion, _, query_string = path.partition(\"?\")\n\n websockets.handshake.check_request(headers)\n\n subprotocols = []\n for header in headers.get_all(\"Sec-WebSocket-Protocol\"):\n subprotocols.extend([token.strip() for token in header.split(\",\")])\n\n asgi_headers = [\n (name.encode(\"ascii\"), value.encode(\"ascii\", errors=\"surrogateescape\"))\n for name, value in headers.raw_items()\n ]\n\n self.scope = {\n \"type\": \"websocket\",\n \"asgi\": {\"version\": self.config.asgi_version, \"spec_version\": \"2.1\"},\n \"scheme\": self.scheme,\n \"server\": self.server,\n \"client\": self.client,\n 
\"root_path\": self.root_path,\n \"path\": unquote(path_portion),\n \"raw_path\": path_portion,\n \"query_string\": query_string.encode(\"ascii\"),\n \"headers\": asgi_headers,\n \"subprotocols\": subprotocols,\n }\n task = self.loop.create_task(self.run_asgi())\n task.add_done_callback(self.on_task_complete)\n self.tasks.add(task)\n await self.handshake_started_event.wait()\n return self.initial_response\n\n def process_subprotocol(self, headers, available_subprotocols):\n \"\"\"\n We override the standard 'process_subprotocol' behavior here so that\n we return whatever subprotocol is sent in the 'accept' message.\n \"\"\"\n return self.accepted_subprotocol\n\n def send_500_response(self):\n msg = b\"Internal Server Error\"\n content = [\n b\"HTTP/1.1 500 Internal Server Error\\r\\n\"\n b\"content-type: text/plain; charset=utf-8\\r\\n\",\n b\"content-length: \" + str(len(msg)).encode(\"ascii\") + b\"\\r\\n\",\n b\"connection: close\\r\\n\",\n b\"\\r\\n\",\n msg,\n ]\n self.transport.write(b\"\".join(content))\n # Allow handler task to terminate cleanly, as websockets doesn't cancel it by\n # itself (see https://github.com/encode/uvicorn/issues/920)\n self.handshake_started_event.set()\n\n async def ws_handler(self, protocol, path):\n \"\"\"\n This is the main handler function for the 'websockets' implementation\n to call into. We just wait for close then return, and instead allow\n 'send' and 'receive' events to drive the flow.\n \"\"\"\n self.handshake_completed_event.set()\n await self.closed_event.wait()\n\n async def run_asgi(self):\n \"\"\"\n Wrapper around the ASGI callable, handling exceptions and unexpected\n termination states.\n \"\"\"\n try:\n result = await self.app(self.scope, self.asgi_receive, self.asgi_send)\n except BaseException as exc:\n self.closed_event.set()\n msg = \"Exception in ASGI application\\n\"\n self.logger.error(msg, exc_info=exc)\n if not self.handshake_started_event.is_set():\n self.send_500_response()\n else:\n await self.handshake_completed_event.wait()\n self.transport.close()\n else:\n self.closed_event.set()\n if not self.handshake_started_event.is_set():\n msg = \"ASGI callable returned without sending handshake.\"\n self.logger.error(msg)\n self.send_500_response()\n self.transport.close()\n elif result is not None:\n msg = \"ASGI callable should return None, but returned '%s'.\"\n self.logger.error(msg, result)\n await self.handshake_completed_event.wait()\n self.transport.close()\n\n async def asgi_send(self, message):\n message_type = message[\"type\"]\n\n if not self.handshake_started_event.is_set():\n if message_type == \"websocket.accept\":\n self.logger.info(\n '%s - \"WebSocket %s\" [accepted]',\n self.scope[\"client\"],\n self.scope[\"root_path\"] + self.scope[\"path\"],\n )\n self.initial_response = None\n self.accepted_subprotocol = message.get(\"subprotocol\")\n self.handshake_started_event.set()\n\n elif message_type == \"websocket.close\":\n self.logger.info(\n '%s - \"WebSocket %s\" 403',\n self.scope[\"client\"],\n self.scope[\"root_path\"] + self.scope[\"path\"],\n )\n self.initial_response = (http.HTTPStatus.FORBIDDEN, [], b\"\")\n self.handshake_started_event.set()\n self.closed_event.set()\n\n else:\n msg = (\n \"Expected ASGI message 'websocket.accept' or 'websocket.close', \"\n \"but got '%s'.\"\n )\n raise RuntimeError(msg % message_type)\n\n elif not self.closed_event.is_set():\n await self.handshake_completed_event.wait()\n\n if message_type == \"websocket.send\":\n bytes_data = message.get(\"bytes\")\n text_data = 
message.get(\"text\")\n data = text_data if bytes_data is None else bytes_data\n await self.send(data)\n\n elif message_type == \"websocket.close\":\n code = message.get(\"code\", 1000)\n reason = message.get(\"reason\", \"\")\n await self.close(code, reason)\n self.closed_event.set()\n\n else:\n msg = (\n \"Expected ASGI message 'websocket.send' or 'websocket.close',\"\n \" but got '%s'.\"\n )\n raise RuntimeError(msg % message_type)\n\n else:\n msg = \"Unexpected ASGI message '%s', after sending 'websocket.close'.\"\n raise RuntimeError(msg % message_type)\n\n async def asgi_receive(self):\n if not self.connect_sent:\n self.connect_sent = True\n return {\"type\": \"websocket.connect\"}\n\n await self.handshake_completed_event.wait()\n try:\n await self.ensure_open()\n data = await self.recv()\n except websockets.ConnectionClosed as exc:\n return {\"type\": \"websocket.disconnect\", \"code\": exc.code}\n\n msg = {\"type\": \"websocket.receive\"}\n\n if isinstance(data, str):\n msg[\"text\"] = data\n else:\n msg[\"bytes\"] = data\n\n return msg\n", "path": "uvicorn/protocols/websockets/websockets_impl.py"}]}
| 3,484 | 135 |
gh_patches_debug_14064
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__pytorch-lightning-2255
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CPU/GPU Template
## 🐛 Bug
The GPU and CPU templates currently do not run on master after the changes that introduced the setup hook.
```
python -m pl_examples.basic_examples.gpu_template --gpus 4 --distributed_backend ddp
python -m pl_examples.basic_examples.cpu_template
```
CPU Template Error:
```
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/basic_examples/cpu_template.py", line 53, in <module>
main(args)
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/basic_examples/cpu_template.py", line 34, in main
trainer.fit(model)
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 952, in fit
self.run_pretrain_routine(model)
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 1063, in run_pretrain_routine
self.reset_val_dataloader(ref_model)
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 331, in reset_val_dataloader
self._reset_eval_dataloader(model, 'val')
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 253, in _reset_eval_dataloader
dataloaders = self.request_dataloader(getattr(model, f'{mode}_dataloader'))
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 352, in request_dataloader
dataloader = dataloader_fx()
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/models/lightning_template.py", line 158, in val_dataloader
return DataLoader(self.mnist_test, batch_size=self.batch_size, num_workers=4)
File "/home/anthony/.cache/pypoetry/virtualenvs/robotics-zp-60jGk-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'LightningTemplateModel' object has no attribute 'mnist_test'
```
GPU Template Error:
```
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/models/lightning_template.py", line 64, in __init__
self.c_d1_drop = nn.Dropout(self.drop_prob)
File "/home/anthony/.cache/pypoetry/virtualenvs/robotics-zp-60jGk-py3.6/lib/python3.6/site-packages/torch/nn/modules/dropout.py", line 10, in __init__
if p < 0 or p > 1:
TypeError: '<' not supported between instances of 'Namespace' and 'int'
```
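The `TypeError` above is what happens when the whole `argparse.Namespace` is handed to a constructor that expects individual hyperparameter values. Below is a minimal, self-contained sketch of the failure and of the `**vars(...)` unpacking that avoids it; the `TemplateModel` class and its `drop_prob` argument are simplified stand-ins for the real example model:
```python
from argparse import Namespace

import torch.nn as nn


class TemplateModel(nn.Module):
    """Toy stand-in for the example model's constructor."""

    def __init__(self, drop_prob=0.2, **kwargs):
        super().__init__()
        # nn.Dropout(p) runs `if p < 0 or p > 1`, so p must be a number,
        # not a whole Namespace object.
        self.c_d1_drop = nn.Dropout(drop_prob)


hparams = Namespace(drop_prob=0.5, learning_rate=0.001)

# TemplateModel(hparams) would bind the Namespace itself to drop_prob and
# raise: TypeError: '<' not supported between instances of 'Namespace' and 'int'
model = TemplateModel(**vars(hparams))  # unpack into keyword arguments instead
```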
### Environment
* CUDA:
- GPU:
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- available: True
- version: 10.2
* Packages:
- numpy: 1.18.4
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.8.0
- tensorboard: 2.2.1
- tqdm: 4.46.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.6.8
- version: #44~18.04.2-Ubuntu SMP Thu Apr 23 14:27:18 UTC 2020
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pl_examples/basic_examples/gpu_template.py`
Content:
```
1 """
2 Runs a model on a single node across multiple gpus.
3 """
4 import os
5 from argparse import ArgumentParser
6
7 import numpy as np
8 import torch
9
10 import pytorch_lightning as pl
11 from pl_examples.models.lightning_template import LightningTemplateModel
12
13 SEED = 2334
14 torch.manual_seed(SEED)
15 np.random.seed(SEED)
16
17
18 def main(hparams):
19 """
20 Main training routine specific for this project
21 :param hparams:
22 """
23 # ------------------------
24 # 1 INIT LIGHTNING MODEL
25 # ------------------------
26 model = LightningTemplateModel(hparams)
27
28 # ------------------------
29 # 2 INIT TRAINER
30 # ------------------------
31 trainer = pl.Trainer(
32 max_epochs=hparams.epochs,
33 gpus=hparams.gpus,
34 distributed_backend=hparams.distributed_backend,
35 precision=16 if hparams.use_16bit else 32,
36 )
37
38 # ------------------------
39 # 3 START TRAINING
40 # ------------------------
41 trainer.fit(model)
42
43
44 if __name__ == '__main__':
45 # ------------------------
46 # TRAINING ARGUMENTS
47 # ------------------------
48 # these are project-wide arguments
49
50 root_dir = os.path.dirname(os.path.realpath(__file__))
51 parent_parser = ArgumentParser(add_help=False)
52
53 # gpu args
54 parent_parser.add_argument(
55 '--gpus',
56 type=int,
57 default=2,
58 help='how many gpus'
59 )
60 parent_parser.add_argument(
61 '--distributed_backend',
62 type=str,
63 default='dp',
64 help='supports three options dp, ddp, ddp2'
65 )
66 parent_parser.add_argument(
67 '--use_16bit',
68 dest='use_16bit',
69 action='store_true',
70 help='if true uses 16 bit precision'
71 )
72
73 # each LightningModule defines arguments relevant to it
74 parser = LightningTemplateModel.add_model_specific_args(parent_parser, root_dir)
75 hyperparams = parser.parse_args()
76
77 # ---------------------
78 # RUN TRAINING
79 # ---------------------
80 main(hyperparams)
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pl_examples/basic_examples/gpu_template.py b/pl_examples/basic_examples/gpu_template.py
--- a/pl_examples/basic_examples/gpu_template.py
+++ b/pl_examples/basic_examples/gpu_template.py
@@ -23,7 +23,7 @@
# ------------------------
# 1 INIT LIGHTNING MODEL
# ------------------------
- model = LightningTemplateModel(hparams)
+ model = LightningTemplateModel(**vars(hparams))
# ------------------------
# 2 INIT TRAINER
@@ -61,7 +61,7 @@
'--distributed_backend',
type=str,
default='dp',
- help='supports three options dp, ddp, ddp2'
+ help='supports four options dp, ddp, ddp2, ddp_spawn'
)
parent_parser.add_argument(
'--use_16bit',
|
{"golden_diff": "diff --git a/pl_examples/basic_examples/gpu_template.py b/pl_examples/basic_examples/gpu_template.py\n--- a/pl_examples/basic_examples/gpu_template.py\n+++ b/pl_examples/basic_examples/gpu_template.py\n@@ -23,7 +23,7 @@\n # ------------------------\n # 1 INIT LIGHTNING MODEL\n # ------------------------\n- model = LightningTemplateModel(hparams)\n+ model = LightningTemplateModel(**vars(hparams))\n \n # ------------------------\n # 2 INIT TRAINER\n@@ -61,7 +61,7 @@\n '--distributed_backend',\n type=str,\n default='dp',\n- help='supports three options dp, ddp, ddp2'\n+ help='supports four options dp, ddp, ddp2, ddp_spawn'\n )\n parent_parser.add_argument(\n '--use_16bit',\n", "issue": "CPU/GPU Template\n\r\n\r\n## \ud83d\udc1b Bug\r\n\r\nThe GPU or CPU template do not run currently on master after changes including the setup hook. \r\n\r\n```\r\npython -m pl_examples.basic_examples.gpu_template --gpus 4 --distributed_backend ddp\r\npython -m pl_examples.basic_examples.cpu_template\r\n```\r\n\r\nCPU Template Error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/anthony/Downloads/pytorch-lightning/pl_examples/basic_examples/cpu_template.py\", line 53, in <module>\r\n main(args)\r\n File \"/home/anthony/Downloads/pytorch-lightning/pl_examples/basic_examples/cpu_template.py\", line 34, in main\r\n trainer.fit(model)\r\n File \"/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/trainer.py\", line 952, in fit\r\n self.run_pretrain_routine(model)\r\n File \"/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/trainer.py\", line 1063, in run_pretrain_routine\r\n self.reset_val_dataloader(ref_model)\r\n File \"/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py\", line 331, in reset_val_dataloader\r\n self._reset_eval_dataloader(model, 'val')\r\n File \"/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py\", line 253, in _reset_eval_dataloader\r\n dataloaders = self.request_dataloader(getattr(model, f'{mode}_dataloader'))\r\n File \"/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py\", line 352, in request_dataloader\r\n dataloader = dataloader_fx()\r\n File \"/home/anthony/Downloads/pytorch-lightning/pl_examples/models/lightning_template.py\", line 158, in val_dataloader\r\n return DataLoader(self.mnist_test, batch_size=self.batch_size, num_workers=4)\r\n File \"/home/anthony/.cache/pypoetry/virtualenvs/robotics-zp-60jGk-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 594, in __getattr__\r\n type(self).__name__, name))\r\nAttributeError: 'LightningTemplateModel' object has no attribute 'mnist_test'\r\n```\r\nGPU Template Error:\r\n```\r\n File \"/home/anthony/Downloads/pytorch-lightning/pl_examples/models/lightning_template.py\", line 64, in __init__\r\n self.c_d1_drop = nn.Dropout(self.drop_prob)\r\n File \"/home/anthony/.cache/pypoetry/virtualenvs/robotics-zp-60jGk-py3.6/lib/python3.6/site-packages/torch/nn/modules/dropout.py\", line 10, in __init__\r\n if p < 0 or p > 1:\r\nTypeError: '<' not supported between instances of 'Namespace' and 'int'\r\n```\r\n\r\n### Environment\r\n\r\n\r\n* CUDA:\r\n - GPU:\r\n - GeForce RTX 2080 Ti\r\n - GeForce RTX 2080 Ti\r\n - GeForce RTX 2080 Ti\r\n - GeForce RTX 2080 Ti\r\n - available: 
True\r\n - version: 10.2\r\n* Packages:\r\n - numpy: 1.18.4\r\n - pyTorch_debug: False\r\n - pyTorch_version: 1.5.0\r\n - pytorch-lightning: 0.8.0\r\n - tensorboard: 2.2.1\r\n - tqdm: 4.46.0\r\n* System:\r\n - OS: Linux\r\n - architecture:\r\n - 64bit\r\n - ELF\r\n - processor: x86_64\r\n - python: 3.6.8\r\n - version: #44~18.04.2-Ubuntu SMP Thu Apr 23 14:27:18 UTC 2020\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nRuns a model on a single node across multiple gpus.\n\"\"\"\nimport os\nfrom argparse import ArgumentParser\n\nimport numpy as np\nimport torch\n\nimport pytorch_lightning as pl\nfrom pl_examples.models.lightning_template import LightningTemplateModel\n\nSEED = 2334\ntorch.manual_seed(SEED)\nnp.random.seed(SEED)\n\n\ndef main(hparams):\n \"\"\"\n Main training routine specific for this project\n :param hparams:\n \"\"\"\n # ------------------------\n # 1 INIT LIGHTNING MODEL\n # ------------------------\n model = LightningTemplateModel(hparams)\n\n # ------------------------\n # 2 INIT TRAINER\n # ------------------------\n trainer = pl.Trainer(\n max_epochs=hparams.epochs,\n gpus=hparams.gpus,\n distributed_backend=hparams.distributed_backend,\n precision=16 if hparams.use_16bit else 32,\n )\n\n # ------------------------\n # 3 START TRAINING\n # ------------------------\n trainer.fit(model)\n\n\nif __name__ == '__main__':\n # ------------------------\n # TRAINING ARGUMENTS\n # ------------------------\n # these are project-wide arguments\n\n root_dir = os.path.dirname(os.path.realpath(__file__))\n parent_parser = ArgumentParser(add_help=False)\n\n # gpu args\n parent_parser.add_argument(\n '--gpus',\n type=int,\n default=2,\n help='how many gpus'\n )\n parent_parser.add_argument(\n '--distributed_backend',\n type=str,\n default='dp',\n help='supports three options dp, ddp, ddp2'\n )\n parent_parser.add_argument(\n '--use_16bit',\n dest='use_16bit',\n action='store_true',\n help='if true uses 16 bit precision'\n )\n\n # each LightningModule defines arguments relevant to it\n parser = LightningTemplateModel.add_model_specific_args(parent_parser, root_dir)\n hyperparams = parser.parse_args()\n\n # ---------------------\n # RUN TRAINING\n # ---------------------\n main(hyperparams)\n", "path": "pl_examples/basic_examples/gpu_template.py"}], "after_files": [{"content": "\"\"\"\nRuns a model on a single node across multiple gpus.\n\"\"\"\nimport os\nfrom argparse import ArgumentParser\n\nimport numpy as np\nimport torch\n\nimport pytorch_lightning as pl\nfrom pl_examples.models.lightning_template import LightningTemplateModel\n\nSEED = 2334\ntorch.manual_seed(SEED)\nnp.random.seed(SEED)\n\n\ndef main(hparams):\n \"\"\"\n Main training routine specific for this project\n :param hparams:\n \"\"\"\n # ------------------------\n # 1 INIT LIGHTNING MODEL\n # ------------------------\n model = LightningTemplateModel(**vars(hparams))\n\n # ------------------------\n # 2 INIT TRAINER\n # ------------------------\n trainer = pl.Trainer(\n max_epochs=hparams.epochs,\n gpus=hparams.gpus,\n distributed_backend=hparams.distributed_backend,\n precision=16 if hparams.use_16bit else 32,\n )\n\n # ------------------------\n # 3 START TRAINING\n # ------------------------\n trainer.fit(model)\n\n\nif __name__ == '__main__':\n # ------------------------\n # TRAINING ARGUMENTS\n # ------------------------\n # these are project-wide arguments\n\n root_dir = os.path.dirname(os.path.realpath(__file__))\n parent_parser = ArgumentParser(add_help=False)\n\n # gpu args\n parent_parser.add_argument(\n 
'--gpus',\n type=int,\n default=2,\n help='how many gpus'\n )\n parent_parser.add_argument(\n '--distributed_backend',\n type=str,\n default='dp',\n help='supports four options dp, ddp, ddp2, ddp_spawn'\n )\n parent_parser.add_argument(\n '--use_16bit',\n dest='use_16bit',\n action='store_true',\n help='if true uses 16 bit precision'\n )\n\n # each LightningModule defines arguments relevant to it\n parser = LightningTemplateModel.add_model_specific_args(parent_parser, root_dir)\n hyperparams = parser.parse_args()\n\n # ---------------------\n # RUN TRAINING\n # ---------------------\n main(hyperparams)\n", "path": "pl_examples/basic_examples/gpu_template.py"}]}
| 1,831 | 186 |
gh_patches_debug_29155
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmdetection-4400
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
VFNET device error when running inference_detector
Hi everyone,
When I run inference_detector with the VFNET algorithm, I get this error message (only for VFNET; ATSS and the others show no problems):
/usr/local/lib/python3.6/dist-packages/mmcv/parallel/_functions.py in forward(target_gpus, input)
71 # Perform CPU to GPU copies in a background stream
72 streams = [_get_stream(device) for device in target_gpus]
---> 73
74 outputs = scatter(input, target_gpus, streams)
75 # Synchronize with the copy stream
/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/_functions.py in _get_stream(device)
117 if _streams is None:
118 _streams = [None] * torch.cuda.device_count()
--> 119 if _streams[device] is None:
120 _streams[device] = torch.cuda.Stream(device)
121 return _streams[device]
TypeError: list indices must be integers or slices, not torch.device
The device parameter of init_detector is the default ('cuda:0') and distributed=False. Can you help me?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmdet/apis/inference.py`
Content:
```
1 import warnings
2
3 import matplotlib.pyplot as plt
4 import mmcv
5 import numpy as np
6 import torch
7 from mmcv.ops import RoIPool
8 from mmcv.parallel import collate, scatter
9 from mmcv.runner import load_checkpoint
10
11 from mmdet.core import get_classes
12 from mmdet.datasets.pipelines import Compose
13 from mmdet.models import build_detector
14
15
16 def init_detector(config, checkpoint=None, device='cuda:0', cfg_options=None):
17 """Initialize a detector from config file.
18
19 Args:
20 config (str or :obj:`mmcv.Config`): Config file path or the config
21 object.
22 checkpoint (str, optional): Checkpoint path. If left as None, the model
23 will not load any weights.
24 cfg_options (dict): Options to override some settings in the used
25 config.
26
27 Returns:
28 nn.Module: The constructed detector.
29 """
30 if isinstance(config, str):
31 config = mmcv.Config.fromfile(config)
32 elif not isinstance(config, mmcv.Config):
33 raise TypeError('config must be a filename or Config object, '
34 f'but got {type(config)}')
35 if cfg_options is not None:
36 config.merge_from_dict(cfg_options)
37 config.model.pretrained = None
38 model = build_detector(config.model, test_cfg=config.test_cfg)
39 if checkpoint is not None:
40 map_loc = 'cpu' if device == 'cpu' else None
41 checkpoint = load_checkpoint(model, checkpoint, map_location=map_loc)
42 if 'CLASSES' in checkpoint['meta']:
43 model.CLASSES = checkpoint['meta']['CLASSES']
44 else:
45 warnings.simplefilter('once')
46 warnings.warn('Class names are not saved in the checkpoint\'s '
47 'meta data, use COCO classes by default.')
48 model.CLASSES = get_classes('coco')
49 model.cfg = config # save the config in the model for convenience
50 model.to(device)
51 model.eval()
52 return model
53
54
55 class LoadImage(object):
56 """A simple pipeline to load image."""
57
58 def __call__(self, results):
59 """Call function to load images into results.
60
61 Args:
62 results (dict): A result dict contains the file name
63 of the image to be read.
64
65 Returns:
66 dict: ``results`` will be returned containing loaded image.
67 """
68 if isinstance(results['img'], str):
69 results['filename'] = results['img']
70 results['ori_filename'] = results['img']
71 else:
72 results['filename'] = None
73 results['ori_filename'] = None
74 img = mmcv.imread(results['img'])
75 results['img'] = img
76 results['img_fields'] = ['img']
77 results['img_shape'] = img.shape
78 results['ori_shape'] = img.shape
79 return results
80
81
82 def inference_detector(model, img):
83 """Inference image(s) with the detector.
84
85 Args:
86 model (nn.Module): The loaded detector.
87 imgs (str/ndarray or list[str/ndarray]): Either image files or loaded
88 images.
89
90 Returns:
91 If imgs is a str, a generator will be returned, otherwise return the
92 detection results directly.
93 """
94 cfg = model.cfg
95 device = next(model.parameters()).device # model device
96 # prepare data
97 if isinstance(img, np.ndarray):
98 # directly add img
99 data = dict(img=img)
100 cfg = cfg.copy()
101 # set loading pipeline type
102 cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam'
103 else:
104 # add information into dict
105 data = dict(img_info=dict(filename=img), img_prefix=None)
106 # build the data pipeline
107 test_pipeline = Compose(cfg.data.test.pipeline)
108 data = test_pipeline(data)
109 data = collate([data], samples_per_gpu=1)
110 if next(model.parameters()).is_cuda:
111 # scatter to specified GPU
112 data = scatter(data, [device])[0]
113 else:
114 for m in model.modules():
115 assert not isinstance(
116 m, RoIPool
117 ), 'CPU inference with RoIPool is not supported currently.'
118 # just get the actual data from DataContainer
119 data['img_metas'] = data['img_metas'][0].data
120
121 # forward the model
122 with torch.no_grad():
123 result = model(return_loss=False, rescale=True, **data)[0]
124 return result
125
126
127 async def async_inference_detector(model, img):
128 """Async inference image(s) with the detector.
129
130 Args:
131 model (nn.Module): The loaded detector.
132 img (str | ndarray): Either image files or loaded images.
133
134 Returns:
135 Awaitable detection results.
136 """
137 cfg = model.cfg
138 device = next(model.parameters()).device # model device
139 # prepare data
140 if isinstance(img, np.ndarray):
141 # directly add img
142 data = dict(img=img)
143 cfg = cfg.copy()
144 # set loading pipeline type
145 cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam'
146 else:
147 # add information into dict
148 data = dict(img_info=dict(filename=img), img_prefix=None)
149 # build the data pipeline
150 test_pipeline = Compose(cfg.data.test.pipeline)
151 data = test_pipeline(data)
152 data = scatter(collate([data], samples_per_gpu=1), [device])[0]
153
154 # We don't restore `torch.is_grad_enabled()` value during concurrent
155 # inference since execution can overlap
156 torch.set_grad_enabled(False)
157 result = await model.aforward_test(rescale=True, **data)
158 return result
159
160
161 def show_result_pyplot(model,
162 img,
163 result,
164 score_thr=0.3,
165 fig_size=(15, 10),
166 title='result',
167 block=True):
168 """Visualize the detection results on the image.
169
170 Args:
171 model (nn.Module): The loaded detector.
172 img (str or np.ndarray): Image filename or loaded image.
173 result (tuple[list] or list): The detection result, can be either
174 (bbox, segm) or just bbox.
175 score_thr (float): The threshold to visualize the bboxes and masks.
176 fig_size (tuple): Figure size of the pyplot figure.
177 title (str): Title of the pyplot figure.
178 block (bool): Whether to block GUI.
179 """
180 if hasattr(model, 'module'):
181 model = model.module
182 img = model.show_result(img, result, score_thr=score_thr, show=False)
183 plt.figure(figsize=fig_size)
184 plt.imshow(mmcv.bgr2rgb(img))
185 plt.title(title)
186 plt.tight_layout()
187 plt.show(block=block)
188
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mmdet/apis/inference.py b/mmdet/apis/inference.py
--- a/mmdet/apis/inference.py
+++ b/mmdet/apis/inference.py
@@ -9,6 +9,7 @@
from mmcv.runner import load_checkpoint
from mmdet.core import get_classes
+from mmdet.datasets import replace_ImageToTensor
from mmdet.datasets.pipelines import Compose
from mmdet.models import build_detector
@@ -104,9 +105,13 @@
# add information into dict
data = dict(img_info=dict(filename=img), img_prefix=None)
# build the data pipeline
+ cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)
test_pipeline = Compose(cfg.data.test.pipeline)
data = test_pipeline(data)
data = collate([data], samples_per_gpu=1)
+ # just get the actual data from DataContainer
+ data['img_metas'] = [img_metas.data[0] for img_metas in data['img_metas']]
+ data['img'] = [img.data[0] for img in data['img']]
if next(model.parameters()).is_cuda:
# scatter to specified GPU
data = scatter(data, [device])[0]
@@ -115,8 +120,6 @@
assert not isinstance(
m, RoIPool
), 'CPU inference with RoIPool is not supported currently.'
- # just get the actual data from DataContainer
- data['img_metas'] = data['img_metas'][0].data
# forward the model
with torch.no_grad():
|
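From the caller's side nothing needs to change: the patched `inference_detector` unwraps the collated `DataContainer`s itself before choosing between the GPU `scatter` path and the CPU path. A typical invocation might look like the sketch below, where the config and checkpoint paths are placeholders to be replaced with real files:
```python
from mmdet.apis import inference_detector, init_detector

# Placeholder paths; substitute a real VFNet config and checkpoint.
config_file = 'configs/vfnet/vfnet_r50_fpn_1x_coco.py'
checkpoint_file = 'checkpoints/vfnet_r50_fpn_1x_coco.pth'

model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'demo/demo.jpg')  # per-class bbox arrays
```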
{"golden_diff": "diff --git a/mmdet/apis/inference.py b/mmdet/apis/inference.py\n--- a/mmdet/apis/inference.py\n+++ b/mmdet/apis/inference.py\n@@ -9,6 +9,7 @@\n from mmcv.runner import load_checkpoint\n \n from mmdet.core import get_classes\n+from mmdet.datasets import replace_ImageToTensor\n from mmdet.datasets.pipelines import Compose\n from mmdet.models import build_detector\n \n@@ -104,9 +105,13 @@\n # add information into dict\n data = dict(img_info=dict(filename=img), img_prefix=None)\n # build the data pipeline\n+ cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)\n test_pipeline = Compose(cfg.data.test.pipeline)\n data = test_pipeline(data)\n data = collate([data], samples_per_gpu=1)\n+ # just get the actual data from DataContainer\n+ data['img_metas'] = [img_metas.data[0] for img_metas in data['img_metas']]\n+ data['img'] = [img.data[0] for img in data['img']]\n if next(model.parameters()).is_cuda:\n # scatter to specified GPU\n data = scatter(data, [device])[0]\n@@ -115,8 +120,6 @@\n assert not isinstance(\n m, RoIPool\n ), 'CPU inference with RoIPool is not supported currently.'\n- # just get the actual data from DataContainer\n- data['img_metas'] = data['img_metas'][0].data\n \n # forward the model\n with torch.no_grad():\n", "issue": "VFNET device error when run inference_detector\nHi everyone,\r\n\r\nWhen a run the inference_detector to VFNET algorithm, I got this error message (only for VFNET, for ATSS ... no problems):\r\n\r\n/usr/local/lib/python3.6/dist-packages/mmcv/parallel/_functions.py in forward(target_gpus, input)\r\n 71 # Perform CPU to GPU copies in a background stream\r\n 72 streams = [_get_stream(device) for device in target_gpus]\r\n---> 73 \r\n 74 outputs = scatter(input, target_gpus, streams)\r\n 75 # Synchronize with the copy stream\r\n\r\n/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/_functions.py in _get_stream(device)\r\n 117 if _streams is None:\r\n 118 _streams = [None] * torch.cuda.device_count()\r\n--> 119 if _streams[device] is None:\r\n 120 _streams[device] = torch.cuda.Stream(device)\r\n 121 return _streams[device]\r\n\r\nTypeError: list indices must be integers or slices, not torch.device\r\n\r\nThe device paramenter of init_detector is default ('cuda:0') and distributed=False. Can you help me?\r\n\n", "before_files": [{"content": "import warnings\n\nimport matplotlib.pyplot as plt\nimport mmcv\nimport numpy as np\nimport torch\nfrom mmcv.ops import RoIPool\nfrom mmcv.parallel import collate, scatter\nfrom mmcv.runner import load_checkpoint\n\nfrom mmdet.core import get_classes\nfrom mmdet.datasets.pipelines import Compose\nfrom mmdet.models import build_detector\n\n\ndef init_detector(config, checkpoint=None, device='cuda:0', cfg_options=None):\n \"\"\"Initialize a detector from config file.\n\n Args:\n config (str or :obj:`mmcv.Config`): Config file path or the config\n object.\n checkpoint (str, optional): Checkpoint path. 
If left as None, the model\n will not load any weights.\n cfg_options (dict): Options to override some settings in the used\n config.\n\n Returns:\n nn.Module: The constructed detector.\n \"\"\"\n if isinstance(config, str):\n config = mmcv.Config.fromfile(config)\n elif not isinstance(config, mmcv.Config):\n raise TypeError('config must be a filename or Config object, '\n f'but got {type(config)}')\n if cfg_options is not None:\n config.merge_from_dict(cfg_options)\n config.model.pretrained = None\n model = build_detector(config.model, test_cfg=config.test_cfg)\n if checkpoint is not None:\n map_loc = 'cpu' if device == 'cpu' else None\n checkpoint = load_checkpoint(model, checkpoint, map_location=map_loc)\n if 'CLASSES' in checkpoint['meta']:\n model.CLASSES = checkpoint['meta']['CLASSES']\n else:\n warnings.simplefilter('once')\n warnings.warn('Class names are not saved in the checkpoint\\'s '\n 'meta data, use COCO classes by default.')\n model.CLASSES = get_classes('coco')\n model.cfg = config # save the config in the model for convenience\n model.to(device)\n model.eval()\n return model\n\n\nclass LoadImage(object):\n \"\"\"A simple pipeline to load image.\"\"\"\n\n def __call__(self, results):\n \"\"\"Call function to load images into results.\n\n Args:\n results (dict): A result dict contains the file name\n of the image to be read.\n\n Returns:\n dict: ``results`` will be returned containing loaded image.\n \"\"\"\n if isinstance(results['img'], str):\n results['filename'] = results['img']\n results['ori_filename'] = results['img']\n else:\n results['filename'] = None\n results['ori_filename'] = None\n img = mmcv.imread(results['img'])\n results['img'] = img\n results['img_fields'] = ['img']\n results['img_shape'] = img.shape\n results['ori_shape'] = img.shape\n return results\n\n\ndef inference_detector(model, img):\n \"\"\"Inference image(s) with the detector.\n\n Args:\n model (nn.Module): The loaded detector.\n imgs (str/ndarray or list[str/ndarray]): Either image files or loaded\n images.\n\n Returns:\n If imgs is a str, a generator will be returned, otherwise return the\n detection results directly.\n \"\"\"\n cfg = model.cfg\n device = next(model.parameters()).device # model device\n # prepare data\n if isinstance(img, np.ndarray):\n # directly add img\n data = dict(img=img)\n cfg = cfg.copy()\n # set loading pipeline type\n cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam'\n else:\n # add information into dict\n data = dict(img_info=dict(filename=img), img_prefix=None)\n # build the data pipeline\n test_pipeline = Compose(cfg.data.test.pipeline)\n data = test_pipeline(data)\n data = collate([data], samples_per_gpu=1)\n if next(model.parameters()).is_cuda:\n # scatter to specified GPU\n data = scatter(data, [device])[0]\n else:\n for m in model.modules():\n assert not isinstance(\n m, RoIPool\n ), 'CPU inference with RoIPool is not supported currently.'\n # just get the actual data from DataContainer\n data['img_metas'] = data['img_metas'][0].data\n\n # forward the model\n with torch.no_grad():\n result = model(return_loss=False, rescale=True, **data)[0]\n return result\n\n\nasync def async_inference_detector(model, img):\n \"\"\"Async inference image(s) with the detector.\n\n Args:\n model (nn.Module): The loaded detector.\n img (str | ndarray): Either image files or loaded images.\n\n Returns:\n Awaitable detection results.\n \"\"\"\n cfg = model.cfg\n device = next(model.parameters()).device # model device\n # prepare data\n if isinstance(img, np.ndarray):\n # 
directly add img\n data = dict(img=img)\n cfg = cfg.copy()\n # set loading pipeline type\n cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam'\n else:\n # add information into dict\n data = dict(img_info=dict(filename=img), img_prefix=None)\n # build the data pipeline\n test_pipeline = Compose(cfg.data.test.pipeline)\n data = test_pipeline(data)\n data = scatter(collate([data], samples_per_gpu=1), [device])[0]\n\n # We don't restore `torch.is_grad_enabled()` value during concurrent\n # inference since execution can overlap\n torch.set_grad_enabled(False)\n result = await model.aforward_test(rescale=True, **data)\n return result\n\n\ndef show_result_pyplot(model,\n img,\n result,\n score_thr=0.3,\n fig_size=(15, 10),\n title='result',\n block=True):\n \"\"\"Visualize the detection results on the image.\n\n Args:\n model (nn.Module): The loaded detector.\n img (str or np.ndarray): Image filename or loaded image.\n result (tuple[list] or list): The detection result, can be either\n (bbox, segm) or just bbox.\n score_thr (float): The threshold to visualize the bboxes and masks.\n fig_size (tuple): Figure size of the pyplot figure.\n title (str): Title of the pyplot figure.\n block (bool): Whether to block GUI.\n \"\"\"\n if hasattr(model, 'module'):\n model = model.module\n img = model.show_result(img, result, score_thr=score_thr, show=False)\n plt.figure(figsize=fig_size)\n plt.imshow(mmcv.bgr2rgb(img))\n plt.title(title)\n plt.tight_layout()\n plt.show(block=block)\n", "path": "mmdet/apis/inference.py"}], "after_files": [{"content": "import warnings\n\nimport matplotlib.pyplot as plt\nimport mmcv\nimport numpy as np\nimport torch\nfrom mmcv.ops import RoIPool\nfrom mmcv.parallel import collate, scatter\nfrom mmcv.runner import load_checkpoint\n\nfrom mmdet.core import get_classes\nfrom mmdet.datasets import replace_ImageToTensor\nfrom mmdet.datasets.pipelines import Compose\nfrom mmdet.models import build_detector\n\n\ndef init_detector(config, checkpoint=None, device='cuda:0', cfg_options=None):\n \"\"\"Initialize a detector from config file.\n\n Args:\n config (str or :obj:`mmcv.Config`): Config file path or the config\n object.\n checkpoint (str, optional): Checkpoint path. 
If left as None, the model\n will not load any weights.\n cfg_options (dict): Options to override some settings in the used\n config.\n\n Returns:\n nn.Module: The constructed detector.\n \"\"\"\n if isinstance(config, str):\n config = mmcv.Config.fromfile(config)\n elif not isinstance(config, mmcv.Config):\n raise TypeError('config must be a filename or Config object, '\n f'but got {type(config)}')\n if cfg_options is not None:\n config.merge_from_dict(cfg_options)\n config.model.pretrained = None\n model = build_detector(config.model, test_cfg=config.test_cfg)\n if checkpoint is not None:\n map_loc = 'cpu' if device == 'cpu' else None\n checkpoint = load_checkpoint(model, checkpoint, map_location=map_loc)\n if 'CLASSES' in checkpoint['meta']:\n model.CLASSES = checkpoint['meta']['CLASSES']\n else:\n warnings.simplefilter('once')\n warnings.warn('Class names are not saved in the checkpoint\\'s '\n 'meta data, use COCO classes by default.')\n model.CLASSES = get_classes('coco')\n model.cfg = config # save the config in the model for convenience\n model.to(device)\n model.eval()\n return model\n\n\nclass LoadImage(object):\n \"\"\"A simple pipeline to load image.\"\"\"\n\n def __call__(self, results):\n \"\"\"Call function to load images into results.\n\n Args:\n results (dict): A result dict contains the file name\n of the image to be read.\n\n Returns:\n dict: ``results`` will be returned containing loaded image.\n \"\"\"\n if isinstance(results['img'], str):\n results['filename'] = results['img']\n results['ori_filename'] = results['img']\n else:\n results['filename'] = None\n results['ori_filename'] = None\n img = mmcv.imread(results['img'])\n results['img'] = img\n results['img_fields'] = ['img']\n results['img_shape'] = img.shape\n results['ori_shape'] = img.shape\n return results\n\n\ndef inference_detector(model, img):\n \"\"\"Inference image(s) with the detector.\n\n Args:\n model (nn.Module): The loaded detector.\n imgs (str/ndarray or list[str/ndarray]): Either image files or loaded\n images.\n\n Returns:\n If imgs is a str, a generator will be returned, otherwise return the\n detection results directly.\n \"\"\"\n cfg = model.cfg\n device = next(model.parameters()).device # model device\n # prepare data\n if isinstance(img, np.ndarray):\n # directly add img\n data = dict(img=img)\n cfg = cfg.copy()\n # set loading pipeline type\n cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam'\n else:\n # add information into dict\n data = dict(img_info=dict(filename=img), img_prefix=None)\n # build the data pipeline\n cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)\n test_pipeline = Compose(cfg.data.test.pipeline)\n data = test_pipeline(data)\n data = collate([data], samples_per_gpu=1)\n # just get the actual data from DataContainer\n data['img_metas'] = [img_metas.data[0] for img_metas in data['img_metas']]\n data['img'] = [img.data[0] for img in data['img']]\n if next(model.parameters()).is_cuda:\n # scatter to specified GPU\n data = scatter(data, [device])[0]\n else:\n for m in model.modules():\n assert not isinstance(\n m, RoIPool\n ), 'CPU inference with RoIPool is not supported currently.'\n\n # forward the model\n with torch.no_grad():\n result = model(return_loss=False, rescale=True, **data)[0]\n return result\n\n\nasync def async_inference_detector(model, img):\n \"\"\"Async inference image(s) with the detector.\n\n Args:\n model (nn.Module): The loaded detector.\n img (str | ndarray): Either image files or loaded images.\n\n Returns:\n Awaitable detection 
results.\n \"\"\"\n cfg = model.cfg\n device = next(model.parameters()).device # model device\n # prepare data\n if isinstance(img, np.ndarray):\n # directly add img\n data = dict(img=img)\n cfg = cfg.copy()\n # set loading pipeline type\n cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam'\n else:\n # add information into dict\n data = dict(img_info=dict(filename=img), img_prefix=None)\n # build the data pipeline\n test_pipeline = Compose(cfg.data.test.pipeline)\n data = test_pipeline(data)\n data = scatter(collate([data], samples_per_gpu=1), [device])[0]\n\n # We don't restore `torch.is_grad_enabled()` value during concurrent\n # inference since execution can overlap\n torch.set_grad_enabled(False)\n result = await model.aforward_test(rescale=True, **data)\n return result\n\n\ndef show_result_pyplot(model,\n img,\n result,\n score_thr=0.3,\n fig_size=(15, 10),\n title='result',\n block=True):\n \"\"\"Visualize the detection results on the image.\n\n Args:\n model (nn.Module): The loaded detector.\n img (str or np.ndarray): Image filename or loaded image.\n result (tuple[list] or list): The detection result, can be either\n (bbox, segm) or just bbox.\n score_thr (float): The threshold to visualize the bboxes and masks.\n fig_size (tuple): Figure size of the pyplot figure.\n title (str): Title of the pyplot figure.\n block (bool): Whether to block GUI.\n \"\"\"\n if hasattr(model, 'module'):\n model = model.module\n img = model.show_result(img, result, score_thr=score_thr, show=False)\n plt.figure(figsize=fig_size)\n plt.imshow(mmcv.bgr2rgb(img))\n plt.title(title)\n plt.tight_layout()\n plt.show(block=block)\n", "path": "mmdet/apis/inference.py"}]}
| 2,431 | 360 |
gh_patches_debug_6694
|
rasdani/github-patches
|
git_diff
|
oppia__oppia-8797
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Blank page loaded when navigated to feedback page from creator dashboard_Both mobile and browsers
**Describe the bug**
Blank page loaded when navigated to feedback page from creator dashboard
**To Reproduce**
Steps to reproduce the behavior:
1. Create an exploration as user1 and logout
2. Play through the course as user2 and submit a feedback
3. Login as user1 and navigate to creator dashboard
4. Click on the open feedback count (2 in this scenario as in image1)
**Observed behavior**
Blank page loaded when navigated to feedback page from creator dashboard
**Expected behavior**
Meaningful message / page should be loaded
**Screenshots**
Image1 (screenshot of the creator dashboard showing the open feedback count; image not available here)
Blank page (screenshot of the blank feedback page; image not available here)
**Desktop (please complete the following information; delete this section if the issue does not arise on desktop):**
OS: Chrome OS
Browser Chrome
Version 79.0.3945.119 (Official Build) (64-bit)
**Smartphone (please complete the following information; delete this section if the issue does not arise on smartphones):**
Device: [iPhoneXS]
OS: [ iOS13.3]
Browser [chrome]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/domain/config_domain.py`
Content:
```
1 # coding: utf-8
2 #
3 # Copyright 2014 The Oppia Authors. All Rights Reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS-IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 """Domain objects for configuration properties."""
18
19 from __future__ import absolute_import # pylint: disable=import-only-modules
20 from __future__ import unicode_literals # pylint: disable=import-only-modules
21
22 from core.domain import change_domain
23 from core.platform import models
24 import feconf
25 import python_utils
26 import schema_utils
27
28 (config_models,) = models.Registry.import_models([models.NAMES.config])
29 memcache_services = models.Registry.import_memcache_services()
30
31 CMD_CHANGE_PROPERTY_VALUE = 'change_property_value'
32
33 SET_OF_STRINGS_SCHEMA = {
34 'type': 'list',
35 'items': {
36 'type': 'unicode',
37 },
38 'validators': [{
39 'id': 'is_uniquified',
40 }],
41 }
42
43 SET_OF_CLASSROOM_DICTS_SCHEMA = {
44 'type': 'list',
45 'items': {
46 'type': 'dict',
47 'properties': [{
48 'name': 'name',
49 'schema': {
50 'type': 'unicode'
51 }
52 }, {
53 'name': 'topic_ids',
54 'schema': {
55 'type': 'list',
56 'items': {
57 'type': 'unicode',
58 },
59 'validators': [{
60 'id': 'is_uniquified',
61 }]
62 }
63 }]
64 }
65 }
66
67 VMID_SHARED_SECRET_KEY_SCHEMA = {
68 'type': 'list',
69 'items': {
70 'type': 'dict',
71 'properties': [{
72 'name': 'vm_id',
73 'schema': {
74 'type': 'unicode'
75 }
76 }, {
77 'name': 'shared_secret_key',
78 'schema': {
79 'type': 'unicode'
80 }
81 }]
82 }
83 }
84
85 BOOL_SCHEMA = {
86 'type': schema_utils.SCHEMA_TYPE_BOOL
87 }
88
89 UNICODE_SCHEMA = {
90 'type': schema_utils.SCHEMA_TYPE_UNICODE
91 }
92
93 FLOAT_SCHEMA = {
94 'type': schema_utils.SCHEMA_TYPE_FLOAT
95 }
96
97
98 class ConfigPropertyChange(change_domain.BaseChange):
99 """Domain object for changes made to a config property object.
100
101 The allowed commands, together with the attributes:
102 - 'change_property_value' (with new_value)
103 """
104
105 ALLOWED_COMMANDS = [{
106 'name': CMD_CHANGE_PROPERTY_VALUE,
107 'required_attribute_names': ['new_value'],
108 'optional_attribute_names': []
109 }]
110
111
112 class ConfigProperty(python_utils.OBJECT):
113 """A property with a name and a default value.
114
115 NOTE TO DEVELOPERS: These config properties are deprecated. Do not reuse
116 these names:
117 - about_page_youtube_video_id.
118 - admin_email_address.
119 - admin_ids.
120 - admin_usernames.
121 - allow_yaml_file_upload.
122 - banned_usernames.
123 - banner_alt_text.
124 - before_end_body_tag_hook.
125 - before_end_head_tag_hook.
126 - carousel_slides_config.
127 - collection_editor_whitelist.
128 - contact_email_address.
129 - contribute_gallery_page_announcement.
130 - default_twitter_share_message_editor.
131 - disabled_explorations.
132 - editor_page_announcement.
133 - editor_prerequisites_agreement.
134 - embedded_google_group_url.
135 - full_site_url.
136 - moderator_ids.
137 - moderator_request_forum_url.
138 - moderator_usernames.
139 - publicize_exploration_email_html_body.
140 - sharing_options.
141 - sharing_options_twitter_text.
142 - sidebar_menu_additional_links.
143 - site_forum_url.
144 - social_media_buttons.
145 - splash_page_exploration_id.
146 - splash_page_exploration_version.
147 - splash_page_youtube_video_id.
148 - ssl_challenge_responses.
149 - whitelisted_email_senders.
150 """
151
152 def __init__(self, name, schema, description, default_value):
153 if Registry.get_config_property(name):
154 raise Exception('Property with name %s already exists' % name)
155
156 self._name = name
157 self._schema = schema
158 self._description = description
159 self._default_value = schema_utils.normalize_against_schema(
160 default_value, self._schema)
161
162 Registry.init_config_property(self.name, self)
163
164 @property
165 def name(self):
166 """Returns the name of the configuration property."""
167
168 return self._name
169
170 @property
171 def schema(self):
172 """Returns the schema of the configuration property."""
173
174 return self._schema
175
176 @property
177 def description(self):
178 """Returns the description of the configuration property."""
179
180 return self._description
181
182 @property
183 def default_value(self):
184 """Returns the default value of the configuration property."""
185
186 return self._default_value
187
188 @property
189 def value(self):
190 """Get the latest value from memcache, datastore, or use default."""
191
192 memcached_items = memcache_services.get_multi([self.name])
193 if self.name in memcached_items:
194 return memcached_items[self.name]
195
196 datastore_item = config_models.ConfigPropertyModel.get(
197 self.name, strict=False)
198 if datastore_item is not None:
199 memcache_services.set_multi({
200 datastore_item.id: datastore_item.value})
201 return datastore_item.value
202
203 return self.default_value
204
205 def set_value(self, committer_id, raw_value):
206 """Sets the value of the property. In general, this should not be
207 called directly -- use config_services.set_property() instead.
208 """
209 value = self.normalize(raw_value)
210
211 # Set value in datastore.
212 model_instance = config_models.ConfigPropertyModel.get(
213 self.name, strict=False)
214 if model_instance is None:
215 model_instance = config_models.ConfigPropertyModel(
216 id=self.name)
217 model_instance.value = value
218 model_instance.commit(
219 committer_id, [{
220 'cmd': CMD_CHANGE_PROPERTY_VALUE,
221 'new_value': value
222 }])
223
224 # Set value in memcache.
225 memcache_services.set_multi({
226 model_instance.id: model_instance.value})
227
228 def normalize(self, value):
229 """Validates the given object using the schema and normalizes if
230 necessary.
231
232 Args:
233 value: The value of the configuration property.
234
235 Returns:
236 instance. The normalized object.
237 """
238 return schema_utils.normalize_against_schema(value, self._schema)
239
240
241 class Registry(python_utils.OBJECT):
242 """Registry of all configuration properties."""
243
244 # The keys of _config_registry are the property names, and the values are
245 # ConfigProperty instances.
246 _config_registry = {}
247
248 @classmethod
249 def init_config_property(cls, name, instance):
250 """Initializes _config_registry with keys as the property names and
251 values as instances of the specified property.
252
253 Args:
254 name: str. The name of the configuration property.
255 instance: *. The instance of the configuration property.
256 """
257 cls._config_registry[name] = instance
258
259 @classmethod
260 def get_config_property(cls, name):
261 """Returns the instance of the specified name of the configuration
262 property.
263
264 Args:
265 name: str. The name of the configuration property.
266
267 Returns:
268 instance. The instance of the specified configuration property.
269 """
270 return cls._config_registry.get(name)
271
272 @classmethod
273 def get_config_property_schemas(cls):
274 """Return a dict of editable config property schemas.
275
276 The keys of the dict are config property names. The values are dicts
277 with the following keys: schema, description, value.
278 """
279 schemas_dict = {}
280
281 for (property_name, instance) in cls._config_registry.items():
282 schemas_dict[property_name] = {
283 'schema': instance.schema,
284 'description': instance.description,
285 'value': instance.value
286 }
287
288 return schemas_dict
289
290 @classmethod
291 def get_all_config_property_names(cls):
292 """Return a list of all the config property names.
293
294 Returns:
295 list. The list of all config property names.
296 """
297 return list(cls._config_registry)
298
299
300 PROMO_BAR_ENABLED = ConfigProperty(
301 'promo_bar_enabled', BOOL_SCHEMA,
302 'Whether the promo bar should be enabled for all users', False)
303 PROMO_BAR_MESSAGE = ConfigProperty(
304 'promo_bar_message', UNICODE_SCHEMA,
305 'The message to show to all users if the promo bar is enabled', '')
306
307 VMID_SHARED_SECRET_KEY_MAPPING = ConfigProperty(
308 'vmid_shared_secret_key_mapping', VMID_SHARED_SECRET_KEY_SCHEMA,
309 'VMID and shared secret key corresponding to that VM',
310 [{
311 'vm_id': feconf.DEFAULT_VM_ID,
312 'shared_secret_key': feconf.DEFAULT_VM_SHARED_SECRET
313 }])
314
315 WHITELISTED_EXPLORATION_IDS_FOR_PLAYTHROUGHS = ConfigProperty(
316 'whitelisted_exploration_ids_for_playthroughs',
317 SET_OF_STRINGS_SCHEMA,
318 'The set of exploration IDs for recording playthrough issues', [
319 'umPkwp0L1M0-', 'MjZzEVOG47_1', '9trAQhj6uUC2', 'rfX8jNkPnA-1',
320 '0FBWxCE5egOw', '670bU6d9JGBh', 'aHikhPlxYgOH', '-tMgcP1i_4au',
321 'zW39GLG_BdN2', 'Xa3B_io-2WI5', '6Q6IyIDkjpYC', 'osw1m5Q3jK41'])
322
323 TOPIC_IDS_FOR_CLASSROOM_PAGES = ConfigProperty(
324 'topic_ids_for_classroom_pages', SET_OF_CLASSROOM_DICTS_SCHEMA,
325 'The set of topic IDs for each classroom page.', [{
326 'name': 'Math',
327 'topic_ids': []
328 }]
329 )
330
331 RECORD_PLAYTHROUGH_PROBABILITY = ConfigProperty(
332 'record_playthrough_probability', FLOAT_SCHEMA,
333 'The probability of recording playthroughs', 0.2)
334
335 IS_IMPROVEMENTS_TAB_ENABLED = ConfigProperty(
336 'is_improvements_tab_enabled', BOOL_SCHEMA,
337 'Exposes the Improvements Tab for creators in the exploration editor.',
338 True)
339
340 ALWAYS_ASK_LEARNERS_FOR_ANSWER_DETAILS = ConfigProperty(
341 'always_ask_learners_for_answer_details', BOOL_SCHEMA,
342 'Always ask learners for answer details. For testing -- do not use',
343 False)
344
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/core/domain/config_domain.py b/core/domain/config_domain.py
--- a/core/domain/config_domain.py
+++ b/core/domain/config_domain.py
@@ -335,7 +335,7 @@
IS_IMPROVEMENTS_TAB_ENABLED = ConfigProperty(
'is_improvements_tab_enabled', BOOL_SCHEMA,
'Exposes the Improvements Tab for creators in the exploration editor.',
- True)
+ False)
ALWAYS_ASK_LEARNERS_FOR_ANSWER_DETAILS = ConfigProperty(
'always_ask_learners_for_answer_details', BOOL_SCHEMA,
|
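Since `ConfigProperty.value` (see `config_domain.py` above) checks memcache, then the datastore, and only then falls back to the default, changing the default to `False` keeps the Improvements Tab hidden unless an admin explicitly enables it. A rough sketch of how the flag would be read, assuming the usual `core.domain` import path used elsewhere in Oppia:
```python
from core.domain import config_domain

# Resolution order in ConfigProperty.value: memcache -> datastore -> default.
# After the patch the default is False, so a fresh deployment hides the tab.
improvements_tab_enabled = config_domain.IS_IMPROVEMENTS_TAB_ENABLED.value
print(improvements_tab_enabled)  # False unless an admin has changed it
```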
{"golden_diff": "diff --git a/core/domain/config_domain.py b/core/domain/config_domain.py\n--- a/core/domain/config_domain.py\n+++ b/core/domain/config_domain.py\n@@ -335,7 +335,7 @@\n IS_IMPROVEMENTS_TAB_ENABLED = ConfigProperty(\n 'is_improvements_tab_enabled', BOOL_SCHEMA,\n 'Exposes the Improvements Tab for creators in the exploration editor.',\n- True)\n+ False)\n \n ALWAYS_ASK_LEARNERS_FOR_ANSWER_DETAILS = ConfigProperty(\n 'always_ask_learners_for_answer_details', BOOL_SCHEMA,\n", "issue": "Blank page loaded when navigated to feedback page from creator dashboard_Both mobile and browsers\n**Describe the bug**\r\nBlank page loaded when navigated to feedback page from creator dashboard\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n 1. Create an exploration as user1 and logout\r\n 2. Play through the course as user2 and submit a feedback\r\n 3. Login as user1 and navigate to creator dashboard\r\n 4. Click on the open feedback count (2 in this scenario as in image1)\r\n\r\n**Observed behavior**\r\nBlank page loaded when navigated to feedback page from creator dashboard\r\n\r\n**Expected behavior**\r\nMeaningful message / page should be loaded\r\n\r\n**Screenshots**\r\nImage1\r\n\r\n\r\nBlank page\r\n\r\n\r\n\r\n**Desktop (please complete the following information; delete this section if the issue does not arise on desktop):**\r\nOS: Chrome OS\r\nBrowser Chrome\r\nVersion 79.0.3945.119 (Official Build) (64-bit)\r\n\r\n**Smartphone (please complete the following information; delete this section if the issue does not arise on smartphones):**\r\n Device: [iPhoneXS]\r\nOS: [ iOS13.3]\r\nBrowser [chrome]\n", "before_files": [{"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Domain objects for configuration properties.\"\"\"\n\nfrom __future__ import absolute_import # pylint: disable=import-only-modules\nfrom __future__ import unicode_literals # pylint: disable=import-only-modules\n\nfrom core.domain import change_domain\nfrom core.platform import models\nimport feconf\nimport python_utils\nimport schema_utils\n\n(config_models,) = models.Registry.import_models([models.NAMES.config])\nmemcache_services = models.Registry.import_memcache_services()\n\nCMD_CHANGE_PROPERTY_VALUE = 'change_property_value'\n\nSET_OF_STRINGS_SCHEMA = {\n 'type': 'list',\n 'items': {\n 'type': 'unicode',\n },\n 'validators': [{\n 'id': 'is_uniquified',\n }],\n}\n\nSET_OF_CLASSROOM_DICTS_SCHEMA = {\n 'type': 'list',\n 'items': {\n 'type': 'dict',\n 'properties': [{\n 'name': 'name',\n 'schema': {\n 'type': 'unicode'\n }\n }, {\n 'name': 'topic_ids',\n 'schema': {\n 'type': 'list',\n 'items': {\n 'type': 'unicode',\n },\n 'validators': [{\n 'id': 'is_uniquified',\n }]\n }\n }]\n }\n}\n\nVMID_SHARED_SECRET_KEY_SCHEMA = {\n 'type': 'list',\n 'items': {\n 'type': 'dict',\n 'properties': [{\n 'name': 'vm_id',\n 'schema': {\n 'type': 'unicode'\n }\n }, {\n 'name': 'shared_secret_key',\n 'schema': {\n 'type': 
'unicode'\n }\n }]\n }\n}\n\nBOOL_SCHEMA = {\n 'type': schema_utils.SCHEMA_TYPE_BOOL\n}\n\nUNICODE_SCHEMA = {\n 'type': schema_utils.SCHEMA_TYPE_UNICODE\n}\n\nFLOAT_SCHEMA = {\n 'type': schema_utils.SCHEMA_TYPE_FLOAT\n}\n\n\nclass ConfigPropertyChange(change_domain.BaseChange):\n \"\"\"Domain object for changes made to a config property object.\n\n The allowed commands, together with the attributes:\n - 'change_property_value' (with new_value)\n \"\"\"\n\n ALLOWED_COMMANDS = [{\n 'name': CMD_CHANGE_PROPERTY_VALUE,\n 'required_attribute_names': ['new_value'],\n 'optional_attribute_names': []\n }]\n\n\nclass ConfigProperty(python_utils.OBJECT):\n \"\"\"A property with a name and a default value.\n\n NOTE TO DEVELOPERS: These config properties are deprecated. Do not reuse\n these names:\n - about_page_youtube_video_id.\n - admin_email_address.\n - admin_ids.\n - admin_usernames.\n - allow_yaml_file_upload.\n - banned_usernames.\n - banner_alt_text.\n - before_end_body_tag_hook.\n - before_end_head_tag_hook.\n - carousel_slides_config.\n - collection_editor_whitelist.\n - contact_email_address.\n - contribute_gallery_page_announcement.\n - default_twitter_share_message_editor.\n - disabled_explorations.\n - editor_page_announcement.\n - editor_prerequisites_agreement.\n - embedded_google_group_url.\n - full_site_url.\n - moderator_ids.\n - moderator_request_forum_url.\n - moderator_usernames.\n - publicize_exploration_email_html_body.\n - sharing_options.\n - sharing_options_twitter_text.\n - sidebar_menu_additional_links.\n - site_forum_url.\n - social_media_buttons.\n - splash_page_exploration_id.\n - splash_page_exploration_version.\n - splash_page_youtube_video_id.\n - ssl_challenge_responses.\n - whitelisted_email_senders.\n \"\"\"\n\n def __init__(self, name, schema, description, default_value):\n if Registry.get_config_property(name):\n raise Exception('Property with name %s already exists' % name)\n\n self._name = name\n self._schema = schema\n self._description = description\n self._default_value = schema_utils.normalize_against_schema(\n default_value, self._schema)\n\n Registry.init_config_property(self.name, self)\n\n @property\n def name(self):\n \"\"\"Returns the name of the configuration property.\"\"\"\n\n return self._name\n\n @property\n def schema(self):\n \"\"\"Returns the schema of the configuration property.\"\"\"\n\n return self._schema\n\n @property\n def description(self):\n \"\"\"Returns the description of the configuration property.\"\"\"\n\n return self._description\n\n @property\n def default_value(self):\n \"\"\"Returns the default value of the configuration property.\"\"\"\n\n return self._default_value\n\n @property\n def value(self):\n \"\"\"Get the latest value from memcache, datastore, or use default.\"\"\"\n\n memcached_items = memcache_services.get_multi([self.name])\n if self.name in memcached_items:\n return memcached_items[self.name]\n\n datastore_item = config_models.ConfigPropertyModel.get(\n self.name, strict=False)\n if datastore_item is not None:\n memcache_services.set_multi({\n datastore_item.id: datastore_item.value})\n return datastore_item.value\n\n return self.default_value\n\n def set_value(self, committer_id, raw_value):\n \"\"\"Sets the value of the property. 
In general, this should not be\n called directly -- use config_services.set_property() instead.\n \"\"\"\n value = self.normalize(raw_value)\n\n # Set value in datastore.\n model_instance = config_models.ConfigPropertyModel.get(\n self.name, strict=False)\n if model_instance is None:\n model_instance = config_models.ConfigPropertyModel(\n id=self.name)\n model_instance.value = value\n model_instance.commit(\n committer_id, [{\n 'cmd': CMD_CHANGE_PROPERTY_VALUE,\n 'new_value': value\n }])\n\n # Set value in memcache.\n memcache_services.set_multi({\n model_instance.id: model_instance.value})\n\n def normalize(self, value):\n \"\"\"Validates the given object using the schema and normalizes if\n necessary.\n\n Args:\n value: The value of the configuration property.\n\n Returns:\n instance. The normalized object.\n \"\"\"\n return schema_utils.normalize_against_schema(value, self._schema)\n\n\nclass Registry(python_utils.OBJECT):\n \"\"\"Registry of all configuration properties.\"\"\"\n\n # The keys of _config_registry are the property names, and the values are\n # ConfigProperty instances.\n _config_registry = {}\n\n @classmethod\n def init_config_property(cls, name, instance):\n \"\"\"Initializes _config_registry with keys as the property names and\n values as instances of the specified property.\n\n Args:\n name: str. The name of the configuration property.\n instance: *. The instance of the configuration property.\n \"\"\"\n cls._config_registry[name] = instance\n\n @classmethod\n def get_config_property(cls, name):\n \"\"\"Returns the instance of the specified name of the configuration\n property.\n\n Args:\n name: str. The name of the configuration property.\n\n Returns:\n instance. The instance of the specified configuration property.\n \"\"\"\n return cls._config_registry.get(name)\n\n @classmethod\n def get_config_property_schemas(cls):\n \"\"\"Return a dict of editable config property schemas.\n\n The keys of the dict are config property names. The values are dicts\n with the following keys: schema, description, value.\n \"\"\"\n schemas_dict = {}\n\n for (property_name, instance) in cls._config_registry.items():\n schemas_dict[property_name] = {\n 'schema': instance.schema,\n 'description': instance.description,\n 'value': instance.value\n }\n\n return schemas_dict\n\n @classmethod\n def get_all_config_property_names(cls):\n \"\"\"Return a list of all the config property names.\n\n Returns:\n list. 
The list of all config property names.\n \"\"\"\n return list(cls._config_registry)\n\n\nPROMO_BAR_ENABLED = ConfigProperty(\n 'promo_bar_enabled', BOOL_SCHEMA,\n 'Whether the promo bar should be enabled for all users', False)\nPROMO_BAR_MESSAGE = ConfigProperty(\n 'promo_bar_message', UNICODE_SCHEMA,\n 'The message to show to all users if the promo bar is enabled', '')\n\nVMID_SHARED_SECRET_KEY_MAPPING = ConfigProperty(\n 'vmid_shared_secret_key_mapping', VMID_SHARED_SECRET_KEY_SCHEMA,\n 'VMID and shared secret key corresponding to that VM',\n [{\n 'vm_id': feconf.DEFAULT_VM_ID,\n 'shared_secret_key': feconf.DEFAULT_VM_SHARED_SECRET\n }])\n\nWHITELISTED_EXPLORATION_IDS_FOR_PLAYTHROUGHS = ConfigProperty(\n 'whitelisted_exploration_ids_for_playthroughs',\n SET_OF_STRINGS_SCHEMA,\n 'The set of exploration IDs for recording playthrough issues', [\n 'umPkwp0L1M0-', 'MjZzEVOG47_1', '9trAQhj6uUC2', 'rfX8jNkPnA-1',\n '0FBWxCE5egOw', '670bU6d9JGBh', 'aHikhPlxYgOH', '-tMgcP1i_4au',\n 'zW39GLG_BdN2', 'Xa3B_io-2WI5', '6Q6IyIDkjpYC', 'osw1m5Q3jK41'])\n\nTOPIC_IDS_FOR_CLASSROOM_PAGES = ConfigProperty(\n 'topic_ids_for_classroom_pages', SET_OF_CLASSROOM_DICTS_SCHEMA,\n 'The set of topic IDs for each classroom page.', [{\n 'name': 'Math',\n 'topic_ids': []\n }]\n)\n\nRECORD_PLAYTHROUGH_PROBABILITY = ConfigProperty(\n 'record_playthrough_probability', FLOAT_SCHEMA,\n 'The probability of recording playthroughs', 0.2)\n\nIS_IMPROVEMENTS_TAB_ENABLED = ConfigProperty(\n 'is_improvements_tab_enabled', BOOL_SCHEMA,\n 'Exposes the Improvements Tab for creators in the exploration editor.',\n True)\n\nALWAYS_ASK_LEARNERS_FOR_ANSWER_DETAILS = ConfigProperty(\n 'always_ask_learners_for_answer_details', BOOL_SCHEMA,\n 'Always ask learners for answer details. For testing -- do not use',\n False)\n", "path": "core/domain/config_domain.py"}], "after_files": [{"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Domain objects for configuration properties.\"\"\"\n\nfrom __future__ import absolute_import # pylint: disable=import-only-modules\nfrom __future__ import unicode_literals # pylint: disable=import-only-modules\n\nfrom core.domain import change_domain\nfrom core.platform import models\nimport feconf\nimport python_utils\nimport schema_utils\n\n(config_models,) = models.Registry.import_models([models.NAMES.config])\nmemcache_services = models.Registry.import_memcache_services()\n\nCMD_CHANGE_PROPERTY_VALUE = 'change_property_value'\n\nSET_OF_STRINGS_SCHEMA = {\n 'type': 'list',\n 'items': {\n 'type': 'unicode',\n },\n 'validators': [{\n 'id': 'is_uniquified',\n }],\n}\n\nSET_OF_CLASSROOM_DICTS_SCHEMA = {\n 'type': 'list',\n 'items': {\n 'type': 'dict',\n 'properties': [{\n 'name': 'name',\n 'schema': {\n 'type': 'unicode'\n }\n }, {\n 'name': 'topic_ids',\n 'schema': {\n 'type': 'list',\n 'items': {\n 'type': 'unicode',\n },\n 'validators': [{\n 'id': 'is_uniquified',\n }]\n }\n }]\n }\n}\n\nVMID_SHARED_SECRET_KEY_SCHEMA = {\n 'type': 'list',\n 'items': {\n 'type': 'dict',\n 'properties': [{\n 'name': 'vm_id',\n 'schema': {\n 'type': 'unicode'\n }\n }, {\n 'name': 'shared_secret_key',\n 'schema': {\n 'type': 'unicode'\n }\n }]\n }\n}\n\nBOOL_SCHEMA = {\n 'type': schema_utils.SCHEMA_TYPE_BOOL\n}\n\nUNICODE_SCHEMA = {\n 'type': schema_utils.SCHEMA_TYPE_UNICODE\n}\n\nFLOAT_SCHEMA = {\n 'type': schema_utils.SCHEMA_TYPE_FLOAT\n}\n\n\nclass ConfigPropertyChange(change_domain.BaseChange):\n \"\"\"Domain object for changes made to a config property object.\n\n The allowed commands, together with the attributes:\n - 'change_property_value' (with new_value)\n \"\"\"\n\n ALLOWED_COMMANDS = [{\n 'name': CMD_CHANGE_PROPERTY_VALUE,\n 'required_attribute_names': ['new_value'],\n 'optional_attribute_names': []\n }]\n\n\nclass ConfigProperty(python_utils.OBJECT):\n \"\"\"A property with a name and a default value.\n\n NOTE TO DEVELOPERS: These config properties are deprecated. 
Do not reuse\n these names:\n - about_page_youtube_video_id.\n - admin_email_address.\n - admin_ids.\n - admin_usernames.\n - allow_yaml_file_upload.\n - banned_usernames.\n - banner_alt_text.\n - before_end_body_tag_hook.\n - before_end_head_tag_hook.\n - carousel_slides_config.\n - collection_editor_whitelist.\n - contact_email_address.\n - contribute_gallery_page_announcement.\n - default_twitter_share_message_editor.\n - disabled_explorations.\n - editor_page_announcement.\n - editor_prerequisites_agreement.\n - embedded_google_group_url.\n - full_site_url.\n - moderator_ids.\n - moderator_request_forum_url.\n - moderator_usernames.\n - publicize_exploration_email_html_body.\n - sharing_options.\n - sharing_options_twitter_text.\n - sidebar_menu_additional_links.\n - site_forum_url.\n - social_media_buttons.\n - splash_page_exploration_id.\n - splash_page_exploration_version.\n - splash_page_youtube_video_id.\n - ssl_challenge_responses.\n - whitelisted_email_senders.\n \"\"\"\n\n def __init__(self, name, schema, description, default_value):\n if Registry.get_config_property(name):\n raise Exception('Property with name %s already exists' % name)\n\n self._name = name\n self._schema = schema\n self._description = description\n self._default_value = schema_utils.normalize_against_schema(\n default_value, self._schema)\n\n Registry.init_config_property(self.name, self)\n\n @property\n def name(self):\n \"\"\"Returns the name of the configuration property.\"\"\"\n\n return self._name\n\n @property\n def schema(self):\n \"\"\"Returns the schema of the configuration property.\"\"\"\n\n return self._schema\n\n @property\n def description(self):\n \"\"\"Returns the description of the configuration property.\"\"\"\n\n return self._description\n\n @property\n def default_value(self):\n \"\"\"Returns the default value of the configuration property.\"\"\"\n\n return self._default_value\n\n @property\n def value(self):\n \"\"\"Get the latest value from memcache, datastore, or use default.\"\"\"\n\n memcached_items = memcache_services.get_multi([self.name])\n if self.name in memcached_items:\n return memcached_items[self.name]\n\n datastore_item = config_models.ConfigPropertyModel.get(\n self.name, strict=False)\n if datastore_item is not None:\n memcache_services.set_multi({\n datastore_item.id: datastore_item.value})\n return datastore_item.value\n\n return self.default_value\n\n def set_value(self, committer_id, raw_value):\n \"\"\"Sets the value of the property. In general, this should not be\n called directly -- use config_services.set_property() instead.\n \"\"\"\n value = self.normalize(raw_value)\n\n # Set value in datastore.\n model_instance = config_models.ConfigPropertyModel.get(\n self.name, strict=False)\n if model_instance is None:\n model_instance = config_models.ConfigPropertyModel(\n id=self.name)\n model_instance.value = value\n model_instance.commit(\n committer_id, [{\n 'cmd': CMD_CHANGE_PROPERTY_VALUE,\n 'new_value': value\n }])\n\n # Set value in memcache.\n memcache_services.set_multi({\n model_instance.id: model_instance.value})\n\n def normalize(self, value):\n \"\"\"Validates the given object using the schema and normalizes if\n necessary.\n\n Args:\n value: The value of the configuration property.\n\n Returns:\n instance. 
The normalized object.\n \"\"\"\n return schema_utils.normalize_against_schema(value, self._schema)\n\n\nclass Registry(python_utils.OBJECT):\n \"\"\"Registry of all configuration properties.\"\"\"\n\n # The keys of _config_registry are the property names, and the values are\n # ConfigProperty instances.\n _config_registry = {}\n\n @classmethod\n def init_config_property(cls, name, instance):\n \"\"\"Initializes _config_registry with keys as the property names and\n values as instances of the specified property.\n\n Args:\n name: str. The name of the configuration property.\n instance: *. The instance of the configuration property.\n \"\"\"\n cls._config_registry[name] = instance\n\n @classmethod\n def get_config_property(cls, name):\n \"\"\"Returns the instance of the specified name of the configuration\n property.\n\n Args:\n name: str. The name of the configuration property.\n\n Returns:\n instance. The instance of the specified configuration property.\n \"\"\"\n return cls._config_registry.get(name)\n\n @classmethod\n def get_config_property_schemas(cls):\n \"\"\"Return a dict of editable config property schemas.\n\n The keys of the dict are config property names. The values are dicts\n with the following keys: schema, description, value.\n \"\"\"\n schemas_dict = {}\n\n for (property_name, instance) in cls._config_registry.items():\n schemas_dict[property_name] = {\n 'schema': instance.schema,\n 'description': instance.description,\n 'value': instance.value\n }\n\n return schemas_dict\n\n @classmethod\n def get_all_config_property_names(cls):\n \"\"\"Return a list of all the config property names.\n\n Returns:\n list. The list of all config property names.\n \"\"\"\n return list(cls._config_registry)\n\n\nPROMO_BAR_ENABLED = ConfigProperty(\n 'promo_bar_enabled', BOOL_SCHEMA,\n 'Whether the promo bar should be enabled for all users', False)\nPROMO_BAR_MESSAGE = ConfigProperty(\n 'promo_bar_message', UNICODE_SCHEMA,\n 'The message to show to all users if the promo bar is enabled', '')\n\nVMID_SHARED_SECRET_KEY_MAPPING = ConfigProperty(\n 'vmid_shared_secret_key_mapping', VMID_SHARED_SECRET_KEY_SCHEMA,\n 'VMID and shared secret key corresponding to that VM',\n [{\n 'vm_id': feconf.DEFAULT_VM_ID,\n 'shared_secret_key': feconf.DEFAULT_VM_SHARED_SECRET\n }])\n\nWHITELISTED_EXPLORATION_IDS_FOR_PLAYTHROUGHS = ConfigProperty(\n 'whitelisted_exploration_ids_for_playthroughs',\n SET_OF_STRINGS_SCHEMA,\n 'The set of exploration IDs for recording playthrough issues', [\n 'umPkwp0L1M0-', 'MjZzEVOG47_1', '9trAQhj6uUC2', 'rfX8jNkPnA-1',\n '0FBWxCE5egOw', '670bU6d9JGBh', 'aHikhPlxYgOH', '-tMgcP1i_4au',\n 'zW39GLG_BdN2', 'Xa3B_io-2WI5', '6Q6IyIDkjpYC', 'osw1m5Q3jK41'])\n\nTOPIC_IDS_FOR_CLASSROOM_PAGES = ConfigProperty(\n 'topic_ids_for_classroom_pages', SET_OF_CLASSROOM_DICTS_SCHEMA,\n 'The set of topic IDs for each classroom page.', [{\n 'name': 'Math',\n 'topic_ids': []\n }]\n)\n\nRECORD_PLAYTHROUGH_PROBABILITY = ConfigProperty(\n 'record_playthrough_probability', FLOAT_SCHEMA,\n 'The probability of recording playthroughs', 0.2)\n\nIS_IMPROVEMENTS_TAB_ENABLED = ConfigProperty(\n 'is_improvements_tab_enabled', BOOL_SCHEMA,\n 'Exposes the Improvements Tab for creators in the exploration editor.',\n False)\n\nALWAYS_ASK_LEARNERS_FOR_ANSWER_DETAILS = ConfigProperty(\n 'always_ask_learners_for_answer_details', BOOL_SCHEMA,\n 'Always ask learners for answer details. For testing -- do not use',\n False)\n", "path": "core/domain/config_domain.py"}]}
| 3,900 | 122 |
gh_patches_debug_4143
|
rasdani/github-patches
|
git_diff
|
cocotb__cocotb-3566
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NVC: Remove experimental status
After the current pending PRs we can move the next release of NVC out of the experimental CI group and update the docs.
- [x] #3553 / #3559
- [x] #3554
- [x] NVC release (https://github.com/nickg/nvc/releases/tag/r1.11.0)
- [x] Update docs #3566
--- END ISSUE ---
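For orientation only (an editorial sketch, not part of the issue or of the files below): promoting NVC out of the experimental CI group comes down to one entry in the environment matrix produced by the generator script listed below. Assuming the pinned release named in the checklist (r1.11.0), the promoted entry would look roughly like this:
```python
# Illustrative shape of the promoted NVC test environment.
# The authoritative entry lives in .github/generate-envs.py (listed below);
# the version pin is taken from the r1.11.0 release named in the checklist.
NVC_CI_ENV = {
    "lang": "vhdl",
    "sim": "nvc",
    "sim-version": "r1.11.0",  # pinned release rather than "master"
    "os": "ubuntu-latest",
    "python-version": "3.8",
    "group": "ci",             # promoted out of "experimental"
}
```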
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `.github/generate-envs.py`
Content:
```
1 #!/usr/bin/env python3
2 # Copyright cocotb contributors
3 # Licensed under the Revised BSD License, see LICENSE for details.
4 # SPDX-License-Identifier: BSD-3-Clause
5
6 """Get a list test environments."""
7
8 import argparse
9 import json
10 import sys
11
12 ENVS = [
13 # Test different Python versions with package managed Icarus on Ubuntu
14 {
15 "lang": "verilog",
16 "sim": "icarus",
17 "sim-version": "apt",
18 # lowest version according to https://raw.githubusercontent.com/actions/python-versions/main/versions-manifest.json
19 "os": "ubuntu-20.04",
20 "python-version": "3.6.7",
21 "group": "ci",
22 },
23 {
24 "lang": "verilog",
25 "sim": "icarus",
26 "sim-version": "apt",
27 # lowest version according to https://raw.githubusercontent.com/actions/python-versions/main/versions-manifest.json
28 "os": "ubuntu-20.04",
29 "python-version": "3.7.1",
30 "group": "ci",
31 },
32 {
33 "lang": "verilog",
34 "sim": "icarus",
35 "sim-version": "apt",
36 "os": "ubuntu-20.04",
37 "python-version": "3.8",
38 "group": "ci",
39 },
40 {
41 "lang": "verilog",
42 "sim": "icarus",
43 "sim-version": "apt",
44 "os": "ubuntu-20.04",
45 "python-version": "3.9",
46 "group": "ci",
47 },
48 {
49 "lang": "verilog",
50 "sim": "icarus",
51 "sim-version": "apt",
52 "os": "ubuntu-20.04",
53 "python-version": "3.10",
54 "group": "ci",
55 },
56 {
57 "lang": "verilog",
58 "sim": "icarus",
59 "sim-version": "apt",
60 "os": "ubuntu-20.04",
61 "python-version": "3.11",
62 "group": "ci",
63 },
64 {
65 "lang": "verilog",
66 "sim": "icarus",
67 "sim-version": "apt",
68 "os": "ubuntu-20.04",
69 "python-version": "3.12",
70 "group": "ci",
71 },
72 # A single test for the upcoming Python version.
73 {
74 "lang": "verilog",
75 "sim": "icarus",
76 "sim-version": "apt",
77 "os": "ubuntu-20.04",
78 "python-version": "3.13.0-alpha - 3.13.0",
79 "group": "experimental",
80 },
81 # Test Icarus on Ubuntu
82 {
83 "lang": "verilog",
84 "sim": "icarus",
85 "sim-version": "master",
86 "os": "ubuntu-20.04",
87 "python-version": "3.8",
88 "group": "experimental",
89 },
90 {
91 "lang": "verilog",
92 "sim": "icarus",
93 "sim-version": "v12_0", # The latest release version.
94 "os": "ubuntu-20.04",
95 "python-version": "3.8",
96 "group": "experimental",
97 },
98 # Test GHDL on Ubuntu
99 {
100 "lang": "vhdl",
101 "sim": "ghdl",
102 "sim-version": "v2.0.0", # GHDL 2.0 is the minimum supported version.
103 "os": "ubuntu-latest",
104 "python-version": "3.8",
105 "group": "ci",
106 },
107 {
108 "lang": "vhdl",
109 "sim": "ghdl",
110 "sim-version": "master",
111 "os": "ubuntu-latest",
112 "python-version": "3.8",
113 "group": "experimental",
114 },
115 # Test NVC on Ubuntu
116 {
117 "lang": "vhdl",
118 "sim": "nvc",
119 "sim-version": "master", # Only master supported for now
120 "os": "ubuntu-latest",
121 "python-version": "3.8",
122 "group": "experimental",
123 },
124 # Test Verilator on Ubuntu
125 {
126 "lang": "verilog",
127 "sim": "verilator",
128 "sim-version": "v4.106",
129 "os": "ubuntu-20.04",
130 "python-version": "3.8",
131 # Various cocotb tests are known to fail with Verilator 4.106.
132 # Newer versions of Verilator are not working at all.
133 # See also https://github.com/cocotb/cocotb/issues/2300
134 "group": "experimental",
135 },
136 {
137 "lang": "verilog",
138 "sim": "verilator",
139 "sim-version": "master",
140 "os": "ubuntu-20.04",
141 "python-version": "3.8",
142 # Tests are currently not expected to work at all.
143 # See also https://github.com/cocotb/cocotb/issues/2300
144 "group": "experimental",
145 },
146 # Test other OSes
147 # Icarus homebrew
148 {
149 "lang": "verilog",
150 "sim": "icarus",
151 "sim-version": "homebrew-stable",
152 "os": "macos-11",
153 "python-version": "3.8",
154 "group": "ci",
155 },
156 # Icarus homebrew (HEAD/master)
157 {
158 "lang": "verilog",
159 "sim": "icarus",
160 "sim-version": "homebrew-HEAD",
161 "os": "macos-11",
162 "python-version": "3.8",
163 "group": "experimental",
164 },
165 # Icarus windows from source
166 {
167 "lang": "verilog",
168 "sim": "icarus",
169 "sim-version": "v12_0",
170 "os": "windows-latest",
171 "python-version": "3.8",
172 "toolchain": "mingw",
173 "extra_name": "mingw | ",
174 # mingw tests fail silently currently due to test harness limitations.
175 "group": "experimental",
176 },
177 # use msvc instead of mingw
178 {
179 "lang": "verilog",
180 "sim": "icarus",
181 "sim-version": "v12_0",
182 "os": "windows-latest",
183 "python-version": "3.11",
184 "toolchain": "msvc",
185 "extra_name": "msvc | ",
186 "group": "ci",
187 },
188 # Other
189 # use clang instead of gcc
190 {
191 "lang": "verilog",
192 "sim": "icarus",
193 "sim-version": "v12_0",
194 "os": "ubuntu-20.04",
195 "python-version": "3.8",
196 "cxx": "clang++",
197 "cc": "clang",
198 "extra_name": "clang | ",
199 "group": "ci",
200 },
201 # Test Siemens Questa on Ubuntu
202 {
203 "lang": "verilog",
204 "sim": "questa",
205 "sim-version": "siemens/questa/2023.2",
206 "os": "ubuntu-20.04",
207 "self-hosted": True,
208 "python-version": "3.8",
209 "group": "ci",
210 },
211 {
212 "lang": "vhdl and fli",
213 "sim": "questa",
214 "sim-version": "siemens/questa/2023.2",
215 "os": "ubuntu-20.04",
216 "self-hosted": True,
217 "python-version": "3.8",
218 "group": "ci",
219 },
220 {
221 "lang": "vhdl and vhpi",
222 "sim": "questa",
223 "sim-version": "siemens/questa/2023.2",
224 "os": "ubuntu-20.04",
225 "self-hosted": True,
226 "python-version": "3.8",
227 "group": "ci",
228 },
229 # Test Aldec Riviera-PRO on Ubuntu
230 {
231 "lang": "verilog",
232 "sim": "riviera",
233 "sim-version": "aldec/rivierapro/2022.04",
234 "os": "ubuntu-20.04",
235 "self-hosted": True,
236 "python-version": "3.8",
237 "group": "ci",
238 },
239 {
240 "lang": "vhdl",
241 "sim": "riviera",
242 "sim-version": "aldec/rivierapro/2022.04",
243 "os": "ubuntu-20.04",
244 "self-hosted": True,
245 "python-version": "3.8",
246 "group": "ci",
247 },
248 # Test Cadence Xcelium on Ubuntu
249 {
250 "lang": "verilog",
251 "sim": "xcelium",
252 "sim-version": "cadence/xcelium/2303",
253 "os": "ubuntu-20.04",
254 "self-hosted": True,
255 "python-version": "3.8",
256 "group": "ci",
257 },
258 # Xcelium VHDL (VHPI) is not yet supported.
259 # {
260 # "lang": "vhdl",
261 # "sim": "xcelium",
262 # "sim-version": "cadence/xcelium/2303",
263 # "os": "ubuntu-20.04",
264 # "self-hosted": True,
265 # "python-version": "3.8",
266 # "group": "ci",
267 # },
268 ]
269
270
271 def main() -> int:
272 parser = argparse.ArgumentParser(description=__doc__)
273 parser.add_argument("--group")
274 parser.add_argument("--output-format", choices=("gha", "json"), default="json")
275 parser.add_argument(
276 "--gha-output-file",
277 type=argparse.FileType("a", encoding="utf-8"),
278 help="The $GITHUB_OUTPUT file.",
279 )
280
281 args = parser.parse_args()
282
283 if args.group is not None and args.group != "":
284 selected_envs = [t for t in ENVS if "group" in t and t["group"] == args.group]
285 else:
286 # Return all tasks if no group is selected.
287 selected_envs = ENVS
288
289 # The "runs-on" job attribute is a string if we're using the GitHub-provided
290 # hosted runners, or an array with special keys if we're using self-hosted
291 # runners.
292 for env in selected_envs:
293 if "self-hosted" in env and env["self-hosted"] and "runs-on" not in env:
294 env["runs-on"] = ["self-hosted", "cocotb-private", env["os"]]
295 else:
296 env["runs-on"] = env["os"]
297
298 if args.output_format == "gha":
299 # Output for GitHub Actions (GHA). Appends the configuration to
300 # the file named in the "--gha-output-file" argument.
301
302 assert args.gha_output_file is not None
303
304 # The generated JSON output may not contain newlines to be parsed by GHA
305 print(f"envs={json.dumps(selected_envs)}", file=args.gha_output_file)
306
307 # Print the the selected environments for easier debugging.
308 print("Generated the following test configurations:")
309 print(json.dumps(selected_envs, indent=2))
310 elif args.output_format == "json":
311 print(json.dumps(selected_envs, indent=2))
312 else:
313 assert False
314
315 return 0
316
317
318 if __name__ == "__main__":
319 sys.exit(main())
320
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/.github/generate-envs.py b/.github/generate-envs.py
--- a/.github/generate-envs.py
+++ b/.github/generate-envs.py
@@ -116,10 +116,10 @@
{
"lang": "vhdl",
"sim": "nvc",
- "sim-version": "master", # Only master supported for now
+ "sim-version": "r1.11.0",
"os": "ubuntu-latest",
"python-version": "3.8",
- "group": "experimental",
+ "group": "ci",
},
# Test Verilator on Ubuntu
{
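A quick way to sanity-check the promoted entry after applying the patch above — illustrative only, assuming it is run from the repository root so that `.github/generate-envs.py` resolves; the filtering mirrors what the script itself does for `--group`:
```python
# Editorial sketch, not part of the patch: load the generator by file path
# (the filename contains a hyphen, so a plain import would not work) and
# confirm every NVC entry now lands in the "ci" group.
import importlib.util

spec = importlib.util.spec_from_file_location(
    "generate_envs", ".github/generate-envs.py")
generate_envs = importlib.util.module_from_spec(spec)
spec.loader.exec_module(generate_envs)

nvc_envs = [env for env in generate_envs.ENVS if env["sim"] == "nvc"]
assert nvc_envs and all(env["group"] == "ci" for env in nvc_envs), nvc_envs
print(nvc_envs)
```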
|
{"golden_diff": "diff --git a/.github/generate-envs.py b/.github/generate-envs.py\n--- a/.github/generate-envs.py\n+++ b/.github/generate-envs.py\n@@ -116,10 +116,10 @@\n {\n \"lang\": \"vhdl\",\n \"sim\": \"nvc\",\n- \"sim-version\": \"master\", # Only master supported for now\n+ \"sim-version\": \"r1.11.0\",\n \"os\": \"ubuntu-latest\",\n \"python-version\": \"3.8\",\n- \"group\": \"experimental\",\n+ \"group\": \"ci\",\n },\n # Test Verilator on Ubuntu\n {\n", "issue": "NVC: Remove experimental status\nAfter the current pending PRs we can move the next release of NVC out of the experimental CI group and update the docs.\r\n- [x] #3553 / #3559\r\n- [x] #3554 \r\n- [x] NVC release (https://github.com/nickg/nvc/releases/tag/r1.11.0)\r\n- [x] Update docs #3566\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n# Copyright cocotb contributors\n# Licensed under the Revised BSD License, see LICENSE for details.\n# SPDX-License-Identifier: BSD-3-Clause\n\n\"\"\"Get a list test environments.\"\"\"\n\nimport argparse\nimport json\nimport sys\n\nENVS = [\n # Test different Python versions with package managed Icarus on Ubuntu\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"apt\",\n # lowest version according to https://raw.githubusercontent.com/actions/python-versions/main/versions-manifest.json\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.6.7\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"apt\",\n # lowest version according to https://raw.githubusercontent.com/actions/python-versions/main/versions-manifest.json\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.7.1\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"apt\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"apt\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.9\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"apt\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.10\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"apt\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.11\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"apt\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.12\",\n \"group\": \"ci\",\n },\n # A single test for the upcoming Python version.\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"apt\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.13.0-alpha - 3.13.0\",\n \"group\": \"experimental\",\n },\n # Test Icarus on Ubuntu\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"master\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.8\",\n \"group\": \"experimental\",\n },\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"v12_0\", # The latest release version.\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.8\",\n \"group\": \"experimental\",\n },\n # Test GHDL on Ubuntu\n {\n \"lang\": \"vhdl\",\n \"sim\": \"ghdl\",\n \"sim-version\": \"v2.0.0\", # GHDL 2.0 is the minimum supported version.\n \"os\": \"ubuntu-latest\",\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"vhdl\",\n \"sim\": \"ghdl\",\n \"sim-version\": \"master\",\n \"os\": \"ubuntu-latest\",\n 
\"python-version\": \"3.8\",\n \"group\": \"experimental\",\n },\n # Test NVC on Ubuntu\n {\n \"lang\": \"vhdl\",\n \"sim\": \"nvc\",\n \"sim-version\": \"master\", # Only master supported for now\n \"os\": \"ubuntu-latest\",\n \"python-version\": \"3.8\",\n \"group\": \"experimental\",\n },\n # Test Verilator on Ubuntu\n {\n \"lang\": \"verilog\",\n \"sim\": \"verilator\",\n \"sim-version\": \"v4.106\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.8\",\n # Various cocotb tests are known to fail with Verilator 4.106.\n # Newer versions of Verilator are not working at all.\n # See also https://github.com/cocotb/cocotb/issues/2300\n \"group\": \"experimental\",\n },\n {\n \"lang\": \"verilog\",\n \"sim\": \"verilator\",\n \"sim-version\": \"master\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.8\",\n # Tests are currently not expected to work at all.\n # See also https://github.com/cocotb/cocotb/issues/2300\n \"group\": \"experimental\",\n },\n # Test other OSes\n # Icarus homebrew\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"homebrew-stable\",\n \"os\": \"macos-11\",\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n # Icarus homebrew (HEAD/master)\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"homebrew-HEAD\",\n \"os\": \"macos-11\",\n \"python-version\": \"3.8\",\n \"group\": \"experimental\",\n },\n # Icarus windows from source\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"v12_0\",\n \"os\": \"windows-latest\",\n \"python-version\": \"3.8\",\n \"toolchain\": \"mingw\",\n \"extra_name\": \"mingw | \",\n # mingw tests fail silently currently due to test harness limitations.\n \"group\": \"experimental\",\n },\n # use msvc instead of mingw\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"v12_0\",\n \"os\": \"windows-latest\",\n \"python-version\": \"3.11\",\n \"toolchain\": \"msvc\",\n \"extra_name\": \"msvc | \",\n \"group\": \"ci\",\n },\n # Other\n # use clang instead of gcc\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"v12_0\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.8\",\n \"cxx\": \"clang++\",\n \"cc\": \"clang\",\n \"extra_name\": \"clang | \",\n \"group\": \"ci\",\n },\n # Test Siemens Questa on Ubuntu\n {\n \"lang\": \"verilog\",\n \"sim\": \"questa\",\n \"sim-version\": \"siemens/questa/2023.2\",\n \"os\": \"ubuntu-20.04\",\n \"self-hosted\": True,\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"vhdl and fli\",\n \"sim\": \"questa\",\n \"sim-version\": \"siemens/questa/2023.2\",\n \"os\": \"ubuntu-20.04\",\n \"self-hosted\": True,\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"vhdl and vhpi\",\n \"sim\": \"questa\",\n \"sim-version\": \"siemens/questa/2023.2\",\n \"os\": \"ubuntu-20.04\",\n \"self-hosted\": True,\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n # Test Aldec Riviera-PRO on Ubuntu\n {\n \"lang\": \"verilog\",\n \"sim\": \"riviera\",\n \"sim-version\": \"aldec/rivierapro/2022.04\",\n \"os\": \"ubuntu-20.04\",\n \"self-hosted\": True,\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"vhdl\",\n \"sim\": \"riviera\",\n \"sim-version\": \"aldec/rivierapro/2022.04\",\n \"os\": \"ubuntu-20.04\",\n \"self-hosted\": True,\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n # Test Cadence Xcelium on Ubuntu\n {\n \"lang\": \"verilog\",\n \"sim\": \"xcelium\",\n \"sim-version\": \"cadence/xcelium/2303\",\n 
\"os\": \"ubuntu-20.04\",\n \"self-hosted\": True,\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n # Xcelium VHDL (VHPI) is not yet supported.\n # {\n # \"lang\": \"vhdl\",\n # \"sim\": \"xcelium\",\n # \"sim-version\": \"cadence/xcelium/2303\",\n # \"os\": \"ubuntu-20.04\",\n # \"self-hosted\": True,\n # \"python-version\": \"3.8\",\n # \"group\": \"ci\",\n # },\n]\n\n\ndef main() -> int:\n parser = argparse.ArgumentParser(description=__doc__)\n parser.add_argument(\"--group\")\n parser.add_argument(\"--output-format\", choices=(\"gha\", \"json\"), default=\"json\")\n parser.add_argument(\n \"--gha-output-file\",\n type=argparse.FileType(\"a\", encoding=\"utf-8\"),\n help=\"The $GITHUB_OUTPUT file.\",\n )\n\n args = parser.parse_args()\n\n if args.group is not None and args.group != \"\":\n selected_envs = [t for t in ENVS if \"group\" in t and t[\"group\"] == args.group]\n else:\n # Return all tasks if no group is selected.\n selected_envs = ENVS\n\n # The \"runs-on\" job attribute is a string if we're using the GitHub-provided\n # hosted runners, or an array with special keys if we're using self-hosted\n # runners.\n for env in selected_envs:\n if \"self-hosted\" in env and env[\"self-hosted\"] and \"runs-on\" not in env:\n env[\"runs-on\"] = [\"self-hosted\", \"cocotb-private\", env[\"os\"]]\n else:\n env[\"runs-on\"] = env[\"os\"]\n\n if args.output_format == \"gha\":\n # Output for GitHub Actions (GHA). Appends the configuration to\n # the file named in the \"--gha-output-file\" argument.\n\n assert args.gha_output_file is not None\n\n # The generated JSON output may not contain newlines to be parsed by GHA\n print(f\"envs={json.dumps(selected_envs)}\", file=args.gha_output_file)\n\n # Print the the selected environments for easier debugging.\n print(\"Generated the following test configurations:\")\n print(json.dumps(selected_envs, indent=2))\n elif args.output_format == \"json\":\n print(json.dumps(selected_envs, indent=2))\n else:\n assert False\n\n return 0\n\n\nif __name__ == \"__main__\":\n sys.exit(main())\n", "path": ".github/generate-envs.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# Copyright cocotb contributors\n# Licensed under the Revised BSD License, see LICENSE for details.\n# SPDX-License-Identifier: BSD-3-Clause\n\n\"\"\"Get a list test environments.\"\"\"\n\nimport argparse\nimport json\nimport sys\n\nENVS = [\n # Test different Python versions with package managed Icarus on Ubuntu\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"apt\",\n # lowest version according to https://raw.githubusercontent.com/actions/python-versions/main/versions-manifest.json\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.6.7\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"apt\",\n # lowest version according to https://raw.githubusercontent.com/actions/python-versions/main/versions-manifest.json\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.7.1\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"apt\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"apt\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.9\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"apt\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.10\",\n \"group\": \"ci\",\n },\n {\n 
\"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"apt\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.11\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"apt\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.12\",\n \"group\": \"ci\",\n },\n # A single test for the upcoming Python version.\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"apt\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.13.0-alpha - 3.13.0\",\n \"group\": \"experimental\",\n },\n # Test Icarus on Ubuntu\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"master\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.8\",\n \"group\": \"experimental\",\n },\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"v12_0\", # The latest release version.\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.8\",\n \"group\": \"experimental\",\n },\n # Test GHDL on Ubuntu\n {\n \"lang\": \"vhdl\",\n \"sim\": \"ghdl\",\n \"sim-version\": \"v2.0.0\", # GHDL 2.0 is the minimum supported version.\n \"os\": \"ubuntu-latest\",\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"vhdl\",\n \"sim\": \"ghdl\",\n \"sim-version\": \"master\",\n \"os\": \"ubuntu-latest\",\n \"python-version\": \"3.8\",\n \"group\": \"experimental\",\n },\n # Test NVC on Ubuntu\n {\n \"lang\": \"vhdl\",\n \"sim\": \"nvc\",\n \"sim-version\": \"r1.11.0\",\n \"os\": \"ubuntu-latest\",\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n # Test Verilator on Ubuntu\n {\n \"lang\": \"verilog\",\n \"sim\": \"verilator\",\n \"sim-version\": \"v4.106\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.8\",\n # Various cocotb tests are known to fail with Verilator 4.106.\n # Newer versions of Verilator are not working at all.\n # See also https://github.com/cocotb/cocotb/issues/2300\n \"group\": \"experimental\",\n },\n {\n \"lang\": \"verilog\",\n \"sim\": \"verilator\",\n \"sim-version\": \"master\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.8\",\n # Tests are currently not expected to work at all.\n # See also https://github.com/cocotb/cocotb/issues/2300\n \"group\": \"experimental\",\n },\n # Test other OSes\n # Icarus homebrew\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"homebrew-stable\",\n \"os\": \"macos-11\",\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n # Icarus homebrew (HEAD/master)\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"homebrew-HEAD\",\n \"os\": \"macos-11\",\n \"python-version\": \"3.8\",\n \"group\": \"experimental\",\n },\n # Icarus windows from source\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"v12_0\",\n \"os\": \"windows-latest\",\n \"python-version\": \"3.8\",\n \"toolchain\": \"mingw\",\n \"extra_name\": \"mingw | \",\n # mingw tests fail silently currently due to test harness limitations.\n \"group\": \"experimental\",\n },\n # use msvc instead of mingw\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"v12_0\",\n \"os\": \"windows-latest\",\n \"python-version\": \"3.11\",\n \"toolchain\": \"msvc\",\n \"extra_name\": \"msvc | \",\n \"group\": \"ci\",\n },\n # Other\n # use clang instead of gcc\n {\n \"lang\": \"verilog\",\n \"sim\": \"icarus\",\n \"sim-version\": \"v12_0\",\n \"os\": \"ubuntu-20.04\",\n \"python-version\": \"3.8\",\n \"cxx\": \"clang++\",\n \"cc\": \"clang\",\n \"extra_name\": \"clang | \",\n \"group\": 
\"ci\",\n },\n # Test Siemens Questa on Ubuntu\n {\n \"lang\": \"verilog\",\n \"sim\": \"questa\",\n \"sim-version\": \"siemens/questa/2023.2\",\n \"os\": \"ubuntu-20.04\",\n \"self-hosted\": True,\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"vhdl and fli\",\n \"sim\": \"questa\",\n \"sim-version\": \"siemens/questa/2023.2\",\n \"os\": \"ubuntu-20.04\",\n \"self-hosted\": True,\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"vhdl and vhpi\",\n \"sim\": \"questa\",\n \"sim-version\": \"siemens/questa/2023.2\",\n \"os\": \"ubuntu-20.04\",\n \"self-hosted\": True,\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n # Test Aldec Riviera-PRO on Ubuntu\n {\n \"lang\": \"verilog\",\n \"sim\": \"riviera\",\n \"sim-version\": \"aldec/rivierapro/2022.04\",\n \"os\": \"ubuntu-20.04\",\n \"self-hosted\": True,\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n {\n \"lang\": \"vhdl\",\n \"sim\": \"riviera\",\n \"sim-version\": \"aldec/rivierapro/2022.04\",\n \"os\": \"ubuntu-20.04\",\n \"self-hosted\": True,\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n # Test Cadence Xcelium on Ubuntu\n {\n \"lang\": \"verilog\",\n \"sim\": \"xcelium\",\n \"sim-version\": \"cadence/xcelium/2303\",\n \"os\": \"ubuntu-20.04\",\n \"self-hosted\": True,\n \"python-version\": \"3.8\",\n \"group\": \"ci\",\n },\n # Xcelium VHDL (VHPI) is not yet supported.\n # {\n # \"lang\": \"vhdl\",\n # \"sim\": \"xcelium\",\n # \"sim-version\": \"cadence/xcelium/2303\",\n # \"os\": \"ubuntu-20.04\",\n # \"self-hosted\": True,\n # \"python-version\": \"3.8\",\n # \"group\": \"ci\",\n # },\n]\n\n\ndef main() -> int:\n parser = argparse.ArgumentParser(description=__doc__)\n parser.add_argument(\"--group\")\n parser.add_argument(\"--output-format\", choices=(\"gha\", \"json\"), default=\"json\")\n parser.add_argument(\n \"--gha-output-file\",\n type=argparse.FileType(\"a\", encoding=\"utf-8\"),\n help=\"The $GITHUB_OUTPUT file.\",\n )\n\n args = parser.parse_args()\n\n if args.group is not None and args.group != \"\":\n selected_envs = [t for t in ENVS if \"group\" in t and t[\"group\"] == args.group]\n else:\n # Return all tasks if no group is selected.\n selected_envs = ENVS\n\n # The \"runs-on\" job attribute is a string if we're using the GitHub-provided\n # hosted runners, or an array with special keys if we're using self-hosted\n # runners.\n for env in selected_envs:\n if \"self-hosted\" in env and env[\"self-hosted\"] and \"runs-on\" not in env:\n env[\"runs-on\"] = [\"self-hosted\", \"cocotb-private\", env[\"os\"]]\n else:\n env[\"runs-on\"] = env[\"os\"]\n\n if args.output_format == \"gha\":\n # Output for GitHub Actions (GHA). Appends the configuration to\n # the file named in the \"--gha-output-file\" argument.\n\n assert args.gha_output_file is not None\n\n # The generated JSON output may not contain newlines to be parsed by GHA\n print(f\"envs={json.dumps(selected_envs)}\", file=args.gha_output_file)\n\n # Print the the selected environments for easier debugging.\n print(\"Generated the following test configurations:\")\n print(json.dumps(selected_envs, indent=2))\n elif args.output_format == \"json\":\n print(json.dumps(selected_envs, indent=2))\n else:\n assert False\n\n return 0\n\n\nif __name__ == \"__main__\":\n sys.exit(main())\n", "path": ".github/generate-envs.py"}]}
| 3,882 | 154 |
gh_patches_debug_26277
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmpose-1373
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ImportError: cannot import name '_xywh2xyxy' from 'mmpose.apis.inference'
https://github.com/open-mmlab/mmpose/blob/04e57f8765c9489390e4af58f81085d383083da5/demo/body3d_two_stage_img_demo.py#L130 (i.e. the `from mmpose.apis.inference import _xywh2xyxy` statement at line 130 of `demo/body3d_two_stage_img_demo.py`, inside the `--only-second-stage` branch)
--- END ISSUE ---
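To make the failing symbol concrete, here is a generic sketch of what an xywh→xyxy bounding-box conversion does. This is illustrative only — it is not the mmpose implementation, and mmpose's own helper may use a different pixel convention (e.g. subtracting 1 from the bottom-right corner):
```python
import numpy as np


def xywh_to_xyxy(bbox_xywh: np.ndarray) -> np.ndarray:
    """Convert boxes from (x, y, w, h) to (x1, y1, x2, y2).

    Generic illustration; any trailing columns (e.g. a score) are passed
    through unchanged. The exact corner convention used by mmpose should
    be checked against the library itself.
    """
    bbox_xyxy = bbox_xywh.copy()
    bbox_xyxy[..., 2] = bbox_xyxy[..., 0] + bbox_xyxy[..., 2]
    bbox_xyxy[..., 3] = bbox_xyxy[..., 1] + bbox_xyxy[..., 3]
    return bbox_xyxy


print(xywh_to_xyxy(np.array([[10.0, 20.0, 30.0, 40.0]])))  # [[10. 20. 40. 60.]]
```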
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `demo/body3d_two_stage_img_demo.py`
Content:
```
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import os
3 import os.path as osp
4 import warnings
5 from argparse import ArgumentParser
6
7 import mmcv
8 import numpy as np
9 from xtcocotools.coco import COCO
10
11 from mmpose.apis import (inference_pose_lifter_model,
12 inference_top_down_pose_model, vis_3d_pose_result)
13 from mmpose.apis.inference import init_pose_model
14 from mmpose.core import SimpleCamera
15 from mmpose.datasets import DatasetInfo
16
17
18 def _keypoint_camera_to_world(keypoints,
19 camera_params,
20 image_name=None,
21 dataset='Body3DH36MDataset'):
22 """Project 3D keypoints from the camera space to the world space.
23
24 Args:
25 keypoints (np.ndarray): 3D keypoints in shape [..., 3]
26 camera_params (dict): Parameters for all cameras.
27 image_name (str): The image name to specify the camera.
28 dataset (str): The dataset type, e.g. Body3DH36MDataset.
29 """
30 cam_key = None
31 if dataset == 'Body3DH36MDataset':
32 subj, rest = osp.basename(image_name).split('_', 1)
33 _, rest = rest.split('.', 1)
34 camera, rest = rest.split('_', 1)
35 cam_key = (subj, camera)
36 else:
37 raise NotImplementedError
38
39 camera = SimpleCamera(camera_params[cam_key])
40 keypoints_world = keypoints.copy()
41 keypoints_world[..., :3] = camera.camera_to_world(keypoints[..., :3])
42
43 return keypoints_world
44
45
46 def main():
47 parser = ArgumentParser()
48 parser.add_argument(
49 'pose_lifter_config',
50 help='Config file for the 2nd stage pose lifter model')
51 parser.add_argument(
52 'pose_lifter_checkpoint',
53 help='Checkpoint file for the 2nd stage pose lifter model')
54 parser.add_argument(
55 '--pose-detector-config',
56 type=str,
57 default=None,
58 help='Config file for the 1st stage 2D pose detector')
59 parser.add_argument(
60 '--pose-detector-checkpoint',
61 type=str,
62 default=None,
63 help='Checkpoint file for the 1st stage 2D pose detector')
64 parser.add_argument('--img-root', type=str, default='', help='Image root')
65 parser.add_argument(
66 '--json-file',
67 type=str,
68 default=None,
69 help='Json file containing image and bbox information. Optionally,'
70 'The Json file can also contain 2D pose information. See'
71 '"only-second-stage"')
72 parser.add_argument(
73 '--camera-param-file',
74 type=str,
75 default=None,
76 help='Camera parameter file for converting 3D pose predictions from '
77 ' the camera space to to world space. If None, no conversion will be '
78 'applied.')
79 parser.add_argument(
80 '--only-second-stage',
81 action='store_true',
82 help='If true, load 2D pose detection result from the Json file and '
83 'skip the 1st stage. The pose detection model will be ignored.')
84 parser.add_argument(
85 '--rebase-keypoint-height',
86 action='store_true',
87 help='Rebase the predicted 3D pose so its lowest keypoint has a '
88 'height of 0 (landing on the ground). This is useful for '
89 'visualization when the model do not predict the global position '
90 'of the 3D pose.')
91 parser.add_argument(
92 '--show-ground-truth',
93 action='store_true',
94 help='If True, show ground truth if it is available. The ground truth '
95 'should be contained in the annotations in the Json file with the key '
96 '"keypoints_3d" for each instance.')
97 parser.add_argument(
98 '--show',
99 action='store_true',
100 default=False,
101 help='whether to show img')
102 parser.add_argument(
103 '--out-img-root',
104 type=str,
105 default=None,
106 help='Root of the output visualization images. '
107 'Default not saving the visualization images.')
108 parser.add_argument(
109 '--device', default='cuda:0', help='Device for inference')
110 parser.add_argument('--kpt-thr', type=float, default=0.3)
111 parser.add_argument(
112 '--radius',
113 type=int,
114 default=4,
115 help='Keypoint radius for visualization')
116 parser.add_argument(
117 '--thickness',
118 type=int,
119 default=1,
120 help='Link thickness for visualization')
121
122 args = parser.parse_args()
123 assert args.show or (args.out_img_root != '')
124
125 coco = COCO(args.json_file)
126
127 # First stage: 2D pose detection
128 pose_det_results_list = []
129 if args.only_second_stage:
130 from mmpose.apis.inference import _xywh2xyxy
131
132 print('Stage 1: load 2D pose results from Json file.')
133 for image_id, image in coco.imgs.items():
134 image_name = osp.join(args.img_root, image['file_name'])
135 ann_ids = coco.getAnnIds(image_id)
136 pose_det_results = []
137 for ann_id in ann_ids:
138 ann = coco.anns[ann_id]
139 keypoints = np.array(ann['keypoints']).reshape(-1, 3)
140 keypoints[..., 2] = keypoints[..., 2] >= 1
141 keypoints_3d = np.array(ann['keypoints_3d']).reshape(-1, 4)
142 keypoints_3d[..., 3] = keypoints_3d[..., 3] >= 1
143 bbox = np.array(ann['bbox']).reshape(1, -1)
144
145 pose_det_result = {
146 'image_name': image_name,
147 'bbox': _xywh2xyxy(bbox),
148 'keypoints': keypoints,
149 'keypoints_3d': keypoints_3d
150 }
151 pose_det_results.append(pose_det_result)
152 pose_det_results_list.append(pose_det_results)
153
154 else:
155 print('Stage 1: 2D pose detection.')
156
157 pose_det_model = init_pose_model(
158 args.pose_detector_config,
159 args.pose_detector_checkpoint,
160 device=args.device.lower())
161
162 assert pose_det_model.cfg.model.type == 'TopDown', 'Only "TopDown"' \
163 'model is supported for the 1st stage (2D pose detection)'
164
165 dataset = pose_det_model.cfg.data['test']['type']
166 dataset_info = pose_det_model.cfg.data['test'].get(
167 'dataset_info', None)
168 if dataset_info is None:
169 warnings.warn(
170 'Please set `dataset_info` in the config.'
171 'Check https://github.com/open-mmlab/mmpose/pull/663 '
172 'for details.', DeprecationWarning)
173 else:
174 dataset_info = DatasetInfo(dataset_info)
175
176 img_keys = list(coco.imgs.keys())
177
178 for i in mmcv.track_iter_progress(range(len(img_keys))):
179 # get bounding box annotations
180 image_id = img_keys[i]
181 image = coco.loadImgs(image_id)[0]
182 image_name = osp.join(args.img_root, image['file_name'])
183 ann_ids = coco.getAnnIds(image_id)
184
185 # make person results for single image
186 person_results = []
187 for ann_id in ann_ids:
188 person = {}
189 ann = coco.anns[ann_id]
190 person['bbox'] = ann['bbox']
191 person_results.append(person)
192
193 pose_det_results, _ = inference_top_down_pose_model(
194 pose_det_model,
195 image_name,
196 person_results,
197 bbox_thr=None,
198 format='xywh',
199 dataset=dataset,
200 dataset_info=dataset_info,
201 return_heatmap=False,
202 outputs=None)
203
204 for res in pose_det_results:
205 res['image_name'] = image_name
206 pose_det_results_list.append(pose_det_results)
207
208 # Second stage: Pose lifting
209 print('Stage 2: 2D-to-3D pose lifting.')
210
211 pose_lift_model = init_pose_model(
212 args.pose_lifter_config,
213 args.pose_lifter_checkpoint,
214 device=args.device.lower())
215
216 assert pose_lift_model.cfg.model.type == 'PoseLifter', 'Only' \
217 '"PoseLifter" model is supported for the 2nd stage ' \
218 '(2D-to-3D lifting)'
219 dataset = pose_lift_model.cfg.data['test']['type']
220 dataset_info = pose_lift_model.cfg.data['test'].get('dataset_info', None)
221 if dataset_info is None:
222 warnings.warn(
223 'Please set `dataset_info` in the config.'
224 'Check https://github.com/open-mmlab/mmpose/pull/663 for details.',
225 DeprecationWarning)
226 else:
227 dataset_info = DatasetInfo(dataset_info)
228
229 camera_params = None
230 if args.camera_param_file is not None:
231 camera_params = mmcv.load(args.camera_param_file)
232
233 for i, pose_det_results in enumerate(
234 mmcv.track_iter_progress(pose_det_results_list)):
235 # 2D-to-3D pose lifting
236 # Note that the pose_det_results are regarded as a single-frame pose
237 # sequence
238 pose_lift_results = inference_pose_lifter_model(
239 pose_lift_model,
240 pose_results_2d=[pose_det_results],
241 dataset=dataset,
242 dataset_info=dataset_info,
243 with_track_id=False)
244
245 image_name = pose_det_results[0]['image_name']
246
247 # Pose processing
248 pose_lift_results_vis = []
249 for idx, res in enumerate(pose_lift_results):
250 keypoints_3d = res['keypoints_3d']
251 # project to world space
252 if camera_params is not None:
253 keypoints_3d = _keypoint_camera_to_world(
254 keypoints_3d,
255 camera_params=camera_params,
256 image_name=image_name,
257 dataset=dataset)
258 # rebase height (z-axis)
259 if args.rebase_keypoint_height:
260 keypoints_3d[..., 2] -= np.min(
261 keypoints_3d[..., 2], axis=-1, keepdims=True)
262 res['keypoints_3d'] = keypoints_3d
263 # Add title
264 det_res = pose_det_results[idx]
265 instance_id = det_res.get('track_id', idx)
266 res['title'] = f'Prediction ({instance_id})'
267 pose_lift_results_vis.append(res)
268 # Add ground truth
269 if args.show_ground_truth:
270 if 'keypoints_3d' not in det_res:
271 print('Fail to show ground truth. Please make sure that'
272 ' the instance annotations from the Json file'
273 ' contain "keypoints_3d".')
274 else:
275 gt = res.copy()
276 gt['keypoints_3d'] = det_res['keypoints_3d']
277 gt['title'] = f'Ground truth ({instance_id})'
278 pose_lift_results_vis.append(gt)
279
280 # Visualization
281 if args.out_img_root is None:
282 out_file = None
283 else:
284 os.makedirs(args.out_img_root, exist_ok=True)
285 out_file = osp.join(args.out_img_root, f'vis_{i}.jpg')
286
287 vis_3d_pose_result(
288 pose_lift_model,
289 result=pose_lift_results_vis,
290 img=image_name,
291 dataset_info=dataset_info,
292 out_file=out_file)
293
294
295 if __name__ == '__main__':
296 main()
297
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/demo/body3d_two_stage_img_demo.py b/demo/body3d_two_stage_img_demo.py
--- a/demo/body3d_two_stage_img_demo.py
+++ b/demo/body3d_two_stage_img_demo.py
@@ -11,7 +11,8 @@
from mmpose.apis import (inference_pose_lifter_model,
inference_top_down_pose_model, vis_3d_pose_result)
from mmpose.apis.inference import init_pose_model
-from mmpose.core import SimpleCamera
+from mmpose.core.bbox import bbox_xywh2xyxy
+from mmpose.core.camera import SimpleCamera
from mmpose.datasets import DatasetInfo
@@ -127,7 +128,6 @@
# First stage: 2D pose detection
pose_det_results_list = []
if args.only_second_stage:
- from mmpose.apis.inference import _xywh2xyxy
print('Stage 1: load 2D pose results from Json file.')
for image_id, image in coco.imgs.items():
@@ -144,7 +144,7 @@
pose_det_result = {
'image_name': image_name,
- 'bbox': _xywh2xyxy(bbox),
+ 'bbox': bbox_xywh2xyxy(bbox),
'keypoints': keypoints,
'keypoints_3d': keypoints_3d
}
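As an illustrative smoke test for the corrected import paths (taken verbatim from the diff above): it only checks that the symbols resolve and that the converter accepts the same `(1, 4)` bbox array the demo builds from a COCO annotation; the exact corner convention of the returned xyxy box is whatever mmpose defines.
```python
# Editorial sketch, not part of the patch. Import paths come from the diff
# above; the bbox shape matches np.array(ann['bbox']).reshape(1, -1) in the demo.
import numpy as np

from mmpose.core.bbox import bbox_xywh2xyxy
from mmpose.core.camera import SimpleCamera  # noqa: F401  (resolves after the fix)

bbox = np.array([[10.0, 20.0, 30.0, 40.0]])  # xywh, shape (1, 4)
print(bbox_xywh2xyxy(bbox))
```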
|
{"golden_diff": "diff --git a/demo/body3d_two_stage_img_demo.py b/demo/body3d_two_stage_img_demo.py\n--- a/demo/body3d_two_stage_img_demo.py\n+++ b/demo/body3d_two_stage_img_demo.py\n@@ -11,7 +11,8 @@\n from mmpose.apis import (inference_pose_lifter_model,\n inference_top_down_pose_model, vis_3d_pose_result)\n from mmpose.apis.inference import init_pose_model\n-from mmpose.core import SimpleCamera\n+from mmpose.core.bbox import bbox_xywh2xyxy\n+from mmpose.core.camera import SimpleCamera\n from mmpose.datasets import DatasetInfo\n \n \n@@ -127,7 +128,6 @@\n # First stage: 2D pose detection\n pose_det_results_list = []\n if args.only_second_stage:\n- from mmpose.apis.inference import _xywh2xyxy\n \n print('Stage 1: load 2D pose results from Json file.')\n for image_id, image in coco.imgs.items():\n@@ -144,7 +144,7 @@\n \n pose_det_result = {\n 'image_name': image_name,\n- 'bbox': _xywh2xyxy(bbox),\n+ 'bbox': bbox_xywh2xyxy(bbox),\n 'keypoints': keypoints,\n 'keypoints_3d': keypoints_3d\n }\n", "issue": "ImportError: cannot import name '_xywh2xyxy' from 'mmpose.apis.inference' \nhttps://github.com/open-mmlab/mmpose/blob/04e57f8765c9489390e4af58f81085d383083da5/demo/body3d_two_stage_img_demo.py#L130\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport os\nimport os.path as osp\nimport warnings\nfrom argparse import ArgumentParser\n\nimport mmcv\nimport numpy as np\nfrom xtcocotools.coco import COCO\n\nfrom mmpose.apis import (inference_pose_lifter_model,\n inference_top_down_pose_model, vis_3d_pose_result)\nfrom mmpose.apis.inference import init_pose_model\nfrom mmpose.core import SimpleCamera\nfrom mmpose.datasets import DatasetInfo\n\n\ndef _keypoint_camera_to_world(keypoints,\n camera_params,\n image_name=None,\n dataset='Body3DH36MDataset'):\n \"\"\"Project 3D keypoints from the camera space to the world space.\n\n Args:\n keypoints (np.ndarray): 3D keypoints in shape [..., 3]\n camera_params (dict): Parameters for all cameras.\n image_name (str): The image name to specify the camera.\n dataset (str): The dataset type, e.g. Body3DH36MDataset.\n \"\"\"\n cam_key = None\n if dataset == 'Body3DH36MDataset':\n subj, rest = osp.basename(image_name).split('_', 1)\n _, rest = rest.split('.', 1)\n camera, rest = rest.split('_', 1)\n cam_key = (subj, camera)\n else:\n raise NotImplementedError\n\n camera = SimpleCamera(camera_params[cam_key])\n keypoints_world = keypoints.copy()\n keypoints_world[..., :3] = camera.camera_to_world(keypoints[..., :3])\n\n return keypoints_world\n\n\ndef main():\n parser = ArgumentParser()\n parser.add_argument(\n 'pose_lifter_config',\n help='Config file for the 2nd stage pose lifter model')\n parser.add_argument(\n 'pose_lifter_checkpoint',\n help='Checkpoint file for the 2nd stage pose lifter model')\n parser.add_argument(\n '--pose-detector-config',\n type=str,\n default=None,\n help='Config file for the 1st stage 2D pose detector')\n parser.add_argument(\n '--pose-detector-checkpoint',\n type=str,\n default=None,\n help='Checkpoint file for the 1st stage 2D pose detector')\n parser.add_argument('--img-root', type=str, default='', help='Image root')\n parser.add_argument(\n '--json-file',\n type=str,\n default=None,\n help='Json file containing image and bbox information. Optionally,'\n 'The Json file can also contain 2D pose information. 
See'\n '\"only-second-stage\"')\n parser.add_argument(\n '--camera-param-file',\n type=str,\n default=None,\n help='Camera parameter file for converting 3D pose predictions from '\n ' the camera space to to world space. If None, no conversion will be '\n 'applied.')\n parser.add_argument(\n '--only-second-stage',\n action='store_true',\n help='If true, load 2D pose detection result from the Json file and '\n 'skip the 1st stage. The pose detection model will be ignored.')\n parser.add_argument(\n '--rebase-keypoint-height',\n action='store_true',\n help='Rebase the predicted 3D pose so its lowest keypoint has a '\n 'height of 0 (landing on the ground). This is useful for '\n 'visualization when the model do not predict the global position '\n 'of the 3D pose.')\n parser.add_argument(\n '--show-ground-truth',\n action='store_true',\n help='If True, show ground truth if it is available. The ground truth '\n 'should be contained in the annotations in the Json file with the key '\n '\"keypoints_3d\" for each instance.')\n parser.add_argument(\n '--show',\n action='store_true',\n default=False,\n help='whether to show img')\n parser.add_argument(\n '--out-img-root',\n type=str,\n default=None,\n help='Root of the output visualization images. '\n 'Default not saving the visualization images.')\n parser.add_argument(\n '--device', default='cuda:0', help='Device for inference')\n parser.add_argument('--kpt-thr', type=float, default=0.3)\n parser.add_argument(\n '--radius',\n type=int,\n default=4,\n help='Keypoint radius for visualization')\n parser.add_argument(\n '--thickness',\n type=int,\n default=1,\n help='Link thickness for visualization')\n\n args = parser.parse_args()\n assert args.show or (args.out_img_root != '')\n\n coco = COCO(args.json_file)\n\n # First stage: 2D pose detection\n pose_det_results_list = []\n if args.only_second_stage:\n from mmpose.apis.inference import _xywh2xyxy\n\n print('Stage 1: load 2D pose results from Json file.')\n for image_id, image in coco.imgs.items():\n image_name = osp.join(args.img_root, image['file_name'])\n ann_ids = coco.getAnnIds(image_id)\n pose_det_results = []\n for ann_id in ann_ids:\n ann = coco.anns[ann_id]\n keypoints = np.array(ann['keypoints']).reshape(-1, 3)\n keypoints[..., 2] = keypoints[..., 2] >= 1\n keypoints_3d = np.array(ann['keypoints_3d']).reshape(-1, 4)\n keypoints_3d[..., 3] = keypoints_3d[..., 3] >= 1\n bbox = np.array(ann['bbox']).reshape(1, -1)\n\n pose_det_result = {\n 'image_name': image_name,\n 'bbox': _xywh2xyxy(bbox),\n 'keypoints': keypoints,\n 'keypoints_3d': keypoints_3d\n }\n pose_det_results.append(pose_det_result)\n pose_det_results_list.append(pose_det_results)\n\n else:\n print('Stage 1: 2D pose detection.')\n\n pose_det_model = init_pose_model(\n args.pose_detector_config,\n args.pose_detector_checkpoint,\n device=args.device.lower())\n\n assert pose_det_model.cfg.model.type == 'TopDown', 'Only \"TopDown\"' \\\n 'model is supported for the 1st stage (2D pose detection)'\n\n dataset = pose_det_model.cfg.data['test']['type']\n dataset_info = pose_det_model.cfg.data['test'].get(\n 'dataset_info', None)\n if dataset_info is None:\n warnings.warn(\n 'Please set `dataset_info` in the config.'\n 'Check https://github.com/open-mmlab/mmpose/pull/663 '\n 'for details.', DeprecationWarning)\n else:\n dataset_info = DatasetInfo(dataset_info)\n\n img_keys = list(coco.imgs.keys())\n\n for i in mmcv.track_iter_progress(range(len(img_keys))):\n # get bounding box annotations\n image_id = img_keys[i]\n image = 
coco.loadImgs(image_id)[0]\n image_name = osp.join(args.img_root, image['file_name'])\n ann_ids = coco.getAnnIds(image_id)\n\n # make person results for single image\n person_results = []\n for ann_id in ann_ids:\n person = {}\n ann = coco.anns[ann_id]\n person['bbox'] = ann['bbox']\n person_results.append(person)\n\n pose_det_results, _ = inference_top_down_pose_model(\n pose_det_model,\n image_name,\n person_results,\n bbox_thr=None,\n format='xywh',\n dataset=dataset,\n dataset_info=dataset_info,\n return_heatmap=False,\n outputs=None)\n\n for res in pose_det_results:\n res['image_name'] = image_name\n pose_det_results_list.append(pose_det_results)\n\n # Second stage: Pose lifting\n print('Stage 2: 2D-to-3D pose lifting.')\n\n pose_lift_model = init_pose_model(\n args.pose_lifter_config,\n args.pose_lifter_checkpoint,\n device=args.device.lower())\n\n assert pose_lift_model.cfg.model.type == 'PoseLifter', 'Only' \\\n '\"PoseLifter\" model is supported for the 2nd stage ' \\\n '(2D-to-3D lifting)'\n dataset = pose_lift_model.cfg.data['test']['type']\n dataset_info = pose_lift_model.cfg.data['test'].get('dataset_info', None)\n if dataset_info is None:\n warnings.warn(\n 'Please set `dataset_info` in the config.'\n 'Check https://github.com/open-mmlab/mmpose/pull/663 for details.',\n DeprecationWarning)\n else:\n dataset_info = DatasetInfo(dataset_info)\n\n camera_params = None\n if args.camera_param_file is not None:\n camera_params = mmcv.load(args.camera_param_file)\n\n for i, pose_det_results in enumerate(\n mmcv.track_iter_progress(pose_det_results_list)):\n # 2D-to-3D pose lifting\n # Note that the pose_det_results are regarded as a single-frame pose\n # sequence\n pose_lift_results = inference_pose_lifter_model(\n pose_lift_model,\n pose_results_2d=[pose_det_results],\n dataset=dataset,\n dataset_info=dataset_info,\n with_track_id=False)\n\n image_name = pose_det_results[0]['image_name']\n\n # Pose processing\n pose_lift_results_vis = []\n for idx, res in enumerate(pose_lift_results):\n keypoints_3d = res['keypoints_3d']\n # project to world space\n if camera_params is not None:\n keypoints_3d = _keypoint_camera_to_world(\n keypoints_3d,\n camera_params=camera_params,\n image_name=image_name,\n dataset=dataset)\n # rebase height (z-axis)\n if args.rebase_keypoint_height:\n keypoints_3d[..., 2] -= np.min(\n keypoints_3d[..., 2], axis=-1, keepdims=True)\n res['keypoints_3d'] = keypoints_3d\n # Add title\n det_res = pose_det_results[idx]\n instance_id = det_res.get('track_id', idx)\n res['title'] = f'Prediction ({instance_id})'\n pose_lift_results_vis.append(res)\n # Add ground truth\n if args.show_ground_truth:\n if 'keypoints_3d' not in det_res:\n print('Fail to show ground truth. Please make sure that'\n ' the instance annotations from the Json file'\n ' contain \"keypoints_3d\".')\n else:\n gt = res.copy()\n gt['keypoints_3d'] = det_res['keypoints_3d']\n gt['title'] = f'Ground truth ({instance_id})'\n pose_lift_results_vis.append(gt)\n\n # Visualization\n if args.out_img_root is None:\n out_file = None\n else:\n os.makedirs(args.out_img_root, exist_ok=True)\n out_file = osp.join(args.out_img_root, f'vis_{i}.jpg')\n\n vis_3d_pose_result(\n pose_lift_model,\n result=pose_lift_results_vis,\n img=image_name,\n dataset_info=dataset_info,\n out_file=out_file)\n\n\nif __name__ == '__main__':\n main()\n", "path": "demo/body3d_two_stage_img_demo.py"}], "after_files": [{"content": "# Copyright (c) OpenMMLab. 
All rights reserved.\nimport os\nimport os.path as osp\nimport warnings\nfrom argparse import ArgumentParser\n\nimport mmcv\nimport numpy as np\nfrom xtcocotools.coco import COCO\n\nfrom mmpose.apis import (inference_pose_lifter_model,\n inference_top_down_pose_model, vis_3d_pose_result)\nfrom mmpose.apis.inference import init_pose_model\nfrom mmpose.core.bbox import bbox_xywh2xyxy\nfrom mmpose.core.camera import SimpleCamera\nfrom mmpose.datasets import DatasetInfo\n\n\ndef _keypoint_camera_to_world(keypoints,\n camera_params,\n image_name=None,\n dataset='Body3DH36MDataset'):\n \"\"\"Project 3D keypoints from the camera space to the world space.\n\n Args:\n keypoints (np.ndarray): 3D keypoints in shape [..., 3]\n camera_params (dict): Parameters for all cameras.\n image_name (str): The image name to specify the camera.\n dataset (str): The dataset type, e.g. Body3DH36MDataset.\n \"\"\"\n cam_key = None\n if dataset == 'Body3DH36MDataset':\n subj, rest = osp.basename(image_name).split('_', 1)\n _, rest = rest.split('.', 1)\n camera, rest = rest.split('_', 1)\n cam_key = (subj, camera)\n else:\n raise NotImplementedError\n\n camera = SimpleCamera(camera_params[cam_key])\n keypoints_world = keypoints.copy()\n keypoints_world[..., :3] = camera.camera_to_world(keypoints[..., :3])\n\n return keypoints_world\n\n\ndef main():\n parser = ArgumentParser()\n parser.add_argument(\n 'pose_lifter_config',\n help='Config file for the 2nd stage pose lifter model')\n parser.add_argument(\n 'pose_lifter_checkpoint',\n help='Checkpoint file for the 2nd stage pose lifter model')\n parser.add_argument(\n '--pose-detector-config',\n type=str,\n default=None,\n help='Config file for the 1st stage 2D pose detector')\n parser.add_argument(\n '--pose-detector-checkpoint',\n type=str,\n default=None,\n help='Checkpoint file for the 1st stage 2D pose detector')\n parser.add_argument('--img-root', type=str, default='', help='Image root')\n parser.add_argument(\n '--json-file',\n type=str,\n default=None,\n help='Json file containing image and bbox information. Optionally,'\n 'The Json file can also contain 2D pose information. See'\n '\"only-second-stage\"')\n parser.add_argument(\n '--camera-param-file',\n type=str,\n default=None,\n help='Camera parameter file for converting 3D pose predictions from '\n ' the camera space to to world space. If None, no conversion will be '\n 'applied.')\n parser.add_argument(\n '--only-second-stage',\n action='store_true',\n help='If true, load 2D pose detection result from the Json file and '\n 'skip the 1st stage. The pose detection model will be ignored.')\n parser.add_argument(\n '--rebase-keypoint-height',\n action='store_true',\n help='Rebase the predicted 3D pose so its lowest keypoint has a '\n 'height of 0 (landing on the ground). This is useful for '\n 'visualization when the model do not predict the global position '\n 'of the 3D pose.')\n parser.add_argument(\n '--show-ground-truth',\n action='store_true',\n help='If True, show ground truth if it is available. The ground truth '\n 'should be contained in the annotations in the Json file with the key '\n '\"keypoints_3d\" for each instance.')\n parser.add_argument(\n '--show',\n action='store_true',\n default=False,\n help='whether to show img')\n parser.add_argument(\n '--out-img-root',\n type=str,\n default=None,\n help='Root of the output visualization images. 
'\n 'Default not saving the visualization images.')\n parser.add_argument(\n '--device', default='cuda:0', help='Device for inference')\n parser.add_argument('--kpt-thr', type=float, default=0.3)\n parser.add_argument(\n '--radius',\n type=int,\n default=4,\n help='Keypoint radius for visualization')\n parser.add_argument(\n '--thickness',\n type=int,\n default=1,\n help='Link thickness for visualization')\n\n args = parser.parse_args()\n assert args.show or (args.out_img_root != '')\n\n coco = COCO(args.json_file)\n\n # First stage: 2D pose detection\n pose_det_results_list = []\n if args.only_second_stage:\n\n print('Stage 1: load 2D pose results from Json file.')\n for image_id, image in coco.imgs.items():\n image_name = osp.join(args.img_root, image['file_name'])\n ann_ids = coco.getAnnIds(image_id)\n pose_det_results = []\n for ann_id in ann_ids:\n ann = coco.anns[ann_id]\n keypoints = np.array(ann['keypoints']).reshape(-1, 3)\n keypoints[..., 2] = keypoints[..., 2] >= 1\n keypoints_3d = np.array(ann['keypoints_3d']).reshape(-1, 4)\n keypoints_3d[..., 3] = keypoints_3d[..., 3] >= 1\n bbox = np.array(ann['bbox']).reshape(1, -1)\n\n pose_det_result = {\n 'image_name': image_name,\n 'bbox': bbox_xywh2xyxy(bbox),\n 'keypoints': keypoints,\n 'keypoints_3d': keypoints_3d\n }\n pose_det_results.append(pose_det_result)\n pose_det_results_list.append(pose_det_results)\n\n else:\n print('Stage 1: 2D pose detection.')\n\n pose_det_model = init_pose_model(\n args.pose_detector_config,\n args.pose_detector_checkpoint,\n device=args.device.lower())\n\n assert pose_det_model.cfg.model.type == 'TopDown', 'Only \"TopDown\"' \\\n 'model is supported for the 1st stage (2D pose detection)'\n\n dataset = pose_det_model.cfg.data['test']['type']\n dataset_info = pose_det_model.cfg.data['test'].get(\n 'dataset_info', None)\n if dataset_info is None:\n warnings.warn(\n 'Please set `dataset_info` in the config.'\n 'Check https://github.com/open-mmlab/mmpose/pull/663 '\n 'for details.', DeprecationWarning)\n else:\n dataset_info = DatasetInfo(dataset_info)\n\n img_keys = list(coco.imgs.keys())\n\n for i in mmcv.track_iter_progress(range(len(img_keys))):\n # get bounding box annotations\n image_id = img_keys[i]\n image = coco.loadImgs(image_id)[0]\n image_name = osp.join(args.img_root, image['file_name'])\n ann_ids = coco.getAnnIds(image_id)\n\n # make person results for single image\n person_results = []\n for ann_id in ann_ids:\n person = {}\n ann = coco.anns[ann_id]\n person['bbox'] = ann['bbox']\n person_results.append(person)\n\n pose_det_results, _ = inference_top_down_pose_model(\n pose_det_model,\n image_name,\n person_results,\n bbox_thr=None,\n format='xywh',\n dataset=dataset,\n dataset_info=dataset_info,\n return_heatmap=False,\n outputs=None)\n\n for res in pose_det_results:\n res['image_name'] = image_name\n pose_det_results_list.append(pose_det_results)\n\n # Second stage: Pose lifting\n print('Stage 2: 2D-to-3D pose lifting.')\n\n pose_lift_model = init_pose_model(\n args.pose_lifter_config,\n args.pose_lifter_checkpoint,\n device=args.device.lower())\n\n assert pose_lift_model.cfg.model.type == 'PoseLifter', 'Only' \\\n '\"PoseLifter\" model is supported for the 2nd stage ' \\\n '(2D-to-3D lifting)'\n dataset = pose_lift_model.cfg.data['test']['type']\n dataset_info = pose_lift_model.cfg.data['test'].get('dataset_info', None)\n if dataset_info is None:\n warnings.warn(\n 'Please set `dataset_info` in the config.'\n 'Check https://github.com/open-mmlab/mmpose/pull/663 for details.',\n 
DeprecationWarning)\n else:\n dataset_info = DatasetInfo(dataset_info)\n\n camera_params = None\n if args.camera_param_file is not None:\n camera_params = mmcv.load(args.camera_param_file)\n\n for i, pose_det_results in enumerate(\n mmcv.track_iter_progress(pose_det_results_list)):\n # 2D-to-3D pose lifting\n # Note that the pose_det_results are regarded as a single-frame pose\n # sequence\n pose_lift_results = inference_pose_lifter_model(\n pose_lift_model,\n pose_results_2d=[pose_det_results],\n dataset=dataset,\n dataset_info=dataset_info,\n with_track_id=False)\n\n image_name = pose_det_results[0]['image_name']\n\n # Pose processing\n pose_lift_results_vis = []\n for idx, res in enumerate(pose_lift_results):\n keypoints_3d = res['keypoints_3d']\n # project to world space\n if camera_params is not None:\n keypoints_3d = _keypoint_camera_to_world(\n keypoints_3d,\n camera_params=camera_params,\n image_name=image_name,\n dataset=dataset)\n # rebase height (z-axis)\n if args.rebase_keypoint_height:\n keypoints_3d[..., 2] -= np.min(\n keypoints_3d[..., 2], axis=-1, keepdims=True)\n res['keypoints_3d'] = keypoints_3d\n # Add title\n det_res = pose_det_results[idx]\n instance_id = det_res.get('track_id', idx)\n res['title'] = f'Prediction ({instance_id})'\n pose_lift_results_vis.append(res)\n # Add ground truth\n if args.show_ground_truth:\n if 'keypoints_3d' not in det_res:\n print('Fail to show ground truth. Please make sure that'\n ' the instance annotations from the Json file'\n ' contain \"keypoints_3d\".')\n else:\n gt = res.copy()\n gt['keypoints_3d'] = det_res['keypoints_3d']\n gt['title'] = f'Ground truth ({instance_id})'\n pose_lift_results_vis.append(gt)\n\n # Visualization\n if args.out_img_root is None:\n out_file = None\n else:\n os.makedirs(args.out_img_root, exist_ok=True)\n out_file = osp.join(args.out_img_root, f'vis_{i}.jpg')\n\n vis_3d_pose_result(\n pose_lift_model,\n result=pose_lift_results_vis,\n img=image_name,\n dataset_info=dataset_info,\n out_file=out_file)\n\n\nif __name__ == '__main__':\n main()\n", "path": "demo/body3d_two_stage_img_demo.py"}]}
| 3,600 | 311 |
gh_patches_debug_37403
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-1670
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[plugin issue] plugin.vidio.com / can't handle stream URLs anymore
- [x] This is a bug report.
- [ ] This is a feature request.
- [ ] This is a plugin (improvement) request.
- [ ] I have read the contribution guidelines.
### Description
plugin handling of https://www.vidio.com/live URls not working anymore due to changes at provided stream structure
### Expected / Actual behavior
streamlink -l debug www.vidio.com/live/665-rcti-tv-stream
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.5.2
[cli][debug] Streamlink: 0.12.1
[cli][debug] Requests(2.18.4), Socks(1.6.7), Websocket(0.47.0)
[cli][info] Found matching plugin vidio for URL www.vidio.com/live/665-rcti-tv-stream
[plugin.vidio][debug] HLS URL: https://kmklive-lh.akamaihd.net/i/rcti_ta_regular@94478/master.m3u8
error: Unable to open URL: https://kmklive-lh.akamaihd.net/i/rcti_ta_regular@94478/master.m3u8 (403 Client Error: Forbidden for url: https://kmklive-lh.akamaihd.net/i/rcti_ta_regular@94478/master.m3u8)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/vidio.py`
Content:
```
1 '''
2 Plugin for vidio.com
3 - https://www.vidio.com/live/5075-dw-tv-stream
4 - https://www.vidio.com/watch/766861-5-rekor-fantastis-zidane-bersama-real-madrid
5 '''
6 import re
7
8 from streamlink.plugin import Plugin
9 from streamlink.plugin.api import http
10 from streamlink.stream import HLSStream
11
12 _url_re = re.compile(r"https?://(?:www\.)?vidio\.com/(?:en/)?(?P<type>live|watch)/(?P<id>\d+)-(?P<name>[^/?#&]+)")
13 _playlist_re = re.compile(r'''hls-url=["'](?P<url>[^"']+)["']''')
14
15
16 class Vidio(Plugin):
17 @classmethod
18 def can_handle_url(cls, url):
19 return _url_re.match(url)
20
21 def _get_streams(self):
22 res = http.get(self.url)
23
24 match = _playlist_re.search(res.text)
25 if match is None:
26 return
27
28 url = match.group('url')
29
30 if url:
31 self.logger.debug('HLS URL: {0}'.format(url))
32 for s in HLSStream.parse_variant_playlist(self.session, url).items():
33 yield s
34
35
36 __plugin__ = Vidio
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink/plugins/vidio.py b/src/streamlink/plugins/vidio.py
--- a/src/streamlink/plugins/vidio.py
+++ b/src/streamlink/plugins/vidio.py
@@ -1,36 +1,61 @@
-'''
+"""
Plugin for vidio.com
- https://www.vidio.com/live/5075-dw-tv-stream
- https://www.vidio.com/watch/766861-5-rekor-fantastis-zidane-bersama-real-madrid
-'''
+"""
import re
from streamlink.plugin import Plugin
-from streamlink.plugin.api import http
+from streamlink.plugin.api import http, useragents, validate
from streamlink.stream import HLSStream
-
-_url_re = re.compile(r"https?://(?:www\.)?vidio\.com/(?:en/)?(?P<type>live|watch)/(?P<id>\d+)-(?P<name>[^/?#&]+)")
-_playlist_re = re.compile(r'''hls-url=["'](?P<url>[^"']+)["']''')
+from streamlink.utils import parse_json
class Vidio(Plugin):
+ _url_re = re.compile(r"https?://(?:www\.)?vidio\.com/(?:en/)?(?P<type>live|watch)/(?P<id>\d+)-(?P<name>[^/?#&]+)")
+ _playlist_re = re.compile(r'''hls-url=["'](?P<url>[^"']+)["']''')
+ _data_id_re = re.compile(r'''meta\s+data-id=["'](?P<id>[^"']+)["']''')
+
+ csrf_tokens_url = "https://www.vidio.com/csrf_tokens"
+ tokens_url = "https://www.vidio.com/live/{id}/tokens"
+ token_schema = validate.Schema(validate.transform(parse_json),
+ {"token": str},
+ validate.get("token"))
+
@classmethod
def can_handle_url(cls, url):
- return _url_re.match(url)
+ return cls._url_re.match(url)
+
+ def get_csrf_tokens(self):
+ return http.get(self.csrf_tokens_url,
+ schema=self.token_schema)
+
+ def get_url_tokens(self, stream_id):
+ self.logger.debug("Getting stream tokens")
+ csrf_token = self.get_csrf_tokens()
+ return http.post(self.tokens_url.format(id=stream_id),
+ files={"authenticity_token": (None, csrf_token)},
+ headers={"User-Agent": useragents.CHROME,
+ "Referer": self.url},
+ schema=self.token_schema)
def _get_streams(self):
res = http.get(self.url)
- match = _playlist_re.search(res.text)
- if match is None:
- return
+ plmatch = self._playlist_re.search(res.text)
+ idmatch = self._data_id_re.search(res.text)
+
+ hls_url = plmatch and plmatch.group("url")
+ stream_id = idmatch and idmatch.group("id")
- url = match.group('url')
+ tokens = self.get_url_tokens(stream_id)
- if url:
- self.logger.debug('HLS URL: {0}'.format(url))
- for s in HLSStream.parse_variant_playlist(self.session, url).items():
- yield s
+ if hls_url:
+ self.logger.debug("HLS URL: {0}".format(hls_url))
+ self.logger.debug("Tokens: {0}".format(tokens))
+ return HLSStream.parse_variant_playlist(self.session, hls_url+"?"+tokens,
+ headers={"User-Agent": useragents.CHROME,
+ "Referer": self.url})
__plugin__ = Vidio
|
{"golden_diff": "diff --git a/src/streamlink/plugins/vidio.py b/src/streamlink/plugins/vidio.py\n--- a/src/streamlink/plugins/vidio.py\n+++ b/src/streamlink/plugins/vidio.py\n@@ -1,36 +1,61 @@\n-'''\n+\"\"\"\n Plugin for vidio.com\n - https://www.vidio.com/live/5075-dw-tv-stream\n - https://www.vidio.com/watch/766861-5-rekor-fantastis-zidane-bersama-real-madrid\n-'''\n+\"\"\"\n import re\n \n from streamlink.plugin import Plugin\n-from streamlink.plugin.api import http\n+from streamlink.plugin.api import http, useragents, validate\n from streamlink.stream import HLSStream\n-\n-_url_re = re.compile(r\"https?://(?:www\\.)?vidio\\.com/(?:en/)?(?P<type>live|watch)/(?P<id>\\d+)-(?P<name>[^/?#&]+)\")\n-_playlist_re = re.compile(r'''hls-url=[\"'](?P<url>[^\"']+)[\"']''')\n+from streamlink.utils import parse_json\n \n \n class Vidio(Plugin):\n+ _url_re = re.compile(r\"https?://(?:www\\.)?vidio\\.com/(?:en/)?(?P<type>live|watch)/(?P<id>\\d+)-(?P<name>[^/?#&]+)\")\n+ _playlist_re = re.compile(r'''hls-url=[\"'](?P<url>[^\"']+)[\"']''')\n+ _data_id_re = re.compile(r'''meta\\s+data-id=[\"'](?P<id>[^\"']+)[\"']''')\n+\n+ csrf_tokens_url = \"https://www.vidio.com/csrf_tokens\"\n+ tokens_url = \"https://www.vidio.com/live/{id}/tokens\"\n+ token_schema = validate.Schema(validate.transform(parse_json),\n+ {\"token\": str},\n+ validate.get(\"token\"))\n+\n @classmethod\n def can_handle_url(cls, url):\n- return _url_re.match(url)\n+ return cls._url_re.match(url)\n+\n+ def get_csrf_tokens(self):\n+ return http.get(self.csrf_tokens_url,\n+ schema=self.token_schema)\n+\n+ def get_url_tokens(self, stream_id):\n+ self.logger.debug(\"Getting stream tokens\")\n+ csrf_token = self.get_csrf_tokens()\n+ return http.post(self.tokens_url.format(id=stream_id),\n+ files={\"authenticity_token\": (None, csrf_token)},\n+ headers={\"User-Agent\": useragents.CHROME,\n+ \"Referer\": self.url},\n+ schema=self.token_schema)\n \n def _get_streams(self):\n res = http.get(self.url)\n \n- match = _playlist_re.search(res.text)\n- if match is None:\n- return\n+ plmatch = self._playlist_re.search(res.text)\n+ idmatch = self._data_id_re.search(res.text)\n+\n+ hls_url = plmatch and plmatch.group(\"url\")\n+ stream_id = idmatch and idmatch.group(\"id\")\n \n- url = match.group('url')\n+ tokens = self.get_url_tokens(stream_id)\n \n- if url:\n- self.logger.debug('HLS URL: {0}'.format(url))\n- for s in HLSStream.parse_variant_playlist(self.session, url).items():\n- yield s\n+ if hls_url:\n+ self.logger.debug(\"HLS URL: {0}\".format(hls_url))\n+ self.logger.debug(\"Tokens: {0}\".format(tokens))\n+ return HLSStream.parse_variant_playlist(self.session, hls_url+\"?\"+tokens,\n+ headers={\"User-Agent\": useragents.CHROME,\n+ \"Referer\": self.url})\n \n \n __plugin__ = Vidio\n", "issue": "[plugin issue] plugin.vidio.com / can't handle stream URLs anymore\n- [x] This is a bug report.\r\n- [ ] This is a feature request.\r\n- [ ] This is a plugin (improvement) request.\r\n- [ ] I have read the contribution guidelines.\r\n\r\n### Description\r\nplugin handling of https://www.vidio.com/live URls not working anymore due to changes at provided stream structure\r\n\r\n### Expected / Actual behavior\r\nstreamlink -l debug www.vidio.com/live/665-rcti-tv-stream\r\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.5.2\r\n[cli][debug] Streamlink: 0.12.1\r\n[cli][debug] Requests(2.18.4), Socks(1.6.7), Websocket(0.47.0)\r\n[cli][info] Found matching plugin vidio for URL www.vidio.com/live/665-rcti-tv-stream\r\n[plugin.vidio][debug] HLS URL: 
https://kmklive-lh.akamaihd.net/i/rcti_ta_regular@94478/master.m3u8\r\nerror: Unable to open URL: https://kmklive-lh.akamaihd.net/i/rcti_ta_regular@94478/master.m3u8 (403 Client Error: Forbidden for url: https://kmklive-lh.akamaihd.net/i/rcti_ta_regular@94478/master.m3u8)\r\n\n", "before_files": [{"content": "'''\nPlugin for vidio.com\n- https://www.vidio.com/live/5075-dw-tv-stream\n- https://www.vidio.com/watch/766861-5-rekor-fantastis-zidane-bersama-real-madrid\n'''\nimport re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.stream import HLSStream\n\n_url_re = re.compile(r\"https?://(?:www\\.)?vidio\\.com/(?:en/)?(?P<type>live|watch)/(?P<id>\\d+)-(?P<name>[^/?#&]+)\")\n_playlist_re = re.compile(r'''hls-url=[\"'](?P<url>[^\"']+)[\"']''')\n\n\nclass Vidio(Plugin):\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n def _get_streams(self):\n res = http.get(self.url)\n\n match = _playlist_re.search(res.text)\n if match is None:\n return\n\n url = match.group('url')\n\n if url:\n self.logger.debug('HLS URL: {0}'.format(url))\n for s in HLSStream.parse_variant_playlist(self.session, url).items():\n yield s\n\n\n__plugin__ = Vidio\n", "path": "src/streamlink/plugins/vidio.py"}], "after_files": [{"content": "\"\"\"\nPlugin for vidio.com\n- https://www.vidio.com/live/5075-dw-tv-stream\n- https://www.vidio.com/watch/766861-5-rekor-fantastis-zidane-bersama-real-madrid\n\"\"\"\nimport re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http, useragents, validate\nfrom streamlink.stream import HLSStream\nfrom streamlink.utils import parse_json\n\n\nclass Vidio(Plugin):\n _url_re = re.compile(r\"https?://(?:www\\.)?vidio\\.com/(?:en/)?(?P<type>live|watch)/(?P<id>\\d+)-(?P<name>[^/?#&]+)\")\n _playlist_re = re.compile(r'''hls-url=[\"'](?P<url>[^\"']+)[\"']''')\n _data_id_re = re.compile(r'''meta\\s+data-id=[\"'](?P<id>[^\"']+)[\"']''')\n\n csrf_tokens_url = \"https://www.vidio.com/csrf_tokens\"\n tokens_url = \"https://www.vidio.com/live/{id}/tokens\"\n token_schema = validate.Schema(validate.transform(parse_json),\n {\"token\": str},\n validate.get(\"token\"))\n\n @classmethod\n def can_handle_url(cls, url):\n return cls._url_re.match(url)\n\n def get_csrf_tokens(self):\n return http.get(self.csrf_tokens_url,\n schema=self.token_schema)\n\n def get_url_tokens(self, stream_id):\n self.logger.debug(\"Getting stream tokens\")\n csrf_token = self.get_csrf_tokens()\n return http.post(self.tokens_url.format(id=stream_id),\n files={\"authenticity_token\": (None, csrf_token)},\n headers={\"User-Agent\": useragents.CHROME,\n \"Referer\": self.url},\n schema=self.token_schema)\n\n def _get_streams(self):\n res = http.get(self.url)\n\n plmatch = self._playlist_re.search(res.text)\n idmatch = self._data_id_re.search(res.text)\n\n hls_url = plmatch and plmatch.group(\"url\")\n stream_id = idmatch and idmatch.group(\"id\")\n\n tokens = self.get_url_tokens(stream_id)\n\n if hls_url:\n self.logger.debug(\"HLS URL: {0}\".format(hls_url))\n self.logger.debug(\"Tokens: {0}\".format(tokens))\n return HLSStream.parse_variant_playlist(self.session, hls_url+\"?\"+tokens,\n headers={\"User-Agent\": useragents.CHROME,\n \"Referer\": self.url})\n\n\n__plugin__ = Vidio\n", "path": "src/streamlink/plugins/vidio.py"}]}
| 961 | 835 |
gh_patches_debug_27243
|
rasdani/github-patches
|
git_diff
|
python__mypy-2772
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
overwriting None from Tuple[AnyStr]'s only item causes PartialType-related crash
``` python
import os.path
x = None
if 0:
_, x = os.path.split("blah")
```
results in:
```
Traceback (most recent call last):
File "/usr/local/bin/mypy", line 6, in <module>
main(__file__)
File "/usr/local/lib/python3.5/site-packages/mypy/main.py", line 54, in main
res = type_check_only(sources, bin_dir, options)
File "/usr/local/lib/python3.5/site-packages/mypy/main.py", line 102, in type_check_only
python_path=options.python_path)
File "/usr/local/lib/python3.5/site-packages/mypy/build.py", line 209, in build
dispatch(sources, manager)
File "/usr/local/lib/python3.5/site-packages/mypy/build.py", line 1325, in dispatch
process_graph(graph, manager)
File "/usr/local/lib/python3.5/site-packages/mypy/build.py", line 1456, in process_graph
process_stale_scc(graph, scc)
File "/usr/local/lib/python3.5/site-packages/mypy/build.py", line 1486, in process_stale_scc
graph[id].type_check()
File "/usr/local/lib/python3.5/site-packages/mypy/build.py", line 1305, in type_check
manager.type_checker.visit_file(self.tree, self.xpath)
File "/usr/local/lib/python3.5/site-packages/mypy/checker.py", line 424, in visit_file
self.accept(d)
File "/usr/local/lib/python3.5/site-packages/mypy/checker.py", line 465, in accept
typ = node.accept(self)
File "/usr/local/lib/python3.5/site-packages/mypy/nodes.py", line 862, in accept
return visitor.visit_if_stmt(self)
File "/usr/local/lib/python3.5/site-packages/mypy/checker.py", line 1661, in visit_if_stmt
self.accept(b)
File "/usr/local/lib/python3.5/site-packages/mypy/checker.py", line 465, in accept
typ = node.accept(self)
File "/usr/local/lib/python3.5/site-packages/mypy/nodes.py", line 715, in accept
return visitor.visit_block(self)
File "/usr/local/lib/python3.5/site-packages/mypy/checker.py", line 1149, in visit_block
self.accept(s)
File "/usr/local/lib/python3.5/site-packages/mypy/checker.py", line 465, in accept
typ = node.accept(self)
File "/usr/local/lib/python3.5/site-packages/mypy/nodes.py", line 753, in accept
return visitor.visit_assignment_stmt(self)
File "/usr/local/lib/python3.5/site-packages/mypy/checker.py", line 1158, in visit_assignment_stmt
self.check_assignment(s.lvalues[-1], s.rvalue, s.type is None)
File "/usr/local/lib/python3.5/site-packages/mypy/checker.py", line 1175, in check_assignment
infer_lvalue_type)
File "/usr/local/lib/python3.5/site-packages/mypy/checker.py", line 1244, in check_assignment_to_multiple_lvalues
self.check_multi_assignment(lvalues, rvalue, context, infer_lvalue_type)
File "/usr/local/lib/python3.5/site-packages/mypy/checker.py", line 1277, in check_multi_assignment
context, undefined_rvalue, infer_lvalue_type)
File "/usr/local/lib/python3.5/site-packages/mypy/checker.py", line 1297, in check_multi_assignment_from_tuple
rvalue_type = cast(TupleType, self.accept(rvalue, lvalue_type))
File "/usr/local/lib/python3.5/site-packages/mypy/checker.py", line 465, in accept
typ = node.accept(self)
File "/usr/local/lib/python3.5/site-packages/mypy/nodes.py", line 1185, in accept
return visitor.visit_call_expr(self)
File "/usr/local/lib/python3.5/site-packages/mypy/checker.py", line 1997, in visit_call_expr
return self.expr_checker.visit_call_expr(e)
File "/usr/local/lib/python3.5/site-packages/mypy/checkexpr.py", line 141, in visit_call_expr
return self.check_call_expr_with_callee_type(callee_type, e)
File "/usr/local/lib/python3.5/site-packages/mypy/checkexpr.py", line 192, in check_call_expr_with_callee_type
e.arg_names, callable_node=e.callee)[0]
File "/usr/local/lib/python3.5/site-packages/mypy/checkexpr.py", line 230, in check_call
callee, context)
File "/usr/local/lib/python3.5/site-packages/mypy/checkexpr.py", line 370, in infer_function_type_arguments_using_context
args = infer_type_arguments(callable.type_var_ids(), ret_type, erased_ctx)
File "/usr/local/lib/python3.5/site-packages/mypy/infer.py", line 43, in infer_type_arguments
return solve_constraints(type_var_ids, constraints)
File "/usr/local/lib/python3.5/site-packages/mypy/solve.py", line 50, in solve_constraints
top = meet_types(top, c.target)
File "/usr/local/lib/python3.5/site-packages/mypy/meet.py", line 22, in meet_types
return t.accept(TypeMeetVisitor(s))
File "/usr/local/lib/python3.5/site-packages/mypy/types.py", line 763, in accept
return visitor.visit_partial_type(self)
File "/usr/local/lib/python3.5/site-packages/mypy/meet.py", line 208, in visit_partial_type
assert False, 'Internal error'
AssertionError: Internal error
*** INTERNAL ERROR ***
test.py:5: error: Internal error -- please report a bug at https://github.com/python/mypy/issues
NOTE: you can use "mypy --pdb ..." to drop into the debugger when this happens.
```
We might want to fix this before releasing.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mypy/applytype.py`
Content:
```
1 from typing import List, Dict
2
3 import mypy.subtypes
4 from mypy.sametypes import is_same_type
5 from mypy.expandtype import expand_type
6 from mypy.types import Type, TypeVarId, TypeVarType, CallableType, AnyType
7 from mypy.messages import MessageBuilder
8 from mypy.nodes import Context
9
10
11 def apply_generic_arguments(callable: CallableType, types: List[Type],
12 msg: MessageBuilder, context: Context) -> CallableType:
13 """Apply generic type arguments to a callable type.
14
15 For example, applying [int] to 'def [T] (T) -> T' results in
16 'def (int) -> int'.
17
18 Note that each type can be None; in this case, it will not be applied.
19 """
20 tvars = callable.variables
21 assert len(tvars) == len(types)
22 # Check that inferred type variable values are compatible with allowed
23 # values and bounds. Also, promote subtype values to allowed values.
24 types = types[:]
25 for i, type in enumerate(types):
26 values = callable.variables[i].values
27 if values and type:
28 if isinstance(type, AnyType):
29 continue
30 if isinstance(type, TypeVarType) and type.values:
31 # Allow substituting T1 for T if every allowed value of T1
32 # is also a legal value of T.
33 if all(any(is_same_type(v, v1) for v in values)
34 for v1 in type.values):
35 continue
36 for value in values:
37 if mypy.subtypes.is_subtype(type, value):
38 types[i] = value
39 break
40 else:
41 msg.incompatible_typevar_value(callable, i + 1, type, context)
42
43 upper_bound = callable.variables[i].upper_bound
44 if type and not mypy.subtypes.satisfies_upper_bound(type, upper_bound):
45 msg.incompatible_typevar_value(callable, i + 1, type, context)
46
47 # Create a map from type variable id to target type.
48 id_to_type = {} # type: Dict[TypeVarId, Type]
49 for i, tv in enumerate(tvars):
50 if types[i]:
51 id_to_type[tv.id] = types[i]
52
53 # Apply arguments to argument types.
54 arg_types = [expand_type(at, id_to_type) for at in callable.arg_types]
55
56 # The callable may retain some type vars if only some were applied.
57 remaining_tvars = [tv for tv in tvars if tv.id not in id_to_type]
58
59 return callable.copy_modified(
60 arg_types=arg_types,
61 ret_type=expand_type(callable.ret_type, id_to_type),
62 variables=remaining_tvars,
63 )
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mypy/applytype.py b/mypy/applytype.py
--- a/mypy/applytype.py
+++ b/mypy/applytype.py
@@ -3,7 +3,7 @@
import mypy.subtypes
from mypy.sametypes import is_same_type
from mypy.expandtype import expand_type
-from mypy.types import Type, TypeVarId, TypeVarType, CallableType, AnyType
+from mypy.types import Type, TypeVarId, TypeVarType, CallableType, AnyType, PartialType
from mypy.messages import MessageBuilder
from mypy.nodes import Context
@@ -34,14 +34,15 @@
for v1 in type.values):
continue
for value in values:
- if mypy.subtypes.is_subtype(type, value):
+ if isinstance(type, PartialType) or mypy.subtypes.is_subtype(type, value):
types[i] = value
break
else:
msg.incompatible_typevar_value(callable, i + 1, type, context)
upper_bound = callable.variables[i].upper_bound
- if type and not mypy.subtypes.satisfies_upper_bound(type, upper_bound):
+ if (type and not isinstance(type, PartialType) and
+ not mypy.subtypes.satisfies_upper_bound(type, upper_bound)):
msg.incompatible_typevar_value(callable, i + 1, type, context)
# Create a map from type variable id to target type.
|
{"golden_diff": "diff --git a/mypy/applytype.py b/mypy/applytype.py\n--- a/mypy/applytype.py\n+++ b/mypy/applytype.py\n@@ -3,7 +3,7 @@\n import mypy.subtypes\n from mypy.sametypes import is_same_type\n from mypy.expandtype import expand_type\n-from mypy.types import Type, TypeVarId, TypeVarType, CallableType, AnyType\n+from mypy.types import Type, TypeVarId, TypeVarType, CallableType, AnyType, PartialType\n from mypy.messages import MessageBuilder\n from mypy.nodes import Context\n \n@@ -34,14 +34,15 @@\n for v1 in type.values):\n continue\n for value in values:\n- if mypy.subtypes.is_subtype(type, value):\n+ if isinstance(type, PartialType) or mypy.subtypes.is_subtype(type, value):\n types[i] = value\n break\n else:\n msg.incompatible_typevar_value(callable, i + 1, type, context)\n \n upper_bound = callable.variables[i].upper_bound\n- if type and not mypy.subtypes.satisfies_upper_bound(type, upper_bound):\n+ if (type and not isinstance(type, PartialType) and\n+ not mypy.subtypes.satisfies_upper_bound(type, upper_bound)):\n msg.incompatible_typevar_value(callable, i + 1, type, context)\n \n # Create a map from type variable id to target type.\n", "issue": "overwriting None from Tuple[AnyStr]'s only item causes PartialType-related crash\n``` python\nimport os.path\n\nx = None\nif 0:\n _, x = os.path.split(\"blah\")\n```\n\nresults in:\n\n```\nTraceback (most recent call last):\n File \"/usr/local/bin/mypy\", line 6, in <module>\n main(__file__)\n File \"/usr/local/lib/python3.5/site-packages/mypy/main.py\", line 54, in main\n res = type_check_only(sources, bin_dir, options)\n File \"/usr/local/lib/python3.5/site-packages/mypy/main.py\", line 102, in type_check_only\n python_path=options.python_path)\n File \"/usr/local/lib/python3.5/site-packages/mypy/build.py\", line 209, in build\n dispatch(sources, manager)\n File \"/usr/local/lib/python3.5/site-packages/mypy/build.py\", line 1325, in dispatch\n process_graph(graph, manager)\n File \"/usr/local/lib/python3.5/site-packages/mypy/build.py\", line 1456, in process_graph\n process_stale_scc(graph, scc)\n File \"/usr/local/lib/python3.5/site-packages/mypy/build.py\", line 1486, in process_stale_scc\n graph[id].type_check()\n File \"/usr/local/lib/python3.5/site-packages/mypy/build.py\", line 1305, in type_check\n manager.type_checker.visit_file(self.tree, self.xpath)\n File \"/usr/local/lib/python3.5/site-packages/mypy/checker.py\", line 424, in visit_file\n self.accept(d)\n File \"/usr/local/lib/python3.5/site-packages/mypy/checker.py\", line 465, in accept\n typ = node.accept(self)\n File \"/usr/local/lib/python3.5/site-packages/mypy/nodes.py\", line 862, in accept\n return visitor.visit_if_stmt(self)\n File \"/usr/local/lib/python3.5/site-packages/mypy/checker.py\", line 1661, in visit_if_stmt\n self.accept(b)\n File \"/usr/local/lib/python3.5/site-packages/mypy/checker.py\", line 465, in accept\n typ = node.accept(self)\n File \"/usr/local/lib/python3.5/site-packages/mypy/nodes.py\", line 715, in accept\n return visitor.visit_block(self)\n File \"/usr/local/lib/python3.5/site-packages/mypy/checker.py\", line 1149, in visit_block\n self.accept(s)\n File \"/usr/local/lib/python3.5/site-packages/mypy/checker.py\", line 465, in accept\n typ = node.accept(self)\n File \"/usr/local/lib/python3.5/site-packages/mypy/nodes.py\", line 753, in accept\n return visitor.visit_assignment_stmt(self)\n File \"/usr/local/lib/python3.5/site-packages/mypy/checker.py\", line 1158, in visit_assignment_stmt\n self.check_assignment(s.lvalues[-1], s.rvalue, s.type 
is None)\n File \"/usr/local/lib/python3.5/site-packages/mypy/checker.py\", line 1175, in check_assignment\n infer_lvalue_type)\n File \"/usr/local/lib/python3.5/site-packages/mypy/checker.py\", line 1244, in check_assignment_to_multiple_lvalues\n self.check_multi_assignment(lvalues, rvalue, context, infer_lvalue_type)\n File \"/usr/local/lib/python3.5/site-packages/mypy/checker.py\", line 1277, in check_multi_assignment\n context, undefined_rvalue, infer_lvalue_type)\n File \"/usr/local/lib/python3.5/site-packages/mypy/checker.py\", line 1297, in check_multi_assignment_from_tuple\n rvalue_type = cast(TupleType, self.accept(rvalue, lvalue_type))\n File \"/usr/local/lib/python3.5/site-packages/mypy/checker.py\", line 465, in accept\n typ = node.accept(self)\n File \"/usr/local/lib/python3.5/site-packages/mypy/nodes.py\", line 1185, in accept\n return visitor.visit_call_expr(self)\n File \"/usr/local/lib/python3.5/site-packages/mypy/checker.py\", line 1997, in visit_call_expr\n return self.expr_checker.visit_call_expr(e)\n File \"/usr/local/lib/python3.5/site-packages/mypy/checkexpr.py\", line 141, in visit_call_expr\n return self.check_call_expr_with_callee_type(callee_type, e)\n File \"/usr/local/lib/python3.5/site-packages/mypy/checkexpr.py\", line 192, in check_call_expr_with_callee_type\n e.arg_names, callable_node=e.callee)[0]\n File \"/usr/local/lib/python3.5/site-packages/mypy/checkexpr.py\", line 230, in check_call\n callee, context)\n File \"/usr/local/lib/python3.5/site-packages/mypy/checkexpr.py\", line 370, in infer_function_type_arguments_using_context\n args = infer_type_arguments(callable.type_var_ids(), ret_type, erased_ctx)\n File \"/usr/local/lib/python3.5/site-packages/mypy/infer.py\", line 43, in infer_type_arguments\n return solve_constraints(type_var_ids, constraints)\n File \"/usr/local/lib/python3.5/site-packages/mypy/solve.py\", line 50, in solve_constraints\n top = meet_types(top, c.target)\n File \"/usr/local/lib/python3.5/site-packages/mypy/meet.py\", line 22, in meet_types\n return t.accept(TypeMeetVisitor(s))\n File \"/usr/local/lib/python3.5/site-packages/mypy/types.py\", line 763, in accept\n return visitor.visit_partial_type(self)\n File \"/usr/local/lib/python3.5/site-packages/mypy/meet.py\", line 208, in visit_partial_type\n assert False, 'Internal error'\nAssertionError: Internal error\n\n*** INTERNAL ERROR ***\n\ntest.py:5: error: Internal error -- please report a bug at https://github.com/python/mypy/issues\n\nNOTE: you can use \"mypy --pdb ...\" to drop into the debugger when this happens.\n```\n\nWe might want to fix this before releasing.\n\n", "before_files": [{"content": "from typing import List, Dict\n\nimport mypy.subtypes\nfrom mypy.sametypes import is_same_type\nfrom mypy.expandtype import expand_type\nfrom mypy.types import Type, TypeVarId, TypeVarType, CallableType, AnyType\nfrom mypy.messages import MessageBuilder\nfrom mypy.nodes import Context\n\n\ndef apply_generic_arguments(callable: CallableType, types: List[Type],\n msg: MessageBuilder, context: Context) -> CallableType:\n \"\"\"Apply generic type arguments to a callable type.\n\n For example, applying [int] to 'def [T] (T) -> T' results in\n 'def (int) -> int'.\n\n Note that each type can be None; in this case, it will not be applied.\n \"\"\"\n tvars = callable.variables\n assert len(tvars) == len(types)\n # Check that inferred type variable values are compatible with allowed\n # values and bounds. 
Also, promote subtype values to allowed values.\n types = types[:]\n for i, type in enumerate(types):\n values = callable.variables[i].values\n if values and type:\n if isinstance(type, AnyType):\n continue\n if isinstance(type, TypeVarType) and type.values:\n # Allow substituting T1 for T if every allowed value of T1\n # is also a legal value of T.\n if all(any(is_same_type(v, v1) for v in values)\n for v1 in type.values):\n continue\n for value in values:\n if mypy.subtypes.is_subtype(type, value):\n types[i] = value\n break\n else:\n msg.incompatible_typevar_value(callable, i + 1, type, context)\n\n upper_bound = callable.variables[i].upper_bound\n if type and not mypy.subtypes.satisfies_upper_bound(type, upper_bound):\n msg.incompatible_typevar_value(callable, i + 1, type, context)\n\n # Create a map from type variable id to target type.\n id_to_type = {} # type: Dict[TypeVarId, Type]\n for i, tv in enumerate(tvars):\n if types[i]:\n id_to_type[tv.id] = types[i]\n\n # Apply arguments to argument types.\n arg_types = [expand_type(at, id_to_type) for at in callable.arg_types]\n\n # The callable may retain some type vars if only some were applied.\n remaining_tvars = [tv for tv in tvars if tv.id not in id_to_type]\n\n return callable.copy_modified(\n arg_types=arg_types,\n ret_type=expand_type(callable.ret_type, id_to_type),\n variables=remaining_tvars,\n )\n", "path": "mypy/applytype.py"}], "after_files": [{"content": "from typing import List, Dict\n\nimport mypy.subtypes\nfrom mypy.sametypes import is_same_type\nfrom mypy.expandtype import expand_type\nfrom mypy.types import Type, TypeVarId, TypeVarType, CallableType, AnyType, PartialType\nfrom mypy.messages import MessageBuilder\nfrom mypy.nodes import Context\n\n\ndef apply_generic_arguments(callable: CallableType, types: List[Type],\n msg: MessageBuilder, context: Context) -> CallableType:\n \"\"\"Apply generic type arguments to a callable type.\n\n For example, applying [int] to 'def [T] (T) -> T' results in\n 'def (int) -> int'.\n\n Note that each type can be None; in this case, it will not be applied.\n \"\"\"\n tvars = callable.variables\n assert len(tvars) == len(types)\n # Check that inferred type variable values are compatible with allowed\n # values and bounds. 
Also, promote subtype values to allowed values.\n types = types[:]\n for i, type in enumerate(types):\n values = callable.variables[i].values\n if values and type:\n if isinstance(type, AnyType):\n continue\n if isinstance(type, TypeVarType) and type.values:\n # Allow substituting T1 for T if every allowed value of T1\n # is also a legal value of T.\n if all(any(is_same_type(v, v1) for v in values)\n for v1 in type.values):\n continue\n for value in values:\n if isinstance(type, PartialType) or mypy.subtypes.is_subtype(type, value):\n types[i] = value\n break\n else:\n msg.incompatible_typevar_value(callable, i + 1, type, context)\n\n upper_bound = callable.variables[i].upper_bound\n if (type and not isinstance(type, PartialType) and\n not mypy.subtypes.satisfies_upper_bound(type, upper_bound)):\n msg.incompatible_typevar_value(callable, i + 1, type, context)\n\n # Create a map from type variable id to target type.\n id_to_type = {} # type: Dict[TypeVarId, Type]\n for i, tv in enumerate(tvars):\n if types[i]:\n id_to_type[tv.id] = types[i]\n\n # Apply arguments to argument types.\n arg_types = [expand_type(at, id_to_type) for at in callable.arg_types]\n\n # The callable may retain some type vars if only some were applied.\n remaining_tvars = [tv for tv in tvars if tv.id not in id_to_type]\n\n return callable.copy_modified(\n arg_types=arg_types,\n ret_type=expand_type(callable.ret_type, id_to_type),\n variables=remaining_tvars,\n )\n", "path": "mypy/applytype.py"}]}
| 2,368 | 325 |
gh_patches_debug_7782
|
rasdani/github-patches
|
git_diff
|
microsoft__ptvsd-818
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
test_path_names_uppercase_enabled fails
```
2018-09-18T18:50:20.6706273Z ======================================================================
2018-09-18T18:50:20.6706627Z FAIL [0.001s]: test_path_names_uppercase_enabled (tests.ptvsd.test_pathutils.PathUtilTests)
2018-09-18T18:50:20.6706979Z ----------------------------------------------------------------------
2018-09-18T18:50:20.6707253Z Traceback (most recent call last):
2018-09-18T18:50:20.6707620Z File "D:\a\1\s\tests\ptvsd\test_pathutils.py", line 78, in test_path_names_uppercase_enabled
2018-09-18T18:50:20.6708077Z self.assertEqual(result, ACTUAL)
2018-09-18T18:50:20.6708307Z AssertionError: 'D:\\A\\1\\S\\TESTS\\PTVSD\\test_pathutils.py' != 'D:\\a\\1\\s\\tests\\ptvsd\\test_pathutils.py'
2018-09-18T18:50:20.6708746Z - D:\A\1\S\TESTS\PTVSD\test_pathutils.py
2018-09-18T18:50:20.6708945Z + D:\a\1\s\tests\ptvsd\test_pathutils.py
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ptvsd/pathutils.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 from __future__ import print_function, with_statement, absolute_import
6
7 from glob import glob
8 import os.path
9 import platform
10
11
12 MAX_FILES_TO_CACHE = 1000
13
14
15 class PathUnNormcase(object):
16 """Ensures path names of files are returned as they exist on the fs."""
17
18 def __init__(self):
19 self._dict = {}
20 self._enabled = False
21
22 def enable(self):
23 self._enabled = platform.system() == 'Windows'
24
25 def un_normcase(self, file_path):
26 if not self._enabled or len(file_path) == 0:
27 return file_path
28 if file_path in self._dict:
29 return self._dict[file_path]
30 file_path_to_return = self._get_actual_filename(file_path)
31 self.track_file_path_case(file_path_to_return)
32 return file_path_to_return
33
34 def track_file_path_case(self, file_path):
35 if not self._enabled:
36 return
37 if len(self._dict) > MAX_FILES_TO_CACHE:
38 self._dict.clear()
39 self._dict[file_path] = file_path
40
41 def _get_actual_filename(self, name):
42 """
43 Use glob to search for a file by building a regex.
44 Original source from https://stackoverflow.com/a/30374360/4443457
45 (Modified to match file name as well).
46 """
47
48 sep = os.path.sep
49 parts = os.path.normpath(name).split(sep)
50 dirs = parts[0:-1]
51 filename = '{}[{}]'.format(parts[-1][:-1], parts[-1][-1:])
52 path_fragment1 = dirs[0].upper()
53 if dirs[0] == os.path.splitdrive(name)[0]:
54 fragments = [path_fragment1]
55 else:
56 dir_names = os.listdir(os.getcwd())
57 fragments = list(filter(
58 lambda x: x.upper() == path_fragment1, dir_names))
59 fragments += list(d for d in dirs[1:] if d)
60 path = glob(sep.join(fragments))
61 if not path:
62 return name
63 res = glob(sep.join((path[0], filename)))
64 if not res:
65 return name
66 return res[0]
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ptvsd/pathutils.py b/ptvsd/pathutils.py
--- a/ptvsd/pathutils.py
+++ b/ptvsd/pathutils.py
@@ -56,7 +56,8 @@
dir_names = os.listdir(os.getcwd())
fragments = list(filter(
lambda x: x.upper() == path_fragment1, dir_names))
- fragments += list(d for d in dirs[1:] if d)
+ fragments += list('{}[{}]'.format(d[:-1], d[-1])
+ for d in dirs[1:] if d)
path = glob(sep.join(fragments))
if not path:
return name
|
{"golden_diff": "diff --git a/ptvsd/pathutils.py b/ptvsd/pathutils.py\n--- a/ptvsd/pathutils.py\n+++ b/ptvsd/pathutils.py\n@@ -56,7 +56,8 @@\n dir_names = os.listdir(os.getcwd())\n fragments = list(filter(\n lambda x: x.upper() == path_fragment1, dir_names))\n- fragments += list(d for d in dirs[1:] if d)\n+ fragments += list('{}[{}]'.format(d[:-1], d[-1])\n+ for d in dirs[1:] if d)\n path = glob(sep.join(fragments))\n if not path:\n return name\n", "issue": "test_path_names_uppercase_enabled fails\n```\r\n2018-09-18T18:50:20.6706273Z ======================================================================\r\n2018-09-18T18:50:20.6706627Z FAIL [0.001s]: test_path_names_uppercase_enabled (tests.ptvsd.test_pathutils.PathUtilTests)\r\n2018-09-18T18:50:20.6706979Z ----------------------------------------------------------------------\r\n2018-09-18T18:50:20.6707253Z Traceback (most recent call last):\r\n2018-09-18T18:50:20.6707620Z File \"D:\\a\\1\\s\\tests\\ptvsd\\test_pathutils.py\", line 78, in test_path_names_uppercase_enabled\r\n2018-09-18T18:50:20.6708077Z self.assertEqual(result, ACTUAL)\r\n2018-09-18T18:50:20.6708307Z AssertionError: 'D:\\\\A\\\\1\\\\S\\\\TESTS\\\\PTVSD\\\\test_pathutils.py' != 'D:\\\\a\\\\1\\\\s\\\\tests\\\\ptvsd\\\\test_pathutils.py'\r\n2018-09-18T18:50:20.6708746Z - D:\\A\\1\\S\\TESTS\\PTVSD\\test_pathutils.py\r\n2018-09-18T18:50:20.6708945Z + D:\\a\\1\\s\\tests\\ptvsd\\test_pathutils.py\r\n```\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nfrom __future__ import print_function, with_statement, absolute_import\n\nfrom glob import glob\nimport os.path\nimport platform\n\n\nMAX_FILES_TO_CACHE = 1000\n\n\nclass PathUnNormcase(object):\n \"\"\"Ensures path names of files are returned as they exist on the fs.\"\"\"\n\n def __init__(self):\n self._dict = {}\n self._enabled = False\n\n def enable(self):\n self._enabled = platform.system() == 'Windows'\n\n def un_normcase(self, file_path):\n if not self._enabled or len(file_path) == 0:\n return file_path\n if file_path in self._dict:\n return self._dict[file_path]\n file_path_to_return = self._get_actual_filename(file_path)\n self.track_file_path_case(file_path_to_return)\n return file_path_to_return\n\n def track_file_path_case(self, file_path):\n if not self._enabled:\n return\n if len(self._dict) > MAX_FILES_TO_CACHE:\n self._dict.clear()\n self._dict[file_path] = file_path\n\n def _get_actual_filename(self, name):\n \"\"\"\n Use glob to search for a file by building a regex.\n Original source from https://stackoverflow.com/a/30374360/4443457\n (Modified to match file name as well).\n \"\"\"\n\n sep = os.path.sep\n parts = os.path.normpath(name).split(sep)\n dirs = parts[0:-1]\n filename = '{}[{}]'.format(parts[-1][:-1], parts[-1][-1:])\n path_fragment1 = dirs[0].upper()\n if dirs[0] == os.path.splitdrive(name)[0]:\n fragments = [path_fragment1]\n else:\n dir_names = os.listdir(os.getcwd())\n fragments = list(filter(\n lambda x: x.upper() == path_fragment1, dir_names))\n fragments += list(d for d in dirs[1:] if d)\n path = glob(sep.join(fragments))\n if not path:\n return name\n res = glob(sep.join((path[0], filename)))\n if not res:\n return name\n return res[0]\n", "path": "ptvsd/pathutils.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See LICENSE in the project root\n# for license information.\n\nfrom __future__ import print_function, with_statement, absolute_import\n\nfrom glob import glob\nimport os.path\nimport platform\n\n\nMAX_FILES_TO_CACHE = 1000\n\n\nclass PathUnNormcase(object):\n \"\"\"Ensures path names of files are returned as they exist on the fs.\"\"\"\n\n def __init__(self):\n self._dict = {}\n self._enabled = False\n\n def enable(self):\n self._enabled = platform.system() == 'Windows'\n\n def un_normcase(self, file_path):\n if not self._enabled or len(file_path) == 0:\n return file_path\n if file_path in self._dict:\n return self._dict[file_path]\n file_path_to_return = self._get_actual_filename(file_path)\n self.track_file_path_case(file_path_to_return)\n return file_path_to_return\n\n def track_file_path_case(self, file_path):\n if not self._enabled:\n return\n if len(self._dict) > MAX_FILES_TO_CACHE:\n self._dict.clear()\n self._dict[file_path] = file_path\n\n def _get_actual_filename(self, name):\n \"\"\"\n Use glob to search for a file by building a regex.\n Original source from https://stackoverflow.com/a/30374360/4443457\n (Modified to match file name as well).\n \"\"\"\n\n sep = os.path.sep\n parts = os.path.normpath(name).split(sep)\n dirs = parts[0:-1]\n filename = '{}[{}]'.format(parts[-1][:-1], parts[-1][-1:])\n path_fragment1 = dirs[0].upper()\n if dirs[0] == os.path.splitdrive(name)[0]:\n fragments = [path_fragment1]\n else:\n dir_names = os.listdir(os.getcwd())\n fragments = list(filter(\n lambda x: x.upper() == path_fragment1, dir_names))\n fragments += list('{}[{}]'.format(d[:-1], d[-1])\n for d in dirs[1:] if d)\n path = glob(sep.join(fragments))\n if not path:\n return name\n res = glob(sep.join((path[0], filename)))\n if not res:\n return name\n return res[0]\n", "path": "ptvsd/pathutils.py"}]}
| 1,332 | 147 |
gh_patches_debug_8236
|
rasdani/github-patches
|
git_diff
|
scipy__scipy-3708
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Inconsistent exceptions raised by scipy.io.loadmat
Depending on whether loadmat is passed a non-existent filename with or without the ".mat" extension, the exception raised changes from IOError to FileNotFoundError. I believe an easy patch would be to simply change `scipy.io.matlab.mio._open_file` to re-raise rather than raise `IOError(e)`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scipy/io/matlab/mio.py`
Content:
```
1 """
2 Module for reading and writing matlab (TM) .mat files
3 """
4 # Authors: Travis Oliphant, Matthew Brett
5
6 from __future__ import division, print_function, absolute_import
7
8 import numpy as np
9
10 from scipy.lib.six import string_types
11
12 from .miobase import get_matfile_version, docfiller
13 from .mio4 import MatFile4Reader, MatFile4Writer
14 from .mio5 import MatFile5Reader, MatFile5Writer
15
16 __all__ = ['mat_reader_factory', 'loadmat', 'savemat', 'whosmat']
17
18
19 def _open_file(file_like, appendmat):
20 ''' Open `file_like` and return as file-like object '''
21 if isinstance(file_like, string_types):
22 try:
23 return open(file_like, 'rb')
24 except IOError as e:
25 if appendmat and not file_like.endswith('.mat'):
26 file_like += '.mat'
27 return open(file_like, 'rb')
28 raise IOError(e)
29 # not a string - maybe file-like object
30 try:
31 file_like.read(0)
32 except AttributeError:
33 raise IOError('Reader needs file name or open file-like object')
34 return file_like
35
36
37 @docfiller
38 def mat_reader_factory(file_name, appendmat=True, **kwargs):
39 """Create reader for matlab .mat format files
40
41 Parameters
42 ----------
43 %(file_arg)s
44 %(append_arg)s
45 %(load_args)s
46 %(struct_arg)s
47
48 Returns
49 -------
50 matreader : MatFileReader object
51 Initialized instance of MatFileReader class matching the mat file
52 type detected in `filename`.
53 """
54 byte_stream = _open_file(file_name, appendmat)
55 mjv, mnv = get_matfile_version(byte_stream)
56 if mjv == 0:
57 return MatFile4Reader(byte_stream, **kwargs)
58 elif mjv == 1:
59 return MatFile5Reader(byte_stream, **kwargs)
60 elif mjv == 2:
61 raise NotImplementedError('Please use HDF reader for matlab v7.3 files')
62 else:
63 raise TypeError('Did not recognize version %s' % mjv)
64
65
66 @docfiller
67 def loadmat(file_name, mdict=None, appendmat=True, **kwargs):
68 """
69 Load MATLAB file
70
71 Parameters
72 ----------
73 file_name : str
74 Name of the mat file (do not need .mat extension if
75 appendmat==True) Can also pass open file-like object.
76 m_dict : dict, optional
77 Dictionary in which to insert matfile variables.
78 appendmat : bool, optional
79 True to append the .mat extension to the end of the given
80 filename, if not already present.
81 byte_order : str or None, optional
82 None by default, implying byte order guessed from mat
83 file. Otherwise can be one of ('native', '=', 'little', '<',
84 'BIG', '>').
85 mat_dtype : bool, optional
86 If True, return arrays in same dtype as would be loaded into
87 MATLAB (instead of the dtype with which they are saved).
88 squeeze_me : bool, optional
89 Whether to squeeze unit matrix dimensions or not.
90 chars_as_strings : bool, optional
91 Whether to convert char arrays to string arrays.
92 matlab_compatible : bool, optional
93 Returns matrices as would be loaded by MATLAB (implies
94 squeeze_me=False, chars_as_strings=False, mat_dtype=True,
95 struct_as_record=True).
96 struct_as_record : bool, optional
97 Whether to load MATLAB structs as numpy record arrays, or as
98 old-style numpy arrays with dtype=object. Setting this flag to
99 False replicates the behavior of scipy version 0.7.x (returning
100 numpy object arrays). The default setting is True, because it
101 allows easier round-trip load and save of MATLAB files.
102 verify_compressed_data_integrity : bool, optional
103 Whether the length of compressed sequences in the MATLAB file
104 should be checked, to ensure that they are not longer than we expect.
105 It is advisable to enable this (the default) because overlong
106 compressed sequences in MATLAB files generally indicate that the
107 files have experienced some sort of corruption.
108 variable_names : None or sequence
109 If None (the default) - read all variables in file. Otherwise
110 `variable_names` should be a sequence of strings, giving names of the
111 matlab variables to read from the file. The reader will skip any
112 variable with a name not in this sequence, possibly saving some read
113 processing.
114
115 Returns
116 -------
117 mat_dict : dict
118 dictionary with variable names as keys, and loaded matrices as
119 values
120
121 Notes
122 -----
123 v4 (Level 1.0), v6 and v7 to 7.2 matfiles are supported.
124
125 You will need an HDF5 python library to read matlab 7.3 format mat
126 files. Because scipy does not supply one, we do not implement the
127 HDF5 / 7.3 interface here.
128
129 """
130 variable_names = kwargs.pop('variable_names', None)
131 MR = mat_reader_factory(file_name, appendmat, **kwargs)
132 matfile_dict = MR.get_variables(variable_names)
133 if mdict is not None:
134 mdict.update(matfile_dict)
135 else:
136 mdict = matfile_dict
137 if isinstance(file_name, string_types):
138 MR.mat_stream.close()
139 return mdict
140
141
142 @docfiller
143 def savemat(file_name, mdict,
144 appendmat=True,
145 format='5',
146 long_field_names=False,
147 do_compression=False,
148 oned_as='row'):
149 """
150 Save a dictionary of names and arrays into a MATLAB-style .mat file.
151
152 This saves the array objects in the given dictionary to a MATLAB-
153 style .mat file.
154
155 Parameters
156 ----------
157 file_name : str or file-like object
158 Name of the .mat file (.mat extension not needed if ``appendmat ==
159 True``).
160 Can also pass open file_like object.
161 mdict : dict
162 Dictionary from which to save matfile variables.
163 appendmat : bool, optional
164 True (the default) to append the .mat extension to the end of the
165 given filename, if not already present.
166 format : {'5', '4'}, string, optional
167 '5' (the default) for MATLAB 5 and up (to 7.2),
168 '4' for MATLAB 4 .mat files
169 long_field_names : bool, optional
170 False (the default) - maximum field name length in a structure is
171 31 characters which is the documented maximum length.
172 True - maximum field name length in a structure is 63 characters
173 which works for MATLAB 7.6+
174 do_compression : bool, optional
175 Whether or not to compress matrices on write. Default is False.
176 oned_as : {'row', 'column'}, optional
177 If 'column', write 1-D numpy arrays as column vectors.
178 If 'row', write 1-D numpy arrays as row vectors.
179
180 See also
181 --------
182 mio4.MatFile4Writer
183 mio5.MatFile5Writer
184 """
185 file_is_string = isinstance(file_name, string_types)
186 if file_is_string:
187 if appendmat and file_name[-4:] != ".mat":
188 file_name = file_name + ".mat"
189 file_stream = open(file_name, 'wb')
190 else:
191 if not hasattr(file_name, 'write'):
192 raise IOError('Writer needs file name or writeable '
193 'file-like object')
194 file_stream = file_name
195 if format == '4':
196 if long_field_names:
197 raise ValueError("Long field names are not available for version 4 files")
198 MW = MatFile4Writer(file_stream, oned_as)
199 elif format == '5':
200 MW = MatFile5Writer(file_stream,
201 do_compression=do_compression,
202 unicode_strings=True,
203 long_field_names=long_field_names,
204 oned_as=oned_as)
205 else:
206 raise ValueError("Format should be '4' or '5'")
207 MW.put_variables(mdict)
208 if file_is_string:
209 file_stream.close()
210
211
212 @docfiller
213 def whosmat(file_name, appendmat=True, **kwargs):
214 """
215 List variables inside a MATLAB file
216
217 .. versionadded:: 0.12.0
218
219 Parameters
220 ----------
221 %(file_arg)s
222 %(append_arg)s
223 %(load_args)s
224 %(struct_arg)s
225
226 Returns
227 -------
228 variables : list of tuples
229 A list of tuples, where each tuple holds the matrix name (a string),
230 its shape (tuple of ints), and its data class (a string).
231 Possible data classes are: int8, uint8, int16, uint16, int32, uint32,
232 int64, uint64, single, double, cell, struct, object, char, sparse,
233 function, opaque, logical, unknown.
234
235 Notes
236 -----
237 v4 (Level 1.0), v6 and v7 to 7.2 matfiles are supported.
238
239 You will need an HDF5 python library to read matlab 7.3 format mat
240 files. Because scipy does not supply one, we do not implement the
241 HDF5 / 7.3 interface here.
242
243 """
244 ML = mat_reader_factory(file_name, **kwargs)
245 variables = ML.list_variables()
246 if isinstance(file_name, string_types):
247 ML.mat_stream.close()
248 return variables
249
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scipy/io/matlab/mio.py b/scipy/io/matlab/mio.py
--- a/scipy/io/matlab/mio.py
+++ b/scipy/io/matlab/mio.py
@@ -24,8 +24,11 @@
except IOError as e:
if appendmat and not file_like.endswith('.mat'):
file_like += '.mat'
- return open(file_like, 'rb')
- raise IOError(e)
+ try:
+ return open(file_like, 'rb')
+ except IOError:
+ pass # Rethrow the original exception.
+ raise
# not a string - maybe file-like object
try:
file_like.read(0)
|
{"golden_diff": "diff --git a/scipy/io/matlab/mio.py b/scipy/io/matlab/mio.py\n--- a/scipy/io/matlab/mio.py\n+++ b/scipy/io/matlab/mio.py\n@@ -24,8 +24,11 @@\n except IOError as e:\n if appendmat and not file_like.endswith('.mat'):\n file_like += '.mat'\n- return open(file_like, 'rb')\n- raise IOError(e)\n+ try:\n+ return open(file_like, 'rb')\n+ except IOError:\n+ pass # Rethrow the original exception.\n+ raise\n # not a string - maybe file-like object\n try:\n file_like.read(0)\n", "issue": "Inconsistent exceptions raised by scipy.io.loadmat\nDepending on whether loadmat is passed a non-existent filename with or without the \".mat\" extension, the exception raised chages from IOError to FileNotFoundError. I believe an easy patch would be to simply change `scipy.io.matlab.mio._open_file` to re-raise rather than raise `IOError(e)`.\n\n", "before_files": [{"content": "\"\"\"\nModule for reading and writing matlab (TM) .mat files\n\"\"\"\n# Authors: Travis Oliphant, Matthew Brett\n\nfrom __future__ import division, print_function, absolute_import\n\nimport numpy as np\n\nfrom scipy.lib.six import string_types\n\nfrom .miobase import get_matfile_version, docfiller\nfrom .mio4 import MatFile4Reader, MatFile4Writer\nfrom .mio5 import MatFile5Reader, MatFile5Writer\n\n__all__ = ['mat_reader_factory', 'loadmat', 'savemat', 'whosmat']\n\n\ndef _open_file(file_like, appendmat):\n ''' Open `file_like` and return as file-like object '''\n if isinstance(file_like, string_types):\n try:\n return open(file_like, 'rb')\n except IOError as e:\n if appendmat and not file_like.endswith('.mat'):\n file_like += '.mat'\n return open(file_like, 'rb')\n raise IOError(e)\n # not a string - maybe file-like object\n try:\n file_like.read(0)\n except AttributeError:\n raise IOError('Reader needs file name or open file-like object')\n return file_like\n\n\n@docfiller\ndef mat_reader_factory(file_name, appendmat=True, **kwargs):\n \"\"\"Create reader for matlab .mat format files\n\n Parameters\n ----------\n %(file_arg)s\n %(append_arg)s\n %(load_args)s\n %(struct_arg)s\n\n Returns\n -------\n matreader : MatFileReader object\n Initialized instance of MatFileReader class matching the mat file\n type detected in `filename`.\n \"\"\"\n byte_stream = _open_file(file_name, appendmat)\n mjv, mnv = get_matfile_version(byte_stream)\n if mjv == 0:\n return MatFile4Reader(byte_stream, **kwargs)\n elif mjv == 1:\n return MatFile5Reader(byte_stream, **kwargs)\n elif mjv == 2:\n raise NotImplementedError('Please use HDF reader for matlab v7.3 files')\n else:\n raise TypeError('Did not recognize version %s' % mjv)\n\n\n@docfiller\ndef loadmat(file_name, mdict=None, appendmat=True, **kwargs):\n \"\"\"\n Load MATLAB file\n\n Parameters\n ----------\n file_name : str\n Name of the mat file (do not need .mat extension if\n appendmat==True) Can also pass open file-like object.\n m_dict : dict, optional\n Dictionary in which to insert matfile variables.\n appendmat : bool, optional\n True to append the .mat extension to the end of the given\n filename, if not already present.\n byte_order : str or None, optional\n None by default, implying byte order guessed from mat\n file. 
Otherwise can be one of ('native', '=', 'little', '<',\n 'BIG', '>').\n mat_dtype : bool, optional\n If True, return arrays in same dtype as would be loaded into\n MATLAB (instead of the dtype with which they are saved).\n squeeze_me : bool, optional\n Whether to squeeze unit matrix dimensions or not.\n chars_as_strings : bool, optional\n Whether to convert char arrays to string arrays.\n matlab_compatible : bool, optional\n Returns matrices as would be loaded by MATLAB (implies\n squeeze_me=False, chars_as_strings=False, mat_dtype=True,\n struct_as_record=True).\n struct_as_record : bool, optional\n Whether to load MATLAB structs as numpy record arrays, or as\n old-style numpy arrays with dtype=object. Setting this flag to\n False replicates the behavior of scipy version 0.7.x (returning\n numpy object arrays). The default setting is True, because it\n allows easier round-trip load and save of MATLAB files.\n verify_compressed_data_integrity : bool, optional\n Whether the length of compressed sequences in the MATLAB file\n should be checked, to ensure that they are not longer than we expect.\n It is advisable to enable this (the default) because overlong\n compressed sequences in MATLAB files generally indicate that the\n files have experienced some sort of corruption.\n variable_names : None or sequence\n If None (the default) - read all variables in file. Otherwise\n `variable_names` should be a sequence of strings, giving names of the\n matlab variables to read from the file. The reader will skip any\n variable with a name not in this sequence, possibly saving some read\n processing.\n\n Returns\n -------\n mat_dict : dict\n dictionary with variable names as keys, and loaded matrices as\n values\n\n Notes\n -----\n v4 (Level 1.0), v6 and v7 to 7.2 matfiles are supported.\n\n You will need an HDF5 python library to read matlab 7.3 format mat\n files. Because scipy does not supply one, we do not implement the\n HDF5 / 7.3 interface here.\n\n \"\"\"\n variable_names = kwargs.pop('variable_names', None)\n MR = mat_reader_factory(file_name, appendmat, **kwargs)\n matfile_dict = MR.get_variables(variable_names)\n if mdict is not None:\n mdict.update(matfile_dict)\n else:\n mdict = matfile_dict\n if isinstance(file_name, string_types):\n MR.mat_stream.close()\n return mdict\n\n\n@docfiller\ndef savemat(file_name, mdict,\n appendmat=True,\n format='5',\n long_field_names=False,\n do_compression=False,\n oned_as='row'):\n \"\"\"\n Save a dictionary of names and arrays into a MATLAB-style .mat file.\n\n This saves the array objects in the given dictionary to a MATLAB-\n style .mat file.\n\n Parameters\n ----------\n file_name : str or file-like object\n Name of the .mat file (.mat extension not needed if ``appendmat ==\n True``).\n Can also pass open file_like object.\n mdict : dict\n Dictionary from which to save matfile variables.\n appendmat : bool, optional\n True (the default) to append the .mat extension to the end of the\n given filename, if not already present.\n format : {'5', '4'}, string, optional\n '5' (the default) for MATLAB 5 and up (to 7.2),\n '4' for MATLAB 4 .mat files\n long_field_names : bool, optional\n False (the default) - maximum field name length in a structure is\n 31 characters which is the documented maximum length.\n True - maximum field name length in a structure is 63 characters\n which works for MATLAB 7.6+\n do_compression : bool, optional\n Whether or not to compress matrices on write. 
Default is False.\n oned_as : {'row', 'column'}, optional\n If 'column', write 1-D numpy arrays as column vectors.\n If 'row', write 1-D numpy arrays as row vectors.\n\n See also\n --------\n mio4.MatFile4Writer\n mio5.MatFile5Writer\n \"\"\"\n file_is_string = isinstance(file_name, string_types)\n if file_is_string:\n if appendmat and file_name[-4:] != \".mat\":\n file_name = file_name + \".mat\"\n file_stream = open(file_name, 'wb')\n else:\n if not hasattr(file_name, 'write'):\n raise IOError('Writer needs file name or writeable '\n 'file-like object')\n file_stream = file_name\n if format == '4':\n if long_field_names:\n raise ValueError(\"Long field names are not available for version 4 files\")\n MW = MatFile4Writer(file_stream, oned_as)\n elif format == '5':\n MW = MatFile5Writer(file_stream,\n do_compression=do_compression,\n unicode_strings=True,\n long_field_names=long_field_names,\n oned_as=oned_as)\n else:\n raise ValueError(\"Format should be '4' or '5'\")\n MW.put_variables(mdict)\n if file_is_string:\n file_stream.close()\n\n\n@docfiller\ndef whosmat(file_name, appendmat=True, **kwargs):\n \"\"\"\n List variables inside a MATLAB file\n\n .. versionadded:: 0.12.0\n\n Parameters\n ----------\n %(file_arg)s\n %(append_arg)s\n %(load_args)s\n %(struct_arg)s\n\n Returns\n -------\n variables : list of tuples\n A list of tuples, where each tuple holds the matrix name (a string),\n its shape (tuple of ints), and its data class (a string).\n Possible data classes are: int8, uint8, int16, uint16, int32, uint32,\n int64, uint64, single, double, cell, struct, object, char, sparse,\n function, opaque, logical, unknown.\n\n Notes\n -----\n v4 (Level 1.0), v6 and v7 to 7.2 matfiles are supported.\n\n You will need an HDF5 python library to read matlab 7.3 format mat\n files. 
Because scipy does not supply one, we do not implement the\n HDF5 / 7.3 interface here.\n\n \"\"\"\n ML = mat_reader_factory(file_name, **kwargs)\n variables = ML.list_variables()\n if isinstance(file_name, string_types):\n ML.mat_stream.close()\n return variables\n", "path": "scipy/io/matlab/mio.py"}], "after_files": [{"content": "\"\"\"\nModule for reading and writing matlab (TM) .mat files\n\"\"\"\n# Authors: Travis Oliphant, Matthew Brett\n\nfrom __future__ import division, print_function, absolute_import\n\nimport numpy as np\n\nfrom scipy.lib.six import string_types\n\nfrom .miobase import get_matfile_version, docfiller\nfrom .mio4 import MatFile4Reader, MatFile4Writer\nfrom .mio5 import MatFile5Reader, MatFile5Writer\n\n__all__ = ['mat_reader_factory', 'loadmat', 'savemat', 'whosmat']\n\n\ndef _open_file(file_like, appendmat):\n ''' Open `file_like` and return as file-like object '''\n if isinstance(file_like, string_types):\n try:\n return open(file_like, 'rb')\n except IOError as e:\n if appendmat and not file_like.endswith('.mat'):\n file_like += '.mat'\n try:\n return open(file_like, 'rb')\n except IOError:\n pass # Rethrow the original exception.\n raise\n # not a string - maybe file-like object\n try:\n file_like.read(0)\n except AttributeError:\n raise IOError('Reader needs file name or open file-like object')\n return file_like\n\n\n@docfiller\ndef mat_reader_factory(file_name, appendmat=True, **kwargs):\n \"\"\"Create reader for matlab .mat format files\n\n Parameters\n ----------\n %(file_arg)s\n %(append_arg)s\n %(load_args)s\n %(struct_arg)s\n\n Returns\n -------\n matreader : MatFileReader object\n Initialized instance of MatFileReader class matching the mat file\n type detected in `filename`.\n \"\"\"\n byte_stream = _open_file(file_name, appendmat)\n mjv, mnv = get_matfile_version(byte_stream)\n if mjv == 0:\n return MatFile4Reader(byte_stream, **kwargs)\n elif mjv == 1:\n return MatFile5Reader(byte_stream, **kwargs)\n elif mjv == 2:\n raise NotImplementedError('Please use HDF reader for matlab v7.3 files')\n else:\n raise TypeError('Did not recognize version %s' % mjv)\n\n\n@docfiller\ndef loadmat(file_name, mdict=None, appendmat=True, **kwargs):\n \"\"\"\n Load MATLAB file\n\n Parameters\n ----------\n file_name : str\n Name of the mat file (do not need .mat extension if\n appendmat==True) Can also pass open file-like object.\n m_dict : dict, optional\n Dictionary in which to insert matfile variables.\n appendmat : bool, optional\n True to append the .mat extension to the end of the given\n filename, if not already present.\n byte_order : str or None, optional\n None by default, implying byte order guessed from mat\n file. Otherwise can be one of ('native', '=', 'little', '<',\n 'BIG', '>').\n mat_dtype : bool, optional\n If True, return arrays in same dtype as would be loaded into\n MATLAB (instead of the dtype with which they are saved).\n squeeze_me : bool, optional\n Whether to squeeze unit matrix dimensions or not.\n chars_as_strings : bool, optional\n Whether to convert char arrays to string arrays.\n matlab_compatible : bool, optional\n Returns matrices as would be loaded by MATLAB (implies\n squeeze_me=False, chars_as_strings=False, mat_dtype=True,\n struct_as_record=True).\n struct_as_record : bool, optional\n Whether to load MATLAB structs as numpy record arrays, or as\n old-style numpy arrays with dtype=object. Setting this flag to\n False replicates the behavior of scipy version 0.7.x (returning\n numpy object arrays). 
The default setting is True, because it\n allows easier round-trip load and save of MATLAB files.\n verify_compressed_data_integrity : bool, optional\n Whether the length of compressed sequences in the MATLAB file\n should be checked, to ensure that they are not longer than we expect.\n It is advisable to enable this (the default) because overlong\n compressed sequences in MATLAB files generally indicate that the\n files have experienced some sort of corruption.\n variable_names : None or sequence\n If None (the default) - read all variables in file. Otherwise\n `variable_names` should be a sequence of strings, giving names of the\n matlab variables to read from the file. The reader will skip any\n variable with a name not in this sequence, possibly saving some read\n processing.\n\n Returns\n -------\n mat_dict : dict\n dictionary with variable names as keys, and loaded matrices as\n values\n\n Notes\n -----\n v4 (Level 1.0), v6 and v7 to 7.2 matfiles are supported.\n\n You will need an HDF5 python library to read matlab 7.3 format mat\n files. Because scipy does not supply one, we do not implement the\n HDF5 / 7.3 interface here.\n\n \"\"\"\n variable_names = kwargs.pop('variable_names', None)\n MR = mat_reader_factory(file_name, appendmat, **kwargs)\n matfile_dict = MR.get_variables(variable_names)\n if mdict is not None:\n mdict.update(matfile_dict)\n else:\n mdict = matfile_dict\n if isinstance(file_name, string_types):\n MR.mat_stream.close()\n return mdict\n\n\n@docfiller\ndef savemat(file_name, mdict,\n appendmat=True,\n format='5',\n long_field_names=False,\n do_compression=False,\n oned_as='row'):\n \"\"\"\n Save a dictionary of names and arrays into a MATLAB-style .mat file.\n\n This saves the array objects in the given dictionary to a MATLAB-\n style .mat file.\n\n Parameters\n ----------\n file_name : str or file-like object\n Name of the .mat file (.mat extension not needed if ``appendmat ==\n True``).\n Can also pass open file_like object.\n mdict : dict\n Dictionary from which to save matfile variables.\n appendmat : bool, optional\n True (the default) to append the .mat extension to the end of the\n given filename, if not already present.\n format : {'5', '4'}, string, optional\n '5' (the default) for MATLAB 5 and up (to 7.2),\n '4' for MATLAB 4 .mat files\n long_field_names : bool, optional\n False (the default) - maximum field name length in a structure is\n 31 characters which is the documented maximum length.\n True - maximum field name length in a structure is 63 characters\n which works for MATLAB 7.6+\n do_compression : bool, optional\n Whether or not to compress matrices on write. 
Default is False.\n oned_as : {'row', 'column'}, optional\n If 'column', write 1-D numpy arrays as column vectors.\n If 'row', write 1-D numpy arrays as row vectors.\n\n See also\n --------\n mio4.MatFile4Writer\n mio5.MatFile5Writer\n \"\"\"\n file_is_string = isinstance(file_name, string_types)\n if file_is_string:\n if appendmat and file_name[-4:] != \".mat\":\n file_name = file_name + \".mat\"\n file_stream = open(file_name, 'wb')\n else:\n if not hasattr(file_name, 'write'):\n raise IOError('Writer needs file name or writeable '\n 'file-like object')\n file_stream = file_name\n if format == '4':\n if long_field_names:\n raise ValueError(\"Long field names are not available for version 4 files\")\n MW = MatFile4Writer(file_stream, oned_as)\n elif format == '5':\n MW = MatFile5Writer(file_stream,\n do_compression=do_compression,\n unicode_strings=True,\n long_field_names=long_field_names,\n oned_as=oned_as)\n else:\n raise ValueError(\"Format should be '4' or '5'\")\n MW.put_variables(mdict)\n if file_is_string:\n file_stream.close()\n\n\n@docfiller\ndef whosmat(file_name, appendmat=True, **kwargs):\n \"\"\"\n List variables inside a MATLAB file\n\n .. versionadded:: 0.12.0\n\n Parameters\n ----------\n %(file_arg)s\n %(append_arg)s\n %(load_args)s\n %(struct_arg)s\n\n Returns\n -------\n variables : list of tuples\n A list of tuples, where each tuple holds the matrix name (a string),\n its shape (tuple of ints), and its data class (a string).\n Possible data classes are: int8, uint8, int16, uint16, int32, uint32,\n int64, uint64, single, double, cell, struct, object, char, sparse,\n function, opaque, logical, unknown.\n\n Notes\n -----\n v4 (Level 1.0), v6 and v7 to 7.2 matfiles are supported.\n\n You will need an HDF5 python library to read matlab 7.3 format mat\n files. Because scipy does not supply one, we do not implement the\n HDF5 / 7.3 interface here.\n\n \"\"\"\n ML = mat_reader_factory(file_name, **kwargs)\n variables = ML.list_variables()\n if isinstance(file_name, string_types):\n ML.mat_stream.close()\n return variables\n", "path": "scipy/io/matlab/mio.py"}]}
| 3,055 | 154 |
gh_patches_debug_8485
|
rasdani/github-patches
|
git_diff
|
SigmaHQ__sigma-1026
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error while pushing sigma to misp using sigma2misp script
(venv) hydra@Kaushals-MacBook-Air session2/sigma (master %) » tools/sigma2misp @misp.conf --insecure --same-event --info "Test Event" -r rules/windows/sysmon 2 ↵
Traceback (most recent call last):
File "tools/sigma2misp", line 57, in <module>
eventid = create_new_event()
File "tools/sigma2misp", line 14, in create_new_event
event = misp.MISPEvent()
AttributeError: 'PyMISP' object has no attribute 'MISPEvent'
--- END ISSUE ---
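For context, a hedged sketch of the pattern the traceback points at: `MISPEvent` is a class exported by the `pymisp` package, not an attribute of the `PyMISP` client object (the event-info string is a placeholder; the returned dict shape follows the script's own usage):

```python
from pymisp import PyMISP, MISPEvent

def create_event_sketch(misp: PyMISP, info: str):
    """Sketch: build the event object directly from the pymisp package."""
    event = MISPEvent()          # not misp.MISPEvent() - the class is module-level
    event.info = info
    return misp.add_event(event)["Event"]["id"]  # dict shape assumed per the script
```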
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/sigma/sigma2misp.py`
Content:
```
1 #!/usr/bin/env python3
2 # Import given Sigma rules to MISP
3
4 import argparse
5 import pathlib
6 import urllib3
7 urllib3.disable_warnings()
8 from pymisp import PyMISP
9
10 def create_new_event(args, misp):
11 if hasattr(misp, "new_event"):
12 return misp.new_event(info=args.info)["Event"]["id"]
13
14 event = misp.MISPEvent()
15 event.info = args.info
16 return misp.add_event(event)["Event"]["id"]
17
18
19 class MISPImportArgumentParser(argparse.ArgumentParser):
20 def __init__(self, *args, **kwargs):
21 super().__init__(
22 description="Import Sigma rules into MISP events",
23 epilog="Parameters can be read from a file by a @filename parameter. The file should contain one parameter per line. Dashes may be omitted.",
24 fromfile_prefix_chars="@",
25 )
26
27 def convert_arg_line_to_args(self, line : str):
28 return ("--" + line.lstrip("--")).split()
29
30 def main():
31 argparser = MISPImportArgumentParser()
32 argparser.add_argument("--url", "-u", default="https://localhost", help="URL of MISP instance")
33 argparser.add_argument("--key", "-k", required=True, help="API key")
34 argparser.add_argument("--insecure", "-I", action="store_false", help="Disable TLS certifcate validation.")
35 argparser.add_argument("--event", "-e", type=int, help="Add Sigma rule to event with this ID. If not set, create new event.")
36 argparser.add_argument("--same-event", "-s", action="store_true", help="Import all Sigma rules to the same event, if no event is set.")
37 argparser.add_argument("--info", "-i", default="Sigma import", help="Event Information field for newly created MISP event.")
38 argparser.add_argument("--recursive", "-r", action="store_true", help="Recursive traversal of directory")
39 argparser.add_argument("sigma", nargs="+", help="Sigma rule file that should be imported")
40 args = argparser.parse_args()
41
42 if args.recursive:
43 paths = [ p for pathname in args.sigma for p in pathlib.Path(pathname).glob("**/*") if p.is_file() ]
44 else:
45 paths = [ pathlib.Path(sigma) for sigma in args.sigma ]
46
47 misp = PyMISP(args.url, args.key, args.insecure)
48 if args.event:
49 if hasattr(misp, "get"):
50 eventid = misp.get(args.event)["Event"]["id"]
51 else:
52 eventid = misp.get_event(args.event)["Event"]["id"]
53
54 first = True
55
56 for sigma in paths:
57 if not args.event and (first or not args.same_event):
58 eventid = create_new_event(args, misp)
59 print("Importing Sigma rule {} into MISP event {}...".format(sigma, eventid, end=""))
60 f = sigma.open("rt")
61
62 if hasattr(misp, "add_named_attribute"):
63 misp.add_named_attribute(eventid, "sigma", f.read())
64 else:
65 event = misp.get_event(eventid, pythonify=True)
66 event.add_attribute("sigma", f.read())
67 misp.update_event(event)
68
69 f.close()
70 first = False
71
72 if __name__ == "__main__":
73 main()
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tools/sigma/sigma2misp.py b/tools/sigma/sigma2misp.py
--- a/tools/sigma/sigma2misp.py
+++ b/tools/sigma/sigma2misp.py
@@ -5,13 +5,13 @@
import pathlib
import urllib3
urllib3.disable_warnings()
-from pymisp import PyMISP
+from pymisp import PyMISP, MISPEvent
def create_new_event(args, misp):
if hasattr(misp, "new_event"):
return misp.new_event(info=args.info)["Event"]["id"]
- event = misp.MISPEvent()
+ event = MISPEvent()
event.info = args.info
return misp.add_event(event)["Event"]["id"]
|
{"golden_diff": "diff --git a/tools/sigma/sigma2misp.py b/tools/sigma/sigma2misp.py\n--- a/tools/sigma/sigma2misp.py\n+++ b/tools/sigma/sigma2misp.py\n@@ -5,13 +5,13 @@\n import pathlib\n import urllib3\n urllib3.disable_warnings()\n-from pymisp import PyMISP\n+from pymisp import PyMISP, MISPEvent\n \n def create_new_event(args, misp):\n if hasattr(misp, \"new_event\"):\n return misp.new_event(info=args.info)[\"Event\"][\"id\"]\n \n- event = misp.MISPEvent()\n+ event = MISPEvent()\n event.info = args.info\n return misp.add_event(event)[\"Event\"][\"id\"]\n", "issue": "Error while pushing sigma to misp using sigma2misp script\n(venv) hydra@Kaushals-MacBook-Air session2/sigma (master %) \u00bb tools/sigma2misp @misp.conf --insecure --same-event --info \"Test Event\" -r rules/windows/sysmon 2 \u21b5\r\nTraceback (most recent call last):\r\n File \"tools/sigma2misp\", line 57, in <module>\r\n eventid = create_new_event()\r\n File \"tools/sigma2misp\", line 14, in create_new_event\r\n event = misp.MISPEvent()\r\nAttributeError: 'PyMISP' object has no attribute 'MISPEvent'\n", "before_files": [{"content": "#!/usr/bin/env python3\n# Import given Sigma rules to MISP\n\nimport argparse\nimport pathlib\nimport urllib3\nurllib3.disable_warnings()\nfrom pymisp import PyMISP\n\ndef create_new_event(args, misp):\n if hasattr(misp, \"new_event\"):\n return misp.new_event(info=args.info)[\"Event\"][\"id\"]\n \n event = misp.MISPEvent()\n event.info = args.info\n return misp.add_event(event)[\"Event\"][\"id\"]\n\n\nclass MISPImportArgumentParser(argparse.ArgumentParser):\n def __init__(self, *args, **kwargs):\n super().__init__(\n description=\"Import Sigma rules into MISP events\",\n epilog=\"Parameters can be read from a file by a @filename parameter. The file should contain one parameter per line. Dashes may be omitted.\",\n fromfile_prefix_chars=\"@\",\n )\n\n def convert_arg_line_to_args(self, line : str):\n return (\"--\" + line.lstrip(\"--\")).split()\n\ndef main():\n argparser = MISPImportArgumentParser()\n argparser.add_argument(\"--url\", \"-u\", default=\"https://localhost\", help=\"URL of MISP instance\")\n argparser.add_argument(\"--key\", \"-k\", required=True, help=\"API key\")\n argparser.add_argument(\"--insecure\", \"-I\", action=\"store_false\", help=\"Disable TLS certifcate validation.\")\n argparser.add_argument(\"--event\", \"-e\", type=int, help=\"Add Sigma rule to event with this ID. 
If not set, create new event.\")\n argparser.add_argument(\"--same-event\", \"-s\", action=\"store_true\", help=\"Import all Sigma rules to the same event, if no event is set.\")\n argparser.add_argument(\"--info\", \"-i\", default=\"Sigma import\", help=\"Event Information field for newly created MISP event.\")\n argparser.add_argument(\"--recursive\", \"-r\", action=\"store_true\", help=\"Recursive traversal of directory\")\n argparser.add_argument(\"sigma\", nargs=\"+\", help=\"Sigma rule file that should be imported\")\n args = argparser.parse_args()\n\n if args.recursive:\n paths = [ p for pathname in args.sigma for p in pathlib.Path(pathname).glob(\"**/*\") if p.is_file() ]\n else:\n paths = [ pathlib.Path(sigma) for sigma in args.sigma ]\n\n misp = PyMISP(args.url, args.key, args.insecure)\n if args.event:\n if hasattr(misp, \"get\"):\n eventid = misp.get(args.event)[\"Event\"][\"id\"]\n else:\n eventid = misp.get_event(args.event)[\"Event\"][\"id\"]\n\n first = True\n\n for sigma in paths:\n if not args.event and (first or not args.same_event):\n eventid = create_new_event(args, misp)\n print(\"Importing Sigma rule {} into MISP event {}...\".format(sigma, eventid, end=\"\"))\n f = sigma.open(\"rt\")\n\n if hasattr(misp, \"add_named_attribute\"):\n misp.add_named_attribute(eventid, \"sigma\", f.read())\n else:\n event = misp.get_event(eventid, pythonify=True)\n event.add_attribute(\"sigma\", f.read())\n misp.update_event(event)\n\n f.close()\n first = False\n\nif __name__ == \"__main__\":\n main()\n", "path": "tools/sigma/sigma2misp.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# Import given Sigma rules to MISP\n\nimport argparse\nimport pathlib\nimport urllib3\nurllib3.disable_warnings()\nfrom pymisp import PyMISP, MISPEvent\n\ndef create_new_event(args, misp):\n if hasattr(misp, \"new_event\"):\n return misp.new_event(info=args.info)[\"Event\"][\"id\"]\n \n event = MISPEvent()\n event.info = args.info\n return misp.add_event(event)[\"Event\"][\"id\"]\n\n\nclass MISPImportArgumentParser(argparse.ArgumentParser):\n def __init__(self, *args, **kwargs):\n super().__init__(\n description=\"Import Sigma rules into MISP events\",\n epilog=\"Parameters can be read from a file by a @filename parameter. The file should contain one parameter per line. Dashes may be omitted.\",\n fromfile_prefix_chars=\"@\",\n )\n\n def convert_arg_line_to_args(self, line : str):\n return (\"--\" + line.lstrip(\"--\")).split()\n\ndef main():\n argparser = MISPImportArgumentParser()\n argparser.add_argument(\"--url\", \"-u\", default=\"https://localhost\", help=\"URL of MISP instance\")\n argparser.add_argument(\"--key\", \"-k\", required=True, help=\"API key\")\n argparser.add_argument(\"--insecure\", \"-I\", action=\"store_false\", help=\"Disable TLS certifcate validation.\")\n argparser.add_argument(\"--event\", \"-e\", type=int, help=\"Add Sigma rule to event with this ID. 
If not set, create new event.\")\n argparser.add_argument(\"--same-event\", \"-s\", action=\"store_true\", help=\"Import all Sigma rules to the same event, if no event is set.\")\n argparser.add_argument(\"--info\", \"-i\", default=\"Sigma import\", help=\"Event Information field for newly created MISP event.\")\n argparser.add_argument(\"--recursive\", \"-r\", action=\"store_true\", help=\"Recursive traversal of directory\")\n argparser.add_argument(\"sigma\", nargs=\"+\", help=\"Sigma rule file that should be imported\")\n args = argparser.parse_args()\n\n if args.recursive:\n paths = [ p for pathname in args.sigma for p in pathlib.Path(pathname).glob(\"**/*\") if p.is_file() ]\n else:\n paths = [ pathlib.Path(sigma) for sigma in args.sigma ]\n\n misp = PyMISP(args.url, args.key, args.insecure)\n if args.event:\n if hasattr(misp, \"get\"):\n eventid = misp.get(args.event)[\"Event\"][\"id\"]\n else:\n eventid = misp.get_event(args.event)[\"Event\"][\"id\"]\n\n first = True\n\n for sigma in paths:\n if not args.event and (first or not args.same_event):\n eventid = create_new_event(args, misp)\n print(\"Importing Sigma rule {} into MISP event {}...\".format(sigma, eventid, end=\"\"))\n f = sigma.open(\"rt\")\n\n if hasattr(misp, \"add_named_attribute\"):\n misp.add_named_attribute(eventid, \"sigma\", f.read())\n else:\n event = misp.get_event(eventid, pythonify=True)\n event.add_attribute(\"sigma\", f.read())\n misp.update_event(event)\n\n f.close()\n first = False\n\nif __name__ == \"__main__\":\n main()\n", "path": "tools/sigma/sigma2misp.py"}]}
| 1,269 | 164 |
gh_patches_debug_31119
|
rasdani/github-patches
|
git_diff
|
kornia__kornia-305
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] crop_and_resize doesn't work when it changes image aspect ratio
Here is a failing test-case, adapted from [this test](https://github.com/arraiyopensource/kornia/blob/a48164b1274827773f3da36dcbe2879c69f4471c/test/geometry/transform/test_crop.py#L14) but with some aspect-ratio distortion:
```python3
>>> inp = torch.tensor([[
[1., 2., 3., 4.],
[5., 6., 7., 8.],
[9., 10., 11., 12.],
[13., 14., 15., 16.],
]])
>>> height, width = 2, 3
>>> boxes = torch.tensor([[
[1., 1.],
[1., 2.],
[2., 1.],
[2., 2.],
]])
>>> kornia.crop_and_resize(inp, boxes, (height, width))
tensor([[[ 6.0000, 7.0000, 8.0000],
[ 8.0000, 9.0000, 10.0000]]])
```
But the expected result, as per [documentation](https://torchgeometry.readthedocs.io/en/latest/geometry.transform.html#kornia.geometry.transform.crop_and_resize), is:
```python
torch.tensor([[[6., 6.5, 7.],
[10., 10.5, 11.]]])
```
The issue seems to be caused by a mis-use of the `get_perspective_transform` function by `crop_and_resize`. The arguments to [`get_perspective_transform`](https://torchgeometry.readthedocs.io/en/latest/geometry.transform.html#kornia.geometry.transform.get_perspective_transform) expect points to be written in [x,y] format (not documented), but `crop_and_resize` passes them in [y,x]. The [x,y] ordering in `get_perspective_transform` is [used in this example](https://kornia.readthedocs.io/en/latest/tutorials/warp_perspective.html).
The existing tests for `crop_and_resize` are written in such an unfortunate way that they work both ways.
--- END ISSUE ---
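To make the expected corner ordering concrete, a hedged sketch of calling `get_perspective_transform` with points in [x, y] order, listed clockwise (the import path follows the linked documentation; the box values mirror the failing case above and are purely illustrative):

```python
import torch
from kornia.geometry.transform import get_perspective_transform

height, width = 2, 3

# Source box corners in [x, y] order, clockwise:
# top-left, top-right, bottom-right, bottom-left.
points_src = torch.tensor([[[1., 1.], [2., 1.], [2., 2.], [1., 2.]]])

# Destination corners of the (height x width) output, same ordering.
points_dst = torch.tensor([[[0., 0.],
                            [width - 1., 0.],
                            [width - 1., height - 1.],
                            [0., height - 1.]]])

M = get_perspective_transform(points_src, points_dst)  # 1x3x3 homography
```

Flipping each pair to [y, x], as the current `crop_and_resize` does, yields a different homography whenever height != width, which is why the square test cases pass while the 2x3 case above fails.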
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kornia/geometry/transform/crop.py`
Content:
```
1 from typing import Tuple
2
3 import torch
4
5 from kornia.geometry.transform.imgwarp import (
6 warp_perspective, get_perspective_transform
7 )
8
9 __all__ = [
10 "crop_and_resize",
11 "center_crop",
12 ]
13
14
15 def crop_and_resize(tensor: torch.Tensor, boxes: torch.Tensor,
16 size: Tuple[int, int]) -> torch.Tensor:
17 r"""Extracts crops from the input tensor and resizes them.
18
19 Args:
20 tensor (torch.Tensor): the reference tensor of shape BxCxHxW.
21 boxes (torch.Tensor): a tensor containing the coordinates of the
22 bounding boxes to be extracted. The tensor must have the shape
23 of Bx4x2, where each box is defined in the following order: top-left,
24 top-right, bottom-left and bottom-right. The coordinates order must
25 be in y, x respectively.
26 size (Tuple[int, int]): a tuple with the height and width that will be
27 used to resize the extracted patches.
28
29 Returns:
30 torch.Tensor: tensor containing the patches with shape BxN1xN2
31
32 Example:
33 >>> input = torch.tensor([[
34 [1., 2., 3., 4.],
35 [5., 6., 7., 8.],
36 [9., 10., 11., 12.],
37 [13., 14., 15., 16.],
38 ]])
39 >>> boxes = torch.tensor([[
40 [1., 1.],
41 [1., 2.],
42 [2., 1.],
43 [2., 2.],
44 ]]) # 1x4x2
45 >>> kornia.crop_and_resize(input, boxes, (2, 2))
46 tensor([[[ 6.0000, 7.0000],
47 [ 10.0000, 11.0000]]])
48 """
49 if not torch.is_tensor(tensor):
50 raise TypeError("Input tensor type is not a torch.Tensor. Got {}"
51 .format(type(tensor)))
52 if not torch.is_tensor(boxes):
53 raise TypeError("Input boxes type is not a torch.Tensor. Got {}"
54 .format(type(boxes)))
55 if not len(tensor.shape) in (3, 4,):
56 raise ValueError("Input tensor must be in the shape of CxHxW or "
57 "BxCxHxW. Got {}".format(tensor.shape))
58 if not isinstance(size, (tuple, list,)) and len(size) == 2:
59 raise ValueError("Input size must be a tuple/list of length 2. Got {}"
60 .format(size))
61 # unpack input data
62 dst_h: torch.Tensor = torch.tensor(size[0])
63 dst_w: torch.Tensor = torch.tensor(size[1])
64
65 # [y, x] origin
66 # top-left, top-right, bottom-left, bottom-right
67 points_src: torch.Tensor = boxes.to(
68 tensor.device).to(tensor.dtype)
69
70 # [y, x] destination
71 # top-left, top-right, bottom-left, bottom-right
72 points_dst: torch.Tensor = torch.tensor([[
73 [0, 0],
74 [0, dst_w - 1],
75 [dst_h - 1, 0],
76 [dst_h - 1, dst_w - 1],
77 ]]).repeat(points_src.shape[0], 1, 1).to(
78 tensor.device).to(tensor.dtype)
79
80 # warping needs data in the shape of BCHW
81 is_unbatched: bool = tensor.ndimension() == 3
82 if is_unbatched:
83 tensor = torch.unsqueeze(tensor, dim=0)
84
85 # compute transformation between points and warp
86 dst_trans_src: torch.Tensor = get_perspective_transform(
87 points_src, points_dst)
88
89 # simulate broadcasting
90 dst_trans_src = dst_trans_src.expand(tensor.shape[0], -1, -1)
91
92 patches: torch.Tensor = warp_perspective(
93 tensor, dst_trans_src, (dst_h, dst_w))
94
95 # return in the original shape
96 if is_unbatched:
97 patches = torch.squeeze(patches, dim=0)
98
99 return patches
100
101
102 def center_crop(tensor: torch.Tensor, size: Tuple[int, int]) -> torch.Tensor:
103 r"""Crops the given tensor at the center.
104
105 Args:
106 tensor (torch.Tensor): the input tensor with shape (C, H, W) or
107 (B, C, H, W).
108 size (Tuple[int, int]): a tuple with the expected height and width
109 of the output patch.
110
111 Returns:
112 torch.Tensor: the output tensor with patches.
113
114 Examples:
115 >>> input = torch.tensor([[
116 [1., 2., 3., 4.],
117 [5., 6., 7., 8.],
118 [9., 10., 11., 12.],
119 [13., 14., 15., 16.],
120 ]])
121 >>> kornia.center_crop(input, (2, 4))
122 tensor([[[ 5.0000, 6.0000, 7.0000, 8.0000],
123 [ 9.0000, 10.0000, 11.0000, 12.0000]]])
124 """
125 if not torch.is_tensor(tensor):
126 raise TypeError("Input tensor type is not a torch.Tensor. Got {}"
127 .format(type(tensor)))
128 if not len(tensor.shape) in (3, 4,):
129 raise ValueError("Input tensor must be in the shape of CxHxW or "
130 "BxCxHxW. Got {}".format(tensor.shape))
131 if not isinstance(size, (tuple, list,)) and len(size) == 2:
132 raise ValueError("Input size must be a tuple/list of length 2. Got {}"
133 .format(size))
134 # unpack input sizes
135 dst_h: torch.Tensor = torch.tensor(size[0])
136 dst_w: torch.Tensor = torch.tensor(size[1])
137 src_h: torch.Tensor = torch.tensor(tensor.shape[-2])
138 src_w: torch.Tensor = torch.tensor(tensor.shape[-1])
139
140 # compute start/end offsets
141 dst_h_half: torch.Tensor = dst_h / 2
142 dst_w_half: torch.Tensor = dst_w / 2
143 src_h_half: torch.Tensor = src_h / 2
144 src_w_half: torch.Tensor = src_w / 2
145
146 start_x: torch.Tensor = src_h_half - dst_h_half
147 start_y: torch.Tensor = src_w_half - dst_w_half
148
149 end_x: torch.Tensor = start_x + dst_w - 1
150 end_y: torch.Tensor = start_y + dst_h - 1
151
152 # [y, x] origin
153 # top-left, top-right, bottom-left, bottom-right
154 points_src: torch.Tensor = torch.tensor([[
155 [start_y, start_x],
156 [start_y, end_x],
157 [end_y, start_x],
158 [end_y, end_x],
159 ]]).to(tensor.device).to(tensor.dtype)
160
161 # [y, x] destination
162 # top-left, top-right, bottom-left, bottom-right
163 points_dst: torch.Tensor = torch.tensor([[
164 [0, 0],
165 [0, dst_w - 1],
166 [dst_h - 1, 0],
167 [dst_h - 1, dst_w - 1],
168 ]]).to(tensor.device).to(tensor.dtype)
169
170 # warping needs data in the shape of BCHW
171 is_unbatched: bool = tensor.ndimension() == 3
172 if is_unbatched:
173 tensor = torch.unsqueeze(tensor, dim=0)
174
175 # compute transformation between points and warp
176 dst_trans_src: torch.Tensor = get_perspective_transform(
177 points_src, points_dst)
178 dst_trans_src = dst_trans_src.repeat(tensor.shape[0], 1, 1)
179
180 patches: torch.Tensor = warp_perspective(
181 tensor, dst_trans_src, (dst_h, dst_w))
182
183 # return in the original shape
184 if is_unbatched:
185 patches = torch.squeeze(patches, dim=0)
186
187 return patches
188
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kornia/geometry/transform/crop.py b/kornia/geometry/transform/crop.py
--- a/kornia/geometry/transform/crop.py
+++ b/kornia/geometry/transform/crop.py
@@ -20,9 +20,9 @@
tensor (torch.Tensor): the reference tensor of shape BxCxHxW.
boxes (torch.Tensor): a tensor containing the coordinates of the
bounding boxes to be extracted. The tensor must have the shape
- of Bx4x2, where each box is defined in the following order: top-left,
- top-right, bottom-left and bottom-right. The coordinates order must
- be in y, x respectively.
+ of Bx4x2, where each box is defined in the following (clockwise)
+ order: top-left, top-right, bottom-right and bottom-left. The
+ coordinates must be in the x, y order.
size (Tuple[int, int]): a tuple with the height and width that will be
used to resize the extracted patches.
@@ -62,18 +62,18 @@
dst_h: torch.Tensor = torch.tensor(size[0])
dst_w: torch.Tensor = torch.tensor(size[1])
- # [y, x] origin
- # top-left, top-right, bottom-left, bottom-right
+ # [x, y] origin
+ # top-left, top-right, bottom-right, bottom-left
points_src: torch.Tensor = boxes.to(
tensor.device).to(tensor.dtype)
- # [y, x] destination
- # top-left, top-right, bottom-left, bottom-right
+ # [x, y] destination
+ # top-left, top-right, bottom-right, bottom-left
points_dst: torch.Tensor = torch.tensor([[
[0, 0],
- [0, dst_w - 1],
- [dst_h - 1, 0],
- [dst_h - 1, dst_w - 1],
+ [dst_w - 1, 0],
+ [dst_w - 1, dst_h - 1],
+ [0, dst_h - 1],
]]).repeat(points_src.shape[0], 1, 1).to(
tensor.device).to(tensor.dtype)
|
{"golden_diff": "diff --git a/kornia/geometry/transform/crop.py b/kornia/geometry/transform/crop.py\n--- a/kornia/geometry/transform/crop.py\n+++ b/kornia/geometry/transform/crop.py\n@@ -20,9 +20,9 @@\n tensor (torch.Tensor): the reference tensor of shape BxCxHxW.\n boxes (torch.Tensor): a tensor containing the coordinates of the\n bounding boxes to be extracted. The tensor must have the shape\n- of Bx4x2, where each box is defined in the following order: top-left,\n- top-right, bottom-left and bottom-right. The coordinates order must\n- be in y, x respectively.\n+ of Bx4x2, where each box is defined in the following (clockwise)\n+ order: top-left, top-right, bottom-right and bottom-left. The\n+ coordinates must be in the x, y order.\n size (Tuple[int, int]): a tuple with the height and width that will be\n used to resize the extracted patches.\n \n@@ -62,18 +62,18 @@\n dst_h: torch.Tensor = torch.tensor(size[0])\n dst_w: torch.Tensor = torch.tensor(size[1])\n \n- # [y, x] origin\n- # top-left, top-right, bottom-left, bottom-right\n+ # [x, y] origin\n+ # top-left, top-right, bottom-right, bottom-left\n points_src: torch.Tensor = boxes.to(\n tensor.device).to(tensor.dtype)\n \n- # [y, x] destination\n- # top-left, top-right, bottom-left, bottom-right\n+ # [x, y] destination\n+ # top-left, top-right, bottom-right, bottom-left\n points_dst: torch.Tensor = torch.tensor([[\n [0, 0],\n- [0, dst_w - 1],\n- [dst_h - 1, 0],\n- [dst_h - 1, dst_w - 1],\n+ [dst_w - 1, 0],\n+ [dst_w - 1, dst_h - 1],\n+ [0, dst_h - 1],\n ]]).repeat(points_src.shape[0], 1, 1).to(\n tensor.device).to(tensor.dtype)\n", "issue": "[BUG] crop_and_resize doesn't work when it changes image aspect ratio\nHere is a failing test-case, adapted from [this test](https://github.com/arraiyopensource/kornia/blob/a48164b1274827773f3da36dcbe2879c69f4471c/test/geometry/transform/test_crop.py#L14) but with some aspect-ratio distortion:\r\n```python3\r\n>>> inp = torch.tensor([[\r\n [1., 2., 3., 4.],\r\n [5., 6., 7., 8.],\r\n [9., 10., 11., 12.],\r\n [13., 14., 15., 16.],\r\n ]])\r\n>>> height, width = 2, 3\r\n>>> boxes = torch.tensor([[\r\n [1., 1.],\r\n [1., 2.],\r\n [2., 1.],\r\n [2., 2.],\r\n ]])\r\n>>> kornia.crop_and_resize(inp, boxes, (height, width))\r\ntensor([[[ 6.0000, 7.0000, 8.0000],\r\n [ 8.0000, 9.0000, 10.0000]]])\r\n```\r\nBut the expected result, as per [documentation](https://torchgeometry.readthedocs.io/en/latest/geometry.transform.html#kornia.geometry.transform.crop_and_resize), is:\r\n```python\r\ntorch.tensor([[[6., 6.5, 7.],\r\n [10., 10.5, 11.]]])\r\n```\r\n\r\nThe issue seems to be caused by a mis-use of the `get_perspective_transform` function by `crop_and_resize`. The arguments to [`get_perspective_transform`](https://torchgeometry.readthedocs.io/en/latest/geometry.transform.html#kornia.geometry.transform.get_perspective_transform) expect points to be written in [x,y] format (not documented), but `crop_and_resize` passes them in [y,x]. 
The [x,y] ordering in `get_perspective_transform` is [used in this example](https://kornia.readthedocs.io/en/latest/tutorials/warp_perspective.html).\r\n\r\nThe existing tests for `crop_and_resize` are written in such an unfortunate way that they work both ways.\n", "before_files": [{"content": "from typing import Tuple\n\nimport torch\n\nfrom kornia.geometry.transform.imgwarp import (\n warp_perspective, get_perspective_transform\n)\n\n__all__ = [\n \"crop_and_resize\",\n \"center_crop\",\n]\n\n\ndef crop_and_resize(tensor: torch.Tensor, boxes: torch.Tensor,\n size: Tuple[int, int]) -> torch.Tensor:\n r\"\"\"Extracts crops from the input tensor and resizes them.\n\n Args:\n tensor (torch.Tensor): the reference tensor of shape BxCxHxW.\n boxes (torch.Tensor): a tensor containing the coordinates of the\n bounding boxes to be extracted. The tensor must have the shape\n of Bx4x2, where each box is defined in the following order: top-left,\n top-right, bottom-left and bottom-right. The coordinates order must\n be in y, x respectively.\n size (Tuple[int, int]): a tuple with the height and width that will be\n used to resize the extracted patches.\n\n Returns:\n torch.Tensor: tensor containing the patches with shape BxN1xN2\n\n Example:\n >>> input = torch.tensor([[\n [1., 2., 3., 4.],\n [5., 6., 7., 8.],\n [9., 10., 11., 12.],\n [13., 14., 15., 16.],\n ]])\n >>> boxes = torch.tensor([[\n [1., 1.],\n [1., 2.],\n [2., 1.],\n [2., 2.],\n ]]) # 1x4x2\n >>> kornia.crop_and_resize(input, boxes, (2, 2))\n tensor([[[ 6.0000, 7.0000],\n [ 10.0000, 11.0000]]])\n \"\"\"\n if not torch.is_tensor(tensor):\n raise TypeError(\"Input tensor type is not a torch.Tensor. Got {}\"\n .format(type(tensor)))\n if not torch.is_tensor(boxes):\n raise TypeError(\"Input boxes type is not a torch.Tensor. Got {}\"\n .format(type(boxes)))\n if not len(tensor.shape) in (3, 4,):\n raise ValueError(\"Input tensor must be in the shape of CxHxW or \"\n \"BxCxHxW. Got {}\".format(tensor.shape))\n if not isinstance(size, (tuple, list,)) and len(size) == 2:\n raise ValueError(\"Input size must be a tuple/list of length 2. 
Got {}\"\n .format(size))\n # unpack input data\n dst_h: torch.Tensor = torch.tensor(size[0])\n dst_w: torch.Tensor = torch.tensor(size[1])\n\n # [y, x] origin\n # top-left, top-right, bottom-left, bottom-right\n points_src: torch.Tensor = boxes.to(\n tensor.device).to(tensor.dtype)\n\n # [y, x] destination\n # top-left, top-right, bottom-left, bottom-right\n points_dst: torch.Tensor = torch.tensor([[\n [0, 0],\n [0, dst_w - 1],\n [dst_h - 1, 0],\n [dst_h - 1, dst_w - 1],\n ]]).repeat(points_src.shape[0], 1, 1).to(\n tensor.device).to(tensor.dtype)\n\n # warping needs data in the shape of BCHW\n is_unbatched: bool = tensor.ndimension() == 3\n if is_unbatched:\n tensor = torch.unsqueeze(tensor, dim=0)\n\n # compute transformation between points and warp\n dst_trans_src: torch.Tensor = get_perspective_transform(\n points_src, points_dst)\n\n # simulate broadcasting\n dst_trans_src = dst_trans_src.expand(tensor.shape[0], -1, -1)\n\n patches: torch.Tensor = warp_perspective(\n tensor, dst_trans_src, (dst_h, dst_w))\n\n # return in the original shape\n if is_unbatched:\n patches = torch.squeeze(patches, dim=0)\n\n return patches\n\n\ndef center_crop(tensor: torch.Tensor, size: Tuple[int, int]) -> torch.Tensor:\n r\"\"\"Crops the given tensor at the center.\n\n Args:\n tensor (torch.Tensor): the input tensor with shape (C, H, W) or\n (B, C, H, W).\n size (Tuple[int, int]): a tuple with the expected height and width\n of the output patch.\n\n Returns:\n torch.Tensor: the output tensor with patches.\n\n Examples:\n >>> input = torch.tensor([[\n [1., 2., 3., 4.],\n [5., 6., 7., 8.],\n [9., 10., 11., 12.],\n [13., 14., 15., 16.],\n ]])\n >>> kornia.center_crop(input, (2, 4))\n tensor([[[ 5.0000, 6.0000, 7.0000, 8.0000],\n [ 9.0000, 10.0000, 11.0000, 12.0000]]])\n \"\"\"\n if not torch.is_tensor(tensor):\n raise TypeError(\"Input tensor type is not a torch.Tensor. Got {}\"\n .format(type(tensor)))\n if not len(tensor.shape) in (3, 4,):\n raise ValueError(\"Input tensor must be in the shape of CxHxW or \"\n \"BxCxHxW. Got {}\".format(tensor.shape))\n if not isinstance(size, (tuple, list,)) and len(size) == 2:\n raise ValueError(\"Input size must be a tuple/list of length 2. 
Got {}\"\n .format(size))\n # unpack input sizes\n dst_h: torch.Tensor = torch.tensor(size[0])\n dst_w: torch.Tensor = torch.tensor(size[1])\n src_h: torch.Tensor = torch.tensor(tensor.shape[-2])\n src_w: torch.Tensor = torch.tensor(tensor.shape[-1])\n\n # compute start/end offsets\n dst_h_half: torch.Tensor = dst_h / 2\n dst_w_half: torch.Tensor = dst_w / 2\n src_h_half: torch.Tensor = src_h / 2\n src_w_half: torch.Tensor = src_w / 2\n\n start_x: torch.Tensor = src_h_half - dst_h_half\n start_y: torch.Tensor = src_w_half - dst_w_half\n\n end_x: torch.Tensor = start_x + dst_w - 1\n end_y: torch.Tensor = start_y + dst_h - 1\n\n # [y, x] origin\n # top-left, top-right, bottom-left, bottom-right\n points_src: torch.Tensor = torch.tensor([[\n [start_y, start_x],\n [start_y, end_x],\n [end_y, start_x],\n [end_y, end_x],\n ]]).to(tensor.device).to(tensor.dtype)\n\n # [y, x] destination\n # top-left, top-right, bottom-left, bottom-right\n points_dst: torch.Tensor = torch.tensor([[\n [0, 0],\n [0, dst_w - 1],\n [dst_h - 1, 0],\n [dst_h - 1, dst_w - 1],\n ]]).to(tensor.device).to(tensor.dtype)\n\n # warping needs data in the shape of BCHW\n is_unbatched: bool = tensor.ndimension() == 3\n if is_unbatched:\n tensor = torch.unsqueeze(tensor, dim=0)\n\n # compute transformation between points and warp\n dst_trans_src: torch.Tensor = get_perspective_transform(\n points_src, points_dst)\n dst_trans_src = dst_trans_src.repeat(tensor.shape[0], 1, 1)\n\n patches: torch.Tensor = warp_perspective(\n tensor, dst_trans_src, (dst_h, dst_w))\n\n # return in the original shape\n if is_unbatched:\n patches = torch.squeeze(patches, dim=0)\n\n return patches\n", "path": "kornia/geometry/transform/crop.py"}], "after_files": [{"content": "from typing import Tuple\n\nimport torch\n\nfrom kornia.geometry.transform.imgwarp import (\n warp_perspective, get_perspective_transform\n)\n\n__all__ = [\n \"crop_and_resize\",\n \"center_crop\",\n]\n\n\ndef crop_and_resize(tensor: torch.Tensor, boxes: torch.Tensor,\n size: Tuple[int, int]) -> torch.Tensor:\n r\"\"\"Extracts crops from the input tensor and resizes them.\n\n Args:\n tensor (torch.Tensor): the reference tensor of shape BxCxHxW.\n boxes (torch.Tensor): a tensor containing the coordinates of the\n bounding boxes to be extracted. The tensor must have the shape\n of Bx4x2, where each box is defined in the following (clockwise)\n order: top-left, top-right, bottom-right and bottom-left. The\n coordinates must be in the x, y order.\n size (Tuple[int, int]): a tuple with the height and width that will be\n used to resize the extracted patches.\n\n Returns:\n torch.Tensor: tensor containing the patches with shape BxN1xN2\n\n Example:\n >>> input = torch.tensor([[\n [1., 2., 3., 4.],\n [5., 6., 7., 8.],\n [9., 10., 11., 12.],\n [13., 14., 15., 16.],\n ]])\n >>> boxes = torch.tensor([[\n [1., 1.],\n [1., 2.],\n [2., 1.],\n [2., 2.],\n ]]) # 1x4x2\n >>> kornia.crop_and_resize(input, boxes, (2, 2))\n tensor([[[ 6.0000, 7.0000],\n [ 10.0000, 11.0000]]])\n \"\"\"\n if not torch.is_tensor(tensor):\n raise TypeError(\"Input tensor type is not a torch.Tensor. Got {}\"\n .format(type(tensor)))\n if not torch.is_tensor(boxes):\n raise TypeError(\"Input boxes type is not a torch.Tensor. Got {}\"\n .format(type(boxes)))\n if not len(tensor.shape) in (3, 4,):\n raise ValueError(\"Input tensor must be in the shape of CxHxW or \"\n \"BxCxHxW. 
Got {}\".format(tensor.shape))\n if not isinstance(size, (tuple, list,)) and len(size) == 2:\n raise ValueError(\"Input size must be a tuple/list of length 2. Got {}\"\n .format(size))\n # unpack input data\n dst_h: torch.Tensor = torch.tensor(size[0])\n dst_w: torch.Tensor = torch.tensor(size[1])\n\n # [x, y] origin\n # top-left, top-right, bottom-right, bottom-left\n points_src: torch.Tensor = boxes.to(\n tensor.device).to(tensor.dtype)\n\n # [x, y] destination\n # top-left, top-right, bottom-right, bottom-left\n points_dst: torch.Tensor = torch.tensor([[\n [0, 0],\n [dst_w - 1, 0],\n [dst_w - 1, dst_h - 1],\n [0, dst_h - 1],\n ]]).repeat(points_src.shape[0], 1, 1).to(\n tensor.device).to(tensor.dtype)\n\n # warping needs data in the shape of BCHW\n is_unbatched: bool = tensor.ndimension() == 3\n if is_unbatched:\n tensor = torch.unsqueeze(tensor, dim=0)\n\n # compute transformation between points and warp\n dst_trans_src: torch.Tensor = get_perspective_transform(\n points_src, points_dst)\n\n # simulate broadcasting\n dst_trans_src = dst_trans_src.expand(tensor.shape[0], -1, -1)\n\n patches: torch.Tensor = warp_perspective(\n tensor, dst_trans_src, (dst_h, dst_w))\n\n # return in the original shape\n if is_unbatched:\n patches = torch.squeeze(patches, dim=0)\n\n return patches\n\n\ndef center_crop(tensor: torch.Tensor, size: Tuple[int, int]) -> torch.Tensor:\n r\"\"\"Crops the given tensor at the center.\n\n Args:\n tensor (torch.Tensor): the input tensor with shape (C, H, W) or\n (B, C, H, W).\n size (Tuple[int, int]): a tuple with the expected height and width\n of the output patch.\n\n Returns:\n torch.Tensor: the output tensor with patches.\n\n Examples:\n >>> input = torch.tensor([[\n [1., 2., 3., 4.],\n [5., 6., 7., 8.],\n [9., 10., 11., 12.],\n [13., 14., 15., 16.],\n ]])\n >>> kornia.center_crop(input, (2, 4))\n tensor([[[ 5.0000, 6.0000, 7.0000, 8.0000],\n [ 9.0000, 10.0000, 11.0000, 12.0000]]])\n \"\"\"\n if not torch.is_tensor(tensor):\n raise TypeError(\"Input tensor type is not a torch.Tensor. Got {}\"\n .format(type(tensor)))\n if not len(tensor.shape) in (3, 4,):\n raise ValueError(\"Input tensor must be in the shape of CxHxW or \"\n \"BxCxHxW. Got {}\".format(tensor.shape))\n if not isinstance(size, (tuple, list,)) and len(size) == 2:\n raise ValueError(\"Input size must be a tuple/list of length 2. 
Got {}\"\n .format(size))\n # unpack input sizes\n dst_h: torch.Tensor = torch.tensor(size[0])\n dst_w: torch.Tensor = torch.tensor(size[1])\n src_h: torch.Tensor = torch.tensor(tensor.shape[-2])\n src_w: torch.Tensor = torch.tensor(tensor.shape[-1])\n\n # compute start/end offsets\n dst_h_half: torch.Tensor = dst_h / 2\n dst_w_half: torch.Tensor = dst_w / 2\n src_h_half: torch.Tensor = src_h / 2\n src_w_half: torch.Tensor = src_w / 2\n\n start_x: torch.Tensor = src_h_half - dst_h_half\n start_y: torch.Tensor = src_w_half - dst_w_half\n\n end_x: torch.Tensor = start_x + dst_w - 1\n end_y: torch.Tensor = start_y + dst_h - 1\n\n # [y, x] origin\n # top-left, top-right, bottom-left, bottom-right\n points_src: torch.Tensor = torch.tensor([[\n [start_y, start_x],\n [start_y, end_x],\n [end_y, start_x],\n [end_y, end_x],\n ]]).to(tensor.device).to(tensor.dtype)\n\n # [y, x] destination\n # top-left, top-right, bottom-left, bottom-right\n points_dst: torch.Tensor = torch.tensor([[\n [0, 0],\n [0, dst_w - 1],\n [dst_h - 1, 0],\n [dst_h - 1, dst_w - 1],\n ]]).to(tensor.device).to(tensor.dtype)\n\n # warping needs data in the shape of BCHW\n is_unbatched: bool = tensor.ndimension() == 3\n if is_unbatched:\n tensor = torch.unsqueeze(tensor, dim=0)\n\n # compute transformation between points and warp\n dst_trans_src: torch.Tensor = get_perspective_transform(\n points_src, points_dst)\n dst_trans_src = dst_trans_src.repeat(tensor.shape[0], 1, 1)\n\n patches: torch.Tensor = warp_perspective(\n tensor, dst_trans_src, (dst_h, dst_w))\n\n # return in the original shape\n if is_unbatched:\n patches = torch.squeeze(patches, dim=0)\n\n return patches\n", "path": "kornia/geometry/transform/crop.py"}]}
| 3,078 | 516 |
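The record above closes with the before/after contents of kornia's `crop.py`. A hedged usage sketch, lifted from the `center_crop` docstring visible in that payload (the top-level `kornia.center_crop` alias is assumed to match that kornia version; nothing here was executed):

```python
# Usage sketch based on the center_crop docstring in the record above.
# Assumes a kornia version where kornia.center_crop is exposed at the top level.
import torch
import kornia

inp = torch.tensor([[
    [1., 2., 3., 4.],
    [5., 6., 7., 8.],
    [9., 10., 11., 12.],
    [13., 14., 15., 16.],
]])  # (C, H, W) = (1, 4, 4)

out = kornia.center_crop(inp, (2, 4))
# Expected per the docstring: [[[5., 6., 7., 8.], [9., 10., 11., 12.]]]
print(out)
```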
gh_patches_debug_3928
|
rasdani/github-patches
|
git_diff
|
streamlit__streamlit-4244
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
streamlit `--log-level` appears to have no effect on the resultant logging level
### Summary
The CLI argument setting `--log-level` appears to have no effect: `log.debug` output is never rendered, and changing the logging level does not suppress log lines that should be filtered out.
### Steps to reproduce
Code snippet:
```
import logging
import streamlit as st
import pandas as pd

log = logging.getLogger(__name__)

log.info("Test LOG INFO")
log.debug("Test LOG DEBUG")
log.warning("Test LOG WARNING")
log.error("Test LOG ERROR")
st.write("Here's our first attempt at using data to create a table:")
st.write(pd.DataFrame({
'first column': [1, 2, 3, 4],
'second column': [10, 20, 30, 40]
}))
```
**Actual behavior:**
1. For debug level, `streamlit --log_level=debug run logging_test_app.py` produces:

(screenshot of terminal output omitted)

`log.debug("Test LOG DEBUG")` is not rendered.
2. For error level only, `streamlit --log_level=error run logging_test_app.py` produces the same output:

(screenshot of terminal output omitted)

Only `log.error("Test LOG ERROR")` should be present.
**Expected behavior:**
for case 1, the `log.debug(..)` line should be rendered along with the others in the terminal
for case 2, `log.error(..)` should be the only log line rendered in the terminal
### Is this a regression?
no
### Debug info
- Streamlit version: 1.3.1
- Python version: 3.9.2
- Using: Poetry
- OS version: Windows 10
- Browser version: N/A
### Additional information
To rule out environmental or instance issues, I tested a very basic Python script with the various logging levels described above. In all cases the correct log lines are rendered in the terminal.


```python
import logging
logging.basicConfig(level=logging.ERROR)
# logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)
print("-"*80)
print(f"Logging Level: {logging.getLevelName(log.getEffectiveLevel())}")
log.info("Test LOG INFO")
log.debug("Test LOG DEBUG")
log.warning("Test LOG WARNING")
log.error("Test LOG ERROR")
print("-"*80)
```
--- END ISSUE ---
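A plausible reading of the behavior above (an assumption, not confirmed by the Streamlit sources quoted below) is that `streamlit.logger.set_log_level` only adjusts loggers in Streamlit's own namespace, so application loggers keep the root default of WARNING no matter what the CLI flag is set to:

```python
# Minimal sketch of the assumed root cause: changing the "streamlit" logger
# namespace does not touch unrelated application loggers, which still fall
# back to the root logger's default WARNING level.
import logging

logging.getLogger("streamlit").setLevel(logging.DEBUG)

app_log = logging.getLogger("logging_test_app")
print(logging.getLevelName(app_log.getEffectiveLevel()))  # -> WARNING
```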
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/streamlit/cli.py`
Content:
```
1 # Copyright 2018-2021 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """This is a script which is run when the Streamlit package is executed."""
16
17 from streamlit import config as _config
18
19 import os
20 from typing import Optional
21
22 import click
23
24 import streamlit
25 from streamlit.credentials import Credentials, check_credentials
26 import streamlit.bootstrap as bootstrap
27 from streamlit.case_converters import to_snake_case
28
29 ACCEPTED_FILE_EXTENSIONS = ("py", "py3")
30
31 LOG_LEVELS = ("error", "warning", "info", "debug")
32
33
34 def _convert_config_option_to_click_option(config_option):
35 """Composes given config option options as options for click lib."""
36 option = "--{}".format(config_option.key)
37 param = config_option.key.replace(".", "_")
38 description = config_option.description
39 if config_option.deprecated:
40 description += "\n {} - {}".format(
41 config_option.deprecation_text, config_option.expiration_date
42 )
43 envvar = "STREAMLIT_{}".format(to_snake_case(param).upper())
44
45 return {
46 "param": param,
47 "description": description,
48 "type": config_option.type,
49 "option": option,
50 "envvar": envvar,
51 }
52
53
54 def configurator_options(func):
55 """Decorator that adds config param keys to click dynamically."""
56 for _, value in reversed(_config._config_options_template.items()):
57 parsed_parameter = _convert_config_option_to_click_option(value)
58 config_option = click.option(
59 parsed_parameter["option"],
60 parsed_parameter["param"],
61 help=parsed_parameter["description"],
62 type=parsed_parameter["type"],
63 show_envvar=True,
64 envvar=parsed_parameter["envvar"],
65 )
66 func = config_option(func)
67 return func
68
69
70 # Fetch remote file at url_path to script_path
71 def _download_remote(script_path, url_path):
72 import requests
73
74 with open(script_path, "wb") as fp:
75 try:
76 resp = requests.get(url_path)
77 resp.raise_for_status()
78 fp.write(resp.content)
79 except requests.exceptions.RequestException as e:
80 raise click.BadParameter(("Unable to fetch {}.\n{}".format(url_path, e)))
81
82
83 @click.group(context_settings={"auto_envvar_prefix": "STREAMLIT"})
84 @click.option("--log_level", show_default=True, type=click.Choice(LOG_LEVELS))
85 @click.version_option(prog_name="Streamlit")
86 @click.pass_context
87 def main(ctx, log_level="info"):
88 """Try out a demo with:
89
90 $ streamlit hello
91
92 Or use the line below to run your own script:
93
94 $ streamlit run your_script.py
95 """
96
97 if log_level:
98 import streamlit.logger
99
100 streamlit.logger.set_log_level(log_level.upper())
101
102
103 @main.command("help")
104 @click.pass_context
105 def help(ctx):
106 """Print this help message."""
107 # Pretend user typed 'streamlit --help' instead of 'streamlit help'.
108 import sys
109
110 assert len(sys.argv) == 2 # This is always true, but let's assert anyway.
111 sys.argv[1] = "--help"
112 main(prog_name="streamlit")
113
114
115 @main.command("version")
116 @click.pass_context
117 def main_version(ctx):
118 """Print Streamlit's version number."""
119 # Pretend user typed 'streamlit --version' instead of 'streamlit version'
120 import sys
121
122 assert len(sys.argv) == 2 # This is always true, but let's assert anyway.
123 sys.argv[1] = "--version"
124 main()
125
126
127 @main.command("docs")
128 def main_docs():
129 """Show help in browser."""
130 print("Showing help page in browser...")
131 from streamlit import util
132
133 util.open_browser("https://docs.streamlit.io")
134
135
136 @main.command("hello")
137 @configurator_options
138 def main_hello(**kwargs):
139 """Runs the Hello World script."""
140 from streamlit.hello import hello
141
142 bootstrap.load_config_options(flag_options=kwargs)
143 filename = hello.__file__
144 _main_run(filename, flag_options=kwargs)
145
146
147 @main.command("run")
148 @configurator_options
149 @click.argument("target", required=True, envvar="STREAMLIT_RUN_TARGET")
150 @click.argument("args", nargs=-1)
151 def main_run(target, args=None, **kwargs):
152 """Run a Python script, piping stderr to Streamlit.
153
154 The script can be local or it can be an url. In the latter case, Streamlit
155 will download the script to a temporary file and runs this file.
156
157 """
158 from validators import url
159
160 bootstrap.load_config_options(flag_options=kwargs)
161
162 _, extension = os.path.splitext(target)
163 if extension[1:] not in ACCEPTED_FILE_EXTENSIONS:
164 if extension[1:] == "":
165 raise click.BadArgumentUsage(
166 "Streamlit requires raw Python (.py) files, but the provided file has no extension.\nFor more information, please see https://docs.streamlit.io"
167 )
168 else:
169 raise click.BadArgumentUsage(
170 "Streamlit requires raw Python (.py) files, not %s.\nFor more information, please see https://docs.streamlit.io"
171 % extension
172 )
173
174 if url(target):
175 from streamlit.temporary_directory import TemporaryDirectory
176
177 with TemporaryDirectory() as temp_dir:
178 from urllib.parse import urlparse
179 from streamlit import url_util
180
181 path = urlparse(target).path
182 script_path = os.path.join(temp_dir, path.strip("/").rsplit("/", 1)[-1])
183 # if this is a GitHub/Gist blob url, convert to a raw URL first.
184 target = url_util.process_gitblob_url(target)
185 _download_remote(script_path, target)
186 _main_run(script_path, args, flag_options=kwargs)
187 else:
188 if not os.path.exists(target):
189 raise click.BadParameter("File does not exist: {}".format(target))
190 _main_run(target, args, flag_options=kwargs)
191
192
193 def _get_command_line_as_string() -> Optional[str]:
194 import subprocess
195
196 parent = click.get_current_context().parent
197 if parent is None:
198 return None
199 cmd_line_as_list = [parent.command_path]
200 cmd_line_as_list.extend(click.get_os_args())
201 return subprocess.list2cmdline(cmd_line_as_list)
202
203
204 def _main_run(file, args=None, flag_options=None):
205 if args is None:
206 args = []
207
208 if flag_options is None:
209 flag_options = {}
210
211 command_line = _get_command_line_as_string()
212
213 # Set a global flag indicating that we're "within" streamlit.
214 streamlit._is_running_with_streamlit = True
215
216 check_credentials()
217
218 bootstrap.run(file, command_line, args, flag_options)
219
220
221 # SUBCOMMAND: cache
222
223
224 @main.group("cache")
225 def cache():
226 """Manage the Streamlit cache."""
227 pass
228
229
230 @cache.command("clear")
231 def cache_clear():
232 """Clear st.cache, st.memo, and st.singleton caches."""
233 import streamlit.legacy_caching
234 import streamlit.caching
235
236 result = streamlit.legacy_caching.clear_cache()
237 cache_path = streamlit.legacy_caching.get_cache_path()
238 if result:
239 print("Cleared directory %s." % cache_path)
240 else:
241 print("Nothing to clear at %s." % cache_path)
242
243 streamlit.caching.clear_memo_cache()
244 streamlit.caching.clear_singleton_cache()
245
246
247 # SUBCOMMAND: config
248
249
250 @main.group("config")
251 def config():
252 """Manage Streamlit's config settings."""
253 pass
254
255
256 @config.command("show")
257 @configurator_options
258 def config_show(**kwargs):
259 """Show all of Streamlit's config settings."""
260
261 bootstrap.load_config_options(flag_options=kwargs)
262
263 _config.show_config()
264
265
266 # SUBCOMMAND: activate
267
268
269 @main.group("activate", invoke_without_command=True)
270 @click.pass_context
271 def activate(ctx):
272 """Activate Streamlit by entering your email."""
273 if not ctx.invoked_subcommand:
274 Credentials.get_current().activate()
275
276
277 @activate.command("reset")
278 def activate_reset():
279 """Reset Activation Credentials."""
280 Credentials.get_current().reset()
281
282
283 if __name__ == "__main__":
284 main()
285
```
--- END FILES ---
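One detail worth noting from the listing above: `configurator_options` builds click options dynamically from Streamlit's config template, which is why dotted flags such as `--logger.level` exist at all. A minimal, self-contained sketch of that pattern (the template contents here are illustrative stand-ins, not Streamlit's real config):

```python
# Data-driven click options, mirroring configurator_options above.
# OPTIONS_TEMPLATE is an illustrative stand-in for streamlit's config template.
import click

OPTIONS_TEMPLATE = {
    "logger.level": "Level of logging: 'error', 'warning', 'info', or 'debug'.",
}


def template_options(func):
    for key, description in OPTIONS_TEMPLATE.items():
        func = click.option(
            "--{}".format(key),     # e.g. --logger.level
            key.replace(".", "_"),  # parameter name, e.g. logger_level
            help=description,
        )(func)
    return func


@click.command()
@template_options
def show(**kwargs):
    click.echo(kwargs)


if __name__ == "__main__":
    show()
```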
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lib/streamlit/cli.py b/lib/streamlit/cli.py
--- a/lib/streamlit/cli.py
+++ b/lib/streamlit/cli.py
@@ -95,9 +95,13 @@
"""
if log_level:
- import streamlit.logger
+ from streamlit.logger import get_logger
- streamlit.logger.set_log_level(log_level.upper())
+ LOGGER = get_logger(__name__)
+ LOGGER.warning(
+ "Setting the log level using the --log_level flag is unsupported."
+ "\nUse the --logger.level flag (after your streamlit command) instead."
+ )
@main.command("help")
|
{"golden_diff": "diff --git a/lib/streamlit/cli.py b/lib/streamlit/cli.py\n--- a/lib/streamlit/cli.py\n+++ b/lib/streamlit/cli.py\n@@ -95,9 +95,13 @@\n \"\"\"\n \n if log_level:\n- import streamlit.logger\n+ from streamlit.logger import get_logger\n \n- streamlit.logger.set_log_level(log_level.upper())\n+ LOGGER = get_logger(__name__)\n+ LOGGER.warning(\n+ \"Setting the log level using the --log_level flag is unsupported.\"\n+ \"\\nUse the --logger.level flag (after your streamlit command) instead.\"\n+ )\n \n \n @main.command(\"help\")\n", "issue": "streamlit `--log-level` appears to have no affect to resultant logging level\n### Summary\r\n\r\nThe CLI argument settings `--log-level` appear to have no affect. `log.debug` is never rendered or changing the logging level does not appear to suppress logs not required.\r\n\r\n### Steps to reproduce\r\n\r\nCode snippet:\r\n\r\n```\r\nimport logging\r\nimport streamlit as st\r\nimport pandas as pd\r\n\r\nlog.info(\"Test LOG INFO\")\r\nlog.debug(\"Test LOG DEBUG\")\r\nlog.warning(\"Test LOG WARNING\")\r\nlog.error(\"Test LOG ERROR\")\r\n\r\nst.write(\"Here's our first attempt at using data to create a table:\")\r\nst.write(pd.DataFrame({\r\n 'first column': [1, 2, 3, 4],\r\n 'second column': [10, 20, 30, 40]\r\n}))\r\n```\r\n\r\n**Actual behavior:**\r\n\r\n1. for all debug `streamlit --log_level=debug run logging_test_app.py` produces\r\n\r\nnot rendering `log.debug(\"Test LOG DEBUG\")`\r\n\r\n2. for only error `streamlit --log_level=error run logging_test_app.py` but produces same output\r\n\r\nonly `log.error(\"Test LOG ERROR\")` should be present\r\n\r\n\r\n**Expected behavior:**\r\nfor case 1 the line `log.debug(..)` should be rendered with the other in the terminal\r\nfor case 2 only the `log.error(..)` should be the only log line rendered in the terminal\r\n\r\n### Is this a regression?\r\n\r\n no\r\n\r\n### Debug info\r\n\r\n- Streamlit version: 1.3.1\r\n- Python version: 3.9.2\r\n- Using : Poetry \r\n- OS version: Windows 10\r\n- Browser version: N/A\r\n\r\n### Additional information\r\nto ensure no enviromental or instance issues I have tested a very basic python script and with various logging levels as described above selected. 
In all cases the correct log results are rendered in the terminal.\r\n\r\n\r\n\r\n```python\r\nimport logging\r\nlogging.basicConfig(level=logging.ERROR)\r\n# logging.basicConfig(level=logging.DEBUG)\r\nlog = logging.getLogger(__name__)\r\n\r\nprint(\"-\"*80)\r\nprint(f\"Logging Level: {logging.getLevelName(log.getEffectiveLevel())}\")\r\nlog.info(\"Test LOG INFO\")\r\nlog.debug(\"Test LOG DEBUG\")\r\nlog.warning(\"Test LOG WARNING\")\r\nlog.error(\"Test LOG ERROR\")\r\nprint(\"-\"*80)\r\n```\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"This is a script which is run when the Streamlit package is executed.\"\"\"\n\nfrom streamlit import config as _config\n\nimport os\nfrom typing import Optional\n\nimport click\n\nimport streamlit\nfrom streamlit.credentials import Credentials, check_credentials\nimport streamlit.bootstrap as bootstrap\nfrom streamlit.case_converters import to_snake_case\n\nACCEPTED_FILE_EXTENSIONS = (\"py\", \"py3\")\n\nLOG_LEVELS = (\"error\", \"warning\", \"info\", \"debug\")\n\n\ndef _convert_config_option_to_click_option(config_option):\n \"\"\"Composes given config option options as options for click lib.\"\"\"\n option = \"--{}\".format(config_option.key)\n param = config_option.key.replace(\".\", \"_\")\n description = config_option.description\n if config_option.deprecated:\n description += \"\\n {} - {}\".format(\n config_option.deprecation_text, config_option.expiration_date\n )\n envvar = \"STREAMLIT_{}\".format(to_snake_case(param).upper())\n\n return {\n \"param\": param,\n \"description\": description,\n \"type\": config_option.type,\n \"option\": option,\n \"envvar\": envvar,\n }\n\n\ndef configurator_options(func):\n \"\"\"Decorator that adds config param keys to click dynamically.\"\"\"\n for _, value in reversed(_config._config_options_template.items()):\n parsed_parameter = _convert_config_option_to_click_option(value)\n config_option = click.option(\n parsed_parameter[\"option\"],\n parsed_parameter[\"param\"],\n help=parsed_parameter[\"description\"],\n type=parsed_parameter[\"type\"],\n show_envvar=True,\n envvar=parsed_parameter[\"envvar\"],\n )\n func = config_option(func)\n return func\n\n\n# Fetch remote file at url_path to script_path\ndef _download_remote(script_path, url_path):\n import requests\n\n with open(script_path, \"wb\") as fp:\n try:\n resp = requests.get(url_path)\n resp.raise_for_status()\n fp.write(resp.content)\n except requests.exceptions.RequestException as e:\n raise click.BadParameter((\"Unable to fetch {}.\\n{}\".format(url_path, e)))\n\n\[email protected](context_settings={\"auto_envvar_prefix\": \"STREAMLIT\"})\[email protected](\"--log_level\", show_default=True, type=click.Choice(LOG_LEVELS))\[email protected]_option(prog_name=\"Streamlit\")\[email protected]_context\ndef main(ctx, log_level=\"info\"):\n \"\"\"Try out a demo with:\n\n $ streamlit hello\n\n Or use the line below to run your own script:\n\n $ streamlit run 
your_script.py\n \"\"\"\n\n if log_level:\n import streamlit.logger\n\n streamlit.logger.set_log_level(log_level.upper())\n\n\[email protected](\"help\")\[email protected]_context\ndef help(ctx):\n \"\"\"Print this help message.\"\"\"\n # Pretend user typed 'streamlit --help' instead of 'streamlit help'.\n import sys\n\n assert len(sys.argv) == 2 # This is always true, but let's assert anyway.\n sys.argv[1] = \"--help\"\n main(prog_name=\"streamlit\")\n\n\[email protected](\"version\")\[email protected]_context\ndef main_version(ctx):\n \"\"\"Print Streamlit's version number.\"\"\"\n # Pretend user typed 'streamlit --version' instead of 'streamlit version'\n import sys\n\n assert len(sys.argv) == 2 # This is always true, but let's assert anyway.\n sys.argv[1] = \"--version\"\n main()\n\n\[email protected](\"docs\")\ndef main_docs():\n \"\"\"Show help in browser.\"\"\"\n print(\"Showing help page in browser...\")\n from streamlit import util\n\n util.open_browser(\"https://docs.streamlit.io\")\n\n\[email protected](\"hello\")\n@configurator_options\ndef main_hello(**kwargs):\n \"\"\"Runs the Hello World script.\"\"\"\n from streamlit.hello import hello\n\n bootstrap.load_config_options(flag_options=kwargs)\n filename = hello.__file__\n _main_run(filename, flag_options=kwargs)\n\n\[email protected](\"run\")\n@configurator_options\[email protected](\"target\", required=True, envvar=\"STREAMLIT_RUN_TARGET\")\[email protected](\"args\", nargs=-1)\ndef main_run(target, args=None, **kwargs):\n \"\"\"Run a Python script, piping stderr to Streamlit.\n\n The script can be local or it can be an url. In the latter case, Streamlit\n will download the script to a temporary file and runs this file.\n\n \"\"\"\n from validators import url\n\n bootstrap.load_config_options(flag_options=kwargs)\n\n _, extension = os.path.splitext(target)\n if extension[1:] not in ACCEPTED_FILE_EXTENSIONS:\n if extension[1:] == \"\":\n raise click.BadArgumentUsage(\n \"Streamlit requires raw Python (.py) files, but the provided file has no extension.\\nFor more information, please see https://docs.streamlit.io\"\n )\n else:\n raise click.BadArgumentUsage(\n \"Streamlit requires raw Python (.py) files, not %s.\\nFor more information, please see https://docs.streamlit.io\"\n % extension\n )\n\n if url(target):\n from streamlit.temporary_directory import TemporaryDirectory\n\n with TemporaryDirectory() as temp_dir:\n from urllib.parse import urlparse\n from streamlit import url_util\n\n path = urlparse(target).path\n script_path = os.path.join(temp_dir, path.strip(\"/\").rsplit(\"/\", 1)[-1])\n # if this is a GitHub/Gist blob url, convert to a raw URL first.\n target = url_util.process_gitblob_url(target)\n _download_remote(script_path, target)\n _main_run(script_path, args, flag_options=kwargs)\n else:\n if not os.path.exists(target):\n raise click.BadParameter(\"File does not exist: {}\".format(target))\n _main_run(target, args, flag_options=kwargs)\n\n\ndef _get_command_line_as_string() -> Optional[str]:\n import subprocess\n\n parent = click.get_current_context().parent\n if parent is None:\n return None\n cmd_line_as_list = [parent.command_path]\n cmd_line_as_list.extend(click.get_os_args())\n return subprocess.list2cmdline(cmd_line_as_list)\n\n\ndef _main_run(file, args=None, flag_options=None):\n if args is None:\n args = []\n\n if flag_options is None:\n flag_options = {}\n\n command_line = _get_command_line_as_string()\n\n # Set a global flag indicating that we're \"within\" streamlit.\n 
streamlit._is_running_with_streamlit = True\n\n check_credentials()\n\n bootstrap.run(file, command_line, args, flag_options)\n\n\n# SUBCOMMAND: cache\n\n\[email protected](\"cache\")\ndef cache():\n \"\"\"Manage the Streamlit cache.\"\"\"\n pass\n\n\[email protected](\"clear\")\ndef cache_clear():\n \"\"\"Clear st.cache, st.memo, and st.singleton caches.\"\"\"\n import streamlit.legacy_caching\n import streamlit.caching\n\n result = streamlit.legacy_caching.clear_cache()\n cache_path = streamlit.legacy_caching.get_cache_path()\n if result:\n print(\"Cleared directory %s.\" % cache_path)\n else:\n print(\"Nothing to clear at %s.\" % cache_path)\n\n streamlit.caching.clear_memo_cache()\n streamlit.caching.clear_singleton_cache()\n\n\n# SUBCOMMAND: config\n\n\[email protected](\"config\")\ndef config():\n \"\"\"Manage Streamlit's config settings.\"\"\"\n pass\n\n\[email protected](\"show\")\n@configurator_options\ndef config_show(**kwargs):\n \"\"\"Show all of Streamlit's config settings.\"\"\"\n\n bootstrap.load_config_options(flag_options=kwargs)\n\n _config.show_config()\n\n\n# SUBCOMMAND: activate\n\n\[email protected](\"activate\", invoke_without_command=True)\[email protected]_context\ndef activate(ctx):\n \"\"\"Activate Streamlit by entering your email.\"\"\"\n if not ctx.invoked_subcommand:\n Credentials.get_current().activate()\n\n\[email protected](\"reset\")\ndef activate_reset():\n \"\"\"Reset Activation Credentials.\"\"\"\n Credentials.get_current().reset()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "lib/streamlit/cli.py"}], "after_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"This is a script which is run when the Streamlit package is executed.\"\"\"\n\nfrom streamlit import config as _config\n\nimport os\nfrom typing import Optional\n\nimport click\n\nimport streamlit\nfrom streamlit.credentials import Credentials, check_credentials\nimport streamlit.bootstrap as bootstrap\nfrom streamlit.case_converters import to_snake_case\n\nACCEPTED_FILE_EXTENSIONS = (\"py\", \"py3\")\n\nLOG_LEVELS = (\"error\", \"warning\", \"info\", \"debug\")\n\n\ndef _convert_config_option_to_click_option(config_option):\n \"\"\"Composes given config option options as options for click lib.\"\"\"\n option = \"--{}\".format(config_option.key)\n param = config_option.key.replace(\".\", \"_\")\n description = config_option.description\n if config_option.deprecated:\n description += \"\\n {} - {}\".format(\n config_option.deprecation_text, config_option.expiration_date\n )\n envvar = \"STREAMLIT_{}\".format(to_snake_case(param).upper())\n\n return {\n \"param\": param,\n \"description\": description,\n \"type\": config_option.type,\n \"option\": option,\n \"envvar\": envvar,\n }\n\n\ndef configurator_options(func):\n \"\"\"Decorator that adds config param keys to click dynamically.\"\"\"\n for _, value in reversed(_config._config_options_template.items()):\n parsed_parameter = _convert_config_option_to_click_option(value)\n 
config_option = click.option(\n parsed_parameter[\"option\"],\n parsed_parameter[\"param\"],\n help=parsed_parameter[\"description\"],\n type=parsed_parameter[\"type\"],\n show_envvar=True,\n envvar=parsed_parameter[\"envvar\"],\n )\n func = config_option(func)\n return func\n\n\n# Fetch remote file at url_path to script_path\ndef _download_remote(script_path, url_path):\n import requests\n\n with open(script_path, \"wb\") as fp:\n try:\n resp = requests.get(url_path)\n resp.raise_for_status()\n fp.write(resp.content)\n except requests.exceptions.RequestException as e:\n raise click.BadParameter((\"Unable to fetch {}.\\n{}\".format(url_path, e)))\n\n\[email protected](context_settings={\"auto_envvar_prefix\": \"STREAMLIT\"})\[email protected](\"--log_level\", show_default=True, type=click.Choice(LOG_LEVELS))\[email protected]_option(prog_name=\"Streamlit\")\[email protected]_context\ndef main(ctx, log_level=\"info\"):\n \"\"\"Try out a demo with:\n\n $ streamlit hello\n\n Or use the line below to run your own script:\n\n $ streamlit run your_script.py\n \"\"\"\n\n if log_level:\n from streamlit.logger import get_logger\n\n LOGGER = get_logger(__name__)\n LOGGER.warning(\n \"Setting the log level using the --log_level flag is unsupported.\"\n \"\\nUse the --logger.level flag (after your streamlit command) instead.\"\n )\n\n\[email protected](\"help\")\[email protected]_context\ndef help(ctx):\n \"\"\"Print this help message.\"\"\"\n # Pretend user typed 'streamlit --help' instead of 'streamlit help'.\n import sys\n\n assert len(sys.argv) == 2 # This is always true, but let's assert anyway.\n sys.argv[1] = \"--help\"\n main(prog_name=\"streamlit\")\n\n\[email protected](\"version\")\[email protected]_context\ndef main_version(ctx):\n \"\"\"Print Streamlit's version number.\"\"\"\n # Pretend user typed 'streamlit --version' instead of 'streamlit version'\n import sys\n\n assert len(sys.argv) == 2 # This is always true, but let's assert anyway.\n sys.argv[1] = \"--version\"\n main()\n\n\[email protected](\"docs\")\ndef main_docs():\n \"\"\"Show help in browser.\"\"\"\n print(\"Showing help page in browser...\")\n from streamlit import util\n\n util.open_browser(\"https://docs.streamlit.io\")\n\n\[email protected](\"hello\")\n@configurator_options\ndef main_hello(**kwargs):\n \"\"\"Runs the Hello World script.\"\"\"\n from streamlit.hello import hello\n\n bootstrap.load_config_options(flag_options=kwargs)\n filename = hello.__file__\n _main_run(filename, flag_options=kwargs)\n\n\[email protected](\"run\")\n@configurator_options\[email protected](\"target\", required=True, envvar=\"STREAMLIT_RUN_TARGET\")\[email protected](\"args\", nargs=-1)\ndef main_run(target, args=None, **kwargs):\n \"\"\"Run a Python script, piping stderr to Streamlit.\n\n The script can be local or it can be an url. 
In the latter case, Streamlit\n will download the script to a temporary file and runs this file.\n\n \"\"\"\n from validators import url\n\n bootstrap.load_config_options(flag_options=kwargs)\n\n _, extension = os.path.splitext(target)\n if extension[1:] not in ACCEPTED_FILE_EXTENSIONS:\n if extension[1:] == \"\":\n raise click.BadArgumentUsage(\n \"Streamlit requires raw Python (.py) files, but the provided file has no extension.\\nFor more information, please see https://docs.streamlit.io\"\n )\n else:\n raise click.BadArgumentUsage(\n \"Streamlit requires raw Python (.py) files, not %s.\\nFor more information, please see https://docs.streamlit.io\"\n % extension\n )\n\n if url(target):\n from streamlit.temporary_directory import TemporaryDirectory\n\n with TemporaryDirectory() as temp_dir:\n from urllib.parse import urlparse\n from streamlit import url_util\n\n path = urlparse(target).path\n script_path = os.path.join(temp_dir, path.strip(\"/\").rsplit(\"/\", 1)[-1])\n # if this is a GitHub/Gist blob url, convert to a raw URL first.\n target = url_util.process_gitblob_url(target)\n _download_remote(script_path, target)\n _main_run(script_path, args, flag_options=kwargs)\n else:\n if not os.path.exists(target):\n raise click.BadParameter(\"File does not exist: {}\".format(target))\n _main_run(target, args, flag_options=kwargs)\n\n\ndef _get_command_line_as_string() -> Optional[str]:\n import subprocess\n\n parent = click.get_current_context().parent\n if parent is None:\n return None\n cmd_line_as_list = [parent.command_path]\n cmd_line_as_list.extend(click.get_os_args())\n return subprocess.list2cmdline(cmd_line_as_list)\n\n\ndef _main_run(file, args=None, flag_options=None):\n if args is None:\n args = []\n\n if flag_options is None:\n flag_options = {}\n\n command_line = _get_command_line_as_string()\n\n # Set a global flag indicating that we're \"within\" streamlit.\n streamlit._is_running_with_streamlit = True\n\n check_credentials()\n\n bootstrap.run(file, command_line, args, flag_options)\n\n\n# SUBCOMMAND: cache\n\n\[email protected](\"cache\")\ndef cache():\n \"\"\"Manage the Streamlit cache.\"\"\"\n pass\n\n\[email protected](\"clear\")\ndef cache_clear():\n \"\"\"Clear st.cache, st.memo, and st.singleton caches.\"\"\"\n import streamlit.legacy_caching\n import streamlit.caching\n\n result = streamlit.legacy_caching.clear_cache()\n cache_path = streamlit.legacy_caching.get_cache_path()\n if result:\n print(\"Cleared directory %s.\" % cache_path)\n else:\n print(\"Nothing to clear at %s.\" % cache_path)\n\n streamlit.caching.clear_memo_cache()\n streamlit.caching.clear_singleton_cache()\n\n\n# SUBCOMMAND: config\n\n\[email protected](\"config\")\ndef config():\n \"\"\"Manage Streamlit's config settings.\"\"\"\n pass\n\n\[email protected](\"show\")\n@configurator_options\ndef config_show(**kwargs):\n \"\"\"Show all of Streamlit's config settings.\"\"\"\n\n bootstrap.load_config_options(flag_options=kwargs)\n\n _config.show_config()\n\n\n# SUBCOMMAND: activate\n\n\[email protected](\"activate\", invoke_without_command=True)\[email protected]_context\ndef activate(ctx):\n \"\"\"Activate Streamlit by entering your email.\"\"\"\n if not ctx.invoked_subcommand:\n Credentials.get_current().activate()\n\n\[email protected](\"reset\")\ndef activate_reset():\n \"\"\"Reset Activation Credentials.\"\"\"\n Credentials.get_current().reset()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "lib/streamlit/cli.py"}]}
| 3,657 | 141 |
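The accepted fix above does not make `--log_level` work; it turns the flag into a warning that points users at `--logger.level` (placed after the subcommand, e.g. `streamlit run your_script.py --logger.level=debug`, per the warning text). A standalone stand-in showing the same pattern outside the Streamlit codebase:

```python
# Stand-in CLI (not Streamlit itself) mirroring the patched behavior: the
# --log_level flag is still accepted but only emits a warning redirecting
# users to --logger.level.
import logging

import click

LOG_LEVELS = ("error", "warning", "info", "debug")


@click.group()
@click.option("--log_level", type=click.Choice(LOG_LEVELS))
def main(log_level=None):
    if log_level:
        logging.getLogger(__name__).warning(
            "Setting the log level using the --log_level flag is unsupported."
            "\nUse the --logger.level flag (after your streamlit command) instead."
        )


@main.command("run")
@click.argument("target")
def run(target):
    click.echo("would run {} here".format(target))


if __name__ == "__main__":
    # e.g. python stand_in.py --log_level debug run app.py
    main()
```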
gh_patches_debug_5382
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-trace-py-1268
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ddtrace not working with new aiobotocore library
### Which version of dd-trace-py are you using?
0.34.0
### Which version of the libraries are you using?
```
aiobotocore==0.12.0
aiohttp==3.6.2
aioitertools==0.5.1
async-timeout==3.0.1
attrs==19.3.0
botocore==1.15.15
chardet==3.0.4
ddtrace==0.34.0
docutils==0.15.2
idna==2.9
jmespath==0.9.5
msgpack==1.0.0
multidict==4.7.5
python-dateutil==2.8.1
six==1.14.0
urllib3==1.25.8
wrapt==1.12.0
yarl==1.4.2
```
### How can we reproduce your problem?
```python
from ddtrace import patch
patch(aiobotocore=True)
```
### What is the result that you get?
```
ImportError: cannot import name 'ClientResponseContentProxy' from 'aiobotocore.endpoint'
raise PatchException('integration not available')
ddtrace.monkey.PatchException: integration not available
```
### What is the result that you expected?
Successfully patched `aiobotocore` library
--- END ISSUE ---
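The traceback above shows the chain: importing `ClientResponseContentProxy` from `aiobotocore.endpoint` fails on newer aiobotocore, so ddtrace's monkey-patching treats the integration as unavailable and raises `PatchException`. The golden diff further down resolves it with a version-tolerant import, which looks like this in isolation:

```python
# Version-tolerant import mirroring the golden diff below; the fallback module
# applies to aiobotocore>=0.11.0 per the diff's comment.
try:
    from aiobotocore.endpoint import ClientResponseContentProxy
except ImportError:
    # aiobotocore>=0.11.0
    from aiobotocore._endpoint_helpers import ClientResponseContentProxy
```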
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/aiobotocore/patch.py`
Content:
```
1 import asyncio
2 from ddtrace.vendor import wrapt
3 from ddtrace import config
4 import aiobotocore.client
5
6 from aiobotocore.endpoint import ClientResponseContentProxy
7
8 from ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY
9 from ...pin import Pin
10 from ...ext import SpanTypes, http, aws
11 from ...compat import PYTHON_VERSION_INFO
12 from ...utils.formats import deep_getattr
13 from ...utils.wrappers import unwrap
14
15
16 ARGS_NAME = ('action', 'params', 'path', 'verb')
17 TRACED_ARGS = ['params', 'path', 'verb']
18
19
20 def patch():
21 if getattr(aiobotocore.client, '_datadog_patch', False):
22 return
23 setattr(aiobotocore.client, '_datadog_patch', True)
24
25 wrapt.wrap_function_wrapper('aiobotocore.client', 'AioBaseClient._make_api_call', _wrapped_api_call)
26 Pin(service='aws', app='aws').onto(aiobotocore.client.AioBaseClient)
27
28
29 def unpatch():
30 if getattr(aiobotocore.client, '_datadog_patch', False):
31 setattr(aiobotocore.client, '_datadog_patch', False)
32 unwrap(aiobotocore.client.AioBaseClient, '_make_api_call')
33
34
35 class WrappedClientResponseContentProxy(wrapt.ObjectProxy):
36 def __init__(self, body, pin, parent_span):
37 super(WrappedClientResponseContentProxy, self).__init__(body)
38 self._self_pin = pin
39 self._self_parent_span = parent_span
40
41 @asyncio.coroutine
42 def read(self, *args, **kwargs):
43 # async read that must be child of the parent span operation
44 operation_name = '{}.read'.format(self._self_parent_span.name)
45
46 with self._self_pin.tracer.start_span(operation_name, child_of=self._self_parent_span) as span:
47 # inherit parent attributes
48 span.resource = self._self_parent_span.resource
49 span.span_type = self._self_parent_span.span_type
50 span.meta = dict(self._self_parent_span.meta)
51 span.metrics = dict(self._self_parent_span.metrics)
52
53 result = yield from self.__wrapped__.read(*args, **kwargs)
54 span.set_tag('Length', len(result))
55
56 return result
57
58 # wrapt doesn't proxy `async with` context managers
59 if PYTHON_VERSION_INFO >= (3, 5, 0):
60 @asyncio.coroutine
61 def __aenter__(self):
62 # call the wrapped method but return the object proxy
63 yield from self.__wrapped__.__aenter__()
64 return self
65
66 @asyncio.coroutine
67 def __aexit__(self, *args, **kwargs):
68 response = yield from self.__wrapped__.__aexit__(*args, **kwargs)
69 return response
70
71
72 @asyncio.coroutine
73 def _wrapped_api_call(original_func, instance, args, kwargs):
74 pin = Pin.get_from(instance)
75 if not pin or not pin.enabled():
76 result = yield from original_func(*args, **kwargs)
77 return result
78
79 endpoint_name = deep_getattr(instance, '_endpoint._endpoint_prefix')
80
81 with pin.tracer.trace('{}.command'.format(endpoint_name),
82 service='{}.{}'.format(pin.service, endpoint_name),
83 span_type=SpanTypes.HTTP) as span:
84 span.set_tag(SPAN_MEASURED_KEY)
85
86 if len(args) > 0:
87 operation = args[0]
88 span.resource = '{}.{}'.format(endpoint_name, operation.lower())
89 else:
90 operation = None
91 span.resource = endpoint_name
92
93 aws.add_span_arg_tags(span, endpoint_name, args, ARGS_NAME, TRACED_ARGS)
94
95 region_name = deep_getattr(instance, 'meta.region_name')
96
97 meta = {
98 'aws.agent': 'aiobotocore',
99 'aws.operation': operation,
100 'aws.region': region_name,
101 }
102 span.set_tags(meta)
103
104 result = yield from original_func(*args, **kwargs)
105
106 body = result.get('Body')
107 if isinstance(body, ClientResponseContentProxy):
108 result['Body'] = WrappedClientResponseContentProxy(body, pin, span)
109
110 response_meta = result['ResponseMetadata']
111 response_headers = response_meta['HTTPHeaders']
112
113 span.set_tag(http.STATUS_CODE, response_meta['HTTPStatusCode'])
114 span.set_tag('retry_attempts', response_meta['RetryAttempts'])
115
116 request_id = response_meta.get('RequestId')
117 if request_id:
118 span.set_tag('aws.requestid', request_id)
119
120 request_id2 = response_headers.get('x-amz-id-2')
121 if request_id2:
122 span.set_tag('aws.requestid2', request_id2)
123
124 # set analytics sample rate
125 span.set_tag(
126 ANALYTICS_SAMPLE_RATE_KEY,
127 config.aiobotocore.get_analytics_sample_rate()
128 )
129
130 return result
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ddtrace/contrib/aiobotocore/patch.py b/ddtrace/contrib/aiobotocore/patch.py
--- a/ddtrace/contrib/aiobotocore/patch.py
+++ b/ddtrace/contrib/aiobotocore/patch.py
@@ -3,7 +3,11 @@
from ddtrace import config
import aiobotocore.client
-from aiobotocore.endpoint import ClientResponseContentProxy
+try:
+ from aiobotocore.endpoint import ClientResponseContentProxy
+except ImportError:
+ # aiobotocore>=0.11.0
+ from aiobotocore._endpoint_helpers import ClientResponseContentProxy
from ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY
from ...pin import Pin
|
{"golden_diff": "diff --git a/ddtrace/contrib/aiobotocore/patch.py b/ddtrace/contrib/aiobotocore/patch.py\n--- a/ddtrace/contrib/aiobotocore/patch.py\n+++ b/ddtrace/contrib/aiobotocore/patch.py\n@@ -3,7 +3,11 @@\n from ddtrace import config\n import aiobotocore.client\n \n-from aiobotocore.endpoint import ClientResponseContentProxy\n+try:\n+ from aiobotocore.endpoint import ClientResponseContentProxy\n+except ImportError:\n+ # aiobotocore>=0.11.0\n+ from aiobotocore._endpoint_helpers import ClientResponseContentProxy\n \n from ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY\n from ...pin import Pin\n", "issue": "ddtrace not working with new aiobotocore library\n\r\n### Which version of dd-trace-py are you using?\r\n0.34.0\r\n\r\n### Which version of the libraries are you using?\r\n\r\n```\r\naiobotocore==0.12.0\r\naiohttp==3.6.2\r\naioitertools==0.5.1\r\nasync-timeout==3.0.1\r\nattrs==19.3.0\r\nbotocore==1.15.15\r\nchardet==3.0.4\r\nddtrace==0.34.0\r\ndocutils==0.15.2\r\nidna==2.9\r\njmespath==0.9.5\r\nmsgpack==1.0.0\r\nmultidict==4.7.5\r\npython-dateutil==2.8.1\r\nsix==1.14.0\r\nurllib3==1.25.8\r\nwrapt==1.12.0\r\nyarl==1.4.2\r\n```\r\n\r\n### How can we reproduce your problem?\r\n```python\r\n from ddtrace import patch\r\n\r\n patch(aiobotocore=True)\r\n```\r\n\r\n### What is the result that you get?\r\n```\r\nImportError: cannot import name 'ClientResponseContentProxy' from 'aiobotocore.endpoint' \r\n raise PatchException('integration not available')\r\nddtrace.monkey.PatchException: integration not available\r\n```\r\n\r\n### What is result that you expected?\r\nSuccessfully patched `aiobotocore` library\n", "before_files": [{"content": "import asyncio\nfrom ddtrace.vendor import wrapt\nfrom ddtrace import config\nimport aiobotocore.client\n\nfrom aiobotocore.endpoint import ClientResponseContentProxy\n\nfrom ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY\nfrom ...pin import Pin\nfrom ...ext import SpanTypes, http, aws\nfrom ...compat import PYTHON_VERSION_INFO\nfrom ...utils.formats import deep_getattr\nfrom ...utils.wrappers import unwrap\n\n\nARGS_NAME = ('action', 'params', 'path', 'verb')\nTRACED_ARGS = ['params', 'path', 'verb']\n\n\ndef patch():\n if getattr(aiobotocore.client, '_datadog_patch', False):\n return\n setattr(aiobotocore.client, '_datadog_patch', True)\n\n wrapt.wrap_function_wrapper('aiobotocore.client', 'AioBaseClient._make_api_call', _wrapped_api_call)\n Pin(service='aws', app='aws').onto(aiobotocore.client.AioBaseClient)\n\n\ndef unpatch():\n if getattr(aiobotocore.client, '_datadog_patch', False):\n setattr(aiobotocore.client, '_datadog_patch', False)\n unwrap(aiobotocore.client.AioBaseClient, '_make_api_call')\n\n\nclass WrappedClientResponseContentProxy(wrapt.ObjectProxy):\n def __init__(self, body, pin, parent_span):\n super(WrappedClientResponseContentProxy, self).__init__(body)\n self._self_pin = pin\n self._self_parent_span = parent_span\n\n @asyncio.coroutine\n def read(self, *args, **kwargs):\n # async read that must be child of the parent span operation\n operation_name = '{}.read'.format(self._self_parent_span.name)\n\n with self._self_pin.tracer.start_span(operation_name, child_of=self._self_parent_span) as span:\n # inherit parent attributes\n span.resource = self._self_parent_span.resource\n span.span_type = self._self_parent_span.span_type\n span.meta = dict(self._self_parent_span.meta)\n span.metrics = dict(self._self_parent_span.metrics)\n\n result = yield from self.__wrapped__.read(*args, **kwargs)\n 
span.set_tag('Length', len(result))\n\n return result\n\n # wrapt doesn't proxy `async with` context managers\n if PYTHON_VERSION_INFO >= (3, 5, 0):\n @asyncio.coroutine\n def __aenter__(self):\n # call the wrapped method but return the object proxy\n yield from self.__wrapped__.__aenter__()\n return self\n\n @asyncio.coroutine\n def __aexit__(self, *args, **kwargs):\n response = yield from self.__wrapped__.__aexit__(*args, **kwargs)\n return response\n\n\[email protected]\ndef _wrapped_api_call(original_func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n result = yield from original_func(*args, **kwargs)\n return result\n\n endpoint_name = deep_getattr(instance, '_endpoint._endpoint_prefix')\n\n with pin.tracer.trace('{}.command'.format(endpoint_name),\n service='{}.{}'.format(pin.service, endpoint_name),\n span_type=SpanTypes.HTTP) as span:\n span.set_tag(SPAN_MEASURED_KEY)\n\n if len(args) > 0:\n operation = args[0]\n span.resource = '{}.{}'.format(endpoint_name, operation.lower())\n else:\n operation = None\n span.resource = endpoint_name\n\n aws.add_span_arg_tags(span, endpoint_name, args, ARGS_NAME, TRACED_ARGS)\n\n region_name = deep_getattr(instance, 'meta.region_name')\n\n meta = {\n 'aws.agent': 'aiobotocore',\n 'aws.operation': operation,\n 'aws.region': region_name,\n }\n span.set_tags(meta)\n\n result = yield from original_func(*args, **kwargs)\n\n body = result.get('Body')\n if isinstance(body, ClientResponseContentProxy):\n result['Body'] = WrappedClientResponseContentProxy(body, pin, span)\n\n response_meta = result['ResponseMetadata']\n response_headers = response_meta['HTTPHeaders']\n\n span.set_tag(http.STATUS_CODE, response_meta['HTTPStatusCode'])\n span.set_tag('retry_attempts', response_meta['RetryAttempts'])\n\n request_id = response_meta.get('RequestId')\n if request_id:\n span.set_tag('aws.requestid', request_id)\n\n request_id2 = response_headers.get('x-amz-id-2')\n if request_id2:\n span.set_tag('aws.requestid2', request_id2)\n\n # set analytics sample rate\n span.set_tag(\n ANALYTICS_SAMPLE_RATE_KEY,\n config.aiobotocore.get_analytics_sample_rate()\n )\n\n return result\n", "path": "ddtrace/contrib/aiobotocore/patch.py"}], "after_files": [{"content": "import asyncio\nfrom ddtrace.vendor import wrapt\nfrom ddtrace import config\nimport aiobotocore.client\n\ntry:\n from aiobotocore.endpoint import ClientResponseContentProxy\nexcept ImportError:\n # aiobotocore>=0.11.0\n from aiobotocore._endpoint_helpers import ClientResponseContentProxy\n\nfrom ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY\nfrom ...pin import Pin\nfrom ...ext import SpanTypes, http, aws\nfrom ...compat import PYTHON_VERSION_INFO\nfrom ...utils.formats import deep_getattr\nfrom ...utils.wrappers import unwrap\n\n\nARGS_NAME = ('action', 'params', 'path', 'verb')\nTRACED_ARGS = ['params', 'path', 'verb']\n\n\ndef patch():\n if getattr(aiobotocore.client, '_datadog_patch', False):\n return\n setattr(aiobotocore.client, '_datadog_patch', True)\n\n wrapt.wrap_function_wrapper('aiobotocore.client', 'AioBaseClient._make_api_call', _wrapped_api_call)\n Pin(service='aws', app='aws').onto(aiobotocore.client.AioBaseClient)\n\n\ndef unpatch():\n if getattr(aiobotocore.client, '_datadog_patch', False):\n setattr(aiobotocore.client, '_datadog_patch', False)\n unwrap(aiobotocore.client.AioBaseClient, '_make_api_call')\n\n\nclass WrappedClientResponseContentProxy(wrapt.ObjectProxy):\n def __init__(self, body, pin, parent_span):\n 
super(WrappedClientResponseContentProxy, self).__init__(body)\n self._self_pin = pin\n self._self_parent_span = parent_span\n\n @asyncio.coroutine\n def read(self, *args, **kwargs):\n # async read that must be child of the parent span operation\n operation_name = '{}.read'.format(self._self_parent_span.name)\n\n with self._self_pin.tracer.start_span(operation_name, child_of=self._self_parent_span) as span:\n # inherit parent attributes\n span.resource = self._self_parent_span.resource\n span.span_type = self._self_parent_span.span_type\n span.meta = dict(self._self_parent_span.meta)\n span.metrics = dict(self._self_parent_span.metrics)\n\n result = yield from self.__wrapped__.read(*args, **kwargs)\n span.set_tag('Length', len(result))\n\n return result\n\n # wrapt doesn't proxy `async with` context managers\n if PYTHON_VERSION_INFO >= (3, 5, 0):\n @asyncio.coroutine\n def __aenter__(self):\n # call the wrapped method but return the object proxy\n yield from self.__wrapped__.__aenter__()\n return self\n\n @asyncio.coroutine\n def __aexit__(self, *args, **kwargs):\n response = yield from self.__wrapped__.__aexit__(*args, **kwargs)\n return response\n\n\[email protected]\ndef _wrapped_api_call(original_func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n result = yield from original_func(*args, **kwargs)\n return result\n\n endpoint_name = deep_getattr(instance, '_endpoint._endpoint_prefix')\n\n with pin.tracer.trace('{}.command'.format(endpoint_name),\n service='{}.{}'.format(pin.service, endpoint_name),\n span_type=SpanTypes.HTTP) as span:\n span.set_tag(SPAN_MEASURED_KEY)\n\n if len(args) > 0:\n operation = args[0]\n span.resource = '{}.{}'.format(endpoint_name, operation.lower())\n else:\n operation = None\n span.resource = endpoint_name\n\n aws.add_span_arg_tags(span, endpoint_name, args, ARGS_NAME, TRACED_ARGS)\n\n region_name = deep_getattr(instance, 'meta.region_name')\n\n meta = {\n 'aws.agent': 'aiobotocore',\n 'aws.operation': operation,\n 'aws.region': region_name,\n }\n span.set_tags(meta)\n\n result = yield from original_func(*args, **kwargs)\n\n body = result.get('Body')\n if isinstance(body, ClientResponseContentProxy):\n result['Body'] = WrappedClientResponseContentProxy(body, pin, span)\n\n response_meta = result['ResponseMetadata']\n response_headers = response_meta['HTTPHeaders']\n\n span.set_tag(http.STATUS_CODE, response_meta['HTTPStatusCode'])\n span.set_tag('retry_attempts', response_meta['RetryAttempts'])\n\n request_id = response_meta.get('RequestId')\n if request_id:\n span.set_tag('aws.requestid', request_id)\n\n request_id2 = response_headers.get('x-amz-id-2')\n if request_id2:\n span.set_tag('aws.requestid2', request_id2)\n\n # set analytics sample rate\n span.set_tag(\n ANALYTICS_SAMPLE_RATE_KEY,\n config.aiobotocore.get_analytics_sample_rate()\n )\n\n return result\n", "path": "ddtrace/contrib/aiobotocore/patch.py"}]}
| 1,938 | 172 |
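A hypothetical smoke check for the fix above, assuming a ddtrace build that ships the compatibility import and a supported aiobotocore release installed — the call is the reporter's own reproduction, which should now succeed instead of raising `PatchException`:

```python
# Smoke check (assumes patched ddtrace and aiobotocore are installed).
from ddtrace import patch

patch(aiobotocore=True)  # should no longer raise PatchException on aiobotocore>=0.11
print("aiobotocore integration patched")
```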
gh_patches_debug_39221
|
rasdani/github-patches
|
git_diff
|
sql-machine-learning__elasticdl-1834
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"No such file or directory" occasionally happens when the PS save checkpoint.
```
File "/elasticdl/python/worker/worker.py", line 579, in report_gradient_to_ps
res = report_future.result()
File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 625, in result
raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.UNKNOWN
details = "Exception calling application: [Errno 2] No such file or directory: '/testdata/ckpt/20200309-19/version-26000/variables-0-of-3.ckpt'"
debug_error_string = "{"created":"@1583757719.258002676","description":"Error received from peer ipv4:11.149.168.3:2222","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Exception calling application: [Errno 2] No such file or directory: '/testdata/elasticdl/kept/dban_lookup/20200309-19/version-26000/variables-0-of-3.ckpt'","grpc_status":2}"
```
--- END ISSUE ---
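The record's patch is not shown in this excerpt, so the following is only one plausible reading: with several PS pods sharing one checkpoint directory, a `version-<N>/` directory or shard file can be removed (for example by the keep-`max_versions` cleanup) or not yet present when another writer or reader touches it, which surfaces as the ENOENT above. A generic mitigation sketch — write the shard to a temporary file and publish it atomically — is given below; it is an assumption, not necessarily the fix elasticdl adopted:

```python
# Generic atomic-write sketch (assumption, not the project's actual patch):
# serialize to a temp file in the target directory, then publish via os.replace
# so no reader or cleanup pass ever sees a half-written shard file.
import os
import tempfile


def save_pb_to_file_atomic(pb_obj, file_name):
    dir_name = os.path.dirname(file_name)
    os.makedirs(dir_name, exist_ok=True)
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(pb_obj.SerializeToString())
        os.replace(tmp_path, file_name)  # atomic within one filesystem
    finally:
        if os.path.exists(tmp_path):
            os.remove(tmp_path)
```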
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/python/common/save_utils.py`
Content:
```
1 import contextlib
2 import os
3 import tempfile
4
5 import tensorflow as tf
6
7 from elasticdl.proto import elasticdl_pb2
8 from elasticdl.python.common.hash_utils import int_to_id, string_to_id
9 from elasticdl.python.common.tensor_utils import (
10 pb_to_indexed_slices,
11 pb_to_ndarray,
12 )
13 from elasticdl.python.ps.embedding_table import create_embedding_table
14 from elasticdl.python.ps.parameters import Parameters
15
16
17 def save_pb_to_file(pb_obj, file_name):
18 """Save a protobuf object to file"""
19 encoded_model = pb_obj.SerializeToString()
20 with open(file_name, "wb") as f:
21 f.write(encoded_model)
22
23
24 def load_pb_from_file(pb_obj, file_name):
25 """Load a protobuf object from a file"""
26 with open(file_name, "rb") as f:
27 pb_obj.ParseFromString(f.read())
28 return pb_obj
29
30
31 def _get_params_shard_from_pb(model_pb, shard_index, shard_num):
32 """Get parameters including variables values and embedding table
33 from a model protobuf.
34 Args:
35 model_pb: A Model protobuf instance.
36 shard_index: Model shard index.
37 shard_num: The total number of model shards.
38 Return:
39 non_embedding_vars: A Python dict in which the key is a variable
40 name and the value is a `tf.Variable` object.
41 embedding_table_values: A Python dict in which the key is an embedding
42 table name and the value is a tuple with 2 elements. The value[0]
43 is indices and value[1] is the corresponding embedding vector.
44 """
45 non_embedding_vars = {}
46 embedding_table_values = {}
47
48 for name, pb in model_pb.dense_parameters.items():
49 if string_to_id(name, shard_num) == shard_index:
50 non_embedding_vars[name] = tf.Variable(
51 initial_value=pb_to_ndarray(pb), trainable=True
52 )
53 for name, pb in model_pb.embedding_tables.items():
54 embedding_table_values.setdefault(name, ([], []))
55 t = pb_to_indexed_slices(pb)
56 for embedding_id, vector in zip(t.indices, t.values):
57 if int_to_id(embedding_id, shard_num) == shard_index:
58 embedding_table_values[name][0].append(embedding_id)
59 embedding_table_values[name][1].append(vector)
60 return non_embedding_vars, embedding_table_values
61
62
63 class Checkpoint(object):
64 def __init__(self, version, file):
65 self.version = version
66 self.file = file
67
68
69 class CheckpointSaver(object):
70 """Checkpoint Saver implementation"""
71
72 def __init__(
73 self,
74 checkpoint_dir,
75 checkpoint_steps,
76 keep_checkpoint_max,
77 include_evaluation,
78 ):
79 """
80 Arguments:
81 checkpoint_dir: The directory to store the checkpoint files.
82 Directory will be created if not exist.
83 checkpoint_steps: Save checkpoint every this many steps.
84 keep_checkpoint_max: The maximum number of recent checkpoint
85 files to keep.
86 """
87 self._directory = checkpoint_dir
88 self._steps = checkpoint_steps
89 self._max_versions = keep_checkpoint_max
90 if not self._directory:
91 self._directory = os.getcwd() + "/checkpoint_dir"
92 if self._steps:
93 os.makedirs(self._directory, exist_ok=True)
94 self._checkpoint_list = []
95 self._include_evaluation = include_evaluation
96 self._eval_checkpoint_dir = (
97 tempfile.mkdtemp() if include_evaluation else ""
98 )
99
100 def _get_checkpoint_file(
101 self, version, is_eval_checkpoint=False, shard_index=0, shard_num=1
102 ):
103 checkpoint_dir = (
104 self._eval_checkpoint_dir
105 if is_eval_checkpoint
106 else self._directory
107 )
108 checkpoint_version_dir = os.path.join(
109 checkpoint_dir, "version-%s" % str(version)
110 )
111 with contextlib.suppress(FileExistsError):
112 os.makedirs(checkpoint_version_dir, exist_ok=True)
113 return "%s/variables-%s-of-%s.ckpt" % (
114 checkpoint_version_dir,
115 str(shard_index),
116 str(shard_num),
117 )
118
119 def is_enabled(self):
120 """Checkpoint is enabled or not"""
121 return self._steps
122
123 def need_to_checkpoint(self, version):
124 """Check if the given model version needs to be checkpointed"""
125 return self.is_enabled() and version % self._steps == 0
126
127 def save(
128 self, version, model, is_eval_checkpoint, shard_index=0, shard_num=1
129 ):
130 """Checkpoint the given model
131
132 Args:
133 version (int): iteration steps
134 model: a model protobuf instance
135 is_eval_checkpoint (bool): if True, the model will be saved to
136 a temporary directory.
137 shard_index (int): default 0. The shard index in all
138 model shard files, e.g. the shard_index is PS instance index
139 using ParameterServerStrategy.
140 shard_number (int): default 1. The number of model shards,
141 e.g. shard_number is the number of PS instances using
142 ParameterServerStrategy.
143 """
144 file = self._get_checkpoint_file(
145 version, is_eval_checkpoint, shard_index, shard_num
146 )
147 save_pb_to_file(model, file)
148 if not is_eval_checkpoint:
149 self._checkpoint_list.append(Checkpoint(version, file))
150 if self._max_versions:
151 self._delete_old_checkpoints_if_needed()
152
153 def _delete_old_checkpoints_if_needed(self):
154 """Delete the oldest checkpoint files and keep the number of
155 checkpoints is not beyond max_version.
156 """
157 while len(self._checkpoint_list) > self._max_versions:
158 checkpoint_file = self._checkpoint_list.pop(0).file
159 checkpoint_version_dir = os.path.dirname(checkpoint_file)
160 with contextlib.suppress(FileNotFoundError):
161 os.remove(checkpoint_file)
162 # Remove the directory if empty
163 if not os.listdir(checkpoint_version_dir):
164 os.rmdir(checkpoint_version_dir)
165
166 def get_latest_checkpoint_version(self):
167 """Get the latest checkpointed model version"""
168 if not self._checkpoint_list:
169 raise RuntimeError("No model checkpoint available")
170 return self._checkpoint_list[-1].version
171
172 @staticmethod
173 def get_valid_lastest_version_dir(checkpoint_dir):
174 """Get the valid and lastest version checkpoint directory"""
175 if not checkpoint_dir or not os.path.exists(checkpoint_dir):
176 return None
177
178 version_folders = os.listdir(checkpoint_dir)
179 if not version_folders:
180 return None
181 version_num = [int(v.split("-")[-1]) for v in version_folders]
182 version_folder_pairs = sorted(
183 zip(version_num, version_folders), reverse=True
184 )
185 for version, folder in version_folder_pairs:
186 folder_dir = os.path.join(checkpoint_dir, folder)
187 if CheckpointSaver.check_checkpoint_valid(folder_dir):
188 return folder_dir
189 return None
190
191 @staticmethod
192 def check_checkpoint_valid(checkpoint_dir):
193 """Check whether the checkpoint directory is valid. The filename template
194         in the checkpoint directory is like "variables-{i}-of-{N}.ckpt". We will
195         parse any filename to get N, which is the total number of parameter
196         shards. It is valid if the number of files in the directory equals N.
197 """
198 if not os.path.exists(checkpoint_dir):
199 return False
200
201 shard_files = os.listdir(checkpoint_dir)
202 if not shard_files:
203 return False
204
205 shard_file_prefix = shard_files[0].split(".")[0]
206 expected_shard_num = int(shard_file_prefix.split("-")[-1])
207 return expected_shard_num == len(shard_files)
208
209 @staticmethod
210 def restore_params_from_checkpoint(checkpoint_dir, shard_index, shard_num):
211         """Restore a shard's parameters from the checkpoint directory.
212         If shard_num=1, the entire model's parameters will be restored.
213
214 Args:
215 checkpoint_dir: a directory with checkpoint files.
216 shard_index: Model shard index, e.g. the PS instance index
217 using ParameterServerStrategy with multiple PS instances.
218 shard_num: The total number of model shards, e.g. the total PS
219                 instance count using ParameterServerStrategy with multiple
220 PS instances.
221
222 Return:
223 parameters: A Parameter object which contains model version,
224 non-embedding parameters and embedding tables for the
225 PS instance with ps_id.
226 """
227
228 variable_shard_files = os.listdir(checkpoint_dir)
229 non_embedding_vars = {}
230 embedding_tables = {}
231 version = None
232 for shard_file in variable_shard_files:
233 shard_file_path = os.path.join(checkpoint_dir, shard_file)
234 model_pb = elasticdl_pb2.Model()
235 model_pb = load_pb_from_file(model_pb, shard_file_path)
236 if version is None:
237 version = model_pb.version
238 elif version != model_pb.version:
239 raise ValueError(
240 "The versions in model shards are not consistent"
241 )
242
243 for embedding_info_pb in model_pb.embedding_table_infos:
244 embedding_table = create_embedding_table(embedding_info_pb)
245 embedding_tables.setdefault(
246 embedding_table.name, embedding_table
247 )
248
249 (
250 shard_non_embedding_vars,
251 shard_embedding_table_values,
252 ) = _get_params_shard_from_pb(model_pb, shard_index, shard_num)
253
254 non_embedding_vars.update(shard_non_embedding_vars)
255 for name, pair in shard_embedding_table_values.items():
256 embedding_tables[name].set(pair[0], pair[1])
257
258 parameters = Parameters()
259 parameters.non_embedding_params.update(non_embedding_vars)
260 parameters.embedding_params.update(embedding_tables)
261 parameters.version = version
262 return parameters
263
264 @staticmethod
265 def get_version_from_checkpoint(checkpoint_dir):
266 """Get model version from the checkpoint. There may be several shard
267 files in the checkpoint directory. The model versions of shard files
268         are the same, so we only need to read one shard file to get model version.
269 """
270 variable_shard_files = os.listdir(checkpoint_dir)
271 shard_file_path = os.path.join(checkpoint_dir, variable_shard_files[0])
272 model_pb = elasticdl_pb2.Model()
273 model_pb = load_pb_from_file(model_pb, shard_file_path)
274 return model_pb.version
275
```
--- END FILES ---
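As an aside before the patch instructions: the `variables-{i}-of-{N}.ckpt` layout and the validity rule described in the `check_checkpoint_valid` docstring above can be illustrated with a small standalone snippet. This is a hypothetical walk-through only (the paths and version number are made up), not code from the repository:

```python
import os
import tempfile

# A version directory is considered valid only once all N shard files
# ("variables-{i}-of-{N}.ckpt") have been written by their PS instances.
checkpoint_dir = tempfile.mkdtemp()
version, shard_num = 100, 3
version_dir = os.path.join(checkpoint_dir, "version-%s" % version)
os.makedirs(version_dir, exist_ok=True)

for shard_index in range(shard_num):
    shard_path = os.path.join(
        version_dir, "variables-%s-of-%s.ckpt" % (shard_index, shard_num))
    open(shard_path, "wb").close()  # stand-in for save_pb_to_file

# Parse N from any shard filename and compare it with the file count.
shard_files = os.listdir(version_dir)
expected_shard_num = int(shard_files[0].split(".")[0].split("-")[-1])
print(expected_shard_num == len(shard_files))  # True once every shard exists
```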
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/elasticdl/python/common/save_utils.py b/elasticdl/python/common/save_utils.py
--- a/elasticdl/python/common/save_utils.py
+++ b/elasticdl/python/common/save_utils.py
@@ -1,5 +1,6 @@
import contextlib
import os
+import shutil
import tempfile
import tensorflow as tf
@@ -91,7 +92,7 @@
self._directory = os.getcwd() + "/checkpoint_dir"
if self._steps:
os.makedirs(self._directory, exist_ok=True)
- self._checkpoint_list = []
+ self._checkpoint_dir_list = []
self._include_evaluation = include_evaluation
self._eval_checkpoint_dir = (
tempfile.mkdtemp() if include_evaluation else ""
@@ -141,12 +142,12 @@
e.g. shard_number is the number of PS instances using
ParameterServerStrategy.
"""
- file = self._get_checkpoint_file(
+ filename = self._get_checkpoint_file(
version, is_eval_checkpoint, shard_index, shard_num
)
- save_pb_to_file(model, file)
+ save_pb_to_file(model, filename)
if not is_eval_checkpoint:
- self._checkpoint_list.append(Checkpoint(version, file))
+ self._checkpoint_dir_list.append(os.path.dirname(filename))
if self._max_versions:
self._delete_old_checkpoints_if_needed()
@@ -154,20 +155,16 @@
"""Delete the oldest checkpoint files and keep the number of
checkpoints is not beyond max_version.
"""
- while len(self._checkpoint_list) > self._max_versions:
- checkpoint_file = self._checkpoint_list.pop(0).file
- checkpoint_version_dir = os.path.dirname(checkpoint_file)
- with contextlib.suppress(FileNotFoundError):
- os.remove(checkpoint_file)
- # Remove the directory if empty
- if not os.listdir(checkpoint_version_dir):
- os.rmdir(checkpoint_version_dir)
-
- def get_latest_checkpoint_version(self):
- """Get the latest checkpointed model version"""
- if not self._checkpoint_list:
- raise RuntimeError("No model checkpoint available")
- return self._checkpoint_list[-1].version
+ if len(self._checkpoint_dir_list) > self._max_versions:
+ old_version_dir = self._checkpoint_dir_list[0]
+
+ # Some PS instances have not saved checkpoint shard files of
+ # the version if invalid. And the slowest PS will remove the
+ # old version checkpoint.
+ if self.check_checkpoint_valid(old_version_dir):
+ self._checkpoint_dir_list.pop(0)
+ with contextlib.suppress(FileNotFoundError):
+ shutil.rmtree(old_version_dir)
@staticmethod
def get_valid_lastest_version_dir(checkpoint_dir):
|
{"golden_diff": "diff --git a/elasticdl/python/common/save_utils.py b/elasticdl/python/common/save_utils.py\n--- a/elasticdl/python/common/save_utils.py\n+++ b/elasticdl/python/common/save_utils.py\n@@ -1,5 +1,6 @@\n import contextlib\n import os\n+import shutil\n import tempfile\n \n import tensorflow as tf\n@@ -91,7 +92,7 @@\n self._directory = os.getcwd() + \"/checkpoint_dir\"\n if self._steps:\n os.makedirs(self._directory, exist_ok=True)\n- self._checkpoint_list = []\n+ self._checkpoint_dir_list = []\n self._include_evaluation = include_evaluation\n self._eval_checkpoint_dir = (\n tempfile.mkdtemp() if include_evaluation else \"\"\n@@ -141,12 +142,12 @@\n e.g. shard_number is the number of PS instances using\n ParameterServerStrategy.\n \"\"\"\n- file = self._get_checkpoint_file(\n+ filename = self._get_checkpoint_file(\n version, is_eval_checkpoint, shard_index, shard_num\n )\n- save_pb_to_file(model, file)\n+ save_pb_to_file(model, filename)\n if not is_eval_checkpoint:\n- self._checkpoint_list.append(Checkpoint(version, file))\n+ self._checkpoint_dir_list.append(os.path.dirname(filename))\n if self._max_versions:\n self._delete_old_checkpoints_if_needed()\n \n@@ -154,20 +155,16 @@\n \"\"\"Delete the oldest checkpoint files and keep the number of\n checkpoints is not beyond max_version.\n \"\"\"\n- while len(self._checkpoint_list) > self._max_versions:\n- checkpoint_file = self._checkpoint_list.pop(0).file\n- checkpoint_version_dir = os.path.dirname(checkpoint_file)\n- with contextlib.suppress(FileNotFoundError):\n- os.remove(checkpoint_file)\n- # Remove the directory if empty\n- if not os.listdir(checkpoint_version_dir):\n- os.rmdir(checkpoint_version_dir)\n-\n- def get_latest_checkpoint_version(self):\n- \"\"\"Get the latest checkpointed model version\"\"\"\n- if not self._checkpoint_list:\n- raise RuntimeError(\"No model checkpoint available\")\n- return self._checkpoint_list[-1].version\n+ if len(self._checkpoint_dir_list) > self._max_versions:\n+ old_version_dir = self._checkpoint_dir_list[0]\n+\n+ # Some PS instances have not saved checkpoint shard files of\n+ # the version if invalid. 
And the slowest PS will remove the\n+ # old version checkpoint.\n+ if self.check_checkpoint_valid(old_version_dir):\n+ self._checkpoint_dir_list.pop(0)\n+ with contextlib.suppress(FileNotFoundError):\n+ shutil.rmtree(old_version_dir)\n \n @staticmethod\n def get_valid_lastest_version_dir(checkpoint_dir):\n", "issue": "\"No such file or directory\" occasionally happens when the PS save checkpoint.\n``` \r\nFile \"/elasticdl/python/worker/worker.py\", line 579, in report_gradient_to_ps\r\n res = report_future.result()\r\n File \"/usr/local/lib/python3.6/dist-packages/grpc/_channel.py\", line 625, in result\r\n raise self\r\ngrpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:\r\n status = StatusCode.UNKNOWN\r\n details = \"Exception calling application: [Errno 2] No such file or directory: '/testdata/ckpt/20200309-19/version-26000/variables-0-of-3.ckpt'\"\r\n debug_error_string = \"{\"created\":\"@1583757719.258002676\",\"description\":\"Error received from peer ipv4:11.149.168.3:2222\",\"file\":\"src/core/lib/surface/call.cc\",\"file_line\":1056,\"grpc_message\":\"Exception calling application: [Errno 2] No such file or directory: '/testdata/elasticdl/kept/dban_lookup/20200309-19/version-26000/variables-0-of-3.ckpt'\",\"grpc_status\":2}\"\r\n```\n", "before_files": [{"content": "import contextlib\nimport os\nimport tempfile\n\nimport tensorflow as tf\n\nfrom elasticdl.proto import elasticdl_pb2\nfrom elasticdl.python.common.hash_utils import int_to_id, string_to_id\nfrom elasticdl.python.common.tensor_utils import (\n pb_to_indexed_slices,\n pb_to_ndarray,\n)\nfrom elasticdl.python.ps.embedding_table import create_embedding_table\nfrom elasticdl.python.ps.parameters import Parameters\n\n\ndef save_pb_to_file(pb_obj, file_name):\n \"\"\"Save a protobuf object to file\"\"\"\n encoded_model = pb_obj.SerializeToString()\n with open(file_name, \"wb\") as f:\n f.write(encoded_model)\n\n\ndef load_pb_from_file(pb_obj, file_name):\n \"\"\"Load a protobuf object from a file\"\"\"\n with open(file_name, \"rb\") as f:\n pb_obj.ParseFromString(f.read())\n return pb_obj\n\n\ndef _get_params_shard_from_pb(model_pb, shard_index, shard_num):\n \"\"\"Get parameters including variables values and embedding table\n from a model protobuf.\n Args:\n model_pb: A Model protobuf instance.\n shard_index: Model shard index.\n shard_num: The total number of model shards.\n Return:\n non_embedding_vars: A Python dict in which the key is a variable\n name and the value is a `tf.Variable` object.\n embedding_table_values: A Python dict in which the key is an embedding\n table name and the value is a tuple with 2 elements. 
The value[0]\n is indices and value[1] is the corresponding embedding vector.\n \"\"\"\n non_embedding_vars = {}\n embedding_table_values = {}\n\n for name, pb in model_pb.dense_parameters.items():\n if string_to_id(name, shard_num) == shard_index:\n non_embedding_vars[name] = tf.Variable(\n initial_value=pb_to_ndarray(pb), trainable=True\n )\n for name, pb in model_pb.embedding_tables.items():\n embedding_table_values.setdefault(name, ([], []))\n t = pb_to_indexed_slices(pb)\n for embedding_id, vector in zip(t.indices, t.values):\n if int_to_id(embedding_id, shard_num) == shard_index:\n embedding_table_values[name][0].append(embedding_id)\n embedding_table_values[name][1].append(vector)\n return non_embedding_vars, embedding_table_values\n\n\nclass Checkpoint(object):\n def __init__(self, version, file):\n self.version = version\n self.file = file\n\n\nclass CheckpointSaver(object):\n \"\"\"Checkpoint Saver implementation\"\"\"\n\n def __init__(\n self,\n checkpoint_dir,\n checkpoint_steps,\n keep_checkpoint_max,\n include_evaluation,\n ):\n \"\"\"\n Arguments:\n checkpoint_dir: The directory to store the checkpoint files.\n Directory will be created if not exist.\n checkpoint_steps: Save checkpoint every this many steps.\n keep_checkpoint_max: The maximum number of recent checkpoint\n files to keep.\n \"\"\"\n self._directory = checkpoint_dir\n self._steps = checkpoint_steps\n self._max_versions = keep_checkpoint_max\n if not self._directory:\n self._directory = os.getcwd() + \"/checkpoint_dir\"\n if self._steps:\n os.makedirs(self._directory, exist_ok=True)\n self._checkpoint_list = []\n self._include_evaluation = include_evaluation\n self._eval_checkpoint_dir = (\n tempfile.mkdtemp() if include_evaluation else \"\"\n )\n\n def _get_checkpoint_file(\n self, version, is_eval_checkpoint=False, shard_index=0, shard_num=1\n ):\n checkpoint_dir = (\n self._eval_checkpoint_dir\n if is_eval_checkpoint\n else self._directory\n )\n checkpoint_version_dir = os.path.join(\n checkpoint_dir, \"version-%s\" % str(version)\n )\n with contextlib.suppress(FileExistsError):\n os.makedirs(checkpoint_version_dir, exist_ok=True)\n return \"%s/variables-%s-of-%s.ckpt\" % (\n checkpoint_version_dir,\n str(shard_index),\n str(shard_num),\n )\n\n def is_enabled(self):\n \"\"\"Checkpoint is enabled or not\"\"\"\n return self._steps\n\n def need_to_checkpoint(self, version):\n \"\"\"Check if the given model version needs to be checkpointed\"\"\"\n return self.is_enabled() and version % self._steps == 0\n\n def save(\n self, version, model, is_eval_checkpoint, shard_index=0, shard_num=1\n ):\n \"\"\"Checkpoint the given model\n\n Args:\n version (int): iteration steps\n model: a model protobuf instance\n is_eval_checkpoint (bool): if True, the model will be saved to\n a temporary directory.\n shard_index (int): default 0. The shard index in all\n model shard files, e.g. the shard_index is PS instance index\n using ParameterServerStrategy.\n shard_number (int): default 1. The number of model shards,\n e.g. 
shard_number is the number of PS instances using\n ParameterServerStrategy.\n \"\"\"\n file = self._get_checkpoint_file(\n version, is_eval_checkpoint, shard_index, shard_num\n )\n save_pb_to_file(model, file)\n if not is_eval_checkpoint:\n self._checkpoint_list.append(Checkpoint(version, file))\n if self._max_versions:\n self._delete_old_checkpoints_if_needed()\n\n def _delete_old_checkpoints_if_needed(self):\n \"\"\"Delete the oldest checkpoint files and keep the number of\n checkpoints is not beyond max_version.\n \"\"\"\n while len(self._checkpoint_list) > self._max_versions:\n checkpoint_file = self._checkpoint_list.pop(0).file\n checkpoint_version_dir = os.path.dirname(checkpoint_file)\n with contextlib.suppress(FileNotFoundError):\n os.remove(checkpoint_file)\n # Remove the directory if empty\n if not os.listdir(checkpoint_version_dir):\n os.rmdir(checkpoint_version_dir)\n\n def get_latest_checkpoint_version(self):\n \"\"\"Get the latest checkpointed model version\"\"\"\n if not self._checkpoint_list:\n raise RuntimeError(\"No model checkpoint available\")\n return self._checkpoint_list[-1].version\n\n @staticmethod\n def get_valid_lastest_version_dir(checkpoint_dir):\n \"\"\"Get the valid and lastest version checkpoint directory\"\"\"\n if not checkpoint_dir or not os.path.exists(checkpoint_dir):\n return None\n\n version_folders = os.listdir(checkpoint_dir)\n if not version_folders:\n return None\n version_num = [int(v.split(\"-\")[-1]) for v in version_folders]\n version_folder_pairs = sorted(\n zip(version_num, version_folders), reverse=True\n )\n for version, folder in version_folder_pairs:\n folder_dir = os.path.join(checkpoint_dir, folder)\n if CheckpointSaver.check_checkpoint_valid(folder_dir):\n return folder_dir\n return None\n\n @staticmethod\n def check_checkpoint_valid(checkpoint_dir):\n \"\"\"Check whether the checkpoint directory is valid. The filename template\n in the checkpoint directory like \"variables-{i}-of-{N}.ckpt\". We will\n parse any filename to get N which is the total number of parameters\n shards. It is valid if the number of files in the directory N.\n \"\"\"\n if not os.path.exists(checkpoint_dir):\n return False\n\n shard_files = os.listdir(checkpoint_dir)\n if not shard_files:\n return False\n\n shard_file_prefix = shard_files[0].split(\".\")[0]\n expected_shard_num = int(shard_file_prefix.split(\"-\")[-1])\n return expected_shard_num == len(shard_files)\n\n @staticmethod\n def restore_params_from_checkpoint(checkpoint_dir, shard_index, shard_num):\n \"\"\"Restore a shard parameters from the checkpoint directory.\n If shard_num=1, a entire model parameters will be restored.\n\n Args:\n checkpoint_dir: a directory with checkpoint files.\n shard_index: Model shard index, e.g. the PS instance index\n using ParameterServerStrategy with multiple PS instances.\n shard_num: The total number of model shards, e.g. 
the total PS\n instancecount using ParameterServerStrategy with multiple\n PS instances.\n\n Return:\n parameters: A Parameter object which contains model version,\n non-embedding parameters and embedding tables for the\n PS instance with ps_id.\n \"\"\"\n\n variable_shard_files = os.listdir(checkpoint_dir)\n non_embedding_vars = {}\n embedding_tables = {}\n version = None\n for shard_file in variable_shard_files:\n shard_file_path = os.path.join(checkpoint_dir, shard_file)\n model_pb = elasticdl_pb2.Model()\n model_pb = load_pb_from_file(model_pb, shard_file_path)\n if version is None:\n version = model_pb.version\n elif version != model_pb.version:\n raise ValueError(\n \"The versions in model shards are not consistent\"\n )\n\n for embedding_info_pb in model_pb.embedding_table_infos:\n embedding_table = create_embedding_table(embedding_info_pb)\n embedding_tables.setdefault(\n embedding_table.name, embedding_table\n )\n\n (\n shard_non_embedding_vars,\n shard_embedding_table_values,\n ) = _get_params_shard_from_pb(model_pb, shard_index, shard_num)\n\n non_embedding_vars.update(shard_non_embedding_vars)\n for name, pair in shard_embedding_table_values.items():\n embedding_tables[name].set(pair[0], pair[1])\n\n parameters = Parameters()\n parameters.non_embedding_params.update(non_embedding_vars)\n parameters.embedding_params.update(embedding_tables)\n parameters.version = version\n return parameters\n\n @staticmethod\n def get_version_from_checkpoint(checkpoint_dir):\n \"\"\"Get model version from the checkpoint. There may be several shard\n files in the checkpoint directory. The model versions of shard files\n are same, so we only need to read one shard file to get model version.\n \"\"\"\n variable_shard_files = os.listdir(checkpoint_dir)\n shard_file_path = os.path.join(checkpoint_dir, variable_shard_files[0])\n model_pb = elasticdl_pb2.Model()\n model_pb = load_pb_from_file(model_pb, shard_file_path)\n return model_pb.version\n", "path": "elasticdl/python/common/save_utils.py"}], "after_files": [{"content": "import contextlib\nimport os\nimport shutil\nimport tempfile\n\nimport tensorflow as tf\n\nfrom elasticdl.proto import elasticdl_pb2\nfrom elasticdl.python.common.hash_utils import int_to_id, string_to_id\nfrom elasticdl.python.common.tensor_utils import (\n pb_to_indexed_slices,\n pb_to_ndarray,\n)\nfrom elasticdl.python.ps.embedding_table import create_embedding_table\nfrom elasticdl.python.ps.parameters import Parameters\n\n\ndef save_pb_to_file(pb_obj, file_name):\n \"\"\"Save a protobuf object to file\"\"\"\n encoded_model = pb_obj.SerializeToString()\n with open(file_name, \"wb\") as f:\n f.write(encoded_model)\n\n\ndef load_pb_from_file(pb_obj, file_name):\n \"\"\"Load a protobuf object from a file\"\"\"\n with open(file_name, \"rb\") as f:\n pb_obj.ParseFromString(f.read())\n return pb_obj\n\n\ndef _get_params_shard_from_pb(model_pb, shard_index, shard_num):\n \"\"\"Get parameters including variables values and embedding table\n from a model protobuf.\n Args:\n model_pb: A Model protobuf instance.\n shard_index: Model shard index.\n shard_num: The total number of model shards.\n Return:\n non_embedding_vars: A Python dict in which the key is a variable\n name and the value is a `tf.Variable` object.\n embedding_table_values: A Python dict in which the key is an embedding\n table name and the value is a tuple with 2 elements. 
The value[0]\n is indices and value[1] is the corresponding embedding vector.\n \"\"\"\n non_embedding_vars = {}\n embedding_table_values = {}\n\n for name, pb in model_pb.dense_parameters.items():\n if string_to_id(name, shard_num) == shard_index:\n non_embedding_vars[name] = tf.Variable(\n initial_value=pb_to_ndarray(pb), trainable=True\n )\n for name, pb in model_pb.embedding_tables.items():\n embedding_table_values.setdefault(name, ([], []))\n t = pb_to_indexed_slices(pb)\n for embedding_id, vector in zip(t.indices, t.values):\n if int_to_id(embedding_id, shard_num) == shard_index:\n embedding_table_values[name][0].append(embedding_id)\n embedding_table_values[name][1].append(vector)\n return non_embedding_vars, embedding_table_values\n\n\nclass Checkpoint(object):\n def __init__(self, version, file):\n self.version = version\n self.file = file\n\n\nclass CheckpointSaver(object):\n \"\"\"Checkpoint Saver implementation\"\"\"\n\n def __init__(\n self,\n checkpoint_dir,\n checkpoint_steps,\n keep_checkpoint_max,\n include_evaluation,\n ):\n \"\"\"\n Arguments:\n checkpoint_dir: The directory to store the checkpoint files.\n Directory will be created if not exist.\n checkpoint_steps: Save checkpoint every this many steps.\n keep_checkpoint_max: The maximum number of recent checkpoint\n files to keep.\n \"\"\"\n self._directory = checkpoint_dir\n self._steps = checkpoint_steps\n self._max_versions = keep_checkpoint_max\n if not self._directory:\n self._directory = os.getcwd() + \"/checkpoint_dir\"\n if self._steps:\n os.makedirs(self._directory, exist_ok=True)\n self._checkpoint_dir_list = []\n self._include_evaluation = include_evaluation\n self._eval_checkpoint_dir = (\n tempfile.mkdtemp() if include_evaluation else \"\"\n )\n\n def _get_checkpoint_file(\n self, version, is_eval_checkpoint=False, shard_index=0, shard_num=1\n ):\n checkpoint_dir = (\n self._eval_checkpoint_dir\n if is_eval_checkpoint\n else self._directory\n )\n checkpoint_version_dir = os.path.join(\n checkpoint_dir, \"version-%s\" % str(version)\n )\n with contextlib.suppress(FileExistsError):\n os.makedirs(checkpoint_version_dir, exist_ok=True)\n return \"%s/variables-%s-of-%s.ckpt\" % (\n checkpoint_version_dir,\n str(shard_index),\n str(shard_num),\n )\n\n def is_enabled(self):\n \"\"\"Checkpoint is enabled or not\"\"\"\n return self._steps\n\n def need_to_checkpoint(self, version):\n \"\"\"Check if the given model version needs to be checkpointed\"\"\"\n return self.is_enabled() and version % self._steps == 0\n\n def save(\n self, version, model, is_eval_checkpoint, shard_index=0, shard_num=1\n ):\n \"\"\"Checkpoint the given model\n\n Args:\n version (int): iteration steps\n model: a model protobuf instance\n is_eval_checkpoint (bool): if True, the model will be saved to\n a temporary directory.\n shard_index (int): default 0. The shard index in all\n model shard files, e.g. the shard_index is PS instance index\n using ParameterServerStrategy.\n shard_number (int): default 1. The number of model shards,\n e.g. 
shard_number is the number of PS instances using\n ParameterServerStrategy.\n \"\"\"\n filename = self._get_checkpoint_file(\n version, is_eval_checkpoint, shard_index, shard_num\n )\n save_pb_to_file(model, filename)\n if not is_eval_checkpoint:\n self._checkpoint_dir_list.append(os.path.dirname(filename))\n if self._max_versions:\n self._delete_old_checkpoints_if_needed()\n\n def _delete_old_checkpoints_if_needed(self):\n \"\"\"Delete the oldest checkpoint files and keep the number of\n checkpoints is not beyond max_version.\n \"\"\"\n if len(self._checkpoint_dir_list) > self._max_versions:\n old_version_dir = self._checkpoint_dir_list[0]\n\n # Some PS instances have not saved checkpoint shard files of\n # the version if invalid. And the slowest PS will remove the\n # old version checkpoint.\n if self.check_checkpoint_valid(old_version_dir):\n self._checkpoint_dir_list.pop(0)\n with contextlib.suppress(FileNotFoundError):\n shutil.rmtree(old_version_dir)\n\n @staticmethod\n def get_valid_lastest_version_dir(checkpoint_dir):\n \"\"\"Get the valid and lastest version checkpoint directory\"\"\"\n if not checkpoint_dir or not os.path.exists(checkpoint_dir):\n return None\n\n version_folders = os.listdir(checkpoint_dir)\n if not version_folders:\n return None\n version_num = [int(v.split(\"-\")[-1]) for v in version_folders]\n version_folder_pairs = sorted(\n zip(version_num, version_folders), reverse=True\n )\n for version, folder in version_folder_pairs:\n folder_dir = os.path.join(checkpoint_dir, folder)\n if CheckpointSaver.check_checkpoint_valid(folder_dir):\n return folder_dir\n return None\n\n @staticmethod\n def check_checkpoint_valid(checkpoint_dir):\n \"\"\"Check whether the checkpoint directory is valid. The filename template\n in the checkpoint directory like \"variables-{i}-of-{N}.ckpt\". We will\n parse any filename to get N which is the total number of parameters\n shards. It is valid if the number of files in the directory N.\n \"\"\"\n if not os.path.exists(checkpoint_dir):\n return False\n\n shard_files = os.listdir(checkpoint_dir)\n if not shard_files:\n return False\n\n shard_file_prefix = shard_files[0].split(\".\")[0]\n expected_shard_num = int(shard_file_prefix.split(\"-\")[-1])\n return expected_shard_num == len(shard_files)\n\n @staticmethod\n def restore_params_from_checkpoint(checkpoint_dir, shard_index, shard_num):\n \"\"\"Restore a shard parameters from the checkpoint directory.\n If shard_num=1, a entire model parameters will be restored.\n\n Args:\n checkpoint_dir: a directory with checkpoint files.\n shard_index: Model shard index, e.g. the PS instance index\n using ParameterServerStrategy with multiple PS instances.\n shard_num: The total number of model shards, e.g. 
the total PS\n instancecount using ParameterServerStrategy with multiple\n PS instances.\n\n Return:\n parameters: A Parameter object which contains model version,\n non-embedding parameters and embedding tables for the\n PS instance with ps_id.\n \"\"\"\n\n variable_shard_files = os.listdir(checkpoint_dir)\n non_embedding_vars = {}\n embedding_tables = {}\n version = None\n for shard_file in variable_shard_files:\n shard_file_path = os.path.join(checkpoint_dir, shard_file)\n model_pb = elasticdl_pb2.Model()\n model_pb = load_pb_from_file(model_pb, shard_file_path)\n if version is None:\n version = model_pb.version\n elif version != model_pb.version:\n raise ValueError(\n \"The versions in model shards are not consistent\"\n )\n\n for embedding_info_pb in model_pb.embedding_table_infos:\n embedding_table = create_embedding_table(embedding_info_pb)\n embedding_tables.setdefault(\n embedding_table.name, embedding_table\n )\n\n (\n shard_non_embedding_vars,\n shard_embedding_table_values,\n ) = _get_params_shard_from_pb(model_pb, shard_index, shard_num)\n\n non_embedding_vars.update(shard_non_embedding_vars)\n for name, pair in shard_embedding_table_values.items():\n embedding_tables[name].set(pair[0], pair[1])\n\n parameters = Parameters()\n parameters.non_embedding_params.update(non_embedding_vars)\n parameters.embedding_params.update(embedding_tables)\n parameters.version = version\n return parameters\n\n @staticmethod\n def get_version_from_checkpoint(checkpoint_dir):\n \"\"\"Get model version from the checkpoint. There may be several shard\n files in the checkpoint directory. The model versions of shard files\n are same, so we only need to read one shard file to get model version.\n \"\"\"\n variable_shard_files = os.listdir(checkpoint_dir)\n shard_file_path = os.path.join(checkpoint_dir, variable_shard_files[0])\n model_pb = elasticdl_pb2.Model()\n model_pb = load_pb_from_file(model_pb, shard_file_path)\n return model_pb.version\n", "path": "elasticdl/python/common/save_utils.py"}]}
| 3,448 | 612 |
gh_patches_debug_38674
|
rasdani/github-patches
|
git_diff
|
opsdroid__opsdroid-163
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change default modules directory
Currently the default modules directory location is `./modules`.
This makes a few assumptions:
* Current directory is in the python path
* There are no other python modules in the current directory
* There are no other modules named `modules`
* Current directory is writable
A better default location may be `~/.opsdroid/modules/opsdroid-modules`. This would be created if it doesn't exist and `~/.opsdroid/modules` could be added to the python path without fear of collision as `opsdroid-modules` is less generic. As it is in the home directory we can be fairly sure it is writable.
Also when a user specifies a custom modules directory it should still be suffixed with `/opsdroid-modules` and the custom directory should be added to the python path.
--- END ISSUE ---
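For illustration, here is a minimal sketch of the directory handling the issue asks for: expand the default (or user-supplied) path, add it to the Python path, create it if needed, and suffix it with the less generic `opsdroid-modules` name. This is a hypothetical helper written for this discussion, not opsdroid's actual API; the real change is in the patch further below.

```python
import os
import sys

# Proposed default from the issue; treated here as an assumption.
DEFAULT_MODULES_PATH = "~/.opsdroid/modules"


def resolve_modules_directory(config):
    """Return the module install directory, creating it and extending sys.path."""
    base = os.path.expanduser(config.get("module-path", DEFAULT_MODULES_PATH))
    os.makedirs(base, exist_ok=True)
    if base not in sys.path:
        sys.path.append(base)
    # A non-generic suffix avoids collisions with other packages named "modules".
    modules_directory = os.path.join(base, "opsdroid-modules")
    os.makedirs(modules_directory, exist_ok=True)
    return modules_directory
```

With no `module-path` in the config this resolves to `~/.opsdroid/modules/opsdroid-modules`, which matches the default the issue proposes.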
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opsdroid/loader.py`
Content:
```
1 """Class for loading in modules to OpsDroid."""
2
3 import logging
4 import os
5 import sys
6 import shutil
7 import subprocess
8 import importlib
9 import yaml
10 from opsdroid.const import (
11 DEFAULT_GIT_URL, MODULES_DIRECTORY, DEFAULT_MODULE_BRANCH)
12
13
14 _LOGGER = logging.getLogger(__name__)
15
16
17 class Loader:
18 """Class to load in config and modules."""
19
20 def __init__(self, opsdroid):
21 """Create object with opsdroid instance."""
22 self.opsdroid = opsdroid
23 self.modules_directory = MODULES_DIRECTORY
24 self.current_import_config = None
25 _LOGGER.debug("Loaded loader")
26
27 @staticmethod
28 def import_module(config):
29 """Import module namespace as variable and return it."""
30 try:
31 module = importlib.import_module(
32 config["module_path"] + "." + config["name"])
33 _LOGGER.debug("Loaded " + config["type"] + ": " +
34 config["module_path"])
35 return module
36 except ImportError as error:
37 _LOGGER.debug("Failed to load " + config["type"] +
38 " " + config["module_path"] + "." + config["name"])
39 _LOGGER.debug(error)
40
41 try:
42 module = importlib.import_module(
43 config["module_path"])
44 _LOGGER.debug("Loaded " + config["type"] + ": " +
45 config["module_path"])
46 return module
47 except ImportError as error:
48 _LOGGER.debug("Failed to load " + config["type"] +
49 " " + config["module_path"])
50 _LOGGER.debug(error)
51
52 _LOGGER.error("Failed to load " + config["type"] +
53 " " + config["module_path"])
54 return None
55
56 @staticmethod
57 def check_cache(config):
58 """Remove module if 'no-cache' set in config."""
59 if "no-cache" in config \
60 and config["no-cache"]:
61 _LOGGER.debug("'no-cache' set, removing " + config["install_path"])
62 if os.path.isdir(config["install_path"]):
63 shutil.rmtree(config["install_path"])
64 if os.path.isfile(config["install_path"] + ".py"):
65 os.remove(config["install_path"] + ".py")
66
67 def build_module_path(self, path_type, config):
68 """Generate the module path from name and type."""
69 if path_type == "import":
70 return MODULES_DIRECTORY + "." + config["type"] + \
71 "." + config["name"]
72 elif path_type == "install":
73 return self.modules_directory + "/" + config["type"] + \
74 "/" + config["name"]
75
76 @staticmethod
77 def git_clone(git_url, install_path, branch):
78 """Clone a git repo to a location and wait for finish."""
79 process = subprocess.Popen(["git", "clone", "-b", branch,
80 git_url, install_path], shell=False,
81 stdout=subprocess.PIPE,
82 stderr=subprocess.PIPE)
83 for output in process.communicate():
84 if output != "":
85 for line in output.splitlines():
86 _LOGGER.debug(str(line).strip())
87 process.wait()
88
89 @staticmethod
90 def pip_install_deps(requirements_path):
91 """Pip install a requirements.txt file and wait for finish."""
92 process = subprocess.Popen(["pip", "install", "-r", requirements_path],
93 shell=False,
94 stdout=subprocess.PIPE,
95 stderr=subprocess.PIPE)
96 for output in process.communicate():
97 if output != "":
98 for line in output.splitlines():
99 _LOGGER.debug(str(line).strip())
100 process.wait()
101
102 def load_config_file(self, config_paths):
103 """Load a yaml config file from path."""
104 config_path = ""
105 for possible_path in config_paths:
106 if not os.path.isfile(possible_path):
107 _LOGGER.debug("Config file " + possible_path +
108 " not found")
109 else:
110 config_path = possible_path
111 break
112
113 if not config_path:
114 self.opsdroid.critical("No configuration files found", 1)
115
116 try:
117 with open(config_path, 'r') as stream:
118 _LOGGER.info("Loaded config from %s", config_path)
119 return yaml.load(stream)
120 except yaml.YAMLError as error:
121 self.opsdroid.critical(error, 1)
122 except FileNotFoundError as error:
123 self.opsdroid.critical(str(error), 1)
124
125 def load_modules_from_config(self, config):
126 """Load all module types based on config."""
127 _LOGGER.debug("Loading modules from config")
128
129 if "module-path" in config:
130 sys.path.append(config["module-path"])
131 if not os.path.isdir(config["module-path"]):
132 os.makedirs(config["module-path"], exist_ok=True)
133 self.modules_directory = os.path.join(config["module-path"],
134 self.modules_directory)
135
136 connectors, databases, skills = None, None, None
137
138 if 'databases' in config.keys():
139 databases = self._load_modules('database', config['databases'])
140 else:
141 _LOGGER.warning("No databases in configuration")
142
143 if 'skills' in config.keys():
144 skills = self._load_modules('skill', config['skills'])
145 else:
146 self.opsdroid.critical(
147 "No skills in configuration, at least 1 required", 1)
148
149 if 'connectors' in config.keys():
150 connectors = self._load_modules('connector', config['connectors'])
151 else:
152 self.opsdroid.critical(
153 "No connectors in configuration, at least 1 required", 1)
154
155 return connectors, databases, skills
156
157 def _load_modules(self, modules_type, modules):
158 """Install and load modules."""
159 _LOGGER.debug("Loading " + modules_type + " modules")
160 loaded_modules = []
161
162 # Create modules directory if doesn't exist
163 if not os.path.isdir(self.modules_directory):
164 os.makedirs(self.modules_directory)
165
166 for module in modules:
167
168 # Set up module config
169 config = module
170 config = {} if config is None else config
171 config["name"] = module["name"]
172 config["type"] = modules_type
173 config["module_path"] = self.build_module_path("import", config)
174 config["install_path"] = self.build_module_path("install", config)
175 if "branch" not in config:
176 config["branch"] = DEFAULT_MODULE_BRANCH
177
178 # Remove module for reinstall if no-cache set
179 self.check_cache(config)
180
181 # Install module
182 self._install_module(config)
183
184 # Import module
185 self.current_import_config = config
186 module = self.import_module(config)
187 if module is not None:
188 loaded_modules.append({
189 "module": module,
190 "config": config})
191 else:
192 _LOGGER.error(
193 "Module " + config["name"] + " failed to import")
194
195 return loaded_modules
196
197 def _install_module(self, config):
198 # pylint: disable=R0201
199 """Install a module."""
200 _LOGGER.debug("Installing " + config["name"])
201
202 if os.path.isdir(config["install_path"]) or \
203 os.path.isfile(config["install_path"] + ".py"):
204 # TODO Allow for updating or reinstalling of modules
205 _LOGGER.debug("Module " + config["name"] +
206 " already installed, skipping")
207 return
208
209 if "path" in config:
210 self._install_local_module(config)
211 else:
212 self._install_git_module(config)
213
214 if os.path.isdir(config["install_path"]):
215 _LOGGER.debug("Installed " + config["name"] +
216 " to " + config["install_path"])
217 else:
218 _LOGGER.debug("Install of " + config["name"] + " failed")
219
220         # Install module dependencies
221 if os.path.isfile(config["install_path"] + "/requirements.txt"):
222 self.pip_install_deps(config["install_path"] +
223 "/requirements.txt")
224
225 def _install_git_module(self, config):
226 """Install a module from a git repository."""
227 if config is not None and "repo" in config:
228 git_url = config["repo"]
229 else:
230 git_url = DEFAULT_GIT_URL + config["type"] + \
231 "-" + config["name"] + ".git"
232
233 if any(prefix in git_url for prefix in ["http", "https", "ssh"]):
234 # TODO Test if url or ssh path exists
235 # TODO Handle github authentication
236 _LOGGER.debug("Cloning from remote repository")
237 self.git_clone(git_url, config["install_path"],
238 config["branch"])
239 else:
240 if os.path.isdir(git_url):
241 _LOGGER.debug("Cloning from local repository")
242 self.git_clone(git_url, config["install_path"],
243 config["branch"])
244 else:
245 _LOGGER.debug("Could not find local git repo " + git_url)
246
247 @staticmethod
248 def _install_local_module(config):
249 """Install a module from a local path."""
250 installed = False
251
252 installdir, _ = os.path.split(config["install_path"])
253 if not os.path.isdir(installdir):
254 os.makedirs(installdir, exist_ok=True)
255
256 if os.path.isdir(config["path"]):
257 shutil.copytree(config["path"], config["install_path"])
258 installed = True
259
260 if os.path.isfile(config["path"]):
261 os.makedirs(config["install_path"], exist_ok=True)
262 shutil.copyfile(config["path"], config["install_path"] +
263 "/__init__.py")
264 installed = True
265
266 if not installed:
267 _LOGGER.error("Failed to install from " + config["path"])
268
```
Path: `opsdroid/const.py`
Content:
```
1 """Constants used by OpsDroid."""
2
3 __version__ = "0.7.1"
4
5 LOG_FILENAME = 'output.log'
6 DEFAULT_GIT_URL = "https://github.com/opsdroid/"
7 MODULES_DIRECTORY = "modules"
8 DEFAULT_MODULE_BRANCH = "master"
9
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opsdroid/const.py b/opsdroid/const.py
--- a/opsdroid/const.py
+++ b/opsdroid/const.py
@@ -4,5 +4,6 @@
LOG_FILENAME = 'output.log'
DEFAULT_GIT_URL = "https://github.com/opsdroid/"
-MODULES_DIRECTORY = "modules"
+MODULES_DIRECTORY = "opsdroid-modules"
+DEFAULT_MODULES_PATH = "~/.opsdroid/modules"
DEFAULT_MODULE_BRANCH = "master"
diff --git a/opsdroid/loader.py b/opsdroid/loader.py
--- a/opsdroid/loader.py
+++ b/opsdroid/loader.py
@@ -8,7 +8,8 @@
import importlib
import yaml
from opsdroid.const import (
- DEFAULT_GIT_URL, MODULES_DIRECTORY, DEFAULT_MODULE_BRANCH)
+ DEFAULT_GIT_URL, MODULES_DIRECTORY, DEFAULT_MODULES_PATH,
+ DEFAULT_MODULE_BRANCH)
_LOGGER = logging.getLogger(__name__)
@@ -20,7 +21,7 @@
def __init__(self, opsdroid):
"""Create object with opsdroid instance."""
self.opsdroid = opsdroid
- self.modules_directory = MODULES_DIRECTORY
+ self.modules_directory = None
self.current_import_config = None
_LOGGER.debug("Loaded loader")
@@ -122,16 +123,26 @@
except FileNotFoundError as error:
self.opsdroid.critical(str(error), 1)
+ def setup_modules_directory(self, config):
+ """Create and configure the modules directory."""
+ module_path = os.path.expanduser(
+ config.get("module-path", DEFAULT_MODULES_PATH))
+ sys.path.append(module_path)
+
+ if not os.path.isdir(module_path):
+ os.makedirs(module_path, exist_ok=True)
+
+ self.modules_directory = os.path.join(module_path, MODULES_DIRECTORY)
+
+ # Create modules directory if doesn't exist
+ if not os.path.isdir(self.modules_directory):
+ os.makedirs(self.modules_directory)
+
def load_modules_from_config(self, config):
"""Load all module types based on config."""
_LOGGER.debug("Loading modules from config")
- if "module-path" in config:
- sys.path.append(config["module-path"])
- if not os.path.isdir(config["module-path"]):
- os.makedirs(config["module-path"], exist_ok=True)
- self.modules_directory = os.path.join(config["module-path"],
- self.modules_directory)
+ self.setup_modules_directory(config)
connectors, databases, skills = None, None, None
@@ -159,10 +170,6 @@
_LOGGER.debug("Loading " + modules_type + " modules")
loaded_modules = []
- # Create modules directory if doesn't exist
- if not os.path.isdir(self.modules_directory):
- os.makedirs(self.modules_directory)
-
for module in modules:
# Set up module config
|
{"golden_diff": "diff --git a/opsdroid/const.py b/opsdroid/const.py\n--- a/opsdroid/const.py\n+++ b/opsdroid/const.py\n@@ -4,5 +4,6 @@\n \n LOG_FILENAME = 'output.log'\n DEFAULT_GIT_URL = \"https://github.com/opsdroid/\"\n-MODULES_DIRECTORY = \"modules\"\n+MODULES_DIRECTORY = \"opsdroid-modules\"\n+DEFAULT_MODULES_PATH = \"~/.opsdroid/modules\"\n DEFAULT_MODULE_BRANCH = \"master\"\ndiff --git a/opsdroid/loader.py b/opsdroid/loader.py\n--- a/opsdroid/loader.py\n+++ b/opsdroid/loader.py\n@@ -8,7 +8,8 @@\n import importlib\n import yaml\n from opsdroid.const import (\n- DEFAULT_GIT_URL, MODULES_DIRECTORY, DEFAULT_MODULE_BRANCH)\n+ DEFAULT_GIT_URL, MODULES_DIRECTORY, DEFAULT_MODULES_PATH,\n+ DEFAULT_MODULE_BRANCH)\n \n \n _LOGGER = logging.getLogger(__name__)\n@@ -20,7 +21,7 @@\n def __init__(self, opsdroid):\n \"\"\"Create object with opsdroid instance.\"\"\"\n self.opsdroid = opsdroid\n- self.modules_directory = MODULES_DIRECTORY\n+ self.modules_directory = None\n self.current_import_config = None\n _LOGGER.debug(\"Loaded loader\")\n \n@@ -122,16 +123,26 @@\n except FileNotFoundError as error:\n self.opsdroid.critical(str(error), 1)\n \n+ def setup_modules_directory(self, config):\n+ \"\"\"Create and configure the modules directory.\"\"\"\n+ module_path = os.path.expanduser(\n+ config.get(\"module-path\", DEFAULT_MODULES_PATH))\n+ sys.path.append(module_path)\n+\n+ if not os.path.isdir(module_path):\n+ os.makedirs(module_path, exist_ok=True)\n+\n+ self.modules_directory = os.path.join(module_path, MODULES_DIRECTORY)\n+\n+ # Create modules directory if doesn't exist\n+ if not os.path.isdir(self.modules_directory):\n+ os.makedirs(self.modules_directory)\n+\n def load_modules_from_config(self, config):\n \"\"\"Load all module types based on config.\"\"\"\n _LOGGER.debug(\"Loading modules from config\")\n \n- if \"module-path\" in config:\n- sys.path.append(config[\"module-path\"])\n- if not os.path.isdir(config[\"module-path\"]):\n- os.makedirs(config[\"module-path\"], exist_ok=True)\n- self.modules_directory = os.path.join(config[\"module-path\"],\n- self.modules_directory)\n+ self.setup_modules_directory(config)\n \n connectors, databases, skills = None, None, None\n \n@@ -159,10 +170,6 @@\n _LOGGER.debug(\"Loading \" + modules_type + \" modules\")\n loaded_modules = []\n \n- # Create modules directory if doesn't exist\n- if not os.path.isdir(self.modules_directory):\n- os.makedirs(self.modules_directory)\n-\n for module in modules:\n \n # Set up module config\n", "issue": "Change default modules directory\nCurrently the default modules directory location is `./modules`. \r\n\r\nThis makes a few assumptions: \r\n* Current directory is in the python path \r\n* There are no other python modules in the current directory\r\n* There are no other modules named `modules`\r\n* Current directory is writable\r\n\r\nA better default location may be `~/.opsdroid/modules/opsdroid-modules`. This would be created if it doesn't exist and `~/.opsdroid/modules` could be added to the python path without fear of collision as `opsdroid-modules` is less generic. As it is in the home directory we can be fairly sure it is writable. \r\n\r\nAlso when a user specifies a custom modules directory it should still be suffixed with `/opsdroid-modules` and the custom directory should be added to the python path. 
\n", "before_files": [{"content": "\"\"\"Class for loading in modules to OpsDroid.\"\"\"\n\nimport logging\nimport os\nimport sys\nimport shutil\nimport subprocess\nimport importlib\nimport yaml\nfrom opsdroid.const import (\n DEFAULT_GIT_URL, MODULES_DIRECTORY, DEFAULT_MODULE_BRANCH)\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Loader:\n \"\"\"Class to load in config and modules.\"\"\"\n\n def __init__(self, opsdroid):\n \"\"\"Create object with opsdroid instance.\"\"\"\n self.opsdroid = opsdroid\n self.modules_directory = MODULES_DIRECTORY\n self.current_import_config = None\n _LOGGER.debug(\"Loaded loader\")\n\n @staticmethod\n def import_module(config):\n \"\"\"Import module namespace as variable and return it.\"\"\"\n try:\n module = importlib.import_module(\n config[\"module_path\"] + \".\" + config[\"name\"])\n _LOGGER.debug(\"Loaded \" + config[\"type\"] + \": \" +\n config[\"module_path\"])\n return module\n except ImportError as error:\n _LOGGER.debug(\"Failed to load \" + config[\"type\"] +\n \" \" + config[\"module_path\"] + \".\" + config[\"name\"])\n _LOGGER.debug(error)\n\n try:\n module = importlib.import_module(\n config[\"module_path\"])\n _LOGGER.debug(\"Loaded \" + config[\"type\"] + \": \" +\n config[\"module_path\"])\n return module\n except ImportError as error:\n _LOGGER.debug(\"Failed to load \" + config[\"type\"] +\n \" \" + config[\"module_path\"])\n _LOGGER.debug(error)\n\n _LOGGER.error(\"Failed to load \" + config[\"type\"] +\n \" \" + config[\"module_path\"])\n return None\n\n @staticmethod\n def check_cache(config):\n \"\"\"Remove module if 'no-cache' set in config.\"\"\"\n if \"no-cache\" in config \\\n and config[\"no-cache\"]:\n _LOGGER.debug(\"'no-cache' set, removing \" + config[\"install_path\"])\n if os.path.isdir(config[\"install_path\"]):\n shutil.rmtree(config[\"install_path\"])\n if os.path.isfile(config[\"install_path\"] + \".py\"):\n os.remove(config[\"install_path\"] + \".py\")\n\n def build_module_path(self, path_type, config):\n \"\"\"Generate the module path from name and type.\"\"\"\n if path_type == \"import\":\n return MODULES_DIRECTORY + \".\" + config[\"type\"] + \\\n \".\" + config[\"name\"]\n elif path_type == \"install\":\n return self.modules_directory + \"/\" + config[\"type\"] + \\\n \"/\" + config[\"name\"]\n\n @staticmethod\n def git_clone(git_url, install_path, branch):\n \"\"\"Clone a git repo to a location and wait for finish.\"\"\"\n process = subprocess.Popen([\"git\", \"clone\", \"-b\", branch,\n git_url, install_path], shell=False,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE)\n for output in process.communicate():\n if output != \"\":\n for line in output.splitlines():\n _LOGGER.debug(str(line).strip())\n process.wait()\n\n @staticmethod\n def pip_install_deps(requirements_path):\n \"\"\"Pip install a requirements.txt file and wait for finish.\"\"\"\n process = subprocess.Popen([\"pip\", \"install\", \"-r\", requirements_path],\n shell=False,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE)\n for output in process.communicate():\n if output != \"\":\n for line in output.splitlines():\n _LOGGER.debug(str(line).strip())\n process.wait()\n\n def load_config_file(self, config_paths):\n \"\"\"Load a yaml config file from path.\"\"\"\n config_path = \"\"\n for possible_path in config_paths:\n if not os.path.isfile(possible_path):\n _LOGGER.debug(\"Config file \" + possible_path +\n \" not found\")\n else:\n config_path = possible_path\n break\n\n if not config_path:\n self.opsdroid.critical(\"No configuration 
files found\", 1)\n\n try:\n with open(config_path, 'r') as stream:\n _LOGGER.info(\"Loaded config from %s\", config_path)\n return yaml.load(stream)\n except yaml.YAMLError as error:\n self.opsdroid.critical(error, 1)\n except FileNotFoundError as error:\n self.opsdroid.critical(str(error), 1)\n\n def load_modules_from_config(self, config):\n \"\"\"Load all module types based on config.\"\"\"\n _LOGGER.debug(\"Loading modules from config\")\n\n if \"module-path\" in config:\n sys.path.append(config[\"module-path\"])\n if not os.path.isdir(config[\"module-path\"]):\n os.makedirs(config[\"module-path\"], exist_ok=True)\n self.modules_directory = os.path.join(config[\"module-path\"],\n self.modules_directory)\n\n connectors, databases, skills = None, None, None\n\n if 'databases' in config.keys():\n databases = self._load_modules('database', config['databases'])\n else:\n _LOGGER.warning(\"No databases in configuration\")\n\n if 'skills' in config.keys():\n skills = self._load_modules('skill', config['skills'])\n else:\n self.opsdroid.critical(\n \"No skills in configuration, at least 1 required\", 1)\n\n if 'connectors' in config.keys():\n connectors = self._load_modules('connector', config['connectors'])\n else:\n self.opsdroid.critical(\n \"No connectors in configuration, at least 1 required\", 1)\n\n return connectors, databases, skills\n\n def _load_modules(self, modules_type, modules):\n \"\"\"Install and load modules.\"\"\"\n _LOGGER.debug(\"Loading \" + modules_type + \" modules\")\n loaded_modules = []\n\n # Create modules directory if doesn't exist\n if not os.path.isdir(self.modules_directory):\n os.makedirs(self.modules_directory)\n\n for module in modules:\n\n # Set up module config\n config = module\n config = {} if config is None else config\n config[\"name\"] = module[\"name\"]\n config[\"type\"] = modules_type\n config[\"module_path\"] = self.build_module_path(\"import\", config)\n config[\"install_path\"] = self.build_module_path(\"install\", config)\n if \"branch\" not in config:\n config[\"branch\"] = DEFAULT_MODULE_BRANCH\n\n # Remove module for reinstall if no-cache set\n self.check_cache(config)\n\n # Install module\n self._install_module(config)\n\n # Import module\n self.current_import_config = config\n module = self.import_module(config)\n if module is not None:\n loaded_modules.append({\n \"module\": module,\n \"config\": config})\n else:\n _LOGGER.error(\n \"Module \" + config[\"name\"] + \" failed to import\")\n\n return loaded_modules\n\n def _install_module(self, config):\n # pylint: disable=R0201\n \"\"\"Install a module.\"\"\"\n _LOGGER.debug(\"Installing \" + config[\"name\"])\n\n if os.path.isdir(config[\"install_path\"]) or \\\n os.path.isfile(config[\"install_path\"] + \".py\"):\n # TODO Allow for updating or reinstalling of modules\n _LOGGER.debug(\"Module \" + config[\"name\"] +\n \" already installed, skipping\")\n return\n\n if \"path\" in config:\n self._install_local_module(config)\n else:\n self._install_git_module(config)\n\n if os.path.isdir(config[\"install_path\"]):\n _LOGGER.debug(\"Installed \" + config[\"name\"] +\n \" to \" + config[\"install_path\"])\n else:\n _LOGGER.debug(\"Install of \" + config[\"name\"] + \" failed\")\n\n # Install module dependancies\n if os.path.isfile(config[\"install_path\"] + \"/requirements.txt\"):\n self.pip_install_deps(config[\"install_path\"] +\n \"/requirements.txt\")\n\n def _install_git_module(self, config):\n \"\"\"Install a module from a git repository.\"\"\"\n if config is not None and \"repo\" in 
config:\n git_url = config[\"repo\"]\n else:\n git_url = DEFAULT_GIT_URL + config[\"type\"] + \\\n \"-\" + config[\"name\"] + \".git\"\n\n if any(prefix in git_url for prefix in [\"http\", \"https\", \"ssh\"]):\n # TODO Test if url or ssh path exists\n # TODO Handle github authentication\n _LOGGER.debug(\"Cloning from remote repository\")\n self.git_clone(git_url, config[\"install_path\"],\n config[\"branch\"])\n else:\n if os.path.isdir(git_url):\n _LOGGER.debug(\"Cloning from local repository\")\n self.git_clone(git_url, config[\"install_path\"],\n config[\"branch\"])\n else:\n _LOGGER.debug(\"Could not find local git repo \" + git_url)\n\n @staticmethod\n def _install_local_module(config):\n \"\"\"Install a module from a local path.\"\"\"\n installed = False\n\n installdir, _ = os.path.split(config[\"install_path\"])\n if not os.path.isdir(installdir):\n os.makedirs(installdir, exist_ok=True)\n\n if os.path.isdir(config[\"path\"]):\n shutil.copytree(config[\"path\"], config[\"install_path\"])\n installed = True\n\n if os.path.isfile(config[\"path\"]):\n os.makedirs(config[\"install_path\"], exist_ok=True)\n shutil.copyfile(config[\"path\"], config[\"install_path\"] +\n \"/__init__.py\")\n installed = True\n\n if not installed:\n _LOGGER.error(\"Failed to install from \" + config[\"path\"])\n", "path": "opsdroid/loader.py"}, {"content": "\"\"\"Constants used by OpsDroid.\"\"\"\n\n__version__ = \"0.7.1\"\n\nLOG_FILENAME = 'output.log'\nDEFAULT_GIT_URL = \"https://github.com/opsdroid/\"\nMODULES_DIRECTORY = \"modules\"\nDEFAULT_MODULE_BRANCH = \"master\"\n", "path": "opsdroid/const.py"}], "after_files": [{"content": "\"\"\"Class for loading in modules to OpsDroid.\"\"\"\n\nimport logging\nimport os\nimport sys\nimport shutil\nimport subprocess\nimport importlib\nimport yaml\nfrom opsdroid.const import (\n DEFAULT_GIT_URL, MODULES_DIRECTORY, DEFAULT_MODULES_PATH,\n DEFAULT_MODULE_BRANCH)\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Loader:\n \"\"\"Class to load in config and modules.\"\"\"\n\n def __init__(self, opsdroid):\n \"\"\"Create object with opsdroid instance.\"\"\"\n self.opsdroid = opsdroid\n self.modules_directory = None\n self.current_import_config = None\n _LOGGER.debug(\"Loaded loader\")\n\n @staticmethod\n def import_module(config):\n \"\"\"Import module namespace as variable and return it.\"\"\"\n try:\n module = importlib.import_module(\n config[\"module_path\"] + \".\" + config[\"name\"])\n _LOGGER.debug(\"Loaded \" + config[\"type\"] + \": \" +\n config[\"module_path\"])\n return module\n except ImportError as error:\n _LOGGER.debug(\"Failed to load \" + config[\"type\"] +\n \" \" + config[\"module_path\"] + \".\" + config[\"name\"])\n _LOGGER.debug(error)\n\n try:\n module = importlib.import_module(\n config[\"module_path\"])\n _LOGGER.debug(\"Loaded \" + config[\"type\"] + \": \" +\n config[\"module_path\"])\n return module\n except ImportError as error:\n _LOGGER.debug(\"Failed to load \" + config[\"type\"] +\n \" \" + config[\"module_path\"])\n _LOGGER.debug(error)\n\n _LOGGER.error(\"Failed to load \" + config[\"type\"] +\n \" \" + config[\"module_path\"])\n return None\n\n @staticmethod\n def check_cache(config):\n \"\"\"Remove module if 'no-cache' set in config.\"\"\"\n if \"no-cache\" in config \\\n and config[\"no-cache\"]:\n _LOGGER.debug(\"'no-cache' set, removing \" + config[\"install_path\"])\n if os.path.isdir(config[\"install_path\"]):\n shutil.rmtree(config[\"install_path\"])\n if os.path.isfile(config[\"install_path\"] + \".py\"):\n 
os.remove(config[\"install_path\"] + \".py\")\n\n def build_module_path(self, path_type, config):\n \"\"\"Generate the module path from name and type.\"\"\"\n if path_type == \"import\":\n return MODULES_DIRECTORY + \".\" + config[\"type\"] + \\\n \".\" + config[\"name\"]\n elif path_type == \"install\":\n return self.modules_directory + \"/\" + config[\"type\"] + \\\n \"/\" + config[\"name\"]\n\n @staticmethod\n def git_clone(git_url, install_path, branch):\n \"\"\"Clone a git repo to a location and wait for finish.\"\"\"\n process = subprocess.Popen([\"git\", \"clone\", \"-b\", branch,\n git_url, install_path], shell=False,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE)\n for output in process.communicate():\n if output != \"\":\n for line in output.splitlines():\n _LOGGER.debug(str(line).strip())\n process.wait()\n\n @staticmethod\n def pip_install_deps(requirements_path):\n \"\"\"Pip install a requirements.txt file and wait for finish.\"\"\"\n process = subprocess.Popen([\"pip\", \"install\", \"-r\", requirements_path],\n shell=False,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE)\n for output in process.communicate():\n if output != \"\":\n for line in output.splitlines():\n _LOGGER.debug(str(line).strip())\n process.wait()\n\n def load_config_file(self, config_paths):\n \"\"\"Load a yaml config file from path.\"\"\"\n config_path = \"\"\n for possible_path in config_paths:\n if not os.path.isfile(possible_path):\n _LOGGER.debug(\"Config file \" + possible_path +\n \" not found\")\n else:\n config_path = possible_path\n break\n\n if not config_path:\n self.opsdroid.critical(\"No configuration files found\", 1)\n\n try:\n with open(config_path, 'r') as stream:\n _LOGGER.info(\"Loaded config from %s\", config_path)\n return yaml.load(stream)\n except yaml.YAMLError as error:\n self.opsdroid.critical(error, 1)\n except FileNotFoundError as error:\n self.opsdroid.critical(str(error), 1)\n\n def setup_modules_directory(self, config):\n \"\"\"Create and configure the modules directory.\"\"\"\n module_path = os.path.expanduser(\n config.get(\"module-path\", DEFAULT_MODULES_PATH))\n sys.path.append(module_path)\n\n if not os.path.isdir(module_path):\n os.makedirs(module_path, exist_ok=True)\n\n self.modules_directory = os.path.join(module_path, MODULES_DIRECTORY)\n\n # Create modules directory if doesn't exist\n if not os.path.isdir(self.modules_directory):\n os.makedirs(self.modules_directory)\n\n def load_modules_from_config(self, config):\n \"\"\"Load all module types based on config.\"\"\"\n _LOGGER.debug(\"Loading modules from config\")\n\n self.setup_modules_directory(config)\n\n connectors, databases, skills = None, None, None\n\n if 'databases' in config.keys():\n databases = self._load_modules('database', config['databases'])\n else:\n _LOGGER.warning(\"No databases in configuration\")\n\n if 'skills' in config.keys():\n skills = self._load_modules('skill', config['skills'])\n else:\n self.opsdroid.critical(\n \"No skills in configuration, at least 1 required\", 1)\n\n if 'connectors' in config.keys():\n connectors = self._load_modules('connector', config['connectors'])\n else:\n self.opsdroid.critical(\n \"No connectors in configuration, at least 1 required\", 1)\n\n return connectors, databases, skills\n\n def _load_modules(self, modules_type, modules):\n \"\"\"Install and load modules.\"\"\"\n _LOGGER.debug(\"Loading \" + modules_type + \" modules\")\n loaded_modules = []\n\n for module in modules:\n\n # Set up module config\n config = module\n config = {} if config is 
None else config\n config[\"name\"] = module[\"name\"]\n config[\"type\"] = modules_type\n config[\"module_path\"] = self.build_module_path(\"import\", config)\n config[\"install_path\"] = self.build_module_path(\"install\", config)\n if \"branch\" not in config:\n config[\"branch\"] = DEFAULT_MODULE_BRANCH\n\n # Remove module for reinstall if no-cache set\n self.check_cache(config)\n\n # Install module\n self._install_module(config)\n\n # Import module\n self.current_import_config = config\n module = self.import_module(config)\n if module is not None:\n loaded_modules.append({\n \"module\": module,\n \"config\": config})\n else:\n _LOGGER.error(\n \"Module \" + config[\"name\"] + \" failed to import\")\n\n return loaded_modules\n\n def _install_module(self, config):\n # pylint: disable=R0201\n \"\"\"Install a module.\"\"\"\n _LOGGER.debug(\"Installing \" + config[\"name\"])\n\n if os.path.isdir(config[\"install_path\"]) or \\\n os.path.isfile(config[\"install_path\"] + \".py\"):\n # TODO Allow for updating or reinstalling of modules\n _LOGGER.debug(\"Module \" + config[\"name\"] +\n \" already installed, skipping\")\n return\n\n if \"path\" in config:\n self._install_local_module(config)\n else:\n self._install_git_module(config)\n\n if os.path.isdir(config[\"install_path\"]):\n _LOGGER.debug(\"Installed \" + config[\"name\"] +\n \" to \" + config[\"install_path\"])\n else:\n _LOGGER.debug(\"Install of \" + config[\"name\"] + \" failed\")\n\n # Install module dependancies\n if os.path.isfile(config[\"install_path\"] + \"/requirements.txt\"):\n self.pip_install_deps(config[\"install_path\"] +\n \"/requirements.txt\")\n\n def _install_git_module(self, config):\n \"\"\"Install a module from a git repository.\"\"\"\n if config is not None and \"repo\" in config:\n git_url = config[\"repo\"]\n else:\n git_url = DEFAULT_GIT_URL + config[\"type\"] + \\\n \"-\" + config[\"name\"] + \".git\"\n\n if any(prefix in git_url for prefix in [\"http\", \"https\", \"ssh\"]):\n # TODO Test if url or ssh path exists\n # TODO Handle github authentication\n _LOGGER.debug(\"Cloning from remote repository\")\n self.git_clone(git_url, config[\"install_path\"],\n config[\"branch\"])\n else:\n if os.path.isdir(git_url):\n _LOGGER.debug(\"Cloning from local repository\")\n self.git_clone(git_url, config[\"install_path\"],\n config[\"branch\"])\n else:\n _LOGGER.debug(\"Could not find local git repo \" + git_url)\n\n @staticmethod\n def _install_local_module(config):\n \"\"\"Install a module from a local path.\"\"\"\n installed = False\n\n installdir, _ = os.path.split(config[\"install_path\"])\n if not os.path.isdir(installdir):\n os.makedirs(installdir, exist_ok=True)\n\n if os.path.isdir(config[\"path\"]):\n shutil.copytree(config[\"path\"], config[\"install_path\"])\n installed = True\n\n if os.path.isfile(config[\"path\"]):\n os.makedirs(config[\"install_path\"], exist_ok=True)\n shutil.copyfile(config[\"path\"], config[\"install_path\"] +\n \"/__init__.py\")\n installed = True\n\n if not installed:\n _LOGGER.error(\"Failed to install from \" + config[\"path\"])\n", "path": "opsdroid/loader.py"}, {"content": "\"\"\"Constants used by OpsDroid.\"\"\"\n\n__version__ = \"0.7.1\"\n\nLOG_FILENAME = 'output.log'\nDEFAULT_GIT_URL = \"https://github.com/opsdroid/\"\nMODULES_DIRECTORY = \"opsdroid-modules\"\nDEFAULT_MODULES_PATH = \"~/.opsdroid/modules\"\nDEFAULT_MODULE_BRANCH = \"master\"\n", "path": "opsdroid/const.py"}]}
| 3,244 | 650 |
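A standalone sketch of the directory-setup pattern introduced by the patched loader in the record above; the default path and directory name mirror that record's constants, and the config dicts passed at the bottom are purely illustrative.

```python
import os
import sys

# Values mirror the constants introduced in the patch above; the config dicts below are illustrative.
DEFAULT_MODULES_PATH = "~/.opsdroid/modules"
MODULES_DIRECTORY = "opsdroid-modules"


def setup_modules_directory(config):
    """Resolve the modules directory from config, create it, and put it on sys.path."""
    # Expand "~" so the home-relative default works on any machine.
    module_path = os.path.expanduser(config.get("module-path", DEFAULT_MODULES_PATH))
    sys.path.append(module_path)

    # Create the base path and the nested modules directory if they are missing.
    os.makedirs(module_path, exist_ok=True)
    modules_directory = os.path.join(module_path, MODULES_DIRECTORY)
    os.makedirs(modules_directory, exist_ok=True)
    return modules_directory


print(setup_modules_directory({}))  # resolves to a directory under the user's home
print(setup_modules_directory({"module-path": "/tmp/opsdroid-modules-demo"}))
```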
gh_patches_debug_20207 | rasdani/github-patches | git_diff | freqtrade__freqtrade-6744 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NaN values on /status
<!--
Have you searched for similar issues before posting it?
If you have discovered a bug in the bot, please [search the issue tracker](https://github.com/freqtrade/freqtrade/issues?q=is%3Aissue).
If it hasn't been reported, please create a new issue.
Please do not use bug reports to request new features.
-->
## Describe your environment
* Operating system: Raspbian
* Python Version: 3.9.9 (`python -V`)
* CCXT version: 1.80.61 (`pip freeze | grep ccxt`)
* Freqtrade Version: docker-44000ae0 (`freqtrade -V` or `docker-compose run --rm freqtrade -V` for Freqtrade running in docker)
Note: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.
## Describe the problem:
*Explain the problem you have encountered*
### Steps to reproduce:
1. Open FreqUI
2. Go to Dashboard
### Observed Results:
Due to the API sending "NaN" as values, the frontend can't parse the JSON properly and instead is being converted into a string.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3
4 # Requirements used for submodules
5 plot = ['plotly>=4.0']
6 hyperopt = [
7 'scipy',
8 'scikit-learn',
9 'scikit-optimize>=0.7.0',
10 'filelock',
11 'joblib',
12 'progressbar2',
13 ]
14
15 develop = [
16 'coveralls',
17 'flake8',
18 'flake8-tidy-imports',
19 'mypy',
20 'pytest',
21 'pytest-asyncio',
22 'pytest-cov',
23 'pytest-mock',
24 'pytest-random-order',
25 ]
26
27 jupyter = [
28 'jupyter',
29 'nbstripout',
30 'ipykernel',
31 'nbconvert',
32 ]
33
34 all_extra = plot + develop + jupyter + hyperopt
35
36 setup(
37 tests_require=[
38 'pytest',
39 'pytest-asyncio',
40 'pytest-cov',
41 'pytest-mock',
42 ],
43 install_requires=[
44 # from requirements.txt
45 'ccxt>=1.79.69',
46 'SQLAlchemy',
47 'python-telegram-bot>=13.4',
48 'arrow>=0.17.0',
49 'cachetools',
50 'requests',
51 'urllib3',
52 'jsonschema',
53 'TA-Lib',
54 'pandas-ta',
55 'technical',
56 'tabulate',
57 'pycoingecko',
58 'py_find_1st',
59 'python-rapidjson',
60 'sdnotify',
61 'colorama',
62 'jinja2',
63 'questionary',
64 'prompt-toolkit',
65 'numpy',
66 'pandas',
67 'tables',
68 'blosc',
69 'fastapi',
70 'uvicorn',
71 'psutil',
72 'pyjwt',
73 'aiofiles',
74 'schedule'
75 ],
76 extras_require={
77 'dev': all_extra,
78 'plot': plot,
79 'jupyter': jupyter,
80 'hyperopt': hyperopt,
81 'all': all_extra,
82 },
83 )
84
```
Path: `freqtrade/rpc/api_server/webserver.py`
Content:
```
1 import logging
2 from ipaddress import IPv4Address
3 from typing import Any, Dict
4
5 import rapidjson
6 import uvicorn
7 from fastapi import Depends, FastAPI
8 from fastapi.middleware.cors import CORSMiddleware
9 from starlette.responses import JSONResponse
10
11 from freqtrade.exceptions import OperationalException
12 from freqtrade.rpc.api_server.uvicorn_threaded import UvicornServer
13 from freqtrade.rpc.rpc import RPC, RPCException, RPCHandler
14
15
16 logger = logging.getLogger(__name__)
17
18
19 class FTJSONResponse(JSONResponse):
20 media_type = "application/json"
21
22 def render(self, content: Any) -> bytes:
23 """
24 Use rapidjson for responses
25 Handles NaN and Inf / -Inf in a javascript way by default.
26 """
27 return rapidjson.dumps(content).encode("utf-8")
28
29
30 class ApiServer(RPCHandler):
31
32 __instance = None
33 __initialized = False
34
35 _rpc: RPC
36 # Backtesting type: Backtesting
37 _bt = None
38 _bt_data = None
39 _bt_timerange = None
40 _bt_last_config: Dict[str, Any] = {}
41 _has_rpc: bool = False
42 _bgtask_running: bool = False
43 _config: Dict[str, Any] = {}
44 # Exchange - only available in webserver mode.
45 _exchange = None
46
47 def __new__(cls, *args, **kwargs):
48 """
49 This class is a singleton.
50 We'll only have one instance of it around.
51 """
52 if ApiServer.__instance is None:
53 ApiServer.__instance = object.__new__(cls)
54 ApiServer.__initialized = False
55 return ApiServer.__instance
56
57 def __init__(self, config: Dict[str, Any], standalone: bool = False) -> None:
58 ApiServer._config = config
59 if self.__initialized and (standalone or self._standalone):
60 return
61 self._standalone: bool = standalone
62 self._server = None
63 ApiServer.__initialized = True
64
65 api_config = self._config['api_server']
66
67 self.app = FastAPI(title="Freqtrade API",
68 docs_url='/docs' if api_config.get('enable_openapi', False) else None,
69 redoc_url=None,
70 default_response_class=FTJSONResponse,
71 )
72 self.configure_app(self.app, self._config)
73
74 self.start_api()
75
76 def add_rpc_handler(self, rpc: RPC):
77 """
78 Attach rpc handler
79 """
80 if not self._has_rpc:
81 ApiServer._rpc = rpc
82 ApiServer._has_rpc = True
83 else:
84 # This should not happen assuming we didn't mess up.
85 raise OperationalException('RPC Handler already attached.')
86
87 def cleanup(self) -> None:
88 """ Cleanup pending module resources """
89 ApiServer._has_rpc = False
90 del ApiServer._rpc
91 if self._server and not self._standalone:
92 logger.info("Stopping API Server")
93 self._server.cleanup()
94
95 @classmethod
96 def shutdown(cls):
97 cls.__initialized = False
98 del cls.__instance
99 cls.__instance = None
100 cls._has_rpc = False
101 cls._rpc = None
102
103 def send_msg(self, msg: Dict[str, str]) -> None:
104 pass
105
106 def handle_rpc_exception(self, request, exc):
107 logger.exception(f"API Error calling: {exc}")
108 return JSONResponse(
109 status_code=502,
110 content={'error': f"Error querying {request.url.path}: {exc.message}"}
111 )
112
113 def configure_app(self, app: FastAPI, config):
114 from freqtrade.rpc.api_server.api_auth import http_basic_or_jwt_token, router_login
115 from freqtrade.rpc.api_server.api_backtest import router as api_backtest
116 from freqtrade.rpc.api_server.api_v1 import router as api_v1
117 from freqtrade.rpc.api_server.api_v1 import router_public as api_v1_public
118 from freqtrade.rpc.api_server.web_ui import router_ui
119
120 app.include_router(api_v1_public, prefix="/api/v1")
121
122 app.include_router(api_v1, prefix="/api/v1",
123 dependencies=[Depends(http_basic_or_jwt_token)],
124 )
125 app.include_router(api_backtest, prefix="/api/v1",
126 dependencies=[Depends(http_basic_or_jwt_token)],
127 )
128 app.include_router(router_login, prefix="/api/v1", tags=["auth"])
129 # UI Router MUST be last!
130 app.include_router(router_ui, prefix='')
131
132 app.add_middleware(
133 CORSMiddleware,
134 allow_origins=config['api_server'].get('CORS_origins', []),
135 allow_credentials=True,
136 allow_methods=["*"],
137 allow_headers=["*"],
138 )
139
140 app.add_exception_handler(RPCException, self.handle_rpc_exception)
141
142 def start_api(self):
143 """
144 Start API ... should be run in thread.
145 """
146 rest_ip = self._config['api_server']['listen_ip_address']
147 rest_port = self._config['api_server']['listen_port']
148
149 logger.info(f'Starting HTTP Server at {rest_ip}:{rest_port}')
150 if not IPv4Address(rest_ip).is_loopback:
151 logger.warning("SECURITY WARNING - Local Rest Server listening to external connections")
152 logger.warning("SECURITY WARNING - This is insecure please set to your loopback,"
153 "e.g 127.0.0.1 in config.json")
154
155 if not self._config['api_server'].get('password'):
156 logger.warning("SECURITY WARNING - No password for local REST Server defined. "
157 "Please make sure that this is intentional!")
158
159 if (self._config['api_server'].get('jwt_secret_key', 'super-secret')
160 in ('super-secret, somethingrandom')):
161 logger.warning("SECURITY WARNING - `jwt_secret_key` seems to be default."
162 "Others may be able to log into your bot.")
163
164 logger.info('Starting Local Rest Server.')
165 verbosity = self._config['api_server'].get('verbosity', 'error')
166
167 uvconfig = uvicorn.Config(self.app,
168 port=rest_port,
169 host=rest_ip,
170 use_colors=False,
171 log_config=None,
172 access_log=True if verbosity != 'error' else False,
173 )
174 try:
175 self._server = UvicornServer(uvconfig)
176 if self._standalone:
177 self._server.run()
178 else:
179 self._server.run_in_thread()
180 except Exception:
181 logger.exception("Api server failed to start.")
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/freqtrade/rpc/api_server/webserver.py b/freqtrade/rpc/api_server/webserver.py
--- a/freqtrade/rpc/api_server/webserver.py
+++ b/freqtrade/rpc/api_server/webserver.py
@@ -2,7 +2,7 @@
from ipaddress import IPv4Address
from typing import Any, Dict
-import rapidjson
+import orjson
import uvicorn
from fastapi import Depends, FastAPI
from fastapi.middleware.cors import CORSMiddleware
@@ -24,7 +24,7 @@
Use rapidjson for responses
Handles NaN and Inf / -Inf in a javascript way by default.
"""
- return rapidjson.dumps(content).encode("utf-8")
+ return orjson.dumps(content, option=orjson.OPT_SERIALIZE_NUMPY)
class ApiServer(RPCHandler):
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -57,6 +57,7 @@
'pycoingecko',
'py_find_1st',
'python-rapidjson',
+ 'orjson',
'sdnotify',
'colorama',
'jinja2',
|
{"golden_diff": "diff --git a/freqtrade/rpc/api_server/webserver.py b/freqtrade/rpc/api_server/webserver.py\n--- a/freqtrade/rpc/api_server/webserver.py\n+++ b/freqtrade/rpc/api_server/webserver.py\n@@ -2,7 +2,7 @@\n from ipaddress import IPv4Address\n from typing import Any, Dict\n \n-import rapidjson\n+import orjson\n import uvicorn\n from fastapi import Depends, FastAPI\n from fastapi.middleware.cors import CORSMiddleware\n@@ -24,7 +24,7 @@\n Use rapidjson for responses\n Handles NaN and Inf / -Inf in a javascript way by default.\n \"\"\"\n- return rapidjson.dumps(content).encode(\"utf-8\")\n+ return orjson.dumps(content, option=orjson.OPT_SERIALIZE_NUMPY)\n \n \n class ApiServer(RPCHandler):\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -57,6 +57,7 @@\n 'pycoingecko',\n 'py_find_1st',\n 'python-rapidjson',\n+ 'orjson',\n 'sdnotify',\n 'colorama',\n 'jinja2',\n", "issue": "NaN values on /status\n<!-- \r\nHave you searched for similar issues before posting it?\r\n\r\nIf you have discovered a bug in the bot, please [search the issue tracker](https://github.com/freqtrade/freqtrade/issues?q=is%3Aissue). \r\nIf it hasn't been reported, please create a new issue.\r\n\r\nPlease do not use bug reports to request new features.\r\n-->\r\n\r\n## Describe your environment\r\n\r\n * Operating system: Raspbian\r\n * Python Version: 3.9.9 (`python -V`)\r\n * CCXT version: 1.80.61 (`pip freeze | grep ccxt`)\r\n * Freqtrade Version: docker-44000ae0 (`freqtrade -V` or `docker-compose run --rm freqtrade -V` for Freqtrade running in docker)\r\n \r\nNote: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.\r\n\r\n## Describe the problem:\r\n\r\n*Explain the problem you have encountered*\r\n\r\n### Steps to reproduce:\r\n\r\n 1. Open FreqUI\r\n 2. 
Go to Dashboard\r\n \r\n### Observed Results:\r\n\r\nDue to the API sending \"NaN\" as values, the frontend can't parse the JSON properly and instead is being converted into a string.\r\n\n", "before_files": [{"content": "from setuptools import setup\n\n\n# Requirements used for submodules\nplot = ['plotly>=4.0']\nhyperopt = [\n 'scipy',\n 'scikit-learn',\n 'scikit-optimize>=0.7.0',\n 'filelock',\n 'joblib',\n 'progressbar2',\n]\n\ndevelop = [\n 'coveralls',\n 'flake8',\n 'flake8-tidy-imports',\n 'mypy',\n 'pytest',\n 'pytest-asyncio',\n 'pytest-cov',\n 'pytest-mock',\n 'pytest-random-order',\n]\n\njupyter = [\n 'jupyter',\n 'nbstripout',\n 'ipykernel',\n 'nbconvert',\n]\n\nall_extra = plot + develop + jupyter + hyperopt\n\nsetup(\n tests_require=[\n 'pytest',\n 'pytest-asyncio',\n 'pytest-cov',\n 'pytest-mock',\n ],\n install_requires=[\n # from requirements.txt\n 'ccxt>=1.79.69',\n 'SQLAlchemy',\n 'python-telegram-bot>=13.4',\n 'arrow>=0.17.0',\n 'cachetools',\n 'requests',\n 'urllib3',\n 'jsonschema',\n 'TA-Lib',\n 'pandas-ta',\n 'technical',\n 'tabulate',\n 'pycoingecko',\n 'py_find_1st',\n 'python-rapidjson',\n 'sdnotify',\n 'colorama',\n 'jinja2',\n 'questionary',\n 'prompt-toolkit',\n 'numpy',\n 'pandas',\n 'tables',\n 'blosc',\n 'fastapi',\n 'uvicorn',\n 'psutil',\n 'pyjwt',\n 'aiofiles',\n 'schedule'\n ],\n extras_require={\n 'dev': all_extra,\n 'plot': plot,\n 'jupyter': jupyter,\n 'hyperopt': hyperopt,\n 'all': all_extra,\n },\n)\n", "path": "setup.py"}, {"content": "import logging\nfrom ipaddress import IPv4Address\nfrom typing import Any, Dict\n\nimport rapidjson\nimport uvicorn\nfrom fastapi import Depends, FastAPI\nfrom fastapi.middleware.cors import CORSMiddleware\nfrom starlette.responses import JSONResponse\n\nfrom freqtrade.exceptions import OperationalException\nfrom freqtrade.rpc.api_server.uvicorn_threaded import UvicornServer\nfrom freqtrade.rpc.rpc import RPC, RPCException, RPCHandler\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass FTJSONResponse(JSONResponse):\n media_type = \"application/json\"\n\n def render(self, content: Any) -> bytes:\n \"\"\"\n Use rapidjson for responses\n Handles NaN and Inf / -Inf in a javascript way by default.\n \"\"\"\n return rapidjson.dumps(content).encode(\"utf-8\")\n\n\nclass ApiServer(RPCHandler):\n\n __instance = None\n __initialized = False\n\n _rpc: RPC\n # Backtesting type: Backtesting\n _bt = None\n _bt_data = None\n _bt_timerange = None\n _bt_last_config: Dict[str, Any] = {}\n _has_rpc: bool = False\n _bgtask_running: bool = False\n _config: Dict[str, Any] = {}\n # Exchange - only available in webserver mode.\n _exchange = None\n\n def __new__(cls, *args, **kwargs):\n \"\"\"\n This class is a singleton.\n We'll only have one instance of it around.\n \"\"\"\n if ApiServer.__instance is None:\n ApiServer.__instance = object.__new__(cls)\n ApiServer.__initialized = False\n return ApiServer.__instance\n\n def __init__(self, config: Dict[str, Any], standalone: bool = False) -> None:\n ApiServer._config = config\n if self.__initialized and (standalone or self._standalone):\n return\n self._standalone: bool = standalone\n self._server = None\n ApiServer.__initialized = True\n\n api_config = self._config['api_server']\n\n self.app = FastAPI(title=\"Freqtrade API\",\n docs_url='/docs' if api_config.get('enable_openapi', False) else None,\n redoc_url=None,\n default_response_class=FTJSONResponse,\n )\n self.configure_app(self.app, self._config)\n\n self.start_api()\n\n def add_rpc_handler(self, rpc: RPC):\n \"\"\"\n Attach rpc 
handler\n \"\"\"\n if not self._has_rpc:\n ApiServer._rpc = rpc\n ApiServer._has_rpc = True\n else:\n # This should not happen assuming we didn't mess up.\n raise OperationalException('RPC Handler already attached.')\n\n def cleanup(self) -> None:\n \"\"\" Cleanup pending module resources \"\"\"\n ApiServer._has_rpc = False\n del ApiServer._rpc\n if self._server and not self._standalone:\n logger.info(\"Stopping API Server\")\n self._server.cleanup()\n\n @classmethod\n def shutdown(cls):\n cls.__initialized = False\n del cls.__instance\n cls.__instance = None\n cls._has_rpc = False\n cls._rpc = None\n\n def send_msg(self, msg: Dict[str, str]) -> None:\n pass\n\n def handle_rpc_exception(self, request, exc):\n logger.exception(f\"API Error calling: {exc}\")\n return JSONResponse(\n status_code=502,\n content={'error': f\"Error querying {request.url.path}: {exc.message}\"}\n )\n\n def configure_app(self, app: FastAPI, config):\n from freqtrade.rpc.api_server.api_auth import http_basic_or_jwt_token, router_login\n from freqtrade.rpc.api_server.api_backtest import router as api_backtest\n from freqtrade.rpc.api_server.api_v1 import router as api_v1\n from freqtrade.rpc.api_server.api_v1 import router_public as api_v1_public\n from freqtrade.rpc.api_server.web_ui import router_ui\n\n app.include_router(api_v1_public, prefix=\"/api/v1\")\n\n app.include_router(api_v1, prefix=\"/api/v1\",\n dependencies=[Depends(http_basic_or_jwt_token)],\n )\n app.include_router(api_backtest, prefix=\"/api/v1\",\n dependencies=[Depends(http_basic_or_jwt_token)],\n )\n app.include_router(router_login, prefix=\"/api/v1\", tags=[\"auth\"])\n # UI Router MUST be last!\n app.include_router(router_ui, prefix='')\n\n app.add_middleware(\n CORSMiddleware,\n allow_origins=config['api_server'].get('CORS_origins', []),\n allow_credentials=True,\n allow_methods=[\"*\"],\n allow_headers=[\"*\"],\n )\n\n app.add_exception_handler(RPCException, self.handle_rpc_exception)\n\n def start_api(self):\n \"\"\"\n Start API ... should be run in thread.\n \"\"\"\n rest_ip = self._config['api_server']['listen_ip_address']\n rest_port = self._config['api_server']['listen_port']\n\n logger.info(f'Starting HTTP Server at {rest_ip}:{rest_port}')\n if not IPv4Address(rest_ip).is_loopback:\n logger.warning(\"SECURITY WARNING - Local Rest Server listening to external connections\")\n logger.warning(\"SECURITY WARNING - This is insecure please set to your loopback,\"\n \"e.g 127.0.0.1 in config.json\")\n\n if not self._config['api_server'].get('password'):\n logger.warning(\"SECURITY WARNING - No password for local REST Server defined. 
\"\n \"Please make sure that this is intentional!\")\n\n if (self._config['api_server'].get('jwt_secret_key', 'super-secret')\n in ('super-secret, somethingrandom')):\n logger.warning(\"SECURITY WARNING - `jwt_secret_key` seems to be default.\"\n \"Others may be able to log into your bot.\")\n\n logger.info('Starting Local Rest Server.')\n verbosity = self._config['api_server'].get('verbosity', 'error')\n\n uvconfig = uvicorn.Config(self.app,\n port=rest_port,\n host=rest_ip,\n use_colors=False,\n log_config=None,\n access_log=True if verbosity != 'error' else False,\n )\n try:\n self._server = UvicornServer(uvconfig)\n if self._standalone:\n self._server.run()\n else:\n self._server.run_in_thread()\n except Exception:\n logger.exception(\"Api server failed to start.\")\n", "path": "freqtrade/rpc/api_server/webserver.py"}], "after_files": [{"content": "from setuptools import setup\n\n\n# Requirements used for submodules\nplot = ['plotly>=4.0']\nhyperopt = [\n 'scipy',\n 'scikit-learn',\n 'scikit-optimize>=0.7.0',\n 'filelock',\n 'joblib',\n 'progressbar2',\n]\n\ndevelop = [\n 'coveralls',\n 'flake8',\n 'flake8-tidy-imports',\n 'mypy',\n 'pytest',\n 'pytest-asyncio',\n 'pytest-cov',\n 'pytest-mock',\n 'pytest-random-order',\n]\n\njupyter = [\n 'jupyter',\n 'nbstripout',\n 'ipykernel',\n 'nbconvert',\n]\n\nall_extra = plot + develop + jupyter + hyperopt\n\nsetup(\n tests_require=[\n 'pytest',\n 'pytest-asyncio',\n 'pytest-cov',\n 'pytest-mock',\n ],\n install_requires=[\n # from requirements.txt\n 'ccxt>=1.79.69',\n 'SQLAlchemy',\n 'python-telegram-bot>=13.4',\n 'arrow>=0.17.0',\n 'cachetools',\n 'requests',\n 'urllib3',\n 'jsonschema',\n 'TA-Lib',\n 'pandas-ta',\n 'technical',\n 'tabulate',\n 'pycoingecko',\n 'py_find_1st',\n 'python-rapidjson',\n 'orjson',\n 'sdnotify',\n 'colorama',\n 'jinja2',\n 'questionary',\n 'prompt-toolkit',\n 'numpy',\n 'pandas',\n 'tables',\n 'blosc',\n 'fastapi',\n 'uvicorn',\n 'psutil',\n 'pyjwt',\n 'aiofiles',\n 'schedule'\n ],\n extras_require={\n 'dev': all_extra,\n 'plot': plot,\n 'jupyter': jupyter,\n 'hyperopt': hyperopt,\n 'all': all_extra,\n },\n)\n", "path": "setup.py"}, {"content": "import logging\nfrom ipaddress import IPv4Address\nfrom typing import Any, Dict\n\nimport orjson\nimport uvicorn\nfrom fastapi import Depends, FastAPI\nfrom fastapi.middleware.cors import CORSMiddleware\nfrom starlette.responses import JSONResponse\n\nfrom freqtrade.exceptions import OperationalException\nfrom freqtrade.rpc.api_server.uvicorn_threaded import UvicornServer\nfrom freqtrade.rpc.rpc import RPC, RPCException, RPCHandler\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass FTJSONResponse(JSONResponse):\n media_type = \"application/json\"\n\n def render(self, content: Any) -> bytes:\n \"\"\"\n Use rapidjson for responses\n Handles NaN and Inf / -Inf in a javascript way by default.\n \"\"\"\n return orjson.dumps(content, option=orjson.OPT_SERIALIZE_NUMPY)\n\n\nclass ApiServer(RPCHandler):\n\n __instance = None\n __initialized = False\n\n _rpc: RPC\n # Backtesting type: Backtesting\n _bt = None\n _bt_data = None\n _bt_timerange = None\n _bt_last_config: Dict[str, Any] = {}\n _has_rpc: bool = False\n _bgtask_running: bool = False\n _config: Dict[str, Any] = {}\n # Exchange - only available in webserver mode.\n _exchange = None\n\n def __new__(cls, *args, **kwargs):\n \"\"\"\n This class is a singleton.\n We'll only have one instance of it around.\n \"\"\"\n if ApiServer.__instance is None:\n ApiServer.__instance = object.__new__(cls)\n ApiServer.__initialized = 
False\n return ApiServer.__instance\n\n def __init__(self, config: Dict[str, Any], standalone: bool = False) -> None:\n ApiServer._config = config\n if self.__initialized and (standalone or self._standalone):\n return\n self._standalone: bool = standalone\n self._server = None\n ApiServer.__initialized = True\n\n api_config = self._config['api_server']\n\n self.app = FastAPI(title=\"Freqtrade API\",\n docs_url='/docs' if api_config.get('enable_openapi', False) else None,\n redoc_url=None,\n default_response_class=FTJSONResponse,\n )\n self.configure_app(self.app, self._config)\n\n self.start_api()\n\n def add_rpc_handler(self, rpc: RPC):\n \"\"\"\n Attach rpc handler\n \"\"\"\n if not self._has_rpc:\n ApiServer._rpc = rpc\n ApiServer._has_rpc = True\n else:\n # This should not happen assuming we didn't mess up.\n raise OperationalException('RPC Handler already attached.')\n\n def cleanup(self) -> None:\n \"\"\" Cleanup pending module resources \"\"\"\n ApiServer._has_rpc = False\n del ApiServer._rpc\n if self._server and not self._standalone:\n logger.info(\"Stopping API Server\")\n self._server.cleanup()\n\n @classmethod\n def shutdown(cls):\n cls.__initialized = False\n del cls.__instance\n cls.__instance = None\n cls._has_rpc = False\n cls._rpc = None\n\n def send_msg(self, msg: Dict[str, str]) -> None:\n pass\n\n def handle_rpc_exception(self, request, exc):\n logger.exception(f\"API Error calling: {exc}\")\n return JSONResponse(\n status_code=502,\n content={'error': f\"Error querying {request.url.path}: {exc.message}\"}\n )\n\n def configure_app(self, app: FastAPI, config):\n from freqtrade.rpc.api_server.api_auth import http_basic_or_jwt_token, router_login\n from freqtrade.rpc.api_server.api_backtest import router as api_backtest\n from freqtrade.rpc.api_server.api_v1 import router as api_v1\n from freqtrade.rpc.api_server.api_v1 import router_public as api_v1_public\n from freqtrade.rpc.api_server.web_ui import router_ui\n\n app.include_router(api_v1_public, prefix=\"/api/v1\")\n\n app.include_router(api_v1, prefix=\"/api/v1\",\n dependencies=[Depends(http_basic_or_jwt_token)],\n )\n app.include_router(api_backtest, prefix=\"/api/v1\",\n dependencies=[Depends(http_basic_or_jwt_token)],\n )\n app.include_router(router_login, prefix=\"/api/v1\", tags=[\"auth\"])\n # UI Router MUST be last!\n app.include_router(router_ui, prefix='')\n\n app.add_middleware(\n CORSMiddleware,\n allow_origins=config['api_server'].get('CORS_origins', []),\n allow_credentials=True,\n allow_methods=[\"*\"],\n allow_headers=[\"*\"],\n )\n\n app.add_exception_handler(RPCException, self.handle_rpc_exception)\n\n def start_api(self):\n \"\"\"\n Start API ... should be run in thread.\n \"\"\"\n rest_ip = self._config['api_server']['listen_ip_address']\n rest_port = self._config['api_server']['listen_port']\n\n logger.info(f'Starting HTTP Server at {rest_ip}:{rest_port}')\n if not IPv4Address(rest_ip).is_loopback:\n logger.warning(\"SECURITY WARNING - Local Rest Server listening to external connections\")\n logger.warning(\"SECURITY WARNING - This is insecure please set to your loopback,\"\n \"e.g 127.0.0.1 in config.json\")\n\n if not self._config['api_server'].get('password'):\n logger.warning(\"SECURITY WARNING - No password for local REST Server defined. 
\"\n \"Please make sure that this is intentional!\")\n\n if (self._config['api_server'].get('jwt_secret_key', 'super-secret')\n in ('super-secret, somethingrandom')):\n logger.warning(\"SECURITY WARNING - `jwt_secret_key` seems to be default.\"\n \"Others may be able to log into your bot.\")\n\n logger.info('Starting Local Rest Server.')\n verbosity = self._config['api_server'].get('verbosity', 'error')\n\n uvconfig = uvicorn.Config(self.app,\n port=rest_port,\n host=rest_ip,\n use_colors=False,\n log_config=None,\n access_log=True if verbosity != 'error' else False,\n )\n try:\n self._server = UvicornServer(uvconfig)\n if self._standalone:\n self._server.run()\n else:\n self._server.run_in_thread()\n except Exception:\n logger.exception(\"Api server failed to start.\")\n", "path": "freqtrade/rpc/api_server/webserver.py"}]}
| 2,993 | 264 |
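A minimal sketch of the serialization difference behind the patch in the record above: the standard library emits the bare NaN literal while orjson emits null. The key names are illustrative, and orjson plus numpy are assumed to be installed.

```python
import json

import numpy as np
import orjson  # added as a dependency by the patch above

payload = {"profit_ratio": float("nan")}

# The standard library emits the bare literal NaN, which is not valid JSON,
# so JSON.parse() on the frontend rejects the whole response body.
print(json.dumps(payload))    # {"profit_ratio": NaN}

# orjson serializes NaN/Infinity as null, producing strictly valid JSON.
print(orjson.dumps(payload))  # b'{"profit_ratio":null}'

# OPT_SERIALIZE_NUMPY additionally accepts numpy arrays/scalars, matching the
# option passed in the patched response class.
print(orjson.dumps({"values": np.array([1.0, float("nan")])},
                   option=orjson.OPT_SERIALIZE_NUMPY))  # b'{"values":[1.0,null]}'
```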
gh_patches_debug_18208 | rasdani/github-patches | git_diff | wagtail__wagtail-799 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Formbuilder number fields do not work in 0.8
Steps to reproduce:
1) Create a form and give it a field type set to 'number'.
2) Submit the form with a number in the number field.
Result:
`Traceback (most recent call last):
File "/usr/local/django/virtualenvs/bhfnsdwagtail/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/django/virtualenvs/bhfnsdwagtail/local/lib/python2.7/site-packages/wagtail/wagtailcore/views.py", line 27, in serve
return page.serve(request, *args, **kwargs)
File "/usr/local/django/virtualenvs/bhfnsdwagtail/local/lib/python2.7/site-packages/wagtail/wagtailforms/models.py", line 148, in serve
self.process_form_submission(form)
File "/usr/local/django/virtualenvs/bhfnsdwagtail/local/lib/python2.7/site-packages/wagtail/wagtailforms/models.py", line 190, in process_form_submission
super(AbstractEmailForm, self).process_form_submission(form)
File "/usr/local/django/virtualenvs/bhfnsdwagtail/local/lib/python2.7/site-packages/wagtail/wagtailforms/models.py", line 135, in process_form_submission
form_data=json.dumps(form.cleaned_data),
File "/usr/lib/python2.7/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python2.7/json/encoder.py", line 200, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python2.7/json/encoder.py", line 263, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python2.7/json/encoder.py", line 177, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: Decimal('5421515526') is not JSON serializable`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/wagtailforms/models.py`
Content:
```
1 import json
2 import re
3
4 from six import text_type
5
6 from unidecode import unidecode
7
8 from django.db import models
9 from django.shortcuts import render
10 from django.utils.translation import ugettext_lazy as _
11 from django.utils.text import slugify
12 from django.utils.encoding import python_2_unicode_compatible
13
14 from wagtail.wagtailcore.models import Page, Orderable, UserPagePermissionsProxy, get_page_types
15 from wagtail.wagtailadmin.edit_handlers import FieldPanel
16 from wagtail.wagtailadmin import tasks
17
18 from .forms import FormBuilder
19
20
21 FORM_FIELD_CHOICES = (
22 ('singleline', _('Single line text')),
23 ('multiline', _('Multi-line text')),
24 ('email', _('Email')),
25 ('number', _('Number')),
26 ('url', _('URL')),
27 ('checkbox', _('Checkbox')),
28 ('checkboxes', _('Checkboxes')),
29 ('dropdown', _('Drop down')),
30 ('radio', _('Radio buttons')),
31 ('date', _('Date')),
32 ('datetime', _('Date/time')),
33 )
34
35
36 HTML_EXTENSION_RE = re.compile(r"(.*)\.html")
37
38
39 @python_2_unicode_compatible
40 class FormSubmission(models.Model):
41 """Data for a Form submission."""
42
43 form_data = models.TextField()
44 page = models.ForeignKey(Page)
45
46 submit_time = models.DateTimeField(auto_now_add=True)
47
48 def get_data(self):
49 return json.loads(self.form_data)
50
51 def __str__(self):
52 return self.form_data
53
54
55 class AbstractFormField(Orderable):
56 """Database Fields required for building a Django Form field."""
57
58 label = models.CharField(
59 max_length=255,
60 help_text=_('The label of the form field')
61 )
62 field_type = models.CharField(max_length=16, choices=FORM_FIELD_CHOICES)
63 required = models.BooleanField(default=True)
64 choices = models.CharField(
65 max_length=512,
66 blank=True,
67 help_text=_('Comma separated list of choices. Only applicable in checkboxes, radio and dropdown.')
68 )
69 default_value = models.CharField(
70 max_length=255,
71 blank=True,
72 help_text=_('Default value. Comma separated values supported for checkboxes.')
73 )
74 help_text = models.CharField(max_length=255, blank=True)
75
76 @property
77 def clean_name(self):
78 # unidecode will return an ascii string while slugify wants a
79 # unicode string on the other hand, slugify returns a safe-string
80 # which will be converted to a normal str
81 return str(slugify(text_type(unidecode(self.label))))
82
83 panels = [
84 FieldPanel('label'),
85 FieldPanel('help_text'),
86 FieldPanel('required'),
87 FieldPanel('field_type', classname="formbuilder-type"),
88 FieldPanel('choices', classname="formbuilder-choices"),
89 FieldPanel('default_value', classname="formbuilder-default"),
90 ]
91
92 class Meta:
93 abstract = True
94 ordering = ['sort_order']
95
96
97 _FORM_CONTENT_TYPES = None
98
99 def get_form_types():
100 global _FORM_CONTENT_TYPES
101 if _FORM_CONTENT_TYPES is None:
102 _FORM_CONTENT_TYPES = [
103 ct for ct in get_page_types()
104 if issubclass(ct.model_class(), AbstractForm)
105 ]
106 return _FORM_CONTENT_TYPES
107
108
109 def get_forms_for_user(user):
110 """Return a queryset of form pages that this user is allowed to access the submissions for"""
111 editable_pages = UserPagePermissionsProxy(user).editable_pages()
112 return editable_pages.filter(content_type__in=get_form_types())
113
114
115 class AbstractForm(Page):
116 """A Form Page. Pages implementing a form should inhert from it"""
117
118 form_builder = FormBuilder
119 is_abstract = True # Don't display me in "Add"
120
121 def __init__(self, *args, **kwargs):
122 super(AbstractForm, self).__init__(*args, **kwargs)
123 if not hasattr(self, 'landing_page_template'):
124 template_wo_ext = re.match(HTML_EXTENSION_RE, self.template).group(1)
125 self.landing_page_template = template_wo_ext + '_landing.html'
126
127 class Meta:
128 abstract = True
129
130 def get_form_parameters(self):
131 return {}
132
133 def process_form_submission(self, form):
134 FormSubmission.objects.create(
135 form_data=json.dumps(form.cleaned_data),
136 page=self,
137 )
138
139 def serve(self, request):
140 fb = self.form_builder(self.form_fields.all())
141 form_class = fb.get_form_class()
142 form_params = self.get_form_parameters()
143
144 if request.method == 'POST':
145 form = form_class(request.POST, **form_params)
146
147 if form.is_valid():
148 self.process_form_submission(form)
149 # If we have a form_processing_backend call its process method
150 if hasattr(self, 'form_processing_backend'):
151 form_processor = self.form_processing_backend()
152 form_processor.process(self, form)
153
154 # render the landing_page
155 # TODO: It is much better to redirect to it
156 return render(request, self.landing_page_template, {
157 'self': self,
158 })
159 else:
160 form = form_class(**form_params)
161
162 return render(request, self.template, {
163 'self': self,
164 'form': form,
165 })
166
167 preview_modes = [
168 ('form', 'Form'),
169 ('landing', 'Landing page'),
170 ]
171
172 def serve_preview(self, request, mode):
173 if mode == 'landing':
174 return render(request, self.landing_page_template, {
175 'self': self,
176 })
177 else:
178 return super(AbstractForm, self).serve_preview(request, mode)
179
180
181 class AbstractEmailForm(AbstractForm):
182 """A Form Page that sends email. Pages implementing a form to be send to an email should inherit from it"""
183 is_abstract = True # Don't display me in "Add"
184
185 to_address = models.CharField(max_length=255, blank=True, help_text=_("Optional - form submissions will be emailed to this address"))
186 from_address = models.CharField(max_length=255, blank=True)
187 subject = models.CharField(max_length=255, blank=True)
188
189 def process_form_submission(self, form):
190 super(AbstractEmailForm, self).process_form_submission(form)
191
192 if self.to_address:
193 content = '\n'.join([x[1].label + ': ' + form.data.get(x[0]) for x in form.fields.items()])
194 tasks.send_email_task.delay(self.subject, content, [self.to_address], self.from_address,)
195
196
197 class Meta:
198 abstract = True
199
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wagtail/wagtailforms/models.py b/wagtail/wagtailforms/models.py
--- a/wagtail/wagtailforms/models.py
+++ b/wagtail/wagtailforms/models.py
@@ -10,6 +10,7 @@
from django.utils.translation import ugettext_lazy as _
from django.utils.text import slugify
from django.utils.encoding import python_2_unicode_compatible
+from django.core.serializers.json import DjangoJSONEncoder
from wagtail.wagtailcore.models import Page, Orderable, UserPagePermissionsProxy, get_page_types
from wagtail.wagtailadmin.edit_handlers import FieldPanel
@@ -132,7 +133,7 @@
def process_form_submission(self, form):
FormSubmission.objects.create(
- form_data=json.dumps(form.cleaned_data),
+ form_data=json.dumps(form.cleaned_data, cls=DjangoJSONEncoder),
page=self,
)
|
{"golden_diff": "diff --git a/wagtail/wagtailforms/models.py b/wagtail/wagtailforms/models.py\n--- a/wagtail/wagtailforms/models.py\n+++ b/wagtail/wagtailforms/models.py\n@@ -10,6 +10,7 @@\n from django.utils.translation import ugettext_lazy as _\n from django.utils.text import slugify\n from django.utils.encoding import python_2_unicode_compatible\n+from django.core.serializers.json import DjangoJSONEncoder\n \n from wagtail.wagtailcore.models import Page, Orderable, UserPagePermissionsProxy, get_page_types\n from wagtail.wagtailadmin.edit_handlers import FieldPanel\n@@ -132,7 +133,7 @@\n \n def process_form_submission(self, form):\n FormSubmission.objects.create(\n- form_data=json.dumps(form.cleaned_data),\n+ form_data=json.dumps(form.cleaned_data, cls=DjangoJSONEncoder),\n page=self,\n )\n", "issue": "Formbuilder number fields do not work in 0.8\nSteps to reproduce:\n\n1) Create a form and give it a field type set to 'number'.\n2) Submit the form with a number in the number field.\n\nResult:\n\n`Traceback (most recent call last):\n File \"/usr/local/django/virtualenvs/bhfnsdwagtail/local/lib/python2.7/site-packages/django/core/handlers/base.py\", line 111, in get_response\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\n File \"/usr/local/django/virtualenvs/bhfnsdwagtail/local/lib/python2.7/site-packages/wagtail/wagtailcore/views.py\", line 27, in serve\n return page.serve(request, *args, **kwargs)\n File \"/usr/local/django/virtualenvs/bhfnsdwagtail/local/lib/python2.7/site-packages/wagtail/wagtailforms/models.py\", line 148, in serve\n self.process_form_submission(form)\n File \"/usr/local/django/virtualenvs/bhfnsdwagtail/local/lib/python2.7/site-packages/wagtail/wagtailforms/models.py\", line 190, in process_form_submission\n super(AbstractEmailForm, self).process_form_submission(form)\n File \"/usr/local/django/virtualenvs/bhfnsdwagtail/local/lib/python2.7/site-packages/wagtail/wagtailforms/models.py\", line 135, in process_form_submission\n form_data=json.dumps(form.cleaned_data),\n File \"/usr/lib/python2.7/json/__init__.py\", line 231, in dumps\n return _default_encoder.encode(obj)\n File \"/usr/lib/python2.7/json/encoder.py\", line 200, in encode\n chunks = self.iterencode(o, _one_shot=True)\n File \"/usr/lib/python2.7/json/encoder.py\", line 263, in iterencode\n return _iterencode(o, 0)\n File \"/usr/lib/python2.7/json/encoder.py\", line 177, in default\n raise TypeError(repr(o) + \" is not JSON serializable\")\nTypeError: Decimal('5421515526') is not JSON serializable`\n\n", "before_files": [{"content": "import json\nimport re\n\nfrom six import text_type\n\nfrom unidecode import unidecode\n\nfrom django.db import models\nfrom django.shortcuts import render\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.utils.text import slugify\nfrom django.utils.encoding import python_2_unicode_compatible\n\nfrom wagtail.wagtailcore.models import Page, Orderable, UserPagePermissionsProxy, get_page_types\nfrom wagtail.wagtailadmin.edit_handlers import FieldPanel\nfrom wagtail.wagtailadmin import tasks\n\nfrom .forms import FormBuilder\n\n\nFORM_FIELD_CHOICES = (\n ('singleline', _('Single line text')),\n ('multiline', _('Multi-line text')),\n ('email', _('Email')),\n ('number', _('Number')),\n ('url', _('URL')),\n ('checkbox', _('Checkbox')),\n ('checkboxes', _('Checkboxes')),\n ('dropdown', _('Drop down')),\n ('radio', _('Radio buttons')),\n ('date', _('Date')),\n ('datetime', _('Date/time')),\n)\n\n\nHTML_EXTENSION_RE = 
re.compile(r\"(.*)\\.html\")\n\n\n@python_2_unicode_compatible\nclass FormSubmission(models.Model):\n \"\"\"Data for a Form submission.\"\"\"\n\n form_data = models.TextField()\n page = models.ForeignKey(Page)\n\n submit_time = models.DateTimeField(auto_now_add=True)\n\n def get_data(self):\n return json.loads(self.form_data)\n\n def __str__(self):\n return self.form_data\n\n\nclass AbstractFormField(Orderable):\n \"\"\"Database Fields required for building a Django Form field.\"\"\"\n\n label = models.CharField(\n max_length=255,\n help_text=_('The label of the form field')\n )\n field_type = models.CharField(max_length=16, choices=FORM_FIELD_CHOICES)\n required = models.BooleanField(default=True)\n choices = models.CharField(\n max_length=512,\n blank=True,\n help_text=_('Comma separated list of choices. Only applicable in checkboxes, radio and dropdown.')\n )\n default_value = models.CharField(\n max_length=255,\n blank=True,\n help_text=_('Default value. Comma separated values supported for checkboxes.')\n )\n help_text = models.CharField(max_length=255, blank=True)\n\n @property\n def clean_name(self):\n # unidecode will return an ascii string while slugify wants a\n # unicode string on the other hand, slugify returns a safe-string\n # which will be converted to a normal str\n return str(slugify(text_type(unidecode(self.label))))\n\n panels = [\n FieldPanel('label'),\n FieldPanel('help_text'),\n FieldPanel('required'),\n FieldPanel('field_type', classname=\"formbuilder-type\"),\n FieldPanel('choices', classname=\"formbuilder-choices\"),\n FieldPanel('default_value', classname=\"formbuilder-default\"),\n ]\n\n class Meta:\n abstract = True\n ordering = ['sort_order']\n\n\n_FORM_CONTENT_TYPES = None\n\ndef get_form_types():\n global _FORM_CONTENT_TYPES\n if _FORM_CONTENT_TYPES is None:\n _FORM_CONTENT_TYPES = [\n ct for ct in get_page_types()\n if issubclass(ct.model_class(), AbstractForm)\n ]\n return _FORM_CONTENT_TYPES\n\n\ndef get_forms_for_user(user):\n \"\"\"Return a queryset of form pages that this user is allowed to access the submissions for\"\"\"\n editable_pages = UserPagePermissionsProxy(user).editable_pages()\n return editable_pages.filter(content_type__in=get_form_types())\n\n\nclass AbstractForm(Page):\n \"\"\"A Form Page. 
Pages implementing a form should inhert from it\"\"\"\n\n form_builder = FormBuilder\n is_abstract = True # Don't display me in \"Add\"\n\n def __init__(self, *args, **kwargs):\n super(AbstractForm, self).__init__(*args, **kwargs)\n if not hasattr(self, 'landing_page_template'):\n template_wo_ext = re.match(HTML_EXTENSION_RE, self.template).group(1)\n self.landing_page_template = template_wo_ext + '_landing.html'\n\n class Meta:\n abstract = True\n\n def get_form_parameters(self):\n return {}\n\n def process_form_submission(self, form):\n FormSubmission.objects.create(\n form_data=json.dumps(form.cleaned_data),\n page=self,\n )\n\n def serve(self, request):\n fb = self.form_builder(self.form_fields.all())\n form_class = fb.get_form_class()\n form_params = self.get_form_parameters()\n\n if request.method == 'POST':\n form = form_class(request.POST, **form_params)\n\n if form.is_valid():\n self.process_form_submission(form)\n # If we have a form_processing_backend call its process method\n if hasattr(self, 'form_processing_backend'):\n form_processor = self.form_processing_backend()\n form_processor.process(self, form)\n\n # render the landing_page\n # TODO: It is much better to redirect to it\n return render(request, self.landing_page_template, {\n 'self': self,\n })\n else:\n form = form_class(**form_params)\n\n return render(request, self.template, {\n 'self': self,\n 'form': form,\n })\n\n preview_modes = [\n ('form', 'Form'),\n ('landing', 'Landing page'),\n ]\n\n def serve_preview(self, request, mode):\n if mode == 'landing':\n return render(request, self.landing_page_template, {\n 'self': self,\n })\n else:\n return super(AbstractForm, self).serve_preview(request, mode)\n\n\nclass AbstractEmailForm(AbstractForm):\n \"\"\"A Form Page that sends email. 
Pages implementing a form to be send to an email should inherit from it\"\"\"\n is_abstract = True # Don't display me in \"Add\"\n\n to_address = models.CharField(max_length=255, blank=True, help_text=_(\"Optional - form submissions will be emailed to this address\"))\n from_address = models.CharField(max_length=255, blank=True)\n subject = models.CharField(max_length=255, blank=True)\n\n def process_form_submission(self, form):\n super(AbstractEmailForm, self).process_form_submission(form)\n\n if self.to_address:\n content = '\\n'.join([x[1].label + ': ' + form.data.get(x[0]) for x in form.fields.items()])\n tasks.send_email_task.delay(self.subject, content, [self.to_address], self.from_address,)\n\n\n class Meta:\n abstract = True\n", "path": "wagtail/wagtailforms/models.py"}], "after_files": [{"content": "import json\nimport re\n\nfrom six import text_type\n\nfrom unidecode import unidecode\n\nfrom django.db import models\nfrom django.shortcuts import render\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.utils.text import slugify\nfrom django.utils.encoding import python_2_unicode_compatible\nfrom django.core.serializers.json import DjangoJSONEncoder\n\nfrom wagtail.wagtailcore.models import Page, Orderable, UserPagePermissionsProxy, get_page_types\nfrom wagtail.wagtailadmin.edit_handlers import FieldPanel\nfrom wagtail.wagtailadmin import tasks\n\nfrom .forms import FormBuilder\n\n\nFORM_FIELD_CHOICES = (\n ('singleline', _('Single line text')),\n ('multiline', _('Multi-line text')),\n ('email', _('Email')),\n ('number', _('Number')),\n ('url', _('URL')),\n ('checkbox', _('Checkbox')),\n ('checkboxes', _('Checkboxes')),\n ('dropdown', _('Drop down')),\n ('radio', _('Radio buttons')),\n ('date', _('Date')),\n ('datetime', _('Date/time')),\n)\n\n\nHTML_EXTENSION_RE = re.compile(r\"(.*)\\.html\")\n\n\n@python_2_unicode_compatible\nclass FormSubmission(models.Model):\n \"\"\"Data for a Form submission.\"\"\"\n\n form_data = models.TextField()\n page = models.ForeignKey(Page)\n\n submit_time = models.DateTimeField(auto_now_add=True)\n\n def get_data(self):\n return json.loads(self.form_data)\n\n def __str__(self):\n return self.form_data\n\n\nclass AbstractFormField(Orderable):\n \"\"\"Database Fields required for building a Django Form field.\"\"\"\n\n label = models.CharField(\n max_length=255,\n help_text=_('The label of the form field')\n )\n field_type = models.CharField(max_length=16, choices=FORM_FIELD_CHOICES)\n required = models.BooleanField(default=True)\n choices = models.CharField(\n max_length=512,\n blank=True,\n help_text=_('Comma separated list of choices. Only applicable in checkboxes, radio and dropdown.')\n )\n default_value = models.CharField(\n max_length=255,\n blank=True,\n help_text=_('Default value. 
Comma separated values supported for checkboxes.')\n )\n help_text = models.CharField(max_length=255, blank=True)\n\n @property\n def clean_name(self):\n # unidecode will return an ascii string while slugify wants a\n # unicode string on the other hand, slugify returns a safe-string\n # which will be converted to a normal str\n return str(slugify(text_type(unidecode(self.label))))\n\n panels = [\n FieldPanel('label'),\n FieldPanel('help_text'),\n FieldPanel('required'),\n FieldPanel('field_type', classname=\"formbuilder-type\"),\n FieldPanel('choices', classname=\"formbuilder-choices\"),\n FieldPanel('default_value', classname=\"formbuilder-default\"),\n ]\n\n class Meta:\n abstract = True\n ordering = ['sort_order']\n\n\n_FORM_CONTENT_TYPES = None\n\ndef get_form_types():\n global _FORM_CONTENT_TYPES\n if _FORM_CONTENT_TYPES is None:\n _FORM_CONTENT_TYPES = [\n ct for ct in get_page_types()\n if issubclass(ct.model_class(), AbstractForm)\n ]\n return _FORM_CONTENT_TYPES\n\n\ndef get_forms_for_user(user):\n \"\"\"Return a queryset of form pages that this user is allowed to access the submissions for\"\"\"\n editable_pages = UserPagePermissionsProxy(user).editable_pages()\n return editable_pages.filter(content_type__in=get_form_types())\n\n\nclass AbstractForm(Page):\n \"\"\"A Form Page. Pages implementing a form should inhert from it\"\"\"\n\n form_builder = FormBuilder\n is_abstract = True # Don't display me in \"Add\"\n\n def __init__(self, *args, **kwargs):\n super(AbstractForm, self).__init__(*args, **kwargs)\n if not hasattr(self, 'landing_page_template'):\n template_wo_ext = re.match(HTML_EXTENSION_RE, self.template).group(1)\n self.landing_page_template = template_wo_ext + '_landing.html'\n\n class Meta:\n abstract = True\n\n def get_form_parameters(self):\n return {}\n\n def process_form_submission(self, form):\n FormSubmission.objects.create(\n form_data=json.dumps(form.cleaned_data, cls=DjangoJSONEncoder),\n page=self,\n )\n\n def serve(self, request):\n fb = self.form_builder(self.form_fields.all())\n form_class = fb.get_form_class()\n form_params = self.get_form_parameters()\n\n if request.method == 'POST':\n form = form_class(request.POST, **form_params)\n\n if form.is_valid():\n self.process_form_submission(form)\n # If we have a form_processing_backend call its process method\n if hasattr(self, 'form_processing_backend'):\n form_processor = self.form_processing_backend()\n form_processor.process(self, form)\n\n # render the landing_page\n # TODO: It is much better to redirect to it\n return render(request, self.landing_page_template, {\n 'self': self,\n })\n else:\n form = form_class(**form_params)\n\n return render(request, self.template, {\n 'self': self,\n 'form': form,\n })\n\n preview_modes = [\n ('form', 'Form'),\n ('landing', 'Landing page'),\n ]\n\n def serve_preview(self, request, mode):\n if mode == 'landing':\n return render(request, self.landing_page_template, {\n 'self': self,\n })\n else:\n return super(AbstractForm, self).serve_preview(request, mode)\n\n\nclass AbstractEmailForm(AbstractForm):\n \"\"\"A Form Page that sends email. 
Pages implementing a form to be send to an email should inherit from it\"\"\"\n is_abstract = True # Don't display me in \"Add\"\n\n to_address = models.CharField(max_length=255, blank=True, help_text=_(\"Optional - form submissions will be emailed to this address\"))\n from_address = models.CharField(max_length=255, blank=True)\n subject = models.CharField(max_length=255, blank=True)\n\n def process_form_submission(self, form):\n super(AbstractEmailForm, self).process_form_submission(form)\n\n if self.to_address:\n content = '\\n'.join([x[1].label + ': ' + form.data.get(x[0]) for x in form.fields.items()])\n tasks.send_email_task.delay(self.subject, content, [self.to_address], self.from_address,)\n\n\n class Meta:\n abstract = True\n", "path": "wagtail/wagtailforms/models.py"}]}
| 2,649 | 200 |
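A minimal sketch of the Decimal failure from the traceback above and of the encoder-based fix: DjangoJSONEncoder, which the golden diff switches to, falls back to str() for Decimal values, and the stdlib-only encoder below mirrors that behaviour. The field name is illustrative; the value is taken from the traceback.

```python
import json
from decimal import Decimal

# The value comes from the traceback above; the field name is made up for the sketch.
form_data = {"number_field": Decimal("5421515526")}

# Plain json.dumps cannot handle Decimal and raises the TypeError shown in the issue.
try:
    json.dumps(form_data)
except TypeError as exc:
    print(exc)


# DjangoJSONEncoder (used by the patch) falls back to str() for Decimal;
# this stdlib-only encoder mirrors that behaviour for illustration.
class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, Decimal):
            return str(o)
        return super().default(o)


print(json.dumps(form_data, cls=DecimalEncoder))  # {"number_field": "5421515526"}
```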
gh_patches_debug_39363 | rasdani/github-patches | git_diff | deepset-ai__haystack-6564 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
feat: Add embed_meta_fields support to rankers like in Haystack v1
**Is your feature request related to a problem? Please describe.**
We have found that `embed_meta_fields` greatly improves ranking when metadata is relevant while searching. This is a feature that we use a lot from v1 that would be great to also have in v2.
**Describe the solution you'd like**
Add support for `embed_meta_fields` as an init parameter to rankers like `TransformersSimilarityRanker`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `haystack/components/rankers/transformers_similarity.py`
Content:
```
1 import logging
2 from pathlib import Path
3 from typing import List, Union, Dict, Any, Optional
4
5 from haystack import ComponentError, Document, component, default_to_dict
6 from haystack.lazy_imports import LazyImport
7
8 logger = logging.getLogger(__name__)
9
10
11 with LazyImport(message="Run 'pip install transformers[torch,sentencepiece]'") as torch_and_transformers_import:
12 import torch
13 from transformers import AutoModelForSequenceClassification, AutoTokenizer
14
15
16 @component
17 class TransformersSimilarityRanker:
18 """
19 Ranks Documents based on their similarity to the query.
20 It uses a pre-trained cross-encoder model (from the Hugging Face Hub) to embed the query and the Documents.
21
22 Usage example:
23 ```
24 from haystack import Document
25 from haystack.components.rankers import TransformersSimilarityRanker
26
27 ranker = TransformersSimilarityRanker()
28 docs = [Document(content="Paris"), Document(content="Berlin")]
29 query = "City in Germany"
30 output = ranker.run(query=query, documents=docs)
31 docs = output["documents"]
32 assert len(docs) == 2
33 assert docs[0].content == "Berlin"
34 ```
35 """
36
37 def __init__(
38 self,
39 model_name_or_path: Union[str, Path] = "cross-encoder/ms-marco-MiniLM-L-6-v2",
40 device: str = "cpu",
41 token: Union[bool, str, None] = None,
42 top_k: int = 10,
43 ):
44 """
45 Creates an instance of TransformersSimilarityRanker.
46
47 :param model_name_or_path: The name or path of a pre-trained cross-encoder model
48 from the Hugging Face Hub.
49 :param device: The torch device (for example, cuda:0, cpu, mps) to which you want to limit model inference.
50 :param token: The API token used to download private models from Hugging Face.
51 If this parameter is set to `True`, the token generated when running
52 `transformers-cli login` (stored in ~/.huggingface) is used.
53 :param top_k: The maximum number of Documents to return per query.
54 """
55 torch_and_transformers_import.check()
56
57 self.model_name_or_path = model_name_or_path
58 if top_k <= 0:
59 raise ValueError(f"top_k must be > 0, but got {top_k}")
60 self.top_k = top_k
61 self.device = device
62 self.token = token
63 self.model = None
64 self.tokenizer = None
65
66 def _get_telemetry_data(self) -> Dict[str, Any]:
67 """
68 Data that is sent to Posthog for usage analytics.
69 """
70 return {"model": str(self.model_name_or_path)}
71
72 def warm_up(self):
73 """
74 Warm up the model and tokenizer used for scoring the Documents.
75 """
76 if self.model_name_or_path and not self.model:
77 self.model = AutoModelForSequenceClassification.from_pretrained(self.model_name_or_path, token=self.token)
78 self.model = self.model.to(self.device)
79 self.model.eval()
80 self.tokenizer = AutoTokenizer.from_pretrained(self.model_name_or_path, token=self.token)
81
82 def to_dict(self) -> Dict[str, Any]:
83 """
84 Serialize this component to a dictionary.
85 """
86 return default_to_dict(
87 self,
88 device=self.device,
89 model_name_or_path=self.model_name_or_path,
90 token=self.token if not isinstance(self.token, str) else None, # don't serialize valid tokens
91 top_k=self.top_k,
92 )
93
94 @component.output_types(documents=List[Document])
95 def run(self, query: str, documents: List[Document], top_k: Optional[int] = None):
96 """
97 Returns a list of Documents ranked by their similarity to the given query.
98
99 :param query: Query string.
100 :param documents: List of Documents.
101 :param top_k: The maximum number of Documents you want the Ranker to return.
102 :return: List of Documents sorted by their similarity to the query with the most similar Documents appearing first.
103 """
104 if not documents:
105 return {"documents": []}
106
107 if top_k is None:
108 top_k = self.top_k
109
110 elif top_k <= 0:
111 raise ValueError(f"top_k must be > 0, but got {top_k}")
112
113 # If a model path is provided but the model isn't loaded
114 if self.model_name_or_path and not self.model:
115 raise ComponentError(
116 f"The component {self.__class__.__name__} wasn't warmed up. Run 'warm_up()' before calling 'run()'."
117 )
118
119 query_doc_pairs = [[query, doc.content] for doc in documents]
120 features = self.tokenizer(
121 query_doc_pairs, padding=True, truncation=True, return_tensors="pt"
122 ).to( # type: ignore
123 self.device
124 )
125 with torch.inference_mode():
126 similarity_scores = self.model(**features).logits.squeeze(dim=1) # type: ignore
127
128 _, sorted_indices = torch.sort(similarity_scores, descending=True)
129 ranked_docs = []
130 for sorted_index_tensor in sorted_indices:
131 i = sorted_index_tensor.item()
132 documents[i].score = similarity_scores[i].item()
133 ranked_docs.append(documents[i])
134 return {"documents": ranked_docs[:top_k]}
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/haystack/components/rankers/transformers_similarity.py b/haystack/components/rankers/transformers_similarity.py
--- a/haystack/components/rankers/transformers_similarity.py
+++ b/haystack/components/rankers/transformers_similarity.py
@@ -40,6 +40,8 @@
device: str = "cpu",
token: Union[bool, str, None] = None,
top_k: int = 10,
+ meta_fields_to_embed: Optional[List[str]] = None,
+ embedding_separator: str = "\n",
):
"""
Creates an instance of TransformersSimilarityRanker.
@@ -51,6 +53,8 @@
If this parameter is set to `True`, the token generated when running
`transformers-cli login` (stored in ~/.huggingface) is used.
:param top_k: The maximum number of Documents to return per query.
+ :param meta_fields_to_embed: List of meta fields that should be embedded along with the Document content.
+ :param embedding_separator: Separator used to concatenate the meta fields to the Document content.
"""
torch_and_transformers_import.check()
@@ -62,6 +66,8 @@
self.token = token
self.model = None
self.tokenizer = None
+ self.meta_fields_to_embed = meta_fields_to_embed or []
+ self.embedding_separator = embedding_separator
def _get_telemetry_data(self) -> Dict[str, Any]:
"""
@@ -89,6 +95,8 @@
model_name_or_path=self.model_name_or_path,
token=self.token if not isinstance(self.token, str) else None, # don't serialize valid tokens
top_k=self.top_k,
+ meta_fields_to_embed=self.meta_fields_to_embed,
+ embedding_separator=self.embedding_separator,
)
@component.output_types(documents=List[Document])
@@ -116,7 +124,14 @@
f"The component {self.__class__.__name__} wasn't warmed up. Run 'warm_up()' before calling 'run()'."
)
- query_doc_pairs = [[query, doc.content] for doc in documents]
+ query_doc_pairs = []
+ for doc in documents:
+ meta_values_to_embed = [
+ str(doc.meta[key]) for key in self.meta_fields_to_embed if key in doc.meta and doc.meta[key]
+ ]
+ text_to_embed = self.embedding_separator.join(meta_values_to_embed + [doc.content or ""])
+ query_doc_pairs.append([query, text_to_embed])
+
features = self.tokenizer(
query_doc_pairs, padding=True, truncation=True, return_tensors="pt"
).to( # type: ignore
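For reference, here is a minimal usage sketch of `TransformersSimilarityRanker` with the two parameters introduced by the diff above (`meta_fields_to_embed` and `embedding_separator`). It assumes the patch has been applied and that the default cross-encoder weights can be downloaded from the Hugging Face Hub; the `title` meta field and the example documents are illustrative assumptions.

```python
# Hypothetical usage of the patched ranker; `title` is an assumed meta field.
from haystack import Document
from haystack.components.rankers import TransformersSimilarityRanker

docs = [
    Document(content="Berlin is the capital of Germany.", meta={"title": "Berlin"}),
    Document(content="Paris is the capital of France.", meta={"title": "Paris"}),
]

# The listed meta values are prepended to each Document's content,
# joined by `embedding_separator`, before each query/document pair is scored.
ranker = TransformersSimilarityRanker(
    meta_fields_to_embed=["title"],
    embedding_separator="\n",
    top_k=2,
)
ranker.warm_up()  # loads the cross-encoder model and tokenizer

result = ranker.run(query="City in Germany", documents=docs)
for doc in result["documents"]:
    print(doc.meta["title"], doc.score)
```

Placing the meta text in front of the content mirrors how the patch builds `text_to_embed`, so a field such as a title can influence the cross-encoder score without changing the stored `Document.content`.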
|
{"golden_diff": "diff --git a/haystack/components/rankers/transformers_similarity.py b/haystack/components/rankers/transformers_similarity.py\n--- a/haystack/components/rankers/transformers_similarity.py\n+++ b/haystack/components/rankers/transformers_similarity.py\n@@ -40,6 +40,8 @@\n device: str = \"cpu\",\n token: Union[bool, str, None] = None,\n top_k: int = 10,\n+ meta_fields_to_embed: Optional[List[str]] = None,\n+ embedding_separator: str = \"\\n\",\n ):\n \"\"\"\n Creates an instance of TransformersSimilarityRanker.\n@@ -51,6 +53,8 @@\n If this parameter is set to `True`, the token generated when running\n `transformers-cli login` (stored in ~/.huggingface) is used.\n :param top_k: The maximum number of Documents to return per query.\n+ :param meta_fields_to_embed: List of meta fields that should be embedded along with the Document content.\n+ :param embedding_separator: Separator used to concatenate the meta fields to the Document content.\n \"\"\"\n torch_and_transformers_import.check()\n \n@@ -62,6 +66,8 @@\n self.token = token\n self.model = None\n self.tokenizer = None\n+ self.meta_fields_to_embed = meta_fields_to_embed or []\n+ self.embedding_separator = embedding_separator\n \n def _get_telemetry_data(self) -> Dict[str, Any]:\n \"\"\"\n@@ -89,6 +95,8 @@\n model_name_or_path=self.model_name_or_path,\n token=self.token if not isinstance(self.token, str) else None, # don't serialize valid tokens\n top_k=self.top_k,\n+ meta_fields_to_embed=self.meta_fields_to_embed,\n+ embedding_separator=self.embedding_separator,\n )\n \n @component.output_types(documents=List[Document])\n@@ -116,7 +124,14 @@\n f\"The component {self.__class__.__name__} wasn't warmed up. Run 'warm_up()' before calling 'run()'.\"\n )\n \n- query_doc_pairs = [[query, doc.content] for doc in documents]\n+ query_doc_pairs = []\n+ for doc in documents:\n+ meta_values_to_embed = [\n+ str(doc.meta[key]) for key in self.meta_fields_to_embed if key in doc.meta and doc.meta[key]\n+ ]\n+ text_to_embed = self.embedding_separator.join(meta_values_to_embed + [doc.content or \"\"])\n+ query_doc_pairs.append([query, text_to_embed])\n+\n features = self.tokenizer(\n query_doc_pairs, padding=True, truncation=True, return_tensors=\"pt\"\n ).to( # type: ignore\n", "issue": "feat: Add embed_meta_fields support to rankers like in Haystack v1\n**Is your feature request related to a problem? Please describe.**\r\nWe have found that `embed_meta_fields` greatly improves ranking when metadata is relevant while searching. 
This is a feature that we use a lot from v1 that would be great to also have in v2.\r\n\r\n**Describe the solution you'd like**\r\nAdd support for `embed_meta_fields` as an init parameter to rankers like `TransformersSimilarityRanker`.\r\n\n", "before_files": [{"content": "import logging\nfrom pathlib import Path\nfrom typing import List, Union, Dict, Any, Optional\n\nfrom haystack import ComponentError, Document, component, default_to_dict\nfrom haystack.lazy_imports import LazyImport\n\nlogger = logging.getLogger(__name__)\n\n\nwith LazyImport(message=\"Run 'pip install transformers[torch,sentencepiece]'\") as torch_and_transformers_import:\n import torch\n from transformers import AutoModelForSequenceClassification, AutoTokenizer\n\n\n@component\nclass TransformersSimilarityRanker:\n \"\"\"\n Ranks Documents based on their similarity to the query.\n It uses a pre-trained cross-encoder model (from the Hugging Face Hub) to embed the query and the Documents.\n\n Usage example:\n ```\n from haystack import Document\n from haystack.components.rankers import TransformersSimilarityRanker\n\n ranker = TransformersSimilarityRanker()\n docs = [Document(content=\"Paris\"), Document(content=\"Berlin\")]\n query = \"City in Germany\"\n output = ranker.run(query=query, documents=docs)\n docs = output[\"documents\"]\n assert len(docs) == 2\n assert docs[0].content == \"Berlin\"\n ```\n \"\"\"\n\n def __init__(\n self,\n model_name_or_path: Union[str, Path] = \"cross-encoder/ms-marco-MiniLM-L-6-v2\",\n device: str = \"cpu\",\n token: Union[bool, str, None] = None,\n top_k: int = 10,\n ):\n \"\"\"\n Creates an instance of TransformersSimilarityRanker.\n\n :param model_name_or_path: The name or path of a pre-trained cross-encoder model\n from the Hugging Face Hub.\n :param device: The torch device (for example, cuda:0, cpu, mps) to which you want to limit model inference.\n :param token: The API token used to download private models from Hugging Face.\n If this parameter is set to `True`, the token generated when running\n `transformers-cli login` (stored in ~/.huggingface) is used.\n :param top_k: The maximum number of Documents to return per query.\n \"\"\"\n torch_and_transformers_import.check()\n\n self.model_name_or_path = model_name_or_path\n if top_k <= 0:\n raise ValueError(f\"top_k must be > 0, but got {top_k}\")\n self.top_k = top_k\n self.device = device\n self.token = token\n self.model = None\n self.tokenizer = None\n\n def _get_telemetry_data(self) -> Dict[str, Any]:\n \"\"\"\n Data that is sent to Posthog for usage analytics.\n \"\"\"\n return {\"model\": str(self.model_name_or_path)}\n\n def warm_up(self):\n \"\"\"\n Warm up the model and tokenizer used for scoring the Documents.\n \"\"\"\n if self.model_name_or_path and not self.model:\n self.model = AutoModelForSequenceClassification.from_pretrained(self.model_name_or_path, token=self.token)\n self.model = self.model.to(self.device)\n self.model.eval()\n self.tokenizer = AutoTokenizer.from_pretrained(self.model_name_or_path, token=self.token)\n\n def to_dict(self) -> Dict[str, Any]:\n \"\"\"\n Serialize this component to a dictionary.\n \"\"\"\n return default_to_dict(\n self,\n device=self.device,\n model_name_or_path=self.model_name_or_path,\n token=self.token if not isinstance(self.token, str) else None, # don't serialize valid tokens\n top_k=self.top_k,\n )\n\n @component.output_types(documents=List[Document])\n def run(self, query: str, documents: List[Document], top_k: Optional[int] = None):\n \"\"\"\n Returns a list of Documents 
ranked by their similarity to the given query.\n\n :param query: Query string.\n :param documents: List of Documents.\n :param top_k: The maximum number of Documents you want the Ranker to return.\n :return: List of Documents sorted by their similarity to the query with the most similar Documents appearing first.\n \"\"\"\n if not documents:\n return {\"documents\": []}\n\n if top_k is None:\n top_k = self.top_k\n\n elif top_k <= 0:\n raise ValueError(f\"top_k must be > 0, but got {top_k}\")\n\n # If a model path is provided but the model isn't loaded\n if self.model_name_or_path and not self.model:\n raise ComponentError(\n f\"The component {self.__class__.__name__} wasn't warmed up. Run 'warm_up()' before calling 'run()'.\"\n )\n\n query_doc_pairs = [[query, doc.content] for doc in documents]\n features = self.tokenizer(\n query_doc_pairs, padding=True, truncation=True, return_tensors=\"pt\"\n ).to( # type: ignore\n self.device\n )\n with torch.inference_mode():\n similarity_scores = self.model(**features).logits.squeeze(dim=1) # type: ignore\n\n _, sorted_indices = torch.sort(similarity_scores, descending=True)\n ranked_docs = []\n for sorted_index_tensor in sorted_indices:\n i = sorted_index_tensor.item()\n documents[i].score = similarity_scores[i].item()\n ranked_docs.append(documents[i])\n return {\"documents\": ranked_docs[:top_k]}\n", "path": "haystack/components/rankers/transformers_similarity.py"}], "after_files": [{"content": "import logging\nfrom pathlib import Path\nfrom typing import List, Union, Dict, Any, Optional\n\nfrom haystack import ComponentError, Document, component, default_to_dict\nfrom haystack.lazy_imports import LazyImport\n\nlogger = logging.getLogger(__name__)\n\n\nwith LazyImport(message=\"Run 'pip install transformers[torch,sentencepiece]'\") as torch_and_transformers_import:\n import torch\n from transformers import AutoModelForSequenceClassification, AutoTokenizer\n\n\n@component\nclass TransformersSimilarityRanker:\n \"\"\"\n Ranks Documents based on their similarity to the query.\n It uses a pre-trained cross-encoder model (from the Hugging Face Hub) to embed the query and the Documents.\n\n Usage example:\n ```\n from haystack import Document\n from haystack.components.rankers import TransformersSimilarityRanker\n\n ranker = TransformersSimilarityRanker()\n docs = [Document(content=\"Paris\"), Document(content=\"Berlin\")]\n query = \"City in Germany\"\n output = ranker.run(query=query, documents=docs)\n docs = output[\"documents\"]\n assert len(docs) == 2\n assert docs[0].content == \"Berlin\"\n ```\n \"\"\"\n\n def __init__(\n self,\n model_name_or_path: Union[str, Path] = \"cross-encoder/ms-marco-MiniLM-L-6-v2\",\n device: str = \"cpu\",\n token: Union[bool, str, None] = None,\n top_k: int = 10,\n meta_fields_to_embed: Optional[List[str]] = None,\n embedding_separator: str = \"\\n\",\n ):\n \"\"\"\n Creates an instance of TransformersSimilarityRanker.\n\n :param model_name_or_path: The name or path of a pre-trained cross-encoder model\n from the Hugging Face Hub.\n :param device: The torch device (for example, cuda:0, cpu, mps) to which you want to limit model inference.\n :param token: The API token used to download private models from Hugging Face.\n If this parameter is set to `True`, the token generated when running\n `transformers-cli login` (stored in ~/.huggingface) is used.\n :param top_k: The maximum number of Documents to return per query.\n :param meta_fields_to_embed: List of meta fields that should be embedded along with the Document 
content.\n :param embedding_separator: Separator used to concatenate the meta fields to the Document content.\n \"\"\"\n torch_and_transformers_import.check()\n\n self.model_name_or_path = model_name_or_path\n if top_k <= 0:\n raise ValueError(f\"top_k must be > 0, but got {top_k}\")\n self.top_k = top_k\n self.device = device\n self.token = token\n self.model = None\n self.tokenizer = None\n self.meta_fields_to_embed = meta_fields_to_embed or []\n self.embedding_separator = embedding_separator\n\n def _get_telemetry_data(self) -> Dict[str, Any]:\n \"\"\"\n Data that is sent to Posthog for usage analytics.\n \"\"\"\n return {\"model\": str(self.model_name_or_path)}\n\n def warm_up(self):\n \"\"\"\n Warm up the model and tokenizer used for scoring the Documents.\n \"\"\"\n if self.model_name_or_path and not self.model:\n self.model = AutoModelForSequenceClassification.from_pretrained(self.model_name_or_path, token=self.token)\n self.model = self.model.to(self.device)\n self.model.eval()\n self.tokenizer = AutoTokenizer.from_pretrained(self.model_name_or_path, token=self.token)\n\n def to_dict(self) -> Dict[str, Any]:\n \"\"\"\n Serialize this component to a dictionary.\n \"\"\"\n return default_to_dict(\n self,\n device=self.device,\n model_name_or_path=self.model_name_or_path,\n token=self.token if not isinstance(self.token, str) else None, # don't serialize valid tokens\n top_k=self.top_k,\n meta_fields_to_embed=self.meta_fields_to_embed,\n embedding_separator=self.embedding_separator,\n )\n\n @component.output_types(documents=List[Document])\n def run(self, query: str, documents: List[Document], top_k: Optional[int] = None):\n \"\"\"\n Returns a list of Documents ranked by their similarity to the given query.\n\n :param query: Query string.\n :param documents: List of Documents.\n :param top_k: The maximum number of Documents you want the Ranker to return.\n :return: List of Documents sorted by their similarity to the query with the most similar Documents appearing first.\n \"\"\"\n if not documents:\n return {\"documents\": []}\n\n if top_k is None:\n top_k = self.top_k\n\n elif top_k <= 0:\n raise ValueError(f\"top_k must be > 0, but got {top_k}\")\n\n # If a model path is provided but the model isn't loaded\n if self.model_name_or_path and not self.model:\n raise ComponentError(\n f\"The component {self.__class__.__name__} wasn't warmed up. Run 'warm_up()' before calling 'run()'.\"\n )\n\n query_doc_pairs = []\n for doc in documents:\n meta_values_to_embed = [\n str(doc.meta[key]) for key in self.meta_fields_to_embed if key in doc.meta and doc.meta[key]\n ]\n text_to_embed = self.embedding_separator.join(meta_values_to_embed + [doc.content or \"\"])\n query_doc_pairs.append([query, text_to_embed])\n\n features = self.tokenizer(\n query_doc_pairs, padding=True, truncation=True, return_tensors=\"pt\"\n ).to( # type: ignore\n self.device\n )\n with torch.inference_mode():\n similarity_scores = self.model(**features).logits.squeeze(dim=1) # type: ignore\n\n _, sorted_indices = torch.sort(similarity_scores, descending=True)\n ranked_docs = []\n for sorted_index_tensor in sorted_indices:\n i = sorted_index_tensor.item()\n documents[i].score = similarity_scores[i].item()\n ranked_docs.append(documents[i])\n return {\"documents\": ranked_docs[:top_k]}\n", "path": "haystack/components/rankers/transformers_similarity.py"}]}
| 1,824 | 601 |