problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-10.2k) | golden_diff (stringlengths 151-4.94k) | verification_info (stringlengths 582-21k) | num_tokens (int64 271-2.05k) | num_tokens_diff (int64 47-1.02k)
---|---|---|---|---|---|---|---|---|
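The rows below can also be pulled programmatically rather than read from this dump; a minimal sketch, assuming the `source` value `rasdani/github-patches` is also the dataset id on the Hub and that the default split is named `train` (both are assumptions, not stated in the dump):

```python
from datasets import load_dataset

# Assumption: the dataset id matches the `source` column and has a "train" split.
ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["num_tokens"], row["num_tokens_diff"])
print(row["prompt"][:200])       # issue text plus the relevant file segments
print(row["golden_diff"][:200])  # reference patch in git diff format
```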
gh_patches_debug_60955
|
rasdani/github-patches
|
git_diff
|
voicepaw__so-vits-svc-fork-1155
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ensure proper sorting by name for output files
### Is your feature request related to a problem? Please describe.
To enhance readability and ensure proper sorting by name, the numeric part of output file names should have a fixed width. This can be achieved by adding leading zeros to the numeric part, with four digits likely being sufficient.
### Describe alternatives you've considered
I don't have any.
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
### Are you willing to resolve this issue by submitting a Pull Request?
- [X] Yes, I have the time, and I know how to start.
- [ ] Yes, I have the time, but I don't know how to start. I would need guidance.
- [ ] No, I don't have the time, although I believe I could do it if I had the time...
- [ ] No, I don't have the time and I wouldn't even know how to start.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/so_vits_svc_fork/preprocessing/preprocess_speaker_diarization.py`
Content:
```
1 from __future__ import annotations
2
3 from collections import defaultdict
4 from logging import getLogger
5 from pathlib import Path
6
7 import librosa
8 import soundfile as sf
9 import torch
10 from joblib import Parallel, delayed
11 from pyannote.audio import Pipeline
12 from tqdm import tqdm
13 from tqdm_joblib import tqdm_joblib
14
15 LOG = getLogger(__name__)
16
17
18 def _process_one(
19 input_path: Path,
20 output_dir: Path,
21 sr: int,
22 *,
23 min_speakers: int = 1,
24 max_speakers: int = 1,
25 huggingface_token: str | None = None,
26 ) -> None:
27 try:
28 audio, sr = librosa.load(input_path, sr=sr, mono=True)
29 except Exception as e:
30 LOG.warning(f"Failed to read {input_path}: {e}")
31 return
32 pipeline = Pipeline.from_pretrained(
33 "pyannote/speaker-diarization", use_auth_token=huggingface_token
34 )
35 if pipeline is None:
36 raise ValueError("Failed to load pipeline")
37 pipeline = pipeline.to(torch.device("cuda"))
38 LOG.info(f"Processing {input_path}. This may take a while...")
39 diarization = pipeline(
40 input_path, min_speakers=min_speakers, max_speakers=max_speakers
41 )
42
43 LOG.info(f"Found {len(diarization)} tracks, writing to {output_dir}")
44 speaker_count = defaultdict(int)
45
46 output_dir.mkdir(parents=True, exist_ok=True)
47 for segment, track, speaker in tqdm(
48 list(diarization.itertracks(yield_label=True)), desc=f"Writing {input_path}"
49 ):
50 if segment.end - segment.start < 1:
51 continue
52 speaker_count[speaker] += 1
53 audio_cut = audio[int(segment.start * sr) : int(segment.end * sr)]
54 sf.write(
55 (output_dir / f"{speaker}_{speaker_count[speaker]}.wav"),
56 audio_cut,
57 sr,
58 )
59
60 LOG.info(f"Speaker count: {speaker_count}")
61
62
63 def preprocess_speaker_diarization(
64 input_dir: Path | str,
65 output_dir: Path | str,
66 sr: int,
67 *,
68 min_speakers: int = 1,
69 max_speakers: int = 1,
70 huggingface_token: str | None = None,
71 n_jobs: int = -1,
72 ) -> None:
73 if huggingface_token is not None and not huggingface_token.startswith("hf_"):
74 LOG.warning("Huggingface token probably should start with hf_")
75 if not torch.cuda.is_available():
76 LOG.warning("CUDA is not available. This will be extremely slow.")
77 input_dir = Path(input_dir)
78 output_dir = Path(output_dir)
79 input_dir.mkdir(parents=True, exist_ok=True)
80 output_dir.mkdir(parents=True, exist_ok=True)
81 input_paths = list(input_dir.rglob("*.*"))
82 with tqdm_joblib(desc="Preprocessing speaker diarization", total=len(input_paths)):
83 Parallel(n_jobs=n_jobs)(
84 delayed(_process_one)(
85 input_path,
86 output_dir / input_path.relative_to(input_dir).parent / input_path.stem,
87 sr,
88 max_speakers=max_speakers,
89 min_speakers=min_speakers,
90 huggingface_token=huggingface_token,
91 )
92 for input_path in input_paths
93 )
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/so_vits_svc_fork/preprocessing/preprocess_speaker_diarization.py b/src/so_vits_svc_fork/preprocessing/preprocess_speaker_diarization.py
--- a/src/so_vits_svc_fork/preprocessing/preprocess_speaker_diarization.py
+++ b/src/so_vits_svc_fork/preprocessing/preprocess_speaker_diarization.py
@@ -52,7 +52,7 @@
speaker_count[speaker] += 1
audio_cut = audio[int(segment.start * sr) : int(segment.end * sr)]
sf.write(
- (output_dir / f"{speaker}_{speaker_count[speaker]}.wav"),
+ (output_dir / f"{speaker}_{speaker_count[speaker]:04d}.wav"),
audio_cut,
sr,
)
|
{"golden_diff": "diff --git a/src/so_vits_svc_fork/preprocessing/preprocess_speaker_diarization.py b/src/so_vits_svc_fork/preprocessing/preprocess_speaker_diarization.py\n--- a/src/so_vits_svc_fork/preprocessing/preprocess_speaker_diarization.py\n+++ b/src/so_vits_svc_fork/preprocessing/preprocess_speaker_diarization.py\n@@ -52,7 +52,7 @@\n speaker_count[speaker] += 1\n audio_cut = audio[int(segment.start * sr) : int(segment.end * sr)]\n sf.write(\n- (output_dir / f\"{speaker}_{speaker_count[speaker]}.wav\"),\n+ (output_dir / f\"{speaker}_{speaker_count[speaker]:04d}.wav\"),\n audio_cut,\n sr,\n )\n", "issue": "Ensure proper sorting by name for output files\n### Is your feature request related to a problem? Please describe.\n\nTo enhance readability and ensure proper sorting by name, the numeric part of output file names should have a fixed width. This can be achieved by adding leading zeros to the numeric part, with four digits likely being sufficient.\n\n### Describe alternatives you've considered\n\nI don't have any.\n\n### Additional context\n\n_No response_\n\n### Code of Conduct\n\n- [X] I agree to follow this project's Code of Conduct\n\n### Are you willing to resolve this issue by submitting a Pull Request?\n\n- [X] Yes, I have the time, and I know how to start.\n- [ ] Yes, I have the time, but I don't know how to start. I would need guidance.\n- [ ] No, I don't have the time, although I believe I could do it if I had the time...\n- [ ] No, I don't have the time and I wouldn't even know how to start.\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom collections import defaultdict\nfrom logging import getLogger\nfrom pathlib import Path\n\nimport librosa\nimport soundfile as sf\nimport torch\nfrom joblib import Parallel, delayed\nfrom pyannote.audio import Pipeline\nfrom tqdm import tqdm\nfrom tqdm_joblib import tqdm_joblib\n\nLOG = getLogger(__name__)\n\n\ndef _process_one(\n input_path: Path,\n output_dir: Path,\n sr: int,\n *,\n min_speakers: int = 1,\n max_speakers: int = 1,\n huggingface_token: str | None = None,\n) -> None:\n try:\n audio, sr = librosa.load(input_path, sr=sr, mono=True)\n except Exception as e:\n LOG.warning(f\"Failed to read {input_path}: {e}\")\n return\n pipeline = Pipeline.from_pretrained(\n \"pyannote/speaker-diarization\", use_auth_token=huggingface_token\n )\n if pipeline is None:\n raise ValueError(\"Failed to load pipeline\")\n pipeline = pipeline.to(torch.device(\"cuda\"))\n LOG.info(f\"Processing {input_path}. 
This may take a while...\")\n diarization = pipeline(\n input_path, min_speakers=min_speakers, max_speakers=max_speakers\n )\n\n LOG.info(f\"Found {len(diarization)} tracks, writing to {output_dir}\")\n speaker_count = defaultdict(int)\n\n output_dir.mkdir(parents=True, exist_ok=True)\n for segment, track, speaker in tqdm(\n list(diarization.itertracks(yield_label=True)), desc=f\"Writing {input_path}\"\n ):\n if segment.end - segment.start < 1:\n continue\n speaker_count[speaker] += 1\n audio_cut = audio[int(segment.start * sr) : int(segment.end * sr)]\n sf.write(\n (output_dir / f\"{speaker}_{speaker_count[speaker]}.wav\"),\n audio_cut,\n sr,\n )\n\n LOG.info(f\"Speaker count: {speaker_count}\")\n\n\ndef preprocess_speaker_diarization(\n input_dir: Path | str,\n output_dir: Path | str,\n sr: int,\n *,\n min_speakers: int = 1,\n max_speakers: int = 1,\n huggingface_token: str | None = None,\n n_jobs: int = -1,\n) -> None:\n if huggingface_token is not None and not huggingface_token.startswith(\"hf_\"):\n LOG.warning(\"Huggingface token probably should start with hf_\")\n if not torch.cuda.is_available():\n LOG.warning(\"CUDA is not available. This will be extremely slow.\")\n input_dir = Path(input_dir)\n output_dir = Path(output_dir)\n input_dir.mkdir(parents=True, exist_ok=True)\n output_dir.mkdir(parents=True, exist_ok=True)\n input_paths = list(input_dir.rglob(\"*.*\"))\n with tqdm_joblib(desc=\"Preprocessing speaker diarization\", total=len(input_paths)):\n Parallel(n_jobs=n_jobs)(\n delayed(_process_one)(\n input_path,\n output_dir / input_path.relative_to(input_dir).parent / input_path.stem,\n sr,\n max_speakers=max_speakers,\n min_speakers=min_speakers,\n huggingface_token=huggingface_token,\n )\n for input_path in input_paths\n )\n", "path": "src/so_vits_svc_fork/preprocessing/preprocess_speaker_diarization.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom collections import defaultdict\nfrom logging import getLogger\nfrom pathlib import Path\n\nimport librosa\nimport soundfile as sf\nimport torch\nfrom joblib import Parallel, delayed\nfrom pyannote.audio import Pipeline\nfrom tqdm import tqdm\nfrom tqdm_joblib import tqdm_joblib\n\nLOG = getLogger(__name__)\n\n\ndef _process_one(\n input_path: Path,\n output_dir: Path,\n sr: int,\n *,\n min_speakers: int = 1,\n max_speakers: int = 1,\n huggingface_token: str | None = None,\n) -> None:\n try:\n audio, sr = librosa.load(input_path, sr=sr, mono=True)\n except Exception as e:\n LOG.warning(f\"Failed to read {input_path}: {e}\")\n return\n pipeline = Pipeline.from_pretrained(\n \"pyannote/speaker-diarization\", use_auth_token=huggingface_token\n )\n if pipeline is None:\n raise ValueError(\"Failed to load pipeline\")\n pipeline = pipeline.to(torch.device(\"cuda\"))\n LOG.info(f\"Processing {input_path}. 
This may take a while...\")\n diarization = pipeline(\n input_path, min_speakers=min_speakers, max_speakers=max_speakers\n )\n\n LOG.info(f\"Found {len(diarization)} tracks, writing to {output_dir}\")\n speaker_count = defaultdict(int)\n\n output_dir.mkdir(parents=True, exist_ok=True)\n for segment, track, speaker in tqdm(\n list(diarization.itertracks(yield_label=True)), desc=f\"Writing {input_path}\"\n ):\n if segment.end - segment.start < 1:\n continue\n speaker_count[speaker] += 1\n audio_cut = audio[int(segment.start * sr) : int(segment.end * sr)]\n sf.write(\n (output_dir / f\"{speaker}_{speaker_count[speaker]:04d}.wav\"),\n audio_cut,\n sr,\n )\n\n LOG.info(f\"Speaker count: {speaker_count}\")\n\n\ndef preprocess_speaker_diarization(\n input_dir: Path | str,\n output_dir: Path | str,\n sr: int,\n *,\n min_speakers: int = 1,\n max_speakers: int = 1,\n huggingface_token: str | None = None,\n n_jobs: int = -1,\n) -> None:\n if huggingface_token is not None and not huggingface_token.startswith(\"hf_\"):\n LOG.warning(\"Huggingface token probably should start with hf_\")\n if not torch.cuda.is_available():\n LOG.warning(\"CUDA is not available. This will be extremely slow.\")\n input_dir = Path(input_dir)\n output_dir = Path(output_dir)\n input_dir.mkdir(parents=True, exist_ok=True)\n output_dir.mkdir(parents=True, exist_ok=True)\n input_paths = list(input_dir.rglob(\"*.*\"))\n with tqdm_joblib(desc=\"Preprocessing speaker diarization\", total=len(input_paths)):\n Parallel(n_jobs=n_jobs)(\n delayed(_process_one)(\n input_path,\n output_dir / input_path.relative_to(input_dir).parent / input_path.stem,\n sr,\n max_speakers=max_speakers,\n min_speakers=min_speakers,\n huggingface_token=huggingface_token,\n )\n for input_path in input_paths\n )\n", "path": "src/so_vits_svc_fork/preprocessing/preprocess_speaker_diarization.py"}]}
| 1,388 | 176 |
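The golden diff in the row above fixes sorting by zero-padding the per-speaker counter with Python's `:04d` format spec. A minimal standalone sketch of the same idea, outside the original preprocessing script (the speaker labels and output directory here are hypothetical; only the four-digit width comes from the record):

```python
from collections import defaultdict
from pathlib import Path

speaker_count = defaultdict(int)
output_dir = Path("output")  # hypothetical directory, for illustration only

for speaker in ["SPEAKER_00", "SPEAKER_01", "SPEAKER_00"]:
    speaker_count[speaker] += 1
    # ":04d" left-pads the counter with zeros so lexicographic and numeric
    # order agree: SPEAKER_00_0002.wav sorts before SPEAKER_00_0010.wav.
    print(output_dir / f"{speaker}_{speaker_count[speaker]:04d}.wav")
```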
gh_patches_debug_30960
|
rasdani/github-patches
|
git_diff
|
spack__spack-7852
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PMIx build fails on aarch64
PMIx up through version 2.1.0 has a bug in its test code that causes problems on strict alignment architectures like aarch64. (The issue is explained in this commit: pmix/pmix@3a8bc4503a04158e3be072c92eb0253c6c77e534) The fix is included in the 2.1.1 release. The issue is exacerbated by building with optimizations turned on (which Spack does by default); it doesn't seem to happen when compiling with ```-O0```.
Since the issue is in test code, the work-around is to just not build that code.
### Expected Result
The install should work.
### Actual Result
The install fails attempting to link pmix_test.
### Steps to reproduce the issue
On an aarch64 machine:
```console
$ spack install -j 56 pmix %[email protected]
```
### Information on your system
1. HPE (pre-production) Apache servers (w/ Cavium ThunderX2 CPU's)
1. RHEL 7.4
1. GCC 7.3.0 (built by Spack) and GCC 4.8.5 (RHEL 7 default version)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `var/spack/repos/builtin/packages/pmix/package.py`
Content:
```
1 ##############################################################################
2 # Copyright (c) 2013-2018, Lawrence Livermore National Security, LLC.
3 # Produced at the Lawrence Livermore National Laboratory.
4 #
5 # This file is part of Spack.
6 # Created by Todd Gamblin, [email protected], All rights reserved.
7 # LLNL-CODE-647188
8 #
9 # For details, see https://github.com/spack/spack
10 # Please also see the NOTICE and LICENSE files for our notice and the LGPL.
11 #
12 # This program is free software; you can redistribute it and/or modify
13 # it under the terms of the GNU Lesser General Public License (as
14 # published by the Free Software Foundation) version 2.1, February 1999.
15 #
16 # This program is distributed in the hope that it will be useful, but
17 # WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
18 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
19 # conditions of the GNU Lesser General Public License for more details.
20 #
21 # You should have received a copy of the GNU Lesser General Public
22 # License along with this program; if not, write to the Free Software
23 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
24 ##############################################################################
25
26 from spack import *
27
28
29 class Pmix(AutotoolsPackage):
30 """The Process Management Interface (PMI) has been used for quite some time
31 as a means of exchanging wireup information needed for interprocess
32 communication. Two versions (PMI-1 and PMI-2) have been released as part
33 of the MPICH effort. While PMI-2 demonstrates better scaling properties
34 than its PMI-1 predecessor, attaining rapid launch and wireup of the
35 roughly 1M processes executing across 100k nodes expected for exascale
36 operations remains challenging. PMI Exascale (PMIx) represents an
37 attempt to resolve these questions by providing an extended version
38 of the PMI definitions specifically designed to support clusters up
39 to and including exascale sizes. The overall objective of the project
40 is not to branch the existing definitions - in fact, PMIx fully
41 supports both of the existing PMI-1 and PMI-2 APIs - but rather to
42 (a) augment and extend those APIs to eliminate some current restrictions
43 that impact scalability, (b) establish a standards-like body for
44 maintaining the definitions, and (c) provide a reference implementation
45 of the PMIx standard that demonstrates the desired level of
46 scalability."""
47
48 homepage = "https://pmix.github.io/pmix"
49 url = "https://github.com/pmix/pmix/releases/download/v2.0.1/pmix-2.0.1.tar.bz2"
50
51 version('2.1.0', 'fc97513b601d78fe7c6bb20c6a21df3c')
52 version('2.0.2', 'e3ed1deed87c84f9b43da2621c6ad689')
53 version('2.0.1', 'ba3193b485843516e6b4e8641e443b1e')
54 version('2.0.0', '3e047c2ea0ba8ee9925ed92b205fd92e')
55 version('1.2.4', '242a812e206e7c5948f1f5c9688eb2a7')
56 version('1.2.3', '102b1cc650018b62348b45d572b158e9')
57 version('1.2.2', 'd85c8fd437bd88f984549425ad369e9f')
58 version('1.2.1', 'f090f524681c52001ea2db3b0285596f')
59 version('1.2.0', '6a42472d5a32e1c31ce5da19d50fc21a')
60
61 depends_on('libevent')
62
63 def configure_args(self):
64 spec = self.spec
65 config_args = [
66 '--enable-shared',
67 '--enable-static'
68 ]
69
70 # external libevent support (needed to keep Open MPI happy)
71 config_args.append(
72 '--with-libevent={0}'.format(spec['libevent'].prefix))
73
74 return config_args
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/var/spack/repos/builtin/packages/pmix/package.py b/var/spack/repos/builtin/packages/pmix/package.py
--- a/var/spack/repos/builtin/packages/pmix/package.py
+++ b/var/spack/repos/builtin/packages/pmix/package.py
@@ -24,6 +24,7 @@
##############################################################################
from spack import *
+import spack.architecture
class Pmix(AutotoolsPackage):
@@ -48,6 +49,7 @@
homepage = "https://pmix.github.io/pmix"
url = "https://github.com/pmix/pmix/releases/download/v2.0.1/pmix-2.0.1.tar.bz2"
+ version('2.1.1', 'f9f109421661b757245d5e0bd44a38b3')
version('2.1.0', 'fc97513b601d78fe7c6bb20c6a21df3c')
version('2.0.2', 'e3ed1deed87c84f9b43da2621c6ad689')
version('2.0.1', 'ba3193b485843516e6b4e8641e443b1e')
@@ -61,6 +63,7 @@
depends_on('libevent')
def configure_args(self):
+
spec = self.spec
config_args = [
'--enable-shared',
@@ -71,4 +74,11 @@
config_args.append(
'--with-libevent={0}'.format(spec['libevent'].prefix))
+ # Versions < 2.1.1 have a bug in the test code that *sometimes*
+ # causes problems on strict alignment architectures such as
+ # aarch64. Work-around is to just not build the test code.
+ if 'aarch64' in spack.architecture.sys_type() and \
+ self.spec.version < Version('2.1.1'):
+ config_args.append('--without-tests-examples')
+
return config_args
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/pmix/package.py b/var/spack/repos/builtin/packages/pmix/package.py\n--- a/var/spack/repos/builtin/packages/pmix/package.py\n+++ b/var/spack/repos/builtin/packages/pmix/package.py\n@@ -24,6 +24,7 @@\n ##############################################################################\n \n from spack import *\n+import spack.architecture\n \n \n class Pmix(AutotoolsPackage):\n@@ -48,6 +49,7 @@\n homepage = \"https://pmix.github.io/pmix\"\n url = \"https://github.com/pmix/pmix/releases/download/v2.0.1/pmix-2.0.1.tar.bz2\"\n \n+ version('2.1.1', 'f9f109421661b757245d5e0bd44a38b3')\n version('2.1.0', 'fc97513b601d78fe7c6bb20c6a21df3c')\n version('2.0.2', 'e3ed1deed87c84f9b43da2621c6ad689')\n version('2.0.1', 'ba3193b485843516e6b4e8641e443b1e')\n@@ -61,6 +63,7 @@\n depends_on('libevent')\n \n def configure_args(self):\n+\n spec = self.spec\n config_args = [\n '--enable-shared',\n@@ -71,4 +74,11 @@\n config_args.append(\n '--with-libevent={0}'.format(spec['libevent'].prefix))\n \n+ # Versions < 2.1.1 have a bug in the test code that *sometimes*\n+ # causes problems on strict alignment architectures such as\n+ # aarch64. Work-around is to just not build the test code.\n+ if 'aarch64' in spack.architecture.sys_type() and \\\n+ self.spec.version < Version('2.1.1'):\n+ config_args.append('--without-tests-examples')\n+\n return config_args\n", "issue": "PMIx build fails on aarch64\n\r\nPMIx up through version 2.1.0 has a bug in its test code that causes problems on strict alignment architectures like aarch64. (The issue is explained in this commit: pmix/pmix@3a8bc4503a04158e3be072c92eb0253c6c77e534) The fix is included in the 2.1.1 release. The issue is exacerbated by building with optimizations turned on (which Spack does by default); it doesn't seem to happen when compiling with ```-O0```.\r\n\r\nSince the issue is in test code, the work-around is to just not build that code.\r\n### Expected Result\r\n\r\nThe install should work.\r\n\r\n### Actual Result\r\n\r\nThe install fails attempting to link pmix_test.\r\n\r\n### Steps to reproduce the issue\r\n\r\nOn an aarch64 machine:\r\n```console\r\n$ spack install -j 56 pmix %[email protected]\r\n```\r\n\r\n### Information on your system\r\n\r\n1. HPE (pre-production) Apache servers (w/ Cavium ThunderX2 CPU's)\r\n1. RHEL 7.4\r\n1. GCC 7.3.0 (built by Spack) and GCC 4.8.5 (RHEL 7 default version)\r\n\n", "before_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2018, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/spack/spack\n# Please also see the NOTICE and LICENSE files for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\n\nfrom spack import *\n\n\nclass Pmix(AutotoolsPackage):\n \"\"\"The Process Management Interface (PMI) has been used for quite some time\n as a means of exchanging wireup information needed for interprocess\n communication. Two versions (PMI-1 and PMI-2) have been released as part\n of the MPICH effort. While PMI-2 demonstrates better scaling properties\n than its PMI-1 predecessor, attaining rapid launch and wireup of the\n roughly 1M processes executing across 100k nodes expected for exascale\n operations remains challenging. PMI Exascale (PMIx) represents an\n attempt to resolve these questions by providing an extended version\n of the PMI definitions specifically designed to support clusters up\n to and including exascale sizes. The overall objective of the project\n is not to branch the existing definitions - in fact, PMIx fully\n supports both of the existing PMI-1 and PMI-2 APIs - but rather to\n (a) augment and extend those APIs to eliminate some current restrictions\n that impact scalability, (b) establish a standards-like body for\n maintaining the definitions, and (c) provide a reference implementation\n of the PMIx standard that demonstrates the desired level of\n scalability.\"\"\"\n\n homepage = \"https://pmix.github.io/pmix\"\n url = \"https://github.com/pmix/pmix/releases/download/v2.0.1/pmix-2.0.1.tar.bz2\"\n\n version('2.1.0', 'fc97513b601d78fe7c6bb20c6a21df3c')\n version('2.0.2', 'e3ed1deed87c84f9b43da2621c6ad689')\n version('2.0.1', 'ba3193b485843516e6b4e8641e443b1e')\n version('2.0.0', '3e047c2ea0ba8ee9925ed92b205fd92e')\n version('1.2.4', '242a812e206e7c5948f1f5c9688eb2a7')\n version('1.2.3', '102b1cc650018b62348b45d572b158e9')\n version('1.2.2', 'd85c8fd437bd88f984549425ad369e9f')\n version('1.2.1', 'f090f524681c52001ea2db3b0285596f')\n version('1.2.0', '6a42472d5a32e1c31ce5da19d50fc21a')\n\n depends_on('libevent')\n\n def configure_args(self):\n spec = self.spec\n config_args = [\n '--enable-shared',\n '--enable-static'\n ]\n\n # external libevent support (needed to keep Open MPI happy)\n config_args.append(\n '--with-libevent={0}'.format(spec['libevent'].prefix))\n\n return config_args\n", "path": "var/spack/repos/builtin/packages/pmix/package.py"}], "after_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2018, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/spack/spack\n# Please also see the NOTICE and LICENSE files for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\n\nfrom spack import *\nimport spack.architecture\n\n\nclass Pmix(AutotoolsPackage):\n \"\"\"The Process Management Interface (PMI) has been used for quite some time\n as a means of exchanging wireup information needed for interprocess\n communication. Two versions (PMI-1 and PMI-2) have been released as part\n of the MPICH effort. While PMI-2 demonstrates better scaling properties\n than its PMI-1 predecessor, attaining rapid launch and wireup of the\n roughly 1M processes executing across 100k nodes expected for exascale\n operations remains challenging. PMI Exascale (PMIx) represents an\n attempt to resolve these questions by providing an extended version\n of the PMI definitions specifically designed to support clusters up\n to and including exascale sizes. The overall objective of the project\n is not to branch the existing definitions - in fact, PMIx fully\n supports both of the existing PMI-1 and PMI-2 APIs - but rather to\n (a) augment and extend those APIs to eliminate some current restrictions\n that impact scalability, (b) establish a standards-like body for\n maintaining the definitions, and (c) provide a reference implementation\n of the PMIx standard that demonstrates the desired level of\n scalability.\"\"\"\n\n homepage = \"https://pmix.github.io/pmix\"\n url = \"https://github.com/pmix/pmix/releases/download/v2.0.1/pmix-2.0.1.tar.bz2\"\n\n version('2.1.1', 'f9f109421661b757245d5e0bd44a38b3')\n version('2.1.0', 'fc97513b601d78fe7c6bb20c6a21df3c')\n version('2.0.2', 'e3ed1deed87c84f9b43da2621c6ad689')\n version('2.0.1', 'ba3193b485843516e6b4e8641e443b1e')\n version('2.0.0', '3e047c2ea0ba8ee9925ed92b205fd92e')\n version('1.2.4', '242a812e206e7c5948f1f5c9688eb2a7')\n version('1.2.3', '102b1cc650018b62348b45d572b158e9')\n version('1.2.2', 'd85c8fd437bd88f984549425ad369e9f')\n version('1.2.1', 'f090f524681c52001ea2db3b0285596f')\n version('1.2.0', '6a42472d5a32e1c31ce5da19d50fc21a')\n\n depends_on('libevent')\n\n def configure_args(self):\n\n spec = self.spec\n config_args = [\n '--enable-shared',\n '--enable-static'\n ]\n\n # external libevent support (needed to keep Open MPI happy)\n config_args.append(\n '--with-libevent={0}'.format(spec['libevent'].prefix))\n\n # Versions < 2.1.1 have a bug in the test code that *sometimes*\n # causes problems on strict alignment architectures such as\n # aarch64. Work-around is to just not build the test code.\n if 'aarch64' in spack.architecture.sys_type() and \\\n self.spec.version < Version('2.1.1'):\n config_args.append('--without-tests-examples')\n\n return config_args\n", "path": "var/spack/repos/builtin/packages/pmix/package.py"}]}
| 1,769 | 500 |
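The patch in the row above gates a configure flag on the target architecture and the package version inside a Spack recipe. The shape of that check can be shown without Spack itself; in the sketch below `sys_type()` is a stand-in for `spack.architecture.sys_type()` and the tuple comparison mirrors the `Version('2.1.1')` cutoff from the golden diff (everything else is illustrative):

```python
import platform

def sys_type() -> str:
    # Stand-in for spack.architecture.sys_type(), which returns a string such as
    # "linux-rhel7-aarch64" on machines like the ones described in the issue.
    return f"linux-{platform.machine()}"

def configure_args(version: tuple) -> list:
    args = ["--enable-shared", "--enable-static"]
    # Versions < 2.1.1 ship test code that breaks on strict-alignment targets,
    # so skip building tests/examples there (mirrors the golden diff).
    if "aarch64" in sys_type() and version < (2, 1, 1):
        args.append("--without-tests-examples")
    return args

print(configure_args((2, 1, 0)))
print(configure_args((2, 1, 1)))
```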
gh_patches_debug_2385
|
rasdani/github-patches
|
git_diff
|
ansible__awx-13645
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Websocket not working at non-root path
### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
### Summary
Changes from #11342 and #652 are not full
### AWX version
21.0.0
### Select the relevant components
- [X] UI
- [ ] API
- [ ] Docs
### Installation method
kubernetes
### Modifications
no
### Ansible version
_No response_
### Operating system
_No response_
### Web browser
_No response_
### Steps to reproduce
Deploy AWX with custom `ingress_path: /awx`
### Expected results
websocket should work
### Actual results
`2022-05-17 08:46:41,031 ERROR [-] daphne.ws_protocol [Failure instance: Traceback: <class 'ValueError'>: No route found for path 'awx/websocket/'.
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/autobahn/websocket/protocol.py:2841:processHandshake
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/txaio/tx.py:366:as_future
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/twisted/internet/defer.py:151:maybeDeferred
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/daphne/ws_protocol.py:72:onConnect
--- <exception caught here> ---
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/twisted/internet/defer.py:151:maybeDeferred
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/daphne/server.py:201:create_application
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/routing.py:54:__call__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:47:__call__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:145:__call__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:169:__init__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/middleware.py:31:__call__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/routing.py:150:__call__
]
2022-05-17 08:46:41,031 ERROR [Failure instance: Traceback: <class 'ValueError'>: No route found for path 'awx/websocket/'.
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/autobahn/websocket/protocol.py:2841:processHandshake
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/txaio/tx.py:366:as_future
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/twisted/internet/defer.py:151:maybeDeferred
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/daphne/ws_protocol.py:72:onConnect
--- <exception caught here> ---
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/twisted/internet/defer.py:151:maybeDeferred
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/daphne/server.py:201:create_application
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/routing.py:54:__call__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:47:__call__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:145:__call__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:169:__init__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/middleware.py:31:__call__
/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/routing.py:150:__call__`
### Additional information
It seems that issue is in https://github.com/ansible/awx/blob/48b016802c517ff04d1cff4c43e64f17bb77a7a8/awx/main/routing.py
```
websocket_urlpatterns = [
re_path(r'websocket/$', consumers.EventConsumer),
re_path(r'websocket/broadcast/$', consumers.BroadcastConsumer),
]
```
From https://docs.djangoproject.com/en/4.0/ref/urls/:
When a route ends with $ the whole requested URL, matching against path_info, must match the regular expression pattern (re.fullmatch() is used).
Replacing with
```
websocket_urlpatterns = [
re_path(r'websocket/', consumers.EventConsumer),
re_path(r'websocket/broadcast/', consumers.BroadcastConsumer),
]
```
solves the issue
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awx/main/routing.py`
Content:
```
1 import redis
2 import logging
3
4 from django.conf import settings
5 from django.urls import re_path
6
7 from channels.auth import AuthMiddlewareStack
8 from channels.routing import ProtocolTypeRouter, URLRouter
9
10 from . import consumers
11
12
13 logger = logging.getLogger('awx.main.routing')
14
15
16 class AWXProtocolTypeRouter(ProtocolTypeRouter):
17 def __init__(self, *args, **kwargs):
18 try:
19 r = redis.Redis.from_url(settings.BROKER_URL)
20 for k in r.scan_iter('asgi:*', 500):
21 logger.debug(f"cleaning up Redis key {k}")
22 r.delete(k)
23 except redis.exceptions.RedisError as e:
24 logger.warning("encountered an error communicating with redis.")
25 raise e
26 super().__init__(*args, **kwargs)
27
28
29 websocket_urlpatterns = [
30 re_path(r'websocket/', consumers.EventConsumer.as_asgi()),
31 re_path(r'websocket/broadcast/', consumers.BroadcastConsumer.as_asgi()),
32 ]
33
34 application = AWXProtocolTypeRouter(
35 {
36 'websocket': AuthMiddlewareStack(URLRouter(websocket_urlpatterns)),
37 }
38 )
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/awx/main/routing.py b/awx/main/routing.py
--- a/awx/main/routing.py
+++ b/awx/main/routing.py
@@ -27,8 +27,8 @@
websocket_urlpatterns = [
- re_path(r'websocket/', consumers.EventConsumer.as_asgi()),
- re_path(r'websocket/broadcast/', consumers.BroadcastConsumer.as_asgi()),
+ re_path(r'websocket/$', consumers.EventConsumer.as_asgi()),
+ re_path(r'websocket/broadcast/$', consumers.BroadcastConsumer.as_asgi()),
]
application = AWXProtocolTypeRouter(
|
{"golden_diff": "diff --git a/awx/main/routing.py b/awx/main/routing.py\n--- a/awx/main/routing.py\n+++ b/awx/main/routing.py\n@@ -27,8 +27,8 @@\n \n \n websocket_urlpatterns = [\n- re_path(r'websocket/', consumers.EventConsumer.as_asgi()),\n- re_path(r'websocket/broadcast/', consumers.BroadcastConsumer.as_asgi()),\n+ re_path(r'websocket/$', consumers.EventConsumer.as_asgi()),\n+ re_path(r'websocket/broadcast/$', consumers.BroadcastConsumer.as_asgi()),\n ]\n \n application = AWXProtocolTypeRouter(\n", "issue": "Websocket not working at non-root path\n### Please confirm the following\n\n- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).\n- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.\n- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.\n\n### Summary\n\nChanges from #11342 and #652 are not full\n\n### AWX version\n\n21.0.0\n\n### Select the relevant components\n\n- [X] UI\n- [ ] API\n- [ ] Docs\n\n### Installation method\n\nkubernetes\n\n### Modifications\n\nno\n\n### Ansible version\n\n_No response_\n\n### Operating system\n\n_No response_\n\n### Web browser\n\n_No response_\n\n### Steps to reproduce\n\nDeploy AWX with custom `ingress_path: /awx`\n\n### Expected results\n\nwebsocket should work\n\n### Actual results\n\n`2022-05-17 08:46:41,031 ERROR [-] daphne.ws_protocol [Failure instance: Traceback: <class 'ValueError'>: No route found for path 'awx/websocket/'.\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/autobahn/websocket/protocol.py:2841:processHandshake\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/txaio/tx.py:366:as_future\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/twisted/internet/defer.py:151:maybeDeferred\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/daphne/ws_protocol.py:72:onConnect\r\n--- <exception caught here> ---\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/twisted/internet/defer.py:151:maybeDeferred\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/daphne/server.py:201:create_application\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/routing.py:54:__call__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:47:__call__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:145:__call__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:169:__init__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/middleware.py:31:__call__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/routing.py:150:__call__\r\n]\r\n2022-05-17 08:46:41,031 ERROR [Failure instance: Traceback: <class 'ValueError'>: No route found for path 'awx/websocket/'.\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/autobahn/websocket/protocol.py:2841:processHandshake\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/txaio/tx.py:366:as_future\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/twisted/internet/defer.py:151:maybeDeferred\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/daphne/ws_protocol.py:72:onConnect\r\n--- <exception caught here> 
---\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/twisted/internet/defer.py:151:maybeDeferred\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/daphne/server.py:201:create_application\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/routing.py:54:__call__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:47:__call__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:145:__call__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/sessions.py:169:__init__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/middleware.py:31:__call__\r\n/var/lib/awx/venv/awx/lib64/python3.9/site-packages/channels/routing.py:150:__call__`\n\n### Additional information\n\nIt seems that issue is in https://github.com/ansible/awx/blob/48b016802c517ff04d1cff4c43e64f17bb77a7a8/awx/main/routing.py\r\n\r\n```\r\nwebsocket_urlpatterns = [\r\n re_path(r'websocket/$', consumers.EventConsumer),\r\n re_path(r'websocket/broadcast/$', consumers.BroadcastConsumer),\r\n]\r\n```\r\nFrom https://docs.djangoproject.com/en/4.0/ref/urls/:\r\n\r\nWhen a route ends with $ the whole requested URL, matching against path_info, must match the regular expression pattern (re.fullmatch() is used).\r\n\r\nReplacing with\r\n\r\n```\r\nwebsocket_urlpatterns = [\r\n re_path(r'websocket/', consumers.EventConsumer),\r\n re_path(r'websocket/broadcast/', consumers.BroadcastConsumer),\r\n]\r\n```\r\n\r\nsolves the issue\n", "before_files": [{"content": "import redis\nimport logging\n\nfrom django.conf import settings\nfrom django.urls import re_path\n\nfrom channels.auth import AuthMiddlewareStack\nfrom channels.routing import ProtocolTypeRouter, URLRouter\n\nfrom . import consumers\n\n\nlogger = logging.getLogger('awx.main.routing')\n\n\nclass AWXProtocolTypeRouter(ProtocolTypeRouter):\n def __init__(self, *args, **kwargs):\n try:\n r = redis.Redis.from_url(settings.BROKER_URL)\n for k in r.scan_iter('asgi:*', 500):\n logger.debug(f\"cleaning up Redis key {k}\")\n r.delete(k)\n except redis.exceptions.RedisError as e:\n logger.warning(\"encountered an error communicating with redis.\")\n raise e\n super().__init__(*args, **kwargs)\n\n\nwebsocket_urlpatterns = [\n re_path(r'websocket/', consumers.EventConsumer.as_asgi()),\n re_path(r'websocket/broadcast/', consumers.BroadcastConsumer.as_asgi()),\n]\n\napplication = AWXProtocolTypeRouter(\n {\n 'websocket': AuthMiddlewareStack(URLRouter(websocket_urlpatterns)),\n }\n)\n", "path": "awx/main/routing.py"}], "after_files": [{"content": "import redis\nimport logging\n\nfrom django.conf import settings\nfrom django.urls import re_path\n\nfrom channels.auth import AuthMiddlewareStack\nfrom channels.routing import ProtocolTypeRouter, URLRouter\n\nfrom . 
import consumers\n\n\nlogger = logging.getLogger('awx.main.routing')\n\n\nclass AWXProtocolTypeRouter(ProtocolTypeRouter):\n def __init__(self, *args, **kwargs):\n try:\n r = redis.Redis.from_url(settings.BROKER_URL)\n for k in r.scan_iter('asgi:*', 500):\n logger.debug(f\"cleaning up Redis key {k}\")\n r.delete(k)\n except redis.exceptions.RedisError as e:\n logger.warning(\"encountered an error communicating with redis.\")\n raise e\n super().__init__(*args, **kwargs)\n\n\nwebsocket_urlpatterns = [\n re_path(r'websocket/$', consumers.EventConsumer.as_asgi()),\n re_path(r'websocket/broadcast/$', consumers.BroadcastConsumer.as_asgi()),\n]\n\napplication = AWXProtocolTypeRouter(\n {\n 'websocket': AuthMiddlewareStack(URLRouter(websocket_urlpatterns)),\n }\n)\n", "path": "awx/main/routing.py"}]}
| 1,911 | 132 |
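The row above turns on how a regex route treats a path that still carries an ingress prefix: the issue text argues for dropping the `$` anchor while the golden diff adds it back, and which one works depends on whether the routing layer strips the prefix and which matching function it applies. The plain-`re` sketch below only illustrates how the anchor changes what matches; it does not settle what Django or Channels do with the prefix:

```python
import re

path = "awx/websocket/"  # path as seen behind an ingress prefix of /awx

# Unanchored pattern with a substring search: "websocket/" is found anywhere.
print(bool(re.search(r"websocket/", path)))               # True

# Anchored pattern with a full match: the leading "awx/" makes it fail
# unless the prefix has been stripped off first.
print(bool(re.fullmatch(r"websocket/$", path)))            # False
print(bool(re.fullmatch(r"websocket/$", "websocket/")))    # True
```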
gh_patches_debug_6308
|
rasdani/github-patches
|
git_diff
|
DataBiosphere__toil-2583
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Env value of TOIL_DOCKER_REGISTRY is not honored in the docker image
When a docker image is build with a custom `TOIL_DOCKER_REGISTRY`, the built docker image still looks at `quay.io/ucsc_cgl` for pulling related images (e.g. `mtail`, `grafana`, etc). The fix is to put the actual registry in `version_template.py`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `version_template.py`
Content:
```
1 # Copyright (C) 2015-2018 Regents of the University of California
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """This script is a template for src/toil/version.py. Running it without arguments echoes all
16 globals, i.e. module attributes. Constant assignments will be echoed verbatim while callables
17 will be invoked and their result echoed as an assignment using the function name as the left-hand
18 side and the return value of the function as right-hand side. To prevent a module attribute from
19 being echoed, start or end the attribute name with an underscore. To print the value of a single
20 symbol, pass the name of that attribute to the script as a command line argument. You can also
21 import the expand_ function and invoke it directly with either no or exactly one argument."""
22
23 # Note to maintainers:
24 #
25 # - don't import at module level unless you want the imported value to be included in the output
26 # - only import from the Python standard run-time library (you can't have any dependencies)
27
28
29 baseVersion = '3.20.0a1'
30 cgcloudVersion = '1.6.0a1.dev393'
31 dockerRegistry = 'quay.io/ucsc_cgl'
32 dockerName = 'toil'
33
34
35 def version():
36 """
37 A version identifier that includes the full-legth commit SHA1 and an optional suffix to
38 indicate that the working copy is dirty.
39 """
40 return _version()
41
42
43 def shortVersion():
44 """
45 A version identifier that includes the abbreviated commit SHA1 and an optional suffix to
46 indicate that the working copy is dirty.
47 """
48 return _version(shorten=True)
49
50
51 def _version(shorten=False):
52 return '-'.join(filter(None, [distVersion(),
53 currentCommit()[:7 if shorten else None],
54 ('dirty' if dirty() else None)]))
55
56
57 def distVersion():
58 """
59 The distribution version identifying a published release on PyPI.
60 """
61 from pkg_resources import parse_version
62 build_number = buildNumber()
63 parsedBaseVersion = parse_version(baseVersion)
64 if isinstance(parsedBaseVersion, tuple):
65 raise RuntimeError("Setuptools version 8.0 or newer required. Update by running "
66 "'pip install setuptools --upgrade'")
67
68 if build_number is not None and parsedBaseVersion.is_prerelease:
69 return baseVersion + '.dev' + build_number
70 else:
71 return baseVersion
72
73
74 def dockerTag():
75 """
76 The primary tag of the Docker image for the appliance. This uniquely identifies the appliance
77 image.
78 """
79 return version()
80
81
82 def dockerShortTag():
83 """
84 A secondary, shortened form of :func:`dockerTag` with which to tag the appliance image for
85 convenience.
86 """
87 return shortVersion()
88
89
90 def dockerMinimalTag():
91 """
92 A minimal tag with which to tag the appliance image for convenience. Does not include
93 information about the git commit or working copy dirtyness.
94 """
95 return distVersion()
96
97
98 def buildNumber():
99 """
100 The Jenkins build number, if defined, else None.
101 """
102 import os
103 return os.getenv('BUILD_NUMBER')
104
105
106 def currentCommit():
107 from subprocess import check_output
108 try:
109 output = check_output('git log --pretty=oneline -n 1 -- $(pwd)', shell=True).decode('utf-8').split()[0]
110 except:
111 # Return this we are not in a git environment.
112 return '000'
113 if isinstance(output, bytes):
114 return output.decode('utf-8')
115 return str(output)
116
117
118 def dirty():
119 from subprocess import call
120 try:
121 return 0 != call('(git diff --exit-code '
122 '&& git diff --cached --exit-code) > /dev/null', shell=True)
123 except:
124 return False # In case the git call fails.
125
126
127 def expand_(name=None):
128 variables = {k: v for k, v in globals().items()
129 if not k.startswith('_') and not k.endswith('_')}
130
131 def resolve(k):
132 v = variables[k]
133 if callable(v):
134 v = v()
135 return v
136
137 if name is None:
138 return ''.join("%s = %s\n" % (k, repr(resolve(k))) for k, v in variables.items())
139 else:
140 return resolve(name)
141
142
143 def _main():
144 import sys
145 sys.stdout.write(expand_(*sys.argv[1:]))
146
147
148 if __name__ == '__main__':
149 _main()
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/version_template.py b/version_template.py
--- a/version_template.py
+++ b/version_template.py
@@ -28,7 +28,6 @@
baseVersion = '3.20.0a1'
cgcloudVersion = '1.6.0a1.dev393'
-dockerRegistry = 'quay.io/ucsc_cgl'
dockerName = 'toil'
@@ -115,6 +114,11 @@
return str(output)
+def dockerRegistry():
+ import os
+ return os.getenv('TOIL_DOCKER_REGISTRY', 'quay.io/ucsc_cgl')
+
+
def dirty():
from subprocess import call
try:
|
{"golden_diff": "diff --git a/version_template.py b/version_template.py\n--- a/version_template.py\n+++ b/version_template.py\n@@ -28,7 +28,6 @@\n \n baseVersion = '3.20.0a1'\n cgcloudVersion = '1.6.0a1.dev393'\n-dockerRegistry = 'quay.io/ucsc_cgl'\n dockerName = 'toil'\n \n \n@@ -115,6 +114,11 @@\n return str(output)\n \n \n+def dockerRegistry():\n+ import os\n+ return os.getenv('TOIL_DOCKER_REGISTRY', 'quay.io/ucsc_cgl')\n+\n+\n def dirty():\n from subprocess import call\n try:\n", "issue": "Env value of TOIL_DOCKER_REGISTRY is not honored in the docker image\nWhen a docker image is build with a custom `TOIL_DOCKER_REGISTRY`, the built docker image still looks at `quay.io/ucsc_cgl` for pulling related images (e.g. `mtail`, `grafana`, etc). The fix is to put the actual registry in `version_template.py`.\n", "before_files": [{"content": "# Copyright (C) 2015-2018 Regents of the University of California\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"This script is a template for src/toil/version.py. Running it without arguments echoes all\nglobals, i.e. module attributes. Constant assignments will be echoed verbatim while callables\nwill be invoked and their result echoed as an assignment using the function name as the left-hand\nside and the return value of the function as right-hand side. To prevent a module attribute from\nbeing echoed, start or end the attribute name with an underscore. To print the value of a single\nsymbol, pass the name of that attribute to the script as a command line argument. You can also\nimport the expand_ function and invoke it directly with either no or exactly one argument.\"\"\"\n\n# Note to maintainers:\n#\n# - don't import at module level unless you want the imported value to be included in the output\n# - only import from the Python standard run-time library (you can't have any dependencies)\n\n\nbaseVersion = '3.20.0a1'\ncgcloudVersion = '1.6.0a1.dev393'\ndockerRegistry = 'quay.io/ucsc_cgl'\ndockerName = 'toil'\n\n\ndef version():\n \"\"\"\n A version identifier that includes the full-legth commit SHA1 and an optional suffix to\n indicate that the working copy is dirty.\n \"\"\"\n return _version()\n\n\ndef shortVersion():\n \"\"\"\n A version identifier that includes the abbreviated commit SHA1 and an optional suffix to\n indicate that the working copy is dirty.\n \"\"\"\n return _version(shorten=True)\n\n\ndef _version(shorten=False):\n return '-'.join(filter(None, [distVersion(),\n currentCommit()[:7 if shorten else None],\n ('dirty' if dirty() else None)]))\n\n\ndef distVersion():\n \"\"\"\n The distribution version identifying a published release on PyPI.\n \"\"\"\n from pkg_resources import parse_version\n build_number = buildNumber()\n parsedBaseVersion = parse_version(baseVersion)\n if isinstance(parsedBaseVersion, tuple):\n raise RuntimeError(\"Setuptools version 8.0 or newer required. 
Update by running \"\n \"'pip install setuptools --upgrade'\")\n\n if build_number is not None and parsedBaseVersion.is_prerelease:\n return baseVersion + '.dev' + build_number\n else:\n return baseVersion\n\n\ndef dockerTag():\n \"\"\"\n The primary tag of the Docker image for the appliance. This uniquely identifies the appliance\n image.\n \"\"\"\n return version()\n\n\ndef dockerShortTag():\n \"\"\"\n A secondary, shortened form of :func:`dockerTag` with which to tag the appliance image for\n convenience.\n \"\"\"\n return shortVersion()\n\n\ndef dockerMinimalTag():\n \"\"\"\n A minimal tag with which to tag the appliance image for convenience. Does not include\n information about the git commit or working copy dirtyness.\n \"\"\"\n return distVersion()\n\n\ndef buildNumber():\n \"\"\"\n The Jenkins build number, if defined, else None.\n \"\"\"\n import os\n return os.getenv('BUILD_NUMBER')\n\n\ndef currentCommit():\n from subprocess import check_output\n try:\n output = check_output('git log --pretty=oneline -n 1 -- $(pwd)', shell=True).decode('utf-8').split()[0]\n except:\n # Return this we are not in a git environment.\n return '000'\n if isinstance(output, bytes):\n return output.decode('utf-8')\n return str(output)\n\n\ndef dirty():\n from subprocess import call\n try:\n return 0 != call('(git diff --exit-code '\n '&& git diff --cached --exit-code) > /dev/null', shell=True)\n except:\n return False # In case the git call fails.\n\n\ndef expand_(name=None):\n variables = {k: v for k, v in globals().items()\n if not k.startswith('_') and not k.endswith('_')}\n\n def resolve(k):\n v = variables[k]\n if callable(v):\n v = v()\n return v\n\n if name is None:\n return ''.join(\"%s = %s\\n\" % (k, repr(resolve(k))) for k, v in variables.items())\n else:\n return resolve(name)\n\n\ndef _main():\n import sys\n sys.stdout.write(expand_(*sys.argv[1:]))\n\n\nif __name__ == '__main__':\n _main()\n", "path": "version_template.py"}], "after_files": [{"content": "# Copyright (C) 2015-2018 Regents of the University of California\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"This script is a template for src/toil/version.py. Running it without arguments echoes all\nglobals, i.e. module attributes. Constant assignments will be echoed verbatim while callables\nwill be invoked and their result echoed as an assignment using the function name as the left-hand\nside and the return value of the function as right-hand side. To prevent a module attribute from\nbeing echoed, start or end the attribute name with an underscore. To print the value of a single\nsymbol, pass the name of that attribute to the script as a command line argument. 
You can also\nimport the expand_ function and invoke it directly with either no or exactly one argument.\"\"\"\n\n# Note to maintainers:\n#\n# - don't import at module level unless you want the imported value to be included in the output\n# - only import from the Python standard run-time library (you can't have any dependencies)\n\n\nbaseVersion = '3.20.0a1'\ncgcloudVersion = '1.6.0a1.dev393'\ndockerName = 'toil'\n\n\ndef version():\n \"\"\"\n A version identifier that includes the full-legth commit SHA1 and an optional suffix to\n indicate that the working copy is dirty.\n \"\"\"\n return _version()\n\n\ndef shortVersion():\n \"\"\"\n A version identifier that includes the abbreviated commit SHA1 and an optional suffix to\n indicate that the working copy is dirty.\n \"\"\"\n return _version(shorten=True)\n\n\ndef _version(shorten=False):\n return '-'.join(filter(None, [distVersion(),\n currentCommit()[:7 if shorten else None],\n ('dirty' if dirty() else None)]))\n\n\ndef distVersion():\n \"\"\"\n The distribution version identifying a published release on PyPI.\n \"\"\"\n from pkg_resources import parse_version\n build_number = buildNumber()\n parsedBaseVersion = parse_version(baseVersion)\n if isinstance(parsedBaseVersion, tuple):\n raise RuntimeError(\"Setuptools version 8.0 or newer required. Update by running \"\n \"'pip install setuptools --upgrade'\")\n\n if build_number is not None and parsedBaseVersion.is_prerelease:\n return baseVersion + '.dev' + build_number\n else:\n return baseVersion\n\n\ndef dockerTag():\n \"\"\"\n The primary tag of the Docker image for the appliance. This uniquely identifies the appliance\n image.\n \"\"\"\n return version()\n\n\ndef dockerShortTag():\n \"\"\"\n A secondary, shortened form of :func:`dockerTag` with which to tag the appliance image for\n convenience.\n \"\"\"\n return shortVersion()\n\n\ndef dockerMinimalTag():\n \"\"\"\n A minimal tag with which to tag the appliance image for convenience. Does not include\n information about the git commit or working copy dirtyness.\n \"\"\"\n return distVersion()\n\n\ndef buildNumber():\n \"\"\"\n The Jenkins build number, if defined, else None.\n \"\"\"\n import os\n return os.getenv('BUILD_NUMBER')\n\n\ndef currentCommit():\n from subprocess import check_output\n try:\n output = check_output('git log --pretty=oneline -n 1 -- $(pwd)', shell=True).decode('utf-8').split()[0]\n except:\n # Return this we are not in a git environment.\n return '000'\n if isinstance(output, bytes):\n return output.decode('utf-8')\n return str(output)\n\n\ndef dockerRegistry():\n import os\n return os.getenv('TOIL_DOCKER_REGISTRY', 'quay.io/ucsc_cgl')\n\n\ndef dirty():\n from subprocess import call\n try:\n return 0 != call('(git diff --exit-code '\n '&& git diff --cached --exit-code) > /dev/null', shell=True)\n except:\n return False # In case the git call fails.\n\n\ndef expand_(name=None):\n variables = {k: v for k, v in globals().items()\n if not k.startswith('_') and not k.endswith('_')}\n\n def resolve(k):\n v = variables[k]\n if callable(v):\n v = v()\n return v\n\n if name is None:\n return ''.join(\"%s = %s\\n\" % (k, repr(resolve(k))) for k, v in variables.items())\n else:\n return resolve(name)\n\n\ndef _main():\n import sys\n sys.stdout.write(expand_(*sys.argv[1:]))\n\n\nif __name__ == '__main__':\n _main()\n", "path": "version_template.py"}]}
| 1,766 | 157 |
gh_patches_debug_21900
|
rasdani/github-patches
|
git_diff
|
fidals__shopelectro-928
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Get rid of pack prices multiplication
PO said that the 1C stock data now provides already-multiplied prices, so the extra multiplication on import is no longer needed.
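For illustration, a rough sketch of what the update module's entry point could look like once the multiplication step is dropped is shown below; the helper names (`find_pack_group`, `update_in_packs`) are taken from the file quoted later in this record, and the exact final shape of the change is an assumption here, not the accepted patch itself.

```python
# Hypothetical sketch only -- not the accepted patch.
# In-pack quantities are still parsed and stored, but prices coming from the
# 1C stock data are used as-is, since they already include the pack multiplier.
def main(*args, **kwargs):
    packs = find_pack_group().tags.all().prefetch_related('products')
    update_in_packs(packs)
    # update_prices(packs) is intentionally gone: no second multiplication.
```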
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shopelectro/management/commands/_update_catalog/update_pack.py`
Content:
```
1 """
2 Update Product.in_pack and prices.
3
4 The update_catalog command always resets product prices to per unit format, so:
5 1. Parse in pack quantity from Tag.name and save it to Product.in_pack
6 2. Multiply product prices by in_pack value and save.
7 """
8 import logging
9
10 from django.conf import settings
11 from django.db import models, transaction
12
13 from catalog.models_expressions import Substring
14
15 from shopelectro.exception import UpdateCatalogException
16 from shopelectro.models import TagQuerySet, TagGroup
17
18 logger = logging.getLogger(__name__)
19 PRICES = ['price', 'purchase_price', 'wholesale_small', 'wholesale_medium', 'wholesale_large']
20
21
22 def find_pack_group() -> TagGroup:
23 try:
24 pack_group = TagGroup.objects.get_pack()
25 except TagGroup.DoesNotExist as error:
26 raise UpdateCatalogException(
27 'Update the PACK_GROUP_UUID django settings variable to set the new relevant UUID. '
28 + str(error)
29 )
30 if settings.PACK_GROUP_NAME.lower() not in pack_group.name.lower():
31 raise UpdateCatalogException(
32 'The pack group name isn\'t matched with the set name:'
33 f' Pack group name: {pack_group.name}\n'
34 f' Set name: {settings.PACK_GROUP_NAME}\n'
35 'Update the PACK_GROUP_NAME django settings variable to set the new relevant name.'
36 )
37
38 return pack_group
39
40
41 def update_in_packs(packs: TagQuerySet):
42 """Parse and save in pack quantity values."""
43 packs = (
44 packs
45 .annotate(
46 in_pack_str=Substring(
47 models.F('name'),
48 models.Value('[0-9]+\+?[0-9]*')))
49 .exclude(in_pack_str__exact='')
50 )
51
52 for pack in packs:
53 in_pack = sum(map(int, pack.in_pack_str.split('+')))
54 pack.products.all().update(in_pack=max(in_pack, 1))
55
56
57 def update_prices(packs: TagQuerySet):
58 """Multiply product prices on in pack quantity."""
59 fields_to_update = {}
60 for price in PRICES:
61 fields_to_update[price] = models.F(price) * models.F('in_pack')
62
63 with transaction.atomic():
64 packs.products().update(**fields_to_update)
65
66
67 def main(*args, **kwargs):
68 packs = find_pack_group().tags.all().prefetch_related('products')
69 update_in_packs(packs)
70 update_prices(packs)
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/shopelectro/management/commands/_update_catalog/update_pack.py b/shopelectro/management/commands/_update_catalog/update_pack.py
--- a/shopelectro/management/commands/_update_catalog/update_pack.py
+++ b/shopelectro/management/commands/_update_catalog/update_pack.py
@@ -8,10 +8,9 @@
import logging
from django.conf import settings
-from django.db import models, transaction
+from django.db import models
from catalog.models_expressions import Substring
-
from shopelectro.exception import UpdateCatalogException
from shopelectro.models import TagQuerySet, TagGroup
@@ -54,17 +53,6 @@
pack.products.all().update(in_pack=max(in_pack, 1))
-def update_prices(packs: TagQuerySet):
- """Multiply product prices on in pack quantity."""
- fields_to_update = {}
- for price in PRICES:
- fields_to_update[price] = models.F(price) * models.F('in_pack')
-
- with transaction.atomic():
- packs.products().update(**fields_to_update)
-
-
def main(*args, **kwargs):
packs = find_pack_group().tags.all().prefetch_related('products')
update_in_packs(packs)
- update_prices(packs)
|
{"golden_diff": "diff --git a/shopelectro/management/commands/_update_catalog/update_pack.py b/shopelectro/management/commands/_update_catalog/update_pack.py\n--- a/shopelectro/management/commands/_update_catalog/update_pack.py\n+++ b/shopelectro/management/commands/_update_catalog/update_pack.py\n@@ -8,10 +8,9 @@\n import logging\n \n from django.conf import settings\n-from django.db import models, transaction\n+from django.db import models\n \n from catalog.models_expressions import Substring\n-\n from shopelectro.exception import UpdateCatalogException\n from shopelectro.models import TagQuerySet, TagGroup\n \n@@ -54,17 +53,6 @@\n pack.products.all().update(in_pack=max(in_pack, 1))\n \n \n-def update_prices(packs: TagQuerySet):\n- \"\"\"Multiply product prices on in pack quantity.\"\"\"\n- fields_to_update = {}\n- for price in PRICES:\n- fields_to_update[price] = models.F(price) * models.F('in_pack')\n-\n- with transaction.atomic():\n- packs.products().update(**fields_to_update)\n-\n-\n def main(*args, **kwargs):\n packs = find_pack_group().tags.all().prefetch_related('products')\n update_in_packs(packs)\n- update_prices(packs)\n", "issue": "Get rid of pack prices multiplication\nPO said that 1C stock now is giving already multiplied prices\n", "before_files": [{"content": "\"\"\"\nUpdate Product.in_pack and prices.\n\nThe update_catalog command always resets product prices to per unit format, so:\n1. Parse in pack quantity from Tag.name and save it to Product.in_pack\n2. Multiply product prices by in_pack value and save.\n\"\"\"\nimport logging\n\nfrom django.conf import settings\nfrom django.db import models, transaction\n\nfrom catalog.models_expressions import Substring\n\nfrom shopelectro.exception import UpdateCatalogException\nfrom shopelectro.models import TagQuerySet, TagGroup\n\nlogger = logging.getLogger(__name__)\nPRICES = ['price', 'purchase_price', 'wholesale_small', 'wholesale_medium', 'wholesale_large']\n\n\ndef find_pack_group() -> TagGroup:\n try:\n pack_group = TagGroup.objects.get_pack()\n except TagGroup.DoesNotExist as error:\n raise UpdateCatalogException(\n 'Update the PACK_GROUP_UUID django settings variable to set the new relevant UUID. 
'\n + str(error)\n )\n if settings.PACK_GROUP_NAME.lower() not in pack_group.name.lower():\n raise UpdateCatalogException(\n 'The pack group name isn\\'t matched with the set name:'\n f' Pack group name: {pack_group.name}\\n'\n f' Set name: {settings.PACK_GROUP_NAME}\\n'\n 'Update the PACK_GROUP_NAME django settings variable to set the new relevant name.'\n )\n\n return pack_group\n\n\ndef update_in_packs(packs: TagQuerySet):\n \"\"\"Parse and save in pack quantity values.\"\"\"\n packs = (\n packs\n .annotate(\n in_pack_str=Substring(\n models.F('name'),\n models.Value('[0-9]+\\+?[0-9]*')))\n .exclude(in_pack_str__exact='')\n )\n\n for pack in packs:\n in_pack = sum(map(int, pack.in_pack_str.split('+')))\n pack.products.all().update(in_pack=max(in_pack, 1))\n\n\ndef update_prices(packs: TagQuerySet):\n \"\"\"Multiply product prices on in pack quantity.\"\"\"\n fields_to_update = {}\n for price in PRICES:\n fields_to_update[price] = models.F(price) * models.F('in_pack')\n\n with transaction.atomic():\n packs.products().update(**fields_to_update)\n\n\ndef main(*args, **kwargs):\n packs = find_pack_group().tags.all().prefetch_related('products')\n update_in_packs(packs)\n update_prices(packs)\n", "path": "shopelectro/management/commands/_update_catalog/update_pack.py"}], "after_files": [{"content": "\"\"\"\nUpdate Product.in_pack and prices.\n\nThe update_catalog command always resets product prices to per unit format, so:\n1. Parse in pack quantity from Tag.name and save it to Product.in_pack\n2. Multiply product prices by in_pack value and save.\n\"\"\"\nimport logging\n\nfrom django.conf import settings\nfrom django.db import models\n\nfrom catalog.models_expressions import Substring\nfrom shopelectro.exception import UpdateCatalogException\nfrom shopelectro.models import TagQuerySet, TagGroup\n\nlogger = logging.getLogger(__name__)\nPRICES = ['price', 'purchase_price', 'wholesale_small', 'wholesale_medium', 'wholesale_large']\n\n\ndef find_pack_group() -> TagGroup:\n try:\n pack_group = TagGroup.objects.get_pack()\n except TagGroup.DoesNotExist as error:\n raise UpdateCatalogException(\n 'Update the PACK_GROUP_UUID django settings variable to set the new relevant UUID. '\n + str(error)\n )\n if settings.PACK_GROUP_NAME.lower() not in pack_group.name.lower():\n raise UpdateCatalogException(\n 'The pack group name isn\\'t matched with the set name:'\n f' Pack group name: {pack_group.name}\\n'\n f' Set name: {settings.PACK_GROUP_NAME}\\n'\n 'Update the PACK_GROUP_NAME django settings variable to set the new relevant name.'\n )\n\n return pack_group\n\n\ndef update_in_packs(packs: TagQuerySet):\n \"\"\"Parse and save in pack quantity values.\"\"\"\n packs = (\n packs\n .annotate(\n in_pack_str=Substring(\n models.F('name'),\n models.Value('[0-9]+\\+?[0-9]*')))\n .exclude(in_pack_str__exact='')\n )\n\n for pack in packs:\n in_pack = sum(map(int, pack.in_pack_str.split('+')))\n pack.products.all().update(in_pack=max(in_pack, 1))\n\n\ndef main(*args, **kwargs):\n packs = find_pack_group().tags.all().prefetch_related('products')\n update_in_packs(packs)\n", "path": "shopelectro/management/commands/_update_catalog/update_pack.py"}]}
| 940 | 282 |
gh_patches_debug_17573
|
rasdani/github-patches
|
git_diff
|
huggingface__accelerate-2388
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Python 3.12 distutils deprecation error
### System Info
```Shell
accelerate 0.26.1, Windows 11, Python 3.12.1
```
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
To reproduce behaviour:
1. In Python 3.12, install the latest accelerate with
` pip3 install accelerate --upgrade `
2. Try to initialise accelerate with:
```python
from accelerate import Accelerator
accelerator = Accelerator()
```
This results in the error caused by https://github.com/huggingface/accelerate/blob/f88661b5d9f4977b5a1555a92e64ef3cd359cbc9/src/accelerate/utils/environment.py#L19
The fundamental issue is the deprecation and removal of `distutils` under https://peps.python.org/pep-0632/ (it is gone from the standard library as of Python 3.12).
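As a point of reference, a minimal distutils-free sketch of the affected lookup is shown below. The `find_nvidia_smi` helper name is invented for this example (it is not an accelerate API); the substantive point is that `shutil.which` is the standard-library replacement for `distutils.spawn.find_executable` and keeps working on Python 3.12.

```python
# Minimal sketch, not accelerate's actual code: locate nvidia-smi without distutils.
import os
import platform
from shutil import which


def find_nvidia_smi() -> str:
    """Return the nvidia-smi command, falling back to the default Windows install path."""
    if platform.system() != "Windows":
        return "nvidia-smi"
    command = which("nvidia-smi")  # stdlib replacement for distutils.spawn.find_executable
    if command is None:
        command = os.path.join(
            os.environ.get("systemdrive", "C:"),
            "Program Files", "NVIDIA Corporation", "NVSMI", "nvidia-smi.exe",
        )
    return command
```

The accepted patch shown later in this record takes the same `shutil.which` approach.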
### Expected behavior
Accelerate should not depend on deprecated package and be compatible with Python 3.12+
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/accelerate/utils/environment.py`
Content:
```
1 # Copyright 2022 The HuggingFace Team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import platform
17 import subprocess
18 import sys
19 from distutils import spawn
20 from typing import Dict
21
22 import torch
23
24
25 def str_to_bool(value) -> int:
26 """
27 Converts a string representation of truth to `True` (1) or `False` (0).
28
29 True values are `y`, `yes`, `t`, `true`, `on`, and `1`; False value are `n`, `no`, `f`, `false`, `off`, and `0`;
30 """
31 value = value.lower()
32 if value in ("y", "yes", "t", "true", "on", "1"):
33 return 1
34 elif value in ("n", "no", "f", "false", "off", "0"):
35 return 0
36 else:
37 raise ValueError(f"invalid truth value {value}")
38
39
40 def get_int_from_env(env_keys, default):
41 """Returns the first positive env value found in the `env_keys` list or the default."""
42 for e in env_keys:
43 val = int(os.environ.get(e, -1))
44 if val >= 0:
45 return val
46 return default
47
48
49 def parse_flag_from_env(key, default=False):
50 """Returns truthy value for `key` from the env if available else the default."""
51 value = os.environ.get(key, str(default))
52 return str_to_bool(value) == 1 # As its name indicates `str_to_bool` actually returns an int...
53
54
55 def parse_choice_from_env(key, default="no"):
56 value = os.environ.get(key, str(default))
57 return value
58
59
60 def are_libraries_initialized(*library_names: str) -> Dict[str, bool]:
61 """
62 Checks if any of `library_names` are imported in the environment. Will return results as a `key:bool` pair.
63 """
64 return [lib_name for lib_name in library_names if lib_name in sys.modules.keys()]
65
66
67 def get_gpu_info():
68 """
69 Gets GPU count and names using `nvidia-smi` instead of torch to not initialize CUDA.
70
71 Largely based on the `gputil` library.
72 """
73 if platform.system() == "Windows":
74 # If platform is Windows and nvidia-smi can't be found in path
75 # try from systemd rive with default installation path
76 command = spawn.find_executable("nvidia-smi")
77 if command is None:
78 command = "%s\\Program Files\\NVIDIA Corporation\\NVSMI\\nvidia-smi.exe" % os.environ["systemdrive"]
79 else:
80 command = "nvidia-smi"
81 # Returns as list of `n` GPUs and their names
82 output = subprocess.check_output(
83 [command, "--query-gpu=count,name", "--format=csv,noheader"], universal_newlines=True
84 )
85 output = output.strip()
86 gpus = output.split(os.linesep)
87 # Get names from output
88 gpu_count = len(gpus)
89 gpu_names = [gpu.split(",")[1].strip() for gpu in gpus]
90 return gpu_names, gpu_count
91
92
93 def check_cuda_p2p_ib_support():
94 """
95 Checks if the devices being used have issues with P2P and IB communications, namely any consumer GPU hardware after
96 the 3090.
97
98 Noteably uses `nvidia-smi` instead of torch to not initialize CUDA.
99 """
100 try:
101 device_names, device_count = get_gpu_info()
102 # As new consumer GPUs get released, add them to `unsupported_devices``
103 unsupported_devices = {"RTX 40"}
104 if device_count > 1:
105 if any(
106 unsupported_device in device_name
107 for device_name in device_names
108 for unsupported_device in unsupported_devices
109 ):
110 return False
111 except Exception:
112 pass
113 return True
114
115
116 def check_fp8_capability():
117 """
118 Checks if all the current GPUs available support FP8.
119
120 Notably must initialize `torch.cuda` to check.
121 """
122 cuda_device_capacity = torch.cuda.get_device_capability()
123 return cuda_device_capacity >= (8, 9)
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/accelerate/utils/environment.py b/src/accelerate/utils/environment.py
--- a/src/accelerate/utils/environment.py
+++ b/src/accelerate/utils/environment.py
@@ -16,7 +16,7 @@
import platform
import subprocess
import sys
-from distutils import spawn
+from shutil import which
from typing import Dict
import torch
@@ -72,8 +72,8 @@
"""
if platform.system() == "Windows":
# If platform is Windows and nvidia-smi can't be found in path
- # try from systemd rive with default installation path
- command = spawn.find_executable("nvidia-smi")
+ # try from systemd drive with default installation path
+ command = which("nvidia-smi")
if command is None:
command = "%s\\Program Files\\NVIDIA Corporation\\NVSMI\\nvidia-smi.exe" % os.environ["systemdrive"]
else:
|
{"golden_diff": "diff --git a/src/accelerate/utils/environment.py b/src/accelerate/utils/environment.py\n--- a/src/accelerate/utils/environment.py\n+++ b/src/accelerate/utils/environment.py\n@@ -16,7 +16,7 @@\n import platform\n import subprocess\n import sys\n-from distutils import spawn\n+from shutil import which\n from typing import Dict\n \n import torch\n@@ -72,8 +72,8 @@\n \"\"\"\n if platform.system() == \"Windows\":\n # If platform is Windows and nvidia-smi can't be found in path\n- # try from systemd rive with default installation path\n- command = spawn.find_executable(\"nvidia-smi\")\n+ # try from systemd drive with default installation path\n+ command = which(\"nvidia-smi\")\n if command is None:\n command = \"%s\\\\Program Files\\\\NVIDIA Corporation\\\\NVSMI\\\\nvidia-smi.exe\" % os.environ[\"systemdrive\"]\n else:\n", "issue": "Python 3.12 distutils deprecation error\n### System Info\n\n```Shell\naccelerate 0.26.1, Windows 11, Python 3.12.1\n```\n\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\nTo reproduce behaviour:\r\n1. In Python 3.12 install latest accelerate with\r\n` pip3 install accelerate --upgrade `\r\n2. Try to initialise accelerate with:\r\n```python\r\nfrom accelerate import Accelerator\r\naccelerator = Accelerator()\r\n```\r\n\r\nThis results in the error caused by https://github.com/huggingface/accelerate/blob/f88661b5d9f4977b5a1555a92e64ef3cd359cbc9/src/accelerate/utils/environment.py#L19\r\n\r\n\r\nThe fundamental issue is deprecation of distutils as part of https://peps.python.org/pep-0632/\n\n### Expected behavior\n\nAccelerate should not depend on deprecated package and be compatible with Python 3.12+\n", "before_files": [{"content": "# Copyright 2022 The HuggingFace Team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport platform\nimport subprocess\nimport sys\nfrom distutils import spawn\nfrom typing import Dict\n\nimport torch\n\n\ndef str_to_bool(value) -> int:\n \"\"\"\n Converts a string representation of truth to `True` (1) or `False` (0).\n\n True values are `y`, `yes`, `t`, `true`, `on`, and `1`; False value are `n`, `no`, `f`, `false`, `off`, and `0`;\n \"\"\"\n value = value.lower()\n if value in (\"y\", \"yes\", \"t\", \"true\", \"on\", \"1\"):\n return 1\n elif value in (\"n\", \"no\", \"f\", \"false\", \"off\", \"0\"):\n return 0\n else:\n raise ValueError(f\"invalid truth value {value}\")\n\n\ndef get_int_from_env(env_keys, default):\n \"\"\"Returns the first positive env value found in the `env_keys` list or the default.\"\"\"\n for e in env_keys:\n val = int(os.environ.get(e, -1))\n if val >= 0:\n return val\n return default\n\n\ndef parse_flag_from_env(key, default=False):\n \"\"\"Returns truthy value for `key` from the env if available else the default.\"\"\"\n value = os.environ.get(key, str(default))\n return str_to_bool(value) == 1 # As its name indicates `str_to_bool` actually returns an int...\n\n\ndef parse_choice_from_env(key, default=\"no\"):\n value = os.environ.get(key, str(default))\n return value\n\n\ndef are_libraries_initialized(*library_names: str) -> Dict[str, bool]:\n \"\"\"\n Checks if any of `library_names` are imported in the environment. 
Will return results as a `key:bool` pair.\n \"\"\"\n return [lib_name for lib_name in library_names if lib_name in sys.modules.keys()]\n\n\ndef get_gpu_info():\n \"\"\"\n Gets GPU count and names using `nvidia-smi` instead of torch to not initialize CUDA.\n\n Largely based on the `gputil` library.\n \"\"\"\n if platform.system() == \"Windows\":\n # If platform is Windows and nvidia-smi can't be found in path\n # try from systemd rive with default installation path\n command = spawn.find_executable(\"nvidia-smi\")\n if command is None:\n command = \"%s\\\\Program Files\\\\NVIDIA Corporation\\\\NVSMI\\\\nvidia-smi.exe\" % os.environ[\"systemdrive\"]\n else:\n command = \"nvidia-smi\"\n # Returns as list of `n` GPUs and their names\n output = subprocess.check_output(\n [command, \"--query-gpu=count,name\", \"--format=csv,noheader\"], universal_newlines=True\n )\n output = output.strip()\n gpus = output.split(os.linesep)\n # Get names from output\n gpu_count = len(gpus)\n gpu_names = [gpu.split(\",\")[1].strip() for gpu in gpus]\n return gpu_names, gpu_count\n\n\ndef check_cuda_p2p_ib_support():\n \"\"\"\n Checks if the devices being used have issues with P2P and IB communications, namely any consumer GPU hardware after\n the 3090.\n\n Noteably uses `nvidia-smi` instead of torch to not initialize CUDA.\n \"\"\"\n try:\n device_names, device_count = get_gpu_info()\n # As new consumer GPUs get released, add them to `unsupported_devices``\n unsupported_devices = {\"RTX 40\"}\n if device_count > 1:\n if any(\n unsupported_device in device_name\n for device_name in device_names\n for unsupported_device in unsupported_devices\n ):\n return False\n except Exception:\n pass\n return True\n\n\ndef check_fp8_capability():\n \"\"\"\n Checks if all the current GPUs available support FP8.\n\n Notably must initialize `torch.cuda` to check.\n \"\"\"\n cuda_device_capacity = torch.cuda.get_device_capability()\n return cuda_device_capacity >= (8, 9)\n", "path": "src/accelerate/utils/environment.py"}], "after_files": [{"content": "# Copyright 2022 The HuggingFace Team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport platform\nimport subprocess\nimport sys\nfrom shutil import which\nfrom typing import Dict\n\nimport torch\n\n\ndef str_to_bool(value) -> int:\n \"\"\"\n Converts a string representation of truth to `True` (1) or `False` (0).\n\n True values are `y`, `yes`, `t`, `true`, `on`, and `1`; False value are `n`, `no`, `f`, `false`, `off`, and `0`;\n \"\"\"\n value = value.lower()\n if value in (\"y\", \"yes\", \"t\", \"true\", \"on\", \"1\"):\n return 1\n elif value in (\"n\", \"no\", \"f\", \"false\", \"off\", \"0\"):\n return 0\n else:\n raise ValueError(f\"invalid truth value {value}\")\n\n\ndef get_int_from_env(env_keys, default):\n \"\"\"Returns the first positive env value found in the `env_keys` list or the default.\"\"\"\n for e in env_keys:\n val = int(os.environ.get(e, -1))\n if val >= 0:\n return val\n return default\n\n\ndef parse_flag_from_env(key, default=False):\n \"\"\"Returns truthy value for `key` from the env if available else the default.\"\"\"\n value = os.environ.get(key, str(default))\n return str_to_bool(value) == 1 # As its name indicates `str_to_bool` actually returns an int...\n\n\ndef parse_choice_from_env(key, default=\"no\"):\n value = os.environ.get(key, str(default))\n return value\n\n\ndef are_libraries_initialized(*library_names: str) -> Dict[str, bool]:\n \"\"\"\n Checks if any of `library_names` are imported in the environment. 
Will return results as a `key:bool` pair.\n \"\"\"\n return [lib_name for lib_name in library_names if lib_name in sys.modules.keys()]\n\n\ndef get_gpu_info():\n \"\"\"\n Gets GPU count and names using `nvidia-smi` instead of torch to not initialize CUDA.\n\n Largely based on the `gputil` library.\n \"\"\"\n if platform.system() == \"Windows\":\n # If platform is Windows and nvidia-smi can't be found in path\n # try from systemd drive with default installation path\n command = which(\"nvidia-smi\")\n if command is None:\n command = \"%s\\\\Program Files\\\\NVIDIA Corporation\\\\NVSMI\\\\nvidia-smi.exe\" % os.environ[\"systemdrive\"]\n else:\n command = \"nvidia-smi\"\n # Returns as list of `n` GPUs and their names\n output = subprocess.check_output(\n [command, \"--query-gpu=count,name\", \"--format=csv,noheader\"], universal_newlines=True\n )\n output = output.strip()\n gpus = output.split(os.linesep)\n # Get names from output\n gpu_count = len(gpus)\n gpu_names = [gpu.split(\",\")[1].strip() for gpu in gpus]\n return gpu_names, gpu_count\n\n\ndef check_cuda_p2p_ib_support():\n \"\"\"\n Checks if the devices being used have issues with P2P and IB communications, namely any consumer GPU hardware after\n the 3090.\n\n Noteably uses `nvidia-smi` instead of torch to not initialize CUDA.\n \"\"\"\n try:\n device_names, device_count = get_gpu_info()\n # As new consumer GPUs get released, add them to `unsupported_devices``\n unsupported_devices = {\"RTX 40\"}\n if device_count > 1:\n if any(\n unsupported_device in device_name\n for device_name in device_names\n for unsupported_device in unsupported_devices\n ):\n return False\n except Exception:\n pass\n return True\n\n\ndef check_fp8_capability():\n \"\"\"\n Checks if all the current GPUs available support FP8.\n\n Notably must initialize `torch.cuda` to check.\n \"\"\"\n cuda_device_capacity = torch.cuda.get_device_capability()\n return cuda_device_capacity >= (8, 9)\n", "path": "src/accelerate/utils/environment.py"}]}
| 1,909 | 211 |
gh_patches_debug_40301
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-5543
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `colossalai/shardformer/shard/shard_config.py`
Content:
```
1 from dataclasses import dataclass, field
2 from typing import Any, Dict, Optional
3
4 import torch.distributed as dist
5 from torch.distributed import ProcessGroup
6
7 from colossalai.pipeline.stage_manager import PipelineStageManager
8
9 __all__ = ["ShardConfig"]
10
11
12 @dataclass
13 class ShardConfig:
14 r"""
15 The config for sharding the huggingface model
16
17 Args:
18 tensor_parallel_process_group (Optional[ProcessGroup]): The process group of tensor parallelism, it's necessary when using tensor parallel. Defaults to None, which is the global process group.
19 pipeline_stage_manager (Optional[PipelineStageManager]): If using pipeline parallelism, it's necessary to specify a pipeline stage manager for inter-process communication in pipeline parallelism. Defaults to None, which means not using pipeline parallelism.
20 enable_tensor_parallelism (bool): Whether to use tensor parallelism. Defaults to True.
21 enable_fused_normalization (bool): Whether to use fused layernorm. Defaults to False.
22 enable_flash_attention (bool, optional): Whether to switch on flash attention. Defaults to False.
23 enable_jit_fused (bool, optional): Whether to switch on JIT fused operators. Defaults to False.
24 enable_sequence_parallelism (bool): Whether to turn on sequence parallelism, which partitions non-tensor-parallel regions along the sequence dimension. Defaults to False.
25 enable_sequence_overlap (bool): Whether to turn on sequence overlap, which overlap the computation and communication in sequence parallelism. It can only be used when enable_sequence_parallelism is True. Defaults to False.
26 enable_all_optimization (bool): Whether to turn on all optimization tools including 'fused normalization', 'flash attention', 'JIT fused operators', 'sequence parallelism' and 'sequence overlap'. Defaults to False.
27 """
28 tensor_parallel_process_group: Optional[ProcessGroup] = None
29 pipeline_stage_manager: Optional[PipelineStageManager] = None
30 enable_tensor_parallelism: bool = True
31 enable_fused_normalization: bool = False
32 enable_flash_attention: bool = False
33 enable_jit_fused: bool = False
34 enable_all_optimization: bool = False
35 enable_sequence_parallelism: bool = False
36 enable_sequence_overlap: bool = False
37 parallel_output = True
38 extra_kwargs: Dict[str, Any] = field(default_factory=dict)
39 # pipeline_parallel_size: int
40 # data_parallel_size: int
41 # tensor_parallel_mode: Literal['1d', '2d', '2.5d', '3d']
42
43 @property
44 def tensor_parallel_size(self):
45 return self._tensor_parallel_size
46
47 def __post_init__(self):
48 if not self.enable_tensor_parallelism and self.enable_sequence_parallelism:
49 raise ValueError(
50 "enable_sequence_parallelism can only be set to True when enable_tensor_parallelism is True"
51 )
52 if not self.enable_sequence_parallelism and self.enable_sequence_overlap:
53 raise ValueError("enable_sequence_overlap can only be set to True when enable_sequence_parallelism is True")
54 if not self.enable_tensor_parallelism:
55 self._tensor_parallel_size = 1
56 else:
57 # get the parallel size
58 self._tensor_parallel_size = dist.get_world_size(self.tensor_parallel_process_group)
59 # turn on all optimization if all_optimization is set to True
60 if self.enable_all_optimization:
61 self._turn_on_all_optimization()
62
63 def _turn_on_all_optimization(self):
64 """
65 Turn on all optimization.
66 """
67 # you can add all the optimization flag here
68 self.enable_fused_normalization = True
69 self.enable_flash_attention = True
70 self.enable_jit_fused = True
71 self.enable_sequence_parallelism = True
72 self.enable_sequence_overlap = True
73
74 def _infer(self):
75 """
76 Set default params for inference.
77 """
78 # assert self.pipeline_stage_manager is None, "pipeline parallelism is not supported in inference for now"
79
```
Path: `colossalai/inference/modeling/policy/nopadding_llama.py`
Content:
```
1 from functools import partial
2
3 from torch.nn import Parameter
4 from transformers.models.llama.modeling_llama import LlamaDecoderLayer, LlamaForCausalLM, LlamaModel, LlamaRMSNorm
5
6 from colossalai.inference.modeling.models.nopadding_llama import (
7 NopadLlamaAttention,
8 NopadLlamaMLP,
9 llama_causal_lm_forward,
10 llama_decoder_layer_forward,
11 llama_model_forward,
12 llama_rmsnorm_forward,
13 )
14 from colossalai.inference.utils import init_to_get_rotary
15 from colossalai.shardformer.policies.base_policy import ModulePolicyDescription, SubModuleReplacementDescription
16
17 # import colossalai
18 from colossalai.shardformer.policies.llama import LlamaForCausalLMPolicy
19
20
21 class NoPaddingLlamaModelInferPolicy(LlamaForCausalLMPolicy):
22 def __init__(self) -> None:
23 super().__init__()
24
25 def module_policy(self):
26 policy = super().module_policy()
27
28 decoder_attribute_replacement = {
29 "lm_head.weight": Parameter(self.model.lm_head.weight.transpose(0, 1), requires_grad=False),
30 }
31 policy[LlamaForCausalLM] = ModulePolicyDescription(
32 attribute_replacement=decoder_attribute_replacement,
33 )
34
35 policy[LlamaDecoderLayer] = ModulePolicyDescription(
36 sub_module_replacement=[
37 SubModuleReplacementDescription(
38 suffix="mlp",
39 target_module=NopadLlamaMLP,
40 ),
41 SubModuleReplacementDescription(
42 suffix="self_attn",
43 target_module=NopadLlamaAttention,
44 ),
45 ]
46 )
47
48 self.shard_config._infer()
49
50 infer_forward = llama_causal_lm_forward
51 method_replacement = {"forward": partial(infer_forward)}
52 self.append_or_create_method_replacement(
53 description=method_replacement, policy=policy, target_key=LlamaForCausalLM
54 )
55
56 infer_forward = llama_model_forward
57 method_replacement = {"forward": partial(infer_forward)}
58 self.append_or_create_method_replacement(description=method_replacement, policy=policy, target_key=LlamaModel)
59
60 infer_forward = llama_decoder_layer_forward
61 method_replacement = {"forward": partial(infer_forward)}
62 self.append_or_create_method_replacement(
63 description=method_replacement, policy=policy, target_key=LlamaDecoderLayer
64 )
65
66 infer_forward = llama_rmsnorm_forward
67 method_replacement = {"forward": partial(infer_forward)}
68 self.append_or_create_method_replacement(description=method_replacement, policy=policy, target_key=LlamaRMSNorm)
69
70 return policy
71
72 def postprocess(self):
73 init_to_get_rotary(self.model.model)
74 return self.model
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/colossalai/inference/modeling/policy/nopadding_llama.py b/colossalai/inference/modeling/policy/nopadding_llama.py
--- a/colossalai/inference/modeling/policy/nopadding_llama.py
+++ b/colossalai/inference/modeling/policy/nopadding_llama.py
@@ -1,5 +1,3 @@
-from functools import partial
-
from torch.nn import Parameter
from transformers.models.llama.modeling_llama import LlamaDecoderLayer, LlamaForCausalLM, LlamaModel, LlamaRMSNorm
@@ -13,8 +11,6 @@
)
from colossalai.inference.utils import init_to_get_rotary
from colossalai.shardformer.policies.base_policy import ModulePolicyDescription, SubModuleReplacementDescription
-
-# import colossalai
from colossalai.shardformer.policies.llama import LlamaForCausalLMPolicy
@@ -45,27 +41,18 @@
]
)
- self.shard_config._infer()
-
- infer_forward = llama_causal_lm_forward
- method_replacement = {"forward": partial(infer_forward)}
self.append_or_create_method_replacement(
- description=method_replacement, policy=policy, target_key=LlamaForCausalLM
+ description={"forward": llama_causal_lm_forward}, policy=policy, target_key=LlamaForCausalLM
)
-
- infer_forward = llama_model_forward
- method_replacement = {"forward": partial(infer_forward)}
- self.append_or_create_method_replacement(description=method_replacement, policy=policy, target_key=LlamaModel)
-
- infer_forward = llama_decoder_layer_forward
- method_replacement = {"forward": partial(infer_forward)}
self.append_or_create_method_replacement(
- description=method_replacement, policy=policy, target_key=LlamaDecoderLayer
+ description={"forward": llama_model_forward}, policy=policy, target_key=LlamaModel
+ )
+ self.append_or_create_method_replacement(
+ description={"forward": llama_decoder_layer_forward}, policy=policy, target_key=LlamaDecoderLayer
+ )
+ self.append_or_create_method_replacement(
+ description={"forward": llama_rmsnorm_forward}, policy=policy, target_key=LlamaRMSNorm
)
-
- infer_forward = llama_rmsnorm_forward
- method_replacement = {"forward": partial(infer_forward)}
- self.append_or_create_method_replacement(description=method_replacement, policy=policy, target_key=LlamaRMSNorm)
return policy
diff --git a/colossalai/shardformer/shard/shard_config.py b/colossalai/shardformer/shard/shard_config.py
--- a/colossalai/shardformer/shard/shard_config.py
+++ b/colossalai/shardformer/shard/shard_config.py
@@ -36,8 +36,6 @@
enable_sequence_overlap: bool = False
parallel_output = True
extra_kwargs: Dict[str, Any] = field(default_factory=dict)
- # pipeline_parallel_size: int
- # data_parallel_size: int
# tensor_parallel_mode: Literal['1d', '2d', '2.5d', '3d']
@property
@@ -70,9 +68,3 @@
self.enable_jit_fused = True
self.enable_sequence_parallelism = True
self.enable_sequence_overlap = True
-
- def _infer(self):
- """
- Set default params for inference.
- """
- # assert self.pipeline_stage_manager is None, "pipeline parallelism is not supported in inference for now"
|
{"golden_diff": "diff --git a/colossalai/inference/modeling/policy/nopadding_llama.py b/colossalai/inference/modeling/policy/nopadding_llama.py\n--- a/colossalai/inference/modeling/policy/nopadding_llama.py\n+++ b/colossalai/inference/modeling/policy/nopadding_llama.py\n@@ -1,5 +1,3 @@\n-from functools import partial\n-\n from torch.nn import Parameter\n from transformers.models.llama.modeling_llama import LlamaDecoderLayer, LlamaForCausalLM, LlamaModel, LlamaRMSNorm\n \n@@ -13,8 +11,6 @@\n )\n from colossalai.inference.utils import init_to_get_rotary\n from colossalai.shardformer.policies.base_policy import ModulePolicyDescription, SubModuleReplacementDescription\n-\n-# import colossalai\n from colossalai.shardformer.policies.llama import LlamaForCausalLMPolicy\n \n \n@@ -45,27 +41,18 @@\n ]\n )\n \n- self.shard_config._infer()\n-\n- infer_forward = llama_causal_lm_forward\n- method_replacement = {\"forward\": partial(infer_forward)}\n self.append_or_create_method_replacement(\n- description=method_replacement, policy=policy, target_key=LlamaForCausalLM\n+ description={\"forward\": llama_causal_lm_forward}, policy=policy, target_key=LlamaForCausalLM\n )\n-\n- infer_forward = llama_model_forward\n- method_replacement = {\"forward\": partial(infer_forward)}\n- self.append_or_create_method_replacement(description=method_replacement, policy=policy, target_key=LlamaModel)\n-\n- infer_forward = llama_decoder_layer_forward\n- method_replacement = {\"forward\": partial(infer_forward)}\n self.append_or_create_method_replacement(\n- description=method_replacement, policy=policy, target_key=LlamaDecoderLayer\n+ description={\"forward\": llama_model_forward}, policy=policy, target_key=LlamaModel\n+ )\n+ self.append_or_create_method_replacement(\n+ description={\"forward\": llama_decoder_layer_forward}, policy=policy, target_key=LlamaDecoderLayer\n+ )\n+ self.append_or_create_method_replacement(\n+ description={\"forward\": llama_rmsnorm_forward}, policy=policy, target_key=LlamaRMSNorm\n )\n-\n- infer_forward = llama_rmsnorm_forward\n- method_replacement = {\"forward\": partial(infer_forward)}\n- self.append_or_create_method_replacement(description=method_replacement, policy=policy, target_key=LlamaRMSNorm)\n \n return policy\n \ndiff --git a/colossalai/shardformer/shard/shard_config.py b/colossalai/shardformer/shard/shard_config.py\n--- a/colossalai/shardformer/shard/shard_config.py\n+++ b/colossalai/shardformer/shard/shard_config.py\n@@ -36,8 +36,6 @@\n enable_sequence_overlap: bool = False\n parallel_output = True\n extra_kwargs: Dict[str, Any] = field(default_factory=dict)\n- # pipeline_parallel_size: int\n- # data_parallel_size: int\n # tensor_parallel_mode: Literal['1d', '2d', '2.5d', '3d']\n \n @property\n@@ -70,9 +68,3 @@\n self.enable_jit_fused = True\n self.enable_sequence_parallelism = True\n self.enable_sequence_overlap = True\n-\n- def _infer(self):\n- \"\"\"\n- Set default params for inference.\n- \"\"\"\n- # assert self.pipeline_stage_manager is None, \"pipeline parallelism is not supported in inference for now\"\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from dataclasses import dataclass, field\nfrom typing import Any, Dict, Optional\n\nimport torch.distributed as dist\nfrom torch.distributed import ProcessGroup\n\nfrom colossalai.pipeline.stage_manager import PipelineStageManager\n\n__all__ = [\"ShardConfig\"]\n\n\n@dataclass\nclass ShardConfig:\n r\"\"\"\n The config for sharding the 
huggingface model\n\n Args:\n tensor_parallel_process_group (Optional[ProcessGroup]): The process group of tensor parallelism, it's necessary when using tensor parallel. Defaults to None, which is the global process group.\n pipeline_stage_manager (Optional[PipelineStageManager]): If using pipeline parallelism, it's necessary to specify a pipeline stage manager for inter-process communication in pipeline parallelism. Defaults to None, which means not using pipeline parallelism.\n enable_tensor_parallelism (bool): Whether to use tensor parallelism. Defaults to True.\n enable_fused_normalization (bool): Whether to use fused layernorm. Defaults to False.\n enable_flash_attention (bool, optional): Whether to switch on flash attention. Defaults to False.\n enable_jit_fused (bool, optional): Whether to switch on JIT fused operators. Defaults to False.\n enable_sequence_parallelism (bool): Whether to turn on sequence parallelism, which partitions non-tensor-parallel regions along the sequence dimension. Defaults to False.\n enable_sequence_overlap (bool): Whether to turn on sequence overlap, which overlap the computation and communication in sequence parallelism. It can only be used when enable_sequence_parallelism is True. Defaults to False.\n enable_all_optimization (bool): Whether to turn on all optimization tools including 'fused normalization', 'flash attention', 'JIT fused operators', 'sequence parallelism' and 'sequence overlap'. Defaults to False.\n \"\"\"\n tensor_parallel_process_group: Optional[ProcessGroup] = None\n pipeline_stage_manager: Optional[PipelineStageManager] = None\n enable_tensor_parallelism: bool = True\n enable_fused_normalization: bool = False\n enable_flash_attention: bool = False\n enable_jit_fused: bool = False\n enable_all_optimization: bool = False\n enable_sequence_parallelism: bool = False\n enable_sequence_overlap: bool = False\n parallel_output = True\n extra_kwargs: Dict[str, Any] = field(default_factory=dict)\n # pipeline_parallel_size: int\n # data_parallel_size: int\n # tensor_parallel_mode: Literal['1d', '2d', '2.5d', '3d']\n\n @property\n def tensor_parallel_size(self):\n return self._tensor_parallel_size\n\n def __post_init__(self):\n if not self.enable_tensor_parallelism and self.enable_sequence_parallelism:\n raise ValueError(\n \"enable_sequence_parallelism can only be set to True when enable_tensor_parallelism is True\"\n )\n if not self.enable_sequence_parallelism and self.enable_sequence_overlap:\n raise ValueError(\"enable_sequence_overlap can only be set to True when enable_sequence_parallelism is True\")\n if not self.enable_tensor_parallelism:\n self._tensor_parallel_size = 1\n else:\n # get the parallel size\n self._tensor_parallel_size = dist.get_world_size(self.tensor_parallel_process_group)\n # turn on all optimization if all_optimization is set to True\n if self.enable_all_optimization:\n self._turn_on_all_optimization()\n\n def _turn_on_all_optimization(self):\n \"\"\"\n Turn on all optimization.\n \"\"\"\n # you can add all the optimization flag here\n self.enable_fused_normalization = True\n self.enable_flash_attention = True\n self.enable_jit_fused = True\n self.enable_sequence_parallelism = True\n self.enable_sequence_overlap = True\n\n def _infer(self):\n \"\"\"\n Set default params for inference.\n \"\"\"\n # assert self.pipeline_stage_manager is None, \"pipeline parallelism is not supported in inference for now\"\n", "path": "colossalai/shardformer/shard/shard_config.py"}, {"content": "from functools import partial\n\nfrom 
torch.nn import Parameter\nfrom transformers.models.llama.modeling_llama import LlamaDecoderLayer, LlamaForCausalLM, LlamaModel, LlamaRMSNorm\n\nfrom colossalai.inference.modeling.models.nopadding_llama import (\n NopadLlamaAttention,\n NopadLlamaMLP,\n llama_causal_lm_forward,\n llama_decoder_layer_forward,\n llama_model_forward,\n llama_rmsnorm_forward,\n)\nfrom colossalai.inference.utils import init_to_get_rotary\nfrom colossalai.shardformer.policies.base_policy import ModulePolicyDescription, SubModuleReplacementDescription\n\n# import colossalai\nfrom colossalai.shardformer.policies.llama import LlamaForCausalLMPolicy\n\n\nclass NoPaddingLlamaModelInferPolicy(LlamaForCausalLMPolicy):\n def __init__(self) -> None:\n super().__init__()\n\n def module_policy(self):\n policy = super().module_policy()\n\n decoder_attribute_replacement = {\n \"lm_head.weight\": Parameter(self.model.lm_head.weight.transpose(0, 1), requires_grad=False),\n }\n policy[LlamaForCausalLM] = ModulePolicyDescription(\n attribute_replacement=decoder_attribute_replacement,\n )\n\n policy[LlamaDecoderLayer] = ModulePolicyDescription(\n sub_module_replacement=[\n SubModuleReplacementDescription(\n suffix=\"mlp\",\n target_module=NopadLlamaMLP,\n ),\n SubModuleReplacementDescription(\n suffix=\"self_attn\",\n target_module=NopadLlamaAttention,\n ),\n ]\n )\n\n self.shard_config._infer()\n\n infer_forward = llama_causal_lm_forward\n method_replacement = {\"forward\": partial(infer_forward)}\n self.append_or_create_method_replacement(\n description=method_replacement, policy=policy, target_key=LlamaForCausalLM\n )\n\n infer_forward = llama_model_forward\n method_replacement = {\"forward\": partial(infer_forward)}\n self.append_or_create_method_replacement(description=method_replacement, policy=policy, target_key=LlamaModel)\n\n infer_forward = llama_decoder_layer_forward\n method_replacement = {\"forward\": partial(infer_forward)}\n self.append_or_create_method_replacement(\n description=method_replacement, policy=policy, target_key=LlamaDecoderLayer\n )\n\n infer_forward = llama_rmsnorm_forward\n method_replacement = {\"forward\": partial(infer_forward)}\n self.append_or_create_method_replacement(description=method_replacement, policy=policy, target_key=LlamaRMSNorm)\n\n return policy\n\n def postprocess(self):\n init_to_get_rotary(self.model.model)\n return self.model\n", "path": "colossalai/inference/modeling/policy/nopadding_llama.py"}], "after_files": [{"content": "from dataclasses import dataclass, field\nfrom typing import Any, Dict, Optional\n\nimport torch.distributed as dist\nfrom torch.distributed import ProcessGroup\n\nfrom colossalai.pipeline.stage_manager import PipelineStageManager\n\n__all__ = [\"ShardConfig\"]\n\n\n@dataclass\nclass ShardConfig:\n r\"\"\"\n The config for sharding the huggingface model\n\n Args:\n tensor_parallel_process_group (Optional[ProcessGroup]): The process group of tensor parallelism, it's necessary when using tensor parallel. Defaults to None, which is the global process group.\n pipeline_stage_manager (Optional[PipelineStageManager]): If using pipeline parallelism, it's necessary to specify a pipeline stage manager for inter-process communication in pipeline parallelism. Defaults to None, which means not using pipeline parallelism.\n enable_tensor_parallelism (bool): Whether to use tensor parallelism. Defaults to True.\n enable_fused_normalization (bool): Whether to use fused layernorm. 
Defaults to False.\n enable_flash_attention (bool, optional): Whether to switch on flash attention. Defaults to False.\n enable_jit_fused (bool, optional): Whether to switch on JIT fused operators. Defaults to False.\n enable_sequence_parallelism (bool): Whether to turn on sequence parallelism, which partitions non-tensor-parallel regions along the sequence dimension. Defaults to False.\n enable_sequence_overlap (bool): Whether to turn on sequence overlap, which overlap the computation and communication in sequence parallelism. It can only be used when enable_sequence_parallelism is True. Defaults to False.\n enable_all_optimization (bool): Whether to turn on all optimization tools including 'fused normalization', 'flash attention', 'JIT fused operators', 'sequence parallelism' and 'sequence overlap'. Defaults to False.\n \"\"\"\n tensor_parallel_process_group: Optional[ProcessGroup] = None\n pipeline_stage_manager: Optional[PipelineStageManager] = None\n enable_tensor_parallelism: bool = True\n enable_fused_normalization: bool = False\n enable_flash_attention: bool = False\n enable_jit_fused: bool = False\n enable_all_optimization: bool = False\n enable_sequence_parallelism: bool = False\n enable_sequence_overlap: bool = False\n parallel_output = True\n extra_kwargs: Dict[str, Any] = field(default_factory=dict)\n # tensor_parallel_mode: Literal['1d', '2d', '2.5d', '3d']\n\n @property\n def tensor_parallel_size(self):\n return self._tensor_parallel_size\n\n def __post_init__(self):\n if not self.enable_tensor_parallelism and self.enable_sequence_parallelism:\n raise ValueError(\n \"enable_sequence_parallelism can only be set to True when enable_tensor_parallelism is True\"\n )\n if not self.enable_sequence_parallelism and self.enable_sequence_overlap:\n raise ValueError(\"enable_sequence_overlap can only be set to True when enable_sequence_parallelism is True\")\n if not self.enable_tensor_parallelism:\n self._tensor_parallel_size = 1\n else:\n # get the parallel size\n self._tensor_parallel_size = dist.get_world_size(self.tensor_parallel_process_group)\n # turn on all optimization if all_optimization is set to True\n if self.enable_all_optimization:\n self._turn_on_all_optimization()\n\n def _turn_on_all_optimization(self):\n \"\"\"\n Turn on all optimization.\n \"\"\"\n # you can add all the optimization flag here\n self.enable_fused_normalization = True\n self.enable_flash_attention = True\n self.enable_jit_fused = True\n self.enable_sequence_parallelism = True\n self.enable_sequence_overlap = True\n", "path": "colossalai/shardformer/shard/shard_config.py"}, {"content": "from torch.nn import Parameter\nfrom transformers.models.llama.modeling_llama import LlamaDecoderLayer, LlamaForCausalLM, LlamaModel, LlamaRMSNorm\n\nfrom colossalai.inference.modeling.models.nopadding_llama import (\n NopadLlamaAttention,\n NopadLlamaMLP,\n llama_causal_lm_forward,\n llama_decoder_layer_forward,\n llama_model_forward,\n llama_rmsnorm_forward,\n)\nfrom colossalai.inference.utils import init_to_get_rotary\nfrom colossalai.shardformer.policies.base_policy import ModulePolicyDescription, SubModuleReplacementDescription\nfrom colossalai.shardformer.policies.llama import LlamaForCausalLMPolicy\n\n\nclass NoPaddingLlamaModelInferPolicy(LlamaForCausalLMPolicy):\n def __init__(self) -> None:\n super().__init__()\n\n def module_policy(self):\n policy = super().module_policy()\n\n decoder_attribute_replacement = {\n \"lm_head.weight\": Parameter(self.model.lm_head.weight.transpose(0, 1), requires_grad=False),\n 
}\n policy[LlamaForCausalLM] = ModulePolicyDescription(\n attribute_replacement=decoder_attribute_replacement,\n )\n\n policy[LlamaDecoderLayer] = ModulePolicyDescription(\n sub_module_replacement=[\n SubModuleReplacementDescription(\n suffix=\"mlp\",\n target_module=NopadLlamaMLP,\n ),\n SubModuleReplacementDescription(\n suffix=\"self_attn\",\n target_module=NopadLlamaAttention,\n ),\n ]\n )\n\n self.append_or_create_method_replacement(\n description={\"forward\": llama_causal_lm_forward}, policy=policy, target_key=LlamaForCausalLM\n )\n self.append_or_create_method_replacement(\n description={\"forward\": llama_model_forward}, policy=policy, target_key=LlamaModel\n )\n self.append_or_create_method_replacement(\n description={\"forward\": llama_decoder_layer_forward}, policy=policy, target_key=LlamaDecoderLayer\n )\n self.append_or_create_method_replacement(\n description={\"forward\": llama_rmsnorm_forward}, policy=policy, target_key=LlamaRMSNorm\n )\n\n return policy\n\n def postprocess(self):\n init_to_get_rotary(self.model.model)\n return self.model\n", "path": "colossalai/inference/modeling/policy/nopadding_llama.py"}]}
| 2,025 | 807 |
gh_patches_debug_6604
|
rasdani/github-patches
|
git_diff
|
OpenNMT__OpenNMT-tf-222
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
assert_state_is_compatible() cannot detect dimension difference between encoder_state and decoder_zero_state when encoder and decoder dimensions are not the same in NMTSmall model
I just followed the instructions on the page [http://opennmt.net/OpenNMT-tf/quickstart.html](http://opennmt.net/OpenNMT-tf/quickstart.html) and played around a little bit with the NMTSmall model by setting a different `num_units` value to the `UnidirectionalRNNEncoder`, say `256`, which is different from the `512` for the `AttentionalRNNDecoder`.
This line
https://github.com/OpenNMT/OpenNMT-tf/blob/ae0ada93a652c02793de3b481e20315716ea4f23/opennmt/layers/bridge.py#L56
in the `CopyBridge` did not throw any error, even though the `encoder_state` and `decoder_zero_state` do not have the same dimensions, `256` vs `512`.
It is probably natural to think of using the `DenseBridge` when the dimensions are set differently. However, the `CopyBridge` should raise an error in such misuse cases, instead of letting one figure it out from an error message such as the following
`ValueError: Dimensions must be equal, but are 1280 and 1536 for 'seq2seq/parallel_0/seq2seq/decoder_1/decoder/while/BasicDecoderStep/decoder/attention_wrapper/attention_wrapper/multi_rnn_cell/cell_0/lstm_cell/MatMul' (op: 'MatMul') with input shapes: [?,1280], [1536,2048].`
Can anyone please explain why the check at
https://github.com/OpenNMT/OpenNMT-tf/blob/ae0ada93a652c02793de3b481e20315716ea4f23/opennmt/layers/bridge.py#L28
passed without an issue?
Thanks!
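For illustration only, here is a hedged sketch of the kind of explicit depth comparison that would have caught the 256-vs-512 mismatch. It is not the project's code, and the accepted patch (shown later in this record) implements essentially this check inside `assert_state_is_compatible`.

```python
# Hypothetical sketch: compare the static depth (last dimension) of each pair of
# flattened state tensors, rather than relying on tf.contrib.framework.with_same_shape,
# whose result is discarded in the original snippet and which evidently let the
# 256-vs-512 depth mismatch pass.
def assert_same_depth(expected_state_flat, state_flat):
    for x, y in zip(expected_state_flat, state_flat):
        expected_depth = x.get_shape().as_list()[-1]
        depth = y.get_shape().as_list()[-1]
        if depth != expected_depth:
            raise ValueError(
                "Tensor %s has depth %s, which is incompatible with the expected depth %s"
                % (y.name, depth, expected_depth))
```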
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opennmt/layers/bridge.py`
Content:
```
1 """Define bridges: logic of passing the encoder state to the decoder."""
2
3 import abc
4 import six
5
6 import tensorflow as tf
7
8
9 def assert_state_is_compatible(expected_state, state):
10 """Asserts that states are compatible.
11
12 Args:
13 expected_state: The reference state.
14 state: The state that must be compatible with :obj:`expected_state`.
15
16 Raises:
17 ValueError: if the states are incompatible.
18 """
19 # Check structure compatibility.
20 tf.contrib.framework.nest.assert_same_structure(expected_state, state)
21
22 # Check shape compatibility.
23 expected_state_flat = tf.contrib.framework.nest.flatten(expected_state)
24 state_flat = tf.contrib.framework.nest.flatten(state)
25
26 for x, y in zip(expected_state_flat, state_flat):
27 if tf.contrib.framework.is_tensor(x):
28 tf.contrib.framework.with_same_shape(x, y)
29
30
31 @six.add_metaclass(abc.ABCMeta)
32 class Bridge(object):
33 """Base class for bridges."""
34
35 def __call__(self, encoder_state, decoder_zero_state):
36 """Returns the initial decoder state.
37
38 Args:
39 encoder_state: The encoder state.
40 decoder_zero_state: The default decoder state.
41
42 Returns:
43 The decoder initial state.
44 """
45 return self._build(encoder_state, decoder_zero_state)
46
47 @abc.abstractmethod
48 def _build(self, encoder_state, decoder_zero_state):
49 raise NotImplementedError()
50
51
52 class CopyBridge(Bridge):
53 """A bridge that passes the encoder state as is."""
54
55 def _build(self, encoder_state, decoder_zero_state):
56 assert_state_is_compatible(decoder_zero_state, encoder_state)
57 return encoder_state
58
59
60 class ZeroBridge(Bridge):
61 """A bridge that does not pass information from the encoder."""
62
63 def _build(self, encoder_state, decoder_zero_state):
64 # Simply return the default decoder state.
65 return decoder_zero_state
66
67
68 class DenseBridge(Bridge):
69 """A bridge that applies a parameterized linear transformation from the
70 encoder state to the decoder state size.
71 """
72
73 def __init__(self, activation=None):
74 """Initializes the bridge.
75
76 Args:
77 activation: Activation function (a callable).
78 Set it to ``None`` to maintain a linear activation.
79 """
80 self.activation = activation
81
82 def _build(self, encoder_state, decoder_zero_state):
83 # Flattened states.
84 encoder_state_flat = tf.contrib.framework.nest.flatten(encoder_state)
85 decoder_state_flat = tf.contrib.framework.nest.flatten(decoder_zero_state)
86
87 # View encoder state as a single tensor.
88 encoder_state_concat = tf.concat(encoder_state_flat, 1)
89
90 # Extract decoder state sizes.
91 decoder_state_size = []
92 for tensor in decoder_state_flat:
93 decoder_state_size.append(tensor.get_shape().as_list()[-1])
94
95 decoder_total_size = sum(decoder_state_size)
96
97 # Apply linear transformation.
98 transformed = tf.layers.dense(
99 encoder_state_concat,
100 decoder_total_size,
101 activation=self.activation)
102
103 # Split resulting tensor to match the decoder state size.
104 splitted = tf.split(transformed, decoder_state_size, axis=1)
105
106 # Pack as the origial decoder state.
107 return tf.contrib.framework.nest.pack_sequence_as(decoder_zero_state, splitted)
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opennmt/layers/bridge.py b/opennmt/layers/bridge.py
--- a/opennmt/layers/bridge.py
+++ b/opennmt/layers/bridge.py
@@ -25,7 +25,11 @@
for x, y in zip(expected_state_flat, state_flat):
if tf.contrib.framework.is_tensor(x):
- tf.contrib.framework.with_same_shape(x, y)
+ expected_depth = x.get_shape().as_list()[-1]
+ depth = y.get_shape().as_list()[-1]
+ if depth != expected_depth:
+ raise ValueError("Tensor %s in state has shape %s which is incompatible "
+ "with the target shape %s" % (y.name, y.shape, x.shape))
@six.add_metaclass(abc.ABCMeta)
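Editorial note on the fix above: the original `with_same_shape` call is effectively a no-op here, because its return value — a tensor that merely carries the shape assertion — is discarded and never evaluated, so graph construction succeeds and the mismatch only surfaces later inside the decoder's `MatMul`. The patch instead compares the static last dimensions and raises immediately. The snippet below is a minimal, framework-agnostic sketch of that check; NumPy arrays stand in for the flattened state tensors and all names are illustrative.
```python
# Minimal sketch of the static depth check added by the patch.
# NumPy arrays stand in for flattened encoder/decoder state tensors.
import numpy as np

def assert_same_depth(expected_flat, state_flat):
    for x, y in zip(expected_flat, state_flat):
        expected_depth = x.shape[-1]
        depth = y.shape[-1]
        if depth != expected_depth:
            raise ValueError(
                "state depth %d is incompatible with expected depth %d"
                % (depth, expected_depth))

decoder_zero_state = [np.zeros((8, 512))]  # decoder built with 512 units
encoder_state = [np.zeros((8, 256))]       # encoder built with 256 units
assert_same_depth(decoder_zero_state, encoder_state)  # raises ValueError up front
```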
|
{"golden_diff": "diff --git a/opennmt/layers/bridge.py b/opennmt/layers/bridge.py\n--- a/opennmt/layers/bridge.py\n+++ b/opennmt/layers/bridge.py\n@@ -25,7 +25,11 @@\n \n for x, y in zip(expected_state_flat, state_flat):\n if tf.contrib.framework.is_tensor(x):\n- tf.contrib.framework.with_same_shape(x, y)\n+ expected_depth = x.get_shape().as_list()[-1]\n+ depth = y.get_shape().as_list()[-1]\n+ if depth != expected_depth:\n+ raise ValueError(\"Tensor %s in state has shape %s which is incompatible \"\n+ \"with the target shape %s\" % (y.name, y.shape, x.shape))\n \n \n @six.add_metaclass(abc.ABCMeta)\n", "issue": "assert_state_is_compatible() cannot detect dimension difference between encoder_state and decoder_zero_state when encoder and decoder dimensions are not the same in NMTSmall model\nI just followed the instructions on the page [http://opennmt.net/OpenNMT-tf/quickstart.html](http://opennmt.net/OpenNMT-tf/quickstart.html) and played around a little bit with the NMTSmall model by setting a different `num_units` value to the `UnidirectionalRNNEncoder`, say `256`, which is different from the `512` for the `AttentionalRNNDecoder`. \r\n\r\nThis line\r\n\r\nhttps://github.com/OpenNMT/OpenNMT-tf/blob/ae0ada93a652c02793de3b481e20315716ea4f23/opennmt/layers/bridge.py#L56\r\n\r\nin the `CopyBridge` did not throw any error, even though the `encoder_state` and `decoder_zero_state` do not have the same dimensions, `256` vs `512`.\r\n\r\nIt probably natual for someone to think of using the `DenseBridge` when dimensions are set differently. However, the `CopyBridge` should be throwing some errors in such misusage cases here, instead of letting one to figure out that with some error message as follows\r\n\r\n`ValueError: Dimensions must be equal, but are 1280 and 1536 for 'seq2seq/parallel_0/seq2seq/decoder_1/decoder/while/BasicDecoderStep/decoder/attention_wrapper/attention_wrapper/multi_rnn_cell/cell_0/lstm_cell/MatMul' (op: 'MatMul') with input shapes: [?,1280], [1536,2048].`\r\n\r\nCan anyone please explain why the \r\nhttps://github.com/OpenNMT/OpenNMT-tf/blob/ae0ada93a652c02793de3b481e20315716ea4f23/opennmt/layers/bridge.py#L28\r\npassed without an issue?\r\n\r\nThanks!\n", "before_files": [{"content": "\"\"\"Define bridges: logic of passing the encoder state to the decoder.\"\"\"\n\nimport abc\nimport six\n\nimport tensorflow as tf\n\n\ndef assert_state_is_compatible(expected_state, state):\n \"\"\"Asserts that states are compatible.\n\n Args:\n expected_state: The reference state.\n state: The state that must be compatible with :obj:`expected_state`.\n\n Raises:\n ValueError: if the states are incompatible.\n \"\"\"\n # Check structure compatibility.\n tf.contrib.framework.nest.assert_same_structure(expected_state, state)\n\n # Check shape compatibility.\n expected_state_flat = tf.contrib.framework.nest.flatten(expected_state)\n state_flat = tf.contrib.framework.nest.flatten(state)\n\n for x, y in zip(expected_state_flat, state_flat):\n if tf.contrib.framework.is_tensor(x):\n tf.contrib.framework.with_same_shape(x, y)\n\n\[email protected]_metaclass(abc.ABCMeta)\nclass Bridge(object):\n \"\"\"Base class for bridges.\"\"\"\n\n def __call__(self, encoder_state, decoder_zero_state):\n \"\"\"Returns the initial decoder state.\n\n Args:\n encoder_state: The encoder state.\n decoder_zero_state: The default decoder state.\n\n Returns:\n The decoder initial state.\n \"\"\"\n return self._build(encoder_state, decoder_zero_state)\n\n @abc.abstractmethod\n def _build(self, encoder_state, 
decoder_zero_state):\n raise NotImplementedError()\n\n\nclass CopyBridge(Bridge):\n \"\"\"A bridge that passes the encoder state as is.\"\"\"\n\n def _build(self, encoder_state, decoder_zero_state):\n assert_state_is_compatible(decoder_zero_state, encoder_state)\n return encoder_state\n\n\nclass ZeroBridge(Bridge):\n \"\"\"A bridge that does not pass information from the encoder.\"\"\"\n\n def _build(self, encoder_state, decoder_zero_state):\n # Simply return the default decoder state.\n return decoder_zero_state\n\n\nclass DenseBridge(Bridge):\n \"\"\"A bridge that applies a parameterized linear transformation from the\n encoder state to the decoder state size.\n \"\"\"\n\n def __init__(self, activation=None):\n \"\"\"Initializes the bridge.\n\n Args:\n activation: Activation function (a callable).\n Set it to ``None`` to maintain a linear activation.\n \"\"\"\n self.activation = activation\n\n def _build(self, encoder_state, decoder_zero_state):\n # Flattened states.\n encoder_state_flat = tf.contrib.framework.nest.flatten(encoder_state)\n decoder_state_flat = tf.contrib.framework.nest.flatten(decoder_zero_state)\n\n # View encoder state as a single tensor.\n encoder_state_concat = tf.concat(encoder_state_flat, 1)\n\n # Extract decoder state sizes.\n decoder_state_size = []\n for tensor in decoder_state_flat:\n decoder_state_size.append(tensor.get_shape().as_list()[-1])\n\n decoder_total_size = sum(decoder_state_size)\n\n # Apply linear transformation.\n transformed = tf.layers.dense(\n encoder_state_concat,\n decoder_total_size,\n activation=self.activation)\n\n # Split resulting tensor to match the decoder state size.\n splitted = tf.split(transformed, decoder_state_size, axis=1)\n\n # Pack as the origial decoder state.\n return tf.contrib.framework.nest.pack_sequence_as(decoder_zero_state, splitted)\n", "path": "opennmt/layers/bridge.py"}], "after_files": [{"content": "\"\"\"Define bridges: logic of passing the encoder state to the decoder.\"\"\"\n\nimport abc\nimport six\n\nimport tensorflow as tf\n\n\ndef assert_state_is_compatible(expected_state, state):\n \"\"\"Asserts that states are compatible.\n\n Args:\n expected_state: The reference state.\n state: The state that must be compatible with :obj:`expected_state`.\n\n Raises:\n ValueError: if the states are incompatible.\n \"\"\"\n # Check structure compatibility.\n tf.contrib.framework.nest.assert_same_structure(expected_state, state)\n\n # Check shape compatibility.\n expected_state_flat = tf.contrib.framework.nest.flatten(expected_state)\n state_flat = tf.contrib.framework.nest.flatten(state)\n\n for x, y in zip(expected_state_flat, state_flat):\n if tf.contrib.framework.is_tensor(x):\n expected_depth = x.get_shape().as_list()[-1]\n depth = y.get_shape().as_list()[-1]\n if depth != expected_depth:\n raise ValueError(\"Tensor %s in state has shape %s which is incompatible \"\n \"with the target shape %s\" % (y.name, y.shape, x.shape))\n\n\[email protected]_metaclass(abc.ABCMeta)\nclass Bridge(object):\n \"\"\"Base class for bridges.\"\"\"\n\n def __call__(self, encoder_state, decoder_zero_state):\n \"\"\"Returns the initial decoder state.\n\n Args:\n encoder_state: The encoder state.\n decoder_zero_state: The default decoder state.\n\n Returns:\n The decoder initial state.\n \"\"\"\n return self._build(encoder_state, decoder_zero_state)\n\n @abc.abstractmethod\n def _build(self, encoder_state, decoder_zero_state):\n raise NotImplementedError()\n\n\nclass CopyBridge(Bridge):\n \"\"\"A bridge that passes the encoder state as 
is.\"\"\"\n\n def _build(self, encoder_state, decoder_zero_state):\n assert_state_is_compatible(decoder_zero_state, encoder_state)\n return encoder_state\n\n\nclass ZeroBridge(Bridge):\n \"\"\"A bridge that does not pass information from the encoder.\"\"\"\n\n def _build(self, encoder_state, decoder_zero_state):\n # Simply return the default decoder state.\n return decoder_zero_state\n\n\nclass DenseBridge(Bridge):\n \"\"\"A bridge that applies a parameterized linear transformation from the\n encoder state to the decoder state size.\n \"\"\"\n\n def __init__(self, activation=None):\n \"\"\"Initializes the bridge.\n\n Args:\n activation: Activation function (a callable).\n Set it to ``None`` to maintain a linear activation.\n \"\"\"\n self.activation = activation\n\n def _build(self, encoder_state, decoder_zero_state):\n # Flattened states.\n encoder_state_flat = tf.contrib.framework.nest.flatten(encoder_state)\n decoder_state_flat = tf.contrib.framework.nest.flatten(decoder_zero_state)\n\n # View encoder state as a single tensor.\n encoder_state_concat = tf.concat(encoder_state_flat, 1)\n\n # Extract decoder state sizes.\n decoder_state_size = []\n for tensor in decoder_state_flat:\n decoder_state_size.append(tensor.get_shape().as_list()[-1])\n\n decoder_total_size = sum(decoder_state_size)\n\n # Apply linear transformation.\n transformed = tf.layers.dense(\n encoder_state_concat,\n decoder_total_size,\n activation=self.activation)\n\n # Split resulting tensor to match the decoder state size.\n splitted = tf.split(transformed, decoder_state_size, axis=1)\n\n # Pack as the origial decoder state.\n return tf.contrib.framework.nest.pack_sequence_as(decoder_zero_state, splitted)\n", "path": "opennmt/layers/bridge.py"}]}
| 1,647 | 181 |
gh_patches_debug_17593
|
rasdani/github-patches
|
git_diff
|
OpenMined__PySyft-5122
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add DCGAN example Duet Notebooks
## Description
Add two notebooks which reflect the DCGAN example split into DO (Data Owner) and DS (Data Scientist):
https://github.com/pytorch/examples/blob/master/dcgan/
## Definition of Done
The partially runnable DCGAN example notebooks should be in the examples/duet/dcgan folder and a README.md should be added in the parent examples/duet directory with a link to the original example and our notebook.
Add Super Resolution Example Duet Notebooks
## Description
Add two notebooks which reflect the Super Resolution example split into DO (Data Owner) and DS (Data Scientist):
https://github.com/pytorch/examples/blob/master/super_resolution/
## Definition of Done
The partially runnable Super Resolution example notebooks should be in the examples/duet/super_resolution folder and a README.md should be added in the parent examples/duet directory with a link to the original example and our notebook.
Add DCGAN example Duet Notebooks
## Description
Add two notebooks which reflect the DCGAN example split into DO (Data Owner) and DS (Data Scientist):
https://github.com/pytorch/examples/blob/master/dcgan/
## Definition of Done
The partially runnable DCGAN example notebooks should be in the examples/duet/dcgan folder and a README.md should be added in the parent examples/duet directory with a link to the original example and our notebook.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/syft/lib/torch/__init__.py`
Content:
```
1 # stdlib
2 from typing import Any
3 from typing import Dict
4 from typing import Union
5
6 # third party
7 from packaging import version
8 import torch
9
10 # syft relative
11 from . import device # noqa: 401
12 from . import parameter # noqa: 401
13 from . import uppercase_tensor # noqa: 401
14 from ...ast.globals import Globals
15 from ...logger import critical
16 from .allowlist import allowlist
17
18 TORCH_VERSION = version.parse(torch.__version__.split("+")[0])
19
20
21 def get_return_type(support_dict: Union[str, Dict[str, str]]) -> str:
22 if isinstance(support_dict, str):
23 return support_dict
24 else:
25 return support_dict["return_type"]
26
27
28 def version_supported(support_dict: Union[str, Dict[str, str]]) -> bool:
29 if isinstance(support_dict, str):
30 return True
31 else:
32 # if we are on either side of the min or max versions we don't support this op
33 if "min_version" in support_dict and TORCH_VERSION < version.parse(
34 support_dict["min_version"]
35 ):
36 return False
37 if "max_version" in support_dict and TORCH_VERSION > version.parse(
38 support_dict["max_version"]
39 ):
40 return False
41 return True
42
43
44 def create_torch_ast(client: Any = None) -> Globals:
45 ast = Globals(client)
46
47 # most methods work in all versions and have a single return type
48 # for the more complicated ones we pass a dict with keys like return_type and
49 # min_version
50 for method, return_type_name_or_dict in allowlist.items():
51 if version_supported(support_dict=return_type_name_or_dict):
52 return_type = get_return_type(support_dict=return_type_name_or_dict)
53 if return_type == "unknown":
54 # this allows us to import them for testing
55 continue
56 ast.add_path(
57 path=method, framework_reference=torch, return_type_name=return_type
58 )
59 # add all the torch.nn.Parameter hooks
60 if method.startswith("torch.Tensor."):
61 method = method.replace("torch.Tensor.", "torch.nn.Parameter.")
62 return_type = return_type.replace("torch.Tensor", "torch.nn.Parameter")
63 ast.add_path(
64 path=method, framework_reference=torch, return_type_name=return_type
65 )
66 else:
67 critical(f"Skipping {method} not supported in {TORCH_VERSION}")
68
69 for klass in ast.classes:
70 klass.create_pointer_class()
71 klass.create_send_method()
72 klass.create_storable_object_attr_convenience_methods()
73 return ast
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/syft/lib/torch/__init__.py b/src/syft/lib/torch/__init__.py
--- a/src/syft/lib/torch/__init__.py
+++ b/src/syft/lib/torch/__init__.py
@@ -12,7 +12,7 @@
from . import parameter # noqa: 401
from . import uppercase_tensor # noqa: 401
from ...ast.globals import Globals
-from ...logger import critical
+from ...logger import info
from .allowlist import allowlist
TORCH_VERSION = version.parse(torch.__version__.split("+")[0])
@@ -64,7 +64,7 @@
path=method, framework_reference=torch, return_type_name=return_type
)
else:
- critical(f"Skipping {method} not supported in {TORCH_VERSION}")
+ info(f"Skipping {method} not supported in {TORCH_VERSION}")
for klass in ast.classes:
klass.create_pointer_class()
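Editorial note on the fix above: the notebooks requested by the issue live under `examples/duet/` and are not shown here; the code change only downgrades the "Skipping ... not supported" message from `critical` to `info`, so building the torch AST no longer floods the log with critical-level messages whenever an allowlisted op is missing from the installed torch version. The sketch below illustrates the version gating that produces that message; the version strings are made up for the example.
```python
# Illustration of the min_version gating in version_supported(); versions are made up.
from packaging import version

TORCH_VERSION = version.parse("1.6.0")
support = {"return_type": "torch.Tensor", "min_version": "1.7.0"}

supported = not (
    "min_version" in support and TORCH_VERSION < version.parse(support["min_version"])
)
if not supported:
    # after the patch this line is emitted at info level instead of critical
    print(f"Skipping torch.Tensor.some_op not supported in {TORCH_VERSION}")
```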
|
{"golden_diff": "diff --git a/src/syft/lib/torch/__init__.py b/src/syft/lib/torch/__init__.py\n--- a/src/syft/lib/torch/__init__.py\n+++ b/src/syft/lib/torch/__init__.py\n@@ -12,7 +12,7 @@\n from . import parameter # noqa: 401\n from . import uppercase_tensor # noqa: 401\n from ...ast.globals import Globals\n-from ...logger import critical\n+from ...logger import info\n from .allowlist import allowlist\n \n TORCH_VERSION = version.parse(torch.__version__.split(\"+\")[0])\n@@ -64,7 +64,7 @@\n path=method, framework_reference=torch, return_type_name=return_type\n )\n else:\n- critical(f\"Skipping {method} not supported in {TORCH_VERSION}\")\n+ info(f\"Skipping {method} not supported in {TORCH_VERSION}\")\n \n for klass in ast.classes:\n klass.create_pointer_class()\n", "issue": "Add DCGAN example Duet Notebooks\n## Description\r\nAdd two notebook's which reflect the DCGAN example split into DO (Data Owner) and DS (Data Scientist):\r\nhttps://github.com/pytorch/examples/blob/master/dcgan/\r\n\r\n## Definition of Done\r\nThe partially runnable DCGAN example notebooks should be in the examples/duet/dcgan folder and a README.md should be added in the parent examples/duet directory with a link to the original example and our notebook.\nAdd Super Resolution Example Duet Notebooks\n## Description\r\nAdd two notebook's which reflect the Super Resolution example split into DO (Data Owner) and DS (Data Scientist):\r\nhttps://github.com/pytorch/examples/blob/master/super_resolution/\r\n\r\n## Definition of Done\r\nThe partially runnable Super Resolution example notebooks should be in the examples/duet/super_resolution folder and a README.md should be added in the parent examples/duet directory with a link to the original example and our notebook.\nAdd DCGAN example Duet Notebooks\n## Description\r\nAdd two notebook's which reflect the DCGAN example split into DO (Data Owner) and DS (Data Scientist):\r\nhttps://github.com/pytorch/examples/blob/master/dcgan/\r\n\r\n## Definition of Done\r\nThe partially runnable DCGAN example notebooks should be in the examples/duet/dcgan folder and a README.md should be added in the parent examples/duet directory with a link to the original example and our notebook.\n", "before_files": [{"content": "# stdlib\nfrom typing import Any\nfrom typing import Dict\nfrom typing import Union\n\n# third party\nfrom packaging import version\nimport torch\n\n# syft relative\nfrom . import device # noqa: 401\nfrom . import parameter # noqa: 401\nfrom . 
import uppercase_tensor # noqa: 401\nfrom ...ast.globals import Globals\nfrom ...logger import critical\nfrom .allowlist import allowlist\n\nTORCH_VERSION = version.parse(torch.__version__.split(\"+\")[0])\n\n\ndef get_return_type(support_dict: Union[str, Dict[str, str]]) -> str:\n if isinstance(support_dict, str):\n return support_dict\n else:\n return support_dict[\"return_type\"]\n\n\ndef version_supported(support_dict: Union[str, Dict[str, str]]) -> bool:\n if isinstance(support_dict, str):\n return True\n else:\n # if we are on either side of the min or max versions we don't support this op\n if \"min_version\" in support_dict and TORCH_VERSION < version.parse(\n support_dict[\"min_version\"]\n ):\n return False\n if \"max_version\" in support_dict and TORCH_VERSION > version.parse(\n support_dict[\"max_version\"]\n ):\n return False\n return True\n\n\ndef create_torch_ast(client: Any = None) -> Globals:\n ast = Globals(client)\n\n # most methods work in all versions and have a single return type\n # for the more complicated ones we pass a dict with keys like return_type and\n # min_version\n for method, return_type_name_or_dict in allowlist.items():\n if version_supported(support_dict=return_type_name_or_dict):\n return_type = get_return_type(support_dict=return_type_name_or_dict)\n if return_type == \"unknown\":\n # this allows us to import them for testing\n continue\n ast.add_path(\n path=method, framework_reference=torch, return_type_name=return_type\n )\n # add all the torch.nn.Parameter hooks\n if method.startswith(\"torch.Tensor.\"):\n method = method.replace(\"torch.Tensor.\", \"torch.nn.Parameter.\")\n return_type = return_type.replace(\"torch.Tensor\", \"torch.nn.Parameter\")\n ast.add_path(\n path=method, framework_reference=torch, return_type_name=return_type\n )\n else:\n critical(f\"Skipping {method} not supported in {TORCH_VERSION}\")\n\n for klass in ast.classes:\n klass.create_pointer_class()\n klass.create_send_method()\n klass.create_storable_object_attr_convenience_methods()\n return ast\n", "path": "src/syft/lib/torch/__init__.py"}], "after_files": [{"content": "# stdlib\nfrom typing import Any\nfrom typing import Dict\nfrom typing import Union\n\n# third party\nfrom packaging import version\nimport torch\n\n# syft relative\nfrom . import device # noqa: 401\nfrom . import parameter # noqa: 401\nfrom . 
import uppercase_tensor # noqa: 401\nfrom ...ast.globals import Globals\nfrom ...logger import info\nfrom .allowlist import allowlist\n\nTORCH_VERSION = version.parse(torch.__version__.split(\"+\")[0])\n\n\ndef get_return_type(support_dict: Union[str, Dict[str, str]]) -> str:\n if isinstance(support_dict, str):\n return support_dict\n else:\n return support_dict[\"return_type\"]\n\n\ndef version_supported(support_dict: Union[str, Dict[str, str]]) -> bool:\n if isinstance(support_dict, str):\n return True\n else:\n # if we are on either side of the min or max versions we don't support this op\n if \"min_version\" in support_dict and TORCH_VERSION < version.parse(\n support_dict[\"min_version\"]\n ):\n return False\n if \"max_version\" in support_dict and TORCH_VERSION > version.parse(\n support_dict[\"max_version\"]\n ):\n return False\n return True\n\n\ndef create_torch_ast(client: Any = None) -> Globals:\n ast = Globals(client)\n\n # most methods work in all versions and have a single return type\n # for the more complicated ones we pass a dict with keys like return_type and\n # min_version\n for method, return_type_name_or_dict in allowlist.items():\n if version_supported(support_dict=return_type_name_or_dict):\n return_type = get_return_type(support_dict=return_type_name_or_dict)\n if return_type == \"unknown\":\n # this allows us to import them for testing\n continue\n ast.add_path(\n path=method, framework_reference=torch, return_type_name=return_type\n )\n # add all the torch.nn.Parameter hooks\n if method.startswith(\"torch.Tensor.\"):\n method = method.replace(\"torch.Tensor.\", \"torch.nn.Parameter.\")\n return_type = return_type.replace(\"torch.Tensor\", \"torch.nn.Parameter\")\n ast.add_path(\n path=method, framework_reference=torch, return_type_name=return_type\n )\n else:\n info(f\"Skipping {method} not supported in {TORCH_VERSION}\")\n\n for klass in ast.classes:\n klass.create_pointer_class()\n klass.create_send_method()\n klass.create_storable_object_attr_convenience_methods()\n return ast\n", "path": "src/syft/lib/torch/__init__.py"}]}
| 1,264 | 220 |
gh_patches_debug_11307
|
rasdani/github-patches
|
git_diff
|
ivy-llc__ivy-15802
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
asin
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/paddle/tensor/math.py`
Content:
```
1 # global
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes
4 from ivy.functional.frontends.paddle.func_wrapper import (
5 to_ivy_arrays_and_back,
6 )
7
8
9 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
10 @to_ivy_arrays_and_back
11 def sin(x, name=None):
12 return ivy.sin(x)
13
14
15 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
16 @to_ivy_arrays_and_back
17 def cos(x, name=None):
18 return ivy.cos(x)
19
20
21 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
22 @to_ivy_arrays_and_back
23 def acos(x, name=None):
24 return ivy.acos(x)
25
26
27 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
28 @to_ivy_arrays_and_back
29 def cosh(x, name=None):
30 return ivy.cosh(x)
31
32
33 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
34 @to_ivy_arrays_and_back
35 def tanh(x, name=None):
36 return ivy.tanh(x)
37
38
39 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
40 @to_ivy_arrays_and_back
41 def acosh(x, name=None):
42 return ivy.acosh(x)
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ivy/functional/frontends/paddle/tensor/math.py b/ivy/functional/frontends/paddle/tensor/math.py
--- a/ivy/functional/frontends/paddle/tensor/math.py
+++ b/ivy/functional/frontends/paddle/tensor/math.py
@@ -1,6 +1,6 @@
# global
import ivy
-from ivy.func_wrapper import with_unsupported_dtypes
+from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
from ivy.functional.frontends.paddle.func_wrapper import (
to_ivy_arrays_and_back,
)
@@ -40,3 +40,9 @@
@to_ivy_arrays_and_back
def acosh(x, name=None):
return ivy.acosh(x)
+
+
+@with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
+@to_ivy_arrays_and_back
+def asin(x, name=None):
+ return ivy.asin(x)
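Editorial note on the fix above: the new frontend function simply forwards to `ivy.asin` and is restricted to `float32`/`float64` via `with_supported_dtypes`. For reference, the values it should reproduce can be checked with plain NumPy, independent of any ivy backend setup:
```python
# Reference values for asin, computed with plain NumPy.
import numpy as np

print(np.arcsin([0.0, 0.5, 1.0]))  # [0.         0.52359878 1.57079633], i.e. 0, pi/6, pi/2
```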
|
{"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/math.py b/ivy/functional/frontends/paddle/tensor/math.py\n--- a/ivy/functional/frontends/paddle/tensor/math.py\n+++ b/ivy/functional/frontends/paddle/tensor/math.py\n@@ -1,6 +1,6 @@\n # global\n import ivy\n-from ivy.func_wrapper import with_unsupported_dtypes\n+from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\n from ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n )\n@@ -40,3 +40,9 @@\n @to_ivy_arrays_and_back\n def acosh(x, name=None):\n return ivy.acosh(x)\n+\n+\n+@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n+@to_ivy_arrays_and_back\n+def asin(x, name=None):\n+ return ivy.asin(x)\n", "issue": "asin\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sin(x, name=None):\n return ivy.sin(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cos(x, name=None):\n return ivy.cos(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acos(x, name=None):\n return ivy.acos(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cosh(x, name=None):\n return ivy.cosh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef tanh(x, name=None):\n return ivy.tanh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acosh(x, name=None):\n return ivy.acosh(x)\n", "path": "ivy/functional/frontends/paddle/tensor/math.py"}], "after_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sin(x, name=None):\n return ivy.sin(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cos(x, name=None):\n return ivy.cos(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acos(x, name=None):\n return ivy.acos(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cosh(x, name=None):\n return ivy.cosh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef tanh(x, name=None):\n return ivy.tanh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acosh(x, name=None):\n return ivy.acosh(x)\n\n\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef asin(x, name=None):\n return ivy.asin(x)\n", "path": "ivy/functional/frontends/paddle/tensor/math.py"}]}
| 721 | 223 |
gh_patches_debug_26207
|
rasdani/github-patches
|
git_diff
|
liqd__a4-opin-663
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Refactor idea to use a4 comments and ratings queryset
https://github.com/liqd/adhocracy4/pull/20
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `euth/ideas/models.py`
Content:
```
1 from autoslug import AutoSlugField
2 from ckeditor.fields import RichTextField
3 from django.contrib.contenttypes.fields import GenericRelation
4 from django.db import models
5
6 from adhocracy4 import transforms
7 from adhocracy4.comments import models as comment_models
8 from adhocracy4.images import fields
9 from adhocracy4.modules import models as module_models
10 from adhocracy4.ratings import models as rating_models
11
12
13 class IdeaQuerySet(models.QuerySet):
14
15 def _rate_value_condition(self, value):
16 return models.Case(
17 models.When(ratings__value=value, then=models.F('ratings__id')),
18 output_field=models.IntegerField()
19 )
20
21 def annotate_positive_rating_count(self):
22 return self.annotate(
23 positive_rating_count=models.Count(
24 self._rate_value_condition(1),
25 distinct=True # needed to combine with other count annotations
26 )
27 )
28
29 def annotate_negative_rating_count(self):
30 return self.annotate(
31 negative_rating_count=models.Count(
32 self._rate_value_condition(-1),
33 distinct=True # needed to combine with other count annotations
34 )
35 )
36
37 def annotate_comment_count(self):
38 return self.annotate(
39 comment_count=models.Count(
40 'comments',
41 distinct=True # needed to combine with other count annotations
42 )
43 )
44
45
46 class Idea(module_models.Item):
47 slug = AutoSlugField(populate_from='name', unique=True)
48 name = models.CharField(max_length=120)
49 description = RichTextField()
50 image = fields.ConfiguredImageField(
51 'idea_image',
52 upload_to='ideas/images',
53 blank=True,
54 )
55 ratings = GenericRelation(rating_models.Rating,
56 related_query_name='idea',
57 object_id_field='object_pk')
58 comments = GenericRelation(comment_models.Comment,
59 related_query_name='idea',
60 object_id_field='object_pk')
61
62 objects = IdeaQuerySet.as_manager()
63
64 def __str__(self):
65 return self.name
66
67 def save(self, *args, **kwargs):
68 self.description = transforms.clean_html_field(
69 self.description)
70 super(Idea, self).save(*args, **kwargs)
71
72 def get_absolute_url(self):
73 from django.core.urlresolvers import reverse
74 return reverse('idea-detail', args=[str(self.slug)])
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/euth/ideas/models.py b/euth/ideas/models.py
--- a/euth/ideas/models.py
+++ b/euth/ideas/models.py
@@ -6,41 +6,13 @@
from adhocracy4 import transforms
from adhocracy4.comments import models as comment_models
from adhocracy4.images import fields
+from adhocracy4.models import query
from adhocracy4.modules import models as module_models
from adhocracy4.ratings import models as rating_models
-class IdeaQuerySet(models.QuerySet):
-
- def _rate_value_condition(self, value):
- return models.Case(
- models.When(ratings__value=value, then=models.F('ratings__id')),
- output_field=models.IntegerField()
- )
-
- def annotate_positive_rating_count(self):
- return self.annotate(
- positive_rating_count=models.Count(
- self._rate_value_condition(1),
- distinct=True # needed to combine with other count annotations
- )
- )
-
- def annotate_negative_rating_count(self):
- return self.annotate(
- negative_rating_count=models.Count(
- self._rate_value_condition(-1),
- distinct=True # needed to combine with other count annotations
- )
- )
-
- def annotate_comment_count(self):
- return self.annotate(
- comment_count=models.Count(
- 'comments',
- distinct=True # needed to combine with other count annotations
- )
- )
+class IdeaQuerySet(query.RateableQuerySet, query.CommentableQuerySet):
+ pass
class Idea(module_models.Item):
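Editorial note on the fix above: the hand-rolled annotation methods are replaced by adhocracy4's queryset mixins. Assuming those mixins expose the same `annotate_*` methods the removed class defined — which is what makes this a drop-in change per the linked PR — calling code keeps working unchanged. A hypothetical usage sketch (requires the models above inside a configured Django project):
```python
# Hypothetical usage; assumes the a4 mixins provide the same annotate_* methods
# that the removed local queryset defined.
ideas = (
    Idea.objects
    .annotate_positive_rating_count()
    .annotate_negative_rating_count()
    .annotate_comment_count()
)
for idea in ideas:
    print(idea.name, idea.positive_rating_count, idea.comment_count)
```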
|
{"golden_diff": "diff --git a/euth/ideas/models.py b/euth/ideas/models.py\n--- a/euth/ideas/models.py\n+++ b/euth/ideas/models.py\n@@ -6,41 +6,13 @@\n from adhocracy4 import transforms\n from adhocracy4.comments import models as comment_models\n from adhocracy4.images import fields\n+from adhocracy4.models import query\n from adhocracy4.modules import models as module_models\n from adhocracy4.ratings import models as rating_models\n \n \n-class IdeaQuerySet(models.QuerySet):\n-\n- def _rate_value_condition(self, value):\n- return models.Case(\n- models.When(ratings__value=value, then=models.F('ratings__id')),\n- output_field=models.IntegerField()\n- )\n-\n- def annotate_positive_rating_count(self):\n- return self.annotate(\n- positive_rating_count=models.Count(\n- self._rate_value_condition(1),\n- distinct=True # needed to combine with other count annotations\n- )\n- )\n-\n- def annotate_negative_rating_count(self):\n- return self.annotate(\n- negative_rating_count=models.Count(\n- self._rate_value_condition(-1),\n- distinct=True # needed to combine with other count annotations\n- )\n- )\n-\n- def annotate_comment_count(self):\n- return self.annotate(\n- comment_count=models.Count(\n- 'comments',\n- distinct=True # needed to combine with other count annotations\n- )\n- )\n+class IdeaQuerySet(query.RateableQuerySet, query.CommentableQuerySet):\n+ pass\n \n \n class Idea(module_models.Item):\n", "issue": "Refactor idea to use a4 comments and ratings queryset\nhttps://github.com/liqd/adhocracy4/pull/20\n", "before_files": [{"content": "from autoslug import AutoSlugField\nfrom ckeditor.fields import RichTextField\nfrom django.contrib.contenttypes.fields import GenericRelation\nfrom django.db import models\n\nfrom adhocracy4 import transforms\nfrom adhocracy4.comments import models as comment_models\nfrom adhocracy4.images import fields\nfrom adhocracy4.modules import models as module_models\nfrom adhocracy4.ratings import models as rating_models\n\n\nclass IdeaQuerySet(models.QuerySet):\n\n def _rate_value_condition(self, value):\n return models.Case(\n models.When(ratings__value=value, then=models.F('ratings__id')),\n output_field=models.IntegerField()\n )\n\n def annotate_positive_rating_count(self):\n return self.annotate(\n positive_rating_count=models.Count(\n self._rate_value_condition(1),\n distinct=True # needed to combine with other count annotations\n )\n )\n\n def annotate_negative_rating_count(self):\n return self.annotate(\n negative_rating_count=models.Count(\n self._rate_value_condition(-1),\n distinct=True # needed to combine with other count annotations\n )\n )\n\n def annotate_comment_count(self):\n return self.annotate(\n comment_count=models.Count(\n 'comments',\n distinct=True # needed to combine with other count annotations\n )\n )\n\n\nclass Idea(module_models.Item):\n slug = AutoSlugField(populate_from='name', unique=True)\n name = models.CharField(max_length=120)\n description = RichTextField()\n image = fields.ConfiguredImageField(\n 'idea_image',\n upload_to='ideas/images',\n blank=True,\n )\n ratings = GenericRelation(rating_models.Rating,\n related_query_name='idea',\n object_id_field='object_pk')\n comments = GenericRelation(comment_models.Comment,\n related_query_name='idea',\n object_id_field='object_pk')\n\n objects = IdeaQuerySet.as_manager()\n\n def __str__(self):\n return self.name\n\n def save(self, *args, **kwargs):\n self.description = transforms.clean_html_field(\n self.description)\n super(Idea, self).save(*args, **kwargs)\n\n def 
get_absolute_url(self):\n from django.core.urlresolvers import reverse\n return reverse('idea-detail', args=[str(self.slug)])\n", "path": "euth/ideas/models.py"}], "after_files": [{"content": "from autoslug import AutoSlugField\nfrom ckeditor.fields import RichTextField\nfrom django.contrib.contenttypes.fields import GenericRelation\nfrom django.db import models\n\nfrom adhocracy4 import transforms\nfrom adhocracy4.comments import models as comment_models\nfrom adhocracy4.images import fields\nfrom adhocracy4.models import query\nfrom adhocracy4.modules import models as module_models\nfrom adhocracy4.ratings import models as rating_models\n\n\nclass IdeaQuerySet(query.RateableQuerySet, query.CommentableQuerySet):\n pass\n\n\nclass Idea(module_models.Item):\n slug = AutoSlugField(populate_from='name', unique=True)\n name = models.CharField(max_length=120)\n description = RichTextField()\n image = fields.ConfiguredImageField(\n 'idea_image',\n upload_to='ideas/images',\n blank=True,\n )\n ratings = GenericRelation(rating_models.Rating,\n related_query_name='idea',\n object_id_field='object_pk')\n comments = GenericRelation(comment_models.Comment,\n related_query_name='idea',\n object_id_field='object_pk')\n\n objects = IdeaQuerySet.as_manager()\n\n def __str__(self):\n return self.name\n\n def save(self, *args, **kwargs):\n self.description = transforms.clean_html_field(\n self.description)\n super(Idea, self).save(*args, **kwargs)\n\n def get_absolute_url(self):\n from django.core.urlresolvers import reverse\n return reverse('idea-detail', args=[str(self.slug)])\n", "path": "euth/ideas/models.py"}]}
| 913 | 352 |
gh_patches_debug_13963
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-254
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Modify stale files warning to ignore .htaccess, .gitignore and .git
I am not sure if this is the same as issue #167 or not, or if there is a better workaround than the setup I am using.
I have the mkdocs project in one folder, and I am building to a sibling folder ( `site_dir: ../site` )
My site dir contains its own git repo, and .htaccess and .gitignore, which I want to keep there, but I get that stale warning.
This causes two issues:
1. Someone may actually run this clean command and delete files I want to keep
2. I cannot run `--clean`
Perhaps the behavior of that warning (and of `--clean`) should be changed to accommodate such a scenario?
If it cannot be solved automatically, then perhaps by a directive in the config? `build_ignore: [file, folder]`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/utils.py`
Content:
```
1 # coding: utf-8
2
3 """
4 Standalone file utils.
5
6 Nothing in this module should have an knowledge of config or the layout
7 and structure of the site and pages in the site.
8 """
9
10 import os
11 import shutil
12
13 from mkdocs.compat import urlparse
14
15
16 def copy_file(source_path, output_path):
17 """
18 Copy source_path to output_path, making sure any parent directories exist.
19 """
20 output_dir = os.path.dirname(output_path)
21 if not os.path.exists(output_dir):
22 os.makedirs(output_dir)
23 shutil.copy(source_path, output_path)
24
25
26 def write_file(content, output_path):
27 """
28 Write content to output_path, making sure any parent directories exist.
29 """
30 output_dir = os.path.dirname(output_path)
31 if not os.path.exists(output_dir):
32 os.makedirs(output_dir)
33 open(output_path, 'wb').write(content)
34
35
36 def clean_directory(directory):
37 """
38 Remove the content of a directory recursively but not the directory itself.
39 """
40 if os.path.exists(directory):
41 for entry in os.listdir(directory):
42 path = os.path.join(directory, entry)
43 if os.path.isdir(path):
44 shutil.rmtree(path, True)
45 else:
46 os.unlink(path)
47
48
49 def copy_media_files(from_dir, to_dir):
50 """
51 Recursively copy all files except markdown and HTML into another directory.
52 """
53 for (source_dir, dirnames, filenames) in os.walk(from_dir):
54 relative_path = os.path.relpath(source_dir, from_dir)
55 output_dir = os.path.normpath(os.path.join(to_dir, relative_path))
56
57 for filename in filenames:
58 if not is_markdown_file(filename) and not is_html_file(filename):
59 source_path = os.path.join(source_dir, filename)
60 output_path = os.path.join(output_dir, filename)
61 copy_file(source_path, output_path)
62
63
64 def get_html_path(path):
65 """
66 Map a source file path to an output html path.
67
68 Paths like 'index.md' will be converted to 'index.html'
69 Paths like 'about.md' will be converted to 'about/index.html'
70 Paths like 'api-guide/core.md' will be converted to 'api-guide/core/index.html'
71 """
72 path = os.path.splitext(path)[0]
73 if os.path.basename(path) == 'index':
74 return path + '.html'
75 return "/".join((path, 'index.html'))
76
77
78 def get_url_path(path, use_directory_urls=True):
79 """
80 Map a source file path to an output html path.
81
82 Paths like 'index.md' will be converted to '/'
83 Paths like 'about.md' will be converted to '/about/'
84 Paths like 'api-guide/core.md' will be converted to '/api-guide/core/'
85
86 If `use_directory_urls` is `False`, returned URLs will include the a trailing
87 `index.html` rather than just returning the directory path.
88 """
89 path = get_html_path(path)
90 url = '/' + path.replace(os.path.sep, '/')
91 if use_directory_urls:
92 return url[:-len('index.html')]
93 return url
94
95
96 def is_homepage(path):
97 return os.path.splitext(path)[0] == 'index'
98
99
100 def is_markdown_file(path):
101 """
102 Return True if the given file path is a Markdown file.
103
104 http://superuser.com/questions/249436/file-extension-for-markdown-files
105 """
106 ext = os.path.splitext(path)[1].lower()
107 return ext in [
108 '.markdown',
109 '.mdown',
110 '.mkdn',
111 '.mkd',
112 '.md',
113 ]
114
115
116 def is_css_file(path):
117 """
118 Return True if the given file path is a CSS file.
119 """
120 ext = os.path.splitext(path)[1].lower()
121 return ext in [
122 '.css',
123 ]
124
125
126 def is_javascript_file(path):
127 """
128 Return True if the given file path is a Javascript file.
129 """
130 ext = os.path.splitext(path)[1].lower()
131 return ext in [
132 '.js',
133 '.javascript'
134 ]
135
136
137 def is_html_file(path):
138 """
139 Return True if the given file path is an HTML file.
140 """
141 ext = os.path.splitext(path)[1].lower()
142 return ext in [
143 '.html',
144 '.htm',
145 ]
146
147
148 def create_media_urls(nav, url_list):
149 """
150 Return a list of URLs that have been processed correctly for inclusion in a page.
151 """
152 final_urls = []
153 for url in url_list:
154 # Allow links to fully qualified URL's
155 parsed = urlparse(url)
156 if parsed.netloc:
157 final_urls.append(url)
158 else:
159 relative_url = '%s/%s' % (nav.url_context.make_relative('/'), url)
160 final_urls.append(relative_url)
161 return final_urls
162
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mkdocs/utils.py b/mkdocs/utils.py
--- a/mkdocs/utils.py
+++ b/mkdocs/utils.py
@@ -54,6 +54,13 @@
relative_path = os.path.relpath(source_dir, from_dir)
output_dir = os.path.normpath(os.path.join(to_dir, relative_path))
+ # Filter filenames starting with a '.'
+ filenames = [f for f in filenames if not f.startswith('.')]
+
+ # Filter the dirnames that start with a '.' and update the list in
+ # place to prevent us walking these.
+ dirnames[:] = [d for d in dirnames if not d.startswith('.')]
+
for filename in filenames:
if not is_markdown_file(filename) and not is_html_file(filename):
source_path = os.path.join(source_dir, filename)
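Editorial note on the fix above: the key detail is the slice assignment `dirnames[:] = ...`, which mutates the list that `os.walk` holds internally, so hidden directories such as `.git` are never descended into at all; rebinding with `dirnames = ...` would not prune the walk. A small standalone demonstration, with an illustrative directory name:
```python
# Standalone demonstration of the pruning idiom used in the patch; "site" is illustrative.
import os

for dirpath, dirnames, filenames in os.walk("site"):
    dirnames[:] = [d for d in dirnames if not d.startswith(".")]   # skips .git entirely
    filenames = [f for f in filenames if not f.startswith(".")]    # ignores .htaccess, .gitignore
    print(dirpath, filenames)
```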
|
{"golden_diff": "diff --git a/mkdocs/utils.py b/mkdocs/utils.py\n--- a/mkdocs/utils.py\n+++ b/mkdocs/utils.py\n@@ -54,6 +54,13 @@\n relative_path = os.path.relpath(source_dir, from_dir)\n output_dir = os.path.normpath(os.path.join(to_dir, relative_path))\n \n+ # Filter filenames starting with a '.'\n+ filenames = [f for f in filenames if not f.startswith('.')]\n+\n+ # Filter the dirnames that start with a '.' and update the list in\n+ # place to prevent us walking these.\n+ dirnames[:] = [d for d in dirnames if not d.startswith('.')]\n+\n for filename in filenames:\n if not is_markdown_file(filename) and not is_html_file(filename):\n source_path = os.path.join(source_dir, filename)\n", "issue": "Modify stale files warning to ignore .htaccess, .gitignore and .git\nI am not sure if this is the same as issue #167 or not, or if there a better workaround than the setup I am using.\n\nI have the mkdocs project in one folder, and I am building to a sibling folder ( `site_dir: ../site` )\nMy site dir contains its own git repo, and .htaccess and .gitignore, which I want to keep there, but I get that stale warning.\n\nThis causes two issues:\n1. Someone may actually run this clean command and delete files I want to keep\n2. I cannot run `--clean`\n\nPerhaps the behavior of that warning (and of `--clean`) should be changed to accommodate such scenario? \n\nIf it cannot be solved automatically, then perhaps by a directive in the config? `build_ignore: [file, folder]`\n\n", "before_files": [{"content": "# coding: utf-8\n\n\"\"\"\nStandalone file utils.\n\nNothing in this module should have an knowledge of config or the layout\nand structure of the site and pages in the site.\n\"\"\"\n\nimport os\nimport shutil\n\nfrom mkdocs.compat import urlparse\n\n\ndef copy_file(source_path, output_path):\n \"\"\"\n Copy source_path to output_path, making sure any parent directories exist.\n \"\"\"\n output_dir = os.path.dirname(output_path)\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n shutil.copy(source_path, output_path)\n\n\ndef write_file(content, output_path):\n \"\"\"\n Write content to output_path, making sure any parent directories exist.\n \"\"\"\n output_dir = os.path.dirname(output_path)\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n open(output_path, 'wb').write(content)\n\n\ndef clean_directory(directory):\n \"\"\"\n Remove the content of a directory recursively but not the directory itself.\n \"\"\"\n if os.path.exists(directory):\n for entry in os.listdir(directory):\n path = os.path.join(directory, entry)\n if os.path.isdir(path):\n shutil.rmtree(path, True)\n else:\n os.unlink(path)\n\n\ndef copy_media_files(from_dir, to_dir):\n \"\"\"\n Recursively copy all files except markdown and HTML into another directory.\n \"\"\"\n for (source_dir, dirnames, filenames) in os.walk(from_dir):\n relative_path = os.path.relpath(source_dir, from_dir)\n output_dir = os.path.normpath(os.path.join(to_dir, relative_path))\n\n for filename in filenames:\n if not is_markdown_file(filename) and not is_html_file(filename):\n source_path = os.path.join(source_dir, filename)\n output_path = os.path.join(output_dir, filename)\n copy_file(source_path, output_path)\n\n\ndef get_html_path(path):\n \"\"\"\n Map a source file path to an output html path.\n\n Paths like 'index.md' will be converted to 'index.html'\n Paths like 'about.md' will be converted to 'about/index.html'\n Paths like 'api-guide/core.md' will be converted to 'api-guide/core/index.html'\n \"\"\"\n path = 
os.path.splitext(path)[0]\n if os.path.basename(path) == 'index':\n return path + '.html'\n return \"/\".join((path, 'index.html'))\n\n\ndef get_url_path(path, use_directory_urls=True):\n \"\"\"\n Map a source file path to an output html path.\n\n Paths like 'index.md' will be converted to '/'\n Paths like 'about.md' will be converted to '/about/'\n Paths like 'api-guide/core.md' will be converted to '/api-guide/core/'\n\n If `use_directory_urls` is `False`, returned URLs will include the a trailing\n `index.html` rather than just returning the directory path.\n \"\"\"\n path = get_html_path(path)\n url = '/' + path.replace(os.path.sep, '/')\n if use_directory_urls:\n return url[:-len('index.html')]\n return url\n\n\ndef is_homepage(path):\n return os.path.splitext(path)[0] == 'index'\n\n\ndef is_markdown_file(path):\n \"\"\"\n Return True if the given file path is a Markdown file.\n\n http://superuser.com/questions/249436/file-extension-for-markdown-files\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.markdown',\n '.mdown',\n '.mkdn',\n '.mkd',\n '.md',\n ]\n\n\ndef is_css_file(path):\n \"\"\"\n Return True if the given file path is a CSS file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.css',\n ]\n\n\ndef is_javascript_file(path):\n \"\"\"\n Return True if the given file path is a Javascript file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.js',\n '.javascript'\n ]\n\n\ndef is_html_file(path):\n \"\"\"\n Return True if the given file path is an HTML file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.html',\n '.htm',\n ]\n\n\ndef create_media_urls(nav, url_list):\n \"\"\"\n Return a list of URLs that have been processed correctly for inclusion in a page.\n \"\"\"\n final_urls = []\n for url in url_list:\n # Allow links to fully qualified URL's\n parsed = urlparse(url)\n if parsed.netloc:\n final_urls.append(url)\n else:\n relative_url = '%s/%s' % (nav.url_context.make_relative('/'), url)\n final_urls.append(relative_url)\n return final_urls\n", "path": "mkdocs/utils.py"}], "after_files": [{"content": "# coding: utf-8\n\n\"\"\"\nStandalone file utils.\n\nNothing in this module should have an knowledge of config or the layout\nand structure of the site and pages in the site.\n\"\"\"\n\nimport os\nimport shutil\n\nfrom mkdocs.compat import urlparse\n\n\ndef copy_file(source_path, output_path):\n \"\"\"\n Copy source_path to output_path, making sure any parent directories exist.\n \"\"\"\n output_dir = os.path.dirname(output_path)\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n shutil.copy(source_path, output_path)\n\n\ndef write_file(content, output_path):\n \"\"\"\n Write content to output_path, making sure any parent directories exist.\n \"\"\"\n output_dir = os.path.dirname(output_path)\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n open(output_path, 'wb').write(content)\n\n\ndef clean_directory(directory):\n \"\"\"\n Remove the content of a directory recursively but not the directory itself.\n \"\"\"\n if os.path.exists(directory):\n for entry in os.listdir(directory):\n path = os.path.join(directory, entry)\n if os.path.isdir(path):\n shutil.rmtree(path, True)\n else:\n os.unlink(path)\n\n\ndef copy_media_files(from_dir, to_dir):\n \"\"\"\n Recursively copy all files except markdown and HTML into another directory.\n \"\"\"\n for (source_dir, dirnames, filenames) in os.walk(from_dir):\n relative_path = os.path.relpath(source_dir, from_dir)\n 
output_dir = os.path.normpath(os.path.join(to_dir, relative_path))\n\n # Filter filenames starting with a '.'\n filenames = [f for f in filenames if not f.startswith('.')]\n\n # Filter the dirnames that start with a '.' and update the list in\n # place to prevent us walking these.\n dirnames[:] = [d for d in dirnames if not d.startswith('.')]\n\n for filename in filenames:\n if not is_markdown_file(filename) and not is_html_file(filename):\n source_path = os.path.join(source_dir, filename)\n output_path = os.path.join(output_dir, filename)\n copy_file(source_path, output_path)\n\n\ndef get_html_path(path):\n \"\"\"\n Map a source file path to an output html path.\n\n Paths like 'index.md' will be converted to 'index.html'\n Paths like 'about.md' will be converted to 'about/index.html'\n Paths like 'api-guide/core.md' will be converted to 'api-guide/core/index.html'\n \"\"\"\n path = os.path.splitext(path)[0]\n if os.path.basename(path) == 'index':\n return path + '.html'\n return \"/\".join((path, 'index.html'))\n\n\ndef get_url_path(path, use_directory_urls=True):\n \"\"\"\n Map a source file path to an output html path.\n\n Paths like 'index.md' will be converted to '/'\n Paths like 'about.md' will be converted to '/about/'\n Paths like 'api-guide/core.md' will be converted to '/api-guide/core/'\n\n If `use_directory_urls` is `False`, returned URLs will include the a trailing\n `index.html` rather than just returning the directory path.\n \"\"\"\n path = get_html_path(path)\n url = '/' + path.replace(os.path.sep, '/')\n if use_directory_urls:\n return url[:-len('index.html')]\n return url\n\n\ndef is_homepage(path):\n return os.path.splitext(path)[0] == 'index'\n\n\ndef is_markdown_file(path):\n \"\"\"\n Return True if the given file path is a Markdown file.\n\n http://superuser.com/questions/249436/file-extension-for-markdown-files\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.markdown',\n '.mdown',\n '.mkdn',\n '.mkd',\n '.md',\n ]\n\n\ndef is_css_file(path):\n \"\"\"\n Return True if the given file path is a CSS file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.css',\n ]\n\n\ndef is_javascript_file(path):\n \"\"\"\n Return True if the given file path is a Javascript file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.js',\n '.javascript'\n ]\n\n\ndef is_html_file(path):\n \"\"\"\n Return True if the given file path is an HTML file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.html',\n '.htm',\n ]\n\n\ndef create_media_urls(nav, url_list):\n \"\"\"\n Return a list of URLs that have been processed correctly for inclusion in a page.\n \"\"\"\n final_urls = []\n for url in url_list:\n # Allow links to fully qualified URL's\n parsed = urlparse(url)\n if parsed.netloc:\n final_urls.append(url)\n else:\n relative_url = '%s/%s' % (nav.url_context.make_relative('/'), url)\n final_urls.append(relative_url)\n return final_urls\n", "path": "mkdocs/utils.py"}]}
| 1,846 | 186 |
gh_patches_debug_13000
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-1432
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ExampleApp Syntax Highlighter broken in PyQt5
### Short description
Since merge of #1302, the syntax highlighter in the ExampleApp is broken (CC @j9ac9k ).
### Code to reproduce
```bash
cd pyqtgraph\examples
python3 .
```
### Expected behavior
Syntax highlighter should work
### Real behavior
Following error:
```
Traceback (most recent call last):
File "/home/b2/Documents/synced/pyqtgraph/examples/ExampleApp.py", line 298, in highlightBlock
format = self.styles[format]
AttributeError: 'PythonHighlighter' object has no attribute 'styles'
```
### Tested environment(s)
* PyQtGraph version: 7e57e07068fc8a62e9ae3b68e5556b11b6ef1841 and newer
* Qt Python binding: yQt5 5.15.1 Qt 5.15.1
* Python version: Python 3.8.5
* NumPy version: 1.19.2
* Operating system: Linux 5.8.16-2-MANJARO
* Installation method: Run in folder (see above)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/__main__.py`
Content:
```
1 import sys, os
2 import pyqtgraph as pg
3 from pyqtgraph.Qt import QtCore, QtGui
4
5
6
7 if __name__ == '__main__':
8 if __package__ is None or __package__ == "":
9 parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
10 sys.path.insert(0, parent_dir)
11 import examples
12 __package__ = "examples"
13
14 from .ExampleApp import main as run
15 run()
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/__main__.py b/examples/__main__.py
--- a/examples/__main__.py
+++ b/examples/__main__.py
@@ -1,15 +1,16 @@
import sys, os
+
+# Set up path to contain pyqtgraph module when run without installation
+if __name__ == "__main__" and (__package__ is None or __package__==''):
+ parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
+ sys.path.insert(0, parent_dir)
+ import examples
+ __package__ = "examples"
+
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore, QtGui
-
if __name__ == '__main__':
- if __package__ is None or __package__ == "":
- parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
- sys.path.insert(0, parent_dir)
- import examples
- __package__ = "examples"
-
from .ExampleApp import main as run
run()
|
{"golden_diff": "diff --git a/examples/__main__.py b/examples/__main__.py\n--- a/examples/__main__.py\n+++ b/examples/__main__.py\n@@ -1,15 +1,16 @@\n import sys, os\n+\n+# Set up path to contain pyqtgraph module when run without installation\n+if __name__ == \"__main__\" and (__package__ is None or __package__==''):\n+ parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n+ sys.path.insert(0, parent_dir)\n+ import examples\n+ __package__ = \"examples\"\n+\n import pyqtgraph as pg\n from pyqtgraph.Qt import QtCore, QtGui\n \n \n-\n if __name__ == '__main__':\n- if __package__ is None or __package__ == \"\":\n- parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n- sys.path.insert(0, parent_dir)\n- import examples\n- __package__ = \"examples\"\n-\n from .ExampleApp import main as run\n run()\n", "issue": "ExampleApp Syntax Highlighter broken in PyQt5\n### Short description\r\nSince merge of #1302, the syntax highlighter in the ExampleApp is broken (CC @j9ac9k ).\r\n\r\n### Code to reproduce\r\n```bash\r\ncd pyqtgraph\\examples\r\npython3 .\r\n```\r\n\r\n### Expected behavior\r\nSyntax highlighter should work\r\n\r\n### Real behavior\r\nFollowing error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/b2/Documents/synced/pyqtgraph/examples/ExampleApp.py\", line 298, in highlightBlock\r\n format = self.styles[format]\r\nAttributeError: 'PythonHighlighter' object has no attribute 'styles'\r\n```\r\n\r\n### Tested environment(s)\r\n\r\n * PyQtGraph version: 7e57e07068fc8a62e9ae3b68e5556b11b6ef1841 and newer\r\n * Qt Python binding: yQt5 5.15.1 Qt 5.15.1\r\n * Python version: Python 3.8.5\r\n * NumPy version: 1.19.2\r\n * Operating system: Linux 5.8.16-2-MANJARO\r\n * Installation method: Run in folder (see above)\n", "before_files": [{"content": "import sys, os\nimport pyqtgraph as pg\nfrom pyqtgraph.Qt import QtCore, QtGui\n\n\n\nif __name__ == '__main__':\n if __package__ is None or __package__ == \"\":\n parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n sys.path.insert(0, parent_dir)\n import examples\n __package__ = \"examples\"\n\n from .ExampleApp import main as run\n run()\n", "path": "examples/__main__.py"}], "after_files": [{"content": "import sys, os\n\n# Set up path to contain pyqtgraph module when run without installation\nif __name__ == \"__main__\" and (__package__ is None or __package__==''):\n parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n sys.path.insert(0, parent_dir)\n import examples\n __package__ = \"examples\"\n\nimport pyqtgraph as pg\nfrom pyqtgraph.Qt import QtCore, QtGui\n\n\nif __name__ == '__main__':\n from .ExampleApp import main as run\n run()\n", "path": "examples/__main__.py"}]}
| 657 | 229 |
gh_patches_debug_25299
|
rasdani/github-patches
|
git_diff
|
facebookresearch__ParlAI-2674
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Problem trying to install
Use this to open other questions or issues, and provide context here.
Hi, I'm kinda new to this, when I was trying to install it, I got the following error:
File "setup.py", line 30
version=f'0.1.{DATE}{BUILD}',
^
SyntaxError: invalid syntax
I used the command:
git clone https://github.com/facebookresearch/ParlAI.git
cd ParlAI; python setup.py develop
Any Idea why?
Thank you.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6
7
8 import datetime
9 import sys
10
11 from setuptools import setup, find_packages
12
13 BUILD = '' # if multiple in one day, use "dev0", "dev1", ...
14 DATE = datetime.date.today().isoformat().replace('-', '')
15
16 if sys.version_info < (3, 6):
17 sys.exit('Sorry, Python >=3.6 is required for ParlAI.')
18
19 with open('README.md', encoding="utf8") as f:
20 # strip the header and badges etc
21 readme = f.read().split('--------------------')[-1]
22
23 with open('requirements.txt') as f:
24 reqs = f.read()
25
26
27 if __name__ == '__main__':
28 setup(
29 name='parlai',
30 version=f'0.1.{DATE}{BUILD}',
31 description='Unified platform for dialogue research.',
32 long_description=readme,
33 long_description_content_type='text/markdown',
34 url='http://parl.ai/',
35 python_requires='>=3.6',
36 scripts=['bin/parlai'],
37 packages=find_packages(
38 exclude=('data', 'docs', 'examples', 'tests', 'parlai_internal',)
39 ),
40 install_requires=reqs.strip().split('\n'),
41 include_package_data=True,
42 entry_points={"flake8.extension": ["PAI = parlai.utils.flake8:ParlAIChecker"]},
43 classifiers=[
44 "Programming Language :: Python :: 3",
45 "License :: OSI Approved :: MIT License",
46 "Topic :: Scientific/Engineering :: Artificial Intelligence",
47 "Natural Language :: English",
48 ],
49 )
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -10,7 +10,7 @@
from setuptools import setup, find_packages
-BUILD = '' # if multiple in one day, use "dev0", "dev1", ...
+BUILD = '' # test by setting to ".dev0" if multiple in one day, use ".dev1", ...
DATE = datetime.date.today().isoformat().replace('-', '')
if sys.version_info < (3, 6):
@@ -27,7 +27,7 @@
if __name__ == '__main__':
setup(
name='parlai',
- version=f'0.1.{DATE}{BUILD}',
+ version='0.1.{DATE}{BUILD}'.format(DATE=DATE, BUILD=BUILD),
description='Unified platform for dialogue research.',
long_description=readme,
long_description_content_type='text/markdown',
@@ -39,6 +39,7 @@
),
install_requires=reqs.strip().split('\n'),
include_package_data=True,
+ package_data={'': ['*.txt', '*.md']},
entry_points={"flake8.extension": ["PAI = parlai.utils.flake8:ParlAIChecker"]},
classifiers=[
"Programming Language :: Python :: 3",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -10,7 +10,7 @@\n \n from setuptools import setup, find_packages\n \n-BUILD = '' # if multiple in one day, use \"dev0\", \"dev1\", ...\n+BUILD = '' # test by setting to \".dev0\" if multiple in one day, use \".dev1\", ...\n DATE = datetime.date.today().isoformat().replace('-', '')\n \n if sys.version_info < (3, 6):\n@@ -27,7 +27,7 @@\n if __name__ == '__main__':\n setup(\n name='parlai',\n- version=f'0.1.{DATE}{BUILD}',\n+ version='0.1.{DATE}{BUILD}'.format(DATE=DATE, BUILD=BUILD),\n description='Unified platform for dialogue research.',\n long_description=readme,\n long_description_content_type='text/markdown',\n@@ -39,6 +39,7 @@\n ),\n install_requires=reqs.strip().split('\\n'),\n include_package_data=True,\n+ package_data={'': ['*.txt', '*.md']},\n entry_points={\"flake8.extension\": [\"PAI = parlai.utils.flake8:ParlAIChecker\"]},\n classifiers=[\n \"Programming Language :: Python :: 3\",\n", "issue": "Problem trying to install \nUse this to open other questions or issues, and provide context here.\r\nHi, I'm kinda new to this, when I was trying to install it, I got the following error: \r\n\r\n File \"setup.py\", line 30\r\n version=f'0.1.{DATE}{BUILD}',\r\n ^\r\nSyntaxError: invalid syntax\r\n\r\nI used the command: \r\n\r\ngit clone https://github.com/facebookresearch/ParlAI.git\r\ncd ParlAI; python setup.py develop\r\n\r\nAny Idea why?\r\n\r\nThank you. \n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\n\nimport datetime\nimport sys\n\nfrom setuptools import setup, find_packages\n\nBUILD = '' # if multiple in one day, use \"dev0\", \"dev1\", ...\nDATE = datetime.date.today().isoformat().replace('-', '')\n\nif sys.version_info < (3, 6):\n sys.exit('Sorry, Python >=3.6 is required for ParlAI.')\n\nwith open('README.md', encoding=\"utf8\") as f:\n # strip the header and badges etc\n readme = f.read().split('--------------------')[-1]\n\nwith open('requirements.txt') as f:\n reqs = f.read()\n\n\nif __name__ == '__main__':\n setup(\n name='parlai',\n version=f'0.1.{DATE}{BUILD}',\n description='Unified platform for dialogue research.',\n long_description=readme,\n long_description_content_type='text/markdown',\n url='http://parl.ai/',\n python_requires='>=3.6',\n scripts=['bin/parlai'],\n packages=find_packages(\n exclude=('data', 'docs', 'examples', 'tests', 'parlai_internal',)\n ),\n install_requires=reqs.strip().split('\\n'),\n include_package_data=True,\n entry_points={\"flake8.extension\": [\"PAI = parlai.utils.flake8:ParlAIChecker\"]},\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"License :: OSI Approved :: MIT License\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Natural Language :: English\",\n ],\n )\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\n\nimport datetime\nimport sys\n\nfrom setuptools import setup, find_packages\n\nBUILD = '' # test by setting to \".dev0\" if multiple in one day, use \".dev1\", ...\nDATE = datetime.date.today().isoformat().replace('-', '')\n\nif sys.version_info < (3, 6):\n sys.exit('Sorry, Python >=3.6 is required for ParlAI.')\n\nwith open('README.md', encoding=\"utf8\") as f:\n # strip the header and badges etc\n readme = f.read().split('--------------------')[-1]\n\nwith open('requirements.txt') as f:\n reqs = f.read()\n\n\nif __name__ == '__main__':\n setup(\n name='parlai',\n version='0.1.{DATE}{BUILD}'.format(DATE=DATE, BUILD=BUILD),\n description='Unified platform for dialogue research.',\n long_description=readme,\n long_description_content_type='text/markdown',\n url='http://parl.ai/',\n python_requires='>=3.6',\n scripts=['bin/parlai'],\n packages=find_packages(\n exclude=('data', 'docs', 'examples', 'tests', 'parlai_internal',)\n ),\n install_requires=reqs.strip().split('\\n'),\n include_package_data=True,\n package_data={'': ['*.txt', '*.md']},\n entry_points={\"flake8.extension\": [\"PAI = parlai.utils.flake8:ParlAIChecker\"]},\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"License :: OSI Approved :: MIT License\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Natural Language :: English\",\n ],\n )\n", "path": "setup.py"}]}
| 842 | 287 |
gh_patches_debug_61
|
rasdani/github-patches
|
git_diff
|
holoviz__panel-5919
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Plotting styling guides are gone
As discussed on Discourse the styling gallery notebooks for Plotly, Altair, Seaborn and Matplotlib have gone missing with Panel 1.0 release.
Please re-add. Thx.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `panel/models/echarts.py`
Content:
```
1 """
2 Defines custom bokeh model to render ECharts plots.
3 """
4 from bokeh.core.properties import (
5 Any, Dict, Enum, List, Nullable, String,
6 )
7 from bokeh.events import ModelEvent
8 from bokeh.models import LayoutDOM
9
10 from ..config import config
11 from ..io.resources import bundled_files
12 from ..util import classproperty
13
14
15 class EChartsEvent(ModelEvent):
16
17 event_name = 'echarts_event'
18
19 def __init__(self, model, type=None, data=None, query=None):
20 self.type = type
21 self.data = data
22 self.query = query
23 super().__init__(model=model)
24
25
26 class ECharts(LayoutDOM):
27 """
28 A Bokeh model that wraps around an ECharts plot and renders it
29 inside a Bokeh.
30 """
31
32 __javascript_raw__ = [
33 f"{config.npm_cdn}/[email protected]/dist/echarts.min.js",
34 f"{config.npm_cdn}/[email protected]/dist/echarts-gl.min.js"
35 ]
36
37 @classproperty
38 def __javascript__(cls):
39 return bundled_files(cls)
40
41 @classproperty
42 def __js_skip__(cls):
43 return {
44 'echarts': cls.__javascript__[:1]
45 }
46
47 __js_require__ = {
48 'paths': {
49 "echarts": f"{config.npm_cdn}/[email protected]/dist/echarts.min",
50 "echarts-gl": f"{config.npm_cdn}/[email protected]/dist/echarts-gl.min"
51 },
52 'exports': {}
53 }
54
55 data = Nullable(Dict(String, Any))
56
57 options = Nullable(Dict(String, Any))
58
59 event_config = Dict(String, Any)
60
61 js_events = Dict(String, List(Any))
62
63 renderer = Enum("canvas", "svg")
64
65 theme = Enum("default", "light", "dark")
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/panel/models/echarts.py b/panel/models/echarts.py
--- a/panel/models/echarts.py
+++ b/panel/models/echarts.py
@@ -62,4 +62,4 @@
renderer = Enum("canvas", "svg")
- theme = Enum("default", "light", "dark")
+ theme = String("default")
|
{"golden_diff": "diff --git a/panel/models/echarts.py b/panel/models/echarts.py\n--- a/panel/models/echarts.py\n+++ b/panel/models/echarts.py\n@@ -62,4 +62,4 @@\n \n renderer = Enum(\"canvas\", \"svg\")\n \n- theme = Enum(\"default\", \"light\", \"dark\")\n+ theme = String(\"default\")\n", "issue": "Plotting styling guides are gone\nAs discussed on Discourse the styling gallery notebooks for Plotly, Altair, Seaborn and Matplotlib have gone missing with Panel 1.0 release.\r\n\r\nPlease re-add. Thx.\n", "before_files": [{"content": "\"\"\"\nDefines custom bokeh model to render ECharts plots.\n\"\"\"\nfrom bokeh.core.properties import (\n Any, Dict, Enum, List, Nullable, String,\n)\nfrom bokeh.events import ModelEvent\nfrom bokeh.models import LayoutDOM\n\nfrom ..config import config\nfrom ..io.resources import bundled_files\nfrom ..util import classproperty\n\n\nclass EChartsEvent(ModelEvent):\n\n event_name = 'echarts_event'\n\n def __init__(self, model, type=None, data=None, query=None):\n self.type = type\n self.data = data\n self.query = query\n super().__init__(model=model)\n\n\nclass ECharts(LayoutDOM):\n \"\"\"\n A Bokeh model that wraps around an ECharts plot and renders it\n inside a Bokeh.\n \"\"\"\n\n __javascript_raw__ = [\n f\"{config.npm_cdn}/[email protected]/dist/echarts.min.js\",\n f\"{config.npm_cdn}/[email protected]/dist/echarts-gl.min.js\"\n ]\n\n @classproperty\n def __javascript__(cls):\n return bundled_files(cls)\n\n @classproperty\n def __js_skip__(cls):\n return {\n 'echarts': cls.__javascript__[:1]\n }\n\n __js_require__ = {\n 'paths': {\n \"echarts\": f\"{config.npm_cdn}/[email protected]/dist/echarts.min\",\n \"echarts-gl\": f\"{config.npm_cdn}/[email protected]/dist/echarts-gl.min\"\n },\n 'exports': {}\n }\n\n data = Nullable(Dict(String, Any))\n\n options = Nullable(Dict(String, Any))\n\n event_config = Dict(String, Any)\n\n js_events = Dict(String, List(Any))\n\n renderer = Enum(\"canvas\", \"svg\")\n\n theme = Enum(\"default\", \"light\", \"dark\")\n", "path": "panel/models/echarts.py"}], "after_files": [{"content": "\"\"\"\nDefines custom bokeh model to render ECharts plots.\n\"\"\"\nfrom bokeh.core.properties import (\n Any, Dict, Enum, List, Nullable, String,\n)\nfrom bokeh.events import ModelEvent\nfrom bokeh.models import LayoutDOM\n\nfrom ..config import config\nfrom ..io.resources import bundled_files\nfrom ..util import classproperty\n\n\nclass EChartsEvent(ModelEvent):\n\n event_name = 'echarts_event'\n\n def __init__(self, model, type=None, data=None, query=None):\n self.type = type\n self.data = data\n self.query = query\n super().__init__(model=model)\n\n\nclass ECharts(LayoutDOM):\n \"\"\"\n A Bokeh model that wraps around an ECharts plot and renders it\n inside a Bokeh.\n \"\"\"\n\n __javascript_raw__ = [\n f\"{config.npm_cdn}/[email protected]/dist/echarts.min.js\",\n f\"{config.npm_cdn}/[email protected]/dist/echarts-gl.min.js\"\n ]\n\n @classproperty\n def __javascript__(cls):\n return bundled_files(cls)\n\n @classproperty\n def __js_skip__(cls):\n return {\n 'echarts': cls.__javascript__[:1]\n }\n\n __js_require__ = {\n 'paths': {\n \"echarts\": f\"{config.npm_cdn}/[email protected]/dist/echarts.min\",\n \"echarts-gl\": f\"{config.npm_cdn}/[email protected]/dist/echarts-gl.min\"\n },\n 'exports': {}\n }\n\n data = Nullable(Dict(String, Any))\n\n options = Nullable(Dict(String, Any))\n\n event_config = Dict(String, Any)\n\n js_events = Dict(String, List(Any))\n\n renderer = Enum(\"canvas\", \"svg\")\n\n theme = String(\"default\")\n", 
"path": "panel/models/echarts.py"}]}
| 859 | 86 |
gh_patches_debug_600
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-1319
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.39
On the docket:
+ [x] Running opvault 0.4.9 pex leads to infinite recursion in setup tools #1316
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.38"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.38"
+__version__ = "2.1.39"
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.38\"\n+__version__ = \"2.1.39\"\n", "issue": "Release 2.1.39\nOn the docket:\r\n+ [x] Running opvault 0.4.9 pex leads to infinite recursion in setup tools #1316\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.38\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.39\"\n", "path": "pex/version.py"}]}
| 349 | 96 |
gh_patches_debug_21533
|
rasdani/github-patches
|
git_diff
|
e-valuation__EvaP-464
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
there is duplicated code in EvaP
while browsing the code, i ran across a lot of structurally very similar blocks of code.
this is a reminder for me to search for these code parts, collect a few, and find the cause and solutions for them.
examples:
- [x] https://github.com/fsr-itse/EvaP/commit/ca4b28a6054944859ec4447fff95fc36095a2ffb.
- [x] the contributor course view/template vs the fsr course view/template.
- [x] student_index and contributor_index (html and view)
- [ ] fsr_import and user_import
- [x] the multiselect initialization inside the script blocks exists multiple times in different files. one can find it by searching for select[multiple='multiple']
- [x] https://github.com/janno42/EvaP/blob/reward_points/evap/fsr/templates/fsr_reward_point_redemption_event_delete.html
- [x] search for $(".twipsify") and $("input[data-datepicker='datepicker']:not([readonly='True'])")
- [x] sortable and apply_ordering functions
- [x] if ($.fn.button.noConflict != undefined)
- [x] $('#id_degree').combobox() and similar
- [x] fsr_course_preview.html, fsr_questionnaire_view.html, contributor_course_preview.html
- [x] javascript-code of the sortable forms: [contributor/staff]_course_form.html (or rather, evap_course_edit_js.html), staff_faq_index.html and staff_faq_section.html
- [x] all the leave.js occurrences
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/evaluation/management/commands/merge_users.py`
Content:
```
1 from django.core.management.base import BaseCommand
2
3 from evap.evaluation.models import UserProfile
4 from evap.evaluation.merge import merge_model_objects
5
6
7 class Command(BaseCommand):
8 args = '<user ID 1> <user ID 2>'
9 help = 'Merge two users'
10
11 def handle(self, *args, **options):
12 try:
13 user1 = UserProfile.objects.get(pk=int(args[0]))
14 user2 = UserProfile.objects.get(pk=int(args[1]))
15
16 print("Merging user '{1}' into user '{0}'".format(user1, user2))
17 merge_model_objects(user1, user2)
18 except Exception:
19 import traceback
20 traceback.print_exc()
21
```
Path: `evap/staff/templatetags/staff_templatetags.py`
Content:
```
1 from django.template import Library
2
3 from evap.evaluation.models import Semester
4
5 register = Library()
6
7
8 @register.inclusion_tag("staff_semester_menu.html")
9 def include_staff_semester_menu():
10 return dict(semesters=Semester.objects.all()[:5])
11
12 @register.inclusion_tag('staff_course_selection_list.html')
13 def include_staff_course_selection_list(course_forms, btn_label, empty_msg):
14 return {
15 'course_forms': course_forms,
16 'btn_label': btn_label,
17 'empty_msg': empty_msg
18 }
19
20 @register.inclusion_tag('staff_user_labels.html')
21 def include_staff_user_labels(user):
22 return dict(user=user)
23
```
Path: `evap/evaluation/templatetags/evaluation_templatetags.py`
Content:
```
1 from django.template import Library
2 from evap.evaluation.models import Semester
3
4 register = Library()
5
6
7 @register.inclusion_tag("user_list_with_links.html")
8 def include_user_list_with_links(users):
9 return dict(users=users)
10
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/evap/evaluation/management/commands/merge_users.py b/evap/evaluation/management/commands/merge_users.py
--- a/evap/evaluation/management/commands/merge_users.py
+++ b/evap/evaluation/management/commands/merge_users.py
@@ -5,8 +5,8 @@
class Command(BaseCommand):
- args = '<user ID 1> <user ID 2>'
- help = 'Merge two users'
+ args = '<primary user ID> <duplicate user ID>'
+ help = 'Merges duplicate user into primary user'
def handle(self, *args, **options):
try:
diff --git a/evap/evaluation/templatetags/evaluation_templatetags.py b/evap/evaluation/templatetags/evaluation_templatetags.py
--- a/evap/evaluation/templatetags/evaluation_templatetags.py
+++ b/evap/evaluation/templatetags/evaluation_templatetags.py
@@ -7,3 +7,9 @@
@register.inclusion_tag("user_list_with_links.html")
def include_user_list_with_links(users):
return dict(users=users)
+
+
[email protected]_tag("sortable_form_js.html")
+def include_sortable_form_js():
+ return dict()
+
diff --git a/evap/staff/templatetags/staff_templatetags.py b/evap/staff/templatetags/staff_templatetags.py
--- a/evap/staff/templatetags/staff_templatetags.py
+++ b/evap/staff/templatetags/staff_templatetags.py
@@ -20,3 +20,7 @@
@register.inclusion_tag('staff_user_labels.html')
def include_staff_user_labels(user):
return dict(user=user)
+
[email protected]_tag("staff_questionnaire_index_list.html")
+def include_staff_questionnaire_index_list(questionnaires, headline):
+ return dict(questionnaires=questionnaires, headline=headline)
|
{"golden_diff": "diff --git a/evap/evaluation/management/commands/merge_users.py b/evap/evaluation/management/commands/merge_users.py\n--- a/evap/evaluation/management/commands/merge_users.py\n+++ b/evap/evaluation/management/commands/merge_users.py\n@@ -5,8 +5,8 @@\n \n \n class Command(BaseCommand):\n- args = '<user ID 1> <user ID 2>'\n- help = 'Merge two users'\n+ args = '<primary user ID> <duplicate user ID>'\n+ help = 'Merges duplicate user into primary user'\n \n def handle(self, *args, **options):\n try:\ndiff --git a/evap/evaluation/templatetags/evaluation_templatetags.py b/evap/evaluation/templatetags/evaluation_templatetags.py\n--- a/evap/evaluation/templatetags/evaluation_templatetags.py\n+++ b/evap/evaluation/templatetags/evaluation_templatetags.py\n@@ -7,3 +7,9 @@\n @register.inclusion_tag(\"user_list_with_links.html\")\n def include_user_list_with_links(users):\n return dict(users=users)\n+\n+\[email protected]_tag(\"sortable_form_js.html\")\n+def include_sortable_form_js():\n+ return dict()\n+\ndiff --git a/evap/staff/templatetags/staff_templatetags.py b/evap/staff/templatetags/staff_templatetags.py\n--- a/evap/staff/templatetags/staff_templatetags.py\n+++ b/evap/staff/templatetags/staff_templatetags.py\n@@ -20,3 +20,7 @@\n @register.inclusion_tag('staff_user_labels.html')\n def include_staff_user_labels(user):\n return dict(user=user)\n+\[email protected]_tag(\"staff_questionnaire_index_list.html\")\n+def include_staff_questionnaire_index_list(questionnaires, headline):\n+ return dict(questionnaires=questionnaires, headline=headline)\n", "issue": "there is duplicated code in EvaP\nwhile browsing the code, i ran across a lot of structurally very similar blocks of code. \n\nthis is a reminder for me to search for these code parts, collect a few, and find the cause and solutions for them.\n\nexamples: \n- [x] https://github.com/fsr-itse/EvaP/commit/ca4b28a6054944859ec4447fff95fc36095a2ffb.\n- [x] the contributor course view/template vs the fsr course view/template.\n- [x] student_index and contributor_index (html and view)\n- [ ] fsr_import and user_import\n- [x] the multiselect initialization inside the script blocks exists multiple times in different files. 
one can find it by searching for select[multiple='multiple']\n- [x] https://github.com/janno42/EvaP/blob/reward_points/evap/fsr/templates/fsr_reward_point_redemption_event_delete.html\n- [x] search for $(\".twipsify\") and $(\"input[data-datepicker='datepicker']:not([readonly='True'])\")\n- [x] sortable and apply_ordering functions\n- [x] if ($.fn.button.noConflict != undefined)\n- [x] $('#id_degree').combobox() and similar\n- [x] fsr_course_preview.html, fsr_questionnaire_view.html, contributor_course_preview.html\n- [x] javascript-code of the sortable forms: [contributor/staff]_course_form.html (or rather, evap_course_edit_js.html), staff_faq_index.html and staff_faq_section.html\n- [x] all the leave.js occurrences\n\n", "before_files": [{"content": "from django.core.management.base import BaseCommand\n\nfrom evap.evaluation.models import UserProfile\nfrom evap.evaluation.merge import merge_model_objects\n\n\nclass Command(BaseCommand):\n args = '<user ID 1> <user ID 2>'\n help = 'Merge two users'\n\n def handle(self, *args, **options):\n try:\n user1 = UserProfile.objects.get(pk=int(args[0]))\n user2 = UserProfile.objects.get(pk=int(args[1]))\n\n print(\"Merging user '{1}' into user '{0}'\".format(user1, user2))\n merge_model_objects(user1, user2)\n except Exception:\n import traceback\n traceback.print_exc()\n", "path": "evap/evaluation/management/commands/merge_users.py"}, {"content": "from django.template import Library\n\nfrom evap.evaluation.models import Semester\n\nregister = Library()\n\n\[email protected]_tag(\"staff_semester_menu.html\")\ndef include_staff_semester_menu():\n return dict(semesters=Semester.objects.all()[:5])\n\[email protected]_tag('staff_course_selection_list.html')\ndef include_staff_course_selection_list(course_forms, btn_label, empty_msg):\n return {\n 'course_forms': course_forms,\n 'btn_label': btn_label,\n 'empty_msg': empty_msg\n }\n\[email protected]_tag('staff_user_labels.html')\ndef include_staff_user_labels(user):\n return dict(user=user)\n", "path": "evap/staff/templatetags/staff_templatetags.py"}, {"content": "from django.template import Library\nfrom evap.evaluation.models import Semester\n\nregister = Library()\n\n\[email protected]_tag(\"user_list_with_links.html\")\ndef include_user_list_with_links(users):\n return dict(users=users)\n", "path": "evap/evaluation/templatetags/evaluation_templatetags.py"}], "after_files": [{"content": "from django.core.management.base import BaseCommand\n\nfrom evap.evaluation.models import UserProfile\nfrom evap.evaluation.merge import merge_model_objects\n\n\nclass Command(BaseCommand):\n args = '<primary user ID> <duplicate user ID>'\n help = 'Merges duplicate user into primary user'\n\n def handle(self, *args, **options):\n try:\n user1 = UserProfile.objects.get(pk=int(args[0]))\n user2 = UserProfile.objects.get(pk=int(args[1]))\n\n print(\"Merging user '{1}' into user '{0}'\".format(user1, user2))\n merge_model_objects(user1, user2)\n except Exception:\n import traceback\n traceback.print_exc()\n", "path": "evap/evaluation/management/commands/merge_users.py"}, {"content": "from django.template import Library\n\nfrom evap.evaluation.models import Semester\n\nregister = Library()\n\n\[email protected]_tag(\"staff_semester_menu.html\")\ndef include_staff_semester_menu():\n return dict(semesters=Semester.objects.all()[:5])\n\[email protected]_tag('staff_course_selection_list.html')\ndef include_staff_course_selection_list(course_forms, btn_label, empty_msg):\n return {\n 'course_forms': course_forms,\n 
'btn_label': btn_label,\n 'empty_msg': empty_msg\n }\n\[email protected]_tag('staff_user_labels.html')\ndef include_staff_user_labels(user):\n return dict(user=user)\n\[email protected]_tag(\"staff_questionnaire_index_list.html\")\ndef include_staff_questionnaire_index_list(questionnaires, headline):\n return dict(questionnaires=questionnaires, headline=headline)\n", "path": "evap/staff/templatetags/staff_templatetags.py"}, {"content": "from django.template import Library\nfrom evap.evaluation.models import Semester\n\nregister = Library()\n\n\[email protected]_tag(\"user_list_with_links.html\")\ndef include_user_list_with_links(users):\n return dict(users=users)\n\n\[email protected]_tag(\"sortable_form_js.html\")\ndef include_sortable_form_js():\n return dict()\n\n", "path": "evap/evaluation/templatetags/evaluation_templatetags.py"}]}
| 1,095 | 459 |
gh_patches_debug_1903
|
rasdani/github-patches
|
git_diff
|
systemd__mkosi-1956
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Meta] declare a policy about adding new distributions
Before people start creating issues asking to support their favorite distribution, I think that mkosi should declare its policy regarding new distributions support.
The policy should state in which terms (if any) you will be willing to support a new distributions.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkosi/distributions/__init__.py`
Content:
```
1 # SPDX-License-Identifier: LGPL-2.1+
2
3 import enum
4 import importlib
5 import re
6 from collections.abc import Sequence
7 from typing import TYPE_CHECKING, Optional, cast
8
9 from mkosi.architecture import Architecture
10 from mkosi.util import StrEnum, read_os_release
11
12 if TYPE_CHECKING:
13 from mkosi.state import MkosiState
14
15
16 class PackageType(StrEnum):
17 none = enum.auto()
18 rpm = enum.auto()
19 deb = enum.auto()
20 pkg = enum.auto()
21 ebuild = enum.auto()
22
23
24 class DistributionInstaller:
25 @classmethod
26 def pretty_name(cls) -> str:
27 raise NotImplementedError
28
29 @classmethod
30 def setup(cls, state: "MkosiState") -> None:
31 raise NotImplementedError
32
33 @classmethod
34 def install(cls, state: "MkosiState") -> None:
35 raise NotImplementedError
36
37 @classmethod
38 def install_packages(cls, state: "MkosiState", packages: Sequence[str]) -> None:
39 raise NotImplementedError
40
41 @classmethod
42 def remove_packages(cls, state: "MkosiState", packages: Sequence[str]) -> None:
43 raise NotImplementedError
44
45 @classmethod
46 def filesystem(cls) -> str:
47 return "ext4"
48
49 @staticmethod
50 def architecture(arch: Architecture) -> str:
51 return str(arch)
52
53 @classmethod
54 def package_type(cls) -> PackageType:
55 return PackageType.none
56
57 @classmethod
58 def default_release(cls) -> str:
59 return ""
60
61 @classmethod
62 def default_tools_tree_distribution(cls) -> Optional["Distribution"]:
63 return None
64
65 @classmethod
66 def tools_tree_repositories(cls) -> list[str]:
67 return []
68
69 @classmethod
70 def tools_tree_packages(cls) -> list[str]:
71 return []
72
73
74 class Distribution(StrEnum):
75 fedora = enum.auto()
76 debian = enum.auto()
77 ubuntu = enum.auto()
78 arch = enum.auto()
79 opensuse = enum.auto()
80 mageia = enum.auto()
81 centos = enum.auto()
82 rhel_ubi = enum.auto()
83 openmandriva = enum.auto()
84 rocky = enum.auto()
85 alma = enum.auto()
86 gentoo = enum.auto()
87 custom = enum.auto()
88
89 def is_centos_variant(self) -> bool:
90 return self in (Distribution.centos, Distribution.alma, Distribution.rocky)
91
92 def is_dnf_distribution(self) -> bool:
93 return self in (
94 Distribution.fedora,
95 Distribution.mageia,
96 Distribution.centos,
97 Distribution.rhel_ubi,
98 Distribution.openmandriva,
99 Distribution.rocky,
100 Distribution.alma,
101 )
102
103 def is_apt_distribution(self) -> bool:
104 return self in (Distribution.debian, Distribution.ubuntu)
105
106 def setup(self, state: "MkosiState") -> None:
107 return self.installer().setup(state)
108
109 def install(self, state: "MkosiState") -> None:
110 return self.installer().install(state)
111
112 def install_packages(self, state: "MkosiState", packages: Sequence[str]) -> None:
113 return self.installer().install_packages(state, packages)
114
115 def remove_packages(self, state: "MkosiState", packages: Sequence[str]) -> None:
116 return self.installer().remove_packages(state, packages)
117
118 def filesystem(self) -> str:
119 return self.installer().filesystem()
120
121 def architecture(self, arch: Architecture) -> str:
122 return self.installer().architecture(arch)
123
124 def package_type(self) -> PackageType:
125 return self.installer().package_type()
126
127 def default_release(self) -> str:
128 return self.installer().default_release()
129
130 def default_tools_tree_distribution(self) -> Optional["Distribution"]:
131 return self.installer().default_tools_tree_distribution()
132
133 def tools_tree_repositories(self) -> list[str]:
134 return self.installer().tools_tree_repositories()
135
136 def tools_tree_packages(self) -> list[str]:
137 return self.installer().tools_tree_packages()
138
139 def installer(self) -> type[DistributionInstaller]:
140 modname = str(self).replace('-', '_')
141 mod = importlib.import_module(f"mkosi.distributions.{modname}")
142 installer = getattr(mod, "Installer")
143 assert issubclass(installer, DistributionInstaller)
144 return cast(type[DistributionInstaller], installer)
145
146
147 def detect_distribution() -> tuple[Optional[Distribution], Optional[str]]:
148 try:
149 os_release = read_os_release()
150 except FileNotFoundError:
151 return None, None
152
153 dist_id = os_release.get("ID", "linux")
154 dist_id_like = os_release.get("ID_LIKE", "").split()
155 version = os_release.get("VERSION", None)
156 version_id = os_release.get("VERSION_ID", None)
157 version_codename = os_release.get("VERSION_CODENAME", None)
158 extracted_codename = None
159
160 if version:
161 # extract Debian release codename
162 m = re.search(r"\((.*?)\)", version)
163 if m:
164 extracted_codename = m.group(1)
165
166 d: Optional[Distribution] = None
167 for the_id in [dist_id, *dist_id_like]:
168 d = Distribution.__members__.get(the_id, None)
169 if d is not None:
170 break
171
172 if d in {Distribution.debian, Distribution.ubuntu} and (version_codename or extracted_codename):
173 version_id = version_codename or extracted_codename
174
175 return d, version_id
176
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mkosi/distributions/__init__.py b/mkosi/distributions/__init__.py
--- a/mkosi/distributions/__init__.py
+++ b/mkosi/distributions/__init__.py
@@ -72,6 +72,8 @@
class Distribution(StrEnum):
+ # Please consult docs/distribution-policy.md and contact one
+ # of the mkosi maintainers before implementing a new distribution.
fedora = enum.auto()
debian = enum.auto()
ubuntu = enum.auto()
|
{"golden_diff": "diff --git a/mkosi/distributions/__init__.py b/mkosi/distributions/__init__.py\n--- a/mkosi/distributions/__init__.py\n+++ b/mkosi/distributions/__init__.py\n@@ -72,6 +72,8 @@\n \n \n class Distribution(StrEnum):\n+ # Please consult docs/distribution-policy.md and contact one\n+ # of the mkosi maintainers before implementing a new distribution.\n fedora = enum.auto()\n debian = enum.auto()\n ubuntu = enum.auto()\n", "issue": "[Meta] declare a policy about adding new distributions\nBefore people start creating issues asking to support their favorite distribution, I think that mkosi should declare its policy regarding new distributions support.\r\n\r\nThe policy should state in which terms (if any) you will be willing to support a new distributions.\n", "before_files": [{"content": "# SPDX-License-Identifier: LGPL-2.1+\n\nimport enum\nimport importlib\nimport re\nfrom collections.abc import Sequence\nfrom typing import TYPE_CHECKING, Optional, cast\n\nfrom mkosi.architecture import Architecture\nfrom mkosi.util import StrEnum, read_os_release\n\nif TYPE_CHECKING:\n from mkosi.state import MkosiState\n\n\nclass PackageType(StrEnum):\n none = enum.auto()\n rpm = enum.auto()\n deb = enum.auto()\n pkg = enum.auto()\n ebuild = enum.auto()\n\n\nclass DistributionInstaller:\n @classmethod\n def pretty_name(cls) -> str:\n raise NotImplementedError\n\n @classmethod\n def setup(cls, state: \"MkosiState\") -> None:\n raise NotImplementedError\n\n @classmethod\n def install(cls, state: \"MkosiState\") -> None:\n raise NotImplementedError\n\n @classmethod\n def install_packages(cls, state: \"MkosiState\", packages: Sequence[str]) -> None:\n raise NotImplementedError\n\n @classmethod\n def remove_packages(cls, state: \"MkosiState\", packages: Sequence[str]) -> None:\n raise NotImplementedError\n\n @classmethod\n def filesystem(cls) -> str:\n return \"ext4\"\n\n @staticmethod\n def architecture(arch: Architecture) -> str:\n return str(arch)\n\n @classmethod\n def package_type(cls) -> PackageType:\n return PackageType.none\n\n @classmethod\n def default_release(cls) -> str:\n return \"\"\n\n @classmethod\n def default_tools_tree_distribution(cls) -> Optional[\"Distribution\"]:\n return None\n\n @classmethod\n def tools_tree_repositories(cls) -> list[str]:\n return []\n\n @classmethod\n def tools_tree_packages(cls) -> list[str]:\n return []\n\n\nclass Distribution(StrEnum):\n fedora = enum.auto()\n debian = enum.auto()\n ubuntu = enum.auto()\n arch = enum.auto()\n opensuse = enum.auto()\n mageia = enum.auto()\n centos = enum.auto()\n rhel_ubi = enum.auto()\n openmandriva = enum.auto()\n rocky = enum.auto()\n alma = enum.auto()\n gentoo = enum.auto()\n custom = enum.auto()\n\n def is_centos_variant(self) -> bool:\n return self in (Distribution.centos, Distribution.alma, Distribution.rocky)\n\n def is_dnf_distribution(self) -> bool:\n return self in (\n Distribution.fedora,\n Distribution.mageia,\n Distribution.centos,\n Distribution.rhel_ubi,\n Distribution.openmandriva,\n Distribution.rocky,\n Distribution.alma,\n )\n\n def is_apt_distribution(self) -> bool:\n return self in (Distribution.debian, Distribution.ubuntu)\n\n def setup(self, state: \"MkosiState\") -> None:\n return self.installer().setup(state)\n\n def install(self, state: \"MkosiState\") -> None:\n return self.installer().install(state)\n\n def install_packages(self, state: \"MkosiState\", packages: Sequence[str]) -> None:\n return self.installer().install_packages(state, packages)\n\n def remove_packages(self, state: 
\"MkosiState\", packages: Sequence[str]) -> None:\n return self.installer().remove_packages(state, packages)\n\n def filesystem(self) -> str:\n return self.installer().filesystem()\n\n def architecture(self, arch: Architecture) -> str:\n return self.installer().architecture(arch)\n\n def package_type(self) -> PackageType:\n return self.installer().package_type()\n\n def default_release(self) -> str:\n return self.installer().default_release()\n\n def default_tools_tree_distribution(self) -> Optional[\"Distribution\"]:\n return self.installer().default_tools_tree_distribution()\n\n def tools_tree_repositories(self) -> list[str]:\n return self.installer().tools_tree_repositories()\n\n def tools_tree_packages(self) -> list[str]:\n return self.installer().tools_tree_packages()\n\n def installer(self) -> type[DistributionInstaller]:\n modname = str(self).replace('-', '_')\n mod = importlib.import_module(f\"mkosi.distributions.{modname}\")\n installer = getattr(mod, \"Installer\")\n assert issubclass(installer, DistributionInstaller)\n return cast(type[DistributionInstaller], installer)\n\n\ndef detect_distribution() -> tuple[Optional[Distribution], Optional[str]]:\n try:\n os_release = read_os_release()\n except FileNotFoundError:\n return None, None\n\n dist_id = os_release.get(\"ID\", \"linux\")\n dist_id_like = os_release.get(\"ID_LIKE\", \"\").split()\n version = os_release.get(\"VERSION\", None)\n version_id = os_release.get(\"VERSION_ID\", None)\n version_codename = os_release.get(\"VERSION_CODENAME\", None)\n extracted_codename = None\n\n if version:\n # extract Debian release codename\n m = re.search(r\"\\((.*?)\\)\", version)\n if m:\n extracted_codename = m.group(1)\n\n d: Optional[Distribution] = None\n for the_id in [dist_id, *dist_id_like]:\n d = Distribution.__members__.get(the_id, None)\n if d is not None:\n break\n\n if d in {Distribution.debian, Distribution.ubuntu} and (version_codename or extracted_codename):\n version_id = version_codename or extracted_codename\n\n return d, version_id\n", "path": "mkosi/distributions/__init__.py"}], "after_files": [{"content": "# SPDX-License-Identifier: LGPL-2.1+\n\nimport enum\nimport importlib\nimport re\nfrom collections.abc import Sequence\nfrom typing import TYPE_CHECKING, Optional, cast\n\nfrom mkosi.architecture import Architecture\nfrom mkosi.util import StrEnum, read_os_release\n\nif TYPE_CHECKING:\n from mkosi.state import MkosiState\n\n\nclass PackageType(StrEnum):\n none = enum.auto()\n rpm = enum.auto()\n deb = enum.auto()\n pkg = enum.auto()\n ebuild = enum.auto()\n\n\nclass DistributionInstaller:\n @classmethod\n def pretty_name(cls) -> str:\n raise NotImplementedError\n\n @classmethod\n def setup(cls, state: \"MkosiState\") -> None:\n raise NotImplementedError\n\n @classmethod\n def install(cls, state: \"MkosiState\") -> None:\n raise NotImplementedError\n\n @classmethod\n def install_packages(cls, state: \"MkosiState\", packages: Sequence[str]) -> None:\n raise NotImplementedError\n\n @classmethod\n def remove_packages(cls, state: \"MkosiState\", packages: Sequence[str]) -> None:\n raise NotImplementedError\n\n @classmethod\n def filesystem(cls) -> str:\n return \"ext4\"\n\n @staticmethod\n def architecture(arch: Architecture) -> str:\n return str(arch)\n\n @classmethod\n def package_type(cls) -> PackageType:\n return PackageType.none\n\n @classmethod\n def default_release(cls) -> str:\n return \"\"\n\n @classmethod\n def default_tools_tree_distribution(cls) -> Optional[\"Distribution\"]:\n return None\n\n @classmethod\n 
def tools_tree_repositories(cls) -> list[str]:\n return []\n\n @classmethod\n def tools_tree_packages(cls) -> list[str]:\n return []\n\n\nclass Distribution(StrEnum):\n # Please consult docs/distribution-policy.md and contact one\n # of the mkosi maintainers before implementing a new distribution.\n fedora = enum.auto()\n debian = enum.auto()\n ubuntu = enum.auto()\n arch = enum.auto()\n opensuse = enum.auto()\n mageia = enum.auto()\n centos = enum.auto()\n rhel_ubi = enum.auto()\n openmandriva = enum.auto()\n rocky = enum.auto()\n alma = enum.auto()\n gentoo = enum.auto()\n custom = enum.auto()\n\n def is_centos_variant(self) -> bool:\n return self in (Distribution.centos, Distribution.alma, Distribution.rocky)\n\n def is_dnf_distribution(self) -> bool:\n return self in (\n Distribution.fedora,\n Distribution.mageia,\n Distribution.centos,\n Distribution.rhel_ubi,\n Distribution.openmandriva,\n Distribution.rocky,\n Distribution.alma,\n )\n\n def is_apt_distribution(self) -> bool:\n return self in (Distribution.debian, Distribution.ubuntu)\n\n def setup(self, state: \"MkosiState\") -> None:\n return self.installer().setup(state)\n\n def install(self, state: \"MkosiState\") -> None:\n return self.installer().install(state)\n\n def install_packages(self, state: \"MkosiState\", packages: Sequence[str]) -> None:\n return self.installer().install_packages(state, packages)\n\n def remove_packages(self, state: \"MkosiState\", packages: Sequence[str]) -> None:\n return self.installer().remove_packages(state, packages)\n\n def filesystem(self) -> str:\n return self.installer().filesystem()\n\n def architecture(self, arch: Architecture) -> str:\n return self.installer().architecture(arch)\n\n def package_type(self) -> PackageType:\n return self.installer().package_type()\n\n def default_release(self) -> str:\n return self.installer().default_release()\n\n def default_tools_tree_distribution(self) -> Optional[\"Distribution\"]:\n return self.installer().default_tools_tree_distribution()\n\n def tools_tree_repositories(self) -> list[str]:\n return self.installer().tools_tree_repositories()\n\n def tools_tree_packages(self) -> list[str]:\n return self.installer().tools_tree_packages()\n\n def installer(self) -> type[DistributionInstaller]:\n modname = str(self).replace('-', '_')\n mod = importlib.import_module(f\"mkosi.distributions.{modname}\")\n installer = getattr(mod, \"Installer\")\n assert issubclass(installer, DistributionInstaller)\n return cast(type[DistributionInstaller], installer)\n\n\ndef detect_distribution() -> tuple[Optional[Distribution], Optional[str]]:\n try:\n os_release = read_os_release()\n except FileNotFoundError:\n return None, None\n\n dist_id = os_release.get(\"ID\", \"linux\")\n dist_id_like = os_release.get(\"ID_LIKE\", \"\").split()\n version = os_release.get(\"VERSION\", None)\n version_id = os_release.get(\"VERSION_ID\", None)\n version_codename = os_release.get(\"VERSION_CODENAME\", None)\n extracted_codename = None\n\n if version:\n # extract Debian release codename\n m = re.search(r\"\\((.*?)\\)\", version)\n if m:\n extracted_codename = m.group(1)\n\n d: Optional[Distribution] = None\n for the_id in [dist_id, *dist_id_like]:\n d = Distribution.__members__.get(the_id, None)\n if d is not None:\n break\n\n if d in {Distribution.debian, Distribution.ubuntu} and (version_codename or extracted_codename):\n version_id = version_codename or extracted_codename\n\n return d, version_id\n", "path": "mkosi/distributions/__init__.py"}]}
| 1,943 | 117 |
gh_patches_debug_23967
|
rasdani/github-patches
|
git_diff
|
pwr-Solaar__Solaar-1856
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add git information in about and console
**Information**
Installed version of solaar:: 1.1.7+git1897-9c361c4~ubuntu22.10.1
Distribution: Ubuntu 22.10 + https://launchpad.net/~solaar-unifying/+archive/ubuntu/ppa (git)
Kernel version: Linux 6.1.0-060100rc5-generic x86_64 GNU/Linux
Example:
a)

b)
$ solaar show
Solaar version 1.1.7-9c361c4
[...]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python3
2
3 from glob import glob as _glob
4
5 try:
6 from setuptools import setup
7 except ImportError:
8 from distutils.core import setup
9
10 main_ns = {}
11 with open('lib/solaar/__init__.py') as ver_file:
12 exec(ver_file.read(), main_ns)
13 NAME = main_ns['NAME']
14 __version__ = main_ns['__version__']
15
16
17 def _data_files():
18 from os.path import dirname as _dirname
19
20 yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')
21 yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')
22 yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']
23
24 for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):
25 yield _dirname(mo), [mo]
26
27 yield 'share/applications', ['share/applications/solaar.desktop']
28 yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']
29 yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']
30
31 del _dirname
32
33
34 setup(
35 name=NAME.lower(),
36 version=__version__,
37 description='Linux device manager for Logitech receivers, keyboards, mice, and tablets.',
38 long_description='''
39 Solaar is a Linux device manager for many Logitech peripherals that connect through
40 Unifying and other receivers or via USB or Bluetooth.
41 Solaar is able to pair/unpair devices with receivers and show and modify some of the
42 modifiable features of devices.
43 For instructions on installing Solaar see https://pwr-solaar.github.io/Solaar/installation'''.strip(),
44 author='Daniel Pavel',
45 license='GPLv2',
46 url='http://pwr-solaar.github.io/Solaar/',
47 classifiers=[
48 'Development Status :: 4 - Beta',
49 'Environment :: X11 Applications :: GTK',
50 'Environment :: Console',
51 'Intended Audience :: End Users/Desktop',
52 'License :: DFSG approved',
53 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',
54 'Natural Language :: English',
55 'Programming Language :: Python :: 3 :: Only',
56 'Operating System :: POSIX :: Linux',
57 'Topic :: Utilities',
58 ],
59 platforms=['linux'],
60
61 # sudo apt install python-gi python3-gi \
62 # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1
63 # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],
64 python_requires='>=3.7',
65 install_requires=[
66 'evdev (>= 1.1.2)',
67 'pyudev (>= 0.13)',
68 'PyYAML (>= 3.12)',
69 'python-xlib (>= 0.27)',
70 'psutil (>= 5.4.3)',
71 ],
72 extras_require={
73 'report-descriptor': ['hid-parser'],
74 'desktop-notifications': ['Notify (>= 0.7)'],
75 },
76 package_dir={'': 'lib'},
77 packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],
78 data_files=list(_data_files()),
79 scripts=_glob('bin/*'),
80 )
81
```
Path: `lib/solaar/__init__.py`
Content:
```
1 # -*- python-mode -*-
2
3 ## Copyright (C) 2012-2013 Daniel Pavel
4 ##
5 ## This program is free software; you can redistribute it and/or modify
6 ## it under the terms of the GNU General Public License as published by
7 ## the Free Software Foundation; either version 2 of the License, or
8 ## (at your option) any later version.
9 ##
10 ## This program is distributed in the hope that it will be useful,
11 ## but WITHOUT ANY WARRANTY; without even the implied warranty of
12 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 ## GNU General Public License for more details.
14 ##
15 ## You should have received a copy of the GNU General Public License along
16 ## with this program; if not, write to the Free Software Foundation, Inc.,
17 ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
18
19 __version__ = '1.1.7'
20 NAME = 'Solaar'
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lib/solaar/__init__.py b/lib/solaar/__init__.py
--- a/lib/solaar/__init__.py
+++ b/lib/solaar/__init__.py
@@ -16,5 +16,11 @@
## with this program; if not, write to the Free Software Foundation, Inc.,
## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-__version__ = '1.1.7'
+__version_short__ = '1.1.7'
NAME = 'Solaar'
+
+try:
+ import gitinfo
+ __version__ = __version_short__ + '-' + gitinfo.get_git_info()['commit'][:8]
+except Exception:
+ __version__ = __version_short__
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -11,7 +11,7 @@
with open('lib/solaar/__init__.py') as ver_file:
exec(ver_file.read(), main_ns)
NAME = main_ns['NAME']
-__version__ = main_ns['__version__']
+__version__ = main_ns['__version_short__']
def _data_files():
@@ -72,6 +72,7 @@
extras_require={
'report-descriptor': ['hid-parser'],
'desktop-notifications': ['Notify (>= 0.7)'],
+ 'git-commit': ['python-git-info'],
},
package_dir={'': 'lib'},
packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],
|
{"golden_diff": "diff --git a/lib/solaar/__init__.py b/lib/solaar/__init__.py\n--- a/lib/solaar/__init__.py\n+++ b/lib/solaar/__init__.py\n@@ -16,5 +16,11 @@\n ## with this program; if not, write to the Free Software Foundation, Inc.,\n ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n \n-__version__ = '1.1.7'\n+__version_short__ = '1.1.7'\n NAME = 'Solaar'\n+\n+try:\n+ import gitinfo\n+ __version__ = __version_short__ + '-' + gitinfo.get_git_info()['commit'][:8]\n+except Exception:\n+ __version__ = __version_short__\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -11,7 +11,7 @@\n with open('lib/solaar/__init__.py') as ver_file:\n exec(ver_file.read(), main_ns)\n NAME = main_ns['NAME']\n-__version__ = main_ns['__version__']\n+__version__ = main_ns['__version_short__']\n \n \n def _data_files():\n@@ -72,6 +72,7 @@\n extras_require={\n 'report-descriptor': ['hid-parser'],\n 'desktop-notifications': ['Notify (>= 0.7)'],\n+ 'git-commit': ['python-git-info'],\n },\n package_dir={'': 'lib'},\n packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n", "issue": "Add git information in about and console\n**Information**\r\nInstalled version of solaar:: 1.1.7+git1897-9c361c4~ubuntu22.10.1\r\nDistribution: Ubuntu 22.10 + https://launchpad.net/~solaar-unifying/+archive/ubuntu/ppa (git)\r\nKernel version: Linux 6.1.0-060100rc5-generic x86_64 GNU/Linux\r\n\r\nExample:\r\na)\r\n\r\n\r\nb)\r\n$ solaar show\r\nSolaar version 1.1.7-9c361c4\r\n\r\n[...]\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nmain_ns = {}\nwith open('lib/solaar/__init__.py') as ver_file:\n exec(ver_file.read(), main_ns)\nNAME = main_ns['NAME']\n__version__ = main_ns['__version__']\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=__version__,\n description='Linux device manager for Logitech receivers, keyboards, mice, and tablets.',\n long_description='''\nSolaar is a Linux device manager for many Logitech peripherals that connect through\nUnifying and other receivers or via USB or Bluetooth.\nSolaar is able to pair/unpair devices with receivers and show and modify some of the\nmodifiable features of devices.\nFor instructions on installing Solaar see https://pwr-solaar.github.io/Solaar/installation'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n 
platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.7',\n install_requires=[\n 'evdev (>= 1.1.2)',\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 3.12)',\n 'python-xlib (>= 0.27)',\n 'psutil (>= 5.4.3)',\n ],\n extras_require={\n 'report-descriptor': ['hid-parser'],\n 'desktop-notifications': ['Notify (>= 0.7)'],\n },\n package_dir={'': 'lib'},\n packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n scripts=_glob('bin/*'),\n)\n", "path": "setup.py"}, {"content": "# -*- python-mode -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\n__version__ = '1.1.7'\nNAME = 'Solaar'\n", "path": "lib/solaar/__init__.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nmain_ns = {}\nwith open('lib/solaar/__init__.py') as ver_file:\n exec(ver_file.read(), main_ns)\nNAME = main_ns['NAME']\n__version__ = main_ns['__version_short__']\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=__version__,\n description='Linux device manager for Logitech receivers, keyboards, mice, and tablets.',\n long_description='''\nSolaar is a Linux device manager for many Logitech peripherals that connect through\nUnifying and other receivers or via USB or Bluetooth.\nSolaar is able to pair/unpair devices with receivers and show and modify some of the\nmodifiable features of devices.\nFor instructions on installing Solaar see https://pwr-solaar.github.io/Solaar/installation'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 
'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.7',\n install_requires=[\n 'evdev (>= 1.1.2)',\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 3.12)',\n 'python-xlib (>= 0.27)',\n 'psutil (>= 5.4.3)',\n ],\n extras_require={\n 'report-descriptor': ['hid-parser'],\n 'desktop-notifications': ['Notify (>= 0.7)'],\n 'git-commit': ['python-git-info'],\n },\n package_dir={'': 'lib'},\n packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n scripts=_glob('bin/*'),\n)\n", "path": "setup.py"}, {"content": "# -*- python-mode -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\n__version_short__ = '1.1.7'\nNAME = 'Solaar'\n\ntry:\n import gitinfo\n __version__ = __version_short__ + '-' + gitinfo.get_git_info()['commit'][:8]\nexcept Exception:\n __version__ = __version_short__\n", "path": "lib/solaar/__init__.py"}]}
| 1,652 | 368 |
gh_patches_debug_2662
|
rasdani/github-patches
|
git_diff
|
scikit-image__scikit-image-6343
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
imageIO warnings due to v2 -> v3 migration
## Description
As of imageIO 2.16.0 (Feb22) there are now v2 and v3 namespaces in addition to the top-level namespace. As of 2.16.2 (released Apr22) directly using the top-level namespace results in warnings to either explicitly opt into the v3 API or opt out and import the v2.
This in turn causes warnings when using `skimage.io.imread`.
I suggest that this is a good first issue as there are no API design choices here (at least to start) and it only needs the
```python
try:
import newway
except ImportError:
import old way
```
dance.
The warnings look like (lifted from a test suite):
```
____________________________________________________________________________ ReaderSequence.test_slice_of_slice ____________________________________________________________________________
pims/tests/test_imseq.py:256: in setUp
self.v = self.klass(self.filename, **self.kwargs)
pims/image_sequence.py:217: in __init__
with self.reader_cls(self._filepaths[0], **self.kwargs) as reader:
pims/image_reader.py:60: in __init__
self._data = Frame(imread(filename, **kwargs), frame_no=0)
../../../../.pybuild/bleeding/lib/python3.11/contextlib.py:155: in __exit__
self.gen.throw(typ, value, traceback)
../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/skimage/io/util.py:43: in file_or_url_context
yield resource_name
../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/skimage/io/_io.py:53: in imread
img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/skimage/io/manage_plugins.py:207: in call_plugin
return func(*args, **kwargs)
../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/skimage/io/_plugins/imageio_plugin.py:10: in imread
return np.asarray(imageio_imread(*args, **kwargs))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
uri = '/home/tcaswell/source/bnl/soft-matter/pims/pims/tests/data/image_sequence3d/file001.png', format = None, kwargs = {}
def imread(uri, format=None, **kwargs):
"""imread(uri, format=None, **kwargs)
Reads an image from the specified file. Returns a numpy array, which
comes with a dict of meta data at its 'meta' attribute.
Note that the image data is returned as-is, and may not always have
a dtype of uint8 (and thus may differ from what e.g. PIL returns).
Parameters
----------
uri : {str, pathlib.Path, bytes, file}
The resource to load the image from, e.g. a filename, pathlib.Path,
http address or file object, see the docs for more info.
format : str
The format to use to read the file. By default imageio selects
the appropriate for you based on the filename and its contents.
kwargs : ...
Further keyword arguments are passed to the reader. See :func:`.help`
to see what arguments are available for a particular format.
"""
> warnings.warn(
"Starting with ImageIO v3 the behavior of this function will switch to that of"
" iio.v3.imread. To keep the current behavior (and make this warning dissapear)"
" use `import imageio.v2 as imageio` or call `imageio.v2.imread` directly.",
DeprecationWarning,
)
E DeprecationWarning: Starting with ImageIO v3 the behavior of this function will switch to that of iio.v3.imread. To keep the current behavior (and make this warning dissapear) use `import imageio.v2 as imageio` or call `imageio.v2.imread` directly.
../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/imageio/__init__.py:89: DeprecationWarning
```
--- END ISSUE ---
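A concrete form of the `try new / except ImportError: old` dance sketched above, as it might look in a thin `imread` wrapper (assumes `numpy` and `imageio` are installed; the `imageio.v2` namespace exists from imageio 2.16.0 onward):

```python
# Prefer the explicit v2 namespace (imageio >= 2.16) so no DeprecationWarning
# is emitted, and fall back to the legacy top-level names on older releases.
import numpy as np

try:
    from imageio.v2 import imread as imageio_imread, imsave  # noqa: F401
except ImportError:
    from imageio import imread as imageio_imread, imsave  # noqa: F401


def imread(*args, **kwargs):
    """Read an image and return it as a plain numpy ndarray."""
    return np.asarray(imageio_imread(*args, **kwargs))
```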
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/io/_plugins/imageio_plugin.py`
Content:
```
1 __all__ = ['imread', 'imsave']
2
3 from functools import wraps
4 import numpy as np
5 from imageio import imread as imageio_imread, imsave
6
7
8 @wraps(imageio_imread)
9 def imread(*args, **kwargs):
10 return np.asarray(imageio_imread(*args, **kwargs))
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/skimage/io/_plugins/imageio_plugin.py b/skimage/io/_plugins/imageio_plugin.py
--- a/skimage/io/_plugins/imageio_plugin.py
+++ b/skimage/io/_plugins/imageio_plugin.py
@@ -2,7 +2,12 @@
from functools import wraps
import numpy as np
-from imageio import imread as imageio_imread, imsave
+
+try:
+ # Try using the v2 API directly to avoid a warning from imageio >= 2.16.2
+ from imageio.v2 import imread as imageio_imread, imsave
+except ImportError:
+ from imageio import imread as imageio_imread, imsave
@wraps(imageio_imread)
|
{"golden_diff": "diff --git a/skimage/io/_plugins/imageio_plugin.py b/skimage/io/_plugins/imageio_plugin.py\n--- a/skimage/io/_plugins/imageio_plugin.py\n+++ b/skimage/io/_plugins/imageio_plugin.py\n@@ -2,7 +2,12 @@\n \n from functools import wraps\n import numpy as np\n-from imageio import imread as imageio_imread, imsave\n+\n+try:\n+ # Try using the v2 API directly to avoid a warning from imageio >= 2.16.2\n+ from imageio.v2 import imread as imageio_imread, imsave\n+except ImportError:\n+ from imageio import imread as imageio_imread, imsave\n \n \n @wraps(imageio_imread)\n", "issue": "imageIO warnings due to v2 -> v3 migration\n## Description\r\n\r\nAs of imageIO 2.16.0 (Feb22) there are now a v2 and v3 namespaces in addition to the top-level namespace. As of 2.16.2 (released Apr22) directly using the top-level namespace results in warnings to either explicitly opt-into the v3 API or opt-out and import the v2. \r\n\r\nThis in turn causes warnings when using `skimage.io.imread`. \r\n\r\nI suggest that this is a good first issue as there is no API design choices here (at least to start) and only needs the \r\n```python\r\ntry: \r\n import newway\r\nexcept ImportError: \r\n import old way\r\n```\r\ndance.\r\n\r\nThe warnings look like (lifted from a test suite):\r\n\r\n```\r\n____________________________________________________________________________ ReaderSequence.test_slice_of_slice ____________________________________________________________________________\r\npims/tests/test_imseq.py:256: in setUp\r\n self.v = self.klass(self.filename, **self.kwargs)\r\npims/image_sequence.py:217: in __init__\r\n with self.reader_cls(self._filepaths[0], **self.kwargs) as reader:\r\npims/image_reader.py:60: in __init__\r\n self._data = Frame(imread(filename, **kwargs), frame_no=0)\r\n../../../../.pybuild/bleeding/lib/python3.11/contextlib.py:155: in __exit__\r\n self.gen.throw(typ, value, traceback)\r\n../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/skimage/io/util.py:43: in file_or_url_context\r\n yield resource_name\r\n../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/skimage/io/_io.py:53: in imread\r\n img = call_plugin('imread', fname, plugin=plugin, **plugin_args)\r\n../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/skimage/io/manage_plugins.py:207: in call_plugin\r\n return func(*args, **kwargs)\r\n../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/skimage/io/_plugins/imageio_plugin.py:10: in imread\r\n return np.asarray(imageio_imread(*args, **kwargs))\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nuri = '/home/tcaswell/source/bnl/soft-matter/pims/pims/tests/data/image_sequence3d/file001.png', format = None, kwargs = {}\r\n\r\n def imread(uri, format=None, **kwargs):\r\n \"\"\"imread(uri, format=None, **kwargs)\r\n \r\n Reads an image from the specified file. Returns a numpy array, which\r\n comes with a dict of meta data at its 'meta' attribute.\r\n \r\n Note that the image data is returned as-is, and may not always have\r\n a dtype of uint8 (and thus may differ from what e.g. PIL returns).\r\n \r\n Parameters\r\n ----------\r\n uri : {str, pathlib.Path, bytes, file}\r\n The resource to load the image from, e.g. a filename, pathlib.Path,\r\n http address or file object, see the docs for more info.\r\n format : str\r\n The format to use to read the file. 
By default imageio selects\r\n the appropriate for you based on the filename and its contents.\r\n kwargs : ...\r\n Further keyword arguments are passed to the reader. See :func:`.help`\r\n to see what arguments are available for a particular format.\r\n \"\"\"\r\n \r\n> warnings.warn(\r\n \"Starting with ImageIO v3 the behavior of this function will switch to that of\"\r\n \" iio.v3.imread. To keep the current behavior (and make this warning dissapear)\"\r\n \" use `import imageio.v2 as imageio` or call `imageio.v2.imread` directly.\",\r\n DeprecationWarning,\r\n )\r\nE DeprecationWarning: Starting with ImageIO v3 the behavior of this function will switch to that of iio.v3.imread. To keep the current behavior (and make this warning dissapear) use `import imageio.v2 as imageio` or call `imageio.v2.imread` directly.\r\n\r\n../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/imageio/__init__.py:89: DeprecationWarning\r\n\r\n```\n", "before_files": [{"content": "__all__ = ['imread', 'imsave']\n\nfrom functools import wraps\nimport numpy as np\nfrom imageio import imread as imageio_imread, imsave\n\n\n@wraps(imageio_imread)\ndef imread(*args, **kwargs):\n return np.asarray(imageio_imread(*args, **kwargs))\n", "path": "skimage/io/_plugins/imageio_plugin.py"}], "after_files": [{"content": "__all__ = ['imread', 'imsave']\n\nfrom functools import wraps\nimport numpy as np\n\ntry:\n # Try using the v2 API directly to avoid a warning from imageio >= 2.16.2\n from imageio.v2 import imread as imageio_imread, imsave\nexcept ImportError:\n from imageio import imread as imageio_imread, imsave\n\n\n@wraps(imageio_imread)\ndef imread(*args, **kwargs):\n return np.asarray(imageio_imread(*args, **kwargs))\n", "path": "skimage/io/_plugins/imageio_plugin.py"}]}
| 1,356 | 165 |
gh_patches_debug_16306
|
rasdani/github-patches
|
git_diff
|
ARM-DOE__ACT-653
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Example Incorrect
### Description
This example says it's for plotting sounding data, but the example uses MET data. We should update it to show what was originally intended.
https://arm-doe.github.io/ACT/source/auto_examples/plotting/plot_sonde.html
--- END ISSUE ---
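For reference, a corrected example would load one of ACT's bundled sounding (sonde) sample files and plot sonde variables rather than MET ones; a sketch (variable names `wspd`, `tdry`, and `rh` are the usual ARM sonde fields):

```python
# Sketch: plot a timeseries from an ARM sounding (sonde) sample file instead
# of the surface MET data the current example loads.
from matplotlib import pyplot as plt

import act

sonde_ds = act.io.armfiles.read_netcdf(act.tests.sample_files.EXAMPLE_SONDE1)

display = act.plotting.TimeSeriesDisplay(sonde_ds)
display.add_subplots((3,), figsize=(15, 10))
display.plot('wspd', subplot_index=(0,))  # wind speed
display.plot('tdry', subplot_index=(1,))  # dry-bulb temperature
display.plot('rh', subplot_index=(2,))    # relative humidity
plt.show()
```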
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/plotting/plot_sonde.py`
Content:
```
1 """
2 Plot a timeseries of sounding data
3 ----------------------------------------------------
4
5 This is a simple example for how to plot a timeseries of sounding
6 data from the ARM SGP site.
7
8 Author: Robert Jackson
9 """
10
11 from matplotlib import pyplot as plt
12
13 import act
14
15 files = act.tests.sample_files.EXAMPLE_MET_WILDCARD
16 met_ds = act.io.armfiles.read_netcdf(files)
17 print(met_ds)
18 met_temp = met_ds.temp_mean
19 met_rh = met_ds.rh_mean
20 met_lcl = (20.0 + met_temp / 5.0) * (100.0 - met_rh) / 1000.0
21 met_ds['met_lcl'] = met_lcl * 1000.0
22 met_ds['met_lcl'].attrs['units'] = 'm'
23 met_ds['met_lcl'].attrs['long_name'] = 'LCL Calculated from SGP MET E13'
24
25 # Plot data
26 display = act.plotting.TimeSeriesDisplay(met_ds)
27 display.add_subplots((3,), figsize=(15, 10))
28 display.plot('wspd_vec_mean', subplot_index=(0,))
29 display.plot('temp_mean', subplot_index=(1,))
30 display.plot('rh_mean', subplot_index=(2,))
31 plt.show()
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/plotting/plot_sonde.py b/examples/plotting/plot_sonde.py
--- a/examples/plotting/plot_sonde.py
+++ b/examples/plotting/plot_sonde.py
@@ -12,20 +12,14 @@
import act
-files = act.tests.sample_files.EXAMPLE_MET_WILDCARD
-met_ds = act.io.armfiles.read_netcdf(files)
-print(met_ds)
-met_temp = met_ds.temp_mean
-met_rh = met_ds.rh_mean
-met_lcl = (20.0 + met_temp / 5.0) * (100.0 - met_rh) / 1000.0
-met_ds['met_lcl'] = met_lcl * 1000.0
-met_ds['met_lcl'].attrs['units'] = 'm'
-met_ds['met_lcl'].attrs['long_name'] = 'LCL Calculated from SGP MET E13'
+files = act.tests.sample_files.EXAMPLE_SONDE1
+sonde_ds = act.io.armfiles.read_netcdf(files)
+print(sonde_ds)
# Plot data
-display = act.plotting.TimeSeriesDisplay(met_ds)
+display = act.plotting.TimeSeriesDisplay(sonde_ds)
display.add_subplots((3,), figsize=(15, 10))
-display.plot('wspd_vec_mean', subplot_index=(0,))
-display.plot('temp_mean', subplot_index=(1,))
-display.plot('rh_mean', subplot_index=(2,))
+display.plot('wspd', subplot_index=(0,))
+display.plot('tdry', subplot_index=(1,))
+display.plot('rh', subplot_index=(2,))
plt.show()
|
{"golden_diff": "diff --git a/examples/plotting/plot_sonde.py b/examples/plotting/plot_sonde.py\n--- a/examples/plotting/plot_sonde.py\n+++ b/examples/plotting/plot_sonde.py\n@@ -12,20 +12,14 @@\n \n import act\n \n-files = act.tests.sample_files.EXAMPLE_MET_WILDCARD\n-met_ds = act.io.armfiles.read_netcdf(files)\n-print(met_ds)\n-met_temp = met_ds.temp_mean\n-met_rh = met_ds.rh_mean\n-met_lcl = (20.0 + met_temp / 5.0) * (100.0 - met_rh) / 1000.0\n-met_ds['met_lcl'] = met_lcl * 1000.0\n-met_ds['met_lcl'].attrs['units'] = 'm'\n-met_ds['met_lcl'].attrs['long_name'] = 'LCL Calculated from SGP MET E13'\n+files = act.tests.sample_files.EXAMPLE_SONDE1\n+sonde_ds = act.io.armfiles.read_netcdf(files)\n+print(sonde_ds)\n \n # Plot data\n-display = act.plotting.TimeSeriesDisplay(met_ds)\n+display = act.plotting.TimeSeriesDisplay(sonde_ds)\n display.add_subplots((3,), figsize=(15, 10))\n-display.plot('wspd_vec_mean', subplot_index=(0,))\n-display.plot('temp_mean', subplot_index=(1,))\n-display.plot('rh_mean', subplot_index=(2,))\n+display.plot('wspd', subplot_index=(0,))\n+display.plot('tdry', subplot_index=(1,))\n+display.plot('rh', subplot_index=(2,))\n plt.show()\n", "issue": "Example Incorrect\n### Description\r\n\r\nThis example says it's for plotting sounding data but the example uses MET data. We should update this to be what it was originally intended.\r\nhttps://arm-doe.github.io/ACT/source/auto_examples/plotting/plot_sonde.html\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nPlot a timeseries of sounding data\n----------------------------------------------------\n\nThis is a simple example for how to plot a timeseries of sounding\ndata from the ARM SGP site.\n\nAuthor: Robert Jackson\n\"\"\"\n\nfrom matplotlib import pyplot as plt\n\nimport act\n\nfiles = act.tests.sample_files.EXAMPLE_MET_WILDCARD\nmet_ds = act.io.armfiles.read_netcdf(files)\nprint(met_ds)\nmet_temp = met_ds.temp_mean\nmet_rh = met_ds.rh_mean\nmet_lcl = (20.0 + met_temp / 5.0) * (100.0 - met_rh) / 1000.0\nmet_ds['met_lcl'] = met_lcl * 1000.0\nmet_ds['met_lcl'].attrs['units'] = 'm'\nmet_ds['met_lcl'].attrs['long_name'] = 'LCL Calculated from SGP MET E13'\n\n# Plot data\ndisplay = act.plotting.TimeSeriesDisplay(met_ds)\ndisplay.add_subplots((3,), figsize=(15, 10))\ndisplay.plot('wspd_vec_mean', subplot_index=(0,))\ndisplay.plot('temp_mean', subplot_index=(1,))\ndisplay.plot('rh_mean', subplot_index=(2,))\nplt.show()\n", "path": "examples/plotting/plot_sonde.py"}], "after_files": [{"content": "\"\"\"\nPlot a timeseries of sounding data\n----------------------------------------------------\n\nThis is a simple example for how to plot a timeseries of sounding\ndata from the ARM SGP site.\n\nAuthor: Robert Jackson\n\"\"\"\n\nfrom matplotlib import pyplot as plt\n\nimport act\n\nfiles = act.tests.sample_files.EXAMPLE_SONDE1\nsonde_ds = act.io.armfiles.read_netcdf(files)\nprint(sonde_ds)\n\n# Plot data\ndisplay = act.plotting.TimeSeriesDisplay(sonde_ds)\ndisplay.add_subplots((3,), figsize=(15, 10))\ndisplay.plot('wspd', subplot_index=(0,))\ndisplay.plot('tdry', subplot_index=(1,))\ndisplay.plot('rh', subplot_index=(2,))\nplt.show()\n", "path": "examples/plotting/plot_sonde.py"}]}
| 652 | 379 |
gh_patches_debug_135
|
rasdani/github-patches
|
git_diff
|
encode__httpx-691
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Version 0.10.0
Let's get squared away what we need for Version 0.10.0.
The key here is we need to make sure we've sorted out any API that'd block us from reintroducing the sync API, so...
* [x] Let's drop `.read()` from `Request` - It's not documented anywhere, not *actually* required by users, and the content is available on `request.stream` if absolutely needed. #679
* [x] We need `Response.read()` and `Response.close()` to be `Response.aread()` and `Response.aclose()`. The only point where this is relevant is users performing conditional reads inside a `with httpx.stream(method, url) as response` block. #674
* [x] We ought to have `Client.close()` become `Client.aclose()` for consistency. #675
* [x] Good point to get #617 in.
* [x] We'll need to change `response.next()`. We could *either* underspecify the return type, and allow it to be either sync or async depending on the context, *or* we use `response.anext()` and `response.next()`. #676
* [ ] ~Good point to address~ #656.
* [x] *Potentially* we could introduce an `httpx.AsyncClient` synonym for `httpx.Client`, and advise our users to switch towards that usage, so that there are no breaking changes for them once we fully reintroduce a sync API all the way through. #680
--- END ISSUE ---
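Taken together, the renames in the checklist above would make asynchronous usage look roughly like this (a sketch only; the method names come from the checklist itself, and the URL is a placeholder):

```python
# Sketch of the renamed async surface: aread()/aclose() on responses,
# aclose() on the client, and AsyncClient as the forward-compatible name.
import asyncio

import httpx


async def main() -> None:
    client = httpx.AsyncClient()  # synonym for Client, easing the later sync/async split
    try:
        response = await client.get("https://www.example.org/")  # placeholder URL
        body = await response.aread()   # previously response.read()
        print(response.status_code, len(body))
        await response.aclose()         # previously response.close()
    finally:
        await client.aclose()           # previously client.close()


asyncio.run(main())
```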
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `httpx/__version__.py`
Content:
```
1 __title__ = "httpx"
2 __description__ = "A next generation HTTP client, for Python 3."
3 __version__ = "0.9.5"
4
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/httpx/__version__.py b/httpx/__version__.py
--- a/httpx/__version__.py
+++ b/httpx/__version__.py
@@ -1,3 +1,3 @@
__title__ = "httpx"
__description__ = "A next generation HTTP client, for Python 3."
-__version__ = "0.9.5"
+__version__ = "0.10.0"
|
{"golden_diff": "diff --git a/httpx/__version__.py b/httpx/__version__.py\n--- a/httpx/__version__.py\n+++ b/httpx/__version__.py\n@@ -1,3 +1,3 @@\n __title__ = \"httpx\"\n __description__ = \"A next generation HTTP client, for Python 3.\"\n-__version__ = \"0.9.5\"\n+__version__ = \"0.10.0\"\n", "issue": "Version 0.10.0\nLet's get squared away what we need for Version 0.10.0.\r\n\r\nThe key here is we need to make sure we've sorted out any API that'd block us from reintroducing the sync API, so...\r\n\r\n* [x] Let's drop `.read()` from `Request` - It's not documented anywhere, not *actually* required by users, and the content is available on `request.stream` if absolutely needed. #679\r\n* [x] We need `Response.read()` and `Response.close()` to be `Response.aread()` and `Response.aclose()`. The only point where this is relevant is users performing conditional reads inside a `with httpx.stream(method, url) as response` block. #674\r\n* [x] We ought to have `Client.close()` become `Client.aclose()` for consistency. #675\r\n* [x] Good point to get #617 in.\r\n* [x] We'll need to change `response.next()`. We could *either* underspecify the return type, and allow it to be either sync or async depending on the context, *or* we use `response.anext()` and `response.next()`. #676\r\n* [ ] ~Good point to address~ #656.\r\n* [x] *Potentially* we could introduce an `httpx.AsyncClient` synonm for `httpx.Client`, and advise our users to switch towards that usage, so that there's no breaking changes for them once we fully reintroduce a sync API all the way through. #680\n", "before_files": [{"content": "__title__ = \"httpx\"\n__description__ = \"A next generation HTTP client, for Python 3.\"\n__version__ = \"0.9.5\"\n", "path": "httpx/__version__.py"}], "after_files": [{"content": "__title__ = \"httpx\"\n__description__ = \"A next generation HTTP client, for Python 3.\"\n__version__ = \"0.10.0\"\n", "path": "httpx/__version__.py"}]}
| 644 | 95 |
gh_patches_debug_674
|
rasdani/github-patches
|
git_diff
|
pyodide__pyodide-3562
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error about `--user` and `--target` flag when installing xbuildenv
I sometimes get the following error while installing the xbuild environment:
```bash
$ pyodide build .
Downloading xbuild environment
Installing xbuild environment
stderr:
ERROR: Can not combine '--user' and '--target'
[notice] A new release of pip available: 22.3.1 -> 23.0
[notice] To update, run: /home/gitpod/.pyenv/versions/3.10.2/bin/python -m pip install --upgrade pip
```
It happens here, which installs host site packages:
https://github.com/pyodide/pyodide/blob/7cc1058358242a5a9012edbb8163d86a860a1a28/pyodide-build/pyodide_build/install_xbuildenv.py#L50-L57
I think we need to add the `--no-user` flag explicitly to prevent this error.
--- END ISSUE ---
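In other words, the host-packages install needs to opt out of user installs explicitly whenever the surrounding environment (for example a `pip.conf` with `user = true`) would otherwise imply `--user`. A minimal sketch of such an invocation, with placeholder paths:

```python
# Sketch: pip refuses to combine --user (sometimes implied by the environment)
# with --target, so pass --no-user explicitly alongside -t.
import subprocess
from pathlib import Path

host_site_packages = Path("xbuildenv/pyodide-root/host-site-packages")  # placeholder
requirements = Path("xbuildenv/requirements.txt")                       # placeholder

result = subprocess.run(
    [
        "pip",
        "install",
        "--no-user",  # avoid "ERROR: Can not combine '--user' and '--target'"
        "-t", str(host_site_packages),
        "-r", str(requirements),
    ],
    capture_output=True,
    encoding="utf8",
)
print(result.returncode, result.stderr)
```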
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyodide-build/pyodide_build/install_xbuildenv.py`
Content:
```
1 import argparse
2 import json
3 import shutil
4 import subprocess
5 from pathlib import Path
6 from urllib.request import urlopen, urlretrieve
7
8 from .common import exit_with_stdio, get_make_flag, get_pyodide_root
9 from .create_pypa_index import create_pypa_index
10 from .logger import logger
11
12
13 def make_parser(parser: argparse.ArgumentParser) -> argparse.ArgumentParser:
14 parser.description = (
15 "Install xbuild env.\n\n"
16 "The installed environment is the same as the one that would result from\n"
17 "`PYODIDE_PACKAGES='scipy' make` except that it is much faster.\n"
18 "The goal is to enable out-of-tree builds for binary packages that depend\n"
19 "on numpy or scipy.\n"
20 "Note: this is a private endpoint that should not be used outside of the Pyodide Makefile."
21 )
22 parser.add_argument("--download", action="store_true", help="Download xbuild env")
23 parser.add_argument("xbuildenv", type=str, nargs=1)
24 return parser
25
26
27 def download_xbuildenv(version: str, xbuildenv_path: Path) -> None:
28 from shutil import rmtree, unpack_archive
29 from tempfile import NamedTemporaryFile
30
31 logger.info("Downloading xbuild environment")
32 rmtree(xbuildenv_path, ignore_errors=True)
33 with NamedTemporaryFile(suffix=".tar") as f:
34 urlretrieve(
35 f"https://github.com/pyodide/pyodide/releases/download/{version}/xbuildenv-{version}.tar.bz2",
36 f.name,
37 )
38 unpack_archive(f.name, xbuildenv_path)
39
40
41 def install_xbuildenv(version: str, xbuildenv_path: Path) -> None:
42 logger.info("Installing xbuild environment")
43 xbuildenv_path = xbuildenv_path / "xbuildenv"
44 pyodide_root = get_pyodide_root()
45 xbuildenv_root = xbuildenv_path / "pyodide-root"
46 host_site_packages = xbuildenv_root / Path(
47 get_make_flag("HOSTSITEPACKAGES")
48 ).relative_to(pyodide_root)
49 host_site_packages.mkdir(exist_ok=True, parents=True)
50 result = subprocess.run(
51 [
52 "pip",
53 "install",
54 "-t",
55 host_site_packages,
56 "-r",
57 xbuildenv_path / "requirements.txt",
58 ],
59 capture_output=True,
60 encoding="utf8",
61 )
62 if result.returncode != 0:
63 exit_with_stdio(result)
64 # Copy the site-packages-extras (coming from the cross-build-files meta.yaml
65 # key) over the site-packages directory with the newly installed packages.
66 shutil.copytree(
67 xbuildenv_path / "site-packages-extras", host_site_packages, dirs_exist_ok=True
68 )
69 cdn_base = f"https://cdn.jsdelivr.net/pyodide/v{version}/full/"
70 if (xbuildenv_root / "repodata.json").exists():
71 repodata_bytes = (xbuildenv_root / "repodata.json").read_bytes()
72 else:
73 repodata_url = cdn_base + "repodata.json"
74 with urlopen(repodata_url) as response:
75 repodata_bytes = response.read()
76 repodata = json.loads(repodata_bytes)
77 version = repodata["info"]["version"]
78 create_pypa_index(repodata["packages"], xbuildenv_root, cdn_base)
79
80
81 def main(args: argparse.Namespace) -> None:
82 from . import __version__
83
84 xbuildenv_path = Path(args.xbuildenv[0])
85 version = __version__
86 if args.download:
87 download_xbuildenv(version, xbuildenv_path)
88 install_xbuildenv(version, xbuildenv_path)
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyodide-build/pyodide_build/install_xbuildenv.py b/pyodide-build/pyodide_build/install_xbuildenv.py
--- a/pyodide-build/pyodide_build/install_xbuildenv.py
+++ b/pyodide-build/pyodide_build/install_xbuildenv.py
@@ -51,6 +51,7 @@
[
"pip",
"install",
+ "--no-user",
"-t",
host_site_packages,
"-r",
|
{"golden_diff": "diff --git a/pyodide-build/pyodide_build/install_xbuildenv.py b/pyodide-build/pyodide_build/install_xbuildenv.py\n--- a/pyodide-build/pyodide_build/install_xbuildenv.py\n+++ b/pyodide-build/pyodide_build/install_xbuildenv.py\n@@ -51,6 +51,7 @@\n [\n \"pip\",\n \"install\",\n+ \"--no-user\",\n \"-t\",\n host_site_packages,\n \"-r\",\n", "issue": "Error about `--user` and `--target` flag when installing xbuildenv\nI sometimes get following error while installing xbuild environment:\r\n\r\n```bash\r\n$ pyodide build .\r\nDownloading xbuild environment\r\nInstalling xbuild environment\r\n stderr:\r\n ERROR: Can not combine '--user' and '--target'\r\n\r\n [notice] A new release of pip available: 22.3.1 -> 23.0\r\n [notice] To update, run: /home/gitpod/.pyenv/versions/3.10.2/bin/python -m pip install --upgrade pip\r\n```\r\n\r\nIt happens here, which installs host site packages:\r\n\r\nhttps://github.com/pyodide/pyodide/blob/7cc1058358242a5a9012edbb8163d86a860a1a28/pyodide-build/pyodide_build/install_xbuildenv.py#L50-L57\r\n\r\nI think we need to add `--no-user` flag explicitly to prevent this error.\n", "before_files": [{"content": "import argparse\nimport json\nimport shutil\nimport subprocess\nfrom pathlib import Path\nfrom urllib.request import urlopen, urlretrieve\n\nfrom .common import exit_with_stdio, get_make_flag, get_pyodide_root\nfrom .create_pypa_index import create_pypa_index\nfrom .logger import logger\n\n\ndef make_parser(parser: argparse.ArgumentParser) -> argparse.ArgumentParser:\n parser.description = (\n \"Install xbuild env.\\n\\n\"\n \"The installed environment is the same as the one that would result from\\n\"\n \"`PYODIDE_PACKAGES='scipy' make` except that it is much faster.\\n\"\n \"The goal is to enable out-of-tree builds for binary packages that depend\\n\"\n \"on numpy or scipy.\\n\"\n \"Note: this is a private endpoint that should not be used outside of the Pyodide Makefile.\"\n )\n parser.add_argument(\"--download\", action=\"store_true\", help=\"Download xbuild env\")\n parser.add_argument(\"xbuildenv\", type=str, nargs=1)\n return parser\n\n\ndef download_xbuildenv(version: str, xbuildenv_path: Path) -> None:\n from shutil import rmtree, unpack_archive\n from tempfile import NamedTemporaryFile\n\n logger.info(\"Downloading xbuild environment\")\n rmtree(xbuildenv_path, ignore_errors=True)\n with NamedTemporaryFile(suffix=\".tar\") as f:\n urlretrieve(\n f\"https://github.com/pyodide/pyodide/releases/download/{version}/xbuildenv-{version}.tar.bz2\",\n f.name,\n )\n unpack_archive(f.name, xbuildenv_path)\n\n\ndef install_xbuildenv(version: str, xbuildenv_path: Path) -> None:\n logger.info(\"Installing xbuild environment\")\n xbuildenv_path = xbuildenv_path / \"xbuildenv\"\n pyodide_root = get_pyodide_root()\n xbuildenv_root = xbuildenv_path / \"pyodide-root\"\n host_site_packages = xbuildenv_root / Path(\n get_make_flag(\"HOSTSITEPACKAGES\")\n ).relative_to(pyodide_root)\n host_site_packages.mkdir(exist_ok=True, parents=True)\n result = subprocess.run(\n [\n \"pip\",\n \"install\",\n \"-t\",\n host_site_packages,\n \"-r\",\n xbuildenv_path / \"requirements.txt\",\n ],\n capture_output=True,\n encoding=\"utf8\",\n )\n if result.returncode != 0:\n exit_with_stdio(result)\n # Copy the site-packages-extras (coming from the cross-build-files meta.yaml\n # key) over the site-packages directory with the newly installed packages.\n shutil.copytree(\n xbuildenv_path / \"site-packages-extras\", host_site_packages, dirs_exist_ok=True\n )\n cdn_base = 
f\"https://cdn.jsdelivr.net/pyodide/v{version}/full/\"\n if (xbuildenv_root / \"repodata.json\").exists():\n repodata_bytes = (xbuildenv_root / \"repodata.json\").read_bytes()\n else:\n repodata_url = cdn_base + \"repodata.json\"\n with urlopen(repodata_url) as response:\n repodata_bytes = response.read()\n repodata = json.loads(repodata_bytes)\n version = repodata[\"info\"][\"version\"]\n create_pypa_index(repodata[\"packages\"], xbuildenv_root, cdn_base)\n\n\ndef main(args: argparse.Namespace) -> None:\n from . import __version__\n\n xbuildenv_path = Path(args.xbuildenv[0])\n version = __version__\n if args.download:\n download_xbuildenv(version, xbuildenv_path)\n install_xbuildenv(version, xbuildenv_path)\n", "path": "pyodide-build/pyodide_build/install_xbuildenv.py"}], "after_files": [{"content": "import argparse\nimport json\nimport shutil\nimport subprocess\nfrom pathlib import Path\nfrom urllib.request import urlopen, urlretrieve\n\nfrom .common import exit_with_stdio, get_make_flag, get_pyodide_root\nfrom .create_pypa_index import create_pypa_index\nfrom .logger import logger\n\n\ndef make_parser(parser: argparse.ArgumentParser) -> argparse.ArgumentParser:\n parser.description = (\n \"Install xbuild env.\\n\\n\"\n \"The installed environment is the same as the one that would result from\\n\"\n \"`PYODIDE_PACKAGES='scipy' make` except that it is much faster.\\n\"\n \"The goal is to enable out-of-tree builds for binary packages that depend\\n\"\n \"on numpy or scipy.\\n\"\n \"Note: this is a private endpoint that should not be used outside of the Pyodide Makefile.\"\n )\n parser.add_argument(\"--download\", action=\"store_true\", help=\"Download xbuild env\")\n parser.add_argument(\"xbuildenv\", type=str, nargs=1)\n return parser\n\n\ndef download_xbuildenv(version: str, xbuildenv_path: Path) -> None:\n from shutil import rmtree, unpack_archive\n from tempfile import NamedTemporaryFile\n\n logger.info(\"Downloading xbuild environment\")\n rmtree(xbuildenv_path, ignore_errors=True)\n with NamedTemporaryFile(suffix=\".tar\") as f:\n urlretrieve(\n f\"https://github.com/pyodide/pyodide/releases/download/{version}/xbuildenv-{version}.tar.bz2\",\n f.name,\n )\n unpack_archive(f.name, xbuildenv_path)\n\n\ndef install_xbuildenv(version: str, xbuildenv_path: Path) -> None:\n logger.info(\"Installing xbuild environment\")\n xbuildenv_path = xbuildenv_path / \"xbuildenv\"\n pyodide_root = get_pyodide_root()\n xbuildenv_root = xbuildenv_path / \"pyodide-root\"\n host_site_packages = xbuildenv_root / Path(\n get_make_flag(\"HOSTSITEPACKAGES\")\n ).relative_to(pyodide_root)\n host_site_packages.mkdir(exist_ok=True, parents=True)\n result = subprocess.run(\n [\n \"pip\",\n \"install\",\n \"--no-user\",\n \"-t\",\n host_site_packages,\n \"-r\",\n xbuildenv_path / \"requirements.txt\",\n ],\n capture_output=True,\n encoding=\"utf8\",\n )\n if result.returncode != 0:\n exit_with_stdio(result)\n # Copy the site-packages-extras (coming from the cross-build-files meta.yaml\n # key) over the site-packages directory with the newly installed packages.\n shutil.copytree(\n xbuildenv_path / \"site-packages-extras\", host_site_packages, dirs_exist_ok=True\n )\n cdn_base = f\"https://cdn.jsdelivr.net/pyodide/v{version}/full/\"\n if (xbuildenv_root / \"repodata.json\").exists():\n repodata_bytes = (xbuildenv_root / \"repodata.json\").read_bytes()\n else:\n repodata_url = cdn_base + \"repodata.json\"\n with urlopen(repodata_url) as response:\n repodata_bytes = response.read()\n repodata = 
json.loads(repodata_bytes)\n version = repodata[\"info\"][\"version\"]\n create_pypa_index(repodata[\"packages\"], xbuildenv_root, cdn_base)\n\n\ndef main(args: argparse.Namespace) -> None:\n from . import __version__\n\n xbuildenv_path = Path(args.xbuildenv[0])\n version = __version__\n if args.download:\n download_xbuildenv(version, xbuildenv_path)\n install_xbuildenv(version, xbuildenv_path)\n", "path": "pyodide-build/pyodide_build/install_xbuildenv.py"}]}
| 1,457 | 105 |
gh_patches_debug_16969
|
rasdani/github-patches
|
git_diff
|
cal-itp__benefits-430
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update rendering of Agency index page
## Background
In tandem with #366, this ticket moves the bulk of the contextual information from the application index to the agency index.
We need more flexibility on this page than the current viewmodel / page template allows for.

## Tasks
* [x] Create/update text entries in English and Spanish for each paragraph, title, etc.
* [x] Can we embed the **bold** font style in the translation?
* [x] Create text entries for a pre-button label / subhead
* [x] Create a new template `core/agency_index.html` to use for this page
* [x] In the view use the new template
--- END ISSUE ---
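As a rough illustration of the last two tasks, the agency view could stop going through the shared page helper and instead render a dedicated template with whatever extra context it needs; a sketch (template name, translation keys, and the `info_link` context key are illustrative):

```python
# Sketch: render the agency entry page from its own template with extra
# context, instead of reusing the generic core/page.html helper.
from django.template.response import TemplateResponse
from django.urls import reverse
from django.utils.translation import gettext as _

from . import middleware, session, viewmodels


@middleware.pageview_decorator
def agency_index(request, agency):
    """View handler for an agency entry page."""
    session.reset(request)
    session.update(request, agency=agency, origin=agency.index_url)

    button = viewmodels.Button.primary(text=_("core.pages.index.continue"), url=reverse("eligibility:index"))
    button.label = _("core.pages.agency_index.button.label")  # pre-button label / subhead

    page = viewmodels.Page(
        content_title=_("core.pages.agency_index.content_title"),
        button=button,
        classes="home",
    )

    context = {**page.context_dict(), "info_link": f"{reverse('core:help')}#about"}
    return TemplateResponse(request, "core/agency_index.html", context)
```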
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `benefits/core/views.py`
Content:
```
1 """
2 The core application: view definition for the root of the webapp.
3 """
4 from django.http import HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError
5 from django.template import loader
6 from django.template.response import TemplateResponse
7 from django.urls import reverse
8 from django.utils.translation import gettext as _
9
10 from . import middleware, models, session, viewmodels
11
12
13 def PageTemplateResponse(request, page_vm):
14 """Helper returns a TemplateResponse using the common page template."""
15 return TemplateResponse(request, "core/page.html", page_vm.context_dict())
16
17
18 def _index_content_title():
19 """Helper returns the content title for the common index page."""
20 return _("core.pages.index.content_title")
21
22
23 def _index_paragraphs():
24 """Helper returns the content paragraphs for the common index page."""
25 return [_("core.pages.index.p[0]"), _("core.pages.index.p[1]"), _("core.pages.index.p[2]")]
26
27
28 def _index_url():
29 """Helper computes the index url path."""
30 return reverse("core:index")
31
32
33 @middleware.pageview_decorator
34 def index(request):
35 """View handler for the main entry page."""
36 session.reset(request)
37
38 # generate a button to the landing page for each active agency
39 agencies = models.TransitAgency.all_active()
40 buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]
41 buttons[0].classes.append("mt-3")
42 buttons[0].label = _("core.pages.index.chooseprovider")
43
44 page = viewmodels.Page(
45 content_title=_index_content_title(),
46 paragraphs=_index_paragraphs(),
47 buttons=buttons,
48 classes="home",
49 )
50
51 return PageTemplateResponse(request, page)
52
53
54 @middleware.pageview_decorator
55 def agency_index(request, agency):
56 """View handler for an agency entry page."""
57 session.reset(request)
58 session.update(request, agency=agency, origin=agency.index_url)
59
60 page = viewmodels.Page(
61 content_title=_index_content_title(),
62 paragraphs=_index_paragraphs(),
63 button=viewmodels.Button.primary(text=_("core.pages.index.continue"), url=reverse("eligibility:index")),
64 classes="home",
65 )
66
67 return PageTemplateResponse(request, page)
68
69
70 @middleware.pageview_decorator
71 def help(request):
72 """View handler for the help page."""
73 if session.active_agency(request):
74 agency = session.agency(request)
75 buttons = viewmodels.Button.agency_contact_links(agency)
76 else:
77 buttons = [btn for a in models.TransitAgency.all_active() for btn in viewmodels.Button.agency_contact_links(a)]
78
79 buttons.append(viewmodels.Button.home(request, _("core.buttons.back")))
80
81 page = viewmodels.Page(
82 title=_("core.buttons.help"),
83 content_title=_("core.buttons.help"),
84 buttons=buttons,
85 classes="text-lg-center",
86 noimage=True,
87 )
88
89 return TemplateResponse(request, "core/help.html", page.context_dict())
90
91
92 @middleware.pageview_decorator
93 def bad_request(request, exception, template_name="400.html"):
94 """View handler for HTTP 400 Bad Request responses."""
95 if session.active_agency(request):
96 session.update(request, origin=session.agency(request).index_url)
97 else:
98 session.update(request, origin=_index_url())
99
100 home = viewmodels.Button.home(request)
101 page = viewmodels.ErrorPage.error(button=home)
102 t = loader.get_template(template_name)
103
104 return HttpResponseBadRequest(t.render(page.context_dict()))
105
106
107 @middleware.pageview_decorator
108 def csrf_failure(request, reason):
109 """
110 View handler for CSRF_FAILURE_VIEW with custom data.
111 """
112 if session.active_agency(request):
113 session.update(request, origin=session.agency(request).index_url)
114 else:
115 session.update(request, origin=_index_url())
116
117 home = viewmodels.Button.home(request)
118 page = viewmodels.ErrorPage.not_found(button=home, path=request.path)
119 t = loader.get_template("400.html")
120
121 return HttpResponseNotFound(t.render(page.context_dict()))
122
123
124 @middleware.pageview_decorator
125 def page_not_found(request, exception, template_name="404.html"):
126 """View handler for HTTP 404 Not Found responses."""
127 if session.active_agency(request):
128 session.update(request, origin=session.agency(request).index_url)
129 else:
130 session.update(request, origin=_index_url())
131
132 home = viewmodels.Button.home(request)
133 page = viewmodels.ErrorPage.not_found(button=home, path=request.path)
134 t = loader.get_template(template_name)
135
136 return HttpResponseNotFound(t.render(page.context_dict()))
137
138
139 @middleware.pageview_decorator
140 def server_error(request, template_name="500.html"):
141 """View handler for HTTP 500 Server Error responses."""
142 if session.active_agency(request):
143 session.update(request, origin=session.agency(request).index_url)
144 else:
145 session.update(request, origin=_index_url())
146
147 home = viewmodels.Button.home(request)
148 page = viewmodels.ErrorPage.error(button=home)
149 t = loader.get_template(template_name)
150
151 return HttpResponseServerError(t.render(page.context_dict()))
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/benefits/core/views.py b/benefits/core/views.py
--- a/benefits/core/views.py
+++ b/benefits/core/views.py
@@ -57,14 +57,19 @@
session.reset(request)
session.update(request, agency=agency, origin=agency.index_url)
+ button = viewmodels.Button.primary(text=_("core.pages.index.continue"), url=reverse("eligibility:index"))
+ button.label = _("core.pages.agency_index.button.label")
+
page = viewmodels.Page(
- content_title=_index_content_title(),
- paragraphs=_index_paragraphs(),
- button=viewmodels.Button.primary(text=_("core.pages.index.continue"), url=reverse("eligibility:index")),
+ content_title=_("core.pages.agency_index.content_title"),
+ button=button,
classes="home",
)
- return PageTemplateResponse(request, page)
+ help_page = reverse("core:help")
+ context_dict = {**page.context_dict(), **{"info_link": f"{help_page}#about"}}
+
+ return TemplateResponse(request, "core/agency_index.html", context_dict)
@middleware.pageview_decorator
|
{"golden_diff": "diff --git a/benefits/core/views.py b/benefits/core/views.py\n--- a/benefits/core/views.py\n+++ b/benefits/core/views.py\n@@ -57,14 +57,19 @@\n session.reset(request)\n session.update(request, agency=agency, origin=agency.index_url)\n \n+ button = viewmodels.Button.primary(text=_(\"core.pages.index.continue\"), url=reverse(\"eligibility:index\"))\n+ button.label = _(\"core.pages.agency_index.button.label\")\n+\n page = viewmodels.Page(\n- content_title=_index_content_title(),\n- paragraphs=_index_paragraphs(),\n- button=viewmodels.Button.primary(text=_(\"core.pages.index.continue\"), url=reverse(\"eligibility:index\")),\n+ content_title=_(\"core.pages.agency_index.content_title\"),\n+ button=button,\n classes=\"home\",\n )\n \n- return PageTemplateResponse(request, page)\n+ help_page = reverse(\"core:help\")\n+ context_dict = {**page.context_dict(), **{\"info_link\": f\"{help_page}#about\"}}\n+\n+ return TemplateResponse(request, \"core/agency_index.html\", context_dict)\n \n \n @middleware.pageview_decorator\n", "issue": "Update rendering of Agency index page\n## Background\r\n\r\nIn tandem with #366, this ticket moves the bulk of the contextual information from the application index to the agency index.\r\n\r\nWe need more flexibility on this page than the current viewmodel / page template allows for.\r\n\r\n\r\n\r\n## Tasks\r\n\r\n* [x] Create/update text entries in English and Spanish for each paragraph, title, etc.\r\n* [x] Can we embed the **bold** font style in the translation?\r\n* [x] Create text entries for a pre-button label / subhead\r\n* [x] Create a new template `core/agency_index.html` to use for this page\r\n* [x] In the view use the new template\n", "before_files": [{"content": "\"\"\"\nThe core application: view definition for the root of the webapp.\n\"\"\"\nfrom django.http import HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError\nfrom django.template import loader\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\n\nfrom . 
import middleware, models, session, viewmodels\n\n\ndef PageTemplateResponse(request, page_vm):\n \"\"\"Helper returns a TemplateResponse using the common page template.\"\"\"\n return TemplateResponse(request, \"core/page.html\", page_vm.context_dict())\n\n\ndef _index_content_title():\n \"\"\"Helper returns the content title for the common index page.\"\"\"\n return _(\"core.pages.index.content_title\")\n\n\ndef _index_paragraphs():\n \"\"\"Helper returns the content paragraphs for the common index page.\"\"\"\n return [_(\"core.pages.index.p[0]\"), _(\"core.pages.index.p[1]\"), _(\"core.pages.index.p[2]\")]\n\n\ndef _index_url():\n \"\"\"Helper computes the index url path.\"\"\"\n return reverse(\"core:index\")\n\n\[email protected]_decorator\ndef index(request):\n \"\"\"View handler for the main entry page.\"\"\"\n session.reset(request)\n\n # generate a button to the landing page for each active agency\n agencies = models.TransitAgency.all_active()\n buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]\n buttons[0].classes.append(\"mt-3\")\n buttons[0].label = _(\"core.pages.index.chooseprovider\")\n\n page = viewmodels.Page(\n content_title=_index_content_title(),\n paragraphs=_index_paragraphs(),\n buttons=buttons,\n classes=\"home\",\n )\n\n return PageTemplateResponse(request, page)\n\n\[email protected]_decorator\ndef agency_index(request, agency):\n \"\"\"View handler for an agency entry page.\"\"\"\n session.reset(request)\n session.update(request, agency=agency, origin=agency.index_url)\n\n page = viewmodels.Page(\n content_title=_index_content_title(),\n paragraphs=_index_paragraphs(),\n button=viewmodels.Button.primary(text=_(\"core.pages.index.continue\"), url=reverse(\"eligibility:index\")),\n classes=\"home\",\n )\n\n return PageTemplateResponse(request, page)\n\n\[email protected]_decorator\ndef help(request):\n \"\"\"View handler for the help page.\"\"\"\n if session.active_agency(request):\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n else:\n buttons = [btn for a in models.TransitAgency.all_active() for btn in viewmodels.Button.agency_contact_links(a)]\n\n buttons.append(viewmodels.Button.home(request, _(\"core.buttons.back\")))\n\n page = viewmodels.Page(\n title=_(\"core.buttons.help\"),\n content_title=_(\"core.buttons.help\"),\n buttons=buttons,\n classes=\"text-lg-center\",\n noimage=True,\n )\n\n return TemplateResponse(request, \"core/help.html\", page.context_dict())\n\n\[email protected]_decorator\ndef bad_request(request, exception, template_name=\"400.html\"):\n \"\"\"View handler for HTTP 400 Bad Request responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseBadRequest(t.render(page.context_dict()))\n\n\[email protected]_decorator\ndef csrf_failure(request, reason):\n \"\"\"\n View handler for CSRF_FAILURE_VIEW with custom data.\n \"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(\"400.html\")\n\n return 
HttpResponseNotFound(t.render(page.context_dict()))\n\n\[email protected]_decorator\ndef page_not_found(request, exception, template_name=\"404.html\"):\n \"\"\"View handler for HTTP 404 Not Found responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(template_name)\n\n return HttpResponseNotFound(t.render(page.context_dict()))\n\n\[email protected]_decorator\ndef server_error(request, template_name=\"500.html\"):\n \"\"\"View handler for HTTP 500 Server Error responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseServerError(t.render(page.context_dict()))\n", "path": "benefits/core/views.py"}], "after_files": [{"content": "\"\"\"\nThe core application: view definition for the root of the webapp.\n\"\"\"\nfrom django.http import HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError\nfrom django.template import loader\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\n\nfrom . import middleware, models, session, viewmodels\n\n\ndef PageTemplateResponse(request, page_vm):\n \"\"\"Helper returns a TemplateResponse using the common page template.\"\"\"\n return TemplateResponse(request, \"core/page.html\", page_vm.context_dict())\n\n\ndef _index_content_title():\n \"\"\"Helper returns the content title for the common index page.\"\"\"\n return _(\"core.pages.index.content_title\")\n\n\ndef _index_paragraphs():\n \"\"\"Helper returns the content paragraphs for the common index page.\"\"\"\n return [_(\"core.pages.index.p[0]\"), _(\"core.pages.index.p[1]\"), _(\"core.pages.index.p[2]\")]\n\n\ndef _index_url():\n \"\"\"Helper computes the index url path.\"\"\"\n return reverse(\"core:index\")\n\n\[email protected]_decorator\ndef index(request):\n \"\"\"View handler for the main entry page.\"\"\"\n session.reset(request)\n\n # generate a button to the landing page for each active agency\n agencies = models.TransitAgency.all_active()\n buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]\n buttons[0].classes.append(\"mt-3\")\n buttons[0].label = _(\"core.pages.index.chooseprovider\")\n\n page = viewmodels.Page(\n content_title=_index_content_title(),\n paragraphs=_index_paragraphs(),\n buttons=buttons,\n classes=\"home\",\n )\n\n return PageTemplateResponse(request, page)\n\n\[email protected]_decorator\ndef agency_index(request, agency):\n \"\"\"View handler for an agency entry page.\"\"\"\n session.reset(request)\n session.update(request, agency=agency, origin=agency.index_url)\n\n button = viewmodels.Button.primary(text=_(\"core.pages.index.continue\"), url=reverse(\"eligibility:index\"))\n button.label = _(\"core.pages.agency_index.button.label\")\n\n page = viewmodels.Page(\n content_title=_(\"core.pages.agency_index.content_title\"),\n button=button,\n classes=\"home\",\n )\n\n help_page = reverse(\"core:help\")\n context_dict = {**page.context_dict(), **{\"info_link\": f\"{help_page}#about\"}}\n\n return 
TemplateResponse(request, \"core/agency_index.html\", context_dict)\n\n\[email protected]_decorator\ndef help(request):\n \"\"\"View handler for the help page.\"\"\"\n if session.active_agency(request):\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n else:\n buttons = [btn for a in models.TransitAgency.all_active() for btn in viewmodels.Button.agency_contact_links(a)]\n\n buttons.append(viewmodels.Button.home(request, _(\"core.buttons.back\")))\n\n page = viewmodels.Page(\n title=_(\"core.buttons.help\"),\n content_title=_(\"core.buttons.help\"),\n buttons=buttons,\n classes=\"text-lg-center\",\n noimage=True,\n )\n\n return TemplateResponse(request, \"core/help.html\", page.context_dict())\n\n\[email protected]_decorator\ndef bad_request(request, exception, template_name=\"400.html\"):\n \"\"\"View handler for HTTP 400 Bad Request responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseBadRequest(t.render(page.context_dict()))\n\n\[email protected]_decorator\ndef csrf_failure(request, reason):\n \"\"\"\n View handler for CSRF_FAILURE_VIEW with custom data.\n \"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(\"400.html\")\n\n return HttpResponseNotFound(t.render(page.context_dict()))\n\n\[email protected]_decorator\ndef page_not_found(request, exception, template_name=\"404.html\"):\n \"\"\"View handler for HTTP 404 Not Found responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(template_name)\n\n return HttpResponseNotFound(t.render(page.context_dict()))\n\n\[email protected]_decorator\ndef server_error(request, template_name=\"500.html\"):\n \"\"\"View handler for HTTP 500 Server Error responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseServerError(t.render(page.context_dict()))\n", "path": "benefits/core/views.py"}]}
| 1,890 | 254 |
gh_patches_debug_44234
|
rasdani/github-patches
|
git_diff
|
deepset-ai__haystack-3953
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove frontmatter and add to `meta` in markdown converter
**Is your feature request related to a problem? Please describe.**
We would like to use the markdown converter for the `haystack-tutorials` search
**Describe the solution you'd like**
Frontmatter is often used to store some meta information of markdown files. So it would be a useful feature to be able to auto-add these to the `meta` of the created documents.
It's also a good idea to remove the frontmatter from the created documents too.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `haystack/nodes/file_converter/markdown.py`
Content:
```
1 import logging
2 import re
3 from pathlib import Path
4 from typing import Dict, List, Optional, Tuple, Any
5
6 try:
7 from bs4 import BeautifulSoup, NavigableString
8 from markdown import markdown
9 except (ImportError, ModuleNotFoundError) as ie:
10 from haystack.utils.import_utils import _optional_component_not_installed
11
12 _optional_component_not_installed(__name__, "preprocessing", ie)
13
14 from haystack.nodes.file_converter.base import BaseConverter
15 from haystack.schema import Document
16
17
18 logger = logging.getLogger(__name__)
19
20
21 class MarkdownConverter(BaseConverter):
22 def __init__(
23 self,
24 remove_numeric_tables: bool = False,
25 valid_languages: Optional[List[str]] = None,
26 id_hash_keys: Optional[List[str]] = None,
27 progress_bar: bool = True,
28 remove_code_snippets: bool = True,
29 extract_headlines: bool = False,
30 ):
31 """
32 :param remove_numeric_tables: Not applicable.
33 :param valid_languages: Not applicable.
34 :param id_hash_keys: Generate the document ID from a custom list of strings that refer to the document's
35 attributes. To make sure you don't have duplicate documents in your DocumentStore if texts are
36 not unique, you can modify the metadata and pass for example, `"meta"` to this field ([`"content"`, `"meta"`]).
37 In this case, the ID is generated by using the content and the defined metadata.
38 :param progress_bar: Show a progress bar for the conversion.
39 :param remove_code_snippets: Whether to remove snippets from the markdown file.
40 :param extract_headlines: Whether to extract headings from the markdown file.
41 """
42 super().__init__(
43 remove_numeric_tables=remove_numeric_tables,
44 valid_languages=valid_languages,
45 id_hash_keys=id_hash_keys,
46 progress_bar=progress_bar,
47 )
48
49 self.remove_code_snippets = remove_code_snippets
50 self.extract_headlines = extract_headlines
51
52 def convert(
53 self,
54 file_path: Path,
55 meta: Optional[Dict[str, Any]] = None,
56 remove_numeric_tables: Optional[bool] = None,
57 valid_languages: Optional[List[str]] = None,
58 encoding: Optional[str] = "utf-8",
59 id_hash_keys: Optional[List[str]] = None,
60 remove_code_snippets: Optional[bool] = None,
61 extract_headlines: Optional[bool] = None,
62 ) -> List[Document]:
63 """
64 Reads text from a markdown file and executes optional preprocessing steps.
65
66 :param file_path: path of the file to convert
67 :param meta: dictionary of meta data key-value pairs to append in the returned document.
68 :param encoding: Select the file encoding (default is `utf-8`)
69 :param remove_numeric_tables: Not applicable
70 :param valid_languages: Not applicable
71 :param id_hash_keys: Generate the document id from a custom list of strings that refer to the document's
72 attributes. If you want to ensure you don't have duplicate documents in your DocumentStore but texts are
73 not unique, you can modify the metadata and pass e.g. `"meta"` to this field (e.g. [`"content"`, `"meta"`]).
74 In this case the id will be generated by using the content and the defined metadata.
75 :param remove_code_snippets: Whether to remove snippets from the markdown file.
76 :param extract_headlines: Whether to extract headings from the markdown file.
77 """
78
79 id_hash_keys = id_hash_keys if id_hash_keys is not None else self.id_hash_keys
80 remove_code_snippets = remove_code_snippets if remove_code_snippets is not None else self.remove_code_snippets
81 extract_headlines = extract_headlines if extract_headlines is not None else self.extract_headlines
82
83 with open(file_path, encoding=encoding, errors="ignore") as f:
84 markdown_text = f.read()
85
86 # md -> html -> text since BeautifulSoup can extract text cleanly
87 html = markdown(markdown_text)
88
89 # remove code snippets
90 if remove_code_snippets:
91 html = re.sub(r"<pre>(.*?)</pre>", " ", html, flags=re.DOTALL)
92 html = re.sub(r"<code>(.*?)</code>", " ", html, flags=re.DOTALL)
93 soup = BeautifulSoup(html, "html.parser")
94
95 if extract_headlines:
96 text, headlines = self._extract_text_and_headlines(soup)
97 if meta is None:
98 meta = {}
99 meta["headlines"] = headlines
100 else:
101 text = soup.get_text()
102
103 document = Document(content=text, meta=meta, id_hash_keys=id_hash_keys)
104 return [document]
105
106 @staticmethod
107 def _extract_text_and_headlines(soup: BeautifulSoup) -> Tuple[str, List[Dict]]:
108 """
109 Extracts text and headings from a soup object.
110 """
111 headline_tags = {"h1", "h2", "h3", "h4", "h5", "h6"}
112 headlines = []
113 text = ""
114 for desc in soup.descendants:
115 if desc.name in headline_tags:
116 current_headline = desc.get_text()
117 current_start_idx = len(text)
118 current_level = int(desc.name[-1]) - 1
119 headlines.append({"headline": current_headline, "start_idx": current_start_idx, "level": current_level})
120
121 if isinstance(desc, NavigableString):
122 text += desc.get_text()
123
124 return text, headlines
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/haystack/nodes/file_converter/markdown.py b/haystack/nodes/file_converter/markdown.py
--- a/haystack/nodes/file_converter/markdown.py
+++ b/haystack/nodes/file_converter/markdown.py
@@ -4,6 +4,7 @@
from typing import Dict, List, Optional, Tuple, Any
try:
+ import frontmatter
from bs4 import BeautifulSoup, NavigableString
from markdown import markdown
except (ImportError, ModuleNotFoundError) as ie:
@@ -27,6 +28,7 @@
progress_bar: bool = True,
remove_code_snippets: bool = True,
extract_headlines: bool = False,
+ add_frontmatter_to_meta: bool = False,
):
"""
:param remove_numeric_tables: Not applicable.
@@ -38,6 +40,7 @@
:param progress_bar: Show a progress bar for the conversion.
:param remove_code_snippets: Whether to remove snippets from the markdown file.
:param extract_headlines: Whether to extract headings from the markdown file.
+ :param add_frontmatter_to_meta: Whether to add the contents of the frontmatter to `meta`.
"""
super().__init__(
remove_numeric_tables=remove_numeric_tables,
@@ -48,6 +51,7 @@
self.remove_code_snippets = remove_code_snippets
self.extract_headlines = extract_headlines
+ self.add_frontmatter_to_meta = add_frontmatter_to_meta
def convert(
self,
@@ -59,6 +63,7 @@
id_hash_keys: Optional[List[str]] = None,
remove_code_snippets: Optional[bool] = None,
extract_headlines: Optional[bool] = None,
+ add_frontmatter_to_meta: Optional[bool] = None,
) -> List[Document]:
"""
Reads text from a markdown file and executes optional preprocessing steps.
@@ -74,14 +79,18 @@
In this case the id will be generated by using the content and the defined metadata.
:param remove_code_snippets: Whether to remove snippets from the markdown file.
:param extract_headlines: Whether to extract headings from the markdown file.
+ :param add_frontmatter_to_meta: Whether to add the contents of the frontmatter to `meta`.
"""
id_hash_keys = id_hash_keys if id_hash_keys is not None else self.id_hash_keys
remove_code_snippets = remove_code_snippets if remove_code_snippets is not None else self.remove_code_snippets
extract_headlines = extract_headlines if extract_headlines is not None else self.extract_headlines
+ add_frontmatter_to_meta = (
+ add_frontmatter_to_meta if add_frontmatter_to_meta is not None else self.add_frontmatter_to_meta
+ )
with open(file_path, encoding=encoding, errors="ignore") as f:
- markdown_text = f.read()
+ metadata, markdown_text = frontmatter.parse(f.read())
# md -> html -> text since BeautifulSoup can extract text cleanly
html = markdown(markdown_text)
@@ -92,6 +101,12 @@
html = re.sub(r"<code>(.*?)</code>", " ", html, flags=re.DOTALL)
soup = BeautifulSoup(html, "html.parser")
+ if add_frontmatter_to_meta:
+ if meta is None:
+ meta = metadata
+ else:
+ meta.update(metadata)
+
if extract_headlines:
text, headlines = self._extract_text_and_headlines(soup)
if meta is None:
|
{"golden_diff": "diff --git a/haystack/nodes/file_converter/markdown.py b/haystack/nodes/file_converter/markdown.py\n--- a/haystack/nodes/file_converter/markdown.py\n+++ b/haystack/nodes/file_converter/markdown.py\n@@ -4,6 +4,7 @@\n from typing import Dict, List, Optional, Tuple, Any\n \n try:\n+ import frontmatter\n from bs4 import BeautifulSoup, NavigableString\n from markdown import markdown\n except (ImportError, ModuleNotFoundError) as ie:\n@@ -27,6 +28,7 @@\n progress_bar: bool = True,\n remove_code_snippets: bool = True,\n extract_headlines: bool = False,\n+ add_frontmatter_to_meta: bool = False,\n ):\n \"\"\"\n :param remove_numeric_tables: Not applicable.\n@@ -38,6 +40,7 @@\n :param progress_bar: Show a progress bar for the conversion.\n :param remove_code_snippets: Whether to remove snippets from the markdown file.\n :param extract_headlines: Whether to extract headings from the markdown file.\n+ :param add_frontmatter_to_meta: Whether to add the contents of the frontmatter to `meta`.\n \"\"\"\n super().__init__(\n remove_numeric_tables=remove_numeric_tables,\n@@ -48,6 +51,7 @@\n \n self.remove_code_snippets = remove_code_snippets\n self.extract_headlines = extract_headlines\n+ self.add_frontmatter_to_meta = add_frontmatter_to_meta\n \n def convert(\n self,\n@@ -59,6 +63,7 @@\n id_hash_keys: Optional[List[str]] = None,\n remove_code_snippets: Optional[bool] = None,\n extract_headlines: Optional[bool] = None,\n+ add_frontmatter_to_meta: Optional[bool] = None,\n ) -> List[Document]:\n \"\"\"\n Reads text from a markdown file and executes optional preprocessing steps.\n@@ -74,14 +79,18 @@\n In this case the id will be generated by using the content and the defined metadata.\n :param remove_code_snippets: Whether to remove snippets from the markdown file.\n :param extract_headlines: Whether to extract headings from the markdown file.\n+ :param add_frontmatter_to_meta: Whether to add the contents of the frontmatter to `meta`.\n \"\"\"\n \n id_hash_keys = id_hash_keys if id_hash_keys is not None else self.id_hash_keys\n remove_code_snippets = remove_code_snippets if remove_code_snippets is not None else self.remove_code_snippets\n extract_headlines = extract_headlines if extract_headlines is not None else self.extract_headlines\n+ add_frontmatter_to_meta = (\n+ add_frontmatter_to_meta if add_frontmatter_to_meta is not None else self.add_frontmatter_to_meta\n+ )\n \n with open(file_path, encoding=encoding, errors=\"ignore\") as f:\n- markdown_text = f.read()\n+ metadata, markdown_text = frontmatter.parse(f.read())\n \n # md -> html -> text since BeautifulSoup can extract text cleanly\n html = markdown(markdown_text)\n@@ -92,6 +101,12 @@\n html = re.sub(r\"<code>(.*?)</code>\", \" \", html, flags=re.DOTALL)\n soup = BeautifulSoup(html, \"html.parser\")\n \n+ if add_frontmatter_to_meta:\n+ if meta is None:\n+ meta = metadata\n+ else:\n+ meta.update(metadata)\n+\n if extract_headlines:\n text, headlines = self._extract_text_and_headlines(soup)\n if meta is None:\n", "issue": "Remove frontmatter and add to `meta` in markdown converter\n**Is your feature request related to a problem? Please describe.**\r\nWe would like to use the markdown converter for the `haystack-tutorials` search\r\n\r\n**Describe the solution you'd like**\r\nFrontmatter is often used to store some meta information of markdown files. So it would be a useful feature to be able to auto add these to the `meta` of the created documents\r\nIt's also a good idea to remove the fronmatter from the created documents too. 
\n", "before_files": [{"content": "import logging\nimport re\nfrom pathlib import Path\nfrom typing import Dict, List, Optional, Tuple, Any\n\ntry:\n from bs4 import BeautifulSoup, NavigableString\n from markdown import markdown\nexcept (ImportError, ModuleNotFoundError) as ie:\n from haystack.utils.import_utils import _optional_component_not_installed\n\n _optional_component_not_installed(__name__, \"preprocessing\", ie)\n\nfrom haystack.nodes.file_converter.base import BaseConverter\nfrom haystack.schema import Document\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass MarkdownConverter(BaseConverter):\n def __init__(\n self,\n remove_numeric_tables: bool = False,\n valid_languages: Optional[List[str]] = None,\n id_hash_keys: Optional[List[str]] = None,\n progress_bar: bool = True,\n remove_code_snippets: bool = True,\n extract_headlines: bool = False,\n ):\n \"\"\"\n :param remove_numeric_tables: Not applicable.\n :param valid_languages: Not applicable.\n :param id_hash_keys: Generate the document ID from a custom list of strings that refer to the document's\n attributes. To make sure you don't have duplicate documents in your DocumentStore if texts are\n not unique, you can modify the metadata and pass for example, `\"meta\"` to this field ([`\"content\"`, `\"meta\"`]).\n In this case, the ID is generated by using the content and the defined metadata.\n :param progress_bar: Show a progress bar for the conversion.\n :param remove_code_snippets: Whether to remove snippets from the markdown file.\n :param extract_headlines: Whether to extract headings from the markdown file.\n \"\"\"\n super().__init__(\n remove_numeric_tables=remove_numeric_tables,\n valid_languages=valid_languages,\n id_hash_keys=id_hash_keys,\n progress_bar=progress_bar,\n )\n\n self.remove_code_snippets = remove_code_snippets\n self.extract_headlines = extract_headlines\n\n def convert(\n self,\n file_path: Path,\n meta: Optional[Dict[str, Any]] = None,\n remove_numeric_tables: Optional[bool] = None,\n valid_languages: Optional[List[str]] = None,\n encoding: Optional[str] = \"utf-8\",\n id_hash_keys: Optional[List[str]] = None,\n remove_code_snippets: Optional[bool] = None,\n extract_headlines: Optional[bool] = None,\n ) -> List[Document]:\n \"\"\"\n Reads text from a markdown file and executes optional preprocessing steps.\n\n :param file_path: path of the file to convert\n :param meta: dictionary of meta data key-value pairs to append in the returned document.\n :param encoding: Select the file encoding (default is `utf-8`)\n :param remove_numeric_tables: Not applicable\n :param valid_languages: Not applicable\n :param id_hash_keys: Generate the document id from a custom list of strings that refer to the document's\n attributes. If you want to ensure you don't have duplicate documents in your DocumentStore but texts are\n not unique, you can modify the metadata and pass e.g. `\"meta\"` to this field (e.g. 
[`\"content\"`, `\"meta\"`]).\n In this case the id will be generated by using the content and the defined metadata.\n :param remove_code_snippets: Whether to remove snippets from the markdown file.\n :param extract_headlines: Whether to extract headings from the markdown file.\n \"\"\"\n\n id_hash_keys = id_hash_keys if id_hash_keys is not None else self.id_hash_keys\n remove_code_snippets = remove_code_snippets if remove_code_snippets is not None else self.remove_code_snippets\n extract_headlines = extract_headlines if extract_headlines is not None else self.extract_headlines\n\n with open(file_path, encoding=encoding, errors=\"ignore\") as f:\n markdown_text = f.read()\n\n # md -> html -> text since BeautifulSoup can extract text cleanly\n html = markdown(markdown_text)\n\n # remove code snippets\n if remove_code_snippets:\n html = re.sub(r\"<pre>(.*?)</pre>\", \" \", html, flags=re.DOTALL)\n html = re.sub(r\"<code>(.*?)</code>\", \" \", html, flags=re.DOTALL)\n soup = BeautifulSoup(html, \"html.parser\")\n\n if extract_headlines:\n text, headlines = self._extract_text_and_headlines(soup)\n if meta is None:\n meta = {}\n meta[\"headlines\"] = headlines\n else:\n text = soup.get_text()\n\n document = Document(content=text, meta=meta, id_hash_keys=id_hash_keys)\n return [document]\n\n @staticmethod\n def _extract_text_and_headlines(soup: BeautifulSoup) -> Tuple[str, List[Dict]]:\n \"\"\"\n Extracts text and headings from a soup object.\n \"\"\"\n headline_tags = {\"h1\", \"h2\", \"h3\", \"h4\", \"h5\", \"h6\"}\n headlines = []\n text = \"\"\n for desc in soup.descendants:\n if desc.name in headline_tags:\n current_headline = desc.get_text()\n current_start_idx = len(text)\n current_level = int(desc.name[-1]) - 1\n headlines.append({\"headline\": current_headline, \"start_idx\": current_start_idx, \"level\": current_level})\n\n if isinstance(desc, NavigableString):\n text += desc.get_text()\n\n return text, headlines\n", "path": "haystack/nodes/file_converter/markdown.py"}], "after_files": [{"content": "import logging\nimport re\nfrom pathlib import Path\nfrom typing import Dict, List, Optional, Tuple, Any\n\ntry:\n import frontmatter\n from bs4 import BeautifulSoup, NavigableString\n from markdown import markdown\nexcept (ImportError, ModuleNotFoundError) as ie:\n from haystack.utils.import_utils import _optional_component_not_installed\n\n _optional_component_not_installed(__name__, \"preprocessing\", ie)\n\nfrom haystack.nodes.file_converter.base import BaseConverter\nfrom haystack.schema import Document\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass MarkdownConverter(BaseConverter):\n def __init__(\n self,\n remove_numeric_tables: bool = False,\n valid_languages: Optional[List[str]] = None,\n id_hash_keys: Optional[List[str]] = None,\n progress_bar: bool = True,\n remove_code_snippets: bool = True,\n extract_headlines: bool = False,\n add_frontmatter_to_meta: bool = False,\n ):\n \"\"\"\n :param remove_numeric_tables: Not applicable.\n :param valid_languages: Not applicable.\n :param id_hash_keys: Generate the document ID from a custom list of strings that refer to the document's\n attributes. 
To make sure you don't have duplicate documents in your DocumentStore if texts are\n not unique, you can modify the metadata and pass for example, `\"meta\"` to this field ([`\"content\"`, `\"meta\"`]).\n In this case, the ID is generated by using the content and the defined metadata.\n :param progress_bar: Show a progress bar for the conversion.\n :param remove_code_snippets: Whether to remove snippets from the markdown file.\n :param extract_headlines: Whether to extract headings from the markdown file.\n :param add_frontmatter_to_meta: Whether to add the contents of the frontmatter to `meta`.\n \"\"\"\n super().__init__(\n remove_numeric_tables=remove_numeric_tables,\n valid_languages=valid_languages,\n id_hash_keys=id_hash_keys,\n progress_bar=progress_bar,\n )\n\n self.remove_code_snippets = remove_code_snippets\n self.extract_headlines = extract_headlines\n self.add_frontmatter_to_meta = add_frontmatter_to_meta\n\n def convert(\n self,\n file_path: Path,\n meta: Optional[Dict[str, Any]] = None,\n remove_numeric_tables: Optional[bool] = None,\n valid_languages: Optional[List[str]] = None,\n encoding: Optional[str] = \"utf-8\",\n id_hash_keys: Optional[List[str]] = None,\n remove_code_snippets: Optional[bool] = None,\n extract_headlines: Optional[bool] = None,\n add_frontmatter_to_meta: Optional[bool] = None,\n ) -> List[Document]:\n \"\"\"\n Reads text from a markdown file and executes optional preprocessing steps.\n\n :param file_path: path of the file to convert\n :param meta: dictionary of meta data key-value pairs to append in the returned document.\n :param encoding: Select the file encoding (default is `utf-8`)\n :param remove_numeric_tables: Not applicable\n :param valid_languages: Not applicable\n :param id_hash_keys: Generate the document id from a custom list of strings that refer to the document's\n attributes. If you want to ensure you don't have duplicate documents in your DocumentStore but texts are\n not unique, you can modify the metadata and pass e.g. `\"meta\"` to this field (e.g. 
[`\"content\"`, `\"meta\"`]).\n In this case the id will be generated by using the content and the defined metadata.\n :param remove_code_snippets: Whether to remove snippets from the markdown file.\n :param extract_headlines: Whether to extract headings from the markdown file.\n :param add_frontmatter_to_meta: Whether to add the contents of the frontmatter to `meta`.\n \"\"\"\n\n id_hash_keys = id_hash_keys if id_hash_keys is not None else self.id_hash_keys\n remove_code_snippets = remove_code_snippets if remove_code_snippets is not None else self.remove_code_snippets\n extract_headlines = extract_headlines if extract_headlines is not None else self.extract_headlines\n add_frontmatter_to_meta = (\n add_frontmatter_to_meta if add_frontmatter_to_meta is not None else self.add_frontmatter_to_meta\n )\n\n with open(file_path, encoding=encoding, errors=\"ignore\") as f:\n metadata, markdown_text = frontmatter.parse(f.read())\n\n # md -> html -> text since BeautifulSoup can extract text cleanly\n html = markdown(markdown_text)\n\n # remove code snippets\n if remove_code_snippets:\n html = re.sub(r\"<pre>(.*?)</pre>\", \" \", html, flags=re.DOTALL)\n html = re.sub(r\"<code>(.*?)</code>\", \" \", html, flags=re.DOTALL)\n soup = BeautifulSoup(html, \"html.parser\")\n\n if add_frontmatter_to_meta:\n if meta is None:\n meta = metadata\n else:\n meta.update(metadata)\n\n if extract_headlines:\n text, headlines = self._extract_text_and_headlines(soup)\n if meta is None:\n meta = {}\n meta[\"headlines\"] = headlines\n else:\n text = soup.get_text()\n\n document = Document(content=text, meta=meta, id_hash_keys=id_hash_keys)\n return [document]\n\n @staticmethod\n def _extract_text_and_headlines(soup: BeautifulSoup) -> Tuple[str, List[Dict]]:\n \"\"\"\n Extracts text and headings from a soup object.\n \"\"\"\n headline_tags = {\"h1\", \"h2\", \"h3\", \"h4\", \"h5\", \"h6\"}\n headlines = []\n text = \"\"\n for desc in soup.descendants:\n if desc.name in headline_tags:\n current_headline = desc.get_text()\n current_start_idx = len(text)\n current_level = int(desc.name[-1]) - 1\n headlines.append({\"headline\": current_headline, \"start_idx\": current_start_idx, \"level\": current_level})\n\n if isinstance(desc, NavigableString):\n text += desc.get_text()\n\n return text, headlines\n", "path": "haystack/nodes/file_converter/markdown.py"}]}
| 1,806 | 782 |
gh_patches_debug_25505
|
rasdani/github-patches
|
git_diff
|
onnx__onnx-1648
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Proposal: Op Registry
Once we start getting into more complicated passes, having knowledge about operators becomes incredibly useful. What I propose is to maintain a registry of operators mapping to a set of descriptors.
For example, let's take a look at the [eliminate_nop_monotone_argmax](https://github.com/onnx/onnx/blob/master/onnx/optimizer/passes/eliminate_nop_monotone_argmax.h) pass. Within the pass we have to maintain an unordered set of ops that are monotone. The correct design would be to have a registry which we can ask whether a certain op is monotone. Once such a registry is implemented, passes can be generalized through descriptors; therefore, as we add new ops and their respective descriptors, our passes naturally scale out.
Some possible descriptors we could have are:
- monotonic
- elementwise independent (op works on an individual element without knowledge of other elements, such as log, but not softmax since it requires calculating a sum over an axis)
- elementwise dependent (softmax, logsoftmax, softmin, softplus)
- reduction
- shape transform (e.g. transpose, view)
@houseroad @bddppq What do you guys think?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `onnx/optimizer.py`
Content:
```
1 # ATTENTION: The code in this file is highly EXPERIMENTAL.
2 # Adventurous users should note that the APIs will probably change.
3
4 """onnx optimizer
5
6 This enables users to optimize their models.
7 """
8 from __future__ import absolute_import
9 from __future__ import division
10 from __future__ import print_function
11 from __future__ import unicode_literals
12
13 import onnx
14 import onnx.onnx_cpp2py_export.optimizer as C
15 from onnx import ModelProto
16 from typing import Text, Sequence, Optional
17
18 """Apply the optimization on the serialized ModelProto.
19
20 Arguments:
21 input (ModelProto): model
22 names (list of string): list of optimization names
23
24 Return:
25 return (ModelProto) optimized model
26
27 Supported pass names:
28 -- nop
29 -- eliminate_identity
30 -- eliminate_nop_transpose
31 -- eliminate_nop_pad
32 -- eliminate_unused_initializer
33 -- fuse_consecutive_squeezes
34 -- fuse_consecutive_transposes
35 -- fuse_add_bias_into_conv
36 -- fuse_transpose_into_gemm
37 """
38
39 get_available_passes = C.get_available_passes
40
41
42 def optimize(model, passes=None, fixed_point=False): # type: (ModelProto, Optional[Sequence[Text]], bool) -> ModelProto
43 if passes is None:
44 passes = ['eliminate_nop_transpose',
45 'eliminate_nop_pad',
46 'fuse_consecutive_transposes',
47 'fuse_transpose_into_gemm']
48 if not isinstance(model, ModelProto):
49 raise ValueError('Optimizer only accepts ModelProto, incorrect type: {}'.format(type(model)))
50
51 model_str = model.SerializeToString()
52 if fixed_point:
53 optimized_model_str = C.optimize_fixedpoint(model_str, passes)
54 else:
55 optimized_model_str = C.optimize(model_str, passes)
56
57 return onnx.load_from_string(optimized_model_str)
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/onnx/optimizer.py b/onnx/optimizer.py
--- a/onnx/optimizer.py
+++ b/onnx/optimizer.py
@@ -26,13 +26,23 @@
Supported pass names:
-- nop
+ -- eliminate_deadend
-- eliminate_identity
- -- eliminate_nop_transpose
+ -- eliminate_nop_dropout
+ -- eliminate_nop_monotone_argmax
-- eliminate_nop_pad
+ -- eliminate_nop_transpose
-- eliminate_unused_initializer
+ -- extract_constant_to_initializer
+ -- fuse_add_bias_into_conv
+ -- fuse_bn_into_conv
+ -- fuse_consecutive_concats
+ -- fuse_consecutive_log_softmax
+ -- fuse_consecutive_reduce_unsqueeze
-- fuse_consecutive_squeezes
-- fuse_consecutive_transposes
- -- fuse_add_bias_into_conv
+ -- fuse_matmul_add_bias_into_gemm
+ -- fuse_pad_into_conv
-- fuse_transpose_into_gemm
"""
@@ -45,8 +55,10 @@
'eliminate_nop_pad',
'fuse_consecutive_transposes',
'fuse_transpose_into_gemm']
+
if not isinstance(model, ModelProto):
- raise ValueError('Optimizer only accepts ModelProto, incorrect type: {}'.format(type(model)))
+ raise ValueError(
+ 'Optimizer only accepts ModelProto, incorrect type: {}'.format(type(model)))
model_str = model.SerializeToString()
if fixed_point:
|
{"golden_diff": "diff --git a/onnx/optimizer.py b/onnx/optimizer.py\n--- a/onnx/optimizer.py\n+++ b/onnx/optimizer.py\n@@ -26,13 +26,23 @@\n \n Supported pass names:\n -- nop\n+ -- eliminate_deadend\n -- eliminate_identity\n- -- eliminate_nop_transpose\n+ -- eliminate_nop_dropout\n+ -- eliminate_nop_monotone_argmax\n -- eliminate_nop_pad\n+ -- eliminate_nop_transpose\n -- eliminate_unused_initializer\n+ -- extract_constant_to_initializer\n+ -- fuse_add_bias_into_conv\n+ -- fuse_bn_into_conv\n+ -- fuse_consecutive_concats\n+ -- fuse_consecutive_log_softmax\n+ -- fuse_consecutive_reduce_unsqueeze\n -- fuse_consecutive_squeezes\n -- fuse_consecutive_transposes\n- -- fuse_add_bias_into_conv\n+ -- fuse_matmul_add_bias_into_gemm\n+ -- fuse_pad_into_conv\n -- fuse_transpose_into_gemm\n \"\"\"\n \n@@ -45,8 +55,10 @@\n 'eliminate_nop_pad',\n 'fuse_consecutive_transposes',\n 'fuse_transpose_into_gemm']\n+\n if not isinstance(model, ModelProto):\n- raise ValueError('Optimizer only accepts ModelProto, incorrect type: {}'.format(type(model)))\n+ raise ValueError(\n+ 'Optimizer only accepts ModelProto, incorrect type: {}'.format(type(model)))\n \n model_str = model.SerializeToString()\n if fixed_point:\n", "issue": "Proposal: Op Registry\nOnce we start getting into more complicated passes having knowledge about operators becomes incredibly useful. What I propose is to maintain a registry of operators mapping to a set of descriptors. \r\n\r\nFor example let's take a look at the [eliminate_nop_monotone_argmax](https://github.com/onnx/onnx/blob/master/onnx/optimizer/passes/eliminate_nop_monotone_argmax.h) pass. Within the pass we have to maintain an unordered set of ops that are monotone. The correct design would be to have a registry which we can ask whether or not if a certain op is monotone. Once such a registry is implemented, passes can be generalized through descriptors, therefore as we add new ops and their respective descriptors our passes naturally scale out.\r\n\r\nSome possible descriptors we could have are:\r\n- monotonic\r\n- elementwise independent (op works on an individual element without knowledge of other elements, such as log, but not softmax since it requires calculating a sum over an axis)\r\n- elementwise dependent (softmax, logsoftmax, softmin, softplus)\r\n- reduction\r\n- shape transform (e.g. 
transpose, view)\r\n\r\n@houseroad @bddppq What do you guys think?\n", "before_files": [{"content": "# ATTENTION: The code in this file is highly EXPERIMENTAL.\n# Adventurous users should note that the APIs will probably change.\n\n\"\"\"onnx optimizer\n\nThis enables users to optimize their models.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport onnx\nimport onnx.onnx_cpp2py_export.optimizer as C\nfrom onnx import ModelProto\nfrom typing import Text, Sequence, Optional\n\n\"\"\"Apply the optimization on the serialized ModelProto.\n\nArguments:\n input (ModelProto): model\n names (list of string): list of optimization names\n\nReturn:\n return (ModelProto) optimized model\n\nSupported pass names:\n -- nop\n -- eliminate_identity\n -- eliminate_nop_transpose\n -- eliminate_nop_pad\n -- eliminate_unused_initializer\n -- fuse_consecutive_squeezes\n -- fuse_consecutive_transposes\n -- fuse_add_bias_into_conv\n -- fuse_transpose_into_gemm\n\"\"\"\n\nget_available_passes = C.get_available_passes\n\n\ndef optimize(model, passes=None, fixed_point=False): # type: (ModelProto, Optional[Sequence[Text]], bool) -> ModelProto\n if passes is None:\n passes = ['eliminate_nop_transpose',\n 'eliminate_nop_pad',\n 'fuse_consecutive_transposes',\n 'fuse_transpose_into_gemm']\n if not isinstance(model, ModelProto):\n raise ValueError('Optimizer only accepts ModelProto, incorrect type: {}'.format(type(model)))\n\n model_str = model.SerializeToString()\n if fixed_point:\n optimized_model_str = C.optimize_fixedpoint(model_str, passes)\n else:\n optimized_model_str = C.optimize(model_str, passes)\n\n return onnx.load_from_string(optimized_model_str)\n", "path": "onnx/optimizer.py"}], "after_files": [{"content": "# ATTENTION: The code in this file is highly EXPERIMENTAL.\n# Adventurous users should note that the APIs will probably change.\n\n\"\"\"onnx optimizer\n\nThis enables users to optimize their models.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport onnx\nimport onnx.onnx_cpp2py_export.optimizer as C\nfrom onnx import ModelProto\nfrom typing import Text, Sequence, Optional\n\n\"\"\"Apply the optimization on the serialized ModelProto.\n\nArguments:\n input (ModelProto): model\n names (list of string): list of optimization names\n\nReturn:\n return (ModelProto) optimized model\n\nSupported pass names:\n -- nop\n -- eliminate_deadend\n -- eliminate_identity\n -- eliminate_nop_dropout\n -- eliminate_nop_monotone_argmax\n -- eliminate_nop_pad\n -- eliminate_nop_transpose\n -- eliminate_unused_initializer\n -- extract_constant_to_initializer\n -- fuse_add_bias_into_conv\n -- fuse_bn_into_conv\n -- fuse_consecutive_concats\n -- fuse_consecutive_log_softmax\n -- fuse_consecutive_reduce_unsqueeze\n -- fuse_consecutive_squeezes\n -- fuse_consecutive_transposes\n -- fuse_matmul_add_bias_into_gemm\n -- fuse_pad_into_conv\n -- fuse_transpose_into_gemm\n\"\"\"\n\nget_available_passes = C.get_available_passes\n\n\ndef optimize(model, passes=None, fixed_point=False): # type: (ModelProto, Optional[Sequence[Text]], bool) -> ModelProto\n if passes is None:\n passes = ['eliminate_nop_transpose',\n 'eliminate_nop_pad',\n 'fuse_consecutive_transposes',\n 'fuse_transpose_into_gemm']\n\n if not isinstance(model, ModelProto):\n raise ValueError(\n 'Optimizer only accepts ModelProto, 
incorrect type: {}'.format(type(model)))\n\n model_str = model.SerializeToString()\n if fixed_point:\n optimized_model_str = C.optimize_fixedpoint(model_str, passes)\n else:\n optimized_model_str = C.optimize(model_str, passes)\n\n return onnx.load_from_string(optimized_model_str)\n", "path": "onnx/optimizer.py"}]}
| 1,013 | 332 |
gh_patches_debug_1054
|
rasdani/github-patches
|
git_diff
|
liqd__a4-opin-906
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
styling of categories in dashboard (Safari)
When using Safari, the styling of categories in the dashboard is broken.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `euth/ideas/forms.py`
Content:
```
1 from adhocracy4.categories import forms as category_forms
2
3 from . import models
4
5
6 class IdeaForm(category_forms.CategorizableForm):
7 class Meta:
8 model = models.Idea
9 fields = ['name', 'description', 'image', 'category']
10
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/euth/ideas/forms.py b/euth/ideas/forms.py
--- a/euth/ideas/forms.py
+++ b/euth/ideas/forms.py
@@ -7,3 +7,7 @@
class Meta:
model = models.Idea
fields = ['name', 'description', 'image', 'category']
+
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.fields['category'].empty_label = '---'
|
{"golden_diff": "diff --git a/euth/ideas/forms.py b/euth/ideas/forms.py\n--- a/euth/ideas/forms.py\n+++ b/euth/ideas/forms.py\n@@ -7,3 +7,7 @@\n class Meta:\n model = models.Idea\n fields = ['name', 'description', 'image', 'category']\n+\n+ def __init__(self, *args, **kwargs):\n+ super().__init__(*args, **kwargs)\n+ self.fields['category'].empty_label = '---'\n", "issue": "styling of categories in dashboard (Safari)\nWhen using Safari the styling of categories in the dashboard is broken. \r\n\r\n\r\n\n", "before_files": [{"content": "from adhocracy4.categories import forms as category_forms\n\nfrom . import models\n\n\nclass IdeaForm(category_forms.CategorizableForm):\n class Meta:\n model = models.Idea\n fields = ['name', 'description', 'image', 'category']\n", "path": "euth/ideas/forms.py"}], "after_files": [{"content": "from adhocracy4.categories import forms as category_forms\n\nfrom . import models\n\n\nclass IdeaForm(category_forms.CategorizableForm):\n class Meta:\n model = models.Idea\n fields = ['name', 'description', 'image', 'category']\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields['category'].empty_label = '---'\n", "path": "euth/ideas/forms.py"}]}
| 416 | 111 |
gh_patches_debug_5823
|
rasdani/github-patches
|
git_diff
|
microsoft__botbuilder-python-1207
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[PORT] Remove Slack from the list of channels that support Suggested Actions
> Port this change from botbuilder-dotnet/master branch:
https://github.com/microsoft/botbuilder-dotnet/pull/4177
Fixes #2291
Removing Slack from the list of channels that support suggested actions.
# Changed projects
* Microsoft.Bot.Builder.Dialogs
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from botbuilder.core import TurnContext
5 from botframework.connector import Channels
6
7
8 class Channel:
9 """
10 Methods for determining channel-specific functionality.
11 """
12
13 @staticmethod
14 def supports_suggested_actions(channel_id: str, button_cnt: int = 100) -> bool:
15 """Determine if a number of Suggested Actions are supported by a Channel.
16
17 Args:
18 channel_id (str): The Channel to check the if Suggested Actions are supported in.
19 button_cnt (int, optional): Defaults to 100. The number of Suggested Actions to check for the Channel.
20
21 Returns:
22 bool: True if the Channel supports the button_cnt total Suggested Actions, False if the Channel does not
23 support that number of Suggested Actions.
24 """
25
26 max_actions = {
27 # https://developers.facebook.com/docs/messenger-platform/send-messages/quick-replies
28 Channels.facebook: 10,
29 Channels.skype: 10,
30 # https://developers.line.biz/en/reference/messaging-api/#items-object
31 Channels.line: 13,
32 # https://dev.kik.com/#/docs/messaging#text-response-object
33 Channels.kik: 20,
34 Channels.telegram: 100,
35 Channels.slack: 100,
36 Channels.emulator: 100,
37 Channels.direct_line: 100,
38 Channels.webchat: 100,
39 }
40 return (
41 button_cnt <= max_actions[channel_id]
42 if channel_id in max_actions
43 else False
44 )
45
46 @staticmethod
47 def supports_card_actions(channel_id: str, button_cnt: int = 100) -> bool:
48 """Determine if a number of Card Actions are supported by a Channel.
49
50 Args:
51 channel_id (str): The Channel to check if the Card Actions are supported in.
52 button_cnt (int, optional): Defaults to 100. The number of Card Actions to check for the Channel.
53
54 Returns:
55 bool: True if the Channel supports the button_cnt total Card Actions, False if the Channel does not support
56 that number of Card Actions.
57 """
58
59 max_actions = {
60 Channels.facebook: 3,
61 Channels.skype: 3,
62 Channels.ms_teams: 3,
63 Channels.line: 99,
64 Channels.slack: 100,
65 Channels.emulator: 100,
66 Channels.direct_line: 100,
67 Channels.webchat: 100,
68 Channels.cortana: 100,
69 }
70 return (
71 button_cnt <= max_actions[channel_id]
72 if channel_id in max_actions
73 else False
74 )
75
76 @staticmethod
77 def has_message_feed(channel_id: str) -> bool:
78 """Determine if a Channel has a Message Feed.
79
80 Args:
81 channel_id (str): The Channel to check for Message Feed.
82
83 Returns:
84 bool: True if the Channel has a Message Feed, False if it does not.
85 """
86
87 return not channel_id == Channels.cortana
88
89 @staticmethod
90 def max_action_title_length( # pylint: disable=unused-argument
91 channel_id: str,
92 ) -> int:
93 """Maximum length allowed for Action Titles.
94
95 Args:
96 channel_id (str): The Channel to determine Maximum Action Title Length.
97
98 Returns:
99 int: The total number of characters allowed for an Action Title on a specific Channel.
100 """
101
102 return 20
103
104 @staticmethod
105 def get_channel_id(turn_context: TurnContext) -> str:
106 """Get the Channel Id from the current Activity on the Turn Context.
107
108 Args:
109 turn_context (TurnContext): The Turn Context to retrieve the Activity's Channel Id from.
110
111 Returns:
112 str: The Channel Id from the Turn Context's Activity.
113 """
114
115 if turn_context.activity.channel_id is None:
116 return ""
117
118 return turn_context.activity.channel_id
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py b/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py
--- a/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py
+++ b/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py
@@ -32,7 +32,6 @@
# https://dev.kik.com/#/docs/messaging#text-response-object
Channels.kik: 20,
Channels.telegram: 100,
- Channels.slack: 100,
Channels.emulator: 100,
Channels.direct_line: 100,
Channels.webchat: 100,
|
{"golden_diff": "diff --git a/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py b/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py\n--- a/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py\n+++ b/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py\n@@ -32,7 +32,6 @@\n # https://dev.kik.com/#/docs/messaging#text-response-object\n Channels.kik: 20,\n Channels.telegram: 100,\n- Channels.slack: 100,\n Channels.emulator: 100,\n Channels.direct_line: 100,\n Channels.webchat: 100,\n", "issue": "[PORT] Remove Slack from the list of channels that support Suggested Actions\n> Port this change from botbuilder-dotnet/master branch:\nhttps://github.com/microsoft/botbuilder-dotnet/pull/4177\n\nFixes #2291 \r\n\r\nRemoving Slack from the list of channels that do not support suggested actions.\n\n\r\n# Changed projects\r\n* Microsoft.Bot.Builder.Dialogs\r\n\r\n\r\n\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom botbuilder.core import TurnContext\nfrom botframework.connector import Channels\n\n\nclass Channel:\n \"\"\"\n Methods for determining channel-specific functionality.\n \"\"\"\n\n @staticmethod\n def supports_suggested_actions(channel_id: str, button_cnt: int = 100) -> bool:\n \"\"\"Determine if a number of Suggested Actions are supported by a Channel.\n\n Args:\n channel_id (str): The Channel to check the if Suggested Actions are supported in.\n button_cnt (int, optional): Defaults to 100. The number of Suggested Actions to check for the Channel.\n\n Returns:\n bool: True if the Channel supports the button_cnt total Suggested Actions, False if the Channel does not\n support that number of Suggested Actions.\n \"\"\"\n\n max_actions = {\n # https://developers.facebook.com/docs/messenger-platform/send-messages/quick-replies\n Channels.facebook: 10,\n Channels.skype: 10,\n # https://developers.line.biz/en/reference/messaging-api/#items-object\n Channels.line: 13,\n # https://dev.kik.com/#/docs/messaging#text-response-object\n Channels.kik: 20,\n Channels.telegram: 100,\n Channels.slack: 100,\n Channels.emulator: 100,\n Channels.direct_line: 100,\n Channels.webchat: 100,\n }\n return (\n button_cnt <= max_actions[channel_id]\n if channel_id in max_actions\n else False\n )\n\n @staticmethod\n def supports_card_actions(channel_id: str, button_cnt: int = 100) -> bool:\n \"\"\"Determine if a number of Card Actions are supported by a Channel.\n\n Args:\n channel_id (str): The Channel to check if the Card Actions are supported in.\n button_cnt (int, optional): Defaults to 100. 
The number of Card Actions to check for the Channel.\n\n Returns:\n bool: True if the Channel supports the button_cnt total Card Actions, False if the Channel does not support\n that number of Card Actions.\n \"\"\"\n\n max_actions = {\n Channels.facebook: 3,\n Channels.skype: 3,\n Channels.ms_teams: 3,\n Channels.line: 99,\n Channels.slack: 100,\n Channels.emulator: 100,\n Channels.direct_line: 100,\n Channels.webchat: 100,\n Channels.cortana: 100,\n }\n return (\n button_cnt <= max_actions[channel_id]\n if channel_id in max_actions\n else False\n )\n\n @staticmethod\n def has_message_feed(channel_id: str) -> bool:\n \"\"\"Determine if a Channel has a Message Feed.\n\n Args:\n channel_id (str): The Channel to check for Message Feed.\n\n Returns:\n bool: True if the Channel has a Message Feed, False if it does not.\n \"\"\"\n\n return not channel_id == Channels.cortana\n\n @staticmethod\n def max_action_title_length( # pylint: disable=unused-argument\n channel_id: str,\n ) -> int:\n \"\"\"Maximum length allowed for Action Titles.\n\n Args:\n channel_id (str): The Channel to determine Maximum Action Title Length.\n\n Returns:\n int: The total number of characters allowed for an Action Title on a specific Channel.\n \"\"\"\n\n return 20\n\n @staticmethod\n def get_channel_id(turn_context: TurnContext) -> str:\n \"\"\"Get the Channel Id from the current Activity on the Turn Context.\n\n Args:\n turn_context (TurnContext): The Turn Context to retrieve the Activity's Channel Id from.\n\n Returns:\n str: The Channel Id from the Turn Context's Activity.\n \"\"\"\n\n if turn_context.activity.channel_id is None:\n return \"\"\n\n return turn_context.activity.channel_id\n", "path": "libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom botbuilder.core import TurnContext\nfrom botframework.connector import Channels\n\n\nclass Channel:\n \"\"\"\n Methods for determining channel-specific functionality.\n \"\"\"\n\n @staticmethod\n def supports_suggested_actions(channel_id: str, button_cnt: int = 100) -> bool:\n \"\"\"Determine if a number of Suggested Actions are supported by a Channel.\n\n Args:\n channel_id (str): The Channel to check the if Suggested Actions are supported in.\n button_cnt (int, optional): Defaults to 100. The number of Suggested Actions to check for the Channel.\n\n Returns:\n bool: True if the Channel supports the button_cnt total Suggested Actions, False if the Channel does not\n support that number of Suggested Actions.\n \"\"\"\n\n max_actions = {\n # https://developers.facebook.com/docs/messenger-platform/send-messages/quick-replies\n Channels.facebook: 10,\n Channels.skype: 10,\n # https://developers.line.biz/en/reference/messaging-api/#items-object\n Channels.line: 13,\n # https://dev.kik.com/#/docs/messaging#text-response-object\n Channels.kik: 20,\n Channels.telegram: 100,\n Channels.emulator: 100,\n Channels.direct_line: 100,\n Channels.webchat: 100,\n }\n return (\n button_cnt <= max_actions[channel_id]\n if channel_id in max_actions\n else False\n )\n\n @staticmethod\n def supports_card_actions(channel_id: str, button_cnt: int = 100) -> bool:\n \"\"\"Determine if a number of Card Actions are supported by a Channel.\n\n Args:\n channel_id (str): The Channel to check if the Card Actions are supported in.\n button_cnt (int, optional): Defaults to 100. 
The number of Card Actions to check for the Channel.\n\n Returns:\n bool: True if the Channel supports the button_cnt total Card Actions, False if the Channel does not support\n that number of Card Actions.\n \"\"\"\n\n max_actions = {\n Channels.facebook: 3,\n Channels.skype: 3,\n Channels.ms_teams: 3,\n Channels.line: 99,\n Channels.slack: 100,\n Channels.emulator: 100,\n Channels.direct_line: 100,\n Channels.webchat: 100,\n Channels.cortana: 100,\n }\n return (\n button_cnt <= max_actions[channel_id]\n if channel_id in max_actions\n else False\n )\n\n @staticmethod\n def has_message_feed(channel_id: str) -> bool:\n \"\"\"Determine if a Channel has a Message Feed.\n\n Args:\n channel_id (str): The Channel to check for Message Feed.\n\n Returns:\n bool: True if the Channel has a Message Feed, False if it does not.\n \"\"\"\n\n return not channel_id == Channels.cortana\n\n @staticmethod\n def max_action_title_length( # pylint: disable=unused-argument\n channel_id: str,\n ) -> int:\n \"\"\"Maximum length allowed for Action Titles.\n\n Args:\n channel_id (str): The Channel to determine Maximum Action Title Length.\n\n Returns:\n int: The total number of characters allowed for an Action Title on a specific Channel.\n \"\"\"\n\n return 20\n\n @staticmethod\n def get_channel_id(turn_context: TurnContext) -> str:\n \"\"\"Get the Channel Id from the current Activity on the Turn Context.\n\n Args:\n turn_context (TurnContext): The Turn Context to retrieve the Activity's Channel Id from.\n\n Returns:\n str: The Channel Id from the Turn Context's Activity.\n \"\"\"\n\n if turn_context.activity.channel_id is None:\n return \"\"\n\n return turn_context.activity.channel_id\n", "path": "libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py"}]}
| 1,475 | 165 |
gh_patches_debug_39438
|
rasdani/github-patches
|
git_diff
|
aws-powertools__powertools-lambda-python-2004
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`BaseSegment` defines `put_annotation` as having return type `NoReturn`
### Expected Behaviour
The return type should be `None`. `NoReturn` means that the function never returns (e.g. it always throws an exception) and makes type checkers (e.g. mypy) mark all code following `put_annotation` as unreachable.
Also applies to `put_metadata`, `patch`, and `patch_all`.
### Current Behaviour
These methods break type checking in any module where they're used, because all code following a call to e.g. `put_annotation` is marked as unreachable.
### Code snippet
```python
with Tracer().provider.in_subsegment('test') as subsegment:
subsegment.put_metadata('x', 1)
print("hi") # marked as unreachable
```
### Possible Solution
Change the return type to `None`.
### Steps to Reproduce
Use the code snippet above and run `mypy` on the file.
### AWS Lambda Powertools for Python version
latest
### AWS Lambda function runtime
3.9
### Packaging format used
PyPi
### Debugging logs
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aws_lambda_powertools/tracing/base.py`
Content:
```
1 import abc
2 import numbers
3 import traceback
4 from contextlib import contextmanager
5 from typing import Any, Generator, List, NoReturn, Optional, Sequence, Union
6
7
8 class BaseSegment(abc.ABC):
9 """Holds common properties and methods on segment and subsegment."""
10
11 @abc.abstractmethod
12 def close(self, end_time: Optional[int] = None):
13 """Close the trace entity by setting `end_time`
14 and flip the in progress flag to False.
15
16 Parameters
17 ----------
18 end_time: int
19 Time in epoch seconds, by default current time will be used.
20 """
21
22 @abc.abstractmethod
23 def add_subsegment(self, subsegment: Any):
24 """Add input subsegment as a child subsegment."""
25
26 @abc.abstractmethod
27 def remove_subsegment(self, subsegment: Any):
28 """Remove input subsegment from child subsegments."""
29
30 @abc.abstractmethod
31 def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> NoReturn:
32 """Annotate segment or subsegment with a key-value pair.
33
34 Note: Annotations will be indexed for later search query.
35
36 Parameters
37 ----------
38 key: str
39 Metadata key
40 value: Union[str, numbers.Number, bool]
41 Annotation value
42 """
43
44 @abc.abstractmethod
45 def put_metadata(self, key: str, value: Any, namespace: str = "default") -> NoReturn:
46 """Add metadata to segment or subsegment. Metadata is not indexed
47 but can be later retrieved by BatchGetTraces API.
48
49 Parameters
50 ----------
51 key: str
52 Metadata key
53 value: Any
54 Any object that can be serialized into a JSON string
55 namespace: Set[str]
56 Metadata namespace, by default 'default'
57 """
58
59 @abc.abstractmethod
60 def add_exception(self, exception: BaseException, stack: List[traceback.StackSummary], remote: bool = False):
61 """Add an exception to trace entities.
62
63 Parameters
64 ----------
65 exception: Exception
66 Caught exception
67 stack: List[traceback.StackSummary]
68 List of traceback summaries
69
70 Output from `traceback.extract_stack()`.
71 remote: bool
72 Whether it's a client error (False) or downstream service error (True), by default False
73 """
74
75
76 class BaseProvider(abc.ABC):
77 @abc.abstractmethod
78 @contextmanager
79 def in_subsegment(self, name=None, **kwargs) -> Generator[BaseSegment, None, None]:
80 """Return a subsegment context manger.
81
82 Parameters
83 ----------
84 name: str
85 Subsegment name
86 kwargs: Optional[dict]
87 Optional parameters to be propagated to segment
88 """
89
90 @abc.abstractmethod
91 @contextmanager
92 def in_subsegment_async(self, name=None, **kwargs) -> Generator[BaseSegment, None, None]:
93 """Return a subsegment async context manger.
94
95 Parameters
96 ----------
97 name: str
98 Subsegment name
99 kwargs: Optional[dict]
100 Optional parameters to be propagated to segment
101 """
102
103 @abc.abstractmethod
104 def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> NoReturn:
105 """Annotate current active trace entity with a key-value pair.
106
107 Note: Annotations will be indexed for later search query.
108
109 Parameters
110 ----------
111 key: str
112 Metadata key
113 value: Union[str, numbers.Number, bool]
114 Annotation value
115 """
116
117 @abc.abstractmethod
118 def put_metadata(self, key: str, value: Any, namespace: str = "default") -> NoReturn:
119 """Add metadata to the current active trace entity.
120
121 Note: Metadata is not indexed but can be later retrieved by BatchGetTraces API.
122
123 Parameters
124 ----------
125 key: str
126 Metadata key
127 value: Any
128 Any object that can be serialized into a JSON string
129 namespace: Set[str]
130 Metadata namespace, by default 'default'
131 """
132
133 @abc.abstractmethod
134 def patch(self, modules: Sequence[str]) -> NoReturn:
135 """Instrument a set of supported libraries
136
137 Parameters
138 ----------
139 modules: Set[str]
140 Set of modules to be patched
141 """
142
143 @abc.abstractmethod
144 def patch_all(self) -> NoReturn:
145 """Instrument all supported libraries"""
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/aws_lambda_powertools/tracing/base.py b/aws_lambda_powertools/tracing/base.py
--- a/aws_lambda_powertools/tracing/base.py
+++ b/aws_lambda_powertools/tracing/base.py
@@ -2,7 +2,7 @@
import numbers
import traceback
from contextlib import contextmanager
-from typing import Any, Generator, List, NoReturn, Optional, Sequence, Union
+from typing import Any, Generator, List, Optional, Sequence, Union
class BaseSegment(abc.ABC):
@@ -28,7 +28,7 @@
"""Remove input subsegment from child subsegments."""
@abc.abstractmethod
- def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> NoReturn:
+ def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> None:
"""Annotate segment or subsegment with a key-value pair.
Note: Annotations will be indexed for later search query.
@@ -42,7 +42,7 @@
"""
@abc.abstractmethod
- def put_metadata(self, key: str, value: Any, namespace: str = "default") -> NoReturn:
+ def put_metadata(self, key: str, value: Any, namespace: str = "default") -> None:
"""Add metadata to segment or subsegment. Metadata is not indexed
but can be later retrieved by BatchGetTraces API.
@@ -101,7 +101,7 @@
"""
@abc.abstractmethod
- def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> NoReturn:
+ def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> None:
"""Annotate current active trace entity with a key-value pair.
Note: Annotations will be indexed for later search query.
@@ -115,7 +115,7 @@
"""
@abc.abstractmethod
- def put_metadata(self, key: str, value: Any, namespace: str = "default") -> NoReturn:
+ def put_metadata(self, key: str, value: Any, namespace: str = "default") -> None:
"""Add metadata to the current active trace entity.
Note: Metadata is not indexed but can be later retrieved by BatchGetTraces API.
@@ -131,7 +131,7 @@
"""
@abc.abstractmethod
- def patch(self, modules: Sequence[str]) -> NoReturn:
+ def patch(self, modules: Sequence[str]) -> None:
"""Instrument a set of supported libraries
Parameters
@@ -141,5 +141,5 @@
"""
@abc.abstractmethod
- def patch_all(self) -> NoReturn:
+ def patch_all(self) -> None:
"""Instrument all supported libraries"""
|
{"golden_diff": "diff --git a/aws_lambda_powertools/tracing/base.py b/aws_lambda_powertools/tracing/base.py\n--- a/aws_lambda_powertools/tracing/base.py\n+++ b/aws_lambda_powertools/tracing/base.py\n@@ -2,7 +2,7 @@\n import numbers\n import traceback\n from contextlib import contextmanager\n-from typing import Any, Generator, List, NoReturn, Optional, Sequence, Union\n+from typing import Any, Generator, List, Optional, Sequence, Union\n \n \n class BaseSegment(abc.ABC):\n@@ -28,7 +28,7 @@\n \"\"\"Remove input subsegment from child subsegments.\"\"\"\n \n @abc.abstractmethod\n- def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> NoReturn:\n+ def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> None:\n \"\"\"Annotate segment or subsegment with a key-value pair.\n \n Note: Annotations will be indexed for later search query.\n@@ -42,7 +42,7 @@\n \"\"\"\n \n @abc.abstractmethod\n- def put_metadata(self, key: str, value: Any, namespace: str = \"default\") -> NoReturn:\n+ def put_metadata(self, key: str, value: Any, namespace: str = \"default\") -> None:\n \"\"\"Add metadata to segment or subsegment. Metadata is not indexed\n but can be later retrieved by BatchGetTraces API.\n \n@@ -101,7 +101,7 @@\n \"\"\"\n \n @abc.abstractmethod\n- def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> NoReturn:\n+ def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> None:\n \"\"\"Annotate current active trace entity with a key-value pair.\n \n Note: Annotations will be indexed for later search query.\n@@ -115,7 +115,7 @@\n \"\"\"\n \n @abc.abstractmethod\n- def put_metadata(self, key: str, value: Any, namespace: str = \"default\") -> NoReturn:\n+ def put_metadata(self, key: str, value: Any, namespace: str = \"default\") -> None:\n \"\"\"Add metadata to the current active trace entity.\n \n Note: Metadata is not indexed but can be later retrieved by BatchGetTraces API.\n@@ -131,7 +131,7 @@\n \"\"\"\n \n @abc.abstractmethod\n- def patch(self, modules: Sequence[str]) -> NoReturn:\n+ def patch(self, modules: Sequence[str]) -> None:\n \"\"\"Instrument a set of supported libraries\n \n Parameters\n@@ -141,5 +141,5 @@\n \"\"\"\n \n @abc.abstractmethod\n- def patch_all(self) -> NoReturn:\n+ def patch_all(self) -> None:\n \"\"\"Instrument all supported libraries\"\"\"\n", "issue": "`BaseSegment` defines `put_annotation` as having return type `NoReturn`\n### Expected Behaviour\r\n\r\nThe return type should be `None`. `NoReturn` means that the function never returns (e.g. it always throws an exception) and makes type checkers (e.g. mypy) mark all code following `put_annotation` as unreachable.\r\n\r\nAlso applies to `put_metadata`, `patch`, and `patch_all`.\r\n\r\n\r\n### Current Behaviour\r\n\r\nThese methods breaks type checking in any modules they're used because all code following a call to e.g. 
`put_annotation` is marked as unreachable.\r\n\r\n### Code snippet\r\n\r\n```python\r\nwith Tracer().provider.in_subsegment('test') as subsegment:\r\n subsegment.put_metadata('x', 1)\r\n print(\"hi\") # marked as unreachable\r\n```\r\n\r\n\r\n### Possible Solution\r\n\r\nChange the return type to `None`.\r\n\r\n### Steps to Reproduce\r\n\r\nUse the code snippet above and run `mypy` on the file.\r\n\r\n### AWS Lambda Powertools for Python version\r\n\r\nlatest\r\n\r\n### AWS Lambda function runtime\r\n\r\n3.9\r\n\r\n### Packaging format used\r\n\r\nPyPi\r\n\r\n### Debugging logs\r\n\r\n_No response_\n", "before_files": [{"content": "import abc\nimport numbers\nimport traceback\nfrom contextlib import contextmanager\nfrom typing import Any, Generator, List, NoReturn, Optional, Sequence, Union\n\n\nclass BaseSegment(abc.ABC):\n \"\"\"Holds common properties and methods on segment and subsegment.\"\"\"\n\n @abc.abstractmethod\n def close(self, end_time: Optional[int] = None):\n \"\"\"Close the trace entity by setting `end_time`\n and flip the in progress flag to False.\n\n Parameters\n ----------\n end_time: int\n Time in epoch seconds, by default current time will be used.\n \"\"\"\n\n @abc.abstractmethod\n def add_subsegment(self, subsegment: Any):\n \"\"\"Add input subsegment as a child subsegment.\"\"\"\n\n @abc.abstractmethod\n def remove_subsegment(self, subsegment: Any):\n \"\"\"Remove input subsegment from child subsegments.\"\"\"\n\n @abc.abstractmethod\n def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> NoReturn:\n \"\"\"Annotate segment or subsegment with a key-value pair.\n\n Note: Annotations will be indexed for later search query.\n\n Parameters\n ----------\n key: str\n Metadata key\n value: Union[str, numbers.Number, bool]\n Annotation value\n \"\"\"\n\n @abc.abstractmethod\n def put_metadata(self, key: str, value: Any, namespace: str = \"default\") -> NoReturn:\n \"\"\"Add metadata to segment or subsegment. 
Metadata is not indexed\n but can be later retrieved by BatchGetTraces API.\n\n Parameters\n ----------\n key: str\n Metadata key\n value: Any\n Any object that can be serialized into a JSON string\n namespace: Set[str]\n Metadata namespace, by default 'default'\n \"\"\"\n\n @abc.abstractmethod\n def add_exception(self, exception: BaseException, stack: List[traceback.StackSummary], remote: bool = False):\n \"\"\"Add an exception to trace entities.\n\n Parameters\n ----------\n exception: Exception\n Caught exception\n stack: List[traceback.StackSummary]\n List of traceback summaries\n\n Output from `traceback.extract_stack()`.\n remote: bool\n Whether it's a client error (False) or downstream service error (True), by default False\n \"\"\"\n\n\nclass BaseProvider(abc.ABC):\n @abc.abstractmethod\n @contextmanager\n def in_subsegment(self, name=None, **kwargs) -> Generator[BaseSegment, None, None]:\n \"\"\"Return a subsegment context manger.\n\n Parameters\n ----------\n name: str\n Subsegment name\n kwargs: Optional[dict]\n Optional parameters to be propagated to segment\n \"\"\"\n\n @abc.abstractmethod\n @contextmanager\n def in_subsegment_async(self, name=None, **kwargs) -> Generator[BaseSegment, None, None]:\n \"\"\"Return a subsegment async context manger.\n\n Parameters\n ----------\n name: str\n Subsegment name\n kwargs: Optional[dict]\n Optional parameters to be propagated to segment\n \"\"\"\n\n @abc.abstractmethod\n def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> NoReturn:\n \"\"\"Annotate current active trace entity with a key-value pair.\n\n Note: Annotations will be indexed for later search query.\n\n Parameters\n ----------\n key: str\n Metadata key\n value: Union[str, numbers.Number, bool]\n Annotation value\n \"\"\"\n\n @abc.abstractmethod\n def put_metadata(self, key: str, value: Any, namespace: str = \"default\") -> NoReturn:\n \"\"\"Add metadata to the current active trace entity.\n\n Note: Metadata is not indexed but can be later retrieved by BatchGetTraces API.\n\n Parameters\n ----------\n key: str\n Metadata key\n value: Any\n Any object that can be serialized into a JSON string\n namespace: Set[str]\n Metadata namespace, by default 'default'\n \"\"\"\n\n @abc.abstractmethod\n def patch(self, modules: Sequence[str]) -> NoReturn:\n \"\"\"Instrument a set of supported libraries\n\n Parameters\n ----------\n modules: Set[str]\n Set of modules to be patched\n \"\"\"\n\n @abc.abstractmethod\n def patch_all(self) -> NoReturn:\n \"\"\"Instrument all supported libraries\"\"\"\n", "path": "aws_lambda_powertools/tracing/base.py"}], "after_files": [{"content": "import abc\nimport numbers\nimport traceback\nfrom contextlib import contextmanager\nfrom typing import Any, Generator, List, Optional, Sequence, Union\n\n\nclass BaseSegment(abc.ABC):\n \"\"\"Holds common properties and methods on segment and subsegment.\"\"\"\n\n @abc.abstractmethod\n def close(self, end_time: Optional[int] = None):\n \"\"\"Close the trace entity by setting `end_time`\n and flip the in progress flag to False.\n\n Parameters\n ----------\n end_time: int\n Time in epoch seconds, by default current time will be used.\n \"\"\"\n\n @abc.abstractmethod\n def add_subsegment(self, subsegment: Any):\n \"\"\"Add input subsegment as a child subsegment.\"\"\"\n\n @abc.abstractmethod\n def remove_subsegment(self, subsegment: Any):\n \"\"\"Remove input subsegment from child subsegments.\"\"\"\n\n @abc.abstractmethod\n def put_annotation(self, key: str, value: Union[str, numbers.Number, 
bool]) -> None:\n \"\"\"Annotate segment or subsegment with a key-value pair.\n\n Note: Annotations will be indexed for later search query.\n\n Parameters\n ----------\n key: str\n Metadata key\n value: Union[str, numbers.Number, bool]\n Annotation value\n \"\"\"\n\n @abc.abstractmethod\n def put_metadata(self, key: str, value: Any, namespace: str = \"default\") -> None:\n \"\"\"Add metadata to segment or subsegment. Metadata is not indexed\n but can be later retrieved by BatchGetTraces API.\n\n Parameters\n ----------\n key: str\n Metadata key\n value: Any\n Any object that can be serialized into a JSON string\n namespace: Set[str]\n Metadata namespace, by default 'default'\n \"\"\"\n\n @abc.abstractmethod\n def add_exception(self, exception: BaseException, stack: List[traceback.StackSummary], remote: bool = False):\n \"\"\"Add an exception to trace entities.\n\n Parameters\n ----------\n exception: Exception\n Caught exception\n stack: List[traceback.StackSummary]\n List of traceback summaries\n\n Output from `traceback.extract_stack()`.\n remote: bool\n Whether it's a client error (False) or downstream service error (True), by default False\n \"\"\"\n\n\nclass BaseProvider(abc.ABC):\n @abc.abstractmethod\n @contextmanager\n def in_subsegment(self, name=None, **kwargs) -> Generator[BaseSegment, None, None]:\n \"\"\"Return a subsegment context manger.\n\n Parameters\n ----------\n name: str\n Subsegment name\n kwargs: Optional[dict]\n Optional parameters to be propagated to segment\n \"\"\"\n\n @abc.abstractmethod\n @contextmanager\n def in_subsegment_async(self, name=None, **kwargs) -> Generator[BaseSegment, None, None]:\n \"\"\"Return a subsegment async context manger.\n\n Parameters\n ----------\n name: str\n Subsegment name\n kwargs: Optional[dict]\n Optional parameters to be propagated to segment\n \"\"\"\n\n @abc.abstractmethod\n def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> None:\n \"\"\"Annotate current active trace entity with a key-value pair.\n\n Note: Annotations will be indexed for later search query.\n\n Parameters\n ----------\n key: str\n Metadata key\n value: Union[str, numbers.Number, bool]\n Annotation value\n \"\"\"\n\n @abc.abstractmethod\n def put_metadata(self, key: str, value: Any, namespace: str = \"default\") -> None:\n \"\"\"Add metadata to the current active trace entity.\n\n Note: Metadata is not indexed but can be later retrieved by BatchGetTraces API.\n\n Parameters\n ----------\n key: str\n Metadata key\n value: Any\n Any object that can be serialized into a JSON string\n namespace: Set[str]\n Metadata namespace, by default 'default'\n \"\"\"\n\n @abc.abstractmethod\n def patch(self, modules: Sequence[str]) -> None:\n \"\"\"Instrument a set of supported libraries\n\n Parameters\n ----------\n modules: Set[str]\n Set of modules to be patched\n \"\"\"\n\n @abc.abstractmethod\n def patch_all(self) -> None:\n \"\"\"Instrument all supported libraries\"\"\"\n", "path": "aws_lambda_powertools/tracing/base.py"}]}
| 1,781 | 638 |
gh_patches_debug_25290
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-166
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exit gracefully when misconfigured
Currently if `mkdocs.yaml` does not exist or if `pages`/`project_name` are not set, we'll exit with an exception. Really we should print a simple error to stderr and exit with an appropriate error code.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/config.py`
Content:
```
1 # coding: utf-8
2
3 from mkdocs import utils
4 from mkdocs.compat import urlparse
5 import os
6 import yaml
7
8 DEFAULT_CONFIG = {
9 'site_name': None,
10 'pages': None,
11
12 'site_url': None,
13 'site_description': None,
14 'site_author': None,
15 'site_favicon': None,
16
17 'theme': 'mkdocs',
18 'docs_dir': 'docs',
19 'site_dir': 'site',
20 'theme_dir': None,
21
22 'copyright': None,
23 'google-analytics': None,
24
25 # The address on which to serve the livereloading docs server.
26 'dev_addr': '127.0.0.1:8000',
27
28 # If `True`, use `<page_name>/index.hmtl` style files with hyperlinks to the directory.
29 # If `False`, use `<page_name>.html style file with hyperlinks to the file.
30 # True generates nicer URLs, but False is useful if browsing the output on a filesystem.
31 'use_directory_urls': True,
32
33 # Specify a link to the project source repo to be included
34 # in the documentation pages.
35 'repo_url': None,
36
37 # A name to use for the link to the project source repo.
38 # Default: If repo_url is unset then None, otherwise
39 # "GitHub" or "Bitbucket" for known url or Hostname for unknown urls.
40 'repo_name': None,
41
42 # Specify which css or javascript files from the docs
43 # directionary should be additionally included in the site.
44 # Default: List of all .css and .js files in the docs dir.
45 'extra_css': None,
46 'extra_javascript': None,
47
48 # Determine if the site should include the nav and next/prev elements.
49 # Default: True if the site has more than one page, False otherwise.
50 'include_nav': None,
51 'include_next_prev': None,
52
53 # PyMarkdown extension names.
54 'markdown_extensions': (),
55
56 # Determine if the site should generate a json search index and include
57 # search elements in the theme. - TODO
58 'include_search': False,
59
60 # Determine if the site should include a 404.html page.
61 # TODO: Implment this. Make this None, have it True if a 404.html
62 # template exists in the theme or docs dir.
63 'include_404': False,
64
65 # Determine if the site should include a sitemap.xml page.
66 # TODO: Implement this. Make this None, have it True if a sitemap.xml
67 # template exists in the theme or docs dir.
68 'include_sitemap': False,
69 }
70
71
72 def load_config(filename='mkdocs.yml', options=None):
73 options = options or {}
74 if 'config' in options:
75 filename = options['config']
76 assert os.path.exists(filename), "Config file '%s' does not exist." % filename
77 with open(filename, 'r') as fp:
78 user_config = yaml.load(fp)
79 user_config.update(options)
80 return validate_config(user_config)
81
82
83 def validate_config(user_config):
84 config = DEFAULT_CONFIG.copy()
85 config.update(user_config)
86
87 assert config['site_name'], "Config must contain 'site_name' setting."
88
89 # If not specified, then the 'pages' config simply includes all
90 # markdown files in the docs dir, without generating any header items
91 # for them.
92 pages = []
93 extra_css = []
94 extra_javascript = []
95 for (dirpath, dirnames, filenames) in os.walk(config['docs_dir']):
96 for filename in sorted(filenames):
97 fullpath = os.path.join(dirpath, filename)
98 relpath = os.path.relpath(fullpath, config['docs_dir'])
99
100 if utils.is_markdown_file(filename):
101 # index pages should always be the first listed page.
102 if os.path.splitext(relpath)[0] == 'index':
103 pages.insert(0, relpath)
104 else:
105 pages.append(relpath)
106 elif utils.is_css_file(filename):
107 extra_css.append(relpath)
108 elif utils.is_javascript_file(filename):
109 extra_javascript.append(relpath)
110
111 if config['pages'] is None:
112 config['pages'] = pages
113
114 if config['extra_css'] is None:
115 config['extra_css'] = extra_css
116
117 if config['extra_javascript'] is None:
118 config['extra_javascript'] = extra_javascript
119
120 package_dir = os.path.dirname(__file__)
121 theme_dir = [os.path.join(package_dir, 'themes', config['theme'])]
122
123 if config['theme_dir'] is not None:
124 theme_dir.insert(0, config['theme_dir'])
125
126 config['theme_dir'] = theme_dir
127
128 if config['repo_url'] is not None and config['repo_name'] is None:
129 repo_host = urlparse(config['repo_url']).netloc.lower()
130 if repo_host == 'github.com':
131 config['repo_name'] = 'GitHub'
132 elif repo_host == 'bitbucket.com':
133 config['repo_name'] = 'Bitbucket'
134 else:
135 config['repo_name'] = repo_host.split('.')[0].title()
136
137 if config['include_next_prev'] is None:
138 config['include_next_prev'] = len(config['pages']) > 1
139
140 if config['include_nav'] is None:
141 config['include_nav'] = len(config['pages']) > 1
142
143 # To Do:
144
145 # The docs dir must exist.
146 # The theme dir must exist.
147 # Ensure 'theme' is one of 'mkdocs', 'readthedocs', 'custom'
148 # A homepage 'index' must exist.
149 # The theme 'base.html' file must exist.
150 # Cannot set repo_name without setting repo_url.
151 # Cannot set 'include_next_prev: true' when only one page exists.
152 # Cannot set 'include_nav: true' when only one page exists.
153 # Error if any config keys provided that are not in the DEFAULT_CONFIG.
154
155 return config
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mkdocs/config.py b/mkdocs/config.py
--- a/mkdocs/config.py
+++ b/mkdocs/config.py
@@ -2,7 +2,9 @@
from mkdocs import utils
from mkdocs.compat import urlparse
+import errno
import os
+import sys
import yaml
DEFAULT_CONFIG = {
@@ -73,7 +75,9 @@
options = options or {}
if 'config' in options:
filename = options['config']
- assert os.path.exists(filename), "Config file '%s' does not exist." % filename
+ if not os.path.exists(filename):
+ sys.stderr.write("Config file '%s' does not exist." % filename)
+ sys.exit(errno.ENOENT)
with open(filename, 'r') as fp:
user_config = yaml.load(fp)
user_config.update(options)
@@ -84,7 +88,9 @@
config = DEFAULT_CONFIG.copy()
config.update(user_config)
- assert config['site_name'], "Config must contain 'site_name' setting."
+ if not config['site_name']:
+ sys.stderr.write("Config must contain 'site_name' setting.")
+ sys.exit(errno.EINVAL)
# If not specified, then the 'pages' config simply includes all
# markdown files in the docs dir, without generating any header items
|
{"golden_diff": "diff --git a/mkdocs/config.py b/mkdocs/config.py\n--- a/mkdocs/config.py\n+++ b/mkdocs/config.py\n@@ -2,7 +2,9 @@\n \n from mkdocs import utils\n from mkdocs.compat import urlparse\n+import errno\n import os\n+import sys\n import yaml\n \n DEFAULT_CONFIG = {\n@@ -73,7 +75,9 @@\n options = options or {}\n if 'config' in options:\n filename = options['config']\n- assert os.path.exists(filename), \"Config file '%s' does not exist.\" % filename\n+ if not os.path.exists(filename):\n+ sys.stderr.write(\"Config file '%s' does not exist.\" % filename)\n+ sys.exit(errno.ENOENT)\n with open(filename, 'r') as fp:\n user_config = yaml.load(fp)\n user_config.update(options)\n@@ -84,7 +88,9 @@\n config = DEFAULT_CONFIG.copy()\n config.update(user_config)\n \n- assert config['site_name'], \"Config must contain 'site_name' setting.\"\n+ if not config['site_name']:\n+ sys.stderr.write(\"Config must contain 'site_name' setting.\")\n+ sys.exit(errno.EINVAL)\n \n # If not specified, then the 'pages' config simply includes all\n # markdown files in the docs dir, without generating any header items\n", "issue": "Exit gracefully when misconfigured\nCurrently if `mkdocs.yaml` does not exist or if `pages`/`project_name` are not set, we'll exit with an exception. Really we should print a simple error to stderr and exit with an appropriate error code.\n\n", "before_files": [{"content": "# coding: utf-8\n\nfrom mkdocs import utils\nfrom mkdocs.compat import urlparse\nimport os\nimport yaml\n\nDEFAULT_CONFIG = {\n 'site_name': None,\n 'pages': None,\n\n 'site_url': None,\n 'site_description': None,\n 'site_author': None,\n 'site_favicon': None,\n\n 'theme': 'mkdocs',\n 'docs_dir': 'docs',\n 'site_dir': 'site',\n 'theme_dir': None,\n\n 'copyright': None,\n 'google-analytics': None,\n\n # The address on which to serve the livereloading docs server.\n 'dev_addr': '127.0.0.1:8000',\n\n # If `True`, use `<page_name>/index.hmtl` style files with hyperlinks to the directory.\n # If `False`, use `<page_name>.html style file with hyperlinks to the file.\n # True generates nicer URLs, but False is useful if browsing the output on a filesystem.\n 'use_directory_urls': True,\n\n # Specify a link to the project source repo to be included\n # in the documentation pages.\n 'repo_url': None,\n\n # A name to use for the link to the project source repo.\n # Default: If repo_url is unset then None, otherwise\n # \"GitHub\" or \"Bitbucket\" for known url or Hostname for unknown urls.\n 'repo_name': None,\n\n # Specify which css or javascript files from the docs\n # directionary should be additionally included in the site.\n # Default: List of all .css and .js files in the docs dir.\n 'extra_css': None,\n 'extra_javascript': None,\n\n # Determine if the site should include the nav and next/prev elements.\n # Default: True if the site has more than one page, False otherwise.\n 'include_nav': None,\n 'include_next_prev': None,\n\n # PyMarkdown extension names.\n 'markdown_extensions': (),\n\n # Determine if the site should generate a json search index and include\n # search elements in the theme. - TODO\n 'include_search': False,\n\n # Determine if the site should include a 404.html page.\n # TODO: Implment this. Make this None, have it True if a 404.html\n # template exists in the theme or docs dir.\n 'include_404': False,\n\n # Determine if the site should include a sitemap.xml page.\n # TODO: Implement this. 
Make this None, have it True if a sitemap.xml\n # template exists in the theme or docs dir.\n 'include_sitemap': False,\n}\n\n\ndef load_config(filename='mkdocs.yml', options=None):\n options = options or {}\n if 'config' in options:\n filename = options['config']\n assert os.path.exists(filename), \"Config file '%s' does not exist.\" % filename\n with open(filename, 'r') as fp:\n user_config = yaml.load(fp)\n user_config.update(options)\n return validate_config(user_config)\n\n\ndef validate_config(user_config):\n config = DEFAULT_CONFIG.copy()\n config.update(user_config)\n\n assert config['site_name'], \"Config must contain 'site_name' setting.\"\n\n # If not specified, then the 'pages' config simply includes all\n # markdown files in the docs dir, without generating any header items\n # for them.\n pages = []\n extra_css = []\n extra_javascript = []\n for (dirpath, dirnames, filenames) in os.walk(config['docs_dir']):\n for filename in sorted(filenames):\n fullpath = os.path.join(dirpath, filename)\n relpath = os.path.relpath(fullpath, config['docs_dir'])\n\n if utils.is_markdown_file(filename):\n # index pages should always be the first listed page.\n if os.path.splitext(relpath)[0] == 'index':\n pages.insert(0, relpath)\n else:\n pages.append(relpath)\n elif utils.is_css_file(filename):\n extra_css.append(relpath)\n elif utils.is_javascript_file(filename):\n extra_javascript.append(relpath)\n\n if config['pages'] is None:\n config['pages'] = pages\n\n if config['extra_css'] is None:\n config['extra_css'] = extra_css\n\n if config['extra_javascript'] is None:\n config['extra_javascript'] = extra_javascript\n\n package_dir = os.path.dirname(__file__)\n theme_dir = [os.path.join(package_dir, 'themes', config['theme'])]\n\n if config['theme_dir'] is not None:\n theme_dir.insert(0, config['theme_dir'])\n\n config['theme_dir'] = theme_dir\n\n if config['repo_url'] is not None and config['repo_name'] is None:\n repo_host = urlparse(config['repo_url']).netloc.lower()\n if repo_host == 'github.com':\n config['repo_name'] = 'GitHub'\n elif repo_host == 'bitbucket.com':\n config['repo_name'] = 'Bitbucket'\n else:\n config['repo_name'] = repo_host.split('.')[0].title()\n\n if config['include_next_prev'] is None:\n config['include_next_prev'] = len(config['pages']) > 1\n\n if config['include_nav'] is None:\n config['include_nav'] = len(config['pages']) > 1\n\n # To Do:\n\n # The docs dir must exist.\n # The theme dir must exist.\n # Ensure 'theme' is one of 'mkdocs', 'readthedocs', 'custom'\n # A homepage 'index' must exist.\n # The theme 'base.html' file must exist.\n # Cannot set repo_name without setting repo_url.\n # Cannot set 'include_next_prev: true' when only one page exists.\n # Cannot set 'include_nav: true' when only one page exists.\n # Error if any config keys provided that are not in the DEFAULT_CONFIG.\n\n return config\n", "path": "mkdocs/config.py"}], "after_files": [{"content": "# coding: utf-8\n\nfrom mkdocs import utils\nfrom mkdocs.compat import urlparse\nimport errno\nimport os\nimport sys\nimport yaml\n\nDEFAULT_CONFIG = {\n 'site_name': None,\n 'pages': None,\n\n 'site_url': None,\n 'site_description': None,\n 'site_author': None,\n 'site_favicon': None,\n\n 'theme': 'mkdocs',\n 'docs_dir': 'docs',\n 'site_dir': 'site',\n 'theme_dir': None,\n\n 'copyright': None,\n 'google-analytics': None,\n\n # The address on which to serve the livereloading docs server.\n 'dev_addr': '127.0.0.1:8000',\n\n # If `True`, use `<page_name>/index.hmtl` style files with hyperlinks to the 
directory.\n # If `False`, use `<page_name>.html style file with hyperlinks to the file.\n # True generates nicer URLs, but False is useful if browsing the output on a filesystem.\n 'use_directory_urls': True,\n\n # Specify a link to the project source repo to be included\n # in the documentation pages.\n 'repo_url': None,\n\n # A name to use for the link to the project source repo.\n # Default: If repo_url is unset then None, otherwise\n # \"GitHub\" or \"Bitbucket\" for known url or Hostname for unknown urls.\n 'repo_name': None,\n\n # Specify which css or javascript files from the docs\n # directionary should be additionally included in the site.\n # Default: List of all .css and .js files in the docs dir.\n 'extra_css': None,\n 'extra_javascript': None,\n\n # Determine if the site should include the nav and next/prev elements.\n # Default: True if the site has more than one page, False otherwise.\n 'include_nav': None,\n 'include_next_prev': None,\n\n # PyMarkdown extension names.\n 'markdown_extensions': (),\n\n # Determine if the site should generate a json search index and include\n # search elements in the theme. - TODO\n 'include_search': False,\n\n # Determine if the site should include a 404.html page.\n # TODO: Implment this. Make this None, have it True if a 404.html\n # template exists in the theme or docs dir.\n 'include_404': False,\n\n # Determine if the site should include a sitemap.xml page.\n # TODO: Implement this. Make this None, have it True if a sitemap.xml\n # template exists in the theme or docs dir.\n 'include_sitemap': False,\n}\n\n\ndef load_config(filename='mkdocs.yml', options=None):\n options = options or {}\n if 'config' in options:\n filename = options['config']\n if not os.path.exists(filename):\n sys.stderr.write(\"Config file '%s' does not exist.\" % filename)\n sys.exit(errno.ENOENT)\n with open(filename, 'r') as fp:\n user_config = yaml.load(fp)\n user_config.update(options)\n return validate_config(user_config)\n\n\ndef validate_config(user_config):\n config = DEFAULT_CONFIG.copy()\n config.update(user_config)\n\n if not config['site_name']:\n sys.stderr.write(\"Config must contain 'site_name' setting.\")\n sys.exit(errno.EINVAL)\n\n # If not specified, then the 'pages' config simply includes all\n # markdown files in the docs dir, without generating any header items\n # for them.\n pages = []\n extra_css = []\n extra_javascript = []\n for (dirpath, dirnames, filenames) in os.walk(config['docs_dir']):\n for filename in sorted(filenames):\n fullpath = os.path.join(dirpath, filename)\n relpath = os.path.relpath(fullpath, config['docs_dir'])\n\n if utils.is_markdown_file(filename):\n # index pages should always be the first listed page.\n if os.path.splitext(relpath)[0] == 'index':\n pages.insert(0, relpath)\n else:\n pages.append(relpath)\n elif utils.is_css_file(filename):\n extra_css.append(relpath)\n elif utils.is_javascript_file(filename):\n extra_javascript.append(relpath)\n\n if config['pages'] is None:\n config['pages'] = pages\n\n if config['extra_css'] is None:\n config['extra_css'] = extra_css\n\n if config['extra_javascript'] is None:\n config['extra_javascript'] = extra_javascript\n\n package_dir = os.path.dirname(__file__)\n theme_dir = [os.path.join(package_dir, 'themes', config['theme'])]\n\n if config['theme_dir'] is not None:\n theme_dir.insert(0, config['theme_dir'])\n\n config['theme_dir'] = theme_dir\n\n if config['repo_url'] is not None and config['repo_name'] is None:\n repo_host = urlparse(config['repo_url']).netloc.lower()\n if 
repo_host == 'github.com':\n config['repo_name'] = 'GitHub'\n elif repo_host == 'bitbucket.com':\n config['repo_name'] = 'Bitbucket'\n else:\n config['repo_name'] = repo_host.split('.')[0].title()\n\n if config['include_next_prev'] is None:\n config['include_next_prev'] = len(config['pages']) > 1\n\n if config['include_nav'] is None:\n config['include_nav'] = len(config['pages']) > 1\n\n # To Do:\n\n # The docs dir must exist.\n # The theme dir must exist.\n # Ensure 'theme' is one of 'mkdocs', 'readthedocs', 'custom'\n # A homepage 'index' must exist.\n # The theme 'base.html' file must exist.\n # Cannot set repo_name without setting repo_url.\n # Cannot set 'include_next_prev: true' when only one page exists.\n # Cannot set 'include_nav: true' when only one page exists.\n # Error if any config keys provided that are not in the DEFAULT_CONFIG.\n\n return config\n", "path": "mkdocs/config.py"}]}
| 1,988 | 297 |
gh_patches_debug_33574
|
rasdani/github-patches
|
git_diff
|
getnikola__nikola-3501
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dispatch PANDOC_OPTIONS base on input extensions
<!--
Before creating an issue:
* make sure you are using an up-to-date version of Nikola
* search for existing issues that might be related
Describe your requested features as precisely as possible. -->
# Summary
Currently, `COMPILERS` only has a key `pandoc`, and `PANDOC_OPTIONS` is set for all the extensions specified there. The proposed feature simply suggests adding configurable dispatch logic to allow multiple extensions to call pandoc with different sets of `PANDOC_OPTIONS`.
# Example use case
My personal use for this would be to convert one of my sites that currently uses `.md` and `.markdown` to mean different target formats. Both will be converted to HTML, but one of them is a slideshow and the other is a normal "article"/post.
Another possible use case: if one wants to specify markdown extensions in pandoc, say `-t markdown+...-...`, then `COMPILERS['pandoc'] = ['.rst', '.md']` won't work with `PANDOC_OPTIONS = ['-t', 'markdown+...-...']`, because that would be the wrong input format for `.rst`.
# Suggested implementation
Probably the natural way is to promote
```py
PANDOC_OPTIONS: Union[list, Dict[str, list]]
```
where the keys are those in `COMPILERS['pandoc']`.
If it makes sense I can send a PR doing that. It seems that it only requires changes in <https://github.com/getnikola/nikola/blob/master/nikola/plugins/compile/pandoc.py#L52-L56>; is that true?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/compile/pandoc.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2021 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Page compiler plugin for pandoc.
28
29 You will need, of course, to install pandoc
30 """
31
32
33 import io
34 import os
35 import subprocess
36
37 from nikola.plugin_categories import PageCompiler
38 from nikola.utils import req_missing, makedirs, write_metadata
39
40
41 class CompilePandoc(PageCompiler):
42 """Compile markups into HTML using pandoc."""
43
44 name = "pandoc"
45 friendly_name = "pandoc"
46
47 def set_site(self, site):
48 """Set Nikola site."""
49 self.config_dependencies = [str(site.config['PANDOC_OPTIONS'])]
50 super().set_site(site)
51
52 def compile(self, source, dest, is_two_file=True, post=None, lang=None):
53 """Compile the source file into HTML and save as dest."""
54 makedirs(os.path.dirname(dest))
55 try:
56 subprocess.check_call(['pandoc', '-o', dest, source] + self.site.config['PANDOC_OPTIONS'])
57 with open(dest, 'r', encoding='utf-8-sig') as inf:
58 output, shortcode_deps = self.site.apply_shortcodes(inf.read())
59 with open(dest, 'w', encoding='utf-8') as outf:
60 outf.write(output)
61 if post is None:
62 if shortcode_deps:
63 self.logger.error(
64 "Cannot save dependencies for post {0} (post unknown)",
65 source)
66 else:
67 post._depfile[dest] += shortcode_deps
68 except OSError as e:
69 if e.strreror == 'No such file or directory':
70 req_missing(['pandoc'], 'build this site (compile with pandoc)', python=False)
71
72 def compile_string(self, data, source_path=None, is_two_file=True, post=None, lang=None):
73 """Compile into HTML strings."""
74 raise ValueError("Pandoc compiler does not support compile_string due to multiple output formats")
75
76 def create_post(self, path, **kw):
77 """Create a new post."""
78 content = kw.pop('content', None)
79 onefile = kw.pop('onefile', False)
80 # is_page is not used by create_post as of now.
81 kw.pop('is_page', False)
82 metadata = {}
83 metadata.update(self.default_metadata)
84 metadata.update(kw)
85 makedirs(os.path.dirname(path))
86 if not content.endswith('\n'):
87 content += '\n'
88 with io.open(path, "w+", encoding="utf8") as fd:
89 if onefile:
90 fd.write(write_metadata(metadata, comment_wrap=True, site=self.site, compiler=self))
91 fd.write(content)
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nikola/plugins/compile/pandoc.py b/nikola/plugins/compile/pandoc.py
--- a/nikola/plugins/compile/pandoc.py
+++ b/nikola/plugins/compile/pandoc.py
@@ -33,6 +33,8 @@
import io
import os
import subprocess
+from typing import List
+from pathlib import Path
from nikola.plugin_categories import PageCompiler
from nikola.utils import req_missing, makedirs, write_metadata
@@ -49,11 +51,30 @@
self.config_dependencies = [str(site.config['PANDOC_OPTIONS'])]
super().set_site(site)
+ def _get_pandoc_options(self, source: str) -> List[str]:
+ """Obtain pandoc args from config depending on type and file extensions."""
+ # Union[List[str], Dict[str, List[str]]]
+ config_options = self.site.config['PANDOC_OPTIONS']
+ type_ = type(config_options)
+ if type_ is list:
+ pandoc_options = config_options
+ elif type_ is dict:
+ ext = Path(source).suffix
+ try:
+ pandoc_options = config_options[ext]
+ except KeyError:
+ self.logger.warn('Setting PANDOC_OPTIONS to [], because extension {} is not defined in PANDOC_OPTIONS: {}.'.format(ext, config_options))
+ pandoc_options = []
+ else:
+ self.logger.warn('Setting PANDOC_OPTIONS to [], because PANDOC_OPTIONS is expected to be of type Union[List[str], Dict[str, List[str]]] but this is not: {}'.format(config_options))
+ pandoc_options = []
+ return pandoc_options
+
def compile(self, source, dest, is_two_file=True, post=None, lang=None):
"""Compile the source file into HTML and save as dest."""
makedirs(os.path.dirname(dest))
try:
- subprocess.check_call(['pandoc', '-o', dest, source] + self.site.config['PANDOC_OPTIONS'])
+ subprocess.check_call(['pandoc', '-o', dest, source] + self._get_pandoc_options(source))
with open(dest, 'r', encoding='utf-8-sig') as inf:
output, shortcode_deps = self.site.apply_shortcodes(inf.read())
with open(dest, 'w', encoding='utf-8') as outf:
|
{"golden_diff": "diff --git a/nikola/plugins/compile/pandoc.py b/nikola/plugins/compile/pandoc.py\n--- a/nikola/plugins/compile/pandoc.py\n+++ b/nikola/plugins/compile/pandoc.py\n@@ -33,6 +33,8 @@\n import io\n import os\n import subprocess\n+from typing import List\n+from pathlib import Path\n \n from nikola.plugin_categories import PageCompiler\n from nikola.utils import req_missing, makedirs, write_metadata\n@@ -49,11 +51,30 @@\n self.config_dependencies = [str(site.config['PANDOC_OPTIONS'])]\n super().set_site(site)\n \n+ def _get_pandoc_options(self, source: str) -> List[str]:\n+ \"\"\"Obtain pandoc args from config depending on type and file extensions.\"\"\"\n+ # Union[List[str], Dict[str, List[str]]]\n+ config_options = self.site.config['PANDOC_OPTIONS']\n+ type_ = type(config_options)\n+ if type_ is list:\n+ pandoc_options = config_options\n+ elif type_ is dict:\n+ ext = Path(source).suffix\n+ try:\n+ pandoc_options = config_options[ext]\n+ except KeyError:\n+ self.logger.warn('Setting PANDOC_OPTIONS to [], because extension {} is not defined in PANDOC_OPTIONS: {}.'.format(ext, config_options))\n+ pandoc_options = []\n+ else:\n+ self.logger.warn('Setting PANDOC_OPTIONS to [], because PANDOC_OPTIONS is expected to be of type Union[List[str], Dict[str, List[str]]] but this is not: {}'.format(config_options))\n+ pandoc_options = []\n+ return pandoc_options\n+\n def compile(self, source, dest, is_two_file=True, post=None, lang=None):\n \"\"\"Compile the source file into HTML and save as dest.\"\"\"\n makedirs(os.path.dirname(dest))\n try:\n- subprocess.check_call(['pandoc', '-o', dest, source] + self.site.config['PANDOC_OPTIONS'])\n+ subprocess.check_call(['pandoc', '-o', dest, source] + self._get_pandoc_options(source))\n with open(dest, 'r', encoding='utf-8-sig') as inf:\n output, shortcode_deps = self.site.apply_shortcodes(inf.read())\n with open(dest, 'w', encoding='utf-8') as outf:\n", "issue": "dispatch PANDOC_OPTIONS base on input extensions\n<!--\r\nBefore creating an issue:\r\n* make sure you are using an up-to-date version of Nikola\r\n* search for existing issues that might be related\r\n\r\nDescribe your requested features as precisely as possible. -->\r\n\r\n# Summary\r\n\r\nCurrently, `COMPILERS` only has a key `pandoc`, and `PANDOC_OPTIONS` is set for all the extensions specified there. The proposed feature simply suggests to have configurable dispatch logic to allow multiple extensions calling pandoc with a different set of `PANDOC_OPTIONS`.\r\n\r\n# Example use case\r\n\r\nMy personal use for this would be to convert one of my site currently using `.md` and `.markdown` to mean a different target format. Both will be converted to HTML, but one of them is slideshow, another is normal \"article\"/post.\r\n\r\nAnother possible use case is, if one want to specify markdown extensions in pandoc, say `-t markdown+...-...`, then `COMPILERS['pandoc'] = ['.rst', '.md']` won't work with `PANDOC_OPTIONS = ['-t', 'markdown+...-...']` because that would be wrong input format for `.rst`.\r\n\r\n# Suggested implementation\r\n\r\nProbably the natural way is to promote\r\n\r\n```py\r\nPANDOC_OPTIONS: Union[list, Dict[str, list]]\r\n```\r\n\r\nwhere the keys are those in `COMPILERS['pandoc']`.\r\n\r\nIf it makes sense I can send a PR doing that. 
It seems that it only requires changes in <https://github.com/getnikola/nikola/blob/master/nikola/plugins/compile/pandoc.py#L52-L56>, is it true?\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2021 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Page compiler plugin for pandoc.\n\nYou will need, of course, to install pandoc\n\"\"\"\n\n\nimport io\nimport os\nimport subprocess\n\nfrom nikola.plugin_categories import PageCompiler\nfrom nikola.utils import req_missing, makedirs, write_metadata\n\n\nclass CompilePandoc(PageCompiler):\n \"\"\"Compile markups into HTML using pandoc.\"\"\"\n\n name = \"pandoc\"\n friendly_name = \"pandoc\"\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n self.config_dependencies = [str(site.config['PANDOC_OPTIONS'])]\n super().set_site(site)\n\n def compile(self, source, dest, is_two_file=True, post=None, lang=None):\n \"\"\"Compile the source file into HTML and save as dest.\"\"\"\n makedirs(os.path.dirname(dest))\n try:\n subprocess.check_call(['pandoc', '-o', dest, source] + self.site.config['PANDOC_OPTIONS'])\n with open(dest, 'r', encoding='utf-8-sig') as inf:\n output, shortcode_deps = self.site.apply_shortcodes(inf.read())\n with open(dest, 'w', encoding='utf-8') as outf:\n outf.write(output)\n if post is None:\n if shortcode_deps:\n self.logger.error(\n \"Cannot save dependencies for post {0} (post unknown)\",\n source)\n else:\n post._depfile[dest] += shortcode_deps\n except OSError as e:\n if e.strreror == 'No such file or directory':\n req_missing(['pandoc'], 'build this site (compile with pandoc)', python=False)\n\n def compile_string(self, data, source_path=None, is_two_file=True, post=None, lang=None):\n \"\"\"Compile into HTML strings.\"\"\"\n raise ValueError(\"Pandoc compiler does not support compile_string due to multiple output formats\")\n\n def create_post(self, path, **kw):\n \"\"\"Create a new post.\"\"\"\n content = kw.pop('content', None)\n onefile = kw.pop('onefile', False)\n # is_page is not used by create_post as of now.\n kw.pop('is_page', False)\n metadata = {}\n metadata.update(self.default_metadata)\n metadata.update(kw)\n makedirs(os.path.dirname(path))\n if not content.endswith('\\n'):\n content += '\\n'\n with io.open(path, \"w+\", encoding=\"utf8\") as fd:\n if onefile:\n fd.write(write_metadata(metadata, comment_wrap=True, site=self.site, compiler=self))\n fd.write(content)\n", "path": 
"nikola/plugins/compile/pandoc.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2021 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Page compiler plugin for pandoc.\n\nYou will need, of course, to install pandoc\n\"\"\"\n\n\nimport io\nimport os\nimport subprocess\nfrom typing import List\nfrom pathlib import Path\n\nfrom nikola.plugin_categories import PageCompiler\nfrom nikola.utils import req_missing, makedirs, write_metadata\n\n\nclass CompilePandoc(PageCompiler):\n \"\"\"Compile markups into HTML using pandoc.\"\"\"\n\n name = \"pandoc\"\n friendly_name = \"pandoc\"\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n self.config_dependencies = [str(site.config['PANDOC_OPTIONS'])]\n super().set_site(site)\n\n def _get_pandoc_options(self, source: str) -> List[str]:\n \"\"\"Obtain pandoc args from config depending on type and file extensions.\"\"\"\n # Union[List[str], Dict[str, List[str]]]\n config_options = self.site.config['PANDOC_OPTIONS']\n type_ = type(config_options)\n if type_ is list:\n pandoc_options = config_options\n elif type_ is dict:\n ext = Path(source).suffix\n try:\n pandoc_options = config_options[ext]\n except KeyError:\n self.logger.warn('Setting PANDOC_OPTIONS to [], because extension {} is not defined in PANDOC_OPTIONS: {}.'.format(ext, config_options))\n pandoc_options = []\n else:\n self.logger.warn('Setting PANDOC_OPTIONS to [], because PANDOC_OPTIONS is expected to be of type Union[List[str], Dict[str, List[str]]] but this is not: {}'.format(config_options))\n pandoc_options = []\n return pandoc_options\n\n def compile(self, source, dest, is_two_file=True, post=None, lang=None):\n \"\"\"Compile the source file into HTML and save as dest.\"\"\"\n makedirs(os.path.dirname(dest))\n try:\n subprocess.check_call(['pandoc', '-o', dest, source] + self._get_pandoc_options(source))\n with open(dest, 'r', encoding='utf-8-sig') as inf:\n output, shortcode_deps = self.site.apply_shortcodes(inf.read())\n with open(dest, 'w', encoding='utf-8') as outf:\n outf.write(output)\n if post is None:\n if shortcode_deps:\n self.logger.error(\n \"Cannot save dependencies for post {0} (post unknown)\",\n source)\n else:\n post._depfile[dest] += shortcode_deps\n except OSError as e:\n if e.strreror == 'No such file or directory':\n req_missing(['pandoc'], 'build this site (compile with pandoc)', python=False)\n\n def compile_string(self, data, source_path=None, is_two_file=True, 
post=None, lang=None):\n \"\"\"Compile into HTML strings.\"\"\"\n raise ValueError(\"Pandoc compiler does not support compile_string due to multiple output formats\")\n\n def create_post(self, path, **kw):\n \"\"\"Create a new post.\"\"\"\n content = kw.pop('content', None)\n onefile = kw.pop('onefile', False)\n # is_page is not used by create_post as of now.\n kw.pop('is_page', False)\n metadata = {}\n metadata.update(self.default_metadata)\n metadata.update(kw)\n makedirs(os.path.dirname(path))\n if not content.endswith('\\n'):\n content += '\\n'\n with io.open(path, \"w+\", encoding=\"utf8\") as fd:\n if onefile:\n fd.write(write_metadata(metadata, comment_wrap=True, site=self.site, compiler=self))\n fd.write(content)\n", "path": "nikola/plugins/compile/pandoc.py"}]}
| 1,597 | 526 |
gh_patches_debug_3348
|
rasdani/github-patches
|
git_diff
|
Mailu__Mailu-811
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dovecot service takes long to start
```
docker stack ps mailu
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
yh6u3jrckt32 mailu_redis.1 redis:alpine prod02.usrpro.com Running Running 4 minutes ago
otpt6os15sy3 mailu_smtp.1 mailu/postfix:master prod01.usrpro.com Running Running 4 minutes ago
artmzgamypoy mailu_webdav.1 mailu/radicale:master prod03.usrpro.com Running Running 4 minutes ago
x5ynw6h5ap3p mailu_imap.1 usrpro/dovecot:workaround-lucene prod03.usrpro.com Running Starting 4 minutes ago
n674xs649jlw mailu_admin.1 mailu/admin:master prod01.usrpro.com Running Running 4 minutes ago
xrmri5cqjzqn mailu_antivirus.1 mailu/clamav:master prod02.usrpro.com Running Running 2 minutes ago
koiuyxlj2cpt mailu_antispam.1 mailu/rspamd:master prod03.usrpro.com Running Running 4 minutes ago
hdpu4gq182ha mailu_webmail.1 mailu/rainloop:master prod01.usrpro.com Running Running 4 minutes ago
wgqkwnz6wyxp mailu_front.1 mailu/nginx:master prod01.usrpro.com Running Running 4 minutes ago
v2eznworzs0g mailu_front.2 mailu/nginx:master prod02.usrpro.com Running Running 4 minutes ago
nckm82d8aryd mailu_front.3 mailu/nginx:master prod03.usrpro.com Running Running 4 minutes ago
```
In `start.py`, most probably it is stuck on:
```
multiprocessing.Process(target=start_podop).start()
```
Have to confirm with writing some logging, like requested in issue #588 .
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/dovecot/start.py`
Content:
```
1 #!/usr/bin/python3
2
3 import jinja2
4 import os
5 import socket
6 import glob
7 import multiprocessing
8 import tenacity
9
10 from tenacity import retry
11 from podop import run_server
12
13
14 def start_podop():
15 os.setuid(8)
16 run_server(3 if "DEBUG" in os.environ else 0, "dovecot", "/tmp/podop.socket", [
17 ("quota", "url", "http://admin/internal/dovecot/§"),
18 ("auth", "url", "http://admin/internal/dovecot/§"),
19 ("sieve", "url", "http://admin/internal/dovecot/§"),
20 ])
21
22 convert = lambda src, dst: open(dst, "w").write(jinja2.Template(open(src).read()).render(**os.environ))
23
24 # Actual startup script
25 resolve = retry(socket.gethostbyname, stop=tenacity.stop_after_attempt(100), wait=tenacity.wait_random(min=2, max=5))
26 os.environ["FRONT_ADDRESS"] = resolve(os.environ.get("FRONT_ADDRESS", "front"))
27 os.environ["REDIS_ADDRESS"] = resolve(os.environ.get("REDIS_ADDRESS", "redis"))
28 if os.environ["WEBMAIL"] != "none":
29 os.environ["WEBMAIL_ADDRESS"] = resolve(os.environ.get("WEBMAIL_ADDRESS", "webmail"))
30
31 for dovecot_file in glob.glob("/conf/*.conf"):
32 convert(dovecot_file, os.path.join("/etc/dovecot", os.path.basename(dovecot_file)))
33
34 # Run Podop, then postfix
35 multiprocessing.Process(target=start_podop).start()
36 os.system("chown -R mail:mail /mail /var/lib/dovecot /conf")
37 os.execv("/usr/sbin/dovecot", ["dovecot", "-c", "/etc/dovecot/dovecot.conf", "-F"])
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/core/dovecot/start.py b/core/dovecot/start.py
--- a/core/dovecot/start.py
+++ b/core/dovecot/start.py
@@ -33,5 +33,6 @@
# Run Podop, then postfix
multiprocessing.Process(target=start_podop).start()
-os.system("chown -R mail:mail /mail /var/lib/dovecot /conf")
+os.system("chown mail:mail /mail")
+os.system("chown -R mail:mail /var/lib/dovecot /conf")
os.execv("/usr/sbin/dovecot", ["dovecot", "-c", "/etc/dovecot/dovecot.conf", "-F"])
|
{"golden_diff": "diff --git a/core/dovecot/start.py b/core/dovecot/start.py\n--- a/core/dovecot/start.py\n+++ b/core/dovecot/start.py\n@@ -33,5 +33,6 @@\n \n # Run Podop, then postfix\n multiprocessing.Process(target=start_podop).start()\n-os.system(\"chown -R mail:mail /mail /var/lib/dovecot /conf\")\n+os.system(\"chown mail:mail /mail\")\n+os.system(\"chown -R mail:mail /var/lib/dovecot /conf\")\n os.execv(\"/usr/sbin/dovecot\", [\"dovecot\", \"-c\", \"/etc/dovecot/dovecot.conf\", \"-F\"])\n", "issue": "Dovecot service takes long to start\n```\r\ndocker stack ps mailu\r\nID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS\r\nyh6u3jrckt32 mailu_redis.1 redis:alpine prod02.usrpro.com Running Running 4 minutes ago \r\notpt6os15sy3 mailu_smtp.1 mailu/postfix:master prod01.usrpro.com Running Running 4 minutes ago \r\nartmzgamypoy mailu_webdav.1 mailu/radicale:master prod03.usrpro.com Running Running 4 minutes ago \r\nx5ynw6h5ap3p mailu_imap.1 usrpro/dovecot:workaround-lucene prod03.usrpro.com Running Starting 4 minutes ago \r\nn674xs649jlw mailu_admin.1 mailu/admin:master prod01.usrpro.com Running Running 4 minutes ago \r\nxrmri5cqjzqn mailu_antivirus.1 mailu/clamav:master prod02.usrpro.com Running Running 2 minutes ago \r\nkoiuyxlj2cpt mailu_antispam.1 mailu/rspamd:master prod03.usrpro.com Running Running 4 minutes ago \r\nhdpu4gq182ha mailu_webmail.1 mailu/rainloop:master prod01.usrpro.com Running Running 4 minutes ago \r\nwgqkwnz6wyxp mailu_front.1 mailu/nginx:master prod01.usrpro.com Running Running 4 minutes ago \r\nv2eznworzs0g mailu_front.2 mailu/nginx:master prod02.usrpro.com Running Running 4 minutes ago \r\nnckm82d8aryd mailu_front.3 mailu/nginx:master prod03.usrpro.com Running Running 4 minutes ago \r\n```\r\nIn `start.py`, most probably it is stuck on:\r\n```\r\nmultiprocessing.Process(target=start_podop).start()\r\n```\r\n\r\nHave to confirm with writing some logging, like requested in issue #588 .\n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport jinja2\nimport os\nimport socket\nimport glob\nimport multiprocessing\nimport tenacity\n\nfrom tenacity import retry\nfrom podop import run_server\n\n\ndef start_podop():\n os.setuid(8)\n run_server(3 if \"DEBUG\" in os.environ else 0, \"dovecot\", \"/tmp/podop.socket\", [\n\t\t(\"quota\", \"url\", \"http://admin/internal/dovecot/\u00a7\"),\n\t\t(\"auth\", \"url\", \"http://admin/internal/dovecot/\u00a7\"),\n\t\t(\"sieve\", \"url\", \"http://admin/internal/dovecot/\u00a7\"),\n ])\n\nconvert = lambda src, dst: open(dst, \"w\").write(jinja2.Template(open(src).read()).render(**os.environ))\n\n# Actual startup script\nresolve = retry(socket.gethostbyname, stop=tenacity.stop_after_attempt(100), wait=tenacity.wait_random(min=2, max=5))\nos.environ[\"FRONT_ADDRESS\"] = resolve(os.environ.get(\"FRONT_ADDRESS\", \"front\"))\nos.environ[\"REDIS_ADDRESS\"] = resolve(os.environ.get(\"REDIS_ADDRESS\", \"redis\"))\nif os.environ[\"WEBMAIL\"] != \"none\":\n os.environ[\"WEBMAIL_ADDRESS\"] = resolve(os.environ.get(\"WEBMAIL_ADDRESS\", \"webmail\"))\n\nfor dovecot_file in glob.glob(\"/conf/*.conf\"):\n convert(dovecot_file, os.path.join(\"/etc/dovecot\", os.path.basename(dovecot_file)))\n\n# Run Podop, then postfix\nmultiprocessing.Process(target=start_podop).start()\nos.system(\"chown -R mail:mail /mail /var/lib/dovecot /conf\")\nos.execv(\"/usr/sbin/dovecot\", [\"dovecot\", \"-c\", \"/etc/dovecot/dovecot.conf\", \"-F\"])\n", "path": "core/dovecot/start.py"}], "after_files": [{"content": "#!/usr/bin/python3\n\nimport jinja2\nimport 
os\nimport socket\nimport glob\nimport multiprocessing\nimport tenacity\n\nfrom tenacity import retry\nfrom podop import run_server\n\n\ndef start_podop():\n os.setuid(8)\n run_server(3 if \"DEBUG\" in os.environ else 0, \"dovecot\", \"/tmp/podop.socket\", [\n\t\t(\"quota\", \"url\", \"http://admin/internal/dovecot/\u00a7\"),\n\t\t(\"auth\", \"url\", \"http://admin/internal/dovecot/\u00a7\"),\n\t\t(\"sieve\", \"url\", \"http://admin/internal/dovecot/\u00a7\"),\n ])\n\nconvert = lambda src, dst: open(dst, \"w\").write(jinja2.Template(open(src).read()).render(**os.environ))\n\n# Actual startup script\nresolve = retry(socket.gethostbyname, stop=tenacity.stop_after_attempt(100), wait=tenacity.wait_random(min=2, max=5))\nos.environ[\"FRONT_ADDRESS\"] = resolve(os.environ.get(\"FRONT_ADDRESS\", \"front\"))\nos.environ[\"REDIS_ADDRESS\"] = resolve(os.environ.get(\"REDIS_ADDRESS\", \"redis\"))\nif os.environ[\"WEBMAIL\"] != \"none\":\n os.environ[\"WEBMAIL_ADDRESS\"] = resolve(os.environ.get(\"WEBMAIL_ADDRESS\", \"webmail\"))\n\nfor dovecot_file in glob.glob(\"/conf/*.conf\"):\n convert(dovecot_file, os.path.join(\"/etc/dovecot\", os.path.basename(dovecot_file)))\n\n# Run Podop, then postfix\nmultiprocessing.Process(target=start_podop).start()\nos.system(\"chown mail:mail /mail\")\nos.system(\"chown -R mail:mail /var/lib/dovecot /conf\")\nos.execv(\"/usr/sbin/dovecot\", [\"dovecot\", \"-c\", \"/etc/dovecot/dovecot.conf\", \"-F\"])\n", "path": "core/dovecot/start.py"}]}
| 1,236 | 150 |
gh_patches_debug_6841
|
rasdani/github-patches
|
git_diff
|
activeloopai__deeplake-1738
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[FEATURE]Do not overwrite python logging configuration
## 🚨🚨 Feature Request
Be able to use both `logging` and `hub` in the same python script
### Current behavior
I found the following code has undesired behavior: when adding the line `import hub`, the logging configuration will be overwritten.
```python
import logging
import hub # this will overwrite the logging config
logging.basicConfig(filename = "sample.log", level = logging.DEBUG,
format = '%(asctime)s: %(filename)s: %(levelname)s: %(message)s')
logger = logging.getLogger(__name__)
logger.debug("this is a debug message")
logger.info("this is an info message")
logger.warning("this is a warning message")
logger.error("this is an error message")
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hub/client/log.py`
Content:
```
1 """
2 License:
3 This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0.
4 If a copy of the MPL was not distributed with this file, You can obtain one at https://mozilla.org/MPL/2.0/.
5 """
6
7 import logging
8 import sys
9
10 logger = logging.getLogger("hub")
11
12
13 def configure_logger(debug=0):
14 log_level = logging.DEBUG if debug == 1 else logging.INFO
15 logging.basicConfig(format="%(message)s", level=log_level, stream=sys.stdout)
16
17
18 configure_logger(0)
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/hub/client/log.py b/hub/client/log.py
--- a/hub/client/log.py
+++ b/hub/client/log.py
@@ -12,7 +12,17 @@
def configure_logger(debug=0):
log_level = logging.DEBUG if debug == 1 else logging.INFO
- logging.basicConfig(format="%(message)s", level=log_level, stream=sys.stdout)
+ logger.setLevel(log_level)
+
+ stream_handler = logging.StreamHandler(stream=sys.stdout)
+ stream_handler.setLevel(log_level)
+
+ formatter = logging.Formatter("%(message)s")
+
+ stream_handler.setFormatter(formatter)
+
+ logger.addHandler(stream_handler)
+ logger.propagate = False
configure_logger(0)
|
{"golden_diff": "diff --git a/hub/client/log.py b/hub/client/log.py\n--- a/hub/client/log.py\n+++ b/hub/client/log.py\n@@ -12,7 +12,17 @@\n \n def configure_logger(debug=0):\n log_level = logging.DEBUG if debug == 1 else logging.INFO\n- logging.basicConfig(format=\"%(message)s\", level=log_level, stream=sys.stdout)\n+ logger.setLevel(log_level)\n+\n+ stream_handler = logging.StreamHandler(stream=sys.stdout)\n+ stream_handler.setLevel(log_level)\n+\n+ formatter = logging.Formatter(\"%(message)s\")\n+\n+ stream_handler.setFormatter(formatter)\n+\n+ logger.addHandler(stream_handler)\n+ logger.propagate = False\n \n \n configure_logger(0)\n", "issue": "[FEATURE]Do not overwrite python logging configuration\n## \ud83d\udea8\ud83d\udea8 Feature Request\r\n\r\nBe able to use both `logging` and `hub` in the same python script\r\n\r\n### Current behavior\r\n\r\nI found the following code has undesired behavior: when adding the line `import hub`, the logging configuration will be overwritten.\r\n\r\n```python\r\nimport logging\r\nimport hub # this will overwrite the logging config\r\n\r\nlogging.basicConfig(filename = \"sample.log\", level = logging.DEBUG,\r\n format = '%(asctime)s: %(filename)s: %(levelname)s: %(message)s')\r\n\r\nlogger = logging.getLogger(__name__)\r\n\r\nlogger.debug(\"this is a debug message\")\r\nlogger.info(\"this is an info message\")\r\nlogger.warning(\"this is a warning message\")\r\nlogger.error(\"this is an error message\")\r\n\r\n```\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nLicense:\nThis Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0.\nIf a copy of the MPL was not distributed with this file, You can obtain one at https://mozilla.org/MPL/2.0/.\n\"\"\"\n\nimport logging\nimport sys\n\nlogger = logging.getLogger(\"hub\")\n\n\ndef configure_logger(debug=0):\n log_level = logging.DEBUG if debug == 1 else logging.INFO\n logging.basicConfig(format=\"%(message)s\", level=log_level, stream=sys.stdout)\n\n\nconfigure_logger(0)\n", "path": "hub/client/log.py"}], "after_files": [{"content": "\"\"\"\nLicense:\nThis Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0.\nIf a copy of the MPL was not distributed with this file, You can obtain one at https://mozilla.org/MPL/2.0/.\n\"\"\"\n\nimport logging\nimport sys\n\nlogger = logging.getLogger(\"hub\")\n\n\ndef configure_logger(debug=0):\n log_level = logging.DEBUG if debug == 1 else logging.INFO\n logger.setLevel(log_level)\n\n stream_handler = logging.StreamHandler(stream=sys.stdout)\n stream_handler.setLevel(log_level)\n\n formatter = logging.Formatter(\"%(message)s\")\n\n stream_handler.setFormatter(formatter)\n\n logger.addHandler(stream_handler)\n logger.propagate = False\n\n\nconfigure_logger(0)\n", "path": "hub/client/log.py"}]}
| 567 | 157 |
gh_patches_debug_24390
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-4188
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reclaim space for repository fails with Cannot delete some instances of model 'Artifact' because they are referenced through protected foreign keys: 'ContentArtifact.artifact'."
**Version**
3.16, but probably all versions
**Describe the bug**
- Reclaim space for repository fails with the following error.
~~~
Task paused with error: "("Cannot delete some instances of model 'Artifact' because they are referenced through protected foreign keys: 'ContentArtifact.artifact'.", {<ContentArtifact: pk=452959ad-c045-4e85-bf9f-6651ba37f57d>})"
~~~
**To Reproduce**
See BZ
**Additional context**
https://bugzilla.redhat.com/show_bug.cgi?id=2169322
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/tasks/reclaim_space.py`
Content:
```
1 from pulpcore.app.models import (
2 Artifact,
3 Content,
4 ContentArtifact,
5 ProgressReport,
6 PublishedMetadata,
7 Repository,
8 RepositoryVersion,
9 )
10 from pulpcore.app.util import get_domain
11
12
13 def reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):
14 """
15 This task frees-up disk space by removing Artifact files from the filesystem for Content
16 exclusive to the list of provided repos.
17
18 Note: content marked as `proctected` will be excluded from the reclaim disk space.
19
20 Kwargs:
21 repo_pks (list): A list of repo pks the disk reclaim space is performed on.
22 keeplist_rv_pks (list): A list of repo version pks that will be excluded from the reclaim
23 disk space.
24 force (bool): If True, uploaded content will be taken into account.
25
26 """
27 reclaimed_repos = Repository.objects.filter(pk__in=repo_pks)
28 for repo in reclaimed_repos:
29 repo.invalidate_cache(everything=True)
30
31 domain = get_domain()
32 rest_of_repos = Repository.objects.filter(pulp_domain=domain).exclude(pk__in=repo_pks)
33 c_keep_qs = Content.objects.filter(repositories__in=rest_of_repos)
34 c_reclaim_qs = Content.objects.filter(repositories__in=repo_pks)
35 c_reclaim_qs = c_reclaim_qs.exclude(
36 pk__in=c_keep_qs, pulp_type=PublishedMetadata.get_pulp_type()
37 )
38
39 if keeplist_rv_pks:
40 rv_qs = RepositoryVersion.objects.filter(pk__in=keeplist_rv_pks)
41 rv_content = Content.objects.none()
42 for rv in rv_qs.iterator():
43 rv_content |= rv.content
44 c_reclaim_qs = c_reclaim_qs.exclude(pk__in=rv_content)
45
46 content_distinct = c_reclaim_qs.distinct("pulp_type")
47 unprotected = []
48 for content in content_distinct:
49 if not content.cast().PROTECTED_FROM_RECLAIM:
50 unprotected.append(content.pulp_type)
51
52 ca_qs = ContentArtifact.objects.select_related("content", "artifact").filter(
53 content__in=c_reclaim_qs.values("pk"), artifact__isnull=False
54 )
55 if not force:
56 ca_qs = ca_qs.filter(remoteartifact__isnull=False)
57 artifact_pks = set()
58 ca_to_update = []
59 for ca in ca_qs.iterator():
60 if ca.content.pulp_type in unprotected:
61 artifact_pks.add(ca.artifact.pk)
62 ca.artifact = None
63 ca_to_update.append(ca)
64
65 ContentArtifact.objects.bulk_update(objs=ca_to_update, fields=["artifact"], batch_size=1000)
66 artifacts_to_delete = Artifact.objects.filter(pk__in=artifact_pks)
67 progress_bar = ProgressReport(
68 message="Reclaim disk space",
69 total=artifacts_to_delete.count(),
70 code="reclaim-space.artifact",
71 done=0,
72 state="running",
73 )
74 progress_bar.save()
75
76 counter = 0
77 interval = 100
78 for artifact in artifacts_to_delete.iterator():
79 # we need to manually call delete() because it cleans up the file on the filesystem
80 artifact.delete()
81 progress_bar.done += 1
82 counter += 1
83
84 if counter >= interval:
85 progress_bar.save()
86 counter = 0
87
88 progress_bar.state = "completed"
89 progress_bar.save()
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pulpcore/app/tasks/reclaim_space.py b/pulpcore/app/tasks/reclaim_space.py
--- a/pulpcore/app/tasks/reclaim_space.py
+++ b/pulpcore/app/tasks/reclaim_space.py
@@ -1,3 +1,7 @@
+from logging import getLogger
+
+from django.db.models.deletion import ProtectedError
+
from pulpcore.app.models import (
Artifact,
Content,
@@ -9,6 +13,8 @@
)
from pulpcore.app.util import get_domain
+log = getLogger(__name__)
+
def reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):
"""
@@ -76,10 +82,16 @@
counter = 0
interval = 100
for artifact in artifacts_to_delete.iterator():
- # we need to manually call delete() because it cleans up the file on the filesystem
- artifact.delete()
- progress_bar.done += 1
- counter += 1
+ try:
+ # we need to manually call delete() because it cleans up the file on the filesystem
+ artifact.delete()
+ except ProtectedError as e:
+ # Rarely artifact could be shared between to different content units.
+ # Just log and skip the artifact deletion in this case
+ log.info(e)
+ else:
+ progress_bar.done += 1
+ counter += 1
if counter >= interval:
progress_bar.save()
|
{"golden_diff": "diff --git a/pulpcore/app/tasks/reclaim_space.py b/pulpcore/app/tasks/reclaim_space.py\n--- a/pulpcore/app/tasks/reclaim_space.py\n+++ b/pulpcore/app/tasks/reclaim_space.py\n@@ -1,3 +1,7 @@\n+from logging import getLogger\n+\n+from django.db.models.deletion import ProtectedError\n+\n from pulpcore.app.models import (\n Artifact,\n Content,\n@@ -9,6 +13,8 @@\n )\n from pulpcore.app.util import get_domain\n \n+log = getLogger(__name__)\n+\n \n def reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):\n \"\"\"\n@@ -76,10 +82,16 @@\n counter = 0\n interval = 100\n for artifact in artifacts_to_delete.iterator():\n- # we need to manually call delete() because it cleans up the file on the filesystem\n- artifact.delete()\n- progress_bar.done += 1\n- counter += 1\n+ try:\n+ # we need to manually call delete() because it cleans up the file on the filesystem\n+ artifact.delete()\n+ except ProtectedError as e:\n+ # Rarely artifact could be shared between to different content units.\n+ # Just log and skip the artifact deletion in this case\n+ log.info(e)\n+ else:\n+ progress_bar.done += 1\n+ counter += 1\n \n if counter >= interval:\n progress_bar.save()\n", "issue": "Reclaim space for repository fails with Cannot delete some instances of model 'Artifact' because they are referenced through protected foreign keys: 'ContentArtifact.artifact'.\"\n**Version**\r\n3.16, but probably all versions\r\n\r\n**Describe the bug**\r\n\r\n- Reclaim space for repository fails with the following error.\r\n\r\n ~~~\r\n Task paused with error: \"(\"Cannot delete some instances of model 'Artifact' because they are referenced through protected foreign keys: 'ContentArtifact.artifact'.\", {<ContentArtifact: pk=452959ad-c045-4e85-bf9f-6651ba37f57d>})\"\r\n ~~~\r\n\r\n**To Reproduce**\r\nSee BZ\r\n\r\n**Additional context**\r\nhttps://bugzilla.redhat.com/show_bug.cgi?id=2169322\r\n\n", "before_files": [{"content": "from pulpcore.app.models import (\n Artifact,\n Content,\n ContentArtifact,\n ProgressReport,\n PublishedMetadata,\n Repository,\n RepositoryVersion,\n)\nfrom pulpcore.app.util import get_domain\n\n\ndef reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):\n \"\"\"\n This task frees-up disk space by removing Artifact files from the filesystem for Content\n exclusive to the list of provided repos.\n\n Note: content marked as `proctected` will be excluded from the reclaim disk space.\n\n Kwargs:\n repo_pks (list): A list of repo pks the disk reclaim space is performed on.\n keeplist_rv_pks (list): A list of repo version pks that will be excluded from the reclaim\n disk space.\n force (bool): If True, uploaded content will be taken into account.\n\n \"\"\"\n reclaimed_repos = Repository.objects.filter(pk__in=repo_pks)\n for repo in reclaimed_repos:\n repo.invalidate_cache(everything=True)\n\n domain = get_domain()\n rest_of_repos = Repository.objects.filter(pulp_domain=domain).exclude(pk__in=repo_pks)\n c_keep_qs = Content.objects.filter(repositories__in=rest_of_repos)\n c_reclaim_qs = Content.objects.filter(repositories__in=repo_pks)\n c_reclaim_qs = c_reclaim_qs.exclude(\n pk__in=c_keep_qs, pulp_type=PublishedMetadata.get_pulp_type()\n )\n\n if keeplist_rv_pks:\n rv_qs = RepositoryVersion.objects.filter(pk__in=keeplist_rv_pks)\n rv_content = Content.objects.none()\n for rv in rv_qs.iterator():\n rv_content |= rv.content\n c_reclaim_qs = c_reclaim_qs.exclude(pk__in=rv_content)\n\n content_distinct = c_reclaim_qs.distinct(\"pulp_type\")\n unprotected = []\n for content in 
content_distinct:\n if not content.cast().PROTECTED_FROM_RECLAIM:\n unprotected.append(content.pulp_type)\n\n ca_qs = ContentArtifact.objects.select_related(\"content\", \"artifact\").filter(\n content__in=c_reclaim_qs.values(\"pk\"), artifact__isnull=False\n )\n if not force:\n ca_qs = ca_qs.filter(remoteartifact__isnull=False)\n artifact_pks = set()\n ca_to_update = []\n for ca in ca_qs.iterator():\n if ca.content.pulp_type in unprotected:\n artifact_pks.add(ca.artifact.pk)\n ca.artifact = None\n ca_to_update.append(ca)\n\n ContentArtifact.objects.bulk_update(objs=ca_to_update, fields=[\"artifact\"], batch_size=1000)\n artifacts_to_delete = Artifact.objects.filter(pk__in=artifact_pks)\n progress_bar = ProgressReport(\n message=\"Reclaim disk space\",\n total=artifacts_to_delete.count(),\n code=\"reclaim-space.artifact\",\n done=0,\n state=\"running\",\n )\n progress_bar.save()\n\n counter = 0\n interval = 100\n for artifact in artifacts_to_delete.iterator():\n # we need to manually call delete() because it cleans up the file on the filesystem\n artifact.delete()\n progress_bar.done += 1\n counter += 1\n\n if counter >= interval:\n progress_bar.save()\n counter = 0\n\n progress_bar.state = \"completed\"\n progress_bar.save()\n", "path": "pulpcore/app/tasks/reclaim_space.py"}], "after_files": [{"content": "from logging import getLogger\n\nfrom django.db.models.deletion import ProtectedError\n\nfrom pulpcore.app.models import (\n Artifact,\n Content,\n ContentArtifact,\n ProgressReport,\n PublishedMetadata,\n Repository,\n RepositoryVersion,\n)\nfrom pulpcore.app.util import get_domain\n\nlog = getLogger(__name__)\n\n\ndef reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):\n \"\"\"\n This task frees-up disk space by removing Artifact files from the filesystem for Content\n exclusive to the list of provided repos.\n\n Note: content marked as `proctected` will be excluded from the reclaim disk space.\n\n Kwargs:\n repo_pks (list): A list of repo pks the disk reclaim space is performed on.\n keeplist_rv_pks (list): A list of repo version pks that will be excluded from the reclaim\n disk space.\n force (bool): If True, uploaded content will be taken into account.\n\n \"\"\"\n reclaimed_repos = Repository.objects.filter(pk__in=repo_pks)\n for repo in reclaimed_repos:\n repo.invalidate_cache(everything=True)\n\n domain = get_domain()\n rest_of_repos = Repository.objects.filter(pulp_domain=domain).exclude(pk__in=repo_pks)\n c_keep_qs = Content.objects.filter(repositories__in=rest_of_repos)\n c_reclaim_qs = Content.objects.filter(repositories__in=repo_pks)\n c_reclaim_qs = c_reclaim_qs.exclude(\n pk__in=c_keep_qs, pulp_type=PublishedMetadata.get_pulp_type()\n )\n\n if keeplist_rv_pks:\n rv_qs = RepositoryVersion.objects.filter(pk__in=keeplist_rv_pks)\n rv_content = Content.objects.none()\n for rv in rv_qs.iterator():\n rv_content |= rv.content\n c_reclaim_qs = c_reclaim_qs.exclude(pk__in=rv_content)\n\n content_distinct = c_reclaim_qs.distinct(\"pulp_type\")\n unprotected = []\n for content in content_distinct:\n if not content.cast().PROTECTED_FROM_RECLAIM:\n unprotected.append(content.pulp_type)\n\n ca_qs = ContentArtifact.objects.select_related(\"content\", \"artifact\").filter(\n content__in=c_reclaim_qs.values(\"pk\"), artifact__isnull=False\n )\n if not force:\n ca_qs = ca_qs.filter(remoteartifact__isnull=False)\n artifact_pks = set()\n ca_to_update = []\n for ca in ca_qs.iterator():\n if ca.content.pulp_type in unprotected:\n artifact_pks.add(ca.artifact.pk)\n ca.artifact = None\n 
ca_to_update.append(ca)\n\n ContentArtifact.objects.bulk_update(objs=ca_to_update, fields=[\"artifact\"], batch_size=1000)\n artifacts_to_delete = Artifact.objects.filter(pk__in=artifact_pks)\n progress_bar = ProgressReport(\n message=\"Reclaim disk space\",\n total=artifacts_to_delete.count(),\n code=\"reclaim-space.artifact\",\n done=0,\n state=\"running\",\n )\n progress_bar.save()\n\n counter = 0\n interval = 100\n for artifact in artifacts_to_delete.iterator():\n try:\n # we need to manually call delete() because it cleans up the file on the filesystem\n artifact.delete()\n except ProtectedError as e:\n # Rarely artifact could be shared between to different content units.\n # Just log and skip the artifact deletion in this case\n log.info(e)\n else:\n progress_bar.done += 1\n counter += 1\n\n if counter >= interval:\n progress_bar.save()\n counter = 0\n\n progress_bar.state = \"completed\"\n progress_bar.save()\n", "path": "pulpcore/app/tasks/reclaim_space.py"}]}
| 1,349 | 322 |
gh_patches_debug_53978
|
rasdani/github-patches
|
git_diff
|
scikit-hep__pyhf-1091
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add CodeMeta JSON-LD Context File
# Description
As part of an IRIS-HEP initiative to improve the citation quality/ease for all software it has been recommended by @danielskatz that projects:
- Track authors/contributors to software as it is developed
- Create a citation metadata file in the software repo and keep it up to date
- Link Github and Zenodo to get DOIs for major releases
- In the README and docs clearly says how to cite the software
`pyhf` already does all of this, as:
- [x] There is an [`AUTHORS` file](https://github.com/scikit-hep/pyhf/blob/31d34a0edd91593d75181cc3ad71563a52600793/AUTHORS) as well as a [listing of the authors in the docs](https://scikit-hep.org/pyhf/index.html#authors)
- [x] There is a [`.zenodo.json`](https://github.com/scikit-hep/pyhf/blob/2b0a1e2ba41a2aa1233511524e3aa78e696ead34/.zenodo.json) that formats Zenodo with the correct metadata and adds the ORCID of the authors
- [x] There is a Zenodo DOI that is kept in sync with our release schedule (almost) fully automatically (almost as it requires 1 button click). [](https://doi.org/10.5281/zenodo.1169739)
- [x] There is a Citation section in both the [`README`](https://github.com/scikit-hep/pyhf/blob/2b0a1e2ba41a2aa1233511524e3aa78e696ead34/README.rst#citation) and the [docs webpage](https://scikit-hep.org/pyhf/citations.html) with the preferred citation
It has been pointed out though that we should also look into adding a [CodeMeta Project JSON-LD Context File](https://codemeta.github.io/jsonld/) as with it
> zenodo will read [it] and will be basically the same as [schema.org](http://schema.org/) so that this will help google index software, and tools be able to use the metadata file in [softwareheritage.org](http://softwareheritage.org/) to automatically build a citation for a particular commit.
(cc @gordonwatts as this might be of interest)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3 extras_require = {
4 'shellcomplete': ['click_completion'],
5 'tensorflow': [
6 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major
7 'tensorflow-probability~=0.10.0',
8 ],
9 'torch': ['torch~=1.2'],
10 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],
11 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes
12 'minuit': ['iminuit~=1.4.3'], # v1.5.0 breaks pyhf for 32b TensorFlow and PyTorch
13 }
14 extras_require['backends'] = sorted(
15 set(
16 extras_require['tensorflow']
17 + extras_require['torch']
18 + extras_require['jax']
19 + extras_require['minuit']
20 )
21 )
22 extras_require['contrib'] = sorted(set(['matplotlib', 'requests']))
23 extras_require['lint'] = sorted(set(['pyflakes', 'black']))
24
25 extras_require['test'] = sorted(
26 set(
27 extras_require['backends']
28 + extras_require['xmlio']
29 + extras_require['contrib']
30 + extras_require['shellcomplete']
31 + [
32 'pytest~=6.0',
33 'pytest-cov>=2.5.1',
34 'pytest-mock',
35 'pytest-benchmark[histogram]',
36 'pytest-console-scripts',
37 'pytest-mpl',
38 'pydocstyle',
39 'coverage>=4.0', # coveralls
40 'papermill~=2.0',
41 'nteract-scrapbook~=0.2',
42 'jupyter',
43 'uproot~=3.3',
44 'graphviz',
45 'jsonpatch',
46 ]
47 )
48 )
49 extras_require['docs'] = sorted(
50 set(
51 [
52 'sphinx>=3.1.2',
53 'sphinxcontrib-bibtex',
54 'sphinx-click',
55 'sphinx_rtd_theme',
56 'nbsphinx',
57 'ipywidgets',
58 'sphinx-issues',
59 'sphinx-copybutton>0.2.9',
60 ]
61 )
62 )
63 extras_require['develop'] = sorted(
64 set(
65 extras_require['docs']
66 + extras_require['lint']
67 + extras_require['test']
68 + ['nbdime', 'bump2version', 'ipython', 'pre-commit', 'check-manifest', 'twine']
69 )
70 )
71 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
72
73
74 setup(
75 extras_require=extras_require,
76 use_scm_version=lambda: {'local_scheme': lambda version: ''},
77 )
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -65,7 +65,15 @@
extras_require['docs']
+ extras_require['lint']
+ extras_require['test']
- + ['nbdime', 'bump2version', 'ipython', 'pre-commit', 'check-manifest', 'twine']
+ + [
+ 'nbdime',
+ 'bump2version',
+ 'ipython',
+ 'pre-commit',
+ 'check-manifest',
+ 'codemetapy>=0.3.4',
+ 'twine',
+ ]
)
)
extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -65,7 +65,15 @@\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n- + ['nbdime', 'bump2version', 'ipython', 'pre-commit', 'check-manifest', 'twine']\n+ + [\n+ 'nbdime',\n+ 'bump2version',\n+ 'ipython',\n+ 'pre-commit',\n+ 'check-manifest',\n+ 'codemetapy>=0.3.4',\n+ 'twine',\n+ ]\n )\n )\n extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n", "issue": "Add CodeMeta JSON-LD Context File\n# Description\r\n\r\nAs part of an IRIS-HEP initiative to improve the citation quality/ease for all software it has been recommended by @danielskatz that projects:\r\n- Track authors/contributors to software as it is developed\r\n- Create a citation metadata file in the software repo and keep it up to date\r\n- Link Github and Zenodo to get DOIs for major releases\r\n- In the README and docs clearly says how to cite the software\r\n\r\n`pyhf` already does all of this, as:\r\n- [x] There is an [`AUTHORS` file](https://github.com/scikit-hep/pyhf/blob/31d34a0edd91593d75181cc3ad71563a52600793/AUTHORS) as well as a [listing of the authors in the docs](https://scikit-hep.org/pyhf/index.html#authors)\r\n- [x] There is a [`.zenodo.json`](https://github.com/scikit-hep/pyhf/blob/2b0a1e2ba41a2aa1233511524e3aa78e696ead34/.zenodo.json) that formats Zenodo with the correct metadata and adds the ORCID of the authors\r\n- [x] There is a Zenodo DOI that is kept in sync with our release schedule (almost) fully automatically (almost as it requires 1 button click). [](https://doi.org/10.5281/zenodo.1169739)\r\n- [x] There is a Citation section in both the [`README`](https://github.com/scikit-hep/pyhf/blob/2b0a1e2ba41a2aa1233511524e3aa78e696ead34/README.rst#citation) and the [docs webpage](https://scikit-hep.org/pyhf/citations.html) with the preferred citation\r\n\r\nIt has been pointed out though that we should also look into adding a [CodeMeta Project JSON-LD Context File](https://codemeta.github.io/jsonld/) as with it\r\n\r\n> zenodo will read [it] and will be basically the same as [schema.org](http://schema.org/) so that this will help google index software, and tools be able to use the metadata file in [softwareheritage.org](http://softwareheritage.org/) to automatically build a citation for a particular commit.\r\n\r\n(cc @gordonwatts as this might be of interest)\n", "before_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit~=1.4.3'], # v1.5.0 breaks pyhf for 32b TensorFlow and PyTorch\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib', 'requests']))\nextras_require['lint'] = sorted(set(['pyflakes', 'black']))\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # 
coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'uproot~=3.3',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + ['nbdime', 'bump2version', 'ipython', 'pre-commit', 'check-manifest', 'twine']\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit~=1.4.3'], # v1.5.0 breaks pyhf for 32b TensorFlow and PyTorch\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib', 'requests']))\nextras_require['lint'] = sorted(set(['pyflakes', 'black']))\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'uproot~=3.3',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + [\n 'nbdime',\n 'bump2version',\n 'ipython',\n 'pre-commit',\n 'check-manifest',\n 'codemetapy>=0.3.4',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]}
| 1,591 | 163 |
gh_patches_debug_9416
|
rasdani/github-patches
|
git_diff
|
searxng__searxng-1686
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: bing videos engine
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
Repository: https://github.com/searxng/searxng
Branch: master
Version: 2022.08.14-649d1a14
<!-- Check if these values are correct -->
**How did you install SearXNG?**
searxng-docker
**What happened?**
bing video search crashed
**How To Reproduce**
search for anything
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Screenshots & Logs**
<!-- If applicable, add screenshots, logs to help explain your problem. -->
**Additional context**
<!-- Add any other context about the problem here. -->
**Technical report**
Error
* Error: IndexError
* Percentage: 100
* Parameters: `()`
* File name: `searx/engines/bing_videos.py:84`
* Function: `response`
* Code: `metadata = loads(result.xpath('.//div[@class="vrhdata"]/@vrhm')[0])`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/bing_videos.py`
Content:
```
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 """Bing (Videos)
4
5 """
6
7 from json import loads
8 from urllib.parse import urlencode
9
10 from lxml import html
11
12 from searx.utils import match_language
13 from searx.engines.bing import language_aliases
14
15 from searx.engines.bing import ( # pylint: disable=unused-import
16 _fetch_supported_languages,
17 supported_languages_url,
18 )
19
20 about = {
21 "website": 'https://www.bing.com/videos',
22 "wikidata_id": 'Q4914152',
23 "official_api_documentation": 'https://www.microsoft.com/en-us/bing/apis/bing-video-search-api',
24 "use_official_api": False,
25 "require_api_key": False,
26 "results": 'HTML',
27 }
28
29 categories = ['videos', 'web']
30 paging = True
31 safesearch = True
32 time_range_support = True
33 send_accept_language_header = True
34 number_of_results = 28
35
36 base_url = 'https://www.bing.com/'
37 search_string = (
38 # fmt: off
39 'videos/search'
40 '?{query}'
41 '&count={count}'
42 '&first={first}'
43 '&scope=video'
44 '&FORM=QBLH'
45 # fmt: on
46 )
47 time_range_string = '&qft=+filterui:videoage-lt{interval}'
48 time_range_dict = {'day': '1440', 'week': '10080', 'month': '43200', 'year': '525600'}
49
50 # safesearch definitions
51 safesearch_types = {2: 'STRICT', 1: 'DEMOTE', 0: 'OFF'}
52
53
54 # do search-request
55 def request(query, params):
56 offset = ((params['pageno'] - 1) * number_of_results) + 1
57
58 search_path = search_string.format(query=urlencode({'q': query}), count=number_of_results, first=offset)
59
60 # safesearch cookie
61 params['cookies']['SRCHHPGUSR'] = 'ADLT=' + safesearch_types.get(params['safesearch'], 'DEMOTE')
62
63 # language cookie
64 language = match_language(params['language'], supported_languages, language_aliases).lower()
65 params['cookies']['_EDGE_S'] = 'mkt=' + language + '&F=1'
66
67 # query and paging
68 params['url'] = base_url + search_path
69
70 # time range
71 if params['time_range'] in time_range_dict:
72 params['url'] += time_range_string.format(interval=time_range_dict[params['time_range']])
73
74 return params
75
76
77 # get response from search-request
78 def response(resp):
79 results = []
80
81 dom = html.fromstring(resp.text)
82
83 for result in dom.xpath('//div[@class="dg_u"]'):
84 metadata = loads(result.xpath('.//div[@class="vrhdata"]/@vrhm')[0])
85 info = ' - '.join(result.xpath('.//div[@class="mc_vtvc_meta_block"]//span/text()')).strip()
86 content = '{0} - {1}'.format(metadata['du'], info)
87 thumbnail = '{0}th?id={1}'.format(base_url, metadata['thid'])
88 results.append(
89 {
90 'url': metadata['murl'],
91 'thumbnail': thumbnail,
92 'title': metadata.get('vt', ''),
93 'content': content,
94 'template': 'videos.html',
95 }
96 )
97
98 return results
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/searx/engines/bing_videos.py b/searx/engines/bing_videos.py
--- a/searx/engines/bing_videos.py
+++ b/searx/engines/bing_videos.py
@@ -80,7 +80,7 @@
dom = html.fromstring(resp.text)
- for result in dom.xpath('//div[@class="dg_u"]'):
+ for result in dom.xpath('//div[@class="dg_u"]/div[contains(@class, "mc_vtvc")]'):
metadata = loads(result.xpath('.//div[@class="vrhdata"]/@vrhm')[0])
info = ' - '.join(result.xpath('.//div[@class="mc_vtvc_meta_block"]//span/text()')).strip()
content = '{0} - {1}'.format(metadata['du'], info)
|
{"golden_diff": "diff --git a/searx/engines/bing_videos.py b/searx/engines/bing_videos.py\n--- a/searx/engines/bing_videos.py\n+++ b/searx/engines/bing_videos.py\n@@ -80,7 +80,7 @@\n \n dom = html.fromstring(resp.text)\n \n- for result in dom.xpath('//div[@class=\"dg_u\"]'):\n+ for result in dom.xpath('//div[@class=\"dg_u\"]/div[contains(@class, \"mc_vtvc\")]'):\n metadata = loads(result.xpath('.//div[@class=\"vrhdata\"]/@vrhm')[0])\n info = ' - '.join(result.xpath('.//div[@class=\"mc_vtvc_meta_block\"]//span/text()')).strip()\n content = '{0} - {1}'.format(metadata['du'], info)\n", "issue": "Bug: bing videos engine\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\nRepository: https://github.com/searxng/searxng\r\nBranch: master\r\nVersion: 2022.08.14-649d1a14\r\n<!-- Check if these values are correct -->\r\n\r\n**How did you install SearXNG?**\r\nsearxng-docker\r\n\r\n**What happened?**\r\nbing video search crashed\r\n\r\n**How To Reproduce**\r\nsearch for anything\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n**Screenshots & Logs**\r\n<!-- If applicable, add screenshots, logs to help explain your problem. -->\r\n\r\n**Additional context**\r\n<!-- Add any other context about the problem here. -->\r\n\r\n**Technical report**\r\n\r\nError\r\n * Error: IndexError\r\n * Percentage: 100\r\n * Parameters: `()`\r\n * File name: `searx/engines/bing_videos.py:84`\r\n * Function: `response`\r\n * Code: `metadata = loads(result.xpath('.//div[@class=\"vrhdata\"]/@vrhm')[0])`\r\n\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Bing (Videos)\n\n\"\"\"\n\nfrom json import loads\nfrom urllib.parse import urlencode\n\nfrom lxml import html\n\nfrom searx.utils import match_language\nfrom searx.engines.bing import language_aliases\n\nfrom searx.engines.bing import ( # pylint: disable=unused-import\n _fetch_supported_languages,\n supported_languages_url,\n)\n\nabout = {\n \"website\": 'https://www.bing.com/videos',\n \"wikidata_id\": 'Q4914152',\n \"official_api_documentation\": 'https://www.microsoft.com/en-us/bing/apis/bing-video-search-api',\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": 'HTML',\n}\n\ncategories = ['videos', 'web']\npaging = True\nsafesearch = True\ntime_range_support = True\nsend_accept_language_header = True\nnumber_of_results = 28\n\nbase_url = 'https://www.bing.com/'\nsearch_string = (\n # fmt: off\n 'videos/search'\n '?{query}'\n '&count={count}'\n '&first={first}'\n '&scope=video'\n '&FORM=QBLH'\n # fmt: on\n)\ntime_range_string = '&qft=+filterui:videoage-lt{interval}'\ntime_range_dict = {'day': '1440', 'week': '10080', 'month': '43200', 'year': '525600'}\n\n# safesearch definitions\nsafesearch_types = {2: 'STRICT', 1: 'DEMOTE', 0: 'OFF'}\n\n\n# do search-request\ndef request(query, params):\n offset = ((params['pageno'] - 1) * number_of_results) + 1\n\n search_path = search_string.format(query=urlencode({'q': query}), count=number_of_results, first=offset)\n\n # safesearch cookie\n params['cookies']['SRCHHPGUSR'] = 'ADLT=' + safesearch_types.get(params['safesearch'], 'DEMOTE')\n\n # language cookie\n language = match_language(params['language'], supported_languages, language_aliases).lower()\n params['cookies']['_EDGE_S'] = 'mkt=' + language + '&F=1'\n\n # query and paging\n params['url'] = base_url + search_path\n\n # time range\n if params['time_range'] in 
time_range_dict:\n params['url'] += time_range_string.format(interval=time_range_dict[params['time_range']])\n\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n\n dom = html.fromstring(resp.text)\n\n for result in dom.xpath('//div[@class=\"dg_u\"]'):\n metadata = loads(result.xpath('.//div[@class=\"vrhdata\"]/@vrhm')[0])\n info = ' - '.join(result.xpath('.//div[@class=\"mc_vtvc_meta_block\"]//span/text()')).strip()\n content = '{0} - {1}'.format(metadata['du'], info)\n thumbnail = '{0}th?id={1}'.format(base_url, metadata['thid'])\n results.append(\n {\n 'url': metadata['murl'],\n 'thumbnail': thumbnail,\n 'title': metadata.get('vt', ''),\n 'content': content,\n 'template': 'videos.html',\n }\n )\n\n return results\n", "path": "searx/engines/bing_videos.py"}], "after_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Bing (Videos)\n\n\"\"\"\n\nfrom json import loads\nfrom urllib.parse import urlencode\n\nfrom lxml import html\n\nfrom searx.utils import match_language\nfrom searx.engines.bing import language_aliases\n\nfrom searx.engines.bing import ( # pylint: disable=unused-import\n _fetch_supported_languages,\n supported_languages_url,\n)\n\nabout = {\n \"website\": 'https://www.bing.com/videos',\n \"wikidata_id\": 'Q4914152',\n \"official_api_documentation\": 'https://www.microsoft.com/en-us/bing/apis/bing-video-search-api',\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": 'HTML',\n}\n\ncategories = ['videos', 'web']\npaging = True\nsafesearch = True\ntime_range_support = True\nsend_accept_language_header = True\nnumber_of_results = 28\n\nbase_url = 'https://www.bing.com/'\nsearch_string = (\n # fmt: off\n 'videos/search'\n '?{query}'\n '&count={count}'\n '&first={first}'\n '&scope=video'\n '&FORM=QBLH'\n # fmt: on\n)\ntime_range_string = '&qft=+filterui:videoage-lt{interval}'\ntime_range_dict = {'day': '1440', 'week': '10080', 'month': '43200', 'year': '525600'}\n\n# safesearch definitions\nsafesearch_types = {2: 'STRICT', 1: 'DEMOTE', 0: 'OFF'}\n\n\n# do search-request\ndef request(query, params):\n offset = ((params['pageno'] - 1) * number_of_results) + 1\n\n search_path = search_string.format(query=urlencode({'q': query}), count=number_of_results, first=offset)\n\n # safesearch cookie\n params['cookies']['SRCHHPGUSR'] = 'ADLT=' + safesearch_types.get(params['safesearch'], 'DEMOTE')\n\n # language cookie\n language = match_language(params['language'], supported_languages, language_aliases).lower()\n params['cookies']['_EDGE_S'] = 'mkt=' + language + '&F=1'\n\n # query and paging\n params['url'] = base_url + search_path\n\n # time range\n if params['time_range'] in time_range_dict:\n params['url'] += time_range_string.format(interval=time_range_dict[params['time_range']])\n\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n\n dom = html.fromstring(resp.text)\n\n for result in dom.xpath('//div[@class=\"dg_u\"]/div[contains(@class, \"mc_vtvc\")]'):\n metadata = loads(result.xpath('.//div[@class=\"vrhdata\"]/@vrhm')[0])\n info = ' - '.join(result.xpath('.//div[@class=\"mc_vtvc_meta_block\"]//span/text()')).strip()\n content = '{0} - {1}'.format(metadata['du'], info)\n thumbnail = '{0}th?id={1}'.format(base_url, metadata['thid'])\n results.append(\n {\n 'url': metadata['murl'],\n 'thumbnail': thumbnail,\n 'title': metadata.get('vt', ''),\n 'content': content,\n 'template': 'videos.html',\n }\n )\n\n return results\n", "path": 
"searx/engines/bing_videos.py"}]}
| 1,492 | 186 |
gh_patches_debug_22806
|
rasdani/github-patches
|
git_diff
|
sopel-irc__sopel-1859
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
announce: Batch up to MAXTARGETS channels at a time if available
The simple loop in `announce.py` is kind of...meh. The number of lines it actually sends to the server could be reduced as much as 75% depending on how many channels Sopel is in when `.announce` is called.
Some IRC servers (Hybrid, for example) send a `MAXTARGETS` field in the `ISUPPORT` properties when a client connects. If this is present, it would be useful for `announce.py` to have that information and send its announcement messages to `MAXTARGETS` channels at a time.
Long-term thing, more of an optimization than anything.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/announce.py`
Content:
```
1 # coding=utf-8
2 """
3 announce.py - Sopel Announcement Plugin
4 Sends announcements to all channels the bot has joined.
5 Copyright © 2013, Elad Alfassa, <[email protected]>
6 Licensed under the Eiffel Forum License 2.
7
8 https://sopel.chat
9 """
10 from __future__ import absolute_import, division, print_function, unicode_literals
11
12 from sopel import plugin
13
14
15 @plugin.command('announce')
16 @plugin.example('.announce Some important message here')
17 @plugin.require_admin('Sorry, I can\'t let you do that', reply=True)
18 @plugin.output_prefix('[ANNOUNCEMENT] ')
19 def announce(bot, trigger):
20 """Send an announcement to all channels the bot is in"""
21 for channel in bot.channels:
22 bot.say(trigger.group(2), channel)
23 bot.reply('Announce complete.')
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sopel/modules/announce.py b/sopel/modules/announce.py
--- a/sopel/modules/announce.py
+++ b/sopel/modules/announce.py
@@ -12,12 +12,30 @@
from sopel import plugin
+def _chunks(items, size):
+ """Break a list of items into groups.
+
+ :param items: the collection of items to chunk
+ :type items: :term:`iterable`
+ :param int size: the size of each chunk
+ """
+ # from https://stackoverflow.com/a/312464/5991 with modified names for readability
+ for delim in range(0, len(items), size):
+ yield items[delim:delim + size]
+
+
@plugin.command('announce')
@plugin.example('.announce Some important message here')
@plugin.require_admin('Sorry, I can\'t let you do that', reply=True)
@plugin.output_prefix('[ANNOUNCEMENT] ')
def announce(bot, trigger):
- """Send an announcement to all channels the bot is in"""
- for channel in bot.channels:
- bot.say(trigger.group(2), channel)
+ """Send an announcement to all channels the bot is in."""
+ if trigger.group(2) is None:
+ bot.reply('Announce what? I need a message to say.')
+ return
+
+ size = bot.isupport.get('TARGMAX', {}).get('PRIVMSG', 1)
+ channels = _chunks(bot.channels.keys(), size)
+ for cgroup in channels:
+ bot.say(trigger.group(2), ','.join(cgroup))
bot.reply('Announce complete.')
|
{"golden_diff": "diff --git a/sopel/modules/announce.py b/sopel/modules/announce.py\n--- a/sopel/modules/announce.py\n+++ b/sopel/modules/announce.py\n@@ -12,12 +12,30 @@\n from sopel import plugin\n \n \n+def _chunks(items, size):\n+ \"\"\"Break a list of items into groups.\n+\n+ :param items: the collection of items to chunk\n+ :type items: :term:`iterable`\n+ :param int size: the size of each chunk\n+ \"\"\"\n+ # from https://stackoverflow.com/a/312464/5991 with modified names for readability\n+ for delim in range(0, len(items), size):\n+ yield items[delim:delim + size]\n+\n+\n @plugin.command('announce')\n @plugin.example('.announce Some important message here')\n @plugin.require_admin('Sorry, I can\\'t let you do that', reply=True)\n @plugin.output_prefix('[ANNOUNCEMENT] ')\n def announce(bot, trigger):\n- \"\"\"Send an announcement to all channels the bot is in\"\"\"\n- for channel in bot.channels:\n- bot.say(trigger.group(2), channel)\n+ \"\"\"Send an announcement to all channels the bot is in.\"\"\"\n+ if trigger.group(2) is None:\n+ bot.reply('Announce what? I need a message to say.')\n+ return\n+\n+ size = bot.isupport.get('TARGMAX', {}).get('PRIVMSG', 1)\n+ channels = _chunks(bot.channels.keys(), size)\n+ for cgroup in channels:\n+ bot.say(trigger.group(2), ','.join(cgroup))\n bot.reply('Announce complete.')\n", "issue": "announce: Batch up to MAXTARGETS channels at a time if available\nThe simple loop in `announce.py` is kind of...meh. The number of lines it actually sends to the server could be reduced as much as 75% depending on how many channels Sopel is in when `.announce` is called.\r\n\r\nSome IRC servers (Hybrid, for example) send a `MAXTARGETS` field in the `ISUPPORT` properties when a client connects. If this is present, it would be useful for `announce.py` to have that information and send its announcement messages to `MAXTARGETS` channels at a time.\r\n\r\nLong-term thing, more of an optimization than anything.\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"\nannounce.py - Sopel Announcement Plugin\nSends announcements to all channels the bot has joined.\nCopyright \u00a9 2013, Elad Alfassa, <[email protected]>\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom sopel import plugin\n\n\[email protected]('announce')\[email protected]('.announce Some important message here')\[email protected]_admin('Sorry, I can\\'t let you do that', reply=True)\[email protected]_prefix('[ANNOUNCEMENT] ')\ndef announce(bot, trigger):\n \"\"\"Send an announcement to all channels the bot is in\"\"\"\n for channel in bot.channels:\n bot.say(trigger.group(2), channel)\n bot.reply('Announce complete.')\n", "path": "sopel/modules/announce.py"}], "after_files": [{"content": "# coding=utf-8\n\"\"\"\nannounce.py - Sopel Announcement Plugin\nSends announcements to all channels the bot has joined.\nCopyright \u00a9 2013, Elad Alfassa, <[email protected]>\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom sopel import plugin\n\n\ndef _chunks(items, size):\n \"\"\"Break a list of items into groups.\n\n :param items: the collection of items to chunk\n :type items: :term:`iterable`\n :param int size: the size of each chunk\n \"\"\"\n # from https://stackoverflow.com/a/312464/5991 with modified names for readability\n for delim in range(0, len(items), size):\n yield 
items[delim:delim + size]\n\n\[email protected]('announce')\[email protected]('.announce Some important message here')\[email protected]_admin('Sorry, I can\\'t let you do that', reply=True)\[email protected]_prefix('[ANNOUNCEMENT] ')\ndef announce(bot, trigger):\n \"\"\"Send an announcement to all channels the bot is in.\"\"\"\n if trigger.group(2) is None:\n bot.reply('Announce what? I need a message to say.')\n return\n\n size = bot.isupport.get('TARGMAX', {}).get('PRIVMSG', 1)\n channels = _chunks(bot.channels.keys(), size)\n for cgroup in channels:\n bot.say(trigger.group(2), ','.join(cgroup))\n bot.reply('Announce complete.')\n", "path": "sopel/modules/announce.py"}]}
| 625 | 373 |
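A minimal sketch of the batching behaviour the `_chunks` helper in the patch above enables. The channel list and the TARGMAX limit are made up for illustration, and the snippet mirrors only the chunking logic, not Sopel's real bot API:

```python
def _chunks(items, size):
    """Yield successive groups of at most ``size`` items."""
    for delim in range(0, len(items), size):
        yield items[delim:delim + size]


# Hypothetical channel list and TARGMAX/PRIVMSG limit, for illustration only.
channels = ['#dev', '#ops', '#random', '#announcements', '#help']
targmax = 4

for group in _chunks(channels, targmax):
    # One outgoing line per group of channels instead of one per channel.
    print('PRIVMSG {} :Some important message here'.format(','.join(group)))
```

With five channels and a limit of four targets per message, the announcement goes out in two lines instead of five.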
gh_patches_debug_16507
|
rasdani/github-patches
|
git_diff
|
azavea__raster-vision-725
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot handle CRS w/o EPSG
```
Checking for existing output [####################################] 100%
Saving command configuration to /opt/data/rv_root/chip/xview-object_detection/command-config.json...
Saving command configuration to /opt/data/rv_root/train/xview-object-detection-mobilenet/command-config.json...
Saving command configuration to /opt/data/rv_root/bundle/xview-object-detection-mobilenet/command-config.json...
Saving command configuration to /opt/data/rv_root/predict/xview-object-detection-mobilenet/command-config.json...
Saving command configuration to /opt/data/rv_root/eval/xview-object-detection-mobilenet/command-config.json...
python -m rastervision run_command /opt/data/rv_root/chip/xview-object_detection/command-config.json
Making training chips...
2019-03-20 16:00:47:rastervision.utils.files: INFO - Downloading s3://azavea-nyc-ml/cogs/000227.tif to /tmp/tmp98w0p7cl/tmpypmuuz6t/tmpqbscq03c/s3/azavea-nyc-ml/cogs/000227.tif
Traceback (most recent call last):
File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/src/rastervision/__main__.py", line 17, in <module>
rv.main()
File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/opt/src/rastervision/cli/main.py", line 253, in run_command
rv.runner.CommandRunner.run(command_config_uri)
File "/opt/src/rastervision/runner/command_runner.py", line 11, in run
CommandRunner.run_from_proto(msg)
File "/opt/src/rastervision/runner/command_runner.py", line 17, in run_from_proto
command.run()
File "/opt/src/rastervision/command/chip_command.py", line 22, in run
map(lambda s: s.create_scene(cc.task, tmp_dir), cc.train_scenes))
File "/opt/src/rastervision/command/chip_command.py", line 22, in <lambda>
map(lambda s: s.create_scene(cc.task, tmp_dir), cc.train_scenes))
File "/opt/src/rastervision/data/scene_config.py", line 36, in create_scene
raster_source = self.raster_source.create_source(tmp_dir)
File "/opt/src/rastervision/data/raster_source/geotiff_source_config.py", line 70, in create_source
y_shift_meters=y_shift_meters)
File "/opt/src/rastervision/data/raster_source/geotiff_source.py", line 46, in __init__
super().__init__(raster_transformers, temp_dir, channel_order)
File "/opt/src/rastervision/data/raster_source/rasterio_source.py", line 47, in __init__
with self.activate():
File "/opt/src/rastervision/data/activate_mixin.py", line 21, in __enter__
self.activate()
File "/opt/src/rastervision/data/activate_mixin.py", line 54, in do_activate
self._activate()
File "/opt/src/rastervision/data/raster_source/geotiff_source.py", line 104, in _activate
super()._activate()
File "/opt/src/rastervision/data/raster_source/rasterio_source.py", line 106, in _activate
self._set_crs_transformer()
File "/opt/src/rastervision/data/raster_source/geotiff_source.py", line 56, in _set_crs_transformer
self.image_dataset)
File "/opt/src/rastervision/data/crs_transformer/rasterio_crs_transformer.py", line 59, in from_dataset
image_crs = dataset.crs['init']
File "/usr/lib/python3.5/collections/__init__.py", line 986, in __getitem__
raise KeyError(key)
KeyError: 'init'
/tmp/tmpkon0yhf2/tmpyysq6a82/Makefile:6: recipe for target '0' failed
make: *** [0] Error 1
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rastervision/data/crs_transformer/rasterio_crs_transformer.py`
Content:
```
1 import pyproj
2
3 from rasterio.transform import (rowcol, xy)
4
5 from rastervision.data.crs_transformer import (CRSTransformer,
6 IdentityCRSTransformer)
7
8
9 class RasterioCRSTransformer(CRSTransformer):
10 """Transformer for a RasterioRasterSource."""
11
12 def __init__(self, transform, image_crs, map_crs='epsg:4326'):
13 """Construct transformer.
14
15 Args:
16 image_dataset: Rasterio DatasetReader
17 map_crs: CRS code
18 """
19 self.map_proj = pyproj.Proj(init=map_crs)
20 self.image_proj = pyproj.Proj(init=image_crs)
21
22 super().__init__(image_crs, map_crs, transform)
23
24 def map_to_pixel(self, map_point):
25 """Transform point from map to pixel-based coordinates.
26
27 Args:
28 map_point: (x, y) tuple in map coordinates
29
30 Returns:
31 (x, y) tuple in pixel coordinates
32 """
33 image_point = pyproj.transform(self.map_proj, self.image_proj,
34 map_point[0], map_point[1])
35 pixel_point = rowcol(self.transform, image_point[0], image_point[1])
36 pixel_point = (pixel_point[1], pixel_point[0])
37 return pixel_point
38
39 def pixel_to_map(self, pixel_point):
40 """Transform point from pixel to map-based coordinates.
41
42 Args:
43 pixel_point: (x, y) tuple in pixel coordinates
44
45 Returns:
46 (x, y) tuple in map coordinates
47 """
48 image_point = xy(self.transform, int(pixel_point[1]),
49 int(pixel_point[0]))
50 map_point = pyproj.transform(self.image_proj, self.map_proj,
51 image_point[0], image_point[1])
52 return map_point
53
54 @classmethod
55 def from_dataset(cls, dataset, map_crs='epsg:4326'):
56 if dataset.crs is None:
57 return IdentityCRSTransformer()
58 transform = dataset.transform
59 image_crs = dataset.crs['init']
60 return cls(transform, image_crs, map_crs)
61
62 def get_affine_transform(self):
63 return self.transform
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rastervision/data/crs_transformer/rasterio_crs_transformer.py b/rastervision/data/crs_transformer/rasterio_crs_transformer.py
--- a/rastervision/data/crs_transformer/rasterio_crs_transformer.py
+++ b/rastervision/data/crs_transformer/rasterio_crs_transformer.py
@@ -17,7 +17,7 @@
map_crs: CRS code
"""
self.map_proj = pyproj.Proj(init=map_crs)
- self.image_proj = pyproj.Proj(init=image_crs)
+ self.image_proj = pyproj.Proj(image_crs)
super().__init__(image_crs, map_crs, transform)
@@ -56,7 +56,7 @@
if dataset.crs is None:
return IdentityCRSTransformer()
transform = dataset.transform
- image_crs = dataset.crs['init']
+ image_crs = dataset.crs
return cls(transform, image_crs, map_crs)
def get_affine_transform(self):
|
{"golden_diff": "diff --git a/rastervision/data/crs_transformer/rasterio_crs_transformer.py b/rastervision/data/crs_transformer/rasterio_crs_transformer.py\n--- a/rastervision/data/crs_transformer/rasterio_crs_transformer.py\n+++ b/rastervision/data/crs_transformer/rasterio_crs_transformer.py\n@@ -17,7 +17,7 @@\n map_crs: CRS code\n \"\"\"\n self.map_proj = pyproj.Proj(init=map_crs)\n- self.image_proj = pyproj.Proj(init=image_crs)\n+ self.image_proj = pyproj.Proj(image_crs)\n \n super().__init__(image_crs, map_crs, transform)\n \n@@ -56,7 +56,7 @@\n if dataset.crs is None:\n return IdentityCRSTransformer()\n transform = dataset.transform\n- image_crs = dataset.crs['init']\n+ image_crs = dataset.crs\n return cls(transform, image_crs, map_crs)\n \n def get_affine_transform(self):\n", "issue": "Cannot handle CRS w/o EPSG\n```\r\nChecking for existing output [####################################] 100%\r\nSaving command configuration to /opt/data/rv_root/chip/xview-object_detection/command-config.json...\r\nSaving command configuration to /opt/data/rv_root/train/xview-object-detection-mobilenet/command-config.json...\r\nSaving command configuration to /opt/data/rv_root/bundle/xview-object-detection-mobilenet/command-config.json...\r\nSaving command configuration to /opt/data/rv_root/predict/xview-object-detection-mobilenet/command-config.json...\r\nSaving command configuration to /opt/data/rv_root/eval/xview-object-detection-mobilenet/command-config.json...\r\npython -m rastervision run_command /opt/data/rv_root/chip/xview-object_detection/command-config.json\r\nMaking training chips...\r\n2019-03-20 16:00:47:rastervision.utils.files: INFO - Downloading s3://azavea-nyc-ml/cogs/000227.tif to /tmp/tmp98w0p7cl/tmpypmuuz6t/tmpqbscq03c/s3/azavea-nyc-ml/cogs/000227.tif\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.5/runpy.py\", line 184, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/python3.5/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/opt/src/rastervision/__main__.py\", line 17, in <module>\r\n rv.main()\r\n File \"/usr/local/lib/python3.5/dist-packages/click/core.py\", line 722, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python3.5/dist-packages/click/core.py\", line 697, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python3.5/dist-packages/click/core.py\", line 1066, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.5/dist-packages/click/core.py\", line 895, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python3.5/dist-packages/click/core.py\", line 535, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/opt/src/rastervision/cli/main.py\", line 253, in run_command\r\n rv.runner.CommandRunner.run(command_config_uri)\r\n File \"/opt/src/rastervision/runner/command_runner.py\", line 11, in run\r\n CommandRunner.run_from_proto(msg)\r\n File \"/opt/src/rastervision/runner/command_runner.py\", line 17, in run_from_proto\r\n command.run()\r\n File \"/opt/src/rastervision/command/chip_command.py\", line 22, in run\r\n map(lambda s: s.create_scene(cc.task, tmp_dir), cc.train_scenes))\r\n File \"/opt/src/rastervision/command/chip_command.py\", line 22, in <lambda>\r\n map(lambda s: s.create_scene(cc.task, tmp_dir), cc.train_scenes))\r\n File \"/opt/src/rastervision/data/scene_config.py\", line 36, in create_scene\r\n raster_source = self.raster_source.create_source(tmp_dir)\r\n File 
\"/opt/src/rastervision/data/raster_source/geotiff_source_config.py\", line 70, in create_source\r\n y_shift_meters=y_shift_meters)\r\n File \"/opt/src/rastervision/data/raster_source/geotiff_source.py\", line 46, in __init__\r\n super().__init__(raster_transformers, temp_dir, channel_order)\r\n File \"/opt/src/rastervision/data/raster_source/rasterio_source.py\", line 47, in __init__\r\n with self.activate():\r\n File \"/opt/src/rastervision/data/activate_mixin.py\", line 21, in __enter__\r\n self.activate()\r\n File \"/opt/src/rastervision/data/activate_mixin.py\", line 54, in do_activate\r\n self._activate()\r\n File \"/opt/src/rastervision/data/raster_source/geotiff_source.py\", line 104, in _activate\r\n super()._activate()\r\n File \"/opt/src/rastervision/data/raster_source/rasterio_source.py\", line 106, in _activate\r\n self._set_crs_transformer()\r\n File \"/opt/src/rastervision/data/raster_source/geotiff_source.py\", line 56, in _set_crs_transformer\r\n self.image_dataset)\r\n File \"/opt/src/rastervision/data/crs_transformer/rasterio_crs_transformer.py\", line 59, in from_dataset\r\n image_crs = dataset.crs['init']\r\n File \"/usr/lib/python3.5/collections/__init__.py\", line 986, in __getitem__\r\n raise KeyError(key)\r\nKeyError: 'init'\r\n/tmp/tmpkon0yhf2/tmpyysq6a82/Makefile:6: recipe for target '0' failed\r\nmake: *** [0] Error 1\r\n```\n", "before_files": [{"content": "import pyproj\n\nfrom rasterio.transform import (rowcol, xy)\n\nfrom rastervision.data.crs_transformer import (CRSTransformer,\n IdentityCRSTransformer)\n\n\nclass RasterioCRSTransformer(CRSTransformer):\n \"\"\"Transformer for a RasterioRasterSource.\"\"\"\n\n def __init__(self, transform, image_crs, map_crs='epsg:4326'):\n \"\"\"Construct transformer.\n\n Args:\n image_dataset: Rasterio DatasetReader\n map_crs: CRS code\n \"\"\"\n self.map_proj = pyproj.Proj(init=map_crs)\n self.image_proj = pyproj.Proj(init=image_crs)\n\n super().__init__(image_crs, map_crs, transform)\n\n def map_to_pixel(self, map_point):\n \"\"\"Transform point from map to pixel-based coordinates.\n\n Args:\n map_point: (x, y) tuple in map coordinates\n\n Returns:\n (x, y) tuple in pixel coordinates\n \"\"\"\n image_point = pyproj.transform(self.map_proj, self.image_proj,\n map_point[0], map_point[1])\n pixel_point = rowcol(self.transform, image_point[0], image_point[1])\n pixel_point = (pixel_point[1], pixel_point[0])\n return pixel_point\n\n def pixel_to_map(self, pixel_point):\n \"\"\"Transform point from pixel to map-based coordinates.\n\n Args:\n pixel_point: (x, y) tuple in pixel coordinates\n\n Returns:\n (x, y) tuple in map coordinates\n \"\"\"\n image_point = xy(self.transform, int(pixel_point[1]),\n int(pixel_point[0]))\n map_point = pyproj.transform(self.image_proj, self.map_proj,\n image_point[0], image_point[1])\n return map_point\n\n @classmethod\n def from_dataset(cls, dataset, map_crs='epsg:4326'):\n if dataset.crs is None:\n return IdentityCRSTransformer()\n transform = dataset.transform\n image_crs = dataset.crs['init']\n return cls(transform, image_crs, map_crs)\n\n def get_affine_transform(self):\n return self.transform\n", "path": "rastervision/data/crs_transformer/rasterio_crs_transformer.py"}], "after_files": [{"content": "import pyproj\n\nfrom rasterio.transform import (rowcol, xy)\n\nfrom rastervision.data.crs_transformer import (CRSTransformer,\n IdentityCRSTransformer)\n\n\nclass RasterioCRSTransformer(CRSTransformer):\n \"\"\"Transformer for a RasterioRasterSource.\"\"\"\n\n def __init__(self, transform, 
image_crs, map_crs='epsg:4326'):\n \"\"\"Construct transformer.\n\n Args:\n image_dataset: Rasterio DatasetReader\n map_crs: CRS code\n \"\"\"\n self.map_proj = pyproj.Proj(init=map_crs)\n self.image_proj = pyproj.Proj(image_crs)\n\n super().__init__(image_crs, map_crs, transform)\n\n def map_to_pixel(self, map_point):\n \"\"\"Transform point from map to pixel-based coordinates.\n\n Args:\n map_point: (x, y) tuple in map coordinates\n\n Returns:\n (x, y) tuple in pixel coordinates\n \"\"\"\n image_point = pyproj.transform(self.map_proj, self.image_proj,\n map_point[0], map_point[1])\n pixel_point = rowcol(self.transform, image_point[0], image_point[1])\n pixel_point = (pixel_point[1], pixel_point[0])\n return pixel_point\n\n def pixel_to_map(self, pixel_point):\n \"\"\"Transform point from pixel to map-based coordinates.\n\n Args:\n pixel_point: (x, y) tuple in pixel coordinates\n\n Returns:\n (x, y) tuple in map coordinates\n \"\"\"\n image_point = xy(self.transform, int(pixel_point[1]),\n int(pixel_point[0]))\n map_point = pyproj.transform(self.image_proj, self.map_proj,\n image_point[0], image_point[1])\n return map_point\n\n @classmethod\n def from_dataset(cls, dataset, map_crs='epsg:4326'):\n if dataset.crs is None:\n return IdentityCRSTransformer()\n transform = dataset.transform\n image_crs = dataset.crs\n return cls(transform, image_crs, map_crs)\n\n def get_affine_transform(self):\n return self.transform\n", "path": "rastervision/data/crs_transformer/rasterio_crs_transformer.py"}]}
| 2,043 | 241 |
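A small illustration of why the patch above drops the `init=` form: `pyproj.Proj(init=...)` only understands `authority:code` strings such as `epsg:4326`, while the positional form also accepts raw PROJ/WKT definitions, which is all a GeoTIFF without an EPSG code can offer. The CRS strings below are hypothetical, and the snippet assumes pyproj 2 or later:

```python
import pyproj

epsg_crs = 'EPSG:32618'                                                 # has an EPSG code
custom_crs = '+proj=aeqd +lat_0=40 +lon_0=-105 +datum=WGS84 +units=m'   # PROJ string, no EPSG code

# Both succeed with the positional form used by the patched transformer.
print(pyproj.Proj(epsg_crs))
print(pyproj.Proj(custom_crs))
```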
gh_patches_debug_16342
|
rasdani/github-patches
|
git_diff
|
learningequality__kolibri-8880
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User on SouD should not be able to edit their user profile
From @jamalex in bug bash:
The FacilityUser object is read-only for single-user certificates, so these changes will not be synced back. We had a spec doc that laid out "what should be allowed on a SoUD", I think this was documented in there — we should do an audit to make sure other cases are covered as well.
https://www.notion.so/learningequality/CRITICAL-FEATURE-BUG-Users-on-SoUD-should-not-be-allowed-to-edit-their-user-profile-e5765979e7ed4fcaa694cfe7932ac4df
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/plugins/user_profile/kolibri_plugin.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import print_function
3 from __future__ import unicode_literals
4
5 from kolibri.core.hooks import NavigationHook
6 from kolibri.core.webpack import hooks as webpack_hooks
7 from kolibri.plugins import KolibriPluginBase
8 from kolibri.plugins.hooks import register_hook
9
10
11 class UserProfile(KolibriPluginBase):
12 translated_view_urls = "urls"
13
14 @property
15 def url_slug(self):
16 return "profile"
17
18
19 @register_hook
20 class UserAuthAsset(webpack_hooks.WebpackBundleHook):
21 bundle_id = "app"
22
23
24 @register_hook
25 class ProfileNavAction(NavigationHook):
26 bundle_id = "user_profile_side_nav"
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kolibri/plugins/user_profile/kolibri_plugin.py b/kolibri/plugins/user_profile/kolibri_plugin.py
--- a/kolibri/plugins/user_profile/kolibri_plugin.py
+++ b/kolibri/plugins/user_profile/kolibri_plugin.py
@@ -2,6 +2,7 @@
from __future__ import print_function
from __future__ import unicode_literals
+from kolibri.core.device.utils import get_device_setting
from kolibri.core.hooks import NavigationHook
from kolibri.core.webpack import hooks as webpack_hooks
from kolibri.plugins import KolibriPluginBase
@@ -20,6 +21,10 @@
class UserAuthAsset(webpack_hooks.WebpackBundleHook):
bundle_id = "app"
+ @property
+ def plugin_data(self):
+ return {"isSubsetOfUsersDevice": get_device_setting("subset_of_users_device")}
+
@register_hook
class ProfileNavAction(NavigationHook):
|
{"golden_diff": "diff --git a/kolibri/plugins/user_profile/kolibri_plugin.py b/kolibri/plugins/user_profile/kolibri_plugin.py\n--- a/kolibri/plugins/user_profile/kolibri_plugin.py\n+++ b/kolibri/plugins/user_profile/kolibri_plugin.py\n@@ -2,6 +2,7 @@\n from __future__ import print_function\n from __future__ import unicode_literals\n \n+from kolibri.core.device.utils import get_device_setting\n from kolibri.core.hooks import NavigationHook\n from kolibri.core.webpack import hooks as webpack_hooks\n from kolibri.plugins import KolibriPluginBase\n@@ -20,6 +21,10 @@\n class UserAuthAsset(webpack_hooks.WebpackBundleHook):\n bundle_id = \"app\"\n \n+ @property\n+ def plugin_data(self):\n+ return {\"isSubsetOfUsersDevice\": get_device_setting(\"subset_of_users_device\")}\n+\n \n @register_hook\n class ProfileNavAction(NavigationHook):\n", "issue": "User on SouD should not be able to edit their user profile\nFrom @jamalex in bug bash: \r\nThe FacilityUser object is read-only for single-user certificates, so these changes will not be synced back. We had a spec doc that laid out \"what should be allowed on a SoUD\", I think this was documented in there \u2014 we should do an audit to make sure other cases are covered as well.\r\n\r\nhttps://www.notion.so/learningequality/CRITICAL-FEATURE-BUG-Users-on-SoUD-should-not-be-allowed-to-edit-their-user-profile-e5765979e7ed4fcaa694cfe7932ac4df\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nfrom kolibri.core.hooks import NavigationHook\nfrom kolibri.core.webpack import hooks as webpack_hooks\nfrom kolibri.plugins import KolibriPluginBase\nfrom kolibri.plugins.hooks import register_hook\n\n\nclass UserProfile(KolibriPluginBase):\n translated_view_urls = \"urls\"\n\n @property\n def url_slug(self):\n return \"profile\"\n\n\n@register_hook\nclass UserAuthAsset(webpack_hooks.WebpackBundleHook):\n bundle_id = \"app\"\n\n\n@register_hook\nclass ProfileNavAction(NavigationHook):\n bundle_id = \"user_profile_side_nav\"\n", "path": "kolibri/plugins/user_profile/kolibri_plugin.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nfrom kolibri.core.device.utils import get_device_setting\nfrom kolibri.core.hooks import NavigationHook\nfrom kolibri.core.webpack import hooks as webpack_hooks\nfrom kolibri.plugins import KolibriPluginBase\nfrom kolibri.plugins.hooks import register_hook\n\n\nclass UserProfile(KolibriPluginBase):\n translated_view_urls = \"urls\"\n\n @property\n def url_slug(self):\n return \"profile\"\n\n\n@register_hook\nclass UserAuthAsset(webpack_hooks.WebpackBundleHook):\n bundle_id = \"app\"\n\n @property\n def plugin_data(self):\n return {\"isSubsetOfUsersDevice\": get_device_setting(\"subset_of_users_device\")}\n\n\n@register_hook\nclass ProfileNavAction(NavigationHook):\n bundle_id = \"user_profile_side_nav\"\n", "path": "kolibri/plugins/user_profile/kolibri_plugin.py"}]}
| 605 | 204 |
gh_patches_debug_18363
|
rasdani/github-patches
|
git_diff
|
pytorch__pytorch-78948
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Functional API for FileLister
### 🚀 The feature, motivation and pitch
Similar to https://github.com/pytorch/data/issues/387
This allows for
```python
IterableWrapper([...]).list_file()
```
### Alternatives
_No response_
### Additional context
_No response_
cc @VitalyFedyunin @ejguan @NivekT
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torch/utils/data/datapipes/iter/filelister.py`
Content:
```
1 from typing import Iterator, List, Sequence, Union
2
3 from torch.utils.data.datapipes.datapipe import IterDataPipe
4 from torch.utils.data.datapipes.iter import IterableWrapper
5 from torch.utils.data.datapipes.utils.common import get_file_pathnames_from_root
6
7 __all__ = ["FileListerIterDataPipe", ]
8
9
10 class FileListerIterDataPipe(IterDataPipe[str]):
11 r"""
12 Given path(s) to the root directory, yields file pathname(s) (path + filename) of files within the root directory.
13 Multiple root directories can be provided.
14
15 Args:
16 root: Root directory or a sequence of root directories
17 masks: Unix style filter string or string list for filtering file name(s)
18 recursive: Whether to return pathname from nested directories or not
19 abspath: Whether to return relative pathname or absolute pathname
20 non_deterministic: Whether to return pathname in sorted order or not.
21 If ``False``, the results yielded from each root directory will be sorted
22 length: Nominal length of the datapipe
23
24 Example:
25 >>> from torchdata.datapipes.iter import FileLister
26 >>> dp = FileLister(root=".", recursive=True)
27 >>> list(dp)
28 ['example.py', './data/data.tar']
29 """
30
31 def __init__(
32 self,
33 root: Union[str, Sequence[str], IterDataPipe] = '.',
34 masks: Union[str, List[str]] = '',
35 *,
36 recursive: bool = False,
37 abspath: bool = False,
38 non_deterministic: bool = False,
39 length: int = -1
40 ) -> None:
41 super().__init__()
42 if isinstance(root, str):
43 root = [root, ]
44 if not isinstance(root, IterDataPipe):
45 root = IterableWrapper(root)
46 self.datapipe: IterDataPipe = root
47 self.masks: Union[str, List[str]] = masks
48 self.recursive: bool = recursive
49 self.abspath: bool = abspath
50 self.non_deterministic: bool = non_deterministic
51 self.length: int = length
52
53 def __iter__(self) -> Iterator[str] :
54 for path in self.datapipe:
55 yield from get_file_pathnames_from_root(path, self.masks, self.recursive, self.abspath, self.non_deterministic)
56
57 def __len__(self):
58 if self.length == -1:
59 raise TypeError("{} instance doesn't have valid length".format(type(self).__name__))
60 return self.length
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/torch/utils/data/datapipes/iter/filelister.py b/torch/utils/data/datapipes/iter/filelister.py
--- a/torch/utils/data/datapipes/iter/filelister.py
+++ b/torch/utils/data/datapipes/iter/filelister.py
@@ -1,5 +1,8 @@
from typing import Iterator, List, Sequence, Union
+
+from torch.utils.data.datapipes._decorator import functional_datapipe
+
from torch.utils.data.datapipes.datapipe import IterDataPipe
from torch.utils.data.datapipes.iter import IterableWrapper
from torch.utils.data.datapipes.utils.common import get_file_pathnames_from_root
@@ -7,6 +10,7 @@
__all__ = ["FileListerIterDataPipe", ]
+@functional_datapipe("list_files")
class FileListerIterDataPipe(IterDataPipe[str]):
r"""
Given path(s) to the root directory, yields file pathname(s) (path + filename) of files within the root directory.
|
{"golden_diff": "diff --git a/torch/utils/data/datapipes/iter/filelister.py b/torch/utils/data/datapipes/iter/filelister.py\n--- a/torch/utils/data/datapipes/iter/filelister.py\n+++ b/torch/utils/data/datapipes/iter/filelister.py\n@@ -1,5 +1,8 @@\n from typing import Iterator, List, Sequence, Union\n \n+\n+from torch.utils.data.datapipes._decorator import functional_datapipe\n+\n from torch.utils.data.datapipes.datapipe import IterDataPipe\n from torch.utils.data.datapipes.iter import IterableWrapper\n from torch.utils.data.datapipes.utils.common import get_file_pathnames_from_root\n@@ -7,6 +10,7 @@\n __all__ = [\"FileListerIterDataPipe\", ]\n \n \n+@functional_datapipe(\"list_files\")\n class FileListerIterDataPipe(IterDataPipe[str]):\n r\"\"\"\n Given path(s) to the root directory, yields file pathname(s) (path + filename) of files within the root directory.\n", "issue": "Functional API for FileLister\n### \ud83d\ude80 The feature, motivation and pitch\n\nSimilar to https://github.com/pytorch/data/issues/387\r\n\r\nThis allows for \r\n```python\r\nIterableWrapper([...]).list_file()\r\n```\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @VitalyFedyunin @ejguan @NivekT\n", "before_files": [{"content": "from typing import Iterator, List, Sequence, Union\n\nfrom torch.utils.data.datapipes.datapipe import IterDataPipe\nfrom torch.utils.data.datapipes.iter import IterableWrapper\nfrom torch.utils.data.datapipes.utils.common import get_file_pathnames_from_root\n\n__all__ = [\"FileListerIterDataPipe\", ]\n\n\nclass FileListerIterDataPipe(IterDataPipe[str]):\n r\"\"\"\n Given path(s) to the root directory, yields file pathname(s) (path + filename) of files within the root directory.\n Multiple root directories can be provided.\n\n Args:\n root: Root directory or a sequence of root directories\n masks: Unix style filter string or string list for filtering file name(s)\n recursive: Whether to return pathname from nested directories or not\n abspath: Whether to return relative pathname or absolute pathname\n non_deterministic: Whether to return pathname in sorted order or not.\n If ``False``, the results yielded from each root directory will be sorted\n length: Nominal length of the datapipe\n\n Example:\n >>> from torchdata.datapipes.iter import FileLister\n >>> dp = FileLister(root=\".\", recursive=True)\n >>> list(dp)\n ['example.py', './data/data.tar']\n \"\"\"\n\n def __init__(\n self,\n root: Union[str, Sequence[str], IterDataPipe] = '.',\n masks: Union[str, List[str]] = '',\n *,\n recursive: bool = False,\n abspath: bool = False,\n non_deterministic: bool = False,\n length: int = -1\n ) -> None:\n super().__init__()\n if isinstance(root, str):\n root = [root, ]\n if not isinstance(root, IterDataPipe):\n root = IterableWrapper(root)\n self.datapipe: IterDataPipe = root\n self.masks: Union[str, List[str]] = masks\n self.recursive: bool = recursive\n self.abspath: bool = abspath\n self.non_deterministic: bool = non_deterministic\n self.length: int = length\n\n def __iter__(self) -> Iterator[str] :\n for path in self.datapipe:\n yield from get_file_pathnames_from_root(path, self.masks, self.recursive, self.abspath, self.non_deterministic)\n\n def __len__(self):\n if self.length == -1:\n raise TypeError(\"{} instance doesn't have valid length\".format(type(self).__name__))\n return self.length\n", "path": "torch/utils/data/datapipes/iter/filelister.py"}], "after_files": [{"content": "from typing import Iterator, List, Sequence, Union\n\n\nfrom 
torch.utils.data.datapipes._decorator import functional_datapipe\n\nfrom torch.utils.data.datapipes.datapipe import IterDataPipe\nfrom torch.utils.data.datapipes.iter import IterableWrapper\nfrom torch.utils.data.datapipes.utils.common import get_file_pathnames_from_root\n\n__all__ = [\"FileListerIterDataPipe\", ]\n\n\n@functional_datapipe(\"list_files\")\nclass FileListerIterDataPipe(IterDataPipe[str]):\n r\"\"\"\n Given path(s) to the root directory, yields file pathname(s) (path + filename) of files within the root directory.\n Multiple root directories can be provided.\n\n Args:\n root: Root directory or a sequence of root directories\n masks: Unix style filter string or string list for filtering file name(s)\n recursive: Whether to return pathname from nested directories or not\n abspath: Whether to return relative pathname or absolute pathname\n non_deterministic: Whether to return pathname in sorted order or not.\n If ``False``, the results yielded from each root directory will be sorted\n length: Nominal length of the datapipe\n\n Example:\n >>> from torchdata.datapipes.iter import FileLister\n >>> dp = FileLister(root=\".\", recursive=True)\n >>> list(dp)\n ['example.py', './data/data.tar']\n \"\"\"\n\n def __init__(\n self,\n root: Union[str, Sequence[str], IterDataPipe] = '.',\n masks: Union[str, List[str]] = '',\n *,\n recursive: bool = False,\n abspath: bool = False,\n non_deterministic: bool = False,\n length: int = -1\n ) -> None:\n super().__init__()\n if isinstance(root, str):\n root = [root, ]\n if not isinstance(root, IterDataPipe):\n root = IterableWrapper(root)\n self.datapipe: IterDataPipe = root\n self.masks: Union[str, List[str]] = masks\n self.recursive: bool = recursive\n self.abspath: bool = abspath\n self.non_deterministic: bool = non_deterministic\n self.length: int = length\n\n def __iter__(self) -> Iterator[str] :\n for path in self.datapipe:\n yield from get_file_pathnames_from_root(path, self.masks, self.recursive, self.abspath, self.non_deterministic)\n\n def __len__(self):\n if self.length == -1:\n raise TypeError(\"{} instance doesn't have valid length\".format(type(self).__name__))\n return self.length\n", "path": "torch/utils/data/datapipes/iter/filelister.py"}]}
| 998 | 226 |
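A short usage sketch of the functional form registered by this patch. The directory names and mask are placeholders, and the snippet assumes a build that includes the `@functional_datapipe("list_files")` registration shown above:

```python
from torch.utils.data.datapipes.iter import IterableWrapper

# Equivalent to FileLister(root=['./data', './configs'], masks='*.json', recursive=True);
# './data' and './configs' are hypothetical directories.
dp = IterableWrapper(['./data', './configs']).list_files(masks='*.json', recursive=True)

for pathname in dp:
    print(pathname)
```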
gh_patches_debug_13893
|
rasdani/github-patches
|
git_diff
|
litestar-org__litestar-1370
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: msgspec version needs updating
### Description
Starlite currently needs msgspec >= 0.11.0 from here https://github.com/starlite-api/starlite/blob/8a49aff754d5b409397183eb395cac5a44193d72/pyproject.toml#L52.
Starlite uses msgspec.inspect:
https://github.com/starlite-api/starlite/blob/41089a2d5dff696a25c78a58322d3a8a08ebcac3/starlite/constants.py#L3
msgspec.inspect wasn't introduced in msgspec until 0.12.0:
https://github.com/jcrist/msgspec/releases/tag/0.12.0
### URL to code causing the issue
_No response_
### MCVE
_No response_
### Steps to reproduce
```bash
1. In a new environment, pip install starlite msgspec==0.11.0
2. Try to import starlite
```
### Screenshots
_No response_
### Logs
_No response_
### Starlite Version
1.51.7
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlite/constants.py`
Content:
```
1 from inspect import Signature
2
3 from msgspec.inspect import UNSET
4 from pydantic.fields import Undefined
5
6 from starlite.types import Empty
7
8 DEFAULT_ALLOWED_CORS_HEADERS = {"Accept", "Accept-Language", "Content-Language", "Content-Type"}
9 DEFAULT_CHUNK_SIZE = 1024 * 128 # 128KB
10 HTTP_RESPONSE_BODY = "http.response.body"
11 HTTP_RESPONSE_START = "http.response.start"
12 ONE_MEGABYTE = 1024 * 1024
13 REDIRECT_STATUS_CODES = {301, 302, 303, 307, 308}
14 RESERVED_KWARGS = {"state", "headers", "cookies", "request", "socket", "data", "query", "scope", "body"}
15 SCOPE_STATE_DEPENDENCY_CACHE = "dependency_cache"
16 SCOPE_STATE_NAMESPACE = "__starlite__"
17 SCOPE_STATE_RESPONSE_COMPRESSED = "response_compressed"
18 UNDEFINED_SENTINELS = {Undefined, Signature.empty, UNSET, Empty, Ellipsis}
19 SKIP_VALIDATION_NAMES = {"request", "socket", "scope", "receive", "send"}
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/starlite/constants.py b/starlite/constants.py
--- a/starlite/constants.py
+++ b/starlite/constants.py
@@ -1,6 +1,5 @@
from inspect import Signature
-from msgspec.inspect import UNSET
from pydantic.fields import Undefined
from starlite.types import Empty
@@ -15,5 +14,5 @@
SCOPE_STATE_DEPENDENCY_CACHE = "dependency_cache"
SCOPE_STATE_NAMESPACE = "__starlite__"
SCOPE_STATE_RESPONSE_COMPRESSED = "response_compressed"
-UNDEFINED_SENTINELS = {Undefined, Signature.empty, UNSET, Empty, Ellipsis}
+UNDEFINED_SENTINELS = {Undefined, Signature.empty, Empty, Ellipsis}
SKIP_VALIDATION_NAMES = {"request", "socket", "scope", "receive", "send"}
|
{"golden_diff": "diff --git a/starlite/constants.py b/starlite/constants.py\n--- a/starlite/constants.py\n+++ b/starlite/constants.py\n@@ -1,6 +1,5 @@\n from inspect import Signature\n \n-from msgspec.inspect import UNSET\n from pydantic.fields import Undefined\n \n from starlite.types import Empty\n@@ -15,5 +14,5 @@\n SCOPE_STATE_DEPENDENCY_CACHE = \"dependency_cache\"\n SCOPE_STATE_NAMESPACE = \"__starlite__\"\n SCOPE_STATE_RESPONSE_COMPRESSED = \"response_compressed\"\n-UNDEFINED_SENTINELS = {Undefined, Signature.empty, UNSET, Empty, Ellipsis}\n+UNDEFINED_SENTINELS = {Undefined, Signature.empty, Empty, Ellipsis}\n SKIP_VALIDATION_NAMES = {\"request\", \"socket\", \"scope\", \"receive\", \"send\"}\n", "issue": "Bug: msgspec version needs updating\n### Description\n\nStarlite currently needs msgspec >= 0.11.0 from here https://github.com/starlite-api/starlite/blob/8a49aff754d5b409397183eb395cac5a44193d72/pyproject.toml#L52.\r\n\r\nStarlite uses msgspec.inspect:\r\nhttps://github.com/starlite-api/starlite/blob/41089a2d5dff696a25c78a58322d3a8a08ebcac3/starlite/constants.py#L3\r\n\r\nmsgspec.inspect wasn't introduced in msgspec until 0.12.0:\r\nhttps://github.com/jcrist/msgspec/releases/tag/0.12.0\n\n### URL to code causing the issue\n\n_No response_\n\n### MCVE\n\n_No response_\n\n### Steps to reproduce\n\n```bash\n1. In a new environment, pip install starlite msgspec==0.11.0\r\n2. Try to import starlite\n```\n\n\n### Screenshots\n\n_No response_\n\n### Logs\n\n_No response_\n\n### Starlite Version\n\n1.51.7\n\n### Platform\n\n- [ ] Linux\n- [ ] Mac\n- [ ] Windows\n- [ ] Other (Please specify in the description above)\n", "before_files": [{"content": "from inspect import Signature\n\nfrom msgspec.inspect import UNSET\nfrom pydantic.fields import Undefined\n\nfrom starlite.types import Empty\n\nDEFAULT_ALLOWED_CORS_HEADERS = {\"Accept\", \"Accept-Language\", \"Content-Language\", \"Content-Type\"}\nDEFAULT_CHUNK_SIZE = 1024 * 128 # 128KB\nHTTP_RESPONSE_BODY = \"http.response.body\"\nHTTP_RESPONSE_START = \"http.response.start\"\nONE_MEGABYTE = 1024 * 1024\nREDIRECT_STATUS_CODES = {301, 302, 303, 307, 308}\nRESERVED_KWARGS = {\"state\", \"headers\", \"cookies\", \"request\", \"socket\", \"data\", \"query\", \"scope\", \"body\"}\nSCOPE_STATE_DEPENDENCY_CACHE = \"dependency_cache\"\nSCOPE_STATE_NAMESPACE = \"__starlite__\"\nSCOPE_STATE_RESPONSE_COMPRESSED = \"response_compressed\"\nUNDEFINED_SENTINELS = {Undefined, Signature.empty, UNSET, Empty, Ellipsis}\nSKIP_VALIDATION_NAMES = {\"request\", \"socket\", \"scope\", \"receive\", \"send\"}\n", "path": "starlite/constants.py"}], "after_files": [{"content": "from inspect import Signature\n\nfrom pydantic.fields import Undefined\n\nfrom starlite.types import Empty\n\nDEFAULT_ALLOWED_CORS_HEADERS = {\"Accept\", \"Accept-Language\", \"Content-Language\", \"Content-Type\"}\nDEFAULT_CHUNK_SIZE = 1024 * 128 # 128KB\nHTTP_RESPONSE_BODY = \"http.response.body\"\nHTTP_RESPONSE_START = \"http.response.start\"\nONE_MEGABYTE = 1024 * 1024\nREDIRECT_STATUS_CODES = {301, 302, 303, 307, 308}\nRESERVED_KWARGS = {\"state\", \"headers\", \"cookies\", \"request\", \"socket\", \"data\", \"query\", \"scope\", \"body\"}\nSCOPE_STATE_DEPENDENCY_CACHE = \"dependency_cache\"\nSCOPE_STATE_NAMESPACE = \"__starlite__\"\nSCOPE_STATE_RESPONSE_COMPRESSED = \"response_compressed\"\nUNDEFINED_SENTINELS = {Undefined, Signature.empty, Empty, Ellipsis}\nSKIP_VALIDATION_NAMES = {\"request\", \"socket\", \"scope\", \"receive\", \"send\"}\n", "path": "starlite/constants.py"}]}
| 826 | 170 |
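A rough sketch of what a sentinel set like `UNDEFINED_SENTINELS` is for, independent of msgspec: deciding whether a value is a real default or a "nothing was provided" marker. The helper below is made up for illustration and assumes pydantic v1, which is what this Starlite release depends on:

```python
from inspect import Signature

from pydantic.fields import Undefined  # pydantic v1 location of the sentinel

UNDEFINED_SENTINELS = {Undefined, Signature.empty, Ellipsis}


def is_undefined(value) -> bool:
    # Identity checks, so ordinary values like 0 or '' are never mistaken
    # for an "unset" marker.
    return any(value is sentinel for sentinel in UNDEFINED_SENTINELS)


print(is_undefined(Signature.empty))  # True
print(is_undefined(0))                # False
```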
gh_patches_debug_7811
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmpose-970
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] Possible bug in `mmpose/apis/train.py`
https://github.com/open-mmlab/mmpose/blob/8d668a578f1580373d90a1eea88f23f6816df60e/mmpose/apis/train.py#L140
Here is the code that may have problem. And I think the default value for `workers_per_gpu` should be an integer but not a dict.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmpose/apis/train.py`
Content:
```
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import warnings
3
4 import torch
5 from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
6 from mmcv.runner import DistSamplerSeedHook, EpochBasedRunner, OptimizerHook
7
8 from mmpose.core import DistEvalHook, EvalHook, build_optimizers
9 from mmpose.core.distributed_wrapper import DistributedDataParallelWrapper
10 from mmpose.datasets import build_dataloader, build_dataset
11 from mmpose.utils import get_root_logger
12
13 try:
14 from mmcv.runner import Fp16OptimizerHook
15 except ImportError:
16 warnings.warn(
17 'Fp16OptimizerHook from mmpose will be deprecated from '
18 'v0.15.0. Please install mmcv>=1.1.4', DeprecationWarning)
19 from mmpose.core import Fp16OptimizerHook
20
21
22 def train_model(model,
23 dataset,
24 cfg,
25 distributed=False,
26 validate=False,
27 timestamp=None,
28 meta=None):
29 """Train model entry function.
30
31 Args:
32 model (nn.Module): The model to be trained.
33 dataset (Dataset): Train dataset.
34 cfg (dict): The config dict for training.
35 distributed (bool): Whether to use distributed training.
36 Default: False.
37 validate (bool): Whether to do evaluation. Default: False.
38 timestamp (str | None): Local time for runner. Default: None.
39 meta (dict | None): Meta dict to record some important information.
40 Default: None
41 """
42 logger = get_root_logger(cfg.log_level)
43
44 # prepare data loaders
45 dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]
46 # step 1: give default values and override (if exist) from cfg.data
47 loader_cfg = {
48 **dict(
49 seed=cfg.get('seed'),
50 drop_last=False,
51 dist=distributed,
52 num_gpus=len(cfg.gpu_ids)),
53 **({} if torch.__version__ != 'parrots' else dict(
54 prefetch_num=2,
55 pin_memory=False,
56 )),
57 **dict((k, cfg.data[k]) for k in [
58 'samples_per_gpu',
59 'workers_per_gpu',
60 'shuffle',
61 'seed',
62 'drop_last',
63 'prefetch_num',
64 'pin_memory',
65 'persistent_workers',
66 ] if k in cfg.data)
67 }
68
69 # step 2: cfg.data.train_dataloader has highest priority
70 train_loader_cfg = dict(loader_cfg, **cfg.data.get('train_dataloader', {}))
71
72 data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset]
73
74 # determine wether use adversarial training precess or not
75 use_adverserial_train = cfg.get('use_adversarial_train', False)
76
77 # put model on gpus
78 if distributed:
79 find_unused_parameters = cfg.get('find_unused_parameters', True)
80 # Sets the `find_unused_parameters` parameter in
81 # torch.nn.parallel.DistributedDataParallel
82
83 if use_adverserial_train:
84 # Use DistributedDataParallelWrapper for adversarial training
85 model = DistributedDataParallelWrapper(
86 model,
87 device_ids=[torch.cuda.current_device()],
88 broadcast_buffers=False,
89 find_unused_parameters=find_unused_parameters)
90 else:
91 model = MMDistributedDataParallel(
92 model.cuda(),
93 device_ids=[torch.cuda.current_device()],
94 broadcast_buffers=False,
95 find_unused_parameters=find_unused_parameters)
96 else:
97 model = MMDataParallel(
98 model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)
99
100 # build runner
101 optimizer = build_optimizers(model, cfg.optimizer)
102
103 runner = EpochBasedRunner(
104 model,
105 optimizer=optimizer,
106 work_dir=cfg.work_dir,
107 logger=logger,
108 meta=meta)
109 # an ugly workaround to make .log and .log.json filenames the same
110 runner.timestamp = timestamp
111
112 if use_adverserial_train:
113 # The optimizer step process is included in the train_step function
114 # of the model, so the runner should NOT include optimizer hook.
115 optimizer_config = None
116 else:
117 # fp16 setting
118 fp16_cfg = cfg.get('fp16', None)
119 if fp16_cfg is not None:
120 optimizer_config = Fp16OptimizerHook(
121 **cfg.optimizer_config, **fp16_cfg, distributed=distributed)
122 elif distributed and 'type' not in cfg.optimizer_config:
123 optimizer_config = OptimizerHook(**cfg.optimizer_config)
124 else:
125 optimizer_config = cfg.optimizer_config
126
127 # register hooks
128 runner.register_training_hooks(cfg.lr_config, optimizer_config,
129 cfg.checkpoint_config, cfg.log_config,
130 cfg.get('momentum_config', None))
131 if distributed:
132 runner.register_hook(DistSamplerSeedHook())
133
134 # register eval hooks
135 if validate:
136 eval_cfg = cfg.get('evaluation', {})
137 val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))
138 dataloader_setting = dict(
139 samples_per_gpu=1,
140 workers_per_gpu=cfg.data.get('workers_per_gpu', {}),
141 # cfg.gpus will be ignored if distributed
142 num_gpus=len(cfg.gpu_ids),
143 dist=distributed,
144 drop_last=False,
145 shuffle=False)
146 dataloader_setting = dict(dataloader_setting,
147 **cfg.data.get('val_dataloader', {}))
148 val_dataloader = build_dataloader(val_dataset, **dataloader_setting)
149 eval_hook = DistEvalHook if distributed else EvalHook
150 runner.register_hook(eval_hook(val_dataloader, **eval_cfg))
151
152 if cfg.resume_from:
153 runner.resume(cfg.resume_from)
154 elif cfg.load_from:
155 runner.load_checkpoint(cfg.load_from)
156 runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mmpose/apis/train.py b/mmpose/apis/train.py
--- a/mmpose/apis/train.py
+++ b/mmpose/apis/train.py
@@ -137,7 +137,7 @@
val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))
dataloader_setting = dict(
samples_per_gpu=1,
- workers_per_gpu=cfg.data.get('workers_per_gpu', {}),
+ workers_per_gpu=cfg.data.get('workers_per_gpu', 1),
# cfg.gpus will be ignored if distributed
num_gpus=len(cfg.gpu_ids),
dist=distributed,
|
{"golden_diff": "diff --git a/mmpose/apis/train.py b/mmpose/apis/train.py\n--- a/mmpose/apis/train.py\n+++ b/mmpose/apis/train.py\n@@ -137,7 +137,7 @@\n val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))\n dataloader_setting = dict(\n samples_per_gpu=1,\n- workers_per_gpu=cfg.data.get('workers_per_gpu', {}),\n+ workers_per_gpu=cfg.data.get('workers_per_gpu', 1),\n # cfg.gpus will be ignored if distributed\n num_gpus=len(cfg.gpu_ids),\n dist=distributed,\n", "issue": "[Bug] Possible bug in `mmpose/apis/train.py`\nhttps://github.com/open-mmlab/mmpose/blob/8d668a578f1580373d90a1eea88f23f6816df60e/mmpose/apis/train.py#L140\r\n\r\nHere is the code that may have problem. And I think the default value for `workers_per_gpu` should be an integer but not a dict. \n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport warnings\n\nimport torch\nfrom mmcv.parallel import MMDataParallel, MMDistributedDataParallel\nfrom mmcv.runner import DistSamplerSeedHook, EpochBasedRunner, OptimizerHook\n\nfrom mmpose.core import DistEvalHook, EvalHook, build_optimizers\nfrom mmpose.core.distributed_wrapper import DistributedDataParallelWrapper\nfrom mmpose.datasets import build_dataloader, build_dataset\nfrom mmpose.utils import get_root_logger\n\ntry:\n from mmcv.runner import Fp16OptimizerHook\nexcept ImportError:\n warnings.warn(\n 'Fp16OptimizerHook from mmpose will be deprecated from '\n 'v0.15.0. Please install mmcv>=1.1.4', DeprecationWarning)\n from mmpose.core import Fp16OptimizerHook\n\n\ndef train_model(model,\n dataset,\n cfg,\n distributed=False,\n validate=False,\n timestamp=None,\n meta=None):\n \"\"\"Train model entry function.\n\n Args:\n model (nn.Module): The model to be trained.\n dataset (Dataset): Train dataset.\n cfg (dict): The config dict for training.\n distributed (bool): Whether to use distributed training.\n Default: False.\n validate (bool): Whether to do evaluation. Default: False.\n timestamp (str | None): Local time for runner. 
Default: None.\n meta (dict | None): Meta dict to record some important information.\n Default: None\n \"\"\"\n logger = get_root_logger(cfg.log_level)\n\n # prepare data loaders\n dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]\n # step 1: give default values and override (if exist) from cfg.data\n loader_cfg = {\n **dict(\n seed=cfg.get('seed'),\n drop_last=False,\n dist=distributed,\n num_gpus=len(cfg.gpu_ids)),\n **({} if torch.__version__ != 'parrots' else dict(\n prefetch_num=2,\n pin_memory=False,\n )),\n **dict((k, cfg.data[k]) for k in [\n 'samples_per_gpu',\n 'workers_per_gpu',\n 'shuffle',\n 'seed',\n 'drop_last',\n 'prefetch_num',\n 'pin_memory',\n 'persistent_workers',\n ] if k in cfg.data)\n }\n\n # step 2: cfg.data.train_dataloader has highest priority\n train_loader_cfg = dict(loader_cfg, **cfg.data.get('train_dataloader', {}))\n\n data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset]\n\n # determine wether use adversarial training precess or not\n use_adverserial_train = cfg.get('use_adversarial_train', False)\n\n # put model on gpus\n if distributed:\n find_unused_parameters = cfg.get('find_unused_parameters', True)\n # Sets the `find_unused_parameters` parameter in\n # torch.nn.parallel.DistributedDataParallel\n\n if use_adverserial_train:\n # Use DistributedDataParallelWrapper for adversarial training\n model = DistributedDataParallelWrapper(\n model,\n device_ids=[torch.cuda.current_device()],\n broadcast_buffers=False,\n find_unused_parameters=find_unused_parameters)\n else:\n model = MMDistributedDataParallel(\n model.cuda(),\n device_ids=[torch.cuda.current_device()],\n broadcast_buffers=False,\n find_unused_parameters=find_unused_parameters)\n else:\n model = MMDataParallel(\n model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)\n\n # build runner\n optimizer = build_optimizers(model, cfg.optimizer)\n\n runner = EpochBasedRunner(\n model,\n optimizer=optimizer,\n work_dir=cfg.work_dir,\n logger=logger,\n meta=meta)\n # an ugly workaround to make .log and .log.json filenames the same\n runner.timestamp = timestamp\n\n if use_adverserial_train:\n # The optimizer step process is included in the train_step function\n # of the model, so the runner should NOT include optimizer hook.\n optimizer_config = None\n else:\n # fp16 setting\n fp16_cfg = cfg.get('fp16', None)\n if fp16_cfg is not None:\n optimizer_config = Fp16OptimizerHook(\n **cfg.optimizer_config, **fp16_cfg, distributed=distributed)\n elif distributed and 'type' not in cfg.optimizer_config:\n optimizer_config = OptimizerHook(**cfg.optimizer_config)\n else:\n optimizer_config = cfg.optimizer_config\n\n # register hooks\n runner.register_training_hooks(cfg.lr_config, optimizer_config,\n cfg.checkpoint_config, cfg.log_config,\n cfg.get('momentum_config', None))\n if distributed:\n runner.register_hook(DistSamplerSeedHook())\n\n # register eval hooks\n if validate:\n eval_cfg = cfg.get('evaluation', {})\n val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))\n dataloader_setting = dict(\n samples_per_gpu=1,\n workers_per_gpu=cfg.data.get('workers_per_gpu', {}),\n # cfg.gpus will be ignored if distributed\n num_gpus=len(cfg.gpu_ids),\n dist=distributed,\n drop_last=False,\n shuffle=False)\n dataloader_setting = dict(dataloader_setting,\n **cfg.data.get('val_dataloader', {}))\n val_dataloader = build_dataloader(val_dataset, **dataloader_setting)\n eval_hook = DistEvalHook if distributed else EvalHook\n runner.register_hook(eval_hook(val_dataloader, 
**eval_cfg))\n\n if cfg.resume_from:\n runner.resume(cfg.resume_from)\n elif cfg.load_from:\n runner.load_checkpoint(cfg.load_from)\n runner.run(data_loaders, cfg.workflow, cfg.total_epochs)\n", "path": "mmpose/apis/train.py"}], "after_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport warnings\n\nimport torch\nfrom mmcv.parallel import MMDataParallel, MMDistributedDataParallel\nfrom mmcv.runner import DistSamplerSeedHook, EpochBasedRunner, OptimizerHook\n\nfrom mmpose.core import DistEvalHook, EvalHook, build_optimizers\nfrom mmpose.core.distributed_wrapper import DistributedDataParallelWrapper\nfrom mmpose.datasets import build_dataloader, build_dataset\nfrom mmpose.utils import get_root_logger\n\ntry:\n from mmcv.runner import Fp16OptimizerHook\nexcept ImportError:\n warnings.warn(\n 'Fp16OptimizerHook from mmpose will be deprecated from '\n 'v0.15.0. Please install mmcv>=1.1.4', DeprecationWarning)\n from mmpose.core import Fp16OptimizerHook\n\n\ndef train_model(model,\n dataset,\n cfg,\n distributed=False,\n validate=False,\n timestamp=None,\n meta=None):\n \"\"\"Train model entry function.\n\n Args:\n model (nn.Module): The model to be trained.\n dataset (Dataset): Train dataset.\n cfg (dict): The config dict for training.\n distributed (bool): Whether to use distributed training.\n Default: False.\n validate (bool): Whether to do evaluation. Default: False.\n timestamp (str | None): Local time for runner. Default: None.\n meta (dict | None): Meta dict to record some important information.\n Default: None\n \"\"\"\n logger = get_root_logger(cfg.log_level)\n\n # prepare data loaders\n dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]\n # step 1: give default values and override (if exist) from cfg.data\n loader_cfg = {\n **dict(\n seed=cfg.get('seed'),\n drop_last=False,\n dist=distributed,\n num_gpus=len(cfg.gpu_ids)),\n **({} if torch.__version__ != 'parrots' else dict(\n prefetch_num=2,\n pin_memory=False,\n )),\n **dict((k, cfg.data[k]) for k in [\n 'samples_per_gpu',\n 'workers_per_gpu',\n 'shuffle',\n 'seed',\n 'drop_last',\n 'prefetch_num',\n 'pin_memory',\n 'persistent_workers',\n ] if k in cfg.data)\n }\n\n # step 2: cfg.data.train_dataloader has highest priority\n train_loader_cfg = dict(loader_cfg, **cfg.data.get('train_dataloader', {}))\n\n data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset]\n\n # determine wether use adversarial training precess or not\n use_adverserial_train = cfg.get('use_adversarial_train', False)\n\n # put model on gpus\n if distributed:\n find_unused_parameters = cfg.get('find_unused_parameters', True)\n # Sets the `find_unused_parameters` parameter in\n # torch.nn.parallel.DistributedDataParallel\n\n if use_adverserial_train:\n # Use DistributedDataParallelWrapper for adversarial training\n model = DistributedDataParallelWrapper(\n model,\n device_ids=[torch.cuda.current_device()],\n broadcast_buffers=False,\n find_unused_parameters=find_unused_parameters)\n else:\n model = MMDistributedDataParallel(\n model.cuda(),\n device_ids=[torch.cuda.current_device()],\n broadcast_buffers=False,\n find_unused_parameters=find_unused_parameters)\n else:\n model = MMDataParallel(\n model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)\n\n # build runner\n optimizer = build_optimizers(model, cfg.optimizer)\n\n runner = EpochBasedRunner(\n model,\n optimizer=optimizer,\n work_dir=cfg.work_dir,\n logger=logger,\n meta=meta)\n # an ugly workaround to make .log and .log.json filenames the same\n 
runner.timestamp = timestamp\n\n if use_adverserial_train:\n # The optimizer step process is included in the train_step function\n # of the model, so the runner should NOT include optimizer hook.\n optimizer_config = None\n else:\n # fp16 setting\n fp16_cfg = cfg.get('fp16', None)\n if fp16_cfg is not None:\n optimizer_config = Fp16OptimizerHook(\n **cfg.optimizer_config, **fp16_cfg, distributed=distributed)\n elif distributed and 'type' not in cfg.optimizer_config:\n optimizer_config = OptimizerHook(**cfg.optimizer_config)\n else:\n optimizer_config = cfg.optimizer_config\n\n # register hooks\n runner.register_training_hooks(cfg.lr_config, optimizer_config,\n cfg.checkpoint_config, cfg.log_config,\n cfg.get('momentum_config', None))\n if distributed:\n runner.register_hook(DistSamplerSeedHook())\n\n # register eval hooks\n if validate:\n eval_cfg = cfg.get('evaluation', {})\n val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))\n dataloader_setting = dict(\n samples_per_gpu=1,\n workers_per_gpu=cfg.data.get('workers_per_gpu', 1),\n # cfg.gpus will be ignored if distributed\n num_gpus=len(cfg.gpu_ids),\n dist=distributed,\n drop_last=False,\n shuffle=False)\n dataloader_setting = dict(dataloader_setting,\n **cfg.data.get('val_dataloader', {}))\n val_dataloader = build_dataloader(val_dataset, **dataloader_setting)\n eval_hook = DistEvalHook if distributed else EvalHook\n runner.register_hook(eval_hook(val_dataloader, **eval_cfg))\n\n if cfg.resume_from:\n runner.resume(cfg.resume_from)\n elif cfg.load_from:\n runner.load_checkpoint(cfg.load_from)\n runner.run(data_loaders, cfg.workflow, cfg.total_epochs)\n", "path": "mmpose/apis/train.py"}]}
| 1,977 | 137 |
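The diff in the row above swaps an empty-dict default for an integer one. As a quick illustration of why the type matters (a standalone sketch; the config dict is made up and only the `.get(...)` calls mirror the patch):

```python
cfg_data = {"samples_per_gpu": 2}              # 'workers_per_gpu' key is missing

broken = cfg_data.get("workers_per_gpu", {})   # -> {}  (a dict)
fixed = cfg_data.get("workers_per_gpu", 1)     # -> 1   (an int)

# The value eventually feeds DataLoader(num_workers=...), which must be a
# non-negative int, so the dict default fails once the val loader is built.
assert isinstance(fixed, int) and not isinstance(broken, int)
print(f"broken default: {broken!r}, fixed default: {fixed!r}")
```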
gh_patches_debug_6437
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-532
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Participated projects visible for all on user profile
If I write a comment on platform meinBerlin and take part in a participation process, everyone can see my profile and in which participation process I took part, even I'm not logged in. It's not anonymous enough.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/users/views.py`
Content:
```
1 from django.views.generic.detail import DetailView
2
3 from adhocracy4.projects.models import Project
4
5 from . import models
6
7
8 class ProfileView(DetailView):
9 model = models.User
10 slug_field = 'username'
11
12 @property
13 def get_participated_projects(self):
14 user = self.object
15
16 qs = Project.objects.filter(
17 action__actor=user
18 ).distinct()
19
20 return qs
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/apps/users/views.py b/apps/users/views.py
--- a/apps/users/views.py
+++ b/apps/users/views.py
@@ -1,20 +1,8 @@
from django.views.generic.detail import DetailView
-from adhocracy4.projects.models import Project
-
from . import models
class ProfileView(DetailView):
model = models.User
slug_field = 'username'
-
- @property
- def get_participated_projects(self):
- user = self.object
-
- qs = Project.objects.filter(
- action__actor=user
- ).distinct()
-
- return qs
|
{"golden_diff": "diff --git a/apps/users/views.py b/apps/users/views.py\n--- a/apps/users/views.py\n+++ b/apps/users/views.py\n@@ -1,20 +1,8 @@\n from django.views.generic.detail import DetailView\n \n-from adhocracy4.projects.models import Project\n-\n from . import models\n \n \n class ProfileView(DetailView):\n model = models.User\n slug_field = 'username'\n-\n- @property\n- def get_participated_projects(self):\n- user = self.object\n-\n- qs = Project.objects.filter(\n- action__actor=user\n- ).distinct()\n-\n- return qs\n", "issue": "Participated projects visible for all on user profile\nIf I write a comment on platform meinBerlin and take part in a participation process, everyone can see my profile and in which participation process I took part, even I'm not logged in. It's not anonymous enough.\n", "before_files": [{"content": "from django.views.generic.detail import DetailView\n\nfrom adhocracy4.projects.models import Project\n\nfrom . import models\n\n\nclass ProfileView(DetailView):\n model = models.User\n slug_field = 'username'\n\n @property\n def get_participated_projects(self):\n user = self.object\n\n qs = Project.objects.filter(\n action__actor=user\n ).distinct()\n\n return qs\n", "path": "apps/users/views.py"}], "after_files": [{"content": "from django.views.generic.detail import DetailView\n\nfrom . import models\n\n\nclass ProfileView(DetailView):\n model = models.User\n slug_field = 'username'\n", "path": "apps/users/views.py"}]}
| 437 | 135 |
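The golden diff above resolves the privacy issue by dropping the participated-projects queryset from the public profile view altogether. If the history were still wanted for the owner, a hedged alternative (reusing the names from the snippet, so the imports are assumptions about the real project) would be to gate it on the requesting user:

```python
from django.views.generic.detail import DetailView

from adhocracy4.projects.models import Project  # assumed path, as in the original file

from . import models


class ProfileView(DetailView):
    model = models.User
    slug_field = "username"

    def get_context_data(self, **kwargs):
        # Only the owner sees their own participation history; anonymous and
        # other visitors get the plain profile.
        context = super().get_context_data(**kwargs)
        if self.request.user.is_authenticated and self.request.user == self.object:
            context["participated_projects"] = Project.objects.filter(
                action__actor=self.object
            ).distinct()
        return context
```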
gh_patches_debug_7817
|
rasdani/github-patches
|
git_diff
|
lhotse-speech__lhotse-1197
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The 'speaker' field in the stcmds dataset is incorrect
The file 20170001P00001I0001.wav is spoken by a female,
and the file 20170001P00001A0001.wav is spoken by a male.
But they have the same speaker ID P00001
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lhotse/recipes/stcmds.py`
Content:
```
1 """
2 Stcmds is an open-source Chinese Mandarin corpus by Surfingtech (www.surfing.ai), containing utterances from 855 speakers, 102600 utterances;
3 Publicly available on https://www.openslr.org/resources/38
4 ST-CMDS (110 hours)
5
6 """
7 import logging
8 import os
9 import shutil
10 import tarfile
11 from collections import defaultdict
12 from pathlib import Path
13 from typing import Dict, Optional, Union
14
15 from tqdm.auto import tqdm
16
17 from lhotse import fix_manifests, validate_recordings_and_supervisions
18 from lhotse.audio import Recording, RecordingSet
19 from lhotse.supervision import SupervisionSegment, SupervisionSet
20 from lhotse.utils import Pathlike, resumable_download, safe_extract
21
22
23 def text_normalize(line: str):
24 """
25 Modified from https://github.com/wenet-e2e/wenet/blob/main/examples/multi_cn/s0/local/stcmds_data_prep.sh#L42
26 paste -d' ' $data/utt.list $data/text.list |\
27     sed 's/,//g' |\
28 tr '[a-z]' '[A-Z]' |\
29 awk '{if (NF > 1) print $0;}' > $data/train/text
30 """
31     line = line.replace(",", "")
32 line = line.upper()
33 return line
34
35
36 def download_stcmds(
37 target_dir: Pathlike = ".",
38 force_download: bool = False,
39 base_url: str = "http://www.openslr.org/resources",
40 ) -> Path:
41 """
42 Downdload and untar the dataset
43 :param target_dir: Pathlike, the path of the dir to storage the dataset.
44 :param force_download: Bool, if True, download the tars no matter if the tars exist.
45 :param base_url: str, the url of the OpenSLR resources.
46 :return: the path to downloaded and extracted directory with data.
47 """
48 url = f"{base_url}/38"
49 target_dir = Path(target_dir)
50 target_dir.mkdir(parents=True, exist_ok=True)
51 corpus_dir = target_dir / "stcmds"
52 dataset_tar_name = "ST-CMDS-20170001_1-OS.tar.gz"
53 for tar_name in [dataset_tar_name]:
54 tar_path = target_dir / tar_name
55 extracted_dir = corpus_dir / tar_name[:-7]
56 completed_detector = extracted_dir / ".completed"
57 if completed_detector.is_file():
58 logging.info(f"Skipping download of because {completed_detector} exists.")
59 continue
60 resumable_download(
61 f"{url}/{tar_name}", filename=tar_path, force_download=force_download
62 )
63 shutil.rmtree(extracted_dir, ignore_errors=True)
64 with tarfile.open(tar_path) as tar:
65 safe_extract(tar, path=corpus_dir)
66 completed_detector.touch()
67
68 return corpus_dir
69
70
71 def prepare_stcmds(
72 corpus_dir: Pathlike, output_dir: Optional[Pathlike] = None
73 ) -> Dict[str, Dict[str, Union[RecordingSet, SupervisionSet]]]:
74 """
75 Returns the manifests which consist of the Recordings and Supervisions
76 :param corpus_dir: Pathlike, the path of the data dir.
77 :param output_dir: Pathlike, the path where to write the manifests.
78 :return: a Dict whose key is the dataset part, and the value is Dicts with the keys 'recordings' and 'supervisions'.
79 """
80 corpus_dir = Path(corpus_dir)
81 assert corpus_dir.is_dir(), f"No such directory: {corpus_dir}"
82 if output_dir is not None:
83 output_dir = Path(output_dir)
84 output_dir.mkdir(parents=True, exist_ok=True)
85
86 path = corpus_dir / "ST-CMDS-20170001_1-OS"
87 transcript_dict = {}
88 for text_path in path.rglob("**/*.txt"):
89 idx = text_path.stem
90 logging.info(f"processing stcmds transcript {text_path}")
91 with open(text_path, "r", encoding="utf-8") as f:
92 for line in f:
93 line = text_normalize(line)
94 transcript_dict[idx] = line
95
96 manifests = defaultdict(dict)
97 dataset_parts = ["train"]
98 for part in tqdm(
99 dataset_parts,
100 desc="process stcmds audio, it needs waste about 2169 seconds time.",
101 ):
102 logging.info(f"Processing stcmds {part}")
103 recordings = []
104 supervisions = []
105 for audio_path in path.rglob("**/*.wav"):
106 idx = audio_path.stem
107 speaker = "".join(list(idx)[8:14])
108 if idx not in transcript_dict:
109 logging.warning(f"No transcript: {idx}")
110 logging.warning(f"{audio_path} has no transcript")
111 continue
112 text = transcript_dict[idx]
113 if not audio_path.is_file():
114 logging.warning(f"No such file: {audio_path}")
115 continue
116 recording = Recording.from_file(audio_path)
117 recordings.append(recording)
118 segment = SupervisionSegment(
119 id=idx,
120 recording_id=idx,
121 start=0.0,
122 duration=recording.duration,
123 channel=0,
124 language="Chinese",
125 speaker=speaker,
126 text=text.strip(),
127 )
128 supervisions.append(segment)
129
130 recording_set = RecordingSet.from_recordings(recordings)
131 supervision_set = SupervisionSet.from_segments(supervisions)
132
133 recording_set, supervision_set = fix_manifests(recording_set, supervision_set)
134 validate_recordings_and_supervisions(recording_set, supervision_set)
135
136 if output_dir is not None:
137 supervision_set.to_file(output_dir / f"stcmds_supervisions_{part}.jsonl.gz")
138 recording_set.to_file(output_dir / f"stcmds_recordings_{part}.jsonl.gz")
139
140 manifests[part] = {"recordings": recording_set, "supervisions": supervision_set}
141
142 return manifests
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lhotse/recipes/stcmds.py b/lhotse/recipes/stcmds.py
--- a/lhotse/recipes/stcmds.py
+++ b/lhotse/recipes/stcmds.py
@@ -104,7 +104,7 @@
supervisions = []
for audio_path in path.rglob("**/*.wav"):
idx = audio_path.stem
- speaker = "".join(list(idx)[8:14])
+ speaker = "".join(list(idx)[8:15])
if idx not in transcript_dict:
logging.warning(f"No transcript: {idx}")
logging.warning(f"{audio_path} has no transcript")
|
{"golden_diff": "diff --git a/lhotse/recipes/stcmds.py b/lhotse/recipes/stcmds.py\n--- a/lhotse/recipes/stcmds.py\n+++ b/lhotse/recipes/stcmds.py\n@@ -104,7 +104,7 @@\n supervisions = []\n for audio_path in path.rglob(\"**/*.wav\"):\n idx = audio_path.stem\n- speaker = \"\".join(list(idx)[8:14])\n+ speaker = \"\".join(list(idx)[8:15])\n if idx not in transcript_dict:\n logging.warning(f\"No transcript: {idx}\")\n logging.warning(f\"{audio_path} has no transcript\")\n", "issue": "The 'speaker' field in the stcmds dataset is incorrect\nThe file 20170001P00001I0001.wav is spoken by a female, \r\nand the file 20170001P00001A0001.wav is spoken by a male.\r\nBut they have the same speaker ID P00001\n", "before_files": [{"content": "\"\"\"\nStcmds is an open-source Chinese Mandarin corpus by Surfingtech (www.surfing.ai), containing utterances from 855 speakers, 102600 utterances;\nPublicly available on https://www.openslr.org/resources/38\nST-CMDS (110 hours)\n\n\"\"\"\nimport logging\nimport os\nimport shutil\nimport tarfile\nfrom collections import defaultdict\nfrom pathlib import Path\nfrom typing import Dict, Optional, Union\n\nfrom tqdm.auto import tqdm\n\nfrom lhotse import fix_manifests, validate_recordings_and_supervisions\nfrom lhotse.audio import Recording, RecordingSet\nfrom lhotse.supervision import SupervisionSegment, SupervisionSet\nfrom lhotse.utils import Pathlike, resumable_download, safe_extract\n\n\ndef text_normalize(line: str):\n \"\"\"\n Modified from https://github.com/wenet-e2e/wenet/blob/main/examples/multi_cn/s0/local/stcmds_data_prep.sh#L42\n paste -d' ' $data/utt.list $data/text.list |\\\n sed 's/\uff0c//g' |\\\n tr '[a-z]' '[A-Z]' |\\\n awk '{if (NF > 1) print $0;}' > $data/train/text\n \"\"\"\n line = line.replace(\"\uff0c\", \"\")\n line = line.upper()\n return line\n\n\ndef download_stcmds(\n target_dir: Pathlike = \".\",\n force_download: bool = False,\n base_url: str = \"http://www.openslr.org/resources\",\n) -> Path:\n \"\"\"\n Downdload and untar the dataset\n :param target_dir: Pathlike, the path of the dir to storage the dataset.\n :param force_download: Bool, if True, download the tars no matter if the tars exist.\n :param base_url: str, the url of the OpenSLR resources.\n :return: the path to downloaded and extracted directory with data.\n \"\"\"\n url = f\"{base_url}/38\"\n target_dir = Path(target_dir)\n target_dir.mkdir(parents=True, exist_ok=True)\n corpus_dir = target_dir / \"stcmds\"\n dataset_tar_name = \"ST-CMDS-20170001_1-OS.tar.gz\"\n for tar_name in [dataset_tar_name]:\n tar_path = target_dir / tar_name\n extracted_dir = corpus_dir / tar_name[:-7]\n completed_detector = extracted_dir / \".completed\"\n if completed_detector.is_file():\n logging.info(f\"Skipping download of because {completed_detector} exists.\")\n continue\n resumable_download(\n f\"{url}/{tar_name}\", filename=tar_path, force_download=force_download\n )\n shutil.rmtree(extracted_dir, ignore_errors=True)\n with tarfile.open(tar_path) as tar:\n safe_extract(tar, path=corpus_dir)\n completed_detector.touch()\n\n return corpus_dir\n\n\ndef prepare_stcmds(\n corpus_dir: Pathlike, output_dir: Optional[Pathlike] = None\n) -> Dict[str, Dict[str, Union[RecordingSet, SupervisionSet]]]:\n \"\"\"\n Returns the manifests which consist of the Recordings and Supervisions\n :param corpus_dir: Pathlike, the path of the data dir.\n :param output_dir: Pathlike, the path where to write the manifests.\n :return: a Dict whose key is the dataset part, and the value is Dicts with the keys 'recordings' 
and 'supervisions'.\n \"\"\"\n corpus_dir = Path(corpus_dir)\n assert corpus_dir.is_dir(), f\"No such directory: {corpus_dir}\"\n if output_dir is not None:\n output_dir = Path(output_dir)\n output_dir.mkdir(parents=True, exist_ok=True)\n\n path = corpus_dir / \"ST-CMDS-20170001_1-OS\"\n transcript_dict = {}\n for text_path in path.rglob(\"**/*.txt\"):\n idx = text_path.stem\n logging.info(f\"processing stcmds transcript {text_path}\")\n with open(text_path, \"r\", encoding=\"utf-8\") as f:\n for line in f:\n line = text_normalize(line)\n transcript_dict[idx] = line\n\n manifests = defaultdict(dict)\n dataset_parts = [\"train\"]\n for part in tqdm(\n dataset_parts,\n desc=\"process stcmds audio, it needs waste about 2169 seconds time.\",\n ):\n logging.info(f\"Processing stcmds {part}\")\n recordings = []\n supervisions = []\n for audio_path in path.rglob(\"**/*.wav\"):\n idx = audio_path.stem\n speaker = \"\".join(list(idx)[8:14])\n if idx not in transcript_dict:\n logging.warning(f\"No transcript: {idx}\")\n logging.warning(f\"{audio_path} has no transcript\")\n continue\n text = transcript_dict[idx]\n if not audio_path.is_file():\n logging.warning(f\"No such file: {audio_path}\")\n continue\n recording = Recording.from_file(audio_path)\n recordings.append(recording)\n segment = SupervisionSegment(\n id=idx,\n recording_id=idx,\n start=0.0,\n duration=recording.duration,\n channel=0,\n language=\"Chinese\",\n speaker=speaker,\n text=text.strip(),\n )\n supervisions.append(segment)\n\n recording_set = RecordingSet.from_recordings(recordings)\n supervision_set = SupervisionSet.from_segments(supervisions)\n\n recording_set, supervision_set = fix_manifests(recording_set, supervision_set)\n validate_recordings_and_supervisions(recording_set, supervision_set)\n\n if output_dir is not None:\n supervision_set.to_file(output_dir / f\"stcmds_supervisions_{part}.jsonl.gz\")\n recording_set.to_file(output_dir / f\"stcmds_recordings_{part}.jsonl.gz\")\n\n manifests[part] = {\"recordings\": recording_set, \"supervisions\": supervision_set}\n\n return manifests\n", "path": "lhotse/recipes/stcmds.py"}], "after_files": [{"content": "\"\"\"\nStcmds is an open-source Chinese Mandarin corpus by Surfingtech (www.surfing.ai), containing utterances from 855 speakers, 102600 utterances;\nPublicly available on https://www.openslr.org/resources/38\nST-CMDS (110 hours)\n\n\"\"\"\nimport logging\nimport os\nimport shutil\nimport tarfile\nfrom collections import defaultdict\nfrom pathlib import Path\nfrom typing import Dict, Optional, Union\n\nfrom tqdm.auto import tqdm\n\nfrom lhotse import fix_manifests, validate_recordings_and_supervisions\nfrom lhotse.audio import Recording, RecordingSet\nfrom lhotse.supervision import SupervisionSegment, SupervisionSet\nfrom lhotse.utils import Pathlike, resumable_download, safe_extract\n\n\ndef text_normalize(line: str):\n \"\"\"\n Modified from https://github.com/wenet-e2e/wenet/blob/main/examples/multi_cn/s0/local/stcmds_data_prep.sh#L42\n paste -d' ' $data/utt.list $data/text.list |\\\n sed 's/\uff0c//g' |\\\n tr '[a-z]' '[A-Z]' |\\\n awk '{if (NF > 1) print $0;}' > $data/train/text\n \"\"\"\n line = line.replace(\"\uff0c\", \"\")\n line = line.upper()\n return line\n\n\ndef download_stcmds(\n target_dir: Pathlike = \".\",\n force_download: bool = False,\n base_url: str = \"http://www.openslr.org/resources\",\n) -> Path:\n \"\"\"\n Downdload and untar the dataset\n :param target_dir: Pathlike, the path of the dir to storage the dataset.\n :param force_download: Bool, if 
True, download the tars no matter if the tars exist.\n :param base_url: str, the url of the OpenSLR resources.\n :return: the path to downloaded and extracted directory with data.\n \"\"\"\n url = f\"{base_url}/38\"\n target_dir = Path(target_dir)\n target_dir.mkdir(parents=True, exist_ok=True)\n corpus_dir = target_dir / \"stcmds\"\n dataset_tar_name = \"ST-CMDS-20170001_1-OS.tar.gz\"\n for tar_name in [dataset_tar_name]:\n tar_path = target_dir / tar_name\n extracted_dir = corpus_dir / tar_name[:-7]\n completed_detector = extracted_dir / \".completed\"\n if completed_detector.is_file():\n logging.info(f\"Skipping download of because {completed_detector} exists.\")\n continue\n resumable_download(\n f\"{url}/{tar_name}\", filename=tar_path, force_download=force_download\n )\n shutil.rmtree(extracted_dir, ignore_errors=True)\n with tarfile.open(tar_path) as tar:\n safe_extract(tar, path=corpus_dir)\n completed_detector.touch()\n\n return corpus_dir\n\n\ndef prepare_stcmds(\n corpus_dir: Pathlike, output_dir: Optional[Pathlike] = None\n) -> Dict[str, Dict[str, Union[RecordingSet, SupervisionSet]]]:\n \"\"\"\n Returns the manifests which consist of the Recordings and Supervisions\n :param corpus_dir: Pathlike, the path of the data dir.\n :param output_dir: Pathlike, the path where to write the manifests.\n :return: a Dict whose key is the dataset part, and the value is Dicts with the keys 'recordings' and 'supervisions'.\n \"\"\"\n corpus_dir = Path(corpus_dir)\n assert corpus_dir.is_dir(), f\"No such directory: {corpus_dir}\"\n if output_dir is not None:\n output_dir = Path(output_dir)\n output_dir.mkdir(parents=True, exist_ok=True)\n\n path = corpus_dir / \"ST-CMDS-20170001_1-OS\"\n transcript_dict = {}\n for text_path in path.rglob(\"**/*.txt\"):\n idx = text_path.stem\n logging.info(f\"processing stcmds transcript {text_path}\")\n with open(text_path, \"r\", encoding=\"utf-8\") as f:\n for line in f:\n line = text_normalize(line)\n transcript_dict[idx] = line\n\n manifests = defaultdict(dict)\n dataset_parts = [\"train\"]\n for part in tqdm(\n dataset_parts,\n desc=\"process stcmds audio, it needs waste about 2169 seconds time.\",\n ):\n logging.info(f\"Processing stcmds {part}\")\n recordings = []\n supervisions = []\n for audio_path in path.rglob(\"**/*.wav\"):\n idx = audio_path.stem\n speaker = \"\".join(list(idx)[8:15])\n if idx not in transcript_dict:\n logging.warning(f\"No transcript: {idx}\")\n logging.warning(f\"{audio_path} has no transcript\")\n continue\n text = transcript_dict[idx]\n if not audio_path.is_file():\n logging.warning(f\"No such file: {audio_path}\")\n continue\n recording = Recording.from_file(audio_path)\n recordings.append(recording)\n segment = SupervisionSegment(\n id=idx,\n recording_id=idx,\n start=0.0,\n duration=recording.duration,\n channel=0,\n language=\"Chinese\",\n speaker=speaker,\n text=text.strip(),\n )\n supervisions.append(segment)\n\n recording_set = RecordingSet.from_recordings(recordings)\n supervision_set = SupervisionSet.from_segments(supervisions)\n\n recording_set, supervision_set = fix_manifests(recording_set, supervision_set)\n validate_recordings_and_supervisions(recording_set, supervision_set)\n\n if output_dir is not None:\n supervision_set.to_file(output_dir / f\"stcmds_supervisions_{part}.jsonl.gz\")\n recording_set.to_file(output_dir / f\"stcmds_recordings_{part}.jsonl.gz\")\n\n manifests[part] = {\"recordings\": recording_set, \"supervisions\": supervision_set}\n\n return manifests\n", "path": "lhotse/recipes/stcmds.py"}]}
| 1,962 | 140 |
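The one-character change above (slice end 14 to 15) keeps the trailing letter of the filename stem, which is exactly what distinguishes the two voices sharing the digits P00001. A standalone check using the two filenames quoted in the issue:

```python
for stem in ("20170001P00001A0001", "20170001P00001I0001"):
    old_speaker = "".join(list(stem)[8:14])   # 'P00001' for both stems
    new_speaker = "".join(list(stem)[8:15])   # 'P00001A' vs 'P00001I'
    print(stem, old_speaker, new_speaker)

# With the wider slice the two recordings no longer collapse onto one speaker ID.
new_ids = {s[8:15] for s in ("20170001P00001A0001", "20170001P00001I0001")}
assert len(new_ids) == 2
```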
gh_patches_debug_38811
|
rasdani/github-patches
|
git_diff
|
xonsh__xonsh-4673
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
make promp.env.env_name venv --prompt aware (set in pyvenv.cfg)
## xonfig
<details>
```
> xonfig
<xonsh-code>:1:0 - xonfig
<xonsh-code>:1:0 + ![xonfig]
+------------------+----------------------+
| xonsh | 0.11.0 |
| Git SHA | adfa60ea |
| Commit Date | Feb 11 14:53:00 2022 |
| Python | 3.9.7 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | None |
| shell type | readline |
| history backend | json |
| pygments | 2.11.2 |
| on posix | True |
| on linux | True |
| distro | unknown |
| on wsl | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| on jupyter | False |
| jupyter kernel | None |
| xontrib 1 | coreutils |
| xontrib 2 | vox |
| xontrib 3 | voxapi |
| RC file 1 | /home/ob/.xonshrc |
+------------------+----------------------+
```
</details>
## Expected Behavior
When activating a venv via `vox activate` the name should be set to the value of `prompt` key in `pyvenv.cfg` if present (file and key) - see https://docs.python.org/3/library/venv.html.
## Current Behavior
The prompt is always set to name of venv directory independent of prompt settings.
## Steps to Reproduce
```shell
$ python -m venv --prompt "MY SPECIAL PROMPT" .venv
$ cat .venv/pyvenv.cfg | grep prompt
prompt = 'MY SPECIAL PROMPT'
vox activate ./.venv
```
new prompt looks like this:
`[17:58:10] (.venv) ob@ob1 ~/oss/xonsh dropthedot|✓`
but should look like this:
`[17:58:10] (MY SPECIAL PROMPT) ob@ob1 ~/oss/xonsh dropthedot|✓`
## Also add `--prompt` to `vox new`?
If this is done, maybe a good idea would be to also add `--prompt` to `vox new` to make this possible on creation (also in the interest of symmetry), but it seems like the common pattern for many xonsh users is to have virtualenvs in `~/.virtualenv` with the venv folder name being the same as the project, therefore automatically having a meaningful prompt, so this does not seem to be an urgently wanted feature for most.
I want to prepare a PR for the "passive" part of the functionality, but could also have a stab at adding `--prompt` to `vox new` at a later date. My main itch would be to respect `prompt` from already existing venvs first.
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xonsh/prompt/env.py`
Content:
```
1 """Prompt formatter for virtualenv and others"""
2
3 import os
4
5 from xonsh.built_ins import XSH
6
7
8 def find_env_name():
9 """Finds the current environment name from $VIRTUAL_ENV or
10 $CONDA_DEFAULT_ENV if that is set.
11 """
12 env_path = XSH.env.get("VIRTUAL_ENV", "")
13 if env_path:
14 env_name = os.path.basename(env_path)
15 else:
16 env_name = XSH.env.get("CONDA_DEFAULT_ENV", "")
17 return env_name
18
19
20 def env_name():
21 """Returns the current env_name if it non-empty, surrounded by the
22 ``{env_prefix}`` and ``{env_postfix}`` fields.
23 """
24 env_name = find_env_name()
25 if XSH.env.get("VIRTUAL_ENV_DISABLE_PROMPT") or not env_name:
26 # env name prompt printing disabled, or no environment; just return
27 return
28
29 venv_prompt = XSH.env.get("VIRTUAL_ENV_PROMPT")
30 if venv_prompt is not None:
31 return venv_prompt
32 else:
33 pf = XSH.shell.prompt_formatter
34 pre = pf._get_field_value("env_prefix")
35 post = pf._get_field_value("env_postfix")
36 return pre + env_name + post
37
38
39 def vte_new_tab_cwd():
40 """This prints an escape sequence that tells VTE terminals the hostname
41 and pwd. This should not be needed in most cases, but sometimes is for
42 certain Linux terminals that do not read the PWD from the environment
43 on startup. Note that this does not return a string, it simply prints
44 and flushes the escape sequence to stdout directly.
45 """
46 env = XSH.env
47 t = "\033]7;file://{}{}\007"
48 s = t.format(env.get("HOSTNAME"), env.get("PWD"))
49 print(s, end="", flush=True)
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/xonsh/prompt/env.py b/xonsh/prompt/env.py
--- a/xonsh/prompt/env.py
+++ b/xonsh/prompt/env.py
@@ -1,42 +1,70 @@
"""Prompt formatter for virtualenv and others"""
-
-import os
+import functools
+import re
+from pathlib import Path
+from typing import Optional
from xonsh.built_ins import XSH
-def find_env_name():
- """Finds the current environment name from $VIRTUAL_ENV or
- $CONDA_DEFAULT_ENV if that is set.
+def find_env_name() -> Optional[str]:
+ """Find current environment name from available sources.
+
+ If ``$VIRTUAL_ENV`` is set, it is determined from the prompt setting in
+ ``<venv>/pyvenv.cfg`` or from the folder name of the environment.
+
+ Otherwise - if it is set - from ``$CONDA_DEFAULT_ENV``.
+ """
+ virtual_env = XSH.env.get("VIRTUAL_ENV")
+ if virtual_env:
+ name = _determine_env_name(virtual_env)
+ if name:
+ return name
+ conda_default_env = XSH.env.get("CONDA_DEFAULT_ENV")
+ if conda_default_env:
+ return conda_default_env
+
+
+def env_name() -> str:
+ """Build env_name based on different sources. Respect order of precedence.
+
+ Name from VIRTUAL_ENV_PROMPT will be used as-is.
+ Names from other sources are surrounded with ``{env_prefix}`` and
+ ``{env_postfix}`` fields.
"""
- env_path = XSH.env.get("VIRTUAL_ENV", "")
- if env_path:
- env_name = os.path.basename(env_path)
- else:
- env_name = XSH.env.get("CONDA_DEFAULT_ENV", "")
- return env_name
+ if XSH.env.get("VIRTUAL_ENV_DISABLE_PROMPT"):
+ return ""
+ virtual_env_prompt = XSH.env.get("VIRTUAL_ENV_PROMPT")
+ if virtual_env_prompt:
+ return virtual_env_prompt
+ found_envname = find_env_name()
+ return _surround_env_name(found_envname) if found_envname else ""
+
[email protected]_cache(maxsize=5)
+def _determine_env_name(virtual_env: str) -> str:
+ """Use prompt setting from pyvenv.cfg or basename of virtual_env.
-def env_name():
- """Returns the current env_name if it non-empty, surrounded by the
- ``{env_prefix}`` and ``{env_postfix}`` fields.
+ Tries to be resilient to subtle changes in whitespace and quoting in the
+ configuration file format as it adheres to no clear standard.
"""
- env_name = find_env_name()
- if XSH.env.get("VIRTUAL_ENV_DISABLE_PROMPT") or not env_name:
- # env name prompt printing disabled, or no environment; just return
- return
-
- venv_prompt = XSH.env.get("VIRTUAL_ENV_PROMPT")
- if venv_prompt is not None:
- return venv_prompt
- else:
- pf = XSH.shell.prompt_formatter
- pre = pf._get_field_value("env_prefix")
- post = pf._get_field_value("env_postfix")
- return pre + env_name + post
-
-
-def vte_new_tab_cwd():
+ venv_path = Path(virtual_env)
+ pyvenv_cfg = venv_path / "pyvenv.cfg"
+ if pyvenv_cfg.is_file():
+ match = re.search(r"prompt\s*=\s*(.*)", pyvenv_cfg.read_text())
+ if match:
+ return match.group(1).strip().lstrip("'\"").rstrip("'\"")
+ return venv_path.name
+
+
+def _surround_env_name(name: str) -> str:
+ pf = XSH.shell.prompt_formatter
+ pre = pf._get_field_value("env_prefix")
+ post = pf._get_field_value("env_postfix")
+ return f"{pre}{name}{post}"
+
+
+def vte_new_tab_cwd() -> None:
"""This prints an escape sequence that tells VTE terminals the hostname
and pwd. This should not be needed in most cases, but sometimes is for
certain Linux terminals that do not read the PWD from the environment
|
{"golden_diff": "diff --git a/xonsh/prompt/env.py b/xonsh/prompt/env.py\n--- a/xonsh/prompt/env.py\n+++ b/xonsh/prompt/env.py\n@@ -1,42 +1,70 @@\n \"\"\"Prompt formatter for virtualenv and others\"\"\"\n-\n-import os\n+import functools\n+import re\n+from pathlib import Path\n+from typing import Optional\n \n from xonsh.built_ins import XSH\n \n \n-def find_env_name():\n- \"\"\"Finds the current environment name from $VIRTUAL_ENV or\n- $CONDA_DEFAULT_ENV if that is set.\n+def find_env_name() -> Optional[str]:\n+ \"\"\"Find current environment name from available sources.\n+\n+ If ``$VIRTUAL_ENV`` is set, it is determined from the prompt setting in\n+ ``<venv>/pyvenv.cfg`` or from the folder name of the environment.\n+\n+ Otherwise - if it is set - from ``$CONDA_DEFAULT_ENV``.\n+ \"\"\"\n+ virtual_env = XSH.env.get(\"VIRTUAL_ENV\")\n+ if virtual_env:\n+ name = _determine_env_name(virtual_env)\n+ if name:\n+ return name\n+ conda_default_env = XSH.env.get(\"CONDA_DEFAULT_ENV\")\n+ if conda_default_env:\n+ return conda_default_env\n+\n+\n+def env_name() -> str:\n+ \"\"\"Build env_name based on different sources. Respect order of precedence.\n+\n+ Name from VIRTUAL_ENV_PROMPT will be used as-is.\n+ Names from other sources are surrounded with ``{env_prefix}`` and\n+ ``{env_postfix}`` fields.\n \"\"\"\n- env_path = XSH.env.get(\"VIRTUAL_ENV\", \"\")\n- if env_path:\n- env_name = os.path.basename(env_path)\n- else:\n- env_name = XSH.env.get(\"CONDA_DEFAULT_ENV\", \"\")\n- return env_name\n+ if XSH.env.get(\"VIRTUAL_ENV_DISABLE_PROMPT\"):\n+ return \"\"\n+ virtual_env_prompt = XSH.env.get(\"VIRTUAL_ENV_PROMPT\")\n+ if virtual_env_prompt:\n+ return virtual_env_prompt\n+ found_envname = find_env_name()\n+ return _surround_env_name(found_envname) if found_envname else \"\"\n+\n \[email protected]_cache(maxsize=5)\n+def _determine_env_name(virtual_env: str) -> str:\n+ \"\"\"Use prompt setting from pyvenv.cfg or basename of virtual_env.\n \n-def env_name():\n- \"\"\"Returns the current env_name if it non-empty, surrounded by the\n- ``{env_prefix}`` and ``{env_postfix}`` fields.\n+ Tries to be resilient to subtle changes in whitespace and quoting in the\n+ configuration file format as it adheres to no clear standard.\n \"\"\"\n- env_name = find_env_name()\n- if XSH.env.get(\"VIRTUAL_ENV_DISABLE_PROMPT\") or not env_name:\n- # env name prompt printing disabled, or no environment; just return\n- return\n-\n- venv_prompt = XSH.env.get(\"VIRTUAL_ENV_PROMPT\")\n- if venv_prompt is not None:\n- return venv_prompt\n- else:\n- pf = XSH.shell.prompt_formatter\n- pre = pf._get_field_value(\"env_prefix\")\n- post = pf._get_field_value(\"env_postfix\")\n- return pre + env_name + post\n-\n-\n-def vte_new_tab_cwd():\n+ venv_path = Path(virtual_env)\n+ pyvenv_cfg = venv_path / \"pyvenv.cfg\"\n+ if pyvenv_cfg.is_file():\n+ match = re.search(r\"prompt\\s*=\\s*(.*)\", pyvenv_cfg.read_text())\n+ if match:\n+ return match.group(1).strip().lstrip(\"'\\\"\").rstrip(\"'\\\"\")\n+ return venv_path.name\n+\n+\n+def _surround_env_name(name: str) -> str:\n+ pf = XSH.shell.prompt_formatter\n+ pre = pf._get_field_value(\"env_prefix\")\n+ post = pf._get_field_value(\"env_postfix\")\n+ return f\"{pre}{name}{post}\"\n+\n+\n+def vte_new_tab_cwd() -> None:\n \"\"\"This prints an escape sequence that tells VTE terminals the hostname\n and pwd. 
This should not be needed in most cases, but sometimes is for\n certain Linux terminals that do not read the PWD from the environment\n", "issue": "make promp.env.env_name venv --prompt aware (set in pyvenv.cfg)\n## xonfig\r\n\r\n<details>\r\n\r\n```\r\n> xonfig\r\n<xonsh-code>:1:0 - xonfig\r\n<xonsh-code>:1:0 + ![xonfig]\r\n+------------------+----------------------+\r\n| xonsh | 0.11.0 |\r\n| Git SHA | adfa60ea |\r\n| Commit Date | Feb 11 14:53:00 2022 |\r\n| Python | 3.9.7 |\r\n| PLY | 3.11 |\r\n| have readline | True |\r\n| prompt toolkit | None |\r\n| shell type | readline |\r\n| history backend | json |\r\n| pygments | 2.11.2 |\r\n| on posix | True |\r\n| on linux | True |\r\n| distro | unknown |\r\n| on wsl | False |\r\n| on darwin | False |\r\n| on windows | False |\r\n| on cygwin | False |\r\n| on msys2 | False |\r\n| is superuser | False |\r\n| default encoding | utf-8 |\r\n| xonsh encoding | utf-8 |\r\n| encoding errors | surrogateescape |\r\n| on jupyter | False |\r\n| jupyter kernel | None |\r\n| xontrib 1 | coreutils |\r\n| xontrib 2 | vox |\r\n| xontrib 3 | voxapi |\r\n| RC file 1 | /home/ob/.xonshrc |\r\n+------------------+----------------------+\r\n```\r\n\r\n</details>\r\n\r\n## Expected Behavior\r\n\r\nWhen activating a venv via `vox activate` the name should be set to the value of `prompt` key in `pyvenv.cfg` if present (file and key) - see https://docs.python.org/3/library/venv.html.\r\n\r\n## Current Behavior\r\n\r\nThe prompt is always set to name of venv directory independent of prompt settings.\r\n\r\n## Steps to Reproduce\r\n\r\n```shell\r\n$ python -m venv --prompt \"MY SPECIAL PROMPT\" .venv\r\n$ cat .venv/pyvenv.cfg | grep prompt\r\nprompt = 'MY SPECIAL PROMPT'\r\nvox activate ./.venv\r\n```\r\n\r\nnew prompt looks like this:\r\n`[17:58:10] (.venv) ob@ob1 ~/oss/xonsh dropthedot|\u2713`\r\n\r\nbut should look like this:\r\n`[17:58:10] (MY SPECIAL PROMPT) ob@ob1 ~/oss/xonsh dropthedot|\u2713`\r\n\r\n## Also add `--prompt` to `vox new`?\r\nIf this is done, maybe a good idea would be to also add `--prompt` to `vox new` to make this possible on creation (also in the interest of symmetry), but it seems like the common pattern for many xonsh users is to have virtualenvs in `~/.virtualenv` with the venv folder name being the same as the project, therefore automatically having a meaningful prompt, so this does not seem to be an urgently wanted feature for most.\r\n\r\nI want to prepare a PR for the \"passive\" part of the functionality, but could also have a stab at adding `--prompt` to `vox new` at a later date. 
My main itch would be to respect `prompt` from already existing venvs first.\r\n\r\n## For community\r\n\u2b07\ufe0f **Please click the \ud83d\udc4d reaction instead of leaving a `+1` or \ud83d\udc4d comment**\r\n\n", "before_files": [{"content": "\"\"\"Prompt formatter for virtualenv and others\"\"\"\n\nimport os\n\nfrom xonsh.built_ins import XSH\n\n\ndef find_env_name():\n \"\"\"Finds the current environment name from $VIRTUAL_ENV or\n $CONDA_DEFAULT_ENV if that is set.\n \"\"\"\n env_path = XSH.env.get(\"VIRTUAL_ENV\", \"\")\n if env_path:\n env_name = os.path.basename(env_path)\n else:\n env_name = XSH.env.get(\"CONDA_DEFAULT_ENV\", \"\")\n return env_name\n\n\ndef env_name():\n \"\"\"Returns the current env_name if it non-empty, surrounded by the\n ``{env_prefix}`` and ``{env_postfix}`` fields.\n \"\"\"\n env_name = find_env_name()\n if XSH.env.get(\"VIRTUAL_ENV_DISABLE_PROMPT\") or not env_name:\n # env name prompt printing disabled, or no environment; just return\n return\n\n venv_prompt = XSH.env.get(\"VIRTUAL_ENV_PROMPT\")\n if venv_prompt is not None:\n return venv_prompt\n else:\n pf = XSH.shell.prompt_formatter\n pre = pf._get_field_value(\"env_prefix\")\n post = pf._get_field_value(\"env_postfix\")\n return pre + env_name + post\n\n\ndef vte_new_tab_cwd():\n \"\"\"This prints an escape sequence that tells VTE terminals the hostname\n and pwd. This should not be needed in most cases, but sometimes is for\n certain Linux terminals that do not read the PWD from the environment\n on startup. Note that this does not return a string, it simply prints\n and flushes the escape sequence to stdout directly.\n \"\"\"\n env = XSH.env\n t = \"\\033]7;file://{}{}\\007\"\n s = t.format(env.get(\"HOSTNAME\"), env.get(\"PWD\"))\n print(s, end=\"\", flush=True)\n", "path": "xonsh/prompt/env.py"}], "after_files": [{"content": "\"\"\"Prompt formatter for virtualenv and others\"\"\"\nimport functools\nimport re\nfrom pathlib import Path\nfrom typing import Optional\n\nfrom xonsh.built_ins import XSH\n\n\ndef find_env_name() -> Optional[str]:\n \"\"\"Find current environment name from available sources.\n\n If ``$VIRTUAL_ENV`` is set, it is determined from the prompt setting in\n ``<venv>/pyvenv.cfg`` or from the folder name of the environment.\n\n Otherwise - if it is set - from ``$CONDA_DEFAULT_ENV``.\n \"\"\"\n virtual_env = XSH.env.get(\"VIRTUAL_ENV\")\n if virtual_env:\n name = _determine_env_name(virtual_env)\n if name:\n return name\n conda_default_env = XSH.env.get(\"CONDA_DEFAULT_ENV\")\n if conda_default_env:\n return conda_default_env\n\n\ndef env_name() -> str:\n \"\"\"Build env_name based on different sources. 
Respect order of precedence.\n\n Name from VIRTUAL_ENV_PROMPT will be used as-is.\n Names from other sources are surrounded with ``{env_prefix}`` and\n ``{env_postfix}`` fields.\n \"\"\"\n if XSH.env.get(\"VIRTUAL_ENV_DISABLE_PROMPT\"):\n return \"\"\n virtual_env_prompt = XSH.env.get(\"VIRTUAL_ENV_PROMPT\")\n if virtual_env_prompt:\n return virtual_env_prompt\n found_envname = find_env_name()\n return _surround_env_name(found_envname) if found_envname else \"\"\n\n\[email protected]_cache(maxsize=5)\ndef _determine_env_name(virtual_env: str) -> str:\n \"\"\"Use prompt setting from pyvenv.cfg or basename of virtual_env.\n\n Tries to be resilient to subtle changes in whitespace and quoting in the\n configuration file format as it adheres to no clear standard.\n \"\"\"\n venv_path = Path(virtual_env)\n pyvenv_cfg = venv_path / \"pyvenv.cfg\"\n if pyvenv_cfg.is_file():\n match = re.search(r\"prompt\\s*=\\s*(.*)\", pyvenv_cfg.read_text())\n if match:\n return match.group(1).strip().lstrip(\"'\\\"\").rstrip(\"'\\\"\")\n return venv_path.name\n\n\ndef _surround_env_name(name: str) -> str:\n pf = XSH.shell.prompt_formatter\n pre = pf._get_field_value(\"env_prefix\")\n post = pf._get_field_value(\"env_postfix\")\n return f\"{pre}{name}{post}\"\n\n\ndef vte_new_tab_cwd() -> None:\n \"\"\"This prints an escape sequence that tells VTE terminals the hostname\n and pwd. This should not be needed in most cases, but sometimes is for\n certain Linux terminals that do not read the PWD from the environment\n on startup. Note that this does not return a string, it simply prints\n and flushes the escape sequence to stdout directly.\n \"\"\"\n env = XSH.env\n t = \"\\033]7;file://{}{}\\007\"\n s = t.format(env.get(\"HOSTNAME\"), env.get(\"PWD\"))\n print(s, end=\"\", flush=True)\n", "path": "xonsh/prompt/env.py"}]}
| 1,547 | 977 |
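The heart of the xonsh patch above is the pyvenv.cfg lookup in `_determine_env_name`. The same regex and quote-stripping can be exercised on their own against an in-memory sample (the file contents below are typical of `python -m venv --prompt ...`, not taken from a real environment):

```python
import re

SAMPLE_PYVENV_CFG = """\
home = /usr/bin
include-system-site-packages = false
version = 3.9.7
prompt = 'MY SPECIAL PROMPT'
"""


def prompt_from_pyvenv_cfg(text: str, fallback: str) -> str:
    # Same pattern and quote handling as the patched helper.
    match = re.search(r"prompt\s*=\s*(.*)", text)
    if match:
        return match.group(1).strip().lstrip("'\"").rstrip("'\"")
    return fallback


print(prompt_from_pyvenv_cfg(SAMPLE_PYVENV_CFG, ".venv"))    # MY SPECIAL PROMPT
print(prompt_from_pyvenv_cfg("home = /usr/bin\n", ".venv"))  # .venv
```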
gh_patches_debug_50236
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-577
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 1.4.8
User facing issues on the docket:
+ non-zip_safe pex extraction has a race on renaming #557
+ Execute pex archive at runtime with -m parameter #547
+ OSX's python 2.7.10 interpreter reports a bad local platform, bdists fail to resolve #523
+ `pex.resolver.resolve` does the wrong thing when given an `interpreter` and no `platform` #511
+ [Errno 13] Permission denied when executing with bdist_pex #570
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '1.4.7'
5
6 # Versions 34.0.0 through 35.0.2 (last pre-36.0.0) de-vendored dependencies which causes problems
7 # for pex code so we exclude that range.
8 SETUPTOOLS_REQUIREMENT = 'setuptools>=20.3,<41,!=34.*,!=35.*'
9
10 WHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.32'
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,7 +1,7 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = '1.4.7'
+__version__ = '1.4.8'
# Versions 34.0.0 through 35.0.2 (last pre-36.0.0) de-vendored dependencies which causes problems
# for pex code so we exclude that range.
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,7 +1,7 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = '1.4.7'\n+__version__ = '1.4.8'\n \n # Versions 34.0.0 through 35.0.2 (last pre-36.0.0) de-vendored dependencies which causes problems\n # for pex code so we exclude that range.\n", "issue": "Release 1.4.8\nUser facing issues on the docket:\r\n+ non-zip_safe pex extraction has a race on renaming #557\r\n+ Execute pex archive at runtime with -m parameter #547\r\n+ OSX's python 2.7.10 interpreter reports a bad local platform, bdists fail to resolve #523 \r\n+ `pex.resolver.resolve` does the wrong thing when given an `interpreter` and no `platform` #511\r\n+ [Errno 13] Permission denied when executing with bdist_pex #570\r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.4.7'\n\n# Versions 34.0.0 through 35.0.2 (last pre-36.0.0) de-vendored dependencies which causes problems\n# for pex code so we exclude that range.\nSETUPTOOLS_REQUIREMENT = 'setuptools>=20.3,<41,!=34.*,!=35.*'\n\nWHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.32'\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.4.8'\n\n# Versions 34.0.0 through 35.0.2 (last pre-36.0.0) de-vendored dependencies which causes problems\n# for pex code so we exclude that range.\nSETUPTOOLS_REQUIREMENT = 'setuptools>=20.3,<41,!=34.*,!=35.*'\n\nWHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.32'\n", "path": "pex/version.py"}]}
| 542 | 143 |
gh_patches_debug_35556
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-5279
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.nbcnews: not live
### Checklist
- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest build from the master branch
### Description
Every time it start the same video but on the site it is live. Something was changed on the site few days ago.
### Debug log
```text
[cli][debug] OS: Linux-6.1.0-7-amd64-x86_64-with-glibc2.36
[cli][debug] Python: 3.11.2
[cli][debug] Streamlink: 5.3.1+84.g7978e4ab
[cli][debug] Dependencies:
[cli][debug] certifi: 2022.9.24
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.2
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.16.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.28.1
[cli][debug] urllib3: 1.26.12
[cli][debug] websocket-client: 1.2.3
[cli][debug] esprima: 4.0.1
[cli][debug] importlib-metadata: 4.12.0
[cli][debug] Arguments:
[cli][debug] url=https://www.nbcnews.com/now/
[cli][debug] --loglevel=debug
[cli][debug] --player=mpv
[cli][debug] --default-stream=['best']
[cli][info] Found matching plugin nbcnews for URL https://www.nbcnews.com/now/
[plugins.nbcnews][debug] API ID: 2007524
[utils.l10n][debug] Language code: ru_RU
[cli][info] Available streams: 144p_alt (worst), 144p, 216p_alt, 216p, 288p_alt, 288p, 360p_alt, 360p, 504p_alt, 504p, 576p_alt, 576p, 720p_alt, 720p, 1080p_alt, 1080p (best)
[cli][info] Opening stream: 1080p (hls)
[cli][info] Starting player: mpv
[stream.hls][debug] Reloading playlist
[cli][debug] Pre-buffering 8192 bytes
[stream.hls][debug] First Sequence: 1786423; Last Sequence: 1786722
[stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 1786423; End Sequence: 1786722
[stream.hls][debug] Adding segment 1786423 to queue
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/nbcnews.py`
Content:
```
1 """
2 $description 24-hour world, US and local news channel, based in the United States of America.
3 $url nbcnews.com
4 $type live
5 """
6
7 import logging
8 import re
9
10 from streamlink.plugin import Plugin, pluginmatcher
11 from streamlink.plugin.api import validate
12 from streamlink.stream.hls import HLSStream
13
14
15 log = logging.getLogger(__name__)
16
17
18 @pluginmatcher(re.compile(
19 r"https?://(?:www\.)?nbcnews\.com/now",
20 ))
21 class NBCNews(Plugin):
22 URL_API = "https://api-leap.nbcsports.com/feeds/assets/{}?application=NBCNews&format=nbc-player&platform=desktop"
23 URL_TOKEN = "https://tokens.playmakerservices.com/"
24
25 title = "NBC News Now"
26
27 def _get_streams(self):
28 self.id = self.session.http.get(
29 self.url,
30 schema=validate.Schema(
31 validate.parse_html(),
32 validate.xml_xpath_string(".//script[@type='application/ld+json'][1]/text()"),
33 validate.none_or_all(
34 validate.parse_json(),
35 {"embedUrl": validate.url()},
36 validate.get("embedUrl"),
37 validate.transform(lambda embed_url: embed_url.split("/")[-1]),
38 ),
39 ),
40 )
41 if self.id is None:
42 return
43 log.debug(f"API ID: {self.id}")
44
45 stream = self.session.http.get(
46 self.URL_API.format(self.id),
47 schema=validate.Schema(
48 validate.parse_json(),
49 {
50 "videoSources": [{
51 "cdnSources": {
52 "primary": [{
53 "sourceUrl": validate.url(path=validate.endswith(".m3u8")),
54 }],
55 },
56 }],
57 },
58 validate.get(("videoSources", 0, "cdnSources", "primary", 0, "sourceUrl")),
59 ),
60 )
61
62 url = self.session.http.post(
63 self.URL_TOKEN,
64 json={
65 "requestorId": "nbcnews",
66 "pid": self.id,
67 "application": "NBCSports",
68 "version": "v1",
69 "platform": "desktop",
70 "token": "",
71 "resourceId": "",
72 "inPath": "false",
73 "authenticationType": "unauth",
74 "cdn": "akamai",
75 "url": stream,
76 },
77 schema=validate.Schema(
78 validate.parse_json(),
79 {
80 "akamai": [{
81 "tokenizedUrl": validate.url(),
82 }],
83 },
84 validate.get(("akamai", 0, "tokenizedUrl")),
85 ),
86 )
87 return HLSStream.parse_variant_playlist(self.session, url)
88
89
90 __plugin__ = NBCNews
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink/plugins/nbcnews.py b/src/streamlink/plugins/nbcnews.py
deleted file mode 100644
--- a/src/streamlink/plugins/nbcnews.py
+++ /dev/null
@@ -1,90 +0,0 @@
-"""
-$description 24-hour world, US and local news channel, based in the United States of America.
-$url nbcnews.com
-$type live
-"""
-
-import logging
-import re
-
-from streamlink.plugin import Plugin, pluginmatcher
-from streamlink.plugin.api import validate
-from streamlink.stream.hls import HLSStream
-
-
-log = logging.getLogger(__name__)
-
-
-@pluginmatcher(re.compile(
- r"https?://(?:www\.)?nbcnews\.com/now",
-))
-class NBCNews(Plugin):
- URL_API = "https://api-leap.nbcsports.com/feeds/assets/{}?application=NBCNews&format=nbc-player&platform=desktop"
- URL_TOKEN = "https://tokens.playmakerservices.com/"
-
- title = "NBC News Now"
-
- def _get_streams(self):
- self.id = self.session.http.get(
- self.url,
- schema=validate.Schema(
- validate.parse_html(),
- validate.xml_xpath_string(".//script[@type='application/ld+json'][1]/text()"),
- validate.none_or_all(
- validate.parse_json(),
- {"embedUrl": validate.url()},
- validate.get("embedUrl"),
- validate.transform(lambda embed_url: embed_url.split("/")[-1]),
- ),
- ),
- )
- if self.id is None:
- return
- log.debug(f"API ID: {self.id}")
-
- stream = self.session.http.get(
- self.URL_API.format(self.id),
- schema=validate.Schema(
- validate.parse_json(),
- {
- "videoSources": [{
- "cdnSources": {
- "primary": [{
- "sourceUrl": validate.url(path=validate.endswith(".m3u8")),
- }],
- },
- }],
- },
- validate.get(("videoSources", 0, "cdnSources", "primary", 0, "sourceUrl")),
- ),
- )
-
- url = self.session.http.post(
- self.URL_TOKEN,
- json={
- "requestorId": "nbcnews",
- "pid": self.id,
- "application": "NBCSports",
- "version": "v1",
- "platform": "desktop",
- "token": "",
- "resourceId": "",
- "inPath": "false",
- "authenticationType": "unauth",
- "cdn": "akamai",
- "url": stream,
- },
- schema=validate.Schema(
- validate.parse_json(),
- {
- "akamai": [{
- "tokenizedUrl": validate.url(),
- }],
- },
- validate.get(("akamai", 0, "tokenizedUrl")),
- ),
- )
- return HLSStream.parse_variant_playlist(self.session, url)
-
-
-__plugin__ = NBCNews
|
{"golden_diff": "diff --git a/src/streamlink/plugins/nbcnews.py b/src/streamlink/plugins/nbcnews.py\ndeleted file mode 100644\n--- a/src/streamlink/plugins/nbcnews.py\n+++ /dev/null\n@@ -1,90 +0,0 @@\n-\"\"\"\n-$description 24-hour world, US and local news channel, based in the United States of America.\n-$url nbcnews.com\n-$type live\n-\"\"\"\n-\n-import logging\n-import re\n-\n-from streamlink.plugin import Plugin, pluginmatcher\n-from streamlink.plugin.api import validate\n-from streamlink.stream.hls import HLSStream\n-\n-\n-log = logging.getLogger(__name__)\n-\n-\n-@pluginmatcher(re.compile(\n- r\"https?://(?:www\\.)?nbcnews\\.com/now\",\n-))\n-class NBCNews(Plugin):\n- URL_API = \"https://api-leap.nbcsports.com/feeds/assets/{}?application=NBCNews&format=nbc-player&platform=desktop\"\n- URL_TOKEN = \"https://tokens.playmakerservices.com/\"\n-\n- title = \"NBC News Now\"\n-\n- def _get_streams(self):\n- self.id = self.session.http.get(\n- self.url,\n- schema=validate.Schema(\n- validate.parse_html(),\n- validate.xml_xpath_string(\".//script[@type='application/ld+json'][1]/text()\"),\n- validate.none_or_all(\n- validate.parse_json(),\n- {\"embedUrl\": validate.url()},\n- validate.get(\"embedUrl\"),\n- validate.transform(lambda embed_url: embed_url.split(\"/\")[-1]),\n- ),\n- ),\n- )\n- if self.id is None:\n- return\n- log.debug(f\"API ID: {self.id}\")\n-\n- stream = self.session.http.get(\n- self.URL_API.format(self.id),\n- schema=validate.Schema(\n- validate.parse_json(),\n- {\n- \"videoSources\": [{\n- \"cdnSources\": {\n- \"primary\": [{\n- \"sourceUrl\": validate.url(path=validate.endswith(\".m3u8\")),\n- }],\n- },\n- }],\n- },\n- validate.get((\"videoSources\", 0, \"cdnSources\", \"primary\", 0, \"sourceUrl\")),\n- ),\n- )\n-\n- url = self.session.http.post(\n- self.URL_TOKEN,\n- json={\n- \"requestorId\": \"nbcnews\",\n- \"pid\": self.id,\n- \"application\": \"NBCSports\",\n- \"version\": \"v1\",\n- \"platform\": \"desktop\",\n- \"token\": \"\",\n- \"resourceId\": \"\",\n- \"inPath\": \"false\",\n- \"authenticationType\": \"unauth\",\n- \"cdn\": \"akamai\",\n- \"url\": stream,\n- },\n- schema=validate.Schema(\n- validate.parse_json(),\n- {\n- \"akamai\": [{\n- \"tokenizedUrl\": validate.url(),\n- }],\n- },\n- validate.get((\"akamai\", 0, \"tokenizedUrl\")),\n- ),\n- )\n- return HLSStream.parse_variant_playlist(self.session, url)\n-\n-\n-__plugin__ = NBCNews\n", "issue": "plugins.nbcnews: not live\n### Checklist\n\n- [X] This is a plugin issue and not a different kind of issue\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest build from the master branch\n\n### Description\n\nEvery time it start the same video but on the site it is live. 
Something was changed on the site few days ago.\n\n### Debug log\n\n```text\n[cli][debug] OS: Linux-6.1.0-7-amd64-x86_64-with-glibc2.36\r\n[cli][debug] Python: 3.11.2\r\n[cli][debug] Streamlink: 5.3.1+84.g7978e4ab\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2022.9.24\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.2\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.16.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.28.1\r\n[cli][debug] urllib3: 1.26.12\r\n[cli][debug] websocket-client: 1.2.3\r\n[cli][debug] esprima: 4.0.1\r\n[cli][debug] importlib-metadata: 4.12.0\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.nbcnews.com/now/\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --player=mpv\r\n[cli][debug] --default-stream=['best']\r\n[cli][info] Found matching plugin nbcnews for URL https://www.nbcnews.com/now/\r\n[plugins.nbcnews][debug] API ID: 2007524\r\n[utils.l10n][debug] Language code: ru_RU\r\n[cli][info] Available streams: 144p_alt (worst), 144p, 216p_alt, 216p, 288p_alt, 288p, 360p_alt, 360p, 504p_alt, 504p, 576p_alt, 576p, 720p_alt, 720p, 1080p_alt, 1080p (best)\r\n[cli][info] Opening stream: 1080p (hls)\r\n[cli][info] Starting player: mpv\r\n[stream.hls][debug] Reloading playlist\r\n[cli][debug] Pre-buffering 8192 bytes\r\n[stream.hls][debug] First Sequence: 1786423; Last Sequence: 1786722\r\n[stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 1786423; End Sequence: 1786722\r\n[stream.hls][debug] Adding segment 1786423 to queue\n```\n\n", "before_files": [{"content": "\"\"\"\n$description 24-hour world, US and local news channel, based in the United States of America.\n$url nbcnews.com\n$type live\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?nbcnews\\.com/now\",\n))\nclass NBCNews(Plugin):\n URL_API = \"https://api-leap.nbcsports.com/feeds/assets/{}?application=NBCNews&format=nbc-player&platform=desktop\"\n URL_TOKEN = \"https://tokens.playmakerservices.com/\"\n\n title = \"NBC News Now\"\n\n def _get_streams(self):\n self.id = self.session.http.get(\n self.url,\n schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//script[@type='application/ld+json'][1]/text()\"),\n validate.none_or_all(\n validate.parse_json(),\n {\"embedUrl\": validate.url()},\n validate.get(\"embedUrl\"),\n validate.transform(lambda embed_url: embed_url.split(\"/\")[-1]),\n ),\n ),\n )\n if self.id is None:\n return\n log.debug(f\"API ID: {self.id}\")\n\n stream = self.session.http.get(\n self.URL_API.format(self.id),\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"videoSources\": [{\n \"cdnSources\": {\n \"primary\": [{\n \"sourceUrl\": validate.url(path=validate.endswith(\".m3u8\")),\n }],\n },\n }],\n },\n validate.get((\"videoSources\", 0, \"cdnSources\", \"primary\", 0, \"sourceUrl\")),\n ),\n )\n\n url = self.session.http.post(\n self.URL_TOKEN,\n json={\n \"requestorId\": \"nbcnews\",\n \"pid\": self.id,\n \"application\": \"NBCSports\",\n \"version\": \"v1\",\n \"platform\": \"desktop\",\n \"token\": \"\",\n \"resourceId\": \"\",\n \"inPath\": \"false\",\n \"authenticationType\": \"unauth\",\n \"cdn\": \"akamai\",\n \"url\": stream,\n },\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"akamai\": [{\n \"tokenizedUrl\": validate.url(),\n }],\n },\n 
validate.get((\"akamai\", 0, \"tokenizedUrl\")),\n ),\n )\n return HLSStream.parse_variant_playlist(self.session, url)\n\n\n__plugin__ = NBCNews\n", "path": "src/streamlink/plugins/nbcnews.py"}], "after_files": [{"content": null, "path": "src/streamlink/plugins/nbcnews.py"}]}
| 1,853 | 696 |
gh_patches_debug_21241 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-2280 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
events not editable for group members
A group member CAN edit a project but CANNOT edit an event related to the project; the request fails with a 403 error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/offlineevents/rules.py`
Content:
```
1 import rules
2 from rules.predicates import is_superuser
3
4 from adhocracy4.modules import predicates as module_predicates
5 from adhocracy4.organisations.predicates import is_initiator
6 from adhocracy4.projects.predicates import is_moderator
7
8 rules.add_perm(
9 'meinberlin_offlineevents.list_offlineevent',
10 is_superuser | is_initiator | is_moderator
11 )
12
13 rules.add_perm(
14 'meinberlin_offlineevents.view_offlineevent',
15 module_predicates.is_allowed_view_item
16 )
17
18 rules.add_perm(
19 'meinberlin_offlineevents.add_offlineevent',
20 is_superuser | is_initiator | is_moderator
21 )
22
23 rules.add_perm(
24 'meinberlin_offlineevents.change_offlineevent',
25 is_superuser | is_initiator | is_moderator
26 )
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/meinberlin/apps/offlineevents/rules.py b/meinberlin/apps/offlineevents/rules.py
--- a/meinberlin/apps/offlineevents/rules.py
+++ b/meinberlin/apps/offlineevents/rules.py
@@ -4,10 +4,11 @@
from adhocracy4.modules import predicates as module_predicates
from adhocracy4.organisations.predicates import is_initiator
from adhocracy4.projects.predicates import is_moderator
+from adhocracy4.projects.predicates import is_prj_group_member
rules.add_perm(
'meinberlin_offlineevents.list_offlineevent',
- is_superuser | is_initiator | is_moderator
+ is_superuser | is_initiator | is_moderator | is_prj_group_member
)
rules.add_perm(
@@ -17,10 +18,10 @@
rules.add_perm(
'meinberlin_offlineevents.add_offlineevent',
- is_superuser | is_initiator | is_moderator
+ is_superuser | is_initiator | is_moderator | is_prj_group_member
)
rules.add_perm(
'meinberlin_offlineevents.change_offlineevent',
- is_superuser | is_initiator | is_moderator
+ is_superuser | is_initiator | is_moderator | is_prj_group_member
)
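
The patch works because django-rules predicates compose with `|` as a logical OR, so adding `is_prj_group_member` to each chain grants group members the same access as initiators and moderators. A minimal, self-contained sketch of that mechanism follows; the predicates and user class are illustrative stand-ins rather than adhocracy4's real implementations, and it assumes the `rules` package is installed.

```python
import rules

@rules.predicate
def is_moderator(user):
    # stand-in: the real adhocracy4 predicate checks project moderators
    return getattr(user, "is_moderator", False)

@rules.predicate
def is_prj_group_member(user):
    # stand-in: the real predicate checks membership in the project's group
    return getattr(user, "in_project_group", False)

# "|" builds an OR-predicate: any single predicate granting access is enough.
rules.add_perm("demo.change_offlineevent", is_moderator | is_prj_group_member)

class DemoUser:
    is_moderator = False
    in_project_group = True

print(rules.has_perm("demo.change_offlineevent", DemoUser()))  # True
```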
|
{"golden_diff": "diff --git a/meinberlin/apps/offlineevents/rules.py b/meinberlin/apps/offlineevents/rules.py\n--- a/meinberlin/apps/offlineevents/rules.py\n+++ b/meinberlin/apps/offlineevents/rules.py\n@@ -4,10 +4,11 @@\n from adhocracy4.modules import predicates as module_predicates\n from adhocracy4.organisations.predicates import is_initiator\n from adhocracy4.projects.predicates import is_moderator\n+from adhocracy4.projects.predicates import is_prj_group_member\n \n rules.add_perm(\n 'meinberlin_offlineevents.list_offlineevent',\n- is_superuser | is_initiator | is_moderator\n+ is_superuser | is_initiator | is_moderator | is_prj_group_member\n )\n \n rules.add_perm(\n@@ -17,10 +18,10 @@\n \n rules.add_perm(\n 'meinberlin_offlineevents.add_offlineevent',\n- is_superuser | is_initiator | is_moderator\n+ is_superuser | is_initiator | is_moderator | is_prj_group_member\n )\n \n rules.add_perm(\n 'meinberlin_offlineevents.change_offlineevent',\n- is_superuser | is_initiator | is_moderator\n+ is_superuser | is_initiator | is_moderator | is_prj_group_member\n )\n", "issue": "events not editable for groupmembers\ngroup member CAN edit a project but CANNOT an event related to the project. 403 error.\n", "before_files": [{"content": "import rules\nfrom rules.predicates import is_superuser\n\nfrom adhocracy4.modules import predicates as module_predicates\nfrom adhocracy4.organisations.predicates import is_initiator\nfrom adhocracy4.projects.predicates import is_moderator\n\nrules.add_perm(\n 'meinberlin_offlineevents.list_offlineevent',\n is_superuser | is_initiator | is_moderator\n)\n\nrules.add_perm(\n 'meinberlin_offlineevents.view_offlineevent',\n module_predicates.is_allowed_view_item\n)\n\nrules.add_perm(\n 'meinberlin_offlineevents.add_offlineevent',\n is_superuser | is_initiator | is_moderator\n)\n\nrules.add_perm(\n 'meinberlin_offlineevents.change_offlineevent',\n is_superuser | is_initiator | is_moderator\n)\n", "path": "meinberlin/apps/offlineevents/rules.py"}], "after_files": [{"content": "import rules\nfrom rules.predicates import is_superuser\n\nfrom adhocracy4.modules import predicates as module_predicates\nfrom adhocracy4.organisations.predicates import is_initiator\nfrom adhocracy4.projects.predicates import is_moderator\nfrom adhocracy4.projects.predicates import is_prj_group_member\n\nrules.add_perm(\n 'meinberlin_offlineevents.list_offlineevent',\n is_superuser | is_initiator | is_moderator | is_prj_group_member\n)\n\nrules.add_perm(\n 'meinberlin_offlineevents.view_offlineevent',\n module_predicates.is_allowed_view_item\n)\n\nrules.add_perm(\n 'meinberlin_offlineevents.add_offlineevent',\n is_superuser | is_initiator | is_moderator | is_prj_group_member\n)\n\nrules.add_perm(\n 'meinberlin_offlineevents.change_offlineevent',\n is_superuser | is_initiator | is_moderator | is_prj_group_member\n)\n", "path": "meinberlin/apps/offlineevents/rules.py"}]}
| 519 | 301 |
gh_patches_debug_1221 | rasdani/github-patches | git_diff | google__mobly-524 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix pytest warnings in Python 3
`pytest` currently produces the following warnings:
mobly/mobly/test_runner.py:181: PytestWarning: cannot collect test class 'TestRunner' because it has a __init__ constructor
class TestRunner(object):
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.begin_time, expected_begin_time)
mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.end_time, expected_end_time)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.begin_time, expected_begin_time)
mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.end_time, expected_end_time)
mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.begin_time, expected_begin_time)
mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.end_time, expected_end_time)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.begin_time, expected_begin_time)
mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.end_time, expected_end_time)
mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.begin_time, expected_begin_time)
mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.end_time, expected_end_time)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.begin_time, expected_begin_time)
mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.end_time, expected_end_time)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/output_test.py:171: DeprecationWarning: Please use assertNotEqual instead.
self.assertNotEquals(output_dir1, output_dir2)
mobly/tests/mobly/output_test.py:205: DeprecationWarning: Please use assertNotEqual instead.
self.assertNotEquals(output_dir1, output_dir2)
-- Docs: https://docs.pytest.org/en/latest/warnings.html
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import platform
16 import setuptools
17 from setuptools.command import test
18 import sys
19
20 install_requires = [
21 'future', 'portpicker', 'psutil>=5.4.4', 'pyserial', 'pyyaml',
22 'timeout_decorator'
23 ]
24
25 if sys.version_info < (3, ):
26 install_requires.extend([
27 'enum34',
28 # "futures" is needed for py2 compatibility and it only works in 2.7
29 'futures',
30 ])
31
32 if platform.system() == 'Windows':
33 install_requires.append('pywin32')
34
35
36 class PyTest(test.test):
37 """Class used to execute unit tests using PyTest. This allows us to execute
38 unit tests without having to install the package.
39 """
40
41 def finalize_options(self):
42 test.test.finalize_options(self)
43 self.test_args = ['-x', "tests"]
44 self.test_suite = True
45
46 def run_tests(self):
47 import pytest
48 errno = pytest.main(self.test_args)
49 sys.exit(errno)
50
51
52 def main():
53 setuptools.setup(
54 name='mobly',
55 version='1.7.5',
56 maintainer='Ang Li',
57 maintainer_email='[email protected]',
58 description='Automation framework for special end-to-end test cases',
59 license='Apache2.0',
60 url='https://github.com/google/mobly',
61 download_url='https://github.com/google/mobly/tarball/1.7.5',
62 packages=setuptools.find_packages(),
63 include_package_data=False,
64 scripts=['tools/sl4a_shell.py', 'tools/snippet_shell.py'],
65 tests_require=[
66 'mock',
67 'pytest',
68 'pytz',
69 ],
70 install_requires=install_requires,
71 cmdclass={'test': PyTest},
72 )
73
74
75 if __name__ == '__main__':
76 main()
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -40,7 +40,7 @@
def finalize_options(self):
test.test.finalize_options(self)
- self.test_args = ['-x', "tests"]
+ self.test_args = ['-x', "tests/mobly"]
self.test_suite = True
def run_tests(self):
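
Note that the golden patch only narrows pytest's collection path to `tests/mobly`; the `assertEquals`/`assertNotEquals` DeprecationWarnings quoted in the issue come from unittest's legacy aliases and go away once the modern names are used. A small stand-alone illustration (not mobly's actual tests):

```python
import unittest

class AliasExample(unittest.TestCase):
    def test_modern_assertion_names(self):
        # assertEquals / assertNotEquals are deprecated aliases in Python 3
        self.assertEqual(1 + 1, 2)           # instead of assertEquals
        self.assertNotEqual("dir1", "dir2")  # instead of assertNotEquals

if __name__ == "__main__":
    unittest.main()
```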
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -40,7 +40,7 @@\n \n def finalize_options(self):\n test.test.finalize_options(self)\n- self.test_args = ['-x', \"tests\"]\n+ self.test_args = ['-x', \"tests/mobly\"]\n self.test_suite = True\n \n def run_tests(self):\n", "issue": "Fix pytest warnings in Python 3\n`pytests` currently produces the following warnings:\r\n\r\n mobly/mobly/test_runner.py:181: PytestWarning: cannot collect test class 'TestRunner' because it has a __init__ constructor\r\n class TestRunner(object):\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.begin_time, expected_begin_time)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.end_time, expected_end_time)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.begin_time, expected_begin_time)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.end_time, expected_end_time)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.begin_time, expected_begin_time)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.end_time, expected_end_time)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.begin_time, expected_begin_time)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.end_time, expected_end_time)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.begin_time, expected_begin_time)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.end_time, expected_end_time)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n 
expected_completed_and_passed)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.begin_time, expected_begin_time)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.end_time, expected_end_time)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n\r\n mobly/tests/mobly/output_test.py:171: DeprecationWarning: Please use assertNotEqual instead.\r\n self.assertNotEquals(output_dir1, output_dir2)\r\n\r\n mobly/tests/mobly/output_test.py:205: DeprecationWarning: Please use assertNotEqual instead.\r\n self.assertNotEquals(output_dir1, output_dir2)\r\n\r\n -- Docs: https://docs.pytest.org/en/latest/warnings.html\n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport platform\nimport setuptools\nfrom setuptools.command import test\nimport sys\n\ninstall_requires = [\n 'future', 'portpicker', 'psutil>=5.4.4', 'pyserial', 'pyyaml',\n 'timeout_decorator'\n]\n\nif sys.version_info < (3, ):\n install_requires.extend([\n 'enum34',\n # \"futures\" is needed for py2 compatibility and it only works in 2.7\n 'futures',\n ])\n\nif platform.system() == 'Windows':\n install_requires.append('pywin32')\n\n\nclass PyTest(test.test):\n \"\"\"Class used to execute unit tests using PyTest. 
This allows us to execute\n unit tests without having to install the package.\n \"\"\"\n\n def finalize_options(self):\n test.test.finalize_options(self)\n self.test_args = ['-x', \"tests\"]\n self.test_suite = True\n\n def run_tests(self):\n import pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\n\ndef main():\n setuptools.setup(\n name='mobly',\n version='1.7.5',\n maintainer='Ang Li',\n maintainer_email='[email protected]',\n description='Automation framework for special end-to-end test cases',\n license='Apache2.0',\n url='https://github.com/google/mobly',\n download_url='https://github.com/google/mobly/tarball/1.7.5',\n packages=setuptools.find_packages(),\n include_package_data=False,\n scripts=['tools/sl4a_shell.py', 'tools/snippet_shell.py'],\n tests_require=[\n 'mock',\n 'pytest',\n 'pytz',\n ],\n install_requires=install_requires,\n cmdclass={'test': PyTest},\n )\n\n\nif __name__ == '__main__':\n main()\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport platform\nimport setuptools\nfrom setuptools.command import test\nimport sys\n\ninstall_requires = [\n 'future', 'portpicker', 'psutil>=5.4.4', 'pyserial', 'pyyaml',\n 'timeout_decorator'\n]\n\nif sys.version_info < (3, ):\n install_requires.extend([\n 'enum34',\n # \"futures\" is needed for py2 compatibility and it only works in 2.7\n 'futures',\n ])\n\nif platform.system() == 'Windows':\n install_requires.append('pywin32')\n\n\nclass PyTest(test.test):\n \"\"\"Class used to execute unit tests using PyTest. This allows us to execute\n unit tests without having to install the package.\n \"\"\"\n\n def finalize_options(self):\n test.test.finalize_options(self)\n self.test_args = ['-x', \"tests/mobly\"]\n self.test_suite = True\n\n def run_tests(self):\n import pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\n\ndef main():\n setuptools.setup(\n name='mobly',\n version='1.7.5',\n maintainer='Ang Li',\n maintainer_email='[email protected]',\n description='Automation framework for special end-to-end test cases',\n license='Apache2.0',\n url='https://github.com/google/mobly',\n download_url='https://github.com/google/mobly/tarball/1.7.5',\n packages=setuptools.find_packages(),\n include_package_data=False,\n scripts=['tools/sl4a_shell.py', 'tools/snippet_shell.py'],\n tests_require=[\n 'mock',\n 'pytest',\n 'pytz',\n ],\n install_requires=install_requires,\n cmdclass={'test': PyTest},\n )\n\n\nif __name__ == '__main__':\n main()\n", "path": "setup.py"}]}
| 1,947 | 87 |
gh_patches_debug_8409 | rasdani/github-patches | git_diff | pre-commit__pre-commit-478 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`docker build` argument order is invalid on old versions of Docker
We do: `docker build . --tag thing --label thing`
But this produces an error on Docker 1.11.2:
```
ckuehl@dev4-uswest1cdevc:~/proj/pre-commit$ docker build . --tag thing --label thing
docker: "build" requires 1 argument.
See 'docker build --help'.
Usage: docker build [OPTIONS] PATH | URL | -
Build an image from a Dockerfile
```
The path needs to go at the end on 1.11.2, but it works on 1.13.0 as-is. We should probably just change the order of the arguments to make every version happy.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/docker.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import hashlib
5 import os
6
7 from pre_commit import five
8 from pre_commit.languages import helpers
9 from pre_commit.util import CalledProcessError
10 from pre_commit.util import clean_path_on_failure
11 from pre_commit.util import cmd_output
12 from pre_commit.xargs import xargs
13
14
15 ENVIRONMENT_DIR = 'docker'
16 PRE_COMMIT_LABEL = 'PRE_COMMIT'
17
18
19 def md5(s): # pragma: windows no cover
20 return hashlib.md5(five.to_bytes(s)).hexdigest()
21
22
23 def docker_tag(repo_cmd_runner): # pragma: windows no cover
24 return 'pre-commit-{}'.format(
25 md5(os.path.basename(repo_cmd_runner.path()))
26 ).lower()
27
28
29 def docker_is_running(): # pragma: windows no cover
30 try:
31 return cmd_output('docker', 'ps')[0] == 0
32 except CalledProcessError:
33 return False
34
35
36 def assert_docker_available(): # pragma: windows no cover
37 assert docker_is_running(), (
38 'Docker is either not running or not configured in this environment'
39 )
40
41
42 def build_docker_image(repo_cmd_runner, **kwargs): # pragma: windows no cover
43 pull = kwargs.pop('pull')
44 assert not kwargs, kwargs
45 cmd = (
46 'docker', 'build', '.',
47 '--tag', docker_tag(repo_cmd_runner),
48 '--label', PRE_COMMIT_LABEL,
49 )
50 if pull:
51 cmd += ('--pull',)
52 helpers.run_setup_cmd(repo_cmd_runner, cmd)
53
54
55 def install_environment(
56 repo_cmd_runner,
57 version='default',
58 additional_dependencies=(),
59 ): # pragma: windows no cover
60 assert repo_cmd_runner.exists('Dockerfile'), (
61 'No Dockerfile was found in the hook repository'
62 )
63 helpers.assert_version_default('docker', version)
64 helpers.assert_no_additional_deps('docker', additional_dependencies)
65 assert_docker_available()
66
67 directory = repo_cmd_runner.path(
68 helpers.environment_dir(ENVIRONMENT_DIR, 'default'),
69 )
70
71 # Docker doesn't really have relevant disk environment, but pre-commit
72 # still needs to cleanup it's state files on failure
73 with clean_path_on_failure(directory):
74 build_docker_image(repo_cmd_runner, pull=True)
75 os.mkdir(directory)
76
77
78 def run_hook(repo_cmd_runner, hook, file_args): # pragma: windows no cover
79 assert_docker_available()
80 # Rebuild the docker image in case it has gone missing, as many people do
81 # automated cleanup of docker images.
82 build_docker_image(repo_cmd_runner, pull=False)
83
84 hook_cmd = helpers.to_cmd(hook)
85 entry_executable, cmd_rest = hook_cmd[0], hook_cmd[1:]
86
87 cmd = (
88 'docker', 'run',
89 '--rm',
90 '-u', '{}:{}'.format(os.getuid(), os.getgid()),
91 '-v', '{}:/src:rw'.format(os.getcwd()),
92 '--workdir', '/src',
93 '--entrypoint', entry_executable,
94 docker_tag(repo_cmd_runner)
95 ) + cmd_rest
96
97 return xargs(cmd, file_args)
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py
--- a/pre_commit/languages/docker.py
+++ b/pre_commit/languages/docker.py
@@ -43,12 +43,14 @@
pull = kwargs.pop('pull')
assert not kwargs, kwargs
cmd = (
- 'docker', 'build', '.',
+ 'docker', 'build',
'--tag', docker_tag(repo_cmd_runner),
'--label', PRE_COMMIT_LABEL,
)
if pull:
cmd += ('--pull',)
+ # This must come last for old versions of docker. See #477
+ cmd += ('.',)
helpers.run_setup_cmd(repo_cmd_runner, cmd)
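
A short sketch of the ordering this change produces; the function below is a simplified stand-in for pre-commit's `build_docker_image`, and the tag and label values are made up.

```python
def docker_build_cmd(tag, label, pull=False):
    cmd = ('docker', 'build', '--tag', tag, '--label', label)
    if pull:
        cmd += ('--pull',)
    # The build context is appended last so old clients (e.g. Docker 1.11.2),
    # which require "docker build [OPTIONS] PATH", accept the command.
    return cmd + ('.',)

print(' '.join(docker_build_cmd('pre-commit-abc123', 'PRE_COMMIT', pull=True)))
# -> docker build --tag pre-commit-abc123 --label PRE_COMMIT --pull .
```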
|
{"golden_diff": "diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py\n--- a/pre_commit/languages/docker.py\n+++ b/pre_commit/languages/docker.py\n@@ -43,12 +43,14 @@\n pull = kwargs.pop('pull')\n assert not kwargs, kwargs\n cmd = (\n- 'docker', 'build', '.',\n+ 'docker', 'build',\n '--tag', docker_tag(repo_cmd_runner),\n '--label', PRE_COMMIT_LABEL,\n )\n if pull:\n cmd += ('--pull',)\n+ # This must come last for old versions of docker. See #477\n+ cmd += ('.',)\n helpers.run_setup_cmd(repo_cmd_runner, cmd)\n", "issue": "`docker build` argument order is invalid on old versions of Docker\nWe do: `docker build . --tag thing --label thing`\r\n\r\nBut this produces an error on Docker 1.11.2:\r\n```\r\nckuehl@dev4-uswest1cdevc:~/proj/pre-commit$ docker build . --tag thing --label thing\r\ndocker: \"build\" requires 1 argument.\r\nSee 'docker build --help'.\r\n\r\nUsage: docker build [OPTIONS] PATH | URL | -\r\n\r\nBuild an image from a Dockerfile\r\n```\r\n\r\nThe path needs to go at the end on 1.11.2, but it works on 1.13.0 as-is. We should probably just change the order of the arguments to make every version happy.\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import unicode_literals\n\nimport hashlib\nimport os\n\nfrom pre_commit import five\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.xargs import xargs\n\n\nENVIRONMENT_DIR = 'docker'\nPRE_COMMIT_LABEL = 'PRE_COMMIT'\n\n\ndef md5(s): # pragma: windows no cover\n return hashlib.md5(five.to_bytes(s)).hexdigest()\n\n\ndef docker_tag(repo_cmd_runner): # pragma: windows no cover\n return 'pre-commit-{}'.format(\n md5(os.path.basename(repo_cmd_runner.path()))\n ).lower()\n\n\ndef docker_is_running(): # pragma: windows no cover\n try:\n return cmd_output('docker', 'ps')[0] == 0\n except CalledProcessError:\n return False\n\n\ndef assert_docker_available(): # pragma: windows no cover\n assert docker_is_running(), (\n 'Docker is either not running or not configured in this environment'\n )\n\n\ndef build_docker_image(repo_cmd_runner, **kwargs): # pragma: windows no cover\n pull = kwargs.pop('pull')\n assert not kwargs, kwargs\n cmd = (\n 'docker', 'build', '.',\n '--tag', docker_tag(repo_cmd_runner),\n '--label', PRE_COMMIT_LABEL,\n )\n if pull:\n cmd += ('--pull',)\n helpers.run_setup_cmd(repo_cmd_runner, cmd)\n\n\ndef install_environment(\n repo_cmd_runner,\n version='default',\n additional_dependencies=(),\n): # pragma: windows no cover\n assert repo_cmd_runner.exists('Dockerfile'), (\n 'No Dockerfile was found in the hook repository'\n )\n helpers.assert_version_default('docker', version)\n helpers.assert_no_additional_deps('docker', additional_dependencies)\n assert_docker_available()\n\n directory = repo_cmd_runner.path(\n helpers.environment_dir(ENVIRONMENT_DIR, 'default'),\n )\n\n # Docker doesn't really have relevant disk environment, but pre-commit\n # still needs to cleanup it's state files on failure\n with clean_path_on_failure(directory):\n build_docker_image(repo_cmd_runner, pull=True)\n os.mkdir(directory)\n\n\ndef run_hook(repo_cmd_runner, hook, file_args): # pragma: windows no cover\n assert_docker_available()\n # Rebuild the docker image in case it has gone missing, as many people do\n # automated cleanup of docker images.\n build_docker_image(repo_cmd_runner, pull=False)\n\n hook_cmd = helpers.to_cmd(hook)\n 
entry_executable, cmd_rest = hook_cmd[0], hook_cmd[1:]\n\n cmd = (\n 'docker', 'run',\n '--rm',\n '-u', '{}:{}'.format(os.getuid(), os.getgid()),\n '-v', '{}:/src:rw'.format(os.getcwd()),\n '--workdir', '/src',\n '--entrypoint', entry_executable,\n docker_tag(repo_cmd_runner)\n ) + cmd_rest\n\n return xargs(cmd, file_args)\n", "path": "pre_commit/languages/docker.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import unicode_literals\n\nimport hashlib\nimport os\n\nfrom pre_commit import five\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.xargs import xargs\n\n\nENVIRONMENT_DIR = 'docker'\nPRE_COMMIT_LABEL = 'PRE_COMMIT'\n\n\ndef md5(s): # pragma: windows no cover\n return hashlib.md5(five.to_bytes(s)).hexdigest()\n\n\ndef docker_tag(repo_cmd_runner): # pragma: windows no cover\n return 'pre-commit-{}'.format(\n md5(os.path.basename(repo_cmd_runner.path()))\n ).lower()\n\n\ndef docker_is_running(): # pragma: windows no cover\n try:\n return cmd_output('docker', 'ps')[0] == 0\n except CalledProcessError:\n return False\n\n\ndef assert_docker_available(): # pragma: windows no cover\n assert docker_is_running(), (\n 'Docker is either not running or not configured in this environment'\n )\n\n\ndef build_docker_image(repo_cmd_runner, **kwargs): # pragma: windows no cover\n pull = kwargs.pop('pull')\n assert not kwargs, kwargs\n cmd = (\n 'docker', 'build',\n '--tag', docker_tag(repo_cmd_runner),\n '--label', PRE_COMMIT_LABEL,\n )\n if pull:\n cmd += ('--pull',)\n # This must come last for old versions of docker. See #477\n cmd += ('.',)\n helpers.run_setup_cmd(repo_cmd_runner, cmd)\n\n\ndef install_environment(\n repo_cmd_runner,\n version='default',\n additional_dependencies=(),\n): # pragma: windows no cover\n assert repo_cmd_runner.exists('Dockerfile'), (\n 'No Dockerfile was found in the hook repository'\n )\n helpers.assert_version_default('docker', version)\n helpers.assert_no_additional_deps('docker', additional_dependencies)\n assert_docker_available()\n\n directory = repo_cmd_runner.path(\n helpers.environment_dir(ENVIRONMENT_DIR, 'default'),\n )\n\n # Docker doesn't really have relevant disk environment, but pre-commit\n # still needs to cleanup it's state files on failure\n with clean_path_on_failure(directory):\n build_docker_image(repo_cmd_runner, pull=True)\n os.mkdir(directory)\n\n\ndef run_hook(repo_cmd_runner, hook, file_args): # pragma: windows no cover\n assert_docker_available()\n # Rebuild the docker image in case it has gone missing, as many people do\n # automated cleanup of docker images.\n build_docker_image(repo_cmd_runner, pull=False)\n\n hook_cmd = helpers.to_cmd(hook)\n entry_executable, cmd_rest = hook_cmd[0], hook_cmd[1:]\n\n cmd = (\n 'docker', 'run',\n '--rm',\n '-u', '{}:{}'.format(os.getuid(), os.getgid()),\n '-v', '{}:/src:rw'.format(os.getcwd()),\n '--workdir', '/src',\n '--entrypoint', entry_executable,\n docker_tag(repo_cmd_runner)\n ) + cmd_rest\n\n return xargs(cmd, file_args)\n", "path": "pre_commit/languages/docker.py"}]}
| 1,288 | 156 |
gh_patches_debug_60679 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-1655 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Main Nav changes to accommodate "Feedback" button
Updating the action based on discussions:
1. Remove `Contact` from main nav.
2. Between `About` and `Submit Data`, add a button `Feedback`.
3. While you are there, change `Submit Data` to `Share Data` (there's a later issue for that which this will close)
Button style should be the same as the "Follow Us" button here, except gray, not blue: http://docs.hdx.rwlabs.org/get-involved/
Note that the megaphone icon shown below will not be used. No icon on the button.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_theme/ckanext/hdx_theme/version.py`
Content:
```
1 hdx_version = 'v0.4.5'
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
@@ -1 +1 @@
-hdx_version = 'v0.4.5'
+hdx_version = 'v0.4.6'
|
{"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n@@ -1 +1 @@\n-hdx_version = 'v0.4.5'\n+hdx_version = 'v0.4.6'\n", "issue": "Main Nav changes to accomodate \"Feedback\" button\nUpdating the action based on discussions:\n1. Remove `Contact` from main nav. \n2. Between `About` and `Submit Data`, add a button `Feedback`. \n3. While you are there, change `Submit Data` to `Share Data` (there's a later issue for that which this will close)\n\nButton style should be the same as the \"Follow Us\" button here, except gray, not blue: http://docs.hdx.rwlabs.org/get-involved/\n\nNote that the megaphone icon shown below will not be used. No icon on the button.\n\n", "before_files": [{"content": "hdx_version = 'v0.4.5'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}], "after_files": [{"content": "hdx_version = 'v0.4.6'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}]}
| 415 | 106 |
gh_patches_debug_7628 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-3074 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add OpenAPI Specification for /schemas/ endpoint and fix the incorrect specs
## Problem
In order to ensure the accuracy of the specs generated by drf-spectacular for the /schemas/ endpoint, we will generate an OpenAPI Specification specifically for this endpoint. This will allow us to verify its correctness by comparing it with the existing test cases and the DRF browsable API page.
## Proposed solution
* Implement a custom preprocessing hook function to filter out all endpoints except for the /schemas/ endpoint. The hook function selectively retains only the endpoint paths that match the /schemas/ pattern.
* Configure the PREPROCESSING_HOOKS setting with the custom hook function so that only the /schemas/ endpoint is considered during the OpenAPI specification generation process.
* Finally, generate the spec file using the `manage.py spectacular --color --file schema.yml` command
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `config/settings/openapi.py`
Content:
```
1 def custom_preprocessing_hook(endpoints):
2 filtered = []
3 for (path, path_regex, method, callback) in endpoints:
4 # Remove all but DRF API endpoints
5 if path.startswith("/api/db/v0/databases/") or path.startswith("/api/db/v0/data_files/"):
6 filtered.append((path, path_regex, method, callback))
7 return filtered
8
9
10 def remove_url_prefix_hook(result, **kwargs):
11 # Remove namespace and version URL prefix from the operation Id of the generated API schema
12 for path, path_info in result['paths'].items():
13 for method, operation in path_info.items():
14 operation_id = operation.get('operationId')
15 if operation_id:
16 if path.startswith('/api/db/v0/'):
17 operation['operationId'] = operation_id.replace('db_v0_', '')
18 elif path.startswith('/api/ui/v0/'):
19 operation['operationId'] = operation_id.replace('ui_v0_', '')
20
21 return result
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/config/settings/openapi.py b/config/settings/openapi.py
--- a/config/settings/openapi.py
+++ b/config/settings/openapi.py
@@ -2,7 +2,7 @@
filtered = []
for (path, path_regex, method, callback) in endpoints:
# Remove all but DRF API endpoints
- if path.startswith("/api/db/v0/databases/") or path.startswith("/api/db/v0/data_files/"):
+ if path.startswith("/api/db/v0/databases/") or path.startswith("/api/db/v0/data_files/") or path.startswith("/api/db/v0/schemas/"):
filtered.append((path, path_regex, method, callback))
return filtered
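
For context, a hedged sketch of how such a hook could be pointed at the /schemas/ endpoint and registered with drf-spectacular; the dotted module path and settings fragment are assumptions, not Mathesar's actual configuration.

```python
def schemas_only_preprocessing_hook(endpoints):
    # keep only the /api/db/v0/schemas/ endpoints in the generated spec
    return [
        (path, path_regex, method, callback)
        for (path, path_regex, method, callback) in endpoints
        if path.startswith("/api/db/v0/schemas/")
    ]

# hypothetical settings fragment registering the hook with drf-spectacular
SPECTACULAR_SETTINGS = {
    "PREPROCESSING_HOOKS": ["config.settings.openapi.schemas_only_preprocessing_hook"],
}
# The spec file is then produced with:
#   python manage.py spectacular --color --file schema.yml
```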
|
{"golden_diff": "diff --git a/config/settings/openapi.py b/config/settings/openapi.py\n--- a/config/settings/openapi.py\n+++ b/config/settings/openapi.py\n@@ -2,7 +2,7 @@\n filtered = []\n for (path, path_regex, method, callback) in endpoints:\n # Remove all but DRF API endpoints\n- if path.startswith(\"/api/db/v0/databases/\") or path.startswith(\"/api/db/v0/data_files/\"):\n+ if path.startswith(\"/api/db/v0/databases/\") or path.startswith(\"/api/db/v0/data_files/\") or path.startswith(\"/api/db/v0/schemas/\"):\n filtered.append((path, path_regex, method, callback))\n return filtered\n", "issue": "Add OpenAPI Specification for /schemas/ endpoint and fix the incorrect specs\n## Problem\r\nIn order to ensure the accuracy of the specs generated by drf-spectacular for /schemas/ endpoint , we will generate an OpenAPI Specification specifically for this endpoint. This will allow us to verify its correctness by comparing it with the existing test cases and the DRF browsable API page.\r\n\r\n## Proposed solution\r\n* Implement custom preprocessing hook function to filter out all endpoints except for the /schemas/ endpoint.The hook function selectively retains only the endpoint paths that match the /schemas/ \r\n* Configure the PREPROCESSING_HOOKS setting with the custom hook function, we ensure that only the /schemas/ endpoint is considered during the OpenAPI specification generation process.\r\n* Finally, generate the spec file using the `manage.py spectacular --color --file schema.yml` command\n", "before_files": [{"content": "def custom_preprocessing_hook(endpoints):\n filtered = []\n for (path, path_regex, method, callback) in endpoints:\n # Remove all but DRF API endpoints\n if path.startswith(\"/api/db/v0/databases/\") or path.startswith(\"/api/db/v0/data_files/\"):\n filtered.append((path, path_regex, method, callback))\n return filtered\n\n\ndef remove_url_prefix_hook(result, **kwargs):\n # Remove namespace and version URL prefix from the operation Id of the generated API schema\n for path, path_info in result['paths'].items():\n for method, operation in path_info.items():\n operation_id = operation.get('operationId')\n if operation_id:\n if path.startswith('/api/db/v0/'):\n operation['operationId'] = operation_id.replace('db_v0_', '')\n elif path.startswith('/api/ui/v0/'):\n operation['operationId'] = operation_id.replace('ui_v0_', '')\n\n return result\n", "path": "config/settings/openapi.py"}], "after_files": [{"content": "def custom_preprocessing_hook(endpoints):\n filtered = []\n for (path, path_regex, method, callback) in endpoints:\n # Remove all but DRF API endpoints\n if path.startswith(\"/api/db/v0/databases/\") or path.startswith(\"/api/db/v0/data_files/\") or path.startswith(\"/api/db/v0/schemas/\"):\n filtered.append((path, path_regex, method, callback))\n return filtered\n\n\ndef remove_url_prefix_hook(result, **kwargs):\n # Remove namespace and version URL prefix from the operation Id of the generated API schema\n for path, path_info in result['paths'].items():\n for method, operation in path_info.items():\n operation_id = operation.get('operationId')\n if operation_id:\n if path.startswith('/api/db/v0/'):\n operation['operationId'] = operation_id.replace('db_v0_', '')\n elif path.startswith('/api/ui/v0/'):\n operation['operationId'] = operation_id.replace('ui_v0_', '')\n\n return result\n", "path": "config/settings/openapi.py"}]}
| 671 | 149 |
gh_patches_debug_5497 | rasdani/github-patches | git_diff | scikit-hep__pyhf-999 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Keep tighter version constraints on TensorFlow releases
# Description
As Issue #997 makes clear, minor releases of TensorFlow might as well be major releases, so it is probably worth keeping tighter version constraints on them and just watching the releases of TensorFlow and TensorFlow Probability to see when we can relax these.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3 extras_require = {
4 'shellcomplete': ['click_completion'],
5 'tensorflow': [
6 'tensorflow~=2.0',
7 'tensorflow-probability~=0.10', # TODO: Temp patch until tfp v0.11
8 ],
9 'torch': ['torch~=1.2'],
10 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],
11 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes
12 'minuit': ['iminuit~=1.4,>=1.4.3'], # Use "name" keyword in MINUIT optimizer
13 }
14 extras_require['backends'] = sorted(
15 set(
16 extras_require['tensorflow']
17 + extras_require['torch']
18 + extras_require['jax']
19 + extras_require['minuit']
20 )
21 )
22 extras_require['contrib'] = sorted(set(['matplotlib']))
23 extras_require['lint'] = sorted(set(['pyflakes', 'black']))
24
25 extras_require['test'] = sorted(
26 set(
27 extras_require['backends']
28 + extras_require['xmlio']
29 + extras_require['contrib']
30 + extras_require['shellcomplete']
31 + [
32 'pytest~=3.5',
33 'pytest-cov>=2.5.1',
34 'pytest-mock',
35 'pytest-benchmark[histogram]',
36 'pytest-console-scripts',
37 'pytest-mpl',
38 'pydocstyle',
39 'coverage>=4.0', # coveralls
40 'papermill~=2.0',
41 'nteract-scrapbook~=0.2',
42 'jupyter',
43 'uproot~=3.3',
44 'graphviz',
45 'jsonpatch',
46 ]
47 )
48 )
49 extras_require['docs'] = sorted(
50 set(
51 [
52 'sphinx>=3.1.2',
53 'sphinxcontrib-bibtex',
54 'sphinx-click',
55 'sphinx_rtd_theme',
56 'nbsphinx',
57 'ipywidgets',
58 'sphinx-issues',
59 'sphinx-copybutton>0.2.9',
60 ]
61 )
62 )
63 extras_require['develop'] = sorted(
64 set(
65 extras_require['docs']
66 + extras_require['lint']
67 + extras_require['test']
68 + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'check-manifest', 'twine']
69 )
70 )
71 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
72
73
74 setup(
75 extras_require=extras_require,
76 use_scm_version=lambda: {'local_scheme': lambda version: ''},
77 )
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -3,8 +3,8 @@
extras_require = {
'shellcomplete': ['click_completion'],
'tensorflow': [
- 'tensorflow~=2.0',
- 'tensorflow-probability~=0.10', # TODO: Temp patch until tfp v0.11
+ 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major
+ 'tensorflow-probability~=0.10.0',
],
'torch': ['torch~=1.2'],
'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],
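
The tightening relies on PEP 440 compatible-release semantics: `~=2.2.0` permits only 2.2.x patch releases, whereas the previous `~=2.0` accepted any 2.x minor. A quick check of that behaviour, assuming the `packaging` library is available:

```python
from packaging.specifiers import SpecifierSet

assert "2.2.3" in SpecifierSet("~=2.2.0")      # patch bumps still allowed
assert "2.3.0" not in SpecifierSet("~=2.2.0")  # minor bumps now excluded
assert "2.3.0" in SpecifierSet("~=2.0")        # the old pin would have allowed this
print("compatible-release semantics confirmed")
```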
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -3,8 +3,8 @@\n extras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n- 'tensorflow~=2.0',\n- 'tensorflow-probability~=0.10', # TODO: Temp patch until tfp v0.11\n+ 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n+ 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n", "issue": "Keep tighter version constraints on TensorFlow releases\n# Description\r\n\r\nAs Issue #997 makes it clear that minor releases might as well be major releases for TensorFlow, then it is probably worth keeping tighter version constraints on them and just watching the releases of TensorFlow and TensorFlow Probability to see when we can relax these.\n", "before_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.0',\n 'tensorflow-probability~=0.10', # TODO: Temp patch until tfp v0.11\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit~=1.4,>=1.4.3'], # Use \"name\" keyword in MINUIT optimizer\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib']))\nextras_require['lint'] = sorted(set(['pyflakes', 'black']))\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'uproot~=3.3',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'check-manifest', 'twine']\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit~=1.4,>=1.4.3'], # Use \"name\" keyword in MINUIT optimizer\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib']))\nextras_require['lint'] = sorted(set(['pyflakes', 'black']))\n\nextras_require['test'] = sorted(\n set(\n 
extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'uproot~=3.3',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'check-manifest', 'twine']\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]}
| 1,050 | 170 |
gh_patches_debug_41578
|
rasdani/github-patches
|
git_diff
|
PrefectHQ__prefect-1368
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add optional pull to local agent
If the image is present locally we don't want to attempt a pull from a registry
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/cli/agent.py`
Content:
```
1 import click
2
3 from prefect import config
4 from prefect.utilities.configuration import set_temporary_config
5 from prefect.utilities.serialization import from_qualified_name
6
7 _agents = {
8 "local": "prefect.agent.local.LocalAgent",
9 "kubernetes": "prefect.agent.kubernetes.KubernetesAgent",
10 "nomad": "prefect.agent.nomad.NomadAgent",
11 }
12
13
14 @click.group(hidden=True)
15 def agent():
16 """
17 Manage Prefect agents.
18
19 \b
20 Usage:
21 $ prefect agent [COMMAND]
22
23 \b
24 Arguments:
25 start Start a Prefect agent
26
27 \b
28 Examples:
29 $ prefect agent start
30
31 \b
32 $ prefect agent start kubernetes --token MY_TOKEN
33 """
34 pass
35
36
37 @agent.command(hidden=True)
38 @click.argument("name", default="local")
39 @click.option(
40 "--token", "-t", required=False, help="A Prefect Cloud API token.", hidden=True
41 )
42 def start(name, token):
43 """
44 Start an agent.
45
46 \b
47 Arguments:
48 name TEXT The name of an agent to start (e.g. `local`, `kubernetes`, `nomad`)
49 Defaults to `local`
50
51 \b
52 Options:
53 --token, -t TEXT A Prefect Cloud api token
54 """
55 with set_temporary_config(
56 {"cloud.agent.auth_token": token or config.cloud.agent.auth_token}
57 ):
58 retrieved_agent = _agents.get(name, None)
59
60 if not retrieved_agent:
61 click.secho("{} is not a valid agent".format(name), fg="red")
62 return
63
64 from_qualified_name(retrieved_agent)().start()
65
```
Path: `src/prefect/agent/local/agent.py`
Content:
```
1 import docker
2
3 from prefect import config
4 from prefect.agent import Agent
5 from prefect.environments.storage import Docker
6 from prefect.serialization.storage import StorageSchema
7 from prefect.utilities.graphql import GraphQLResult
8
9
10 class LocalAgent(Agent):
11 """
12 Agent which deploys flow runs locally as Docker containers.
13
14 Args:
15 - base_url (str, optional): URL for a Docker daemon server. Defaults to
16 `unix:///var/run/docker.sock` however other hosts such as
17 `tcp://0.0.0.0:2375` can be provided
18 """
19
20 def __init__(self, base_url: str = None) -> None:
21 super().__init__()
22
23 base_url = base_url or "unix://var/run/docker.sock"
24 self.docker_client = docker.APIClient(base_url=base_url, version="auto")
25
26 # Ping Docker daemon for connection issues
27 try:
28 self.docker_client.ping()
29 except Exception as exc:
30 self.logger.error(
31 "Issue connecting to the Docker daemon. Make sure it is running."
32 )
33 raise exc
34
35 def deploy_flows(self, flow_runs: list) -> None:
36 """
37 Deploy flow runs on your local machine as Docker containers
38
39 Args:
40 - flow_runs (list): A list of GraphQLResult flow run objects
41 """
42 for flow_run in flow_runs:
43
44 storage = StorageSchema().load(flow_run.flow.storage)
45 if not isinstance(StorageSchema().load(flow_run.flow.storage), Docker):
46 self.logger.error(
47 "Storage for flow run {} is not of type Docker.".format(flow_run.id)
48 )
49 continue
50
51 env_vars = self.populate_env_vars(flow_run=flow_run)
52
53 # Pull image if it doesn't exist locally
54 self.docker_client.pull(storage.name)
55
56 # Create a container
57 container = self.docker_client.create_container(
58 storage.name, command="prefect execute cloud-flow", environment=env_vars
59 )
60
61 # Start the container
62 self.docker_client.start(container=container.get("Id"))
63
64 def populate_env_vars(self, flow_run: GraphQLResult) -> dict:
65 """
66 Populate metadata and variables in the environment variables for a flow run
67
68 Args:
69 - flow_run (GraphQLResult): A flow run object
70
71 Returns:
72 - dict: a dictionary representing the populated environment variables
73 """
74 return {
75 "PREFECT__CLOUD__API": config.cloud.api,
76 "PREFECT__CLOUD__AUTH_TOKEN": config.cloud.agent.auth_token,
77 "PREFECT__CONTEXT__FLOW_RUN_ID": flow_run.id, # type: ignore
78 "PREFECT__CLOUD__USE_LOCAL_SECRETS": "false",
79 "PREFECT__LOGGING__LOG_TO_CLOUD": "true",
80 "PREFECT__LOGGING__LEVEL": "DEBUG",
81 "PREFECT__ENGINE__FLOW_RUNNER__DEFAULT_CLASS": "prefect.engine.cloud.CloudFlowRunner",
82 "PREFECT__ENGINE__TASK_RUNNER__DEFAULT_CLASS": "prefect.engine.cloud.CloudTaskRunner",
83 }
84
85
86 if __name__ == "__main__":
87 LocalAgent().start()
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/prefect/agent/local/agent.py b/src/prefect/agent/local/agent.py
--- a/src/prefect/agent/local/agent.py
+++ b/src/prefect/agent/local/agent.py
@@ -1,6 +1,6 @@
import docker
-from prefect import config
+from prefect import config, context
from prefect.agent import Agent
from prefect.environments.storage import Docker
from prefect.serialization.storage import StorageSchema
@@ -15,9 +15,11 @@
- base_url (str, optional): URL for a Docker daemon server. Defaults to
`unix:///var/run/docker.sock` however other hosts such as
`tcp://0.0.0.0:2375` can be provided
+ - no_pull (bool, optional): Flag on whether or not to pull flow images.
+ Defaults to `False` if not provided here or in context.
"""
- def __init__(self, base_url: str = None) -> None:
+ def __init__(self, base_url: str = None, no_pull: bool = None) -> None:
super().__init__()
base_url = base_url or "unix://var/run/docker.sock"
@@ -32,6 +34,10 @@
)
raise exc
+ self.no_pull = no_pull or context.get("no_pull")
+ if self.no_pull is None:
+ self.no_pull = False
+
def deploy_flows(self, flow_runs: list) -> None:
"""
Deploy flow runs on your local machine as Docker containers
@@ -50,8 +56,8 @@
env_vars = self.populate_env_vars(flow_run=flow_run)
- # Pull image if it doesn't exist locally
- self.docker_client.pull(storage.name)
+ if not self.no_pull:
+ self.docker_client.pull(storage.name)
# Create a container
container = self.docker_client.create_container(
diff --git a/src/prefect/cli/agent.py b/src/prefect/cli/agent.py
--- a/src/prefect/cli/agent.py
+++ b/src/prefect/cli/agent.py
@@ -1,6 +1,6 @@
import click
-from prefect import config
+from prefect import config, context
from prefect.utilities.configuration import set_temporary_config
from prefect.utilities.serialization import from_qualified_name
@@ -39,7 +39,8 @@
@click.option(
"--token", "-t", required=False, help="A Prefect Cloud API token.", hidden=True
)
-def start(name, token):
+@click.option("--no-pull", is_flag=True, help="Pull images flag.", hidden=True)
+def start(name, token, no_pull):
"""
Start an agent.
@@ -51,6 +52,8 @@
\b
Options:
--token, -t TEXT A Prefect Cloud api token
+ --no-pull Pull images for a LocalAgent
+ Defaults to pulling if not provided
"""
with set_temporary_config(
{"cloud.agent.auth_token": token or config.cloud.agent.auth_token}
@@ -61,4 +64,5 @@
click.secho("{} is not a valid agent".format(name), fg="red")
return
- from_qualified_name(retrieved_agent)().start()
+ with context(no_pull=no_pull):
+ from_qualified_name(retrieved_agent)().start()
|
{"golden_diff": "diff --git a/src/prefect/agent/local/agent.py b/src/prefect/agent/local/agent.py\n--- a/src/prefect/agent/local/agent.py\n+++ b/src/prefect/agent/local/agent.py\n@@ -1,6 +1,6 @@\n import docker\n \n-from prefect import config\n+from prefect import config, context\n from prefect.agent import Agent\n from prefect.environments.storage import Docker\n from prefect.serialization.storage import StorageSchema\n@@ -15,9 +15,11 @@\n - base_url (str, optional): URL for a Docker daemon server. Defaults to\n `unix:///var/run/docker.sock` however other hosts such as\n `tcp://0.0.0.0:2375` can be provided\n+ - no_pull (bool, optional): Flag on whether or not to pull flow images.\n+ Defaults to `False` if not provided here or in context.\n \"\"\"\n \n- def __init__(self, base_url: str = None) -> None:\n+ def __init__(self, base_url: str = None, no_pull: bool = None) -> None:\n super().__init__()\n \n base_url = base_url or \"unix://var/run/docker.sock\"\n@@ -32,6 +34,10 @@\n )\n raise exc\n \n+ self.no_pull = no_pull or context.get(\"no_pull\")\n+ if self.no_pull is None:\n+ self.no_pull = False\n+\n def deploy_flows(self, flow_runs: list) -> None:\n \"\"\"\n Deploy flow runs on your local machine as Docker containers\n@@ -50,8 +56,8 @@\n \n env_vars = self.populate_env_vars(flow_run=flow_run)\n \n- # Pull image if it doesn't exist locally\n- self.docker_client.pull(storage.name)\n+ if not self.no_pull:\n+ self.docker_client.pull(storage.name)\n \n # Create a container\n container = self.docker_client.create_container(\ndiff --git a/src/prefect/cli/agent.py b/src/prefect/cli/agent.py\n--- a/src/prefect/cli/agent.py\n+++ b/src/prefect/cli/agent.py\n@@ -1,6 +1,6 @@\n import click\n \n-from prefect import config\n+from prefect import config, context\n from prefect.utilities.configuration import set_temporary_config\n from prefect.utilities.serialization import from_qualified_name\n \n@@ -39,7 +39,8 @@\n @click.option(\n \"--token\", \"-t\", required=False, help=\"A Prefect Cloud API token.\", hidden=True\n )\n-def start(name, token):\[email protected](\"--no-pull\", is_flag=True, help=\"Pull images flag.\", hidden=True)\n+def start(name, token, no_pull):\n \"\"\"\n Start an agent.\n \n@@ -51,6 +52,8 @@\n \\b\n Options:\n --token, -t TEXT A Prefect Cloud api token\n+ --no-pull Pull images for a LocalAgent\n+ Defaults to pulling if not provided\n \"\"\"\n with set_temporary_config(\n {\"cloud.agent.auth_token\": token or config.cloud.agent.auth_token}\n@@ -61,4 +64,5 @@\n click.secho(\"{} is not a valid agent\".format(name), fg=\"red\")\n return\n \n- from_qualified_name(retrieved_agent)().start()\n+ with context(no_pull=no_pull):\n+ from_qualified_name(retrieved_agent)().start()\n", "issue": "Add optional pull to local agent\nIf the image is present locally we don't want to attempt a pull from a registry\n", "before_files": [{"content": "import click\n\nfrom prefect import config\nfrom prefect.utilities.configuration import set_temporary_config\nfrom prefect.utilities.serialization import from_qualified_name\n\n_agents = {\n \"local\": \"prefect.agent.local.LocalAgent\",\n \"kubernetes\": \"prefect.agent.kubernetes.KubernetesAgent\",\n \"nomad\": \"prefect.agent.nomad.NomadAgent\",\n}\n\n\[email protected](hidden=True)\ndef agent():\n \"\"\"\n Manage Prefect agents.\n\n \\b\n Usage:\n $ prefect agent [COMMAND]\n\n \\b\n Arguments:\n start Start a Prefect agent\n\n \\b\n Examples:\n $ prefect agent start\n\n \\b\n $ prefect agent start kubernetes --token MY_TOKEN\n \"\"\"\n pass\n\n\[email 
protected](hidden=True)\[email protected](\"name\", default=\"local\")\[email protected](\n \"--token\", \"-t\", required=False, help=\"A Prefect Cloud API token.\", hidden=True\n)\ndef start(name, token):\n \"\"\"\n Start an agent.\n\n \\b\n Arguments:\n name TEXT The name of an agent to start (e.g. `local`, `kubernetes`, `nomad`)\n Defaults to `local`\n\n \\b\n Options:\n --token, -t TEXT A Prefect Cloud api token\n \"\"\"\n with set_temporary_config(\n {\"cloud.agent.auth_token\": token or config.cloud.agent.auth_token}\n ):\n retrieved_agent = _agents.get(name, None)\n\n if not retrieved_agent:\n click.secho(\"{} is not a valid agent\".format(name), fg=\"red\")\n return\n\n from_qualified_name(retrieved_agent)().start()\n", "path": "src/prefect/cli/agent.py"}, {"content": "import docker\n\nfrom prefect import config\nfrom prefect.agent import Agent\nfrom prefect.environments.storage import Docker\nfrom prefect.serialization.storage import StorageSchema\nfrom prefect.utilities.graphql import GraphQLResult\n\n\nclass LocalAgent(Agent):\n \"\"\"\n Agent which deploys flow runs locally as Docker containers.\n\n Args:\n - base_url (str, optional): URL for a Docker daemon server. Defaults to\n `unix:///var/run/docker.sock` however other hosts such as\n `tcp://0.0.0.0:2375` can be provided\n \"\"\"\n\n def __init__(self, base_url: str = None) -> None:\n super().__init__()\n\n base_url = base_url or \"unix://var/run/docker.sock\"\n self.docker_client = docker.APIClient(base_url=base_url, version=\"auto\")\n\n # Ping Docker daemon for connection issues\n try:\n self.docker_client.ping()\n except Exception as exc:\n self.logger.error(\n \"Issue connecting to the Docker daemon. Make sure it is running.\"\n )\n raise exc\n\n def deploy_flows(self, flow_runs: list) -> None:\n \"\"\"\n Deploy flow runs on your local machine as Docker containers\n\n Args:\n - flow_runs (list): A list of GraphQLResult flow run objects\n \"\"\"\n for flow_run in flow_runs:\n\n storage = StorageSchema().load(flow_run.flow.storage)\n if not isinstance(StorageSchema().load(flow_run.flow.storage), Docker):\n self.logger.error(\n \"Storage for flow run {} is not of type Docker.\".format(flow_run.id)\n )\n continue\n\n env_vars = self.populate_env_vars(flow_run=flow_run)\n\n # Pull image if it doesn't exist locally\n self.docker_client.pull(storage.name)\n\n # Create a container\n container = self.docker_client.create_container(\n storage.name, command=\"prefect execute cloud-flow\", environment=env_vars\n )\n\n # Start the container\n self.docker_client.start(container=container.get(\"Id\"))\n\n def populate_env_vars(self, flow_run: GraphQLResult) -> dict:\n \"\"\"\n Populate metadata and variables in the environment variables for a flow run\n\n Args:\n - flow_run (GraphQLResult): A flow run object\n\n Returns:\n - dict: a dictionary representing the populated environment variables\n \"\"\"\n return {\n \"PREFECT__CLOUD__API\": config.cloud.api,\n \"PREFECT__CLOUD__AUTH_TOKEN\": config.cloud.agent.auth_token,\n \"PREFECT__CONTEXT__FLOW_RUN_ID\": flow_run.id, # type: ignore\n \"PREFECT__CLOUD__USE_LOCAL_SECRETS\": \"false\",\n \"PREFECT__LOGGING__LOG_TO_CLOUD\": \"true\",\n \"PREFECT__LOGGING__LEVEL\": \"DEBUG\",\n \"PREFECT__ENGINE__FLOW_RUNNER__DEFAULT_CLASS\": \"prefect.engine.cloud.CloudFlowRunner\",\n \"PREFECT__ENGINE__TASK_RUNNER__DEFAULT_CLASS\": \"prefect.engine.cloud.CloudTaskRunner\",\n }\n\n\nif __name__ == \"__main__\":\n LocalAgent().start()\n", "path": "src/prefect/agent/local/agent.py"}], "after_files": 
[{"content": "import click\n\nfrom prefect import config, context\nfrom prefect.utilities.configuration import set_temporary_config\nfrom prefect.utilities.serialization import from_qualified_name\n\n_agents = {\n \"local\": \"prefect.agent.local.LocalAgent\",\n \"kubernetes\": \"prefect.agent.kubernetes.KubernetesAgent\",\n \"nomad\": \"prefect.agent.nomad.NomadAgent\",\n}\n\n\[email protected](hidden=True)\ndef agent():\n \"\"\"\n Manage Prefect agents.\n\n \\b\n Usage:\n $ prefect agent [COMMAND]\n\n \\b\n Arguments:\n start Start a Prefect agent\n\n \\b\n Examples:\n $ prefect agent start\n\n \\b\n $ prefect agent start kubernetes --token MY_TOKEN\n \"\"\"\n pass\n\n\[email protected](hidden=True)\[email protected](\"name\", default=\"local\")\[email protected](\n \"--token\", \"-t\", required=False, help=\"A Prefect Cloud API token.\", hidden=True\n)\[email protected](\"--no-pull\", is_flag=True, help=\"Pull images flag.\", hidden=True)\ndef start(name, token, no_pull):\n \"\"\"\n Start an agent.\n\n \\b\n Arguments:\n name TEXT The name of an agent to start (e.g. `local`, `kubernetes`, `nomad`)\n Defaults to `local`\n\n \\b\n Options:\n --token, -t TEXT A Prefect Cloud api token\n --no-pull Pull images for a LocalAgent\n Defaults to pulling if not provided\n \"\"\"\n with set_temporary_config(\n {\"cloud.agent.auth_token\": token or config.cloud.agent.auth_token}\n ):\n retrieved_agent = _agents.get(name, None)\n\n if not retrieved_agent:\n click.secho(\"{} is not a valid agent\".format(name), fg=\"red\")\n return\n\n with context(no_pull=no_pull):\n from_qualified_name(retrieved_agent)().start()\n", "path": "src/prefect/cli/agent.py"}, {"content": "import docker\n\nfrom prefect import config, context\nfrom prefect.agent import Agent\nfrom prefect.environments.storage import Docker\nfrom prefect.serialization.storage import StorageSchema\nfrom prefect.utilities.graphql import GraphQLResult\n\n\nclass LocalAgent(Agent):\n \"\"\"\n Agent which deploys flow runs locally as Docker containers.\n\n Args:\n - base_url (str, optional): URL for a Docker daemon server. Defaults to\n `unix:///var/run/docker.sock` however other hosts such as\n `tcp://0.0.0.0:2375` can be provided\n - no_pull (bool, optional): Flag on whether or not to pull flow images.\n Defaults to `False` if not provided here or in context.\n \"\"\"\n\n def __init__(self, base_url: str = None, no_pull: bool = None) -> None:\n super().__init__()\n\n base_url = base_url or \"unix://var/run/docker.sock\"\n self.docker_client = docker.APIClient(base_url=base_url, version=\"auto\")\n\n # Ping Docker daemon for connection issues\n try:\n self.docker_client.ping()\n except Exception as exc:\n self.logger.error(\n \"Issue connecting to the Docker daemon. 
Make sure it is running.\"\n )\n raise exc\n\n self.no_pull = no_pull or context.get(\"no_pull\")\n if self.no_pull is None:\n self.no_pull = False\n\n def deploy_flows(self, flow_runs: list) -> None:\n \"\"\"\n Deploy flow runs on your local machine as Docker containers\n\n Args:\n - flow_runs (list): A list of GraphQLResult flow run objects\n \"\"\"\n for flow_run in flow_runs:\n\n storage = StorageSchema().load(flow_run.flow.storage)\n if not isinstance(StorageSchema().load(flow_run.flow.storage), Docker):\n self.logger.error(\n \"Storage for flow run {} is not of type Docker.\".format(flow_run.id)\n )\n continue\n\n env_vars = self.populate_env_vars(flow_run=flow_run)\n\n if not self.no_pull:\n self.docker_client.pull(storage.name)\n\n # Create a container\n container = self.docker_client.create_container(\n storage.name, command=\"prefect execute cloud-flow\", environment=env_vars\n )\n\n # Start the container\n self.docker_client.start(container=container.get(\"Id\"))\n\n def populate_env_vars(self, flow_run: GraphQLResult) -> dict:\n \"\"\"\n Populate metadata and variables in the environment variables for a flow run\n\n Args:\n - flow_run (GraphQLResult): A flow run object\n\n Returns:\n - dict: a dictionary representing the populated environment variables\n \"\"\"\n return {\n \"PREFECT__CLOUD__API\": config.cloud.api,\n \"PREFECT__CLOUD__AUTH_TOKEN\": config.cloud.agent.auth_token,\n \"PREFECT__CONTEXT__FLOW_RUN_ID\": flow_run.id, # type: ignore\n \"PREFECT__CLOUD__USE_LOCAL_SECRETS\": \"false\",\n \"PREFECT__LOGGING__LOG_TO_CLOUD\": \"true\",\n \"PREFECT__LOGGING__LEVEL\": \"DEBUG\",\n \"PREFECT__ENGINE__FLOW_RUNNER__DEFAULT_CLASS\": \"prefect.engine.cloud.CloudFlowRunner\",\n \"PREFECT__ENGINE__TASK_RUNNER__DEFAULT_CLASS\": \"prefect.engine.cloud.CloudTaskRunner\",\n }\n\n\nif __name__ == \"__main__\":\n LocalAgent().start()\n", "path": "src/prefect/agent/local/agent.py"}]}
| 1,641 | 760 |
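A brief usage sketch for the `--no-pull` change resolved in the row above. The import path follows the `prefect.agent.local.LocalAgent` entry in the `_agents` mapping shown there; treat this as an illustration of the patched behaviour, not an excerpt from the repository.
```
# Illustration only: with the patch above applied, a LocalAgent started with
# no_pull=True skips `docker pull` for images already present on the host.
from prefect.agent.local import LocalAgent

# Equivalent to running `prefect agent start local --no-pull` from the CLI;
# when no_pull is omitted it falls back to context["no_pull"], then False.
agent = LocalAgent(no_pull=True)
agent.start()
```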
gh_patches_debug_517
|
rasdani/github-patches
|
git_diff
|
PaddlePaddle__models-4963
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
emotion_detection/utils.py code error
utils.py line 62: seq_len = max_seq_len looks like an indentation mistake, right? It should be inside the else branch; otherwise seq_len cannot represent the real length of the sequence
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PaddleNLP/emotion_detection/utils.py`
Content:
```
1 # Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 EmoTect utilities.
16 """
17 from __future__ import absolute_import
18 from __future__ import division
19 from __future__ import print_function
20
21 import io
22 import os
23 import sys
24 import six
25 import random
26
27 import paddle
28 import paddle.fluid as fluid
29 import numpy as np
30
31
32 def init_checkpoint(exe, init_checkpoint_path, main_program):
33 """
34 Init CheckPoint
35 """
36
37 fluid.load(main_program, init_checkpoint_path, exe)
38
39
40 def word2id(word_dict, query):
41 """
42 Convert word sequence into id list
43 """
44 unk_id = len(word_dict)
45 wids = [
46 word_dict[w] if w in word_dict else unk_id
47 for w in query.strip().split(" ")
48 ]
49 return wids
50
51
52 def pad_wid(wids, max_seq_len=128, pad_id=0):
53 """
54 Padding data to max_seq_len
55 """
56 seq_len = len(wids)
57 if seq_len < max_seq_len:
58 for i in range(max_seq_len - seq_len):
59 wids.append(pad_id)
60 else:
61 wids = wids[:max_seq_len]
62 seq_len = max_seq_len
63 return wids, seq_len
64
65
66 def data_reader(file_path, word_dict, num_examples, phrase, epoch, max_seq_len):
67 """
68 Data reader, which convert word sequence into id list
69 """
70 all_data = []
71 with io.open(file_path, "r", encoding='utf8') as fin:
72 for line in fin:
73 if line.startswith("label"):
74 continue
75 if phrase == "infer":
76 cols = line.strip().split("\t")
77 query = cols[-1] if len(cols) != -1 else cols[0]
78 wids = word2id(word_dict, query)
79 wids, seq_len = pad_wid(wids, max_seq_len)
80 all_data.append((wids, seq_len))
81 else:
82 cols = line.strip().split("\t")
83 if len(cols) != 2:
84 sys.stderr.write("[NOTICE] Error Format Line!")
85 continue
86 label = int(cols[0])
87 query = cols[1].strip()
88 wids = word2id(word_dict, query)
89 wids, seq_len = pad_wid(wids, max_seq_len)
90 all_data.append((wids, label, seq_len))
91 num_examples[phrase] = len(all_data)
92
93 if phrase == "infer":
94
95 def reader():
96 """
97 Infer reader function
98 """
99 for wids, seq_len in all_data:
100 yield wids, seq_len
101
102 return reader
103
104 def reader():
105 """
106 Reader function
107 """
108 for idx in range(epoch):
109 if phrase == "train" and 'ce_mode' not in os.environ:
110 random.shuffle(all_data)
111 for wids, label, seq_len in all_data:
112 yield wids, label, seq_len
113
114 return reader
115
116
117 def load_vocab(file_path):
118 """
119 load the given vocabulary
120 """
121 vocab = {}
122 with io.open(file_path, 'r', encoding='utf8') as fin:
123 wid = 0
124 for line in fin:
125 if line.strip() not in vocab:
126 vocab[line.strip()] = wid
127 wid += 1
128 vocab["<unk>"] = len(vocab)
129 return vocab
130
131
132 def print_arguments(args):
133 """
134 print arguments
135 """
136 print('----------- Configuration Arguments -----------')
137 for arg, value in sorted(six.iteritems(vars(args))):
138 print('%s: %s' % (arg, value))
139 print('------------------------------------------------')
140
141
142 def query2ids(vocab_path, query):
143 """
144 Convert query to id list according to the given vocab
145 """
146 vocab = load_vocab(vocab_path)
147 wids = word2id(vocab, query)
148 return wids
149
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/PaddleNLP/emotion_detection/utils.py b/PaddleNLP/emotion_detection/utils.py
--- a/PaddleNLP/emotion_detection/utils.py
+++ b/PaddleNLP/emotion_detection/utils.py
@@ -59,7 +59,7 @@
wids.append(pad_id)
else:
wids = wids[:max_seq_len]
-    seq_len = max_seq_len
+        seq_len = max_seq_len
return wids, seq_len
|
{"golden_diff": "diff --git a/PaddleNLP/emotion_detection/utils.py b/PaddleNLP/emotion_detection/utils.py\n--- a/PaddleNLP/emotion_detection/utils.py\n+++ b/PaddleNLP/emotion_detection/utils.py\n@@ -59,7 +59,7 @@\n wids.append(pad_id)\n else:\n wids = wids[:max_seq_len]\n- seq_len = max_seq_len\n+ seq_len = max_seq_len\n return wids, seq_len\n", "issue": "emotion_detection/utils.py\u4ee3\u7801\u9519\u8bef\nutils.py\u7b2c62\u884c\uff1aseq_len = max_seq_len \u662f\u7f29\u8fdb\u9519\u8bef\u5427\uff1f\u5e94\u8be5\u5305\u62ec\u5728else\u5185\uff0c\u5426\u5219seq_len\u4e0d\u80fd\u8868\u793asequence\u7684\u771f\u5b9e\u957f\u5ea6\n", "before_files": [{"content": "# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nEmoTect utilities.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport io\nimport os\nimport sys\nimport six\nimport random\n\nimport paddle\nimport paddle.fluid as fluid\nimport numpy as np\n\n\ndef init_checkpoint(exe, init_checkpoint_path, main_program):\n \"\"\"\n Init CheckPoint\n \"\"\"\n\n fluid.load(main_program, init_checkpoint_path, exe)\n\n\ndef word2id(word_dict, query):\n \"\"\"\n Convert word sequence into id list\n \"\"\"\n unk_id = len(word_dict)\n wids = [\n word_dict[w] if w in word_dict else unk_id\n for w in query.strip().split(\" \")\n ]\n return wids\n\n\ndef pad_wid(wids, max_seq_len=128, pad_id=0):\n \"\"\"\n Padding data to max_seq_len\n \"\"\"\n seq_len = len(wids)\n if seq_len < max_seq_len:\n for i in range(max_seq_len - seq_len):\n wids.append(pad_id)\n else:\n wids = wids[:max_seq_len]\n seq_len = max_seq_len\n return wids, seq_len\n\n\ndef data_reader(file_path, word_dict, num_examples, phrase, epoch, max_seq_len):\n \"\"\"\n Data reader, which convert word sequence into id list\n \"\"\"\n all_data = []\n with io.open(file_path, \"r\", encoding='utf8') as fin:\n for line in fin:\n if line.startswith(\"label\"):\n continue\n if phrase == \"infer\":\n cols = line.strip().split(\"\\t\")\n query = cols[-1] if len(cols) != -1 else cols[0]\n wids = word2id(word_dict, query)\n wids, seq_len = pad_wid(wids, max_seq_len)\n all_data.append((wids, seq_len))\n else:\n cols = line.strip().split(\"\\t\")\n if len(cols) != 2:\n sys.stderr.write(\"[NOTICE] Error Format Line!\")\n continue\n label = int(cols[0])\n query = cols[1].strip()\n wids = word2id(word_dict, query)\n wids, seq_len = pad_wid(wids, max_seq_len)\n all_data.append((wids, label, seq_len))\n num_examples[phrase] = len(all_data)\n\n if phrase == \"infer\":\n\n def reader():\n \"\"\"\n Infer reader function\n \"\"\"\n for wids, seq_len in all_data:\n yield wids, seq_len\n\n return reader\n\n def reader():\n \"\"\"\n Reader function\n \"\"\"\n for idx in range(epoch):\n if phrase == \"train\" and 'ce_mode' not in os.environ:\n random.shuffle(all_data)\n for wids, label, seq_len in all_data:\n yield wids, label, seq_len\n\n return reader\n\n\ndef 
load_vocab(file_path):\n \"\"\"\n load the given vocabulary\n \"\"\"\n vocab = {}\n with io.open(file_path, 'r', encoding='utf8') as fin:\n wid = 0\n for line in fin:\n if line.strip() not in vocab:\n vocab[line.strip()] = wid\n wid += 1\n vocab[\"<unk>\"] = len(vocab)\n return vocab\n\n\ndef print_arguments(args):\n \"\"\"\n print arguments\n \"\"\"\n print('----------- Configuration Arguments -----------')\n for arg, value in sorted(six.iteritems(vars(args))):\n print('%s: %s' % (arg, value))\n print('------------------------------------------------')\n\n\ndef query2ids(vocab_path, query):\n \"\"\"\n Convert query to id list according to the given vocab\n \"\"\"\n vocab = load_vocab(vocab_path)\n wids = word2id(vocab, query)\n return wids\n", "path": "PaddleNLP/emotion_detection/utils.py"}], "after_files": [{"content": "# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nEmoTect utilities.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport io\nimport os\nimport sys\nimport six\nimport random\n\nimport paddle\nimport paddle.fluid as fluid\nimport numpy as np\n\n\ndef init_checkpoint(exe, init_checkpoint_path, main_program):\n \"\"\"\n Init CheckPoint\n \"\"\"\n\n fluid.load(main_program, init_checkpoint_path, exe)\n\n\ndef word2id(word_dict, query):\n \"\"\"\n Convert word sequence into id list\n \"\"\"\n unk_id = len(word_dict)\n wids = [\n word_dict[w] if w in word_dict else unk_id\n for w in query.strip().split(\" \")\n ]\n return wids\n\n\ndef pad_wid(wids, max_seq_len=128, pad_id=0):\n \"\"\"\n Padding data to max_seq_len\n \"\"\"\n seq_len = len(wids)\n if seq_len < max_seq_len:\n for i in range(max_seq_len - seq_len):\n wids.append(pad_id)\n else:\n wids = wids[:max_seq_len]\n seq_len = max_seq_len\n return wids, seq_len\n\n\ndef data_reader(file_path, word_dict, num_examples, phrase, epoch, max_seq_len):\n \"\"\"\n Data reader, which convert word sequence into id list\n \"\"\"\n all_data = []\n with io.open(file_path, \"r\", encoding='utf8') as fin:\n for line in fin:\n if line.startswith(\"label\"):\n continue\n if phrase == \"infer\":\n cols = line.strip().split(\"\\t\")\n query = cols[-1] if len(cols) != -1 else cols[0]\n wids = word2id(word_dict, query)\n wids, seq_len = pad_wid(wids, max_seq_len)\n all_data.append((wids, seq_len))\n else:\n cols = line.strip().split(\"\\t\")\n if len(cols) != 2:\n sys.stderr.write(\"[NOTICE] Error Format Line!\")\n continue\n label = int(cols[0])\n query = cols[1].strip()\n wids = word2id(word_dict, query)\n wids, seq_len = pad_wid(wids, max_seq_len)\n all_data.append((wids, label, seq_len))\n num_examples[phrase] = len(all_data)\n\n if phrase == \"infer\":\n\n def reader():\n \"\"\"\n Infer reader function\n \"\"\"\n for wids, seq_len in all_data:\n yield wids, seq_len\n\n return reader\n\n def reader():\n \"\"\"\n Reader function\n \"\"\"\n for idx in range(epoch):\n if phrase == \"train\" and 
'ce_mode' not in os.environ:\n random.shuffle(all_data)\n for wids, label, seq_len in all_data:\n yield wids, label, seq_len\n\n return reader\n\n\ndef load_vocab(file_path):\n \"\"\"\n load the given vocabulary\n \"\"\"\n vocab = {}\n with io.open(file_path, 'r', encoding='utf8') as fin:\n wid = 0\n for line in fin:\n if line.strip() not in vocab:\n vocab[line.strip()] = wid\n wid += 1\n vocab[\"<unk>\"] = len(vocab)\n return vocab\n\n\ndef print_arguments(args):\n \"\"\"\n print arguments\n \"\"\"\n print('----------- Configuration Arguments -----------')\n for arg, value in sorted(six.iteritems(vars(args))):\n print('%s: %s' % (arg, value))\n print('------------------------------------------------')\n\n\ndef query2ids(vocab_path, query):\n \"\"\"\n Convert query to id list according to the given vocab\n \"\"\"\n vocab = load_vocab(vocab_path)\n wids = word2id(vocab, query)\n return wids\n", "path": "PaddleNLP/emotion_detection/utils.py"}]}
| 1,630 | 107 |
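A minimal standalone check of the indentation bug fixed in the row above; the names `pad_wid_buggy` and `pad_wid_fixed` are illustrative copies of `pad_wid`, not names from the repository.
```
# With the assignment dedented out of the else branch, seq_len is overwritten
# on every call, so short inputs report max_seq_len instead of their true length.
def pad_wid_buggy(wids, max_seq_len=128, pad_id=0):
    seq_len = len(wids)
    if seq_len < max_seq_len:
        for _ in range(max_seq_len - seq_len):
            wids.append(pad_id)
    else:
        wids = wids[:max_seq_len]
    seq_len = max_seq_len          # always executed
    return wids, seq_len

# Moving the assignment inside the else branch keeps seq_len at the real length.
def pad_wid_fixed(wids, max_seq_len=128, pad_id=0):
    seq_len = len(wids)
    if seq_len < max_seq_len:
        for _ in range(max_seq_len - seq_len):
            wids.append(pad_id)
    else:
        wids = wids[:max_seq_len]
        seq_len = max_seq_len      # only when the input was truncated
    return wids, seq_len

print(pad_wid_buggy([7, 8, 9], max_seq_len=5))  # ([7, 8, 9, 0, 0], 5) -> wrong
print(pad_wid_fixed([7, 8, 9], max_seq_len=5))  # ([7, 8, 9, 0, 0], 3) -> true length
```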
gh_patches_debug_338
|
rasdani/github-patches
|
git_diff
|
googleapis__google-cloud-python-1865
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Read the Docs build failing
https://readthedocs.org/projects/gcloud-python/builds/4108022/
https://readthedocs.org/projects/gcloud-python/builds/4108027/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 import sys
3
4 from setuptools import setup
5 from setuptools import find_packages
6
7 here = os.path.abspath(os.path.dirname(__file__))
8
9
10 with open(os.path.join(here, 'README.rst')) as f:
11 README = f.read()
12
13
14 REQUIREMENTS = [
15 'httplib2 >= 0.9.1',
16 'googleapis-common-protos',
17 'oauth2client >= 2.0.1',
18 'protobuf >= 3.0.0b2, != 3.0.0.b2.post1',
19 'six',
20 ]
21
22 GRPC_EXTRAS = [
23 'grpcio >= 0.14.0',
24 'google-gax >= 0.12.0',
25 'gax-google-pubsub-v1',
26 ]
27
28 if sys.version_info[:2] == (2, 7):
29 REQUIREMENTS.extend(GRPC_EXTRAS)
30
31 setup(
32 name='gcloud',
33 version='0.15.0',
34 description='API Client library for Google Cloud',
35 author='Google Cloud Platform',
36 author_email='[email protected]',
37 long_description=README,
38 scripts=[],
39 url='https://github.com/GoogleCloudPlatform/gcloud-python',
40 packages=find_packages(),
41 license='Apache 2.0',
42 platforms='Posix; MacOS X; Windows',
43 include_package_data=True,
44 zip_safe=False,
45 install_requires=REQUIREMENTS,
46 extras_require={'grpc': GRPC_EXTRAS},
47 classifiers=[
48 'Development Status :: 4 - Beta',
49 'Intended Audience :: Developers',
50 'License :: OSI Approved :: Apache Software License',
51 'Operating System :: OS Independent',
52 'Programming Language :: Python :: 2',
53 'Programming Language :: Python :: 2.6',
54 'Programming Language :: Python :: 2.7',
55 'Programming Language :: Python :: 3',
56 'Programming Language :: Python :: 3.4',
57 'Programming Language :: Python :: 3.5',
58 'Topic :: Internet',
59 ]
60 )
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -25,7 +25,7 @@
'gax-google-pubsub-v1',
]
-if sys.version_info[:2] == (2, 7):
+if sys.version_info[:2] == (2, 7) and 'READTHEDOCS' not in os.environ:
REQUIREMENTS.extend(GRPC_EXTRAS)
setup(
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -25,7 +25,7 @@\n 'gax-google-pubsub-v1',\n ]\n \n-if sys.version_info[:2] == (2, 7):\n+if sys.version_info[:2] == (2, 7) and 'READTHEDOCS' not in os.environ:\n REQUIREMENTS.extend(GRPC_EXTRAS)\n \n setup(\n", "issue": "Read the Docs build failing\nhttps://readthedocs.org/projects/gcloud-python/builds/4108022/\nhttps://readthedocs.org/projects/gcloud-python/builds/4108027/\n\n", "before_files": [{"content": "import os\nimport sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\nwith open(os.path.join(here, 'README.rst')) as f:\n README = f.read()\n\n\nREQUIREMENTS = [\n 'httplib2 >= 0.9.1',\n 'googleapis-common-protos',\n 'oauth2client >= 2.0.1',\n 'protobuf >= 3.0.0b2, != 3.0.0.b2.post1',\n 'six',\n]\n\nGRPC_EXTRAS = [\n 'grpcio >= 0.14.0',\n 'google-gax >= 0.12.0',\n 'gax-google-pubsub-v1',\n]\n\nif sys.version_info[:2] == (2, 7):\n REQUIREMENTS.extend(GRPC_EXTRAS)\n\nsetup(\n name='gcloud',\n version='0.15.0',\n description='API Client library for Google Cloud',\n author='Google Cloud Platform',\n author_email='[email protected]',\n long_description=README,\n scripts=[],\n url='https://github.com/GoogleCloudPlatform/gcloud-python',\n packages=find_packages(),\n license='Apache 2.0',\n platforms='Posix; MacOS X; Windows',\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIREMENTS,\n extras_require={'grpc': GRPC_EXTRAS},\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Internet',\n ]\n)\n", "path": "setup.py"}], "after_files": [{"content": "import os\nimport sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\nwith open(os.path.join(here, 'README.rst')) as f:\n README = f.read()\n\n\nREQUIREMENTS = [\n 'httplib2 >= 0.9.1',\n 'googleapis-common-protos',\n 'oauth2client >= 2.0.1',\n 'protobuf >= 3.0.0b2, != 3.0.0.b2.post1',\n 'six',\n]\n\nGRPC_EXTRAS = [\n 'grpcio >= 0.14.0',\n 'google-gax >= 0.11.0',\n 'gax-google-pubsub-v1',\n]\n\nif sys.version_info[:2] == (2, 7) and 'READTHEDOCS' not in os.environ:\n REQUIREMENTS.extend(GRPC_EXTRAS)\n\nsetup(\n name='gcloud',\n version='0.15.0',\n description='API Client library for Google Cloud',\n author='Google Cloud Platform',\n author_email='[email protected]',\n long_description=README,\n scripts=[],\n url='https://github.com/GoogleCloudPlatform/gcloud-python',\n packages=find_packages(),\n license='Apache 2.0',\n platforms='Posix; MacOS X; Windows',\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIREMENTS,\n extras_require={'grpc': GRPC_EXTRAS},\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 
3.5',\n 'Topic :: Internet',\n ]\n)\n", "path": "setup.py"}]}
| 862 | 100 |
gh_patches_debug_60829
|
rasdani/github-patches
|
git_diff
|
microsoft__AzureTRE-1754
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release version 0.3
## Description
As a TRE developer
I want to release current code base as version 0.3
So that people can use a more stable version going forward
## Acceptance criteria
- [ ] All core apps are bumped to 0.3
- [ ] All bundles are bumped to 0.3
- [ ] A tag is created
- [ ] A release is created
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `api_app/_version.py`
Content:
```
1 __version__ = "0.2.28"
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/api_app/_version.py b/api_app/_version.py
--- a/api_app/_version.py
+++ b/api_app/_version.py
@@ -1 +1 @@
-__version__ = "0.2.28"
+__version__ = "0.3.0"
|
{"golden_diff": "diff --git a/api_app/_version.py b/api_app/_version.py\n--- a/api_app/_version.py\n+++ b/api_app/_version.py\n@@ -1 +1 @@\n-__version__ = \"0.2.28\"\n+__version__ = \"0.3.0\"\n", "issue": "Release version 0.3\n## Description\r\n\r\nAs a TRE developer \r\nI want to release current code base as version 0.3\r\nSo that people can use a more stable version going forward\r\n\r\n## Acceptance criteria\r\n\r\n- [ ] All core apps are bumped to 0.3\r\n- [ ] All bundles are bumped to 0.3\r\n- [ ] A tag is created\r\n- [ ] A release is created\r\n\n", "before_files": [{"content": "__version__ = \"0.2.28\"\n", "path": "api_app/_version.py"}], "after_files": [{"content": "__version__ = \"0.3.0\"\n", "path": "api_app/_version.py"}]}
| 363 | 63 |
gh_patches_debug_10176
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-7364
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
graph --table=file.html does not show outdated packages
They were previously shown in colors. I guess adding a column "outdated" will resolve this issue.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/assets/templates/search_table_html.py`
Content:
```
1
2 content = """
3 <!DOCTYPE html>
4 <html lang="en">
5 <head>
6 <title>Conan | {{ search.reference }}</title>
7 <link rel="stylesheet" type="text/css" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css"/>
8 <link rel="stylesheet" type="text/css" href="https://cdn.datatables.net/1.10.21/css/dataTables.bootstrap.min.css"/>
9 <style>
10 tr td {
11 white-space:nowrap;
12 }
13 </style>
14 </head>
15 <body>
16 <div class="container-fluid">
17 <h1>{{ search.reference }}</h1>
18 <div class="info">
19 <p>
20 Depending on your package_id_mode, any combination of settings, options and requirements
21 can give you a different packageID. Take into account that your configuration might be
22 different from the one used to generate the packages.
23 </p>
24 </div>
25
26 <table id="results" class="table table-striped table-bordered" style="width:100%">
27 <thead>
28 {%- set headers = results.get_headers(keys=['remote', 'package_id']) %}
29 {%- set headers2rows = headers.row(n_rows=2) %}
30 <tr>
31 {%- for category, subheaders in headers2rows %}
32 <th class="text-center" rowspan="{% if subheaders|length == 1 and not subheaders[0] %}2{% else %}1{% endif %}" colspan="{{ subheaders|length }}">
33 {{ category }}
34 </th>
35 {%- endfor %}
36 </tr>
37 <tr>
38 {%- for category, subheaders in headers2rows %}
39 {%- if subheaders|length != 1 or subheaders[0] != '' %}
40 {%- for subheader in subheaders %}
41 <th>{{ subheader|default(category, true) }}</th>
42 {%- endfor %}
43 {%- endif %}
44 {%- endfor %}
45 </tr>
46 </thead>
47 <tbody>
48 {%- for package in results.packages() %}
49 <tr>
50 {%- for item in package.row(headers) %}
51 <td>{{ item if item != None else ''}}</td>
52 {%- endfor %}
53 </tr>
54 {%- endfor %}
55 </tbody>
56 <tfoot>
57 <tr>
58 {%- for header in headers.row(n_rows=1) %}
59 <th>{{ header }}</th>
60 {%- endfor %}
61 </tr>
62 </tfoot>
63 </table>
64
65 <script type="text/javascript" src="https://code.jquery.com/jquery-3.3.1.js"></script>
66 <script type="text/javascript" src="https://cdn.datatables.net/v/dt/dt-1.10.20/datatables.min.js"></script>
67 <script type="text/javascript" src="https://cdn.datatables.net/1.10.21/js/dataTables.bootstrap.min.js"></script>
68 <script>
69 $(document).ready(function() {
70 // Setup - add a text input to each footer cell
71 $('#results tfoot th').each( function () {
72 var title = $(this).text();
73 $(this).html( '<input type="text" class="form-control filter-input" placeholder="Filter '+title+'" style="width:100%"/>' );
74 });
75
76 var table = $('#results').DataTable( {
77 "dom": "lrtip",
78 "lengthMenu": [[10, 25, 50, -1], [10, 25, 50, "All"]],
79 "pageLength": 10,
80 "columnDefs": [
81 { className: "cell_border_right", "targets": [ {{ headers.keys|length + headers.settings|length -1 }}, {{ headers.keys|length + headers.settings|length + headers.options|length -1 }} ] },
82 { className: "cell_border_right monospaced", "targets": [{{ headers.keys|length -1 }}, ]}
83 ]
84 });
85
86 // Apply the search
87 table.columns().every( function () {
88 var that = this;
89
90 $( 'input', this.footer() ).on( 'keyup change clear', function () {
91 if ( that.search() !== this.value ) {
92 that
93 .search( this.value )
94 .draw();
95 }
96 } );
97 } );
98 });
99 </script>
100 </div>
101 </body>
102 </html>
103 """
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conans/assets/templates/search_table_html.py b/conans/assets/templates/search_table_html.py
--- a/conans/assets/templates/search_table_html.py
+++ b/conans/assets/templates/search_table_html.py
@@ -25,7 +25,7 @@
<table id="results" class="table table-striped table-bordered" style="width:100%">
<thead>
- {%- set headers = results.get_headers(keys=['remote', 'package_id']) %}
+ {%- set headers = results.get_headers(keys=['remote', 'package_id', 'outdated']) %}
{%- set headers2rows = headers.row(n_rows=2) %}
<tr>
{%- for category, subheaders in headers2rows %}
|
{"golden_diff": "diff --git a/conans/assets/templates/search_table_html.py b/conans/assets/templates/search_table_html.py\n--- a/conans/assets/templates/search_table_html.py\n+++ b/conans/assets/templates/search_table_html.py\n@@ -25,7 +25,7 @@\n \n <table id=\"results\" class=\"table table-striped table-bordered\" style=\"width:100%\">\n <thead>\n- {%- set headers = results.get_headers(keys=['remote', 'package_id']) %}\n+ {%- set headers = results.get_headers(keys=['remote', 'package_id', 'outdated']) %}\n {%- set headers2rows = headers.row(n_rows=2) %}\n <tr>\n {%- for category, subheaders in headers2rows %}\n", "issue": "graph --table=file.html does not show outdated packages\nThey were previously shown in colors. I guess adding a column \"outdated\" will resolve this issue.\n", "before_files": [{"content": "\ncontent = \"\"\"\n<!DOCTYPE html>\n<html lang=\"en\">\n <head>\n <title>Conan | {{ search.reference }}</title>\n <link rel=\"stylesheet\" type=\"text/css\" href=\"https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css\"/>\n <link rel=\"stylesheet\" type=\"text/css\" href=\"https://cdn.datatables.net/1.10.21/css/dataTables.bootstrap.min.css\"/>\n <style>\n tr td {\n white-space:nowrap;\n }\n </style>\n </head>\n <body>\n <div class=\"container-fluid\">\n <h1>{{ search.reference }}</h1>\n <div class=\"info\">\n <p>\n Depending on your package_id_mode, any combination of settings, options and requirements\n can give you a different packageID. Take into account that your configuration might be\n different from the one used to generate the packages.\n </p>\n </div>\n\n <table id=\"results\" class=\"table table-striped table-bordered\" style=\"width:100%\">\n <thead>\n {%- set headers = results.get_headers(keys=['remote', 'package_id']) %}\n {%- set headers2rows = headers.row(n_rows=2) %}\n <tr>\n {%- for category, subheaders in headers2rows %}\n <th class=\"text-center\" rowspan=\"{% if subheaders|length == 1 and not subheaders[0] %}2{% else %}1{% endif %}\" colspan=\"{{ subheaders|length }}\">\n {{ category }}\n </th>\n {%- endfor %}\n </tr>\n <tr>\n {%- for category, subheaders in headers2rows %}\n {%- if subheaders|length != 1 or subheaders[0] != '' %}\n {%- for subheader in subheaders %}\n <th>{{ subheader|default(category, true) }}</th>\n {%- endfor %}\n {%- endif %}\n {%- endfor %}\n </tr>\n </thead>\n <tbody>\n {%- for package in results.packages() %}\n <tr>\n {%- for item in package.row(headers) %}\n <td>{{ item if item != None else ''}}</td>\n {%- endfor %}\n </tr>\n {%- endfor %}\n </tbody>\n <tfoot>\n <tr>\n {%- for header in headers.row(n_rows=1) %}\n <th>{{ header }}</th>\n {%- endfor %}\n </tr>\n </tfoot>\n </table>\n\n <script type=\"text/javascript\" src=\"https://code.jquery.com/jquery-3.3.1.js\"></script>\n <script type=\"text/javascript\" src=\"https://cdn.datatables.net/v/dt/dt-1.10.20/datatables.min.js\"></script>\n <script type=\"text/javascript\" src=\"https://cdn.datatables.net/1.10.21/js/dataTables.bootstrap.min.js\"></script>\n <script>\n $(document).ready(function() {\n // Setup - add a text input to each footer cell\n $('#results tfoot th').each( function () {\n var title = $(this).text();\n $(this).html( '<input type=\"text\" class=\"form-control filter-input\" placeholder=\"Filter '+title+'\" style=\"width:100%\"/>' );\n });\n\n var table = $('#results').DataTable( {\n \"dom\": \"lrtip\",\n \"lengthMenu\": [[10, 25, 50, -1], [10, 25, 50, \"All\"]],\n \"pageLength\": 10,\n \"columnDefs\": [\n { className: \"cell_border_right\", \"targets\": [ 
{{ headers.keys|length + headers.settings|length -1 }}, {{ headers.keys|length + headers.settings|length + headers.options|length -1 }} ] },\n { className: \"cell_border_right monospaced\", \"targets\": [{{ headers.keys|length -1 }}, ]}\n ]\n });\n\n // Apply the search\n table.columns().every( function () {\n var that = this;\n\n $( 'input', this.footer() ).on( 'keyup change clear', function () {\n if ( that.search() !== this.value ) {\n that\n .search( this.value )\n .draw();\n }\n } );\n } );\n });\n </script>\n </div>\n </body>\n</html>\n\"\"\"\n", "path": "conans/assets/templates/search_table_html.py"}], "after_files": [{"content": "\ncontent = \"\"\"\n<!DOCTYPE html>\n<html lang=\"en\">\n <head>\n <title>Conan | {{ search.reference }}</title>\n <link rel=\"stylesheet\" type=\"text/css\" href=\"https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css\"/>\n <link rel=\"stylesheet\" type=\"text/css\" href=\"https://cdn.datatables.net/1.10.21/css/dataTables.bootstrap.min.css\"/>\n <style>\n tr td {\n white-space:nowrap;\n }\n </style>\n </head>\n <body>\n <div class=\"container-fluid\">\n <h1>{{ search.reference }}</h1>\n <div class=\"info\">\n <p>\n Depending on your package_id_mode, any combination of settings, options and requirements\n can give you a different packageID. Take into account that your configuration might be\n different from the one used to generate the packages.\n </p>\n </div>\n\n <table id=\"results\" class=\"table table-striped table-bordered\" style=\"width:100%\">\n <thead>\n {%- set headers = results.get_headers(keys=['remote', 'package_id', 'outdated']) %}\n {%- set headers2rows = headers.row(n_rows=2) %}\n <tr>\n {%- for category, subheaders in headers2rows %}\n <th class=\"text-center\" rowspan=\"{% if subheaders|length == 1 and not subheaders[0] %}2{% else %}1{% endif %}\" colspan=\"{{ subheaders|length }}\">\n {{ category }}\n </th>\n {%- endfor %}\n </tr>\n <tr>\n {%- for category, subheaders in headers2rows %}\n {%- if subheaders|length != 1 or subheaders[0] != '' %}\n {%- for subheader in subheaders %}\n <th>{{ subheader|default(category, true) }}</th>\n {%- endfor %}\n {%- endif %}\n {%- endfor %}\n </tr>\n </thead>\n <tbody>\n {%- for package in results.packages() %}\n <tr>\n {%- for item in package.row(headers) %}\n <td>{{ item if item != None else ''}}</td>\n {%- endfor %}\n </tr>\n {%- endfor %}\n </tbody>\n <tfoot>\n <tr>\n {%- for header in headers.row(n_rows=1) %}\n <th>{{ header }}</th>\n {%- endfor %}\n </tr>\n </tfoot>\n </table>\n\n <script type=\"text/javascript\" src=\"https://code.jquery.com/jquery-3.3.1.js\"></script>\n <script type=\"text/javascript\" src=\"https://cdn.datatables.net/v/dt/dt-1.10.20/datatables.min.js\"></script>\n <script type=\"text/javascript\" src=\"https://cdn.datatables.net/1.10.21/js/dataTables.bootstrap.min.js\"></script>\n <script>\n $(document).ready(function() {\n // Setup - add a text input to each footer cell\n $('#results tfoot th').each( function () {\n var title = $(this).text();\n $(this).html( '<input type=\"text\" class=\"form-control filter-input\" placeholder=\"Filter '+title+'\" style=\"width:100%\"/>' );\n });\n\n var table = $('#results').DataTable( {\n \"dom\": \"lrtip\",\n \"lengthMenu\": [[10, 25, 50, -1], [10, 25, 50, \"All\"]],\n \"pageLength\": 10,\n \"columnDefs\": [\n { className: \"cell_border_right\", \"targets\": [ {{ headers.keys|length + headers.settings|length -1 }}, {{ headers.keys|length + headers.settings|length + headers.options|length -1 }} ] },\n { className: 
\"cell_border_right monospaced\", \"targets\": [{{ headers.keys|length -1 }}, ]}\n ]\n });\n\n // Apply the search\n table.columns().every( function () {\n var that = this;\n\n $( 'input', this.footer() ).on( 'keyup change clear', function () {\n if ( that.search() !== this.value ) {\n that\n .search( this.value )\n .draw();\n }\n } );\n } );\n });\n </script>\n </div>\n </body>\n</html>\n\"\"\"\n", "path": "conans/assets/templates/search_table_html.py"}]}
| 1,462 | 159 |
gh_patches_debug_18833
|
rasdani/github-patches
|
git_diff
|
deepchecks__deepchecks-1548
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG][HOTFIX] Add deprecation warning for condition parameter names
In 0.7 the drift checks were updated to add Cramer's V.
Thus the previous condition parameter names ("max_psi...", "max_earthmovers...") were renamed.
Previous parameters should still be re-added with deprecation warnings, for all relevant checks and conditions.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deepchecks/vision/deprecation_warnings.py`
Content:
```
1 # ----------------------------------------------------------------------------
2 # Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)
3 #
4 # This file is part of Deepchecks.
5 # Deepchecks is distributed under the terms of the GNU Affero General
6 # Public License (version 3 or later).
7 # You should have received a copy of the GNU Affero General Public License
8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.
9 # ----------------------------------------------------------------------------
10 #
11 """This file changes default 'ignore' action of DeprecationWarnings for specific deprecation messages."""
12 import warnings
13
14 # Added in version 0.6.2, deprecates max_num_categories in all drift checks
15 warnings.filterwarnings(
16 action='always',
17 message=r'.*max_num_categories.*',
18 category=DeprecationWarning,
19 module=r'deepchecks.*'
20 )
21
22 # Added in 0.7 Warning filters for deprecated functions in deepchecks.tabular.checks
23 # Should be removed in 0.8
24 warnings.filterwarnings(
25 action='once',
26 message=r'deepchecks.vision.checks.performance is deprecated.*',
27 category=DeprecationWarning,
28 module=r'deepchecks.*'
29 )
30
31 warnings.filterwarnings(
32 action='once',
33 message=r'deepchecks.vision.checks.methodology is deprecated.*',
34 category=DeprecationWarning,
35 module=r'deepchecks.tabular.checks.methodology.*'
36 )
37
38 warnings.filterwarnings(
39 action='once',
40 message=r'deepchecks.vision.checks.distribution is deprecated.*',
41 category=DeprecationWarning,
42 module=r'deepchecks.*'
43 )
44
45 warnings.filterwarnings(
46 action='always',
47 message=r'the integrity_validation suite is deprecated.*',
48 category=DeprecationWarning,
49 module=r'deepchecks.*'
50 )
51
```
Path: `deepchecks/tabular/deprecation_warnings.py`
Content:
```
1 # ----------------------------------------------------------------------------
2 # Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)
3 #
4 # This file is part of Deepchecks.
5 # Deepchecks is distributed under the terms of the GNU Affero General
6 # Public License (version 3 or later).
7 # You should have received a copy of the GNU Affero General Public License
8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.
9 # ----------------------------------------------------------------------------
10 #
11 """This file changes default 'ignore' action of DeprecationWarnings for specific deprecation messages."""
12 import warnings
13
14 # Added in version 0.6.2, deprecates max_num_categories in all drift checks
15 warnings.filterwarnings(
16 action='always',
17 message=r'.*max_num_categories.*',
18 category=DeprecationWarning,
19 module=r'deepchecks.*'
20 )
21
22 # Added in 0.7 Warning filters for deprecated functions in deepchecks.tabular.checks
23 # Should be removed in 0.8
24 warnings.filterwarnings(
25 action='once',
26 message=r'deepchecks\.tabular\.checks\.performance is deprecated.*',
27 category=DeprecationWarning,
28 module=r'deepchecks.*'
29 )
30
31 warnings.filterwarnings(
32 action='once',
33 message=r'deepchecks.tabular.checks.overview is deprecated.*',
34 category=DeprecationWarning,
35 module=r'deepchecks.*'
36 )
37
38 warnings.filterwarnings(
39 action='once',
40 message=r'deepchecks.tabular.checks.methodology is deprecated.*',
41 category=DeprecationWarning,
42 module=r'deepchecks.tabular.checks.methodology.*'
43 )
44
45 warnings.filterwarnings(
46 action='once',
47 message=r'deepchecks.tabular.checks.distribution is deprecated.*',
48 category=DeprecationWarning,
49 module=r'deepchecks.*'
50 )
51
52 warnings.filterwarnings(
53 action='always',
54 message=r'the single_dataset_integrity suite is deprecated.*',
55 category=DeprecationWarning,
56 module=r'deepchecks.*'
57 )
58
59 warnings.filterwarnings(
60 action='always',
61 message=r'the train_test_leakage suite is deprecated.*',
62 category=DeprecationWarning,
63 module=r'deepchecks.*'
64 )
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/deepchecks/tabular/deprecation_warnings.py b/deepchecks/tabular/deprecation_warnings.py
--- a/deepchecks/tabular/deprecation_warnings.py
+++ b/deepchecks/tabular/deprecation_warnings.py
@@ -62,3 +62,20 @@
category=DeprecationWarning,
module=r'deepchecks.*'
)
+
+# Added in 0.7 Warning filters for drift conditions
+# Should be removed in 0.8
+
+warnings.filterwarnings(
+ action='once',
+ message=r'.*max_allowed_psi_score is deprecated.*',
+ category=DeprecationWarning,
+ module=r'deepchecks.*'
+)
+
+warnings.filterwarnings(
+ action='once',
+ message=r'.*max_allowed_earth_movers_score is deprecated.*',
+ category=DeprecationWarning,
+ module=r'deepchecks.*'
+)
diff --git a/deepchecks/vision/deprecation_warnings.py b/deepchecks/vision/deprecation_warnings.py
--- a/deepchecks/vision/deprecation_warnings.py
+++ b/deepchecks/vision/deprecation_warnings.py
@@ -48,3 +48,20 @@
category=DeprecationWarning,
module=r'deepchecks.*'
)
+
+# Added in 0.7 Warning filters for drift conditions
+# Should be removed in 0.8
+
+warnings.filterwarnings(
+ action='once',
+ message=r'.*max_allowed_psi_score is deprecated.*',
+ category=DeprecationWarning,
+ module=r'deepchecks.*'
+)
+
+warnings.filterwarnings(
+ action='once',
+ message=r'.*max_allowed_earth_movers_score is deprecated.*',
+ category=DeprecationWarning,
+ module=r'deepchecks.*'
+)
|
{"golden_diff": "diff --git a/deepchecks/tabular/deprecation_warnings.py b/deepchecks/tabular/deprecation_warnings.py\n--- a/deepchecks/tabular/deprecation_warnings.py\n+++ b/deepchecks/tabular/deprecation_warnings.py\n@@ -62,3 +62,20 @@\n category=DeprecationWarning,\n module=r'deepchecks.*'\n )\n+\n+# Added in 0.7 Warning filters for drift conditions\n+# Should be removed in 0.8\n+\n+warnings.filterwarnings(\n+ action='once',\n+ message=r'.*max_allowed_psi_score is deprecated.*',\n+ category=DeprecationWarning,\n+ module=r'deepchecks.*'\n+)\n+\n+warnings.filterwarnings(\n+ action='once',\n+ message=r'.*max_allowed_earth_movers_score is deprecated.*',\n+ category=DeprecationWarning,\n+ module=r'deepchecks.*'\n+)\ndiff --git a/deepchecks/vision/deprecation_warnings.py b/deepchecks/vision/deprecation_warnings.py\n--- a/deepchecks/vision/deprecation_warnings.py\n+++ b/deepchecks/vision/deprecation_warnings.py\n@@ -48,3 +48,20 @@\n category=DeprecationWarning,\n module=r'deepchecks.*'\n )\n+\n+# Added in 0.7 Warning filters for drift conditions\n+# Should be removed in 0.8\n+\n+warnings.filterwarnings(\n+ action='once',\n+ message=r'.*max_allowed_psi_score is deprecated.*',\n+ category=DeprecationWarning,\n+ module=r'deepchecks.*'\n+)\n+\n+warnings.filterwarnings(\n+ action='once',\n+ message=r'.*max_allowed_earth_movers_score is deprecated.*',\n+ category=DeprecationWarning,\n+ module=r'deepchecks.*'\n+)\n", "issue": "[BUG][HOTFIX] Add deprecation warning for condition parameter names\nIn 0.7 the drift checks were updated to add Cramer's V.\r\nThus the previous parameter condition names (\"max_psi... \"max_earthmovers...\") were renamed.\r\nPrevious parameters should still be re-added with deprecation warnings, for all relevant checks and conditions.\r\n\n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. 
If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"This file changes default 'ignore' action of DeprecationWarnings for specific deprecation messages.\"\"\"\nimport warnings\n\n# Added in version 0.6.2, deprecates max_num_categories in all drift checks\nwarnings.filterwarnings(\n action='always',\n message=r'.*max_num_categories.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\n# Added in 0.7 Warning filters for deprecated functions in deepchecks.tabular.checks\n# Should be removed in 0.8\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.vision.checks.performance is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.vision.checks.methodology is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.tabular.checks.methodology.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.vision.checks.distribution is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='always',\n message=r'the integrity_validation suite is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n", "path": "deepchecks/vision/deprecation_warnings.py"}, {"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"This file changes default 'ignore' action of DeprecationWarnings for specific deprecation messages.\"\"\"\nimport warnings\n\n# Added in version 0.6.2, deprecates max_num_categories in all drift checks\nwarnings.filterwarnings(\n action='always',\n message=r'.*max_num_categories.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\n# Added in 0.7 Warning filters for deprecated functions in deepchecks.tabular.checks\n# Should be removed in 0.8\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks\\.tabular\\.checks\\.performance is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.tabular.checks.overview is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.tabular.checks.methodology is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.tabular.checks.methodology.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.tabular.checks.distribution is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='always',\n message=r'the single_dataset_integrity suite is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='always',\n message=r'the train_test_leakage suite is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n", "path": "deepchecks/tabular/deprecation_warnings.py"}], "after_files": [{"content": "# 
----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"This file changes default 'ignore' action of DeprecationWarnings for specific deprecation messages.\"\"\"\nimport warnings\n\n# Added in version 0.6.2, deprecates max_num_categories in all drift checks\nwarnings.filterwarnings(\n action='always',\n message=r'.*max_num_categories.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\n# Added in 0.7 Warning filters for deprecated functions in deepchecks.tabular.checks\n# Should be removed in 0.8\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.vision.checks.performance is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.vision.checks.methodology is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.tabular.checks.methodology.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.vision.checks.distribution is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='always',\n message=r'the integrity_validation suite is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\n# Added in 0.7 Warning filters for drift conditions\n# Should be removed in 0.8\n\nwarnings.filterwarnings(\n action='once',\n message=r'.*max_allowed_psi_score is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'.*max_allowed_earth_movers_score is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n", "path": "deepchecks/vision/deprecation_warnings.py"}, {"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. 
If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"This file changes default 'ignore' action of DeprecationWarnings for specific deprecation messages.\"\"\"\nimport warnings\n\n# Added in version 0.6.2, deprecates max_num_categories in all drift checks\nwarnings.filterwarnings(\n action='always',\n message=r'.*max_num_categories.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\n# Added in 0.7 Warning filters for deprecated functions in deepchecks.tabular.checks\n# Should be removed in 0.8\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks\\.tabular\\.checks\\.performance is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.tabular.checks.overview is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.tabular.checks.methodology is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.tabular.checks.methodology.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.tabular.checks.distribution is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='always',\n message=r'the single_dataset_integrity suite is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='always',\n message=r'the train_test_leakage suite is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\n# Added in 0.7 Warning filters for drift conditions\n# Should be removed in 0.8\n\nwarnings.filterwarnings(\n action='once',\n message=r'.*max_allowed_psi_score is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'.*max_allowed_earth_movers_score is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n", "path": "deepchecks/tabular/deprecation_warnings.py"}]}
| 1,446 | 402 |
gh_patches_debug_20513
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-1896
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error processing 'rule_notify' on 'QuotaMailPlugin': 'int' object has no attribute 'lower'
https://beta.getsentry.com/sentry/sentry/group/56738068/
```
AttributeError: 'int' object has no attribute 'lower'
(11 additional frame(s) were not displayed)
...
File "sentry/interfaces/user.py", line 99, in to_email_html
return render_to_string('sentry/partial/interfaces/user_email.html', context)
File "sentry/web/helpers.py", line 122, in render_to_string
return loader.render_to_string(template, context)
File "templatetag_sugar/node.py", line 19, in render
return self.function(context, *args, **kwargs)
File "sentry/templatetags/sentry_helpers.py", line 314, in gravatar_url
return get_gravatar_url(email, size, default)
File "sentry/utils/avatar.py", line 18, in get_gravatar_url
hashlib.md5(email.lower()).hexdigest())
Stacktrace (most recent call last):
Error processing 'rule_notify' on 'QuotaMailPlugin': 'int' object has no attribute 'lower'
File "sentry/partial/interfaces/user_email.html", line 7
<tr>
{% if user_email %}
<td style="width:84px;padding-top:10px;vertical-align:top">
<img src="{% gravatar_url user_email size 64 %}">
</td>
{% endif %}
<td>
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/interfaces/user.py`
Content:
```
1 """
2 sentry.interfaces.user
3 ~~~~~~~~~~~~~~~~~~~~~~
4
5 :copyright: (c) 2010-2014 by the Sentry Team, see AUTHORS for more details.
6 :license: BSD, see LICENSE for more details.
7 """
8 from __future__ import absolute_import
9
10 __all__ = ('User',)
11
12 from sentry.interfaces.base import Interface
13 from sentry.utils.safe import trim, trim_dict
14 from sentry.web.helpers import render_to_string
15 from ipaddr import IPAddress
16
17
18 def validate_ip(value, required=True):
19 if not required and not value:
20 return
21
22 # will raise a ValueError
23 IPAddress(value)
24 return value
25
26
27 class User(Interface):
28 """
29 An interface which describes the authenticated User for a request.
30
31 You should provide **at least** either an `id` (a unique identifier for
32 an authenticated user) or `ip_address` (their IP address).
33
34 All other attributes are optional.
35
36 >>> {
37 >>> "id": "unique_id",
38 >>> "username": "my_user",
39 >>> "email": "[email protected]"
40 >>> "ip_address": "127.0.0.1",
41 >>> "optional": "value"
42 >>> }
43 """
44 @classmethod
45 def to_python(cls, data):
46 data = data.copy()
47
48 extra_data = data.pop('data', data)
49 if not isinstance(extra_data, dict):
50 extra_data = {}
51
52 kwargs = {
53 'id': trim(data.pop('id', None), 128),
54 'email': trim(data.pop('email', None), 128),
55 'username': trim(data.pop('username', None), 128),
56 'ip_address': validate_ip(data.pop('ip_address', None), False),
57 }
58
59 kwargs['data'] = trim_dict(extra_data)
60 return cls(**kwargs)
61
62 def get_api_context(self, is_public=False):
63 return {
64 'id': self.id,
65 'email': self.email,
66 'username': self.username,
67 'ipAddress': self.ip_address,
68 'data': self.data,
69 }
70
71 def get_path(self):
72 return 'sentry.interfaces.User'
73
74 def get_hash(self):
75 return []
76
77 def to_email_html(self, event, **kwargs):
78 context = {
79 'user_id': self.id,
80 'user_email': self.email,
81 'user_username': self.username,
82 'user_ip_address': self.ip_address,
83 'user_data': self.data,
84 }
85 return render_to_string('sentry/partial/interfaces/user_email.html', context)
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/sentry/interfaces/user.py b/src/sentry/interfaces/user.py
--- a/src/sentry/interfaces/user.py
+++ b/src/sentry/interfaces/user.py
@@ -15,6 +15,19 @@
from ipaddr import IPAddress
+def validate_email(value, required=True):
+ if not required and not value:
+ return
+
+ if not isinstance(value, basestring):
+ raise TypeError('object of type %r is not an email address' % type(value).__name__)
+
+ # safe to assume an email address at least has a @ in it.
+ if '@' not in value:
+ raise ValueError('malformed email address')
+ return value
+
+
def validate_ip(value, required=True):
if not required and not value:
return
@@ -51,7 +64,7 @@
kwargs = {
'id': trim(data.pop('id', None), 128),
- 'email': trim(data.pop('email', None), 128),
+ 'email': trim(validate_email(data.pop('email', None), False), 128),
'username': trim(data.pop('username', None), 128),
'ip_address': validate_ip(data.pop('ip_address', None), False),
}
|
{"golden_diff": "diff --git a/src/sentry/interfaces/user.py b/src/sentry/interfaces/user.py\n--- a/src/sentry/interfaces/user.py\n+++ b/src/sentry/interfaces/user.py\n@@ -15,6 +15,19 @@\n from ipaddr import IPAddress\n \n \n+def validate_email(value, required=True):\n+ if not required and not value:\n+ return\n+\n+ if not isinstance(value, basestring):\n+ raise TypeError('object of type %r is not an email address' % type(value).__name__)\n+\n+ # safe to assume an email address at least has a @ in it.\n+ if '@' not in value:\n+ raise ValueError('malformed email address')\n+ return value\n+\n+\n def validate_ip(value, required=True):\n if not required and not value:\n return\n@@ -51,7 +64,7 @@\n \n kwargs = {\n 'id': trim(data.pop('id', None), 128),\n- 'email': trim(data.pop('email', None), 128),\n+ 'email': trim(validate_email(data.pop('email', None), False), 128),\n 'username': trim(data.pop('username', None), 128),\n 'ip_address': validate_ip(data.pop('ip_address', None), False),\n }\n", "issue": "Error processing 'rule_notify' on 'QuotaMailPlugin': 'int' object has no attribute 'lower'\nhttps://beta.getsentry.com/sentry/sentry/group/56738068/\n\n```\nAttributeError: 'int' object has no attribute 'lower'\n(11 additional frame(s) were not displayed)\n...\n File \"sentry/interfaces/user.py\", line 99, in to_email_html\n return render_to_string('sentry/partial/interfaces/user_email.html', context)\n File \"sentry/web/helpers.py\", line 122, in render_to_string\n return loader.render_to_string(template, context)\n File \"templatetag_sugar/node.py\", line 19, in render\n return self.function(context, *args, **kwargs)\n File \"sentry/templatetags/sentry_helpers.py\", line 314, in gravatar_url\n return get_gravatar_url(email, size, default)\n File \"sentry/utils/avatar.py\", line 18, in get_gravatar_url\n hashlib.md5(email.lower()).hexdigest())\n\nStacktrace (most recent call last):\n\nError processing 'rule_notify' on 'QuotaMailPlugin': 'int' object has no attribute 'lower'\n\nFile \"sentry/partial/interfaces/user_email.html\", line 7\n\n <tr>\n {% if user_email %}\n <td style=\"width:84px;padding-top:10px;vertical-align:top\">\n <img src=\"{% gravatar_url user_email size 64 %}\">\n </td>\n {% endif %}\n <td>\n```\n\n", "before_files": [{"content": "\"\"\"\nsentry.interfaces.user\n~~~~~~~~~~~~~~~~~~~~~~\n\n:copyright: (c) 2010-2014 by the Sentry Team, see AUTHORS for more details.\n:license: BSD, see LICENSE for more details.\n\"\"\"\nfrom __future__ import absolute_import\n\n__all__ = ('User',)\n\nfrom sentry.interfaces.base import Interface\nfrom sentry.utils.safe import trim, trim_dict\nfrom sentry.web.helpers import render_to_string\nfrom ipaddr import IPAddress\n\n\ndef validate_ip(value, required=True):\n if not required and not value:\n return\n\n # will raise a ValueError\n IPAddress(value)\n return value\n\n\nclass User(Interface):\n \"\"\"\n An interface which describes the authenticated User for a request.\n\n You should provide **at least** either an `id` (a unique identifier for\n an authenticated user) or `ip_address` (their IP address).\n\n All other attributes are optional.\n\n >>> {\n >>> \"id\": \"unique_id\",\n >>> \"username\": \"my_user\",\n >>> \"email\": \"[email protected]\"\n >>> \"ip_address\": \"127.0.0.1\",\n >>> \"optional\": \"value\"\n >>> }\n \"\"\"\n @classmethod\n def to_python(cls, data):\n data = data.copy()\n\n extra_data = data.pop('data', data)\n if not isinstance(extra_data, dict):\n extra_data = {}\n\n kwargs = {\n 'id': trim(data.pop('id', None), 
128),\n 'email': trim(data.pop('email', None), 128),\n 'username': trim(data.pop('username', None), 128),\n 'ip_address': validate_ip(data.pop('ip_address', None), False),\n }\n\n kwargs['data'] = trim_dict(extra_data)\n return cls(**kwargs)\n\n def get_api_context(self, is_public=False):\n return {\n 'id': self.id,\n 'email': self.email,\n 'username': self.username,\n 'ipAddress': self.ip_address,\n 'data': self.data,\n }\n\n def get_path(self):\n return 'sentry.interfaces.User'\n\n def get_hash(self):\n return []\n\n def to_email_html(self, event, **kwargs):\n context = {\n 'user_id': self.id,\n 'user_email': self.email,\n 'user_username': self.username,\n 'user_ip_address': self.ip_address,\n 'user_data': self.data,\n }\n return render_to_string('sentry/partial/interfaces/user_email.html', context)\n", "path": "src/sentry/interfaces/user.py"}], "after_files": [{"content": "\"\"\"\nsentry.interfaces.user\n~~~~~~~~~~~~~~~~~~~~~~\n\n:copyright: (c) 2010-2014 by the Sentry Team, see AUTHORS for more details.\n:license: BSD, see LICENSE for more details.\n\"\"\"\nfrom __future__ import absolute_import\n\n__all__ = ('User',)\n\nfrom sentry.interfaces.base import Interface\nfrom sentry.utils.safe import trim, trim_dict\nfrom sentry.web.helpers import render_to_string\nfrom ipaddr import IPAddress\n\n\ndef validate_email(value, required=True):\n if not required and not value:\n return\n\n if not isinstance(value, basestring):\n raise TypeError('object of type %r is not an email address' % type(value).__name__)\n\n # safe to assume an email address at least has a @ in it.\n if '@' not in value:\n raise ValueError('malformed email address')\n return value\n\n\ndef validate_ip(value, required=True):\n if not required and not value:\n return\n\n # will raise a ValueError\n IPAddress(value)\n return value\n\n\nclass User(Interface):\n \"\"\"\n An interface which describes the authenticated User for a request.\n\n You should provide **at least** either an `id` (a unique identifier for\n an authenticated user) or `ip_address` (their IP address).\n\n All other attributes are optional.\n\n >>> {\n >>> \"id\": \"unique_id\",\n >>> \"username\": \"my_user\",\n >>> \"email\": \"[email protected]\"\n >>> \"ip_address\": \"127.0.0.1\",\n >>> \"optional\": \"value\"\n >>> }\n \"\"\"\n @classmethod\n def to_python(cls, data):\n data = data.copy()\n\n extra_data = data.pop('data', data)\n if not isinstance(extra_data, dict):\n extra_data = {}\n\n kwargs = {\n 'id': trim(data.pop('id', None), 128),\n 'email': trim(validate_email(data.pop('email', None), False), 128),\n 'username': trim(data.pop('username', None), 128),\n 'ip_address': validate_ip(data.pop('ip_address', None), False),\n }\n\n kwargs['data'] = trim_dict(extra_data)\n return cls(**kwargs)\n\n def get_api_context(self, is_public=False):\n return {\n 'id': self.id,\n 'email': self.email,\n 'username': self.username,\n 'ipAddress': self.ip_address,\n 'data': self.data,\n }\n\n def get_path(self):\n return 'sentry.interfaces.User'\n\n def get_hash(self):\n return []\n\n def to_email_html(self, event, **kwargs):\n context = {\n 'user_id': self.id,\n 'user_email': self.email,\n 'user_username': self.username,\n 'user_ip_address': self.ip_address,\n 'user_data': self.data,\n }\n return render_to_string('sentry/partial/interfaces/user_email.html', context)\n", "path": "src/sentry/interfaces/user.py"}]}
| 1,332 | 284 |
gh_patches_debug_35633
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-583
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix several configuration issues
Fix the pending issues in #563 as pointed out by @mauriciovasquezbernal.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-api/src/opentelemetry/configuration/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # FIXME find a better way to avoid all those "Expression has type "Any"" errors
16 # type: ignore
17
18 """
19 Simple configuration manager
20
21 This is a configuration manager for OpenTelemetry. It reads configuration
22 values from environment variables prefixed with
23 ``OPENTELEMETRY_PYTHON_`` whose characters are only all caps and underscores.
24 The first character after ``OPENTELEMETRY_PYTHON_`` must be an uppercase
25 character.
26
27 For example, these environment variables will be read:
28
29 1. ``OPENTELEMETRY_PYTHON_SOMETHING``
30 2. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_``
31 3. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND__ELSE``
32
33 These won't:
34
35 1. ``OPENTELEMETRY_PYTH_SOMETHING``
36 2. ``OPENTELEMETRY_PYTHON_something``
37 3. ``OPENTELEMETRY_PYTHON_SOMETHING_2_AND__ELSE``
38 4. ``OPENTELEMETRY_PYTHON_SOMETHING_%_ELSE``
39
40 The values stored in the environment variables can be found in an instance of
41 ``opentelemetry.configuration.Configuration``. This class can be instantiated
42 freely because instantiating it returns a singleton.
43
44 For example, if the environment variable
45 ``OPENTELEMETRY_PYTHON_METER_PROVIDER`` value is ``my_meter_provider``, then
46 ``Configuration().meter_provider == "my_meter_provider"`` would be ``True``.
47
48 Non defined attributes will always return ``None``. This is intended to make it
49 easier to use the ``Configuration`` object in actual code, because it won't be
50 necessary to check for the attribute to be defined first.
51
52 Environment variables used by OpenTelemetry
53 -------------------------------------------
54
55 1. OPENTELEMETRY_PYTHON_METER_PROVIDER
56 2. OPENTELEMETRY_PYTHON_TRACER_PROVIDER
57
58 The value of these environment variables should be the name of the entry point
59 that points to the class that implements either provider. This OpenTelemetry
60 API package provides one entry point for each, which can be found in the
61 setup.py file::
62
63 entry_points={
64 ...
65 "opentelemetry_meter_provider": [
66 "default_meter_provider = "
67 "opentelemetry.metrics:DefaultMeterProvider"
68 ],
69 "opentelemetry_tracer_provider": [
70 "default_tracer_provider = "
71 "opentelemetry.trace:DefaultTracerProvider"
72 ],
73 }
74
75 To use the meter provider above, then the
76 ``OPENTELEMETRY_PYTHON_METER_PROVIDER`` should be set to
77 "default_meter_provider" (this is not actually necessary since the
78 OpenTelemetry API provided providers are the default ones used if no
79 configuration is found in the environment variables).
80 """
81
82 from os import environ
83 from re import fullmatch
84
85
86 class Configuration:
87 _instance = None
88
89 __slots__ = []
90
91 def __new__(cls) -> "Configuration":
92 if Configuration._instance is None:
93
94 for key, value in environ.items():
95
96 match = fullmatch("OPENTELEMETRY_PYTHON_([A-Z][A-Z_]*)", key)
97
98 if match is not None:
99
100 key = match.group(1).lower()
101
102 setattr(Configuration, "_{}".format(key), value)
103 setattr(
104 Configuration,
105 key,
106 property(
107 fget=lambda cls, key=key: getattr(
108 cls, "_{}".format(key)
109 )
110 ),
111 )
112
113 Configuration.__slots__.append(key)
114
115 Configuration.__slots__ = tuple(Configuration.__slots__)
116
117 Configuration._instance = object.__new__(cls)
118
119 return cls._instance
120
121 def __getattr__(self, name):
122 return None
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opentelemetry-api/src/opentelemetry/configuration/__init__.py b/opentelemetry-api/src/opentelemetry/configuration/__init__.py
--- a/opentelemetry-api/src/opentelemetry/configuration/__init__.py
+++ b/opentelemetry-api/src/opentelemetry/configuration/__init__.py
@@ -19,27 +19,27 @@
Simple configuration manager
This is a configuration manager for OpenTelemetry. It reads configuration
-values from environment variables prefixed with
-``OPENTELEMETRY_PYTHON_`` whose characters are only all caps and underscores.
-The first character after ``OPENTELEMETRY_PYTHON_`` must be an uppercase
-character.
+values from environment variables prefixed with ``OPENTELEMETRY_PYTHON_`` whose
+characters are only alphanumeric characters and unserscores, except for the
+first character after ``OPENTELEMETRY_PYTHON_`` which must not be a number.
For example, these environment variables will be read:
1. ``OPENTELEMETRY_PYTHON_SOMETHING``
2. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_``
3. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND__ELSE``
+4. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND_else``
+4. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND_else2``
These won't:
1. ``OPENTELEMETRY_PYTH_SOMETHING``
-2. ``OPENTELEMETRY_PYTHON_something``
-3. ``OPENTELEMETRY_PYTHON_SOMETHING_2_AND__ELSE``
-4. ``OPENTELEMETRY_PYTHON_SOMETHING_%_ELSE``
+2. ``OPENTELEMETRY_PYTHON_2_SOMETHING_AND__ELSE``
+3. ``OPENTELEMETRY_PYTHON_SOMETHING_%_ELSE``
The values stored in the environment variables can be found in an instance of
``opentelemetry.configuration.Configuration``. This class can be instantiated
-freely because instantiating it returns a singleton.
+freely because instantiating it returns always the same object.
For example, if the environment variable
``OPENTELEMETRY_PYTHON_METER_PROVIDER`` value is ``my_meter_provider``, then
@@ -93,11 +93,13 @@
for key, value in environ.items():
- match = fullmatch("OPENTELEMETRY_PYTHON_([A-Z][A-Z_]*)", key)
+ match = fullmatch(
+ r"OPENTELEMETRY_PYTHON_([A-Za-z_][\w_]*)", key
+ )
if match is not None:
- key = match.group(1).lower()
+ key = match.group(1)
setattr(Configuration, "_{}".format(key), value)
setattr(
|
{"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/configuration/__init__.py b/opentelemetry-api/src/opentelemetry/configuration/__init__.py\n--- a/opentelemetry-api/src/opentelemetry/configuration/__init__.py\n+++ b/opentelemetry-api/src/opentelemetry/configuration/__init__.py\n@@ -19,27 +19,27 @@\n Simple configuration manager\n \n This is a configuration manager for OpenTelemetry. It reads configuration\n-values from environment variables prefixed with\n-``OPENTELEMETRY_PYTHON_`` whose characters are only all caps and underscores.\n-The first character after ``OPENTELEMETRY_PYTHON_`` must be an uppercase\n-character.\n+values from environment variables prefixed with ``OPENTELEMETRY_PYTHON_`` whose\n+characters are only alphanumeric characters and unserscores, except for the\n+first character after ``OPENTELEMETRY_PYTHON_`` which must not be a number.\n \n For example, these environment variables will be read:\n \n 1. ``OPENTELEMETRY_PYTHON_SOMETHING``\n 2. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_``\n 3. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND__ELSE``\n+4. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND_else``\n+4. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND_else2``\n \n These won't:\n \n 1. ``OPENTELEMETRY_PYTH_SOMETHING``\n-2. ``OPENTELEMETRY_PYTHON_something``\n-3. ``OPENTELEMETRY_PYTHON_SOMETHING_2_AND__ELSE``\n-4. ``OPENTELEMETRY_PYTHON_SOMETHING_%_ELSE``\n+2. ``OPENTELEMETRY_PYTHON_2_SOMETHING_AND__ELSE``\n+3. ``OPENTELEMETRY_PYTHON_SOMETHING_%_ELSE``\n \n The values stored in the environment variables can be found in an instance of\n ``opentelemetry.configuration.Configuration``. This class can be instantiated\n-freely because instantiating it returns a singleton.\n+freely because instantiating it returns always the same object.\n \n For example, if the environment variable\n ``OPENTELEMETRY_PYTHON_METER_PROVIDER`` value is ``my_meter_provider``, then\n@@ -93,11 +93,13 @@\n \n for key, value in environ.items():\n \n- match = fullmatch(\"OPENTELEMETRY_PYTHON_([A-Z][A-Z_]*)\", key)\n+ match = fullmatch(\n+ r\"OPENTELEMETRY_PYTHON_([A-Za-z_][\\w_]*)\", key\n+ )\n \n if match is not None:\n \n- key = match.group(1).lower()\n+ key = match.group(1)\n \n setattr(Configuration, \"_{}\".format(key), value)\n setattr(\n", "issue": "Fix several configuration issues\nFix the pending issues in #563 as pointed out by @mauriciovasquezbernal. \n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# FIXME find a better way to avoid all those \"Expression has type \"Any\"\" errors\n# type: ignore\n\n\"\"\"\nSimple configuration manager\n\nThis is a configuration manager for OpenTelemetry. It reads configuration\nvalues from environment variables prefixed with\n``OPENTELEMETRY_PYTHON_`` whose characters are only all caps and underscores.\nThe first character after ``OPENTELEMETRY_PYTHON_`` must be an uppercase\ncharacter.\n\nFor example, these environment variables will be read:\n\n1. ``OPENTELEMETRY_PYTHON_SOMETHING``\n2. 
``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_``\n3. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND__ELSE``\n\nThese won't:\n\n1. ``OPENTELEMETRY_PYTH_SOMETHING``\n2. ``OPENTELEMETRY_PYTHON_something``\n3. ``OPENTELEMETRY_PYTHON_SOMETHING_2_AND__ELSE``\n4. ``OPENTELEMETRY_PYTHON_SOMETHING_%_ELSE``\n\nThe values stored in the environment variables can be found in an instance of\n``opentelemetry.configuration.Configuration``. This class can be instantiated\nfreely because instantiating it returns a singleton.\n\nFor example, if the environment variable\n``OPENTELEMETRY_PYTHON_METER_PROVIDER`` value is ``my_meter_provider``, then\n``Configuration().meter_provider == \"my_meter_provider\"`` would be ``True``.\n\nNon defined attributes will always return ``None``. This is intended to make it\neasier to use the ``Configuration`` object in actual code, because it won't be\nnecessary to check for the attribute to be defined first.\n\nEnvironment variables used by OpenTelemetry\n-------------------------------------------\n\n1. OPENTELEMETRY_PYTHON_METER_PROVIDER\n2. OPENTELEMETRY_PYTHON_TRACER_PROVIDER\n\nThe value of these environment variables should be the name of the entry point\nthat points to the class that implements either provider. This OpenTelemetry\nAPI package provides one entry point for each, which can be found in the\nsetup.py file::\n\n entry_points={\n ...\n \"opentelemetry_meter_provider\": [\n \"default_meter_provider = \"\n \"opentelemetry.metrics:DefaultMeterProvider\"\n ],\n \"opentelemetry_tracer_provider\": [\n \"default_tracer_provider = \"\n \"opentelemetry.trace:DefaultTracerProvider\"\n ],\n }\n\nTo use the meter provider above, then the\n``OPENTELEMETRY_PYTHON_METER_PROVIDER`` should be set to\n\"default_meter_provider\" (this is not actually necessary since the\nOpenTelemetry API provided providers are the default ones used if no\nconfiguration is found in the environment variables).\n\"\"\"\n\nfrom os import environ\nfrom re import fullmatch\n\n\nclass Configuration:\n _instance = None\n\n __slots__ = []\n\n def __new__(cls) -> \"Configuration\":\n if Configuration._instance is None:\n\n for key, value in environ.items():\n\n match = fullmatch(\"OPENTELEMETRY_PYTHON_([A-Z][A-Z_]*)\", key)\n\n if match is not None:\n\n key = match.group(1).lower()\n\n setattr(Configuration, \"_{}\".format(key), value)\n setattr(\n Configuration,\n key,\n property(\n fget=lambda cls, key=key: getattr(\n cls, \"_{}\".format(key)\n )\n ),\n )\n\n Configuration.__slots__.append(key)\n\n Configuration.__slots__ = tuple(Configuration.__slots__)\n\n Configuration._instance = object.__new__(cls)\n\n return cls._instance\n\n def __getattr__(self, name):\n return None\n", "path": "opentelemetry-api/src/opentelemetry/configuration/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# FIXME find a better way to avoid all those \"Expression has type \"Any\"\" errors\n# type: ignore\n\n\"\"\"\nSimple configuration manager\n\nThis 
is a configuration manager for OpenTelemetry. It reads configuration\nvalues from environment variables prefixed with ``OPENTELEMETRY_PYTHON_`` whose\ncharacters are only alphanumeric characters and unserscores, except for the\nfirst character after ``OPENTELEMETRY_PYTHON_`` which must not be a number.\n\nFor example, these environment variables will be read:\n\n1. ``OPENTELEMETRY_PYTHON_SOMETHING``\n2. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_``\n3. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND__ELSE``\n4. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND_else``\n4. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND_else2``\n\nThese won't:\n\n1. ``OPENTELEMETRY_PYTH_SOMETHING``\n2. ``OPENTELEMETRY_PYTHON_2_SOMETHING_AND__ELSE``\n3. ``OPENTELEMETRY_PYTHON_SOMETHING_%_ELSE``\n\nThe values stored in the environment variables can be found in an instance of\n``opentelemetry.configuration.Configuration``. This class can be instantiated\nfreely because instantiating it returns always the same object.\n\nFor example, if the environment variable\n``OPENTELEMETRY_PYTHON_METER_PROVIDER`` value is ``my_meter_provider``, then\n``Configuration().meter_provider == \"my_meter_provider\"`` would be ``True``.\n\nNon defined attributes will always return ``None``. This is intended to make it\neasier to use the ``Configuration`` object in actual code, because it won't be\nnecessary to check for the attribute to be defined first.\n\nEnvironment variables used by OpenTelemetry\n-------------------------------------------\n\n1. OPENTELEMETRY_PYTHON_METER_PROVIDER\n2. OPENTELEMETRY_PYTHON_TRACER_PROVIDER\n\nThe value of these environment variables should be the name of the entry point\nthat points to the class that implements either provider. This OpenTelemetry\nAPI package provides one entry point for each, which can be found in the\nsetup.py file::\n\n entry_points={\n ...\n \"opentelemetry_meter_provider\": [\n \"default_meter_provider = \"\n \"opentelemetry.metrics:DefaultMeterProvider\"\n ],\n \"opentelemetry_tracer_provider\": [\n \"default_tracer_provider = \"\n \"opentelemetry.trace:DefaultTracerProvider\"\n ],\n }\n\nTo use the meter provider above, then the\n``OPENTELEMETRY_PYTHON_METER_PROVIDER`` should be set to\n\"default_meter_provider\" (this is not actually necessary since the\nOpenTelemetry API provided providers are the default ones used if no\nconfiguration is found in the environment variables).\n\"\"\"\n\nfrom os import environ\nfrom re import fullmatch\n\n\nclass Configuration:\n _instance = None\n\n __slots__ = []\n\n def __new__(cls) -> \"Configuration\":\n if Configuration._instance is None:\n\n for key, value in environ.items():\n\n match = fullmatch(\n r\"OPENTELEMETRY_PYTHON_([A-Za-z_][\\w_]*)\", key\n )\n\n if match is not None:\n\n key = match.group(1)\n\n setattr(Configuration, \"_{}\".format(key), value)\n setattr(\n Configuration,\n key,\n property(\n fget=lambda cls, key=key: getattr(\n cls, \"_{}\".format(key)\n )\n ),\n )\n\n Configuration.__slots__.append(key)\n\n Configuration.__slots__ = tuple(Configuration.__slots__)\n\n Configuration._instance = object.__new__(cls)\n\n return cls._instance\n\n def __getattr__(self, name):\n return None\n", "path": "opentelemetry-api/src/opentelemetry/configuration/__init__.py"}]}
| 1,490 | 638 |
gh_patches_debug_61676
|
rasdani/github-patches
|
git_diff
|
joke2k__faker-1710
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Lower bound for typing_extensions?
* Faker version: 14.1.0
* OS: OpenSUSE Leap 15.3
For old Python versions (before Python 3.8), *faker* requires the *typing_extensions* with at least version 3.10.0.2. Due to some other dependencies inside my project, I am required to use version 3.7.4.3.
### Steps to reproduce
1. `pip install faker`
### Expected behavior
*faker* works with `typing_extensions==3.7.4.3` as well, to allow better integration with larger systems which are stuck at such older versions for some reasons.
### Actual behavior
*faker* declares `typing_extensions>=3.10.0.2` (#1557), although I could see no issue when using it with `typing_extensions==3.7.4.3`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 from pathlib import Path
4
5 from setuptools import find_packages, setup
6
7 here = Path(__file__).resolve().parent
8 README = (here / "README.rst").read_text(encoding="utf-8")
9 VERSION = (here / "VERSION").read_text(encoding="utf-8").strip()
10
11 excluded_packages = ["docs", "tests", "tests.*"]
12
13
14 # this module can be zip-safe if the zipimporter implements iter_modules or if
15 # pkgutil.iter_importer_modules has registered a dispatch for the zipimporter.
16 try:
17 import pkgutil
18 import zipimport
19
20 zip_safe = (
21 hasattr(zipimport.zipimporter, "iter_modules")
22 or zipimport.zipimporter in pkgutil.iter_importer_modules.registry.keys()
23 )
24 except AttributeError:
25 zip_safe = False
26
27 setup(
28 name="Faker",
29 version=VERSION,
30 description="Faker is a Python package that generates fake data for you.",
31 long_description=README,
32 entry_points={
33 "console_scripts": ["faker=faker.cli:execute_from_command_line"],
34 "pytest11": ["faker = faker.contrib.pytest.plugin"],
35 },
36 classifiers=[
37 # See https://pypi.org/pypi?%3Aaction=list_classifiers
38 "Development Status :: 5 - Production/Stable",
39 "Environment :: Console",
40 "Intended Audience :: Developers",
41 "Programming Language :: Python",
42 "Programming Language :: Python :: 3",
43 "Programming Language :: Python :: 3 :: Only",
44 "Programming Language :: Python :: 3.6",
45 "Programming Language :: Python :: 3.7",
46 "Programming Language :: Python :: 3.8",
47 "Programming Language :: Python :: 3.9",
48 "Programming Language :: Python :: Implementation :: CPython",
49 "Programming Language :: Python :: Implementation :: PyPy",
50 "Topic :: Software Development :: Libraries :: Python Modules",
51 "Topic :: Software Development :: Testing",
52 "Topic :: Utilities",
53 "License :: OSI Approved :: MIT License",
54 ],
55 keywords="faker fixtures data test mock generator",
56 author="joke2k",
57 author_email="[email protected]",
58 url="https://github.com/joke2k/faker",
59 project_urls={
60 "Bug Tracker": "https://github.com/joke2k/faker/issues",
61 "Changes": "https://github.com/joke2k/faker/blob/master/CHANGELOG.md",
62 "Documentation": "http://faker.rtfd.org/",
63 "Source Code": "https://github.com/joke2k/faker",
64 },
65 license="MIT License",
66 packages=find_packages(exclude=excluded_packages),
67 package_data={
68 "faker": ["py.typed"],
69 },
70 platforms=["any"],
71 zip_safe=zip_safe,
72 python_requires=">=3.6",
73 install_requires=[
74 "python-dateutil>=2.4",
75 "typing-extensions>=3.10.0.2;python_version<'3.8'",
76 ],
77 )
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -72,6 +72,6 @@
python_requires=">=3.6",
install_requires=[
"python-dateutil>=2.4",
- "typing-extensions>=3.10.0.2;python_version<'3.8'",
+ "typing-extensions>=3.7.4.3;python_version<'3.8'",
],
)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -72,6 +72,6 @@\n python_requires=\">=3.6\",\n install_requires=[\n \"python-dateutil>=2.4\",\n- \"typing-extensions>=3.10.0.2;python_version<'3.8'\",\n+ \"typing-extensions>=3.7.4.3;python_version<'3.8'\",\n ],\n )\n", "issue": "Lower bound for typing_extensions?\n* Faker version: 14.1.0\r\n* OS: OpenSUSE Leap 15.3\r\n\r\nFor old Python versions (before Python 3.8), *faker* requires the *typing_extensions* with at least version 3.10.0.2. Due to some other dependencies inside my project, I am required to use version 3.7.4.3.\r\n\r\n### Steps to reproduce\r\n\r\n1. `pip install faker`\r\n\r\n### Expected behavior\r\n\r\n*faker* works with `typing_extensions==3.7.4.3` as well, to allow better integration with larger systems which are stuck at such older versions for some reasons.\r\n\r\n### Actual behavior\r\n\r\n*faker* declares `typing_extensions>=3.10.0.2` (#1557), although I could see no issue when using it with `typing_extensions==3.7.4.3`.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom pathlib import Path\n\nfrom setuptools import find_packages, setup\n\nhere = Path(__file__).resolve().parent\nREADME = (here / \"README.rst\").read_text(encoding=\"utf-8\")\nVERSION = (here / \"VERSION\").read_text(encoding=\"utf-8\").strip()\n\nexcluded_packages = [\"docs\", \"tests\", \"tests.*\"]\n\n\n# this module can be zip-safe if the zipimporter implements iter_modules or if\n# pkgutil.iter_importer_modules has registered a dispatch for the zipimporter.\ntry:\n import pkgutil\n import zipimport\n\n zip_safe = (\n hasattr(zipimport.zipimporter, \"iter_modules\")\n or zipimport.zipimporter in pkgutil.iter_importer_modules.registry.keys()\n )\nexcept AttributeError:\n zip_safe = False\n\nsetup(\n name=\"Faker\",\n version=VERSION,\n description=\"Faker is a Python package that generates fake data for you.\",\n long_description=README,\n entry_points={\n \"console_scripts\": [\"faker=faker.cli:execute_from_command_line\"],\n \"pytest11\": [\"faker = faker.contrib.pytest.plugin\"],\n },\n classifiers=[\n # See https://pypi.org/pypi?%3Aaction=list_classifiers\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n \"Topic :: Software Development :: Testing\",\n \"Topic :: Utilities\",\n \"License :: OSI Approved :: MIT License\",\n ],\n keywords=\"faker fixtures data test mock generator\",\n author=\"joke2k\",\n author_email=\"[email protected]\",\n url=\"https://github.com/joke2k/faker\",\n project_urls={\n \"Bug Tracker\": \"https://github.com/joke2k/faker/issues\",\n \"Changes\": \"https://github.com/joke2k/faker/blob/master/CHANGELOG.md\",\n \"Documentation\": \"http://faker.rtfd.org/\",\n \"Source Code\": \"https://github.com/joke2k/faker\",\n },\n license=\"MIT License\",\n packages=find_packages(exclude=excluded_packages),\n package_data={\n \"faker\": [\"py.typed\"],\n },\n platforms=[\"any\"],\n zip_safe=zip_safe,\n 
python_requires=\">=3.6\",\n install_requires=[\n \"python-dateutil>=2.4\",\n \"typing-extensions>=3.10.0.2;python_version<'3.8'\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nfrom pathlib import Path\n\nfrom setuptools import find_packages, setup\n\nhere = Path(__file__).resolve().parent\nREADME = (here / \"README.rst\").read_text(encoding=\"utf-8\")\nVERSION = (here / \"VERSION\").read_text(encoding=\"utf-8\").strip()\n\nexcluded_packages = [\"docs\", \"tests\", \"tests.*\"]\n\n\n# this module can be zip-safe if the zipimporter implements iter_modules or if\n# pkgutil.iter_importer_modules has registered a dispatch for the zipimporter.\ntry:\n import pkgutil\n import zipimport\n\n zip_safe = (\n hasattr(zipimport.zipimporter, \"iter_modules\")\n or zipimport.zipimporter in pkgutil.iter_importer_modules.registry.keys()\n )\nexcept AttributeError:\n zip_safe = False\n\nsetup(\n name=\"Faker\",\n version=VERSION,\n description=\"Faker is a Python package that generates fake data for you.\",\n long_description=README,\n entry_points={\n \"console_scripts\": [\"faker=faker.cli:execute_from_command_line\"],\n \"pytest11\": [\"faker = faker.contrib.pytest.plugin\"],\n },\n classifiers=[\n # See https://pypi.org/pypi?%3Aaction=list_classifiers\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n \"Topic :: Software Development :: Testing\",\n \"Topic :: Utilities\",\n \"License :: OSI Approved :: MIT License\",\n ],\n keywords=\"faker fixtures data test mock generator\",\n author=\"joke2k\",\n author_email=\"[email protected]\",\n url=\"https://github.com/joke2k/faker\",\n project_urls={\n \"Bug Tracker\": \"https://github.com/joke2k/faker/issues\",\n \"Changes\": \"https://github.com/joke2k/faker/blob/master/CHANGELOG.md\",\n \"Documentation\": \"http://faker.rtfd.org/\",\n \"Source Code\": \"https://github.com/joke2k/faker\",\n },\n license=\"MIT License\",\n packages=find_packages(exclude=excluded_packages),\n package_data={\n \"faker\": [\"py.typed\"],\n },\n platforms=[\"any\"],\n zip_safe=zip_safe,\n python_requires=\">=3.6\",\n install_requires=[\n \"python-dateutil>=2.4\",\n \"typing-extensions>=3.7.4.3;python_version<'3.8'\",\n ],\n)\n", "path": "setup.py"}]}
| 1,246 | 103 |
gh_patches_debug_14376
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-agent-495
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
apache check doesn't support basic auth
If I want basic auth, I'd assume that all I have to do is make my config look like:
```
apache_status_url: http://user:[email protected]:80/server-status?auto
```
But actually that just doesn't work. We need to do some special incantations to make `urllib2` do basic auth.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checks.d/apache.py`
Content:
```
1 import urllib2
2
3 from util import headers
4 from checks import AgentCheck
5
6 class Apache(AgentCheck):
7 """Tracks basic connection/requests/workers metrics
8
9 See http://httpd.apache.org/docs/2.2/mod/mod_status.html for more details
10 """
11 GAUGES = {
12 'IdleWorkers': 'apache.performance.idle_workers',
13 'BusyWorkers': 'apache.performance.busy_workers',
14 'CPULoad': 'apache.performance.cpu_load',
15 'Uptime': 'apache.performance.uptime',
16 'Total kBytes': 'apache.net.bytes',
17 'Total Accesses': 'apache.net.hits',
18 }
19
20 RATES = {
21 'Total kBytes': 'apache.net.bytes_per_s',
22 'Total Accesses': 'apache.net.request_per_s'
23 }
24
25 def __init__(self, name, init_config, agentConfig, instances=None):
26 AgentCheck.__init__(self, name, init_config, agentConfig, instances)
27 self.assumed_url = {}
28
29 def check(self, instance):
30 if 'apache_status_url' not in instance:
31 raise Exception("Missing 'apache_status_url' in Apache config")
32
33 url = self.assumed_url.get(instance['apache_status_url'], instance['apache_status_url'])
34
35 tags = instance.get('tags', [])
36 req = urllib2.Request(url, None,
37 headers(self.agentConfig))
38 request = urllib2.urlopen(req)
39 response = request.read()
40
41 metric_count = 0
42 # Loop through and extract the numerical values
43 for line in response.split('\n'):
44 values = line.split(': ')
45 if len(values) == 2: # match
46 metric, value = values
47 try:
48 value = float(value)
49 except ValueError:
50 continue
51
52 # Special case: kBytes => bytes
53 if metric == 'Total kBytes':
54 value = value * 1024
55
56 # Send metric as a gauge, if applicable
57 if metric in self.GAUGES:
58 metric_count += 1
59 metric_name = self.GAUGES[metric]
60 self.gauge(metric_name, value, tags=tags)
61
62 # Send metric as a rate, if applicable
63 if metric in self.RATES:
64 metric_count += 1
65 metric_name = self.RATES[metric]
66 self.rate(metric_name, value, tags=tags)
67
68 if metric_count == 0:
69 if self.assumed_url.get(instance['apache_status_url'], None) is None and url[-5:] != '?auto':
70 self.assumed_url[instance['apache_status_url']]= '%s?auto' % url
71 self.warning("Assuming url was not correct. Trying to add ?auto suffix to the url")
72 self.check(instance)
73 else:
74 raise Exception("No metrics were fetched for this instance. Make sure that %s is the proper url." % instance['apache_status_url'])
75
76
77 @staticmethod
78 def parse_agent_config(agentConfig):
79 if not agentConfig.get('apache_status_url'):
80 return False
81
82 return {
83 'instances': [{'apache_status_url': agentConfig.get('apache_status_url')}]
84 }
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checks.d/apache.py b/checks.d/apache.py
--- a/checks.d/apache.py
+++ b/checks.d/apache.py
@@ -1,4 +1,5 @@
import urllib2
+import base64
from util import headers
from checks import AgentCheck
@@ -35,6 +36,10 @@
tags = instance.get('tags', [])
req = urllib2.Request(url, None,
headers(self.agentConfig))
+ if 'apache_user' in instance and 'apache_password' in instance:
+ auth_str = '%s:%s' % (instance['apache_user'], instance['apache_password'])
+ encoded_auth_str = base64.encodestring(auth_str)
+ req.add_header("Authorization", "Basic %s" % encoded_auth_str)
request = urllib2.urlopen(req)
response = request.read()
|
{"golden_diff": "diff --git a/checks.d/apache.py b/checks.d/apache.py\n--- a/checks.d/apache.py\n+++ b/checks.d/apache.py\n@@ -1,4 +1,5 @@\n import urllib2\n+import base64\n \n from util import headers\n from checks import AgentCheck\n@@ -35,6 +36,10 @@\n tags = instance.get('tags', [])\n req = urllib2.Request(url, None,\n headers(self.agentConfig))\n+ if 'apache_user' in instance and 'apache_password' in instance:\n+ auth_str = '%s:%s' % (instance['apache_user'], instance['apache_password'])\n+ encoded_auth_str = base64.encodestring(auth_str)\n+ req.add_header(\"Authorization\", \"Basic %s\" % encoded_auth_str)\n request = urllib2.urlopen(req)\n response = request.read()\n", "issue": "apache check doesn't support basic auth\nIf I want basic auth, I'd assume that all I have to do is make my config look like:\n\n```\napache_status_url: http://user:[email protected]:80/server-status?auto\n```\n\nBut actually that just doesn't work. We need to do some special incantations to make `urllib2` do basic auth.\n\n", "before_files": [{"content": "import urllib2\n\nfrom util import headers\nfrom checks import AgentCheck\n\nclass Apache(AgentCheck):\n \"\"\"Tracks basic connection/requests/workers metrics\n\n See http://httpd.apache.org/docs/2.2/mod/mod_status.html for more details\n \"\"\"\n GAUGES = {\n 'IdleWorkers': 'apache.performance.idle_workers',\n 'BusyWorkers': 'apache.performance.busy_workers',\n 'CPULoad': 'apache.performance.cpu_load',\n 'Uptime': 'apache.performance.uptime',\n 'Total kBytes': 'apache.net.bytes',\n 'Total Accesses': 'apache.net.hits',\n }\n\n RATES = {\n 'Total kBytes': 'apache.net.bytes_per_s',\n 'Total Accesses': 'apache.net.request_per_s'\n }\n\n def __init__(self, name, init_config, agentConfig, instances=None):\n AgentCheck.__init__(self, name, init_config, agentConfig, instances)\n self.assumed_url = {}\n\n def check(self, instance):\n if 'apache_status_url' not in instance:\n raise Exception(\"Missing 'apache_status_url' in Apache config\")\n\n url = self.assumed_url.get(instance['apache_status_url'], instance['apache_status_url'])\n\n tags = instance.get('tags', [])\n req = urllib2.Request(url, None,\n headers(self.agentConfig))\n request = urllib2.urlopen(req)\n response = request.read()\n\n metric_count = 0\n # Loop through and extract the numerical values\n for line in response.split('\\n'):\n values = line.split(': ')\n if len(values) == 2: # match\n metric, value = values\n try:\n value = float(value)\n except ValueError:\n continue\n\n # Special case: kBytes => bytes\n if metric == 'Total kBytes':\n value = value * 1024\n\n # Send metric as a gauge, if applicable\n if metric in self.GAUGES:\n metric_count += 1\n metric_name = self.GAUGES[metric]\n self.gauge(metric_name, value, tags=tags)\n\n # Send metric as a rate, if applicable\n if metric in self.RATES:\n metric_count += 1\n metric_name = self.RATES[metric]\n self.rate(metric_name, value, tags=tags)\n\n if metric_count == 0:\n if self.assumed_url.get(instance['apache_status_url'], None) is None and url[-5:] != '?auto':\n self.assumed_url[instance['apache_status_url']]= '%s?auto' % url\n self.warning(\"Assuming url was not correct. Trying to add ?auto suffix to the url\")\n self.check(instance)\n else:\n raise Exception(\"No metrics were fetched for this instance. 
Make sure that %s is the proper url.\" % instance['apache_status_url'])\n\n\n @staticmethod\n def parse_agent_config(agentConfig):\n if not agentConfig.get('apache_status_url'):\n return False\n\n return {\n 'instances': [{'apache_status_url': agentConfig.get('apache_status_url')}]\n }\n", "path": "checks.d/apache.py"}], "after_files": [{"content": "import urllib2\nimport base64\n\nfrom util import headers\nfrom checks import AgentCheck\n\nclass Apache(AgentCheck):\n \"\"\"Tracks basic connection/requests/workers metrics\n\n See http://httpd.apache.org/docs/2.2/mod/mod_status.html for more details\n \"\"\"\n GAUGES = {\n 'IdleWorkers': 'apache.performance.idle_workers',\n 'BusyWorkers': 'apache.performance.busy_workers',\n 'CPULoad': 'apache.performance.cpu_load',\n 'Uptime': 'apache.performance.uptime',\n 'Total kBytes': 'apache.net.bytes',\n 'Total Accesses': 'apache.net.hits',\n }\n\n RATES = {\n 'Total kBytes': 'apache.net.bytes_per_s',\n 'Total Accesses': 'apache.net.request_per_s'\n }\n\n def __init__(self, name, init_config, agentConfig, instances=None):\n AgentCheck.__init__(self, name, init_config, agentConfig, instances)\n self.assumed_url = {}\n\n def check(self, instance):\n if 'apache_status_url' not in instance:\n raise Exception(\"Missing 'apache_status_url' in Apache config\")\n\n url = self.assumed_url.get(instance['apache_status_url'], instance['apache_status_url'])\n\n tags = instance.get('tags', [])\n req = urllib2.Request(url, None,\n headers(self.agentConfig))\n if 'apache_user' in instance and 'apache_password' in instance:\n auth_str = '%s:%s' % (instance['apache_user'], instance['apache_password'])\n encoded_auth_str = base64.encodestring(auth_str)\n req.add_header(\"Authorization\", \"Basic %s\" % encoded_auth_str)\n request = urllib2.urlopen(req)\n response = request.read()\n\n metric_count = 0\n # Loop through and extract the numerical values\n for line in response.split('\\n'):\n values = line.split(': ')\n if len(values) == 2: # match\n metric, value = values\n try:\n value = float(value)\n except ValueError:\n continue\n\n # Special case: kBytes => bytes\n if metric == 'Total kBytes':\n value = value * 1024\n\n # Send metric as a gauge, if applicable\n if metric in self.GAUGES:\n metric_count += 1\n metric_name = self.GAUGES[metric]\n self.gauge(metric_name, value, tags=tags)\n\n # Send metric as a rate, if applicable\n if metric in self.RATES:\n metric_count += 1\n metric_name = self.RATES[metric]\n self.rate(metric_name, value, tags=tags)\n\n if metric_count == 0:\n if self.assumed_url.get(instance['apache_status_url'], None) is None and url[-5:] != '?auto':\n self.assumed_url[instance['apache_status_url']]= '%s?auto' % url\n self.warning(\"Assuming url was not correct. Trying to add ?auto suffix to the url\")\n self.check(instance)\n else:\n raise Exception(\"No metrics were fetched for this instance. Make sure that %s is the proper url.\" % instance['apache_status_url'])\n\n\n @staticmethod\n def parse_agent_config(agentConfig):\n if not agentConfig.get('apache_status_url'):\n return False\n\n return {\n 'instances': [{'apache_status_url': agentConfig.get('apache_status_url')}]\n }\n", "path": "checks.d/apache.py"}]}
| 1,189 | 187 |
gh_patches_debug_4502
|
rasdani/github-patches
|
git_diff
|
deis__deis-1495
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CoreOS on Google Compute Engine
Could you add Compute Engine to contrib?
http://googlecloudplatform.blogspot.cz/2014/05/official-coreos-images-are-now-available-on-google-compute-engine.html?m=1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `client/setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 """Install the Deis command-line client."""
4
5
6 try:
7 from setuptools import setup
8 USE_SETUPTOOLS = True
9 except ImportError:
10 from distutils.core import setup
11 USE_SETUPTOOLS = False
12
13 try:
14 LONG_DESCRIPTION = open('README.rst').read()
15 except IOError:
16 LONG_DESCRIPTION = 'Deis command-line client'
17
18 try:
19 APACHE_LICENSE = open('LICENSE').read()
20 except IOError:
21 APACHE_LICENSE = 'See http://www.apache.org/licenses/LICENSE-2.0'
22
23 KWARGS = {}
24 if USE_SETUPTOOLS:
25 KWARGS = {'entry_points': {'console_scripts': ['deis = deis:main']}}
26 else:
27 KWARGS = {'scripts': ['deis']}
28
29
30 setup(name='deis',
31 version='0.11.0',
32 license=APACHE_LICENSE,
33 description='Command-line Client for Deis, the open PaaS',
34 author='OpDemand',
35 author_email='[email protected]',
36 url='https://github.com/deis/deis',
37 keywords=[
38 'opdemand', 'deis', 'paas', 'cloud', 'chef', 'docker', 'heroku',
39 'aws', 'ec2', 'rackspace', 'digitalocean'
40 ],
41 classifiers=[
42 'Development Status :: 4 - Beta',
43 'Environment :: Console',
44 'Intended Audience :: Developers',
45 'Intended Audience :: Information Technology',
46 'Intended Audience :: System Administrators',
47 'License :: OSI Approved :: Apache Software License',
48 'Operating System :: OS Independent',
49 'Programming Language :: Python',
50 'Programming Language :: Python :: 2.7',
51 'Topic :: Internet',
52 'Topic :: System :: Systems Administration',
53 ],
54 py_modules=['deis'],
55 data_files=[
56 ('.', ['README.rst']),
57 ],
58 long_description=LONG_DESCRIPTION,
59 install_requires=[
60 'docopt==0.6.1', 'python-dateutil==2.2',
61 'PyYAML==3.10', 'requests==2.2.1', 'urllib3==1.8.2'
62 ],
63 zip_safe=True,
64 **KWARGS)
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/client/setup.py b/client/setup.py
--- a/client/setup.py
+++ b/client/setup.py
@@ -36,7 +36,7 @@
url='https://github.com/deis/deis',
keywords=[
'opdemand', 'deis', 'paas', 'cloud', 'chef', 'docker', 'heroku',
- 'aws', 'ec2', 'rackspace', 'digitalocean'
+ 'aws', 'ec2', 'rackspace', 'digitalocean', 'gce'
],
classifiers=[
'Development Status :: 4 - Beta',
|
{"golden_diff": "diff --git a/client/setup.py b/client/setup.py\n--- a/client/setup.py\n+++ b/client/setup.py\n@@ -36,7 +36,7 @@\n url='https://github.com/deis/deis',\n keywords=[\n 'opdemand', 'deis', 'paas', 'cloud', 'chef', 'docker', 'heroku',\n- 'aws', 'ec2', 'rackspace', 'digitalocean'\n+ 'aws', 'ec2', 'rackspace', 'digitalocean', 'gce'\n ],\n classifiers=[\n 'Development Status :: 4 - Beta',\n", "issue": "CoreOS on Google Compute Engine\nCould you add Compute Engine to contrib?\n\nhttp://googlecloudplatform.blogspot.cz/2014/05/official-coreos-images-are-now-available-on-google-compute-engine.html?m=1\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"Install the Deis command-line client.\"\"\"\n\n\ntry:\n from setuptools import setup\n USE_SETUPTOOLS = True\nexcept ImportError:\n from distutils.core import setup\n USE_SETUPTOOLS = False\n\ntry:\n LONG_DESCRIPTION = open('README.rst').read()\nexcept IOError:\n LONG_DESCRIPTION = 'Deis command-line client'\n\ntry:\n APACHE_LICENSE = open('LICENSE').read()\nexcept IOError:\n APACHE_LICENSE = 'See http://www.apache.org/licenses/LICENSE-2.0'\n\nKWARGS = {}\nif USE_SETUPTOOLS:\n KWARGS = {'entry_points': {'console_scripts': ['deis = deis:main']}}\nelse:\n KWARGS = {'scripts': ['deis']}\n\n\nsetup(name='deis',\n version='0.11.0',\n license=APACHE_LICENSE,\n description='Command-line Client for Deis, the open PaaS',\n author='OpDemand',\n author_email='[email protected]',\n url='https://github.com/deis/deis',\n keywords=[\n 'opdemand', 'deis', 'paas', 'cloud', 'chef', 'docker', 'heroku',\n 'aws', 'ec2', 'rackspace', 'digitalocean'\n ],\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Information Technology',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: Internet',\n 'Topic :: System :: Systems Administration',\n ],\n py_modules=['deis'],\n data_files=[\n ('.', ['README.rst']),\n ],\n long_description=LONG_DESCRIPTION,\n install_requires=[\n 'docopt==0.6.1', 'python-dateutil==2.2',\n 'PyYAML==3.10', 'requests==2.2.1', 'urllib3==1.8.2'\n ],\n zip_safe=True,\n **KWARGS)\n", "path": "client/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"Install the Deis command-line client.\"\"\"\n\n\ntry:\n from setuptools import setup\n USE_SETUPTOOLS = True\nexcept ImportError:\n from distutils.core import setup\n USE_SETUPTOOLS = False\n\ntry:\n LONG_DESCRIPTION = open('README.rst').read()\nexcept IOError:\n LONG_DESCRIPTION = 'Deis command-line client'\n\ntry:\n APACHE_LICENSE = open('LICENSE').read()\nexcept IOError:\n APACHE_LICENSE = 'See http://www.apache.org/licenses/LICENSE-2.0'\n\nKWARGS = {}\nif USE_SETUPTOOLS:\n KWARGS = {'entry_points': {'console_scripts': ['deis = deis:main']}}\nelse:\n KWARGS = {'scripts': ['deis']}\n\n\nsetup(name='deis',\n version='0.11.0',\n license=APACHE_LICENSE,\n description='Command-line Client for Deis, the open PaaS',\n author='OpDemand',\n author_email='[email protected]',\n url='https://github.com/deis/deis',\n keywords=[\n 'opdemand', 'deis', 'paas', 'cloud', 'chef', 'docker', 'heroku',\n 'aws', 'ec2', 'rackspace', 'digitalocean', 'gce'\n ],\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Information 
Technology',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: Internet',\n 'Topic :: System :: Systems Administration',\n ],\n py_modules=['deis'],\n data_files=[\n ('.', ['README.rst']),\n ],\n long_description=LONG_DESCRIPTION,\n install_requires=[\n 'docopt==0.6.1', 'python-dateutil==2.2',\n 'PyYAML==3.10', 'requests==2.2.1', 'urllib3==1.8.2'\n ],\n zip_safe=True,\n **KWARGS)\n", "path": "client/setup.py"}]}
| 912 | 134 |
gh_patches_debug_5489
|
rasdani/github-patches
|
git_diff
|
pwndbg__pwndbg-2005
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Feature request: RVA-based breakpoints
**Is your feature request related to a problem? Please describe.**
I am missing a command that would allow creating a breakpoint relative to some image - either main program or a library - for example at offset 0x1337 from libc base.
**Describe the solution you'd like**
A new command that would enable this, for example
````
bp libc.so+0x1337
````
**Describe alternatives you've considered**
The alternative is to manually find the image bases which is tedious.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/gdblib/functions.py`
Content:
```
1 """
2 Put all functions defined for gdb in here.
3
4 This file might be changed into a module in the future.
5 """
6
7 from __future__ import annotations
8
9 import functools
10 from typing import Any
11 from typing import Callable
12 from typing import List
13
14 import gdb
15
16 import pwndbg.gdblib.elf
17 import pwndbg.gdblib.proc
18
19 functions: List[_GdbFunction] = []
20
21
22 def GdbFunction(only_when_running: bool = False) -> Callable[..., Any]:
23 return functools.partial(_GdbFunction, only_when_running=only_when_running)
24
25
26 class _GdbFunction(gdb.Function):
27 def __init__(self, func: Callable[..., Any], only_when_running: bool) -> None:
28 self.name = func.__name__
29 self.func = func
30 self.only_when_running = only_when_running
31
32 functions.append(self)
33
34 super().__init__(self.name)
35
36 functools.update_wrapper(self, func)
37 self.__doc__ = func.__doc__
38
39 def invoke(self, *args: gdb.Value) -> Any:
40 if self.only_when_running and not pwndbg.gdblib.proc.alive:
41 # Returning empty string is a workaround that we can't stop e.g. `break *$rebase(offset)`
42 # Thx to that, gdb will print out 'evaluation of this expression requires the target program to be active'
43 return ""
44
45 return self.func(*args)
46
47 def __call__(self, *args: gdb.Value) -> Any:
48 return self.invoke(*args)
49
50
51 @GdbFunction(only_when_running=True)
52 def rebase(addr: gdb.Value | int) -> int:
53 """Return rebased address."""
54 base = pwndbg.gdblib.elf.exe().address
55 return base + int(addr)
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pwndbg/gdblib/functions.py b/pwndbg/gdblib/functions.py
--- a/pwndbg/gdblib/functions.py
+++ b/pwndbg/gdblib/functions.py
@@ -53,3 +53,16 @@
"""Return rebased address."""
base = pwndbg.gdblib.elf.exe().address
return base + int(addr)
+
+
+@GdbFunction(only_when_running=True)
+def base(name_pattern: gdb.Value | str) -> int:
+ if isinstance(name_pattern, gdb.Value):
+ name = name_pattern.string()
+ else:
+ name = name_pattern
+
+ for p in pwndbg.gdblib.vmmap.get():
+ if name in p.objfile:
+ return p.vaddr
+ raise ValueError(f"No mapping named {name}")
|
{"golden_diff": "diff --git a/pwndbg/gdblib/functions.py b/pwndbg/gdblib/functions.py\n--- a/pwndbg/gdblib/functions.py\n+++ b/pwndbg/gdblib/functions.py\n@@ -53,3 +53,16 @@\n \"\"\"Return rebased address.\"\"\"\n base = pwndbg.gdblib.elf.exe().address\n return base + int(addr)\n+\n+\n+@GdbFunction(only_when_running=True)\n+def base(name_pattern: gdb.Value | str) -> int:\n+ if isinstance(name_pattern, gdb.Value):\n+ name = name_pattern.string()\n+ else:\n+ name = name_pattern\n+\n+ for p in pwndbg.gdblib.vmmap.get():\n+ if name in p.objfile:\n+ return p.vaddr\n+ raise ValueError(f\"No mapping named {name}\")\n", "issue": "Feature request: RVA-based breakpoints\n**Is your feature request related to a problem? Please describe.**\r\nI am missing a command that would allow creating a breakpoint relative to some image - either main program or a library - for example at offset 0x1337 from libc base.\r\n\r\n**Describe the solution you'd like**\r\nA new command that would enable this, for example\r\n\r\n````\r\nbp libc.so+0x1337\r\n````\r\n\r\n\r\n**Describe alternatives you've considered**\r\nThe alternative is to manually find the image bases which is tedious.\r\n\n", "before_files": [{"content": "\"\"\"\nPut all functions defined for gdb in here.\n\nThis file might be changed into a module in the future.\n\"\"\"\n\nfrom __future__ import annotations\n\nimport functools\nfrom typing import Any\nfrom typing import Callable\nfrom typing import List\n\nimport gdb\n\nimport pwndbg.gdblib.elf\nimport pwndbg.gdblib.proc\n\nfunctions: List[_GdbFunction] = []\n\n\ndef GdbFunction(only_when_running: bool = False) -> Callable[..., Any]:\n return functools.partial(_GdbFunction, only_when_running=only_when_running)\n\n\nclass _GdbFunction(gdb.Function):\n def __init__(self, func: Callable[..., Any], only_when_running: bool) -> None:\n self.name = func.__name__\n self.func = func\n self.only_when_running = only_when_running\n\n functions.append(self)\n\n super().__init__(self.name)\n\n functools.update_wrapper(self, func)\n self.__doc__ = func.__doc__\n\n def invoke(self, *args: gdb.Value) -> Any:\n if self.only_when_running and not pwndbg.gdblib.proc.alive:\n # Returning empty string is a workaround that we can't stop e.g. 
`break *$rebase(offset)`\n # Thx to that, gdb will print out 'evaluation of this expression requires the target program to be active'\n return \"\"\n\n return self.func(*args)\n\n def __call__(self, *args: gdb.Value) -> Any:\n return self.invoke(*args)\n\n\n@GdbFunction(only_when_running=True)\ndef rebase(addr: gdb.Value | int) -> int:\n \"\"\"Return rebased address.\"\"\"\n base = pwndbg.gdblib.elf.exe().address\n return base + int(addr)\n", "path": "pwndbg/gdblib/functions.py"}], "after_files": [{"content": "\"\"\"\nPut all functions defined for gdb in here.\n\nThis file might be changed into a module in the future.\n\"\"\"\n\nfrom __future__ import annotations\n\nimport functools\nfrom typing import Any\nfrom typing import Callable\nfrom typing import List\n\nimport gdb\n\nimport pwndbg.gdblib.elf\nimport pwndbg.gdblib.proc\n\nfunctions: List[_GdbFunction] = []\n\n\ndef GdbFunction(only_when_running: bool = False) -> Callable[..., Any]:\n return functools.partial(_GdbFunction, only_when_running=only_when_running)\n\n\nclass _GdbFunction(gdb.Function):\n def __init__(self, func: Callable[..., Any], only_when_running: bool) -> None:\n self.name = func.__name__\n self.func = func\n self.only_when_running = only_when_running\n\n functions.append(self)\n\n super().__init__(self.name)\n\n functools.update_wrapper(self, func)\n self.__doc__ = func.__doc__\n\n def invoke(self, *args: gdb.Value) -> Any:\n if self.only_when_running and not pwndbg.gdblib.proc.alive:\n # Returning empty string is a workaround that we can't stop e.g. `break *$rebase(offset)`\n # Thx to that, gdb will print out 'evaluation of this expression requires the target program to be active'\n return \"\"\n\n return self.func(*args)\n\n def __call__(self, *args: gdb.Value) -> Any:\n return self.invoke(*args)\n\n\n@GdbFunction(only_when_running=True)\ndef rebase(addr: gdb.Value | int) -> int:\n \"\"\"Return rebased address.\"\"\"\n base = pwndbg.gdblib.elf.exe().address\n return base + int(addr)\n\n\n@GdbFunction(only_when_running=True)\ndef base(name_pattern: gdb.Value | str) -> int:\n if isinstance(name_pattern, gdb.Value):\n name = name_pattern.string()\n else:\n name = name_pattern\n\n for p in pwndbg.gdblib.vmmap.get():\n if name in p.objfile:\n return p.vaddr\n raise ValueError(f\"No mapping named {name}\")\n", "path": "pwndbg/gdblib/functions.py"}]}
| 866 | 185 |
gh_patches_debug_34969
|
rasdani/github-patches
|
git_diff
|
aws-cloudformation__cfn-lint-2525
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Feature Request: !Sub equivalent of W1001
*cfn-lint version: 0.58.2*
I believe that `!Sub` parameters should be checked to see if they depend on conditional resources the same way W1001 checks this for `!Ref` (e.g. `SubCondParam.Value` should produce a warning). I suspect this is possible since E1019 checks for non-existent parameters within a `!Sub`.
```
---
AWSTemplateFormatVersion: 2010-09-09
Conditions:
CreateContainerRepository: !Equals ["yes", "no"]
Resources:
Repository:
Type: AWS::ECR::Repository
Condition: CreateContainerRepository
SubCondParam:
Type: AWS::SSM::Parameter
Properties:
Type: String
Value: !Sub ${Repository}
RefCondParam:
Type: AWS::SSM::Parameter
Properties:
Type: String
Value: !Ref Repository
SubFakeParam:
Type: AWS::SSM::Parameter
Properties:
Type: String
Value: !Sub ${Fake}
RefFakeParam:
Type: AWS::SSM::Parameter
Properties:
Type: String
Value: !Ref Fake
```
* SubCondParam.Value shows no error or warning, I believe it should show a warning
* RefCondParam.Value shows W1001
* SubFakeParam.Value shows E1019
* RefFakeParam.Value shows E1012
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/functions/RelationshipConditions.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 from cfnlint.helpers import PSEUDOPARAMS
6 from cfnlint.rules import CloudFormationLintRule, RuleMatch
7
8
9 class RelationshipConditions(CloudFormationLintRule):
10 """Check if Ref/GetAtt values are available via conditions"""
11
12 id = "W1001"
13 shortdesc = "Ref/GetAtt to resource that is available when conditions are applied"
14 description = (
15 "Check the Conditions that affect a Ref/GetAtt to make sure "
16 "the resource being related to is available when there is a resource "
17 "condition."
18 )
19 source_url = "https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-ref.html"
20 tags = ["conditions", "resources", "relationships", "ref", "getatt"]
21
22 def match(self, cfn):
23 """Check CloudFormation Ref/GetAtt for Conditions"""
24
25 matches = []
26
27 # Start with Ref checks
28 ref_objs = cfn.search_deep_keys(searchText="Ref", includeGlobals=False)
29 for ref_obj in ref_objs:
30 value = ref_obj[-1]
31 if value not in PSEUDOPARAMS:
32 scenarios = cfn.is_resource_available(ref_obj, value)
33 for scenario in scenarios:
34 # pylint: disable=consider-using-f-string
35 scenario_text = " and ".join(
36 [
37 'when condition "%s" is %s' % (k, v)
38 for (k, v) in scenario.items()
39 ]
40 )
41 message = (
42 'Ref to resource "{0}" that may not be available {1} at {2}'
43 )
44 matches.append(
45 RuleMatch(
46 ref_obj[:-1],
47 message.format(
48 value, scenario_text, "/".join(map(str, ref_obj[:-1]))
49 ),
50 )
51 )
52
53 # The do GetAtt
54 getatt_objs = cfn.search_deep_keys(
55 searchText="Fn::GetAtt", includeGlobals=False
56 )
57 for getatt_obj in getatt_objs:
58 value_obj = getatt_obj[-1]
59 value = None
60 if isinstance(value_obj, list):
61 value = value_obj[0]
62 elif isinstance(value_obj, str):
63 value = value_obj.split(".")[0]
64 if value:
65 if value not in PSEUDOPARAMS:
66 scenarios = cfn.is_resource_available(getatt_obj, value)
67 for scenario in scenarios:
68 scenario_text = " and ".join(
69 [
70 f'when condition "{k}" is {v}'
71 for (k, v) in scenario.items()
72 ]
73 )
74 message = 'GetAtt to resource "{0}" that may not be available {1} at {2}'
75 matches.append(
76 RuleMatch(
77 getatt_obj[:-1],
78 message.format(
79 value,
80 scenario_text,
81 "/".join(map(str, getatt_obj[:-1])),
82 ),
83 )
84 )
85
86 return matches
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/cfnlint/rules/functions/RelationshipConditions.py b/src/cfnlint/rules/functions/RelationshipConditions.py
--- a/src/cfnlint/rules/functions/RelationshipConditions.py
+++ b/src/cfnlint/rules/functions/RelationshipConditions.py
@@ -17,7 +17,7 @@
"condition."
)
source_url = "https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-ref.html"
- tags = ["conditions", "resources", "relationships", "ref", "getatt"]
+ tags = ["conditions", "resources", "relationships", "ref", "getatt", "sub"]
def match(self, cfn):
"""Check CloudFormation Ref/GetAtt for Conditions"""
@@ -83,4 +83,46 @@
)
)
+ # The do Sub
+ sub_objs = cfn.search_deep_keys(searchText="Fn::Sub", includeGlobals=False)
+ for sub_obj in sub_objs:
+ sub_string = sub_obj[-1]
+ # Filter out bad types of sub_strings.
+ # Lists have two be two items and it can be just a string
+ if not isinstance(sub_string, (list, str)):
+ continue
+ if isinstance(sub_string, str):
+ sub_string = [sub_string, {}]
+ if len(sub_string) != 2:
+ continue
+ sub_params = sub_string[1]
+ string_params = cfn.get_sub_parameters(sub_string[0])
+
+ for string_param in string_params:
+ if string_param not in sub_params:
+ # deal with GetAtts by dropping everything after the .
+ string_param = string_param.split(".")[0]
+ if string_param in cfn.template.get("Resources", {}):
+ scenarios = cfn.is_resource_available(
+ sub_obj[:-1], string_param
+ )
+ for scenario in scenarios:
+ scenario_text = " and ".join(
+ [
+ f'when condition "{k}" is {v}'
+ for (k, v) in scenario.items()
+ ]
+ )
+ message = 'Fn::Sub to resource "{0}" that may not be available {1} at {2}'
+ matches.append(
+ RuleMatch(
+ sub_obj[:-1],
+ message.format(
+ string_param,
+ scenario_text,
+ "/".join(map(str, sub_obj[:-1])),
+ ),
+ )
+ )
+
return matches
|
{"golden_diff": "diff --git a/src/cfnlint/rules/functions/RelationshipConditions.py b/src/cfnlint/rules/functions/RelationshipConditions.py\n--- a/src/cfnlint/rules/functions/RelationshipConditions.py\n+++ b/src/cfnlint/rules/functions/RelationshipConditions.py\n@@ -17,7 +17,7 @@\n \"condition.\"\n )\n source_url = \"https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-ref.html\"\n- tags = [\"conditions\", \"resources\", \"relationships\", \"ref\", \"getatt\"]\n+ tags = [\"conditions\", \"resources\", \"relationships\", \"ref\", \"getatt\", \"sub\"]\n \n def match(self, cfn):\n \"\"\"Check CloudFormation Ref/GetAtt for Conditions\"\"\"\n@@ -83,4 +83,46 @@\n )\n )\n \n+ # The do Sub\n+ sub_objs = cfn.search_deep_keys(searchText=\"Fn::Sub\", includeGlobals=False)\n+ for sub_obj in sub_objs:\n+ sub_string = sub_obj[-1]\n+ # Filter out bad types of sub_strings.\n+ # Lists have two be two items and it can be just a string\n+ if not isinstance(sub_string, (list, str)):\n+ continue\n+ if isinstance(sub_string, str):\n+ sub_string = [sub_string, {}]\n+ if len(sub_string) != 2:\n+ continue\n+ sub_params = sub_string[1]\n+ string_params = cfn.get_sub_parameters(sub_string[0])\n+\n+ for string_param in string_params:\n+ if string_param not in sub_params:\n+ # deal with GetAtts by dropping everything after the .\n+ string_param = string_param.split(\".\")[0]\n+ if string_param in cfn.template.get(\"Resources\", {}):\n+ scenarios = cfn.is_resource_available(\n+ sub_obj[:-1], string_param\n+ )\n+ for scenario in scenarios:\n+ scenario_text = \" and \".join(\n+ [\n+ f'when condition \"{k}\" is {v}'\n+ for (k, v) in scenario.items()\n+ ]\n+ )\n+ message = 'Fn::Sub to resource \"{0}\" that may not be available {1} at {2}'\n+ matches.append(\n+ RuleMatch(\n+ sub_obj[:-1],\n+ message.format(\n+ string_param,\n+ scenario_text,\n+ \"/\".join(map(str, sub_obj[:-1])),\n+ ),\n+ )\n+ )\n+\n return matches\n", "issue": "Feature Request: !Sub equivalent of W1001\n*cfn-lint version: 0.58.2*\r\n\r\nI believe that `!Sub` parameters should be checked to see if they depend on conditional resources the same way W1001 checks this for `!Ref` (e.g. `SubCondParam.Value` should produce a warning). I suspect this is possible since E1019 checks for non-existent parameters within a `!Sub`.\r\n\r\n```\r\n---\r\nAWSTemplateFormatVersion: 2010-09-09\r\n\r\nConditions:\r\n CreateContainerRepository: !Equals [\"yes\", \"no\"]\r\n\r\nResources:\r\n Repository:\r\n Type: AWS::ECR::Repository\r\n Condition: CreateContainerRepository\r\n\r\n SubCondParam:\r\n Type: AWS::SSM::Parameter\r\n Properties:\r\n Type: String\r\n Value: !Sub ${Repository}\r\n\r\n RefCondParam:\r\n Type: AWS::SSM::Parameter\r\n Properties:\r\n Type: String\r\n Value: !Ref Repository\r\n\r\n SubFakeParam:\r\n Type: AWS::SSM::Parameter\r\n Properties:\r\n Type: String\r\n Value: !Sub ${Fake}\r\n\r\n RefFakeParam:\r\n Type: AWS::SSM::Parameter\r\n Properties:\r\n Type: String\r\n Value: !Ref Fake\r\n```\r\n\r\n* SubCondParam.Value shows no error or warning, I believe it should show a warning\r\n* RefCondParam.Value shows W1001\r\n* SubFakeParam.Value shows E1019\r\n* RefFakeParam.Value shows E1012\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nfrom cfnlint.helpers import PSEUDOPARAMS\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass RelationshipConditions(CloudFormationLintRule):\n \"\"\"Check if Ref/GetAtt values are available via conditions\"\"\"\n\n id = \"W1001\"\n shortdesc = \"Ref/GetAtt to resource that is available when conditions are applied\"\n description = (\n \"Check the Conditions that affect a Ref/GetAtt to make sure \"\n \"the resource being related to is available when there is a resource \"\n \"condition.\"\n )\n source_url = \"https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-ref.html\"\n tags = [\"conditions\", \"resources\", \"relationships\", \"ref\", \"getatt\"]\n\n def match(self, cfn):\n \"\"\"Check CloudFormation Ref/GetAtt for Conditions\"\"\"\n\n matches = []\n\n # Start with Ref checks\n ref_objs = cfn.search_deep_keys(searchText=\"Ref\", includeGlobals=False)\n for ref_obj in ref_objs:\n value = ref_obj[-1]\n if value not in PSEUDOPARAMS:\n scenarios = cfn.is_resource_available(ref_obj, value)\n for scenario in scenarios:\n # pylint: disable=consider-using-f-string\n scenario_text = \" and \".join(\n [\n 'when condition \"%s\" is %s' % (k, v)\n for (k, v) in scenario.items()\n ]\n )\n message = (\n 'Ref to resource \"{0}\" that may not be available {1} at {2}'\n )\n matches.append(\n RuleMatch(\n ref_obj[:-1],\n message.format(\n value, scenario_text, \"/\".join(map(str, ref_obj[:-1]))\n ),\n )\n )\n\n # The do GetAtt\n getatt_objs = cfn.search_deep_keys(\n searchText=\"Fn::GetAtt\", includeGlobals=False\n )\n for getatt_obj in getatt_objs:\n value_obj = getatt_obj[-1]\n value = None\n if isinstance(value_obj, list):\n value = value_obj[0]\n elif isinstance(value_obj, str):\n value = value_obj.split(\".\")[0]\n if value:\n if value not in PSEUDOPARAMS:\n scenarios = cfn.is_resource_available(getatt_obj, value)\n for scenario in scenarios:\n scenario_text = \" and \".join(\n [\n f'when condition \"{k}\" is {v}'\n for (k, v) in scenario.items()\n ]\n )\n message = 'GetAtt to resource \"{0}\" that may not be available {1} at {2}'\n matches.append(\n RuleMatch(\n getatt_obj[:-1],\n message.format(\n value,\n scenario_text,\n \"/\".join(map(str, getatt_obj[:-1])),\n ),\n )\n )\n\n return matches\n", "path": "src/cfnlint/rules/functions/RelationshipConditions.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nfrom cfnlint.helpers import PSEUDOPARAMS\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass RelationshipConditions(CloudFormationLintRule):\n \"\"\"Check if Ref/GetAtt values are available via conditions\"\"\"\n\n id = \"W1001\"\n shortdesc = \"Ref/GetAtt to resource that is available when conditions are applied\"\n description = (\n \"Check the Conditions that affect a Ref/GetAtt to make sure \"\n \"the resource being related to is available when there is a resource \"\n \"condition.\"\n )\n source_url = \"https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-ref.html\"\n tags = [\"conditions\", \"resources\", \"relationships\", \"ref\", \"getatt\", \"sub\"]\n\n def match(self, cfn):\n \"\"\"Check CloudFormation Ref/GetAtt for Conditions\"\"\"\n\n matches = []\n\n # Start with Ref checks\n ref_objs = cfn.search_deep_keys(searchText=\"Ref\", includeGlobals=False)\n for ref_obj in ref_objs:\n value = ref_obj[-1]\n if value not in PSEUDOPARAMS:\n scenarios = cfn.is_resource_available(ref_obj, value)\n for scenario in scenarios:\n # pylint: disable=consider-using-f-string\n scenario_text = \" and \".join(\n [\n 'when condition \"%s\" is %s' % (k, v)\n for (k, v) in scenario.items()\n ]\n )\n message = (\n 'Ref to resource \"{0}\" that may not be available {1} at {2}'\n )\n matches.append(\n RuleMatch(\n ref_obj[:-1],\n message.format(\n value, scenario_text, \"/\".join(map(str, ref_obj[:-1]))\n ),\n )\n )\n\n # The do GetAtt\n getatt_objs = cfn.search_deep_keys(\n searchText=\"Fn::GetAtt\", includeGlobals=False\n )\n for getatt_obj in getatt_objs:\n value_obj = getatt_obj[-1]\n value = None\n if isinstance(value_obj, list):\n value = value_obj[0]\n elif isinstance(value_obj, str):\n value = value_obj.split(\".\")[0]\n if value:\n if value not in PSEUDOPARAMS:\n scenarios = cfn.is_resource_available(getatt_obj, value)\n for scenario in scenarios:\n scenario_text = \" and \".join(\n [\n f'when condition \"{k}\" is {v}'\n for (k, v) in scenario.items()\n ]\n )\n message = 'GetAtt to resource \"{0}\" that may not be available {1} at {2}'\n matches.append(\n RuleMatch(\n getatt_obj[:-1],\n message.format(\n value,\n scenario_text,\n \"/\".join(map(str, getatt_obj[:-1])),\n ),\n )\n )\n\n # The do Sub\n sub_objs = cfn.search_deep_keys(searchText=\"Fn::Sub\", includeGlobals=False)\n for sub_obj in sub_objs:\n sub_string = sub_obj[-1]\n # Filter out bad types of sub_strings.\n # Lists have two be two items and it can be just a string\n if not isinstance(sub_string, (list, str)):\n continue\n if isinstance(sub_string, str):\n sub_string = [sub_string, {}]\n if len(sub_string) != 2:\n continue\n sub_params = sub_string[1]\n string_params = cfn.get_sub_parameters(sub_string[0])\n\n for string_param in string_params:\n if string_param not in sub_params:\n # deal with GetAtts by dropping everything after the .\n string_param = string_param.split(\".\")[0]\n if string_param in cfn.template.get(\"Resources\", {}):\n scenarios = cfn.is_resource_available(\n sub_obj[:-1], string_param\n )\n for scenario in scenarios:\n scenario_text = \" and \".join(\n [\n f'when condition \"{k}\" is {v}'\n for (k, v) in scenario.items()\n ]\n )\n message = 'Fn::Sub to resource \"{0}\" that may not be available {1} at {2}'\n matches.append(\n RuleMatch(\n sub_obj[:-1],\n message.format(\n string_param,\n scenario_text,\n \"/\".join(map(str, sub_obj[:-1])),\n ),\n )\n )\n\n return matches\n", "path": 
"src/cfnlint/rules/functions/RelationshipConditions.py"}]}
| 1,417 | 548 |
gh_patches_debug_16419
|
rasdani/github-patches
|
git_diff
|
pantsbuild__pants-16001
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`./pants run` crashes when `remote_cache_eager_fetch = false`
**Describe the bug**
After setting `remote_cache_eager_fetch = false`, a CI step that `./pants run`s a `pex_binary` has started (intermittently) failing with errors like:
```
Exception: Was not present in either the local or remote store: Digest { hash: Fingerprint<46683dec8706b7ac7c4f6011f68b4b8c10ad423ae8ba57745a6f5e01ba5b2f7b>, size_bytes: 11827 }
```
**Pants version**
`PANTS_SHA=5d8a328d72209863986c8959b20305505bc068ba`
**OS**
Linux
**Additional info**
Some BuildSense links where we've seen the failure:
* https://app.toolchain.com/organizations/color/repos/color/builds/pants_run_2022_06_27_19_29_46_827_37a3a5f2d8e440cca373a7fb4a5d3b51/
* https://app.toolchain.com/organizations/color/repos/color/builds/pants_run_2022_06_27_19_31_09_277_3793f53b54914135885f2ac951faf210/
* https://app.toolchain.com/organizations/color/repos/color/builds/pants_run_2022_06_27_19_29_46_827_37a3a5f2d8e440cca373a7fb4a5d3b51/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/base/exceptions.py`
Content:
```
1 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6
7 class TargetDefinitionException(Exception):
8 """Indicates an invalid target definition.
9
10 :API: public
11 """
12
13 def __init__(self, target, msg):
14 """
15 :param target: the target in question
16 :param string msg: a description of the target misconfiguration
17 """
18 super().__init__(f"Invalid target {target}: {msg}")
19
20
21 class BuildConfigurationError(Exception):
22 """Indicates an error in a pants installation's configuration."""
23
24
25 class BackendConfigurationError(BuildConfigurationError):
26 """Indicates a plugin backend with a missing or malformed register module."""
27
28
29 class MappingError(Exception):
30 """Indicates an error mapping addressable objects."""
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/python/pants/base/exceptions.py b/src/python/pants/base/exceptions.py
--- a/src/python/pants/base/exceptions.py
+++ b/src/python/pants/base/exceptions.py
@@ -3,6 +3,11 @@
from __future__ import annotations
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+ from pants.engine.internals.native_engine import PyFailure
+
class TargetDefinitionException(Exception):
"""Indicates an invalid target definition.
@@ -28,3 +33,15 @@
class MappingError(Exception):
"""Indicates an error mapping addressable objects."""
+
+
+class NativeEngineFailure(Exception):
+ """A wrapper around a `Failure` instance.
+
+ TODO: This type is defined in Python because pyo3 doesn't support declaring Exceptions with
+ additional fields. See https://github.com/PyO3/pyo3/issues/295
+ """
+
+ def __init__(self, msg: str, failure: PyFailure) -> None:
+ super().__init__(msg)
+ self.failure = failure
|
{"golden_diff": "diff --git a/src/python/pants/base/exceptions.py b/src/python/pants/base/exceptions.py\n--- a/src/python/pants/base/exceptions.py\n+++ b/src/python/pants/base/exceptions.py\n@@ -3,6 +3,11 @@\n \n from __future__ import annotations\n \n+from typing import TYPE_CHECKING\n+\n+if TYPE_CHECKING:\n+ from pants.engine.internals.native_engine import PyFailure\n+\n \n class TargetDefinitionException(Exception):\n \"\"\"Indicates an invalid target definition.\n@@ -28,3 +33,15 @@\n \n class MappingError(Exception):\n \"\"\"Indicates an error mapping addressable objects.\"\"\"\n+\n+\n+class NativeEngineFailure(Exception):\n+ \"\"\"A wrapper around a `Failure` instance.\n+\n+ TODO: This type is defined in Python because pyo3 doesn't support declaring Exceptions with\n+ additional fields. See https://github.com/PyO3/pyo3/issues/295\n+ \"\"\"\n+\n+ def __init__(self, msg: str, failure: PyFailure) -> None:\n+ super().__init__(msg)\n+ self.failure = failure\n", "issue": "`./pants run` crashes when `remote_cache_eager_fetch = false`\n**Describe the bug**\r\n\r\nAfter setting `remote_cache_eager_fetch = false`, a CI step that `./pants run`s a `pex_binary` has started (intermittently) failing with errors like:\r\n```\r\n Exception: Was not present in either the local or remote store: Digest { hash: Fingerprint<46683dec8706b7ac7c4f6011f68b4b8c10ad423ae8ba57745a6f5e01ba5b2f7b>, size_bytes: 11827 }\r\n```\r\n\r\n**Pants version**\r\n\r\n`PANTS_SHA=5d8a328d72209863986c8959b20305505bc068ba`\r\n\r\n**OS**\r\n\r\nLinux\r\n\r\n**Additional info**\r\n\r\nSome BuildSense links where we've seen the failure:\r\n* https://app.toolchain.com/organizations/color/repos/color/builds/pants_run_2022_06_27_19_29_46_827_37a3a5f2d8e440cca373a7fb4a5d3b51/\r\n* https://app.toolchain.com/organizations/color/repos/color/builds/pants_run_2022_06_27_19_31_09_277_3793f53b54914135885f2ac951faf210/\r\n* https://app.toolchain.com/organizations/color/repos/color/builds/pants_run_2022_06_27_19_29_46_827_37a3a5f2d8e440cca373a7fb4a5d3b51/\r\n\n", "before_files": [{"content": "# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\n\nclass TargetDefinitionException(Exception):\n \"\"\"Indicates an invalid target definition.\n\n :API: public\n \"\"\"\n\n def __init__(self, target, msg):\n \"\"\"\n :param target: the target in question\n :param string msg: a description of the target misconfiguration\n \"\"\"\n super().__init__(f\"Invalid target {target}: {msg}\")\n\n\nclass BuildConfigurationError(Exception):\n \"\"\"Indicates an error in a pants installation's configuration.\"\"\"\n\n\nclass BackendConfigurationError(BuildConfigurationError):\n \"\"\"Indicates a plugin backend with a missing or malformed register module.\"\"\"\n\n\nclass MappingError(Exception):\n \"\"\"Indicates an error mapping addressable objects.\"\"\"\n", "path": "src/python/pants/base/exceptions.py"}], "after_files": [{"content": "# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from pants.engine.internals.native_engine import PyFailure\n\n\nclass TargetDefinitionException(Exception):\n \"\"\"Indicates an invalid target definition.\n\n :API: public\n \"\"\"\n\n def __init__(self, target, msg):\n \"\"\"\n :param target: the target in question\n :param string msg: a 
description of the target misconfiguration\n \"\"\"\n super().__init__(f\"Invalid target {target}: {msg}\")\n\n\nclass BuildConfigurationError(Exception):\n \"\"\"Indicates an error in a pants installation's configuration.\"\"\"\n\n\nclass BackendConfigurationError(BuildConfigurationError):\n \"\"\"Indicates a plugin backend with a missing or malformed register module.\"\"\"\n\n\nclass MappingError(Exception):\n \"\"\"Indicates an error mapping addressable objects.\"\"\"\n\n\nclass NativeEngineFailure(Exception):\n \"\"\"A wrapper around a `Failure` instance.\n\n TODO: This type is defined in Python because pyo3 doesn't support declaring Exceptions with\n additional fields. See https://github.com/PyO3/pyo3/issues/295\n \"\"\"\n\n def __init__(self, msg: str, failure: PyFailure) -> None:\n super().__init__(msg)\n self.failure = failure\n", "path": "src/python/pants/base/exceptions.py"}]}
| 942 | 243 |
gh_patches_debug_40698
|
rasdani/github-patches
|
git_diff
|
pyjanitor-devs__pyjanitor-761
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Latest requirements.txt seems to be over-specifying dependencies
This is the latest version requirements.txt file:
```
black>=19.3b0
darglint
hypothesis>=4.4.0
interrogate
ipykernel
isort>=4.3.18
jupyter_client
lxml
natsort
nbsphinx>=0.4.2
pandas-flavor
pandas-vet
pre-commit
pyspark
pytest-azurepipelines
pytest-cov
pytest>=3.4.2
scikit-learn
seaborn
setuptools>=38.5.2
sphinxcontrib-fulltoc==1.2.0
unyt
xarray
```
Those are injected in the setup.py as mandatory dependencies, but a lot in there looks like docs, dev, or optional. Does pyjanitor really require all that at run time?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 """Setup script."""
2
3 import re
4 from pathlib import Path
5
6 from setuptools import find_packages, setup
7
8
9 def requirements():
10 """Reader for requirements.txt"""
11 with open("requirements.txt", "r+") as f:
12 return f.read()
13
14
15 def generate_long_description() -> str:
16 """
17 Extra chunks from README for PyPI description.
18
19 Target chunks must be contained within `.. pypi-doc` pair comments,
20 so there must be an even number of comments in README.
21
22 :returns: Extracted description from README.
23 :raises Exception: if odd number of ``.. pypi-doc`` comments
24 in README.
25 """
26 # Read the contents of README file
27 this_directory = Path(__file__).parent
28 with open(this_directory / "README.rst", encoding="utf-8") as f:
29 readme = f.read()
30
31 # Find pypi-doc comments in README
32 indices = [m.start() for m in re.finditer(".. pypi-doc", readme)]
33 if len(indices) % 2 != 0:
34 raise Exception("Odd number of `.. pypi-doc` comments in README")
35
36 # Loop through pairs of comments and save text between pairs
37 long_description = ""
38 for i in range(0, len(indices), 2):
39 start_index = indices[i] + 11
40 end_index = indices[i + 1]
41 long_description += readme[start_index:end_index]
42 return long_description
43
44
45 extra_spark = ["pyspark"]
46 extra_biology = ["biopython"]
47 extra_chemistry = ["rdkit"]
48 extra_engineering = ["unyt"]
49 extra_all = extra_biology + extra_engineering + extra_spark
50
51 setup(
52 name="pyjanitor",
53 version="0.20.10",
54 description="Tools for cleaning pandas DataFrames",
55 author="Eric J. Ma",
56 author_email="[email protected]",
57 url="https://github.com/ericmjl/pyjanitor",
58 license="MIT",
59 # packages=["janitor", "janitor.xarray", "janitor.spark"],
60 packages=find_packages(),
61 install_requires=requirements(),
62 extras_require={
63 "all": extra_all,
64 "biology": extra_biology,
65 # "chemistry": extra_chemistry, should be inserted once rdkit
66 # fixes https://github.com/rdkit/rdkit/issues/1812
67 "engineering": extra_engineering,
68 "spark": extra_spark,
69 },
70 python_requires=">=3.6",
71 long_description=generate_long_description(),
72 long_description_content_type="text/x-rst",
73 )
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,15 +1,82 @@
"""Setup script."""
-
+import codecs
+import os
import re
from pathlib import Path
+from pprint import pprint
from setuptools import find_packages, setup
+HERE = os.path.abspath(os.path.dirname(__file__))
+
+
+def read(*parts):
+ # intentionally *not* adding an encoding option to open
+ return codecs.open(os.path.join(HERE, *parts), "r").read()
+
-def requirements():
- """Reader for requirements.txt"""
- with open("requirements.txt", "r+") as f:
- return f.read()
+def read_requirements(*parts):
+ """
+ Return requirements from parts.
+
+ Given a requirements.txt (or similar style file),
+ returns a list of requirements.
+ Assumes anything after a single '#' on a line is a comment, and ignores
+ empty lines.
+
+ :param parts: list of filenames which contain the installation "parts",
+ i.e. submodule-specific installation requirements
+ :returns: A compiled list of requirements.
+ """
+ requirements = []
+ for line in read(*parts).splitlines():
+ new_line = re.sub( # noqa: PD005
+ r"(\s*)?#.*$", # the space immediately before the
+ # hash mark, the hash mark, and
+ # anything that follows it
+ "", # replace with a blank string
+ line,
+ )
+ new_line = re.sub( # noqa: PD005
+ r"-r.*$", # link to another requirement file
+ "", # replace with a blank string
+ new_line,
+ )
+ new_line = re.sub( # noqa: PD005
+ r"-e \..*$", # link to editable install
+ "", # replace with a blank string
+ new_line,
+ )
+ # print(line, "-->", new_line)
+ if new_line: # i.e. we have a non-zero-length string
+ requirements.append(new_line)
+ return requirements
+
+
+# pull from requirements.IN, requirements.TXT is generated from this
+INSTALL_REQUIRES = read_requirements(".requirements/base.in")
+
+EXTRA_REQUIRES = {
+ "dev": read_requirements(".requirements/dev.in"),
+ "docs": read_requirements(".requirements/docs.in"),
+ "test": read_requirements(".requirements/testing.in"),
+ "biology": read_requirements(".requirements/biology.in"),
+ "chemistry": read_requirements(".requirements/chemistry.in"),
+ "engineering": read_requirements(".requirements/engineering.in"),
+ "spark": read_requirements(".requirements/spark.in"),
+}
+
+# add 'all' key to EXTRA_REQUIRES
+all_requires = []
+for k, v in EXTRA_REQUIRES.items():
+ all_requires.extend(v)
+EXTRA_REQUIRES["all"] = set(all_requires)
+
+for k1 in ["biology", "chemistry", "engineering", "spark"]:
+ for v2 in EXTRA_REQUIRES[k1]:
+ EXTRA_REQUIRES["docs"].append(v2)
+
+pprint(EXTRA_REQUIRES)
def generate_long_description() -> str:
@@ -42,12 +109,6 @@
return long_description
-extra_spark = ["pyspark"]
-extra_biology = ["biopython"]
-extra_chemistry = ["rdkit"]
-extra_engineering = ["unyt"]
-extra_all = extra_biology + extra_engineering + extra_spark
-
setup(
name="pyjanitor",
version="0.20.10",
@@ -58,15 +119,8 @@
license="MIT",
# packages=["janitor", "janitor.xarray", "janitor.spark"],
packages=find_packages(),
- install_requires=requirements(),
- extras_require={
- "all": extra_all,
- "biology": extra_biology,
- # "chemistry": extra_chemistry, should be inserted once rdkit
- # fixes https://github.com/rdkit/rdkit/issues/1812
- "engineering": extra_engineering,
- "spark": extra_spark,
- },
+ install_requires=INSTALL_REQUIRES,
+ extras_require=EXTRA_REQUIRES,
python_requires=">=3.6",
long_description=generate_long_description(),
long_description_content_type="text/x-rst",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,15 +1,82 @@\n \"\"\"Setup script.\"\"\"\n-\n+import codecs\n+import os\n import re\n from pathlib import Path\n+from pprint import pprint\n \n from setuptools import find_packages, setup\n \n+HERE = os.path.abspath(os.path.dirname(__file__))\n+\n+\n+def read(*parts):\n+ # intentionally *not* adding an encoding option to open\n+ return codecs.open(os.path.join(HERE, *parts), \"r\").read()\n+\n \n-def requirements():\n- \"\"\"Reader for requirements.txt\"\"\"\n- with open(\"requirements.txt\", \"r+\") as f:\n- return f.read()\n+def read_requirements(*parts):\n+ \"\"\"\n+ Return requirements from parts.\n+\n+ Given a requirements.txt (or similar style file),\n+ returns a list of requirements.\n+ Assumes anything after a single '#' on a line is a comment, and ignores\n+ empty lines.\n+\n+ :param parts: list of filenames which contain the installation \"parts\",\n+ i.e. submodule-specific installation requirements\n+ :returns: A compiled list of requirements.\n+ \"\"\"\n+ requirements = []\n+ for line in read(*parts).splitlines():\n+ new_line = re.sub( # noqa: PD005\n+ r\"(\\s*)?#.*$\", # the space immediately before the\n+ # hash mark, the hash mark, and\n+ # anything that follows it\n+ \"\", # replace with a blank string\n+ line,\n+ )\n+ new_line = re.sub( # noqa: PD005\n+ r\"-r.*$\", # link to another requirement file\n+ \"\", # replace with a blank string\n+ new_line,\n+ )\n+ new_line = re.sub( # noqa: PD005\n+ r\"-e \\..*$\", # link to editable install\n+ \"\", # replace with a blank string\n+ new_line,\n+ )\n+ # print(line, \"-->\", new_line)\n+ if new_line: # i.e. we have a non-zero-length string\n+ requirements.append(new_line)\n+ return requirements\n+\n+\n+# pull from requirements.IN, requirements.TXT is generated from this\n+INSTALL_REQUIRES = read_requirements(\".requirements/base.in\")\n+\n+EXTRA_REQUIRES = {\n+ \"dev\": read_requirements(\".requirements/dev.in\"),\n+ \"docs\": read_requirements(\".requirements/docs.in\"),\n+ \"test\": read_requirements(\".requirements/testing.in\"),\n+ \"biology\": read_requirements(\".requirements/biology.in\"),\n+ \"chemistry\": read_requirements(\".requirements/chemistry.in\"),\n+ \"engineering\": read_requirements(\".requirements/engineering.in\"),\n+ \"spark\": read_requirements(\".requirements/spark.in\"),\n+}\n+\n+# add 'all' key to EXTRA_REQUIRES\n+all_requires = []\n+for k, v in EXTRA_REQUIRES.items():\n+ all_requires.extend(v)\n+EXTRA_REQUIRES[\"all\"] = set(all_requires)\n+\n+for k1 in [\"biology\", \"chemistry\", \"engineering\", \"spark\"]:\n+ for v2 in EXTRA_REQUIRES[k1]:\n+ EXTRA_REQUIRES[\"docs\"].append(v2)\n+\n+pprint(EXTRA_REQUIRES)\n \n \n def generate_long_description() -> str:\n@@ -42,12 +109,6 @@\n return long_description\n \n \n-extra_spark = [\"pyspark\"]\n-extra_biology = [\"biopython\"]\n-extra_chemistry = [\"rdkit\"]\n-extra_engineering = [\"unyt\"]\n-extra_all = extra_biology + extra_engineering + extra_spark\n-\n setup(\n name=\"pyjanitor\",\n version=\"0.20.10\",\n@@ -58,15 +119,8 @@\n license=\"MIT\",\n # packages=[\"janitor\", \"janitor.xarray\", \"janitor.spark\"],\n packages=find_packages(),\n- install_requires=requirements(),\n- extras_require={\n- \"all\": extra_all,\n- \"biology\": extra_biology,\n- # \"chemistry\": extra_chemistry, should be inserted once rdkit\n- # fixes https://github.com/rdkit/rdkit/issues/1812\n- \"engineering\": extra_engineering,\n- \"spark\": extra_spark,\n- },\n+ 
install_requires=INSTALL_REQUIRES,\n+ extras_require=EXTRA_REQUIRES,\n python_requires=\">=3.6\",\n long_description=generate_long_description(),\n long_description_content_type=\"text/x-rst\",\n", "issue": "Latest requirements.txt seems to be over-specifying dependencies\nThis is the latest version requirements.txt file:\r\n\r\n```\r\nblack>=19.3b0\r\ndarglint\r\nhypothesis>=4.4.0\r\ninterrogate\r\nipykernel\r\nisort>=4.3.18\r\njupyter_client\r\nlxml\r\nnatsort\r\nnbsphinx>=0.4.2\r\npandas-flavor\r\npandas-vet\r\npre-commit\r\npyspark\r\npytest-azurepipelines\r\npytest-cov\r\npytest>=3.4.2\r\nscikit-learn\r\nseaborn\r\nsetuptools>=38.5.2\r\nsphinxcontrib-fulltoc==1.2.0\r\nunyt\r\nxarray\r\n```\r\n\r\nThose are inject in the setup.py as mandatory dependencies but a lot in there looks like docs, dev, or optional. Is pyjanitor really require all that at run time?\n", "before_files": [{"content": "\"\"\"Setup script.\"\"\"\n\nimport re\nfrom pathlib import Path\n\nfrom setuptools import find_packages, setup\n\n\ndef requirements():\n \"\"\"Reader for requirements.txt\"\"\"\n with open(\"requirements.txt\", \"r+\") as f:\n return f.read()\n\n\ndef generate_long_description() -> str:\n \"\"\"\n Extra chunks from README for PyPI description.\n\n Target chunks must be contained within `.. pypi-doc` pair comments,\n so there must be an even number of comments in README.\n\n :returns: Extracted description from README.\n :raises Exception: if odd number of ``.. pypi-doc`` comments\n in README.\n \"\"\"\n # Read the contents of README file\n this_directory = Path(__file__).parent\n with open(this_directory / \"README.rst\", encoding=\"utf-8\") as f:\n readme = f.read()\n\n # Find pypi-doc comments in README\n indices = [m.start() for m in re.finditer(\".. pypi-doc\", readme)]\n if len(indices) % 2 != 0:\n raise Exception(\"Odd number of `.. pypi-doc` comments in README\")\n\n # Loop through pairs of comments and save text between pairs\n long_description = \"\"\n for i in range(0, len(indices), 2):\n start_index = indices[i] + 11\n end_index = indices[i + 1]\n long_description += readme[start_index:end_index]\n return long_description\n\n\nextra_spark = [\"pyspark\"]\nextra_biology = [\"biopython\"]\nextra_chemistry = [\"rdkit\"]\nextra_engineering = [\"unyt\"]\nextra_all = extra_biology + extra_engineering + extra_spark\n\nsetup(\n name=\"pyjanitor\",\n version=\"0.20.10\",\n description=\"Tools for cleaning pandas DataFrames\",\n author=\"Eric J. 
Ma\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ericmjl/pyjanitor\",\n license=\"MIT\",\n # packages=[\"janitor\", \"janitor.xarray\", \"janitor.spark\"],\n packages=find_packages(),\n install_requires=requirements(),\n extras_require={\n \"all\": extra_all,\n \"biology\": extra_biology,\n # \"chemistry\": extra_chemistry, should be inserted once rdkit\n # fixes https://github.com/rdkit/rdkit/issues/1812\n \"engineering\": extra_engineering,\n \"spark\": extra_spark,\n },\n python_requires=\">=3.6\",\n long_description=generate_long_description(),\n long_description_content_type=\"text/x-rst\",\n)\n", "path": "setup.py"}], "after_files": [{"content": "\"\"\"Setup script.\"\"\"\nimport codecs\nimport os\nimport re\nfrom pathlib import Path\nfrom pprint import pprint\n\nfrom setuptools import find_packages, setup\n\nHERE = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(*parts):\n # intentionally *not* adding an encoding option to open\n return codecs.open(os.path.join(HERE, *parts), \"r\").read()\n\n\ndef read_requirements(*parts):\n \"\"\"\n Return requirements from parts.\n\n Given a requirements.txt (or similar style file),\n returns a list of requirements.\n Assumes anything after a single '#' on a line is a comment, and ignores\n empty lines.\n\n :param parts: list of filenames which contain the installation \"parts\",\n i.e. submodule-specific installation requirements\n :returns: A compiled list of requirements.\n \"\"\"\n requirements = []\n for line in read(*parts).splitlines():\n new_line = re.sub( # noqa: PD005\n r\"(\\s*)?#.*$\", # the space immediately before the\n # hash mark, the hash mark, and\n # anything that follows it\n \"\", # replace with a blank string\n line,\n )\n new_line = re.sub( # noqa: PD005\n r\"-r.*$\", # link to another requirement file\n \"\", # replace with a blank string\n new_line,\n )\n new_line = re.sub( # noqa: PD005\n r\"-e \\..*$\", # link to editable install\n \"\", # replace with a blank string\n new_line,\n )\n # print(line, \"-->\", new_line)\n if new_line: # i.e. we have a non-zero-length string\n requirements.append(new_line)\n return requirements\n\n\n# pull from requirements.IN, requirements.TXT is generated from this\nINSTALL_REQUIRES = read_requirements(\".requirements/base.in\")\n\nEXTRA_REQUIRES = {\n \"dev\": read_requirements(\".requirements/dev.in\"),\n \"docs\": read_requirements(\".requirements/docs.in\"),\n \"test\": read_requirements(\".requirements/testing.in\"),\n \"biology\": read_requirements(\".requirements/biology.in\"),\n \"chemistry\": read_requirements(\".requirements/chemistry.in\"),\n \"engineering\": read_requirements(\".requirements/engineering.in\"),\n \"spark\": read_requirements(\".requirements/spark.in\"),\n}\n\n# add 'all' key to EXTRA_REQUIRES\nall_requires = []\nfor k, v in EXTRA_REQUIRES.items():\n all_requires.extend(v)\nEXTRA_REQUIRES[\"all\"] = set(all_requires)\n\nfor k1 in [\"biology\", \"chemistry\", \"engineering\", \"spark\"]:\n for v2 in EXTRA_REQUIRES[k1]:\n EXTRA_REQUIRES[\"docs\"].append(v2)\n\npprint(EXTRA_REQUIRES)\n\n\ndef generate_long_description() -> str:\n \"\"\"\n Extra chunks from README for PyPI description.\n\n Target chunks must be contained within `.. pypi-doc` pair comments,\n so there must be an even number of comments in README.\n\n :returns: Extracted description from README.\n :raises Exception: if odd number of ``.. 
pypi-doc`` comments\n in README.\n \"\"\"\n # Read the contents of README file\n this_directory = Path(__file__).parent\n with open(this_directory / \"README.rst\", encoding=\"utf-8\") as f:\n readme = f.read()\n\n # Find pypi-doc comments in README\n indices = [m.start() for m in re.finditer(\".. pypi-doc\", readme)]\n if len(indices) % 2 != 0:\n raise Exception(\"Odd number of `.. pypi-doc` comments in README\")\n\n # Loop through pairs of comments and save text between pairs\n long_description = \"\"\n for i in range(0, len(indices), 2):\n start_index = indices[i] + 11\n end_index = indices[i + 1]\n long_description += readme[start_index:end_index]\n return long_description\n\n\nsetup(\n name=\"pyjanitor\",\n version=\"0.20.10\",\n description=\"Tools for cleaning pandas DataFrames\",\n author=\"Eric J. Ma\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ericmjl/pyjanitor\",\n license=\"MIT\",\n # packages=[\"janitor\", \"janitor.xarray\", \"janitor.spark\"],\n packages=find_packages(),\n install_requires=INSTALL_REQUIRES,\n extras_require=EXTRA_REQUIRES,\n python_requires=\">=3.6\",\n long_description=generate_long_description(),\n long_description_content_type=\"text/x-rst\",\n)\n", "path": "setup.py"}]}
| 1,153 | 992 |
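Not part of the dataset row above: a minimal sketch of the requirement-line cleanup that the accepted pyjanitor patch introduces in `read_requirements`. The regexes are copied from the diff; the helper name `strip_requirement_line` and the sample lines are invented here purely for illustration.

```python
import re

def strip_requirement_line(line: str) -> str:
    """Drop inline comments, '-r' includes, and editable installs from one requirements line."""
    line = re.sub(r"(\s*)?#.*$", "", line)  # inline comment and the space before it
    line = re.sub(r"-r.*$", "", line)       # link to another requirements file
    line = re.sub(r"-e \..*$", "", line)    # editable install of the local package
    return line.strip()

sample = ["pandas>=1.0  # core dependency", "-r base.in", "", "natsort"]
print([s for s in (strip_requirement_line(x) for x in sample) if s])
# -> ['pandas>=1.0', 'natsort']
```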
gh_patches_debug_3040
|
rasdani/github-patches
|
git_diff
|
mosaicml__composer-496
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Move `ComposerTrainer` to top-level imports
Our most heavily used objects should be easily importable from `composer` via:
```
from composer import Trainer, ComposerModel
```
rather than remember their submodule:
```
from composer.models import ComposerModel
```
Especially the last one, its tricky to remember whether its `models` or `model`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `composer/__init__.py`
Content:
```
1 # Copyright 2021 MosaicML. All Rights Reserved.
2
3 from composer import algorithms as algorithms
4 from composer import callbacks as callbacks
5 from composer import datasets as datasets
6 from composer import loggers as loggers
7 from composer import models as models
8 from composer import optim as optim
9 from composer import profiler as profiler
10 from composer import trainer as trainer
11 from composer import utils as utils
12 from composer.core import Algorithm as Algorithm
13 from composer.core import Callback as Callback
14 from composer.core import DataSpec as DataSpec
15 from composer.core import Engine as Engine
16 from composer.core import Event as Event
17 from composer.core import Logger as Logger
18 from composer.core import State as State
19 from composer.core import Time as Time
20 from composer.core import Timer as Timer
21 from composer.core import TimeUnit as TimeUnit
22 from composer.core import types as types
23 from composer.trainer import Trainer as Trainer
24
25 __version__ = "0.3.1"
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/composer/__init__.py b/composer/__init__.py
--- a/composer/__init__.py
+++ b/composer/__init__.py
@@ -20,6 +20,7 @@
from composer.core import Timer as Timer
from composer.core import TimeUnit as TimeUnit
from composer.core import types as types
+from composer.models import ComposerModel as ComposerModel
from composer.trainer import Trainer as Trainer
__version__ = "0.3.1"
|
{"golden_diff": "diff --git a/composer/__init__.py b/composer/__init__.py\n--- a/composer/__init__.py\n+++ b/composer/__init__.py\n@@ -20,6 +20,7 @@\n from composer.core import Timer as Timer\n from composer.core import TimeUnit as TimeUnit\n from composer.core import types as types\n+from composer.models import ComposerModel as ComposerModel\n from composer.trainer import Trainer as Trainer\n \n __version__ = \"0.3.1\"\n", "issue": "Move `ComposerTrainer` to top-level imports\nOur most heavily used objects should be easily importable from `composer` via:\r\n```\r\nfrom composer import Trainer, ComposerModel\r\n```\r\nrather than remember their submodule:\r\n\r\n```\r\nfrom composer.models import ComposerModel\r\n```\r\n\r\nEspecially the last one, its tricky to remember whether its `models` or `model`\n", "before_files": [{"content": "# Copyright 2021 MosaicML. All Rights Reserved.\n\nfrom composer import algorithms as algorithms\nfrom composer import callbacks as callbacks\nfrom composer import datasets as datasets\nfrom composer import loggers as loggers\nfrom composer import models as models\nfrom composer import optim as optim\nfrom composer import profiler as profiler\nfrom composer import trainer as trainer\nfrom composer import utils as utils\nfrom composer.core import Algorithm as Algorithm\nfrom composer.core import Callback as Callback\nfrom composer.core import DataSpec as DataSpec\nfrom composer.core import Engine as Engine\nfrom composer.core import Event as Event\nfrom composer.core import Logger as Logger\nfrom composer.core import State as State\nfrom composer.core import Time as Time\nfrom composer.core import Timer as Timer\nfrom composer.core import TimeUnit as TimeUnit\nfrom composer.core import types as types\nfrom composer.trainer import Trainer as Trainer\n\n__version__ = \"0.3.1\"\n", "path": "composer/__init__.py"}], "after_files": [{"content": "# Copyright 2021 MosaicML. All Rights Reserved.\n\nfrom composer import algorithms as algorithms\nfrom composer import callbacks as callbacks\nfrom composer import datasets as datasets\nfrom composer import loggers as loggers\nfrom composer import models as models\nfrom composer import optim as optim\nfrom composer import profiler as profiler\nfrom composer import trainer as trainer\nfrom composer import utils as utils\nfrom composer.core import Algorithm as Algorithm\nfrom composer.core import Callback as Callback\nfrom composer.core import DataSpec as DataSpec\nfrom composer.core import Engine as Engine\nfrom composer.core import Event as Event\nfrom composer.core import Logger as Logger\nfrom composer.core import State as State\nfrom composer.core import Time as Time\nfrom composer.core import Timer as Timer\nfrom composer.core import TimeUnit as TimeUnit\nfrom composer.core import types as types\nfrom composer.models import ComposerModel as ComposerModel\nfrom composer.trainer import Trainer as Trainer\n\n__version__ = \"0.3.1\"\n", "path": "composer/__init__.py"}]}
| 569 | 103 |
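A rough illustration (not taken from the record above) of the re-export idiom the composer patch relies on: exposing a submodule symbol at the package root so `from composer import ComposerModel` works without remembering the submodule path. The `demo` modules below are runtime stand-ins, since the real package layout is not reproduced here.

```python
import types

# stand-in for composer.models
models = types.ModuleType("demo.models")
class ComposerModel: ...
models.ComposerModel = ComposerModel

# stand-in for composer/__init__.py doing
#   from demo.models import ComposerModel as ComposerModel
demo = types.ModuleType("demo")
demo.ComposerModel = models.ComposerModel

# one object, reachable from both the package root and the submodule
print(demo.ComposerModel is models.ComposerModel)  # True
```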
gh_patches_debug_11199
|
rasdani/github-patches
|
git_diff
|
joke2k__faker-1937
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
When attempting to generate a Finnish ssn of a specific age the provider crashes
* Faker version: 19.11.0
* OS: Windows 10
When attempting to generate a Finnish ssn of a certain age the provider crashes because ``datetime.timedelta(days=self.generator.random.randrange(min_age * 365, max_age * 365))`` is always trying to generate a time within range. The ssn provider should handle situations where min_age and max_age are the same.
For example:
```
if min_age == max_age:
age = datetime.timedelta(days=min_age * 365)
else:
age = datetime.timedelta(days=self.generator.random.randrange(min_age * 365, max_age * 365))
```
### Steps to reproduce
1. Create a faker with fi_FI locale: ``fake = Faker('fi_FI')``
2. Call the ssn provider with the same minimum and maximum age: ``fake.ssn(min_age=50, max_age=50, artificial=True)``
### Expected behavior
The faker generates an artificial Finnish ssn that is 50 years old.
### Actual behavior
``ValueError: empty range for randrange() (18250, 18250, 0)``
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/ssn/fi_FI/__init__.py`
Content:
```
1 import datetime
2
3 from .. import Provider as SsnProvider
4
5
6 class Provider(SsnProvider):
7 def ssn(self, min_age: int = 0, max_age: int = 105, artificial: bool = False) -> str:
8 """
9 Returns 11 character Finnish personal identity code (Henkilötunnus,
10 HETU, Swedish: Personbeteckning). This function assigns random
11 gender to person.
12
13 HETU consists of eleven characters of the form DDMMYYCZZZQ, where
14 DDMMYY is the date of birth, C the century sign, ZZZ the individual
15 number and Q the control character (checksum). The sign for the
16 century is either + (1800–1899), - (1900–1999), or A (2000–2099).
17 The individual number ZZZ is odd for males and even for females.
18 For people born in Finland its range is 002-899
19 (larger numbers may be used in special cases).
20 An example of a valid code is 311280-888Y.
21
22 https://en.wikipedia.org/wiki/National_identification_number#Finland
23 """
24
25 def _checksum(hetu):
26 checksum_characters = "0123456789ABCDEFHJKLMNPRSTUVWXY"
27 return checksum_characters[int(hetu) % 31]
28
29 age = datetime.timedelta(days=self.generator.random.randrange(min_age * 365, max_age * 365))
30 birthday = datetime.date.today() - age
31 hetu_date = "%02d%02d%s" % (
32 birthday.day,
33 birthday.month,
34 str(birthday.year)[-2:],
35 )
36 range = (900, 999) if artificial is True else (2, 899)
37 suffix = str(self.generator.random.randrange(*range)).zfill(3)
38 checksum = _checksum(hetu_date + suffix)
39 separator = self._get_century_code(birthday.year)
40 hetu = "".join([hetu_date, separator, suffix, checksum])
41 return hetu
42
43 @staticmethod
44 def _get_century_code(year: int) -> str:
45 """Returns the century code for a given year"""
46 if 2000 <= year < 3000:
47 separator = "A"
48 elif 1900 <= year < 2000:
49 separator = "-"
50 elif 1800 <= year < 1900:
51 separator = "+"
52 else:
53 raise ValueError("Finnish SSN do not support people born before the year 1800 or after the year 2999")
54 return separator
55
56 vat_id_formats = ("FI########",)
57
58 def vat_id(self) -> str:
59 """
60 http://ec.europa.eu/taxation_customs/vies/faq.html#item_11
61 :return: A random Finnish VAT ID
62 """
63 return self.bothify(self.random_element(self.vat_id_formats))
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/faker/providers/ssn/fi_FI/__init__.py b/faker/providers/ssn/fi_FI/__init__.py
--- a/faker/providers/ssn/fi_FI/__init__.py
+++ b/faker/providers/ssn/fi_FI/__init__.py
@@ -26,7 +26,10 @@
checksum_characters = "0123456789ABCDEFHJKLMNPRSTUVWXY"
return checksum_characters[int(hetu) % 31]
- age = datetime.timedelta(days=self.generator.random.randrange(min_age * 365, max_age * 365))
+ if min_age == max_age:
+ age = datetime.timedelta(days=min_age * 365)
+ else:
+ age = datetime.timedelta(days=self.generator.random.randrange(min_age * 365, max_age * 365))
birthday = datetime.date.today() - age
hetu_date = "%02d%02d%s" % (
birthday.day,
|
{"golden_diff": "diff --git a/faker/providers/ssn/fi_FI/__init__.py b/faker/providers/ssn/fi_FI/__init__.py\n--- a/faker/providers/ssn/fi_FI/__init__.py\n+++ b/faker/providers/ssn/fi_FI/__init__.py\n@@ -26,7 +26,10 @@\n checksum_characters = \"0123456789ABCDEFHJKLMNPRSTUVWXY\"\n return checksum_characters[int(hetu) % 31]\n \n- age = datetime.timedelta(days=self.generator.random.randrange(min_age * 365, max_age * 365))\n+ if min_age == max_age:\n+ age = datetime.timedelta(days=min_age * 365)\n+ else:\n+ age = datetime.timedelta(days=self.generator.random.randrange(min_age * 365, max_age * 365))\n birthday = datetime.date.today() - age\n hetu_date = \"%02d%02d%s\" % (\n birthday.day,\n", "issue": "When attempting to generate a Finnish ssn of a specific age the provider crashes\n* Faker version: 19.11.0\r\n* OS: Windows 10\r\n\r\nWhen attempting to generate a Finnish ssn of a certain age the provider crashes because ``datetime.timedelta(days=self.generator.random.randrange(min_age * 365, max_age * 365))`` is always trying to generate a time within range. The ssn provider should handle situations where min_age and max_age are the same.\r\nFor example:\r\n```\r\nif min_age == max_age:\r\n age = datetime.timedelta(days=min_age * 365)\r\nelse:\r\n age = datetime.timedelta(days=self.generator.random.randrange(min_age * 365, max_age * 365))\r\n```\r\n\r\n### Steps to reproduce\r\n\r\n1. Create a faker with fi_FI locale: ``fake = Faker('fi_FI')``\r\n2. Call the ssn provider with the same minimum and maximum age: ``fake.ssn(min_age=50, max_age=50, artificial=True)``\r\n\r\n### Expected behavior\r\n\r\nThe faker generates an artificial Finnish ssn that is 50 years old. \r\n\r\n### Actual behavior\r\n\r\n``ValueError: empty range for randrange() (18250, 18250, 0)``\r\n\n", "before_files": [{"content": "import datetime\n\nfrom .. import Provider as SsnProvider\n\n\nclass Provider(SsnProvider):\n def ssn(self, min_age: int = 0, max_age: int = 105, artificial: bool = False) -> str:\n \"\"\"\n Returns 11 character Finnish personal identity code (Henkil\u00f6tunnus,\n HETU, Swedish: Personbeteckning). This function assigns random\n gender to person.\n\n HETU consists of eleven characters of the form DDMMYYCZZZQ, where\n DDMMYY is the date of birth, C the century sign, ZZZ the individual\n number and Q the control character (checksum). 
The sign for the\n century is either + (1800\u20131899), - (1900\u20131999), or A (2000\u20132099).\n The individual number ZZZ is odd for males and even for females.\n For people born in Finland its range is 002-899\n (larger numbers may be used in special cases).\n An example of a valid code is 311280-888Y.\n\n https://en.wikipedia.org/wiki/National_identification_number#Finland\n \"\"\"\n\n def _checksum(hetu):\n checksum_characters = \"0123456789ABCDEFHJKLMNPRSTUVWXY\"\n return checksum_characters[int(hetu) % 31]\n\n age = datetime.timedelta(days=self.generator.random.randrange(min_age * 365, max_age * 365))\n birthday = datetime.date.today() - age\n hetu_date = \"%02d%02d%s\" % (\n birthday.day,\n birthday.month,\n str(birthday.year)[-2:],\n )\n range = (900, 999) if artificial is True else (2, 899)\n suffix = str(self.generator.random.randrange(*range)).zfill(3)\n checksum = _checksum(hetu_date + suffix)\n separator = self._get_century_code(birthday.year)\n hetu = \"\".join([hetu_date, separator, suffix, checksum])\n return hetu\n\n @staticmethod\n def _get_century_code(year: int) -> str:\n \"\"\"Returns the century code for a given year\"\"\"\n if 2000 <= year < 3000:\n separator = \"A\"\n elif 1900 <= year < 2000:\n separator = \"-\"\n elif 1800 <= year < 1900:\n separator = \"+\"\n else:\n raise ValueError(\"Finnish SSN do not support people born before the year 1800 or after the year 2999\")\n return separator\n\n vat_id_formats = (\"FI########\",)\n\n def vat_id(self) -> str:\n \"\"\"\n http://ec.europa.eu/taxation_customs/vies/faq.html#item_11\n :return: A random Finnish VAT ID\n \"\"\"\n return self.bothify(self.random_element(self.vat_id_formats))\n", "path": "faker/providers/ssn/fi_FI/__init__.py"}], "after_files": [{"content": "import datetime\n\nfrom .. import Provider as SsnProvider\n\n\nclass Provider(SsnProvider):\n def ssn(self, min_age: int = 0, max_age: int = 105, artificial: bool = False) -> str:\n \"\"\"\n Returns 11 character Finnish personal identity code (Henkil\u00f6tunnus,\n HETU, Swedish: Personbeteckning). This function assigns random\n gender to person.\n\n HETU consists of eleven characters of the form DDMMYYCZZZQ, where\n DDMMYY is the date of birth, C the century sign, ZZZ the individual\n number and Q the control character (checksum). 
The sign for the\n century is either + (1800\u20131899), - (1900\u20131999), or A (2000\u20132099).\n The individual number ZZZ is odd for males and even for females.\n For people born in Finland its range is 002-899\n (larger numbers may be used in special cases).\n An example of a valid code is 311280-888Y.\n\n https://en.wikipedia.org/wiki/National_identification_number#Finland\n \"\"\"\n\n def _checksum(hetu):\n checksum_characters = \"0123456789ABCDEFHJKLMNPRSTUVWXY\"\n return checksum_characters[int(hetu) % 31]\n\n if min_age == max_age:\n age = datetime.timedelta(days=min_age * 365)\n else:\n age = datetime.timedelta(days=self.generator.random.randrange(min_age * 365, max_age * 365))\n birthday = datetime.date.today() - age\n hetu_date = \"%02d%02d%s\" % (\n birthday.day,\n birthday.month,\n str(birthday.year)[-2:],\n )\n range = (900, 999) if artificial is True else (2, 899)\n suffix = str(self.generator.random.randrange(*range)).zfill(3)\n checksum = _checksum(hetu_date + suffix)\n separator = self._get_century_code(birthday.year)\n hetu = \"\".join([hetu_date, separator, suffix, checksum])\n return hetu\n\n @staticmethod\n def _get_century_code(year: int) -> str:\n \"\"\"Returns the century code for a given year\"\"\"\n if 2000 <= year < 3000:\n separator = \"A\"\n elif 1900 <= year < 2000:\n separator = \"-\"\n elif 1800 <= year < 1900:\n separator = \"+\"\n else:\n raise ValueError(\"Finnish SSN do not support people born before the year 1800 or after the year 2999\")\n return separator\n\n vat_id_formats = (\"FI########\",)\n\n def vat_id(self) -> str:\n \"\"\"\n http://ec.europa.eu/taxation_customs/vies/faq.html#item_11\n :return: A random Finnish VAT ID\n \"\"\"\n return self.bothify(self.random_element(self.vat_id_formats))\n", "path": "faker/providers/ssn/fi_FI/__init__.py"}]}
| 1,356 | 230 |
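Outside the record above, a small self-contained check of the guard the faker patch adds: `random.randrange(a, b)` raises `ValueError` when `a == b`, so equal `min_age`/`max_age` must be special-cased. The helper name `pick_age_delta` is made up for this sketch.

```python
import datetime
import random

def pick_age_delta(min_age: int, max_age: int, rng: random.Random) -> datetime.timedelta:
    """Age offset in days; randrange would raise on an empty range when min_age == max_age."""
    if min_age == max_age:
        return datetime.timedelta(days=min_age * 365)
    return datetime.timedelta(days=rng.randrange(min_age * 365, max_age * 365))

rng = random.Random(0)
print(pick_age_delta(50, 50, rng))  # 18250 days, 0:00:00 -- no ValueError
print(pick_age_delta(18, 65, rng))  # some offset in [6570, 23725) days
```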
gh_patches_debug_25749
|
rasdani/github-patches
|
git_diff
|
kymatio__kymatio-822
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Scattering 2D doesn't work when using 2^J == image size (TensorFlow backend)
Similar issues have been reported before (#284, #363) and also fixed (#412) for Torch backend. However, still not working in the tensorflow backend:
test:
```python
import numpy as np
from kymatio.tensorflow import Scattering2D
scattering = Scattering2D(J=5, shape=(32, 32))
test_im = np.ones((1,1,32,32))
test = scattering.scattering(test_im)
```
Gives error:
```
7 scattering = Scattering2D(J=5, shape=(32, 32))
8 test_im = np.ones((1,1,32,32))
----> 9 test = scattering.scattering(test_im)
~/.local/lib/python3.9/site-packages/kymatio/scattering2d/frontend/tensorflow_frontend.py in scattering(self, input)
48 input = tf.reshape(input, tf.concat(((-1,), signal_shape), 0))
49
---> 50 S = scattering2d(input, self.pad, self.unpad, self.backend, self.J, self.L, self.phi, self.psi,
51 self.max_order, self.out_type)
52
~/.local/lib/python3.9/site-packages/kymatio/scattering2d/core/scattering2d.py in scattering2d(x, pad, unpad, backend, J, L, phi, psi, max_order, out_type)
13 out_S_0, out_S_1, out_S_2 = [], [], []
14
---> 15 U_r = pad(x)
16
17 U_0_c = fft(U_r, 'C2C')
~/.local/lib/python3.9/site-packages/kymatio/scattering2d/backend/tensorflow_backend.py in __call__(self, x)
27 paddings = [[0, 0]] * len(x.shape[:-2])
28 paddings += [[self.pad_size[0], self.pad_size[1]], [self.pad_size[2], self.pad_size[3]]]
---> 29 return tf.cast(tf.pad(x, paddings, mode="REFLECT"), tf.complex64)
30
31 def unpad(in_):
~/.local/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py in error_handler(*args, **kwargs)
151 except Exception as e:
152 filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153 raise e.with_traceback(filtered_tb) from None
154 finally:
155 del filtered_tb
~/.local/lib/python3.9/site-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
56 try:
57 ctx.ensure_initialized()
---> 58 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
59 inputs, attrs, num_outputs)
60 except core._NotOkStatusException as e:
InvalidArgumentError: paddings must be less than the dimension size: 32, 32 not less than 32 [Op:MirrorPad]
```
(speculation) So possibly problems with the order of the padding being different in tensorflow from torch.
Should also probably include some tests for these types of problems like the tests for implemented for Torch in #346
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kymatio/scattering2d/backend/tensorflow_backend.py`
Content:
```
1 import tensorflow as tf
2
3 from ...backend.tensorflow_backend import TensorFlowBackend
4
5
6 class Pad(object):
7 def __init__(self, pad_size, input_size):
8 """
9 Padding which allows to simultaneously pad in a reflection fashion
10 and map to complex.
11 Parameters
12 ----------
13 pad_size : list of 4 integers
14 size of padding to apply.
15 input_size : list of 2 integers
16 size of the original signal
17 """
18 self.pad_size = pad_size
19
20 def __call__(self, x):
21 paddings = [[0, 0]] * len(x.shape[:-2])
22 paddings += [[self.pad_size[0], self.pad_size[1]], [self.pad_size[2], self.pad_size[3]]]
23 return tf.pad(x, paddings, mode="REFLECT")
24
25
26 class TensorFlowBackend2D(TensorFlowBackend):
27 Pad = Pad
28
29 @staticmethod
30 def unpad(in_):
31 """
32 Slices the input tensor at indices between 1::-1
33 Parameters
34 ----------
35 in_ : tensor_like
36 input tensor
37 Returns
38 -------
39 in_[..., 1:-1, 1:-1]
40 """
41 return in_[..., 1:-1, 1:-1]
42
43
44 @classmethod
45 def rfft(cls, x):
46 cls.real_check(x)
47 return tf.signal.fft2d(tf.cast(x, tf.complex64), name='rfft2d')
48
49 @classmethod
50 def irfft(cls, x):
51 cls.complex_check(x)
52 return tf.math.real(tf.signal.ifft2d(x, name='irfft2d'))
53
54
55 @classmethod
56 def ifft(cls, x):
57 cls.complex_check(x)
58 return tf.signal.ifft2d(x, name='ifft2d')
59
60 @classmethod
61 def subsample_fourier(cls, x, k):
62 """ Subsampling of a 2D image performed in the Fourier domain.
63
64 Subsampling in the spatial domain amounts to periodization
65 in the Fourier domain, hence the formula.
66
67 Parameters
68 ----------
69 x : tensor_like
70 input tensor with at least three dimensions.
71 k : int
72 integer such that x is subsampled by k along the spatial variables.
73
74 Returns
75 -------
76 out : tensor_like
77 Tensor such that its Fourier transform is the Fourier
78 transform of a subsampled version of x, i.e. in
79 F^{-1}(out)[u1, u2] = F^{-1}(x)[u1 * k, u2 * k]
80
81 """
82 cls.complex_check(x)
83
84 y = tf.reshape(x, (-1, k, x.shape[1] // k, k, x.shape[2] // k))
85
86 out = tf.reduce_mean(y, axis=(1, 3))
87 return out
88
89
90 backend = TensorFlowBackend2D
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kymatio/scattering2d/backend/tensorflow_backend.py b/kymatio/scattering2d/backend/tensorflow_backend.py
--- a/kymatio/scattering2d/backend/tensorflow_backend.py
+++ b/kymatio/scattering2d/backend/tensorflow_backend.py
@@ -16,11 +16,34 @@
size of the original signal
"""
self.pad_size = pad_size
+ self.input_size = input_size
def __call__(self, x):
+ pad_size = list(self.pad_size)
+
+ # Clone to avoid passing on modifications.
+ new_pad_size = list(pad_size)
+
+ # This handles the case where the padding is equal to the image size.
+ if pad_size[0] == self.input_size[0]:
+ new_pad_size[0] -= 1
+ new_pad_size[1] -= 1
+ if pad_size[2] == self.input_size[1]:
+ new_pad_size[2] -= 1
+ new_pad_size[3] -= 1
+
paddings = [[0, 0]] * len(x.shape[:-2])
- paddings += [[self.pad_size[0], self.pad_size[1]], [self.pad_size[2], self.pad_size[3]]]
- return tf.pad(x, paddings, mode="REFLECT")
+ paddings += [[new_pad_size[0], new_pad_size[1]], [new_pad_size[2], new_pad_size[3]]]
+
+ x_padded = tf.pad(x, paddings, mode="REFLECT")
+
+ # Again, special handling for when padding is the same as image size.
+ if pad_size[0] == self.input_size[0]:
+ x_padded = tf.concat([tf.expand_dims(x_padded[..., 1, :], axis=-2), x_padded, tf.expand_dims(x_padded[..., x_padded.shape[-2] -2, :], axis=-2)], axis=-2)
+ if pad_size[2] == self.input_size[1]:
+ x_padded = tf.concat([tf.expand_dims(x_padded[..., :, 1], axis=-1), x_padded, tf.expand_dims(x_padded[..., :, x_padded.shape[-1]-2], axis=-1)], axis=-1)
+
+ return x_padded
class TensorFlowBackend2D(TensorFlowBackend):
|
{"golden_diff": "diff --git a/kymatio/scattering2d/backend/tensorflow_backend.py b/kymatio/scattering2d/backend/tensorflow_backend.py\n--- a/kymatio/scattering2d/backend/tensorflow_backend.py\n+++ b/kymatio/scattering2d/backend/tensorflow_backend.py\n@@ -16,11 +16,34 @@\n size of the original signal\n \"\"\"\n self.pad_size = pad_size\n+ self.input_size = input_size\n \n def __call__(self, x):\n+ pad_size = list(self.pad_size)\n+\n+ # Clone to avoid passing on modifications.\n+ new_pad_size = list(pad_size)\n+\n+ # This handles the case where the padding is equal to the image size.\n+ if pad_size[0] == self.input_size[0]:\n+ new_pad_size[0] -= 1\n+ new_pad_size[1] -= 1\n+ if pad_size[2] == self.input_size[1]:\n+ new_pad_size[2] -= 1\n+ new_pad_size[3] -= 1\n+\n paddings = [[0, 0]] * len(x.shape[:-2])\n- paddings += [[self.pad_size[0], self.pad_size[1]], [self.pad_size[2], self.pad_size[3]]]\n- return tf.pad(x, paddings, mode=\"REFLECT\")\n+ paddings += [[new_pad_size[0], new_pad_size[1]], [new_pad_size[2], new_pad_size[3]]]\n+\n+ x_padded = tf.pad(x, paddings, mode=\"REFLECT\")\n+\n+ # Again, special handling for when padding is the same as image size.\n+ if pad_size[0] == self.input_size[0]:\n+ x_padded = tf.concat([tf.expand_dims(x_padded[..., 1, :], axis=-2), x_padded, tf.expand_dims(x_padded[..., x_padded.shape[-2] -2, :], axis=-2)], axis=-2)\n+ if pad_size[2] == self.input_size[1]:\n+ x_padded = tf.concat([tf.expand_dims(x_padded[..., :, 1], axis=-1), x_padded, tf.expand_dims(x_padded[..., :, x_padded.shape[-1]-2], axis=-1)], axis=-1)\n+\n+ return x_padded\n \n \n class TensorFlowBackend2D(TensorFlowBackend):\n", "issue": "Scattering 2D doesn't work when using 2^J == image size (TensorFlow backend)\nSimilar issues have been reported before (#284, #363) and also fixed (#412) for Torch backend. 
However, still not working in the tensorflow backend:\r\n\r\ntest:\r\n```python \r\nimport numpy as np\r\nfrom kymatio.tensorflow import Scattering2D\r\n\r\nscattering = Scattering2D(J=5, shape=(32, 32))\r\ntest_im = np.ones((1,1,32,32))\r\ntest = scattering.scattering(test_im)\r\n```\r\n\r\nGives error:\r\n```\r\n 7 scattering = Scattering2D(J=5, shape=(32, 32))\r\n 8 test_im = np.ones((1,1,32,32))\r\n----> 9 test = scattering.scattering(test_im)\r\n\r\n~/.local/lib/python3.9/site-packages/kymatio/scattering2d/frontend/tensorflow_frontend.py in scattering(self, input)\r\n 48 input = tf.reshape(input, tf.concat(((-1,), signal_shape), 0))\r\n 49 \r\n---> 50 S = scattering2d(input, self.pad, self.unpad, self.backend, self.J, self.L, self.phi, self.psi,\r\n 51 self.max_order, self.out_type)\r\n 52 \r\n\r\n~/.local/lib/python3.9/site-packages/kymatio/scattering2d/core/scattering2d.py in scattering2d(x, pad, unpad, backend, J, L, phi, psi, max_order, out_type)\r\n 13 out_S_0, out_S_1, out_S_2 = [], [], []\r\n 14 \r\n---> 15 U_r = pad(x)\r\n 16 \r\n 17 U_0_c = fft(U_r, 'C2C')\r\n\r\n~/.local/lib/python3.9/site-packages/kymatio/scattering2d/backend/tensorflow_backend.py in __call__(self, x)\r\n 27 paddings = [[0, 0]] * len(x.shape[:-2])\r\n 28 paddings += [[self.pad_size[0], self.pad_size[1]], [self.pad_size[2], self.pad_size[3]]]\r\n---> 29 return tf.cast(tf.pad(x, paddings, mode=\"REFLECT\"), tf.complex64)\r\n 30 \r\n 31 def unpad(in_):\r\n\r\n~/.local/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py in error_handler(*args, **kwargs)\r\n 151 except Exception as e:\r\n 152 filtered_tb = _process_traceback_frames(e.__traceback__)\r\n--> 153 raise e.with_traceback(filtered_tb) from None\r\n 154 finally:\r\n 155 del filtered_tb\r\n\r\n~/.local/lib/python3.9/site-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)\r\n 56 try:\r\n 57 ctx.ensure_initialized()\r\n---> 58 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,\r\n 59 inputs, attrs, num_outputs)\r\n 60 except core._NotOkStatusException as e:\r\n\r\nInvalidArgumentError: paddings must be less than the dimension size: 32, 32 not less than 32 [Op:MirrorPad]\r\n```\r\n\r\n(speculation) So possibly problems with the order of the padding being different in tensorflow from torch. 
\r\n\r\nShould also probably include some tests for these types of problems like the tests for implemented for Torch in #346 \n", "before_files": [{"content": "import tensorflow as tf\n\nfrom ...backend.tensorflow_backend import TensorFlowBackend\n\n\nclass Pad(object):\n def __init__(self, pad_size, input_size):\n \"\"\"\n Padding which allows to simultaneously pad in a reflection fashion\n and map to complex.\n Parameters\n ----------\n pad_size : list of 4 integers\n size of padding to apply.\n input_size : list of 2 integers\n size of the original signal\n \"\"\"\n self.pad_size = pad_size\n\n def __call__(self, x):\n paddings = [[0, 0]] * len(x.shape[:-2])\n paddings += [[self.pad_size[0], self.pad_size[1]], [self.pad_size[2], self.pad_size[3]]]\n return tf.pad(x, paddings, mode=\"REFLECT\")\n\n\nclass TensorFlowBackend2D(TensorFlowBackend):\n Pad = Pad\n\n @staticmethod\n def unpad(in_):\n \"\"\"\n Slices the input tensor at indices between 1::-1\n Parameters\n ----------\n in_ : tensor_like\n input tensor\n Returns\n -------\n in_[..., 1:-1, 1:-1]\n \"\"\"\n return in_[..., 1:-1, 1:-1]\n\n\n @classmethod\n def rfft(cls, x):\n cls.real_check(x)\n return tf.signal.fft2d(tf.cast(x, tf.complex64), name='rfft2d')\n\n @classmethod\n def irfft(cls, x):\n cls.complex_check(x)\n return tf.math.real(tf.signal.ifft2d(x, name='irfft2d'))\n\n\n @classmethod\n def ifft(cls, x):\n cls.complex_check(x)\n return tf.signal.ifft2d(x, name='ifft2d')\n\n @classmethod\n def subsample_fourier(cls, x, k):\n \"\"\" Subsampling of a 2D image performed in the Fourier domain.\n\n Subsampling in the spatial domain amounts to periodization\n in the Fourier domain, hence the formula.\n\n Parameters\n ----------\n x : tensor_like\n input tensor with at least three dimensions.\n k : int\n integer such that x is subsampled by k along the spatial variables.\n\n Returns\n -------\n out : tensor_like\n Tensor such that its Fourier transform is the Fourier\n transform of a subsampled version of x, i.e. 
in\n F^{-1}(out)[u1, u2] = F^{-1}(x)[u1 * k, u2 * k]\n\n \"\"\"\n cls.complex_check(x)\n\n y = tf.reshape(x, (-1, k, x.shape[1] // k, k, x.shape[2] // k))\n\n out = tf.reduce_mean(y, axis=(1, 3))\n return out\n\n\nbackend = TensorFlowBackend2D\n", "path": "kymatio/scattering2d/backend/tensorflow_backend.py"}], "after_files": [{"content": "import tensorflow as tf\n\nfrom ...backend.tensorflow_backend import TensorFlowBackend\n\n\nclass Pad(object):\n def __init__(self, pad_size, input_size):\n \"\"\"\n Padding which allows to simultaneously pad in a reflection fashion\n and map to complex.\n Parameters\n ----------\n pad_size : list of 4 integers\n size of padding to apply.\n input_size : list of 2 integers\n size of the original signal\n \"\"\"\n self.pad_size = pad_size\n self.input_size = input_size\n\n def __call__(self, x):\n pad_size = list(self.pad_size)\n\n # Clone to avoid passing on modifications.\n new_pad_size = list(pad_size)\n\n # This handles the case where the padding is equal to the image size.\n if pad_size[0] == self.input_size[0]:\n new_pad_size[0] -= 1\n new_pad_size[1] -= 1\n if pad_size[2] == self.input_size[1]:\n new_pad_size[2] -= 1\n new_pad_size[3] -= 1\n\n paddings = [[0, 0]] * len(x.shape[:-2])\n paddings += [[new_pad_size[0], new_pad_size[1]], [new_pad_size[2], new_pad_size[3]]]\n\n x_padded = tf.pad(x, paddings, mode=\"REFLECT\")\n\n # Again, special handling for when padding is the same as image size.\n if pad_size[0] == self.input_size[0]:\n x_padded = tf.concat([tf.expand_dims(x_padded[..., 1, :], axis=-2), x_padded, tf.expand_dims(x_padded[..., x_padded.shape[-2] -2, :], axis=-2)], axis=-2)\n if pad_size[2] == self.input_size[1]:\n x_padded = tf.concat([tf.expand_dims(x_padded[..., :, 1], axis=-1), x_padded, tf.expand_dims(x_padded[..., :, x_padded.shape[-1]-2], axis=-1)], axis=-1)\n\n return x_padded\n\n\nclass TensorFlowBackend2D(TensorFlowBackend):\n Pad = Pad\n\n @staticmethod\n def unpad(in_):\n \"\"\"\n Slices the input tensor at indices between 1::-1\n Parameters\n ----------\n in_ : tensor_like\n input tensor\n Returns\n -------\n in_[..., 1:-1, 1:-1]\n \"\"\"\n return in_[..., 1:-1, 1:-1]\n\n\n @classmethod\n def rfft(cls, x):\n cls.real_check(x)\n return tf.signal.fft2d(tf.cast(x, tf.complex64), name='rfft2d')\n\n @classmethod\n def irfft(cls, x):\n cls.complex_check(x)\n return tf.math.real(tf.signal.ifft2d(x, name='irfft2d'))\n\n\n @classmethod\n def ifft(cls, x):\n cls.complex_check(x)\n return tf.signal.ifft2d(x, name='ifft2d')\n\n @classmethod\n def subsample_fourier(cls, x, k):\n \"\"\" Subsampling of a 2D image performed in the Fourier domain.\n\n Subsampling in the spatial domain amounts to periodization\n in the Fourier domain, hence the formula.\n\n Parameters\n ----------\n x : tensor_like\n input tensor with at least three dimensions.\n k : int\n integer such that x is subsampled by k along the spatial variables.\n\n Returns\n -------\n out : tensor_like\n Tensor such that its Fourier transform is the Fourier\n transform of a subsampled version of x, i.e. in\n F^{-1}(out)[u1, u2] = F^{-1}(x)[u1 * k, u2 * k]\n\n \"\"\"\n cls.complex_check(x)\n\n y = tf.reshape(x, (-1, k, x.shape[1] // k, k, x.shape[2] // k))\n\n out = tf.reduce_mean(y, axis=(1, 3))\n return out\n\n\nbackend = TensorFlowBackend2D\n", "path": "kymatio/scattering2d/backend/tensorflow_backend.py"}]}
| 1,890 | 531 |
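A NumPy re-creation (not from the record itself) of the padding trick in the kymatio patch: reflect padding cannot equal the axis length in `tf.pad`, so the patch pads by one less and appends the missing reflected row/column by hand. NumPy is used here only to keep the sketch runnable without TensorFlow; `reflect_pad_full` is an invented name.

```python
import numpy as np

def reflect_pad_full(x: np.ndarray, p: int) -> np.ndarray:
    """Reflect-pad the last two axes by p, even when p equals the axis length."""
    h, w = x.shape[-2], x.shape[-1]
    ph = p - 1 if p == h else p
    pw = p - 1 if p == w else p
    pads = [(0, 0)] * (x.ndim - 2) + [(ph, ph), (pw, pw)]
    y = np.pad(x, pads, mode="reflect")
    if p == h:  # append the outermost reflected rows the padder could not produce
        y = np.concatenate([y[..., 1:2, :], y, y[..., -2:-1, :]], axis=-2)
    if p == w:  # same for the columns
        y = np.concatenate([y[..., :, 1:2], y, y[..., :, -2:-1]], axis=-1)
    return y

x = np.arange(16.0).reshape(1, 1, 4, 4)   # 2^J == image size case from the issue
print(reflect_pad_full(x, 4).shape)       # (1, 1, 12, 12)
```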
gh_patches_debug_5859
|
rasdani/github-patches
|
git_diff
|
netbox-community__netbox-11755
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GraphQL returns all objects if tag doesn't exist
### NetBox version
v3.4.2
### Python version
3.10
### Steps to Reproduce
1. query GraphQL for device/interface with a tag that doesn't exist
### Expected Behavior
Resulting list of devices/interfaces only contains devices matching the tags or none if no device exists with matching tags.
### Observed Behavior
All devices/interfaces are returned no matter what tag filter has been applied. Same seems to apply for all kinds of filters that can be defined in GraphQL. No errors are returned.
Example below
query:
`{
device_list(tag: "blablalba") {
id
name
tags {
name
}
}
interface_list(tag: "not-ipmi_exporter") {
id
device {
id
}
tags {
name
}
}
}
`
result:
`{
"data": {
"device_list": [
{
"id": "1",
"name": "device-A",
"tags": [
{
"name": "node_exporter"
}
]
},
{
"id": "2",
"name": "device-B",
"tags": [
{
"name": "node_exporter"
}
]
}
],
"interface_list": [
{
"id": "1",
"device": {
"id": "1"
},
"tags": [
{
"name": "ipmi_exporter"
}
]
},
{
"id": "2",
"device": {
"id": "2"
},
"tags": [
{
"name": "ipmi_exporter"
}
]
}
]
}
}`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/netbox/graphql/fields.py`
Content:
```
1 from functools import partial
2
3 import graphene
4 from graphene_django import DjangoListField
5
6 from .utils import get_graphene_type
7
8 __all__ = (
9 'ObjectField',
10 'ObjectListField',
11 )
12
13
14 class ObjectField(graphene.Field):
15 """
16 Retrieve a single object, identified by its numeric ID.
17 """
18 def __init__(self, *args, **kwargs):
19
20 if 'id' not in kwargs:
21 kwargs['id'] = graphene.Int(required=True)
22
23 super().__init__(*args, **kwargs)
24
25 @staticmethod
26 def object_resolver(django_object_type, root, info, **args):
27 """
28 Return an object given its numeric ID.
29 """
30 manager = django_object_type._meta.model._default_manager
31 queryset = django_object_type.get_queryset(manager, info)
32
33 return queryset.get(**args)
34
35 def get_resolver(self, parent_resolver):
36 return partial(self.object_resolver, self._type)
37
38
39 class ObjectListField(DjangoListField):
40 """
41 Retrieve a list of objects, optionally filtered by one or more FilterSet filters.
42 """
43 def __init__(self, _type, *args, **kwargs):
44 filter_kwargs = {}
45
46 # Get FilterSet kwargs
47 filterset_class = getattr(_type._meta, 'filterset_class', None)
48 if filterset_class:
49 for filter_name, filter_field in filterset_class.get_filters().items():
50 field_type = get_graphene_type(type(filter_field))
51 filter_kwargs[filter_name] = graphene.Argument(field_type)
52
53 super().__init__(_type, args=filter_kwargs, *args, **kwargs)
54
55 @staticmethod
56 def list_resolver(django_object_type, resolver, default_manager, root, info, **args):
57 queryset = super(ObjectListField, ObjectListField).list_resolver(django_object_type, resolver, default_manager, root, info, **args)
58
59 # Instantiate and apply the FilterSet, if defined
60 filterset_class = django_object_type._meta.filterset_class
61 if filterset_class:
62 filterset = filterset_class(data=args, queryset=queryset, request=info.context)
63 return filterset.qs
64
65 return queryset
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/netbox/netbox/graphql/fields.py b/netbox/netbox/graphql/fields.py
--- a/netbox/netbox/graphql/fields.py
+++ b/netbox/netbox/graphql/fields.py
@@ -60,6 +60,8 @@
filterset_class = django_object_type._meta.filterset_class
if filterset_class:
filterset = filterset_class(data=args, queryset=queryset, request=info.context)
+ if not filterset.is_valid():
+ return queryset.none()
return filterset.qs
return queryset
|
{"golden_diff": "diff --git a/netbox/netbox/graphql/fields.py b/netbox/netbox/graphql/fields.py\n--- a/netbox/netbox/graphql/fields.py\n+++ b/netbox/netbox/graphql/fields.py\n@@ -60,6 +60,8 @@\n filterset_class = django_object_type._meta.filterset_class\n if filterset_class:\n filterset = filterset_class(data=args, queryset=queryset, request=info.context)\n+ if not filterset.is_valid():\n+ return queryset.none()\n return filterset.qs\n \n return queryset\n", "issue": "GraphQL returns all objects if tag doesn't exist\n### NetBox version\n\nv3.4.2\n\n### Python version\n\n3.10\n\n### Steps to Reproduce\n\n1. query GraphQL for device/interface with a tag that doesn't exist\n\n### Expected Behavior\n\nResulting list of devices/interfaces only contains devices matching the tags or none if no device exists with matching tags.\n\n### Observed Behavior\n\nAll devices/interfaces are returned no matter what tag filter has been applied. Same seems to apply for all kinds of filters that can be defined in GraphQL. No errors are returned.\r\n\r\nExample below\r\n\r\nquery:\r\n`{\r\n device_list(tag: \"blablalba\") {\r\n id\r\n name\r\n tags {\r\n name\r\n }\r\n }\r\n interface_list(tag: \"not-ipmi_exporter\") {\r\n id\r\n device {\r\n id\r\n }\r\n tags {\r\n name\r\n }\r\n }\r\n}\r\n`\r\n\r\nresult:\r\n`{\r\n \"data\": {\r\n \"device_list\": [\r\n {\r\n \"id\": \"1\",\r\n \"name\": \"device-A\",\r\n \"tags\": [\r\n {\r\n \"name\": \"node_exporter\"\r\n }\r\n ]\r\n },\r\n {\r\n \"id\": \"2\",\r\n \"name\": \"device-B\",\r\n \"tags\": [\r\n {\r\n \"name\": \"node_exporter\"\r\n }\r\n ]\r\n }\r\n ],\r\n \"interface_list\": [\r\n {\r\n \"id\": \"1\",\r\n \"device\": {\r\n \"id\": \"1\"\r\n },\r\n \"tags\": [\r\n {\r\n \"name\": \"ipmi_exporter\"\r\n }\r\n ]\r\n },\r\n {\r\n \"id\": \"2\",\r\n \"device\": {\r\n \"id\": \"2\"\r\n },\r\n \"tags\": [\r\n {\r\n \"name\": \"ipmi_exporter\"\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n}`\n", "before_files": [{"content": "from functools import partial\n\nimport graphene\nfrom graphene_django import DjangoListField\n\nfrom .utils import get_graphene_type\n\n__all__ = (\n 'ObjectField',\n 'ObjectListField',\n)\n\n\nclass ObjectField(graphene.Field):\n \"\"\"\n Retrieve a single object, identified by its numeric ID.\n \"\"\"\n def __init__(self, *args, **kwargs):\n\n if 'id' not in kwargs:\n kwargs['id'] = graphene.Int(required=True)\n\n super().__init__(*args, **kwargs)\n\n @staticmethod\n def object_resolver(django_object_type, root, info, **args):\n \"\"\"\n Return an object given its numeric ID.\n \"\"\"\n manager = django_object_type._meta.model._default_manager\n queryset = django_object_type.get_queryset(manager, info)\n\n return queryset.get(**args)\n\n def get_resolver(self, parent_resolver):\n return partial(self.object_resolver, self._type)\n\n\nclass ObjectListField(DjangoListField):\n \"\"\"\n Retrieve a list of objects, optionally filtered by one or more FilterSet filters.\n \"\"\"\n def __init__(self, _type, *args, **kwargs):\n filter_kwargs = {}\n\n # Get FilterSet kwargs\n filterset_class = getattr(_type._meta, 'filterset_class', None)\n if filterset_class:\n for filter_name, filter_field in filterset_class.get_filters().items():\n field_type = get_graphene_type(type(filter_field))\n filter_kwargs[filter_name] = graphene.Argument(field_type)\n\n super().__init__(_type, args=filter_kwargs, *args, **kwargs)\n\n @staticmethod\n def list_resolver(django_object_type, resolver, default_manager, root, info, **args):\n queryset = super(ObjectListField, 
ObjectListField).list_resolver(django_object_type, resolver, default_manager, root, info, **args)\n\n # Instantiate and apply the FilterSet, if defined\n filterset_class = django_object_type._meta.filterset_class\n if filterset_class:\n filterset = filterset_class(data=args, queryset=queryset, request=info.context)\n return filterset.qs\n\n return queryset\n", "path": "netbox/netbox/graphql/fields.py"}], "after_files": [{"content": "from functools import partial\n\nimport graphene\nfrom graphene_django import DjangoListField\n\nfrom .utils import get_graphene_type\n\n__all__ = (\n 'ObjectField',\n 'ObjectListField',\n)\n\n\nclass ObjectField(graphene.Field):\n \"\"\"\n Retrieve a single object, identified by its numeric ID.\n \"\"\"\n def __init__(self, *args, **kwargs):\n\n if 'id' not in kwargs:\n kwargs['id'] = graphene.Int(required=True)\n\n super().__init__(*args, **kwargs)\n\n @staticmethod\n def object_resolver(django_object_type, root, info, **args):\n \"\"\"\n Return an object given its numeric ID.\n \"\"\"\n manager = django_object_type._meta.model._default_manager\n queryset = django_object_type.get_queryset(manager, info)\n\n return queryset.get(**args)\n\n def get_resolver(self, parent_resolver):\n return partial(self.object_resolver, self._type)\n\n\nclass ObjectListField(DjangoListField):\n \"\"\"\n Retrieve a list of objects, optionally filtered by one or more FilterSet filters.\n \"\"\"\n def __init__(self, _type, *args, **kwargs):\n filter_kwargs = {}\n\n # Get FilterSet kwargs\n filterset_class = getattr(_type._meta, 'filterset_class', None)\n if filterset_class:\n for filter_name, filter_field in filterset_class.get_filters().items():\n field_type = get_graphene_type(type(filter_field))\n filter_kwargs[filter_name] = graphene.Argument(field_type)\n\n super().__init__(_type, args=filter_kwargs, *args, **kwargs)\n\n @staticmethod\n def list_resolver(django_object_type, resolver, default_manager, root, info, **args):\n queryset = super(ObjectListField, ObjectListField).list_resolver(django_object_type, resolver, default_manager, root, info, **args)\n\n # Instantiate and apply the FilterSet, if defined\n filterset_class = django_object_type._meta.filterset_class\n if filterset_class:\n filterset = filterset_class(data=args, queryset=queryset, request=info.context)\n if not filterset.is_valid():\n return queryset.none()\n return filterset.qs\n\n return queryset\n", "path": "netbox/netbox/graphql/fields.py"}]}
| 1,245 | 121 |
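
The patch in the record above gates the FilterSet behind `is_valid()`, so malformed filter arguments now yield an empty queryset instead of silently returning every object. Below is a framework-free sketch of that guard pattern; `FakeFilterSet` and its data are made-up stand-ins for illustration only, not NetBox or django-filter code.

```python
class FakeFilterSet:
    """Hypothetical stand-in for a django-filter FilterSet."""

    def __init__(self, data, queryset):
        self.data = data
        self.queryset = queryset

    def is_valid(self):
        # Pretend the only valid filter value is an integer "id".
        return str(self.data.get("id", "0")).isdigit()

    @property
    def qs(self):
        # A real FilterSet would apply all of its filters here.
        if "id" not in self.data:
            return self.queryset
        wanted = int(self.data["id"])
        return [obj for obj in self.queryset if obj["id"] == wanted]


def list_resolver(queryset, args):
    """Mirror of the patched resolver: invalid input -> empty result, not everything."""
    filterset = FakeFilterSet(data=args, queryset=queryset)
    if not filterset.is_valid():
        return []  # the real code returns queryset.none()
    return filterset.qs


devices = [{"id": 1}, {"id": 2}]
print(list_resolver(devices, {"id": "1"}))      # [{'id': 1}]
print(list_resolver(devices, {"id": "bogus"}))  # [] rather than every device
```
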
gh_patches_debug_2601
|
rasdani/github-patches
|
git_diff
|
carpentries__amy-2126
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Community Roles: Date range validation
Currently, an end date earlier than start date is allowed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `amy/communityroles/forms.py`
Content:
```
1 from collections import defaultdict
2 from typing import Any, Optional
3
4 from django import forms
5 from django.core.exceptions import ObjectDoesNotExist, ValidationError
6
7 from workshops.fields import HeavySelect2Widget, ModelSelect2Widget
8 from workshops.forms import SELECT2_SIDEBAR, BootstrapHelper, WidgetOverrideMixin
9
10 from .models import CommunityRole, CommunityRoleConfig
11
12
13 class CommunityRoleForm(WidgetOverrideMixin, forms.ModelForm):
14 class Meta:
15 model = CommunityRole
16 fields = (
17 "config",
18 "person",
19 "award",
20 "start",
21 "end",
22 "inactivation",
23 "membership",
24 "url",
25 "generic_relation_content_type",
26 "generic_relation_pk",
27 )
28 widgets = {
29 "config": HeavySelect2Widget(
30 data_view="api:communityroleconfig-list", attrs=SELECT2_SIDEBAR
31 ),
32 "person": ModelSelect2Widget(
33 data_view="person-lookup", attrs=SELECT2_SIDEBAR
34 ),
35 "award": ModelSelect2Widget(
36 data_view="award-lookup", attrs=SELECT2_SIDEBAR
37 ),
38 "membership": ModelSelect2Widget(
39 data_view="membership-lookup", attrs=SELECT2_SIDEBAR
40 ),
41 "generic_relation_content_type": forms.Select(
42 # "disabled" means the browsers will not send the field during POST.
43 # See how it's handled in `clean()` method below.
44 attrs={"disabled": ""},
45 ),
46 "generic_relation_pk": HeavySelect2Widget(
47 data_view="generic-object-lookup", attrs=SELECT2_SIDEBAR
48 ),
49 }
50 labels = {
51 "generic_relation_content_type": "Generic relation object type",
52 "generic_relation_pk": "Generic relation object",
53 }
54
55 class Media:
56 js = ("communityrole_form.js",)
57
58 def __init__(self, *args, **kwargs):
59 form_tag = kwargs.pop("form_tag", True)
60 super().__init__(*args, **kwargs)
61 bootstrap_kwargs = {
62 "add_cancel_button": False,
63 "form_tag": form_tag,
64 }
65 self.helper = BootstrapHelper(**bootstrap_kwargs)
66
67 def clean(self) -> dict[str, Any]:
68 """Validate form according to rules set up in related Community Role
69 configuration."""
70 cleaned_data = super().clean()
71 errors: defaultdict[str, list[ValidationError]] = defaultdict(list)
72 config: Optional[CommunityRoleConfig] = cleaned_data.get("config")
73
74 # Config is required, but field validation for 'config' should raise
75 # validation error first.
76 if not config:
77 return cleaned_data
78
79 # Award required?
80 if config.link_to_award and not cleaned_data.get("award"):
81 errors["award"].append(
82 ValidationError(f"Award is required with community role {config}")
83 )
84
85 # Specific award badge required?
86 if (badge := config.award_badge_limit) and (award := cleaned_data.get("award")):
87 if award.badge != badge:
88 errors["award"].append(
89 ValidationError(
90 f"Award badge must be {badge} for community role {config}"
91 )
92 )
93
94 # Membership required?
95 if config.link_to_membership and not cleaned_data.get("membership"):
96 errors["membership"].append(
97 ValidationError(f"Membership is required with community role {config}")
98 )
99
100 # Additional URL supported?
101 if not config.additional_url and cleaned_data.get("url"):
102 errors["url"].append(
103 ValidationError(f"URL is not supported for community role {config}")
104 )
105
106 # Widget for `generic_relation_content_type` is disabled in HTML, which
107 # makes browsers not send it. The code below sets the default value to
108 # the same value as in related config.
109 generic_relation_content_type = config.generic_relation_content_type
110
111 # Generic relation object must exist
112 if config.generic_relation_content_type and generic_relation_content_type:
113 model_class = generic_relation_content_type.model_class()
114 try:
115 model_class._base_manager.get(
116 pk=cleaned_data.get("generic_relation_pk")
117 )
118 except ObjectDoesNotExist:
119 errors["generic_relation_pk"].append(
120 ValidationError(
121 f"Generic relation object of model {model_class.__name__} "
122 "doesn't exist"
123 )
124 )
125
126 if errors:
127 raise ValidationError(errors)
128
129 return cleaned_data
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/amy/communityroles/forms.py b/amy/communityroles/forms.py
--- a/amy/communityroles/forms.py
+++ b/amy/communityroles/forms.py
@@ -127,3 +127,11 @@
raise ValidationError(errors)
return cleaned_data
+
+ def clean_end(self):
+ """Validate that end >= start"""
+ start = self.cleaned_data.get("start")
+ end = self.cleaned_data.get("end")
+ if start and end and end < start:
+ raise ValidationError("Must not be earlier than start date.")
+ return end
|
{"golden_diff": "diff --git a/amy/communityroles/forms.py b/amy/communityroles/forms.py\n--- a/amy/communityroles/forms.py\n+++ b/amy/communityroles/forms.py\n@@ -127,3 +127,11 @@\n raise ValidationError(errors)\n \n return cleaned_data\n+\n+ def clean_end(self):\n+ \"\"\"Validate that end >= start\"\"\"\n+ start = self.cleaned_data.get(\"start\")\n+ end = self.cleaned_data.get(\"end\")\n+ if start and end and end < start:\n+ raise ValidationError(\"Must not be earlier than start date.\")\n+ return end\n", "issue": "Community Roles: Date range validation\nCurrently, an end date earlier than start date is allowed. \n", "before_files": [{"content": "from collections import defaultdict\nfrom typing import Any, Optional\n\nfrom django import forms\nfrom django.core.exceptions import ObjectDoesNotExist, ValidationError\n\nfrom workshops.fields import HeavySelect2Widget, ModelSelect2Widget\nfrom workshops.forms import SELECT2_SIDEBAR, BootstrapHelper, WidgetOverrideMixin\n\nfrom .models import CommunityRole, CommunityRoleConfig\n\n\nclass CommunityRoleForm(WidgetOverrideMixin, forms.ModelForm):\n class Meta:\n model = CommunityRole\n fields = (\n \"config\",\n \"person\",\n \"award\",\n \"start\",\n \"end\",\n \"inactivation\",\n \"membership\",\n \"url\",\n \"generic_relation_content_type\",\n \"generic_relation_pk\",\n )\n widgets = {\n \"config\": HeavySelect2Widget(\n data_view=\"api:communityroleconfig-list\", attrs=SELECT2_SIDEBAR\n ),\n \"person\": ModelSelect2Widget(\n data_view=\"person-lookup\", attrs=SELECT2_SIDEBAR\n ),\n \"award\": ModelSelect2Widget(\n data_view=\"award-lookup\", attrs=SELECT2_SIDEBAR\n ),\n \"membership\": ModelSelect2Widget(\n data_view=\"membership-lookup\", attrs=SELECT2_SIDEBAR\n ),\n \"generic_relation_content_type\": forms.Select(\n # \"disabled\" means the browsers will not send the field during POST.\n # See how it's handled in `clean()` method below.\n attrs={\"disabled\": \"\"},\n ),\n \"generic_relation_pk\": HeavySelect2Widget(\n data_view=\"generic-object-lookup\", attrs=SELECT2_SIDEBAR\n ),\n }\n labels = {\n \"generic_relation_content_type\": \"Generic relation object type\",\n \"generic_relation_pk\": \"Generic relation object\",\n }\n\n class Media:\n js = (\"communityrole_form.js\",)\n\n def __init__(self, *args, **kwargs):\n form_tag = kwargs.pop(\"form_tag\", True)\n super().__init__(*args, **kwargs)\n bootstrap_kwargs = {\n \"add_cancel_button\": False,\n \"form_tag\": form_tag,\n }\n self.helper = BootstrapHelper(**bootstrap_kwargs)\n\n def clean(self) -> dict[str, Any]:\n \"\"\"Validate form according to rules set up in related Community Role\n configuration.\"\"\"\n cleaned_data = super().clean()\n errors: defaultdict[str, list[ValidationError]] = defaultdict(list)\n config: Optional[CommunityRoleConfig] = cleaned_data.get(\"config\")\n\n # Config is required, but field validation for 'config' should raise\n # validation error first.\n if not config:\n return cleaned_data\n\n # Award required?\n if config.link_to_award and not cleaned_data.get(\"award\"):\n errors[\"award\"].append(\n ValidationError(f\"Award is required with community role {config}\")\n )\n\n # Specific award badge required?\n if (badge := config.award_badge_limit) and (award := cleaned_data.get(\"award\")):\n if award.badge != badge:\n errors[\"award\"].append(\n ValidationError(\n f\"Award badge must be {badge} for community role {config}\"\n )\n )\n\n # Membership required?\n if config.link_to_membership and not cleaned_data.get(\"membership\"):\n 
errors[\"membership\"].append(\n ValidationError(f\"Membership is required with community role {config}\")\n )\n\n # Additional URL supported?\n if not config.additional_url and cleaned_data.get(\"url\"):\n errors[\"url\"].append(\n ValidationError(f\"URL is not supported for community role {config}\")\n )\n\n # Widget for `generic_relation_content_type` is disabled in HTML, which\n # makes browsers not send it. The code below sets the default value to\n # the same value as in related config.\n generic_relation_content_type = config.generic_relation_content_type\n\n # Generic relation object must exist\n if config.generic_relation_content_type and generic_relation_content_type:\n model_class = generic_relation_content_type.model_class()\n try:\n model_class._base_manager.get(\n pk=cleaned_data.get(\"generic_relation_pk\")\n )\n except ObjectDoesNotExist:\n errors[\"generic_relation_pk\"].append(\n ValidationError(\n f\"Generic relation object of model {model_class.__name__} \"\n \"doesn't exist\"\n )\n )\n\n if errors:\n raise ValidationError(errors)\n\n return cleaned_data\n", "path": "amy/communityroles/forms.py"}], "after_files": [{"content": "from collections import defaultdict\nfrom typing import Any, Optional\n\nfrom django import forms\nfrom django.core.exceptions import ObjectDoesNotExist, ValidationError\n\nfrom workshops.fields import HeavySelect2Widget, ModelSelect2Widget\nfrom workshops.forms import SELECT2_SIDEBAR, BootstrapHelper, WidgetOverrideMixin\n\nfrom .models import CommunityRole, CommunityRoleConfig\n\n\nclass CommunityRoleForm(WidgetOverrideMixin, forms.ModelForm):\n class Meta:\n model = CommunityRole\n fields = (\n \"config\",\n \"person\",\n \"award\",\n \"start\",\n \"end\",\n \"inactivation\",\n \"membership\",\n \"url\",\n \"generic_relation_content_type\",\n \"generic_relation_pk\",\n )\n widgets = {\n \"config\": HeavySelect2Widget(\n data_view=\"api:communityroleconfig-list\", attrs=SELECT2_SIDEBAR\n ),\n \"person\": ModelSelect2Widget(\n data_view=\"person-lookup\", attrs=SELECT2_SIDEBAR\n ),\n \"award\": ModelSelect2Widget(\n data_view=\"award-lookup\", attrs=SELECT2_SIDEBAR\n ),\n \"membership\": ModelSelect2Widget(\n data_view=\"membership-lookup\", attrs=SELECT2_SIDEBAR\n ),\n \"generic_relation_content_type\": forms.Select(\n # \"disabled\" means the browsers will not send the field during POST.\n # See how it's handled in `clean()` method below.\n attrs={\"disabled\": \"\"},\n ),\n \"generic_relation_pk\": HeavySelect2Widget(\n data_view=\"generic-object-lookup\", attrs=SELECT2_SIDEBAR\n ),\n }\n labels = {\n \"generic_relation_content_type\": \"Generic relation object type\",\n \"generic_relation_pk\": \"Generic relation object\",\n }\n\n class Media:\n js = (\"communityrole_form.js\",)\n\n def __init__(self, *args, **kwargs):\n form_tag = kwargs.pop(\"form_tag\", True)\n super().__init__(*args, **kwargs)\n bootstrap_kwargs = {\n \"add_cancel_button\": False,\n \"form_tag\": form_tag,\n }\n self.helper = BootstrapHelper(**bootstrap_kwargs)\n\n def clean(self) -> dict[str, Any]:\n \"\"\"Validate form according to rules set up in related Community Role\n configuration.\"\"\"\n cleaned_data = super().clean()\n errors: defaultdict[str, list[ValidationError]] = defaultdict(list)\n config: Optional[CommunityRoleConfig] = cleaned_data.get(\"config\")\n\n # Config is required, but field validation for 'config' should raise\n # validation error first.\n if not config:\n return cleaned_data\n\n # Award required?\n if config.link_to_award and not 
cleaned_data.get(\"award\"):\n errors[\"award\"].append(\n ValidationError(f\"Award is required with community role {config}\")\n )\n\n # Specific award badge required?\n if (badge := config.award_badge_limit) and (award := cleaned_data.get(\"award\")):\n if award.badge != badge:\n errors[\"award\"].append(\n ValidationError(\n f\"Award badge must be {badge} for community role {config}\"\n )\n )\n\n # Membership required?\n if config.link_to_membership and not cleaned_data.get(\"membership\"):\n errors[\"membership\"].append(\n ValidationError(f\"Membership is required with community role {config}\")\n )\n\n # Additional URL supported?\n if not config.additional_url and cleaned_data.get(\"url\"):\n errors[\"url\"].append(\n ValidationError(f\"URL is not supported for community role {config}\")\n )\n\n # Widget for `generic_relation_content_type` is disabled in HTML, which\n # makes browsers not send it. The code below sets the default value to\n # the same value as in related config.\n generic_relation_content_type = config.generic_relation_content_type\n\n # Generic relation object must exist\n if config.generic_relation_content_type and generic_relation_content_type:\n model_class = generic_relation_content_type.model_class()\n try:\n model_class._base_manager.get(\n pk=cleaned_data.get(\"generic_relation_pk\")\n )\n except ObjectDoesNotExist:\n errors[\"generic_relation_pk\"].append(\n ValidationError(\n f\"Generic relation object of model {model_class.__name__} \"\n \"doesn't exist\"\n )\n )\n\n if errors:\n raise ValidationError(errors)\n\n return cleaned_data\n\n def clean_end(self):\n \"\"\"Validate that end >= start\"\"\"\n start = self.cleaned_data.get(\"start\")\n end = self.cleaned_data.get(\"end\")\n if start and end and end < start:\n raise ValidationError(\"Must not be earlier than start date.\")\n return end\n", "path": "amy/communityroles/forms.py"}]}
| 1,491 | 130 |
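
The added `clean_end` hook is the whole fix in the record above: reject an end date that precedes the start date. A minimal plain-Python version of the same check follows; the standalone function is illustrative only — in AMY the logic lives inside the Django form's `clean_end` method.

```python
import datetime


def clean_end(start, end):
    """Reject an end date earlier than the start date (mirrors the form hook)."""
    if start and end and end < start:
        raise ValueError("Must not be earlier than start date.")
    return end


print(clean_end(datetime.date(2022, 1, 1), datetime.date(2022, 2, 1)))  # 2022-02-01
try:
    clean_end(datetime.date(2022, 2, 1), datetime.date(2022, 1, 1))
except ValueError as exc:
    print(exc)  # Must not be earlier than start date.
```
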
gh_patches_debug_12625
|
rasdani/github-patches
|
git_diff
|
HypothesisWorks__hypothesis-1530
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
License file in pypi archive
The pypi source archive isn't including the `LICENSE.txt` file. Would it be possible to add it? It is very helpful when packaging this for Linux distributions. The official python packaging guidelines also say [each package should have one](https://packaging.python.org/guides/distributing-packages-using-setuptools/#license-txt). Thank you!
Add keywords to setup.py
Testing with pyroma package showed keywords are missing from setup.py
```
willo@macbook ~/g/m/h/hypothesis-python> pyroma .
------------------------------
Checking .
Found hypothesis
------------------------------
Your package does not have keywords data.
------------------------------
Final rating: 9/10
Cottage Cheese
------------------------------
```
Discussed with @Zac-HD and keywords from the github repo below would be good to use.
python testing fuzzing property-based-testing
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hypothesis-python/setup.py`
Content:
```
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis, which may be found at
4 # https://github.com/HypothesisWorks/hypothesis-python
5 #
6 # Most of this work is copyright (C) 2013-2018 David R. MacIver
7 # ([email protected]), but it contains contributions by others. See
8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
9 # consult the git log if you need to determine who owns an individual
10 # contribution.
11 #
12 # This Source Code Form is subject to the terms of the Mozilla Public License,
13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
14 # obtain one at http://mozilla.org/MPL/2.0/.
15 #
16 # END HEADER
17
18 from __future__ import division, print_function, absolute_import
19
20 import os
21 import sys
22 import warnings
23
24 import setuptools
25
26
27 def local_file(name):
28 return os.path.relpath(os.path.join(os.path.dirname(__file__), name))
29
30
31 SOURCE = local_file('src')
32 README = local_file('README.rst')
33
34 setuptools_version = tuple(map(int, setuptools.__version__.split('.')[:2]))
35
36 if setuptools_version < (36, 2):
37 # Warning only - very bad if uploading bdist but fine if installing sdist.
38 warnings.warn(
39 'This version of setuptools is too old to correctly store '
40 'conditional dependencies in binary wheels. For more info, see: '
41 'https://hynek.me/articles/conditional-python-dependencies/'
42 )
43
44
45 # Assignment to placate pyflakes. The actual version is from the exec that
46 # follows.
47 __version__ = None
48
49 with open(local_file('src/hypothesis/version.py')) as o:
50 exec(o.read())
51
52 assert __version__ is not None
53
54
55 extras = {
56 'datetime': ['pytz'],
57 'pytz': ['pytz'],
58 'dateutil': ['python-dateutil'],
59 'fakefactory': ['Faker>=0.7'],
60 'numpy': ['numpy>=1.9.0'],
61 'pytest': ['pytest>=3.0'],
62 # We only support Django versions with upstream support - see
63 # https://www.djangoproject.com/download/#supported-versions
64 'django': ['pytz', 'django>=1.11'],
65 }
66
67 extras['faker'] = extras['fakefactory']
68 extras['all'] = sorted(sum(extras.values(), []))
69
70
71 install_requires = ['attrs>=16.0.0', 'coverage>=4.0']
72 # Using an environment marker on enum34 makes the dependency condition
73 # independent of the build environemnt, which is important for wheels.
74 # https://www.python.org/dev/peps/pep-0345/#environment-markers
75 if sys.version_info[0] < 3 and setuptools_version < (8, 0):
76 # Except really old systems, where we give up and install unconditionally
77 install_requires.append('enum34')
78 else:
79 install_requires.append('enum34; python_version=="2.7"')
80
81
82 setuptools.setup(
83 name='hypothesis',
84 version=__version__,
85 author='David R. MacIver',
86 author_email='[email protected]',
87 packages=setuptools.find_packages(SOURCE),
88 package_dir={'': SOURCE},
89 package_data={'hypothesis': ['py.typed']},
90 url=(
91 'https://github.com/HypothesisWorks/hypothesis/'
92 'tree/master/hypothesis-python'
93 ),
94 license='MPL v2',
95 description='A library for property based testing',
96 zip_safe=False,
97 extras_require=extras,
98 install_requires=install_requires,
99 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
100 classifiers=[
101 'Development Status :: 5 - Production/Stable',
102 'Intended Audience :: Developers',
103 'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)',
104 'Operating System :: Unix',
105 'Operating System :: POSIX',
106 'Operating System :: Microsoft :: Windows',
107 'Programming Language :: Python',
108 'Programming Language :: Python :: 2.7',
109 'Programming Language :: Python :: 3',
110 'Programming Language :: Python :: 3.4',
111 'Programming Language :: Python :: 3.5',
112 'Programming Language :: Python :: 3.6',
113 'Programming Language :: Python :: 3.7',
114 'Programming Language :: Python :: Implementation :: CPython',
115 'Programming Language :: Python :: Implementation :: PyPy',
116 'Topic :: Software Development :: Testing',
117 'Framework :: Pytest',
118 ],
119 entry_points={
120 'pytest11': ['hypothesispytest = hypothesis.extra.pytestplugin'],
121 },
122 long_description=open(README).read(),
123 )
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/hypothesis-python/setup.py b/hypothesis-python/setup.py
--- a/hypothesis-python/setup.py
+++ b/hypothesis-python/setup.py
@@ -53,9 +53,9 @@
extras = {
- 'datetime': ['pytz'],
- 'pytz': ['pytz'],
- 'dateutil': ['python-dateutil'],
+ 'datetime': ['pytz>=2014.1'],
+ 'pytz': ['pytz>=2014.1'],
+ 'dateutil': ['python-dateutil>=1.4'],
'fakefactory': ['Faker>=0.7'],
'numpy': ['numpy>=1.9.0'],
'pytest': ['pytest>=3.0'],
@@ -120,4 +120,5 @@
'pytest11': ['hypothesispytest = hypothesis.extra.pytestplugin'],
},
long_description=open(README).read(),
+ keywords='python testing fuzzing property-based-testing',
)
|
{"golden_diff": "diff --git a/hypothesis-python/setup.py b/hypothesis-python/setup.py\n--- a/hypothesis-python/setup.py\n+++ b/hypothesis-python/setup.py\n@@ -53,9 +53,9 @@\n \n \n extras = {\n- 'datetime': ['pytz'],\n- 'pytz': ['pytz'],\n- 'dateutil': ['python-dateutil'],\n+ 'datetime': ['pytz>=2014.1'],\n+ 'pytz': ['pytz>=2014.1'],\n+ 'dateutil': ['python-dateutil>=1.4'],\n 'fakefactory': ['Faker>=0.7'],\n 'numpy': ['numpy>=1.9.0'],\n 'pytest': ['pytest>=3.0'],\n@@ -120,4 +120,5 @@\n 'pytest11': ['hypothesispytest = hypothesis.extra.pytestplugin'],\n },\n long_description=open(README).read(),\n+ keywords='python testing fuzzing property-based-testing',\n )\n", "issue": "License file in pypi archive\nThe pypi source archive isn't including the `LICENSE.txt` file. Would it be possible to add it? It is very helpful when packaging this for Linux distributions. The official python packaging guidelines also say [each package should have one](https://packaging.python.org/guides/distributing-packages-using-setuptools/#license-txt). Thank you!\nAdd keywords to setup.py\nTesting with pyroma package showed keywords are missing from setup.py\r\n\r\n```\r\nwillo@macbook ~/g/m/h/hypothesis-python> pyroma .\r\n------------------------------\r\nChecking .\r\nFound hypothesis\r\n------------------------------\r\nYour package does not have keywords data.\r\n------------------------------\r\nFinal rating: 9/10\r\nCottage Cheese\r\n------------------------------\r\n```\r\nDiscussed with @Zac-HD and keywords from the github repo below would be good to use.\r\n\r\npython testing fuzzing property-based-testing\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport sys\nimport warnings\n\nimport setuptools\n\n\ndef local_file(name):\n return os.path.relpath(os.path.join(os.path.dirname(__file__), name))\n\n\nSOURCE = local_file('src')\nREADME = local_file('README.rst')\n\nsetuptools_version = tuple(map(int, setuptools.__version__.split('.')[:2]))\n\nif setuptools_version < (36, 2):\n # Warning only - very bad if uploading bdist but fine if installing sdist.\n warnings.warn(\n 'This version of setuptools is too old to correctly store '\n 'conditional dependencies in binary wheels. For more info, see: '\n 'https://hynek.me/articles/conditional-python-dependencies/'\n )\n\n\n# Assignment to placate pyflakes. 
The actual version is from the exec that\n# follows.\n__version__ = None\n\nwith open(local_file('src/hypothesis/version.py')) as o:\n exec(o.read())\n\nassert __version__ is not None\n\n\nextras = {\n 'datetime': ['pytz'],\n 'pytz': ['pytz'],\n 'dateutil': ['python-dateutil'],\n 'fakefactory': ['Faker>=0.7'],\n 'numpy': ['numpy>=1.9.0'],\n 'pytest': ['pytest>=3.0'],\n # We only support Django versions with upstream support - see\n # https://www.djangoproject.com/download/#supported-versions\n 'django': ['pytz', 'django>=1.11'],\n}\n\nextras['faker'] = extras['fakefactory']\nextras['all'] = sorted(sum(extras.values(), []))\n\n\ninstall_requires = ['attrs>=16.0.0', 'coverage>=4.0']\n# Using an environment marker on enum34 makes the dependency condition\n# independent of the build environemnt, which is important for wheels.\n# https://www.python.org/dev/peps/pep-0345/#environment-markers\nif sys.version_info[0] < 3 and setuptools_version < (8, 0):\n # Except really old systems, where we give up and install unconditionally\n install_requires.append('enum34')\nelse:\n install_requires.append('enum34; python_version==\"2.7\"')\n\n\nsetuptools.setup(\n name='hypothesis',\n version=__version__,\n author='David R. MacIver',\n author_email='[email protected]',\n packages=setuptools.find_packages(SOURCE),\n package_dir={'': SOURCE},\n package_data={'hypothesis': ['py.typed']},\n url=(\n 'https://github.com/HypothesisWorks/hypothesis/'\n 'tree/master/hypothesis-python'\n ),\n license='MPL v2',\n description='A library for property based testing',\n zip_safe=False,\n extras_require=extras,\n install_requires=install_requires,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)',\n 'Operating System :: Unix',\n 'Operating System :: POSIX',\n 'Operating System :: Microsoft :: Windows',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development :: Testing',\n 'Framework :: Pytest',\n ],\n entry_points={\n 'pytest11': ['hypothesispytest = hypothesis.extra.pytestplugin'],\n },\n long_description=open(README).read(),\n)\n", "path": "hypothesis-python/setup.py"}], "after_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport sys\nimport warnings\n\nimport setuptools\n\n\ndef local_file(name):\n return os.path.relpath(os.path.join(os.path.dirname(__file__), name))\n\n\nSOURCE = local_file('src')\nREADME = local_file('README.rst')\n\nsetuptools_version = tuple(map(int, setuptools.__version__.split('.')[:2]))\n\nif setuptools_version < (36, 2):\n # Warning only - very bad if uploading bdist but fine if installing sdist.\n warnings.warn(\n 'This version of setuptools is too old to correctly store '\n 'conditional dependencies in binary wheels. For more info, see: '\n 'https://hynek.me/articles/conditional-python-dependencies/'\n )\n\n\n# Assignment to placate pyflakes. The actual version is from the exec that\n# follows.\n__version__ = None\n\nwith open(local_file('src/hypothesis/version.py')) as o:\n exec(o.read())\n\nassert __version__ is not None\n\n\nextras = {\n 'datetime': ['pytz>=2014.1'],\n 'pytz': ['pytz>=2014.1'],\n 'dateutil': ['python-dateutil>=1.4'],\n 'fakefactory': ['Faker>=0.7'],\n 'numpy': ['numpy>=1.9.0'],\n 'pytest': ['pytest>=3.0'],\n # We only support Django versions with upstream support - see\n # https://www.djangoproject.com/download/#supported-versions\n 'django': ['pytz', 'django>=1.11'],\n}\n\nextras['faker'] = extras['fakefactory']\nextras['all'] = sorted(sum(extras.values(), []))\n\n\ninstall_requires = ['attrs>=16.0.0', 'coverage>=4.0']\n# Using an environment marker on enum34 makes the dependency condition\n# independent of the build environemnt, which is important for wheels.\n# https://www.python.org/dev/peps/pep-0345/#environment-markers\nif sys.version_info[0] < 3 and setuptools_version < (8, 0):\n # Except really old systems, where we give up and install unconditionally\n install_requires.append('enum34')\nelse:\n install_requires.append('enum34; python_version==\"2.7\"')\n\n\nsetuptools.setup(\n name='hypothesis',\n version=__version__,\n author='David R. 
MacIver',\n author_email='[email protected]',\n packages=setuptools.find_packages(SOURCE),\n package_dir={'': SOURCE},\n package_data={'hypothesis': ['py.typed']},\n url=(\n 'https://github.com/HypothesisWorks/hypothesis/'\n 'tree/master/hypothesis-python'\n ),\n license='MPL v2',\n description='A library for property based testing',\n zip_safe=False,\n extras_require=extras,\n install_requires=install_requires,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)',\n 'Operating System :: Unix',\n 'Operating System :: POSIX',\n 'Operating System :: Microsoft :: Windows',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development :: Testing',\n 'Framework :: Pytest',\n ],\n entry_points={\n 'pytest11': ['hypothesispytest = hypothesis.extra.pytestplugin'],\n },\n long_description=open(README).read(),\n keywords='python testing fuzzing property-based-testing',\n)\n", "path": "hypothesis-python/setup.py"}]}
| 1,768 | 223 |
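
Besides adding `keywords`, the patch in the record above pins minimum versions for the pytz and dateutil extras. The fragment below is a stripped-down illustration of those two metadata pieces under a generic package name — it is not the real Hypothesis setup.py, and like any setup script it only does useful work when driven by pip or a setuptools command.

```python
# sketch_setup.py -- illustrative metadata only, not the actual Hypothesis configuration
from setuptools import find_packages, setup

extras = {
    # Same pattern as the patch: name the oldest release each extra supports.
    "pytz": ["pytz>=2014.1"],
    "dateutil": ["python-dateutil>=1.4"],
}
extras["all"] = sorted(sum(extras.values(), []))

setup(
    name="example-package",
    version="0.1.0",
    packages=find_packages(),
    extras_require=extras,
    # Keywords show up on PyPI and satisfy checkers such as pyroma.
    keywords="python testing fuzzing property-based-testing",
)
```
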
gh_patches_debug_3972
|
rasdani/github-patches
|
git_diff
|
pytorch__audio-635
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
multiple definitions of `SoxEffect`.
`SoxEffect` is defined both in
https://github.com/pytorch/audio/blob/00d38203e401b8d9472a8f8394a10e2c309be02c/torchaudio/torch_sox.h#L50-L54
and
https://github.com/pytorch/audio/blob/00d38203e401b8d9472a8f8394a10e2c309be02c/torchaudio/torch_sox.cpp#L73-L77
-----
-----
`torch_sox.h` can be included into `torch_sox.cpp` so that `SoxEffect` needs to be defined
only once in the header.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import platform
4 import sys
5 import subprocess
6
7 from setuptools import setup, find_packages
8 from torch.utils.cpp_extension import BuildExtension, CppExtension
9
10
11 def check_env_flag(name, default=''):
12 return os.getenv(name, default).upper() in set(['ON', '1', 'YES', 'TRUE', 'Y'])
13
14 DEBUG = check_env_flag('DEBUG')
15 IS_WHEEL = check_env_flag('IS_WHEEL')
16 IS_CONDA = check_env_flag('IS_CONDA')
17
18 print('DEBUG:', DEBUG, 'IS_WHEEL:', IS_WHEEL, 'IS_CONDA:', IS_CONDA)
19
20 eca = []
21 ela = []
22 if DEBUG:
23 if platform.system() == 'Windows':
24 ela += ['/DEBUG:FULL']
25 else:
26 eca += ['-O0', '-g']
27 ela += ['-O0', '-g']
28
29
30 libraries = []
31 include_dirs = []
32 extra_objects = []
33
34 # Hypothetically, the conda distribution could rely on an external sox,
35 # but the library is pretty small and it is not available on the default
36 # anaconda channel. So we statically link it in, just as we do with wheels.
37 if IS_WHEEL or IS_CONDA:
38 audio_path = os.path.dirname(os.path.abspath(__file__))
39
40 include_dirs += [os.path.join(audio_path, 'third_party/flac/include')]
41 include_dirs += [os.path.join(audio_path, 'third_party/lame/include')]
42 include_dirs += [os.path.join(audio_path, 'third_party/sox/include')]
43 include_dirs += [os.path.join(audio_path, 'third_party/mad/include')]
44
45 # proper link order (sox, mad, flac, lame)
46 # (the most important thing is that dependencies come after a libraryl
47 # e.g., sox comes first)
48 extra_objects += [os.path.join(audio_path, 'third_party/sox/lib/libsox.a')]
49 extra_objects += [os.path.join(audio_path, 'third_party/mad/lib/libmad.a')]
50 extra_objects += [os.path.join(audio_path, 'third_party/flac/lib/libFLAC.a')]
51 extra_objects += [os.path.join(audio_path, 'third_party/lame/lib/libmp3lame.a')]
52 else:
53 libraries += ['sox']
54
55
56 # Creating the version file
57 cwd = os.path.dirname(os.path.abspath(__file__))
58 version = '0.6.0a0'
59 sha = 'Unknown'
60
61 try:
62 sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=cwd).decode('ascii').strip()
63 except Exception:
64 pass
65
66 if os.getenv('BUILD_VERSION'):
67 version = os.getenv('BUILD_VERSION')
68 elif sha != 'Unknown':
69 version += '+' + sha[:7]
70 print('-- Building version ' + version)
71
72 version_path = os.path.join(cwd, 'torchaudio', 'version.py')
73 with open(version_path, 'w') as f:
74 f.write("__version__ = '{}'\n".format(version))
75 f.write("git_version = {}\n".format(repr(sha)))
76
77 pytorch_package_version = os.getenv('PYTORCH_VERSION')
78
79 pytorch_package_dep = 'torch'
80 if pytorch_package_version is not None:
81 pytorch_package_dep += "==" + pytorch_package_version
82
83 if platform.system() == 'Windows':
84 ext_modules = None
85 else:
86 ext_modules = [
87 CppExtension(
88 '_torch_sox',
89 ['torchaudio/torch_sox.cpp'],
90 libraries=libraries,
91 include_dirs=include_dirs,
92 extra_compile_args=eca,
93 extra_objects=extra_objects,
94 extra_link_args=ela),
95 ]
96
97 setup(
98 name="torchaudio",
99 version=version,
100 description="An audio package for PyTorch",
101 url="https://github.com/pytorch/audio",
102 author="Soumith Chintala, David Pollack, Sean Naren, Peter Goldsborough",
103 author_email="[email protected]",
104 classifiers=[
105 "Environment :: Plugins",
106 "Intended Audience :: Developers",
107 "Intended Audience :: Science/Research",
108 "License :: OSI Approved :: BSD License",
109 "Operating System :: MacOS :: MacOS X",
110 "Operating System :: Microsoft :: Windows",
111 "Operating System :: POSIX",
112 "Programming Language :: C++",
113 "Programming Language :: Python :: 2.7",
114 "Programming Language :: Python :: 3",
115 "Programming Language :: Python :: Implementation :: CPython",
116 "Topic :: Multimedia :: Sound/Audio",
117 "Topic :: Scientific/Engineering :: Artificial Intelligence"
118 ],
119 # Exclude the build files.
120 packages=find_packages(exclude=["build"]),
121 ext_modules=ext_modules,
122 cmdclass={'build_ext': BuildExtension},
123 install_requires=[pytorch_package_dep]
124 )
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -88,7 +88,7 @@
'_torch_sox',
['torchaudio/torch_sox.cpp'],
libraries=libraries,
- include_dirs=include_dirs,
+ include_dirs=include_dirs + [cwd],
extra_compile_args=eca,
extra_objects=extra_objects,
extra_link_args=ela),
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -88,7 +88,7 @@\n '_torch_sox',\n ['torchaudio/torch_sox.cpp'],\n libraries=libraries,\n- include_dirs=include_dirs,\n+ include_dirs=include_dirs + [cwd],\n extra_compile_args=eca,\n extra_objects=extra_objects,\n extra_link_args=ela),\n", "issue": "multiple definitions of `SoxEffect`.\n`SoxEffect` is defined both in\r\n\r\nhttps://github.com/pytorch/audio/blob/00d38203e401b8d9472a8f8394a10e2c309be02c/torchaudio/torch_sox.h#L50-L54\r\n\r\nand\r\n\r\nhttps://github.com/pytorch/audio/blob/00d38203e401b8d9472a8f8394a10e2c309be02c/torchaudio/torch_sox.cpp#L73-L77\r\n\r\n\r\n-----\r\n\r\n-----\r\n\r\n`torch_sox.h` can be included into `torch_sox.cpp` so that `SoxEffect` needs to be defined\r\nonly once in the header.\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport platform\nimport sys\nimport subprocess\n\nfrom setuptools import setup, find_packages\nfrom torch.utils.cpp_extension import BuildExtension, CppExtension\n\n\ndef check_env_flag(name, default=''):\n return os.getenv(name, default).upper() in set(['ON', '1', 'YES', 'TRUE', 'Y'])\n\nDEBUG = check_env_flag('DEBUG')\nIS_WHEEL = check_env_flag('IS_WHEEL')\nIS_CONDA = check_env_flag('IS_CONDA')\n\nprint('DEBUG:', DEBUG, 'IS_WHEEL:', IS_WHEEL, 'IS_CONDA:', IS_CONDA)\n\neca = []\nela = []\nif DEBUG:\n if platform.system() == 'Windows':\n ela += ['/DEBUG:FULL']\n else:\n eca += ['-O0', '-g']\n ela += ['-O0', '-g']\n\n\nlibraries = []\ninclude_dirs = []\nextra_objects = []\n\n# Hypothetically, the conda distribution could rely on an external sox,\n# but the library is pretty small and it is not available on the default\n# anaconda channel. So we statically link it in, just as we do with wheels.\nif IS_WHEEL or IS_CONDA:\n audio_path = os.path.dirname(os.path.abspath(__file__))\n\n include_dirs += [os.path.join(audio_path, 'third_party/flac/include')]\n include_dirs += [os.path.join(audio_path, 'third_party/lame/include')]\n include_dirs += [os.path.join(audio_path, 'third_party/sox/include')]\n include_dirs += [os.path.join(audio_path, 'third_party/mad/include')]\n\n # proper link order (sox, mad, flac, lame)\n # (the most important thing is that dependencies come after a libraryl\n # e.g., sox comes first)\n extra_objects += [os.path.join(audio_path, 'third_party/sox/lib/libsox.a')]\n extra_objects += [os.path.join(audio_path, 'third_party/mad/lib/libmad.a')]\n extra_objects += [os.path.join(audio_path, 'third_party/flac/lib/libFLAC.a')]\n extra_objects += [os.path.join(audio_path, 'third_party/lame/lib/libmp3lame.a')]\nelse:\n libraries += ['sox']\n\n\n# Creating the version file\ncwd = os.path.dirname(os.path.abspath(__file__))\nversion = '0.6.0a0'\nsha = 'Unknown'\n\ntry:\n sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=cwd).decode('ascii').strip()\nexcept Exception:\n pass\n\nif os.getenv('BUILD_VERSION'):\n version = os.getenv('BUILD_VERSION')\nelif sha != 'Unknown':\n version += '+' + sha[:7]\nprint('-- Building version ' + version)\n\nversion_path = os.path.join(cwd, 'torchaudio', 'version.py')\nwith open(version_path, 'w') as f:\n f.write(\"__version__ = '{}'\\n\".format(version))\n f.write(\"git_version = {}\\n\".format(repr(sha)))\n\npytorch_package_version = os.getenv('PYTORCH_VERSION')\n\npytorch_package_dep = 'torch'\nif pytorch_package_version is not None:\n pytorch_package_dep += \"==\" + pytorch_package_version\n\nif platform.system() == 'Windows':\n ext_modules = None\nelse:\n ext_modules = [\n 
CppExtension(\n '_torch_sox',\n ['torchaudio/torch_sox.cpp'],\n libraries=libraries,\n include_dirs=include_dirs,\n extra_compile_args=eca,\n extra_objects=extra_objects,\n extra_link_args=ela),\n ]\n\nsetup(\n name=\"torchaudio\",\n version=version,\n description=\"An audio package for PyTorch\",\n url=\"https://github.com/pytorch/audio\",\n author=\"Soumith Chintala, David Pollack, Sean Naren, Peter Goldsborough\",\n author_email=\"[email protected]\",\n classifiers=[\n \"Environment :: Plugins\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: POSIX\",\n \"Programming Language :: C++\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Topic :: Multimedia :: Sound/Audio\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\"\n ],\n # Exclude the build files.\n packages=find_packages(exclude=[\"build\"]),\n ext_modules=ext_modules,\n cmdclass={'build_ext': BuildExtension},\n install_requires=[pytorch_package_dep]\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\nimport platform\nimport sys\nimport subprocess\n\nfrom setuptools import setup, find_packages\nfrom torch.utils.cpp_extension import BuildExtension, CppExtension\n\n\ndef check_env_flag(name, default=''):\n return os.getenv(name, default).upper() in set(['ON', '1', 'YES', 'TRUE', 'Y'])\n\nDEBUG = check_env_flag('DEBUG')\nIS_WHEEL = check_env_flag('IS_WHEEL')\nIS_CONDA = check_env_flag('IS_CONDA')\n\nprint('DEBUG:', DEBUG, 'IS_WHEEL:', IS_WHEEL, 'IS_CONDA:', IS_CONDA)\n\neca = []\nela = []\nif DEBUG:\n if platform.system() == 'Windows':\n ela += ['/DEBUG:FULL']\n else:\n eca += ['-O0', '-g']\n ela += ['-O0', '-g']\n\n\nlibraries = []\ninclude_dirs = []\nextra_objects = []\n\n# Hypothetically, the conda distribution could rely on an external sox,\n# but the library is pretty small and it is not available on the default\n# anaconda channel. 
So we statically link it in, just as we do with wheels.\nif IS_WHEEL or IS_CONDA:\n audio_path = os.path.dirname(os.path.abspath(__file__))\n\n include_dirs += [os.path.join(audio_path, 'third_party/flac/include')]\n include_dirs += [os.path.join(audio_path, 'third_party/lame/include')]\n include_dirs += [os.path.join(audio_path, 'third_party/sox/include')]\n include_dirs += [os.path.join(audio_path, 'third_party/mad/include')]\n\n # proper link order (sox, mad, flac, lame)\n # (the most important thing is that dependencies come after a libraryl\n # e.g., sox comes first)\n extra_objects += [os.path.join(audio_path, 'third_party/sox/lib/libsox.a')]\n extra_objects += [os.path.join(audio_path, 'third_party/mad/lib/libmad.a')]\n extra_objects += [os.path.join(audio_path, 'third_party/flac/lib/libFLAC.a')]\n extra_objects += [os.path.join(audio_path, 'third_party/lame/lib/libmp3lame.a')]\nelse:\n libraries += ['sox']\n\n\n# Creating the version file\ncwd = os.path.dirname(os.path.abspath(__file__))\nversion = '0.6.0a0'\nsha = 'Unknown'\n\ntry:\n sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=cwd).decode('ascii').strip()\nexcept Exception:\n pass\n\nif os.getenv('BUILD_VERSION'):\n version = os.getenv('BUILD_VERSION')\nelif sha != 'Unknown':\n version += '+' + sha[:7]\nprint('-- Building version ' + version)\n\nversion_path = os.path.join(cwd, 'torchaudio', 'version.py')\nwith open(version_path, 'w') as f:\n f.write(\"__version__ = '{}'\\n\".format(version))\n f.write(\"git_version = {}\\n\".format(repr(sha)))\n\npytorch_package_version = os.getenv('PYTORCH_VERSION')\n\npytorch_package_dep = 'torch'\nif pytorch_package_version is not None:\n pytorch_package_dep += \"==\" + pytorch_package_version\n\nif platform.system() == 'Windows':\n ext_modules = None\nelse:\n ext_modules = [\n CppExtension(\n '_torch_sox',\n ['torchaudio/torch_sox.cpp'],\n libraries=libraries,\n include_dirs=include_dirs + [cwd],\n extra_compile_args=eca,\n extra_objects=extra_objects,\n extra_link_args=ela),\n ]\n\nsetup(\n name=\"torchaudio\",\n version=version,\n description=\"An audio package for PyTorch\",\n url=\"https://github.com/pytorch/audio\",\n author=\"Soumith Chintala, David Pollack, Sean Naren, Peter Goldsborough\",\n author_email=\"[email protected]\",\n classifiers=[\n \"Environment :: Plugins\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: POSIX\",\n \"Programming Language :: C++\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Topic :: Multimedia :: Sound/Audio\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\"\n ],\n # Exclude the build files.\n packages=find_packages(exclude=[\"build\"]),\n ext_modules=ext_modules,\n cmdclass={'build_ext': BuildExtension},\n install_requires=[pytorch_package_dep]\n)\n", "path": "setup.py"}]}
| 1,741 | 94 |
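
The one-line fix in the record above adds the repository root (`cwd`) to the extension's include path, which is what lets `torch_sox.cpp` include `torch_sox.h` and drop its duplicate `SoxEffect` definition. The snippet below shows the same idea with a plain `setuptools.Extension` so it does not require PyTorch to be installed; the paths are reused from the record purely for illustration.

```python
import os

from setuptools import Extension

# Repository root; with this on the include path, torch_sox.cpp can simply
# `#include "torchaudio/torch_sox.h"` instead of redefining SoxEffect.
cwd = os.path.dirname(os.path.abspath(__file__))

ext = Extension(
    name="_torch_sox",
    sources=["torchaudio/torch_sox.cpp"],
    include_dirs=[cwd],
)
print(ext.include_dirs)  # a one-element list holding the repository root
```
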
gh_patches_debug_18014
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-1606
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tf1 plugin better stream
Hi,
Are you sure there isn't a better stream than 360p for TF1?
[cli][info] Found matching plugin tf1 for URL https://www.tf1.fr/tf1/direct
[cli][info] Available streams: 496k (worst), 234p_alt, 234p, 896k, 360p_alt, 360p (best)
I get far better quality in a web browser than with the best stream here.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/tf1.py`
Content:
```
1 from __future__ import print_function
2 import re
3
4 from streamlink.plugin import Plugin
5 from streamlink.plugin.api import http, useragents
6 from streamlink.stream import HDSStream
7 from streamlink.stream import HLSStream
8
9
10 class TF1(Plugin):
11 url_re = re.compile(r"https?://(?:www\.)?(?:tf1\.fr/(tf1|tmc|tfx|tf1-series-films)/direct|(lci).fr/direct)/?")
12 embed_url = "http://www.wat.tv/embedframe/live{0}"
13 embed_re = re.compile(r"urlLive.*?:.*?\"(http.*?)\"", re.MULTILINE)
14 api_url = "http://www.wat.tv/get/{0}/591997"
15 swf_url = "http://www.wat.tv/images/v70/PlayerLite.swf"
16 hds_channel_remap = {"tf1": "androidliveconnect", "lci": "androidlivelci", "tfx" : "nt1live", "tf1-series-films" : "hd1live" }
17 hls_channel_remap = {"lci": "LCI", "tf1": "V4", "tfx" : "nt1", "tf1-series-films" : "hd1" }
18
19 @classmethod
20 def can_handle_url(cls, url):
21 return cls.url_re.match(url) is not None
22
23 def _get_hds_streams(self, channel):
24 channel = self.hds_channel_remap.get(channel, "{0}live".format(channel))
25 manifest_url = http.get(self.api_url.format(channel),
26 params={"getURL": 1},
27 headers={"User-Agent": useragents.FIREFOX}).text
28
29 for s in HDSStream.parse_manifest(self.session,
30 manifest_url,
31 pvswf=self.swf_url,
32 headers={"User-Agent": useragents.FIREFOX}).items():
33 yield s
34
35 def _get_hls_streams(self, channel):
36 channel = self.hls_channel_remap.get(channel, channel)
37 embed_url = self.embed_url.format(channel)
38 self.logger.debug("Found embed URL: {0}", embed_url)
39 # page needs to have a mobile user agent
40 embed_page = http.get(embed_url, headers={"User-Agent": useragents.ANDROID})
41
42 m = self.embed_re.search(embed_page.text)
43 if m:
44 hls_stream_url = m.group(1)
45
46 try:
47 for s in HLSStream.parse_variant_playlist(self.session, hls_stream_url).items():
48 yield s
49 except Exception:
50 self.logger.error("Failed to load the HLS playlist for {0}", channel)
51
52 def _get_streams(self):
53 m = self.url_re.match(self.url)
54 if m:
55 channel = m.group(1) or m.group(2)
56 self.logger.debug("Found channel {0}", channel)
57 for s in self._get_hds_streams(channel):
58 yield s
59
60 for s in self._get_hls_streams(channel):
61 yield s
62
63
64 __plugin__ = TF1
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink/plugins/tf1.py b/src/streamlink/plugins/tf1.py
--- a/src/streamlink/plugins/tf1.py
+++ b/src/streamlink/plugins/tf1.py
@@ -1,6 +1,7 @@
from __future__ import print_function
import re
+from streamlink.compat import urlparse, parse_qsl
from streamlink.plugin import Plugin
from streamlink.plugin.api import http, useragents
from streamlink.stream import HDSStream
@@ -41,8 +42,9 @@
m = self.embed_re.search(embed_page.text)
if m:
- hls_stream_url = m.group(1)
-
+ o = urlparse(m.group(1))
+ prms = dict(parse_qsl(o.query))
+ hls_stream_url = "{0}://{1}{2}?hdnea={3}".format(o.scheme, o.netloc, o.path, prms["hdnea"])
try:
for s in HLSStream.parse_variant_playlist(self.session, hls_stream_url).items():
yield s
|
{"golden_diff": "diff --git a/src/streamlink/plugins/tf1.py b/src/streamlink/plugins/tf1.py\n--- a/src/streamlink/plugins/tf1.py\n+++ b/src/streamlink/plugins/tf1.py\n@@ -1,6 +1,7 @@\n from __future__ import print_function\n import re\n \n+from streamlink.compat import urlparse, parse_qsl\n from streamlink.plugin import Plugin\n from streamlink.plugin.api import http, useragents\n from streamlink.stream import HDSStream\n@@ -41,8 +42,9 @@\n \n m = self.embed_re.search(embed_page.text)\n if m:\n- hls_stream_url = m.group(1)\n-\n+ o = urlparse(m.group(1))\n+ prms = dict(parse_qsl(o.query))\n+ hls_stream_url = \"{0}://{1}{2}?hdnea={3}\".format(o.scheme, o.netloc, o.path, prms[\"hdnea\"])\n try:\n for s in HLSStream.parse_variant_playlist(self.session, hls_stream_url).items():\n yield s\n", "issue": "tf1 plugin better stream\nHi,\r\n\r\nAre you sure there isn't better stream than 360p for TF1 ?\r\n[cli][info] Found matching plugin tf1 for URL https://www.tf1.fr/tf1/direct\r\n[cli][info] Available streams: 496k (worst), 234p_alt, 234p, 896k, 360p_alt, 360p (best)\r\n\r\nI have a far better quality with a web browser than best.\n", "before_files": [{"content": "from __future__ import print_function\nimport re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http, useragents\nfrom streamlink.stream import HDSStream\nfrom streamlink.stream import HLSStream\n\n\nclass TF1(Plugin):\n url_re = re.compile(r\"https?://(?:www\\.)?(?:tf1\\.fr/(tf1|tmc|tfx|tf1-series-films)/direct|(lci).fr/direct)/?\")\n embed_url = \"http://www.wat.tv/embedframe/live{0}\"\n embed_re = re.compile(r\"urlLive.*?:.*?\\\"(http.*?)\\\"\", re.MULTILINE)\n api_url = \"http://www.wat.tv/get/{0}/591997\"\n swf_url = \"http://www.wat.tv/images/v70/PlayerLite.swf\"\n hds_channel_remap = {\"tf1\": \"androidliveconnect\", \"lci\": \"androidlivelci\", \"tfx\" : \"nt1live\", \"tf1-series-films\" : \"hd1live\" }\n hls_channel_remap = {\"lci\": \"LCI\", \"tf1\": \"V4\", \"tfx\" : \"nt1\", \"tf1-series-films\" : \"hd1\" }\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n def _get_hds_streams(self, channel):\n channel = self.hds_channel_remap.get(channel, \"{0}live\".format(channel))\n manifest_url = http.get(self.api_url.format(channel),\n params={\"getURL\": 1},\n headers={\"User-Agent\": useragents.FIREFOX}).text\n\n for s in HDSStream.parse_manifest(self.session,\n manifest_url,\n pvswf=self.swf_url,\n headers={\"User-Agent\": useragents.FIREFOX}).items():\n yield s\n\n def _get_hls_streams(self, channel):\n channel = self.hls_channel_remap.get(channel, channel)\n embed_url = self.embed_url.format(channel)\n self.logger.debug(\"Found embed URL: {0}\", embed_url)\n # page needs to have a mobile user agent\n embed_page = http.get(embed_url, headers={\"User-Agent\": useragents.ANDROID})\n\n m = self.embed_re.search(embed_page.text)\n if m:\n hls_stream_url = m.group(1)\n\n try:\n for s in HLSStream.parse_variant_playlist(self.session, hls_stream_url).items():\n yield s\n except Exception:\n self.logger.error(\"Failed to load the HLS playlist for {0}\", channel)\n\n def _get_streams(self):\n m = self.url_re.match(self.url)\n if m:\n channel = m.group(1) or m.group(2)\n self.logger.debug(\"Found channel {0}\", channel)\n for s in self._get_hds_streams(channel):\n yield s\n\n for s in self._get_hls_streams(channel):\n yield s\n\n\n__plugin__ = TF1\n", "path": "src/streamlink/plugins/tf1.py"}], "after_files": [{"content": "from __future__ import print_function\nimport 
re\n\nfrom streamlink.compat import urlparse, parse_qsl\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http, useragents\nfrom streamlink.stream import HDSStream\nfrom streamlink.stream import HLSStream\n\n\nclass TF1(Plugin):\n url_re = re.compile(r\"https?://(?:www\\.)?(?:tf1\\.fr/(tf1|tmc|tfx|tf1-series-films)/direct|(lci).fr/direct)/?\")\n embed_url = \"http://www.wat.tv/embedframe/live{0}\"\n embed_re = re.compile(r\"urlLive.*?:.*?\\\"(http.*?)\\\"\", re.MULTILINE)\n api_url = \"http://www.wat.tv/get/{0}/591997\"\n swf_url = \"http://www.wat.tv/images/v70/PlayerLite.swf\"\n hds_channel_remap = {\"tf1\": \"androidliveconnect\", \"lci\": \"androidlivelci\", \"tfx\" : \"nt1live\", \"tf1-series-films\" : \"hd1live\" }\n hls_channel_remap = {\"lci\": \"LCI\", \"tf1\": \"V4\", \"tfx\" : \"nt1\", \"tf1-series-films\" : \"hd1\" }\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n def _get_hds_streams(self, channel):\n channel = self.hds_channel_remap.get(channel, \"{0}live\".format(channel))\n manifest_url = http.get(self.api_url.format(channel),\n params={\"getURL\": 1},\n headers={\"User-Agent\": useragents.FIREFOX}).text\n\n for s in HDSStream.parse_manifest(self.session,\n manifest_url,\n pvswf=self.swf_url,\n headers={\"User-Agent\": useragents.FIREFOX}).items():\n yield s\n\n def _get_hls_streams(self, channel):\n channel = self.hls_channel_remap.get(channel, channel)\n embed_url = self.embed_url.format(channel)\n self.logger.debug(\"Found embed URL: {0}\", embed_url)\n # page needs to have a mobile user agent\n embed_page = http.get(embed_url, headers={\"User-Agent\": useragents.ANDROID})\n\n m = self.embed_re.search(embed_page.text)\n if m:\n o = urlparse(m.group(1))\n prms = dict(parse_qsl(o.query))\n hls_stream_url = \"{0}://{1}{2}?hdnea={3}\".format(o.scheme, o.netloc, o.path, prms[\"hdnea\"])\n try:\n for s in HLSStream.parse_variant_playlist(self.session, hls_stream_url).items():\n yield s\n except Exception:\n self.logger.error(\"Failed to load the HLS playlist for {0}\", channel)\n\n def _get_streams(self):\n m = self.url_re.match(self.url)\n if m:\n channel = m.group(1) or m.group(2)\n self.logger.debug(\"Found channel {0}\", channel)\n for s in self._get_hds_streams(channel):\n yield s\n\n for s in self._get_hls_streams(channel):\n yield s\n\n\n__plugin__ = TF1\n", "path": "src/streamlink/plugins/tf1.py"}]}
| 1,164 | 230 |
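
The patch in the record above rebuilds the embed URL so that only the `hdnea` authentication token survives in the query string — presumably the other parameters were capping the variant playlist at 360p. A self-contained illustration of that rewrite follows, using the standard library directly instead of `streamlink.compat`; the URL is invented for the example and is not a real TF1 endpoint.

```python
from urllib.parse import parse_qsl, urlparse

# Made-up embed URL: an auth token plus a bitrate cap we want to drop.
raw = "https://example.invalid/live/tf1.m3u8?hdnea=exp=123~hmac=abc&max_bitrate=400000"

o = urlparse(raw)
params = dict(parse_qsl(o.query))
hls_stream_url = "{0}://{1}{2}?hdnea={3}".format(o.scheme, o.netloc, o.path, params["hdnea"])

print(hls_stream_url)  # https://example.invalid/live/tf1.m3u8?hdnea=exp=123~hmac=abc
```
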
gh_patches_debug_1466
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-1813
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Red input fields on first page load
When loading some pages, input fields are reddish, suggesting there were validation errors, even though nothing has been submitted yet.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/users/fields.py`
Content:
```
1 import re
2
3 from django import forms
4 from django.core.exceptions import ValidationError
5 from django.core.validators import EmailValidator
6 from django.forms import widgets
7 from django.utils.translation import ugettext_lazy as _
8
9
10 class CommaSeparatedEmailField(forms.Field):
11 email_validator = EmailValidator(
12 message=_('Please enter correct email addresses, separated by '
13 'commas.')
14 )
15
16 widget = widgets.TextInput(attrs={
17 'placeholder': '[email protected], [email protected], '
18 '[email protected],…'
19 })
20
21 def to_python(self, value):
22 if not value:
23 return []
24
25 emails = []
26 for email in value.split(','):
27 email = email.strip()
28 self.email_validator(email)
29 emails.append(email)
30
31 return emails
32
33
34 class EmailFileField(forms.FileField):
35 """Extract emails from uploaded text files."""
36
37 widget = widgets.FileInput
38 # Find possible email strings. Emails may be quoted and separated by
39 # whitespaces, commas, semicolons or < and >.
40 email_regex = re.compile(r'[^\s;,"\'<]+@[^\s;,"\'>]+\.[a-z]{2,}')
41 email_validator = EmailValidator()
42
43 def clean(self, data, initial=None):
44 file = super().clean(data, initial)
45 return self._extract_emails(file)
46
47 def _extract_emails(self, file):
48 if not file:
49 return []
50
51 emails = []
52 for byteline in file:
53 # As it is difficult to guess the correct encoding of a file,
54 # email addresses are restricted to contain only ascii letters.
55 # This works for every encoding which is a superset of ascii like
56 # utf-8 and latin-1. Non ascii chars are simply ignored.
57 line = byteline.decode('ascii', 'ignore')
58 for match in self.email_regex.finditer(line):
59 email = match.group(0)
60 if self.is_valid_email(email):
61 emails.append(email)
62 return emails
63
64 def is_valid_email(self, email):
65 try:
66 self.email_validator(email)
67 return True
68 except ValidationError:
69 return False
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/meinberlin/apps/users/fields.py b/meinberlin/apps/users/fields.py
--- a/meinberlin/apps/users/fields.py
+++ b/meinberlin/apps/users/fields.py
@@ -14,8 +14,7 @@
)
widget = widgets.TextInput(attrs={
- 'placeholder': '[email protected], [email protected], '
- '[email protected],…'
+ 'placeholder': '[email protected], [email protected],…'
})
def to_python(self, value):
|
{"golden_diff": "diff --git a/meinberlin/apps/users/fields.py b/meinberlin/apps/users/fields.py\n--- a/meinberlin/apps/users/fields.py\n+++ b/meinberlin/apps/users/fields.py\n@@ -14,8 +14,7 @@\n )\n \n widget = widgets.TextInput(attrs={\n- 'placeholder': '[email protected], [email protected], '\n- '[email protected],\u2026'\n+ 'placeholder': '[email protected], [email protected],\u2026'\n })\n \n def to_python(self, value):\n", "issue": "Red input fields on first page load\nWhen loading some pages, input fields are redish, suggesting there where validation errors, but there was not input submit yet.\r\n\r\n\n", "before_files": [{"content": "import re\n\nfrom django import forms\nfrom django.core.exceptions import ValidationError\nfrom django.core.validators import EmailValidator\nfrom django.forms import widgets\nfrom django.utils.translation import ugettext_lazy as _\n\n\nclass CommaSeparatedEmailField(forms.Field):\n email_validator = EmailValidator(\n message=_('Please enter correct email addresses, separated by '\n 'commas.')\n )\n\n widget = widgets.TextInput(attrs={\n 'placeholder': '[email protected], [email protected], '\n '[email protected],\u2026'\n })\n\n def to_python(self, value):\n if not value:\n return []\n\n emails = []\n for email in value.split(','):\n email = email.strip()\n self.email_validator(email)\n emails.append(email)\n\n return emails\n\n\nclass EmailFileField(forms.FileField):\n \"\"\"Extract emails from uploaded text files.\"\"\"\n\n widget = widgets.FileInput\n # Find possible email strings. Emails may be quoted and separated by\n # whitespaces, commas, semicolons or < and >.\n email_regex = re.compile(r'[^\\s;,\"\\'<]+@[^\\s;,\"\\'>]+\\.[a-z]{2,}')\n email_validator = EmailValidator()\n\n def clean(self, data, initial=None):\n file = super().clean(data, initial)\n return self._extract_emails(file)\n\n def _extract_emails(self, file):\n if not file:\n return []\n\n emails = []\n for byteline in file:\n # As it is difficult to guess the correct encoding of a file,\n # email addresses are restricted to contain only ascii letters.\n # This works for every encoding which is a superset of ascii like\n # utf-8 and latin-1. Non ascii chars are simply ignored.\n line = byteline.decode('ascii', 'ignore')\n for match in self.email_regex.finditer(line):\n email = match.group(0)\n if self.is_valid_email(email):\n emails.append(email)\n return emails\n\n def is_valid_email(self, email):\n try:\n self.email_validator(email)\n return True\n except ValidationError:\n return False\n", "path": "meinberlin/apps/users/fields.py"}], "after_files": [{"content": "import re\n\nfrom django import forms\nfrom django.core.exceptions import ValidationError\nfrom django.core.validators import EmailValidator\nfrom django.forms import widgets\nfrom django.utils.translation import ugettext_lazy as _\n\n\nclass CommaSeparatedEmailField(forms.Field):\n email_validator = EmailValidator(\n message=_('Please enter correct email addresses, separated by '\n 'commas.')\n )\n\n widget = widgets.TextInput(attrs={\n 'placeholder': '[email protected], [email protected],\u2026'\n })\n\n def to_python(self, value):\n if not value:\n return []\n\n emails = []\n for email in value.split(','):\n email = email.strip()\n self.email_validator(email)\n emails.append(email)\n\n return emails\n\n\nclass EmailFileField(forms.FileField):\n \"\"\"Extract emails from uploaded text files.\"\"\"\n\n widget = widgets.FileInput\n # Find possible email strings. 
Emails may be quoted and separated by\n # whitespaces, commas, semicolons or < and >.\n email_regex = re.compile(r'[^\\s;,\"\\'<]+@[^\\s;,\"\\'>]+\\.[a-z]{2,}')\n email_validator = EmailValidator()\n\n def clean(self, data, initial=None):\n file = super().clean(data, initial)\n return self._extract_emails(file)\n\n def _extract_emails(self, file):\n if not file:\n return []\n\n emails = []\n for byteline in file:\n # As it is difficult to guess the correct encoding of a file,\n # email addresses are restricted to contain only ascii letters.\n # This works for every encoding which is a superset of ascii like\n # utf-8 and latin-1. Non ascii chars are simply ignored.\n line = byteline.decode('ascii', 'ignore')\n for match in self.email_regex.finditer(line):\n email = match.group(0)\n if self.is_valid_email(email):\n emails.append(email)\n return emails\n\n def is_valid_email(self, email):\n try:\n self.email_validator(email)\n return True\n except ValidationError:\n return False\n", "path": "meinberlin/apps/users/fields.py"}]}
| 966 | 122 |
gh_patches_debug_7470
|
rasdani/github-patches
|
git_diff
|
cal-itp__benefits-661
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Engineering: Back-end - Combine logic for Agency Index and Getting Started page
_As a MST Senior Rider, when I go to benefits.calitp.org, I should see the new Introduction page design and content._
- Change the app logic to the following -
1. When there is only 1 transit agency, go directly to the agency index page (/mst)
2. From the agency index page, go directly to the verifier selection page (/eligibility/)
3. When there is only 1 eligibility verifier, go directly to the only start page (Login.gov verifier)
- Write tests for this new logic
- Update sample data for this new logic (??) so we can adequately test this in Dev, Staging
## How it is now
```mermaid
graph LR
A[core:index] -->|redirect| B(core:agency_index)
    B -->|user clicks button| C(eligibility:index)
C -->|redirect| D(eligibility:start)
```
## After this is implemented
```mermaid
graph LR
A[core:index] -->|redirect| B(core:agency_index)
    B -->|*redirect*| C(eligibility:index)
C -->|redirect| D(eligibility:start)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `benefits/core/views.py`
Content:
```
1 """
2 The core application: view definition for the root of the webapp.
3 """
4 from django.http import HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError
5 from django.shortcuts import redirect
6 from django.template import loader
7 from django.template.response import TemplateResponse
8 from django.urls import reverse
9 from django.utils.translation import gettext as _
10
11 from . import models, session, viewmodels
12 from .middleware import pageview_decorator
13
14
15 def PageTemplateResponse(request, page_vm):
16 """Helper returns a TemplateResponse using the common page template."""
17 return TemplateResponse(request, "core/page.html", page_vm.context_dict())
18
19
20 def _index_content_title():
21 """Helper returns the content title for the common index page."""
22 return _("core.pages.index.content_title")
23
24
25 def _index_url():
26 """Helper computes the index url path."""
27 return reverse("core:index")
28
29
30 @pageview_decorator
31 def index(request):
32 """View handler for the main entry page."""
33 session.reset(request)
34
35 agencies = models.TransitAgency.all_active()
36
37 if len(agencies) == 1:
38 agency = agencies[0]
39 return redirect(agency.index_url)
40
41 # generate a button to the landing page for each active agency
42 buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]
43 buttons[0].classes.append("mt-3")
44 buttons[0].label = _("core.pages.index.chooseprovider")
45
46 page = viewmodels.Page(
47 title=_("core.pages.index.title"),
48 content_title=_index_content_title(),
49 buttons=buttons,
50 classes="home",
51 )
52
53 return PageTemplateResponse(request, page)
54
55
56 @pageview_decorator
57 def agency_index(request, agency):
58 """View handler for an agency entry page."""
59 session.reset(request)
60 session.update(request, agency=agency, origin=agency.index_url)
61
62 button = viewmodels.Button.primary(text=_("core.pages.index.continue"), url=reverse("eligibility:index"))
63 button.label = _("core.pages.agency_index.button.label")
64
65 page = viewmodels.Page(
66 title=_("core.pages.agency_index.title"),
67 content_title=_("core.pages.agency_index.content_title"),
68 button=button,
69 classes="home",
70 )
71
72 help_page = reverse("core:help")
73 context_dict = {**page.context_dict(), **{"info_link": f"{help_page}#about"}}
74
75 return TemplateResponse(request, "core/agency_index.html", context_dict)
76
77
78 @pageview_decorator
79 def help(request):
80 """View handler for the help page."""
81 if session.active_agency(request):
82 agency = session.agency(request)
83 buttons = viewmodels.Button.agency_contact_links(agency)
84 else:
85 buttons = [btn for a in models.TransitAgency.all_active() for btn in viewmodels.Button.agency_contact_links(a)]
86
87 buttons.append(viewmodels.Button.home(request, _("core.buttons.back")))
88
89 page = viewmodels.Page(
90 title=_("core.buttons.help"),
91 content_title=_("core.buttons.help"),
92 buttons=buttons,
93 noimage=True,
94 )
95
96 return TemplateResponse(request, "core/help.html", page.context_dict())
97
98
99 @pageview_decorator
100 def bad_request(request, exception, template_name="400.html"):
101 """View handler for HTTP 400 Bad Request responses."""
102 if session.active_agency(request):
103 session.update(request, origin=session.agency(request).index_url)
104 else:
105 session.update(request, origin=_index_url())
106
107 home = viewmodels.Button.home(request)
108 page = viewmodels.ErrorPage.error(button=home)
109 t = loader.get_template(template_name)
110
111 return HttpResponseBadRequest(t.render(page.context_dict()))
112
113
114 @pageview_decorator
115 def csrf_failure(request, reason):
116 """
117 View handler for CSRF_FAILURE_VIEW with custom data.
118 """
119 if session.active_agency(request):
120 session.update(request, origin=session.agency(request).index_url)
121 else:
122 session.update(request, origin=_index_url())
123
124 home = viewmodels.Button.home(request)
125 page = viewmodels.ErrorPage.not_found(button=home, path=request.path)
126 t = loader.get_template("400.html")
127
128 return HttpResponseNotFound(t.render(page.context_dict()))
129
130
131 @pageview_decorator
132 def page_not_found(request, exception, template_name="404.html"):
133 """View handler for HTTP 404 Not Found responses."""
134 if session.active_agency(request):
135 session.update(request, origin=session.agency(request).index_url)
136 else:
137 session.update(request, origin=_index_url())
138
139 home = viewmodels.Button.home(request)
140 page = viewmodels.ErrorPage.not_found(button=home, path=request.path)
141 t = loader.get_template(template_name)
142
143 return HttpResponseNotFound(t.render(page.context_dict()))
144
145
146 @pageview_decorator
147 def server_error(request, template_name="500.html"):
148 """View handler for HTTP 500 Server Error responses."""
149 if session.active_agency(request):
150 session.update(request, origin=session.agency(request).index_url)
151 else:
152 session.update(request, origin=_index_url())
153
154 home = viewmodels.Button.home(request)
155 page = viewmodels.ErrorPage.error(button=home)
156 t = loader.get_template(template_name)
157
158 return HttpResponseServerError(t.render(page.context_dict()))
159
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/benefits/core/views.py b/benefits/core/views.py
--- a/benefits/core/views.py
+++ b/benefits/core/views.py
@@ -59,6 +59,9 @@
session.reset(request)
session.update(request, agency=agency, origin=agency.index_url)
+ if len(agency.eligibility_verifiers.all()) == 1:
+ return redirect(reverse("eligibility:index"))
+
button = viewmodels.Button.primary(text=_("core.pages.index.continue"), url=reverse("eligibility:index"))
button.label = _("core.pages.agency_index.button.label")
|
{"golden_diff": "diff --git a/benefits/core/views.py b/benefits/core/views.py\n--- a/benefits/core/views.py\n+++ b/benefits/core/views.py\n@@ -59,6 +59,9 @@\n session.reset(request)\n session.update(request, agency=agency, origin=agency.index_url)\n \n+ if len(agency.eligibility_verifiers.all()) == 1:\n+ return redirect(reverse(\"eligibility:index\"))\n+\n button = viewmodels.Button.primary(text=_(\"core.pages.index.continue\"), url=reverse(\"eligibility:index\"))\n button.label = _(\"core.pages.agency_index.button.label\")\n", "issue": "Engineering: Back-end - Combine logic for Agency Index and Getting Started page\n_As a MST Senior Rider, when I go to benefits.calitp.org, I should see the new Introduction page design and content._\r\n\r\n- Change the app logic to the following -\r\n1. When there is only 1 transit agency, go directly to the agency index page (/mst)\r\n2. From the agency index page, go directly to the verifier selection page (/eligibility/)\r\n3. When there is only 1 eligibility verifier, go directly to the only start page (Login.gov verifier)\r\n\r\n- Write tests for this new logic\r\n- Update sample data for this new logic (??) so we can adequately test this in Dev, Staging\r\n\r\n## How it is now\r\n\r\n```mermaid\r\ngraph LR\r\n A[core:index] -->|redirect| B(core:agency_index)\r\n B -->|user clicks button| C(eligiblity:index)\r\n C -->|redirect| D(eligibility:start)\r\n```\r\n\r\n## After this is implemented\r\n\r\n```mermaid\r\ngraph LR\r\n A[core:index] -->|redirect| B(core:agency_index)\r\n B -->|*redirect*| C(eligiblity:index)\r\n C -->|redirect| D(eligibility:start)\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\nThe core application: view definition for the root of the webapp.\n\"\"\"\nfrom django.http import HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError\nfrom django.shortcuts import redirect\nfrom django.template import loader\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\n\nfrom . 
import models, session, viewmodels\nfrom .middleware import pageview_decorator\n\n\ndef PageTemplateResponse(request, page_vm):\n \"\"\"Helper returns a TemplateResponse using the common page template.\"\"\"\n return TemplateResponse(request, \"core/page.html\", page_vm.context_dict())\n\n\ndef _index_content_title():\n \"\"\"Helper returns the content title for the common index page.\"\"\"\n return _(\"core.pages.index.content_title\")\n\n\ndef _index_url():\n \"\"\"Helper computes the index url path.\"\"\"\n return reverse(\"core:index\")\n\n\n@pageview_decorator\ndef index(request):\n \"\"\"View handler for the main entry page.\"\"\"\n session.reset(request)\n\n agencies = models.TransitAgency.all_active()\n\n if len(agencies) == 1:\n agency = agencies[0]\n return redirect(agency.index_url)\n\n # generate a button to the landing page for each active agency\n buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]\n buttons[0].classes.append(\"mt-3\")\n buttons[0].label = _(\"core.pages.index.chooseprovider\")\n\n page = viewmodels.Page(\n title=_(\"core.pages.index.title\"),\n content_title=_index_content_title(),\n buttons=buttons,\n classes=\"home\",\n )\n\n return PageTemplateResponse(request, page)\n\n\n@pageview_decorator\ndef agency_index(request, agency):\n \"\"\"View handler for an agency entry page.\"\"\"\n session.reset(request)\n session.update(request, agency=agency, origin=agency.index_url)\n\n button = viewmodels.Button.primary(text=_(\"core.pages.index.continue\"), url=reverse(\"eligibility:index\"))\n button.label = _(\"core.pages.agency_index.button.label\")\n\n page = viewmodels.Page(\n title=_(\"core.pages.agency_index.title\"),\n content_title=_(\"core.pages.agency_index.content_title\"),\n button=button,\n classes=\"home\",\n )\n\n help_page = reverse(\"core:help\")\n context_dict = {**page.context_dict(), **{\"info_link\": f\"{help_page}#about\"}}\n\n return TemplateResponse(request, \"core/agency_index.html\", context_dict)\n\n\n@pageview_decorator\ndef help(request):\n \"\"\"View handler for the help page.\"\"\"\n if session.active_agency(request):\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n else:\n buttons = [btn for a in models.TransitAgency.all_active() for btn in viewmodels.Button.agency_contact_links(a)]\n\n buttons.append(viewmodels.Button.home(request, _(\"core.buttons.back\")))\n\n page = viewmodels.Page(\n title=_(\"core.buttons.help\"),\n content_title=_(\"core.buttons.help\"),\n buttons=buttons,\n noimage=True,\n )\n\n return TemplateResponse(request, \"core/help.html\", page.context_dict())\n\n\n@pageview_decorator\ndef bad_request(request, exception, template_name=\"400.html\"):\n \"\"\"View handler for HTTP 400 Bad Request responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseBadRequest(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef csrf_failure(request, reason):\n \"\"\"\n View handler for CSRF_FAILURE_VIEW with custom data.\n \"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = 
viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(\"400.html\")\n\n return HttpResponseNotFound(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef page_not_found(request, exception, template_name=\"404.html\"):\n \"\"\"View handler for HTTP 404 Not Found responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(template_name)\n\n return HttpResponseNotFound(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef server_error(request, template_name=\"500.html\"):\n \"\"\"View handler for HTTP 500 Server Error responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseServerError(t.render(page.context_dict()))\n", "path": "benefits/core/views.py"}], "after_files": [{"content": "\"\"\"\nThe core application: view definition for the root of the webapp.\n\"\"\"\nfrom django.http import HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError\nfrom django.shortcuts import redirect\nfrom django.template import loader\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\n\nfrom . import models, session, viewmodels\nfrom .middleware import pageview_decorator\n\n\ndef PageTemplateResponse(request, page_vm):\n \"\"\"Helper returns a TemplateResponse using the common page template.\"\"\"\n return TemplateResponse(request, \"core/page.html\", page_vm.context_dict())\n\n\ndef _index_content_title():\n \"\"\"Helper returns the content title for the common index page.\"\"\"\n return _(\"core.pages.index.content_title\")\n\n\ndef _index_url():\n \"\"\"Helper computes the index url path.\"\"\"\n return reverse(\"core:index\")\n\n\n@pageview_decorator\ndef index(request):\n \"\"\"View handler for the main entry page.\"\"\"\n session.reset(request)\n\n agencies = models.TransitAgency.all_active()\n\n if len(agencies) == 1:\n agency = agencies[0]\n return redirect(agency.index_url)\n\n # generate a button to the landing page for each active agency\n buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]\n buttons[0].classes.append(\"mt-3\")\n buttons[0].label = _(\"core.pages.index.chooseprovider\")\n\n page = viewmodels.Page(\n title=_(\"core.pages.index.title\"),\n content_title=_index_content_title(),\n buttons=buttons,\n classes=\"home\",\n )\n\n return PageTemplateResponse(request, page)\n\n\n@pageview_decorator\ndef agency_index(request, agency):\n \"\"\"View handler for an agency entry page.\"\"\"\n session.reset(request)\n session.update(request, agency=agency, origin=agency.index_url)\n\n if len(agency.eligibility_verifiers.all()) == 1:\n return redirect(reverse(\"eligibility:index\"))\n\n button = viewmodels.Button.primary(text=_(\"core.pages.index.continue\"), url=reverse(\"eligibility:index\"))\n button.label = _(\"core.pages.agency_index.button.label\")\n\n page = viewmodels.Page(\n title=_(\"core.pages.agency_index.title\"),\n 
content_title=_(\"core.pages.agency_index.content_title\"),\n button=button,\n classes=\"home\",\n )\n\n help_page = reverse(\"core:help\")\n context_dict = {**page.context_dict(), **{\"info_link\": f\"{help_page}#about\"}}\n\n return TemplateResponse(request, \"core/agency_index.html\", context_dict)\n\n\n@pageview_decorator\ndef help(request):\n \"\"\"View handler for the help page.\"\"\"\n if session.active_agency(request):\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n else:\n buttons = [btn for a in models.TransitAgency.all_active() for btn in viewmodels.Button.agency_contact_links(a)]\n\n buttons.append(viewmodels.Button.home(request, _(\"core.buttons.back\")))\n\n page = viewmodels.Page(\n title=_(\"core.buttons.help\"),\n content_title=_(\"core.buttons.help\"),\n buttons=buttons,\n noimage=True,\n )\n\n return TemplateResponse(request, \"core/help.html\", page.context_dict())\n\n\n@pageview_decorator\ndef bad_request(request, exception, template_name=\"400.html\"):\n \"\"\"View handler for HTTP 400 Bad Request responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseBadRequest(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef csrf_failure(request, reason):\n \"\"\"\n View handler for CSRF_FAILURE_VIEW with custom data.\n \"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(\"400.html\")\n\n return HttpResponseNotFound(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef page_not_found(request, exception, template_name=\"404.html\"):\n \"\"\"View handler for HTTP 404 Not Found responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(template_name)\n\n return HttpResponseNotFound(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef server_error(request, template_name=\"500.html\"):\n \"\"\"View handler for HTTP 500 Server Error responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseServerError(t.render(page.context_dict()))\n", "path": "benefits/core/views.py"}]}
| 2,022 | 133 |
gh_patches_debug_24104
|
rasdani/github-patches
|
git_diff
|
piskvorky__gensim-1845
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
D2VTransformer.fit_transform doesn't work
The **X** parameter of the **fit_transform** method of **D2VTransformer** doesn't accept input in either expected form: a list of token lists raises _AttributeError: 'list' object has no attribute 'words'_, and a list of TaggedDocument raises _TypeError: sequence item 0: expected str instance, list found_.
Example:
```python
from gensim.sklearn_api import D2VTransformer
from gensim.models import doc2vec
class_dict = {'mathematics': 1, 'physics': 0}
train_data = [
(['calculus', 'mathematical'], 'mathematics'), (['geometry', 'operations', 'curves'], 'mathematics'),
(['natural', 'nuclear'], 'physics'), (['science', 'electromagnetism', 'natural'], 'physics')
]
d2v_sentences = [doc2vec.TaggedDocument(words[0], [i]) for i, words in enumerate(train_data)]
train_input = list(map(lambda x: x[0], train_data))
train_target = list(map(lambda x: class_dict[x[1]], train_data))
model = D2VTransformer(min_count=1)
model.fit_transform(train_input, train_target)
#model.fit_transform(d2v_sentences, train_target)
```
Versions:
Windows-10-10.0.16299-SP0
Python 3.6.4 | packaged by conda-forge | (default, Dec 24 2017, 10:11:43) [MSC v.1900 64 bit (AMD64)]
NumPy 1.13.3
SciPy 0.19.1
gensim 3.2.0
FAST_VERSION 1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gensim/sklearn_api/d2vmodel.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) 2011 Radim Rehurek <[email protected]>
5 # Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html
6
7 """
8 Scikit learn interface for gensim for easy use of gensim with scikit-learn
9 Follows scikit-learn API conventions
10 """
11
12 import numpy as np
13 from six import string_types
14 from sklearn.base import TransformerMixin, BaseEstimator
15 from sklearn.exceptions import NotFittedError
16
17 from gensim import models
18
19
20 class D2VTransformer(TransformerMixin, BaseEstimator):
21 """
22 Base Doc2Vec module
23 """
24
25 def __init__(self, dm_mean=None, dm=1, dbow_words=0, dm_concat=0, dm_tag_count=1, docvecs=None,
26 docvecs_mapfile=None, comment=None, trim_rule=None, size=100, alpha=0.025, window=5, min_count=5,
27 max_vocab_size=None, sample=1e-3, seed=1, workers=3, min_alpha=0.0001, hs=0, negative=5, cbow_mean=1,
28 hashfxn=hash, iter=5, sorted_vocab=1, batch_words=10000):
29 """
30 Sklearn api for Doc2Vec model. See gensim.models.Doc2Vec and gensim.models.Word2Vec for parameter details.
31 """
32 self.gensim_model = None
33 self.dm_mean = dm_mean
34 self.dm = dm
35 self.dbow_words = dbow_words
36 self.dm_concat = dm_concat
37 self.dm_tag_count = dm_tag_count
38 self.docvecs = docvecs
39 self.docvecs_mapfile = docvecs_mapfile
40 self.comment = comment
41 self.trim_rule = trim_rule
42
43 # attributes associated with gensim.models.Word2Vec
44 self.size = size
45 self.alpha = alpha
46 self.window = window
47 self.min_count = min_count
48 self.max_vocab_size = max_vocab_size
49 self.sample = sample
50 self.seed = seed
51 self.workers = workers
52 self.min_alpha = min_alpha
53 self.hs = hs
54 self.negative = negative
55 self.cbow_mean = int(cbow_mean)
56 self.hashfxn = hashfxn
57 self.iter = iter
58 self.sorted_vocab = sorted_vocab
59 self.batch_words = batch_words
60
61 def fit(self, X, y=None):
62 """
63 Fit the model according to the given training data.
64 Calls gensim.models.Doc2Vec
65 """
66 self.gensim_model = models.Doc2Vec(
67 documents=X, dm_mean=self.dm_mean, dm=self.dm,
68 dbow_words=self.dbow_words, dm_concat=self.dm_concat, dm_tag_count=self.dm_tag_count,
69 docvecs=self.docvecs, docvecs_mapfile=self.docvecs_mapfile, comment=self.comment,
70 trim_rule=self.trim_rule, size=self.size, alpha=self.alpha, window=self.window,
71 min_count=self.min_count, max_vocab_size=self.max_vocab_size, sample=self.sample,
72 seed=self.seed, workers=self.workers, min_alpha=self.min_alpha, hs=self.hs,
73 negative=self.negative, cbow_mean=self.cbow_mean, hashfxn=self.hashfxn,
74 iter=self.iter, sorted_vocab=self.sorted_vocab, batch_words=self.batch_words
75 )
76 return self
77
78 def transform(self, docs):
79 """
80 Return the vector representations for the input documents.
81 The input `docs` should be a list of lists like
82 [['calculus', 'mathematical'],
83 ['geometry', 'operations', 'curves']]
84 or a single document like : ['calculus', 'mathematical']
85 """
86 if self.gensim_model is None:
87 raise NotFittedError(
88 "This model has not been fitted yet. Call 'fit' with appropriate arguments before using this method."
89 )
90
91 # The input as array of array
92 if isinstance(docs[0], string_types):
93 docs = [docs]
94 vectors = [self.gensim_model.infer_vector(doc) for doc in docs]
95 return np.reshape(np.array(vectors), (len(docs), self.gensim_model.vector_size))
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gensim/sklearn_api/d2vmodel.py b/gensim/sklearn_api/d2vmodel.py
--- a/gensim/sklearn_api/d2vmodel.py
+++ b/gensim/sklearn_api/d2vmodel.py
@@ -15,6 +15,7 @@
from sklearn.exceptions import NotFittedError
from gensim import models
+from gensim.models import doc2vec
class D2VTransformer(TransformerMixin, BaseEstimator):
@@ -63,8 +64,12 @@
Fit the model according to the given training data.
Calls gensim.models.Doc2Vec
"""
+ if isinstance(X[0], doc2vec.TaggedDocument):
+ d2v_sentences = X
+ else:
+ d2v_sentences = [doc2vec.TaggedDocument(words, [i]) for i, words in enumerate(X)]
self.gensim_model = models.Doc2Vec(
- documents=X, dm_mean=self.dm_mean, dm=self.dm,
+ documents=d2v_sentences, dm_mean=self.dm_mean, dm=self.dm,
dbow_words=self.dbow_words, dm_concat=self.dm_concat, dm_tag_count=self.dm_tag_count,
docvecs=self.docvecs, docvecs_mapfile=self.docvecs_mapfile, comment=self.comment,
trim_rule=self.trim_rule, size=self.size, alpha=self.alpha, window=self.window,
|
{"golden_diff": "diff --git a/gensim/sklearn_api/d2vmodel.py b/gensim/sklearn_api/d2vmodel.py\n--- a/gensim/sklearn_api/d2vmodel.py\n+++ b/gensim/sklearn_api/d2vmodel.py\n@@ -15,6 +15,7 @@\n from sklearn.exceptions import NotFittedError\n \n from gensim import models\n+from gensim.models import doc2vec\n \n \n class D2VTransformer(TransformerMixin, BaseEstimator):\n@@ -63,8 +64,12 @@\n Fit the model according to the given training data.\n Calls gensim.models.Doc2Vec\n \"\"\"\n+ if isinstance(X[0], doc2vec.TaggedDocument):\n+ d2v_sentences = X\n+ else:\n+ d2v_sentences = [doc2vec.TaggedDocument(words, [i]) for i, words in enumerate(X)]\n self.gensim_model = models.Doc2Vec(\n- documents=X, dm_mean=self.dm_mean, dm=self.dm,\n+ documents=d2v_sentences, dm_mean=self.dm_mean, dm=self.dm,\n dbow_words=self.dbow_words, dm_concat=self.dm_concat, dm_tag_count=self.dm_tag_count,\n docvecs=self.docvecs, docvecs_mapfile=self.docvecs_mapfile, comment=self.comment,\n trim_rule=self.trim_rule, size=self.size, alpha=self.alpha, window=self.window,\n", "issue": "D2VTransformer.fit_transform doesn't work\nThe **X** parameter of the **fit_transform** method of **D2VTransformer** doesn't accept variables of any type, nor list of token lists (raises _AttributeError: 'list' object has no attribute 'words'_), nor list of TaggedDocument (raises _TypeError: sequence item 0: expected str instance, list found_).\r\n\r\nExample:\r\n```python\r\nfrom gensim.sklearn_api import D2VTransformer\r\nfrom gensim.models import doc2vec\r\n\r\nclass_dict = {'mathematics': 1, 'physics': 0}\r\ntrain_data = [\r\n (['calculus', 'mathematical'], 'mathematics'), (['geometry', 'operations', 'curves'], 'mathematics'),\r\n (['natural', 'nuclear'], 'physics'), (['science', 'electromagnetism', 'natural'], 'physics')\r\n]\r\nd2v_sentences = [doc2vec.TaggedDocument(words[0], [i]) for i, words in enumerate(train_data)]\r\ntrain_input = list(map(lambda x: x[0], train_data))\r\ntrain_target = list(map(lambda x: class_dict[x[1]], train_data))\r\n\r\nmodel = D2VTransformer(min_count=1)\r\nmodel.fit_transform(train_input, train_target)\r\n#model.fit_transform(d2v_sentences, train_target)\r\n```\r\nVersions:\r\nWindows-10-10.0.16299-SP0\r\nPython 3.6.4 | packaged by conda-forge | (default, Dec 24 2017, 10:11:43) [MSC v.1900 64 bit (AMD64)]\r\nNumPy 1.13.3\r\nSciPy 0.19.1\r\ngensim 3.2.0\r\nFAST_VERSION 1\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) 2011 Radim Rehurek <[email protected]>\n# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\n\"\"\"\nScikit learn interface for gensim for easy use of gensim with scikit-learn\nFollows scikit-learn API conventions\n\"\"\"\n\nimport numpy as np\nfrom six import string_types\nfrom sklearn.base import TransformerMixin, BaseEstimator\nfrom sklearn.exceptions import NotFittedError\n\nfrom gensim import models\n\n\nclass D2VTransformer(TransformerMixin, BaseEstimator):\n \"\"\"\n Base Doc2Vec module\n \"\"\"\n\n def __init__(self, dm_mean=None, dm=1, dbow_words=0, dm_concat=0, dm_tag_count=1, docvecs=None,\n docvecs_mapfile=None, comment=None, trim_rule=None, size=100, alpha=0.025, window=5, min_count=5,\n max_vocab_size=None, sample=1e-3, seed=1, workers=3, min_alpha=0.0001, hs=0, negative=5, cbow_mean=1,\n hashfxn=hash, iter=5, sorted_vocab=1, batch_words=10000):\n \"\"\"\n Sklearn api for Doc2Vec model. 
See gensim.models.Doc2Vec and gensim.models.Word2Vec for parameter details.\n \"\"\"\n self.gensim_model = None\n self.dm_mean = dm_mean\n self.dm = dm\n self.dbow_words = dbow_words\n self.dm_concat = dm_concat\n self.dm_tag_count = dm_tag_count\n self.docvecs = docvecs\n self.docvecs_mapfile = docvecs_mapfile\n self.comment = comment\n self.trim_rule = trim_rule\n\n # attributes associated with gensim.models.Word2Vec\n self.size = size\n self.alpha = alpha\n self.window = window\n self.min_count = min_count\n self.max_vocab_size = max_vocab_size\n self.sample = sample\n self.seed = seed\n self.workers = workers\n self.min_alpha = min_alpha\n self.hs = hs\n self.negative = negative\n self.cbow_mean = int(cbow_mean)\n self.hashfxn = hashfxn\n self.iter = iter\n self.sorted_vocab = sorted_vocab\n self.batch_words = batch_words\n\n def fit(self, X, y=None):\n \"\"\"\n Fit the model according to the given training data.\n Calls gensim.models.Doc2Vec\n \"\"\"\n self.gensim_model = models.Doc2Vec(\n documents=X, dm_mean=self.dm_mean, dm=self.dm,\n dbow_words=self.dbow_words, dm_concat=self.dm_concat, dm_tag_count=self.dm_tag_count,\n docvecs=self.docvecs, docvecs_mapfile=self.docvecs_mapfile, comment=self.comment,\n trim_rule=self.trim_rule, size=self.size, alpha=self.alpha, window=self.window,\n min_count=self.min_count, max_vocab_size=self.max_vocab_size, sample=self.sample,\n seed=self.seed, workers=self.workers, min_alpha=self.min_alpha, hs=self.hs,\n negative=self.negative, cbow_mean=self.cbow_mean, hashfxn=self.hashfxn,\n iter=self.iter, sorted_vocab=self.sorted_vocab, batch_words=self.batch_words\n )\n return self\n\n def transform(self, docs):\n \"\"\"\n Return the vector representations for the input documents.\n The input `docs` should be a list of lists like\n [['calculus', 'mathematical'],\n ['geometry', 'operations', 'curves']]\n or a single document like : ['calculus', 'mathematical']\n \"\"\"\n if self.gensim_model is None:\n raise NotFittedError(\n \"This model has not been fitted yet. Call 'fit' with appropriate arguments before using this method.\"\n )\n\n # The input as array of array\n if isinstance(docs[0], string_types):\n docs = [docs]\n vectors = [self.gensim_model.infer_vector(doc) for doc in docs]\n return np.reshape(np.array(vectors), (len(docs), self.gensim_model.vector_size))\n", "path": "gensim/sklearn_api/d2vmodel.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) 2011 Radim Rehurek <[email protected]>\n# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\n\"\"\"\nScikit learn interface for gensim for easy use of gensim with scikit-learn\nFollows scikit-learn API conventions\n\"\"\"\n\nimport numpy as np\nfrom six import string_types\nfrom sklearn.base import TransformerMixin, BaseEstimator\nfrom sklearn.exceptions import NotFittedError\n\nfrom gensim import models\nfrom gensim.models import doc2vec\n\n\nclass D2VTransformer(TransformerMixin, BaseEstimator):\n \"\"\"\n Base Doc2Vec module\n \"\"\"\n\n def __init__(self, dm_mean=None, dm=1, dbow_words=0, dm_concat=0, dm_tag_count=1, docvecs=None,\n docvecs_mapfile=None, comment=None, trim_rule=None, size=100, alpha=0.025, window=5, min_count=5,\n max_vocab_size=None, sample=1e-3, seed=1, workers=3, min_alpha=0.0001, hs=0, negative=5, cbow_mean=1,\n hashfxn=hash, iter=5, sorted_vocab=1, batch_words=10000):\n \"\"\"\n Sklearn api for Doc2Vec model. 
See gensim.models.Doc2Vec and gensim.models.Word2Vec for parameter details.\n \"\"\"\n self.gensim_model = None\n self.dm_mean = dm_mean\n self.dm = dm\n self.dbow_words = dbow_words\n self.dm_concat = dm_concat\n self.dm_tag_count = dm_tag_count\n self.docvecs = docvecs\n self.docvecs_mapfile = docvecs_mapfile\n self.comment = comment\n self.trim_rule = trim_rule\n\n # attributes associated with gensim.models.Word2Vec\n self.size = size\n self.alpha = alpha\n self.window = window\n self.min_count = min_count\n self.max_vocab_size = max_vocab_size\n self.sample = sample\n self.seed = seed\n self.workers = workers\n self.min_alpha = min_alpha\n self.hs = hs\n self.negative = negative\n self.cbow_mean = int(cbow_mean)\n self.hashfxn = hashfxn\n self.iter = iter\n self.sorted_vocab = sorted_vocab\n self.batch_words = batch_words\n\n def fit(self, X, y=None):\n \"\"\"\n Fit the model according to the given training data.\n Calls gensim.models.Doc2Vec\n \"\"\"\n if isinstance(X[0], doc2vec.TaggedDocument):\n d2v_sentences = X\n else:\n d2v_sentences = [doc2vec.TaggedDocument(words, [i]) for i, words in enumerate(X)]\n self.gensim_model = models.Doc2Vec(\n documents=d2v_sentences, dm_mean=self.dm_mean, dm=self.dm,\n dbow_words=self.dbow_words, dm_concat=self.dm_concat, dm_tag_count=self.dm_tag_count,\n docvecs=self.docvecs, docvecs_mapfile=self.docvecs_mapfile, comment=self.comment,\n trim_rule=self.trim_rule, size=self.size, alpha=self.alpha, window=self.window,\n min_count=self.min_count, max_vocab_size=self.max_vocab_size, sample=self.sample,\n seed=self.seed, workers=self.workers, min_alpha=self.min_alpha, hs=self.hs,\n negative=self.negative, cbow_mean=self.cbow_mean, hashfxn=self.hashfxn,\n iter=self.iter, sorted_vocab=self.sorted_vocab, batch_words=self.batch_words\n )\n return self\n\n def transform(self, docs):\n \"\"\"\n Return the vector representations for the input documents.\n The input `docs` should be a list of lists like\n [['calculus', 'mathematical'],\n ['geometry', 'operations', 'curves']]\n or a single document like : ['calculus', 'mathematical']\n \"\"\"\n if self.gensim_model is None:\n raise NotFittedError(\n \"This model has not been fitted yet. Call 'fit' with appropriate arguments before using this method.\"\n )\n\n # The input as array of array\n if isinstance(docs[0], string_types):\n docs = [docs]\n vectors = [self.gensim_model.infer_vector(doc) for doc in docs]\n return np.reshape(np.array(vectors), (len(docs), self.gensim_model.vector_size))\n", "path": "gensim/sklearn_api/d2vmodel.py"}]}
| 1,781 | 309 |
gh_patches_debug_18562
|
rasdani/github-patches
|
git_diff
|
digitalfabrik__integreat-cms-1424
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cache does not get invalidated correctly when LanguageTreeNode is deleted
### Describe the Bug
<!-- A clear and concise description of what the bug is. -->
If you delete a language the page tree still shows the translation status of the corresponding language. This is probably a caching issue.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `integreat_cms/cms/views/language_tree/language_tree_actions.py`
Content:
```
1 """
2 This module contains view actions for the language tree.
3 Typically, they do not render a whole page, but only parts of it or they redirect to regular views.
4 """
5 import logging
6
7 from django.contrib import messages
8 from django.shortcuts import redirect, get_object_or_404
9 from django.utils.translation import ugettext as _
10 from django.views.decorators.http import require_POST
11 from django.db import transaction
12
13 from treebeard.exceptions import InvalidPosition, InvalidMoveToDescendant
14
15 from ...constants import position
16 from ...decorators import permission_required
17 from ...models import LanguageTreeNode
18
19 logger = logging.getLogger(__name__)
20
21
22 @require_POST
23 @permission_required("cms.change_languagetreenode")
24 @transaction.atomic
25 def move_language_tree_node(
26 request, region_slug, language_tree_node_id, target_id, target_position
27 ):
28 """
29 This action moves the given language tree node to the given position relative to the given target.
30
31 :param request: The current request
32 :type request: ~django.http.HttpResponse
33
34 :param region_slug: The slug of the region which language tree should be modified
35 :type region_slug: str
36
37 :param language_tree_node_id: The id of the language tree node which should be moved
38 :type language_tree_node_id: int
39
40 :param target_id: The id of the target language tree node
41 :type target_id: int
42
43 :param target_position: The desired position (choices: :mod:`~integreat_cms.cms.constants.position`)
44 :type target_position: str
45
46 :return: A redirection to the language tree
47 :rtype: ~django.http.HttpResponseRedirect
48 """
49
50 region = request.region
51 language_tree_node = get_object_or_404(
52 region.language_tree_nodes, id=language_tree_node_id
53 )
54 target = get_object_or_404(region.language_tree_nodes, id=target_id)
55
56 try:
57 if target.depth == 1 and target_position in [position.LEFT, position.RIGHT]:
58 raise InvalidPosition(_("A region can only have one root language."))
59 language_tree_node.move(target, target_position)
60 # Call the save method on the (reloaded) node in order to trigger possible signal handlers etc.
61 # (The move()-method executes raw sql which might cause problems if the instance isn't fetched again)
62 language_tree_node = LanguageTreeNode.objects.get(id=language_tree_node_id)
63 language_tree_node.save()
64 messages.success(
65 request,
66 _('The language tree node "{}" was successfully moved.').format(
67 language_tree_node.translated_name
68 ),
69 )
70 logger.debug(
71 "%r moved to %r of %r by %r",
72 language_tree_node,
73 target_position,
74 target,
75 request.user,
76 )
77 except (ValueError, InvalidPosition, InvalidMoveToDescendant) as e:
78 messages.error(request, e)
79 logger.exception(e)
80
81 return redirect("language_tree", **{"region_slug": region_slug})
82
83
84 @require_POST
85 @permission_required("cms.delete_languagetreenode")
86 @transaction.atomic
87 def delete_language_tree_node(request, region_slug, language_tree_node_id):
88 """
89 Deletes the language node of distinct region
90 and all page translations for this language
91
92 :param request: The current request
93 :type request: ~django.http.HttpResponse
94
95 :param region_slug: The slug of the region which language node should be deleted
96 :type region_slug: str
97
98 :param language_tree_node_id: The id of the language tree node which should be deleted
99 :type language_tree_node_id: int
100
101 :return: A redirection to the language tree
102 :rtype: ~django.http.HttpResponseRedirect
103 """
104 # get current region
105 region = request.region
106 # get current selected language node
107 language_node = get_object_or_404(
108 region.language_tree_nodes, id=language_tree_node_id
109 )
110 # get all page translation assigned to the language node
111 page_translations = language_node.language.page_translations
112 # filter those translation that belong to the region and delete them
113 page_translations.filter(page__region=region).delete()
114 # get all event translation assigned to the language node
115 event_translations = language_node.language.event_translations
116 # filter those translation that belong to the region and delete them
117 event_translations.filter(event__region=region).delete()
118 # get all poi translation assigned to the language node
119 poi_translations = language_node.language.poi_translations
120 # filter those translation that belong to the region and delete them
121 poi_translations.filter(poi__region=region).delete()
122 # get all push notification translation assigned to the language node
123 push_notification_translations = (
124 language_node.language.push_notification_translations
125 )
126 # filter those translation that belong to the region and delete them
127 push_notification_translations.filter(push_notification__region=region).delete()
128
129 logger.debug("%r deleted by %r", language_node, request.user)
130 language_node.delete()
131 messages.success(
132 request,
133 _(
134 'The language tree node "{}" and all corresponding translations were successfully deleted.'
135 ).format(language_node.translated_name),
136 )
137 return redirect("language_tree", **{"region_slug": region_slug})
138
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/integreat_cms/cms/views/language_tree/language_tree_actions.py b/integreat_cms/cms/views/language_tree/language_tree_actions.py
--- a/integreat_cms/cms/views/language_tree/language_tree_actions.py
+++ b/integreat_cms/cms/views/language_tree/language_tree_actions.py
@@ -11,6 +11,7 @@
from django.db import transaction
from treebeard.exceptions import InvalidPosition, InvalidMoveToDescendant
+from cacheops import invalidate_obj
from ...constants import position
from ...decorators import permission_required
@@ -128,6 +129,16 @@
logger.debug("%r deleted by %r", language_node, request.user)
language_node.delete()
+
+ for page in region.pages.all():
+ invalidate_obj(page)
+ for event in region.events.all():
+ invalidate_obj(event)
+ for poi in region.pois.all():
+ invalidate_obj(poi)
+ for push_notification in region.push_notifications.all():
+ invalidate_obj(push_notification)
+
messages.success(
request,
_(
|
{"golden_diff": "diff --git a/integreat_cms/cms/views/language_tree/language_tree_actions.py b/integreat_cms/cms/views/language_tree/language_tree_actions.py\n--- a/integreat_cms/cms/views/language_tree/language_tree_actions.py\n+++ b/integreat_cms/cms/views/language_tree/language_tree_actions.py\n@@ -11,6 +11,7 @@\n from django.db import transaction\n \n from treebeard.exceptions import InvalidPosition, InvalidMoveToDescendant\n+from cacheops import invalidate_obj\n \n from ...constants import position\n from ...decorators import permission_required\n@@ -128,6 +129,16 @@\n \n logger.debug(\"%r deleted by %r\", language_node, request.user)\n language_node.delete()\n+\n+ for page in region.pages.all():\n+ invalidate_obj(page)\n+ for event in region.events.all():\n+ invalidate_obj(event)\n+ for poi in region.pois.all():\n+ invalidate_obj(poi)\n+ for push_notification in region.push_notifications.all():\n+ invalidate_obj(push_notification)\n+\n messages.success(\n request,\n _(\n", "issue": "Cache does not get invalidated correctly when LanguageTreeNode is deleted\n### Describe the Bug\r\n<!-- A clear and concise description of what the bug is. -->\r\nIf you delete a language the page tree still shows the translation status of the corresponding language. This is probably a caching issue.\r\n\nCache does not get invalidated correctly when LanguageTreeNode is deleted\n### Describe the Bug\r\n<!-- A clear and concise description of what the bug is. -->\r\nIf you delete a language the page tree still shows the translation status of the corresponding language. This is probably a caching issue.\r\n\n", "before_files": [{"content": "\"\"\"\nThis module contains view actions for the language tree.\nTypically, they do not render a whole page, but only parts of it or they redirect to regular views.\n\"\"\"\nimport logging\n\nfrom django.contrib import messages\nfrom django.shortcuts import redirect, get_object_or_404\nfrom django.utils.translation import ugettext as _\nfrom django.views.decorators.http import require_POST\nfrom django.db import transaction\n\nfrom treebeard.exceptions import InvalidPosition, InvalidMoveToDescendant\n\nfrom ...constants import position\nfrom ...decorators import permission_required\nfrom ...models import LanguageTreeNode\n\nlogger = logging.getLogger(__name__)\n\n\n@require_POST\n@permission_required(\"cms.change_languagetreenode\")\[email protected]\ndef move_language_tree_node(\n request, region_slug, language_tree_node_id, target_id, target_position\n):\n \"\"\"\n This action moves the given language tree node to the given position relative to the given target.\n\n :param request: The current request\n :type request: ~django.http.HttpResponse\n\n :param region_slug: The slug of the region which language tree should be modified\n :type region_slug: str\n\n :param language_tree_node_id: The id of the language tree node which should be moved\n :type language_tree_node_id: int\n\n :param target_id: The id of the target language tree node\n :type target_id: int\n\n :param target_position: The desired position (choices: :mod:`~integreat_cms.cms.constants.position`)\n :type target_position: str\n\n :return: A redirection to the language tree\n :rtype: ~django.http.HttpResponseRedirect\n \"\"\"\n\n region = request.region\n language_tree_node = get_object_or_404(\n region.language_tree_nodes, id=language_tree_node_id\n )\n target = get_object_or_404(region.language_tree_nodes, id=target_id)\n\n try:\n if target.depth == 1 and target_position in [position.LEFT, 
position.RIGHT]:\n raise InvalidPosition(_(\"A region can only have one root language.\"))\n language_tree_node.move(target, target_position)\n # Call the save method on the (reloaded) node in order to trigger possible signal handlers etc.\n # (The move()-method executes raw sql which might cause problems if the instance isn't fetched again)\n language_tree_node = LanguageTreeNode.objects.get(id=language_tree_node_id)\n language_tree_node.save()\n messages.success(\n request,\n _('The language tree node \"{}\" was successfully moved.').format(\n language_tree_node.translated_name\n ),\n )\n logger.debug(\n \"%r moved to %r of %r by %r\",\n language_tree_node,\n target_position,\n target,\n request.user,\n )\n except (ValueError, InvalidPosition, InvalidMoveToDescendant) as e:\n messages.error(request, e)\n logger.exception(e)\n\n return redirect(\"language_tree\", **{\"region_slug\": region_slug})\n\n\n@require_POST\n@permission_required(\"cms.delete_languagetreenode\")\[email protected]\ndef delete_language_tree_node(request, region_slug, language_tree_node_id):\n \"\"\"\n Deletes the language node of distinct region\n and all page translations for this language\n\n :param request: The current request\n :type request: ~django.http.HttpResponse\n\n :param region_slug: The slug of the region which language node should be deleted\n :type region_slug: str\n\n :param language_tree_node_id: The id of the language tree node which should be deleted\n :type language_tree_node_id: int\n\n :return: A redirection to the language tree\n :rtype: ~django.http.HttpResponseRedirect\n \"\"\"\n # get current region\n region = request.region\n # get current selected language node\n language_node = get_object_or_404(\n region.language_tree_nodes, id=language_tree_node_id\n )\n # get all page translation assigned to the language node\n page_translations = language_node.language.page_translations\n # filter those translation that belong to the region and delete them\n page_translations.filter(page__region=region).delete()\n # get all event translation assigned to the language node\n event_translations = language_node.language.event_translations\n # filter those translation that belong to the region and delete them\n event_translations.filter(event__region=region).delete()\n # get all poi translation assigned to the language node\n poi_translations = language_node.language.poi_translations\n # filter those translation that belong to the region and delete them\n poi_translations.filter(poi__region=region).delete()\n # get all push notification translation assigned to the language node\n push_notification_translations = (\n language_node.language.push_notification_translations\n )\n # filter those translation that belong to the region and delete them\n push_notification_translations.filter(push_notification__region=region).delete()\n\n logger.debug(\"%r deleted by %r\", language_node, request.user)\n language_node.delete()\n messages.success(\n request,\n _(\n 'The language tree node \"{}\" and all corresponding translations were successfully deleted.'\n ).format(language_node.translated_name),\n )\n return redirect(\"language_tree\", **{\"region_slug\": region_slug})\n", "path": "integreat_cms/cms/views/language_tree/language_tree_actions.py"}], "after_files": [{"content": "\"\"\"\nThis module contains view actions for the language tree.\nTypically, they do not render a whole page, but only parts of it or they redirect to regular views.\n\"\"\"\nimport logging\n\nfrom django.contrib import messages\nfrom 
django.shortcuts import redirect, get_object_or_404\nfrom django.utils.translation import ugettext as _\nfrom django.views.decorators.http import require_POST\nfrom django.db import transaction\n\nfrom treebeard.exceptions import InvalidPosition, InvalidMoveToDescendant\nfrom cacheops import invalidate_obj\n\nfrom ...constants import position\nfrom ...decorators import permission_required\nfrom ...models import LanguageTreeNode\n\nlogger = logging.getLogger(__name__)\n\n\n@require_POST\n@permission_required(\"cms.change_languagetreenode\")\[email protected]\ndef move_language_tree_node(\n request, region_slug, language_tree_node_id, target_id, target_position\n):\n \"\"\"\n This action moves the given language tree node to the given position relative to the given target.\n\n :param request: The current request\n :type request: ~django.http.HttpResponse\n\n :param region_slug: The slug of the region which language tree should be modified\n :type region_slug: str\n\n :param language_tree_node_id: The id of the language tree node which should be moved\n :type language_tree_node_id: int\n\n :param target_id: The id of the target language tree node\n :type target_id: int\n\n :param target_position: The desired position (choices: :mod:`~integreat_cms.cms.constants.position`)\n :type target_position: str\n\n :return: A redirection to the language tree\n :rtype: ~django.http.HttpResponseRedirect\n \"\"\"\n\n region = request.region\n language_tree_node = get_object_or_404(\n region.language_tree_nodes, id=language_tree_node_id\n )\n target = get_object_or_404(region.language_tree_nodes, id=target_id)\n\n try:\n if target.depth == 1 and target_position in [position.LEFT, position.RIGHT]:\n raise InvalidPosition(_(\"A region can only have one root language.\"))\n language_tree_node.move(target, target_position)\n # Call the save method on the (reloaded) node in order to trigger possible signal handlers etc.\n # (The move()-method executes raw sql which might cause problems if the instance isn't fetched again)\n language_tree_node = LanguageTreeNode.objects.get(id=language_tree_node_id)\n language_tree_node.save()\n messages.success(\n request,\n _('The language tree node \"{}\" was successfully moved.').format(\n language_tree_node.translated_name\n ),\n )\n logger.debug(\n \"%r moved to %r of %r by %r\",\n language_tree_node,\n target_position,\n target,\n request.user,\n )\n except (ValueError, InvalidPosition, InvalidMoveToDescendant) as e:\n messages.error(request, e)\n logger.exception(e)\n\n return redirect(\"language_tree\", **{\"region_slug\": region_slug})\n\n\n@require_POST\n@permission_required(\"cms.delete_languagetreenode\")\[email protected]\ndef delete_language_tree_node(request, region_slug, language_tree_node_id):\n \"\"\"\n Deletes the language node of distinct region\n and all page translations for this language\n\n :param request: The current request\n :type request: ~django.http.HttpResponse\n\n :param region_slug: The slug of the region which language node should be deleted\n :type region_slug: str\n\n :param language_tree_node_id: The id of the language tree node which should be deleted\n :type language_tree_node_id: int\n\n :return: A redirection to the language tree\n :rtype: ~django.http.HttpResponseRedirect\n \"\"\"\n # get current region\n region = request.region\n # get current selected language node\n language_node = get_object_or_404(\n region.language_tree_nodes, id=language_tree_node_id\n )\n # get all page translation assigned to the language node\n 
page_translations = language_node.language.page_translations\n # filter those translation that belong to the region and delete them\n page_translations.filter(page__region=region).delete()\n # get all event translation assigned to the language node\n event_translations = language_node.language.event_translations\n # filter those translation that belong to the region and delete them\n event_translations.filter(event__region=region).delete()\n # get all poi translation assigned to the language node\n poi_translations = language_node.language.poi_translations\n # filter those translation that belong to the region and delete them\n poi_translations.filter(poi__region=region).delete()\n # get all push notification translation assigned to the language node\n push_notification_translations = (\n language_node.language.push_notification_translations\n )\n # filter those translation that belong to the region and delete them\n push_notification_translations.filter(push_notification__region=region).delete()\n\n logger.debug(\"%r deleted by %r\", language_node, request.user)\n language_node.delete()\n\n for page in region.pages.all():\n invalidate_obj(page)\n for event in region.events.all():\n invalidate_obj(event)\n for poi in region.pois.all():\n invalidate_obj(poi)\n for push_notification in region.push_notifications.all():\n invalidate_obj(push_notification)\n\n messages.success(\n request,\n _(\n 'The language tree node \"{}\" and all corresponding translations were successfully deleted.'\n ).format(language_node.translated_name),\n )\n return redirect(\"language_tree\", **{\"region_slug\": region_slug})\n", "path": "integreat_cms/cms/views/language_tree/language_tree_actions.py"}]}
num_tokens: 1,820
num_tokens_diff: 232

problem_id: gh_patches_debug_16723
source: rasdani/github-patches
task_type: git_diff
in_source_id: comic__grand-challenge.org-2915
prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot edit or delete challenge pages with slug `evaluation`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/pages/forms.py`
Content:
```
1 from django import forms
2 from django.db.models import BLANK_CHOICE_DASH
3 from django_summernote.widgets import SummernoteInplaceWidget
4
5 from grandchallenge.core.forms import SaveFormInitMixin
6 from grandchallenge.pages.models import Page
7
8
9 class PageCreateForm(SaveFormInitMixin, forms.ModelForm):
10 def __init__(self, *args, challenge, **kwargs):
11 self.challenge = challenge
12 super().__init__(*args, **kwargs)
13
14 class Meta:
15 model = Page
16 fields = (
17 "display_title",
18 "permission_level",
19 "hidden",
20 "html",
21 )
22 widgets = {"html": SummernoteInplaceWidget()}
23 help_texts = {
24 "html": (
25 "The content of your page. <b>Please note</b>: your html will "
26 "be filtered after it has been saved to remove any non-HTML5 "
27 "compliant markup and scripts. The filtering is not reflected "
28 "in the live view so please <b>check the rendering of your "
29 "page after you click save</b>. If you're going to paste from "
30 "another source such as MS Word, please <b>paste without "
31 "formatting</b> using <b>CTRL+SHIFT+V</b> on Windows or "
32 "<b>⇧+⌥+⌘+V</b> on OS X."
33 )
34 }
35
36
37 class PageUpdateForm(PageCreateForm):
38 """Like the page update form but you can also move the page."""
39
40 move = forms.CharField(widget=forms.Select)
41 move.required = False
42 move.widget.choices = (
43 (BLANK_CHOICE_DASH[0]),
44 (Page.FIRST, "First"),
45 (Page.UP, "Up"),
46 (Page.DOWN, "Down"),
47 (Page.LAST, "Last"),
48 )
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```

golden_diff:
diff --git a/app/grandchallenge/pages/forms.py b/app/grandchallenge/pages/forms.py
--- a/app/grandchallenge/pages/forms.py
+++ b/app/grandchallenge/pages/forms.py
@@ -1,4 +1,5 @@
from django import forms
+from django.core.exceptions import ValidationError
from django.db.models import BLANK_CHOICE_DASH
from django_summernote.widgets import SummernoteInplaceWidget
@@ -33,6 +34,17 @@
)
}
+ def clean_display_title(self):
+ display_title = self.cleaned_data["display_title"]
+
+ if display_title.lower() in {"evaluation"}:
+ # evaluation results in a URL clash, especially with the update page.
+ raise ValidationError(
+ "Title not allowed, please select an alternative"
+ )
+
+ return display_title
+
class PageUpdateForm(PageCreateForm):
"""Like the page update form but you can also move the page."""

verification_info:
{"golden_diff": "diff --git a/app/grandchallenge/pages/forms.py b/app/grandchallenge/pages/forms.py\n--- a/app/grandchallenge/pages/forms.py\n+++ b/app/grandchallenge/pages/forms.py\n@@ -1,4 +1,5 @@\n from django import forms\n+from django.core.exceptions import ValidationError\n from django.db.models import BLANK_CHOICE_DASH\n from django_summernote.widgets import SummernoteInplaceWidget\n \n@@ -33,6 +34,17 @@\n )\n }\n \n+ def clean_display_title(self):\n+ display_title = self.cleaned_data[\"display_title\"]\n+\n+ if display_title.lower() in {\"evaluation\"}:\n+ # evaluation results in a URL clash, especially with the update page.\n+ raise ValidationError(\n+ \"Title not allowed, please select an alternative\"\n+ )\n+\n+ return display_title\n+\n \n class PageUpdateForm(PageCreateForm):\n \"\"\"Like the page update form but you can also move the page.\"\"\"\n", "issue": "Cannot edit or delete challenge pages with slug `evaluation`\n\n", "before_files": [{"content": "from django import forms\nfrom django.db.models import BLANK_CHOICE_DASH\nfrom django_summernote.widgets import SummernoteInplaceWidget\n\nfrom grandchallenge.core.forms import SaveFormInitMixin\nfrom grandchallenge.pages.models import Page\n\n\nclass PageCreateForm(SaveFormInitMixin, forms.ModelForm):\n def __init__(self, *args, challenge, **kwargs):\n self.challenge = challenge\n super().__init__(*args, **kwargs)\n\n class Meta:\n model = Page\n fields = (\n \"display_title\",\n \"permission_level\",\n \"hidden\",\n \"html\",\n )\n widgets = {\"html\": SummernoteInplaceWidget()}\n help_texts = {\n \"html\": (\n \"The content of your page. <b>Please note</b>: your html will \"\n \"be filtered after it has been saved to remove any non-HTML5 \"\n \"compliant markup and scripts. The filtering is not reflected \"\n \"in the live view so please <b>check the rendering of your \"\n \"page after you click save</b>. If you're going to paste from \"\n \"another source such as MS Word, please <b>paste without \"\n \"formatting</b> using <b>CTRL+SHIFT+V</b> on Windows or \"\n \"<b>\u21e7+\u2325+\u2318+V</b> on OS X.\"\n )\n }\n\n\nclass PageUpdateForm(PageCreateForm):\n \"\"\"Like the page update form but you can also move the page.\"\"\"\n\n move = forms.CharField(widget=forms.Select)\n move.required = False\n move.widget.choices = (\n (BLANK_CHOICE_DASH[0]),\n (Page.FIRST, \"First\"),\n (Page.UP, \"Up\"),\n (Page.DOWN, \"Down\"),\n (Page.LAST, \"Last\"),\n )\n", "path": "app/grandchallenge/pages/forms.py"}], "after_files": [{"content": "from django import forms\nfrom django.core.exceptions import ValidationError\nfrom django.db.models import BLANK_CHOICE_DASH\nfrom django_summernote.widgets import SummernoteInplaceWidget\n\nfrom grandchallenge.core.forms import SaveFormInitMixin\nfrom grandchallenge.pages.models import Page\n\n\nclass PageCreateForm(SaveFormInitMixin, forms.ModelForm):\n def __init__(self, *args, challenge, **kwargs):\n self.challenge = challenge\n super().__init__(*args, **kwargs)\n\n class Meta:\n model = Page\n fields = (\n \"display_title\",\n \"permission_level\",\n \"hidden\",\n \"html\",\n )\n widgets = {\"html\": SummernoteInplaceWidget()}\n help_texts = {\n \"html\": (\n \"The content of your page. <b>Please note</b>: your html will \"\n \"be filtered after it has been saved to remove any non-HTML5 \"\n \"compliant markup and scripts. The filtering is not reflected \"\n \"in the live view so please <b>check the rendering of your \"\n \"page after you click save</b>. 
If you're going to paste from \"\n \"another source such as MS Word, please <b>paste without \"\n \"formatting</b> using <b>CTRL+SHIFT+V</b> on Windows or \"\n \"<b>\u21e7+\u2325+\u2318+V</b> on OS X.\"\n )\n }\n\n def clean_display_title(self):\n display_title = self.cleaned_data[\"display_title\"]\n\n if display_title.lower() in {\"evaluation\"}:\n # evaluation results in a URL clash, especially with the update page.\n raise ValidationError(\n \"Title not allowed, please select an alternative\"\n )\n\n return display_title\n\n\nclass PageUpdateForm(PageCreateForm):\n \"\"\"Like the page update form but you can also move the page.\"\"\"\n\n move = forms.CharField(widget=forms.Select)\n move.required = False\n move.widget.choices = (\n (BLANK_CHOICE_DASH[0]),\n (Page.FIRST, \"First\"),\n (Page.UP, \"Up\"),\n (Page.DOWN, \"Down\"),\n (Page.LAST, \"Last\"),\n )\n", "path": "app/grandchallenge/pages/forms.py"}]}
num_tokens: 757
num_tokens_diff: 208

problem_id: gh_patches_debug_20301
source: rasdani/github-patches
task_type: git_diff
in_source_id: UTNkar__moore-134
prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing search template
<!-- Do you want to ask a question? Are you looking for support? The system administrator can help you: [email protected] -->
### Prerequisites
* [X] Put an X between the brackets on this line if you have done all of the
following:
* Reproduced the problem with clear cache.
* (If running the application locally:) Made sure your running the newest version on the development branch
* Checked that your issue isn't already filed: https://github.com/UTNkar/moore/issues
### Description
The current search is not really working. Although you can search, the results are ugly and very spotty.
Goal: Evaluate the searchable fields within Project Moore and make a search template.
### Steps to Reproduce
1. Visit the home page
2. Search something
3. Be disappointed ☹️
<!-- Please select the appropriate "topic category"/blue and "issue type"/yellow label -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/materialize/templatetags/materialize.py`
Content:
```
1 from django import template
2 from django.template import loader
3
4 register = template.Library()
5
6
7 def get_widget_name(field):
8 return field.field.widget.__class__.__name__
9
10
11 def append_classes(field):
12 field.field.label_suffix = ''
13 classes = field.field.widget.attrs.get('class', '')
14 classes += ' validate'
15 if field.errors:
16 classes += ' invalid'
17 field.field.widget.attrs['class'] = classes
18
19
20 def render_field(template, field, prefix=None):
21 t = loader.get_template(template)
22 c = {
23 'field': field,
24 'prefix': prefix,
25 }
26 html = t.render(c)
27 return html
28
29
30 @register.simple_tag
31 def materialize_field(field, prefix=None):
32 widget = get_widget_name(field)
33 if widget in ['TextInput', 'EmailInput', 'PasswordInput', 'Select',
34 'Textarea']:
35 append_classes(field)
36 return render_field('materialize/form/input.html', field, prefix)
37 else:
38 return field.as_widget()
39
```
Path: `website/search/views.py`
Content:
```
1 from __future__ import absolute_import, unicode_literals
2
3 from django.core.paginator import EmptyPage, PageNotAnInteger, Paginator
4 from django.shortcuts import render
5
6 from wagtail.wagtailcore.models import Page
7 from wagtail.wagtailsearch.models import Query
8
9
10 def search(request):
11 search_query = request.GET.get('query', None)
12 page = request.GET.get('page', 1)
13
14 # Search
15 if search_query:
16 search_results = Page.objects.live().search(search_query)
17 query = Query.get(search_query)
18
19 # Record hit
20 query.add_hit()
21 else:
22 search_results = Page.objects.none()
23
24 # Pagination
25 paginator = Paginator(search_results, 10)
26 try:
27 search_results = paginator.page(page)
28 except PageNotAnInteger:
29 search_results = paginator.page(1)
30 except EmptyPage:
31 search_results = paginator.page(paginator.num_pages)
32
33 return render(request, 'search/search.html', {
34 'search_query': search_query,
35 'search_results': search_results,
36 })
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```

golden_diff:
diff --git a/website/materialize/templatetags/materialize.py b/website/materialize/templatetags/materialize.py
--- a/website/materialize/templatetags/materialize.py
+++ b/website/materialize/templatetags/materialize.py
@@ -23,8 +23,7 @@
'field': field,
'prefix': prefix,
}
- html = t.render(c)
- return html
+ return t.render(c)
@register.simple_tag
@@ -36,3 +35,12 @@
return render_field('materialize/form/input.html', field, prefix)
else:
return field.as_widget()
+
+
[email protected]_tag('materialize/pagination.html')
+def materialize_pagination(page, url):
+ return {
+ 'page': page,
+ 'paginator': page.paginator,
+ 'url': url,
+ }
diff --git a/website/search/views.py b/website/search/views.py
--- a/website/search/views.py
+++ b/website/search/views.py
@@ -22,7 +22,7 @@
search_results = Page.objects.none()
# Pagination
- paginator = Paginator(search_results, 10)
+ paginator = Paginator(search_results, 25)
try:
search_results = paginator.page(page)
except PageNotAnInteger:
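
Editor's note: besides bumping the page size from 10 to 25, the patch registers an inclusion tag that a search template can use to render pagination controls. The tag below repeats what the diff adds; the template usage after it is only an illustrative guess, since the search template itself is not part of this record.

```python
from django import template

register = template.Library()


@register.inclusion_tag('materialize/pagination.html')
def materialize_pagination(page, url):
    # Django renders materialize/pagination.html with this dict as its context.
    return {
        'page': page,
        'paginator': page.paginator,
        'url': url,
    }
```

In a template it would be loaded and invoked roughly as `{% load materialize %} ... {% materialize_pagination search_results 'search' %}`, handing over the paginated page object plus a URL name to build the page links against.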

verification_info:
{"golden_diff": "diff --git a/website/materialize/templatetags/materialize.py b/website/materialize/templatetags/materialize.py\n--- a/website/materialize/templatetags/materialize.py\n+++ b/website/materialize/templatetags/materialize.py\n@@ -23,8 +23,7 @@\n 'field': field,\n 'prefix': prefix,\n }\n- html = t.render(c)\n- return html\n+ return t.render(c)\n \n \n @register.simple_tag\n@@ -36,3 +35,12 @@\n return render_field('materialize/form/input.html', field, prefix)\n else:\n return field.as_widget()\n+\n+\[email protected]_tag('materialize/pagination.html')\n+def materialize_pagination(page, url):\n+ return {\n+ 'page': page,\n+ 'paginator': page.paginator,\n+ 'url': url,\n+ }\ndiff --git a/website/search/views.py b/website/search/views.py\n--- a/website/search/views.py\n+++ b/website/search/views.py\n@@ -22,7 +22,7 @@\n search_results = Page.objects.none()\n \n # Pagination\n- paginator = Paginator(search_results, 10)\n+ paginator = Paginator(search_results, 25)\n try:\n search_results = paginator.page(page)\n except PageNotAnInteger:\n", "issue": "Missing search template\n<!-- Do you want to ask a question? Are you looking for support? The system administrator can help you: [email protected] -->\r\n\r\n### Prerequisites\r\n\r\n* [X] Put an X between the brackets on this line if you have done all of the\r\nfollowing:\r\n * Reproduced the problem with clear cache.\r\n * (If running the application locally:) Made sure your running the newest version on the development branch\r\n * Checked that your issue isn't already filed: https://github.com/UTNkar/moore/issues\r\n\r\n### Description\r\n\r\nThe current search is now really working. Although you can search, the results are ugly and very spotty. \r\n\r\nGoal: Evaluate the searchable fields within Project Moore and make a search template.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Visit the home page\r\n2. Search something\r\n3. 
Be disappointed \u2639\ufe0f \r\n\r\n<!-- Please select the appropriate \"topic category\"/blue and \"issue type\"/yellow label -->\n", "before_files": [{"content": "from django import template\nfrom django.template import loader\n\nregister = template.Library()\n\n\ndef get_widget_name(field):\n return field.field.widget.__class__.__name__\n\n\ndef append_classes(field):\n field.field.label_suffix = ''\n classes = field.field.widget.attrs.get('class', '')\n classes += ' validate'\n if field.errors:\n classes += ' invalid'\n field.field.widget.attrs['class'] = classes\n\n\ndef render_field(template, field, prefix=None):\n t = loader.get_template(template)\n c = {\n 'field': field,\n 'prefix': prefix,\n }\n html = t.render(c)\n return html\n\n\[email protected]_tag\ndef materialize_field(field, prefix=None):\n widget = get_widget_name(field)\n if widget in ['TextInput', 'EmailInput', 'PasswordInput', 'Select',\n 'Textarea']:\n append_classes(field)\n return render_field('materialize/form/input.html', field, prefix)\n else:\n return field.as_widget()\n", "path": "website/materialize/templatetags/materialize.py"}, {"content": "from __future__ import absolute_import, unicode_literals\n\nfrom django.core.paginator import EmptyPage, PageNotAnInteger, Paginator\nfrom django.shortcuts import render\n\nfrom wagtail.wagtailcore.models import Page\nfrom wagtail.wagtailsearch.models import Query\n\n\ndef search(request):\n search_query = request.GET.get('query', None)\n page = request.GET.get('page', 1)\n\n # Search\n if search_query:\n search_results = Page.objects.live().search(search_query)\n query = Query.get(search_query)\n\n # Record hit\n query.add_hit()\n else:\n search_results = Page.objects.none()\n\n # Pagination\n paginator = Paginator(search_results, 10)\n try:\n search_results = paginator.page(page)\n except PageNotAnInteger:\n search_results = paginator.page(1)\n except EmptyPage:\n search_results = paginator.page(paginator.num_pages)\n\n return render(request, 'search/search.html', {\n 'search_query': search_query,\n 'search_results': search_results,\n })\n", "path": "website/search/views.py"}], "after_files": [{"content": "from django import template\nfrom django.template import loader\n\nregister = template.Library()\n\n\ndef get_widget_name(field):\n return field.field.widget.__class__.__name__\n\n\ndef append_classes(field):\n field.field.label_suffix = ''\n classes = field.field.widget.attrs.get('class', '')\n classes += ' validate'\n if field.errors:\n classes += ' invalid'\n field.field.widget.attrs['class'] = classes\n\n\ndef render_field(template, field, prefix=None):\n t = loader.get_template(template)\n c = {\n 'field': field,\n 'prefix': prefix,\n }\n return t.render(c)\n\n\[email protected]_tag\ndef materialize_field(field, prefix=None):\n widget = get_widget_name(field)\n if widget in ['TextInput', 'EmailInput', 'PasswordInput', 'Select',\n 'Textarea']:\n append_classes(field)\n return render_field('materialize/form/input.html', field, prefix)\n else:\n return field.as_widget()\n\n\[email protected]_tag('materialize/pagination.html')\ndef materialize_pagination(page, url):\n return {\n 'page': page,\n 'paginator': page.paginator,\n 'url': url,\n }\n", "path": "website/materialize/templatetags/materialize.py"}, {"content": "from __future__ import absolute_import, unicode_literals\n\nfrom django.core.paginator import EmptyPage, PageNotAnInteger, Paginator\nfrom django.shortcuts import render\n\nfrom wagtail.wagtailcore.models import Page\nfrom wagtail.wagtailsearch.models 
import Query\n\n\ndef search(request):\n search_query = request.GET.get('query', None)\n page = request.GET.get('page', 1)\n\n # Search\n if search_query:\n search_results = Page.objects.live().search(search_query)\n query = Query.get(search_query)\n\n # Record hit\n query.add_hit()\n else:\n search_results = Page.objects.none()\n\n # Pagination\n paginator = Paginator(search_results, 25)\n try:\n search_results = paginator.page(page)\n except PageNotAnInteger:\n search_results = paginator.page(1)\n except EmptyPage:\n search_results = paginator.page(paginator.num_pages)\n\n return render(request, 'search/search.html', {\n 'search_query': search_query,\n 'search_results': search_results,\n })\n", "path": "website/search/views.py"}]}
num_tokens: 1,053
num_tokens_diff: 301

problem_id: gh_patches_debug_12092
source: rasdani/github-patches
task_type: git_diff
in_source_id: pre-commit__pre-commit-86
prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UnicodeDecodeError in staged_files_only
```
$ pre-commit
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to .../.pre-commit-files/patch1397853050.
Traceback (most recent call last):
File ".../bin/pre-commit", line 9, in <module>
load_entry_point('pre-commit==0.0.0', 'console_scripts', 'pre-commit')()
File ".../lib/python2.6/site-packages/pre_commit/util.py", line 52, in wrapper
return func(argv)
File ".../lib/python2.6/site-packages/pre_commit/run.py", line 59, in run
return commands.run(runner, args)
File ".../lib/python2.6/site-packages/pre_commit/commands.py", line 254, in run
with ctx:
File "/usr/lib64/python2.6/contextlib.py", line 16, in __enter__
return self.gen.next()
File ".../lib/python2.6/site-packages/pre_commit/staged_files_only.py", line 32, in staged_files_only
patch_file.write(diff_stdout)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc' in position 3795: ordinal not in range(128)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/staged_files_only.py`
Content:
```
1 import contextlib
2 import logging
3 import time
4
5 from pre_commit.prefixed_command_runner import CalledProcessError
6
7
8 logger = logging.getLogger('pre_commit')
9
10
11 @contextlib.contextmanager
12 def staged_files_only(cmd_runner):
13 """Clear any unstaged changes from the git working directory inside this
14 context.
15
16 Args:
17 cmd_runner - PrefixedCommandRunner
18 """
19 # Determine if there are unstaged files
20 retcode, diff_stdout, _ = cmd_runner.run(
21 ['git', 'diff', '--ignore-submodules', '--binary', '--exit-code'],
22 retcode=None,
23 )
24 if retcode and diff_stdout.strip():
25 patch_filename = cmd_runner.path('patch{0}'.format(int(time.time())))
26 logger.warning('Unstaged files detected.')
27 logger.info(
28 'Stashing unstaged files to {0}.'.format(patch_filename),
29 )
30 # Save the current unstaged changes as a patch
31 with open(patch_filename, 'w') as patch_file:
32 patch_file.write(diff_stdout)
33
34 # Clear the working directory of unstaged changes
35 cmd_runner.run(['git', 'checkout', '--', '.'])
36 try:
37 yield
38 finally:
39 # Try to apply the patch we saved
40 try:
41 cmd_runner.run(['git', 'apply', patch_filename])
42 except CalledProcessError:
43 logger.warning(
44 'Stashed changes conflicted with hook auto-fixes... '
45 'Rolling back fixes...'
46 )
47 # We failed to apply the patch, presumably due to fixes made
48 # by hooks.
49 # Roll back the changes made by hooks.
50 cmd_runner.run(['git', 'checkout', '--', '.'])
51 cmd_runner.run(['git', 'apply', patch_filename])
52 logger.info('Restored changes from {0}.'.format(patch_filename))
53 else:
54 # There weren't any staged files so we don't need to do anything
55 # special
56 yield
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```

golden_diff:
diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py
--- a/pre_commit/staged_files_only.py
+++ b/pre_commit/staged_files_only.py
@@ -1,4 +1,5 @@
import contextlib
+import io
import logging
import time
@@ -28,7 +29,7 @@
'Stashing unstaged files to {0}.'.format(patch_filename),
)
# Save the current unstaged changes as a patch
- with open(patch_filename, 'w') as patch_file:
+ with io.open(patch_filename, 'w', encoding='utf-8') as patch_file:
patch_file.write(diff_stdout)
# Clear the working directory of unstaged changes
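
Editor's note: the traceback arises because Python 2's built-in `open()` returns a byte-oriented file, so writing a `unicode` diff forces an implicit ASCII encode and characters such as `ü` blow up. A standalone sketch of the failure and of the fix applied above (file name and sample text are invented):

```python
# -*- coding: utf-8 -*-
import io

diff_stdout = u'+print("f\xfcr")\n'  # unicode text, like the captured `git diff` output

# Python 2: open('example.patch', 'w') followed by write(diff_stdout)
# raises UnicodeEncodeError ('ascii' codec) for non-ASCII characters.
# An explicit text-mode file with a UTF-8 codec behaves the same on 2 and 3:
with io.open('example.patch', 'w', encoding='utf-8') as patch_file:
    patch_file.write(diff_stdout)
```

Under Python 3 the original `open()` call would only keep working when the locale's default encoding happens to accept the text, so the explicit `encoding='utf-8'` also makes the behaviour consistent across interpreters.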

verification_info:
{"golden_diff": "diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py\n--- a/pre_commit/staged_files_only.py\n+++ b/pre_commit/staged_files_only.py\n@@ -1,4 +1,5 @@\n import contextlib\n+import io\n import logging\n import time\n \n@@ -28,7 +29,7 @@\n 'Stashing unstaged files to {0}.'.format(patch_filename),\n )\n # Save the current unstaged changes as a patch\n- with open(patch_filename, 'w') as patch_file:\n+ with io.open(patch_filename, 'w', encoding='utf-8') as patch_file:\n patch_file.write(diff_stdout)\n \n # Clear the working directory of unstaged changes\n", "issue": "UnicodeDecodeError in staged_files_only\n```\n$ pre-commit\n[WARNING] Unstaged files detected.\n[INFO] Stashing unstaged files to .../.pre-commit-files/patch1397853050.\nTraceback (most recent call last):\n File \".../bin/pre-commit\", line 9, in <module>\n load_entry_point('pre-commit==0.0.0', 'console_scripts', 'pre-commit')()\n File \".../lib/python2.6/site-packages/pre_commit/util.py\", line 52, in wrapper\n return func(argv)\n File \".../lib/python2.6/site-packages/pre_commit/run.py\", line 59, in run\n return commands.run(runner, args)\n File \".../lib/python2.6/site-packages/pre_commit/commands.py\", line 254, in run\n with ctx:\n File \"/usr/lib64/python2.6/contextlib.py\", line 16, in __enter__\n return self.gen.next()\n File \".../lib/python2.6/site-packages/pre_commit/staged_files_only.py\", line 32, in staged_files_only\n patch_file.write(diff_stdout)\nUnicodeEncodeError: 'ascii' codec can't encode character u'\\xfc' in position 3795: ordinal not in range(128)\n```\n\n", "before_files": [{"content": "import contextlib\nimport logging\nimport time\n\nfrom pre_commit.prefixed_command_runner import CalledProcessError\n\n\nlogger = logging.getLogger('pre_commit')\n\n\[email protected]\ndef staged_files_only(cmd_runner):\n \"\"\"Clear any unstaged changes from the git working directory inside this\n context.\n\n Args:\n cmd_runner - PrefixedCommandRunner\n \"\"\"\n # Determine if there are unstaged files\n retcode, diff_stdout, _ = cmd_runner.run(\n ['git', 'diff', '--ignore-submodules', '--binary', '--exit-code'],\n retcode=None,\n )\n if retcode and diff_stdout.strip():\n patch_filename = cmd_runner.path('patch{0}'.format(int(time.time())))\n logger.warning('Unstaged files detected.')\n logger.info(\n 'Stashing unstaged files to {0}.'.format(patch_filename),\n )\n # Save the current unstaged changes as a patch\n with open(patch_filename, 'w') as patch_file:\n patch_file.write(diff_stdout)\n\n # Clear the working directory of unstaged changes\n cmd_runner.run(['git', 'checkout', '--', '.'])\n try:\n yield\n finally:\n # Try to apply the patch we saved\n try:\n cmd_runner.run(['git', 'apply', patch_filename])\n except CalledProcessError:\n logger.warning(\n 'Stashed changes conflicted with hook auto-fixes... 
'\n 'Rolling back fixes...'\n )\n # We failed to apply the patch, presumably due to fixes made\n # by hooks.\n # Roll back the changes made by hooks.\n cmd_runner.run(['git', 'checkout', '--', '.'])\n cmd_runner.run(['git', 'apply', patch_filename])\n logger.info('Restored changes from {0}.'.format(patch_filename))\n else:\n # There weren't any staged files so we don't need to do anything\n # special\n yield\n", "path": "pre_commit/staged_files_only.py"}], "after_files": [{"content": "import contextlib\nimport io\nimport logging\nimport time\n\nfrom pre_commit.prefixed_command_runner import CalledProcessError\n\n\nlogger = logging.getLogger('pre_commit')\n\n\[email protected]\ndef staged_files_only(cmd_runner):\n \"\"\"Clear any unstaged changes from the git working directory inside this\n context.\n\n Args:\n cmd_runner - PrefixedCommandRunner\n \"\"\"\n # Determine if there are unstaged files\n retcode, diff_stdout, _ = cmd_runner.run(\n ['git', 'diff', '--ignore-submodules', '--binary', '--exit-code'],\n retcode=None,\n )\n if retcode and diff_stdout.strip():\n patch_filename = cmd_runner.path('patch{0}'.format(int(time.time())))\n logger.warning('Unstaged files detected.')\n logger.info(\n 'Stashing unstaged files to {0}.'.format(patch_filename),\n )\n # Save the current unstaged changes as a patch\n with io.open(patch_filename, 'w', encoding='utf-8') as patch_file:\n patch_file.write(diff_stdout)\n\n # Clear the working directory of unstaged changes\n cmd_runner.run(['git', 'checkout', '--', '.'])\n try:\n yield\n finally:\n # Try to apply the patch we saved\n try:\n cmd_runner.run(['git', 'apply', patch_filename])\n except CalledProcessError:\n logger.warning(\n 'Stashed changes conflicted with hook auto-fixes... '\n 'Rolling back fixes...'\n )\n # We failed to apply the patch, presumably due to fixes made\n # by hooks.\n # Roll back the changes made by hooks.\n cmd_runner.run(['git', 'checkout', '--', '.'])\n cmd_runner.run(['git', 'apply', patch_filename])\n logger.info('Restored changes from {0}.'.format(patch_filename))\n else:\n # There weren't any staged files so we don't need to do anything\n # special\n yield\n", "path": "pre_commit/staged_files_only.py"}]}
num_tokens: 1,070
num_tokens_diff: 160

problem_id: gh_patches_debug_13481
source: rasdani/github-patches
task_type: git_diff
in_source_id: pulp__pulpcore-382
prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve the logic in the ACSHandleStage
**Version**
main
**Describe the bug**
If there are ACS that point to the same content, pick the first RA instead of last
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here. Please provide links to any previous discussions via Discourse or Bugzilla.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/plugin/repo_version_utils.py`
Content:
```
1 from collections import defaultdict
2 from gettext import gettext as _
3 import logging
4
5 from django.db.models import Q
6
7
8 _logger = logging.getLogger(__name__)
9
10
11 def remove_duplicates(repository_version):
12 """
13 Inspect content additions in the `RepositoryVersion` and replace repository duplicates.
14
15 Some content can have two instances A and B which are unique, but cannot both exist together in
16 one repository. For example, pulp_file's content has `relative_path` for that file within the
17 repository.
18
19 Any content newly added to the :class:`~pulpcore.plugin.models.RepositoryVersion` is checked
20 against existing content in the :class:`~pulpcore.plugin.models.RepositoryVersion` with newer
21 "repository duplicates" replace existing "repository duplicates". Each Content model can define
22 a `repo_key_fields` attribute with the field names to be compared. If all `repo_key_fields`
23 contain the same value for two content units, they are considered "repository duplicates".
24
25 After instantiating `RemoveDuplicates` call it with the `run()` method and pass in the
26 :class:`~pulpcore.plugin.models.RepositoryVersion` to be checked and possibly modified as a
27 parameter to `run()`.
28
29 Args:
30 repository_version: The :class:`~pulpcore.plugin.models.RepositoryVersion` to be checked
31 and possibly modified.
32 """
33 query_for_repo_duplicates_by_type = defaultdict(lambda: Q())
34 for item in repository_version.added():
35 detail_item = item.cast()
36 if detail_item.repo_key_fields == ():
37 continue
38 unit_q_dict = {
39 field: getattr(detail_item, field) for field in detail_item.repo_key_fields
40 }
41 item_query = Q(**unit_q_dict) & ~Q(pk=detail_item.pk)
42 query_for_repo_duplicates_by_type[detail_item._meta.model] |= item_query
43
44 for model in query_for_repo_duplicates_by_type:
45 _logger.debug(_("Removing duplicates for type: {}".format(model)))
46 qs = model.objects.filter(query_for_repo_duplicates_by_type[model])
47 repository_version.remove_content(qs)
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```

golden_diff:
diff --git a/pulpcore/plugin/repo_version_utils.py b/pulpcore/plugin/repo_version_utils.py
--- a/pulpcore/plugin/repo_version_utils.py
+++ b/pulpcore/plugin/repo_version_utils.py
@@ -22,10 +22,6 @@
a `repo_key_fields` attribute with the field names to be compared. If all `repo_key_fields`
contain the same value for two content units, they are considered "repository duplicates".
- After instantiating `RemoveDuplicates` call it with the `run()` method and pass in the
- :class:`~pulpcore.plugin.models.RepositoryVersion` to be checked and possibly modified as a
- parameter to `run()`.
-
Args:
repository_version: The :class:`~pulpcore.plugin.models.RepositoryVersion` to be checked
and possibly modified.
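
Editor's note: this golden diff only deletes a stale docstring paragraph that still referred to a removed `RemoveDuplicates` class; the behaviour of `remove_duplicates()` is untouched. For readers skimming the record, the heart of that function is the per-model accumulation of `Q` filters, which can be condensed — with an invented unit object standing in for real content models — roughly as:

```python
from collections import defaultdict

from django.db.models import Q


def duplicate_queries(added_units):
    """Build one OR-ed Q filter per model that matches repository duplicates."""
    queries = defaultdict(Q)  # start from an empty Q() so |= can accumulate terms
    for unit in added_units:
        if not unit.repo_key_fields:
            continue
        match = {field: getattr(unit, field) for field in unit.repo_key_fields}
        # Units sharing every key-field value, excluding the unit itself.
        queries[type(unit)] |= Q(**match) & ~Q(pk=unit.pk)
    return queries
```

This mirrors the `query_for_repo_duplicates_by_type` loop in the file shown above; each resulting queryset is then removed from the repository version.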

verification_info:
{"golden_diff": "diff --git a/pulpcore/plugin/repo_version_utils.py b/pulpcore/plugin/repo_version_utils.py\n--- a/pulpcore/plugin/repo_version_utils.py\n+++ b/pulpcore/plugin/repo_version_utils.py\n@@ -22,10 +22,6 @@\n a `repo_key_fields` attribute with the field names to be compared. If all `repo_key_fields`\n contain the same value for two content units, they are considered \"repository duplicates\".\n \n- After instantiating `RemoveDuplicates` call it with the `run()` method and pass in the\n- :class:`~pulpcore.plugin.models.RepositoryVersion` to be checked and possibly modified as a\n- parameter to `run()`.\n-\n Args:\n repository_version: The :class:`~pulpcore.plugin.models.RepositoryVersion` to be checked\n and possibly modified.\n", "issue": "Improve the logic in the ACSHandleStage\n**Version**\r\nmain\r\n\r\n**Describe the bug**\r\nIf there are ACS that point to the same content, pick the first RA instead of last\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n\r\n**Expected behavior**\r\nA clear and concise description of what you expected to happen.\r\n\r\n**Additional context**\r\nAdd any other context about the problem here. Please provide links to any previous discussions via Discourse or Bugzilla.\r\n\n", "before_files": [{"content": "from collections import defaultdict\nfrom gettext import gettext as _\nimport logging\n\nfrom django.db.models import Q\n\n\n_logger = logging.getLogger(__name__)\n\n\ndef remove_duplicates(repository_version):\n \"\"\"\n Inspect content additions in the `RepositoryVersion` and replace repository duplicates.\n\n Some content can have two instances A and B which are unique, but cannot both exist together in\n one repository. For example, pulp_file's content has `relative_path` for that file within the\n repository.\n\n Any content newly added to the :class:`~pulpcore.plugin.models.RepositoryVersion` is checked\n against existing content in the :class:`~pulpcore.plugin.models.RepositoryVersion` with newer\n \"repository duplicates\" replace existing \"repository duplicates\". Each Content model can define\n a `repo_key_fields` attribute with the field names to be compared. 
If all `repo_key_fields`\n contain the same value for two content units, they are considered \"repository duplicates\".\n\n After instantiating `RemoveDuplicates` call it with the `run()` method and pass in the\n :class:`~pulpcore.plugin.models.RepositoryVersion` to be checked and possibly modified as a\n parameter to `run()`.\n\n Args:\n repository_version: The :class:`~pulpcore.plugin.models.RepositoryVersion` to be checked\n and possibly modified.\n \"\"\"\n query_for_repo_duplicates_by_type = defaultdict(lambda: Q())\n for item in repository_version.added():\n detail_item = item.cast()\n if detail_item.repo_key_fields == ():\n continue\n unit_q_dict = {\n field: getattr(detail_item, field) for field in detail_item.repo_key_fields\n }\n item_query = Q(**unit_q_dict) & ~Q(pk=detail_item.pk)\n query_for_repo_duplicates_by_type[detail_item._meta.model] |= item_query\n\n for model in query_for_repo_duplicates_by_type:\n _logger.debug(_(\"Removing duplicates for type: {}\".format(model)))\n qs = model.objects.filter(query_for_repo_duplicates_by_type[model])\n repository_version.remove_content(qs)\n", "path": "pulpcore/plugin/repo_version_utils.py"}], "after_files": [{"content": "from collections import defaultdict\nfrom gettext import gettext as _\nimport logging\n\nfrom django.db.models import Q\n\n\n_logger = logging.getLogger(__name__)\n\n\ndef remove_duplicates(repository_version):\n \"\"\"\n Inspect content additions in the `RepositoryVersion` and replace repository duplicates.\n\n Some content can have two instances A and B which are unique, but cannot both exist together in\n one repository. For example, pulp_file's content has `relative_path` for that file within the\n repository.\n\n Any content newly added to the :class:`~pulpcore.plugin.models.RepositoryVersion` is checked\n against existing content in the :class:`~pulpcore.plugin.models.RepositoryVersion` with newer\n \"repository duplicates\" replace existing \"repository duplicates\". Each Content model can define\n a `repo_key_fields` attribute with the field names to be compared. If all `repo_key_fields`\n contain the same value for two content units, they are considered \"repository duplicates\".\n\n Args:\n repository_version: The :class:`~pulpcore.plugin.models.RepositoryVersion` to be checked\n and possibly modified.\n \"\"\"\n query_for_repo_duplicates_by_type = defaultdict(lambda: Q())\n for item in repository_version.added():\n detail_item = item.cast()\n if detail_item.repo_key_fields == ():\n continue\n unit_q_dict = {\n field: getattr(detail_item, field) for field in detail_item.repo_key_fields\n }\n item_query = Q(**unit_q_dict) & ~Q(pk=detail_item.pk)\n query_for_repo_duplicates_by_type[detail_item._meta.model] |= item_query\n\n for model in query_for_repo_duplicates_by_type:\n _logger.debug(_(\"Removing duplicates for type: {}\".format(model)))\n qs = model.objects.filter(query_for_repo_duplicates_by_type[model])\n repository_version.remove_content(qs)\n", "path": "pulpcore/plugin/repo_version_utils.py"}]}
num_tokens: 885
num_tokens_diff: 183

problem_id: gh_patches_debug_34351
source: rasdani/github-patches
task_type: git_diff
in_source_id: scikit-hep__awkward-2065
prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`ak.fill_none` doesn't support option-in-record
### Version of Awkward Array
main
### Description and code to reproduce
The following example is left unchanged by `ak.fill_none`:
```python
import awkward as ak
record = ak.zip({
'x': [1, None],
'y': [2, 3]
})
result = ak.fill_none(record, 0, axis=-1)
result.show()
```
I'm fairly sure that this is a bug, because `ak.is_none` walks successfully through records. My understanding of records is that it's only reducers that cannot peer through them.
Of course, this example would imply
```python
record = ak.zip({
"x": [1, 4],
"y": [2, 3]
}).mask[[True, False]]
assert ak.fill_none(record, 0, axis=-1).to_list() == [
{"x": 1, "y": 2},
0
]
```
I think this is reasonable; again, `is_none` behaves identically to this. Maybe we _do_ want a parameter in future, e.g. `visit_records` or some such.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/awkward/operations/ak_fill_none.py`
Content:
```
1 # BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE
2
3 import numbers
4
5 import awkward as ak
6
7 np = ak._nplikes.NumpyMetadata.instance()
8 cpu = ak._backends.NumpyBackend.instance()
9
10
11 def fill_none(array, value, axis=-1, *, highlevel=True, behavior=None):
12 """
13 Args:
14 array: Array-like data (anything #ak.to_layout recognizes).
15 value: Data with which to replace None.
16 axis (None or int): If None, replace all None values in the array
17 with the given value; if an int, The dimension at which this
18 operation is applied. The outermost dimension is `0`, followed
19 by `1`, etc., and negative values count backward from the
20 innermost: `-1` is the innermost dimension, `-2` is the next
21 level up, etc.
22 highlevel (bool): If True, return an #ak.Array; otherwise, return
23 a low-level #ak.contents.Content subclass.
24 behavior (None or dict): Custom #ak.behavior for the output array, if
25 high-level.
26
27 Replaces missing values (None) with a given `value`.
28
29 For example, in the following
30
31 >>> array = ak.Array([[1.1, None, 2.2], [], [None, 3.3, 4.4]])
32
33 The None values could be replaced with `0` by
34
35 >>> ak.fill_none(array, 0)
36 <Array [[1.1, 0, 2.2], [], [0, 3.3, 4.4]] type='3 * var * float64'>
37
38 The replacement value doesn't strictly need the same type as the
39 surrounding data. For example, the None values could also be replaced
40 by a string.
41
42 >>> ak.fill_none(array, "hi")
43 <Array [[1.1, 'hi', 2.2], [], ['hi', ...]] type='3 * var * union[float64, s...'>
44
45 The list content now has a union type:
46
47 >>> ak.fill_none(array, "hi").type.show()
48 3 * var * union[
49 float64,
50 string
51 ]
52
53 The values could be floating-point numbers or strings.
54 """
55 with ak._errors.OperationErrorContext(
56 "ak.fill_none",
57 dict(
58 array=array, value=value, axis=axis, highlevel=highlevel, behavior=behavior
59 ),
60 ):
61 return _impl(array, value, axis, highlevel, behavior)
62
63
64 def _impl(array, value, axis, highlevel, behavior):
65 arraylayout = ak.operations.to_layout(array, allow_record=True, allow_other=False)
66 behavior = ak._util.behavior_of(array, behavior=behavior)
67 backend = ak._backends.backend_of(arraylayout, default=cpu)
68
69 # Convert value type to appropriate layout
70 if (
71 isinstance(value, np.ndarray)
72 and issubclass(value.dtype.type, (np.bool_, np.number))
73 and len(value.shape) != 0
74 ):
75 valuelayout = ak.operations.to_layout(
76 backend.nplike.asarray(value)[np.newaxis],
77 allow_record=False,
78 allow_other=False,
79 )
80 elif isinstance(value, (bool, numbers.Number, np.bool_, np.number)) or (
81 isinstance(value, np.ndarray)
82 and issubclass(value.dtype.type, (np.bool_, np.number))
83 ):
84 valuelayout = ak.operations.to_layout(
85 backend.nplike.asarray(value), allow_record=False, allow_other=False
86 )
87 elif (
88 ak._util.is_sized_iterable(value)
89 and not (isinstance(value, (str, bytes)))
90 or isinstance(value, (ak.highlevel.Record, ak.record.Record))
91 ):
92 valuelayout = ak.operations.to_layout(
93 value, allow_record=True, allow_other=False
94 )
95 if isinstance(valuelayout, ak.record.Record):
96 valuelayout = valuelayout.array[valuelayout.at : valuelayout.at + 1]
97 elif len(valuelayout) == 0:
98 offsets = ak.index.Index64(
99 backend.index_nplike.array([0, 0], dtype=np.int64)
100 )
101 valuelayout = ak.contents.ListOffsetArray(offsets, valuelayout)
102 else:
103 valuelayout = ak.contents.RegularArray(valuelayout, len(valuelayout), 1)
104 else:
105 valuelayout = ak.operations.to_layout(
106 [value], allow_record=False, allow_other=False
107 )
108
109 def maybe_fillna(layout):
110 if layout.is_option:
111 return ak._do.fill_none(layout, valuelayout)
112 else:
113 return layout
114
115 if axis is None:
116
117 def action(layout, continuation, **kwargs):
118 return maybe_fillna(continuation())
119
120 else:
121
122 def action(layout, depth, **kwargs):
123 posaxis = ak._util.maybe_posaxis(layout, axis, depth)
124 if posaxis is not None and posaxis + 1 < depth:
125 return layout
126 elif posaxis is not None and posaxis + 1 == depth:
127 return maybe_fillna(layout)
128 elif layout.is_leaf:
129 raise ak._errors.wrap_error(
130 np.AxisError(
131 f"axis={axis} exceeds the depth of this array ({depth})"
132 )
133 )
134
135 out = ak._do.recursively_apply(arraylayout, action, behavior)
136
137 return ak._util.wrap(out, behavior, highlevel)
138
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```

golden_diff:
diff --git a/src/awkward/operations/ak_fill_none.py b/src/awkward/operations/ak_fill_none.py
--- a/src/awkward/operations/ak_fill_none.py
+++ b/src/awkward/operations/ak_fill_none.py
@@ -63,7 +63,7 @@
def _impl(array, value, axis, highlevel, behavior):
arraylayout = ak.operations.to_layout(array, allow_record=True, allow_other=False)
- behavior = ak._util.behavior_of(array, behavior=behavior)
+ behavior = ak._util.behavior_of(array, value, behavior=behavior)
backend = ak._backends.backend_of(arraylayout, default=cpu)
# Convert value type to appropriate layout
@@ -106,25 +106,24 @@
[value], allow_record=False, allow_other=False
)
- def maybe_fillna(layout):
- if layout.is_option:
- return ak._do.fill_none(layout, valuelayout)
- else:
- return layout
-
if axis is None:
def action(layout, continuation, **kwargs):
- return maybe_fillna(continuation())
+ if layout.is_option:
+ return ak._do.fill_none(continuation(), valuelayout)
else:
def action(layout, depth, **kwargs):
posaxis = ak._util.maybe_posaxis(layout, axis, depth)
- if posaxis is not None and posaxis + 1 < depth:
- return layout
- elif posaxis is not None and posaxis + 1 == depth:
- return maybe_fillna(layout)
+ if posaxis is not None and posaxis + 1 == depth:
+ if layout.is_union or layout.is_record:
+ return None
+ elif layout.is_option:
+ return ak._do.fill_none(layout, valuelayout)
+ else:
+ return layout
+
elif layout.is_leaf:
raise ak._errors.wrap_error(
np.AxisError(
@@ -133,5 +132,4 @@
)
out = ak._do.recursively_apply(arraylayout, action, behavior)
-
return ak._util.wrap(out, behavior, highlevel)
|
{"golden_diff": "diff --git a/src/awkward/operations/ak_fill_none.py b/src/awkward/operations/ak_fill_none.py\n--- a/src/awkward/operations/ak_fill_none.py\n+++ b/src/awkward/operations/ak_fill_none.py\n@@ -63,7 +63,7 @@\n \n def _impl(array, value, axis, highlevel, behavior):\n arraylayout = ak.operations.to_layout(array, allow_record=True, allow_other=False)\n- behavior = ak._util.behavior_of(array, behavior=behavior)\n+ behavior = ak._util.behavior_of(array, value, behavior=behavior)\n backend = ak._backends.backend_of(arraylayout, default=cpu)\n \n # Convert value type to appropriate layout\n@@ -106,25 +106,24 @@\n [value], allow_record=False, allow_other=False\n )\n \n- def maybe_fillna(layout):\n- if layout.is_option:\n- return ak._do.fill_none(layout, valuelayout)\n- else:\n- return layout\n-\n if axis is None:\n \n def action(layout, continuation, **kwargs):\n- return maybe_fillna(continuation())\n+ if layout.is_option:\n+ return ak._do.fill_none(continuation(), valuelayout)\n \n else:\n \n def action(layout, depth, **kwargs):\n posaxis = ak._util.maybe_posaxis(layout, axis, depth)\n- if posaxis is not None and posaxis + 1 < depth:\n- return layout\n- elif posaxis is not None and posaxis + 1 == depth:\n- return maybe_fillna(layout)\n+ if posaxis is not None and posaxis + 1 == depth:\n+ if layout.is_union or layout.is_record:\n+ return None\n+ elif layout.is_option:\n+ return ak._do.fill_none(layout, valuelayout)\n+ else:\n+ return layout\n+\n elif layout.is_leaf:\n raise ak._errors.wrap_error(\n np.AxisError(\n@@ -133,5 +132,4 @@\n )\n \n out = ak._do.recursively_apply(arraylayout, action, behavior)\n-\n return ak._util.wrap(out, behavior, highlevel)\n", "issue": "`ak.fill_none` doesn't support option-in-record\n### Version of Awkward Array\r\n\r\nmain\r\n\r\n### Description and code to reproduce\r\n\r\nThe following example is left unchanged by `ak.fill_none`:\r\n```python\r\nimport awkward as ak\r\n\r\nrecord = ak.zip({\r\n 'x': [1, None],\r\n 'y': [2, 3]\r\n})\r\n\r\nresult = ak.fill_none(record, 0, axis=-1)\r\nresult.show()\r\n```\r\n\r\nI'm fairly sure that this is a bug, because `ak.is_none` walks successfully through records. My understanding of records is that it's only reducers that cannot peer through them. \r\n\r\nOf course, this example would imply\r\n```python\r\nrecord = ak.zip({\r\n \"x\": [1, 4], \r\n\t\"y\": [2, 3]\r\n}).mask[[True, False]]\r\nassert ak.fill_none(record, 0, axis=-1).to_list() == [\r\n {\"x\": 1, \"y\": 2},\r\n 0\r\n]\r\n```\r\n\r\nI think this is reasonable; again, `is_none` behaves identically to this. Maybe we _do_ want a parameter in future, e.g. `visit_records` or some such.\n", "before_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n\nimport numbers\n\nimport awkward as ak\n\nnp = ak._nplikes.NumpyMetadata.instance()\ncpu = ak._backends.NumpyBackend.instance()\n\n\ndef fill_none(array, value, axis=-1, *, highlevel=True, behavior=None):\n \"\"\"\n Args:\n array: Array-like data (anything #ak.to_layout recognizes).\n value: Data with which to replace None.\n axis (None or int): If None, replace all None values in the array\n with the given value; if an int, The dimension at which this\n operation is applied. 
The outermost dimension is `0`, followed\n by `1`, etc., and negative values count backward from the\n innermost: `-1` is the innermost dimension, `-2` is the next\n level up, etc.\n highlevel (bool): If True, return an #ak.Array; otherwise, return\n a low-level #ak.contents.Content subclass.\n behavior (None or dict): Custom #ak.behavior for the output array, if\n high-level.\n\n Replaces missing values (None) with a given `value`.\n\n For example, in the following\n\n >>> array = ak.Array([[1.1, None, 2.2], [], [None, 3.3, 4.4]])\n\n The None values could be replaced with `0` by\n\n >>> ak.fill_none(array, 0)\n <Array [[1.1, 0, 2.2], [], [0, 3.3, 4.4]] type='3 * var * float64'>\n\n The replacement value doesn't strictly need the same type as the\n surrounding data. For example, the None values could also be replaced\n by a string.\n\n >>> ak.fill_none(array, \"hi\")\n <Array [[1.1, 'hi', 2.2], [], ['hi', ...]] type='3 * var * union[float64, s...'>\n\n The list content now has a union type:\n\n >>> ak.fill_none(array, \"hi\").type.show()\n 3 * var * union[\n float64,\n string\n ]\n\n The values could be floating-point numbers or strings.\n \"\"\"\n with ak._errors.OperationErrorContext(\n \"ak.fill_none\",\n dict(\n array=array, value=value, axis=axis, highlevel=highlevel, behavior=behavior\n ),\n ):\n return _impl(array, value, axis, highlevel, behavior)\n\n\ndef _impl(array, value, axis, highlevel, behavior):\n arraylayout = ak.operations.to_layout(array, allow_record=True, allow_other=False)\n behavior = ak._util.behavior_of(array, behavior=behavior)\n backend = ak._backends.backend_of(arraylayout, default=cpu)\n\n # Convert value type to appropriate layout\n if (\n isinstance(value, np.ndarray)\n and issubclass(value.dtype.type, (np.bool_, np.number))\n and len(value.shape) != 0\n ):\n valuelayout = ak.operations.to_layout(\n backend.nplike.asarray(value)[np.newaxis],\n allow_record=False,\n allow_other=False,\n )\n elif isinstance(value, (bool, numbers.Number, np.bool_, np.number)) or (\n isinstance(value, np.ndarray)\n and issubclass(value.dtype.type, (np.bool_, np.number))\n ):\n valuelayout = ak.operations.to_layout(\n backend.nplike.asarray(value), allow_record=False, allow_other=False\n )\n elif (\n ak._util.is_sized_iterable(value)\n and not (isinstance(value, (str, bytes)))\n or isinstance(value, (ak.highlevel.Record, ak.record.Record))\n ):\n valuelayout = ak.operations.to_layout(\n value, allow_record=True, allow_other=False\n )\n if isinstance(valuelayout, ak.record.Record):\n valuelayout = valuelayout.array[valuelayout.at : valuelayout.at + 1]\n elif len(valuelayout) == 0:\n offsets = ak.index.Index64(\n backend.index_nplike.array([0, 0], dtype=np.int64)\n )\n valuelayout = ak.contents.ListOffsetArray(offsets, valuelayout)\n else:\n valuelayout = ak.contents.RegularArray(valuelayout, len(valuelayout), 1)\n else:\n valuelayout = ak.operations.to_layout(\n [value], allow_record=False, allow_other=False\n )\n\n def maybe_fillna(layout):\n if layout.is_option:\n return ak._do.fill_none(layout, valuelayout)\n else:\n return layout\n\n if axis is None:\n\n def action(layout, continuation, **kwargs):\n return maybe_fillna(continuation())\n\n else:\n\n def action(layout, depth, **kwargs):\n posaxis = ak._util.maybe_posaxis(layout, axis, depth)\n if posaxis is not None and posaxis + 1 < depth:\n return layout\n elif posaxis is not None and posaxis + 1 == depth:\n return maybe_fillna(layout)\n elif layout.is_leaf:\n raise ak._errors.wrap_error(\n np.AxisError(\n f\"axis={axis} 
exceeds the depth of this array ({depth})\"\n )\n )\n\n out = ak._do.recursively_apply(arraylayout, action, behavior)\n\n return ak._util.wrap(out, behavior, highlevel)\n", "path": "src/awkward/operations/ak_fill_none.py"}], "after_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n\nimport numbers\n\nimport awkward as ak\n\nnp = ak._nplikes.NumpyMetadata.instance()\ncpu = ak._backends.NumpyBackend.instance()\n\n\ndef fill_none(array, value, axis=-1, *, highlevel=True, behavior=None):\n \"\"\"\n Args:\n array: Array-like data (anything #ak.to_layout recognizes).\n value: Data with which to replace None.\n axis (None or int): If None, replace all None values in the array\n with the given value; if an int, The dimension at which this\n operation is applied. The outermost dimension is `0`, followed\n by `1`, etc., and negative values count backward from the\n innermost: `-1` is the innermost dimension, `-2` is the next\n level up, etc.\n highlevel (bool): If True, return an #ak.Array; otherwise, return\n a low-level #ak.contents.Content subclass.\n behavior (None or dict): Custom #ak.behavior for the output array, if\n high-level.\n\n Replaces missing values (None) with a given `value`.\n\n For example, in the following\n\n >>> array = ak.Array([[1.1, None, 2.2], [], [None, 3.3, 4.4]])\n\n The None values could be replaced with `0` by\n\n >>> ak.fill_none(array, 0)\n <Array [[1.1, 0, 2.2], [], [0, 3.3, 4.4]] type='3 * var * float64'>\n\n The replacement value doesn't strictly need the same type as the\n surrounding data. For example, the None values could also be replaced\n by a string.\n\n >>> ak.fill_none(array, \"hi\")\n <Array [[1.1, 'hi', 2.2], [], ['hi', ...]] type='3 * var * union[float64, s...'>\n\n The list content now has a union type:\n\n >>> ak.fill_none(array, \"hi\").type.show()\n 3 * var * union[\n float64,\n string\n ]\n\n The values could be floating-point numbers or strings.\n \"\"\"\n with ak._errors.OperationErrorContext(\n \"ak.fill_none\",\n dict(\n array=array, value=value, axis=axis, highlevel=highlevel, behavior=behavior\n ),\n ):\n return _impl(array, value, axis, highlevel, behavior)\n\n\ndef _impl(array, value, axis, highlevel, behavior):\n arraylayout = ak.operations.to_layout(array, allow_record=True, allow_other=False)\n behavior = ak._util.behavior_of(array, value, behavior=behavior)\n backend = ak._backends.backend_of(arraylayout, default=cpu)\n\n # Convert value type to appropriate layout\n if (\n isinstance(value, np.ndarray)\n and issubclass(value.dtype.type, (np.bool_, np.number))\n and len(value.shape) != 0\n ):\n valuelayout = ak.operations.to_layout(\n backend.nplike.asarray(value)[np.newaxis],\n allow_record=False,\n allow_other=False,\n )\n elif isinstance(value, (bool, numbers.Number, np.bool_, np.number)) or (\n isinstance(value, np.ndarray)\n and issubclass(value.dtype.type, (np.bool_, np.number))\n ):\n valuelayout = ak.operations.to_layout(\n backend.nplike.asarray(value), allow_record=False, allow_other=False\n )\n elif (\n ak._util.is_sized_iterable(value)\n and not (isinstance(value, (str, bytes)))\n or isinstance(value, (ak.highlevel.Record, ak.record.Record))\n ):\n valuelayout = ak.operations.to_layout(\n value, allow_record=True, allow_other=False\n )\n if isinstance(valuelayout, ak.record.Record):\n valuelayout = valuelayout.array[valuelayout.at : valuelayout.at + 1]\n elif len(valuelayout) == 0:\n offsets = ak.index.Index64(\n backend.index_nplike.array([0, 0], dtype=np.int64)\n 
)\n valuelayout = ak.contents.ListOffsetArray(offsets, valuelayout)\n else:\n valuelayout = ak.contents.RegularArray(valuelayout, len(valuelayout), 1)\n else:\n valuelayout = ak.operations.to_layout(\n [value], allow_record=False, allow_other=False\n )\n\n if axis is None:\n\n def action(layout, continuation, **kwargs):\n if layout.is_option:\n return ak._do.fill_none(continuation(), valuelayout)\n\n else:\n\n def action(layout, depth, **kwargs):\n posaxis = ak._util.maybe_posaxis(layout, axis, depth)\n if posaxis is not None and posaxis + 1 == depth:\n if layout.is_union or layout.is_record:\n return None\n elif layout.is_option:\n return ak._do.fill_none(layout, valuelayout)\n else:\n return layout\n\n elif layout.is_leaf:\n raise ak._errors.wrap_error(\n np.AxisError(\n f\"axis={axis} exceeds the depth of this array ({depth})\"\n )\n )\n\n out = ak._do.recursively_apply(arraylayout, action, behavior)\n return ak._util.wrap(out, behavior, highlevel)\n", "path": "src/awkward/operations/ak_fill_none.py"}]}
| 2,029 | 490 |
gh_patches_debug_4095 | rasdani/github-patches | git_diff | mne-tools__mne-python-6083 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pip install mne should install numpy and scipy
Currently, the `mne` PyPI package doesn't have any dependencies, but I think it should really treat `numpy` and `scipy` as hard dependencies (and automatically install them if they are not present).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 # Copyright (C) 2011-2017 Alexandre Gramfort
4 # <[email protected]>
5
6 import os
7 import os.path as op
8
9 from setuptools import setup
10
11 # get the version (don't import mne here, so dependencies are not needed)
12 version = None
13 with open(op.join('mne', '__init__.py'), 'r') as fid:
14 for line in (line.strip() for line in fid):
15 if line.startswith('__version__'):
16 version = line.split('=')[1].strip().strip('\'')
17 break
18 if version is None:
19 raise RuntimeError('Could not determine version')
20
21
22 descr = """MNE python project for MEG and EEG data analysis."""
23
24 DISTNAME = 'mne'
25 DESCRIPTION = descr
26 MAINTAINER = 'Alexandre Gramfort'
27 MAINTAINER_EMAIL = '[email protected]'
28 URL = 'http://martinos.org/mne'
29 LICENSE = 'BSD (3-clause)'
30 DOWNLOAD_URL = 'http://github.com/mne-tools/mne-python'
31 VERSION = version
32
33
34 def package_tree(pkgroot):
35 """Get the submodule list."""
36 # Adapted from VisPy
37 path = op.dirname(__file__)
38 subdirs = [op.relpath(i[0], path).replace(op.sep, '.')
39 for i in os.walk(op.join(path, pkgroot))
40 if '__init__.py' in i[2]]
41 return sorted(subdirs)
42
43
44 if __name__ == "__main__":
45 if op.exists('MANIFEST'):
46 os.remove('MANIFEST')
47
48 setup(name=DISTNAME,
49 maintainer=MAINTAINER,
50 include_package_data=True,
51 maintainer_email=MAINTAINER_EMAIL,
52 description=DESCRIPTION,
53 license=LICENSE,
54 url=URL,
55 version=VERSION,
56 download_url=DOWNLOAD_URL,
57 long_description=open('README.rst').read(),
58 zip_safe=False, # the package can run out of an .egg file
59 classifiers=['Intended Audience :: Science/Research',
60 'Intended Audience :: Developers',
61 'License :: OSI Approved',
62 'Programming Language :: Python',
63 'Topic :: Software Development',
64 'Topic :: Scientific/Engineering',
65 'Operating System :: Microsoft :: Windows',
66 'Operating System :: POSIX',
67 'Operating System :: Unix',
68 'Operating System :: MacOS',
69 'Programming Language :: Python :: 3',
70 ],
71 platforms='any',
72 packages=package_tree('mne'),
73 package_data={'mne': [
74 op.join('data', '*.sel'),
75 op.join('data', 'icos.fif.gz'),
76 op.join('data', 'coil_def*.dat'),
77 op.join('data', 'helmets', '*.fif.gz'),
78 op.join('data', 'FreeSurferColorLUT.txt'),
79 op.join('data', 'image', '*gif'),
80 op.join('data', 'image', '*lout'),
81 op.join('data', 'fsaverage', '*.fif'),
82 op.join('channels', 'data', 'layouts', '*.lout'),
83 op.join('channels', 'data', 'layouts', '*.lay'),
84 op.join('channels', 'data', 'montages', '*.sfp'),
85 op.join('channels', 'data', 'montages', '*.txt'),
86 op.join('channels', 'data', 'montages', '*.elc'),
87 op.join('channels', 'data', 'neighbors', '*.mat'),
88 op.join('datasets', 'sleep_physionet', 'SHA1SUMS'),
89 op.join('gui', 'help', '*.json'),
90 op.join('html', '*.js'),
91 op.join('html', '*.css'),
92 op.join('io', 'artemis123', 'resources', '*.csv'),
93 op.join('io', 'edf', 'gdf_encodes.txt')
94 ]},
95 scripts=['bin/mne'])
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -69,6 +69,7 @@
'Programming Language :: Python :: 3',
],
platforms='any',
+ install_requires=['numpy>=1.11.3', 'scipy>=0.17.1'],
packages=package_tree('mne'),
package_data={'mne': [
op.join('data', '*.sel'),
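
Not part of the patch — just a quick, hedged way to confirm the new pins take effect after installing the patched package in a fresh environment (assumes Python 3.8+ for `importlib.metadata`):

```python
from importlib.metadata import requires

# After `pip install .` from the patched source tree, the declared
# requirements should include entries like "numpy>=1.11.3" and "scipy>=0.17.1".
for requirement in requires("mne") or []:
    print(requirement)
```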
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -69,6 +69,7 @@\n 'Programming Language :: Python :: 3',\n ],\n platforms='any',\n+ install_requires=['numpy>=1.11.3', 'scipy>=0.17.1'],\n packages=package_tree('mne'),\n package_data={'mne': [\n op.join('data', '*.sel'),\n", "issue": "pip install mne should install numpy and scipy\nCurrently, the `mne` PyPI package doesn't have any dependencies, but I think it should really treat `numpy` and `scipy` as hard dependencies (and automatically install them if they are not present).\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# Copyright (C) 2011-2017 Alexandre Gramfort\n# <[email protected]>\n\nimport os\nimport os.path as op\n\nfrom setuptools import setup\n\n# get the version (don't import mne here, so dependencies are not needed)\nversion = None\nwith open(op.join('mne', '__init__.py'), 'r') as fid:\n for line in (line.strip() for line in fid):\n if line.startswith('__version__'):\n version = line.split('=')[1].strip().strip('\\'')\n break\nif version is None:\n raise RuntimeError('Could not determine version')\n\n\ndescr = \"\"\"MNE python project for MEG and EEG data analysis.\"\"\"\n\nDISTNAME = 'mne'\nDESCRIPTION = descr\nMAINTAINER = 'Alexandre Gramfort'\nMAINTAINER_EMAIL = '[email protected]'\nURL = 'http://martinos.org/mne'\nLICENSE = 'BSD (3-clause)'\nDOWNLOAD_URL = 'http://github.com/mne-tools/mne-python'\nVERSION = version\n\n\ndef package_tree(pkgroot):\n \"\"\"Get the submodule list.\"\"\"\n # Adapted from VisPy\n path = op.dirname(__file__)\n subdirs = [op.relpath(i[0], path).replace(op.sep, '.')\n for i in os.walk(op.join(path, pkgroot))\n if '__init__.py' in i[2]]\n return sorted(subdirs)\n\n\nif __name__ == \"__main__\":\n if op.exists('MANIFEST'):\n os.remove('MANIFEST')\n\n setup(name=DISTNAME,\n maintainer=MAINTAINER,\n include_package_data=True,\n maintainer_email=MAINTAINER_EMAIL,\n description=DESCRIPTION,\n license=LICENSE,\n url=URL,\n version=VERSION,\n download_url=DOWNLOAD_URL,\n long_description=open('README.rst').read(),\n zip_safe=False, # the package can run out of an .egg file\n classifiers=['Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved',\n 'Programming Language :: Python',\n 'Topic :: Software Development',\n 'Topic :: Scientific/Engineering',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: Unix',\n 'Operating System :: MacOS',\n 'Programming Language :: Python :: 3',\n ],\n platforms='any',\n packages=package_tree('mne'),\n package_data={'mne': [\n op.join('data', '*.sel'),\n op.join('data', 'icos.fif.gz'),\n op.join('data', 'coil_def*.dat'),\n op.join('data', 'helmets', '*.fif.gz'),\n op.join('data', 'FreeSurferColorLUT.txt'),\n op.join('data', 'image', '*gif'),\n op.join('data', 'image', '*lout'),\n op.join('data', 'fsaverage', '*.fif'),\n op.join('channels', 'data', 'layouts', '*.lout'),\n op.join('channels', 'data', 'layouts', '*.lay'),\n op.join('channels', 'data', 'montages', '*.sfp'),\n op.join('channels', 'data', 'montages', '*.txt'),\n op.join('channels', 'data', 'montages', '*.elc'),\n op.join('channels', 'data', 'neighbors', '*.mat'),\n op.join('datasets', 'sleep_physionet', 'SHA1SUMS'),\n op.join('gui', 'help', '*.json'),\n op.join('html', '*.js'),\n op.join('html', '*.css'),\n op.join('io', 'artemis123', 'resources', '*.csv'),\n op.join('io', 'edf', 'gdf_encodes.txt')\n ]},\n scripts=['bin/mne'])\n", "path": "setup.py"}], "after_files": 
[{"content": "#!/usr/bin/env python\n\n# Copyright (C) 2011-2017 Alexandre Gramfort\n# <[email protected]>\n\nimport os\nimport os.path as op\n\nfrom setuptools import setup\n\n# get the version (don't import mne here, so dependencies are not needed)\nversion = None\nwith open(op.join('mne', '__init__.py'), 'r') as fid:\n for line in (line.strip() for line in fid):\n if line.startswith('__version__'):\n version = line.split('=')[1].strip().strip('\\'')\n break\nif version is None:\n raise RuntimeError('Could not determine version')\n\n\ndescr = \"\"\"MNE python project for MEG and EEG data analysis.\"\"\"\n\nDISTNAME = 'mne'\nDESCRIPTION = descr\nMAINTAINER = 'Alexandre Gramfort'\nMAINTAINER_EMAIL = '[email protected]'\nURL = 'http://martinos.org/mne'\nLICENSE = 'BSD (3-clause)'\nDOWNLOAD_URL = 'http://github.com/mne-tools/mne-python'\nVERSION = version\n\n\ndef package_tree(pkgroot):\n \"\"\"Get the submodule list.\"\"\"\n # Adapted from VisPy\n path = op.dirname(__file__)\n subdirs = [op.relpath(i[0], path).replace(op.sep, '.')\n for i in os.walk(op.join(path, pkgroot))\n if '__init__.py' in i[2]]\n return sorted(subdirs)\n\n\nif __name__ == \"__main__\":\n if op.exists('MANIFEST'):\n os.remove('MANIFEST')\n\n setup(name=DISTNAME,\n maintainer=MAINTAINER,\n include_package_data=True,\n maintainer_email=MAINTAINER_EMAIL,\n description=DESCRIPTION,\n license=LICENSE,\n url=URL,\n version=VERSION,\n download_url=DOWNLOAD_URL,\n long_description=open('README.rst').read(),\n zip_safe=False, # the package can run out of an .egg file\n classifiers=['Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved',\n 'Programming Language :: Python',\n 'Topic :: Software Development',\n 'Topic :: Scientific/Engineering',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: Unix',\n 'Operating System :: MacOS',\n 'Programming Language :: Python :: 3',\n ],\n platforms='any',\n install_requires=['numpy>=1.11.3', 'scipy>=0.17.1'],\n packages=package_tree('mne'),\n package_data={'mne': [\n op.join('data', '*.sel'),\n op.join('data', 'icos.fif.gz'),\n op.join('data', 'coil_def*.dat'),\n op.join('data', 'helmets', '*.fif.gz'),\n op.join('data', 'FreeSurferColorLUT.txt'),\n op.join('data', 'image', '*gif'),\n op.join('data', 'image', '*lout'),\n op.join('data', 'fsaverage', '*.fif'),\n op.join('channels', 'data', 'layouts', '*.lout'),\n op.join('channels', 'data', 'layouts', '*.lay'),\n op.join('channels', 'data', 'montages', '*.sfp'),\n op.join('channels', 'data', 'montages', '*.txt'),\n op.join('channels', 'data', 'montages', '*.elc'),\n op.join('channels', 'data', 'neighbors', '*.mat'),\n op.join('datasets', 'sleep_physionet', 'SHA1SUMS'),\n op.join('gui', 'help', '*.json'),\n op.join('html', '*.js'),\n op.join('html', '*.css'),\n op.join('io', 'artemis123', 'resources', '*.csv'),\n op.join('io', 'edf', 'gdf_encodes.txt')\n ]},\n scripts=['bin/mne'])\n", "path": "setup.py"}]}
| 1,344 | 100 |
gh_patches_debug_4828 | rasdani/github-patches | git_diff | hylang__hy-2078 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Run REPL using `python -m hy`
Figuring out how to get a REPL is not trivial. What do you think about making `python -m hy` do it?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hy/__main__.py`
Content:
```
1 import hy # NOQA
2 import sys
3
4 # This just mocks the normalish behavior of the Python interp. Helpful to aid
5 # with shimming existing apps that don't really "work" with Hy.
6 #
7 # You could say this script helps Hyjack a file.
8 #
9
10
11 if len(sys.argv) > 1:
12 sys.argv.pop(0)
13 hy.importer._import_from_path('__main__', sys.argv[0])
14 sys.exit(0) # right?
15
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/hy/__main__.py b/hy/__main__.py
--- a/hy/__main__.py
+++ b/hy/__main__.py
@@ -1,14 +1,8 @@
-import hy # NOQA
import sys
-# This just mocks the normalish behavior of the Python interp. Helpful to aid
-# with shimming existing apps that don't really "work" with Hy.
-#
-# You could say this script helps Hyjack a file.
-#
+from hy.cmdline import hy_main
+# Running hy as a module (e.g. `python -m hy`)
+# is equivalent to running the main `hy` command.
-if len(sys.argv) > 1:
- sys.argv.pop(0)
- hy.importer._import_from_path('__main__', sys.argv[0])
- sys.exit(0) # right?
+sys.exit(hy_main())
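
In practice this makes `python -m hy` behave like the `hy` console command: invoked with no arguments it drops into the REPL, and `python -m hy myscript.hy arg1` runs a script the same way `hy myscript.hy arg1` would (assuming `hy_main` is the same entry point the `hy` script uses, as the import and comment suggest).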
|
{"golden_diff": "diff --git a/hy/__main__.py b/hy/__main__.py\n--- a/hy/__main__.py\n+++ b/hy/__main__.py\n@@ -1,14 +1,8 @@\n-import hy # NOQA\n import sys\n \n-# This just mocks the normalish behavior of the Python interp. Helpful to aid\n-# with shimming existing apps that don't really \"work\" with Hy.\n-#\n-# You could say this script helps Hyjack a file.\n-#\n+from hy.cmdline import hy_main\n \n+# Running hy as a module (e.g. `python -m hy`)\n+# is equivalent to running the main `hy` command.\n \n-if len(sys.argv) > 1:\n- sys.argv.pop(0)\n- hy.importer._import_from_path('__main__', sys.argv[0])\n- sys.exit(0) # right?\n+sys.exit(hy_main())\n", "issue": "Run REPL using `python -m hy`\nFiguring out how to get a REPL is not trivial. What do you think about making `python -m hy` do it?\n", "before_files": [{"content": "import hy # NOQA\nimport sys\n\n# This just mocks the normalish behavior of the Python interp. Helpful to aid\n# with shimming existing apps that don't really \"work\" with Hy.\n#\n# You could say this script helps Hyjack a file.\n#\n\n\nif len(sys.argv) > 1:\n sys.argv.pop(0)\n hy.importer._import_from_path('__main__', sys.argv[0])\n sys.exit(0) # right?\n", "path": "hy/__main__.py"}], "after_files": [{"content": "import sys\n\nfrom hy.cmdline import hy_main\n\n# Running hy as a module (e.g. `python -m hy`)\n# is equivalent to running the main `hy` command.\n\nsys.exit(hy_main())\n", "path": "hy/__main__.py"}]}
| 418 | 199 |
gh_patches_debug_1702 | rasdani/github-patches | git_diff | mozilla__bugbug-331 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Figure out what to do with http_service on CI
We have two options:
- build the http_service with fake models and don't push it on CI. Build it with real models and push it after training;
- build the http_service without models and let it download models at runtime.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `http_service/check_models.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import logging
7 import os
8 import sys
9
10 from bugbug.models.component import ComponentModel
11 from bugbug.models.defect_enhancement_task import DefectEnhancementTaskModel
12 from bugbug.models.regression import RegressionModel
13
14 logging.basicConfig(level=logging.INFO)
15 LOGGER = logging.getLogger()
16
17 MODELS = {
18 "defectenhancementtask": DefectEnhancementTaskModel,
19 "component": ComponentModel,
20 "regression": RegressionModel,
21 }
22 MODELS_DIR = os.path.join(os.path.dirname(__file__), "models")
23
24
25 def load_model(model):
26 model_file_path = os.path.join(MODELS_DIR, f"{model}model")
27 LOGGER.info(f"Lookup model in {model_file_path}")
28 model = MODELS[model].load(model_file_path)
29 return model
30
31
32 def check_models():
33 for model_name in MODELS.keys():
34 # Try loading the model
35 load_model(model_name)
36
37
38 if __name__ == "__main__":
39 try:
40 check_models()
41 except Exception:
42 LOGGER.warning(
43 "Failed to validate the models, please run `python models.py download`",
44 exc_info=True,
45 )
46 sys.exit(1)
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/http_service/check_models.py b/http_service/check_models.py
--- a/http_service/check_models.py
+++ b/http_service/check_models.py
@@ -36,6 +36,13 @@
if __name__ == "__main__":
+
+ should_check_models = os.environ.get("CHECK_MODELS", "1")
+
+ if should_check_models == "0":
+ print("Skipping checking models as instructed by env var $CHECK_MODELS")
+ sys.exit(0)
+
try:
check_models()
except Exception:
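
With this gate in place, the CI image build can export `CHECK_MODELS=0` so the container builds without real models, while the post-training job leaves the variable unset (it defaults to `"1"`), so the downloaded models are still validated before the image is pushed.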
|
{"golden_diff": "diff --git a/http_service/check_models.py b/http_service/check_models.py\n--- a/http_service/check_models.py\n+++ b/http_service/check_models.py\n@@ -36,6 +36,13 @@\n \n \n if __name__ == \"__main__\":\n+\n+ should_check_models = os.environ.get(\"CHECK_MODELS\", \"1\")\n+\n+ if should_check_models == \"0\":\n+ print(\"Skipping checking models as instructed by env var $CHECK_MODELS\")\n+ sys.exit(0)\n+\n try:\n check_models()\n except Exception:\n", "issue": "Figure out what to do with http_service on CI\nWe have two options:\r\n- build the http_service with fake models and don't push it on CI. Build it with real models and push it after training;\r\n- build the http_service without models and let it download models at runtime.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport logging\nimport os\nimport sys\n\nfrom bugbug.models.component import ComponentModel\nfrom bugbug.models.defect_enhancement_task import DefectEnhancementTaskModel\nfrom bugbug.models.regression import RegressionModel\n\nlogging.basicConfig(level=logging.INFO)\nLOGGER = logging.getLogger()\n\nMODELS = {\n \"defectenhancementtask\": DefectEnhancementTaskModel,\n \"component\": ComponentModel,\n \"regression\": RegressionModel,\n}\nMODELS_DIR = os.path.join(os.path.dirname(__file__), \"models\")\n\n\ndef load_model(model):\n model_file_path = os.path.join(MODELS_DIR, f\"{model}model\")\n LOGGER.info(f\"Lookup model in {model_file_path}\")\n model = MODELS[model].load(model_file_path)\n return model\n\n\ndef check_models():\n for model_name in MODELS.keys():\n # Try loading the model\n load_model(model_name)\n\n\nif __name__ == \"__main__\":\n try:\n check_models()\n except Exception:\n LOGGER.warning(\n \"Failed to validate the models, please run `python models.py download`\",\n exc_info=True,\n )\n sys.exit(1)\n", "path": "http_service/check_models.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport logging\nimport os\nimport sys\n\nfrom bugbug.models.component import ComponentModel\nfrom bugbug.models.defect_enhancement_task import DefectEnhancementTaskModel\nfrom bugbug.models.regression import RegressionModel\n\nlogging.basicConfig(level=logging.INFO)\nLOGGER = logging.getLogger()\n\nMODELS = {\n \"defectenhancementtask\": DefectEnhancementTaskModel,\n \"component\": ComponentModel,\n \"regression\": RegressionModel,\n}\nMODELS_DIR = os.path.join(os.path.dirname(__file__), \"models\")\n\n\ndef load_model(model):\n model_file_path = os.path.join(MODELS_DIR, f\"{model}model\")\n LOGGER.info(f\"Lookup model in {model_file_path}\")\n model = MODELS[model].load(model_file_path)\n return model\n\n\ndef check_models():\n for model_name in MODELS.keys():\n # Try loading the model\n load_model(model_name)\n\n\nif __name__ == \"__main__\":\n\n should_check_models = os.environ.get(\"CHECK_MODELS\", \"1\")\n\n if should_check_models == \"0\":\n print(\"Skipping checking models as instructed by env var $CHECK_MODELS\")\n sys.exit(0)\n\n try:\n check_models()\n except Exception:\n LOGGER.warning(\n \"Failed to validate the models, please run `python models.py download`\",\n exc_info=True,\n )\n sys.exit(1)\n", "path": "http_service/check_models.py"}]}
| 714 | 119 |
gh_patches_debug_43397 | rasdani/github-patches | git_diff | opsdroid__opsdroid-1931 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add token to websockets connector
Currently, anyone who knows the opsdroid URL and endpoint will be able to request a socket to initialize a websocket connection.
Ideally, we should allow users to select a token in the configuration settings. When opsdroid gets a request, if the token doesn't exist then we just reject the request.
This will also be a nice feature to implement alongside opsdroid-web v2
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opsdroid/connector/websocket/__init__.py`
Content:
```
1 """A connector which allows websocket connections."""
2 import json
3 import logging
4 import uuid
5 from datetime import datetime
6
7 import aiohttp
8 import aiohttp.web
9 from aiohttp import WSCloseCode
10 from opsdroid.connector import Connector, register_event
11 from opsdroid.events import Message
12
13 _LOGGER = logging.getLogger(__name__)
14 HEADERS = {"Access-Control-Allow-Origin": "*"}
15 CONFIG_SCHEMA = {"bot-name": str, "max-connections": int, "connection-timeout": int}
16
17
18 class ConnectorWebsocket(Connector):
19 """A connector which allows websocket connections."""
20
21 def __init__(self, config, opsdroid=None):
22 """Create the connector."""
23 super().__init__(config, opsdroid=opsdroid)
24 _LOGGER.debug(_("Starting Websocket connector."))
25 self.name = config.get("name", "websocket")
26 self.max_connections = self.config.get("max-connections", 10)
27 self.connection_timeout = self.config.get("connection-timeout", 60)
28 self.accepting_connections = True
29 self.active_connections = {}
30 self.available_connections = []
31 self.bot_name = self.config.get("bot-name", "opsdroid")
32
33 async def connect(self):
34 """Connect to the chat service."""
35 self.accepting_connections = True
36
37 self.opsdroid.web_server.web_app.router.add_get(
38 "/connector/websocket/{socket}", self.websocket_handler
39 )
40
41 self.opsdroid.web_server.web_app.router.add_post(
42 "/connector/websocket", self.new_websocket_handler
43 )
44
45 async def disconnect(self):
46 """Disconnect from current sessions."""
47 self.accepting_connections = False
48 connections_to_close = self.active_connections.copy()
49 for connection in connections_to_close:
50 await connections_to_close[connection].close(
51 code=WSCloseCode.GOING_AWAY, message="Server shutdown"
52 )
53
54 async def new_websocket_handler(self, request):
55 """Handle for aiohttp creating websocket connections."""
56 if (
57 len(self.active_connections) + len(self.available_connections)
58 < self.max_connections
59 and self.accepting_connections
60 ):
61 socket = {"id": str(uuid.uuid1()), "date": datetime.now()}
62 self.available_connections.append(socket)
63 return aiohttp.web.Response(
64 text=json.dumps({"socket": socket["id"]}), headers=HEADERS, status=200
65 )
66 return aiohttp.web.Response(
67 text=json.dumps("No connections available"), headers=HEADERS, status=429
68 )
69
70 async def websocket_handler(self, request):
71 """Handle for aiohttp handling websocket connections."""
72 socket = request.match_info.get("socket")
73 available = [
74 item for item in self.available_connections if item["id"] == socket
75 ]
76 if len(available) != 1:
77 return aiohttp.web.Response(
78 text=json.dumps("Please request a socket first"),
79 headers=HEADERS,
80 status=400,
81 )
82 if (
83 datetime.now() - available[0]["date"]
84 ).total_seconds() > self.connection_timeout:
85 self.available_connections.remove(available[0])
86 return aiohttp.web.Response(
87 text=json.dumps("Socket request timed out"), headers=HEADERS, status=408
88 )
89 self.available_connections.remove(available[0])
90 _LOGGER.debug(_("User connected to %s."), socket)
91
92 websocket = aiohttp.web.WebSocketResponse()
93 await websocket.prepare(request)
94
95 self.active_connections[socket] = websocket
96 async for msg in websocket:
97 if msg.type == aiohttp.WSMsgType.TEXT:
98 message = Message(text=msg.data, user=None, target=None, connector=self)
99 await self.opsdroid.parse(message)
100 elif msg.type == aiohttp.WSMsgType.ERROR:
101 _LOGGER.error(
102 _("Websocket connection closed with exception %s."),
103 websocket.exception(),
104 )
105
106 _LOGGER.info(_("websocket connection closed"))
107 self.active_connections.pop(socket, None)
108
109 return websocket
110
111 async def listen(self):
112 """Listen for and parse new messages.
113
114 Listening is handled by the aiohttp web server so
115 we don't need to do anything here.
116
117 """
118
119 @register_event(Message)
120 async def send_message(self, message):
121 """Respond with a message."""
122 try:
123 if message.target is None:
124 message.target = next(iter(self.active_connections))
125 _LOGGER.debug(
126 _("Responding with: '%s' in target %s"), message.text, message.target
127 )
128 await self.active_connections[message.target].send_str(message.text)
129 except KeyError:
130 _LOGGER.error(_("No active socket for target %s"), message.target)
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opsdroid/connector/websocket/__init__.py b/opsdroid/connector/websocket/__init__.py
--- a/opsdroid/connector/websocket/__init__.py
+++ b/opsdroid/connector/websocket/__init__.py
@@ -9,12 +9,49 @@
from aiohttp import WSCloseCode
from opsdroid.connector import Connector, register_event
from opsdroid.events import Message
+import dataclasses
+from typing import Optional
_LOGGER = logging.getLogger(__name__)
HEADERS = {"Access-Control-Allow-Origin": "*"}
CONFIG_SCHEMA = {"bot-name": str, "max-connections": int, "connection-timeout": int}
[email protected]
+class WebsocketMessage:
+ """A message received from a websocket connection."""
+
+ message: str
+ user: Optional[str]
+ socket: Optional[str]
+
+ @classmethod
+ def parse_payload(cls, payload: str):
+ """Parse the payload of a websocket message.
+
+ We will try to parse the payload as a json string.
+ If that fails, we will use the default values which are:
+
+ message: str
+ user: None
+ socket: None
+
+ """
+ try:
+ data = json.loads(payload)
+ return cls(
+ message=data.get("message"),
+ user=data.get("user"),
+ socket=data.get("socket"),
+ )
+ except json.JSONDecodeError:
+ return cls(
+ message=payload,
+ user=None,
+ socket=None,
+ )
+
+
class ConnectorWebsocket(Connector):
"""A connector which allows websocket connections."""
@@ -29,6 +66,7 @@
self.active_connections = {}
self.available_connections = []
self.bot_name = self.config.get("bot-name", "opsdroid")
+ self.authorization_token = self.config.get("token")
async def connect(self):
"""Connect to the chat service."""
@@ -53,6 +91,7 @@
async def new_websocket_handler(self, request):
"""Handle for aiohttp creating websocket connections."""
+ await self.validate_request(request)
if (
len(self.active_connections) + len(self.available_connections)
< self.max_connections
@@ -95,7 +134,13 @@
self.active_connections[socket] = websocket
async for msg in websocket:
if msg.type == aiohttp.WSMsgType.TEXT:
- message = Message(text=msg.data, user=None, target=None, connector=self)
+ payload = WebsocketMessage.parse_payload(msg.data)
+ message = Message(
+ text=payload.message,
+ user=payload.user,
+ target=payload.socket,
+ connector=self,
+ )
await self.opsdroid.parse(message)
elif msg.type == aiohttp.WSMsgType.ERROR:
_LOGGER.error(
@@ -108,6 +153,20 @@
return websocket
+ async def validate_request(self, request):
+ """Validate the request by looking at headers and the connector token.
+
+ If the token does not exist in the header, but exists in the configuration,
+ then we will simply return a Forbidden error.
+
+ """
+ client_token = request.headers.get("Authorization")
+ if self.authorization_token and (
+ client_token is None or client_token != self.authorization_token
+ ):
+ raise aiohttp.web.HTTPUnauthorized()
+ return True
+
async def listen(self):
"""Listen for and parse new messages.
@@ -117,7 +176,7 @@
"""
@register_event(Message)
- async def send_message(self, message):
+ async def send_message(self, message: Message):
"""Respond with a message."""
try:
if message.target is None:
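
For illustration only (not part of the patch): with a `token` set in the connector configuration, clients now have to send it in the `Authorization` header when requesting a socket, otherwise the new `validate_request` check answers with `401 Unauthorized`. A minimal client sketch, assuming opsdroid is listening on `http://localhost:8080` and configured with `token: my-secret`:

```python
import asyncio
import json

import aiohttp


async def main():
    headers = {"Authorization": "my-secret"}  # must match the connector's `token` setting
    async with aiohttp.ClientSession(headers=headers) as session:
        # Request a socket id; without the header this now fails with 401.
        async with session.post("http://localhost:8080/connector/websocket") as resp:
            socket_id = json.loads(await resp.text())["socket"]
        # Connect and send a payload in the shape WebsocketMessage.parse_payload expects.
        url = f"http://localhost:8080/connector/websocket/{socket_id}"
        async with session.ws_connect(url) as ws:
            await ws.send_str(json.dumps({"message": "hello", "user": "alice", "socket": socket_id}))


asyncio.run(main())
```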
|
{"golden_diff": "diff --git a/opsdroid/connector/websocket/__init__.py b/opsdroid/connector/websocket/__init__.py\n--- a/opsdroid/connector/websocket/__init__.py\n+++ b/opsdroid/connector/websocket/__init__.py\n@@ -9,12 +9,49 @@\n from aiohttp import WSCloseCode\n from opsdroid.connector import Connector, register_event\n from opsdroid.events import Message\n+import dataclasses\n+from typing import Optional\n \n _LOGGER = logging.getLogger(__name__)\n HEADERS = {\"Access-Control-Allow-Origin\": \"*\"}\n CONFIG_SCHEMA = {\"bot-name\": str, \"max-connections\": int, \"connection-timeout\": int}\n \n \[email protected]\n+class WebsocketMessage:\n+ \"\"\"A message received from a websocket connection.\"\"\"\n+\n+ message: str\n+ user: Optional[str]\n+ socket: Optional[str]\n+\n+ @classmethod\n+ def parse_payload(cls, payload: str):\n+ \"\"\"Parse the payload of a websocket message.\n+\n+ We will try to parse the payload as a json string.\n+ If that fails, we will use the default values which are:\n+\n+ message: str\n+ user: None\n+ socket: None\n+\n+ \"\"\"\n+ try:\n+ data = json.loads(payload)\n+ return cls(\n+ message=data.get(\"message\"),\n+ user=data.get(\"user\"),\n+ socket=data.get(\"socket\"),\n+ )\n+ except json.JSONDecodeError:\n+ return cls(\n+ message=payload,\n+ user=None,\n+ socket=None,\n+ )\n+\n+\n class ConnectorWebsocket(Connector):\n \"\"\"A connector which allows websocket connections.\"\"\"\n \n@@ -29,6 +66,7 @@\n self.active_connections = {}\n self.available_connections = []\n self.bot_name = self.config.get(\"bot-name\", \"opsdroid\")\n+ self.authorization_token = self.config.get(\"token\")\n \n async def connect(self):\n \"\"\"Connect to the chat service.\"\"\"\n@@ -53,6 +91,7 @@\n \n async def new_websocket_handler(self, request):\n \"\"\"Handle for aiohttp creating websocket connections.\"\"\"\n+ await self.validate_request(request)\n if (\n len(self.active_connections) + len(self.available_connections)\n < self.max_connections\n@@ -95,7 +134,13 @@\n self.active_connections[socket] = websocket\n async for msg in websocket:\n if msg.type == aiohttp.WSMsgType.TEXT:\n- message = Message(text=msg.data, user=None, target=None, connector=self)\n+ payload = WebsocketMessage.parse_payload(msg.data)\n+ message = Message(\n+ text=payload.message,\n+ user=payload.user,\n+ target=payload.socket,\n+ connector=self,\n+ )\n await self.opsdroid.parse(message)\n elif msg.type == aiohttp.WSMsgType.ERROR:\n _LOGGER.error(\n@@ -108,6 +153,20 @@\n \n return websocket\n \n+ async def validate_request(self, request):\n+ \"\"\"Validate the request by looking at headers and the connector token.\n+\n+ If the token does not exist in the header, but exists in the configuration,\n+ then we will simply return a Forbidden error.\n+\n+ \"\"\"\n+ client_token = request.headers.get(\"Authorization\")\n+ if self.authorization_token and (\n+ client_token is None or client_token != self.authorization_token\n+ ):\n+ raise aiohttp.web.HTTPUnauthorized()\n+ return True\n+\n async def listen(self):\n \"\"\"Listen for and parse new messages.\n \n@@ -117,7 +176,7 @@\n \"\"\"\n \n @register_event(Message)\n- async def send_message(self, message):\n+ async def send_message(self, message: Message):\n \"\"\"Respond with a message.\"\"\"\n try:\n if message.target is None:\n", "issue": "Add token to websockets connector\nCurrently, anyone that knows opdroid url and endpoint will be able to request a socket to initialize a websocket connection.\r\n\r\nIdeally, we should allow users to select a token in the 
configuration settings. When opsdroid gets a request, if the token doesn't exist then we just reject the request.\r\n\r\nThis will also a nice feature to implement along side opsdroid-web v2\n", "before_files": [{"content": "\"\"\"A connector which allows websocket connections.\"\"\"\nimport json\nimport logging\nimport uuid\nfrom datetime import datetime\n\nimport aiohttp\nimport aiohttp.web\nfrom aiohttp import WSCloseCode\nfrom opsdroid.connector import Connector, register_event\nfrom opsdroid.events import Message\n\n_LOGGER = logging.getLogger(__name__)\nHEADERS = {\"Access-Control-Allow-Origin\": \"*\"}\nCONFIG_SCHEMA = {\"bot-name\": str, \"max-connections\": int, \"connection-timeout\": int}\n\n\nclass ConnectorWebsocket(Connector):\n \"\"\"A connector which allows websocket connections.\"\"\"\n\n def __init__(self, config, opsdroid=None):\n \"\"\"Create the connector.\"\"\"\n super().__init__(config, opsdroid=opsdroid)\n _LOGGER.debug(_(\"Starting Websocket connector.\"))\n self.name = config.get(\"name\", \"websocket\")\n self.max_connections = self.config.get(\"max-connections\", 10)\n self.connection_timeout = self.config.get(\"connection-timeout\", 60)\n self.accepting_connections = True\n self.active_connections = {}\n self.available_connections = []\n self.bot_name = self.config.get(\"bot-name\", \"opsdroid\")\n\n async def connect(self):\n \"\"\"Connect to the chat service.\"\"\"\n self.accepting_connections = True\n\n self.opsdroid.web_server.web_app.router.add_get(\n \"/connector/websocket/{socket}\", self.websocket_handler\n )\n\n self.opsdroid.web_server.web_app.router.add_post(\n \"/connector/websocket\", self.new_websocket_handler\n )\n\n async def disconnect(self):\n \"\"\"Disconnect from current sessions.\"\"\"\n self.accepting_connections = False\n connections_to_close = self.active_connections.copy()\n for connection in connections_to_close:\n await connections_to_close[connection].close(\n code=WSCloseCode.GOING_AWAY, message=\"Server shutdown\"\n )\n\n async def new_websocket_handler(self, request):\n \"\"\"Handle for aiohttp creating websocket connections.\"\"\"\n if (\n len(self.active_connections) + len(self.available_connections)\n < self.max_connections\n and self.accepting_connections\n ):\n socket = {\"id\": str(uuid.uuid1()), \"date\": datetime.now()}\n self.available_connections.append(socket)\n return aiohttp.web.Response(\n text=json.dumps({\"socket\": socket[\"id\"]}), headers=HEADERS, status=200\n )\n return aiohttp.web.Response(\n text=json.dumps(\"No connections available\"), headers=HEADERS, status=429\n )\n\n async def websocket_handler(self, request):\n \"\"\"Handle for aiohttp handling websocket connections.\"\"\"\n socket = request.match_info.get(\"socket\")\n available = [\n item for item in self.available_connections if item[\"id\"] == socket\n ]\n if len(available) != 1:\n return aiohttp.web.Response(\n text=json.dumps(\"Please request a socket first\"),\n headers=HEADERS,\n status=400,\n )\n if (\n datetime.now() - available[0][\"date\"]\n ).total_seconds() > self.connection_timeout:\n self.available_connections.remove(available[0])\n return aiohttp.web.Response(\n text=json.dumps(\"Socket request timed out\"), headers=HEADERS, status=408\n )\n self.available_connections.remove(available[0])\n _LOGGER.debug(_(\"User connected to %s.\"), socket)\n\n websocket = aiohttp.web.WebSocketResponse()\n await websocket.prepare(request)\n\n self.active_connections[socket] = websocket\n async for msg in websocket:\n if msg.type == 
aiohttp.WSMsgType.TEXT:\n message = Message(text=msg.data, user=None, target=None, connector=self)\n await self.opsdroid.parse(message)\n elif msg.type == aiohttp.WSMsgType.ERROR:\n _LOGGER.error(\n _(\"Websocket connection closed with exception %s.\"),\n websocket.exception(),\n )\n\n _LOGGER.info(_(\"websocket connection closed\"))\n self.active_connections.pop(socket, None)\n\n return websocket\n\n async def listen(self):\n \"\"\"Listen for and parse new messages.\n\n Listening is handled by the aiohttp web server so\n we don't need to do anything here.\n\n \"\"\"\n\n @register_event(Message)\n async def send_message(self, message):\n \"\"\"Respond with a message.\"\"\"\n try:\n if message.target is None:\n message.target = next(iter(self.active_connections))\n _LOGGER.debug(\n _(\"Responding with: '%s' in target %s\"), message.text, message.target\n )\n await self.active_connections[message.target].send_str(message.text)\n except KeyError:\n _LOGGER.error(_(\"No active socket for target %s\"), message.target)\n", "path": "opsdroid/connector/websocket/__init__.py"}], "after_files": [{"content": "\"\"\"A connector which allows websocket connections.\"\"\"\nimport json\nimport logging\nimport uuid\nfrom datetime import datetime\n\nimport aiohttp\nimport aiohttp.web\nfrom aiohttp import WSCloseCode\nfrom opsdroid.connector import Connector, register_event\nfrom opsdroid.events import Message\nimport dataclasses\nfrom typing import Optional\n\n_LOGGER = logging.getLogger(__name__)\nHEADERS = {\"Access-Control-Allow-Origin\": \"*\"}\nCONFIG_SCHEMA = {\"bot-name\": str, \"max-connections\": int, \"connection-timeout\": int}\n\n\[email protected]\nclass WebsocketMessage:\n \"\"\"A message received from a websocket connection.\"\"\"\n\n message: str\n user: Optional[str]\n socket: Optional[str]\n\n @classmethod\n def parse_payload(cls, payload: str):\n \"\"\"Parse the payload of a websocket message.\n\n We will try to parse the payload as a json string.\n If that fails, we will use the default values which are:\n\n message: str\n user: None\n socket: None\n\n \"\"\"\n try:\n data = json.loads(payload)\n return cls(\n message=data.get(\"message\"),\n user=data.get(\"user\"),\n socket=data.get(\"socket\"),\n )\n except json.JSONDecodeError:\n return cls(\n message=payload,\n user=None,\n socket=None,\n )\n\n\nclass ConnectorWebsocket(Connector):\n \"\"\"A connector which allows websocket connections.\"\"\"\n\n def __init__(self, config, opsdroid=None):\n \"\"\"Create the connector.\"\"\"\n super().__init__(config, opsdroid=opsdroid)\n _LOGGER.debug(_(\"Starting Websocket connector.\"))\n self.name = config.get(\"name\", \"websocket\")\n self.max_connections = self.config.get(\"max-connections\", 10)\n self.connection_timeout = self.config.get(\"connection-timeout\", 60)\n self.accepting_connections = True\n self.active_connections = {}\n self.available_connections = []\n self.bot_name = self.config.get(\"bot-name\", \"opsdroid\")\n self.authorization_token = self.config.get(\"token\")\n\n async def connect(self):\n \"\"\"Connect to the chat service.\"\"\"\n self.accepting_connections = True\n\n self.opsdroid.web_server.web_app.router.add_get(\n \"/connector/websocket/{socket}\", self.websocket_handler\n )\n\n self.opsdroid.web_server.web_app.router.add_post(\n \"/connector/websocket\", self.new_websocket_handler\n )\n\n async def disconnect(self):\n \"\"\"Disconnect from current sessions.\"\"\"\n self.accepting_connections = False\n connections_to_close = self.active_connections.copy()\n for 
connection in connections_to_close:\n await connections_to_close[connection].close(\n code=WSCloseCode.GOING_AWAY, message=\"Server shutdown\"\n )\n\n async def new_websocket_handler(self, request):\n \"\"\"Handle for aiohttp creating websocket connections.\"\"\"\n await self.validate_request(request)\n if (\n len(self.active_connections) + len(self.available_connections)\n < self.max_connections\n and self.accepting_connections\n ):\n socket = {\"id\": str(uuid.uuid1()), \"date\": datetime.now()}\n self.available_connections.append(socket)\n return aiohttp.web.Response(\n text=json.dumps({\"socket\": socket[\"id\"]}), headers=HEADERS, status=200\n )\n return aiohttp.web.Response(\n text=json.dumps(\"No connections available\"), headers=HEADERS, status=429\n )\n\n async def websocket_handler(self, request):\n \"\"\"Handle for aiohttp handling websocket connections.\"\"\"\n socket = request.match_info.get(\"socket\")\n available = [\n item for item in self.available_connections if item[\"id\"] == socket\n ]\n if len(available) != 1:\n return aiohttp.web.Response(\n text=json.dumps(\"Please request a socket first\"),\n headers=HEADERS,\n status=400,\n )\n if (\n datetime.now() - available[0][\"date\"]\n ).total_seconds() > self.connection_timeout:\n self.available_connections.remove(available[0])\n return aiohttp.web.Response(\n text=json.dumps(\"Socket request timed out\"), headers=HEADERS, status=408\n )\n self.available_connections.remove(available[0])\n _LOGGER.debug(_(\"User connected to %s.\"), socket)\n\n websocket = aiohttp.web.WebSocketResponse()\n await websocket.prepare(request)\n\n self.active_connections[socket] = websocket\n async for msg in websocket:\n if msg.type == aiohttp.WSMsgType.TEXT:\n payload = WebsocketMessage.parse_payload(msg.data)\n message = Message(\n text=payload.message,\n user=payload.user,\n target=payload.socket,\n connector=self,\n )\n await self.opsdroid.parse(message)\n elif msg.type == aiohttp.WSMsgType.ERROR:\n _LOGGER.error(\n _(\"Websocket connection closed with exception %s.\"),\n websocket.exception(),\n )\n\n _LOGGER.info(_(\"websocket connection closed\"))\n self.active_connections.pop(socket, None)\n\n return websocket\n\n async def validate_request(self, request):\n \"\"\"Validate the request by looking at headers and the connector token.\n\n If the token does not exist in the header, but exists in the configuration,\n then we will simply return a Forbidden error.\n\n \"\"\"\n client_token = request.headers.get(\"Authorization\")\n if self.authorization_token and (\n client_token is None or client_token != self.authorization_token\n ):\n raise aiohttp.web.HTTPUnauthorized()\n return True\n\n async def listen(self):\n \"\"\"Listen for and parse new messages.\n\n Listening is handled by the aiohttp web server so\n we don't need to do anything here.\n\n \"\"\"\n\n @register_event(Message)\n async def send_message(self, message: Message):\n \"\"\"Respond with a message.\"\"\"\n try:\n if message.target is None:\n message.target = next(iter(self.active_connections))\n _LOGGER.debug(\n _(\"Responding with: '%s' in target %s\"), message.text, message.target\n )\n await self.active_connections[message.target].send_str(message.text)\n except KeyError:\n _LOGGER.error(_(\"No active socket for target %s\"), message.target)\n", "path": "opsdroid/connector/websocket/__init__.py"}]}
| 1,631 | 850 |
gh_patches_debug_30899 | rasdani/github-patches | git_diff | litestar-org__litestar-1797 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docs: DTO Factory exclude fields in nested complex types
### Summary
Need to document procedure for excluding fields from nested complex types, e.g.,
```py
class Model1:
a: int
b: str
class Model2:
c: int
d: str
class Model3:
e: Union[Model1, Model2]
config = DTOConfig(exclude={"e.0.a", "e.1.c"})
...
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/examples/data_transfer_objects/factory/excluding_fields.py`
Content:
```
1 from datetime import datetime
2 from uuid import UUID
3
4 from sqlalchemy import ForeignKey
5 from sqlalchemy.orm import Mapped, mapped_column, relationship
6 from typing_extensions import Annotated
7
8 from litestar import Litestar, post
9 from litestar.contrib.sqlalchemy.dto import SQLAlchemyDTO
10 from litestar.dto.factory import DTOConfig, dto_field
11
12 from .my_lib import Base
13
14
15 class Address(Base):
16 street: Mapped[str]
17 city: Mapped[str]
18 state: Mapped[str]
19 zip: Mapped[str]
20
21
22 class User(Base):
23 name: Mapped[str]
24 password: Mapped[str] = mapped_column(info=dto_field("private"))
25 created_at: Mapped[datetime] = mapped_column(info=dto_field("read-only"))
26 address_id: Mapped[UUID] = mapped_column(ForeignKey("address.id"), info=dto_field("private"))
27 address: Mapped[Address] = relationship(info=dto_field("read-only"))
28
29
30 UserDTO = SQLAlchemyDTO[User]
31 config = DTOConfig(exclude={"id", "address.id", "address.street"})
32 ReadUserDTO = SQLAlchemyDTO[Annotated[User, config]]
33
34
35 @post("/users", dto=UserDTO, return_dto=ReadUserDTO, sync_to_thread=False)
36 def create_user(data: User) -> User:
37 data.created_at = datetime.min
38 data.address = Address(street="123 Main St", city="Anytown", state="NY", zip="12345")
39 return data
40
41
42 app = Litestar(route_handlers=[create_user])
43
44 # run: /users -H "Content-Type: application/json" -d '{"name":"Litestar User","password":"xyz","created_at":"2023-04-24T00:00:00Z"}'
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/examples/data_transfer_objects/factory/excluding_fields.py b/docs/examples/data_transfer_objects/factory/excluding_fields.py
--- a/docs/examples/data_transfer_objects/factory/excluding_fields.py
+++ b/docs/examples/data_transfer_objects/factory/excluding_fields.py
@@ -1,4 +1,5 @@
from datetime import datetime
+from typing import List
from uuid import UUID
from sqlalchemy import ForeignKey
@@ -19,16 +20,30 @@
zip: Mapped[str]
+class Pets(Base):
+ name: Mapped[str]
+ user_id: Mapped[UUID] = mapped_column(ForeignKey("user.id"))
+
+
class User(Base):
name: Mapped[str]
password: Mapped[str] = mapped_column(info=dto_field("private"))
created_at: Mapped[datetime] = mapped_column(info=dto_field("read-only"))
address_id: Mapped[UUID] = mapped_column(ForeignKey("address.id"), info=dto_field("private"))
address: Mapped[Address] = relationship(info=dto_field("read-only"))
+ pets: Mapped[List[Pets]] = relationship(info=dto_field("read-only"))
UserDTO = SQLAlchemyDTO[User]
-config = DTOConfig(exclude={"id", "address.id", "address.street"})
+config = DTOConfig(
+ exclude={
+ "id",
+ "address.id",
+ "address.street",
+ "pets.0.id",
+ "pets.0.user_id",
+ }
+)
ReadUserDTO = SQLAlchemyDTO[Annotated[User, config]]
@@ -36,6 +51,7 @@
def create_user(data: User) -> User:
data.created_at = datetime.min
data.address = Address(street="123 Main St", city="Anytown", state="NY", zip="12345")
+ data.pets = [Pets(id=1, name="Fido"), Pets(id=2, name="Spot")]
return data
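
The `pets.0.id` and `pets.0.user_id` paths above are the convention the issue asks to document: when the excluded field lives inside a collection or a union, the numeric segment selects a type argument rather than an attribute. A minimal sketch restating the issue's own example (model names are illustrative, not from the library):

```python
from dataclasses import dataclass
from typing import Union

from litestar.dto.factory import DTOConfig


@dataclass
class Model1:
    a: int
    b: str


@dataclass
class Model2:
    c: int
    d: str


@dataclass
class Model3:
    e: Union[Model1, Model2]


# "e.0.a" excludes Model1.a and "e.1.c" excludes Model2.c: the numeric segment
# indexes into Union[Model1, Model2], mirroring "pets.0.id" for List[Pets] above.
config = DTOConfig(exclude={"e.0.a", "e.1.c"})
```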
|
{"golden_diff": "diff --git a/docs/examples/data_transfer_objects/factory/excluding_fields.py b/docs/examples/data_transfer_objects/factory/excluding_fields.py\n--- a/docs/examples/data_transfer_objects/factory/excluding_fields.py\n+++ b/docs/examples/data_transfer_objects/factory/excluding_fields.py\n@@ -1,4 +1,5 @@\n from datetime import datetime\n+from typing import List\n from uuid import UUID\n \n from sqlalchemy import ForeignKey\n@@ -19,16 +20,30 @@\n zip: Mapped[str]\n \n \n+class Pets(Base):\n+ name: Mapped[str]\n+ user_id: Mapped[UUID] = mapped_column(ForeignKey(\"user.id\"))\n+\n+\n class User(Base):\n name: Mapped[str]\n password: Mapped[str] = mapped_column(info=dto_field(\"private\"))\n created_at: Mapped[datetime] = mapped_column(info=dto_field(\"read-only\"))\n address_id: Mapped[UUID] = mapped_column(ForeignKey(\"address.id\"), info=dto_field(\"private\"))\n address: Mapped[Address] = relationship(info=dto_field(\"read-only\"))\n+ pets: Mapped[List[Pets]] = relationship(info=dto_field(\"read-only\"))\n \n \n UserDTO = SQLAlchemyDTO[User]\n-config = DTOConfig(exclude={\"id\", \"address.id\", \"address.street\"})\n+config = DTOConfig(\n+ exclude={\n+ \"id\",\n+ \"address.id\",\n+ \"address.street\",\n+ \"pets.0.id\",\n+ \"pets.0.user_id\",\n+ }\n+)\n ReadUserDTO = SQLAlchemyDTO[Annotated[User, config]]\n \n \n@@ -36,6 +51,7 @@\n def create_user(data: User) -> User:\n data.created_at = datetime.min\n data.address = Address(street=\"123 Main St\", city=\"Anytown\", state=\"NY\", zip=\"12345\")\n+ data.pets = [Pets(id=1, name=\"Fido\"), Pets(id=2, name=\"Spot\")]\n return data\n", "issue": "Docs: DTO Factory exclude fields in nested complex types\n### Summary\n\nNeed to document procedure for excluding fields from nested complex types, e.g.,\r\n\r\n```py\r\nclass Model1:\r\n a: int\r\n b: str\r\n\r\nclass Model2:\r\n c: int\r\n d: str\r\n\r\nclass Model3:\r\n e: Union[Model1, Model2]\r\n\r\nconfig = DTOConfig(exclude={\"e.0.a\", \"e.1.c\"})\r\n...\r\n```\n", "before_files": [{"content": "from datetime import datetime\nfrom uuid import UUID\n\nfrom sqlalchemy import ForeignKey\nfrom sqlalchemy.orm import Mapped, mapped_column, relationship\nfrom typing_extensions import Annotated\n\nfrom litestar import Litestar, post\nfrom litestar.contrib.sqlalchemy.dto import SQLAlchemyDTO\nfrom litestar.dto.factory import DTOConfig, dto_field\n\nfrom .my_lib import Base\n\n\nclass Address(Base):\n street: Mapped[str]\n city: Mapped[str]\n state: Mapped[str]\n zip: Mapped[str]\n\n\nclass User(Base):\n name: Mapped[str]\n password: Mapped[str] = mapped_column(info=dto_field(\"private\"))\n created_at: Mapped[datetime] = mapped_column(info=dto_field(\"read-only\"))\n address_id: Mapped[UUID] = mapped_column(ForeignKey(\"address.id\"), info=dto_field(\"private\"))\n address: Mapped[Address] = relationship(info=dto_field(\"read-only\"))\n\n\nUserDTO = SQLAlchemyDTO[User]\nconfig = DTOConfig(exclude={\"id\", \"address.id\", \"address.street\"})\nReadUserDTO = SQLAlchemyDTO[Annotated[User, config]]\n\n\n@post(\"/users\", dto=UserDTO, return_dto=ReadUserDTO, sync_to_thread=False)\ndef create_user(data: User) -> User:\n data.created_at = datetime.min\n data.address = Address(street=\"123 Main St\", city=\"Anytown\", state=\"NY\", zip=\"12345\")\n return data\n\n\napp = Litestar(route_handlers=[create_user])\n\n# run: /users -H \"Content-Type: application/json\" -d '{\"name\":\"Litestar User\",\"password\":\"xyz\",\"created_at\":\"2023-04-24T00:00:00Z\"}'\n", "path": 
"docs/examples/data_transfer_objects/factory/excluding_fields.py"}], "after_files": [{"content": "from datetime import datetime\nfrom typing import List\nfrom uuid import UUID\n\nfrom sqlalchemy import ForeignKey\nfrom sqlalchemy.orm import Mapped, mapped_column, relationship\nfrom typing_extensions import Annotated\n\nfrom litestar import Litestar, post\nfrom litestar.contrib.sqlalchemy.dto import SQLAlchemyDTO\nfrom litestar.dto.factory import DTOConfig, dto_field\n\nfrom .my_lib import Base\n\n\nclass Address(Base):\n street: Mapped[str]\n city: Mapped[str]\n state: Mapped[str]\n zip: Mapped[str]\n\n\nclass Pets(Base):\n name: Mapped[str]\n user_id: Mapped[UUID] = mapped_column(ForeignKey(\"user.id\"))\n\n\nclass User(Base):\n name: Mapped[str]\n password: Mapped[str] = mapped_column(info=dto_field(\"private\"))\n created_at: Mapped[datetime] = mapped_column(info=dto_field(\"read-only\"))\n address_id: Mapped[UUID] = mapped_column(ForeignKey(\"address.id\"), info=dto_field(\"private\"))\n address: Mapped[Address] = relationship(info=dto_field(\"read-only\"))\n pets: Mapped[List[Pets]] = relationship(info=dto_field(\"read-only\"))\n\n\nUserDTO = SQLAlchemyDTO[User]\nconfig = DTOConfig(\n exclude={\n \"id\",\n \"address.id\",\n \"address.street\",\n \"pets.0.id\",\n \"pets.0.user_id\",\n }\n)\nReadUserDTO = SQLAlchemyDTO[Annotated[User, config]]\n\n\n@post(\"/users\", dto=UserDTO, return_dto=ReadUserDTO, sync_to_thread=False)\ndef create_user(data: User) -> User:\n data.created_at = datetime.min\n data.address = Address(street=\"123 Main St\", city=\"Anytown\", state=\"NY\", zip=\"12345\")\n data.pets = [Pets(id=1, name=\"Fido\"), Pets(id=2, name=\"Spot\")]\n return data\n\n\napp = Litestar(route_handlers=[create_user])\n\n# run: /users -H \"Content-Type: application/json\" -d '{\"name\":\"Litestar User\",\"password\":\"xyz\",\"created_at\":\"2023-04-24T00:00:00Z\"}'\n", "path": "docs/examples/data_transfer_objects/factory/excluding_fields.py"}]}
| 831 | 437 |
gh_patches_debug_18488
|
rasdani/github-patches
|
git_diff
|
facebookresearch__hydra-1054
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Feature Request] [Hydra-Submitit-Launcher] Add support for passing `additional_parameters` as supported by Submitit
# 🚀 Feature Request
When calling [Submitit](https://github.com/facebookincubator/submitit) directly (not the plugin), one can pass an optional dictionary argument called [`additional_parameters`](https://github.com/facebookincubator/submitit/blob/e6fc6b2532a2ea480f08747816f14c53f526ef04/submitit/slurm/slurm.py#L432) to add parameters which are not currently available in submitit.
This argument cannot be passed when using the submitit plugin. Example error:
```
Key 'additional_parameters' not in 'SlurmQueueConf'
full_key: hydra.launcher.additional_parameters
reference_type=Optional[SlurmQueueConf]
object_type=SlurmQueueConf
```
We should add support for passing this `additional_parameters` dict via the plugin as well.
## Motivation
Same as above.
## Pitch
Add support for the dict in [SlurmQueueConf](https://github.com/facebookresearch/hydra/blob/master/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py#L31) and pass to submitit.AutoExecutor [here](https://github.com/facebookresearch/hydra/blob/dfbebc34020ad3d4bf1ab061565408740e62eb22/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/submitit_launcher.py#L105)
**Are you willing to open a pull request?** (See [CONTRIBUTING](../../CONTRIBUTING.md)): Yes
## Additional context
Since this change is quite small, is there any reason it wasn't done in the first iteration of the plugin?
--- END ISSUE ---
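For context, submitit already accepts these extra sbatch options when it is driven directly, which is what the plugin would need to forward; a minimal sketch (assumes submitit is installed and a Slurm cluster is reachable — the `slurm_` keyword prefix follows submitit's auto-executor convention):
```python
# Sketch: passing extra sbatch parameters straight through submitit's AutoExecutor.
# The hydra-submitit-launcher plugin needs an equivalent field in SlurmQueueConf.
import submitit

executor = submitit.AutoExecutor(folder="submitit_logs")
executor.update_parameters(
    timeout_min=60,
    slurm_additional_parameters={"mail-user": "user@example.com", "mail-type": "BEGIN"},
)
job = executor.submit(print, "hello from the cluster")
print(job.job_id)
```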
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from dataclasses import dataclass
3 from typing import Optional
4
5 from hydra.core.config_store import ConfigStore
6
7
8 @dataclass
9 class BaseQueueConf:
10 """Configuration shared by all executors"""
11
12 submitit_folder: str = "${hydra.sweep.dir}/.submitit/%j"
13
14 # maximum time for the job in minutes
15 timeout_min: int = 60
16 # number of cpus to use for each task
17 cpus_per_task: int = 1
18 # number of gpus to use on each node
19 gpus_per_node: int = 0
20 # number of tasks to spawn on each node
21 tasks_per_node: int = 1
22 # memory to reserve for the job on each node (in GB)
23 mem_gb: int = 4
24 # number of nodes to use for the job
25 nodes: int = 1
26 # name of the job
27 name: str = "${hydra.job.name}"
28
29
30 @dataclass
31 class SlurmQueueConf(BaseQueueConf):
32 """Slurm configuration overrides and specific parameters"""
33
34 _target_: str = (
35 "hydra_plugins.hydra_submitit_launcher.submitit_launcher.SlurmLauncher"
36 )
37
38 # Params are used to configure sbatch, for more info check:
39 # https://github.com/facebookincubator/submitit/blob/master/submitit/slurm/slurm.py
40
41 # Following parameters are slurm specific
42 # More information: https://slurm.schedmd.com/sbatch.html
43 #
44 # slurm partition to use on the cluster
45 partition: Optional[str] = None
46 comment: Optional[str] = None
47 constraint: Optional[str] = None
48 exclude: Optional[str] = None
49
50 # Following parameters are submitit specifics
51 #
52 # USR1 signal delay before timeout
53 signal_delay_s: int = 120
54 # Maximum number of retries on job timeout.
55 # Change this only after you confirmed your code can handle re-submission
56 # by properly resuming from the latest stored checkpoint.
57 # check the following for more info on slurm_max_num_timeout
58 # https://github.com/facebookincubator/submitit/blob/master/docs/checkpointing.md
59 max_num_timeout: int = 0
60
61
62 @dataclass
63 class LocalQueueConf(BaseQueueConf):
64 _target_: str = (
65 "hydra_plugins.hydra_submitit_launcher.submitit_launcher.LocalLauncher"
66 )
67
68
69 # finally, register two different choices:
70 ConfigStore.instance().store(
71 group="hydra/launcher",
72 name="submitit_local",
73 node=LocalQueueConf(),
74 provider="submitit_launcher",
75 )
76
77
78 ConfigStore.instance().store(
79 group="hydra/launcher",
80 name="submitit_slurm",
81 node=SlurmQueueConf(),
82 provider="submitit_launcher",
83 )
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py b/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py
--- a/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py
+++ b/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py
@@ -1,6 +1,6 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from dataclasses import dataclass
-from typing import Optional
+from dataclasses import dataclass, field
+from typing import Any, Dict, Optional
from hydra.core.config_store import ConfigStore
@@ -57,6 +57,9 @@
# check the following for more info on slurm_max_num_timeout
# https://github.com/facebookincubator/submitit/blob/master/docs/checkpointing.md
max_num_timeout: int = 0
+ # Useful to add parameters which are not currently available in the plugin.
+ # Eg: {"mail-user": "[email protected]", "mail-type": "BEGIN"}
+ additional_parameters: Dict[str, Any] = field(default_factory=dict)
@dataclass
|
{"golden_diff": "diff --git a/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py b/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py\n--- a/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py\n+++ b/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py\n@@ -1,6 +1,6 @@\n # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n-from dataclasses import dataclass\n-from typing import Optional\n+from dataclasses import dataclass, field\n+from typing import Any, Dict, Optional\n \n from hydra.core.config_store import ConfigStore\n \n@@ -57,6 +57,9 @@\n # check the following for more info on slurm_max_num_timeout\n # https://github.com/facebookincubator/submitit/blob/master/docs/checkpointing.md\n max_num_timeout: int = 0\n+ # Useful to add parameters which are not currently available in the plugin.\n+ # Eg: {\"mail-user\": \"[email protected]\", \"mail-type\": \"BEGIN\"}\n+ additional_parameters: Dict[str, Any] = field(default_factory=dict)\n \n \n @dataclass\n", "issue": "[Feature Request] [Hydra-Submitit-Launcher] Add support for passing `additional_parameters` as supported by Submititi\n# \ud83d\ude80 Feature Request\r\n\r\nWhen calling [Submitit](https://github.com/facebookincubator/submitit) directly (not the plugin), one can pass an optional dictionary argument called [`additional_parameters`](https://github.com/facebookincubator/submitit/blob/e6fc6b2532a2ea480f08747816f14c53f526ef04/submitit/slurm/slurm.py#L432) to add parameters which are not currently available in submitit.\r\n\r\nThis argument can not be passed when using the submitit-plugin. Example error:\r\n\r\n```\r\nKey 'additional_parameters' not in 'SlurmQueueConf'\r\n full_key: hydra.launcher.additional_parameters\r\n reference_type=Optional[SlurmQueueConf]\r\n object_type=SlurmQueueConf\r\n```\r\n\r\nWe should add support for passing this `additional_parameters` dict via the plugin aswell.\r\n\r\n## Motivation\r\n\r\nSame as above.\r\n\r\n## Pitch\r\n\r\nAdd support for the dict in [SlurmQueueConf](https://github.com/facebookresearch/hydra/blob/master/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py#L31) and pass to submitit.AutoExecutor [here](https://github.com/facebookresearch/hydra/blob/dfbebc34020ad3d4bf1ab061565408740e62eb22/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/submitit_launcher.py#L105)\r\n\r\n**Are you willing to open a pull request?** (See [CONTRIBUTING](../../CONTRIBUTING.md)): Yes\r\n\r\n## Additional context\r\n\r\nSince this change is quite small, is there any reason it wasnt done in the first iteration of the plugin?\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nfrom dataclasses import dataclass\nfrom typing import Optional\n\nfrom hydra.core.config_store import ConfigStore\n\n\n@dataclass\nclass BaseQueueConf:\n \"\"\"Configuration shared by all executors\"\"\"\n\n submitit_folder: str = \"${hydra.sweep.dir}/.submitit/%j\"\n\n # maximum time for the job in minutes\n timeout_min: int = 60\n # number of cpus to use for each task\n cpus_per_task: int = 1\n # number of gpus to use on each node\n gpus_per_node: int = 0\n # number of tasks to spawn on each node\n tasks_per_node: int = 1\n # memory to reserve for the job on each node (in GB)\n mem_gb: int = 4\n # number of nodes to use for the job\n nodes: int = 1\n # name of the job\n name: str = \"${hydra.job.name}\"\n\n\n@dataclass\nclass SlurmQueueConf(BaseQueueConf):\n \"\"\"Slurm configuration overrides and specific parameters\"\"\"\n\n _target_: str = (\n \"hydra_plugins.hydra_submitit_launcher.submitit_launcher.SlurmLauncher\"\n )\n\n # Params are used to configure sbatch, for more info check:\n # https://github.com/facebookincubator/submitit/blob/master/submitit/slurm/slurm.py\n\n # Following parameters are slurm specific\n # More information: https://slurm.schedmd.com/sbatch.html\n #\n # slurm partition to use on the cluster\n partition: Optional[str] = None\n comment: Optional[str] = None\n constraint: Optional[str] = None\n exclude: Optional[str] = None\n\n # Following parameters are submitit specifics\n #\n # USR1 signal delay before timeout\n signal_delay_s: int = 120\n # Maximum number of retries on job timeout.\n # Change this only after you confirmed your code can handle re-submission\n # by properly resuming from the latest stored checkpoint.\n # check the following for more info on slurm_max_num_timeout\n # https://github.com/facebookincubator/submitit/blob/master/docs/checkpointing.md\n max_num_timeout: int = 0\n\n\n@dataclass\nclass LocalQueueConf(BaseQueueConf):\n _target_: str = (\n \"hydra_plugins.hydra_submitit_launcher.submitit_launcher.LocalLauncher\"\n )\n\n\n# finally, register two different choices:\nConfigStore.instance().store(\n group=\"hydra/launcher\",\n name=\"submitit_local\",\n node=LocalQueueConf(),\n provider=\"submitit_launcher\",\n)\n\n\nConfigStore.instance().store(\n group=\"hydra/launcher\",\n name=\"submitit_slurm\",\n node=SlurmQueueConf(),\n provider=\"submitit_launcher\",\n)\n", "path": "plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, Optional\n\nfrom hydra.core.config_store import ConfigStore\n\n\n@dataclass\nclass BaseQueueConf:\n \"\"\"Configuration shared by all executors\"\"\"\n\n submitit_folder: str = \"${hydra.sweep.dir}/.submitit/%j\"\n\n # maximum time for the job in minutes\n timeout_min: int = 60\n # number of cpus to use for each task\n cpus_per_task: int = 1\n # number of gpus to use on each node\n gpus_per_node: int = 0\n # number of tasks to spawn on each node\n tasks_per_node: int = 1\n # memory to reserve for the job on each node (in GB)\n mem_gb: int = 4\n # number of nodes to use for the job\n nodes: int = 1\n # name of the job\n name: str = \"${hydra.job.name}\"\n\n\n@dataclass\nclass SlurmQueueConf(BaseQueueConf):\n \"\"\"Slurm configuration overrides and specific parameters\"\"\"\n\n _target_: str = (\n \"hydra_plugins.hydra_submitit_launcher.submitit_launcher.SlurmLauncher\"\n )\n\n # Params are used to configure sbatch, for more info check:\n # https://github.com/facebookincubator/submitit/blob/master/submitit/slurm/slurm.py\n\n # Following parameters are slurm specific\n # More information: https://slurm.schedmd.com/sbatch.html\n #\n # slurm partition to use on the cluster\n partition: Optional[str] = None\n comment: Optional[str] = None\n constraint: Optional[str] = None\n exclude: Optional[str] = None\n\n # Following parameters are submitit specifics\n #\n # USR1 signal delay before timeout\n signal_delay_s: int = 120\n # Maximum number of retries on job timeout.\n # Change this only after you confirmed your code can handle re-submission\n # by properly resuming from the latest stored checkpoint.\n # check the following for more info on slurm_max_num_timeout\n # https://github.com/facebookincubator/submitit/blob/master/docs/checkpointing.md\n max_num_timeout: int = 0\n # Useful to add parameters which are not currently available in the plugin.\n # Eg: {\"mail-user\": \"[email protected]\", \"mail-type\": \"BEGIN\"}\n additional_parameters: Dict[str, Any] = field(default_factory=dict)\n\n\n@dataclass\nclass LocalQueueConf(BaseQueueConf):\n _target_: str = (\n \"hydra_plugins.hydra_submitit_launcher.submitit_launcher.LocalLauncher\"\n )\n\n\n# finally, register two different choices:\nConfigStore.instance().store(\n group=\"hydra/launcher\",\n name=\"submitit_local\",\n node=LocalQueueConf(),\n provider=\"submitit_launcher\",\n)\n\n\nConfigStore.instance().store(\n group=\"hydra/launcher\",\n name=\"submitit_slurm\",\n node=SlurmQueueConf(),\n provider=\"submitit_launcher\",\n)\n", "path": "plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py"}]}
| 1,497 | 276 |
gh_patches_debug_3636
|
rasdani/github-patches
|
git_diff
|
safe-global__safe-config-service-68
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add /check endpoint
This endpoint should be used to check if the server is alive and ready to receive requests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/config/urls.py`
Content:
```
1 from django.contrib import admin
2 from django.urls import include, path
3
4 urlpatterns = [
5 path("api/v1/", include("safe_apps.urls", namespace="v1")),
6 path("admin/", admin.site.urls),
7 ]
8
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/config/urls.py b/src/config/urls.py
--- a/src/config/urls.py
+++ b/src/config/urls.py
@@ -1,7 +1,9 @@
from django.contrib import admin
+from django.http import HttpResponse
from django.urls import include, path
urlpatterns = [
path("api/v1/", include("safe_apps.urls", namespace="v1")),
path("admin/", admin.site.urls),
+ path("check/", lambda request: HttpResponse("Ok"), name="check"),
]
|
{"golden_diff": "diff --git a/src/config/urls.py b/src/config/urls.py\n--- a/src/config/urls.py\n+++ b/src/config/urls.py\n@@ -1,7 +1,9 @@\n from django.contrib import admin\n+from django.http import HttpResponse\n from django.urls import include, path\n \n urlpatterns = [\n path(\"api/v1/\", include(\"safe_apps.urls\", namespace=\"v1\")),\n path(\"admin/\", admin.site.urls),\n+ path(\"check/\", lambda request: HttpResponse(\"Ok\"), name=\"check\"),\n ]\n", "issue": "Add /check endpoint\nThis endpoint should be used to check if the server is alive and ready to receive requests\n", "before_files": [{"content": "from django.contrib import admin\nfrom django.urls import include, path\n\nurlpatterns = [\n path(\"api/v1/\", include(\"safe_apps.urls\", namespace=\"v1\")),\n path(\"admin/\", admin.site.urls),\n]\n", "path": "src/config/urls.py"}], "after_files": [{"content": "from django.contrib import admin\nfrom django.http import HttpResponse\nfrom django.urls import include, path\n\nurlpatterns = [\n path(\"api/v1/\", include(\"safe_apps.urls\", namespace=\"v1\")),\n path(\"admin/\", admin.site.urls),\n path(\"check/\", lambda request: HttpResponse(\"Ok\"), name=\"check\"),\n]\n", "path": "src/config/urls.py"}]}
| 336 | 111 |
gh_patches_debug_34281
|
rasdani/github-patches
|
git_diff
|
joke2k__faker-830
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
zh_CN (and possibly other providers) return invalid results for domain_name
### Steps to reproduce
f = Faker('zh_CN')
f.domain_name()
>he.cn
### Expected behavior
It should return valid domain names.
### Actual behavior
he.cn is not a valid domain. `he` is in the list of second-level domains (https://en.wikipedia.org/wiki/.cn#Second-level_domains_of_provinces) and will cause failures in places where domains are being validated.
After briefly reviewing the source code, it's unclear where this change should or could be made.
--- END ISSUE ---
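To make the failure concrete before diving into the provider code, a short reproduction sketch (the second-level-domain set below is deliberately abbreviated and only illustrative):
```python
# Reproduce the problem: zh_CN domain words can collide with reserved
# second-level domains such as "he", yielding invalid names like "he.cn".
from faker import Faker

SECOND_LEVEL = {"ac", "com", "edu", "gov", "net", "org", "he", "bj", "sh", "gd"}  # abbreviated

fake = Faker("zh_CN")
invalid = [
    name
    for name in (fake.domain_name() for _ in range(200))
    if name.endswith(".cn") and name.split(".")[0] in SECOND_LEVEL
]
print(invalid)  # non-empty before the fix; should be empty once the collision is avoided
```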
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/internet/zh_CN/__init__.py`
Content:
```
1 # coding=utf-8
2 from __future__ import unicode_literals
3 from .. import Provider as InternetProvider
4 from faker.utils.decorators import slugify
5
6
7 class Provider(InternetProvider):
8 user_name_formats = (
9 '{{last_romanized_name}}.{{first_romanized_name}}',
10 '{{first_romanized_name}}.{{last_romanized_name}}',
11 '{{first_romanized_name}}##',
12 '?{{last_romanized_name}}',
13 )
14 tlds = ('com', 'com', 'com', 'net', 'org', 'cn', 'cn', 'cn')
15
16 @slugify
17 def domain_word(self):
18 return self.generator.format('last_romanized_name')
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/faker/providers/internet/zh_CN/__init__.py b/faker/providers/internet/zh_CN/__init__.py
--- a/faker/providers/internet/zh_CN/__init__.py
+++ b/faker/providers/internet/zh_CN/__init__.py
@@ -1,5 +1,6 @@
# coding=utf-8
from __future__ import unicode_literals
+from collections import OrderedDict
from .. import Provider as InternetProvider
from faker.utils.decorators import slugify
@@ -11,8 +12,54 @@
'{{first_romanized_name}}##',
'?{{last_romanized_name}}',
)
- tlds = ('com', 'com', 'com', 'net', 'org', 'cn', 'cn', 'cn')
+
+ tlds = OrderedDict((
+ ('cn', 0.8),
+ ('net', 0.1),
+ ('com', 0.05),
+ ('org', 0.05),
+ ))
+
+ second_level_domains = ('ac', 'com', 'edu', 'gov', 'mil', 'net', 'org',
+ 'ah', 'bj', 'cq', 'fj', 'gd', 'gs', 'gz', 'gx',
+ 'ha', 'hb', 'he', 'hi', 'hk', 'hl', 'hn', 'jl',
+ 'js', 'jx', 'ln', 'mo', 'nm', 'nx', 'qh', 'sc',
+ 'sd', 'sh', 'sn', 'sx', 'tj', 'xj', 'xz', 'yn', 'zj')
+
+ domain_formats = (
+ '##', '??',
+ '{{first_romanized_name}}',
+ '{{last_romanized_name}}',
+ '{{first_romanized_name}}{{last_romanized_name}}',
+ '{{last_romanized_name}}{{last_romanized_name}}',
+ '{{first_romanized_name}}{{first_romanized_name}}',
+ )
@slugify
def domain_word(self):
- return self.generator.format('last_romanized_name')
+ pattern = self.random_element(self.domain_formats)
+ if '#' in pattern or '?' in pattern:
+ return self.bothify(pattern)
+ else:
+ return self.generator.parse(pattern)
+
+ def domain_name(self, levels=1):
+ if levels < 1:
+ raise ValueError("levels must be greater than or equal to 1")
+ if levels == 1:
+ domain_word = self.domain_word()
+ # Avoids he.cn as seen in issue #687
+ while domain_word in self.second_level_domains:
+ domain_word = self.domain_word()
+ return domain_word + '.' + self.tld()
+ elif levels == 2:
+ my_tld = self.tld()
+ my_second_level = ''
+ if my_tld == 'cn':
+ my_second_level = self.random_element(self.second_level_domains)
+ else:
+ my_second_level = self.domain_word()
+ return self.domain_word() + '.' + my_second_level + '.' + my_tld
+ else:
+ return self.domain_word() + '.' + self.domain_name(levels - 1)
+
|
{"golden_diff": "diff --git a/faker/providers/internet/zh_CN/__init__.py b/faker/providers/internet/zh_CN/__init__.py\n--- a/faker/providers/internet/zh_CN/__init__.py\n+++ b/faker/providers/internet/zh_CN/__init__.py\n@@ -1,5 +1,6 @@\n # coding=utf-8\n from __future__ import unicode_literals\n+from collections import OrderedDict\n from .. import Provider as InternetProvider\n from faker.utils.decorators import slugify\n \n@@ -11,8 +12,54 @@\n '{{first_romanized_name}}##',\n '?{{last_romanized_name}}',\n )\n- tlds = ('com', 'com', 'com', 'net', 'org', 'cn', 'cn', 'cn')\n+\n+ tlds = OrderedDict((\n+ ('cn', 0.8),\n+ ('net', 0.1),\n+ ('com', 0.05),\n+ ('org', 0.05),\n+ ))\n+\n+ second_level_domains = ('ac', 'com', 'edu', 'gov', 'mil', 'net', 'org',\n+ 'ah', 'bj', 'cq', 'fj', 'gd', 'gs', 'gz', 'gx',\n+ 'ha', 'hb', 'he', 'hi', 'hk', 'hl', 'hn', 'jl',\n+ 'js', 'jx', 'ln', 'mo', 'nm', 'nx', 'qh', 'sc',\n+ 'sd', 'sh', 'sn', 'sx', 'tj', 'xj', 'xz', 'yn', 'zj')\n+\n+ domain_formats = (\n+ '##', '??',\n+ '{{first_romanized_name}}',\n+ '{{last_romanized_name}}',\n+ '{{first_romanized_name}}{{last_romanized_name}}',\n+ '{{last_romanized_name}}{{last_romanized_name}}',\n+ '{{first_romanized_name}}{{first_romanized_name}}',\n+ )\n \n @slugify\n def domain_word(self):\n- return self.generator.format('last_romanized_name')\n+ pattern = self.random_element(self.domain_formats)\n+ if '#' in pattern or '?' in pattern:\n+ return self.bothify(pattern)\n+ else:\n+ return self.generator.parse(pattern)\n+\n+ def domain_name(self, levels=1):\n+ if levels < 1:\n+ raise ValueError(\"levels must be greater than or equal to 1\")\n+ if levels == 1:\n+ domain_word = self.domain_word()\n+ # Avoids he.cn as seen in issue #687\n+ while domain_word in self.second_level_domains:\n+ domain_word = self.domain_word()\n+ return domain_word + '.' + self.tld()\n+ elif levels == 2:\n+ my_tld = self.tld()\n+ my_second_level = ''\n+ if my_tld == 'cn':\n+ my_second_level = self.random_element(self.second_level_domains)\n+ else:\n+ my_second_level = self.domain_word()\n+ return self.domain_word() + '.' + my_second_level + '.' + my_tld\n+ else:\n+ return self.domain_word() + '.' + self.domain_name(levels - 1)\n+\n", "issue": "zh_CN (and possible other providers) return invalid results for domain_name\nBrief summary of the issue goes here.\r\n\r\n### Steps to reproduce\r\n\r\nf = Faker('zh_CN')\r\nf.domain_name()\r\n>he.cn\r\n\r\n### Expected behavior\r\n\r\nit should return valid domain names\r\n\r\n### Actual behavior\r\n\r\nhe.cn is not a valid domain. he is in the list of second-level domains (https://en.wikipedia.org/wiki/.cn#Second-level_domains_of_provinces) and will cause failures in places where domains are being validated.\r\n\r\nit's unclear where this change should/could be made after briefly reviewing the source code.\r\n\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import unicode_literals\nfrom .. 
import Provider as InternetProvider\nfrom faker.utils.decorators import slugify\n\n\nclass Provider(InternetProvider):\n user_name_formats = (\n '{{last_romanized_name}}.{{first_romanized_name}}',\n '{{first_romanized_name}}.{{last_romanized_name}}',\n '{{first_romanized_name}}##',\n '?{{last_romanized_name}}',\n )\n tlds = ('com', 'com', 'com', 'net', 'org', 'cn', 'cn', 'cn')\n\n @slugify\n def domain_word(self):\n return self.generator.format('last_romanized_name')\n", "path": "faker/providers/internet/zh_CN/__init__.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import unicode_literals\nfrom collections import OrderedDict\nfrom .. import Provider as InternetProvider\nfrom faker.utils.decorators import slugify\n\n\nclass Provider(InternetProvider):\n user_name_formats = (\n '{{last_romanized_name}}.{{first_romanized_name}}',\n '{{first_romanized_name}}.{{last_romanized_name}}',\n '{{first_romanized_name}}##',\n '?{{last_romanized_name}}',\n )\n\n tlds = OrderedDict((\n ('cn', 0.8),\n ('net', 0.1),\n ('com', 0.05),\n ('org', 0.05),\n ))\n\n second_level_domains = ('ac', 'com', 'edu', 'gov', 'mil', 'net', 'org',\n 'ah', 'bj', 'cq', 'fj', 'gd', 'gs', 'gz', 'gx',\n 'ha', 'hb', 'he', 'hi', 'hk', 'hl', 'hn', 'jl',\n 'js', 'jx', 'ln', 'mo', 'nm', 'nx', 'qh', 'sc',\n 'sd', 'sh', 'sn', 'sx', 'tj', 'xj', 'xz', 'yn', 'zj')\n\n domain_formats = (\n '##', '??',\n '{{first_romanized_name}}',\n '{{last_romanized_name}}',\n '{{first_romanized_name}}{{last_romanized_name}}',\n '{{last_romanized_name}}{{last_romanized_name}}',\n '{{first_romanized_name}}{{first_romanized_name}}',\n )\n\n @slugify\n def domain_word(self):\n pattern = self.random_element(self.domain_formats)\n if '#' in pattern or '?' in pattern:\n return self.bothify(pattern)\n else:\n return self.generator.parse(pattern)\n\n def domain_name(self, levels=1):\n if levels < 1:\n raise ValueError(\"levels must be greater than or equal to 1\")\n if levels == 1:\n domain_word = self.domain_word()\n # Avoids he.cn as seen in issue #687\n while domain_word in self.second_level_domains:\n domain_word = self.domain_word()\n return domain_word + '.' + self.tld()\n elif levels == 2:\n my_tld = self.tld()\n my_second_level = ''\n if my_tld == 'cn':\n my_second_level = self.random_element(self.second_level_domains)\n else:\n my_second_level = self.domain_word()\n return self.domain_word() + '.' + my_second_level + '.' + my_tld\n else:\n return self.domain_word() + '.' + self.domain_name(levels - 1)\n\n", "path": "faker/providers/internet/zh_CN/__init__.py"}]}
| 570 | 724 |
gh_patches_debug_20196
|
rasdani/github-patches
|
git_diff
|
pyinstaller__pyinstaller-8465
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Failing to load sqlalchemy-plugin in pyinstaller'ed exe
This line of code works perfectly fine when run as a script from Spyder.
engine = sqlalchemy.create_engine('teradata://uid:pwd@DBinstance?driver=Teradata')
However, after building an exe file with PyInstaller, I get the following error when running the exe:
sqlalchemy.exc.NoSuchModuleError: Can't load plugin:sqlalchemy.dialects:teradata
This looks to be an exe-creation issue in loading libraries. Has anyone encountered a similar issue and been able to resolve it? Any pointers would be appreciated.
--- END ISSUE ---
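The usual cause is that SQLAlchemy dialects and plugins registered only through entry points are invisible to PyInstaller's static import analysis; a sketch of the entry-point collection approach in a hook (assumes a PyInstaller version that ships `collect_entry_point`):
```python
# Sketch of a hook that pulls in entry-point-registered SQLAlchemy dialects/plugins
# so that create_engine('teradata://...') can resolve them inside the frozen exe.
from PyInstaller.utils.hooks import collect_entry_point

datas, hiddenimports = collect_entry_point("sqlalchemy.dialects")

ep_datas, ep_hiddenimports = collect_entry_point("sqlalchemy.plugins")
datas += ep_datas
hiddenimports += ep_hiddenimports
```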
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/hooks/hook-sqlalchemy.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2005-2023, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License (version 2
5 # or later) with exception for distributing the bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
10 #-----------------------------------------------------------------------------
11
12 import re
13 import importlib.util
14
15 from PyInstaller import isolated
16 from PyInstaller.lib.modulegraph.modulegraph import SourceModule
17 from PyInstaller.utils.hooks import check_requirement, logger
18
19 # 'sqlalchemy.testing' causes bundling a lot of unnecessary modules.
20 excludedimports = ['sqlalchemy.testing']
21
22 # Include most common database bindings some database bindings are detected and include some are not. We should
23 # explicitly include database backends.
24 hiddenimports = ['pysqlite2', 'MySQLdb', 'psycopg2', 'sqlalchemy.ext.baked']
25
26 if check_requirement('sqlalchemy >= 1.4'):
27 hiddenimports.append("sqlalchemy.sql.default_comparator")
28
29
30 @isolated.decorate
31 def _get_dialect_modules(module_name):
32 import importlib
33 module = importlib.import_module(module_name)
34 return [f"{module_name}.{submodule_name}" for submodule_name in module.__all__]
35
36
37 # In SQLAlchemy >= 0.6, the "sqlalchemy.dialects" package provides dialects.
38 # In SQLAlchemy <= 0.5, the "sqlalchemy.databases" package provides dialects.
39 if check_requirement('sqlalchemy >= 0.6'):
40 hiddenimports += _get_dialect_modules("sqlalchemy.dialects")
41 else:
42 hiddenimports += _get_dialect_modules("sqlalchemy.databases")
43
44
45 def hook(hook_api):
46 """
47 SQLAlchemy 0.9 introduced the decorator 'util.dependencies'. This decorator does imports. E.g.:
48
49 @util.dependencies("sqlalchemy.sql.schema")
50
51 This hook scans for included SQLAlchemy modules and then scans those modules for any util.dependencies and marks
52 those modules as hidden imports.
53 """
54
55 if not check_requirement('sqlalchemy >= 0.9'):
56 return
57
58 # this parser is very simplistic but seems to catch all cases as of V1.1
59 depend_regex = re.compile(r'@util.dependencies\([\'"](.*?)[\'"]\)')
60
61 hidden_imports_set = set()
62 known_imports = set()
63 for node in hook_api.module_graph.iter_graph(start=hook_api.module):
64 if isinstance(node, SourceModule) and node.identifier.startswith('sqlalchemy.'):
65 known_imports.add(node.identifier)
66
67 # Read the source...
68 with open(node.filename, 'rb') as f:
69 source_code = f.read()
70 source_code = importlib.util.decode_source(source_code)
71
72 # ... and scan it
73 for match in depend_regex.findall(source_code):
74 hidden_imports_set.add(match)
75
76 hidden_imports_set -= known_imports
77 if len(hidden_imports_set):
78 logger.info(" Found %d sqlalchemy hidden imports", len(hidden_imports_set))
79 hook_api.add_imports(*list(hidden_imports_set))
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/PyInstaller/hooks/hook-sqlalchemy.py b/PyInstaller/hooks/hook-sqlalchemy.py
--- a/PyInstaller/hooks/hook-sqlalchemy.py
+++ b/PyInstaller/hooks/hook-sqlalchemy.py
@@ -14,7 +14,9 @@
from PyInstaller import isolated
from PyInstaller.lib.modulegraph.modulegraph import SourceModule
-from PyInstaller.utils.hooks import check_requirement, logger
+from PyInstaller.utils.hooks import check_requirement, collect_entry_point, logger
+
+datas = []
# 'sqlalchemy.testing' causes bundling a lot of unnecessary modules.
excludedimports = ['sqlalchemy.testing']
@@ -41,6 +43,13 @@
else:
hiddenimports += _get_dialect_modules("sqlalchemy.databases")
+# Collect additional dialects and plugins that are registered via entry-points, under assumption that they are available
+# in the build environment for a reason (i.e., they are used).
+for entry_point_name in ('sqlalchemy.dialects', 'sqlalchemy.plugins'):
+ ep_datas, ep_hiddenimports = collect_entry_point(entry_point_name)
+ datas += ep_datas
+ hiddenimports += ep_hiddenimports
+
def hook(hook_api):
"""
|
{"golden_diff": "diff --git a/PyInstaller/hooks/hook-sqlalchemy.py b/PyInstaller/hooks/hook-sqlalchemy.py\n--- a/PyInstaller/hooks/hook-sqlalchemy.py\n+++ b/PyInstaller/hooks/hook-sqlalchemy.py\n@@ -14,7 +14,9 @@\n \n from PyInstaller import isolated\n from PyInstaller.lib.modulegraph.modulegraph import SourceModule\n-from PyInstaller.utils.hooks import check_requirement, logger\n+from PyInstaller.utils.hooks import check_requirement, collect_entry_point, logger\n+\n+datas = []\n \n # 'sqlalchemy.testing' causes bundling a lot of unnecessary modules.\n excludedimports = ['sqlalchemy.testing']\n@@ -41,6 +43,13 @@\n else:\n hiddenimports += _get_dialect_modules(\"sqlalchemy.databases\")\n \n+# Collect additional dialects and plugins that are registered via entry-points, under assumption that they are available\n+# in the build environment for a reason (i.e., they are used).\n+for entry_point_name in ('sqlalchemy.dialects', 'sqlalchemy.plugins'):\n+ ep_datas, ep_hiddenimports = collect_entry_point(entry_point_name)\n+ datas += ep_datas\n+ hiddenimports += ep_hiddenimports\n+\n \n def hook(hook_api):\n \"\"\"\n", "issue": "Failing to load sqlalchemy-plugin in pyinstaller'ed exe\nThis line of code works perfectly fine while running as a file from spyder.\r\n\r\nengine = sqlalchemy.create_engine('teradata://uid:pwd@DBinstance?driver=Teradata')\r\nHowever, after making an exe file using pyinstaller, I get the following error while running the exe:\r\n\r\nsqlalchemy.exc.NoSuchModuleError: Can't load plugin:sqlalchemy.dialects:teradata\r\n\r\nThis looks to be an exe-creation-issue in loading libraries. Has anyone encountered a similar issue and was able to resolve? Any pointers would be appreciated.\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2023, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nimport re\nimport importlib.util\n\nfrom PyInstaller import isolated\nfrom PyInstaller.lib.modulegraph.modulegraph import SourceModule\nfrom PyInstaller.utils.hooks import check_requirement, logger\n\n# 'sqlalchemy.testing' causes bundling a lot of unnecessary modules.\nexcludedimports = ['sqlalchemy.testing']\n\n# Include most common database bindings some database bindings are detected and include some are not. 
We should\n# explicitly include database backends.\nhiddenimports = ['pysqlite2', 'MySQLdb', 'psycopg2', 'sqlalchemy.ext.baked']\n\nif check_requirement('sqlalchemy >= 1.4'):\n hiddenimports.append(\"sqlalchemy.sql.default_comparator\")\n\n\[email protected]\ndef _get_dialect_modules(module_name):\n import importlib\n module = importlib.import_module(module_name)\n return [f\"{module_name}.{submodule_name}\" for submodule_name in module.__all__]\n\n\n# In SQLAlchemy >= 0.6, the \"sqlalchemy.dialects\" package provides dialects.\n# In SQLAlchemy <= 0.5, the \"sqlalchemy.databases\" package provides dialects.\nif check_requirement('sqlalchemy >= 0.6'):\n hiddenimports += _get_dialect_modules(\"sqlalchemy.dialects\")\nelse:\n hiddenimports += _get_dialect_modules(\"sqlalchemy.databases\")\n\n\ndef hook(hook_api):\n \"\"\"\n SQLAlchemy 0.9 introduced the decorator 'util.dependencies'. This decorator does imports. E.g.:\n\n @util.dependencies(\"sqlalchemy.sql.schema\")\n\n This hook scans for included SQLAlchemy modules and then scans those modules for any util.dependencies and marks\n those modules as hidden imports.\n \"\"\"\n\n if not check_requirement('sqlalchemy >= 0.9'):\n return\n\n # this parser is very simplistic but seems to catch all cases as of V1.1\n depend_regex = re.compile(r'@util.dependencies\\([\\'\"](.*?)[\\'\"]\\)')\n\n hidden_imports_set = set()\n known_imports = set()\n for node in hook_api.module_graph.iter_graph(start=hook_api.module):\n if isinstance(node, SourceModule) and node.identifier.startswith('sqlalchemy.'):\n known_imports.add(node.identifier)\n\n # Read the source...\n with open(node.filename, 'rb') as f:\n source_code = f.read()\n source_code = importlib.util.decode_source(source_code)\n\n # ... and scan it\n for match in depend_regex.findall(source_code):\n hidden_imports_set.add(match)\n\n hidden_imports_set -= known_imports\n if len(hidden_imports_set):\n logger.info(\" Found %d sqlalchemy hidden imports\", len(hidden_imports_set))\n hook_api.add_imports(*list(hidden_imports_set))\n", "path": "PyInstaller/hooks/hook-sqlalchemy.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2023, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nimport re\nimport importlib.util\n\nfrom PyInstaller import isolated\nfrom PyInstaller.lib.modulegraph.modulegraph import SourceModule\nfrom PyInstaller.utils.hooks import check_requirement, collect_entry_point, logger\n\ndatas = []\n\n# 'sqlalchemy.testing' causes bundling a lot of unnecessary modules.\nexcludedimports = ['sqlalchemy.testing']\n\n# Include most common database bindings some database bindings are detected and include some are not. 
We should\n# explicitly include database backends.\nhiddenimports = ['pysqlite2', 'MySQLdb', 'psycopg2', 'sqlalchemy.ext.baked']\n\nif check_requirement('sqlalchemy >= 1.4'):\n hiddenimports.append(\"sqlalchemy.sql.default_comparator\")\n\n\[email protected]\ndef _get_dialect_modules(module_name):\n import importlib\n module = importlib.import_module(module_name)\n return [f\"{module_name}.{submodule_name}\" for submodule_name in module.__all__]\n\n\n# In SQLAlchemy >= 0.6, the \"sqlalchemy.dialects\" package provides dialects.\n# In SQLAlchemy <= 0.5, the \"sqlalchemy.databases\" package provides dialects.\nif check_requirement('sqlalchemy >= 0.6'):\n hiddenimports += _get_dialect_modules(\"sqlalchemy.dialects\")\nelse:\n hiddenimports += _get_dialect_modules(\"sqlalchemy.databases\")\n\n# Collect additional dialects and plugins that are registered via entry-points, under assumption that they are available\n# in the build environment for a reason (i.e., they are used).\nfor entry_point_name in ('sqlalchemy.dialects', 'sqlalchemy.plugins'):\n ep_datas, ep_hiddenimports = collect_entry_point(entry_point_name)\n datas += ep_datas\n hiddenimports += ep_hiddenimports\n\n\ndef hook(hook_api):\n \"\"\"\n SQLAlchemy 0.9 introduced the decorator 'util.dependencies'. This decorator does imports. E.g.:\n\n @util.dependencies(\"sqlalchemy.sql.schema\")\n\n This hook scans for included SQLAlchemy modules and then scans those modules for any util.dependencies and marks\n those modules as hidden imports.\n \"\"\"\n\n if not check_requirement('sqlalchemy >= 0.9'):\n return\n\n # this parser is very simplistic but seems to catch all cases as of V1.1\n depend_regex = re.compile(r'@util.dependencies\\([\\'\"](.*?)[\\'\"]\\)')\n\n hidden_imports_set = set()\n known_imports = set()\n for node in hook_api.module_graph.iter_graph(start=hook_api.module):\n if isinstance(node, SourceModule) and node.identifier.startswith('sqlalchemy.'):\n known_imports.add(node.identifier)\n\n # Read the source...\n with open(node.filename, 'rb') as f:\n source_code = f.read()\n source_code = importlib.util.decode_source(source_code)\n\n # ... and scan it\n for match in depend_regex.findall(source_code):\n hidden_imports_set.add(match)\n\n hidden_imports_set -= known_imports\n if len(hidden_imports_set):\n logger.info(\" Found %d sqlalchemy hidden imports\", len(hidden_imports_set))\n hook_api.add_imports(*list(hidden_imports_set))\n", "path": "PyInstaller/hooks/hook-sqlalchemy.py"}]}
| 1,225 | 271 |
gh_patches_debug_21
|
rasdani/github-patches
|
git_diff
|
sbi-dev__sbi-31
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Move tests to top folder
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sbi/dummytests/whateverstuff.py`
Content:
```
1 # content of test_compute.py
2
3
4 def test_compute(param1):
5 assert param1 < 4
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sbi/dummytests/whateverstuff.py b/sbi/dummytests/whateverstuff.py
deleted file mode 100644
--- a/sbi/dummytests/whateverstuff.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# content of test_compute.py
-
-
-def test_compute(param1):
- assert param1 < 4
\ No newline at end of file
|
{"golden_diff": "diff --git a/sbi/dummytests/whateverstuff.py b/sbi/dummytests/whateverstuff.py\ndeleted file mode 100644\n--- a/sbi/dummytests/whateverstuff.py\n+++ /dev/null\n@@ -1,5 +0,0 @@\n-# content of test_compute.py\n-\n-\n-def test_compute(param1):\n- assert param1 < 4\n\\ No newline at end of file\n", "issue": "Move tests to top folder\n\n", "before_files": [{"content": "# content of test_compute.py\n\n\ndef test_compute(param1):\n assert param1 < 4", "path": "sbi/dummytests/whateverstuff.py"}], "after_files": [{"content": null, "path": "sbi/dummytests/whateverstuff.py"}]}
| 296 | 95 |
gh_patches_debug_42
|
rasdani/github-patches
|
git_diff
|
wright-group__WrightTools-221
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
make setup.py an executable script
Change file permissions and reformat the shebang line
--- END ISSUE ---
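The change itself is mechanical; for illustration, the equivalent permission flip done from Python on a POSIX system (the path is assumed to be the repository's setup.py):
```python
# Make setup.py directly executable (adds the x bits) so "./setup.py" runs
# once the shebang reads "#! /usr/bin/env python".
import os
import stat

mode = os.stat("setup.py").st_mode
os.chmod("setup.py", mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```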
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # !/usr/bin/env python
2
3 import os
4 from setuptools import setup, find_packages
5
6
7 def package_files(directory):
8 paths = []
9 for (path, directories, filenames) in os.walk(directory):
10 for filename in filenames:
11 paths.append(os.path.join('..', path, filename))
12 return paths
13
14
15 here = os.path.abspath(os.path.dirname(__file__))
16
17 extra_files = package_files(os.path.join(here, 'WrightTools', 'datasets'))
18 extra_files.append(os.path.join(here, 'CONTRIBUTORS'))
19 extra_files.append(os.path.join(here, 'LICENSE'))
20 extra_files.append(os.path.join(here, 'README.rst'))
21 extra_files.append(os.path.join(here, 'requirements.txt'))
22 extra_files.append(os.path.join(here, 'VERSION'))
23 extra_files.append(os.path.join(here, 'WrightTools', 'client_secrets.json'))
24
25 with open(os.path.join(here, 'requirements.txt')) as f:
26 required = f.read().splitlines()
27
28 with open(os.path.join(here, 'VERSION')) as version_file:
29 version = version_file.read().strip()
30
31 setup(
32 name='WrightTools',
33 packages=find_packages(),
34 package_data={'': extra_files},
35 setup_requires=['pytest-runner'],
36 tests_require=['pytest'],
37 install_requires=required,
38 extras_require={'docs': ['sphinx-gallery>=0.1.9']},
39 version=version,
40 description='Tools for loading, processing, and plotting multidimensional spectroscopy data.',
41 author='Blaise Thompson',
42 author_email='[email protected]',
43 license='MIT',
44 url='http://wright.tools',
45 keywords='spectroscopy science multidimensional visualization',
46 classifiers=['Development Status :: 5 - Production/Stable',
47 'Intended Audience :: Science/Research',
48 'License :: OSI Approved :: MIT License',
49 'Natural Language :: English',
50 'Programming Language :: Python :: 2',
51 'Programming Language :: Python :: 2.7',
52 'Programming Language :: Python :: 3',
53 'Programming Language :: Python :: 3.3',
54 'Programming Language :: Python :: 3.4',
55 'Programming Language :: Python :: 3.5',
56 'Topic :: Scientific/Engineering']
57 )
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
old mode 100644
new mode 100755
--- a/setup.py
+++ b/setup.py
@@ -1,4 +1,4 @@
-# !/usr/bin/env python
+#! /usr/bin/env python
import os
from setuptools import setup, find_packages
|
{"golden_diff": "diff --git a/setup.py b/setup.py\nold mode 100644\nnew mode 100755\n--- a/setup.py\n+++ b/setup.py\n@@ -1,4 +1,4 @@\n-# !/usr/bin/env python\n+#! /usr/bin/env python\n \n import os\n from setuptools import setup, find_packages\n", "issue": "make setup.py an executable script\nChange file permissions and reformat the shebang line\n", "before_files": [{"content": "# !/usr/bin/env python\n\nimport os\nfrom setuptools import setup, find_packages\n\n\ndef package_files(directory):\n paths = []\n for (path, directories, filenames) in os.walk(directory):\n for filename in filenames:\n paths.append(os.path.join('..', path, filename))\n return paths\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\nextra_files = package_files(os.path.join(here, 'WrightTools', 'datasets'))\nextra_files.append(os.path.join(here, 'CONTRIBUTORS'))\nextra_files.append(os.path.join(here, 'LICENSE'))\nextra_files.append(os.path.join(here, 'README.rst'))\nextra_files.append(os.path.join(here, 'requirements.txt'))\nextra_files.append(os.path.join(here, 'VERSION'))\nextra_files.append(os.path.join(here, 'WrightTools', 'client_secrets.json'))\n\nwith open(os.path.join(here, 'requirements.txt')) as f:\n required = f.read().splitlines()\n\nwith open(os.path.join(here, 'VERSION')) as version_file:\n version = version_file.read().strip()\n\nsetup(\n name='WrightTools',\n packages=find_packages(),\n package_data={'': extra_files},\n setup_requires=['pytest-runner'],\n tests_require=['pytest'],\n install_requires=required,\n extras_require={'docs': ['sphinx-gallery>=0.1.9']},\n version=version,\n description='Tools for loading, processing, and plotting multidimensional spectroscopy data.',\n author='Blaise Thompson',\n author_email='[email protected]',\n license='MIT',\n url='http://wright.tools',\n keywords='spectroscopy science multidimensional visualization',\n classifiers=['Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Scientific/Engineering']\n)\n", "path": "setup.py"}], "after_files": [{"content": "#! 
/usr/bin/env python\n\nimport os\nfrom setuptools import setup, find_packages\n\n\ndef package_files(directory):\n paths = []\n for (path, directories, filenames) in os.walk(directory):\n for filename in filenames:\n paths.append(os.path.join('..', path, filename))\n return paths\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\nextra_files = package_files(os.path.join(here, 'WrightTools', 'datasets'))\nextra_files.append(os.path.join(here, 'CONTRIBUTORS'))\nextra_files.append(os.path.join(here, 'LICENSE'))\nextra_files.append(os.path.join(here, 'README.rst'))\nextra_files.append(os.path.join(here, 'requirements.txt'))\nextra_files.append(os.path.join(here, 'VERSION'))\nextra_files.append(os.path.join(here, 'WrightTools', 'client_secrets.json'))\n\nwith open(os.path.join(here, 'requirements.txt')) as f:\n required = f.read().splitlines()\n\nwith open(os.path.join(here, 'VERSION')) as version_file:\n version = version_file.read().strip()\n\nsetup(\n name='WrightTools',\n packages=find_packages(),\n package_data={'': extra_files},\n setup_requires=['pytest-runner'],\n tests_require=['pytest'],\n install_requires=required,\n extras_require={'docs': ['sphinx-gallery>=0.1.9']},\n version=version,\n description='Tools for loading, processing, and plotting multidimensional spectroscopy data.',\n author='Blaise Thompson',\n author_email='[email protected]',\n license='MIT',\n url='http://wright.tools',\n keywords='spectroscopy science multidimensional visualization',\n classifiers=['Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Scientific/Engineering']\n)\n", "path": "setup.py"}]}
| 858 | 78 |
gh_patches_debug_8151
|
rasdani/github-patches
|
git_diff
|
google__turbinia-826
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Jupyter FileExtractionTask fails
```
2021-02-25 17:17:55 [INFO] Execution of [['sudo', 'image_export.py', '--logfile', '/evidence/1614273470-4a7632dfe76c428797ce0ddeafc684bf-FileArtifactExtractionTask/4a7632dfe76c428797ce0ddeafc684bf.log', '-w', '/evidence/1614273470-4a7632dfe76c428797ce0ddeafc684bf-FileArtifactExtractionTask/export', '--partitions', 'all', '--artifact_filters', 'JupyterConfigFile', '-d', '/dev/loop1']] failed with status 1
2021-02-25 17:17:55 [INFO] image_export.py failed for artifact JupyterConfigFile.
```
https://github.com/google/turbinia/pull/766/checks?check_run_id=1981113701#step:9:842
--- END ISSUE ---
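Alongside the extraction failure, the report-writing code in this task mixes a text-mode file handle with byte strings; a minimal standalone sketch of that gotcha (no Turbinia required):
```python
# A file opened in text mode expects str; passing bytes raises TypeError.
import tempfile

with tempfile.NamedTemporaryFile("w", encoding="utf-8", delete=False) as fh:
    fh.write("jupyter analysis report\n")        # OK: str into a text-mode handle
    # fh.write("report".encode("utf8"))          # TypeError: write() argument must be str, not bytes
```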
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `turbinia/workers/analysis/jupyter.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2020 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Task for analysing Jupyter."""
16
17 from __future__ import unicode_literals
18
19 import os
20 import re
21
22 from turbinia import TurbiniaException
23 from turbinia.evidence import EvidenceState as state
24 from turbinia.evidence import ReportText
25 from turbinia.lib import text_formatter as fmt
26 from turbinia.workers import TurbiniaTask
27 from turbinia.workers import Priority
28
29
30 class JupyterAnalysisTask(TurbiniaTask):
31 """Task to analyze a Jupyter Notebook config."""
32
33 REQUIRED_STATES = [state.ATTACHED, state.MOUNTED]
34
35 def run(self, evidence, result):
36 """Run the Jupyter worker.
37
38 Args:
39 evidence (Evidence object): The evidence to process
40 result (TurbiniaTaskResult): The object to place task results into.
41
42 Returns:
43 TurbiniaTaskResult object.
44 """
45
46 # Where to store the resulting output file.
47 output_file_name = 'jupyter_analysis.txt'
48 output_file_path = os.path.join(self.output_dir, output_file_name)
49
50 # What type of evidence we should output.
51 output_evidence = ReportText(source_path=output_file_path)
52
53 # Read the config file.
54
55 jupyter_config = open(evidence.local_path, 'r').read()
56
57 # Extract the config and return the report
58 (report, priority, summary) = self.analyse_config(jupyter_config)
59 output_evidence.text_data = report
60 result.report_priority = priority
61 result.report_data = report
62
63 # Write the report to the output file.
64 with open(output_file_path, 'w') as fh:
65 fh.write(output_evidence.text_data.encode('utf8'))
66 fh.write('\n'.encode('utf8'))
67
68 # Add the resulting evidence to the result object.
69 result.add_evidence(output_evidence, evidence.config)
70 result.close(self, success=True, status=summary)
71
72 return result
73
74 def analyse_config(self, jupyter_config):
75 """Extract security related configs from Jupyter configuration files.
76
77 Args:
78 config (str): configuration file content.
79
80 Returns:
81 Tuple(
82 report_text(str): The report data
83 report_priority(int): The priority of the report (0 - 100)
84 summary(str): A summary of the report (used for task status)
85 )
86 """
87 findings = []
88 num_misconfigs = 0
89 for line in jupyter_config.split('\n'):
90
91 if all(x in line for x in ['disable_check_xsrf', 'True']):
92 findings.append(fmt.bullet('XSRF protection is disabled.'))
93 num_misconfigs += 1
94 continue
95 if all(x in line for x in ['allow_root', 'True']):
96 findings.append(fmt.bullet('Juypter Notebook allowed to run as root.'))
97 num_misconfigs += 1
98 continue
99 if 'NotebookApp.password' in line:
100 if all(x in line for x in ['required', 'False']):
101 findings.append(
102 fmt.bullet(
103 'Password is not required to access this Jupyter Notebook.'))
104 num_misconfigs += 1
105 continue
106 if 'required' not in line:
107 password_hash = line.split('=')
108 if len(password_hash) > 1:
109 if password_hash[1].strip() == "''":
110 findings.append(
111 fmt.bullet(
112 'There is no password set for this Jupyter Notebook.'))
113 num_misconfigs += 1
114 if all(x in line for x in ['allow_remote_access', 'True']):
115 findings.append(
116 fmt.bullet('Remote access is enabled on this Jupyter Notebook.'))
117 num_misconfigs += 1
118 continue
119
120 if findings:
121 summary = 'Insecure Jupyter Notebook configuration found. Total misconfigs: {}'.format(
122 num_misconfigs)
123 findings.insert(0, fmt.heading4(fmt.bold(summary)))
124 report = '\n'.join(findings)
125 return (report, Priority.HIGH, summary)
126
127 report = 'No issues found in Jupyter Notebook configuration.'
128 return (report, Priority.LOW, report)
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/turbinia/workers/analysis/jupyter.py b/turbinia/workers/analysis/jupyter.py
--- a/turbinia/workers/analysis/jupyter.py
+++ b/turbinia/workers/analysis/jupyter.py
@@ -61,9 +61,9 @@
result.report_data = report
# Write the report to the output file.
- with open(output_file_path, 'w') as fh:
- fh.write(output_evidence.text_data.encode('utf8'))
- fh.write('\n'.encode('utf8'))
+ with open(output_file_path, 'w', encoding='utf-8') as fh:
+ fh.write(output_evidence.text_data)
+ fh.write('\n')
# Add the resulting evidence to the result object.
result.add_evidence(output_evidence, evidence.config)
|
{"golden_diff": "diff --git a/turbinia/workers/analysis/jupyter.py b/turbinia/workers/analysis/jupyter.py\n--- a/turbinia/workers/analysis/jupyter.py\n+++ b/turbinia/workers/analysis/jupyter.py\n@@ -61,9 +61,9 @@\n result.report_data = report\n \n # Write the report to the output file.\n- with open(output_file_path, 'w') as fh:\n- fh.write(output_evidence.text_data.encode('utf8'))\n- fh.write('\\n'.encode('utf8'))\n+ with open(output_file_path, 'w', encoding='utf-8') as fh:\n+ fh.write(output_evidence.text_data)\n+ fh.write('\\n')\n \n # Add the resulting evidence to the result object.\n result.add_evidence(output_evidence, evidence.config)\n", "issue": "Jupyter FileExtractionTask task fails\n```\r\n2021-02-25 17:17:55 [INFO] Execution of [['sudo', 'image_export.py', '--logfile', '/evidence/1614273470-4a7632dfe76c428797ce0ddeafc684bf-FileArtifactExtractionTask/4a7632dfe76c428797ce0ddeafc684bf.log', '-w', '/evidence/1614273470-4a7632dfe76c428797ce0ddeafc684bf-FileArtifactExtractionTask/export', '--partitions', 'all', '--artifact_filters', 'JupyterConfigFile', '-d', '/dev/loop1']] failed with status 1\r\n2021-02-25 17:17:55 [INFO] image_export.py failed for artifact JupyterConfigFile.\r\n```\r\n\r\nhttps://github.com/google/turbinia/pull/766/checks?check_run_id=1981113701#step:9:842\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2020 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for analysing Jupyter.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nimport re\n\nfrom turbinia import TurbiniaException\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.evidence import ReportText\nfrom turbinia.lib import text_formatter as fmt\nfrom turbinia.workers import TurbiniaTask\nfrom turbinia.workers import Priority\n\n\nclass JupyterAnalysisTask(TurbiniaTask):\n \"\"\"Task to analyze a Jupyter Notebook config.\"\"\"\n\n REQUIRED_STATES = [state.ATTACHED, state.MOUNTED]\n\n def run(self, evidence, result):\n \"\"\"Run the Jupyter worker.\n\n Args:\n evidence (Evidence object): The evidence to process\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n\n # Where to store the resulting output file.\n output_file_name = 'jupyter_analysis.txt'\n output_file_path = os.path.join(self.output_dir, output_file_name)\n\n # What type of evidence we should output.\n output_evidence = ReportText(source_path=output_file_path)\n\n # Read the config file.\n\n jupyter_config = open(evidence.local_path, 'r').read()\n\n # Extract the config and return the report\n (report, priority, summary) = self.analyse_config(jupyter_config)\n output_evidence.text_data = report\n result.report_priority = priority\n result.report_data = report\n\n # Write the report to the output file.\n with open(output_file_path, 'w') as fh:\n fh.write(output_evidence.text_data.encode('utf8'))\n fh.write('\\n'.encode('utf8'))\n\n # Add the resulting evidence to the result object.\n 
result.add_evidence(output_evidence, evidence.config)\n result.close(self, success=True, status=summary)\n\n return result\n\n def analyse_config(self, jupyter_config):\n \"\"\"Extract security related configs from Jupyter configuration files.\n\n Args:\n config (str): configuration file content.\n\n Returns:\n Tuple(\n report_text(str): The report data\n report_priority(int): The priority of the report (0 - 100)\n summary(str): A summary of the report (used for task status)\n )\n \"\"\"\n findings = []\n num_misconfigs = 0\n for line in jupyter_config.split('\\n'):\n\n if all(x in line for x in ['disable_check_xsrf', 'True']):\n findings.append(fmt.bullet('XSRF protection is disabled.'))\n num_misconfigs += 1\n continue\n if all(x in line for x in ['allow_root', 'True']):\n findings.append(fmt.bullet('Juypter Notebook allowed to run as root.'))\n num_misconfigs += 1\n continue\n if 'NotebookApp.password' in line:\n if all(x in line for x in ['required', 'False']):\n findings.append(\n fmt.bullet(\n 'Password is not required to access this Jupyter Notebook.'))\n num_misconfigs += 1\n continue\n if 'required' not in line:\n password_hash = line.split('=')\n if len(password_hash) > 1:\n if password_hash[1].strip() == \"''\":\n findings.append(\n fmt.bullet(\n 'There is no password set for this Jupyter Notebook.'))\n num_misconfigs += 1\n if all(x in line for x in ['allow_remote_access', 'True']):\n findings.append(\n fmt.bullet('Remote access is enabled on this Jupyter Notebook.'))\n num_misconfigs += 1\n continue\n\n if findings:\n summary = 'Insecure Jupyter Notebook configuration found. Total misconfigs: {}'.format(\n num_misconfigs)\n findings.insert(0, fmt.heading4(fmt.bold(summary)))\n report = '\\n'.join(findings)\n return (report, Priority.HIGH, summary)\n\n report = 'No issues found in Jupyter Notebook configuration.'\n return (report, Priority.LOW, report)\n", "path": "turbinia/workers/analysis/jupyter.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2020 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for analysing Jupyter.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nimport re\n\nfrom turbinia import TurbiniaException\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.evidence import ReportText\nfrom turbinia.lib import text_formatter as fmt\nfrom turbinia.workers import TurbiniaTask\nfrom turbinia.workers import Priority\n\n\nclass JupyterAnalysisTask(TurbiniaTask):\n \"\"\"Task to analyze a Jupyter Notebook config.\"\"\"\n\n REQUIRED_STATES = [state.ATTACHED, state.MOUNTED]\n\n def run(self, evidence, result):\n \"\"\"Run the Jupyter worker.\n\n Args:\n evidence (Evidence object): The evidence to process\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n\n # Where to store the resulting output file.\n output_file_name = 'jupyter_analysis.txt'\n output_file_path = os.path.join(self.output_dir, output_file_name)\n\n # What 
type of evidence we should output.\n output_evidence = ReportText(source_path=output_file_path)\n\n # Read the config file.\n\n jupyter_config = open(evidence.local_path, 'r').read()\n\n # Extract the config and return the report\n (report, priority, summary) = self.analyse_config(jupyter_config)\n output_evidence.text_data = report\n result.report_priority = priority\n result.report_data = report\n\n # Write the report to the output file.\n with open(output_file_path, 'w', encoding='utf-8') as fh:\n fh.write(output_evidence.text_data)\n fh.write('\\n')\n\n # Add the resulting evidence to the result object.\n result.add_evidence(output_evidence, evidence.config)\n result.close(self, success=True, status=summary)\n\n return result\n\n def analyse_config(self, jupyter_config):\n \"\"\"Extract security related configs from Jupyter configuration files.\n\n Args:\n config (str): configuration file content.\n\n Returns:\n Tuple(\n report_text(str): The report data\n report_priority(int): The priority of the report (0 - 100)\n summary(str): A summary of the report (used for task status)\n )\n \"\"\"\n findings = []\n num_misconfigs = 0\n for line in jupyter_config.split('\\n'):\n\n if all(x in line for x in ['disable_check_xsrf', 'True']):\n findings.append(fmt.bullet('XSRF protection is disabled.'))\n num_misconfigs += 1\n continue\n if all(x in line for x in ['allow_root', 'True']):\n findings.append(fmt.bullet('Juypter Notebook allowed to run as root.'))\n num_misconfigs += 1\n continue\n if 'NotebookApp.password' in line:\n if all(x in line for x in ['required', 'False']):\n findings.append(\n fmt.bullet(\n 'Password is not required to access this Jupyter Notebook.'))\n num_misconfigs += 1\n continue\n if 'required' not in line:\n password_hash = line.split('=')\n if len(password_hash) > 1:\n if password_hash[1].strip() == \"''\":\n findings.append(\n fmt.bullet(\n 'There is no password set for this Jupyter Notebook.'))\n num_misconfigs += 1\n if all(x in line for x in ['allow_remote_access', 'True']):\n findings.append(\n fmt.bullet('Remote access is enabled on this Jupyter Notebook.'))\n num_misconfigs += 1\n continue\n\n if findings:\n summary = 'Insecure Jupyter Notebook configuration found. Total misconfigs: {}'.format(\n num_misconfigs)\n findings.insert(0, fmt.heading4(fmt.bold(summary)))\n report = '\\n'.join(findings)\n return (report, Priority.HIGH, summary)\n\n report = 'No issues found in Jupyter Notebook configuration.'\n return (report, Priority.LOW, report)\n", "path": "turbinia/workers/analysis/jupyter.py"}]}
| 1,859 | 186 |
gh_patches_debug_41396
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-6063
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Multiple-compressed responses support
HTTP response body can be compressed multiple times. In such cases `Content-Encoding` header contains a list of comma-separated encodings. Some servers instead can send multiple `Content-Encoding` headers.
This fix allows HttpCompressionMiddleware handle that.
See also: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding
Fix #5143
Fixes #5143
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/downloadermiddlewares/httpcompression.py`
Content:
```
1 from __future__ import annotations
2
3 import warnings
4 from logging import getLogger
5 from typing import TYPE_CHECKING, List, Optional, Union
6
7 from scrapy import Request, Spider, signals
8 from scrapy.crawler import Crawler
9 from scrapy.exceptions import IgnoreRequest, NotConfigured
10 from scrapy.http import Response, TextResponse
11 from scrapy.responsetypes import responsetypes
12 from scrapy.statscollectors import StatsCollector
13 from scrapy.utils._compression import (
14 _DecompressionMaxSizeExceeded,
15 _inflate,
16 _unbrotli,
17 _unzstd,
18 )
19 from scrapy.utils.deprecate import ScrapyDeprecationWarning
20 from scrapy.utils.gz import gunzip
21
22 if TYPE_CHECKING:
23 # typing.Self requires Python 3.11
24 from typing_extensions import Self
25
26 logger = getLogger(__name__)
27
28 ACCEPTED_ENCODINGS: List[bytes] = [b"gzip", b"deflate"]
29
30 try:
31 import brotli # noqa: F401
32 except ImportError:
33 pass
34 else:
35 ACCEPTED_ENCODINGS.append(b"br")
36
37 try:
38 import zstandard # noqa: F401
39 except ImportError:
40 pass
41 else:
42 ACCEPTED_ENCODINGS.append(b"zstd")
43
44
45 class HttpCompressionMiddleware:
46 """This middleware allows compressed (gzip, deflate) traffic to be
47 sent/received from web sites"""
48
49 def __init__(
50 self,
51 stats: Optional[StatsCollector] = None,
52 *,
53 crawler: Optional[Crawler] = None,
54 ):
55 if not crawler:
56 self.stats = stats
57 self._max_size = 1073741824
58 self._warn_size = 33554432
59 return
60 self.stats = crawler.stats
61 self._max_size = crawler.settings.getint("DOWNLOAD_MAXSIZE")
62 self._warn_size = crawler.settings.getint("DOWNLOAD_WARNSIZE")
63 crawler.signals.connect(self.open_spider, signals.spider_opened)
64
65 @classmethod
66 def from_crawler(cls, crawler: Crawler) -> Self:
67 if not crawler.settings.getbool("COMPRESSION_ENABLED"):
68 raise NotConfigured
69 try:
70 return cls(crawler=crawler)
71 except TypeError:
72 warnings.warn(
73 "HttpCompressionMiddleware subclasses must either modify "
74 "their '__init__' method to support a 'crawler' parameter or "
75 "reimplement their 'from_crawler' method.",
76 ScrapyDeprecationWarning,
77 )
78 mw = cls()
79 mw.stats = crawler.stats
80 mw._max_size = crawler.settings.getint("DOWNLOAD_MAXSIZE")
81 mw._warn_size = crawler.settings.getint("DOWNLOAD_WARNSIZE")
82 crawler.signals.connect(mw.open_spider, signals.spider_opened)
83 return mw
84
85 def open_spider(self, spider):
86 if hasattr(spider, "download_maxsize"):
87 self._max_size = spider.download_maxsize
88 if hasattr(spider, "download_warnsize"):
89 self._warn_size = spider.download_warnsize
90
91 def process_request(
92 self, request: Request, spider: Spider
93 ) -> Union[Request, Response, None]:
94 request.headers.setdefault("Accept-Encoding", b", ".join(ACCEPTED_ENCODINGS))
95 return None
96
97 def process_response(
98 self, request: Request, response: Response, spider: Spider
99 ) -> Union[Request, Response]:
100 if request.method == "HEAD":
101 return response
102 if isinstance(response, Response):
103 content_encoding = response.headers.getlist("Content-Encoding")
104 if content_encoding:
105 encoding = content_encoding.pop()
106 max_size = request.meta.get("download_maxsize", self._max_size)
107 warn_size = request.meta.get("download_warnsize", self._warn_size)
108 try:
109 decoded_body = self._decode(
110 response.body, encoding.lower(), max_size
111 )
112 except _DecompressionMaxSizeExceeded:
113 raise IgnoreRequest(
114 f"Ignored response {response} because its body "
115 f"({len(response.body)} B) exceeded DOWNLOAD_MAXSIZE "
116 f"({max_size} B) during decompression."
117 )
118 if len(response.body) < warn_size <= len(decoded_body):
119 logger.warning(
120 f"{response} body size after decompression "
121 f"({len(decoded_body)} B) is larger than the "
122 f"download warning size ({warn_size} B)."
123 )
124 if self.stats:
125 self.stats.inc_value(
126 "httpcompression/response_bytes",
127 len(decoded_body),
128 spider=spider,
129 )
130 self.stats.inc_value(
131 "httpcompression/response_count", spider=spider
132 )
133 respcls = responsetypes.from_args(
134 headers=response.headers, url=response.url, body=decoded_body
135 )
136 kwargs = dict(cls=respcls, body=decoded_body)
137 if issubclass(respcls, TextResponse):
138 # force recalculating the encoding until we make sure the
139 # responsetypes guessing is reliable
140 kwargs["encoding"] = None
141 response = response.replace(**kwargs)
142 if not content_encoding:
143 del response.headers["Content-Encoding"]
144
145 return response
146
147 def _decode(self, body: bytes, encoding: bytes, max_size: int) -> bytes:
148 if encoding == b"gzip" or encoding == b"x-gzip":
149 return gunzip(body, max_size=max_size)
150 if encoding == b"deflate":
151 return _inflate(body, max_size=max_size)
152 if encoding == b"br" and b"br" in ACCEPTED_ENCODINGS:
153 return _unbrotli(body, max_size=max_size)
154 if encoding == b"zstd" and b"zstd" in ACCEPTED_ENCODINGS:
155 return _unzstd(body, max_size=max_size)
156 return body
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scrapy/downloadermiddlewares/httpcompression.py b/scrapy/downloadermiddlewares/httpcompression.py
--- a/scrapy/downloadermiddlewares/httpcompression.py
+++ b/scrapy/downloadermiddlewares/httpcompression.py
@@ -1,6 +1,7 @@
from __future__ import annotations
import warnings
+from itertools import chain
from logging import getLogger
from typing import TYPE_CHECKING, List, Optional, Union
@@ -102,18 +103,18 @@
if isinstance(response, Response):
content_encoding = response.headers.getlist("Content-Encoding")
if content_encoding:
- encoding = content_encoding.pop()
max_size = request.meta.get("download_maxsize", self._max_size)
warn_size = request.meta.get("download_warnsize", self._warn_size)
try:
- decoded_body = self._decode(
- response.body, encoding.lower(), max_size
+ decoded_body, content_encoding = self._handle_encoding(
+ response.body, content_encoding, max_size
)
except _DecompressionMaxSizeExceeded:
raise IgnoreRequest(
f"Ignored response {response} because its body "
- f"({len(response.body)} B) exceeded DOWNLOAD_MAXSIZE "
- f"({max_size} B) during decompression."
+ f"({len(response.body)} B compressed) exceeded "
+ f"DOWNLOAD_MAXSIZE ({max_size} B) during "
+ f"decompression."
)
if len(response.body) < warn_size <= len(decoded_body):
logger.warning(
@@ -121,6 +122,7 @@
f"({len(decoded_body)} B) is larger than the "
f"download warning size ({warn_size} B)."
)
+ response.headers["Content-Encoding"] = content_encoding
if self.stats:
self.stats.inc_value(
"httpcompression/response_bytes",
@@ -144,6 +146,28 @@
return response
+ def _handle_encoding(self, body, content_encoding, max_size):
+ to_decode, to_keep = self._split_encodings(content_encoding)
+ for encoding in to_decode:
+ body = self._decode(body, encoding, max_size)
+ return body, to_keep
+
+ def _split_encodings(self, content_encoding):
+ to_keep = [
+ encoding.strip().lower()
+ for encoding in chain.from_iterable(
+ encodings.split(b",") for encodings in content_encoding
+ )
+ ]
+ to_decode = []
+ while to_keep:
+ encoding = to_keep.pop()
+ if encoding not in ACCEPTED_ENCODINGS:
+ to_keep.append(encoding)
+ return to_decode, to_keep
+ to_decode.append(encoding)
+ return to_decode, to_keep
+
def _decode(self, body: bytes, encoding: bytes, max_size: int) -> bytes:
if encoding == b"gzip" or encoding == b"x-gzip":
return gunzip(body, max_size=max_size)
|
{"golden_diff": "diff --git a/scrapy/downloadermiddlewares/httpcompression.py b/scrapy/downloadermiddlewares/httpcompression.py\n--- a/scrapy/downloadermiddlewares/httpcompression.py\n+++ b/scrapy/downloadermiddlewares/httpcompression.py\n@@ -1,6 +1,7 @@\n from __future__ import annotations\n \n import warnings\n+from itertools import chain\n from logging import getLogger\n from typing import TYPE_CHECKING, List, Optional, Union\n \n@@ -102,18 +103,18 @@\n if isinstance(response, Response):\n content_encoding = response.headers.getlist(\"Content-Encoding\")\n if content_encoding:\n- encoding = content_encoding.pop()\n max_size = request.meta.get(\"download_maxsize\", self._max_size)\n warn_size = request.meta.get(\"download_warnsize\", self._warn_size)\n try:\n- decoded_body = self._decode(\n- response.body, encoding.lower(), max_size\n+ decoded_body, content_encoding = self._handle_encoding(\n+ response.body, content_encoding, max_size\n )\n except _DecompressionMaxSizeExceeded:\n raise IgnoreRequest(\n f\"Ignored response {response} because its body \"\n- f\"({len(response.body)} B) exceeded DOWNLOAD_MAXSIZE \"\n- f\"({max_size} B) during decompression.\"\n+ f\"({len(response.body)} B compressed) exceeded \"\n+ f\"DOWNLOAD_MAXSIZE ({max_size} B) during \"\n+ f\"decompression.\"\n )\n if len(response.body) < warn_size <= len(decoded_body):\n logger.warning(\n@@ -121,6 +122,7 @@\n f\"({len(decoded_body)} B) is larger than the \"\n f\"download warning size ({warn_size} B).\"\n )\n+ response.headers[\"Content-Encoding\"] = content_encoding\n if self.stats:\n self.stats.inc_value(\n \"httpcompression/response_bytes\",\n@@ -144,6 +146,28 @@\n \n return response\n \n+ def _handle_encoding(self, body, content_encoding, max_size):\n+ to_decode, to_keep = self._split_encodings(content_encoding)\n+ for encoding in to_decode:\n+ body = self._decode(body, encoding, max_size)\n+ return body, to_keep\n+\n+ def _split_encodings(self, content_encoding):\n+ to_keep = [\n+ encoding.strip().lower()\n+ for encoding in chain.from_iterable(\n+ encodings.split(b\",\") for encodings in content_encoding\n+ )\n+ ]\n+ to_decode = []\n+ while to_keep:\n+ encoding = to_keep.pop()\n+ if encoding not in ACCEPTED_ENCODINGS:\n+ to_keep.append(encoding)\n+ return to_decode, to_keep\n+ to_decode.append(encoding)\n+ return to_decode, to_keep\n+\n def _decode(self, body: bytes, encoding: bytes, max_size: int) -> bytes:\n if encoding == b\"gzip\" or encoding == b\"x-gzip\":\n return gunzip(body, max_size=max_size)\n", "issue": "Multiple-compressed responses support\nHTTP response body can be compressed multiple times. In such cases `Content-Encoding` header contains a list of comma-separated encodings. 
Some servers instead can send multiple `Content-Encoding` headers.\r\nThis fix allows HttpCompressionMiddleware handle that.\r\nSee also: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding\nFix #5143\nFixes #5143 \n", "before_files": [{"content": "from __future__ import annotations\n\nimport warnings\nfrom logging import getLogger\nfrom typing import TYPE_CHECKING, List, Optional, Union\n\nfrom scrapy import Request, Spider, signals\nfrom scrapy.crawler import Crawler\nfrom scrapy.exceptions import IgnoreRequest, NotConfigured\nfrom scrapy.http import Response, TextResponse\nfrom scrapy.responsetypes import responsetypes\nfrom scrapy.statscollectors import StatsCollector\nfrom scrapy.utils._compression import (\n _DecompressionMaxSizeExceeded,\n _inflate,\n _unbrotli,\n _unzstd,\n)\nfrom scrapy.utils.deprecate import ScrapyDeprecationWarning\nfrom scrapy.utils.gz import gunzip\n\nif TYPE_CHECKING:\n # typing.Self requires Python 3.11\n from typing_extensions import Self\n\nlogger = getLogger(__name__)\n\nACCEPTED_ENCODINGS: List[bytes] = [b\"gzip\", b\"deflate\"]\n\ntry:\n import brotli # noqa: F401\nexcept ImportError:\n pass\nelse:\n ACCEPTED_ENCODINGS.append(b\"br\")\n\ntry:\n import zstandard # noqa: F401\nexcept ImportError:\n pass\nelse:\n ACCEPTED_ENCODINGS.append(b\"zstd\")\n\n\nclass HttpCompressionMiddleware:\n \"\"\"This middleware allows compressed (gzip, deflate) traffic to be\n sent/received from web sites\"\"\"\n\n def __init__(\n self,\n stats: Optional[StatsCollector] = None,\n *,\n crawler: Optional[Crawler] = None,\n ):\n if not crawler:\n self.stats = stats\n self._max_size = 1073741824\n self._warn_size = 33554432\n return\n self.stats = crawler.stats\n self._max_size = crawler.settings.getint(\"DOWNLOAD_MAXSIZE\")\n self._warn_size = crawler.settings.getint(\"DOWNLOAD_WARNSIZE\")\n crawler.signals.connect(self.open_spider, signals.spider_opened)\n\n @classmethod\n def from_crawler(cls, crawler: Crawler) -> Self:\n if not crawler.settings.getbool(\"COMPRESSION_ENABLED\"):\n raise NotConfigured\n try:\n return cls(crawler=crawler)\n except TypeError:\n warnings.warn(\n \"HttpCompressionMiddleware subclasses must either modify \"\n \"their '__init__' method to support a 'crawler' parameter or \"\n \"reimplement their 'from_crawler' method.\",\n ScrapyDeprecationWarning,\n )\n mw = cls()\n mw.stats = crawler.stats\n mw._max_size = crawler.settings.getint(\"DOWNLOAD_MAXSIZE\")\n mw._warn_size = crawler.settings.getint(\"DOWNLOAD_WARNSIZE\")\n crawler.signals.connect(mw.open_spider, signals.spider_opened)\n return mw\n\n def open_spider(self, spider):\n if hasattr(spider, \"download_maxsize\"):\n self._max_size = spider.download_maxsize\n if hasattr(spider, \"download_warnsize\"):\n self._warn_size = spider.download_warnsize\n\n def process_request(\n self, request: Request, spider: Spider\n ) -> Union[Request, Response, None]:\n request.headers.setdefault(\"Accept-Encoding\", b\", \".join(ACCEPTED_ENCODINGS))\n return None\n\n def process_response(\n self, request: Request, response: Response, spider: Spider\n ) -> Union[Request, Response]:\n if request.method == \"HEAD\":\n return response\n if isinstance(response, Response):\n content_encoding = response.headers.getlist(\"Content-Encoding\")\n if content_encoding:\n encoding = content_encoding.pop()\n max_size = request.meta.get(\"download_maxsize\", self._max_size)\n warn_size = request.meta.get(\"download_warnsize\", self._warn_size)\n try:\n decoded_body = self._decode(\n response.body, 
encoding.lower(), max_size\n )\n except _DecompressionMaxSizeExceeded:\n raise IgnoreRequest(\n f\"Ignored response {response} because its body \"\n f\"({len(response.body)} B) exceeded DOWNLOAD_MAXSIZE \"\n f\"({max_size} B) during decompression.\"\n )\n if len(response.body) < warn_size <= len(decoded_body):\n logger.warning(\n f\"{response} body size after decompression \"\n f\"({len(decoded_body)} B) is larger than the \"\n f\"download warning size ({warn_size} B).\"\n )\n if self.stats:\n self.stats.inc_value(\n \"httpcompression/response_bytes\",\n len(decoded_body),\n spider=spider,\n )\n self.stats.inc_value(\n \"httpcompression/response_count\", spider=spider\n )\n respcls = responsetypes.from_args(\n headers=response.headers, url=response.url, body=decoded_body\n )\n kwargs = dict(cls=respcls, body=decoded_body)\n if issubclass(respcls, TextResponse):\n # force recalculating the encoding until we make sure the\n # responsetypes guessing is reliable\n kwargs[\"encoding\"] = None\n response = response.replace(**kwargs)\n if not content_encoding:\n del response.headers[\"Content-Encoding\"]\n\n return response\n\n def _decode(self, body: bytes, encoding: bytes, max_size: int) -> bytes:\n if encoding == b\"gzip\" or encoding == b\"x-gzip\":\n return gunzip(body, max_size=max_size)\n if encoding == b\"deflate\":\n return _inflate(body, max_size=max_size)\n if encoding == b\"br\" and b\"br\" in ACCEPTED_ENCODINGS:\n return _unbrotli(body, max_size=max_size)\n if encoding == b\"zstd\" and b\"zstd\" in ACCEPTED_ENCODINGS:\n return _unzstd(body, max_size=max_size)\n return body\n", "path": "scrapy/downloadermiddlewares/httpcompression.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport warnings\nfrom itertools import chain\nfrom logging import getLogger\nfrom typing import TYPE_CHECKING, List, Optional, Union\n\nfrom scrapy import Request, Spider, signals\nfrom scrapy.crawler import Crawler\nfrom scrapy.exceptions import IgnoreRequest, NotConfigured\nfrom scrapy.http import Response, TextResponse\nfrom scrapy.responsetypes import responsetypes\nfrom scrapy.statscollectors import StatsCollector\nfrom scrapy.utils._compression import (\n _DecompressionMaxSizeExceeded,\n _inflate,\n _unbrotli,\n _unzstd,\n)\nfrom scrapy.utils.deprecate import ScrapyDeprecationWarning\nfrom scrapy.utils.gz import gunzip\n\nif TYPE_CHECKING:\n # typing.Self requires Python 3.11\n from typing_extensions import Self\n\nlogger = getLogger(__name__)\n\nACCEPTED_ENCODINGS: List[bytes] = [b\"gzip\", b\"deflate\"]\n\ntry:\n import brotli # noqa: F401\nexcept ImportError:\n pass\nelse:\n ACCEPTED_ENCODINGS.append(b\"br\")\n\ntry:\n import zstandard # noqa: F401\nexcept ImportError:\n pass\nelse:\n ACCEPTED_ENCODINGS.append(b\"zstd\")\n\n\nclass HttpCompressionMiddleware:\n \"\"\"This middleware allows compressed (gzip, deflate) traffic to be\n sent/received from web sites\"\"\"\n\n def __init__(\n self,\n stats: Optional[StatsCollector] = None,\n *,\n crawler: Optional[Crawler] = None,\n ):\n if not crawler:\n self.stats = stats\n self._max_size = 1073741824\n self._warn_size = 33554432\n return\n self.stats = crawler.stats\n self._max_size = crawler.settings.getint(\"DOWNLOAD_MAXSIZE\")\n self._warn_size = crawler.settings.getint(\"DOWNLOAD_WARNSIZE\")\n crawler.signals.connect(self.open_spider, signals.spider_opened)\n\n @classmethod\n def from_crawler(cls, crawler: Crawler) -> Self:\n if not crawler.settings.getbool(\"COMPRESSION_ENABLED\"):\n raise NotConfigured\n try:\n return 
cls(crawler=crawler)\n except TypeError:\n warnings.warn(\n \"HttpCompressionMiddleware subclasses must either modify \"\n \"their '__init__' method to support a 'crawler' parameter or \"\n \"reimplement their 'from_crawler' method.\",\n ScrapyDeprecationWarning,\n )\n mw = cls()\n mw.stats = crawler.stats\n mw._max_size = crawler.settings.getint(\"DOWNLOAD_MAXSIZE\")\n mw._warn_size = crawler.settings.getint(\"DOWNLOAD_WARNSIZE\")\n crawler.signals.connect(mw.open_spider, signals.spider_opened)\n return mw\n\n def open_spider(self, spider):\n if hasattr(spider, \"download_maxsize\"):\n self._max_size = spider.download_maxsize\n if hasattr(spider, \"download_warnsize\"):\n self._warn_size = spider.download_warnsize\n\n def process_request(\n self, request: Request, spider: Spider\n ) -> Union[Request, Response, None]:\n request.headers.setdefault(\"Accept-Encoding\", b\", \".join(ACCEPTED_ENCODINGS))\n return None\n\n def process_response(\n self, request: Request, response: Response, spider: Spider\n ) -> Union[Request, Response]:\n if request.method == \"HEAD\":\n return response\n if isinstance(response, Response):\n content_encoding = response.headers.getlist(\"Content-Encoding\")\n if content_encoding:\n max_size = request.meta.get(\"download_maxsize\", self._max_size)\n warn_size = request.meta.get(\"download_warnsize\", self._warn_size)\n try:\n decoded_body, content_encoding = self._handle_encoding(\n response.body, content_encoding, max_size\n )\n except _DecompressionMaxSizeExceeded:\n raise IgnoreRequest(\n f\"Ignored response {response} because its body \"\n f\"({len(response.body)} B compressed) exceeded \"\n f\"DOWNLOAD_MAXSIZE ({max_size} B) during \"\n f\"decompression.\"\n )\n if len(response.body) < warn_size <= len(decoded_body):\n logger.warning(\n f\"{response} body size after decompression \"\n f\"({len(decoded_body)} B) is larger than the \"\n f\"download warning size ({warn_size} B).\"\n )\n response.headers[\"Content-Encoding\"] = content_encoding\n if self.stats:\n self.stats.inc_value(\n \"httpcompression/response_bytes\",\n len(decoded_body),\n spider=spider,\n )\n self.stats.inc_value(\n \"httpcompression/response_count\", spider=spider\n )\n respcls = responsetypes.from_args(\n headers=response.headers, url=response.url, body=decoded_body\n )\n kwargs = dict(cls=respcls, body=decoded_body)\n if issubclass(respcls, TextResponse):\n # force recalculating the encoding until we make sure the\n # responsetypes guessing is reliable\n kwargs[\"encoding\"] = None\n response = response.replace(**kwargs)\n if not content_encoding:\n del response.headers[\"Content-Encoding\"]\n\n return response\n\n def _handle_encoding(self, body, content_encoding, max_size):\n to_decode, to_keep = self._split_encodings(content_encoding)\n for encoding in to_decode:\n body = self._decode(body, encoding, max_size)\n return body, to_keep\n\n def _split_encodings(self, content_encoding):\n to_keep = [\n encoding.strip().lower()\n for encoding in chain.from_iterable(\n encodings.split(b\",\") for encodings in content_encoding\n )\n ]\n to_decode = []\n while to_keep:\n encoding = to_keep.pop()\n if encoding not in ACCEPTED_ENCODINGS:\n to_keep.append(encoding)\n return to_decode, to_keep\n to_decode.append(encoding)\n return to_decode, to_keep\n\n def _decode(self, body: bytes, encoding: bytes, max_size: int) -> bytes:\n if encoding == b\"gzip\" or encoding == b\"x-gzip\":\n return gunzip(body, max_size=max_size)\n if encoding == b\"deflate\":\n return _inflate(body, max_size=max_size)\n if 
encoding == b\"br\" and b\"br\" in ACCEPTED_ENCODINGS:\n return _unbrotli(body, max_size=max_size)\n if encoding == b\"zstd\" and b\"zstd\" in ACCEPTED_ENCODINGS:\n return _unzstd(body, max_size=max_size)\n return body\n", "path": "scrapy/downloadermiddlewares/httpcompression.py"}]}
| 1,972 | 660 |
gh_patches_debug_590
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-1838
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.96
On the docket:
+ [x] PEX_EXTRA_SYS_PATH propagation can break subprocesses run against other venvs. #1836
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.95"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.95"
+__version__ = "2.1.96"
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.95\"\n+__version__ = \"2.1.96\"\n", "issue": "Release 2.1.96\nOn the docket:\r\n+ [x] PEX_EXTRA_SYS_PATH propagation can break subprocesses run against other venvs. #1836 \r\n\r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.95\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.96\"\n", "path": "pex/version.py"}]}
| 350 | 96 |
gh_patches_debug_29330
|
rasdani/github-patches
|
git_diff
|
cloud-custodian__cloud-custodian-4987
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
STS calls to use regional endpoints instead of global endpoint
Change STS api calls to use regional STS endpoints to gain the benefits of "reduce latency, build in redundancy, and increase session token validity" as documented at https://docs.aws.amazon.com/general/latest/gr/rande.html#sts_region
This will also enable leveraging a VPC Endpoint for STS, if provisioned.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `c7n/credentials.py`
Content:
```
1 # Copyright 2016-2017 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 Authentication utilities
16 """
17 from __future__ import absolute_import, division, print_function, unicode_literals
18
19 import os
20
21 from botocore.credentials import RefreshableCredentials
22 from botocore.session import get_session
23 from boto3 import Session
24
25 from c7n.version import version
26 from c7n.utils import get_retry
27
28
29 class SessionFactory(object):
30
31 def __init__(self, region, profile=None, assume_role=None, external_id=None):
32 self.region = region
33 self.profile = profile
34 self.assume_role = assume_role
35 self.external_id = external_id
36 self.user_agent_name = "CloudCustodian"
37 self.session_name = "CloudCustodian"
38 if 'C7N_SESSION_SUFFIX' in os.environ:
39 self.session_name = "%s@%s" % (
40 self.session_name, os.environ['C7N_SESSION_SUFFIX'])
41 self._subscribers = []
42
43 def _set_policy_name(self, name):
44 self.user_agent_name = ("CloudCustodian(%s)" % name).strip()
45
46 policy_name = property(None, _set_policy_name)
47
48 def __call__(self, assume=True, region=None):
49 if self.assume_role and assume:
50 session = Session(profile_name=self.profile)
51 session = assumed_session(
52 self.assume_role, self.session_name, session,
53 region or self.region, self.external_id)
54 else:
55 session = Session(
56 region_name=region or self.region, profile_name=self.profile)
57
58 return self.update(session)
59
60 def update(self, session):
61 session._session.user_agent_name = self.user_agent_name
62 session._session.user_agent_version = version
63
64 for s in self._subscribers:
65 s(session)
66
67 return session
68
69 def set_subscribers(self, subscribers):
70 self._subscribers = subscribers
71
72
73 def assumed_session(role_arn, session_name, session=None, region=None, external_id=None):
74 """STS Role assume a boto3.Session
75
76 With automatic credential renewal.
77
78 Args:
79 role_arn: iam role arn to assume
80 session_name: client session identifier
81 session: an optional extant session, note session is captured
82 in a function closure for renewing the sts assumed role.
83
84 :return: a boto3 session using the sts assumed role credentials
85
86 Notes: We have to poke at botocore internals a few times
87 """
88 if session is None:
89 session = Session()
90
91 retry = get_retry(('Throttling',))
92
93 def refresh():
94
95 parameters = {"RoleArn": role_arn, "RoleSessionName": session_name}
96
97 if external_id is not None:
98 parameters['ExternalId'] = external_id
99
100 credentials = retry(
101 session.client('sts').assume_role, **parameters)['Credentials']
102 return dict(
103 access_key=credentials['AccessKeyId'],
104 secret_key=credentials['SecretAccessKey'],
105 token=credentials['SessionToken'],
106 # Silly that we basically stringify so it can be parsed again
107 expiry_time=credentials['Expiration'].isoformat())
108
109 session_credentials = RefreshableCredentials.create_from_metadata(
110 metadata=refresh(),
111 refresh_using=refresh,
112 method='sts-assume-role')
113
114 # so dirty.. it hurts, no clean way to set this outside of the
115 # internals poke. There's some work upstream on making this nicer
116 # but its pretty baroque as well with upstream support.
117 # https://github.com/boto/boto3/issues/443
118 # https://github.com/boto/botocore/issues/761
119
120 s = get_session()
121 s._credentials = session_credentials
122 if region is None:
123 region = s.get_config_variable('region') or 'us-east-1'
124 s.set_config_variable('region', region)
125 return Session(botocore_session=s)
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/c7n/credentials.py b/c7n/credentials.py
--- a/c7n/credentials.py
+++ b/c7n/credentials.py
@@ -26,6 +26,10 @@
from c7n.utils import get_retry
+# 0.8.45.1 compatibility with global only sts endpoints, out of caution, remove in 0.8.46.1
+USE_STS_GLOBAL = os.environ.get('C7N_USE_STS_GLOBAL', '').lower() in ('yes', 'true')
+
+
class SessionFactory(object):
def __init__(self, region, profile=None, assume_role=None, external_id=None):
@@ -98,7 +102,8 @@
parameters['ExternalId'] = external_id
credentials = retry(
- session.client('sts').assume_role, **parameters)['Credentials']
+ get_sts_client(
+ session, region).assume_role, **parameters)['Credentials']
return dict(
access_key=credentials['AccessKeyId'],
secret_key=credentials['SecretAccessKey'],
@@ -123,3 +128,20 @@
region = s.get_config_variable('region') or 'us-east-1'
s.set_config_variable('region', region)
return Session(botocore_session=s)
+
+
+def get_sts_client(session, region):
+ """Get the AWS STS endpoint specific for the given region.
+
+ Returns the global endpoint if region is not specified.
+
+ For the list of regional endpoints, see https://amzn.to/2ohJgtR
+ """
+ if region and not USE_STS_GLOBAL:
+ endpoint_url = "https://sts.{}.amazonaws.com".format(region)
+ region_name = region
+ else:
+ endpoint_url = "https://sts.amazonaws.com"
+ region_name = None
+ return session.client(
+ 'sts', endpoint_url=endpoint_url, region_name=region_name)
|
{"golden_diff": "diff --git a/c7n/credentials.py b/c7n/credentials.py\n--- a/c7n/credentials.py\n+++ b/c7n/credentials.py\n@@ -26,6 +26,10 @@\n from c7n.utils import get_retry\n \n \n+# 0.8.45.1 compatibility with global only sts endpoints, out of caution, remove in 0.8.46.1\n+USE_STS_GLOBAL = os.environ.get('C7N_USE_STS_GLOBAL', '').lower() in ('yes', 'true')\n+\n+\n class SessionFactory(object):\n \n def __init__(self, region, profile=None, assume_role=None, external_id=None):\n@@ -98,7 +102,8 @@\n parameters['ExternalId'] = external_id\n \n credentials = retry(\n- session.client('sts').assume_role, **parameters)['Credentials']\n+ get_sts_client(\n+ session, region).assume_role, **parameters)['Credentials']\n return dict(\n access_key=credentials['AccessKeyId'],\n secret_key=credentials['SecretAccessKey'],\n@@ -123,3 +128,20 @@\n region = s.get_config_variable('region') or 'us-east-1'\n s.set_config_variable('region', region)\n return Session(botocore_session=s)\n+\n+\n+def get_sts_client(session, region):\n+ \"\"\"Get the AWS STS endpoint specific for the given region.\n+\n+ Returns the global endpoint if region is not specified.\n+\n+ For the list of regional endpoints, see https://amzn.to/2ohJgtR\n+ \"\"\"\n+ if region and not USE_STS_GLOBAL:\n+ endpoint_url = \"https://sts.{}.amazonaws.com\".format(region)\n+ region_name = region\n+ else:\n+ endpoint_url = \"https://sts.amazonaws.com\"\n+ region_name = None\n+ return session.client(\n+ 'sts', endpoint_url=endpoint_url, region_name=region_name)\n", "issue": "STS calls to use regional endpoints instead of global endpoint\nChange STS api calls to use regional STS endpoints to gain the benefits of \"reduce latency, build in redundancy, and increase session token validity\" as documented at https://docs.aws.amazon.com/general/latest/gr/rande.html#sts_region\r\n\r\nThis will also enable leveraging a VPC Endpoint for STS, if provisioned. 
\n", "before_files": [{"content": "# Copyright 2016-2017 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nAuthentication utilities\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport os\n\nfrom botocore.credentials import RefreshableCredentials\nfrom botocore.session import get_session\nfrom boto3 import Session\n\nfrom c7n.version import version\nfrom c7n.utils import get_retry\n\n\nclass SessionFactory(object):\n\n def __init__(self, region, profile=None, assume_role=None, external_id=None):\n self.region = region\n self.profile = profile\n self.assume_role = assume_role\n self.external_id = external_id\n self.user_agent_name = \"CloudCustodian\"\n self.session_name = \"CloudCustodian\"\n if 'C7N_SESSION_SUFFIX' in os.environ:\n self.session_name = \"%s@%s\" % (\n self.session_name, os.environ['C7N_SESSION_SUFFIX'])\n self._subscribers = []\n\n def _set_policy_name(self, name):\n self.user_agent_name = (\"CloudCustodian(%s)\" % name).strip()\n\n policy_name = property(None, _set_policy_name)\n\n def __call__(self, assume=True, region=None):\n if self.assume_role and assume:\n session = Session(profile_name=self.profile)\n session = assumed_session(\n self.assume_role, self.session_name, session,\n region or self.region, self.external_id)\n else:\n session = Session(\n region_name=region or self.region, profile_name=self.profile)\n\n return self.update(session)\n\n def update(self, session):\n session._session.user_agent_name = self.user_agent_name\n session._session.user_agent_version = version\n\n for s in self._subscribers:\n s(session)\n\n return session\n\n def set_subscribers(self, subscribers):\n self._subscribers = subscribers\n\n\ndef assumed_session(role_arn, session_name, session=None, region=None, external_id=None):\n \"\"\"STS Role assume a boto3.Session\n\n With automatic credential renewal.\n\n Args:\n role_arn: iam role arn to assume\n session_name: client session identifier\n session: an optional extant session, note session is captured\n in a function closure for renewing the sts assumed role.\n\n :return: a boto3 session using the sts assumed role credentials\n\n Notes: We have to poke at botocore internals a few times\n \"\"\"\n if session is None:\n session = Session()\n\n retry = get_retry(('Throttling',))\n\n def refresh():\n\n parameters = {\"RoleArn\": role_arn, \"RoleSessionName\": session_name}\n\n if external_id is not None:\n parameters['ExternalId'] = external_id\n\n credentials = retry(\n session.client('sts').assume_role, **parameters)['Credentials']\n return dict(\n access_key=credentials['AccessKeyId'],\n secret_key=credentials['SecretAccessKey'],\n token=credentials['SessionToken'],\n # Silly that we basically stringify so it can be parsed again\n expiry_time=credentials['Expiration'].isoformat())\n\n session_credentials = RefreshableCredentials.create_from_metadata(\n metadata=refresh(),\n refresh_using=refresh,\n method='sts-assume-role')\n\n # so dirty.. 
it hurts, no clean way to set this outside of the\n # internals poke. There's some work upstream on making this nicer\n # but its pretty baroque as well with upstream support.\n # https://github.com/boto/boto3/issues/443\n # https://github.com/boto/botocore/issues/761\n\n s = get_session()\n s._credentials = session_credentials\n if region is None:\n region = s.get_config_variable('region') or 'us-east-1'\n s.set_config_variable('region', region)\n return Session(botocore_session=s)\n", "path": "c7n/credentials.py"}], "after_files": [{"content": "# Copyright 2016-2017 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nAuthentication utilities\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport os\n\nfrom botocore.credentials import RefreshableCredentials\nfrom botocore.session import get_session\nfrom boto3 import Session\n\nfrom c7n.version import version\nfrom c7n.utils import get_retry\n\n\n# 0.8.45.1 compatibility with global only sts endpoints, out of caution, remove in 0.8.46.1\nUSE_STS_GLOBAL = os.environ.get('C7N_USE_STS_GLOBAL', '').lower() in ('yes', 'true')\n\n\nclass SessionFactory(object):\n\n def __init__(self, region, profile=None, assume_role=None, external_id=None):\n self.region = region\n self.profile = profile\n self.assume_role = assume_role\n self.external_id = external_id\n self.user_agent_name = \"CloudCustodian\"\n self.session_name = \"CloudCustodian\"\n if 'C7N_SESSION_SUFFIX' in os.environ:\n self.session_name = \"%s@%s\" % (\n self.session_name, os.environ['C7N_SESSION_SUFFIX'])\n self._subscribers = []\n\n def _set_policy_name(self, name):\n self.user_agent_name = (\"CloudCustodian(%s)\" % name).strip()\n\n policy_name = property(None, _set_policy_name)\n\n def __call__(self, assume=True, region=None):\n if self.assume_role and assume:\n session = Session(profile_name=self.profile)\n session = assumed_session(\n self.assume_role, self.session_name, session,\n region or self.region, self.external_id)\n else:\n session = Session(\n region_name=region or self.region, profile_name=self.profile)\n\n return self.update(session)\n\n def update(self, session):\n session._session.user_agent_name = self.user_agent_name\n session._session.user_agent_version = version\n\n for s in self._subscribers:\n s(session)\n\n return session\n\n def set_subscribers(self, subscribers):\n self._subscribers = subscribers\n\n\ndef assumed_session(role_arn, session_name, session=None, region=None, external_id=None):\n \"\"\"STS Role assume a boto3.Session\n\n With automatic credential renewal.\n\n Args:\n role_arn: iam role arn to assume\n session_name: client session identifier\n session: an optional extant session, note session is captured\n in a function closure for renewing the sts assumed role.\n\n :return: a boto3 session using the sts assumed role credentials\n\n Notes: We have to poke at botocore internals a few times\n \"\"\"\n if session is None:\n session = Session()\n\n retry = 
get_retry(('Throttling',))\n\n def refresh():\n\n parameters = {\"RoleArn\": role_arn, \"RoleSessionName\": session_name}\n\n if external_id is not None:\n parameters['ExternalId'] = external_id\n\n credentials = retry(\n get_sts_client(\n session, region).assume_role, **parameters)['Credentials']\n return dict(\n access_key=credentials['AccessKeyId'],\n secret_key=credentials['SecretAccessKey'],\n token=credentials['SessionToken'],\n # Silly that we basically stringify so it can be parsed again\n expiry_time=credentials['Expiration'].isoformat())\n\n session_credentials = RefreshableCredentials.create_from_metadata(\n metadata=refresh(),\n refresh_using=refresh,\n method='sts-assume-role')\n\n # so dirty.. it hurts, no clean way to set this outside of the\n # internals poke. There's some work upstream on making this nicer\n # but its pretty baroque as well with upstream support.\n # https://github.com/boto/boto3/issues/443\n # https://github.com/boto/botocore/issues/761\n\n s = get_session()\n s._credentials = session_credentials\n if region is None:\n region = s.get_config_variable('region') or 'us-east-1'\n s.set_config_variable('region', region)\n return Session(botocore_session=s)\n\n\ndef get_sts_client(session, region):\n \"\"\"Get the AWS STS endpoint specific for the given region.\n\n Returns the global endpoint if region is not specified.\n\n For the list of regional endpoints, see https://amzn.to/2ohJgtR\n \"\"\"\n if region and not USE_STS_GLOBAL:\n endpoint_url = \"https://sts.{}.amazonaws.com\".format(region)\n region_name = region\n else:\n endpoint_url = \"https://sts.amazonaws.com\"\n region_name = None\n return session.client(\n 'sts', endpoint_url=endpoint_url, region_name=region_name)\n", "path": "c7n/credentials.py"}]}
| 1,580 | 431 |