Dataset columns:
- title: string (length 2–169)
- diff: string (length 235–19.5k)
- body: string (length 0–30.5k)
- url: string (length 48–84)
- created_at: string (length 20)
- closed_at: string (length 20)
- merged_at: string (length 20)
- updated_at: string (length 20)
- diff_len: float64 (101–3.99k)
- repo_name: string (83 classes)
- __index_level_0__: int64 (15–52.7k)
Add requirements to npm module
diff --git a/lib/ansible/modules/packaging/language/npm.py b/lib/ansible/modules/packaging/language/npm.py index fed13921d00fd8..256218702433ab 100644 --- a/lib/ansible/modules/packaging/language/npm.py +++ b/lib/ansible/modules/packaging/language/npm.py @@ -70,6 +70,8 @@ required: false default: present choices: [ "present", "absent", "latest" ] +requirements: + - npm installed in bin path (recommended /usr/local/bin) ''' EXAMPLES = '''
##### SUMMARY npm is required in order for this module to work. ##### ISSUE TYPE - Docs Pull Request ##### COMPONENT NAME npm
https://api.github.com/repos/ansible/ansible/pulls/33641
2017-12-06T17:47:58Z
2017-12-06T22:03:33Z
2017-12-06T22:03:32Z
2019-04-26T23:28:23Z
130
ansible/ansible
49,535
Handle XDG_CACHE_HOME properly for download_root
diff --git a/whisper/__init__.py b/whisper/__init__.py index 2a1fb4ec..cb334065 100644 --- a/whisper/__init__.py +++ b/whisper/__init__.py @@ -94,9 +94,14 @@ def load_model(name: str, device: Optional[Union[str, torch.device]] = None, dow if device is None: device = "cuda" if torch.cuda.is_available() else "cpu" if download_root is None: - download_root = os.getenv( - "XDG_CACHE_HOME", - os.path.join(os.path.expanduser("~"), ".cache", "whisper") + download_root = os.path.join( + os.getenv( + "XDG_CACHE_HOME", + os.path.join( + os.path.expanduser("~"), ".cache" + ) + ), + "whisper" ) if name in _MODELS:
It used to download models into `$XDG_CACHE_HOME` rather than `$XDG_CACHE_HOME/whisper` when the `$XDG_CACHE_HOME` variable is set on my system.
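A minimal standalone sketch of the corrected default-path logic (plain `os` calls mirroring the diff above; not the exact whisper code):

```python
import os

def default_download_root(app_name: str = "whisper") -> str:
    # Respect $XDG_CACHE_HOME if set, otherwise fall back to ~/.cache,
    # and always append the per-application subdirectory.
    cache_home = os.getenv("XDG_CACHE_HOME", os.path.join(os.path.expanduser("~"), ".cache"))
    return os.path.join(cache_home, app_name)

# With XDG_CACHE_HOME=/tmp/cache this returns /tmp/cache/whisper;
# the old code returned /tmp/cache, dropping the per-app subdirectory.
print(default_download_root())
```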
https://api.github.com/repos/openai/whisper/pulls/864
2023-01-19T20:23:03Z
2023-01-21T09:09:39Z
2023-01-21T09:09:39Z
2023-01-21T09:09:40Z
215
openai/whisper
45,781
Fix indentation in docs/editor_integration.md
diff --git a/docs/editor_integration.md b/docs/editor_integration.md index ae24c76888..f2d21f2111 100644 --- a/docs/editor_integration.md +++ b/docs/editor_integration.md @@ -12,38 +12,38 @@ Options include the following: 1. Install `black`. -```console -$ pip install black -``` + ```console + $ pip install black + ``` 2. Locate your `black` installation folder. -On macOS / Linux / BSD: + On macOS / Linux / BSD: -```console -$ which black -/usr/local/bin/black # possible location -``` + ```console + $ which black + /usr/local/bin/black # possible location + ``` -On Windows: + On Windows: -```console -$ where black -%LocalAppData%\Programs\Python\Python36-32\Scripts\black.exe # possible location -``` + ```console + $ where black + %LocalAppData%\Programs\Python\Python36-32\Scripts\black.exe # possible location + ``` -Note that if you are using a virtual environment detected by PyCharm, this is an -unneeded step. In this case the path to `black` is `$PyInterpreterDirectory$/black`. + Note that if you are using a virtual environment detected by PyCharm, this is an + unneeded step. In this case the path to `black` is `$PyInterpreterDirectory$/black`. 3. Open External tools in PyCharm/IntelliJ IDEA -On macOS: + On macOS: -`PyCharm -> Preferences -> Tools -> External Tools` + `PyCharm -> Preferences -> Tools -> External Tools` -On Windows / Linux / BSD: + On Windows / Linux / BSD: -`File -> Settings -> Tools -> External Tools` + `File -> Settings -> Tools -> External Tools` 4. Click the + icon to add a new external tool with the following values: @@ -83,28 +83,28 @@ Wing supports black via the OS Commands tool, as explained in the Wing documenta 1. Install `black`. -```console -$ pip install black -``` + ```console + $ pip install black + ``` 2. Make sure it runs from the command line, e.g. -```console -$ black --help -``` + ```console + $ black --help + ``` 3. In Wing IDE, activate the **OS Commands** panel and define the command **black** to execute black on the currently selected file: -- Use the Tools -> OS Commands menu selection -- click on **+** in **OS Commands** -> New: Command line.. - - Title: black - - Command Line: black %s - - I/O Encoding: Use Default - - Key Binding: F1 - - [x] Raise OS Commands when executed - - [x] Auto-save files before execution - - [x] Line mode + - Use the Tools -> OS Commands menu selection + - click on **+** in **OS Commands** -> New: Command line.. + - Title: black + - Command Line: black %s + - I/O Encoding: Use Default + - Key Binding: F1 + - [x] Raise OS Commands when executed + - [x] Auto-save files before execution + - [x] Line mode 4. Select a file in the editor and press **F1** , or whatever key binding you selected in step 3, to reformat the file.
Numbered list entries' bodies need to be indented or else the list won't render correctly. Fixes GH-2045. also \*sigh\* I'm going to have a fun time dealing with the conflicts when trying to merge my documentation reorganization branch /s
https://api.github.com/repos/psf/black/pulls/2056
2021-03-20T19:06:56Z
2021-03-20T19:15:56Z
2021-03-20T19:15:56Z
2021-03-20T19:26:38Z
831
psf/black
24,025
Fix allowing identical flows to be created before startup
diff --git a/homeassistant/helpers/discovery_flow.py b/homeassistant/helpers/discovery_flow.py index 863fb58625ce..2bfccf46960e 100644 --- a/homeassistant/helpers/discovery_flow.py +++ b/homeassistant/helpers/discovery_flow.py @@ -2,7 +2,7 @@ from __future__ import annotations from collections.abc import Coroutine -from typing import Any +from typing import Any, NamedTuple from homeassistant.const import EVENT_HOMEASSISTANT_STARTED from homeassistant.core import CoreState, Event, HomeAssistant, callback @@ -20,17 +20,18 @@ def async_create_flow( hass: HomeAssistant, domain: str, context: dict[str, Any], data: Any ) -> None: """Create a discovery flow.""" - if hass.state == CoreState.running: + dispatcher: FlowDispatcher | None = None + if DISCOVERY_FLOW_DISPATCHER in hass.data: + dispatcher = hass.data[DISCOVERY_FLOW_DISPATCHER] + elif hass.state != CoreState.running: + dispatcher = hass.data[DISCOVERY_FLOW_DISPATCHER] = FlowDispatcher(hass) + dispatcher.async_setup() + + if not dispatcher or dispatcher.started: if init_coro := _async_init_flow(hass, domain, context, data): hass.async_create_task(init_coro) return - if DISCOVERY_FLOW_DISPATCHER not in hass.data: - dispatcher = hass.data[DISCOVERY_FLOW_DISPATCHER] = FlowDispatcher(hass) - dispatcher.async_setup() - else: - dispatcher = hass.data[DISCOVERY_FLOW_DISPATCHER] - return dispatcher.async_create(domain, context, data) @@ -49,13 +50,28 @@ def _async_init_flow( return hass.config_entries.flow.async_init(domain, context=context, data=data) +class PendingFlowKey(NamedTuple): + """Key for pending flows.""" + + domain: str + source: str + + +class PendingFlowValue(NamedTuple): + """Value for pending flows.""" + + context: dict[str, Any] + data: Any + + class FlowDispatcher: """Dispatch discovery flows.""" def __init__(self, hass: HomeAssistant) -> None: """Init the discovery dispatcher.""" self.hass = hass - self.pending_flows: list[tuple[str, dict[str, Any], Any]] = [] + self.started = False + self.pending_flows: dict[PendingFlowKey, list[PendingFlowValue]] = {} @callback def async_setup(self) -> None: @@ -64,10 +80,16 @@ def async_setup(self) -> None: async def _async_start(self, event: Event) -> None: """Start processing pending flows.""" - self.hass.data.pop(DISCOVERY_FLOW_DISPATCHER) - - init_coros = [_async_init_flow(self.hass, *flow) for flow in self.pending_flows] - + pending_flows = self.pending_flows + self.pending_flows = {} + self.started = True + init_coros = [ + _async_init_flow( + self.hass, flow_key.domain, flow_values.context, flow_values.data + ) + for flow_key, flows in pending_flows.items() + for flow_values in flows + ] await gather_with_concurrency( FLOW_INIT_LIMIT, *[init_coro for init_coro in init_coros if init_coro is not None], @@ -76,4 +98,8 @@ async def _async_start(self, event: Event) -> None: @callback def async_create(self, domain: str, context: dict[str, Any], data: Any) -> None: """Create and add or queue a flow.""" - self.pending_flows.append((domain, context, data)) + key = PendingFlowKey(domain, context["source"]) + values = PendingFlowValue(context, data) + existing = self.pending_flows.setdefault(key, []) + if not any(existing_values.data == data for existing_values in existing): + existing.append(values) diff --git a/tests/helpers/test_discovery_flow.py b/tests/helpers/test_discovery_flow.py index 549848e5c7b0..4019be803154 100644 --- a/tests/helpers/test_discovery_flow.py +++ b/tests/helpers/test_discovery_flow.py @@ -56,8 +56,11 @@ async def test_async_create_flow_deferred_until_started(hass, 
mock_flow_init): ] -async def test_async_create_flow_checks_existing_flows(hass, mock_flow_init): - """Test existing flows prevent an identical one from being creates.""" +async def test_async_create_flow_checks_existing_flows_after_startup( + hass, mock_flow_init +): + """Test existing flows prevent an identical ones from being after startup.""" + hass.bus.async_fire(EVENT_HOMEASSISTANT_STARTED) with patch( "homeassistant.data_entry_flow.FlowManager.async_has_matching_flow", return_value=True, @@ -69,3 +72,26 @@ async def test_async_create_flow_checks_existing_flows(hass, mock_flow_init): {"properties": {"id": "aa:bb:cc:dd:ee:ff"}}, ) assert not mock_flow_init.mock_calls + + +async def test_async_create_flow_checks_existing_flows_before_startup( + hass, mock_flow_init +): + """Test existing flows prevent an identical ones from being created before startup.""" + hass.state = CoreState.stopped + for _ in range(2): + discovery_flow.async_create_flow( + hass, + "hue", + {"source": config_entries.SOURCE_HOMEKIT}, + {"properties": {"id": "aa:bb:cc:dd:ee:ff"}}, + ) + hass.bus.async_fire(EVENT_HOMEASSISTANT_STARTED) + await hass.async_block_till_done() + assert mock_flow_init.mock_calls == [ + call( + "hue", + context={"source": "homekit"}, + data={"properties": {"id": "aa:bb:cc:dd:ee:ff"}}, + ) + ]
## Proposed change The check for identical discovery flows only worked after the started event because we were still holding the tasks as pending, so there was nothing to compare against. We now check against pending flows as well before the started event; a small standalone sketch of the dedup idea appears after this PR description. If startup took a while we could have ended up with quite the thundering herd when the started event does fire. ## Type of change - [ ] Dependency upgrade - [x] Bugfix (non-breaking change which fixes an issue) - [ ] New integration (thank you!) - [ ] New feature (which adds functionality to an existing integration) - [ ] Deprecation (breaking change to happen in the future) - [ ] Breaking change (fix/feature causing existing functionality to break) - [ ] Code quality improvements to existing code or addition of tests ## Additional information - This PR fixes or closes issue: fixes # - This PR is related to issue: - Link to documentation pull request: ## Checklist - [x] The code change is tested and works locally. - [ ] Local tests pass. **Your PR cannot be merged unless tests pass** - [ ] There is no commented out code in this PR. - [ ] I have followed the [development checklist][dev-checklist] - [ ] The code has been formatted using Black (`black --fast homeassistant tests`) - [x] Tests have been added to verify that the new code works. If user exposed functionality or configuration variables are added/changed: - [ ] Documentation added/updated for [www.home-assistant.io][docs-repository] If the code communicates with devices, web services, or third-party tools: - [ ] The [manifest file][manifest-docs] has all fields filled out correctly. Updated and included derived files by running: `python3 -m script.hassfest`. - [ ] New or updated dependencies have been added to `requirements_all.txt`. Updated by running `python3 -m script.gen_requirements_all`. - [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description. - [ ] Untested files have been added to `.coveragerc`.
To help with the load of incoming pull requests: - [ ] I have reviewed two other [open pull requests][prs] in this repository. [prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure [dev-checklist]: https://developers.home-assistant.io/docs/en/development_checklist.html [manifest-docs]: https://developers.home-assistant.io/docs/en/creating_integration_manifest.html [quality-scale]: https://developers.home-assistant.io/docs/en/next/integration_quality_scale_index.html [docs-repository]: https://github.com/home-assistant/home-assistant.io
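A minimal standalone sketch of the dedup idea (plain Python with illustrative names, not the Home Assistant helper itself):

```python
from typing import Any, NamedTuple

class PendingFlowKey(NamedTuple):
    domain: str
    source: str

# Discovery flows queued before startup, grouped by (domain, source).
pending_flows: dict[PendingFlowKey, list[dict[str, Any]]] = {}

def queue_flow(domain: str, context: dict[str, Any], data: Any) -> None:
    key = PendingFlowKey(domain, context["source"])
    existing = pending_flows.setdefault(key, [])
    # Only queue the flow if no identical payload is already pending,
    # so firing the started event does not create duplicate flows.
    if not any(entry["data"] == data for entry in existing):
        existing.append({"context": context, "data": data})

queue_flow("hue", {"source": "homekit"}, {"id": "aa:bb:cc:dd:ee:ff"})
queue_flow("hue", {"source": "homekit"}, {"id": "aa:bb:cc:dd:ee:ff"})  # duplicate, ignored
assert len(pending_flows[PendingFlowKey("hue", "homekit")]) == 1
```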
https://api.github.com/repos/home-assistant/core/pulls/88213
2023-02-15T21:37:44Z
2023-02-16T02:36:01Z
2023-02-16T02:36:01Z
2023-02-17T03:07:21Z
1,349
home-assistant/core
38,980
Paramount+/CBS URL Changes
diff --git a/yt_dlp/extractor/cbs.py b/yt_dlp/extractor/cbs.py index ae9ce586281..2af36ea825b 100644 --- a/yt_dlp/extractor/cbs.py +++ b/yt_dlp/extractor/cbs.py @@ -77,21 +77,21 @@ class CBSIE(CBSBaseIE): (?: cbs:| https?://(?:www\.)?(?: - cbs\.com/(?:shows/[^/]+/video|movies/[^/]+)/| + cbs\.com/(?:shows|movies)/(?:video|[^/]+/video|[^/]+)/| colbertlateshow\.com/(?:video|podcasts)/) )(?P<id>[\w-]+)''' # All tests are blocked outside US _TESTS = [{ - 'url': 'https://www.cbs.com/shows/garth-brooks/video/_u7W953k6la293J7EPTd9oHkSPs6Xn6_/connect-chat-feat-garth-brooks/', + 'url': 'https://www.cbs.com/shows/video/xrUyNLtl9wd8D_RWWAg9NU2F_V6QpB3R/', 'info_dict': { - 'id': '_u7W953k6la293J7EPTd9oHkSPs6Xn6_', + 'id': 'xrUyNLtl9wd8D_RWWAg9NU2F_V6QpB3R', 'ext': 'mp4', - 'title': 'Connect Chat feat. Garth Brooks', - 'description': 'Connect with country music singer Garth Brooks, as he chats with fans on Wednesday November 27, 2013. Be sure to tune in to Garth Brooks: Live from Las Vegas, Friday November 29, at 9/8c on CBS!', - 'duration': 1495, - 'timestamp': 1385585425, - 'upload_date': '20131127', + 'title': 'Tough As Nails - Dreams Never Die', + 'description': 'md5:a3535a62531cdd52b0364248a2c1ae33', + 'duration': 2588, + 'timestamp': 1639015200, + 'upload_date': '20211209', 'uploader': 'CBSI-NEW', }, 'params': { @@ -99,14 +99,14 @@ class CBSIE(CBSBaseIE): 'skip_download': True, }, }, { - 'url': 'https://www.cbs.com/shows/the-late-show-with-stephen-colbert/video/60icOhMb9NcjbcWnF_gub9XXHdeBcNk2/the-late-show-6-23-21-christine-baranski-joy-oladokun-', + 'url': 'https://www.cbs.com/shows/video/sZH1MGgomIosZgxGJ1l263MFq16oMtW1/', 'info_dict': { - 'id': '60icOhMb9NcjbcWnF_gub9XXHdeBcNk2', - 'title': 'The Late Show - 6/23/21 (Christine Baranski, Joy Oladokun)', - 'timestamp': 1624507140, - 'description': 'md5:e01af24e95c74d55e8775aef86117b95', + 'id': 'sZH1MGgomIosZgxGJ1l263MFq16oMtW1', + 'title': 'The Late Show - 3/16/22 (Michael Buble, Rose Matafeo)', + 'timestamp': 1647488100, + 'description': 'md5:d0e6ec23c544b7fa8e39a8e6844d2439', 'uploader': 'CBSI-NEW', - 'upload_date': '20210624', + 'upload_date': '20220317', }, 'params': { 'ignore_no_formats_error': True, diff --git a/yt_dlp/extractor/paramountplus.py b/yt_dlp/extractor/paramountplus.py index a1d7cd7241f..94a9319ea06 100644 --- a/yt_dlp/extractor/paramountplus.py +++ b/yt_dlp/extractor/paramountplus.py @@ -14,12 +14,12 @@ class ParamountPlusIE(CBSBaseIE): (?: paramountplus:| https?://(?:www\.)?(?: - paramountplus\.com/(?:shows/[^/]+/video|movies/[^/]+)/ + paramountplus\.com/(?:shows|movies)/(?:video|[^/]+/video|[^/]+)/ )(?P<id>[\w-]+))''' # All tests are blocked outside US _TESTS = [{ - 'url': 'https://www.paramountplus.com/shows/catdog/video/Oe44g5_NrlgiZE3aQVONleD6vXc8kP0k/catdog-climb-every-catdog-the-canine-mutiny/', + 'url': 'https://www.paramountplus.com/shows/video/Oe44g5_NrlgiZE3aQVONleD6vXc8kP0k/', 'info_dict': { 'id': 'Oe44g5_NrlgiZE3aQVONleD6vXc8kP0k', 'ext': 'mp4', @@ -34,7 +34,7 @@ class ParamountPlusIE(CBSBaseIE): 'skip_download': 'm3u8', }, }, { - 'url': 'https://www.paramountplus.com/shows/tooning-out-the-news/video/6hSWYWRrR9EUTz7IEe5fJKBhYvSUfexd/7-23-21-week-in-review-rep-jahana-hayes-howard-fineman-sen-michael-bennet-sheera-frenkel-cecilia-kang-/', + 'url': 'https://www.paramountplus.com/shows/video/6hSWYWRrR9EUTz7IEe5fJKBhYvSUfexd/', 'info_dict': { 'id': '6hSWYWRrR9EUTz7IEe5fJKBhYvSUfexd', 'ext': 'mp4', @@ -49,7 +49,7 @@ class ParamountPlusIE(CBSBaseIE): 'skip_download': 'm3u8', }, }, { - 'url': 
'https://www.paramountplus.com/movies/daddys-home/vM2vm0kE6vsS2U41VhMRKTOVHyQAr6pC', + 'url': 'https://www.paramountplus.com/movies/video/vM2vm0kE6vsS2U41VhMRKTOVHyQAr6pC/', 'info_dict': { 'id': 'vM2vm0kE6vsS2U41VhMRKTOVHyQAr6pC', 'ext': 'mp4', @@ -64,7 +64,7 @@ class ParamountPlusIE(CBSBaseIE): }, 'expected_warnings': ['Ignoring subtitle tracks'], # TODO: Investigate this }, { - 'url': 'https://www.paramountplus.com/movies/sonic-the-hedgehog/5EKDXPOzdVf9voUqW6oRuocyAEeJGbEc', + 'url': 'https://www.paramountplus.com/movies/video/5EKDXPOzdVf9voUqW6oRuocyAEeJGbEc/', 'info_dict': { 'id': '5EKDXPOzdVf9voUqW6oRuocyAEeJGbEc', 'ext': 'mp4', @@ -79,10 +79,16 @@ class ParamountPlusIE(CBSBaseIE): }, 'expected_warnings': ['Ignoring subtitle tracks'], }, { - 'url': 'https://www.paramountplus.com/shows/all-rise/video/QmR1WhNkh1a_IrdHZrbcRklm176X_rVc/all-rise-space/', + 'url': 'https://www.paramountplus.com/shows/the-real-world/video/mOVeHeL9ub9yWdyzSZFYz8Uj4ZBkVzQg/the-real-world-reunion/', 'only_matching': True, }, { - 'url': 'https://www.paramountplus.com/movies/million-dollar-american-princesses-meghan-and-harry/C0LpgNwXYeB8txxycdWdR9TjxpJOsdCq', + 'url': 'https://www.paramountplus.com/shows/video/mOVeHeL9ub9yWdyzSZFYz8Uj4ZBkVzQg/', + 'only_matching': True, + }, { + 'url': 'https://www.paramountplus.com/movies/video/W0VyStQqUnqKzJkrpSAIARuCc9YuYGNy/', + 'only_matching': True, + }, { + 'url': 'https://www.paramountplus.com/movies/paw-patrol-the-movie/W0VyStQqUnqKzJkrpSAIARuCc9YuYGNy/', 'only_matching': True, }]
## Please follow the guide below - You will be asked some questions, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your *pull request* (like that [x]) - Use *Preview* tab to see how your *pull request* will actually look like --- ### Before submitting a *pull request* make sure you have: - [X] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions) - [X] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests - [X] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) ### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options: - [X] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/) - [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence) ### What is the purpose of your *pull request*? - [X] Bug fix - [ ] Improvement - [ ] New extractor - [ ] New feature --- ### Description of your *pull request* and other information Paramount and CBS got rid of the show, movie, and episode names from their URLs. So this should take care of the new format and the old as that seems to still work as well.
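A quick standalone check of the relaxed URL matching (a simplified regex based on the Paramount+ part of the diff; the real `_VALID_URL` also covers the `paramountplus:` prefix):

```python
import re

# Simplified from the updated extractor pattern in the diff.
PATTERN = re.compile(
    r'https?://(?:www\.)?paramountplus\.com/'
    r'(?:shows|movies)/(?:video|[^/]+/video|[^/]+)/(?P<id>[\w-]+)'
)

urls = [
    # New style: no show/movie name in the path.
    'https://www.paramountplus.com/shows/video/Oe44g5_NrlgiZE3aQVONleD6vXc8kP0k/',
    # Old style: still supported.
    'https://www.paramountplus.com/shows/the-real-world/video/mOVeHeL9ub9yWdyzSZFYz8Uj4ZBkVzQg/the-real-world-reunion/',
]
for url in urls:
    match = PATTERN.match(url)
    print(match.group('id') if match else 'no match')
```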
https://api.github.com/repos/yt-dlp/yt-dlp/pulls/3098
2022-03-18T05:37:26Z
2022-03-18T09:49:31Z
2022-03-18T09:49:31Z
2022-03-18T09:49:31Z
2,192
yt-dlp/yt-dlp
7,796
feat(replays): Improve render of archived Replay List items
diff --git a/static/app/views/replays/replayTable/tableCell.tsx b/static/app/views/replays/replayTable/tableCell.tsx index c10176860dfdb..468f02fdf9d38 100644 --- a/static/app/views/replays/replayTable/tableCell.tsx +++ b/static/app/views/replays/replayTable/tableCell.tsx @@ -11,7 +11,7 @@ import {StringWalker} from 'sentry/components/replays/walker/urlWalker'; import ScoreBar from 'sentry/components/scoreBar'; import TimeSince from 'sentry/components/timeSince'; import CHART_PALETTE from 'sentry/constants/chartPalette'; -import {IconCalendar, IconLocation} from 'sentry/icons'; +import {IconCalendar, IconDelete, IconLocation} from 'sentry/icons'; import {t, tn} from 'sentry/locale'; import {space, ValidSize} from 'sentry/styles/space'; import type {Organization} from 'sentry/types'; @@ -28,6 +28,24 @@ type Props = { replay: ReplayListRecord | ReplayListRecordWithTx; }; +function getUserBadgeUser(replay: Props['replay']) { + return replay.is_archived + ? { + username: '', + email: '', + id: '', + ip_address: '', + name: '', + } + : { + username: replay.user?.display_name || '', + email: replay.user?.email || '', + id: replay.user?.id || '', + ip_address: replay.user?.ip || '', + name: replay.user?.username || '', + }; +} + export function ReplayCell({ eventView, organization, @@ -45,6 +63,23 @@ export function ReplayCell({ }, }; + if (replay.is_archived) { + return ( + <Item isArchived={replay.is_archived}> + <Row gap={1}> + <StyledIconDelete color="gray500" size="md" /> + <div> + <Row gap={0.5}>{t('Deleted Replay')}</Row> + <Row gap={0.5}> + {project ? <Avatar size={12} project={project} /> : null} + {getShortEventId(replay.id)} + </Row> + </div> + </Row> + </Item> + ); + } + const subText = replay.urls ? ( <Cols> <StringWalker urls={replay.urls} /> @@ -83,17 +118,15 @@ export function ReplayCell({ <UserBadgeFullWidth avatarSize={24} displayName={ - <MainLink to={replayDetails}> - {replay.user.display_name || t('Unknown User')} - </MainLink> + replay.is_archived ? ( + replay.user?.display_name || t('Unknown User') + ) : ( + <MainLink to={replayDetails}> + {replay.user?.display_name || t('Unknown User')} + </MainLink> + ) } - user={{ - username: replay.user.display_name || '', - email: replay.user.email || '', - id: replay.user.id || '', - ip_address: replay.user.ip || '', - name: replay.user.username || '', - }} + user={getUserBadgeUser(replay)} // this is the subheading for the avatar, so displayEmail in this case is a misnomer displayEmail={subText} /> @@ -101,6 +134,10 @@ export function ReplayCell({ ); } +const StyledIconDelete = styled(IconDelete)` + margin: ${space(0.25)}; +`; + // Need to be full width for StringWalker to take up full width and truncate properly const UserBadgeFullWidth = styled(UserBadge)` width: 100%; @@ -130,6 +167,9 @@ export function TransactionCell({ }: Props & {organization: Organization}) { const location = useLocation(); + if (replay.is_archived) { + return <Item isArchived />; + } const hasTxEvent = 'txEvent' in replay; const txDuration = hasTxEvent ? replay.txEvent?.['transaction.duration'] : undefined; return hasTxEvent ? 
( @@ -149,6 +189,9 @@ export function OSCell({replay}: Props) { const theme = useTheme(); const hasRoomForColumns = useMedia(`(min-width: ${theme.breakpoints.large})`); + if (replay.is_archived) { + return <Item isArchived />; + } return ( <Item> <ContextIcon @@ -164,6 +207,9 @@ export function BrowserCell({replay}: Props) { const theme = useTheme(); const hasRoomForColumns = useMedia(`(min-width: ${theme.breakpoints.large})`); + if (replay.is_archived) { + return <Item isArchived />; + } return ( <Item> <ContextIcon @@ -175,6 +221,9 @@ export function BrowserCell({replay}: Props) { } export function DurationCell({replay}: Props) { + if (replay.is_archived) { + return <Item isArchived />; + } return ( <Item> <Time>{formatTime(replay.duration.asMilliseconds())}</Time> @@ -183,6 +232,9 @@ export function DurationCell({replay}: Props) { } export function ErrorCountCell({replay}: Props) { + if (replay.is_archived) { + return <Item isArchived />; + } return ( <Item data-test-id="replay-table-count-errors"> <ErrorCount countErrors={replay.count_errors} /> @@ -191,6 +243,9 @@ export function ErrorCountCell({replay}: Props) { } export function ActivityCell({replay}: Props) { + if (replay.is_archived) { + return <Item isArchived />; + } const scoreBarPalette = new Array(10).fill([CHART_PALETTE[0][0]]); return ( <Item> @@ -204,11 +259,12 @@ export function ActivityCell({replay}: Props) { ); } -const Item = styled('div')` +const Item = styled('div')<{isArchived?: boolean}>` display: flex; align-items: center; gap: ${space(1)}; padding: ${space(1.5)}; + ${p => (p.isArchived ? 'opacity: 0.5;' : '')}; `; const Time = styled('span')`
Give 'archived' results in the replay list special treatment. Some considerations addressed: - No need to link into the details page, it would only 404 - All the fields are set to `0` or the empty string, so we can show nothing instead - consider what the slim list view looks like as well -> not updating the slim view, it's going away in a few days since we have backend search improvements. **Before:** ![Image](https://user-images.githubusercontent.com/187460/231650239-1d15074e-fb91-406b-b95f-7d2bd8a80e32.png) **Regular list view with an archived item:** <img width="1178" alt="SCR-20230414-bzdh" src="https://user-images.githubusercontent.com/187460/231834420-ee037eb7-113b-43b7-9d0c-bfd20bf9f75f.png"> The slim list is not updated, that view is going away in a few days. Fixes https://github.com/getsentry/sentry/issues/47303
https://api.github.com/repos/getsentry/sentry/pulls/47338
2023-04-13T17:15:33Z
2023-04-14T07:59:52Z
2023-04-14T07:59:52Z
2023-04-29T12:00:49Z
1,483
getsentry/sentry
44,419
Update gpt4free/aiassist/__init__.py
diff --git a/gpt4free/aiassist/__init__.py b/gpt4free/aiassist/__init__.py index 10082c2263..f54feaeeab 100644 --- a/gpt4free/aiassist/__init__.py +++ b/gpt4free/aiassist/__init__.py @@ -21,6 +21,7 @@ def create( url = "http://43.153.7.56:8080/api/chat-process" request = requests.post(url, json=json_data) + request.encoding = request.apparent_encoding content = request.content response = Completion.__load_json(content)
### **Added**: `request.encoding = request.apparent_encoding` ### **Now, code must look like this**: ``` url = "https://ai.usesless.com/api/chat-process" request = requests.post(url, headers=Completion.headers, json=json_data) request.encoding = request.apparent_encoding # <--- Fix is here content = request.content ``` ### **My prompt in russian (cyrillic)**: `Привет!` ### **AiAssist's answer WITHOUT my fix**: ``` ÐÑивеÑ! Чем Ñ Ð¼Ð¾Ð³Ñ Ðам помоÑÑ? ``` ### **AiAssist's answer WITH my fix**: ``` Привет! Чем я могу Вам помочь? ```
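A minimal illustration of what the added line does (generic `requests` usage with a placeholder URL; not tied to any particular backend):

```python
import requests

response = requests.post("http://example.com/api/chat-process", json={"prompt": "Привет!"})
# When the server omits the charset, requests can pick the wrong encoding and
# Cyrillic text comes back as mojibake. apparent_encoding detects the encoding
# from the response body itself, so .text decodes correctly afterwards.
response.encoding = response.apparent_encoding
print(response.text)
```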
https://api.github.com/repos/xtekky/gpt4free/pulls/595
2023-05-23T10:36:15Z
2023-05-24T17:39:47Z
2023-05-24T17:39:46Z
2023-05-26T05:18:34Z
144
xtekky/gpt4free
37,968
Add awesome-sphinxdoc: a curated list of awesome tools for Sphinx Python Documentation Generator.
diff --git a/README.md b/README.md index fc0cdb87a..5c9fb6533 100644 --- a/README.md +++ b/README.md @@ -243,6 +243,7 @@ A curated list of awesome Python frameworks, libraries and software. Inspired by *Libraries for generating project documentation.* * [Sphinx](http://sphinx-doc.org/) - Python Documentation generator. + * [awesome-sphinxdoc](https://github.com/yoloseem/awesome-sphinxdoc) * [reStructuredText](http://docutils.sourceforge.net/rst.html) - Markup Syntax and Parser Component of Docutils. * [MkDocs](http://www.mkdocs.org/) - Markdown friendly documentation generator. * [Pycco](http://fitzgen.github.io/pycco/) - The original quick-and-dirty, hundred-line-long, literate-programming-style documentation generator.
This adds a link to the related [repository](https://github.com/yoloseem/awesome-sphinxdoc).
https://api.github.com/repos/vinta/awesome-python/pulls/238
2014-10-13T07:59:25Z
2014-10-16T01:18:29Z
2014-10-16T01:18:29Z
2014-10-16T01:18:29Z
197
vinta/awesome-python
27,112
infra: poetry run min versions 2
diff --git a/.github/workflows/_release.yml b/.github/workflows/_release.yml index c05d92c156a428..ef0c50f767df3c 100644 --- a/.github/workflows/_release.yml +++ b/.github/workflows/_release.yml @@ -183,7 +183,9 @@ jobs: - name: Get minimum versions id: check-version - run: echo "min-versions=$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml)" >> $GITHUB_OUTPUT + run: | + poetry run pip install packaging + echo "min-versions=$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml)" >> $GITHUB_OUTPUT - name: Run unit tests with minimum dependency versions env:
https://api.github.com/repos/langchain-ai/langchain/pulls/17149
2024-02-07T01:57:27Z
2024-02-07T01:57:43Z
2024-02-07T01:57:43Z
2024-02-07T01:57:43Z
191
langchain-ai/langchain
43,448
Searching libcrypto.so in more locations
diff --git a/shadowsocks/crypto/ctypes_openssl.py b/shadowsocks/crypto/ctypes_openssl.py index 9e0dfca87..22238c0a9 100644 --- a/shadowsocks/crypto/ctypes_openssl.py +++ b/shadowsocks/crypto/ctypes_openssl.py @@ -45,9 +45,29 @@ def load_openssl(): if libcrypto_path: break else: + # We may get here when find_library fails because, for example, + # the user does not have sufficient privileges to access those + # tools underlying find_library on linux. + import glob - for libcrypto_path in glob.glob('/usr/lib/libcrypto.*'): - pass + import sys + + patterns = ['/usr/lib/libcrypto.*'] + + # Some linux distros may store so in alternative locations + if sys.maxsize > 2 ** 32: + # Python is 64-bit + patterns.extend(['/usr/lib64/libcrypto.*']) + else: + # Python is 32-bit + patterns.extend(['/usr/lib32/libcrypto.*']) + + for pat in patterns: + files = glob.glob(pat) + if files: + libcrypto_path = files[0] + break + if libcrypto_path is None: raise Exception('libcrypto(OpenSSL) not found') logging.info('loading libcrypto from %s', libcrypto_path) @@ -83,9 +103,9 @@ def load_cipher(cipher_name): class CtypesCrypto(object): def __init__(self, cipher_name, key, iv, op): + self._ctx = None if not loaded: load_openssl() - self._ctx = None cipher = libcrypto.EVP_get_cipherbyname(cipher_name) if not cipher: cipher = load_cipher(cipher_name) diff --git a/shadowsocks/crypto/rc4_md5.py b/shadowsocks/crypto/rc4_md5.py index 3062dcc0f..aa22b1682 100644 --- a/shadowsocks/crypto/rc4_md5.py +++ b/shadowsocks/crypto/rc4_md5.py @@ -39,7 +39,7 @@ def create_cipher(alg, key, iv, op, key_as_bytes=0, d=None, salt=None, try: from shadowsocks.crypto import ctypes_openssl return ctypes_openssl.CtypesCrypto(b'rc4', rc4_key, b'', op) - except: + except Exception: import M2Crypto.EVP return M2Crypto.EVP.Cipher(b'rc4', rc4_key, b'', op, key_as_bytes=0, d='md5', salt=None, i=1, diff --git a/shadowsocks/eventloop.py b/shadowsocks/eventloop.py index 55c30bb98..304b22920 100644 --- a/shadowsocks/eventloop.py +++ b/shadowsocks/eventloop.py @@ -232,8 +232,9 @@ def run(self): logging.error(e) import traceback traceback.print_exc() - for handler in self._handlers_to_remove: - self._handlers.remove(handler) + if self._handlers_to_remove: + for handler in self._handlers_to_remove: + self._handlers.remove(handler) self._handlers_to_remove = [] self._iterating = False
When I tried to set up an `ss` server on my host, which was a shared shard, the `crypto` search failed because `libcrypto.so` was located in `/usr/lib64` rather than `/usr/lib`. A few minor fixes are also included. Change 6eadfca was missing in #256.
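A compact standalone sketch of the lookup order (illustrative, not the shadowsocks code verbatim):

```python
import glob
import sys
from ctypes.util import find_library

def find_libcrypto() -> str:
    # find_library may fail, e.g. when the user lacks access to the tools it relies on.
    path = find_library('crypto')
    if path:
        return path
    patterns = ['/usr/lib/libcrypto.*']
    # Some distros put the library in an architecture-specific directory instead.
    patterns.append('/usr/lib64/libcrypto.*' if sys.maxsize > 2 ** 32 else '/usr/lib32/libcrypto.*')
    for pattern in patterns:
        matches = glob.glob(pattern)
        if matches:
            return matches[0]
    raise Exception('libcrypto(OpenSSL) not found')

print(find_libcrypto())
```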
https://api.github.com/repos/shadowsocks/shadowsocks/pulls/257
2015-01-11T13:33:07Z
2015-01-12T05:07:00Z
2015-01-12T05:07:00Z
2015-01-12T15:26:56Z
776
shadowsocks/shadowsocks
24,699
Limit to major releases rather than second point.
diff --git a/setup.py b/setup.py index 570353330e..2da9ba07c5 100755 --- a/setup.py +++ b/setup.py @@ -42,8 +42,8 @@ def run_tests(self): packages = ['requests'] requires = [ - 'chardet>=3.0.2,<3.1.0', - 'idna>=2.5,<2.9', + 'chardet>=3.0.2,<4', + 'idna>=2.5,<3', 'urllib3>=1.21.1,<1.26,!=1.25.0,!=1.25.1', 'certifi>=2017.4.17' @@ -102,7 +102,7 @@ def run_tests(self): cmdclass={'test': PyTest}, tests_require=test_requirements, extras_require={ - 'security': ['pyOpenSSL >= 0.14', 'cryptography>=1.3.4', 'idna>=2.0.0'], + 'security': ['pyOpenSSL >= 0.14', 'cryptography>=1.3.4'], 'socks': ['PySocks>=1.5.6, !=1.5.7'], 'socks:sys_platform == "win32" and python_version == "2.7"': ['win_inet_pton'], },
requests should trust its dependencies to follow semver rather than artificially limiting version compatibility, which causes problems for pip. Fixes https://github.com/psf/requests/issues/5341, https://github.com/psf/requests/issues/5337 and supersedes https://github.com/psf/requests/pull/5226.
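A tiny check of what the relaxed pins accept, using the `packaging` library's PEP 440 specifiers (illustrative only):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

old_pin = SpecifierSet(">=3.0.2,<3.1.0")  # previous chardet pin
new_pin = SpecifierSet(">=3.0.2,<4")      # relaxed to "anything below the next major"

print(Version("3.0.4") in old_pin, Version("3.0.4") in new_pin)  # True True
print(Version("3.1.0") in old_pin, Version("3.1.0") in new_pin)  # False True
print(Version("4.0.0") in new_pin)                               # False
```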
https://api.github.com/repos/psf/requests/pulls/5342
2020-02-17T17:09:06Z
2020-02-18T14:58:28Z
2020-02-18T14:58:27Z
2021-08-29T00:07:05Z
314
psf/requests
32,066
fix tokenization_test
diff --git a/tokenization_test.py b/tokenization_test.py index 8a46028ef..e38af9414 100644 --- a/tokenization_test.py +++ b/tokenization_test.py @@ -30,7 +30,7 @@ def test_full_tokenizer(self): "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn", "##ing", "," ] - with tempfile.NamedTemporaryFile(delete=False) as vocab_writer: + with tempfile.NamedTemporaryFile(mode='w+', delete=False) as vocab_writer: vocab_writer.write("".join([x + "\n" for x in vocab_tokens])) vocab_file = vocab_writer.name
tokenization_test currently fails because the NamedTemporaryFile is opened as a binary file, but the vocab is written as text.
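A minimal reproduction of the underlying issue with plain `tempfile` (independent of the BERT test itself):

```python
import tempfile

vocab_tokens = ["[UNK]", "[CLS]", "[SEP]"]

# The default mode is 'w+b' (binary), so writing str raises TypeError on Python 3.
with tempfile.NamedTemporaryFile(delete=False) as vocab_writer:
    try:
        vocab_writer.write("".join(token + "\n" for token in vocab_tokens))
    except TypeError as exc:
        print("binary mode:", exc)

# Opening in text mode ('w+') accepts str, matching how the vocab is written.
with tempfile.NamedTemporaryFile(mode="w+", delete=False) as vocab_writer:
    vocab_writer.write("".join(token + "\n" for token in vocab_tokens))
    print("text mode: wrote", vocab_writer.name)
```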
https://api.github.com/repos/google-research/bert/pulls/93
2018-11-09T22:21:18Z
2018-11-09T22:32:36Z
2018-11-09T22:32:36Z
2018-11-09T22:37:03Z
162
google-research/bert
38,427
Typos
diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md index 718d1086d..397ff5dd1 100644 --- a/CppCoreGuidelines.md +++ b/CppCoreGuidelines.md @@ -3398,7 +3398,7 @@ Placing them in the same namespace as the class makes their relationship to the ##### Note -This is expecially important for [overloaded operators](#Ro-namespace). +This is especially important for [overloaded operators](#Ro-namespace). ##### Enforcement @@ -6664,7 +6664,7 @@ Avoiding inconsistent definition in different namespaces bool s==s; -This is what a deafault == would do, if we had such defaults. +This is what a default `==` would do, if we had such defaults. ##### Example
expecially => especially (This is expecially important for) deafault => default (This is what a deafault == would do) And `==` marked-down as code.
https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/536
2016-02-26T20:40:34Z
2016-03-20T16:48:56Z
2016-03-20T16:48:56Z
2016-03-20T16:48:56Z
188
isocpp/CppCoreGuidelines
15,414
cp of 8353
diff --git a/doc/doc_ch/algorithm_overview.md b/doc/doc_ch/algorithm_overview.md index 7f6919c13a..02a4cbad95 100755 --- a/doc/doc_ch/algorithm_overview.md +++ b/doc/doc_ch/algorithm_overview.md @@ -3,6 +3,8 @@ - [1. 两阶段OCR算法](#1) - [1.1 文本检测算法](#11) - [1.2 文本识别算法](#12) + - [1.3 文本超分辨率算法](#13) + - [1.4 公式识别算法](#14) - [2. 端到端OCR算法](#2) - [3. 表格识别算法](#3) - [4. 关键信息抽取算法](#4) @@ -107,6 +109,34 @@ PaddleOCR将**持续新增**支持OCR领域前沿算法与模型,**欢迎广 |RobustScanner|ResNet31| 87.77% | rec_r31_robustscanner | [训练模型](https://paddleocr.bj.bcebos.com/contribution/rec_r31_robustscanner.tar)| |RFL|ResNetRFL| 88.63% | rec_resnet_rfl_att | [训练模型](https://paddleocr.bj.bcebos.com/contribution/rec_resnet_rfl_att_train.tar) | + +<a name="13"></a> + +### 1.3 文本超分辨率算法 +已支持的文本超分辨率算法列表(戳链接获取使用教程): +- [x] [Text Gestalt](./algorithm_sr_gestalt.md) +- [x] [Text Telescope](./algorithm_sr_telescope.md) + +在TextZoom公开数据集上,算法效果如下: + +|模型|骨干网络|PSNR_Avg|SSIM_Avg|配置文件|下载链接| +|---|---|---|---|---|---| +|Text Gestalt|tsrn|19.28|0.6560| [configs/sr/sr_tsrn_transformer_strock.yml](../../configs/sr/sr_tsrn_transformer_strock.yml)|[训练模型](https://paddleocr.bj.bcebos.com/sr_tsrn_transformer_strock_train.tar)| +|Text Telescope|tbsrn|21.56|0.7411| [configs/sr/sr_telescope.yml](../../configs/sr/sr_telescope.yml)|[训练模型](https://paddleocr.bj.bcebos.com/contribution/sr_telescope_train.tar)| + +<a name="14"></a> + +### 1.4 公式识别算法 + +已支持的公式识别算法列表(戳链接获取使用教程): +- [x] [CAN](./algorithm_rec_can.md.md) + +在CROHME手写公式数据集上,算法效果如下: + +|模型 |骨干网络|配置文件|ExpRate|下载链接| +| ----- | ----- | ----- | ----- | ----- | +|CAN|DenseNet|[rec_d28_can.yml](../../configs/rec/rec_d28_can.yml)|51.72%|[训练模型](https://paddleocr.bj.bcebos.com/contribution/rec_d28_can_train.tar)| + <a name="2"></a> ## 2. 端到端算法 diff --git a/doc/doc_ch/table_recognition.md b/doc/doc_ch/table_recognition.md index 156ba80e37..f09dedd038 100644 --- a/doc/doc_ch/table_recognition.md +++ b/doc/doc_ch/table_recognition.md @@ -14,6 +14,9 @@ - [2.5. 分布式训练](#25-分布式训练) - [2.6. 其他训练环境](#26-其他训练环境) - [2.7. 模型微调](#27-模型微调) + - [2.7.1 数据选择](#271-数据选择) + - [2.7.2 模型选择](#272-模型选择) + - [2.7.3 训练超参选择](#273-训练超参选择) - [3. 模型评估与预测](#3-模型评估与预测) - [3.1. 指标评估](#31-指标评估) - [3.2. 测试表格结构识别效果](#32-测试表格结构识别效果) @@ -219,7 +222,39 @@ DCU设备上运行需要设置环境变量 `export HIP_VISIBLE_DEVICES=0,1,2,3` ## 2.7. 模型微调 -实际使用过程中,建议加载官方提供的预训练模型,在自己的数据集中进行微调,关于模型的微调方法,请参考:[模型微调教程](./finetune.md)。 +### 2.7.1 数据选择 + +数据量:建议至少准备2000张的表格识别数据集用于模型微调。 + +### 2.7.2 模型选择 + +建议选择SLANet模型(配置文件:[SLANet_ch.yml](../../configs/table/SLANet_ch.yml),预训练模型:[ch_ppstructure_mobile_v2.0_SLANet_train.tar](https://paddleocr.bj.bcebos.com/ppstructure/models/slanet/ch_ppstructure_mobile_v2.0_SLANet_train.tar))进行微调,其精度与泛化性能是目前提供的最优中文表格预训练模型。 + +更多表格识别模型,请参考[PP-Structure 系列模型库](../../ppstructure/docs/models_list.md)。 + +### 2.7.3 训练超参选择 + +在模型微调的时候,最重要的超参就是预训练模型路径`pretrained_model`, 学习率`learning_rate`,部分配置文件如下所示。 + +```yaml +Global: + pretrained_model: ./ch_ppstructure_mobile_v2.0_SLANet_train/best_accuracy.pdparams # 预训练模型路径 +Optimizer: + lr: + name: Cosine + learning_rate: 0.001 # + warmup_epoch: 0 + regularizer: + name: 'L2' + factor: 0 +``` + +上述配置文件中,首先需要将`pretrained_model`字段指定为`best_accuracy.pdparams`文件路径。 + +PaddleOCR提供的配置文件是在4卡训练(相当于总的batch size是`4*48=192`)、且没有加载预训练模型情况下的配置文件,因此您的场景中,学习率与总的batch size需要对应线性调整,例如 + +* 如果您的场景中是单卡训练,单卡batch_size=48,则总的batch_size=48,建议将学习率调整为`0.00025`左右。 +* 如果您的场景中是单卡训练,由于显存限制,只能设置单卡batch_size=32,则总的batch_size=32,建议将学习率调整为`0.00017`左右。 # 3. 
模型评估与预测 diff --git a/doc/doc_en/algorithm_overview_en.md b/doc/doc_en/algorithm_overview_en.md index 309d074ed4..fad0fb8a72 100755 --- a/doc/doc_en/algorithm_overview_en.md +++ b/doc/doc_en/algorithm_overview_en.md @@ -3,6 +3,8 @@ - [1. Two-stage OCR Algorithms](#1) - [1.1 Text Detection Algorithms](#11) - [1.2 Text Recognition Algorithms](#12) + - [1.3 Text Super-Resolution Algorithms](#13) + - [1.4 Formula Recognition Algorithm](#14) - [2. End-to-end OCR Algorithms](#2) - [3. Table Recognition Algorithms](#3) - [4. Key Information Extraction Algorithms](#4) @@ -104,6 +106,36 @@ Refer to [DTRB](https://arxiv.org/abs/1904.01906), the training and evaluation r |RobustScanner|ResNet31| 87.77% | rec_r31_robustscanner | [trained model](https://paddleocr.bj.bcebos.com/contribution/rec_r31_robustscanner.tar)| |RFL|ResNetRFL| 88.63% | rec_resnet_rfl_att | [trained model](https://paddleocr.bj.bcebos.com/contribution/rec_resnet_rfl_att_train.tar) | +<a name="13"></a> + +### 1.3 Text Super-Resolution Algorithms + +Supported text super-resolution algorithms (Click the link to get the tutorial): +- [x] [Text Gestalt](./algorithm_sr_gestalt.md) +- [x] [Text Telescope](./algorithm_sr_telescope.md) + +On the TextZoom public dataset, the effect of the algorithm is as follows: + +|Model|Backbone|PSNR_Avg|SSIM_Avg|Config|Download link| +|---|---|---|---|---|---| +|Text Gestalt|tsrn|19.28|0.6560| [configs/sr/sr_tsrn_transformer_strock.yml](../../configs/sr/sr_tsrn_transformer_strock.yml)|[trained model](https://paddleocr.bj.bcebos.com/sr_tsrn_transformer_strock_train.tar)| +|Text Telescope|tbsrn|21.56|0.7411| [configs/sr/sr_telescope.yml](../../configs/sr/sr_telescope.yml)|[trained model](https://paddleocr.bj.bcebos.com/contribution/sr_telescope_train.tar)| + +<a name="14"></a> + +### 1.4 Formula Recognition Algorithm + +Supported formula recognition algorithms (Click the link to get the tutorial): + +- [x] [CAN](./algorithm_rec_can.md.md) + +On the CROHME handwritten formula dataset, the effect of the algorithm is as follows: + +|Model |Backbone|Config|ExpRate|Download link| +| ----- | ----- | ----- | ----- | ----- | +|CAN|DenseNet|[rec_d28_can.yml](../../configs/rec/rec_d28_can.yml)|51.72%|[trained model](https://paddleocr.bj.bcebos.com/contribution/rec_d28_can_train.tar)| + + <a name="2"></a> ## 2. End-to-end OCR Algorithms @@ -122,7 +154,7 @@ On the PubTabNet dataset, the algorithm result is as follows: |Model|Backbone|Config|Acc|Download link| |---|---|---|---|---| -|TableMaster|TableResNetExtra|[configs/table/table_master.yml](../../configs/table/table_master.yml)|77.47%|[trained](https://paddleocr.bj.bcebos.com/ppstructure/models/tablemaster/table_structure_tablemaster_train.tar) / [inference model](https://paddleocr.bj.bcebos.com/ppstructure/models/tablemaster/table_structure_tablemaster_infer.tar)| +|TableMaster|TableResNetExtra|[configs/table/table_master.yml](../../configs/table/table_master.yml)|77.47%|[trained model](https://paddleocr.bj.bcebos.com/ppstructure/models/tablemaster/table_structure_tablemaster_train.tar) / [inference model](https://paddleocr.bj.bcebos.com/ppstructure/models/tablemaster/table_structure_tablemaster_infer.tar)| <a name="4"></a> diff --git a/doc/doc_en/table_recognition_en.md b/doc/doc_en/table_recognition_en.md index cff2933df2..d79d98936e 100644 --- a/doc/doc_en/table_recognition_en.md +++ b/doc/doc_en/table_recognition_en.md @@ -14,6 +14,9 @@ This article provides a full-process guide for the PaddleOCR table recognition m - [2.5. 
Distributed Training](#25-distributed-training) - [2.6. Training on other platform(Windows/macOS/Linux DCU)](#26-training-on-other-platformwindowsmacoslinux-dcu) - [2.7. Fine-tuning](#27-fine-tuning) + - [2.7.1 Dataset](#271-dataset) + - [2.7.2 model selection](#272-model-selection) + - [2.7.3 Training hyperparameter selection](#273-training-hyperparameter-selection) - [3. Evaluation and Test](#3-evaluation-and-test) - [3.1. Evaluation](#31-evaluation) - [3.2. Test table structure recognition effect](#32-test-table-structure-recognition-effect) @@ -226,8 +229,40 @@ Running on a DCU device requires setting the environment variable `export HIP_VI ## 2.7. Fine-tuning -In the actual use process, it is recommended to load the officially provided pre-training model and fine-tune it in your own data set. For the fine-tuning method of the table recognition model, please refer to: [Model fine-tuning tutorial](./finetune.md). +### 2.7.1 Dataset + +Data number: It is recommended to prepare at least 2000 table recognition datasets for model fine-tuning. + +### 2.7.2 model selection + +It is recommended to choose the SLANet model (configuration file: [SLANet_ch.yml](../../configs/table/SLANet_ch.yml), pre-training model: [ch_ppstructure_mobile_v2.0_SLANet_train.tar](https://paddleocr.bj.bcebos .com/ppstructure/models/slanet/ch_ppstructure_mobile_v2.0_SLANet_train.tar)) for fine-tuning, its accuracy and generalization performance is the best Chinese table pre-training model currently available. + +For more table recognition models, please refer to [PP-Structure Series Model Library](../../ppstructure/docs/models_list.md). + +### 2.7.3 Training hyperparameter selection + +When fine-tuning the model, the most important hyperparameters are the pretrained model path `pretrained_model`, the learning rate `learning_rate`, and some configuration files are shown below. + +```yaml +Global: + pretrained_model: ./ch_ppstructure_mobile_v2.0_SLANet_train/best_accuracy.pdparams # Pre-trained model path +Optimizer: + lr: + name: Cosine + learning_rate: 0.001 # + warmup_epoch: 0 + regularizer: + name: 'L2' + factor: 0 +``` + +In the above configuration file, you first need to specify the `pretrained_model` field as the `best_accuracy.pdparams` file path. + +The configuration file provided by PaddleOCR is for 4-card training (equivalent to a total batch size of `4*48=192`) and no pre-trained model is loaded. Therefore, in your scenario, the learning rate is the same as the total The batch size needs to be adjusted linearly, for example + +* If your scenario is single card training, single card batch_size=48, then the total batch_size=48, it is recommended to adjust the learning rate to about `0.00025`. +* If your scenario is for single-card training, due to memory limitations, you can only set batch_size=32 for a single card, then the total batch_size=32, it is recommended to adjust the learning rate to about `0.00017`. # 3. Evaluation and Test
https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/8354
2022-11-17T07:54:38Z
2022-11-17T07:54:54Z
2022-11-17T07:54:54Z
2022-11-17T07:54:55Z
3,551
PaddlePaddle/PaddleOCR
41,782
Update to include the book "x86-64 Assembly Language Programming with Ubuntu" by Ed Jorgensen
diff --git a/README.md b/README.md index 34f0b57..d1fdb42 100644 --- a/README.md +++ b/README.md @@ -1242,6 +1242,7 @@ Richard Feynman's Learning Strategy: - [Memory Allocation](https://samwho.dev/memory-allocation/) (an interactive article) - [Why does 0.1 + 0.2 = 0.30000000000000004?](https://jvns.ca/blog/2023/02/08/why-does-0-1-plus-0-2-equal-0-30000000000000004/), Julia Evans (about floating point) - [Putting the "You" in CPU](https://cpu.land/the-basics) +- [x86-64 Assembly Language Programming with Ubuntu](http://www.egr.unlv.edu/~ed/assembly64.pdf) ### Machine learning/AI
# Why this book should be added The book "x86-64 Assembly Language Programming with Ubuntu" covers all the information you need to get started with writing assembly code. It opens with an introduction discussing the benefits of assembly and an extensive description of the architecture that covers every prerequisite. It includes a section on debugging your programs using DDD, so you can inspect and watch the variables in your code update in real time. From its opening coverage of basic computer architecture to its conclusion on system interrupts and parallel computing, it arms readers with all they need to write efficient assembly code.
https://api.github.com/repos/charlax/professional-programming/pulls/67
2024-01-22T18:34:16Z
2024-02-26T03:04:14Z
2024-02-26T03:04:14Z
2024-02-26T03:04:14Z
204
charlax/professional-programming
21,519
Skip Exchanges
diff --git a/skip-tests.json b/skip-tests.json index f55562045cbc..31a2795892e0 100644 --- a/skip-tests.json +++ b/skip-tests.json @@ -299,7 +299,10 @@ } }, "coinbaseprime": { - "skipPhpAsync":true + "skipPhpAsync":true, + "skipMethods": { + "fetchStatus": "request timeout" + } }, "coinspot": { "skipMethods": { @@ -610,7 +613,8 @@ "skipMethods": { "loadMarkets": "precision key has an null value, but is expected to have a value", "fetchTickers": "quoteVolume >= baseVolume * low does not get true", - "fetchTicker": "same" + "fetchTicker": "same", + "fetchCurrencies": "info key is missing from structure <<< paymium fetchCurrencie" } }, "phemex": {
https://api.github.com/repos/ccxt/ccxt/pulls/17590
2023-04-18T17:42:12Z
2023-04-18T17:44:53Z
2023-04-18T17:44:53Z
2023-04-18T17:44:53Z
229
ccxt/ccxt
13,882
Update old issue link to point to letsencrypt community forums.
diff --git a/certbot/client.py b/certbot/client.py index b735421f5a2..bc25da54918 100644 --- a/certbot/client.py +++ b/certbot/client.py @@ -556,11 +556,11 @@ def _rollback_and_restart(self, success_msg): self.installer.rollback_checkpoints() self.installer.restart() except: - # TODO: suggest letshelp-letsencrypt here reporter.add_message( "An error occurred and we failed to restore your config and " - "restart your server. Please submit a bug report to " - "https://github.com/letsencrypt/letsencrypt", + "restart your server. Please post to " + "https://community.letsencrypt.org/c/server-config " + "with details about your configuration and this error you received.", reporter.HIGH_PRIORITY) raise reporter.add_message(success_msg, reporter.HIGH_PRIORITY)
Issue #5527 came from this outdated link-- we should at least update this to certbot/certbot and maybe even direct people to the community forums? Also specify that we need information/context about the error in order to debug it.
https://api.github.com/repos/certbot/certbot/pulls/5538
2018-02-05T20:18:34Z
2018-02-06T00:27:21Z
2018-02-06T00:27:21Z
2018-02-06T00:27:21Z
209
certbot/certbot
1,296
Tag/Option Parsing in Prompts From File
diff --git a/scripts/prompts_from_file.py b/scripts/prompts_from_file.py index b24f1a80604..5732623f48a 100644 --- a/scripts/prompts_from_file.py +++ b/scripts/prompts_from_file.py @@ -28,6 +28,44 @@ def ui(self, is_img2img): checkbox_txt.change(fn=lambda x: [gr.File.update(visible = not x), gr.TextArea.update(visible = x)], inputs=[checkbox_txt], outputs=[file, prompt_txt]) return [checkbox_txt, file, prompt_txt] + def process_string_tag(self, tag): + return tag[1:-2] + + def process_int_tag(self, tag): + return int(tag) + + def process_float_tag(self, tag): + return float(tag) + + def process_boolean_tag(self, tag): + return True if (tag == "true") else False + + prompt_tags = { + "sd_model": None, + "outpath_samples": process_string_tag, + "outpath_grids": process_string_tag, + "prompt_for_display": process_string_tag, + "prompt": process_string_tag, + "negative_prompt": process_string_tag, + "styles": process_string_tag, + "seed": process_int_tag, + "subseed_strength": process_float_tag, + "subseed": process_int_tag, + "seed_resize_from_h": process_int_tag, + "seed_resize_from_w": process_int_tag, + "sampler_index": process_int_tag, + "batch_size": process_int_tag, + "n_iter": process_int_tag, + "steps": process_int_tag, + "cfg_scale": process_float_tag, + "width": process_int_tag, + "height": process_int_tag, + "restore_faces": process_boolean_tag, + "tiling": process_boolean_tag, + "do_not_save_samples": process_boolean_tag, + "do_not_save_grid": process_boolean_tag + } + def on_show(self, checkbox_txt, file, prompt_txt): return [ gr.Checkbox.update(visible = True), gr.File.update(visible = not checkbox_txt), gr.TextArea.update(visible = checkbox_txt) ] @@ -41,6 +79,7 @@ def run(self, p, checkbox_txt, data: bytes, prompt_txt: str): img_count = len(lines) * p.n_iter batch_count = math.ceil(img_count / p.batch_size) loop_count = math.ceil(batch_count / p.n_iter) + # These numbers no longer accurately reflect the total images and number of batches print(f"Will process {img_count} images in {batch_count} batches.") p.do_not_save_grid = True @@ -50,7 +89,25 @@ def run(self, p, checkbox_txt, data: bytes, prompt_txt: str): images = [] for loop_no in range(loop_count): state.job = f"{loop_no + 1} out of {loop_count}" - p.prompt = lines[loop_no*p.batch_size:(loop_no+1)*p.batch_size] * p.n_iter + # The following line may need revising to remove batch_size references + current_line = lines[loop_no*p.batch_size:(loop_no+1)*p.batch_size] * p.n_iter + + # If the current line has no tags, parse the whole line as a prompt, else parse each tag + if(current_line[0][:2] != "--"): + p.prompt = current_line + else: + tokenized_line = current_line[0].split("--") + + for tag in tokenized_line: + tag_split = tag.split(" ", 1) + if(tag_split[0] != ''): + value_func = self.prompt_tags.get(tag_split[0], None) + if(value_func != None): + value = value_func(self, tag_split[1]) + setattr(p, tag_split[0], value) + else: + print(f"Unknown option \"{tag_split}\"") + proc = process_images(p) images += proc.images
This implements parsing for GUI options in prompts from file, allowing users to select a custom number of steps, seed, sampler, etc on a per line basis. Prompts that have been written without option parsing in mind will work as they have up to this point, even in the same file as prompts with options. An example prompt to demonstrate the option syntax is as follows: `--prompt "a man surfing in hawaii" --negative_prompt "extra_limbs" --seed 350 --steps 50` A full list of supported options can be found in the prompts_from_file.py file inside the prompt_tags dictionary. Due to the way the code is currently written, total image count and batch size are being calculated before any lines get parsed, meaning that with these new changes, they're nearly always wrong. Functionally this has no consequences other than the loading bar displaying incorrect completion percentages. As I'm not sure how to go about fixing this, I've left it untouched.
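A short standalone sketch of how a tagged line is split into options (simplified; the real script maps each tag through the converters in its `prompt_tags` table):

```python
line = '--prompt "a man surfing in hawaii" --negative_prompt "extra_limbs" --seed 350 --steps 50'

options = {}
for tag in line.split("--"):
    tag = tag.strip()
    if not tag:
        continue
    name, _, value = tag.partition(" ")
    # Strip whitespace and surrounding quotes; numeric tags are converted below.
    options[name] = value.strip().strip('"')

options["seed"] = int(options["seed"])
options["steps"] = int(options["steps"])
print(options)
# {'prompt': 'a man surfing in hawaii', 'negative_prompt': 'extra_limbs', 'seed': 350, 'steps': 50}
```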
https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/1446
2022-10-01T15:08:14Z
2022-10-15T07:47:51Z
2022-10-15T07:47:51Z
2022-10-15T09:43:07Z
920
AUTOMATIC1111/stable-diffusion-webui
40,397
Removed unneeded print statements from mongokit pattern doc
diff --git a/docs/patterns/mongokit.rst b/docs/patterns/mongokit.rst index b50cf4568a..b4b6fc011c 100644 --- a/docs/patterns/mongokit.rst +++ b/docs/patterns/mongokit.rst @@ -122,9 +122,6 @@ collection first, this is somewhat the same as a table in the SQL world. >>> user = {'name': u'admin', 'email': u'admin@localhost'} >>> collection.insert(user) -print list(collection.find()) -print collection.find_one({'name': u'admin'}) - MongoKit will automatically commit for us. To query your database, you use the collection directly:
Fixes #477
https://api.github.com/repos/pallets/flask/pulls/479
2012-04-08T23:09:33Z
2012-06-17T13:25:10Z
2012-06-17T13:25:10Z
2020-11-14T07:08:08Z
156
pallets/flask
20,612
Adds 2019 Honda Pilot
diff --git a/selfdrive/car/honda/carstate.py b/selfdrive/car/honda/carstate.py index 90488cc86d5649..dd575c3c605f9f 100644 --- a/selfdrive/car/honda/carstate.py +++ b/selfdrive/car/honda/carstate.py @@ -106,9 +106,7 @@ def get_can_signals(CP): elif CP.carFingerprint == CAR.ACURA_ILX: signals += [("CAR_GAS", "GAS_PEDAL_2", 0), ("MAIN_ON", "SCM_BUTTONS", 0)] - elif CP.carFingerprint == CAR.CRV: - signals += [("MAIN_ON", "SCM_BUTTONS", 0)] - elif CP.carFingerprint == CAR.ACURA_RDX: + elif CP.carFingerprint in (CAR.CRV, CAR.ACURA_RDX, CAR.PILOT_2019, CAR.RIDGELINE): signals += [("MAIN_ON", "SCM_BUTTONS", 0)] elif CP.carFingerprint == CAR.ODYSSEY: signals += [("MAIN_ON", "SCM_FEEDBACK", 0), @@ -118,8 +116,6 @@ def get_can_signals(CP): elif CP.carFingerprint == CAR.PILOT: signals += [("MAIN_ON", "SCM_BUTTONS", 0), ("CAR_GAS", "GAS_PEDAL_2", 0)] - elif CP.carFingerprint == CAR.RIDGELINE: - signals += [("MAIN_ON", "SCM_BUTTONS", 0)] # add gas interceptor reading if we are using it if CP.enableGasInterceptor: @@ -252,7 +248,7 @@ def update(self, cp): self.pedal_gas = cp.vl["POWERTRAIN_DATA"]['PEDAL_GAS'] # crv doesn't include cruise control - if self.CP.carFingerprint in (CAR.CRV, CAR.ODYSSEY, CAR.ACURA_RDX, CAR.RIDGELINE): + if self.CP.carFingerprint in (CAR.CRV, CAR.ODYSSEY, CAR.ACURA_RDX, CAR.RIDGELINE, CAR.PILOT_2019): self.car_gas = self.pedal_gas else: self.car_gas = cp.vl["GAS_PEDAL_2"]['CAR_GAS'] diff --git a/selfdrive/car/honda/hondacan.py b/selfdrive/car/honda/hondacan.py index c0b3f1d36ebb79..a48c199d9f8c11 100644 --- a/selfdrive/car/honda/hondacan.py +++ b/selfdrive/car/honda/hondacan.py @@ -142,6 +142,8 @@ def create_radar_commands(v_ego, car_fingerprint, new_radar_config, idx): msg_0x301 = "\x0f\x18\x51\x02\x5a\x00\x00" elif car_fingerprint == CAR.PILOT: msg_0x301 = "\x00\x00\x56\x02\x58\x00\x00" + elif car_fingerprint == CAR.PILOT_2019: + msg_0x301 = "\x00\x00\x58\x02\x5c\x00\x00" elif car_fingerprint == CAR.RIDGELINE: msg_0x301 = "\x00\x00\x56\x02\x57\x00\x00" commands.append(make_can_msg(0x300, msg_0x300, idx, 1)) diff --git a/selfdrive/car/honda/interface.py b/selfdrive/car/honda/interface.py index e9c76d5e468ea9..91fbb078a2f202 100755 --- a/selfdrive/car/honda/interface.py +++ b/selfdrive/car/honda/interface.py @@ -288,7 +288,7 @@ def get_params(candidate, fingerprint): ret.longitudinalKiBP = [0., 35.] 
ret.longitudinalKiV = [0.18, 0.12] - elif candidate == CAR.PILOT: + elif candidate in (CAR.PILOT, CAR.PILOT_2019): stop_and_go = False ret.mass = 4303 * CV.LB_TO_KG + std_cargo ret.wheelbase = 2.81 diff --git a/selfdrive/car/honda/values.py b/selfdrive/car/honda/values.py index 67948fdd9767c9..243849431c7c11 100644 --- a/selfdrive/car/honda/values.py +++ b/selfdrive/car/honda/values.py @@ -47,6 +47,7 @@ class CAR: ODYSSEY = "HONDA ODYSSEY 2018 EX-L" ACURA_RDX = "ACURA RDX 2018 ACURAWATCH PLUS" PILOT = "HONDA PILOT 2017 TOURING" + PILOT_2019 = "HONDA PILOT 2019 ELITE" RIDGELINE = "HONDA RIDGELINE 2017 BLACK EDITION" @@ -88,6 +89,9 @@ class CAR: CAR.PILOT: [{ 57: 3, 145: 8, 228: 5, 229: 4, 308: 5, 316: 8, 334: 8, 339: 7, 342: 6, 344: 8, 379: 8, 380: 8, 392: 6, 399: 7, 419: 8, 420: 8, 422: 8, 425: 8, 426: 8, 427: 3, 432: 7, 463: 8, 464: 8, 476: 4, 490: 8, 506: 8, 507: 1, 512: 6, 513: 6, 538: 3, 542: 7, 545: 5, 546: 3, 597: 8, 660: 8, 773: 7, 777: 8, 780: 8, 795: 8, 800: 8, 804: 8, 808: 8, 819: 7, 821: 5, 829: 5, 837: 5, 856: 7, 871: 8, 882: 2, 884: 7, 891: 8, 892: 8, 923: 2, 929: 8, 963: 8, 965: 8, 966: 8, 967: 8, 983: 8, 985: 3, 1027: 5, 1029: 8, 1036: 8, 1039: 8, 1064: 7, 1088: 8, 1089: 8, 1108: 8, 1125: 8, 1296: 8, 1424: 5, 1600: 5, 1601: 8, 1612: 5, 1613: 5, 1616: 5, 1618: 5, 1668: 5 }], + CAR.PILOT_2019: [{ + 57: 3, 145: 8, 228: 5, 308: 5, 316: 8, 334: 8, 342: 6, 344: 8, 379: 8, 380: 8, 399: 7, 411: 5, 419: 8, 420: 8, 422: 8, 425: 8, 426: 8, 427: 3, 432: 7, 463: 8, 464: 8, 476: 4, 490: 8, 506: 8, 538: 3, 542: 7, 545: 5, 546: 3, 597: 8, 660: 8, 773: 7, 777: 8, 780: 8, 795: 8, 800: 8, 804: 8, 808: 8, 817: 4, 819: 7, 821: 5, 825: 4, 829: 5, 837: 5, 856: 7, 871: 8, 881: 8, 882: 2, 884: 7, 891: 8, 892: 8, 923: 2, 927: 8, 929: 8, 983: 8, 985: 3, 1029: 8, 1052: 8, 1064: 7, 1088: 8, 1089: 8, 1092: 1, 1108: 8, 1110: 8, 1125: 8, 1296: 8, 1424: 5, 1445: 8, 1600: 5, 1601: 8, 1612: 5, 1613: 5, 1614: 5, 1615: 8, 1616: 5, 1617: 8, 1618: 5, 1623: 5, 1668: 5 + }], # Ridgeline w/ Added Comma Pedal Support (512L & 513L) CAR.RIDGELINE: [{ 57: 3, 145: 8, 228: 5, 229: 4, 308: 5, 316: 8, 339: 7, 342: 6, 344: 8, 380: 8, 392: 6, 399: 7, 419: 8, 420: 8, 422: 8, 425: 8, 426: 8, 427: 3, 432: 7, 464: 8, 471: 3, 476: 4, 490: 8, 506: 8, 512: 6, 513: 6, 545: 5, 546: 3, 597: 8, 660: 8, 773: 7, 777: 8, 780: 8, 795: 8, 800: 8, 804: 8, 808: 8, 819: 7, 821: 5, 829: 5, 871: 8, 882: 2, 884: 7, 892: 8, 923: 2, 927: 8, 929: 8, 963: 8, 965: 8, 966: 8, 967: 8, 983: 8, 985: 3, 1027: 5, 1029: 8, 1036: 8, 1039: 8, 1064: 7, 1088: 8, 1089: 8, 1108: 8, 1125: 8, 1296: 8, 1365: 5, 1424: 5, 1600: 5, 1601: 8, 1613: 5, 1616: 5, 1618: 5, 1668: 5, 2015: 3 @@ -106,6 +110,7 @@ class CAR: CAR.CRV_5G: dbc_dict('honda_crv_ex_2017_can_generated', None), CAR.ODYSSEY: dbc_dict('honda_odyssey_exl_2018_generated', 'acura_ilx_2016_nidec'), CAR.PILOT: dbc_dict('honda_pilot_touring_2017_can_generated', 'acura_ilx_2016_nidec'), + CAR.PILOT_2019: dbc_dict('honda_pilot_touring_2017_can_generated', 'acura_ilx_2016_nidec'), CAR.RIDGELINE: dbc_dict('honda_ridgeline_black_edition_2017_can_generated', 'acura_ilx_2016_nidec'), } @@ -121,6 +126,7 @@ class CAR: CAR.CRV_5G: 1200, CAR.ODYSSEY: 1200, CAR.PILOT: 1200, + CAR.PILOT_2019: 1200, CAR.RIDGELINE: 1200, } @@ -135,6 +141,7 @@ class CAR: CAR.CRV_5G: 1.025, CAR.ODYSSEY: 1., CAR.PILOT: 1., + CAR.PILOT_2019: 1., CAR.RIDGELINE: 1., }
https://api.github.com/repos/commaai/openpilot/pulls/334
2018-08-27T01:11:57Z
2018-08-27T05:35:11Z
2018-08-27T05:35:11Z
2018-09-25T22:02:01Z
2,965
commaai/openpilot
9,022
Update the ACME github repository URL.
diff --git a/acme/acme/__init__.py b/acme/acme/__init__.py index c38cea41453..0f5f0e4bd31 100644 --- a/acme/acme/__init__.py +++ b/acme/acme/__init__.py @@ -1,12 +1,12 @@ """ACME protocol implementation. This module is an implementation of the `ACME protocol`_. Latest -supported version: `v02`_. +supported version: `draft-ietf-acme-01`_. -.. _`ACME protocol`: https://github.com/letsencrypt/acme-spec +.. _`ACME protocol`: https://github.com/ietf-wg-acme/acme/ -.. _`v02`: - https://github.com/letsencrypt/acme-spec/commit/d328fea2d507deb9822793c512830d827a4150c4 +.. _`draft-ietf-acme-01`: + https://github.com/ietf-wg-acme/acme/tree/draft-ietf-acme-acme-01 """
https://api.github.com/repos/certbot/certbot/pulls/1965
2015-12-20T15:23:23Z
2015-12-23T16:26:43Z
2015-12-23T16:26:43Z
2016-05-06T19:21:46Z
247
certbot/certbot
1,176
pd.show_versions()
diff --git a/doc/README.rst b/doc/README.rst index 660a3b7232891..4b5b0d8818e2d 100644 --- a/doc/README.rst +++ b/doc/README.rst @@ -88,7 +88,7 @@ Furthermore, it is recommended to have all `optional dependencies installed. This is not needed, but be aware that you will see some error messages. Because all the code in the documentation is executed during the doc build, the examples using this optional dependencies will generate errors. -Run ``pd.show_version()`` to get an overview of the installed version of all +Run ``pd.show_versions()`` to get an overview of the installed version of all dependencies. .. warning::
Shouldn't this be `pd.show_versions()`? Maybe it's a typo.
https://api.github.com/repos/pandas-dev/pandas/pulls/8746
2014-11-06T11:54:04Z
2014-11-06T12:09:38Z
2014-11-06T12:09:38Z
2014-11-06T12:09:38Z
168
pandas-dev/pandas
45,624
[workflow] fixed typos in the leaderboard workflow
diff --git a/.github/workflows/report_leaderboard_to_lark.yml b/.github/workflows/report_leaderboard_to_lark.yml index f51847a39521..60c3ad2a6781 100644 --- a/.github/workflows/report_leaderboard_to_lark.yml +++ b/.github/workflows/report_leaderboard_to_lark.yml @@ -1,4 +1,4 @@ -name: Publish Nightly Version to PyPI +name: Generate Community Report and Send to Lark on: workflow_dispatch: @@ -24,6 +24,6 @@ jobs: - run: python .github/workflows/scripts/generate_leaderboard_and_send_to_lark.py env: LARK_APP_ID: ${{ secrets.LARK_LEADERBOARD_APP_ID }} - APP_SECRET: ${{ secrets.LARK_LEADERBOARD_APP_SECRET }} + LARK_APP_SECRET: ${{ secrets.LARK_LEADERBOARD_APP_SECRET }} LARK_WEBHOOK_URL: ${{ secrets.LARK_LEADERBOARD_WEBHOOK_URL }} GITHUB_TOKEN: ${{ github.token }}
## 📌 Checklist before creating the PR

- [x] I have created an issue for this PR for traceability
- [x] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description`
- [x] I have added relevant tags if possible for us to better distinguish different PRs

## 🚨 Issue number

> Link this PR to your issue with words like fixed to automatically close the linked issue upon merge
>
> e.g. fixed #1234, closed #1234, resolved #1234

Fixed #2566

## 📝 What does this PR do?

> Summarize your work here.
> if you have any plots/diagrams/screenshots/tables, please attach them here.

This PR fixed the typos stated in #2566 to make sure the workflow runs correctly.

## 💥 Checklist before requesting a review

- [x] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))
- [x] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
- [x] I have performed a self-review of my code
- [x] I have added thorough tests.
- [x] I have added docstrings for all the functions/methods I implemented

## ⭐️ Do you enjoy contributing to Colossal-AI?

- [x] 🌝 Yes, I do.
- [ ] 🌚 No, I don't. Tell us more if you don't enjoy contributing to Colossal-AI.
https://api.github.com/repos/hpcaitech/ColossalAI/pulls/2567
2023-02-03T09:25:08Z
2023-02-03T09:25:56Z
2023-02-03T09:25:56Z
2023-02-03T09:26:00Z
231
hpcaitech/ColossalAI
11,132
[build] Override timestamps in zip file
diff --git a/Makefile b/Makefile index 84ccce2b33f..41e3a683a54 100644 --- a/Makefile +++ b/Makefile @@ -46,8 +46,15 @@ tar: youtube-dl.tar.gz pypi-files: youtube-dl.bash-completion README.txt youtube-dl.1 youtube-dl.fish youtube-dl: youtube_dl/*.py youtube_dl/*/*.py - zip --quiet youtube-dl youtube_dl/*.py youtube_dl/*/*.py - zip --quiet --junk-paths youtube-dl youtube_dl/__main__.py + mkdir -p zip + for d in youtube_dl youtube_dl/downloader youtube_dl/extractor youtube_dl/postprocessor ; do \ + mkdir -p zip/$$d ;\ + cp -a $$d/*.py zip/$$d/ ;\ + done + touch -t 200001010101 zip/youtube_dl/*.py zip/youtube_dl/*/*.py + mv zip/youtube_dl/__main__.py zip/ + cd zip ; zip --quiet ../youtube-dl youtube_dl/*.py youtube_dl/*/*.py __main__.py + rm -rf zip echo '#!$(PYTHON)' > youtube-dl cat youtube-dl.zip >> youtube-dl rm youtube-dl.zip
to make the build reproducible. See https://reproducible-builds.org/ for why this is good. Files are copied so as not to interfere with freshness detection.
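For illustration only (the PR itself does this in the Makefile with `touch -t` and `zip`), a minimal Python sketch of the same idea, pinning file timestamps so the resulting archive is byte-for-byte reproducible, could look like this; the function and constant names are made up:

```python
# Not part of the change: a standard-library sketch of timestamp-pinned zipping.
import os
import time
import zipfile

FIXED_MTIME = time.mktime((2000, 1, 1, 1, 1, 0, 0, 0, -1))  # matches `touch -t 200001010101`

def build_reproducible_zip(archive_path, source_files):
    for path in sorted(source_files):                 # stable member order
        os.utime(path, (FIXED_MTIME, FIXED_MTIME))    # override mtimes, like `touch -t`
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(source_files):
            zf.write(path)
```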
https://api.github.com/repos/ytdl-org/youtube-dl/pulls/13669
2017-07-17T12:03:18Z
2017-08-22T13:51:21Z
2017-08-22T13:51:20Z
2017-08-22T13:51:25Z
295
ytdl-org/youtube-dl
50,190
Added TFLearn
diff --git a/README.md b/README.md index b9f3ec68..1dad7e1f 100644 --- a/README.md +++ b/README.md @@ -753,6 +753,7 @@ on MNIST digits[DEEP LEARNING] * [Orange](http://orange.biolab.si/) - Open source data visualization and data analysis for novices and experts. * [MXNet](https://github.com/dmlc/mxnet) - Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Go, Javascript and more. * [milk](https://github.com/luispedro/milk) - Machine learning toolkit focused on supervised classification. +* [TFLearn](https://github.com/tflearn/tflearn) - Deep learning library featuring a higher-level API for TensorFlow. <a name="python-data-analysis" /> #### Data Analysis / Data Visualization
Added TFLearn library link
https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/263
2016-04-01T03:41:08Z
2016-04-07T18:45:59Z
2016-04-07T18:45:59Z
2016-04-07T18:46:02Z
211
josephmisiti/awesome-machine-learning
52,391
community: added `partners/package-name` folders
diff --git a/libs/partners/astradb/README.md b/libs/partners/astradb/README.md index 62566a56776327..af37bc8f7dcb99 100644 --- a/libs/partners/astradb/README.md +++ b/libs/partners/astradb/README.md @@ -1,3 +1,5 @@ +# langchain-astradb + This package has moved! https://github.com/langchain-ai/langchain-datastax/tree/main/libs/astradb \ No newline at end of file diff --git a/libs/partners/google-alloydb-pg/README.md b/libs/partners/google-alloydb-pg/README.md new file mode 100644 index 00000000000000..dca038b4e2416f --- /dev/null +++ b/libs/partners/google-alloydb-pg/README.md @@ -0,0 +1 @@ +# langchain-google-alloydb-pg diff --git a/libs/partners/google-bigtable/README.md b/libs/partners/google-bigtable/README.md new file mode 100644 index 00000000000000..2729b52e184e60 --- /dev/null +++ b/libs/partners/google-bigtable/README.md @@ -0,0 +1 @@ +# langchain-google-bigtable diff --git a/libs/partners/google-cloud-sql-mssql/README.md b/libs/partners/google-cloud-sql-mssql/README.md new file mode 100644 index 00000000000000..e730064666162e --- /dev/null +++ b/libs/partners/google-cloud-sql-mssql/README.md @@ -0,0 +1 @@ +# langchain-google-cloud-sql-mssql diff --git a/libs/partners/google-cloud-sql-mysql/README.md b/libs/partners/google-cloud-sql-mysql/README.md new file mode 100644 index 00000000000000..39674215ed37d9 --- /dev/null +++ b/libs/partners/google-cloud-sql-mysql/README.md @@ -0,0 +1 @@ +# langchain-google-cloud-sql-mysql diff --git a/libs/partners/google-cloud-sql-pg/README.md b/libs/partners/google-cloud-sql-pg/README.md new file mode 100644 index 00000000000000..4fe990ea216ca0 --- /dev/null +++ b/libs/partners/google-cloud-sql-pg/README.md @@ -0,0 +1 @@ +# langchain-google-cloud-sql-pg diff --git a/libs/partners/google-datastore/README.md b/libs/partners/google-datastore/README.md new file mode 100644 index 00000000000000..672dd543559d61 --- /dev/null +++ b/libs/partners/google-datastore/README.md @@ -0,0 +1 @@ +# langchain-google-datastore diff --git a/libs/partners/google-el-carro/README.md b/libs/partners/google-el-carro/README.md new file mode 100644 index 00000000000000..83313f135705f2 --- /dev/null +++ b/libs/partners/google-el-carro/README.md @@ -0,0 +1 @@ +# langchain-google-el-carro diff --git a/libs/partners/google-firestore/README.md b/libs/partners/google-firestore/README.md new file mode 100644 index 00000000000000..e51235522a8e1b --- /dev/null +++ b/libs/partners/google-firestore/README.md @@ -0,0 +1 @@ +# langchain-google-firestore diff --git a/libs/partners/google-genai/README.md b/libs/partners/google-genai/README.md index 088b699ee2e71f..81eab04ce078c5 100644 --- a/libs/partners/google-genai/README.md +++ b/libs/partners/google-genai/README.md @@ -1,3 +1,5 @@ +# langchain-google-genai + This package has moved! 
-https://github.com/langchain-ai/langchain-google/tree/main/libs/genai \ No newline at end of file +https://github.com/langchain-ai/langchain-google/tree/main/libs/genai diff --git a/libs/partners/google-memorystore-redis/README.md b/libs/partners/google-memorystore-redis/README.md new file mode 100644 index 00000000000000..de2b1cee4cc215 --- /dev/null +++ b/libs/partners/google-memorystore-redis/README.md @@ -0,0 +1 @@ +# langchain-google-memorystore-redis diff --git a/libs/partners/google-spanner/README.md b/libs/partners/google-spanner/README.md new file mode 100644 index 00000000000000..d0775174557e64 --- /dev/null +++ b/libs/partners/google-spanner/README.md @@ -0,0 +1 @@ +# langchain-google-spanner diff --git a/libs/partners/google-vertexai/README.md b/libs/partners/google-vertexai/README.md index 2ac1ce42e160aa..8ac5023708fc3e 100644 --- a/libs/partners/google-vertexai/README.md +++ b/libs/partners/google-vertexai/README.md @@ -1,3 +1,5 @@ +# langchain-google-vertexai + This package has moved! https://github.com/langchain-ai/langchain-google/tree/main/libs/vertexai \ No newline at end of file diff --git a/libs/partners/nvidia-ai-endpoints/README.md b/libs/partners/nvidia-ai-endpoints/README.md index 3e19cc333d0136..b6d7a27ee83efa 100644 --- a/libs/partners/nvidia-ai-endpoints/README.md +++ b/libs/partners/nvidia-ai-endpoints/README.md @@ -1,3 +1,5 @@ +# langchain-nvidia-ai-endpoints + This package has moved! https://github.com/langchain-ai/langchain-nvidia/tree/main/libs/ai-endpoints \ No newline at end of file diff --git a/libs/partners/nvidia-trt/README.md b/libs/partners/nvidia-trt/README.md index 8a2546ff82d90d..c485e16eae96d5 100644 --- a/libs/partners/nvidia-trt/README.md +++ b/libs/partners/nvidia-trt/README.md @@ -1,3 +1,5 @@ +# langchain-nvidia-trt + This package has moved! https://github.com/langchain-ai/langchain-nvidia/tree/main/libs/trt \ No newline at end of file
Added references to new integration packages from Google, by adding subfolders to `partners/`.
https://api.github.com/repos/langchain-ai/langchain/pulls/19290
2024-03-19T21:04:21Z
2024-03-26T02:16:59Z
2024-03-26T02:16:59Z
2024-03-27T17:08:30Z
1,517
langchain-ai/langchain
42,814
master
diff --git a/3-tier.py b/3-tier.py index a1cd30c1..625870a3 100644 --- a/3-tier.py +++ b/3-tier.py @@ -12,12 +12,11 @@ class Data(object): } def __get__(self, obj, klas): - print ("(Fetching from Data Store)") + print("(Fetching from Data Store)") return {'products': self.products} class BusinessLogic(object): - """ Business logic holding data store instances """ data = Data() diff --git a/README.md b/README.md index 98550102..492a3378 100644 --- a/README.md +++ b/README.md @@ -25,6 +25,7 @@ Current Patterns: | [decorator](decorator.py) | wrap functionality with other functionality in order to affect outputs | | [facade](facade.py) | use one class as an API to a number of others | | [factory_method](factory_method.py) | delegate a specialized function/method to create instances | +| [front_controller](front_controller.py) | single handler requests coming to the application | | [flyweight](flyweight.py) | transparently reuse existing instances of objects with similar/identical state | | [graph_search](graph_search.py) | (graphing algorithms, not design patterns) | | [lazy_evaluation](lazy_evaluation.py) | lazily-evaluated property pattern in Python | @@ -36,6 +37,7 @@ Current Patterns: | [prototype](prototype.py) | use a factory and clones of a prototype for new instances (if instantiation is expensive) | | [proxy](proxy.py) | an object funnels operations to something else | | [publish_subscribe](publish_subscribe.py) | a source syndicates events/data to 0+ registered listeners | +| [specification](specification.py) | business rules can be recombined by chaining the business rules together using boolean logic | | [state](state.py) | logic is org'd into a discrete number of potential states and the next state that can be transitioned to | | [strategy](strategy.py) | selectable operations over the same data | | [template](template.py) | an object imposes a structure but takes pluggable components | diff --git a/adapter.py b/adapter.py index 65f03cbb..374d01fb 100644 --- a/adapter.py +++ b/adapter.py @@ -37,7 +37,6 @@ def make_noise(self, octane_level): class Adapter(object): - """ Adapts an object by replacing methods. Usage: diff --git a/front_controller.py b/front_controller.py new file mode 100644 index 00000000..e6a4939d --- /dev/null +++ b/front_controller.py @@ -0,0 +1,75 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +""" +@author: Gordeev Andrey <[email protected]> +The controller provides a centralized entry point that controls and manages +request handling. 
+""" + + +class MobileView(object): + def show_index_page(self): + print('Displaying mobile index page') + + +class TabletView(object): + def show_index_page(self): + print('Displaying tablet index page') + + +class Dispatcher(object): + def __init__(self): + self.mobile_view = MobileView() + self.tablet_view = TabletView() + + def dispatch(self, request): + if request.type == Request.mobile_type: + self.mobile_view.show_index_page() + elif request.type == Request.tablet_type: + self.tablet_view.show_index_page() + else: + print('cant dispatch the request') + + +class RequestController(object): + """ front controller """ + def __init__(self): + self.dispatcher = Dispatcher() + + def dispatch_request(self, request): + if isinstance(request, Request): + self.dispatcher.dispatch(request) + else: + print('request must be a Request object') + + +class Request(object): + """ request """ + + mobile_type = 'mobile' + tablet_type = 'tablet' + + def __init__(self, request): + self.type = None + request = request.lower() + if request == self.mobile_type: + self.type = self.mobile_type + elif request == self.tablet_type: + self.type = self.tablet_type + + +if __name__ == '__main__': + front_controller = RequestController() + front_controller.dispatch_request(Request('mobile')) + front_controller.dispatch_request(Request('tablet')) + + front_controller.dispatch_request(Request('desktop')) + front_controller.dispatch_request('mobile') + + +### OUTPUT ### +# Displaying mobile index page +# Displaying tablet index page +# cant dispatch the request +# request must be a Request object \ No newline at end of file diff --git a/specification.py b/specification.py new file mode 100644 index 00000000..5c885e98 --- /dev/null +++ b/specification.py @@ -0,0 +1,117 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +""" +@author: Gordeev Andrey <[email protected]> + +Specification provide recombination business logic by +chaining together using boolean logic +""" + +from abc import abstractmethod + + +class Specification(object): + + def and_specification(self, candidate): + raise NotImplementedError() + + def or_specification(self, candidate): + raise NotImplementedError() + + def not_specification(self): + raise NotImplementedError() + + @abstractmethod + def is_satisfied_by(self, candidate): + pass + + +class CompositeSpecification(Specification): + @abstractmethod + def is_satisfied_by(self, candidate): + pass + + def and_specification(self, candidate): + return AndSpecification(self, candidate) + + def or_specification(self, candidate): + return OrSpecification(self, candidate) + + def not_specification(self): + return NotSpecification(self) + + +class AndSpecification(CompositeSpecification): + _one = Specification() + _other = Specification() + + def __init__(self, one, other): + self._one = one + self._other = other + + def is_satisfied_by(self, candidate): + return bool(self._one.is_satisfied_by(candidate) and + self._other.is_satisfied_by(candidate)) + + +class OrSpecification(CompositeSpecification): + _one = Specification() + _other = Specification() + + def __init__(self, one, other): + self._one = one + self._other = other + + def is_satisfied_by(self, candidate): + return bool(self._one.is_satisfied_by(candidate) or + self._other.is_satisfied_by(candidate)) + + +class NotSpecification(CompositeSpecification): + _wrapped = Specification() + + def __init__(self, wrapped): + self._wrapped = wrapped + + def is_satisfied_by(self, candidate): + return bool(not self._wrapped.is_satisfied_by(candidate)) + + 
+class User(object): + + def __init__(self, super_user=False): + self.super_user = super_user + + +class UserSpecification(CompositeSpecification): + + def is_satisfied_by(self, candidate): + return isinstance(candidate, User) + + +class SuperUserSpecification(CompositeSpecification): + + def is_satisfied_by(self, candidate): + return getattr(candidate, 'super_user', False) + + +if __name__ == '__main__': + print('Specification') + andrey = User() + ivan = User(super_user=True) + vasiliy = 'not User instance' + + root_specification = UserSpecification().\ + and_specification(SuperUserSpecification()) + + print(root_specification.is_satisfied_by(andrey)) + print(root_specification.is_satisfied_by(ivan)) + print(root_specification.is_satisfied_by(vasiliy)) + + +### OUTPUT ### +# Specification +# False +# True +# False
Added two patterns: front controller and specification. Also a minor PEP 8 code edit.
https://api.github.com/repos/faif/python-patterns/pulls/86
2015-06-06T19:48:26Z
2015-06-09T20:16:54Z
2015-06-09T20:16:54Z
2015-06-09T20:16:54Z
1,828
faif/python-patterns
33,603
Added additional nodes for CLIP merging
diff --git a/comfy_extras/nodes_model_merging.py b/comfy_extras/nodes_model_merging.py index d594cf490b..a25b73ca76 100644 --- a/comfy_extras/nodes_model_merging.py +++ b/comfy_extras/nodes_model_merging.py @@ -87,6 +87,50 @@ def merge(self, clip1, clip2, ratio): m.add_patches({k: kp[k]}, 1.0 - ratio, ratio) return (m, ) + +class CLIPSubtract: + @classmethod + def INPUT_TYPES(s): + return {"required": { "clip1": ("CLIP",), + "clip2": ("CLIP",), + "multiplier": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}), + }} + RETURN_TYPES = ("CLIP",) + FUNCTION = "merge" + + CATEGORY = "advanced/model_merging" + + def merge(self, clip1, clip2, multiplier): + m = clip1.clone() + kp = clip2.get_key_patches() + for k in kp: + if k.endswith(".position_ids") or k.endswith(".logit_scale"): + continue + m.add_patches({k: kp[k]}, - multiplier, multiplier) + return (m, ) + + +class CLIPAdd: + @classmethod + def INPUT_TYPES(s): + return {"required": { "clip1": ("CLIP",), + "clip2": ("CLIP",), + }} + RETURN_TYPES = ("CLIP",) + FUNCTION = "merge" + + CATEGORY = "advanced/model_merging" + + def merge(self, clip1, clip2): + m = clip1.clone() + kp = clip2.get_key_patches() + for k in kp: + if k.endswith(".position_ids") or k.endswith(".logit_scale"): + continue + m.add_patches({k: kp[k]}, 1.0, 1.0) + return (m, ) + + class ModelMergeBlocks: @classmethod def INPUT_TYPES(s): @@ -279,6 +323,8 @@ def save(self, vae, filename_prefix, prompt=None, extra_pnginfo=None): "ModelMergeAdd": ModelAdd, "CheckpointSave": CheckpointSave, "CLIPMergeSimple": CLIPMergeSimple, + "CLIPMergeSubtract": CLIPSubtract, + "CLIPMergeAdd": CLIPAdd, "CLIPSave": CLIPSave, "VAESave": VAESave, }
Additional ways of merging CLIPs, similar to the ones for models.
https://api.github.com/repos/comfyanonymous/ComfyUI/pulls/3004
2024-03-09T18:34:28Z
2024-03-10T14:43:42Z
2024-03-10T14:43:42Z
2024-03-10T14:49:40Z
608
comfyanonymous/ComfyUI
17,704
Update modbus.py
diff --git a/homeassistant/components/modbus.py b/homeassistant/components/modbus.py index 001c8d1188a4..293e86b014e9 100644 --- a/homeassistant/components/modbus.py +++ b/homeassistant/components/modbus.py @@ -40,7 +40,7 @@ ETHERNET_SCHEMA = { vol.Required(CONF_HOST): cv.string, vol.Required(CONF_PORT): cv.positive_int, - vol.Required(CONF_TYPE): vol.Any('tcp', 'udp'), + vol.Required(CONF_TYPE): vol.Any('tcp', 'udp', 'rtuovertcp'), vol.Optional(CONF_TIMEOUT, default=3): cv.socket_timeout, } @@ -92,6 +92,13 @@ def setup(hass, config): bytesize=config[DOMAIN][CONF_BYTESIZE], parity=config[DOMAIN][CONF_PARITY], timeout=config[DOMAIN][CONF_TIMEOUT]) + elif client_type == 'rtuovertcp': + from pymodbus.client.sync import ModbusTcpClient as ModbusClient + from pymodbus.transaction import ModbusRtuFramer as ModbusFramer + client = ModbusClient(host=config[DOMAIN][CONF_HOST], + port=config[DOMAIN][CONF_PORT], + framer=ModbusFramer, + timeout=config[DOMAIN][CONF_TIMEOUT]) elif client_type == 'tcp': from pymodbus.client.sync import ModbusTcpClient as ModbusClient client = ModbusClient(host=config[DOMAIN][CONF_HOST],
Support for MODBUS RTU over TCP ethernet mode. See more info here: https://www.eltima.com/modbus-over-ethernet/

## Description:

Pull request for the documentation update: https://github.com/home-assistant/home-assistant.github.io/pull/4261

**Related issue (if applicable):** N/A

**Pull request in [home-assistant.github.io](https://github.com/home-assistant/home-assistant.github.io) with documentation (if applicable):** home-assistant/home-assistant.github.io#<home-assistant.github.io PR number goes here>

## Example entry for `configuration.yaml` (if applicable):

```yaml
modbus:
  type: rtuovertcp
  host: 192.168.0.11
  port: 10001
```

## Checklist:

If user exposed functionality or configuration variables are added/changed:

- [x] Documentation added/updated in [home-assistant.github.io](https://github.com/home-assistant/home-assistant.github.io)

If the code communicates with devices, web services, or third-party tools:

- [ ] Local tests with `tox` run successfully. **Your PR cannot be merged unless tests pass**
- [ ] New dependencies have been added to the `REQUIREMENTS` variable ([example][ex-requir]).
- [ ] New dependencies are only imported inside functions that use them ([example][ex-import]).
- [ ] New dependencies have been added to `requirements_all.txt` by running `script/gen_requirements_all.py`.
- [ ] New files were added to `.coveragerc`.

If the code does not interact with devices:

- [ ] Local tests with `tox` run successfully. **Your PR cannot be merged unless tests pass**
- [ ] Tests have been added to verify that the new code works.

[ex-requir]: https://github.com/home-assistant/home-assistant/blob/dev/homeassistant/components/keyboard.py#L14
[ex-import]: https://github.com/home-assistant/home-assistant/blob/dev/homeassistant/components/keyboard.py#L54
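Purely as an illustration of what the new `rtuovertcp` type means at the pymodbus level (this mirrors the client setup in the diff; the host, port, and register addresses are placeholder values, and the exact keyword arguments vary between pymodbus versions):

```python
# Sketch only: plain TCP transport wrapped with the RTU framer, as in the component code.
from pymodbus.client.sync import ModbusTcpClient
from pymodbus.transaction import ModbusRtuFramer

client = ModbusTcpClient(host="192.168.0.11", port=10001,
                         framer=ModbusRtuFramer, timeout=3)
if client.connect():
    result = client.read_holding_registers(0, 2, unit=1)  # read 2 registers from unit 1
    print(result.registers)
    client.close()
```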
https://api.github.com/repos/home-assistant/core/pulls/11238
2017-12-19T20:02:22Z
2017-12-29T08:19:35Z
2017-12-29T08:19:35Z
2018-03-30T20:15:46Z
333
home-assistant/core
39,393
fix(binance): fix editOrder timestamp update, fix #18028
diff --git a/ts/src/pro/binance.ts b/ts/src/pro/binance.ts index cf03f435a481..487b1af832b9 100644 --- a/ts/src/pro/binance.ts +++ b/ts/src/pro/binance.ts @@ -1407,7 +1407,7 @@ export default class binance extends binanceRest { let timestamp = this.safeInteger (order, 'O'); const T = this.safeInteger (order, 'T'); let lastTradeTimestamp = undefined; - if (executionType === 'NEW') { + if (executionType === 'NEW' || executionType === 'AMENDMENT') { if (timestamp === undefined) { timestamp = T; }
When editing an order, the timestamp wasn't being updated correctly, so it wasn't always filtered correctly and shown in new updates.
https://api.github.com/repos/ccxt/ccxt/pulls/18046
2023-05-27T09:14:16Z
2023-05-27T10:47:43Z
2023-05-27T10:47:43Z
2023-05-27T10:47:43Z
158
ccxt/ccxt
13,790
💚 Fix pip cache for Smokeshow
diff --git a/.github/workflows/smokeshow.yml b/.github/workflows/smokeshow.yml index 55d64517f5a08..c83d16f15c437 100644 --- a/.github/workflows/smokeshow.yml +++ b/.github/workflows/smokeshow.yml @@ -18,7 +18,6 @@ jobs: with: python-version: '3.9' cache: "pip" - cache-dependency-path: pyproject.toml - run: pip install smokeshow
💚 Fix pip cache for Smokeshow
https://api.github.com/repos/tiangolo/fastapi/pulls/5697
2022-11-27T13:25:34Z
2022-11-27T13:39:18Z
2022-11-27T13:39:18Z
2022-11-27T13:39:18Z
125
tiangolo/fastapi
22,824
Throw clear exceptions
diff --git a/requests/models.py b/requests/models.py index 136427fe7d..ae3c1be135 100644 --- a/requests/models.py +++ b/requests/models.py @@ -413,7 +413,10 @@ def full_url(self): if not scheme in SCHEMAS: raise InvalidSchema("Invalid scheme %r" % scheme) - netloc = netloc.encode('idna').decode('utf-8') + try: + netloc = netloc.encode('idna').decode('utf-8') + except UnicodeError: + raise InvalidURL('URL has an invalid label.') if not path: path = '/' diff --git a/tests/test_requests.py b/tests/test_requests.py index f43ccac85b..3bbcfdf48e 100755 --- a/tests/test_requests.py +++ b/tests/test_requests.py @@ -19,6 +19,7 @@ from requests import HTTPError from requests import get, post, head, put from requests.auth import HTTPBasicAuth, HTTPDigestAuth +from requests.exceptions import InvalidURL if 'HTTPBIN_URL' not in os.environ: os.environ['HTTPBIN_URL'] = 'http://httpbin.org/' @@ -1062,6 +1063,10 @@ def test_bytes_files(self): """Test that `bytes` can be used as the values of `files`.""" post(httpbin('post'), files={'test': b'test'}) + def test_invalid_urls_throw_requests_exception(self): + """Test that URLs with invalid labels throw + Requests.exceptions.InvalidURL instead of UnicodeError.""" + self.assertRaises(InvalidURL, get, 'http://.google.com/') if __name__ == '__main__': unittest.main()
I'm not super invested in this: I think the UnicodeError is reasonably clear, but it probably helps to throw a Requests-based exception when there is an error in the URL. The InvalidURL exception subclasses ValueError, which means anyone catching this by looking for ValueErrors will still catch this exception. That said, you might just want to keep throwing the UnicodeError. (Inspired by me really wanting to close #697.)
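A minimal usage sketch of the behaviour this adds (assuming a requests install that includes this change; the URL is the same invalid-label example used in the new test):

```python
import requests
from requests.exceptions import InvalidURL

print(issubclass(InvalidURL, ValueError))  # True, so existing ValueError handlers still work

try:
    requests.get("http://.google.com/")
except ValueError as exc:  # also catches InvalidURL
    print(type(exc).__name__, exc)
```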
https://api.github.com/repos/psf/requests/pulls/774
2012-08-10T16:49:52Z
2012-08-13T21:14:19Z
2012-08-13T21:14:19Z
2021-09-08T23:06:12Z
388
psf/requests
33,037
cabana: save settings to user-specific directory
diff --git a/tools/cabana/.gitignore b/tools/cabana/.gitignore index c3f5ef2b69fbda..362a51f5c9231d 100644 --- a/tools/cabana/.gitignore +++ b/tools/cabana/.gitignore @@ -2,6 +2,5 @@ moc_* *.moc cabana -settings dbc/car_fingerprint_to_dbc.json tests/test_cabana diff --git a/tools/cabana/cabana.cc b/tools/cabana/cabana.cc index 0ccef7d3aba845..33403a2bff9ccd 100644 --- a/tools/cabana/cabana.cc +++ b/tools/cabana/cabana.cc @@ -18,8 +18,6 @@ int main(int argc, char *argv[]) { app.setWindowIcon(QIcon(":cabana-icon.png")); UnixSignalHandler signalHandler; - - settings.load(); utils::setTheme(settings.theme); QCommandLineParser cmd_parser; diff --git a/tools/cabana/mainwin.cc b/tools/cabana/mainwin.cc index 9ac88032d5ed30..4f27dcf5ed30fb 100644 --- a/tools/cabana/mainwin.cc +++ b/tools/cabana/mainwin.cc @@ -608,12 +608,6 @@ void MainWindow::closeEvent(QCloseEvent *event) { } settings.message_header_state = messages_widget->saveHeaderState(); - auto status = settings.save(); - if (status == QSettings::AccessError) { - QString error = tr("Failed to write settings to [%1]: access denied").arg(Settings::filePath()); - qDebug() << error; - QMessageBox::warning(this, tr("Failed to write settings"), error); - } QWidget::closeEvent(event); } diff --git a/tools/cabana/messageswidget.cc b/tools/cabana/messageswidget.cc index 6a94e013f68422..aea09c55dee152 100644 --- a/tools/cabana/messageswidget.cc +++ b/tools/cabana/messageswidget.cc @@ -2,6 +2,7 @@ #include <limits> +#include <QCheckBox> #include <QHBoxLayout> #include <QPainter> #include <QPushButton> diff --git a/tools/cabana/settings.cc b/tools/cabana/settings.cc index c408179fddfd5b..ac8d45007d65a2 100644 --- a/tools/cabana/settings.cc +++ b/tools/cabana/settings.cc @@ -6,65 +6,51 @@ #include <QFileDialog> #include <QFormLayout> #include <QPushButton> +#include <QSettings> #include <QStandardPaths> +#include <type_traits> #include "tools/cabana/util.h" Settings settings; -QSettings::Status Settings::save() { - QSettings s(filePath(), QSettings::IniFormat); - s.setValue("absolute_time", absolute_time); - s.setValue("fps", fps); - s.setValue("max_cached_minutes", max_cached_minutes); - s.setValue("chart_height", chart_height); - s.setValue("chart_range", chart_range); - s.setValue("chart_column_count", chart_column_count); - s.setValue("last_dir", last_dir); - s.setValue("last_route_dir", last_route_dir); - s.setValue("window_state", window_state); - s.setValue("geometry", geometry); - s.setValue("video_splitter_state", video_splitter_state); - s.setValue("recent_files", recent_files); - s.setValue("message_header_state_v3", message_header_state); - s.setValue("chart_series_type", chart_series_type); - s.setValue("theme", theme); - s.setValue("sparkline_range", sparkline_range); - s.setValue("multiple_lines_bytes", multiple_lines_bytes); - s.setValue("log_livestream", log_livestream); - s.setValue("log_path", log_path); - s.setValue("drag_direction", drag_direction); - s.setValue("suppress_defined_signals", suppress_defined_signals); - s.sync(); - return s.status(); +template <class SettingOperation> +void settings_op(SettingOperation op) { + QSettings s("cabana"); + op(s, "absolute_time", settings.absolute_time); + op(s, "fps", settings.fps); + op(s, "max_cached_minutes", settings.max_cached_minutes); + op(s, "chart_height", settings.chart_height); + op(s, "chart_range", settings.chart_range); + op(s, "chart_column_count", settings.chart_column_count); + op(s, "last_dir", settings.last_dir); + op(s, "last_route_dir", settings.last_route_dir); + op(s, 
"window_state", settings.window_state); + op(s, "geometry", settings.geometry); + op(s, "video_splitter_state", settings.video_splitter_state); + op(s, "recent_files", settings.recent_files); + op(s, "message_header_state", settings.message_header_state); + op(s, "chart_series_type", settings.chart_series_type); + op(s, "theme", settings.theme); + op(s, "sparkline_range", settings.sparkline_range); + op(s, "multiple_lines_bytes", settings.multiple_lines_bytes); + op(s, "log_livestream", settings.log_livestream); + op(s, "log_path", settings.log_path); + op(s, "drag_direction", (int &)settings.drag_direction); + op(s, "suppress_defined_signals", settings.suppress_defined_signals); } -void Settings::load() { - QSettings s(filePath(), QSettings::IniFormat); - absolute_time = s.value("absolute_time", false).toBool(); - fps = s.value("fps", 10).toInt(); - max_cached_minutes = s.value("max_cached_minutes", 30).toInt(); - chart_height = s.value("chart_height", 200).toInt(); - chart_range = s.value("chart_range", 3 * 60).toInt(); - chart_column_count = s.value("chart_column_count", 1).toInt(); - last_dir = s.value("last_dir", QDir::homePath()).toString(); - last_route_dir = s.value("last_route_dir", QDir::homePath()).toString(); - window_state = s.value("window_state").toByteArray(); - geometry = s.value("geometry").toByteArray(); - video_splitter_state = s.value("video_splitter_state").toByteArray(); - recent_files = s.value("recent_files").toStringList(); - message_header_state = s.value("message_header_state_v3").toByteArray(); - chart_series_type = s.value("chart_series_type", 0).toInt(); - theme = s.value("theme", 0).toInt(); - sparkline_range = s.value("sparkline_range", 15).toInt(); - multiple_lines_bytes = s.value("multiple_lines_bytes", true).toBool(); - log_livestream = s.value("log_livestream", true).toBool(); - log_path = s.value("log_path").toString(); - drag_direction = (Settings::DragDirection)s.value("drag_direction", 0).toInt(); - suppress_defined_signals = s.value("suppress_defined_signals", false).toBool(); - if (log_path.isEmpty()) { - log_path = QStandardPaths::writableLocation(QStandardPaths::HomeLocation) + "/cabana_live_stream/"; - } +Settings::Settings() { + last_dir = last_route_dir = QDir::homePath(); + log_path = QStandardPaths::writableLocation(QStandardPaths::HomeLocation) + "/cabana_live_stream/"; + settings_op([](QSettings &s, const QString &key, auto &value) { + if (auto v = s.value(key); v.canConvert<std::decay_t<decltype(value)>>()) + value = v.value<std::decay_t<decltype(value)>>(); + }); +} + +Settings::~Settings() { + settings_op([](QSettings &s, const QString &key, auto &v) { s.setValue(key, v); }); } // SettingsDlg @@ -75,45 +61,39 @@ SettingsDlg::SettingsDlg(QWidget *parent) : QDialog(parent) { QGroupBox *groupbox = new QGroupBox("General"); QFormLayout *form_layout = new QFormLayout(groupbox); - theme = new QComboBox(this); + form_layout->addRow(tr("Color Theme"), theme = new QComboBox(this)); theme->setToolTip(tr("You may need to restart cabana after changes theme")); theme->addItems({tr("Automatic"), tr("Light"), tr("Dark")}); theme->setCurrentIndex(settings.theme); - form_layout->addRow(tr("Color Theme"), theme); - fps = new QSpinBox(this); + form_layout->addRow("FPS", fps = new QSpinBox(this)); fps->setRange(10, 100); fps->setSingleStep(10); fps->setValue(settings.fps); - form_layout->addRow("FPS", fps); - cached_minutes = new QSpinBox(this); + form_layout->addRow(tr("Max Cached Minutes"), cached_minutes = new QSpinBox(this)); 
cached_minutes->setRange(5, 60); cached_minutes->setSingleStep(1); cached_minutes->setValue(settings.max_cached_minutes); - form_layout->addRow(tr("Max Cached Minutes"), cached_minutes); main_layout->addWidget(groupbox); groupbox = new QGroupBox("New Signal Settings"); form_layout = new QFormLayout(groupbox); - drag_direction = new QComboBox(this); + form_layout->addRow(tr("Drag Direction"), drag_direction = new QComboBox(this)); drag_direction->addItems({tr("MSB First"), tr("LSB First"), tr("Always Little Endian"), tr("Always Big Endian")}); drag_direction->setCurrentIndex(settings.drag_direction); - form_layout->addRow(tr("Drag Direction"), drag_direction); main_layout->addWidget(groupbox); groupbox = new QGroupBox("Chart"); form_layout = new QFormLayout(groupbox); - chart_series_type = new QComboBox(this); + form_layout->addRow(tr("Default Series Type"), chart_series_type = new QComboBox(this)); chart_series_type->addItems({tr("Line"), tr("Step Line"), tr("Scatter")}); chart_series_type->setCurrentIndex(settings.chart_series_type); - form_layout->addRow(tr("Chart Default Series Type"), chart_series_type); - chart_height = new QSpinBox(this); + form_layout->addRow(tr("Chart Height"), chart_height = new QSpinBox(this)); chart_height->setRange(100, 500); chart_height->setSingleStep(10); chart_height->setValue(settings.chart_height); - form_layout->addRow(tr("Chart Height"), chart_height); main_layout->addWidget(groupbox); log_livestream = new QGroupBox(tr("Enable live stream logging"), this); @@ -125,10 +105,9 @@ SettingsDlg::SettingsDlg(QWidget *parent) : QDialog(parent) { path_layout->addWidget(browse_btn); main_layout->addWidget(log_livestream); - - auto buttonBox = new QDialogButtonBox(QDialogButtonBox::Ok | QDialogButtonBox::Cancel | QDialogButtonBox::Apply); + auto buttonBox = new QDialogButtonBox(QDialogButtonBox::Ok | QDialogButtonBox::Cancel); main_layout->addWidget(buttonBox); - main_layout->addStretch(1); + setFixedSize(400, sizeHint().height()); QObject::connect(browse_btn, &QPushButton::clicked, [this]() { QString fn = QFileDialog::getExistingDirectory( @@ -139,31 +118,22 @@ SettingsDlg::SettingsDlg(QWidget *parent) : QDialog(parent) { log_path->setText(fn); } }); - QObject::connect(buttonBox, &QDialogButtonBox::clicked, [=](QAbstractButton *button) { - auto role = buttonBox->buttonRole(button); - if (role == QDialogButtonBox::AcceptRole) { - save(); - accept(); - } else if (role == QDialogButtonBox::ApplyRole) { - save(); - } else if (role == QDialogButtonBox::RejectRole) { - reject(); - } - }); + QObject::connect(buttonBox, &QDialogButtonBox::rejected, this, &QDialog::reject); + QObject::connect(buttonBox, &QDialogButtonBox::accepted, this, &SettingsDlg::save); } void SettingsDlg::save() { - settings.fps = fps->value(); if (std::exchange(settings.theme, theme->currentIndex()) != settings.theme) { // set theme before emit changed utils::setTheme(settings.theme); } + settings.fps = fps->value(); settings.max_cached_minutes = cached_minutes->value(); settings.chart_series_type = chart_series_type->currentIndex(); settings.chart_height = chart_height->value(); settings.log_livestream = log_livestream->isChecked(); settings.log_path = log_path->text(); settings.drag_direction = (Settings::DragDirection)drag_direction->currentIndex(); - settings.save(); emit settings.changed(); + QDialog::accept(); } diff --git a/tools/cabana/settings.h b/tools/cabana/settings.h index da7781e6e44e86..9f24a6fbd370f7 100644 --- a/tools/cabana/settings.h +++ b/tools/cabana/settings.h @@ -1,13 +1,10 
@@ #pragma once -#include <QApplication> #include <QByteArray> -#include <QCheckBox> #include <QComboBox> #include <QDialog> #include <QGroupBox> #include <QLineEdit> -#include <QSettings> #include <QSpinBox> #define LIGHT_THEME 1 @@ -24,10 +21,8 @@ class Settings : public QObject { AlwaysBE, }; - Settings() {} - QSettings::Status save(); - void load(); - inline static QString filePath() { return QApplication::applicationDirPath() + "/settings"; } + Settings(); + ~Settings(); bool absolute_time = false; int fps = 10; @@ -49,15 +44,13 @@ class Settings : public QObject { QByteArray window_state; QStringList recent_files; QByteArray message_header_state; - DragDirection drag_direction; + DragDirection drag_direction = MsbFirst; signals: void changed(); }; class SettingsDlg : public QDialog { - Q_OBJECT - public: SettingsDlg(QWidget *parent); void save(); diff --git a/tools/cabana/streams/pandastream.cc b/tools/cabana/streams/pandastream.cc index 4a6c588e5102eb..13d202d9cab320 100644 --- a/tools/cabana/streams/pandastream.cc +++ b/tools/cabana/streams/pandastream.cc @@ -2,6 +2,7 @@ #include <vector> +#include <QCheckBox> #include <QLabel> #include <QMessageBox> #include <QPushButton> diff --git a/tools/cabana/tools/findsignal.h b/tools/cabana/tools/findsignal.h index e9e5f9f1808895..5ef7461fee276b 100644 --- a/tools/cabana/tools/findsignal.h +++ b/tools/cabana/tools/findsignal.h @@ -4,6 +4,7 @@ #include <limits> #include <QAbstractTableModel> +#include <QCheckBox> #include <QLabel> #include <QPushButton> #include <QTableView>
1. Save settings to QSettings::UserScope with QSettings::NativeFormat. On Unix systems, this is `$HOME/.config/cabana.conf`.
2. Use a template to simplify the read/write operations.
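For illustration, the same user-scoped storage can be inspected from Python with PyQt5's QSettings (cabana itself is C++/Qt; this sketch only shows where the values end up, and the key/value used here are made up):

```python
# Sketch only: user-scope, native-format settings keyed by "cabana".
from PyQt5.QtCore import QSettings

s = QSettings("cabana")        # on Unix this resolves to ~/.config/cabana.conf
s.setValue("fps", 10)
print(s.fileName())            # path of the backing store
print(s.value("fps", type=int))
```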
https://api.github.com/repos/commaai/openpilot/pulls/30328
2023-10-25T20:46:53Z
2023-10-25T21:39:42Z
2023-10-25T21:39:41Z
2023-10-25T21:52:45Z
3,486
commaai/openpilot
9,590
Fix code typos for C.145
diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md index 8e8dca78f..3c55ab62c 100644 --- a/CppCoreGuidelines.md +++ b/CppCoreGuidelines.md @@ -5013,8 +5013,8 @@ and such interfaces are often not easily or naturally organized into a single-ro **Example**: - struct B { int a; virtual f(); }; - struct D { int b; override f(); }; + struct B { int a; virtual int f(); }; + struct D : B { int b; int f() override; }; void use(B b) { @@ -5026,7 +5026,7 @@ and such interfaces are often not easily or naturally organized into a single-ro void use2() { D d; - use(b); // slice + use(d); // slice } Both `d`s are sliced.
Also added the "extends B" to D
https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/43
2015-09-20T12:21:02Z
2015-09-20T16:21:01Z
2015-09-20T16:21:01Z
2015-09-20T16:53:40Z
216
isocpp/CppCoreGuidelines
16,023
Markdown fix anchors and move Header text outside anchor
diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md index ec317037b..bc4bd738f 100644 --- a/CppCoreGuidelines.md +++ b/CppCoreGuidelines.md @@ -4603,7 +4603,7 @@ Of course there are way of making `==` work in a hierarchy, but the naive approa **Enforcement**: ??? -<a name="SS-containers"</a> +<a name="SS-containers"></a> ## C.con: Containers and other resource handles A container is an object holding a sequence of objects of some type; `std::vector` is the archetypical container. @@ -4827,7 +4827,7 @@ not using this (over)general interface in favor of a particular interface found * Flag overrides without `override`. -<a name="Rh-kind"><a> +<a name="Rh-kind"></a> ### C.129: When designing a class hierarchy, distinguish between implementation inheritance and interface inheritance **Reason**: ??? Herb: I've become a non-fan of implementation inheritance -- seems most often an antipattern. Are there reasonable examples of it? @@ -8123,7 +8123,7 @@ Lock-free programming rule summary: * ??? * ??? -### <a name="Rconc">Don't use lock-free programming unless you absolutely have to</a> +### <a name="Rconc"></a>Don't use lock-free programming unless you absolutely have to **Reason**: It's error-prone and requires expert level knowledge of language features, machine architecture, and data structures. @@ -11838,7 +11838,7 @@ Modernization can be much faster, simpler, and safer when supported with analysi This section contains follow-up material on rules and sets of rules. In particular, here we present further rationale, longer examples, and discussions of alternatives. -### <a name="Sd order">Discussion: Define and initialize member variables in the order of member declaration</a> +### <a name="Sd order"></a>Discussion: Define and initialize member variables in the order of member declaration Member variables are always initialized in the order they are declared in the class definition, so write them in that order in the constructor initialization list. Writing them in a different order just makes the code confusing because it won't run in the order you see, and that can make it hard to see order-dependent bugs. @@ -11868,7 +11868,7 @@ If the class definition and the constructor body are in separate files, the long ??? -### <a name="Sd factory">Discussion: Use a factory function if you need "virtual behavior" during initialization</a> +### <a name="Sd factory"></a>Discussion: Use a factory function if you need "virtual behavior" during initialization If your design wants virtual dispatch into a derived class from a base class constructor or destructor for functions like `f` and `g`, you need other techniques, such as a post-constructor -- a separate member function the caller must invoke to complete initialization, which can safely call `f` and `g` because in member functions virtual calls behave normally. Some techniques for this are shown in the References. Here's a non-exhaustive list of options: @@ -11924,7 +11924,7 @@ In summary, no post-construction technique is perfect. The worst techniques dodg -###<a name="Sd dtor">Discussion: Make base class destructors public and virtual, or protected and nonvirtual</a> +###<a name="Sd dtor"></a>Discussion: Make base class destructors public and virtual, or protected and nonvirtual Should destruction behave virtually? That is, should destruction through a pointer to a `base` class should be allowed? If yes, then `base`'s destructor must be public in order to be callable, and virtual otherwise calling it results in undefined behavior. 
Otherwise, it should be protected so that only derived classes can invoke it in their own destructors, and nonvirtual since it doesn't need to behave virtually virtual. @@ -12094,7 +12094,7 @@ When using exceptions as your error handling mechanism, always document this beh -## <a name ="Sd consistent">Define Copy, move, and destroy consistently</a> +## <a name ="Sd consistent"></a>Define Copy, move, and destroy consistently **Reason**: ??? @@ -12192,7 +12192,7 @@ Resource management rule summary: -### <a name="Cr safety">Provide strong resource safety; that is, never leak anything that you think of as a resource</a> +### <a name="Cr safety"></a>Provide strong resource safety; that is, never leak anything that you think of as a resource **Reason**: Prevent leaks. Leaks can lead to performance degradation, mysterious error, system crashes, and security violations. @@ -12217,7 +12217,7 @@ This class is a resource handle. It manages the lifetime of the `T`s. To do so, **Enforcement**: The basic technique for preventing leaks is to have every resource owned by a resource handle with a suitable destructor. A checker can find "naked `new`s". Given a list of C-style allocation functions (e.g., `fopen()`), a checker can also find uses that are not managed by a resource handle. In general, "naked pointers" can be viewed with suspicion, flagged, and/or analyzed. A a complete list of resources cannot be generated without human input (the definition of "a resource" is necessarily too general), but a tool can be "parameterized" with a resource list. -### <a name="Cr never">Never throw while holding a resource not owned by a handle</a> +### <a name="Cr never"></a>Never throw while holding a resource not owned by a handle **Reason**: That would be a leak. @@ -12251,14 +12251,14 @@ For starters, we know about the standard-library containers, `string`, and smart The use of `array_view` and `string_view` should help a lot (they are not resource handles). -### <a name="Cr raw">A "raw" pointer or reference is never a resource handle</a> +### <a name="Cr raw"></a>A "raw" pointer or reference is never a resource handle **Reason** To be able to distinguish owners from views. **Note**: This is independent of how you "spell" pointer: `T*`, `T&`, `Ptr<T>` and `Range<T>` are not owners. -### <a name="Cr outlive">Never let a pointer outlive the object it points to</a> +### <a name="Cr outlive"></a>Never let a pointer outlive the object it points to **Reason**: To avoid extremely hard-to-find errors. Dereferencing such a pointer is undefined behavior and could lead to violations of the type system. @@ -12283,7 +12283,7 @@ The `string`s of `v` are destroyed upon exit from `bad()` and so is `v` itself. **Enforcement**: Most compilers already warn about simple cases and has the information to do more. Consider any pointer returned from a function suspect. Use containers, resource handles, and views (e.g., `array_view` known not to be resource handles) to lower the number of cases to be examined. For starters, consider every class with a destructor a resource handle. -### <a name="Cr templates">Use templates to express containers (and other resource handles)</a> +### <a name="Cr templates"></a>Use templates to express containers (and other resource handles) **Reason**: To provide statically type-safe manipulation of elements. @@ -12295,7 +12295,7 @@ The `string`s of `v` are destroyed upon exit from `bad()` and so is `v` itself. 
int sz; }; -### <a name="Cr value return">Return containers by value (relying on move for efficiency)</a> +### <a name="Cr value return"></a>Return containers by value (relying on move for efficiency) **Reason**: To simplify code and eliminate a need for explicit memory management. To bring an object into a surrounding scope, thereby extending its lifetime. @@ -12310,7 +12310,7 @@ The `string`s of `v` are destroyed upon exit from `bad()` and so is `v` itself. **Enforcement**: Check for pointers and references returned from functions and see if they are assigned to resource handles (e.g., to a `unique_ptr`). -### <a name="Cr handle">If a class is a resource handle, it needs a constructor, a destructor, and copy and/or move operations</a> +### <a name="Cr handle"></a>If a class is a resource handle, it needs a constructor, a destructor, and copy and/or move operations **Reason**: To provide complete control of the lifetime of the resource. To provide a coherent set of operations on the resource. @@ -12330,7 +12330,7 @@ Now `Named` has a default constructor, a destructor, and efficient copy and move **Enforcement**: In general, a tool cannot know if a class is a resource handle. However, if a class has some of [the default operations](???), it should have all, and if a class has a member that is a resource handle, it should be considered a resource handle. -### <a name="Cr list">If a class is a container, give it an initializer-list constructor</a> +### <a name="Cr list"></a>If a class is a container, give it an initializer-list constructor **Reason**: It is common to need an initial set of elements.
Otherwise the header becomes an ugly no-op link (also for consistency).
https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/163
2015-09-27T08:05:01Z
2015-09-27T16:39:49Z
2015-09-27T16:39:49Z
2015-09-30T08:16:49Z
2,119
isocpp/CppCoreGuidelines
15,804
Rename twitter to X
diff --git a/README.md b/README.md index 7e8d7381e1..75a0f3d802 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,5 @@ # FastChat -| [**Demo**](https://chat.lmsys.org/) | [**Discord**](https://discord.gg/HSWAKCrnFx) | [**Twitter**](https://twitter.com/lmsysorg) | +| [**Demo**](https://chat.lmsys.org/) | [**Discord**](https://discord.gg/HSWAKCrnFx) | [**X**](https://x.com/lmsysorg) | FastChat is an open platform for training, serving, and evaluating large language model based chatbots. The core features include: - The weights, training code, and evaluation code for state-of-the-art models (e.g., Vicuna).
changed twitter redirect to x.com

<!-- Thank you for your contribution! -->

<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->

## Why are these changes needed?

<!-- Please give a short summary of the change and the problem this solves. -->

## Related issue number (if applicable)

<!-- For example: "Closes #1234" -->

## Checks

- [ ] I've run `format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed.
- [ ] I've made sure the relevant tests are passing (if applicable).
https://api.github.com/repos/lm-sys/FastChat/pulls/2406
2023-09-12T05:06:26Z
2023-09-12T08:24:30Z
2023-09-12T08:24:30Z
2023-09-12T08:24:30Z
209
lm-sys/FastChat
41,140
Replaced encode() usage with bytes literals.
diff --git a/tests/auth_tests/test_views.py b/tests/auth_tests/test_views.py index 42acafd26de01..521013d18d2af 100644 --- a/tests/auth_tests/test_views.py +++ b/tests/auth_tests/test_views.py @@ -444,7 +444,7 @@ def _test_confirm_start(self): def test_confirm_invalid_uuid(self): """A uidb64 that decodes to a non-UUID doesn't crash.""" _, path = self._test_confirm_start() - invalid_uidb64 = urlsafe_base64_encode('INVALID_UUID'.encode()) + invalid_uidb64 = urlsafe_base64_encode(b'INVALID_UUID') first, _uuidb64_, second = path.strip('/').split('/') response = self.client.get('/' + '/'.join((first, invalid_uidb64, second)) + '/') self.assertContains(response, 'The password reset link was invalid') diff --git a/tests/httpwrappers/tests.py b/tests/httpwrappers/tests.py index 0c81cd0341836..bafdd2891cf34 100644 --- a/tests/httpwrappers/tests.py +++ b/tests/httpwrappers/tests.py @@ -294,7 +294,7 @@ def test_headers_type(self): # ASCII strings or bytes values are converted to strings. r['key'] = 'test' self.assertEqual(r['key'], 'test') - r['key'] = 'test'.encode('ascii') + r['key'] = b'test' self.assertEqual(r['key'], 'test') self.assertIn(b'test', r.serialize_headers()) @@ -334,7 +334,7 @@ def test_long_line(self): # Bug #20889: long lines trigger newlines to be added to headers # (which is not allowed due to bug #10188) h = HttpResponse() - f = 'zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz a\xcc\x88'.encode('latin-1') + f = b'zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz a\xcc\x88' f = f.decode('utf-8') h['Content-Disposition'] = 'attachment; filename="%s"' % f # This one is triggering https://bugs.python.org/issue20747, that is Python
https://api.github.com/repos/django/django/pulls/12097
2019-11-18T12:52:48Z
2019-11-18T14:31:43Z
2019-11-18T14:31:43Z
2019-11-28T11:21:10Z
513
django/django
51,121
Revert "Re-enable parallel builds in CI"
diff --git a/.circleci/setup_env.sh b/.circleci/setup_env.sh index 7f82b613f8cb8..52a8cab1cd2de 100755 --- a/.circleci/setup_env.sh +++ b/.circleci/setup_env.sh @@ -55,7 +55,8 @@ if pip list | grep -q ^pandas; then fi echo "Build extensions" -python setup.py build_ext -q -j4 +# GH 47305: Parallel build can causes flaky ImportError from pandas/_libs/tslibs +python setup.py build_ext -q -j1 echo "Install pandas" python -m pip install --no-build-isolation --no-use-pep517 -e . diff --git a/.github/workflows/32-bit-linux.yml b/.github/workflows/32-bit-linux.yml index e14be521ff523..4e363c7fd573d 100644 --- a/.github/workflows/32-bit-linux.yml +++ b/.github/workflows/32-bit-linux.yml @@ -42,7 +42,7 @@ jobs: python -m pip install --no-deps -U pip wheel 'setuptools<60.0.0' && \ python -m pip install versioneer[toml] && \ python -m pip install cython numpy python-dateutil pytz pytest>=7.0.0 pytest-xdist>=2.2.0 pytest-asyncio>=0.17 hypothesis>=6.34.2 && \ - python setup.py build_ext -q -j$(nproc) && \ + python setup.py build_ext -q -j1 && \ python -m pip install --no-build-isolation --no-use-pep517 -e . && \ python -m pip list && \ export PANDAS_CI=1 && \ diff --git a/.github/workflows/python-dev.yml b/.github/workflows/python-dev.yml index 5730e2b75f48f..910b68dce07d0 100644 --- a/.github/workflows/python-dev.yml +++ b/.github/workflows/python-dev.yml @@ -82,9 +82,10 @@ jobs: python -m pip install python-dateutil pytz cython hypothesis>=6.34.2 pytest>=7.0.0 pytest-xdist>=2.2.0 pytest-cov pytest-asyncio>=0.17 python -m pip list + # GH 47305: Parallel build can cause flaky ImportError from pandas/_libs/tslibs - name: Build Pandas run: | - python setup.py build_ext -q -j4 + python setup.py build_ext -q -j1 python -m pip install -e . --no-build-isolation --no-use-pep517 --no-index - name: Build Version
Reverts pandas-dev/pandas#51902. Depends on #51525, which is reverted by #51951.
https://api.github.com/repos/pandas-dev/pandas/pulls/51952
2023-03-13T23:26:53Z
2023-03-14T23:23:57Z
2023-03-14T23:23:57Z
2023-03-15T02:33:59Z
618
pandas-dev/pandas
45,486
Fixed a cross-platform endian issue
diff --git a/doc/whats_new/v0.24.rst b/doc/whats_new/v0.24.rst index b67004f1e8b00..335a7499ed21e 100644 --- a/doc/whats_new/v0.24.rst +++ b/doc/whats_new/v0.24.rst @@ -367,6 +367,10 @@ Changelog :meth:`tree.DecisionTreeRegressor.fit`, and has not effect. :pr:`17614` by :user:`Juan Carlos Alfaro Jiménez <alfaro96>`. +- |Fix| Allow serialized tree based models to be unpickled on a machine + with different endianness. + :pr:`17644` by :user:`Qi Zhang <qzhang90>`. + Code and Documentation Contributors ----------------------------------- diff --git a/sklearn/tree/_tree.pyx b/sklearn/tree/_tree.pyx index 9d6f02bd16103..f4484ab1a3314 100644 --- a/sklearn/tree/_tree.pyx +++ b/sklearn/tree/_tree.pyx @@ -652,8 +652,15 @@ cdef class Tree: value_shape = (node_ndarray.shape[0], self.n_outputs, self.max_n_classes) + + if (node_ndarray.dtype != NODE_DTYPE): + # possible mismatch of big/little endian due to serialization + # on a different architecture. Try swapping the byte order. + node_ndarray = node_ndarray.byteswap().newbyteorder() + if (node_ndarray.dtype != NODE_DTYPE): + raise ValueError('Did not recognise loaded array dytpe') + if (node_ndarray.ndim != 1 or - node_ndarray.dtype != NODE_DTYPE or not node_ndarray.flags.c_contiguous or value_ndarray.shape != value_shape or not value_ndarray.flags.c_contiguous or
This PR fixes a problem I encountered when loading a GradientBoostingClassifier/GradientBoostingRegressor model that was trained on a little-endian machine onto a big-endian machine. The error occurred at https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tree/_tree.pyx#L671 because the loaded `node_ndarray` was in little-endian byte order while the machine expected big-endian. The fix in this PR is to try swapping the byte order of the `node_ndarray` before raising an error, and to check whether the swapped array then satisfies the `dtype` check.
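A minimal sketch of the idea behind the patch (not the scikit-learn code itself; the `NODE_DTYPE`-like dtype and the values are made up for illustration):

```python
# Sketch: a structured array serialized on a machine with the opposite byte order
# no longer matches the expected dtype; byteswap().newbyteorder() (the call used
# in the patch) swaps the raw bytes and flips the dtype, so the values survive.
import numpy as np

NODE_DTYPE = np.dtype([("left_child", "<i8"), ("threshold", "<f8")])  # illustrative

# Simulate an array that was pickled on a big-endian machine.
foreign = np.array([(1, 0.5)], dtype=[("left_child", ">i8"), ("threshold", ">f8")])
assert foreign.dtype != NODE_DTYPE

# Possible mismatch of big/little endian: try swapping the byte order.
# (Newer NumPy removed ndarray.newbyteorder(); there you would use
#  foreign.byteswap().view(foreign.dtype.newbyteorder()) instead.)
swapped = foreign.byteswap().newbyteorder()

assert swapped.dtype == NODE_DTYPE
assert swapped["left_child"][0] == 1 and swapped["threshold"][0] == 0.5
```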
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/17644
2020-06-19T21:30:01Z
2020-08-12T14:43:09Z
2020-08-12T14:43:09Z
2020-08-12T14:43:10Z
437
scikit-learn/scikit-learn
46,422
Remove python eval from vector sql db chain
diff --git a/libs/experimental/langchain_experimental/sql/vector_sql.py b/libs/experimental/langchain_experimental/sql/vector_sql.py index 43c42f7096a59b..9c0ee021d1dfc9 100644 --- a/libs/experimental/langchain_experimental/sql/vector_sql.py +++ b/libs/experimental/langchain_experimental/sql/vector_sql.py @@ -8,7 +8,7 @@ from langchain.chains.sql_database.prompt import PROMPT, SQL_PROMPTS from langchain.prompts.prompt import PromptTemplate from langchain.schema import BaseOutputParser, BasePromptTemplate -from langchain.schema.base import Embeddings +from langchain.schema.embeddings import Embeddings from langchain.schema.language_model import BaseLanguageModel from langchain.tools.sql_database.prompt import QUERY_CHECKER from langchain.utilities.sql_database import SQLDatabase @@ -76,23 +76,11 @@ def parse(self, text: str) -> str: return super().parse(text) -def _try_eval(x: Any) -> Any: - try: - return eval(x) - except Exception: - return x - - def get_result_from_sqldb( db: SQLDatabase, cmd: str ) -> Union[str, List[Dict[str, Any]], Dict[str, Any]]: result = db._execute(cmd, fetch="all") # type: ignore - if isinstance(result, list): - return [{k: _try_eval(v) for k, v in dict(d._asdict()).items()} for d in result] - else: - return { - k: _try_eval(v) for k, v in dict(result._asdict()).items() # type: ignore - } + return result class VectorSQLDatabaseChain(SQLDatabaseChain):
https://api.github.com/repos/langchain-ai/langchain/pulls/10937
2023-09-22T14:43:43Z
2023-09-23T15:51:04Z
2023-09-23T15:51:04Z
2023-09-23T15:51:05Z
390
langchain-ai/langchain
42,945
Fix a test case inside tests/models that leaves a trace in the DB
diff --git a/tests/models/test_dagbag.py b/tests/models/test_dagbag.py index 787086cf9d642..343eb468b9f25 100644 --- a/tests/models/test_dagbag.py +++ b/tests/models/test_dagbag.py @@ -44,27 +44,25 @@ from tests.test_utils import db from tests.test_utils.asserts import assert_queries_count from tests.test_utils.config import conf_vars -from tests.test_utils.permissions import delete_dag_specific_permissions + + +def db_clean_up(): + db.clear_db_dags() + db.clear_db_runs() + db.clear_db_serialized_dags() + db.clear_dag_specific_permissions() class TestDagBag: @classmethod def setup_class(cls): cls.empty_dir = mkdtemp() + db_clean_up() @classmethod def teardown_class(cls): shutil.rmtree(cls.empty_dir) - - def setup_methods(self) -> None: - db.clear_db_dags() - db.clear_db_runs() - db.clear_db_serialized_dags() - - def teardown_method(self) -> None: - db.clear_db_dags() - db.clear_db_runs() - db.clear_db_serialized_dags() + db_clean_up() def test_get_existing_dag(self): """ @@ -774,6 +772,7 @@ def test_sync_to_db_syncs_dag_specific_perms_on_update(self): Test that dagbag.sync_to_db will sync DAG specific permissions when a DAG is new or updated """ + db_clean_up() session = settings.Session() with freeze_time(tz.datetime(2020, 1, 5, 0, 0, 0)) as frozen_time: dagbag = DagBag( @@ -807,7 +806,7 @@ def test_sync_perm_for_dag(self, mock_security_manager): Test that dagbag._sync_perm_for_dag will call ApplessAirflowSecurityManager.sync_perm_for_dag when DAG specific perm views don't exist already or the DAG has access_control set. """ - delete_dag_specific_permissions() + db_clean_up() with create_session() as session: security_manager = ApplessAirflowSecurityManager(session) mock_sync_perm_for_dag = mock_security_manager.return_value.sync_perm_for_dag @@ -879,6 +878,7 @@ def test_get_dag_with_dag_serialization(self): def test_collect_dags_from_db(self): """DAGs are collected from Database""" + db.clear_db_dags() example_dags_folder = airflow.example_dags.__path__[0] dagbag = DagBag(example_dags_folder) diff --git a/tests/test_utils/db.py b/tests/test_utils/db.py index c6559b8e1ddfe..2c0d0ae48a15e 100644 --- a/tests/test_utils/db.py +++ b/tests/test_utils/db.py @@ -36,8 +36,10 @@ ) from airflow.models.dagcode import DagCode from airflow.models.serialized_dag import SerializedDagModel +from airflow.security.permissions import RESOURCE_DAG_PREFIX from airflow.utils.db import add_default_pool_if_not_exists, create_default_connections from airflow.utils.session import create_session +from airflow.www.fab_security.sqla.models import Permission, Resource, assoc_permission_role def clear_db_runs(): @@ -126,3 +128,20 @@ def clear_db_task_fail(): def clear_db_task_reschedule(): with create_session() as session: session.query(TaskReschedule).delete() + + +def clear_dag_specific_permissions(): + with create_session() as session: + dag_resources = session.query(Resource).filter(Resource.name.like(f"{RESOURCE_DAG_PREFIX}%")).all() + dag_resource_ids = [d.id for d in dag_resources] + + dag_permissions = session.query(Permission).filter(Permission.resource_id.in_(dag_resource_ids)).all() + dag_permission_ids = [d.id for d in dag_permissions] + + session.query(assoc_permission_role).filter( + assoc_permission_role.c.permission_view_id.in_(dag_permission_ids) + ).delete(synchronize_session=False) + session.query(Permission).filter(Permission.resource_id.in_(dag_resource_ids)).delete( + synchronize_session=False + ) + session.query(Resource).filter(Resource.id.in_(dag_resource_ids)).delete(synchronize_session=False) 
diff --git a/tests/test_utils/permissions.py b/tests/test_utils/permissions.py deleted file mode 100644 index eabc64c503313..0000000000000 --- a/tests/test_utils/permissions.py +++ /dev/null @@ -1,38 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one -# or more contributor license agreements. See the NOTICE file -# distributed with this work for additional information -# regarding copyright ownership. The ASF licenses this file -# to you under the Apache License, Version 2.0 (the -# "License"); you may not use this file except in compliance -# with the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - - -from airflow.security.permissions import RESOURCE_DAG_PREFIX -from airflow.utils.session import create_session -from airflow.www.fab_security.sqla.models import Permission, Resource, assoc_permission_role - - -def delete_dag_specific_permissions(): - with create_session() as session: - dag_resources = session.query(Resource).filter(Resource.name.like(f"{RESOURCE_DAG_PREFIX}%")).all() - dag_resource_ids = [d.id for d in dag_resources] - - dag_permissions = session.query(Permission).filter(Permission.resource_id.in_(dag_resource_ids)).all() - dag_permission_ids = [d.id for d in dag_permissions] - - session.query(assoc_permission_role).filter( - assoc_permission_role.c.permission_view_id.in_(dag_permission_ids) - ).delete(synchronize_session=False) - session.query(Permission).filter(Permission.resource_id.in_(dag_resource_ids)).delete( - synchronize_session=False - ) - session.query(Resource).filter(Resource.id.in_(dag_resource_ids)).delete(synchronize_session=False)
This fixes the `test_sync_perm_for_dag` test case inside `test_dagbag.py`, which leaves a record for a test DAG permission in the DB. That record causes a failure in another test case (tests/www/test_security.py::test_verify_public_role_has_no_permissions) in the other suite when the suites run sequentially. To reproduce the issue within Breeze: `pytest --disable-pytest-warnings tests/models/test_dagbag.py::TestDagBag::test_sync_perm_for_dag` then `pytest --disable-pytest-warnings tests/www/test_security.py::test_verify_public_role_has_no_permissions` Edit: Also moved `delete_dag_specific_permissions` to the `db` module for consistency, and fixed `setup_method`, which wasn't executed due to a misspelling.
https://api.github.com/repos/apache/airflow/pulls/20881
2022-01-14T16:51:53Z
2022-01-18T16:39:00Z
2022-01-18T16:39:00Z
2022-01-18T16:39:00Z
1,435
apache/airflow
14,277
Add doc for add-new-model-like command
diff --git a/templates/adding_a_new_model/README.md b/templates/adding_a_new_model/README.md index d19270bf75af3..496c4f004be57 100644 --- a/templates/adding_a_new_model/README.md +++ b/templates/adding_a_new_model/README.md @@ -14,13 +14,16 @@ See the License for the specific language governing permissions and limitations under the License. --> -# Using `cookiecutter` to generate models +# Adding a new model This folder contains templates to generate new models that fit the current API and pass all tests. It generates models in both PyTorch, TensorFlow, and Flax and completes the `__init__.py` and auto-modeling files, and creates the -documentation. +documentation. Their use is described in the [next section](#cookiecutter-templates). -## Usage +There is also a CLI tool to generate a new model like an existing one called `transformers-cli add-new-model-like`. +Jump to the [Add new model like section](#add-new-model-like-command) to learn how to use it. + +## Cookiecutter Templates Using the `cookiecutter` utility requires to have all the `dev` dependencies installed. Let's first clone the repository and install it in our environment: @@ -81,7 +84,7 @@ Choose from 1, 2 [1]: Once the command has finished, you should have a total of 7 new files spread across the repository: ``` -docs/source/model_doc/<model_name>.rst +docs/source/model_doc/<model_name>.mdx src/transformers/models/<model_name>/configuration_<model_name>.py src/transformers/models/<model_name>/modeling_<model_name>.py src/transformers/models/<model_name>/modeling_tf_<model_name>.py @@ -118,3 +121,136 @@ will be merged quickly: library's standards. - You should complete the documentation file (`docs/source/model_doc/<model_name>.rst`) so that your model may be usable. + +## Add new model like command + +Using the `transformers-cli add-new-model-like` command requires to have all the `dev` dependencies installed. Let's +first clone the repository and install it in our environment: + +```shell script +git clone https://github.com/huggingface/transformers +cd transformers +pip install -e ".[dev]" +``` + +Once the installation is done, you can use the CLI command `add-new-model-like` to generate your models: + +```shell script +transformers-cli add-new-model-like +``` + +This will start a small questionnaire you have to fill. + +``` +What identifier would you like to use for the model type of this model? +``` + +You will have to input the model type of the model you want to clone. The model type can be found in several places: +- inside the configuration of any checkpoint of that model +- the name of the documentation page of that model + +For instance the doc page of `BigBirdPegasus` is `https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus` +so its model type is `"bigbird_pegasus"`. + +If you make a typo, the command will suggest you the closest model types it can find. + +Once this is done, the questionnaire will ask you for the new model name and its various casings: + +``` +What is the name for your new model? +What identifier would you like to use for the model type of this model? +What name would you like to use for the module of this model? +What prefix (camel-cased) would you like to use for the model classes of this model? +What prefix (upper-cased) would you like to use for the constants relative to this model? +``` + +From your answer to the first question, defaults will be determined for all others. 
The first name should be written +as you want your model be named in the doc, with no special casing (like RoBERTa) and from there, you can either stick +with the defaults or change the cased versions. + +Next will be the name of the config class to use for this model: + +``` +What will be the name of the config class for this model? +``` + +Then, you will be asked for a checkpoint identifier: + +``` +Please give a checkpoint identifier (on the model Hub) for this new model. +``` + +This is the checkpoint that will be used in the examples across the files and the integration tests. Put the name you +wish, as it will appear on the Model Hub. Do not forget to include the organisation. + +Then you will have to say whether your model re-uses the same processing classes as the model you're cloning: + +``` +Will your new model use the same processing class as Xxx (XxxTokenizer/XxxFeatureExtractor) +``` + +Answer yes if you have no intentions to make any change to the class used for preprocessing. It can use different +files (for instance you can reuse the `BertTokenizer` with a new vocab file). + +If you answer no, you will have to give the name of the classes +for the new tokenizer/feature extractor/processor (depending on the model you're cloning). + +Next the questionnaire will ask + +``` +Should we add # Copied from statements when creating the new modeling file? +``` + +This is the intenal mechanism used in the library to make sure code copied from various modeling files stay consistent. +If you plan to completely rewrite the modeling file, you should answer no, whereas if you just want to tweak one part +of the model, you should answer yes. + +Lastly, the questionnaire will inquire about frameworks: + +``` +Should we add a version of your new model in all the frameworks implemented by Old Model (xxx)? +``` + +If you answer yes, the new model will have files for all the frameworks implemented by the model you're cloning. +Otherwise, you will get a new question to select the frameworks you want. + +Once the command has finished, you will see a new subfolder in the `src/transformers/models/` folder, with the +necessary files (configuration and modeling files for all frameworks requested, and maybe the processing files, +depending on your choices). + +You will also see a doc file and tests for your new models. First you should run + +``` +make style +maxke fix-copies +``` + +and then you can start tweaking your model. You should: +- fill the doc file at `docs/source/model_doc/model_name.mdx` +- tweak the configuration and modeling files to your need + +Once you're done, you can run the tests to ensure that they all pass: + +``` +python -m pytest ./tests/test_*<model_name>*.py +``` + +⚠ You should be careful about the classes preceded by the following line:️ + +```python +# Copied from transformers.[...] +``` + +This line ensures that the copy does not diverge from the source. If it *should* diverge, because the implementation +is different, this line needs to be deleted. If you don't delete this line and run `make fix-copies`, +your changes will be overwritten. + +Once you have edited the files to fit your architecture, simply re-run the tests (and edit them if a change +is needed!) afterwards to make sure everything works as expected. 
+ +Once the files are generated and you are happy with your changes, here's a checklist to ensure that your contribution +will be merged quickly: + +- You should run the `make fixup` utility to fix the style of the files and to ensure the code quality meets the + library's standards. +- You should add your model to the main README then run `make fix-copies`.
# What does this PR do? This PR adds documentation for the new command `add-new-model-like`.
https://api.github.com/repos/huggingface/transformers/pulls/15433
2022-01-31T15:48:07Z
2022-01-31T16:10:46Z
2022-01-31T16:10:46Z
2022-01-31T16:11:15Z
1,741
huggingface/transformers
12,368
fix: Use sentry:release in results for get_release_tags
diff --git a/src/sentry/tagstore/snuba/backend.py b/src/sentry/tagstore/snuba/backend.py index 706937af24063..bd0a9698e3f77 100644 --- a/src/sentry/tagstore/snuba/backend.py +++ b/src/sentry/tagstore/snuba/backend.py @@ -288,15 +288,16 @@ def get_release_tags(self, project_ids, environment_id, versions): # NB we add release as a condition rather than a filter because # this method is already dealing with version strings rather than # release ids which would need to be translated by the snuba util. - key = 'tags[sentry:release]' - conditions = [[key, 'IN', versions]] + tag = 'sentry:release' + col = 'tags[{}]'.format(tag) + conditions = [[col, 'IN', versions]] aggregations = [ ['count()', '', 'times_seen'], ['min', SEEN_COLUMN, 'first_seen'], ['max', SEEN_COLUMN, 'last_seen'], ] - result = snuba.query(start, end, ['project_id', key], + result = snuba.query(start, end, ['project_id', col], conditions, filters, aggregations, referrer='tagstore.get_release_tags') @@ -305,7 +306,7 @@ def get_release_tags(self, project_ids, environment_id, versions): for value, data in six.iteritems(project_data): values.append( TagValue( - key=key, + key=tag, value=value, **fix_tag_value_data(data) ) @@ -383,7 +384,7 @@ def get_group_event_filter(self, project_id, group_id, environment_id, tags): conditions = [[['tags[{}]'.format(k), '=', v] for (k, v) in tags.items()]] result = snuba.query(start, end, groupby=['event_id'], conditions=conditions, - filter_keys=filters, limit=1000, referrer='tagstore.get_group_event_filter') + filter_keys=filters, limit=1000, referrer='tagstore.get_group_event_filter') if not result: return None diff --git a/tests/snuba/tagstore/test_tagstore_backend.py b/tests/snuba/tagstore/test_tagstore_backend.py index 2aae341d664b6..3ce8056341597 100644 --- a/tests/snuba/tagstore/test_tagstore_backend.py +++ b/tests/snuba/tagstore/test_tagstore_backend.py @@ -287,6 +287,7 @@ def test_get_release_tags(self): assert tags[0].last_seen == one_second_ago assert tags[0].first_seen == one_second_ago assert tags[0].times_seen == 1 + assert tags[0].key == 'sentry:release' def test_get_group_event_filter(self): assert self.ts.get_group_event_filter(
The column we need to query snuba for is `tags[sentry:release]`, but the actual tag name is just `sentry:release`. I think this is the only case in here that needs this particular fix.
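A tiny stand-alone illustration of the distinction (plain Python, not the Sentry codebase; the dict shapes are invented for the example):

```python
# The snuba column name is derived from the tag key, but callers of
# get_release_tags should see the bare tag as the returned key.
tag = "sentry:release"
col = "tags[{}]".format(tag)       # what the snuba query groups/filters on
assert col == "tags[sentry:release]"

# A result row keyed by the column; the fix reports `tag`, not `col`.
row = {col: "1.0.0", "times_seen": 3}
tag_value = {"key": tag, "value": row[col], "times_seen": row["times_seen"]}
assert tag_value["key"] == "sentry:release"
```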
https://api.github.com/repos/getsentry/sentry/pulls/8823
2018-06-22T18:37:40Z
2018-06-22T19:59:12Z
2018-06-22T19:59:12Z
2020-12-21T19:40:21Z
650
getsentry/sentry
44,456
Small changes to enable make gce to run.
diff --git a/test/integration/credentials.template b/test/integration/credentials.template index bbf7c9ba6eaf86..4e2d3afcd6873e 100644 --- a/test/integration/credentials.template +++ b/test/integration/credentials.template @@ -9,9 +9,9 @@ ec2_access_key: ec2_secret_key: # GCE Credentials -service_account_email: -pem_file: -project_id: +gce_service_account_email: +gce_pem_file: +gce_project_id: # Azure Credentials azure_subscription_id: "{{ lookup('env', 'AZURE_SUBSCRIPTION_ID') }}" diff --git a/test/integration/gce_credentials.py b/test/integration/gce_credentials.py index 0d7ae81cae4371..e474bf0307eb28 100644 --- a/test/integration/gce_credentials.py +++ b/test/integration/gce_credentials.py @@ -1,5 +1,6 @@ import collections import os +import sys import yaml try:
##### ISSUE TYPE - Bugfix Pull Request - fixes GCE integration tests ##### ANSIBLE VERSION ansible 2.2.0 (gce_test_fixes 166903c36b) last updated 2016/07/06 16:29:53 (GMT +000) lib/ansible/modules/core: (detached HEAD 4a0a9cd1fc) last updated 2016/07/06 16:23:20 (GMT +000) lib/ansible/modules/extras: (detached HEAD e0b3e2f790) last updated 2016/07/06 16:23:22 (GMT +000) config file = configured module search path = Default w/o overrides ##### SUMMARY Changes to enable `make gce` to run. Added a sys import so the libcloud error is displayed; renamed the credentials keys in the template file so they work properly with gce_credentials.py.
https://api.github.com/repos/ansible/ansible/pulls/16607
2016-07-06T16:33:55Z
2016-09-16T14:02:18Z
2016-09-16T14:02:18Z
2019-04-26T17:01:06Z
229
ansible/ansible
48,951
add new tamper script substring2leftright.py
diff --git a/tamper/substring2leftright.py b/tamper/substring2leftright.py new file mode 100644 index 00000000000..9eb3eea430b --- /dev/null +++ b/tamper/substring2leftright.py @@ -0,0 +1,47 @@ +#!/usr/bin/env python + +""" +Copyright (c) 2006-2019 sqlmap developers (http://sqlmap.org/) +See the file 'LICENSE' for copying permission +""" + +import re + +from lib.core.enums import PRIORITY + +__priority__ = PRIORITY.NORMAL + +def dependencies(): + pass + +def tamper(payload, **kwargs): + """ + Replaces PostgreSQL SUBSTRING with LEFT and RIGHT + + Tested against: + * PostgreSQL 9.6.12 + + Note: + * Useful to bypass weak web application firewalls that filter SUBSTRING (but not LEFT and RIGHT) + + >>> tamper('SUBSTRING((X FROM 1 FOR 1))') + 'LEFT(X,1)' + >>> tamper('SUBSTRING((X FROM 5 FOR 1))') + 'LEFT(RIGHT(X,-4),1)' + """ + + retVal = payload + + if payload: + match = re.search(r"SUBSTRING\(\((.*)\sFROM\s(\d+)\sFOR\s1\)\)", payload) + + if match: + pos = int(match.group(2)) + if pos == 1: + _ = "LEFT((%s,1))" % (match.group(1)) + else: + _ = "LEFT(RIGHT((%s,%d),1))" % (match.group(1), 1-pos) + + retVal = retVal.replace(match.group(0), _) + + return retVal
Replaces PostgreSQL "SUBSTRING" with equivalent LEFT and RIGHT functions. Useful if a weak WAF blocks SUBSTRING but not LEFT and RIGHT. Of course, this only works with PostgreSQL (but similar solutions may exist for other DBMSes as well).
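A self-contained sketch of the rewrite the tamper script performs (simplified; the function name and the exact parenthesisation are illustrative, not sqlmap's API):

```python
import re

def substring_to_left_right(payload):
    # SUBSTRING((X FROM pos FOR 1)) picks the character at position `pos`.
    match = re.search(r"SUBSTRING\(\((.*)\sFROM\s(\d+)\sFOR\s1\)\)", payload)
    if not match:
        return payload
    expr, pos = match.group(1), int(match.group(2))
    if pos == 1:
        rewritten = "LEFT((%s),1)" % expr
    else:
        # In PostgreSQL, RIGHT(x, -n) drops the first n characters, so
        # LEFT(RIGHT(x, -(pos-1)), 1) yields the character at position pos.
        rewritten = "LEFT(RIGHT((%s),%d),1)" % (expr, 1 - pos)
    return payload.replace(match.group(0), rewritten)

print(substring_to_left_right("SUBSTRING((X FROM 5 FOR 1))"))  # LEFT(RIGHT((X),-4),1)
```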
https://api.github.com/repos/sqlmapproject/sqlmap/pulls/3527
2019-03-08T21:26:32Z
2019-03-11T10:17:30Z
2019-03-11T10:17:30Z
2019-03-11T23:18:35Z
416
sqlmapproject/sqlmap
14,940
Remove stale action workflow file
diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml deleted file mode 100644 index 42353d233a29..000000000000 --- a/.github/workflows/stale.yml +++ /dev/null @@ -1,31 +0,0 @@ -# name: Mark/Close stale issues and pull requests -# on: -# schedule: -# - cron: "0 * * * *" # Run every hour -# jobs: -# stale: -# runs-on: ubuntu-latest -# steps: -# - uses: actions/[email protected] -# with: -# repo-token: ${{ secrets.GITHUB_TOKEN }} -# days-before-stale: 30 -# days-before-close: 7 -# stale-issue-message: > -# This issue has been automatically marked as stale because it has not had -# recent activity. It will be closed if no further activity occurs. Thank you -# for your contributions. -# close-issue-message: > -# Please reopen this issue once you add more information and updates here. -# If this is not the case and you need some help, feel free to seek help -# from our [Gitter](https://gitter.im/TheAlgorithms) or ping one of the -# reviewers. Thank you for your contributions! -# stale-pr-message: > -# This pull request has been automatically marked as stale because it has not had -# recent activity. It will be closed if no further activity occurs. Thank you -# for your contributions. -# close-pr-message: > -# Please reopen this pull request once you commit the changes requested -# or make improvements on the code. If this is not the case and you need -# some help, feel free to seek help from our [Gitter](https://gitter.im/TheAlgorithms) -# or ping one of the reviewers. Thank you for your contributions!
https://api.github.com/repos/TheAlgorithms/Python/pulls/3915
2020-11-21T03:06:47Z
2020-11-21T03:34:50Z
2020-11-21T03:34:50Z
2020-11-21T09:26:58Z
454
TheAlgorithms/Python
29,805
Fix docstring of test_request_context
diff --git a/src/flask/app.py b/src/flask/app.py index db442c9edf..ce4dcf6a7d 100644 --- a/src/flask/app.py +++ b/src/flask/app.py @@ -2448,7 +2448,7 @@ def test_request_context(self, *args: t.Any, **kwargs: t.Any) -> RequestContext: :data:`request` point at the request for the created environment. :: - with test_request_context(...): + with app.test_request_context(...): generate_report() When using the shell, it may be easier to push and pop the
Add missing `app.`.
https://api.github.com/repos/pallets/flask/pulls/4821
2022-09-18T11:13:15Z
2022-09-18T11:53:47Z
2022-09-18T11:53:47Z
2022-10-03T00:09:39Z
143
pallets/flask
20,851
[infer]Fix some bugs in test_llama and test_bloom
diff --git a/tests/test_infer/test_bloom_infer.py b/tests/test_infer/test_bloom_infer.py index dad3f9cb295f..1f01460994d9 100644 --- a/tests/test_infer/test_bloom_infer.py +++ b/tests/test_infer/test_bloom_infer.py @@ -21,7 +21,7 @@ def run(): - model_path = "/data3/models/bloom-7b1" + model_path = "/home/lczyh/data3/models/bloom-7b1" if os.path.isdir(model_path) is False: return @@ -43,7 +43,7 @@ def run(): infer_engine.shard_model_by(shardformer) generate_kwargs = dict(do_sample=False) - outputs = infer_engine.generate(input_ids, generate_kwargs) + outputs = infer_engine.generate(input_ids, **generate_kwargs) if not dist.is_initialized() or dist.get_rank() == 0: output_text = tokenizer.decode(outputs[0]) diff --git a/tests/test_infer/test_llama_infer.py b/tests/test_infer/test_llama_infer.py index 1d043ba59338..986f70633289 100644 --- a/tests/test_infer/test_llama_infer.py +++ b/tests/test_infer/test_llama_infer.py @@ -15,7 +15,7 @@ from colossalai.testing import clear_cache_before_run, parameterize, rerun_if_address_is_in_use, spawn os.environ['TRANSFORMERS_NO_ADVISORY_WARNINGS'] = 'true' -TPSIZE = 1 +TPSIZE = 2 BATCH_SIZE = 8 MAX_INPUT_LEN = 12 MAX_OUTPUT_LEN = 100 @@ -46,10 +46,7 @@ def init_to_get_rotary(self, base=10000): return -@parameterize('test_config', [{ - 'tp_size': TPSIZE, -}]) -def run_llama_test(test_config): +def run_llama_test(): llama_model_path = "/data/scratch/llama-7b-hf" if os.path.isdir(llama_model_path) is False: @@ -73,14 +70,14 @@ def run_llama_test(test_config): infer_engine.shard_model_by(shardformer) generate_kwargs = dict(max_new_tokens=MAX_OUTPUT_LEN, do_sample=False) - outputs = infer_engine.generate(input_ids, generate_kwargs) + outputs = infer_engine.generate(input_ids, **generate_kwargs) #print("outputs.shape: ", outputs.shape) #print("outputs: ", outputs) if not dist.is_initialized() or dist.get_rank() == 0: for o in outputs: output_text = tokenizer.decode(o) - #print(output_text) + # print(output_text) def check_llama(rank, world_size, port):
https://api.github.com/repos/hpcaitech/ColossalAI/pulls/4635
2023-09-06T08:58:18Z
2023-09-06T09:00:21Z
2023-09-06T09:00:21Z
2023-09-06T09:00:22Z
625
hpcaitech/ColossalAI
11,666
Fix typos: change it's to its where appropriate
diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md index 488d26dfe..8e8dca78f 100644 --- a/CppCoreGuidelines.md +++ b/CppCoreGuidelines.md @@ -292,7 +292,7 @@ Also, we assume that the rules will be refined over time to make them more preci A rule is aimed at being simple, rather than carefully phrased to mention every alternative and special case. Such information is found in the **Alternative** paragraphs and the [Discussion](#S-discussion) sections. -If you don't understand a rule or disagree with it, please visit it's **Discussion**. +If you don't understand a rule or disagree with it, please visit its **Discussion**. If you feel that a discussion is missing or incomplete, send us an email. This is not a language manual. @@ -1318,7 +1318,7 @@ In that case, mark owning pointers using `owner` : } This tells analysis tools that `res` is an owner. -That is, it's value must be `delete`d or transferred to another owner, as is done here by the `return`. +That is, its value must be `delete`d or transferred to another owner, as is done here by the `return`. `owner` is used similarly in the implementation of resource handles. @@ -2836,7 +2836,7 @@ You need a reason (use cases) for using a hierarchy. // ... } -If a class can be part of a hierarchy, we (in real code if not necessarily in small examples) must manipulate it's objects through pointers or references. +If a class can be part of a hierarchy, we (in real code if not necessarily in small examples) must manipulate its objects through pointers or references. That implies more memory overhead, more allocations and deallocations, and more run-time overhead to perform the resulting indiretions. **Note**: Concrete types can be stack allocated and be members of other classes. @@ -3059,7 +3059,7 @@ These operations disagree about copy semantics. This will lead to confusion and Does this class need a destructor is a surprisingly powerful design question. For most classes the answer is "no" either because the class holds no resources or because destruction is handled by [the rule of zero](#Rc-zero); -that is, it's members can take care of themselves as concerns destruction. +that is, its members can take care of themselves as concerns destruction. If the answer is "yes", much of the design of the class follows (see [the rule of five](#Rc-five). @@ -4606,7 +4606,7 @@ Of course there are way of making `==` work in a hierarchy, but the naive approa ## C.con: Containers and other resource handles A container is an object holding a sequence of objects of some type; `std::vector` is the archetypical container. -A resource handle is a class that owns a resource; `std::vector` is the typical resource handle; it's resource is its sequence of elements. +A resource handle is a class that owns a resource; `std::vector` is the typical resource handle; its resource is its sequence of elements. Summary of container rules:
There were a few spots in the document where the possessive pronoun "its" was misspelled as "it's" (which is a contraction for "it is").
https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/39
2015-09-20T02:05:38Z
2015-09-20T06:38:53Z
2015-09-20T06:38:53Z
2015-09-20T06:39:11Z
713
isocpp/CppCoreGuidelines
15,750
Cfn: Fix bug preventing reuse of stack names
diff --git a/localstack/services/cloudformation/stores.py b/localstack/services/cloudformation/stores.py index 48cf5260217e0..8020d328c5561 100644 --- a/localstack/services/cloudformation/stores.py +++ b/localstack/services/cloudformation/stores.py @@ -54,6 +54,7 @@ def get_cloudformation_store(account_id: str, region_name: str) -> CloudFormatio # TODO: rework / fix usage of this def find_stack(account_id: str, region_name: str, stack_name: str) -> Stack | None: + # Warning: This function may not return the correct stack if multiple stacks with same name exist. state = get_cloudformation_store(account_id, region_name) return ( [s for s in state.stacks.values() if stack_name in [s.stack_name, s.stack_id]] or [None] @@ -63,16 +64,13 @@ def find_stack(account_id: str, region_name: str, stack_name: str) -> Stack | No def find_change_set( account_id: str, region_name: str, cs_name: str, stack_name: Optional[str] = None ) -> Optional[StackChangeSet]: - state = get_cloudformation_store(account_id, region_name) - stack = find_stack(account_id, region_name, stack_name) - stacks = [stack] if stack else state.stacks.values() - result = [ - cs - for s in stacks - for cs in s.change_sets - if cs_name in [cs.change_set_id, cs.change_set_name] - ] - return (result or [None])[0] + store = get_cloudformation_store(account_id, region_name) + for stack in store.stacks.values(): + if stack_name in (stack.stack_name, stack.stack_id, None): + for change_set in stack.change_sets: + if cs_name in (change_set.change_set_id, change_set.change_set_name): + return change_set + return None def exports_map(account_id: str, region_name: str): diff --git a/tests/aws/services/cloudformation/api/test_changesets.py b/tests/aws/services/cloudformation/api/test_changesets.py index ebf3012c0d711..79a323bd6f10b 100644 --- a/tests/aws/services/cloudformation/api/test_changesets.py +++ b/tests/aws/services/cloudformation/api/test_changesets.py @@ -995,3 +995,39 @@ def test_name_conflicts(aws_client, snapshot, cleanups): ChangeSetName=second_initial_changeset_id ) snapshot.match("second_initial_changeset_id_desc", second_initial_changeset_id_desc) + + [email protected] +def test_describe_change_set_with_similarly_named_stacks(deploy_cfn_template, aws_client): + stack_name = f"stack-{short_uid()}" + change_set_name = f"change-set-{short_uid()}" + + # create a changeset + template_path = os.path.join(os.path.dirname(__file__), "../../../templates/ec2_keypair.yml") + template_body = load_template_raw(template_path) + aws_client.cloudformation.create_change_set( + StackName=stack_name, + ChangeSetName=change_set_name, + TemplateBody=template_body, + ChangeSetType="CREATE", + ) + + # delete the stack + aws_client.cloudformation.delete_stack(StackName=stack_name) + aws_client.cloudformation.get_waiter("stack_delete_complete").wait(StackName=stack_name) + + # create a new changeset with the same name + response = aws_client.cloudformation.create_change_set( + StackName=stack_name, + ChangeSetName=change_set_name, + TemplateBody=template_body, + ChangeSetType="CREATE", + ) + + # ensure that the correct changeset is returned when requested by stack name + assert ( + aws_client.cloudformation.describe_change_set( + ChangeSetName=response["Id"], StackName=stack_name + )["ChangeSetId"] + == response["Id"] + ) diff --git a/tests/aws/services/cloudformation/api/test_changesets.validation.json b/tests/aws/services/cloudformation/api/test_changesets.validation.json index db8510fd9decd..70b374a8181f3 100644 --- 
a/tests/aws/services/cloudformation/api/test_changesets.validation.json +++ b/tests/aws/services/cloudformation/api/test_changesets.validation.json @@ -17,6 +17,9 @@ "tests/aws/services/cloudformation/api/test_changesets.py::test_describe_change_set_nonexisting": { "last_validated_date": "2022-08-11T11:22:01+00:00" }, + "tests/aws/services/cloudformation/api/test_changesets.py::test_describe_change_set_with_similarly_named_stacks": { + "last_validated_date": "2024-03-06T13:56:47+00:00" + }, "tests/aws/services/cloudformation/api/test_changesets.py::test_empty_changeset": { "last_validated_date": "2022-08-10T08:52:55+00:00" },
## Summary This PR fixes a bug that prevented stack names from being reused. ``` $ awslocal cloudformation deploy --stack-name bar --template-file ./localstack/tests/aws/templates/ec2_keypair.yml Waiting for changeset to be created.. Waiting for stack create/update to complete Successfully created/updated stack - bar $ awslocal cloudformation delete-stack --stack-name bar $ awslocal cloudformation deploy --stack-name bar --template-file ./localstack/tests/aws/templates/ec2_keypair.yml Waiting for changeset to be created.. 'Status' ``` It turns out that the function `find_stack()` is not guaranteed to return the correct stack if multiple stacks exist with the same name. This caused the deployment to fail because `awscli cfn deploy` calls the following API operations in succession. The second API operation (which depended on `find_stack()`) failed. ``` AWS cloudformation.CreateChangeSet => 200 AWS cloudformation.DescribeChangeSet => 404 (ChangeSetNotFound) ``` ## Changes - Rework the `find_change_set()` function to remove dependency on `find_stack()`. ## Tests Adds an AWS validated test. ## Related https://github.com/localstack/localstack/issues/9911 https://github.com/localstack/localstack/issues/10327
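A minimal illustration of the failure mode and the shape of the fix (hypothetical in-memory data and plain dicts, not LocalStack's stores or models):

```python
# Two stacks share the name "bar" because the deleted one still sits in the store.
stacks = {
    "arn:old": {"stack_name": "bar", "status": "DELETE_COMPLETE", "change_sets": []},
    "arn:new": {"stack_name": "bar", "status": "REVIEW_IN_PROGRESS",
                "change_sets": [{"change_set_name": "cs-1", "change_set_id": "arn:cs-1"}]},
}

def find_stack_first_match(name):
    # Old behaviour: returns whichever matching stack happens to come first.
    return next((s for s in stacks.values() if s["stack_name"] == name), None)

def find_change_set(cs_name, stack_name=None):
    # Shape of the fix: scan every stack's change sets instead of trusting a
    # single stack picked by name.
    for stack in stacks.values():
        if stack_name in (stack["stack_name"], None):
            for cs in stack["change_sets"]:
                if cs_name in (cs["change_set_name"], cs["change_set_id"]):
                    return cs
    return None

assert find_stack_first_match("bar")["status"] == "DELETE_COMPLETE"  # stale stack wins
assert find_change_set("cs-1", "bar") is not None                    # change set still found
```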
https://api.github.com/repos/localstack/localstack/pulls/10403
2024-03-06T11:04:25Z
2024-03-12T08:26:54Z
2024-03-12T08:26:54Z
2024-03-12T08:26:55Z
1,145
localstack/localstack
28,997
Endpoint to delete documents ingested
diff --git a/private_gpt/components/node_store/node_store_component.py b/private_gpt/components/node_store/node_store_component.py index c20f98c5e..c039bf502 100644 --- a/private_gpt/components/node_store/node_store_component.py +++ b/private_gpt/components/node_store/node_store_component.py @@ -1,3 +1,5 @@ +import logging + from injector import inject, singleton from llama_index.storage.docstore import BaseDocumentStore, SimpleDocumentStore from llama_index.storage.index_store import SimpleIndexStore @@ -5,6 +7,8 @@ from private_gpt.paths import local_data_path +logger = logging.getLogger(__name__) + @singleton class NodeStoreComponent: @@ -18,6 +22,7 @@ def __init__(self) -> None: persist_dir=str(local_data_path) ) except FileNotFoundError: + logger.debug("Local index store not found, creating a new one") self.index_store = SimpleIndexStore() try: @@ -25,4 +30,5 @@ def __init__(self) -> None: persist_dir=str(local_data_path) ) except FileNotFoundError: + logger.debug("Local document store not found, creating a new one") self.doc_store = SimpleDocumentStore() diff --git a/private_gpt/server/ingest/ingest_router.py b/private_gpt/server/ingest/ingest_router.py index dd49b5a8a..5c156f464 100644 --- a/private_gpt/server/ingest/ingest_router.py +++ b/private_gpt/server/ingest/ingest_router.py @@ -47,3 +47,14 @@ def list_ingested() -> IngestResponse: service = root_injector.get(IngestService) ingested_documents = service.list_ingested() return IngestResponse(object="list", model="private-gpt", data=ingested_documents) + + +@ingest_router.delete("/ingest/{doc_id}", tags=["Ingestion"]) +def delete_ingested(doc_id: str) -> None: + """Delete the specified ingested Document. + + The `doc_id` can be obtained from the `GET /ingest/list` endpoint. + The document will be effectively deleted from your storage context. 
+ """ + service = root_injector.get(IngestService) + service.delete(doc_id) diff --git a/private_gpt/server/ingest/ingest_service.py b/private_gpt/server/ingest/ingest_service.py index 6a34e6fbb..0026660cd 100644 --- a/private_gpt/server/ingest/ingest_service.py +++ b/private_gpt/server/ingest/ingest_service.py @@ -1,3 +1,4 @@ +import logging import tempfile from pathlib import Path from typing import TYPE_CHECKING, Any, AnyStr @@ -9,6 +10,7 @@ StorageContext, StringIterableReader, VectorStoreIndex, + load_index_from_storage, ) from llama_index.node_parser import SentenceWindowNodeParser from llama_index.readers.file.base import DEFAULT_FILE_READER_CLS @@ -25,6 +27,8 @@ if TYPE_CHECKING: from llama_index.readers.base import BaseReader +logger = logging.getLogger(__name__) + class IngestedDoc(BaseModel): object: str = Field(enum=["ingest.document"]) @@ -70,6 +74,7 @@ def __init__( ) def ingest(self, file_name: str, file_data: AnyStr | Path) -> list[IngestedDoc]: + logger.info("Ingesting file_name=%s", file_name) extension = Path(file_name).suffix reader_cls = DEFAULT_FILE_READER_CLS.get(extension) documents: list[Document] @@ -100,7 +105,9 @@ def ingest(self, file_name: str, file_data: AnyStr | Path) -> list[IngestedDoc]: else: path_to_tmp.write_text(str(file_data)) documents = reader.load_data(path_to_tmp) - + logger.info( + "Transformed file=%s into count=%s documents", file_name, len(documents) + ) for document in documents: document.metadata["file_name"] = file_name return self._save_docs(documents) @@ -153,7 +160,26 @@ def list_ingested(self) -> list[IngestedDoc]: doc_metadata=doc_metadata, ) ) - return ingested_docs except ValueError: + logger.warning("Got an exception when getting list of docs", exc_info=True) pass + logger.debug("Found count=%s ingested documents", len(ingested_docs)) return ingested_docs + + def delete(self, doc_id: str) -> None: + """Delete an ingested document. + + :raises ValueError: if the document does not exist + """ + logger.info( + "Deleting the ingested document=%s in the doc and index store", doc_id + ) + + # Load the index with store_nodes_override=True to be able to delete them + index = load_index_from_storage(self.storage_context, store_nodes_override=True) + + # Delete the document from the index + index.delete_ref_doc(doc_id, delete_from_docstore=True) + + # Save the index + self.storage_context.persist(persist_dir=local_data_path)
A file that is ingested is transformed into several documents (which are organized into nodes). This endpoint deletes documents (pieces of a file); the pieces can be retrieved via the endpoint that lists all ingested documents. Logs were also added in the scope of this PR.
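A hypothetical client-side usage sketch (the host/port and the file name are assumptions; the routes and the `doc_id`/`doc_metadata` fields follow the router and service shown in the diff; adjust the base URL and any route prefix to your deployment):

```python
import requests

BASE = "http://localhost:8001"  # assumed local private-gpt instance

# List ingested documents and collect the doc_ids that belong to one file.
docs = requests.get(f"{BASE}/ingest/list").json()["data"]
to_delete = [d["doc_id"] for d in docs
             if (d.get("doc_metadata") or {}).get("file_name") == "report.pdf"]

# An ingested file is split into several documents, so delete every piece.
for doc_id in to_delete:
    requests.delete(f"{BASE}/ingest/{doc_id}").raise_for_status()
```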
https://api.github.com/repos/zylon-ai/private-gpt/pulls/1163
2023-11-04T17:50:35Z
2023-11-06T14:47:42Z
2023-11-06T14:47:42Z
2023-11-07T08:09:09Z
1,193
zylon-ai/private-gpt
38,552
Speed up `dag.clear()` when clearing lots of ExternalTaskSensor and ExternalTaskMarker
diff --git a/airflow/models/dag.py b/airflow/models/dag.py index 45bd398dc4030..5860689fd4747 100644 --- a/airflow/models/dag.py +++ b/airflow/models/dag.py @@ -1119,6 +1119,7 @@ def clear( recursion_depth=0, max_recursion_depth=None, dag_bag=None, + visited_external_tis=None, ): """ Clears a set of task instances associated with the current dag for @@ -1154,6 +1155,9 @@ def clear( :type max_recursion_depth: int :param dag_bag: The DagBag used to find the dags :type dag_bag: airflow.models.dagbag.DagBag + :param visited_external_tis: A set used internally to keep track of the visited TaskInstance when + clearing tasks across multiple DAGs linked by ExternalTaskMarker to avoid redundant work. + :type visited_external_tis: set """ TI = TaskInstance tis = session.query(TI) @@ -1188,7 +1192,8 @@ def clear( session=session, recursion_depth=recursion_depth, max_recursion_depth=max_recursion_depth, - dag_bag=dag_bag + dag_bag=dag_bag, + visited_external_tis=visited_external_tis )) if start_date: @@ -1209,51 +1214,60 @@ def clear( instances = tis.all() for ti in instances: if ti.operator == ExternalTaskMarker.__name__: - task: ExternalTaskMarker = cast(ExternalTaskMarker, copy.copy(self.get_task(ti.task_id))) - ti.task = task - - if recursion_depth == 0: - # Maximum recursion depth allowed is the recursion_depth of the first - # ExternalTaskMarker in the tasks to be cleared. - max_recursion_depth = task.recursion_depth - - if recursion_depth + 1 > max_recursion_depth: - # Prevent cycles or accidents. - raise AirflowException("Maximum recursion depth {} reached for {} {}. " - "Attempted to clear too many tasks " - "or there may be a cyclic dependency." - .format(max_recursion_depth, - ExternalTaskMarker.__name__, ti.task_id)) - ti.render_templates() - external_tis = session.query(TI).filter(TI.dag_id == task.external_dag_id, - TI.task_id == task.external_task_id, - TI.execution_date == - pendulum.parse(task.execution_date)) - - for tii in external_tis: - if not dag_bag: - dag_bag = DagBag() - external_dag = dag_bag.get_dag(tii.dag_id) - if not external_dag: - raise AirflowException("Could not find dag {}".format(tii.dag_id)) - downstream = external_dag.sub_dag( - task_regex=r"^{}$".format(tii.task_id), - include_upstream=False, - include_downstream=True - ) - tis = tis.union(downstream.clear(start_date=tii.execution_date, - end_date=tii.execution_date, - only_failed=only_failed, - only_running=only_running, - confirm_prompt=confirm_prompt, - include_subdags=include_subdags, - include_parentdag=False, - dag_run_state=dag_run_state, - get_tis=True, - session=session, - recursion_depth=recursion_depth + 1, - max_recursion_depth=max_recursion_depth, - dag_bag=dag_bag)) + if visited_external_tis is None: + visited_external_tis = set() + ti_key = ti.key.primary + if ti_key not in visited_external_tis: + # Only clear this ExternalTaskMarker if it's not already visited by the + # recursive calls to dag.clear(). + task: ExternalTaskMarker = cast(ExternalTaskMarker, + copy.copy(self.get_task(ti.task_id))) + ti.task = task + + if recursion_depth == 0: + # Maximum recursion depth allowed is the recursion_depth of the first + # ExternalTaskMarker in the tasks to be cleared. + max_recursion_depth = task.recursion_depth + + if recursion_depth + 1 > max_recursion_depth: + # Prevent cycles or accidents. + raise AirflowException("Maximum recursion depth {} reached for {} {}. " + "Attempted to clear too many tasks " + "or there may be a cyclic dependency." 
+ .format(max_recursion_depth, + ExternalTaskMarker.__name__, ti.task_id)) + ti.render_templates() + external_tis = session.query(TI).filter(TI.dag_id == task.external_dag_id, + TI.task_id == task.external_task_id, + TI.execution_date == + pendulum.parse(task.execution_date)) + + for tii in external_tis: + if not dag_bag: + dag_bag = DagBag(read_dags_from_db=True) + external_dag = dag_bag.get_dag(tii.dag_id) + if not external_dag: + raise AirflowException("Could not find dag {}".format(tii.dag_id)) + downstream = external_dag.sub_dag( + task_regex=r"^{}$".format(tii.task_id), + include_upstream=False, + include_downstream=True + ) + tis = tis.union(downstream.clear(start_date=tii.execution_date, + end_date=tii.execution_date, + only_failed=only_failed, + only_running=only_running, + confirm_prompt=confirm_prompt, + include_subdags=include_subdags, + include_parentdag=False, + dag_run_state=dag_run_state, + get_tis=True, + session=session, + recursion_depth=recursion_depth + 1, + max_recursion_depth=max_recursion_depth, + dag_bag=dag_bag, + visited_external_tis=visited_external_tis)) + visited_external_tis.add(ti_key) if get_tis: return tis @@ -1391,12 +1405,15 @@ def partial_subset( based on a regex that should match one or many tasks, and includes upstream and downstream neighbours based on the flag passed. """ - # deep-copying self.task_dict takes a long time, and we don't want all + # deep-copying self.task_dict and self._task_group takes a long time, and we don't want all # the tasks anyway, so we copy the tasks manually later task_dict = self.task_dict + task_group = self._task_group self.task_dict = {} + self._task_group = None dag = copy.deepcopy(self) self.task_dict = task_dict + self._task_group = task_group regex_match = [ t for t in self.tasks if re.findall(task_regex, t.task_id)] @@ -1412,24 +1429,30 @@ def partial_subset( dag.task_dict = {t.task_id: copy.deepcopy(t, {id(t.dag): dag}) for t in regex_match + also_include} - # Remove tasks not included in the subdag from task_group - def remove_excluded(group): - for child in list(group.children.values()): + def filter_task_group(group, parent_group): + """ + Exclude tasks not included in the subdag from the given TaskGroup. + """ + copied = copy.copy(group) + copied.used_group_ids = set(copied.used_group_ids) + copied._parent_group = parent_group + + copied.children = {} + + for child in group.children.values(): if isinstance(child, BaseOperator): - if child.task_id not in dag.task_dict: - group.children.pop(child.task_id) - else: - # The tasks in the subdag are a copy of tasks in the original dag - # so update the reference in the TaskGroups too. - group.children[child.task_id] = dag.task_dict[child.task_id] + if child.task_id in dag.task_dict: + copied.children[child.task_id] = dag.task_dict[child.task_id] else: - remove_excluded(child) + filtered_child = filter_task_group(child, copied) + + # Only include this child TaskGroup if it is non-empty. + if filtered_child.children: + copied.children[child.group_id] = filtered_child - # Remove this TaskGroup if it doesn't contain any tasks in this subdag - if not child.children: - group.children.pop(child.group_id) + return copied - remove_excluded(dag.task_group) + dag._task_group = filter_task_group(self._task_group, None) # Removing upstream/downstream references to tasks and TaskGroups that did not make # the cut. 
diff --git a/tests/sensors/test_external_task_sensor.py b/tests/sensors/test_external_task_sensor.py index 7d4f1835f9940..e45bdc52280fd 100644 --- a/tests/sensors/test_external_task_sensor.py +++ b/tests/sensors/test_external_task_sensor.py @@ -668,3 +668,57 @@ def test_clear_multiple_external_task_marker(dag_bag_multiple): # That has since been fixed. It should take no more than a few seconds to call # dag.clear() here. assert agg_dag.clear(start_date=execution_date, end_date=execution_date, dag_bag=dag_bag_multiple) == 51 + + [email protected] +def dag_bag_head_tail(): + """ + Create a DagBag containing one DAG, with task "head" depending on task "tail" of the + previous execution_date. + + 20200501 20200502 20200510 + +------+ +------+ +------+ + | head | -->head | --> -->head | + | | | / | | | / / | | | + | v | / | v | / / | v | + | body | / | body | / ... / | body | + | | |/ | | |/ / | | | + | v / | v / / | v | + | tail/| | tail/| / | tail | + +------+ +------+ +------+ + """ + dag_bag = DagBag(dag_folder=DEV_NULL, include_examples=False) + with DAG("head_tail", start_date=DEFAULT_DATE, schedule_interval="@daily") as dag: + head = ExternalTaskSensor(task_id='head', + external_dag_id=dag.dag_id, + external_task_id="tail", + execution_delta=timedelta(days=1), + mode="reschedule") + body = DummyOperator(task_id="body") + tail = ExternalTaskMarker(task_id="tail", + external_dag_id=dag.dag_id, + external_task_id=head.task_id, + execution_date="{{ tomorrow_ds_nodash }}") + head >> body >> tail + + dag_bag.bag_dag(dag=dag, root_dag=dag) + + yield dag_bag + + +def test_clear_overlapping_external_task_marker(dag_bag_head_tail): + dag = dag_bag_head_tail.get_dag("head_tail") + + # Mark first head task success. + first = TaskInstance(task=dag.get_task("head"), execution_date=DEFAULT_DATE) + first.run(mark_success=True) + + for delta in range(10): + execution_date = DEFAULT_DATE + timedelta(days=delta) + run_tasks(dag_bag_head_tail, execution_date=execution_date) + + # The next two lines are doing the same thing. Clearing the first "head" with "Future" + # selected is the same as not selecting "Future". They should take similar amount of + # time too because dag.clear() uses visited_external_tis to keep track of visited ExternalTaskMarker. + assert dag.clear(start_date=DEFAULT_DATE, dag_bag=dag_bag_head_tail) == 30 + assert dag.clear(start_date=DEFAULT_DATE, end_date=execution_date, dag_bag=dag_bag_head_tail) == 30
This is an improvement to the UI response time when clearing dozens of DagRuns of large DAGs (thousands of tasks) containing many `ExternalTaskSensor` + `ExternalTaskMarker` pairs. In the current implementation, clearing tasks can get slow, especially if the user chooses to clear with Future, Downstream and Recursive all selected. This PR speeds it up. There are two major improvements: - `dag.sub_dag()` no longer deep-copies `self._task_group`, which was a waste of time. Instead, it does the same thing as for `dag.task_dict`: set it to None first and then copy explicitly. - The `TaskInstance`s already visited are passed down the recursive calls of `dag.clear()` as `visited_external_tis`. This speeds up the example in `test_clear_overlapping_external_task_marker` almost fivefold. For real large DAGs containing 500 tasks set up in a similar manner, the time it takes to clear 30 DagRuns is cut from around 100s to less than 10s.
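A generic sketch of the memoisation pattern behind `visited_external_tis` (plain Python on a toy graph, not the Airflow API):

```python
def clear_recursively(node, graph, visited=None):
    # Carry one shared `visited` set through all recursive calls so each node
    # is expanded only once, even when many paths reach it.
    if visited is None:
        visited = set()
    cleared = {node}
    for neighbour in graph.get(node, []):
        if neighbour in visited:
            continue  # already expanded by an earlier recursive call
        visited.add(neighbour)
        cleared |= clear_recursively(neighbour, graph, visited)
    return cleared

# Diamond-shaped dependencies: "d" is reachable via both "b" and "c",
# but it is only expanded once thanks to the shared visited set.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
assert clear_recursively("a", graph) == {"a", "b", "c", "d"}
```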
https://api.github.com/repos/apache/airflow/pulls/11184
2020-09-28T13:06:18Z
2020-10-22T14:37:36Z
2020-10-22T14:37:36Z
2020-10-23T03:02:54Z
2,763
apache/airflow
14,053
Elaborate on Python support policy
diff --git a/CHANGES.md b/CHANGES.md index f81c285d0b..440aaeaa35 100644 --- a/CHANGES.md +++ b/CHANGES.md @@ -84,6 +84,7 @@ and the first release covered by our new stability policy. - Change HTML theme to Furo primarily for its responsive design and mobile support (#2793) - Deprecate the `black-primer` tool (#2809) +- Document Python support policy (#2819) ## 21.12b0 diff --git a/docs/faq.md b/docs/faq.md index 0cff6ae5e1..264141e3f3 100644 --- a/docs/faq.md +++ b/docs/faq.md @@ -71,9 +71,16 @@ readability because operators are misaligned. Disable W503 and enable the disabled-by-default counterpart W504. E203 should be disabled while changes are still [discussed](https://github.com/PyCQA/pycodestyle/issues/373). -## Does Black support Python 2? +## Which Python versions does Black support? -Support for formatting Python 2 code was removed in version 22.0. +Currently the runtime requires Python 3.6-3.10. Formatting is supported for files +containing syntax from Python 3.3 to 3.10. We promise to support at least all Python +versions that have not reached their end of life. This is the case for both running +_Black_ and formatting code. + +Support for formatting Python 2 code was removed in version 22.0. While we've made no +plans to stop supporting older Python 3 minor versions immediately, their support might +also be removed some time in the future without a deprecation period. ## Why does my linter or typechecker complain after I format my code?
<!-- Hello! Thanks for submitting a PR. To help make things go a bit more smoothly we would appreciate that you go through this template. --> ### Description Closes #2251: I've included a short Python support policy to our FAQ. TL;DR: we promise to support all non-EOL versions, but don't promise to keep old 3+ versions laying around if we decide to remove them at some point. <!-- Good things to put here include: reasoning for the change (please link any relevant issues!), any noteworthy (or hacky) choices to be aware of, or what the problem resolved here looked like ... we won't mind a ranty story :) --> ### Checklist - did you ... <!-- If any of the following items aren't relevant for your contribution please still tick them so we know you've gone through the checklist. All user-facing changes should get an entry. Otherwise, signal to us this should get the magical label to silence the CHANGELOG entry check. Tests are required for bugfixes and new features. Documentation changes are necessary for formatting and most enhancement changes. --> - [x] Add new / update outdated documentation? ### Some discussion: - Is the content fine? It's basically what we do now, or at least the least strict and most convenient version of it. We could promise a deprecation period as well. - Should we relocate the text? The Python 2 support text was nice as a FAQ entry, but this could be better off somewhere else.
https://api.github.com/repos/psf/black/pulls/2819
2022-01-28T17:19:14Z
2022-01-28T18:58:18Z
2022-01-28T18:58:17Z
2022-01-28T19:03:38Z
413
psf/black
23,980
convert.py doesn't work with OriginalHighRes trainer bugfix
diff --git a/plugins/Model_OriginalHighRes/Model.py b/plugins/Model_OriginalHighRes/Model.py index 26fd94c08a..7a98ca0dbc 100644 --- a/plugins/Model_OriginalHighRes/Model.py +++ b/plugins/Model_OriginalHighRes/Model.py @@ -111,30 +111,38 @@ def initModel(self): self.autoencoder_B.compile(optimizer=optimizer, loss='mean_absolute_error') - def load(self, swapped): + def load(self, swapped): + model_dir = str(self.model_dir) + from json import JSONDecodeError face_A, face_B = (hdf['decoder_AH5'], hdf['decoder_BH5']) if not swapped else (hdf['decoder_BH5'], hdf['decoder_AH5']) - state_dir = os.path.join(self.model_dir, 'state_{version_str}_{ENCODER.value}.json'.format(**globals())) + state_dir = os.path.join(model_dir, 'state_{version_str}_{ENCODER.value}.json'.format(**globals())) ser = lib.Serializer.get_serializer('json') + try: with open(state_dir, 'rb') as fp: state = ser.unmarshal(fp.read()) self._epoch_no = state['epoch_no'] - except (JSONDecodeError, IOError) as e: - print('Failed loading training state metadata', e) - self._epoch_no = 0 + except IOError as e: + print('Error loading training info:', e.strerror) + self._epoch_no = 0 + except JSONDecodeError as e: + print('Error loading training info:', e.msg) + self._epoch_no = 0 try: - self.encoder.load_weights(os.path.join(self.model_dir, hdf['encoderH5'])) - self.decoder_A.load_weights(os.path.join(self.model_dir, face_A)) - self.decoder_B.load_weights(os.path.join(self.model_dir, face_B)) + self.encoder.load_weights(os.path.join(model_dir, hdf['encoderH5'])) + self.decoder_A.load_weights(os.path.join(model_dir, face_A)) + self.decoder_B.load_weights(os.path.join(model_dir, face_B)) print('loaded model weights') return True + except IOError as e: + print('Failed loading training data:', e.strerror) except Exception as e: - print('Failed loading existing training data.', e) - return False - + print('Failed loading training data:', str(e)) + + return False def converter(self, swap): autoencoder = self.autoencoder_B if not swap else self.autoencoder_A @@ -259,7 +267,7 @@ def save_weights(self): except NameError: print('backup functionality not available\n') - state_dir = os.path.join(self.model_dir, 'state_{version_str}_{ENCODER.value}.json'.format(**globals())) + state_dir = os.path.join(model_dir, 'state_{version_str}_{ENCODER.value}.json'.format(**globals())) ser = lib.Serializer.get_serializer('json') try: with open(state_dir, 'wb') as fp:
https://api.github.com/repos/deepfakes/faceswap/pulls/442
2018-06-22T08:59:17Z
2018-06-22T09:07:09Z
2018-06-22T09:07:09Z
2018-08-28T17:37:19Z
707
deepfakes/faceswap
18,665
change some numeric behavior
diff --git a/README.md b/README.md index d8ef7ec45a6..0231599ba25 100644 --- a/README.md +++ b/README.md @@ -373,10 +373,11 @@ an adoption helper, avoid using this for new projects. ### Numeric literals -*Black* standardizes all numeric literals to use lowercase letters: `0xab` -instead of `0XAB` and `1e10` instead of `1E10`. In Python 3.6+, *Black* -adds underscores to long numeric literals to aid readability: `100000000` -becomes `100_000_000`. +*Black* standardizes most numeric literals to use lowercase letters: `0xab` +instead of `0XAB` and `1e10` instead of `1E10`. Python 2 long literals are +styled as `2L` instead of `2l` to avoid confusion between `l` and `1`. In +Python 3.6+, *Black* adds underscores to long numeric literals to aid +readability: `100000000` becomes `100_000_000`. ### Line breaks & binary operators @@ -851,10 +852,13 @@ More details can be found in [CONTRIBUTING](CONTRIBUTING.md). * adjacent string literals are now correctly split into multiple lines (#463) -* code with `_` in numeric literals is recognized as Python 3.6+ (#461) +* numeric literals are now formatted by *Black* (#452, #461, #464, #469): -* numeric literals are now normalized to include `_` separators on Python 3.6+ code - (#452) + * numeric literals are normalized to include `_` separators on Python 3.6+ code + + * code with `_` in numeric literals is recognized as Python 3.6+ + + * most letters in numeric literals are lowercased (e.g., in `1e10` or `0xab`) * cache is now populated when `--check` is successful for a file which speeds up consecutive checks of properly formatted unmodified files (#448) diff --git a/black.py b/black.py index 0f166c61494..3a51f21f4cd 100644 --- a/black.py +++ b/black.py @@ -2522,8 +2522,8 @@ def normalize_string_quotes(leaf: Leaf) -> None: def normalize_numeric_literal(leaf: Leaf, allow_underscores: bool) -> None: """Normalizes numeric (float, int, and complex) literals. - All letters used in the representation are normalized to lowercase, long number - literals are split using underscores. + All letters used in the representation are normalized to lowercase (except + in Python 2 long literals), and long number literals are split using underscores. """ text = leaf.value.lower() if text.startswith(("0o", "0x", "0b")): @@ -2543,6 +2543,9 @@ def normalize_numeric_literal(leaf: Leaf, allow_underscores: bool) -> None: elif text.endswith(("j", "l")): number = text[:-1] suffix = text[-1] + # Capitalize in "2L" because "l" looks too similar to "1". + if suffix == "l": + suffix = "L" text = f"{format_float_or_int_string(number, allow_underscores)}{suffix}" else: text = format_float_or_int_string(text, allow_underscores) @@ -2556,14 +2559,22 @@ def format_float_or_int_string(text: str, allow_underscores: bool) -> str: before, after = text.split(".") before = format_int_string(before, allow_underscores) if before else "0" - after = format_int_string(after, allow_underscores) if after else "0" + if after: + after = format_int_string(after, allow_underscores, count_from_end=False) + else: + after = "0" return f"{before}.{after}" -def format_int_string(text: str, allow_underscores: bool) -> str: +def format_int_string( + text: str, allow_underscores: bool, count_from_end: bool = True +) -> str: """Normalizes underscores in a string to e.g. 1_000_000. - Input must be a string of at least six digits and optional underscores. + Input must be a string of digits and optional underscores. 
+ If count_from_end is False, we add underscores after groups of three digits + counting from the beginning instead of the end of the strings. This is used + for the fractional part of float literals. """ if not allow_underscores: return text @@ -2573,9 +2584,12 @@ def format_int_string(text: str, allow_underscores: bool) -> str: # No underscores for numbers <= 6 digits long. return text - # Avoid removing leading zeros, which are important if we're formatting - # part of a number like "0.001". - return format(int("1" + text), "3_")[1:].lstrip("_") + if count_from_end: + # Avoid removing leading zeros, which are important if we're formatting + # part of a number like "0.001". + return format(int("1" + text), "3_")[1:].lstrip("_") + else: + return "_".join(text[i : i + 3] for i in range(0, len(text), 3)) def normalize_invisible_parens(node: Node, parens_after: Set[str]) -> None: diff --git a/tests/data/numeric_literals.py b/tests/data/numeric_literals.py index 2dc64c75c8b..b812ebfa824 100644 --- a/tests/data/numeric_literals.py +++ b/tests/data/numeric_literals.py @@ -6,6 +6,7 @@ x = 1. x = 1E+1 x = 1E-1 +x = 1.00000001 x = 123456789.123456789 x = 123456789.123456789E123456789 x = 123456789E123456789 @@ -27,6 +28,7 @@ x = 1.0 x = 1e1 x = 1e-1 +x = 1.000_000_01 x = 123_456_789.123_456_789 x = 123_456_789.123_456_789e123_456_789 x = 123_456_789e123_456_789 diff --git a/tests/data/numeric_literals_py2.py b/tests/data/numeric_literals_py2.py index 107c39bbaf5..d2db7b0ccc4 100644 --- a/tests/data/numeric_literals_py2.py +++ b/tests/data/numeric_literals_py2.py @@ -1,6 +1,7 @@ #!/usr/bin/env python2.7 x = 123456789L +x = 123456789l x = 123456789 # output @@ -8,5 +9,6 @@ #!/usr/bin/env python2.7 -x = 123456789l +x = 123456789L +x = 123456789L x = 123456789
Closes #467.
https://api.github.com/repos/psf/black/pulls/469
2018-08-22T04:05:46Z
2018-08-23T18:55:30Z
2018-08-23T18:55:29Z
2022-03-26T21:19:27Z
1,665
psf/black
24,441
Fix pytest warnings
diff --git a/tests/conftest.py b/tests/conftest.py index 3d097159d..964458b67 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -7,6 +7,10 @@ shells.shell = shells.Generic() +def pytest_configure(config): + config.addinivalue_line("markers", "functional: mark test as functional") + + def pytest_addoption(parser): """Adds `--enable-functional` argument.""" group = parser.getgroup("thefuck") diff --git a/tests/test_conf.py b/tests/test_conf.py index 7d0fe4b87..657e47556 100644 --- a/tests/test_conf.py +++ b/tests/test_conf.py @@ -43,7 +43,7 @@ def test_from_file_with_DEFAULT(self, load_source, settings): assert settings.rules == const.DEFAULT_RULES + ['test'] [email protected]('load_source') [email protected]('load_source') class TestSettingsFromEnv(object): def test_from_env(self, os_environ, settings): os_environ.update({'THEFUCK_RULES': 'bash:lisp',
Adds a custom mark (`functional`) as per the [pytest documentation](https://docs.pytest.org/en/latest/mark.html#registering-marks). Also fixes a typo in `tests/test_conf.py`, changing `@pytest.mark.usefixture('load_source')` to `@pytest.mark.usefixtures('load_source')`.
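For illustration, a marker registered this way is used and selected roughly like this (test names below are invented):

```python
# Hypothetical test module using the newly registered marker.
import pytest


@pytest.mark.functional
def test_end_to_end_flow():
    assert True  # stand-in for a real functional test


def test_unit_only():
    assert True
```

`pytest -m functional` then runs only the marked test, and `pytest -m "not functional"` skips it; registering the marker (via `pytest_configure` as in the diff, or a `markers` entry in `pytest.ini`) is what keeps pytest from emitting unknown-mark warnings.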
https://api.github.com/repos/nvbn/thefuck/pulls/1116
2020-07-20T02:31:09Z
2020-11-03T17:29:28Z
2020-11-03T17:29:28Z
2021-07-17T13:47:08Z
260
nvbn/thefuck
30,846
Error message for PAY_PER_REQUEST billing mode
diff --git a/localstack/services/cloudformation/models/dynamodb.py b/localstack/services/cloudformation/models/dynamodb.py index 254e9bca1c80b..339acfe09db7f 100644 --- a/localstack/services/cloudformation/models/dynamodb.py +++ b/localstack/services/cloudformation/models/dynamodb.py @@ -9,6 +9,9 @@ def get_ddb_provisioned_throughput(params, **kwargs): args = params.get("ProvisionedThroughput") if args == PLACEHOLDER_AWS_NO_VALUE: return {} + is_ondemand = params.get("BillingMode") == "PAY_PER_REQUEST" + if is_ondemand and args is None: + return if args: if isinstance(args["ReadCapacityUnits"], str): args["ReadCapacityUnits"] = int(args["ReadCapacityUnits"]) @@ -64,6 +67,15 @@ def fetch_state(self, stack_name, resources): table_name = self.resolve_refs_recursively(stack_name, table_name, resources) return aws_stack.connect_to_service("dynamodb").describe_table(TableName=table_name) + @staticmethod + def add_defaults(resource, stack_name: str): + is_pay_per_request = resource.get("Properties", {}).get("BillingMode") == "PAY_PER_REQUEST" + if not is_pay_per_request: + resource["Properties"]["ProvisionedThroughput"] = { + "ReadCapacityUnits": 5, + "WriteCapacityUnits": 5, + } + @classmethod def get_deploy_templates(cls): def _pre_create(resource_id, resources, resource_type, func, stack_name): @@ -96,12 +108,6 @@ def _generate_res_name(): # TODO: generalize ) ), }, - "defaults": { - "ProvisionedThroughput": { - "ReadCapacityUnits": 5, - "WriteCapacityUnits": 5, - } - }, }, { "function": "enable_kinesis_streaming_destination", diff --git a/localstack/services/dynamodb/provider.py b/localstack/services/dynamodb/provider.py index f1c313c2c085c..95370e84e18a4 100644 --- a/localstack/services/dynamodb/provider.py +++ b/localstack/services/dynamodb/provider.py @@ -23,6 +23,7 @@ BatchGetRequestMap, BatchWriteItemInput, BatchWriteItemOutput, + BillingMode, CreateGlobalTableOutput, CreateTableInput, CreateTableOutput, @@ -386,6 +387,13 @@ def create_table( table_name = create_table_input["TableName"] if self.table_exists(table_name): raise ResourceInUseException("Cannot create preexisting table") + billing_mode = create_table_input.get("BillingMode") + provisioned_throughput = create_table_input.get("ProvisionedThroughput") + if billing_mode == BillingMode.PAY_PER_REQUEST and provisioned_throughput is not None: + raise ValidationException( + "One or more parameter values were invalid: Neither ReadCapacityUnits nor WriteCapacityUnits can be " + "specified when BillingMode is PAY_PER_REQUEST" + ) # forward request to backend result = self.forward_request(context) diff --git a/tests/integration/test_dynamodb.py b/tests/integration/test_dynamodb.py index 3a31327eb2412..a8311b03f1258 100644 --- a/tests/integration/test_dynamodb.py +++ b/tests/integration/test_dynamodb.py @@ -978,6 +978,20 @@ def test_dynamodb_batch_write_item(self): assert result.get("UnprocessedItems") == {} + def test_dynamodb_pay_per_request(self): + dynamodb = aws_stack.create_external_boto_client("dynamodb") + table_name = "ddb-table-%s" % short_uid() + + with pytest.raises(Exception) as e: + dynamodb.create_table( + TableName=table_name, + KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}], + AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}], + ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5}, + BillingMode="PAY_PER_REQUEST", + ) + assert e.match("ValidationException") + def test_dynamodb_create_table_with_sse_specification(self): dynamodb = 
aws_stack.create_external_boto_client("dynamodb") table_name = "ddb-table-%s" % short_uid()
As reported in #4982, when a table is created with `PAY_PER_REQUEST` billing mode and a provisioned throughput is specified as well, we should return a `ValidationException`.
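A hedged boto3 sketch of the behaviour (table name, dummy credentials and the `http://localhost:4566` edge URL are assumptions for illustration): the first call is the invalid combination and should now raise, while omitting `ProvisionedThroughput` is the valid on-demand form.

```python
import boto3
from botocore.exceptions import ClientError

# Assumed local endpoint and dummy credentials, for illustration only.
dynamodb = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

table_args = dict(
    TableName="example-table",
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
)

try:
    # Invalid: PAY_PER_REQUEST must not be combined with ProvisionedThroughput.
    dynamodb.create_table(
        BillingMode="PAY_PER_REQUEST",
        ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        **table_args,
    )
except ClientError as err:
    print(err.response["Error"]["Code"])  # expected: ValidationException

# Valid: on-demand billing with ProvisionedThroughput omitted entirely.
dynamodb.create_table(BillingMode="PAY_PER_REQUEST", **table_args)
```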
https://api.github.com/repos/localstack/localstack/pulls/5877
2022-04-16T18:13:49Z
2022-04-17T18:33:39Z
2022-04-17T18:33:39Z
2022-04-19T07:47:49Z
998
localstack/localstack
28,411
Fix description of nova_compute:name option
diff --git a/library/cloud/nova_compute b/library/cloud/nova_compute index 643d685ac9c03b..bed780bf2d1355 100644 --- a/library/cloud/nova_compute +++ b/library/cloud/nova_compute @@ -61,7 +61,7 @@ options: default: present name: description: - - Name that has to be given to the image + - Name that has to be given to the instance required: true default: None image_id:
It's the name of the instance, not of an image.
https://api.github.com/repos/ansible/ansible/pulls/4742
2013-10-31T10:36:42Z
2013-10-31T12:05:05Z
2013-10-31T12:05:05Z
2019-04-24T19:10:40Z
118
ansible/ansible
49,121
[br] Allow '/' in URL, allow empty author + broadcastDate fields
diff --git a/youtube_dl/extractor/br.py b/youtube_dl/extractor/br.py index 5fcc1084a22..7cc159e201e 100644 --- a/youtube_dl/extractor/br.py +++ b/youtube_dl/extractor/br.py @@ -9,21 +9,35 @@ class BRIE(InfoExtractor): IE_DESC = "Bayerischer Rundfunk Mediathek" - _VALID_URL = r"^https?://(?:www\.)?br\.de/mediathek/video/(?:sendungen/)?(?P<id>[a-z0-9\-]+)\.html$" + _VALID_URL = r"^https?://(?:www\.)?br\.de/mediathek/video/(?:sendungen/)?(?:[a-z0-9\-/]+/)?(?P<id>[a-z0-9\-]+)\.html$" _BASE_URL = "http://www.br.de" - _TEST = { - "url": "http://www.br.de/mediathek/video/anselm-gruen-114.html", - "md5": "c4f83cf0f023ba5875aba0bf46860df2", - "info_dict": { - "id": "2c8d81c5-6fb7-4a74-88d4-e768e5856532", - "ext": "mp4", - "title": "Feiern und Verzichten", - "description": "Anselm Grün: Feiern und Verzichten", - "uploader": "BR/Birgit Baier", - "upload_date": "20140301" + _TESTS = [ + { + "url": "http://www.br.de/mediathek/video/anselm-gruen-114.html", + "md5": "c4f83cf0f023ba5875aba0bf46860df2", + "info_dict": { + "id": "2c8d81c5-6fb7-4a74-88d4-e768e5856532", + "ext": "mp4", + "title": "Feiern und Verzichten", + "description": "Anselm Grün: Feiern und Verzichten", + "uploader": "BR/Birgit Baier", + "upload_date": "20140301" + } + }, + { + "url": "http://www.br.de/mediathek/video/sendungen/unter-unserem-himmel/unter-unserem-himmel-alpen-ueber-den-pass-100.html", + "md5": "ab451b09d861dbed7d7cc9ab0be19ebe", + "info_dict": { + "id": "2c060e69-3a27-4e13-b0f0-668fac17d812", + "ext": "mp4", + "title": "Über den Pass", + "description": "Die Eroberung der Alpen: Über den Pass", + "uploader": None, + "upload_date": None + } } - } + ] def _real_extract(self, url): mobj = re.match(self._VALID_URL, url) @@ -33,16 +47,21 @@ def _real_extract(self, url): r"return BRavFramework\.register\(BRavFramework\('avPlayer_(?:[a-f0-9-]{36})'\)\.setup\({dataURL:'(/mediathek/video/[a-z0-9/~_.-]+)'}\)\);", page, "XMLURL") xml = self._download_xml(self._BASE_URL + xml_url, None) - videos = [{ - "id": xml_video.get("externalId"), - "title": xml_video.find("title").text, - "formats": self._extract_formats(xml_video.find("assets")), - "thumbnails": self._extract_thumbnails(xml_video.find("teaserImage/variants")), - "description": " ".join(xml_video.find("shareTitle").text.splitlines()), - "uploader": xml_video.find("author").text, - "upload_date": "".join(reversed(xml_video.find("broadcastDate").text.split("."))), - "webpage_url": xml_video.find("permalink").text, - } for xml_video in xml.findall("video")] + videos = [] + for xml_video in xml.findall("video"): + video = { + "id": xml_video.get("externalId"), + "title": xml_video.find("title").text, + "formats": self._extract_formats(xml_video.find("assets")), + "thumbnails": self._extract_thumbnails(xml_video.find("teaserImage/variants")), + "description": " ".join(xml_video.find("shareTitle").text.splitlines()), + "webpage_url": xml_video.find("permalink").text + } + if xml_video.find("author").text: + video["uploader"] = xml_video.find("author").text + if xml_video.find("broadcastDate").text: + video["upload_date"] = "".join(reversed(xml_video.find("broadcastDate").text.split("."))) + videos.append(video) if len(videos) > 1: self._downloader.report_warning(
- Allow URLs that have a 'subdirectory' before the actual program name, e.g. 'xyz/xyz-episode-1'. - The author and broadcastDate fields in the XML file may be empty. - Add a test case for the two problems above.
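A quick self-contained check of the relaxed `_VALID_URL` pattern, using the two test URLs from the diff above:

```python
import re

# The new pattern from the diff; the optional "(?:[a-z0-9\-/]+/)?" group is what
# allows a subdirectory such as "unter-unserem-himmel/" before the program name.
pattern = (r"^https?://(?:www\.)?br\.de/mediathek/video/"
           r"(?:sendungen/)?(?:[a-z0-9\-/]+/)?(?P<id>[a-z0-9\-]+)\.html$")

urls = [
    "http://www.br.de/mediathek/video/anselm-gruen-114.html",
    "http://www.br.de/mediathek/video/sendungen/unter-unserem-himmel/"
    "unter-unserem-himmel-alpen-ueber-den-pass-100.html",
]
for url in urls:
    print(re.match(pattern, url).group("id"))
```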
https://api.github.com/repos/ytdl-org/youtube-dl/pulls/2555
2014-03-13T13:10:36Z
2014-03-13T13:36:03Z
2014-03-13T13:36:03Z
2014-03-13T13:36:14Z
1,206
ytdl-org/youtube-dl
49,733
Point the user to fullchain.pem, not cert.pem
diff --git a/letsencrypt/cli.py b/letsencrypt/cli.py index 1b396b0b8c1..5163f286824 100644 --- a/letsencrypt/cli.py +++ b/letsencrypt/cli.py @@ -267,16 +267,31 @@ def _treat_as_renewal(config, domains): return None -def _report_new_cert(cert_path): - """Reports the creation of a new certificate to the user.""" +def _report_new_cert(cert_path, fullchain_path): + """Reports the creation of a new certificate to the user. + + :param str cert_path: path to cert + :param str fullchain_path: path to full chain + + """ expiry = crypto_util.notAfter(cert_path).date() reporter_util = zope.component.getUtility(interfaces.IReporter) - reporter_util.add_message("Congratulations! Your certificate has been " - "saved at {0} and will expire on {1}. To obtain " - "a new version of the certificate in the " - "future, simply run Let's Encrypt again.".format( - cert_path, expiry), - reporter_util.MEDIUM_PRIORITY) + if fullchain_path: + # Print the path to fullchain.pem because that's what modern webservers + # (Nginx and Apache2.4) will want. + and_chain = "and chain have" + path = fullchain_path + else: + # Unless we're in .csr mode and there really isn't one + and_chain = "has " + path = cert_path + # XXX Perhaps one day we could detect the presence of known old webservers + # and say something more informative here. + msg = ("Congratulations! Your certificate {0} been saved at {1}." + " Your cert will expire on {2}. To obtain a new version of the " + "certificate in the future, simply run Let's Encrypt again." + .format(and_chain, path, expiry)) + reporter_util.add_message(msg, reporter_util.MEDIUM_PRIORITY) def _auth_from_domains(le_client, config, domains, plugins): @@ -304,7 +319,7 @@ def _auth_from_domains(le_client, config, domains, plugins): if not lineage: raise Error("Certificate could not be obtained") - _report_new_cert(lineage.cert) + _report_new_cert(lineage.cert, lineage.fullchain) return lineage @@ -312,8 +327,8 @@ def _auth_from_domains(le_client, config, domains, plugins): def set_configurator(previously, now): """ Setting configurators multiple ways is okay, as long as they all agree - :param string previously: previously identified request for the installer/authenticator - :param string requested: the request currently being processed + :param str previously: previously identified request for the installer/authenticator + :param str requested: the request currently being processed """ if now is None: # we're not actually setting anything @@ -329,8 +344,8 @@ def diagnose_configurator_problem(cfg_type, requested, plugins): """ Raise the most helpful error message about a plugin being unavailable - :param string cfg_type: either "installer" or "authenticator" - :param string requested: the plugin that was requested + :param str cfg_type: either "installer" or "authenticator" + :param str requested: the plugin that was requested :param PluginRegistry plugins: available plugins :raises error.PluginSelectionError: if there was a problem @@ -455,9 +470,9 @@ def auth(args, config, plugins): if args.csr is not None: certr, chain = le_client.obtain_certificate_from_csr(le_util.CSR( file=args.csr[0], data=args.csr[1], form="der")) - cert_path, _ = le_client.save_certificate( - certr, chain, args.cert_path, args.chain_path) - _report_new_cert(cert_path) + cert_path, _, cert_fullchain = le_client.save_certificate( + certr, chain, args.cert_path, args.chain_path, args.fullchain_path) + _report_new_cert(cert_path, cert_fullchain) else: domains = _find_domains(args, installer) 
_auth_from_domains(le_client, config, domains, plugins) diff --git a/letsencrypt/client.py b/letsencrypt/client.py index 732bdcf0337..3a6d9047286 100644 --- a/letsencrypt/client.py +++ b/letsencrypt/client.py @@ -258,8 +258,8 @@ def obtain_and_enroll_certificate(self, domains, plugins): params, config, cli_config) return lineage - def save_certificate(self, certr, chain_cert, cert_path, chain_path): - # pylint: disable=no-self-use + def save_certificate(self, certr, chain_cert, + cert_path, chain_path, fullchain_path): """Saves the certificate received from the ACME server. :param certr: ACME "certificate" resource. @@ -268,24 +268,23 @@ def save_certificate(self, certr, chain_cert, cert_path, chain_path): :param list chain_cert: :param str cert_path: Candidate path to a certificate. :param str chain_path: Candidate path to a certificate chain. + :param str fullchain_path: Candidate path to a full cert chain. - :returns: cert_path, chain_path (absolute paths to the actual files) + :returns: cert_path, chain_path, and fullchain_path as absolute + paths to the actual files :rtype: `tuple` of `str` :raises IOError: If unable to find room to write the cert files """ - for path in cert_path, chain_path: + for path in cert_path, chain_path, fullchain_path: le_util.make_or_verify_dir( os.path.dirname(path), 0o755, os.geteuid(), self.config.strict_permissions) - # try finally close - cert_chain_abspath = None - cert_file, act_cert_path = le_util.unique_file(cert_path, 0o644) - # TODO: Except cert_pem = OpenSSL.crypto.dump_certificate( OpenSSL.crypto.FILETYPE_PEM, certr.body) + cert_file, act_cert_path = le_util.unique_file(cert_path, 0o644) try: cert_file.write(cert_pem) finally: @@ -293,22 +292,15 @@ def save_certificate(self, certr, chain_cert, cert_path, chain_path): logger.info("Server issued certificate; certificate written to %s", act_cert_path) + cert_chain_abspath = None + fullchain_abspath = None if chain_cert: - chain_file, act_chain_path = le_util.unique_file( - chain_path, 0o644) - # TODO: Except chain_pem = crypto_util.dump_pyopenssl_chain(chain_cert) - try: - chain_file.write(chain_pem) - finally: - chain_file.close() + cert_chain_abspath = _save_chain(chain_pem, chain_path) + fullchain_abspath = _save_chain(cert_pem + chain_pem, + fullchain_path) - logger.info("Cert chain written to %s", act_chain_path) - - # This expects a valid chain file - cert_chain_abspath = os.path.abspath(act_chain_path) - - return os.path.abspath(act_cert_path), cert_chain_abspath + return os.path.abspath(act_cert_path), cert_chain_abspath, fullchain_abspath def deploy_certificate(self, domains, privkey_path, cert_path, chain_path, fullchain_path): @@ -465,3 +457,25 @@ def view_config_changes(config): rev = reverter.Reverter(config) rev.recovery_routine() rev.view_config_changes() + + +def _save_chain(chain_pem, chain_path): + """Saves chain_pem at a unique path based on chain_path. 
+ + :param str chain_pem: certificate chain in PEM format + :param str chain_path: candidate path for the cert chain + + :returns: absolute path to saved cert chain + :rtype: str + + """ + chain_file, act_chain_path = le_util.unique_file(chain_path, 0o644) + try: + chain_file.write(chain_pem) + finally: + chain_file.close() + + logger.info("Cert chain written to %s", act_chain_path) + + # This expects a valid chain file + return os.path.abspath(act_chain_path) diff --git a/letsencrypt/tests/cli_test.py b/letsencrypt/tests/cli_test.py index 9d9164f24a4..8e917205594 100644 --- a/letsencrypt/tests/cli_test.py +++ b/letsencrypt/tests/cli_test.py @@ -149,7 +149,7 @@ def test_auth_new_request_success(self, mock_get_utility, mock_notAfter): date = '1970-01-01' mock_notAfter().date.return_value = date - mock_lineage = mock.MagicMock(cert=cert_path) + mock_lineage = mock.MagicMock(cert=cert_path, fullchain=cert_path) mock_client = mock.MagicMock() mock_client.obtain_and_enroll_certificate.return_value = mock_lineage self._auth_new_request_common(mock_client) @@ -177,9 +177,10 @@ def _auth_new_request_common(self, mock_client): @mock.patch('letsencrypt.cli._treat_as_renewal') @mock.patch('letsencrypt.cli._init_le_client') def test_auth_renewal(self, mock_init, mock_renewal, mock_get_utility): - cert_path = '/etc/letsencrypt/live/foo.bar' + cert_path = '/etc/letsencrypt/live/foo.bar/cert.pem' + chain_path = '/etc/letsencrypt/live/foo.bar/fullchain.pem' - mock_lineage = mock.MagicMock(cert=cert_path) + mock_lineage = mock.MagicMock(cert=cert_path, fullchain=chain_path) mock_cert = mock.MagicMock(body='body') mock_key = mock.MagicMock(pem='pem_key') mock_renewal.return_value = mock_lineage @@ -195,7 +196,7 @@ def test_auth_renewal(self, mock_init, mock_renewal, mock_get_utility): mock_lineage.update_all_links_to.assert_called_once_with( mock_lineage.latest_common_version()) self.assertTrue( - cert_path in mock_get_utility().add_message.call_args[0][0]) + chain_path in mock_get_utility().add_message.call_args[0][0]) @mock.patch('letsencrypt.crypto_util.notAfter') @mock.patch('letsencrypt.cli.display_ops.pick_installer') @@ -203,23 +204,24 @@ def test_auth_renewal(self, mock_init, mock_renewal, mock_get_utility): @mock.patch('letsencrypt.cli._init_le_client') def test_auth_csr(self, mock_init, mock_get_utility, mock_pick_installer, mock_notAfter): - cert_path = '/etc/letsencrypt/live/foo.bar' + cert_path = '/etc/letsencrypt/live/blahcert.pem' date = '1970-01-01' mock_notAfter().date.return_value = date mock_client = mock.MagicMock() mock_client.obtain_certificate_from_csr.return_value = ('certr', 'chain') - mock_client.save_certificate.return_value = cert_path, None + mock_client.save_certificate.return_value = cert_path, None, None mock_init.return_value = mock_client installer = 'installer' self._call( ['-a', 'standalone', '-i', installer, 'auth', '--csr', CSR, - '--cert-path', cert_path, '--chain-path', '/']) + '--cert-path', cert_path, '--fullchain-path', '/', + '--chain-path', '/']) self.assertEqual(mock_pick_installer.call_args[0][1], installer) mock_client.save_certificate.assert_called_once_with( - 'certr', 'chain', cert_path, '/') + 'certr', 'chain', cert_path, '/', '/') self.assertTrue( cert_path in mock_get_utility().add_message.call_args[0][0]) self.assertTrue( diff --git a/letsencrypt/tests/client_test.py b/letsencrypt/tests/client_test.py index 3f7b84a6418..2efe11108a2 100644 --- a/letsencrypt/tests/client_test.py +++ b/letsencrypt/tests/client_test.py @@ -120,18 +120,22 @@ def 
test_save_certificate(self): os.chmod(tmp_path, 0o755) # TODO: really?? certr = mock.MagicMock(body=test_util.load_cert(certs[0])) - cert1 = test_util.load_cert(certs[1]) - cert2 = test_util.load_cert(certs[2]) + chain_cert = [test_util.load_cert(certs[1]), + test_util.load_cert(certs[2])] candidate_cert_path = os.path.join(tmp_path, "certs", "cert.pem") candidate_chain_path = os.path.join(tmp_path, "chains", "chain.pem") + candidate_fullchain_path = os.path.join(tmp_path, "chains", "fullchain.pem") - cert_path, chain_path = self.client.save_certificate( - certr, [cert1, cert2], candidate_cert_path, candidate_chain_path) + cert_path, chain_path, fullchain_path = self.client.save_certificate( + certr, chain_cert, candidate_cert_path, candidate_chain_path, + candidate_fullchain_path) self.assertEqual(os.path.dirname(cert_path), os.path.dirname(candidate_cert_path)) self.assertEqual(os.path.dirname(chain_path), os.path.dirname(candidate_chain_path)) + self.assertEqual(os.path.dirname(fullchain_path), + os.path.dirname(candidate_fullchain_path)) with open(cert_path, "r") as cert_file: cert_contents = cert_file.read()
Closes: #1074
https://api.github.com/repos/certbot/certbot/pulls/1077
2015-10-21T23:51:18Z
2015-10-22T17:57:13Z
2015-10-22T17:57:13Z
2016-05-06T19:21:53Z
3,137
certbot/certbot
392
[NFC] polish colossalai/engine/schedule/_pipeline_schedule_v2.py code…
diff --git a/colossalai/engine/schedule/_pipeline_schedule_v2.py b/colossalai/engine/schedule/_pipeline_schedule_v2.py index 50a87aafad02..28c58bd82b5c 100644 --- a/colossalai/engine/schedule/_pipeline_schedule_v2.py +++ b/colossalai/engine/schedule/_pipeline_schedule_v2.py @@ -1,11 +1,12 @@ #!/usr/bin/env python # -*- encoding: utf-8 -*- -from typing import Tuple, Iterable +from typing import Iterable, Tuple -from colossalai import engine -import colossalai.communication.p2p_v2 as comm import torch.cuda + +import colossalai.communication.p2p_v2 as comm +from colossalai import engine from colossalai.context.parallel_mode import ParallelMode from colossalai.core import global_context as gpc from colossalai.utils.cuda import get_current_device @@ -35,7 +36,7 @@ def pack_return_tensors(return_tensors): class PipelineScheduleV2(PipelineSchedule): """Derived class of PipelineSchedule, the only difference is that forward_backward_step is reconstructed with p2p_v2 - + Args: num_microbatches (int): The number of microbatches. data_process_func (Callable, optional): @@ -43,9 +44,9 @@ class PipelineScheduleV2(PipelineSchedule): tensor_shape (torch.Size, optional): Specified shape in pipeline communication. scatter_gather_tensors (bool, optional): If set to `True`, communication will be reduced over pipeline when using 1D tensor parallelization. - + Example: - + # this shows an example of customized data_process_func def data_process_func(stage_output, dataloader_output): output1, output2 = stage_output
… style ## 📌 Checklist before creating the PR - [ ] I have created an issue for this PR for traceability - [ ] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description` - [ ] I have added relevant tags if possible for us to better distinguish different PRs ## 🚨 Issue number > Link this PR to your issue with words like fixed to automatically close the linked issue upon merge > > e.g. `fixed #1234`, `closed #1234`, `resolved #1234` ## 📝 What does this PR do? > Summarize your work here. > if you have any plots/diagrams/screenshots/tables, please attach them here. ## 💥 Checklist before requesting a review - [ ] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue)) - [ ] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible - [ ] I have performed a self-review of my code - [ ] I have added thorough tests. - [ ] I have added docstrings for all the functions/methods I implemented ## ⭐️ Do you enjoy contributing to Colossal-AI? - [ ] 🌝 Yes, I do. - [ ] 🌚 No, I don't. Tell us more if you don't enjoy contributing to Colossal-AI.
https://api.github.com/repos/hpcaitech/ColossalAI/pulls/3275
2023-03-28T05:39:30Z
2023-03-28T06:31:39Z
2023-03-28T06:31:39Z
2023-03-28T06:31:39Z
396
hpcaitech/ColossalAI
11,469
--download: Use `time.monotonic()` and rework code to prevent `ZeroDivisionError` specific handling
diff --git a/httpie/downloads.py b/httpie/downloads.py index bd53684d1c..e38c56e249 100644 --- a/httpie/downloads.py +++ b/httpie/downloads.py @@ -9,7 +9,7 @@ import sys import threading from mailbox import Message -from time import sleep, time +from time import sleep, monotonic from typing import IO, Optional, Tuple from urllib.parse import urlsplit @@ -350,7 +350,7 @@ def started(self, resumed_from=0, total_size=None): assert self.time_started is None self.total_size = total_size self.downloaded = self.resumed_from = resumed_from - self.time_started = time() + self.time_started = monotonic() def chunk_downloaded(self, size): assert self.time_finished is None @@ -363,7 +363,7 @@ def has_finished(self): def finished(self): assert self.time_started is not None assert self.time_finished is None - self.time_finished = time() + self.time_finished = monotonic() class ProgressReporterThread(threading.Thread): @@ -389,7 +389,7 @@ def __init__( self._spinner_pos = 0 self._status_line = '' self._prev_bytes = 0 - self._prev_time = time() + self._prev_time = monotonic() self._should_stop = threading.Event() def stop(self): @@ -406,16 +406,11 @@ def run(self): sleep(self._tick) def report_speed(self): - - now = time() - + now = monotonic() if now - self._prev_time >= self._update_interval: downloaded = self.status.downloaded - try: - speed = ((downloaded - self._prev_bytes) - / (now - self._prev_time)) - except ZeroDivisionError: - speed = 0 + speed = ((downloaded - self._prev_bytes) + / (now - self._prev_time)) if not self.status.total_size: self._status_line = PROGRESS_NO_CONTENT_LENGTH.format( @@ -423,10 +418,9 @@ def report_speed(self): speed=humanize_bytes(speed), ) else: - try: - percentage = downloaded / self.status.total_size * 100 - except ZeroDivisionError: - percentage = 0 + percentage = (downloaded / self.status.total_size * 100 + if self.status.total_size + else 0) if not speed: eta = '-:--:--' @@ -457,17 +451,10 @@ def sum_up(self): actually_downloaded = ( self.status.downloaded - self.status.resumed_from) time_taken = self.status.time_finished - self.status.time_started + speed = actually_downloaded / time_taken if time_taken else actually_downloaded self.output.write(CLEAR_LINE) - try: - speed = actually_downloaded / time_taken - except ZeroDivisionError: - # Either time is 0 (not all systems provide `time.time` - # with a better precision than 1 second), and/or nothing - # has been downloaded. - speed = actually_downloaded - self.output.write(SUMMARY.format( downloaded=humanize_bytes(actually_downloaded), total=(self.status.total_size
It simplifies the speed/percentage handling. It also expands test coverage, since the previous `ZeroDivisionError` corner cases were not easy to test.
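A stripped-down sketch of the reworked bookkeeping (class and method names are simplifications, not httpie's actual API): because `monotonic()` never moves backwards and the interval guard requires a positive elapsed time, the division can no longer hit zero.

```python
from time import monotonic


class SpeedMeter:
    """Simplified stand-in for the progress reporter's speed calculation."""

    def __init__(self, update_interval=0.1):
        self.update_interval = update_interval  # assumed to be > 0
        self._prev_time = monotonic()
        self._prev_bytes = 0

    def report(self, downloaded):
        now = monotonic()
        elapsed = now - self._prev_time
        if elapsed < self.update_interval:
            return None  # too soon since the last report
        # elapsed >= update_interval > 0, so the division is always safe.
        speed = (downloaded - self._prev_bytes) / elapsed
        self._prev_time, self._prev_bytes = now, downloaded
        return speed
```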
https://api.github.com/repos/httpie/cli/pulls/1113
2021-07-26T10:07:17Z
2021-07-29T14:05:56Z
2021-07-29T14:05:56Z
2021-07-29T14:08:46Z
753
httpie/cli
34,010
DOC Fix davies_bouldin_score for numpydoc
diff --git a/maint_tools/test_docstrings.py b/maint_tools/test_docstrings.py index d7cebff7344b9..6abed61972aad 100644 --- a/maint_tools/test_docstrings.py +++ b/maint_tools/test_docstrings.py @@ -120,7 +120,6 @@ "sklearn.metrics.cluster._supervised.pair_confusion_matrix", "sklearn.metrics.cluster._supervised.rand_score", "sklearn.metrics.cluster._supervised.v_measure_score", - "sklearn.metrics.cluster._unsupervised.davies_bouldin_score", "sklearn.metrics.cluster._unsupervised.silhouette_samples", "sklearn.metrics.cluster._unsupervised.silhouette_score", "sklearn.metrics.pairwise.additive_chi2_kernel", diff --git a/sklearn/metrics/cluster/_unsupervised.py b/sklearn/metrics/cluster/_unsupervised.py index fd4933c1df17a..e353511d614f3 100644 --- a/sklearn/metrics/cluster/_unsupervised.py +++ b/sklearn/metrics/cluster/_unsupervised.py @@ -303,7 +303,7 @@ def calinski_harabasz_score(X, labels): def davies_bouldin_score(X, labels): - """Computes the Davies-Bouldin score. + """Compute the Davies-Bouldin score. The score is defined as the average similarity measure of each cluster with its most similar cluster, where similarity is the ratio of within-cluster
…_score passes numpydoc validation #21350 #### Reference Issues/PRs DOC Ensures that sklearn.metrics.cluster._unsupervised.davies_bouldin_score #21350 #### What does this implement/fix? Explain your changes. Make summary start with infinitive verb. #### Any other comments? <!-- Please be aware that we are a loose team of volunteers so patience is necessary; assistance handling other issues is very welcome. We value all user contributions, no matter how minor they are. If we are slow to review, either the pull request needs some benchmarking, tinkering, convincing, etc. or more likely the reviewers are simply busy. In either case, we ask for your understanding during the review process. For more information, see our FAQ on this topic: http://scikit-learn.org/dev/faq.html#why-is-my-pull-request-not-getting-any-attention. Thanks for contributing! -->
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/21850
2021-12-01T22:12:38Z
2021-12-03T07:05:26Z
2021-12-03T07:05:26Z
2022-03-12T15:56:08Z
341
scikit-learn/scikit-learn
46,614
Update schema.rst
diff --git a/docs/tutorial/schema.rst b/docs/tutorial/schema.rst index f845503703..246baccd64 100644 --- a/docs/tutorial/schema.rst +++ b/docs/tutorial/schema.rst @@ -14,7 +14,7 @@ named `schema.sql` in the just created `flaskr` folder: create table entries ( id integer primary key autoincrement, title text not null, - text text not null + 'text' text not null ); This schema consists of a single table called `entries` and each row in
The error is that sqlite3 needs escaping of the database column 'text' in the line `text text not null`, so that it becomes `'text' text not null`. The example on mitsuhiko/flask is correct and helped me detect the mistake.
https://api.github.com/repos/pallets/flask/pulls/1099
2014-07-01T14:58:23Z
2014-07-27T11:31:59Z
2014-07-27T11:31:59Z
2020-11-14T05:08:07Z
127
pallets/flask
20,666
Fix OpenSearch port strategy when running inside Docker
diff --git a/localstack/services/opensearch/cluster.py b/localstack/services/opensearch/cluster.py index 3c3e6a258756b..a8517b8f2e1e4 100644 --- a/localstack/services/opensearch/cluster.py +++ b/localstack/services/opensearch/cluster.py @@ -123,11 +123,10 @@ def build_cluster_run_command(cluster_bin: str, settings: CommandSettings) -> Li class OpensearchCluster(Server): - """Manages an OpenSearch cluster which is installed an operated by LocalStack.""" + """Manages an OpenSearch cluster which is installed and operated by LocalStack.""" - # TODO: legacy default port should be removed here def __init__( - self, port=4571, host="localhost", version: str = None, directories: Directories = None + self, port, host="localhost", version: str = None, directories: Directories = None ) -> None: super().__init__(port, host) self._version = version or self.default_version @@ -201,6 +200,7 @@ def _base_settings(self, dirs) -> CommandSettings: "path.data": f'"{dirs.data}"', "path.repo": f'"{dirs.backup}"', "plugins.security.disabled": "true", + "discovery.type": "single-node", } if os.path.exists(os.path.join(dirs.mods, "x-pack-ml")): @@ -358,6 +358,7 @@ def _base_settings(self, dirs) -> CommandSettings: "http.compression": "false", "path.data": f'"{dirs.data}"', "path.repo": f'"{dirs.backup}"', + "discovery.type": "single-node", } if os.path.exists(os.path.join(dirs.mods, "x-pack-ml")): diff --git a/localstack/services/opensearch/cluster_manager.py b/localstack/services/opensearch/cluster_manager.py index f6960952f2f1f..02f21d2733795 100644 --- a/localstack/services/opensearch/cluster_manager.py +++ b/localstack/services/opensearch/cluster_manager.py @@ -7,6 +7,7 @@ from localstack import config from localstack.aws.api.opensearch import DomainEndpointOptions, EngineType +from localstack.config import EDGE_BIND_HOST from localstack.constants import LOCALHOST, LOCALHOST_HOSTNAME from localstack.services.generic_proxy import EndpointProxy, FakeEndpointProxyServer from localstack.services.opensearch import versions @@ -255,11 +256,11 @@ def _create_cluster(self, arn, url, version) -> Server: # startup routine for the singleton cluster instance if engine_type == EngineType.OpenSearch: self.cluster = OpensearchCluster( - port=get_free_tcp_port(), directories=resolve_directories(version, arn) + get_free_tcp_port(), directories=resolve_directories(version, arn) ) else: self.cluster = ElasticsearchCluster( - port=get_free_tcp_port(), directories=resolve_directories(version, arn) + get_free_tcp_port(), directories=resolve_directories(version, arn) ) def _start_async(*_): @@ -305,14 +306,14 @@ def _create_cluster(self, arn, url, version) -> Server: port = _get_port_from_url(url) if engine_type == EngineType.OpenSearch: return OpensearchCluster( - port=port, - host=LOCALHOST, + port, + host=EDGE_BIND_HOST, version=version, directories=resolve_directories(version, arn), ) else: return ElasticsearchCluster( - port=port, + port, host=LOCALHOST, version=version, directories=resolve_directories(version, arn), @@ -354,14 +355,14 @@ def _create_cluster(self, arn, url, version) -> Server: engine_type = versions.get_engine_type(version) if engine_type == EngineType.OpenSearch: self.cluster = OpensearchCluster( - port=port, - host=LOCALHOST, + port, + host=EDGE_BIND_HOST, version=version, directories=resolve_directories(version, arn), ) else: self.cluster = ElasticsearchCluster( - port=port, + port, host=LOCALHOST, version=version, directories=resolve_directories(version, arn), diff --git 
a/localstack/services/opensearch/provider.py b/localstack/services/opensearch/provider.py index fcff9523c681d..1e35dca25e129 100644 --- a/localstack/services/opensearch/provider.py +++ b/localstack/services/opensearch/provider.py @@ -77,6 +77,7 @@ VPCDerivedInfoStatus, VPCOptions, ) +from localstack.config import LOCALSTACK_HOSTNAME from localstack.constants import OPENSEARCH_DEFAULT_VERSION from localstack.services.generic_proxy import RegionBackend from localstack.services.opensearch import versions @@ -146,7 +147,10 @@ def create_cluster( # FIXME: in AWS, the Endpoint is set once the cluster is running, not before (like here), but our tests and # in particular cloudformation currently relies on the assumption that it is set when the domain is created. status = region.opensearch_domains[domain_key.domain_name] - status["Endpoint"] = cluster.url.split("://")[-1] + # Replacing only 0.0.0.0 here as usage of this bind address mostly means running in docker which is used locally + # If another bind address is used we want to keep it in the endpoint as this is a conscious user decision to + # access from another device on the network. + status["Endpoint"] = cluster.url.split("://")[-1].replace("0.0.0.0", LOCALSTACK_HOSTNAME) status["EngineVersion"] = engine_version if cluster.is_up(): diff --git a/tests/integration/test_opensearch.py b/tests/integration/test_opensearch.py index 7c7fba78a7c79..ce95a1cb9688f 100644 --- a/tests/integration/test_opensearch.py +++ b/tests/integration/test_opensearch.py @@ -8,6 +8,7 @@ from localstack import config from localstack.aws.accounts import get_aws_account_id +from localstack.config import EDGE_BIND_HOST, LOCALSTACK_HOSTNAME from localstack.constants import OPENSEARCH_DEFAULT_VERSION, OPENSEARCH_PLUGIN_LIST from localstack.services.install import install_opensearch from localstack.services.opensearch.cluster import EdgeProxiedOpensearchCluster @@ -413,7 +414,7 @@ def test_endpoint_strategy_port(self, monkeypatch, opensearch_create_domain, ope assert "Endpoint" in status endpoint = status["Endpoint"] parts = endpoint.split(":") - assert parts[0] == "localhost" + assert parts[0] in ("localhost", "127.0.0.1") assert int(parts[1]) in range( config.EXTERNAL_SERVICE_PORTS_START, config.EXTERNAL_SERVICE_PORTS_END ) @@ -585,7 +586,8 @@ def test_endpoint_strategy_port_singleton_cluster(self, monkeypatch): parts = cluster_0.url.split(":") assert parts[0] == "http" - assert parts[1] == "//localhost" + # either f"//{the bind host}" is used, or in the case of "//0.0.0.0" the localstack hostname instead + assert parts[1][2:] in [EDGE_BIND_HOST, LOCALSTACK_HOSTNAME] assert int(parts[2]) in range( config.EXTERNAL_SERVICE_PORTS_START, config.EXTERNAL_SERVICE_PORTS_END )
When running LocalStack (and therefore OpenSearch/Elasticsearch) in Docker, the search clusters were still bound to localhost inside the container when using the port strategy, which made them unreachable. This PR fixes that by addressing #6419.
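A tiny sketch of the endpoint rewrite involved (the hostname value is an assumption; LocalStack derives the real one from its configuration): a `0.0.0.0` bind address is fine for listening inside the container but useless in an advertised endpoint, so it gets swapped for a reachable hostname.

```python
# Illustration only; constant value is assumed, not LocalStack's actual default.
LOCALSTACK_HOSTNAME = "localhost"


def advertised_endpoint(cluster_url: str) -> str:
    # "0.0.0.0" means "listen on all interfaces" and is not routable for clients,
    # so the Endpoint reported to callers replaces it with a reachable hostname.
    return cluster_url.split("://")[-1].replace("0.0.0.0", LOCALSTACK_HOSTNAME)


print(advertised_endpoint("http://0.0.0.0:4571"))  # -> localhost:4571
```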
https://api.github.com/repos/localstack/localstack/pulls/6638
2022-08-10T13:11:46Z
2022-08-24T11:42:54Z
2022-08-24T11:42:54Z
2022-08-24T11:42:57Z
1,739
localstack/localstack
29,278
Changed sticky bit string from 'sS' to 'tT'
diff --git a/share/adapters/chmod.sh b/share/adapters/chmod.sh index 557d08f2..8d5e9643 100755 --- a/share/adapters/chmod.sh +++ b/share/adapters/chmod.sh @@ -32,7 +32,7 @@ chmod_calc(){ [ ${num:1:1} -eq 1 ] && p_s+='w' || p_s+='-' if [[ $sticky == 'X' ]] then - [ ${num:2:1} -eq 1 ] && p_s+='s' || p_s+='S' + [ ${num:2:1} -eq 1 ] && p_s+='t' || p_s+='T' else [ ${num:2:1} -eq 1 ] && p_s+='x' || p_s+='-' fi @@ -42,7 +42,7 @@ chmod_calc(){ fi done # If permission string is given calc number - elif [[ ${#1} -le 9 && $(( ${#1} % 3 )) -eq 0 && $1 =~ ^[r,w,x,s,S,-]+$ ]] + elif [[ ${#1} -le 9 && $(( ${#1} % 3 )) -eq 0 && $1 =~ ^[r,t,T,w,x,-]+$ ]] then p_s=$1 [[ ${p_s,,} =~ 's' ]] && p_n+="1" || p_n+="0" @@ -56,9 +56,9 @@ chmod_calc(){ [[ ${1:$i:1} == 'w' ]] && W+=('X') || W+=(' ') [[ ${1:$((i++)):1} == 'w' ]] && let num++ num=$(( num << 1 )) - [[ 'xs' =~ ${1:$i:1} ]] && X+=('X') || X+=(' ') - [[ 'Ss' =~ ${1:$i:1} ]] && S+=('X') || S+=(' ') - [[ 'xs' =~ ${1:$((i++)):1} ]] && let num++ + [[ 'xt' =~ ${1:$i:1} ]] && X+=('X') || X+=(' ') + [[ 'Tt' =~ ${1:$i:1} ]] && S+=('X') || S+=(' ') + [[ 'xt' =~ ${1:$((i++)):1} ]] && let num++ p_n+="$num" done else
https://api.github.com/repos/chubin/cheat.sh/pulls/205
2020-05-31T13:20:17Z
2020-06-01T10:45:21Z
2020-06-01T10:45:20Z
2020-06-01T10:45:21Z
564
chubin/cheat.sh
15,185
✏ Fix typos in docs and source examples
diff --git a/docs/en/docs/async.md b/docs/en/docs/async.md index 7c3dcfdea03a4..07afd7bdc6930 100644 --- a/docs/en/docs/async.md +++ b/docs/en/docs/async.md @@ -210,7 +210,7 @@ Most of the existing popular Python frameworks (including Flask and Django) were Even though the main specification for asynchronous web Python (ASGI) was developed at Django, to add support for WebSockets. -That kind of asynchronicity is what made NodeJS popular (even though NodeJS is not parallel) and that's the strength of Go as a programing language. +That kind of asynchronicity is what made NodeJS popular (even though NodeJS is not parallel) and that's the strength of Go as a programming language. And that's the same level of performance you get with **FastAPI**. diff --git a/tests/test_multipart_installation.py b/tests/test_multipart_installation.py index c134332d324fc..c8a6fd942fa1a 100644 --- a/tests/test_multipart_installation.py +++ b/tests/test_multipart_installation.py @@ -42,7 +42,7 @@ def test_incorrect_multipart_installed_multi_form(monkeypatch): app = FastAPI() @app.post("/") - async def root(username: str = Form(...), pasword: str = Form(...)): + async def root(username: str = Form(...), password: str = Form(...)): return username # pragma: nocover
Fixes some typos in `async.md`, `unzip-docs.sh` and `test_multipart_installation.py` that I encountered while checking the source code.
https://api.github.com/repos/tiangolo/fastapi/pulls/2102
2020-09-26T22:36:34Z
2020-11-05T22:14:18Z
2020-11-05T22:14:18Z
2021-03-05T21:11:18Z
345
tiangolo/fastapi
23,423
fetchBalance edits
diff --git a/js/bitbank.js b/js/bitbank.js index 914b3367d9d9..c3626b01ff87 100644 --- a/js/bitbank.js +++ b/js/bitbank.js @@ -314,14 +314,13 @@ module.exports = class bitbank extends Exchange { const balance = assets[i]; const currencyId = this.safeString (balance, 'asset'); const code = this.safeCurrencyCode (currencyId); - const account = { - 'free': this.safeNumber (balance, 'free_amount'), - 'used': this.safeNumber (balance, 'locked_amount'), - 'total': this.safeNumber (balance, 'onhand_amount'), - }; + const account = this.account (); + account['free'] = this.safeString (balance, 'free_amount'); + account['used'] = this.safeString (balance, 'locked_amount'); + account['total'] = this.safeString (balance, 'onhand_amount'); result[code] = account; } - return this.parseBalance (result); + return this.parseBalance (result, false); } parseOrderStatus (status) { diff --git a/js/bitso.js b/js/bitso.js index a37c07fc338e..defcece54973 100644 --- a/js/bitso.js +++ b/js/bitso.js @@ -232,14 +232,13 @@ module.exports = class bitso extends Exchange { const balance = balances[i]; const currencyId = this.safeString (balance, 'currency'); const code = this.safeCurrencyCode (currencyId); - const account = { - 'free': this.safeNumber (balance, 'available'), - 'used': this.safeNumber (balance, 'locked'), - 'total': this.safeNumber (balance, 'total'), - }; + const account = this.account (); + account['free'] = this.safeString (balance, 'available'); + account['used'] = this.safeString (balance, 'locked'); + account['total'] = this.safeString (balance, 'total'); result[code] = account; } - return this.parseBalance (result); + return this.parseBalance (result, false); } async fetchOrderBook (symbol, limit = undefined, params = {}) { diff --git a/js/bitvavo.js b/js/bitvavo.js index b4b92b3939d6..ef66620ca81b 100644 --- a/js/bitvavo.js +++ b/js/bitvavo.js @@ -742,13 +742,12 @@ module.exports = class bitvavo extends Exchange { const balance = response[i]; const currencyId = this.safeString (balance, 'symbol'); const code = this.safeCurrencyCode (currencyId); - const account = { - 'free': this.safeNumber (balance, 'available'), - 'used': this.safeNumber (balance, 'inOrder'), - }; + const account = this.account (); + account['free'] = this.safeString (balance, 'available'); + account['used'] = this.safeString (balance, 'inOrder'); result[code] = account; } - return this.parseBalance (result); + return this.parseBalance (result, false); } async fetchDepositAddress (code, params = {}) { diff --git a/js/btctradeua.js b/js/btctradeua.js index bd3b77b7105c..32527f881954 100644 --- a/js/btctradeua.js +++ b/js/btctradeua.js @@ -107,10 +107,10 @@ module.exports = class btctradeua extends Exchange { const currencyId = this.safeString (balance, 'currency'); const code = this.safeCurrencyCode (currencyId); const account = this.account (); - account['total'] = this.safeNumber (balance, 'balance'); + account['total'] = this.safeString (balance, 'balance'); result[code] = account; } - return this.parseBalance (result); + return this.parseBalance (result, false); } async fetchOrderBook (symbol, limit = undefined, params = {}) { diff --git a/js/buda.js b/js/buda.js index f70f49c8dd52..146b07ebeac9 100644 --- a/js/buda.js +++ b/js/buda.js @@ -453,11 +453,11 @@ module.exports = class buda extends Exchange { const currencyId = this.safeString (balance, 'id'); const code = this.safeCurrencyCode (currencyId); const account = this.account (); - account['free'] = parseFloat (balance['available_amount'][0]); - account['total'] = 
parseFloat (balance['amount'][0]); + account['free'] = this.safeString (balance['available_amount'], 0); + account['total'] = this.safeString (balance['amount'], 0); result[code] = account; } - return this.parseBalance (result); + return this.parseBalance (result, false); } async fetchOrder (id, symbol = undefined, params = {}) { diff --git a/js/coinbasepro.js b/js/coinbasepro.js index 0245545d76ba..6d8c48454e57 100644 --- a/js/coinbasepro.js +++ b/js/coinbasepro.js @@ -392,14 +392,13 @@ module.exports = class coinbasepro extends Exchange { const balance = response[i]; const currencyId = this.safeString (balance, 'currency'); const code = this.safeCurrencyCode (currencyId); - const account = { - 'free': this.safeNumber (balance, 'available'), - 'used': this.safeNumber (balance, 'hold'), - 'total': this.safeNumber (balance, 'balance'), - }; + const account = this.account (); + account['free'] = this.safeString (balance, 'available'); + account['used'] = this.safeString (balance, 'hold'); + account['total'] = this.safeString (balance, 'balance'); result[code] = account; } - return this.parseBalance (result); + return this.parseBalance (result, false); } async fetchOrderBook (symbol, limit = undefined, params = {}) { diff --git a/js/coinegg.js b/js/coinegg.js index 0935600f8f1a..0b2e2ef895b4 100644 --- a/js/coinegg.js +++ b/js/coinegg.js @@ -326,9 +326,9 @@ module.exports = class coinegg extends Exchange { result[code] = this.account (); } const type = (accountType === 'lock') ? 'used' : 'free'; - result[code][type] = this.safeNumber (balances, key); + result[code][type] = this.safeString (balances, key); } - return this.parseBalance (result); + return this.parseBalance (result, false); } parseOrder (order, market = undefined) { diff --git a/js/coinfalcon.js b/js/coinfalcon.js index 14f04812322c..8d307729380f 100644 --- a/js/coinfalcon.js +++ b/js/coinfalcon.js @@ -256,14 +256,13 @@ module.exports = class coinfalcon extends Exchange { const balance = balances[i]; const currencyId = this.safeString (balance, 'currency_code'); const code = this.safeCurrencyCode (currencyId); - const account = { - 'free': this.safeNumber (balance, 'available_balance'), - 'used': this.safeNumber (balance, 'hold_balance'), - 'total': this.safeNumber (balance, 'balance'), - }; + const account = this.account (); + account['free'] = this.safeString (balance, 'available_balance'); + account['used'] = this.safeString (balance, 'hold_balance'); + account['total'] = this.safeString (balance, 'balance'); result[code] = account; } - return this.parseBalance (result); + return this.parseBalance (result, false); } parseOrderStatus (status) { diff --git a/js/exx.js b/js/exx.js index d44ce64d3213..72b59ef1b76b 100644 --- a/js/exx.js +++ b/js/exx.js @@ -267,14 +267,13 @@ module.exports = class exx extends Exchange { const currencyId = currencies[i]; const balance = balances[currencyId]; const code = this.safeCurrencyCode (currencyId); - const account = { - 'free': this.safeNumber (balance, 'balance'), - 'used': this.safeNumber (balance, 'freeze'), - 'total': this.safeNumber (balance, 'total'), - }; + const account = this.account (); + account['free'] = this.safeString (balance, 'balance'); + account['used'] = this.safeString (balance, 'freeze'); + account['total'] = this.safeString (balance, 'total'); result[code] = account; } - return this.parseBalance (result); + return this.parseBalance (result, false); } parseOrder (order, market = undefined) { diff --git a/js/idex.js b/js/idex.js index d9dde7ed0daa..c3c14d981876 100644 --- 
a/js/idex.js +++ b/js/idex.js @@ -589,16 +589,13 @@ module.exports = class idex extends Exchange { const entry = response[i]; const currencyId = this.safeString (entry, 'asset'); const code = this.safeCurrencyCode (currencyId); - const total = this.safeNumber (entry, 'quantity'); - const free = this.safeNumber (entry, 'availableForTrade'); - const used = this.safeNumber (entry, 'locked'); - result[code] = { - 'free': free, - 'used': used, - 'total': total, - }; + const account = this.account (); + account['total'] = this.safeString (entry, 'quantity'); + account['free'] = this.safeString (entry, 'availableForTrade'); + account['used'] = this.safeString (entry, 'locked'); + result[code] = account; } - return this.parseBalance (result); + return this.parseBalance (result, false); } async fetchMyTrades (symbol = undefined, since = undefined, limit = undefined, params = {}) {
https://api.github.com/repos/ccxt/ccxt/pulls/8923
2021-04-10T08:04:21Z
2021-04-10T09:06:21Z
2021-04-10T09:06:21Z
2021-04-17T17:43:38Z
2,383
ccxt/ccxt
13,855
fix download issue
diff --git a/youtube_dl/extractor/dtube.py b/youtube_dl/extractor/dtube.py index 5887887e15e..20190c9cc42 100644 --- a/youtube_dl/extractor/dtube.py +++ b/youtube_dl/extractor/dtube.py @@ -48,7 +48,7 @@ def _real_extract(self, url): def canonical_url(h): if not h: return None - return 'https://ipfs.io/ipfs/' + h + return 'https://video.dtube.top/ipfs/' + h formats = [] for q in ('240', '480', '720', '1080', ''):
thx to @GianlucaFicarelli ### Before submitting a *pull request* make sure you have: - [x] At least skimmed through [adding new extractor tutorial](https://github.com/rg3/youtube-dl#adding-support-for-a-new-site) and [youtube-dl coding conventions](https://github.com/rg3/youtube-dl#youtube-dl-coding-conventions) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests - [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) ### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options: - [ ] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/) - [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence) - [x] this is a fix motivated by @GianlucaFicarelli ### What is the purpose of your *pull request*? - [x] Bug fix - [ ] Improvement - [ ] New extractor - [ ] New feature --- closes #18741 ### Description of your *pull request* and other information Explanation of your *pull request* in arbitrary form goes here. Please make sure the description explains the purpose and effect of your *pull request* and is worded well enough to be understood. Provide as much context and examples as possible.
https://api.github.com/repos/ytdl-org/youtube-dl/pulls/18776
2019-01-07T20:08:56Z
2019-01-08T01:44:43Z
2019-01-08T01:44:43Z
2019-01-08T16:29:15Z
155
ytdl-org/youtube-dl
50,160
[3.6] Correct a typo in the Unittest documentation (GH-10397)
diff --git a/Doc/library/unittest.rst b/Doc/library/unittest.rst index a43e9453239ca8..3a8af0c52a5998 100644 --- a/Doc/library/unittest.rst +++ b/Doc/library/unittest.rst @@ -588,7 +588,7 @@ Distinguishing test iterations using subtests .. versionadded:: 3.4 -When some of your tests differ only by a some very small differences, for +When there are very small differences among your tests, for instance some parameters, unittest allows you to distinguish them inside the body of a test method using the :meth:`~TestCase.subTest` context manager.
Co-Authored-By: maggyero <[email protected]> (cherry picked from commit 009b2f02049eda3b29d4f4f743e51df106686375) Co-authored-by: Géry Ogam <[email protected]>
https://api.github.com/repos/python/cpython/pulls/10440
2018-11-09T19:35:17Z
2018-11-09T19:50:53Z
2018-11-09T19:50:53Z
2018-11-09T19:50:56Z
154
python/cpython
4,761
Remove tox
diff --git a/.gitignore b/.gitignore index 11d04d10a4..71e2361629 100644 --- a/.gitignore +++ b/.gitignore @@ -52,7 +52,6 @@ pip-delete-this-directory.txt # Unit test / coverage reports htmlcov/ -.tox/ .nox/ .coverage .coverage.* diff --git a/CHANGELOG.rst b/CHANGELOG.rst index a455588eea..1e23a1b9f2 100644 --- a/CHANGELOG.rst +++ b/CHANGELOG.rst @@ -10,6 +10,7 @@ This project adheres to `Semantic Versioning <https://semver.org/>`_. ------------------------- * Added support for combining cookies specified on the CLI and in a session file (`#932`_). * Added out of the box SOCKS support with no extra installation (`#904`_). +* Removed Tox testing entirely (`#943`_). `2.2.0`_ (2020-06-18) @@ -461,3 +462,4 @@ This project adheres to `Semantic Versioning <https://semver.org/>`_. .. _#925: https://github.com/jakubroztocil/httpie/issues/925 .. _#932: https://github.com/jakubroztocil/httpie/issues/932 .. _#934: https://github.com/jakubroztocil/httpie/issues/934 +.. _#943: https://github.com/jakubroztocil/httpie/issues/943 diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst index 8ecff67ffd..b51039a38c 100644 --- a/CONTRIBUTING.rst +++ b/CONTRIBUTING.rst @@ -123,8 +123,7 @@ so please make sure all checks pass. Running tests locally ********************* -HTTPie uses the `pytest`_ runner. It also uses `Tox`_ which allows you to run -tests on multiple Python versions even when testing locally. +HTTPie uses the `pytest`_ runner. .. code-block:: bash @@ -135,9 +134,6 @@ tests on multiple Python versions even when testing locally. # Run tests with coverage make test-cover - # Run all tests in all of the supported and available Pythons via Tox - make test-tox - # Test PEP8 compliance make pycodestyle @@ -158,12 +154,6 @@ can run specific tests from the terminal: py.test tests/test_uploads.py::TestMultipartFormDataFileUpload py.test tests/test_uploads.py::TestMultipartFormDataFileUpload::test_upload_ok - # Run specific tests on the on all Pythons via Tox - # (change to `tox -e py37' to limit Python version) - tox -- tests/test_uploads.py --verbose - tox -- tests/test_uploads.py::TestMultipartFormDataFileUpload --verbose - tox -- tests/test_uploads.py::TestMultipartFormDataFileUpload::test_upload_ok --verbose - ----- See `Makefile`_ for additional development utilities. @@ -172,8 +162,6 @@ See `Makefile`_ for additional development utilities. Finally, don't forget to add yourself to `AUTHORS`_! -.. _Tox: http://tox.testrun.org -.. _supported Python environments: https://github.com/jakubroztocil/httpie/blob/master/tox.ini .. _existing issues: https://github.com/jakubroztocil/httpie/issues?state=open .. _AUTHORS: https://github.com/jakubroztocil/httpie/blob/master/AUTHORS.rst .. _Makefile: https://github.com/jakubroztocil/httpie/blob/master/Makefile diff --git a/Makefile b/Makefile index 50637c9eda..e998cbfaa7 100644 --- a/Makefile +++ b/Makefile @@ -38,7 +38,7 @@ clean: rm -rf $(VENV_ROOT) # Remove symlink for virtualenvwrapper, if we’ve created one. [ -n "$(WORKON_HOME)" -a -L "$(WORKON_HOME)/httpie" -a -f "$(WORKON_HOME)/httpie" ] && rm $(WORKON_HOME)/httpie || true - rm -rf .tox *.egg dist build .coverage .cache .pytest_cache httpie.egg-info + rm -rf *.egg dist build .coverage .cache .pytest_cache httpie.egg-info find . 
-name '__pycache__' -delete -o -name '*.pyc' -delete @echo @@ -86,7 +86,7 @@ test-cover: test # test-all is meant to test everything — even this Makefile -test-all: clean install test test-tox test-dist pycodestyle +test-all: clean install test test-dist pycodestyle @echo @@ -94,12 +94,6 @@ test-dist: test-sdist test-bdist-wheel @echo -test-tox: uninstall-httpie install - @echo $(H1)Running tests on all Pythons via Tox$(H1END) - $(VENV_BIN)/tox - @echo - - test-sdist: clean venv @echo $(H1)Testing sdist build an installation$(H1END) $(VENV_PYTHON) setup.py sdist diff --git a/requirements-dev.txt b/requirements-dev.txt index e0ccd39ce1..8e755e73c8 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -1,4 +1,3 @@ -tox mock pytest pytest-cov diff --git a/setup.cfg b/setup.cfg index 7ae337612a..e672f486f9 100644 --- a/setup.cfg +++ b/setup.cfg @@ -10,7 +10,7 @@ addopts = --tb=native [pycodestyle] # <http://pycodestyle.pycqa.org/en/latest/intro.html#configuration> -exclude = .git,.idea,__pycache__,build,dist,.tox,.pytest_cache,*.egg-info +exclude = .git,.idea,__pycache__,build,dist,.pytest_cache,*.egg-info # <http://pycodestyle.pycqa.org/en/latest/intro.html#error-codes> # E241 - multiple spaces after ‘,’ diff --git a/tox.ini b/tox.ini deleted file mode 100644 index d44bce8fd0..0000000000 --- a/tox.ini +++ /dev/null @@ -1,23 +0,0 @@ -# Tox (http://tox.testrun.org/) is a tool for running tests -# in multiple virtualenvs. See ./CONTRIBUTING.rst - - -[tox] -# pypy3 currently fails because of a Flask issue -envlist = py37 - - -[testenv] -deps = - mock - pytest - pytest-httpbin>=0.0.6 - - -commands = - # NOTE: the order of the directories in posargs seems to matter. - # When changed, then many ImportMismatchError exceptions occurrs. - py.test \ - --verbose \ - --doctest-modules \ - {posargs:./httpie ./tests}
Fixes #943
https://api.github.com/repos/httpie/cli/pulls/944
2020-06-26T13:41:13Z
2020-06-26T15:22:07Z
2020-06-26T15:22:07Z
2020-06-26T15:22:16Z
1,691
httpie/cli
34,019
Scrapinghub → Zyte
diff --git a/AUTHORS b/AUTHORS index bcaa1ecd342..9706adf421e 100644 --- a/AUTHORS +++ b/AUTHORS @@ -1,8 +1,8 @@ Scrapy was brought to life by Shane Evans while hacking a scraping framework prototype for Mydeco (mydeco.com). It soon became maintained, extended and improved by Insophia (insophia.com), with the initial sponsorship of Mydeco to -bootstrap the project. In mid-2011, Scrapinghub became the new official -maintainer. +bootstrap the project. In mid-2011, Scrapinghub (now Zyte) became the new +official maintainer. Here is the list of the primary authors & contributors: diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md index d1cd3e517bc..65246038330 100644 --- a/CODE_OF_CONDUCT.md +++ b/CODE_OF_CONDUCT.md @@ -55,7 +55,7 @@ further defined and clarified by project maintainers. ## Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported by contacting the project team at [email protected]. All +reported by contacting the project team at [email protected]. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. diff --git a/README.rst b/README.rst index bbe34652299..5750e2c0fe0 100644 --- a/README.rst +++ b/README.rst @@ -42,10 +42,11 @@ Scrapy is a fast high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing. -Scrapy is maintained by `Scrapinghub`_ and `many other contributors`_. +Scrapy is maintained by Zyte_ (formerly Scrapinghub) and `many other +contributors`_. .. _many other contributors: https://github.com/scrapy/scrapy/graphs/contributors -.. _Scrapinghub: https://www.scrapinghub.com/ +.. _Zyte: https://www.zyte.com/ Check the Scrapy homepage at https://scrapy.org for more information, including a list of features. @@ -95,7 +96,7 @@ Please note that this project is released with a Contributor Code of Conduct (see https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md). By participating in this project you agree to abide by its terms. -Please report unacceptable behavior to [email protected]. +Please report unacceptable behavior to [email protected]. Companies using Scrapy ====================== diff --git a/docs/intro/install.rst b/docs/intro/install.rst index 73d7ede4293..bf919ce254b 100644 --- a/docs/intro/install.rst +++ b/docs/intro/install.rst @@ -266,7 +266,6 @@ For details, see `Issue #2473 <https://github.com/scrapy/scrapy/issues/2473>`_. .. _setuptools: https://pypi.python.org/pypi/setuptools .. _homebrew: https://brew.sh/ .. _zsh: https://www.zsh.org/ -.. _Scrapinghub: https://scrapinghub.com .. _Anaconda: https://docs.anaconda.com/anaconda/ .. _Miniconda: https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html .. _conda-forge: https://conda-forge.org/ diff --git a/docs/topics/deploy.rst b/docs/topics/deploy.rst index 361914a2973..961d6dc015d 100644 --- a/docs/topics/deploy.rst +++ b/docs/topics/deploy.rst @@ -14,7 +14,7 @@ spiders come in. Popular choices for deploying Scrapy spiders are: * :ref:`Scrapyd <deploy-scrapyd>` (open source) -* :ref:`Scrapy Cloud <deploy-scrapy-cloud>` (cloud-based) +* :ref:`Zyte Scrapy Cloud <deploy-scrapy-cloud>` (cloud-based) .. _deploy-scrapyd: @@ -32,28 +32,28 @@ Scrapyd is maintained by some of the Scrapy developers. .. 
_deploy-scrapy-cloud: -Deploying to Scrapy Cloud -========================= +Deploying to Zyte Scrapy Cloud +============================== -`Scrapy Cloud`_ is a hosted, cloud-based service by `Scrapinghub`_, -the company behind Scrapy. +`Zyte Scrapy Cloud`_ is a hosted, cloud-based service by Zyte_, the company +behind Scrapy. -Scrapy Cloud removes the need to setup and monitor servers -and provides a nice UI to manage spiders and review scraped items, -logs and stats. +Zyte Scrapy Cloud removes the need to setup and monitor servers and provides a +nice UI to manage spiders and review scraped items, logs and stats. -To deploy spiders to Scrapy Cloud you can use the `shub`_ command line tool. -Please refer to the `Scrapy Cloud documentation`_ for more information. +To deploy spiders to Zyte Scrapy Cloud you can use the `shub`_ command line +tool. +Please refer to the `Zyte Scrapy Cloud documentation`_ for more information. -Scrapy Cloud is compatible with Scrapyd and one can switch between +Zyte Scrapy Cloud is compatible with Scrapyd and one can switch between them as needed - the configuration is read from the ``scrapy.cfg`` file just like ``scrapyd-deploy``. -.. _Scrapyd: https://github.com/scrapy/scrapyd .. _Deploying your project: https://scrapyd.readthedocs.io/en/latest/deploy.html -.. _Scrapy Cloud: https://scrapinghub.com/scrapy-cloud +.. _Scrapyd: https://github.com/scrapy/scrapyd .. _scrapyd-client: https://github.com/scrapy/scrapyd-client -.. _shub: https://doc.scrapinghub.com/shub.html .. _scrapyd-deploy documentation: https://scrapyd.readthedocs.io/en/latest/deploy.html -.. _Scrapy Cloud documentation: https://doc.scrapinghub.com/scrapy-cloud.html -.. _Scrapinghub: https://scrapinghub.com/ +.. _shub: https://shub.readthedocs.io/en/latest/ +.. _Zyte: https://zyte.com/ +.. _Zyte Scrapy Cloud: https://www.zyte.com/scrapy-cloud/ +.. _Zyte Scrapy Cloud documentation: https://docs.zyte.com/scrapy-cloud.html diff --git a/docs/topics/logging.rst b/docs/topics/logging.rst index 55065a1a378..c3445d40e9a 100644 --- a/docs/topics/logging.rst +++ b/docs/topics/logging.rst @@ -101,7 +101,7 @@ instance, which can be accessed and used like this:: class MySpider(scrapy.Spider): name = 'myspider' - start_urls = ['https://scrapinghub.com'] + start_urls = ['https://scrapy.org'] def parse(self, response): self.logger.info('Parse function called on %s', response.url) @@ -117,7 +117,7 @@ Python logger you want. For example:: class MySpider(scrapy.Spider): name = 'myspider' - start_urls = ['https://scrapinghub.com'] + start_urls = ['https://scrapy.org'] def parse(self, response): logger.info('Parse function called on %s', response.url) diff --git a/docs/topics/practices.rst b/docs/topics/practices.rst index cf1de1bd15e..502fd5fcd01 100644 --- a/docs/topics/practices.rst +++ b/docs/topics/practices.rst @@ -63,7 +63,7 @@ project as example. process = CrawlerProcess(get_project_settings()) # 'followall' is the name of one of the spiders of the project. - process.crawl('followall', domain='scrapinghub.com') + process.crawl('followall', domain='scrapy.org') process.start() # the script will block here until the crawling is finished There's another Scrapy utility that provides more control over the crawling @@ -244,7 +244,7 @@ Here are some tips to keep in mind when dealing with these kinds of sites: super proxy that you can attach your own proxies to. * use a highly distributed downloader that circumvents bans internally, so you can just focus on parsing clean pages. 
One example of such downloaders is - `Crawlera`_ + `Zyte Smart Proxy Manager`_ If you are still unable to prevent your bot getting banned, consider contacting `commercial support`_. @@ -254,5 +254,5 @@ If you are still unable to prevent your bot getting banned, consider contacting .. _ProxyMesh: https://proxymesh.com/ .. _Google cache: http://www.googleguide.com/cached_pages.html .. _testspiders: https://github.com/scrapinghub/testspiders -.. _Crawlera: https://scrapinghub.com/crawlera .. _scrapoxy: https://scrapoxy.io/ +.. _Zyte Smart Proxy Manager: https://www.zyte.com/smart-proxy-manager/ diff --git a/docs/topics/selectors.rst b/docs/topics/selectors.rst index b576fde91f1..c7ec2e0cc34 100644 --- a/docs/topics/selectors.rst +++ b/docs/topics/selectors.rst @@ -464,10 +464,10 @@ effectively. If you are not much familiar with XPath yet, you may want to take a look first at this `XPath tutorial`_. .. note:: - Some of the tips are based on `this post from ScrapingHub's blog`_. + Some of the tips are based on `this post from Zyte's blog`_. .. _`XPath tutorial`: http://www.zvon.org/comp/r/tut-XPath_1.html -.. _`this post from ScrapingHub's blog`: https://blog.scrapinghub.com/2014/07/17/xpath-tips-from-the-web-scraping-trenches/ +.. _this post from Zyte's blog: https://www.zyte.com/blog/xpath-tips-from-the-web-scraping-trenches/ .. _topics-selectors-relative-xpaths: diff --git a/scrapy/core/downloader/handlers/http11.py b/scrapy/core/downloader/handlers/http11.py index a0fd837b11c..513df2de9ef 100644 --- a/scrapy/core/downloader/handlers/http11.py +++ b/scrapy/core/downloader/handlers/http11.py @@ -303,11 +303,14 @@ def _get_agent(self, request, timeout): proxyHost = to_unicode(proxyHost) omitConnectTunnel = b'noconnect' in proxyParams if omitConnectTunnel: - warnings.warn("Using HTTPS proxies in the noconnect mode is deprecated. " - "If you use Crawlera, it doesn't require this mode anymore, " - "so you should update scrapy-crawlera to 1.3.0+ " - "and remove '?noconnect' from the Crawlera URL.", - ScrapyDeprecationWarning) + warnings.warn( + "Using HTTPS proxies in the noconnect mode is deprecated. " + "If you use Zyte Smart Proxy Manager (formerly Crawlera), " + "it doesn't require this mode anymore, so you should " + "update scrapy-crawlera to 1.3.0+ and remove '?noconnect' " + "from the Zyte Smart Proxy Manager URL.", + ScrapyDeprecationWarning, + ) if scheme == b'https' and not omitConnectTunnel: proxyAuth = request.headers.get(b'Proxy-Authorization', None) proxyConf = (proxyHost, proxyPort, proxyAuth)
https://api.github.com/repos/scrapy/scrapy/pulls/4973
2021-02-02T14:04:28Z
2021-02-02T20:10:53Z
2021-02-02T20:10:53Z
2021-02-02T20:10:53Z
2,764
scrapy/scrapy
35,011
[FIX] - SemanticSplitterNodeParser
diff --git a/llama-index-core/llama_index/core/node_parser/text/semantic_splitter.py b/llama-index-core/llama_index/core/node_parser/text/semantic_splitter.py index e5054ac8cc89e..6e4760a7b5e85 100644 --- a/llama-index-core/llama_index/core/node_parser/text/semantic_splitter.py +++ b/llama-index-core/llama_index/core/node_parser/text/semantic_splitter.py @@ -227,13 +227,11 @@ def _build_node_chunks( start_index = 0 for index in indices_above_threshold: - end_index = index - 1 - - group = sentences[start_index : end_index + 1] + group = sentences[start_index : index + 1] combined_text = "".join([d["sentence"] for d in group]) chunks.append(combined_text) - start_index = index + start_index = index + 1 if start_index < len(sentences): combined_text = "".join(
# Description - SemanticSplitterNodeParser seems to have a bug that results in nodes not having a `text` property - This PR addresses that bug: - It might be that we are slicing incorrectly and thus chunking incorrect groups Fixes #11277 ## Type of Change Please delete options that are not relevant. - [x] Bug fix (non-breaking change which fixes an issue) # How Has This Been Tested? Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration - [x] Re ran the example as given in the attached bug report / issue - [x] I stared at the code and made sure it makes sense
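For illustration only (not part of the PR above): a minimal, self-contained sketch of the corrected grouping logic that the llama_index patch applies, using made-up sentences and hypothetical breakpoint indices to show how each chunk now ends at its breakpoint sentence and the next chunk starts right after it.

```python
# Hypothetical data; only the slicing mirrors the patched _build_node_chunks logic.
sentences = [{"sentence": "A. "}, {"sentence": "B. "}, {"sentence": "C. "},
             {"sentence": "D. "}, {"sentence": "E. "}]
indices_above_threshold = [1, 3]  # assumed semantic breakpoints

chunks = []
start_index = 0
for index in indices_above_threshold:
    group = sentences[start_index : index + 1]         # include the breakpoint sentence
    chunks.append("".join(d["sentence"] for d in group))
    start_index = index + 1                             # next chunk starts right after it

if start_index < len(sentences):
    chunks.append("".join(d["sentence"] for d in sentences[start_index:]))

print(chunks)  # ['A. B. ', 'C. D. ', 'E. ']
```

With the old slicing (`group = sentences[start_index : index]` and `start_index = index`), the breakpoint sentence fell into the following chunk and a breakpoint at index 0 produced an empty group, which appears to be how nodes without usable `text` could be created.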
https://api.github.com/repos/run-llama/llama_index/pulls/11295
2024-02-22T21:07:38Z
2024-02-22T22:37:10Z
2024-02-22T22:37:10Z
2024-02-22T22:37:10Z
235
run-llama/llama_index
6,422
Mention that OpenBSD has a native letsencrypt package now.
diff --git a/docs/using.rst b/docs/using.rst index 68790119150..80d4297730e 100644 --- a/docs/using.rst +++ b/docs/using.rst @@ -59,8 +59,8 @@ or for full help, type: ``letsencrypt-auto`` is the recommended method of running the Let's Encrypt client beta releases on systems that don't have a packaged version. Debian -experimental, Arch linux and FreeBSD now have native packages, so on those -systems you can just install ``letsencrypt`` (and perhaps +experimental, Arch linux, FreeBSD and OpenBSD now have native packages, so on +those systems you can just install ``letsencrypt`` (and perhaps ``letsencrypt-apache``). If you'd like to run the latest copy from Git, or run your own locally modified copy of the client, follow the instructions in the :doc:`contributing`. Some `other methods of installation`_ are discussed @@ -346,6 +346,11 @@ Operating System Packages * Port: ``cd /usr/ports/security/py-letsencrypt && make install clean`` * Package: ``pkg install py27-letsencrypt`` +**OpenBSD** + + * Port: ``cd /usr/ports/security/letsencrypt/client && make install clean`` + * Package: ``pkg_add letsencrypt`` + **Arch Linux** .. code-block:: shell
https://api.github.com/repos/certbot/certbot/pulls/1893
2015-12-14T12:59:31Z
2015-12-18T19:31:43Z
2015-12-18T19:31:43Z
2016-05-06T19:22:17Z
317
certbot/certbot
1,197
Delete a broken threading.local example
diff --git a/Lib/_threading_local.py b/Lib/_threading_local.py index 4ec4828144b7e9..245bd0ac91b799 100644 --- a/Lib/_threading_local.py +++ b/Lib/_threading_local.py @@ -56,11 +56,7 @@ >>> class MyLocal(local): ... number = 2 - ... initialized = False ... def __init__(self, **kw): - ... if self.initialized: - ... raise SystemError('__init__ called too many times') - ... self.initialized = True ... self.__dict__.update(kw) ... def squared(self): ... return self.number ** 2 @@ -97,7 +93,7 @@ >>> thread.start() >>> thread.join() >>> log - [[('color', 'red'), ('initialized', True)], 11] + [[('color', 'red')], 11] without affecting this thread's data:
This code never did anything correct or useful. The class attribute will never be affected, and the condition will never be true. Requested by @alex.
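An illustrative, self-contained sketch (not part of the patch) of why that removed guard could never fire: for `threading.local` subclasses, `__init__` is re-run against a fresh per-thread `__dict__` on first use in each thread, so `self.initialized` always falls back to the class default at check time, and instance assignments never touch the class attribute.

```python
import threading

class MyLocal(threading.local):
    initialized = False  # class-level default, shared across threads

    def __init__(self, **kw):
        if self.initialized:           # reads the class default False in every new thread
            raise SystemError('__init__ called too many times')
        self.initialized = True        # stored only in this thread's instance dict
        self.__dict__.update(kw)

ml = MyLocal(color='red')

def worker():
    # First access here re-runs __init__ with the original kwargs against a
    # fresh per-thread __dict__, so the SystemError branch is never reached.
    print(ml.color, ml.initialized)    # red True

t = threading.Thread(target=worker)
t.start()
t.join()
print(MyLocal.initialized)             # False -- the class attribute is never modified
```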
https://api.github.com/repos/python/cpython/pulls/5870
2018-02-25T06:43:18Z
2018-02-25T15:03:41Z
2018-02-25T15:03:41Z
2018-02-25T15:06:49Z
235
python/cpython
4,574
Remove empty lines from `certbot certificates` when invoked with `--cert-name` or `-d`.
diff --git a/certbot/CHANGELOG.md b/certbot/CHANGELOG.md index 62bceb74da3..9b37e9a1a85 100644 --- a/certbot/CHANGELOG.md +++ b/certbot/CHANGELOG.md @@ -17,7 +17,8 @@ Certbot adheres to [Semantic Versioning](https://semver.org/). ### Fixed -* +* Don't output an empty line for a hidden certificate when `certbot certificates` is being used + in combination with `--cert-name` or `-d`. More details about these changes can be found on our GitHub repo. diff --git a/certbot/certbot/_internal/cert_manager.py b/certbot/certbot/_internal/cert_manager.py index 8fab5735a4d..b9b4ad2d7f0 100644 --- a/certbot/certbot/_internal/cert_manager.py +++ b/certbot/certbot/_internal/cert_manager.py @@ -266,9 +266,9 @@ def human_readable_cert_info(config, cert, skip_filter_checks=False): checker = ocsp.RevocationChecker() if config.certname and cert.lineagename != config.certname and not skip_filter_checks: - return "" + return None if config.domains and not set(config.domains).issubset(cert.names()): - return "" + return None now = pytz.UTC.fromutc(datetime.datetime.utcnow()) reasons = [] @@ -358,7 +358,9 @@ def _report_human_readable(config, parsed_certs): """Format a results report for a parsed cert""" certinfo = [] for cert in parsed_certs: - certinfo.append(human_readable_cert_info(config, cert)) + cert_info = human_readable_cert_info(config, cert) + if cert_info is not None: + certinfo.append(cert_info) return "\n".join(certinfo)
Fixes #8722 ## Pull Request Checklist - [X] If the change being made is to a [distributed component](https://certbot.eff.org/docs/contributing.html#code-components-and-layout), edit the `master` section of `certbot/CHANGELOG.md` to include a description of the change being made. - [ ] Add or update any documentation as needed to support the changes in this PR. - [X] Include your name in `AUTHORS.md` if you like. Not entirely enthusiastic about the variable name `cert_info`, but it fits.
https://api.github.com/repos/certbot/certbot/pulls/8723
2021-03-21T17:59:49Z
2021-03-21T21:42:23Z
2021-03-21T21:42:23Z
2021-06-05T14:47:18Z
438
certbot/certbot
2,086
Update README.md
diff --git a/README.md b/README.md index 807bf482..eba82ce2 100644 --- a/README.md +++ b/README.md @@ -5,14 +5,14 @@ [[Model card]](https://github.com/openai/whisper/blob/main/model-card.md) [[Colab example]](https://colab.research.google.com/github/openai/whisper/blob/master/notebooks/LibriSpeech.ipynb) -Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification. +Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification. ## Approach ![Approach](https://raw.githubusercontent.com/openai/whisper/main/approach.png) -A Transformer sequence-to-sequence model is trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. All of these tasks are jointly represented as a sequence of tokens to be predicted by the decoder, allowing for a single model to replace many different stages of a traditional speech processing pipeline. The multitask training format uses a set of special tokens that serve as task specifiers or classification targets. +A Transformer sequence-to-sequence model is trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. These tasks are jointly represented as a sequence of tokens to be predicted by the decoder, allowing a single model to replace many stages of a traditional speech-processing pipeline. The multitask training format uses a set of special tokens that serve as task specifiers or classification targets. ## Setup @@ -68,9 +68,9 @@ There are five model sizes, four with English-only versions, offering speed and | medium | 769 M | `medium.en` | `medium` | ~5 GB | ~2x | | large | 1550 M | N/A | `large` | ~10 GB | 1x | -For English-only applications, the `.en` models tend to perform better, especially for the `tiny.en` and `base.en` models. We observed that the difference becomes less significant for the `small.en` and `medium.en` models. +The `.en` models for English-only applications tend to perform better, especially for the `tiny.en` and `base.en` models. We observed that the difference becomes less significant for the `small.en` and `medium.en` models. -Whisper's performance varies widely depending on the language. The figure below shows a WER (Word Error Rate) breakdown by languages of Fleurs dataset, using the `large-v2` model. More WER and BLEU scores corresponding to the other models and datasets can be found in Appendix D in [the paper](https://arxiv.org/abs/2212.04356). The smaller is better. +Whisper's performance varies widely depending on the language. The figure below shows a WER (Word Error Rate) breakdown by languages of the Fleurs dataset using the `large-v2` model. More WER and BLEU scores corresponding to the other models and datasets can be found in Appendix D in [the paper](https://arxiv.org/abs/2212.04356). The smaller, the better. 
![WER breakdown by language](https://raw.githubusercontent.com/openai/whisper/main/language-breakdown.svg) @@ -144,4 +144,4 @@ Please use the [🙌 Show and tell](https://github.com/openai/whisper/discussion ## License -The code and the model weights of Whisper are released under the MIT License. See [LICENSE](https://github.com/openai/whisper/blob/main/LICENSE) for further details. +Whisper's code and model weights are released under the MIT License. See [LICENSE](https://github.com/openai/whisper/blob/main/LICENSE) for further details.
Fixed a few typos and made general improvements for clarity.
https://api.github.com/repos/openai/whisper/pulls/894
2023-01-25T16:36:11Z
2023-03-04T00:42:00Z
2023-03-04T00:42:00Z
2023-03-07T15:26:45Z
893
openai/whisper
45,829
release 3.2.6, fix some bugs
diff --git a/code/default/download.md b/code/default/download.md index 9765e04e21..0ca10a983b 100644 --- a/code/default/download.md +++ b/code/default/download.md @@ -1,7 +1,7 @@ ## 下载(Download): 测试版(Test): -https://codeload.github.com/XX-net/XX-Net/zip/3.2.5 +https://codeload.github.com/XX-net/XX-Net/zip/3.2.6 稳定版(Stable): https://codeload.github.com/XX-net/XX-Net/zip/3.1.19 diff --git a/code/default/version.txt b/code/default/version.txt index 5ae69bd5f0..34cde5690e 100644 --- a/code/default/version.txt +++ b/code/default/version.txt @@ -1 +1 @@ -3.2.5 +3.2.6
https://api.github.com/repos/XX-net/XX-Net/pulls/3904
2016-07-20T03:50:30Z
2016-07-20T03:50:47Z
2016-07-20T03:50:47Z
2016-07-20T03:50:48Z
219
XX-net/XX-Net
17,051
fix small typo
diff --git a/README.md b/README.md index 8ddef7cf3..243da3642 100644 --- a/README.md +++ b/README.md @@ -37,7 +37,7 @@ This creates a `train.bin` and `val.bin` in that data directory. Now it is time $ python train.py config/train_shakespeare_char.py ``` -If you peak inside it, you'll see that we're training a GPT with a context size of up to 256 characters, 384 feature channels, and it is a 6-layer Transformer with 6 heads in each layer. On one A100 GPU this training run takes about 3 minutes and the best validation loss is 1.4697. Based on the configuration, the model checkpoints are being written into the `--out_dir` directory `out-shakespeare-char`. So once the training finishes we can sample from the best model by pointing the sampling script at this directory: +If you peek inside it, you'll see that we're training a GPT with a context size of up to 256 characters, 384 feature channels, and it is a 6-layer Transformer with 6 heads in each layer. On one A100 GPU this training run takes about 3 minutes and the best validation loss is 1.4697. Based on the configuration, the model checkpoints are being written into the `--out_dir` directory `out-shakespeare-char`. So once the training finishes we can sample from the best model by pointing the sampling script at this directory: ``` $ python sample.py --out_dir=out-shakespeare-char
https://api.github.com/repos/karpathy/nanoGPT/pulls/224
2023-03-25T19:36:55Z
2023-04-13T05:12:27Z
2023-04-13T05:12:27Z
2023-04-13T05:12:27Z
341
karpathy/nanoGPT
40,961
GUI with PyWebIO
diff --git a/gui/README.md b/gui/README.md index c638c4dca7..707fd36d27 100644 --- a/gui/README.md +++ b/gui/README.md @@ -2,6 +2,8 @@ This code provides a Graphical User Interface (GUI) for gpt4free. Users can ask questions and get answers from GPT-4 API's, utilizing multiple API implementations. The project contains two different Streamlit applications: `streamlit_app.py` and `streamlit_chat_app.py`. +In addition, a new GUI script specifically implemented using PyWebIO has been added and can be found in the pywebio-gui folder. If there are errors with the Streamlit version, you can try using the PyWebIO version instead + Installation ------------ @@ -69,4 +71,4 @@ There is a bug in `streamlit_chat_app.py` right now that I haven't pinpointed ye License ------- -This project is licensed under the MIT License. \ No newline at end of file +This project is licensed under the MIT License. diff --git a/gui/pywebio-gui/README.md b/gui/pywebio-gui/README.md new file mode 100644 index 0000000000..2b99c075d5 --- /dev/null +++ b/gui/pywebio-gui/README.md @@ -0,0 +1,24 @@ +# GUI with PyWebIO +Simple, fast, and with fewer errors +Only requires +```bash +pip install gpt4free +pip install pywebio +``` +clicking on 'pywebio-usesless.py' will run it + +PS: Currently, only 'usesless' is implemented, and the GUI is expected to be updated infrequently, with a focus on stability. + +↓ Here is the introduction in zh-Hans-CN below. + +# 使用pywebio实现的极简GUI +简单,快捷,报错少 +只需要 +```bash +pip install gpt4free +pip install pywebio +``` + +双击pywebio-usesless.py即可运行 + +ps:目前仅实现usesless,这个gui更新频率应该会比较少,目的是追求稳定 diff --git a/gui/pywebio-gui/pywebio-usesless.py b/gui/pywebio-gui/pywebio-usesless.py new file mode 100644 index 0000000000..c0843be6ba --- /dev/null +++ b/gui/pywebio-gui/pywebio-usesless.py @@ -0,0 +1,59 @@ +from gpt4free import usesless +import time +from pywebio import start_server,config +from pywebio.input import * +from pywebio.output import * +from pywebio.session import local +message_id = "" +def status(): + try: + req = usesless.Completion.create(prompt="hello", parentMessageId=message_id) + print(f"Answer: {req['text']}") + put_success(f"Answer: {req['text']}",scope="body") + except: + put_error("Program Error",scope="body") + +def ask(prompt): + req = usesless.Completion.create(prompt=prompt, parentMessageId=local.message_id) + rp=req['text'] + local.message_id=req["id"] + print("AI:\n"+rp) + local.conversation.extend([ + {"role": "user", "content": prompt}, + {"role": "assistant", "content": rp} + ]) + print(local.conversation) + return rp + +def msg(): + while True: + text= input_group("You:",[textarea('You:',name='text',rows=3, placeholder='请输入问题')]) + if not(bool(text)): + break + if not(bool(text["text"])): + continue + time.sleep(0.5) + put_code("You:"+text["text"],scope="body") + print("Question:"+text["text"]) + with use_scope('foot'): + put_loading(color="info") + rp= ask(text["text"]) + clear(scope="foot") + time.sleep(0.5) + put_markdown("Bot:\n"+rp,scope="body") + time.sleep(0.7) + +@config(title="AIchat",theme="dark") +def main(): + put_scope("heads") + with use_scope('heads'): + put_html("<h1><center>AI Chat</center></h1>") + put_scope("body") + put_scope("foot") + status() + local.conversation=[] + local.message_id="" + msg() + +print("Click link to chat page") +start_server(main, port=8099,allowed_origins="*",auto_open_webbrowser=True,debug=True)
Simple, fast, and with fewer errors
https://api.github.com/repos/xtekky/gpt4free/pulls/329
2023-05-01T04:13:15Z
2023-05-01T08:17:53Z
2023-05-01T08:17:53Z
2023-05-01T08:17:53Z
1,112
xtekky/gpt4free
38,261
Added my own project ReqRes
diff --git a/README.md b/README.md index 5525432c6a..0bcc28d505 100644 --- a/README.md +++ b/README.md @@ -41,6 +41,7 @@ A collective list of JSON APIs for use in web development. | Lorem Text | Generates Lorem Ipsum text | Yes | [Go!] (https://market.mashape.com/montanaflynn/lorem-text-generator) | Hipster Ipsum | Generates Hipster Ipsum text | No | [Go!] (http://hipsterjesus.com/) | Loripsum | The "lorem ipsum" generator that doesn't suck | No | [Go!] (http://loripsum.net/) +| ReqRes | A hosted REST-API ready to respond to your AJAX requests | No | [Go!] (http://reqres.in/) ### Drinks
https://api.github.com/repos/public-apis/public-apis/pulls/162
2016-04-13T15:55:41Z
2016-04-13T15:57:43Z
2016-04-13T15:57:43Z
2016-04-13T15:57:43Z
185
public-apis/public-apis
36,141
Use string concat in jsonify
diff --git a/flask/json/__init__.py b/flask/json/__init__.py index 6559c1aa74..6a10b73701 100644 --- a/flask/json/__init__.py +++ b/flask/json/__init__.py @@ -264,7 +264,7 @@ def get_current_user(): data = args or kwargs return current_app.response_class( - (dumps(data, indent=indent, separators=separators), '\n'), + dumps(data, indent=indent, separators=separators) + '\n', mimetype=current_app.config['JSONIFY_MIMETYPE'] )
#1262 added a newline at the end of the response body from `jsonify`. It passed a tuple to the `Response`, which is more appropriate for streaming responses. Instead, use a simple `+ '\n'` and pass a string. This allows the content length header to be calculated while creating the response object (#1877). ref pallets/werkzeug#1130
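A rough sketch of the behavior described above, not taken from the PR; it assumes only that Werkzeug (the library underlying Flask's `Response`) is installed, and contrasts a plain-string body with an iterable body.

```python
import json
from werkzeug.wrappers import Response

# String body: length is known up front, so Content-Length can be set immediately.
as_string = Response(json.dumps({"hello": "world"}) + "\n", mimetype="application/json")

# Iterable body (like the old tuple): treated as a streamed response instead.
as_iterable = Response((json.dumps({"hello": "world"}), "\n"), mimetype="application/json")

print(as_string.headers.get("Content-Length"))    # set from the string at construction
print(as_iterable.headers.get("Content-Length"))  # None here: no known length for a stream
```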
https://api.github.com/repos/pallets/flask/pulls/2577
2018-01-03T21:10:09Z
2018-01-03T21:18:00Z
2018-01-03T21:18:00Z
2020-11-14T03:36:20Z
143
pallets/flask
20,517
DOC: update DataFrame.to_records
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index a66d00fff9714..7b112c5be6e8d 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -1209,20 +1209,68 @@ def from_records(cls, data, index=None, exclude=None, columns=None, def to_records(self, index=True, convert_datetime64=True): """ - Convert DataFrame to record array. Index will be put in the - 'index' field of the record array if requested + Convert DataFrame to a NumPy record array. + + Index will be put in the 'index' field of the record array if + requested. Parameters ---------- index : boolean, default True - Include index in resulting record array, stored in 'index' field + Include index in resulting record array, stored in 'index' field. convert_datetime64 : boolean, default True Whether to convert the index to datetime.datetime if it is a - DatetimeIndex + DatetimeIndex. Returns ------- - y : recarray + y : numpy.recarray + + See Also + -------- + DataFrame.from_records: convert structured or record ndarray + to DataFrame. + numpy.recarray: ndarray that allows field access using + attributes, analogous to typed columns in a + spreadsheet. + + Examples + -------- + >>> df = pd.DataFrame({'A': [1, 2], 'B': [0.5, 0.75]}, + ... index=['a', 'b']) + >>> df + A B + a 1 0.50 + b 2 0.75 + >>> df.to_records() + rec.array([('a', 1, 0.5 ), ('b', 2, 0.75)], + dtype=[('index', 'O'), ('A', '<i8'), ('B', '<f8')]) + + The index can be excluded from the record array: + + >>> df.to_records(index=False) + rec.array([(1, 0.5 ), (2, 0.75)], + dtype=[('A', '<i8'), ('B', '<f8')]) + + By default, timestamps are converted to `datetime.datetime`: + + >>> df.index = pd.date_range('2018-01-01 09:00', periods=2, freq='min') + >>> df + A B + 2018-01-01 09:00:00 1 0.50 + 2018-01-01 09:01:00 2 0.75 + >>> df.to_records() + rec.array([(datetime.datetime(2018, 1, 1, 9, 0), 1, 0.5 ), + (datetime.datetime(2018, 1, 1, 9, 1), 2, 0.75)], + dtype=[('index', 'O'), ('A', '<i8'), ('B', '<f8')]) + + The timestamp conversion can be disabled so NumPy's datetime64 + data type is used instead: + + >>> df.to_records(convert_datetime64=False) + rec.array([('2018-01-01T09:00:00.000000000', 1, 0.5 ), + ('2018-01-01T09:01:00.000000000', 2, 0.75)], + dtype=[('index', '<M8[ns]'), ('A', '<i8'), ('B', '<f8')]) """ if index: if is_datetime64_any_dtype(self.index) and convert_datetime64:
Checklist for the pandas documentation sprint (ignore this if you are doing an unrelated PR): - [X] PR title is "DOC: update the <your-function-or-method> docstring" - [X] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>` - [X] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff` - [X] The html version looks good: `python doc/make.py --single <your-function-or-method>` - [X] It has been proofread on language by another sprint participant Please include the output of the validation script below between the "```" ticks: ``` # paste output of "scripts/validate_docstrings.py <your-function-or-method>" here # between the "```" (remove this comment, but keep the "```") ################################################################################ ################### Docstring (pandas.DataFrame.to_records) ################### ################################################################################ Convert DataFrame to record array. Index will be put in the 'index' field of the record array if requested. Parameters ---------- index : boolean, default True Include index in resulting record array, stored in 'index' field. convert_datetime64 : boolean, default True Whether to convert the index to datetime.datetime if it is a DatetimeIndex. Returns ------- y : recarray See Also -------- DataFrame.from_records: convert structured or record ndarray to DataFrame. numpy.recarray: ndarray that allows field access using attributes, analogous to typed (typed) columns in a spreadsheet. Examples -------- >>> df = pd.DataFrame({'A': [1, 2], 'B': [0.5, 0.75]}, ... index=['a', 'b']) >>> df A B a 1 0.50 b 2 0.75 >>> df.to_records() rec.array([('a', 1, 0.5 ), ('b', 2, 0.75)], dtype=[('index', 'O'), ('A', '<i8'), ('B', '<f8')]) The index can be excluded from the record array: >>> df.to_records(index=False) rec.array([(1, 0.5 ), (2, 0.75)], dtype=[('A', '<i8'), ('B', '<f8')]) By default, timestamps are converted to `datetime.datetime`: >>> df.index = pd.date_range('2018-01-01 09:00', periods=2, freq='min') >>> df A B 2018-01-01 09:00:00 1 0.50 2018-01-01 09:01:00 2 0.75 >>> df.to_records() rec.array([(datetime.datetime(2018, 1, 1, 9, 0), 1, 0.5 ), (datetime.datetime(2018, 1, 1, 9, 1), 2, 0.75)], dtype=[('index', 'O'), ('A', '<i8'), ('B', '<f8')]) The timestamp conversion can be disabled so NumPy's datetime64 data type is used instead: >>> df.to_records(convert_datetime64=False) rec.array([('2018-01-01T09:00:00.000000000', 1, 0.5 ), ('2018-01-01T09:01:00.000000000', 2, 0.75)], dtype=[('index', '<M8[ns]'), ('A', '<i8'), ('B', '<f8')]) ################################################################################ ################################## Validation ################################## ################################################################################ Docstring for "pandas.DataFrame.to_records" correct. :) ``` If the validation script still gives errors, but you think there is a good reason to deviate in this case (and there are certainly such cases), please state this explicitly. Checklist for other PRs (remove this part if you are doing a PR for the pandas documentation sprint): - [ ] closes #xxxx - [ ] tests added / passed - [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/20191
2018-03-10T16:03:50Z
2018-03-11T11:58:10Z
2018-03-11T11:58:10Z
2018-03-11T15:29:29Z
859
pandas-dev/pandas
45,025
ICO content view
diff --git a/mitmproxy/contentviews/image/image_parser.py b/mitmproxy/contentviews/image/image_parser.py index 7c74669af7..fcc50cb5b4 100644 --- a/mitmproxy/contentviews/image/image_parser.py +++ b/mitmproxy/contentviews/image/image_parser.py @@ -6,6 +6,7 @@ from mitmproxy.contrib.kaitaistruct import png from mitmproxy.contrib.kaitaistruct import gif from mitmproxy.contrib.kaitaistruct import jpeg +from mitmproxy.contrib.kaitaistruct import ico Metadata = typing.List[typing.Tuple[str, str]] @@ -78,3 +79,25 @@ def parse_jpeg(data: bytes) -> Metadata: if field.data is not None: parts.append((field.tag._name_, field.data.decode('UTF-8').strip('\x00'))) return parts + + +def parse_ico(data: bytes) -> Metadata: + img = ico.Ico(KaitaiStream(io.BytesIO(data))) + parts = [ + ('Format', 'ICO'), + ('Number of images', str(img.num_images)), + ] + + for i, image in enumerate(img.images): + parts.append( + ( + 'Image {}'.format(i + 1), "Size: {} x {}\n" + "{: >18}Bits per pixel: {}\n" + "{: >18}PNG: {}".format(256 if not image.width else image.width, + 256 if not image.height else image.height, + '', image.bpp, + '', image.is_png) + ) + ) + + return parts diff --git a/mitmproxy/contentviews/image/view.py b/mitmproxy/contentviews/image/view.py index 95ee1e436c..6f75473bc0 100644 --- a/mitmproxy/contentviews/image/view.py +++ b/mitmproxy/contentviews/image/view.py @@ -5,6 +5,14 @@ from . import image_parser +def test_ico(h, f): + if h.startswith(b"\x00\x00\x01\x00"): + return "ico" + + +imghdr.tests.append(test_ico) + + class ViewImage(base.View): name = "Image" prompt = ("image", "i") @@ -27,6 +35,8 @@ def __call__(self, data, **metadata): image_metadata = image_parser.parse_gif(data) elif image_type == 'jpeg': image_metadata = image_parser.parse_jpeg(data) + elif image_type == 'ico': + image_metadata = image_parser.parse_ico(data) else: image_metadata = [ ("Image Format", image_type or "unknown") diff --git a/mitmproxy/contrib/kaitaistruct/ico.py b/mitmproxy/contrib/kaitaistruct/ico.py new file mode 100644 index 0000000000..94b1b8d96a --- /dev/null +++ b/mitmproxy/contrib/kaitaistruct/ico.py @@ -0,0 +1,90 @@ +# This is a generated file! Please edit source .ksy file and use kaitai-struct-compiler to rebuild + +from pkg_resources import parse_version +from kaitaistruct import __version__ as ks_version, KaitaiStruct, KaitaiStream, BytesIO +import struct + + +if parse_version(ks_version) < parse_version('0.7'): + raise Exception("Incompatible Kaitai Struct Python API: 0.7 or later is required, but you have %s" % (ks_version)) + +class Ico(KaitaiStruct): + """Microsoft Windows uses specific file format to store applications + icons - ICO. This is a container that contains one or more image + files (effectively, DIB parts of BMP files or full PNG files are + contained inside). + + .. 
seealso:: + Source - https://msdn.microsoft.com/en-us/library/ms997538.aspx + """ + def __init__(self, _io, _parent=None, _root=None): + self._io = _io + self._parent = _parent + self._root = _root if _root else self + self._read() + + def _read(self): + self.magic = self._io.ensure_fixed_contents(struct.pack('4b', 0, 0, 1, 0)) + self.num_images = self._io.read_u2le() + self.images = [None] * (self.num_images) + for i in range(self.num_images): + self.images[i] = self._root.IconDirEntry(self._io, self, self._root) + + + class IconDirEntry(KaitaiStruct): + def __init__(self, _io, _parent=None, _root=None): + self._io = _io + self._parent = _parent + self._root = _root if _root else self + self._read() + + def _read(self): + self.width = self._io.read_u1() + self.height = self._io.read_u1() + self.num_colors = self._io.read_u1() + self.reserved = self._io.ensure_fixed_contents(struct.pack('1b', 0)) + self.num_planes = self._io.read_u2le() + self.bpp = self._io.read_u2le() + self.len_img = self._io.read_u4le() + self.ofs_img = self._io.read_u4le() + + @property + def img(self): + """Raw image data. Use `is_png` to determine whether this is an + embedded PNG file (true) or a DIB bitmap (false) and call a + relevant parser, if needed to parse image data further. + """ + if hasattr(self, '_m_img'): + return self._m_img if hasattr(self, '_m_img') else None + + _pos = self._io.pos() + self._io.seek(self.ofs_img) + self._m_img = self._io.read_bytes(self.len_img) + self._io.seek(_pos) + return self._m_img if hasattr(self, '_m_img') else None + + @property + def png_header(self): + """Pre-reads first 8 bytes of the image to determine if it's an + embedded PNG file. + """ + if hasattr(self, '_m_png_header'): + return self._m_png_header if hasattr(self, '_m_png_header') else None + + _pos = self._io.pos() + self._io.seek(self.ofs_img) + self._m_png_header = self._io.read_bytes(8) + self._io.seek(_pos) + return self._m_png_header if hasattr(self, '_m_png_header') else None + + @property + def is_png(self): + """True if this image is in PNG format.""" + if hasattr(self, '_m_is_png'): + return self._m_is_png if hasattr(self, '_m_is_png') else None + + self._m_is_png = self.png_header == struct.pack('8b', -119, 80, 78, 71, 13, 10, 26, 10) + return self._m_is_png if hasattr(self, '_m_is_png') else None + + + diff --git a/mitmproxy/contrib/kaitaistruct/make.sh b/mitmproxy/contrib/kaitaistruct/make.sh index 9ef6888650..789829cf6a 100755 --- a/mitmproxy/contrib/kaitaistruct/make.sh +++ b/mitmproxy/contrib/kaitaistruct/make.sh @@ -6,6 +6,6 @@ wget -N https://raw.githubusercontent.com/kaitai-io/kaitai_struct_formats/master wget -N https://raw.githubusercontent.com/kaitai-io/kaitai_struct_formats/master/image/gif.ksy wget -N https://raw.githubusercontent.com/kaitai-io/kaitai_struct_formats/master/image/jpeg.ksy wget -N https://raw.githubusercontent.com/kaitai-io/kaitai_struct_formats/master/image/png.ksy -wget -N https://raw.githubusercontent.com/mitmproxy/mitmproxy/master/mitmproxy/contrib/tls_client_hello.py +wget -N https://raw.githubusercontent.com/kaitai-io/kaitai_struct_formats/master/image/ico.ksy kaitai-struct-compiler --target python --opaque-types=true *.ksy diff --git a/mitmproxy/contrib/tls_client_hello.ksy b/mitmproxy/contrib/kaitaistruct/tls_client_hello.ksy similarity index 100% rename from mitmproxy/contrib/tls_client_hello.ksy rename to mitmproxy/contrib/kaitaistruct/tls_client_hello.ksy diff --git a/test/mitmproxy/contentviews/image/test_image_parser.py 
b/test/mitmproxy/contentviews/image/test_image_parser.py index 3cb44ca6a2..fdc72165b1 100644 --- a/test/mitmproxy/contentviews/image/test_image_parser.py +++ b/test/mitmproxy/contentviews/image/test_image_parser.py @@ -167,3 +167,26 @@ def test_parse_gif(filename, metadata): def test_parse_jpeg(filename, metadata): with open(tutils.test_data.path(filename), 'rb') as f: assert metadata == image_parser.parse_jpeg(f.read()) + + [email protected]("filename, metadata", { + "mitmproxy/data/image.ico": [ + ('Format', 'ICO'), + ('Number of images', '3'), + ('Image 1', "Size: {} x {}\n" + "{: >18}Bits per pixel: {}\n" + "{: >18}PNG: {}".format(48, 48, '', 24, '', False) + ), + ('Image 2', "Size: {} x {}\n" + "{: >18}Bits per pixel: {}\n" + "{: >18}PNG: {}".format(32, 32, '', 24, '', False) + ), + ('Image 3', "Size: {} x {}\n" + "{: >18}Bits per pixel: {}\n" + "{: >18}PNG: {}".format(16, 16, '', 24, '', False) + ) + ] +}.items()) +def test_ico(filename, metadata): + with open(tutils.test_data.path(filename), 'rb') as f: + assert metadata == image_parser.parse_ico(f.read()) diff --git a/test/mitmproxy/contentviews/image/test_view.py b/test/mitmproxy/contentviews/image/test_view.py index 34f655a13b..6da5b1d0b9 100644 --- a/test/mitmproxy/contentviews/image/test_view.py +++ b/test/mitmproxy/contentviews/image/test_view.py @@ -9,8 +9,7 @@ def test_view_image(): "mitmproxy/data/image.png", "mitmproxy/data/image.gif", "mitmproxy/data/all.jpeg", - # https://bugs.python.org/issue21574 - # "mitmproxy/data/image.ico", + "mitmproxy/data/image.ico", ]: with open(tutils.test_data.path(img), "rb") as f: viewname, lines = v(f.read())
Added ICO content view, closes #2407 ![screenshot from 2017-06-24 03-51-34](https://user-images.githubusercontent.com/16747982/27512698-83ed5aee-5967-11e7-8b87-f8da53c33996.png)
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/2411
2017-06-23T21:57:36Z
2017-06-25T00:02:01Z
2017-06-25T00:02:01Z
2017-08-01T17:35:47Z
2,571
mitmproxy/mitmproxy
27,810
Fix YouTube on HTTP/2
diff --git a/code/default/gae_proxy/local/proxy.ini b/code/default/gae_proxy/local/proxy.ini index 1b3ec305e5..43db5c5551 100644 --- a/code/default/gae_proxy/local/proxy.ini +++ b/code/default/gae_proxy/local/proxy.ini @@ -50,7 +50,7 @@ talkx.l.google.com = direct .appspot.com = direct ;.gvt1.com = direct ;.android.com = direct -.youtube.com = direct +;.youtube.com = direct ;.ggpht.com = direct ;.2mdn.net = direct ;.googlesyndication.com = direct
Not sure whether this change is correct. Fixes #3320
https://api.github.com/repos/XX-net/XX-Net/pulls/3325
2016-05-10T12:58:16Z
2016-05-10T14:25:00Z
2016-05-10T14:25:00Z
2016-05-10T14:25:00Z
152
XX-net/XX-Net
17,043
Added more python scripts
diff --git a/Hand-Motion-Detection/hand_motion_recognizer.py b/Hand-Motion-Detection/hand_motion_recognizer.py new file mode 100644 index 0000000000..9e9db13ce9 --- /dev/null +++ b/Hand-Motion-Detection/hand_motion_recognizer.py @@ -0,0 +1,52 @@ +import mediapipe as mp +import cv2 +import numpy as np +import uuid +import os + +mp_drawing = mp.solutions.drawing_utils +mp_hands = mp.solutions.hands + +cap = cv2.VideoCapture(0) + +with mp_hands.Hands(min_detection_confidence=0.8, min_tracking_confidence=0.5) as hands: + while cap.isOpened(): + ret, frame = cap.read() + + # BGR 2 RGB + image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) + + # Flip on horizontal + image = cv2.flip(image, 1) + + # Set flag + image.flags.writeable = False + + # Detections + results = hands.process(image) + + # Set flag to true + image.flags.writeable = True + + # RGB 2 BGR + image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) + + # Detections + print(results) + + # Rendering results + if results.multi_hand_landmarks: + for num, hand in enumerate(results.multi_hand_landmarks): + mp_drawing.draw_landmarks(image, hand, mp_hands.HAND_CONNECTIONS, + mp_drawing.DrawingSpec(color=(121, 22, 76), thickness=2, circle_radius=4), + mp_drawing.DrawingSpec(color=(250, 44, 250), thickness=2, circle_radius=2), + ) + + + cv2.imshow('Hand Tracking', image) + + if cv2.waitKey(10) & 0xFF == ord('q'): + break + +cap.release() +cv2.destroyAllWindows() diff --git a/Hand-Motion-Detection/requirements.txt b/Hand-Motion-Detection/requirements.txt new file mode 100644 index 0000000000..a203bdfcfb --- /dev/null +++ b/Hand-Motion-Detection/requirements.txt @@ -0,0 +1,3 @@ +numpy==1.19.5 +opencv_python==4.5.2.52 +mediapipe==0.8.7.3
I have added a Python OpenCV script to recognize hand motion. All the required and additional files are also added.
https://api.github.com/repos/geekcomputers/Python/pulls/1537
2022-06-13T14:23:56Z
2022-06-13T17:27:16Z
2022-06-13T17:27:16Z
2022-06-13T17:27:16Z
595
geekcomputers/Python
31,414
Corrected decorators to remove duplicate teardown_request() and add after
diff --git a/docs/tutorial/dbcon.rst b/docs/tutorial/dbcon.rst index 8f9e459542..b19cb14c8b 100644 --- a/docs/tutorial/dbcon.rst +++ b/docs/tutorial/dbcon.rst @@ -9,7 +9,7 @@ connection in all our functions so it makes sense to initialize them before each request and shut them down afterwards. Flask allows us to do that with the :meth:`~flask.Flask.before_request`, -:meth:`~flask.Flask.teardown_request` and :meth:`~flask.Flask.teardown_request` +:meth:`~flask.Flask.after_request` and :meth:`~flask.Flask.teardown_request` decorators:: @app.before_request
Corrected decorators to remove duplicate teardown_request() and add after_request()
https://api.github.com/repos/pallets/flask/pulls/276
2011-07-11T22:28:30Z
2011-07-11T22:32:56Z
2011-07-11T22:32:56Z
2020-11-14T05:52:54Z
163
pallets/flask
20,462
Update fig to docker-compose.
diff --git a/README.md b/README.md index d6631458c..be530c0ed 100644 --- a/README.md +++ b/README.md @@ -987,7 +987,7 @@ A curated list of awesome Python frameworks, libraries and software. Inspired by * [provy](https://github.com/python-provy/provy) - An easy-to-use provisioning system in Python. * [honcho](https://github.com/nickstenning/honcho) - A Python port of [Foreman](https://github.com/ddollar/foreman), a tool for managing Procfile-based applications. * [gunnery](https://github.com/gunnery/gunnery) - Multipurpose task execution tool for distributed systems with web-based interface. -* [fig](http://www.fig.sh/) - Fast, isolated development environments using [Docker](https://www.docker.com/). +* [Docker-Compose](https://docs.docker.com/compose/) - Fast, isolated development environments using [Docker](https://www.docker.com/). * [hgapi](http://bitbucket.org/haard/hgapi) - Pure-Python API for Mercurial. * [gitapi](http://bitbucket.org/haard/gitapi) - Pure-Python API for git. * [supervisor](https://github.com/Supervisor/supervisor) - Supervisor process control system for UNIX.
https://api.github.com/repos/vinta/awesome-python/pulls/354
2015-04-15T11:24:54Z
2015-04-15T11:43:55Z
2015-04-15T11:43:55Z
2015-04-15T11:43:55Z
304
vinta/awesome-python
27,037
Update README.md
diff --git a/Java RMI/README.md b/Java RMI/README.md index c5e8fc92bc..bcda5536a0 100644 --- a/Java RMI/README.md +++ b/Java RMI/README.md @@ -64,6 +64,14 @@ $ rmg enum 172.17.0.2 9010 [...] ``` +Using Metasploit +```bash +use auxiliary/scanner/misc/java_rmi_server +set RHOSTS <IPs> +set RPORT <PORT> +run +``` + ## Exploitation ### RCE using sjet or mjet @@ -97,6 +105,15 @@ jython mjet.py TARGET_IP TARGET_PORT command super_secret "whoami" jython mjet.py TARGET_IP TARGET_PORT command super_secret shell ``` +### RCE using Metasploit +```bash +use exploit/multi/misc/java_rmi_server +set RHOSTS <IPs> +set RPORT <PORT> +# configure also the payload if needed +run +``` + ## References * [ATTACKING RMI BASED JMX SERVICES - HANS-MARTIN MÜNCH, 28 April 2019](https://mogwailabs.de/en/blog/2019/04/attacking-rmi-based-jmx-services/)
https://api.github.com/repos/swisskyrepo/PayloadsAllTheThings/pulls/573
2022-10-12T18:35:39Z
2022-10-12T19:43:16Z
2022-10-12T19:43:16Z
2022-10-12T19:43:16Z
303
swisskyrepo/PayloadsAllTheThings
8,696
Improve Perceiver docs
diff --git a/src/transformers/models/perceiver/modeling_perceiver.py b/src/transformers/models/perceiver/modeling_perceiver.py index c365d85217410..ede1063130acc 100755 --- a/src/transformers/models/perceiver/modeling_perceiver.py +++ b/src/transformers/models/perceiver/modeling_perceiver.py @@ -810,15 +810,16 @@ def forward( >>> # EXAMPLE 2: using the Perceiver to classify images >>> # - we define an ImagePreprocessor, which can be used to embed images >>> preprocessor=PerceiverImagePreprocessor( - config, - prep_type="conv1x1", - spatial_downsample=1, - out_channels=256, - position_encoding_type="trainable", - concat_or_add_pos="concat", - project_pos_dim=256, - trainable_position_encoding_kwargs=dict(num_channels=256, index_dims=config.image_size ** 2), - ) + ... config, + ... prep_type="conv1x1", + ... spatial_downsample=1, + ... out_channels=256, + ... position_encoding_type="trainable", + ... concat_or_add_pos="concat", + ... project_pos_dim=256, + ... trainable_position_encoding_kwargs=dict(num_channels=256, index_dims=config.image_size ** 2, + ... ), + ... ) >>> model = PerceiverModel( ... config, @@ -1188,10 +1189,11 @@ def forward( This model uses learned position embeddings. In other words, this model is not given any privileged information about the structure of images. As shown in the paper, this model can achieve a top-1 accuracy of 72.7 on ImageNet. -`PerceiverForImageClassificationLearned` uses -`transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor` (with `prep_type` = "conv1x1") to -preprocess the input images, and `transformers.models.perceiver.modeling_perceiver.PerceiverClassificationDecoder` to -decode the latent representation of `~transformers.PerceiverModel` into classification logits. +:class:`~transformers.PerceiverForImageClassificationLearned` uses +:class:`~transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor` (with :obj:`prep_type="conv1x1"`) +to preprocess the input images, and +:class:`~transformers.models.perceiver.modeling_perceiver.PerceiverClassificationDecoder` to decode the latent +representation of :class:`~transformers.PerceiverModel` into classification logits. """, PERCEIVER_START_DOCSTRING, ) @@ -1326,10 +1328,11 @@ def forward( This model uses fixed 2D Fourier position embeddings. As shown in the paper, this model can achieve a top-1 accuracy of 79.0 on ImageNet, and 84.5 when pre-trained on a large-scale dataset (i.e. JFT). -`PerceiverForImageClassificationLearned` uses -`transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor` (with `prep_type` = "pixels") to -preprocess the input images, and `transformers.models.perceiver.modeling_perceiver.PerceiverClassificationDecoder` to -decode the latent representation of `~transformers.PerceiverModel` into classification logits. +:class:`~transformers.PerceiverForImageClassificationLearned` uses +:class:`~transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor` (with :obj:`prep_type="pixels"`) +to preprocess the input images, and +:class:`~transformers.models.perceiver.modeling_perceiver.PerceiverClassificationDecoder` to decode the latent +representation of :class:`~transformers.PerceiverModel` into classification logits. """, PERCEIVER_START_DOCSTRING, ) @@ -1461,10 +1464,11 @@ def forward( This model uses a 2D conv+maxpool preprocessing network. As shown in the paper, this model can achieve a top-1 accuracy of 82.1 on ImageNet. 
-`PerceiverForImageClassificationLearned` uses -`transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor` (with `prep_type` = "conv") to preprocess -the input images, and `transformers.models.perceiver.modeling_perceiver.PerceiverClassificationDecoder` to decode the -latent representation of `~transformers.PerceiverModel` into classification logits. +:class:`~transformers.PerceiverForImageClassificationLearned` uses +:class:`~transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor` (with :obj:`prep_type="conv"`) to +preprocess the input images, and +:class:`~transformers.models.perceiver.modeling_perceiver.PerceiverClassificationDecoder` to decode the latent +representation of :class:`~transformers.PerceiverModel` into classification logits. """, PERCEIVER_START_DOCSTRING, ) @@ -1592,10 +1596,11 @@ def forward( @add_start_docstrings( """ -Example use of Perceiver for optical flow, for tasks such as Sintel and KITTI. `PerceiverForOpticalFlow` uses -`transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor` (with `prep_type` = "patches") to -preprocess the input images, and `transformers.models.perceiver.modeling_perceiver.PerceiverOpticalFlowDecoder` to -decode the latent representation of `~transformers.PerceiverModel`. +Example use of Perceiver for optical flow, for tasks such as Sintel and KITTI. +:class:`~transformers.PerceiverForOpticalFlow` uses +:class:`~transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor` (with `prep_type="patches"`) to +preprocess the input images, and :class:`~transformers.models.perceiver.modeling_perceiver.PerceiverOpticalFlowDecoder` +to decode the latent representation of :class:`~transformers.PerceiverModel`. As input, one concatenates 2 subsequent frames along the channel dimension and extract a 3 x 3 patch around each pixel (leading to 3 x 3 x 3 x 2 = 54 values for each pixel). Fixed Fourier position encodings are used to encode the position @@ -1717,25 +1722,26 @@ def forward( """ Example use of Perceiver for multimodal (video) autoencoding, for tasks such as Kinetics-700. -`PerceiverForMultimodalAutoencoding` uses -`transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor` to preprocess the 3 modalities: -images, audio and class labels. This preprocessor uses modality-specific preprocessors to preprocess every modality -separately, after which they are concatenated. Trainable position embeddings are used to pad each modality to the same -number of channels to make concatenation along the time dimension possible. Next, one applies the Perceiver encoder. +:class:`~transformers.PerceiverForMultimodalAutoencoding` uses +:class:`~transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor` to preprocess the 3 +modalities: images, audio and class labels. This preprocessor uses modality-specific preprocessors to preprocess every +modality separately, after which they are concatenated. Trainable position embeddings are used to pad each modality to +the same number of channels to make concatenation along the time dimension possible. Next, one applies the Perceiver +encoder. -`transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder` is used to decode the latent -representation of `~transformers.PerceiverModel`. 
This decoder uses each modality-specific decoder to construct +:class:`~transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder` is used to decode the latent +representation of :class:`~transformers.PerceiverModel`. This decoder uses each modality-specific decoder to construct queries. The decoder queries are created based on the inputs after preprocessing. However, autoencoding an entire video in a single forward pass is computationally infeasible, hence one only uses parts of the decoder queries to do cross-attention with the latent representation. This is determined by the subsampled indices for each modality, which -can be provided as additional input to the forward pass of `PerceiverForMultimodalAutoencoding`. +can be provided as additional input to the forward pass of :class:`~transformers.PerceiverForMultimodalAutoencoding`. -`transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder` also pads the decoder queries of the -different modalities to the same number of channels, in order to concatenate them along the time dimension. Next, -cross-attention is performed with the latent representation of `PerceiverModel`. +:class:`~transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder` also pads the decoder queries of +the different modalities to the same number of channels, in order to concatenate them along the time dimension. Next, +cross-attention is performed with the latent representation of :class:`~transformers.PerceiverModel`. -Finally, `transformers.models.perceiver.modeling_perceiver.PerceiverMultiModalPostprocessor` is used to turn this -tensor into an actual video. It first splits up the output into the different modalities, and then applies the +Finally, :class:`~transformers.models.perceiver.modeling_perceiver.PerceiverMultiModalPostprocessor` is used to turn +this tensor into an actual video. It first splits up the output into the different modalities, and then applies the respective postprocessor for each modality. Note that, by masking the classification label during evaluation (i.e. simply providing a tensor of zeros for the
# What does this PR do?

Some last-minute changes to beautify the Perceiver docs.
https://api.github.com/repos/huggingface/transformers/pulls/14786
2021-12-15T16:53:50Z
2021-12-15T17:02:06Z
2021-12-15T17:02:06Z
2021-12-15T17:02:06Z
2,128
huggingface/transformers
11,934
Improve wording and structure of paragraph.
diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md index fb9441904..4050adfe6 100644 --- a/CppCoreGuidelines.md +++ b/CppCoreGuidelines.md @@ -1953,9 +1953,10 @@ such as passing a `const int&`, returning an `array<BigPOD>` by value, and retur * Identify an array with a length specified separately * Identify a location in an array -Confusion about what meaning a `T*` is the source of many serious errors, so using separate names for pointers of these separate uses makes code clearer. -For debugging, `owner<T*>` and `not_null<T>` can be instrumented to check. +Using separate names for each of these uses improves code quality because confusion about the meaning of any particular `T*` is the source of many serious errors. For example, `not_null<T*>` makes it obvious to a reader (human or machine) that a test for `nullptr` is not necessary before dereference. +Additionally, when debugging, `owner<T*>` and `not_null<T>` can be instrumented to check for correctness. + **Example**: Consider
To be grammatically correct, the phrase "Confusion about what meaning a T\* is the source" would have needed a doubled "is", which would have been awkward, so the sentence has been reworded instead. The second and third sentences were also inverted: prior to the inversion, the sentence starting with "For example" appeared to be expanding on the debuggability of not_null rather than providing an example of how using separate names improves code quality.
https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/197
2015-09-28T16:54:29Z
2015-09-29T15:50:06Z
2015-09-29T15:50:06Z
2015-09-30T17:04:12Z
261
isocpp/CppCoreGuidelines
15,501
[gym.vector]: a generic `close` with user-defined `close_extras`
diff --git a/gym/vector/async_vector_env.py b/gym/vector/async_vector_env.py index d1d4a0809a2..3ef90422863 100644 --- a/gym/vector/async_vector_env.py +++ b/gym/vector/async_vector_env.py @@ -241,7 +241,7 @@ def step_wait(self, timeout=None): return (deepcopy(self.observations) if self.copy else self.observations, np.array(rewards), np.array(dones, dtype=np.bool_), infos) - def close(self, timeout=None, terminate=False): + def close_extras(self, timeout=None, terminate=False): """ Parameters ---------- @@ -254,12 +254,6 @@ def close(self, timeout=None, terminate=False): If `True`, then the `close` operation is forced and all processes are terminated. """ - if self.closed: - return - - if self.viewer is not None: - self.viewer.close() - timeout = 0 if terminate else timeout try: if self._state != AsyncState.DEFAULT: @@ -288,8 +282,6 @@ def close(self, timeout=None, terminate=False): for process in self.processes: process.join() - self.closed = True - def _poll(self, timeout=None): self._assert_is_running() if timeout is None: @@ -338,11 +330,6 @@ def _raise_if_errors(self, successes): logger.error('Raising the last exception back to the main process.') raise exctype(value) - def __del__(self): - if hasattr(self, 'closed'): - if not self.closed: - self.close(terminate=True) - def _worker(index, env_fn, pipe, parent_pipe, shared_memory, error_queue): assert shared_memory is None diff --git a/gym/vector/sync_vector_env.py b/gym/vector/sync_vector_env.py index 379977ae9e7..b3d4eb03484 100644 --- a/gym/vector/sync_vector_env.py +++ b/gym/vector/sync_vector_env.py @@ -83,16 +83,8 @@ def step_wait(self): return (deepcopy(self.observations) if self.copy else self.observations, np.copy(self._rewards), np.copy(self._dones), infos) - def close(self): - if self.closed: - return - if self.viewer is not None: - self.viewer.close() - - for env in self.envs: - env.close() - - self.closed = True + def close_extras(self, **kwargs): + [env.close() for env in self.envs] def _check_observation_spaces(self): for env in self.envs: diff --git a/gym/vector/vector_env.py b/gym/vector/vector_env.py index aabe0f1b0d5..973fff803a2 100644 --- a/gym/vector/vector_env.py +++ b/gym/vector/vector_env.py @@ -91,6 +91,34 @@ def step(self, actions): self.step_async(actions) return self.step_wait() + def close_extras(self, **kwargs): + r"""Clean up the extra resources e.g. beyond what's in this base class. """ + raise NotImplementedError() + + def close(self, **kwargs): + r"""Close all sub-environments and release resources. + + It also closes all the existing image viewers, then calls :meth:`close_extras` and set + :attr:`closed` as ``True``. + + .. warning:: + + This function itself does not close the environments, it should be handled + in :meth:`close_extras`. This is generic for both synchronous and asynchronous + vectorized environments. + + .. note:: + + This will be automatically called when garbage collected or program exited. + + """ + if self.closed: + return + if self.viewer is not None: + self.viewer.close() + self.close_extras(**kwargs) + self.closed = True + def seed(self, seeds=None): """ Parameters @@ -104,8 +132,7 @@ def seed(self, seeds=None): """ pass - def __del__(self): if hasattr(self, 'closed'): if not self.closed: - self.close() + self.close(terminate=True)
https://api.github.com/repos/openai/gym/pulls/1631
2019-08-02T12:40:50Z
2019-10-25T22:18:54Z
2019-10-25T22:18:54Z
2019-10-25T22:18:55Z
993
openai/gym
5,150
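As an illustrative aside on the record above (not part of the PR itself): a minimal sketch of the `close()`/`close_extras()` split that diff introduces. The class and attribute names below are simplified stand-ins, not gym's real implementation.

```python
# Sketch of the template-method pattern from the gym.vector PR above:
# the base class does the generic bookkeeping once, subclasses only
# implement close_extras() to release their own resources.
class VectorEnvBase:
    def __init__(self):
        self.closed = False
        self.viewer = None

    def close_extras(self, **kwargs):
        """Subclasses release their own resources (child envs, pipes, ...)."""
        raise NotImplementedError

    def close(self, **kwargs):
        if self.closed:
            return
        if self.viewer is not None:
            self.viewer.close()
        self.close_extras(**kwargs)
        self.closed = True

    def __del__(self):
        # Best-effort cleanup when garbage collected.
        if getattr(self, "closed", True) is False:
            self.close()


class SyncVectorEnvSketch(VectorEnvBase):
    def __init__(self, envs):
        super().__init__()
        self.envs = envs

    def close_extras(self, **kwargs):
        for env in self.envs:
            env.close()
```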
Extended answer for ways to deliver software
diff --git a/README.md b/README.md index 115c499e5..8629caf30 100644 --- a/README.md +++ b/README.md @@ -196,9 +196,10 @@ which follows the immutable infrastructure paradigm. <details> <summary>What ways are you familiar with to deliver a software? What are the advantages and disadvantages of each method?</summary><br><b> - * Archive - collect all your app files into one archive (e.g. tar) and deliver it to the user. - * Package - depends on the OS, you can use your OS package format (e.g. in RHEL/Fefodra it's RPM) to deliver your software with a way to install, uninstall and update it using the standard packager commands - * Images - Either VM or container images where your package is included with everything it needs in order to run successfully. + * Source - Maintain build script within version control system so that user can build your app after cloning repository. Advantage: User can quickly checkout different versions of application. Disadvantage: requires build tools installed on users machine. + * Archive - collect all your app files into one archive (e.g. tar) and deliver it to the user. Advantage: Only tool needed is an unarchiver. Disadvantage: Requires repeating the same procedure when updating, not good if there are a lot of dependencies. + * Package - depends on the OS, you can use your OS package format (e.g. in RHEL/Fefodra it's RPM) to deliver your software with a way to install, uninstall and update it using the standard packager commands. Advantages: Package manager takes care of support for installation, uninstallation, updating and dependency management. Disadvantage: Requires managing package repository. + * Images - Either VM or container images where your package is included with everything it needs in order to run successfully. Advantage: everything is preinstalled, it has high degree of environment isolation. Disadvantage: Requires knowledge of building and optimizing images. </b></details> <details>
Added "from source" way of delivering a software and added Advantages/Disadvantages
https://api.github.com/repos/bregman-arie/devops-exercises/pulls/121
2020-12-09T21:47:59Z
2020-12-10T06:40:35Z
2020-12-10T06:40:34Z
2020-12-10T06:40:35Z
444
bregman-arie/devops-exercises
17,463
chore(hooks): Remove unused sidebar hook
diff --git a/src/sentry/static/sentry/app/stores/hookStore.jsx b/src/sentry/static/sentry/app/stores/hookStore.jsx index 0fdc83e121fff..c85641be75666 100644 --- a/src/sentry/static/sentry/app/stores/hookStore.jsx +++ b/src/sentry/static/sentry/app/stores/hookStore.jsx @@ -49,9 +49,6 @@ const validHookNames = new Set([ 'feature-disabled:sso-basic', 'feature-disabled:sso-rippling', 'feature-disabled:sso-saml2', - - // TODO(epurkhiser): These are not used anymore and should be removed - 'organization:sidebar', ]); /**
Do as the comment says
https://api.github.com/repos/getsentry/sentry/pulls/12311
2019-03-06T19:19:37Z
2019-03-06T20:27:18Z
2019-03-06T20:27:18Z
2020-12-20T17:48:36Z
159
getsentry/sentry
44,640
templates: simplify tool in gemini-functions-agent 2
diff --git a/templates/gemini-functions-agent/gemini_functions_agent/agent.py b/templates/gemini-functions-agent/gemini_functions_agent/agent.py index 1f1c756d7c42f4..38ffc315ee9a2f 100644 --- a/templates/gemini-functions-agent/gemini_functions_agent/agent.py +++ b/templates/gemini-functions-agent/gemini_functions_agent/agent.py @@ -32,7 +32,7 @@ ] ) -llm_with_tools = llm.bind(functions=[tavily_tool]) +llm_with_tools = llm.bind(functions=tools) def _format_chat_history(chat_history: List[Tuple[str, str]]):
https://api.github.com/repos/langchain-ai/langchain/pulls/17283
2024-02-09T03:11:21Z
2024-02-09T03:39:29Z
2024-02-09T03:39:29Z
2024-02-09T03:39:30Z
157
langchain-ai/langchain
43,193
Reduce peak memory usage when changing models
diff --git a/modules/sd_models.py b/modules/sd_models.py index e697bb72b35..203e99a8a04 100644 --- a/modules/sd_models.py +++ b/modules/sd_models.py @@ -170,7 +170,9 @@ def load_model_weights(model, checkpoint_info): print(f"Global Step: {pl_sd['global_step']}") sd = get_state_dict_from_checkpoint(pl_sd) - missing, extra = model.load_state_dict(sd, strict=False) + del pl_sd + model.load_state_dict(sd, strict=False) + del sd if shared.cmd_opts.opt_channelslast: model.to(memory_format=torch.channels_last) @@ -194,9 +196,10 @@ def load_model_weights(model, checkpoint_info): model.first_stage_model.to(devices.dtype_vae) - checkpoints_loaded[checkpoint_info] = model.state_dict().copy() - while len(checkpoints_loaded) > shared.opts.sd_checkpoint_cache: - checkpoints_loaded.popitem(last=False) # LRU + if shared.opts.sd_checkpoint_cache > 0: + checkpoints_loaded[checkpoint_info] = model.state_dict().copy() + while len(checkpoints_loaded) > shared.opts.sd_checkpoint_cache: + checkpoints_loaded.popitem(last=False) # LRU else: print(f"Loading weights [{sd_model_hash}] from cache") checkpoints_loaded.move_to_end(checkpoint_info)
A few tweaks to reduce peak memory usage, the biggest being that if we aren't using the checkpoint cache, we shouldn't duplicate the model state dict just to immediately throw it away. On my machine with 16GB of RAM, this change means I can usually switch models successfully, whereas before doing so would typically OOM.
https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/3818
2022-10-27T21:03:21Z
2022-10-29T06:16:01Z
2022-10-29T06:16:01Z
2022-10-29T06:16:01Z
314
AUTOMATIC1111/stable-diffusion-webui
39,884
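Illustrative aside on the record above: a rough sketch of the peak-memory idea — drop each large dict as soon as it has been consumed, and only copy the weights when a cache is actually requested. The function name and cache shape are invented for illustration; this is not the webui's actual loader.

```python
# Sketch of the "don't keep two copies of the weights alive" pattern from the
# stable-diffusion-webui PR above. Paths and the model object are placeholders.
import torch

def load_weights(model, checkpoint_path, cache_enabled=False):
    pl_sd = torch.load(checkpoint_path, map_location="cpu")
    sd = pl_sd.get("state_dict", pl_sd)  # unwrap Lightning-style checkpoints
    del pl_sd                            # free the wrapper dict before loading
    model.load_state_dict(sd, strict=False)
    del sd                               # free the raw weights once copied in
    cache = {}
    if cache_enabled:
        # Only duplicate the state dict when a checkpoint cache is wanted.
        cache["latest"] = dict(model.state_dict())
    return model, cache
```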
Fixed #29860 -- Allowed BaseValidator to accept a callable limit_value.
diff --git a/django/core/validators.py b/django/core/validators.py index c1c9cd1c87e83..38e4b6aa1d7a0 100644 --- a/django/core/validators.py +++ b/django/core/validators.py @@ -317,8 +317,9 @@ def __init__(self, limit_value, message=None): def __call__(self, value): cleaned = self.clean(value) - params = {'limit_value': self.limit_value, 'show_value': cleaned, 'value': value} - if self.compare(cleaned, self.limit_value): + limit_value = self.limit_value() if callable(self.limit_value) else self.limit_value + params = {'limit_value': limit_value, 'show_value': cleaned, 'value': value} + if self.compare(cleaned, limit_value): raise ValidationError(self.message, code=self.code, params=params) def __eq__(self, other): diff --git a/docs/ref/validators.txt b/docs/ref/validators.txt index 6294d519f8130..b6a233014d54e 100644 --- a/docs/ref/validators.txt +++ b/docs/ref/validators.txt @@ -236,7 +236,12 @@ to, or in lieu of custom ``field.clean()`` methods. .. class:: MaxValueValidator(limit_value, message=None) Raises a :exc:`~django.core.exceptions.ValidationError` with a code of - ``'max_value'`` if ``value`` is greater than ``limit_value``. + ``'max_value'`` if ``value`` is greater than ``limit_value``, which may be + a callable. + + .. versionchanged:: 2.2 + + ``limit_value`` can now be a callable. ``MinValueValidator`` --------------------- @@ -244,7 +249,12 @@ to, or in lieu of custom ``field.clean()`` methods. .. class:: MinValueValidator(limit_value, message=None) Raises a :exc:`~django.core.exceptions.ValidationError` with a code of - ``'min_value'`` if ``value`` is less than ``limit_value``. + ``'min_value'`` if ``value`` is less than ``limit_value``, which may be a + callable. + + .. versionchanged:: 2.2 + + ``limit_value`` can now be a callable. ``MaxLengthValidator`` ---------------------- @@ -252,7 +262,12 @@ to, or in lieu of custom ``field.clean()`` methods. .. class:: MaxLengthValidator(limit_value, message=None) Raises a :exc:`~django.core.exceptions.ValidationError` with a code of - ``'max_length'`` if the length of ``value`` is greater than ``limit_value``. + ``'max_length'`` if the length of ``value`` is greater than + ``limit_value``, which may be a callable. + + .. versionchanged:: 2.2 + + ``limit_value`` can now be a callable. ``MinLengthValidator`` ---------------------- @@ -260,7 +275,12 @@ to, or in lieu of custom ``field.clean()`` methods. .. class:: MinLengthValidator(limit_value, message=None) Raises a :exc:`~django.core.exceptions.ValidationError` with a code of - ``'min_length'`` if the length of ``value`` is less than ``limit_value``. + ``'min_length'`` if the length of ``value`` is less than ``limit_value``, + which may be a callable. + + .. versionchanged:: 2.2 + + ``limit_value`` can now be a callable. ``DecimalValidator`` -------------------- diff --git a/docs/releases/2.2.txt b/docs/releases/2.2.txt index 90099d9fc369c..4a6e74fab311a 100644 --- a/docs/releases/2.2.txt +++ b/docs/releases/2.2.txt @@ -259,7 +259,9 @@ URLs Validators ~~~~~~~~~~ -* ... +* :class:`.MaxValueValidator`, :class:`.MinValueValidator`, + :class:`.MinLengthValidator`, and :class:`.MaxLengthValidator` now accept + a callable ``limit_value``. .. 
_backwards-incompatible-2.2: diff --git a/tests/validators/tests.py b/tests/validators/tests.py index 9f69854902ef5..36d0b2a520b3f 100644 --- a/tests/validators/tests.py +++ b/tests/validators/tests.py @@ -203,6 +203,10 @@ (MinValueValidator(0), -1, ValidationError), (MinValueValidator(NOW), NOW - timedelta(days=1), ValidationError), + # limit_value may be a callable. + (MinValueValidator(lambda: 1), 0, ValidationError), + (MinValueValidator(lambda: 1), 1, None), + (MaxLengthValidator(10), '', None), (MaxLengthValidator(10), 10 * 'x', None),
https://code.djangoproject.com/ticket/29860
https://api.github.com/repos/django/django/pulls/10522
2018-10-17T15:00:27Z
2018-10-22T18:35:27Z
2018-10-22T18:35:26Z
2019-04-04T09:02:01Z
1,123
django/django
51,388
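Illustrative aside on the record above: a hypothetical usage of the callable `limit_value` this PR enables. The model, field, and helper names are made up; only `MinValueValidator`/`MaxValueValidator` come from Django itself.

```python
# Example (invented) of passing a callable limit_value, evaluated at
# validation time rather than when the field is defined.
import datetime

from django.core.validators import MaxValueValidator, MinValueValidator
from django.db import models


def current_year():
    # Re-evaluated on every validation, so the limit stays up to date.
    return datetime.date.today().year


class Car(models.Model):
    manufactured_year = models.PositiveIntegerField(
        validators=[
            MinValueValidator(1900),
            MaxValueValidator(current_year),  # callable limit_value (Django 2.2+)
        ]
    )
```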
Tiny clean-up in `program()`
diff --git a/httpie/core.py b/httpie/core.py index cf2567d489..9c9e3ce406 100644 --- a/httpie/core.py +++ b/httpie/core.py @@ -213,7 +213,7 @@ def request_body_read_callback(chunk: bytes): finally: if downloader and not downloader.finished: downloader.failed() - if not isinstance(args, list) and args.output_file and args.output_file_specified: + if args.output_file and args.output_file_specified: args.output_file.close()
https://api.github.com/repos/httpie/cli/pulls/1135
2021-09-02T14:45:17Z
2021-09-02T14:47:02Z
2021-09-02T14:47:02Z
2021-09-02T14:47:35Z
121
httpie/cli
33,877
Print the mypy command in mypy wrapper
diff --git a/scripts/mypy b/scripts/mypy index 8475a1d9510a..7e3892313086 100755 --- a/scripts/mypy +++ b/scripts/mypy @@ -22,7 +22,8 @@ import os import os.path import sys import tempfile -from typing import List +import shlex +from typing import List, Iterable import click import mypy.main as mypy_main @@ -32,6 +33,14 @@ PATHS = ["lib/streamlit/", "scripts/*", "e2e/scripts/*"] EXCLUDE_FILES = {"scripts/add_license_headers.py", "e2e/scripts/st_reuse_label.py"} +def shlex_join(split_command: Iterable[str]): + """Return a shell-escaped string from *split_command*. + + This function is backported from Python 3.8 - shlex.join + """ + return ' '.join(shlex.quote(arg) for arg in split_command) + + class Module: _COLUMNS = (56, 5, 5, 7) _HEADERS = ("Module", "Lines", "Typed", "Percent") @@ -92,7 +101,15 @@ def process_report(path: str) -> None: @click.command() @click.option("--report", is_flag=True, help="Emit line coverage report for all files") -def main(report: bool = False) -> None: [email protected]( + "--verbose", + '-v', + is_flag=True, + help=( + "Verbose mode. Causes this command to print mypy command being executed." + ) +) +def main(report: bool = False, verbose: bool = False) -> None: paths: List[str] = [] for path in PATHS: if "*" in path: @@ -109,6 +126,9 @@ def main(report: bool = False) -> None: args.append("--") args.extend(paths) + if verbose: + shell_command = shlex_join(itertools.chain(['mypy'], args)) + print("Executing command:", shell_command) mypy_main.main(None, sys.stdout, sys.stderr, args=args) if report: process_report(os.path.join(tempdir.name, "lineprecision.txt"))
<!-- Before contributing (PLEASE READ!)
⚠️ If your contribution is more than a few lines of code, then prior to starting to code on it please post in the issue saying you want to volunteer, then wait for a positive response. And if there is no issue for it yet, create it first.
This helps make sure:
1. Two people aren't working on the same thing
2. This is something Streamlit's maintainers believe should be implemented/fixed
3. Any API, UI, or deeper architectural changes that need to be implemented have been fully thought through by Streamlit's maintainers
4. Your time is well spent!
More information in our wiki: https://github.com/streamlit/streamlit/wiki/Contributing -->

## 📚 Context

In our project, calling mypy directly is not standard, and therefore we have a wrapper that makes it easy. For more advanced use cases, there's a need to run it directly. To make this possible, I printed the commands on sys.stdout, so you can copy and paste them into a terminal and modify them later to suit your requirements.

<img width="1785" alt="Screenshot 2022-08-30 at 19 36 00" src="https://user-images.githubusercontent.com/78743291/187505568-6afd3e80-6853-4c42-af7c-ecaa71216ebb.png">

It also helps to troubleshoot this script.

_Please describe the project or issue background here_

- What kind of change does this PR introduce?
  - [ ] Bugfix
  - [ ] Feature
  - [ ] Refactoring
  - [ ] Other, please describe:

## 🧠 Description of Changes

- _Add bullet points summarizing your changes here_
  - [ ] This is a breaking API change
  - [ ] This is a visible (user-facing) change

**Revised:**

_Insert screenshot of your updated UI/code here_

**Current:**

_Insert screenshot of existing UI/code here_

## 🧪 Testing Done

- [ ] Screenshots included
- [ ] Added/Updated unit tests
- [ ] Added/Updated e2e tests

## 🌐 References

_Does this depend on other work, documents, or tickets?_

- **Issue**: Closes #XXXX

---

**Contribution License Agreement**

By submitting this pull request you agree that all contributions to this project are made under the Apache 2.0 license.
https://api.github.com/repos/streamlit/streamlit/pulls/5274
2022-08-30T17:37:52Z
2022-09-01T09:36:51Z
2022-09-01T09:36:51Z
2023-10-05T19:28:23Z
497
streamlit/streamlit
22,195
cloudformation_facts: don't fail on nonexistent stack - fixes #23419
diff --git a/lib/ansible/modules/cloud/amazon/cloudformation_facts.py b/lib/ansible/modules/cloud/amazon/cloudformation_facts.py index 46712accdf71fe..dc0f808dc6dfe1 100644 --- a/lib/ansible/modules/cloud/amazon/cloudformation_facts.py +++ b/lib/ansible/modules/cloud/amazon/cloudformation_facts.py @@ -80,6 +80,13 @@ stack_resources: true stack_policy: true +# Fail if the stack doesn't exist +- name: try to get facts about a stack but fail if it doesn't exist + cloudformation_facts: + stack_name: nonexistent-stack + all_facts: yes + failed_when: cloudformation['nonexistent-stack'] is undefined + # Example dictionary outputs for stack_outputs, stack_parameters and stack_resources: # "stack_outputs": { # "ApplicationDatabaseName": "dazvlpr01xj55a.ap-southeast-2.rds.amazonaws.com", @@ -102,38 +109,38 @@ RETURN = ''' stack_description: description: Summary facts about the stack - returned: always + returned: if the stack exists type: dict stack_outputs: description: Dictionary of stack outputs keyed by the value of each output 'OutputKey' parameter and corresponding value of each output 'OutputValue' parameter - returned: always + returned: if the stack exists type: dict stack_parameters: description: Dictionary of stack parameters keyed by the value of each parameter 'ParameterKey' parameter and corresponding value of each parameter 'ParameterValue' parameter - returned: always + returned: if the stack exists type: dict stack_events: description: All stack events for the stack - returned: only if all_facts or stack_events is true + returned: only if all_facts or stack_events is true and the stack exists type: list stack_policy: description: Describes the stack policy for the stack - returned: only if all_facts or stack_policy is true + returned: only if all_facts or stack_policy is true and the stack exists type: dict stack_template: description: Describes the stack template for the stack - returned: only if all_facts or stack_template is true + returned: only if all_facts or stack_template is true and the stack exists type: dict stack_resource_list: description: Describes stack resources for the stack - returned: only if all_facts or stack_resourses is true + returned: only if all_facts or stack_resourses is true and the stack exists type: list stack_resources: description: Dictionary of stack resources keyed by the value of each resource 'LogicalResourceId' parameter and corresponding value of each resource 'PhysicalResourceId' parameter - returned: only if all_facts or stack_resourses is true + returned: only if all_facts or stack_resourses is true and the stack exists type: dict ''' @@ -148,6 +155,7 @@ except ImportError: HAS_BOTO3 = False +from ansible.module_utils._text import to_native from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.ec2 import (get_aws_connection_info, ec2_argument_spec, boto3_conn, camel_dict_to_snake_dict, AWSRetry, boto3_tag_list_to_ansible_dict) @@ -180,22 +188,25 @@ def describe_stacks(self, stack_name=None): kwargs = {'StackName': stack_name} if stack_name else {} func = partial(self.client.describe_stacks, **kwargs) response = self.paginated_response(func, 'Stacks') - if response: + if response is not None: return response self.module.fail_json(msg="Error describing stack(s) - an empty response was returned") except Exception as e: - self.module.fail_json(msg="Error describing stack(s) - " + str(e), exception=traceback.format_exc()) + if 'does not exist' in e.response['Error']['Message']: + # missing stack, don't bail. 
+ return {} + self.module.fail_json(msg="Error describing stack - " + to_native(e), exception=traceback.format_exc()) def list_stack_resources(self, stack_name): try: - func = partial(self.client.list_stack_resources,StackName=stack_name) + func = partial(self.client.list_stack_resources, StackName=stack_name) return self.paginated_response(func, 'StackResourceSummaries') except Exception as e: self.module.fail_json(msg="Error listing stack resources - " + str(e), exception=traceback.format_exc()) def describe_stack_events(self, stack_name): try: - func = partial(self.client.describe_stack_events,StackName=stack_name) + func = partial(self.client.describe_stack_events, StackName=stack_name) return self.paginated_response(func, 'StackEvents') except Exception as e: self.module.fail_json(msg="Error describing stack events - " + str(e), exception=traceback.format_exc()) @@ -222,7 +233,7 @@ def paginated_response(self, func, result_key, next_token=None): Returns expanded response for paginated operations. The 'result_key' is used to define the concatenated results that are combined from each paginated response. ''' - args=dict() + args = dict() if next_token: args['NextToken'] = next_token response = func(**args) @@ -232,6 +243,7 @@ def paginated_response(self, func, result_key, next_token=None): return result return result + self.paginated_response(func, result_key, next_token) + def to_dict(items, key, value): ''' Transforms a list of items to a Key/Value dictionary ''' if items: @@ -239,6 +251,7 @@ def to_dict(items, key, value): else: return dict() + def main(): argument_spec = ec2_argument_spec() argument_spec.update(dict( diff --git a/test/sanity/pep8/legacy-files.txt b/test/sanity/pep8/legacy-files.txt index 62d04da6cc0c64..758d95a0c495fb 100644 --- a/test/sanity/pep8/legacy-files.txt +++ b/test/sanity/pep8/legacy-files.txt @@ -12,7 +12,6 @@ lib/ansible/modules/cloud/openstack/_os_server_actions.py lib/ansible/modules/cloud/ovirt/_ovirt_affinity_groups.py lib/ansible/modules/cloud/amazon/aws_kms.py lib/ansible/modules/cloud/amazon/cloudformation.py -lib/ansible/modules/cloud/amazon/cloudformation_facts.py lib/ansible/modules/cloud/amazon/cloudfront_facts.py lib/ansible/modules/cloud/amazon/dynamodb_table.py lib/ansible/modules/cloud/amazon/ec2_ami_copy.py
##### SUMMARY
Return empty results and message if there is no stack from which to get facts. Fixes #23419. Made pep8.

##### ISSUE TYPE
- Bugfix Pull Request

##### COMPONENT NAME
lib/ansible/modules/cloud/amazon/cloudformation_facts.py

##### ANSIBLE VERSION
```
2.4.0
```

Before:
```
The full traceback is:
Traceback (most recent call last):
  File "/var/folders/by/k8_fbl593dlctgqmwq5wzl2c0000gn/T/ansible_NPiajF/ansible_module_cloudformation_facts.py", line 179, in describe_stack
    response = self.paginated_response(func, 'Stacks')
  File "/var/folders/by/k8_fbl593dlctgqmwq5wzl2c0000gn/T/ansible_NPiajF/ansible_module_cloudformation_facts.py", line 225, in paginated_response
    response = func(**args)
  File "/Users/shertel/Workspace/01-18-17/ansible/venv/python2env/lib/python2.7/site-packages/botocore/client.py", line 253, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/Users/shertel/Workspace/01-18-17/ansible/venv/python2env/lib/python2.7/site-packages/botocore/client.py", line 543, in _make_api_call
    raise error_class(parsed_response, operation_name)
ClientError: An error occurred (ValidationError) when calling the DescribeStacks operation: Stack with id notthere does not exist

fatal: [localhost]: FAILED! => {
    "changed": false,
    "failed": true,
    "invocation": {
        "module_args": {
            "all_facts": true,
            "aws_access_key": null,
            "aws_secret_key": null,
            "ec2_url": null,
            "profile": "shertel",
            "region": "us-east-1",
            "security_token": null,
            "stack_events": false,
            "stack_name": "notthere",
            "stack_policy": false,
            "stack_resources": false,
            "stack_template": false,
            "validate_certs": true
        }
    },
    "msg": "Error describing stack - An error occurred (ValidationError) when calling the DescribeStacks operation: Stack with id notthere does not exist"
}
    to retry, use: --limit @/Users/shertel/Workspace/01-18-17/ansible/my_playbooks/cloudformation/nonexistent_stack.retry

PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1
```
https://api.github.com/repos/ansible/ansible/pulls/23758
2017-04-19T18:01:06Z
2017-10-26T19:18:31Z
2017-10-26T19:18:31Z
2019-04-26T21:09:52Z
1,527
ansible/ansible
49,438
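Illustrative aside on the record above: a minimal sketch of the "missing stack is not fatal" handling the diff adds, assuming a plain boto3 CloudFormation client. The function name and return shape are simplified stand-ins for the module's real plumbing.

```python
# Sketch of matching the "does not exist" ClientError so a nonexistent stack
# yields empty facts instead of a module failure, as in the PR above.
import botocore.exceptions


def describe_stack_safely(cf_client, stack_name):
    try:
        return cf_client.describe_stacks(StackName=stack_name)["Stacks"]
    except botocore.exceptions.ClientError as e:
        if "does not exist" in e.response["Error"]["Message"]:
            return {}  # missing stack: caller can use failed_when if it must exist
        raise  # any other AWS error is still a real failure
```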