Dataset schema (column name, dtype, observed range):

| column | dtype | observed range |
|---|---|---|
| title | string | 2–169 chars |
| diff | string | 235–19.5k chars |
| body | string | 0–30.5k chars |
| url | string | 48–84 chars |
| created_at | string | 20 chars |
| closed_at | string | 20 chars |
| merged_at | string | 20 chars |
| updated_at | string | 20 chars |
| diff_len | float64 | 101–3.99k |
| repo_name | string | 83 distinct values |
| __index_level_0__ | int64 | 15–52.7k |
Add some properties to NotionDBLoader
diff --git a/libs/langchain/langchain/document_loaders/notiondb.py b/libs/langchain/langchain/document_loaders/notiondb.py index d7728efcc88da8..e24d6fe555cf69 100644 --- a/libs/langchain/langchain/document_loaders/notiondb.py +++ b/libs/langchain/langchain/document_loaders/notiondb.py @@ -130,6 +130,14 @@ def load_page(self, page_summary: Dict[str, Any]) -> Document: ) elif prop_type == "created_time": value = prop_data["created_time"] if prop_data["created_time"] else None + elif prop_type == "checkbox": + value = prop_data["checkbox"] + elif prop_type == "email": + value = prop_data["email"] + elif prop_type == "number": + value = prop_data["number"] + elif prop_type == "select": + value = prop_data["select"]["name"] if prop_data["select"] else None else: value = None
Fixes #13356.

Adds support for the following property types as metadata in NotionDBLoader:

- `checkbox`
- `email`
- `number`
- `select`

There are no relevant tests to update for this change.
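For readers who want the gist without parsing the diff, a minimal sketch of the new branches (the loader plumbing around them is elided; `prop_data` is the raw Notion property payload):

```python
from typing import Any, Dict, Optional

def extract_property_value(prop_type: str, prop_data: Dict[str, Any]) -> Optional[Any]:
    # checkbox, email and number are used as-is; select is nested under
    # ["select"]["name"] and falls back to None when the property is unset.
    if prop_type in ("checkbox", "email", "number"):
        return prop_data[prop_type]
    if prop_type == "select":
        return prop_data["select"]["name"] if prop_data["select"] else None
    return None
```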
https://api.github.com/repos/langchain-ai/langchain/pulls/13358
2023-11-14T17:46:08Z
2023-11-15T04:31:12Z
2023-11-15T04:31:12Z
2023-11-15T20:33:05Z
230
langchain-ai/langchain
43,613
Fix a grammar mistake in the docs
diff --git a/docs/extensiondev.rst b/docs/extensiondev.rst index 9a4cd14c5e..9119abdb9a 100644 --- a/docs/extensiondev.rst +++ b/docs/extensiondev.rst @@ -35,7 +35,7 @@ called ``flask_something`` users would import it as ``flask.ext.something``. This is done to transition from the old namespace packages. See :ref:`ext-import-transition` for more details. -But how do extensions look like themselves? An extension has to ensure +But what do extensions look like themselves? An extension has to ensure that it works with multiple Flask application instances at once. This is a requirement because many people will use patterns like the :ref:`app-factories` pattern to create their application as needed to aid
https://api.github.com/repos/pallets/flask/pulls/1798
2016-05-05T17:24:14Z
2016-05-05T17:27:45Z
2016-05-05T17:27:45Z
2020-11-14T04:52:53Z
191
pallets/flask
20,287
[Serve] Fix Serve Release Tests
diff --git a/python/ray/serve/benchmarks/cluster.yaml b/python/ray/serve/benchmarks/cluster.yaml index 831f0f1ff3cb4..0a43885334533 100644 --- a/python/ray/serve/benchmarks/cluster.yaml +++ b/python/ray/serve/benchmarks/cluster.yaml @@ -1,7 +1,7 @@ cluster_name: default -min_workers: 22 -max_workers: 22 -initial_workers: 22 +min_workers: 5 +max_workers: 5 +initial_workers: 5 autoscaling_mode: default docker: image: 'anyscale/ray-ml:latest' @@ -28,6 +28,7 @@ initialization_commands: [] setup_commands: - apt-get install build-essential libssl-dev git -y - 'rm -r wrk || true && git clone https://github.com/wg/wrk.git wrk && cd wrk && make -j && cp wrk /usr/local/bin' + - ray install-nightly head_setup_commands: [] worker_setup_commands: [] head_start_ray_commands: diff --git a/python/ray/serve/benchmarks/microbenchmark.py b/python/ray/serve/benchmarks/microbenchmark.py index e4f058e0f199e..4a34e34183112 100644 --- a/python/ray/serve/benchmarks/microbenchmark.py +++ b/python/ray/serve/benchmarks/microbenchmark.py @@ -86,13 +86,14 @@ async def main(): client.create_backend("backend", backend) client.create_endpoint("endpoint", backend="backend", route="/api") for intermediate_handles in [False, True]: - if (intermediate_handles): + if intermediate_handles: client.create_endpoint( "backend", backend="backend", route="/backend") class forwardActor: def __init__(self): + client = serve.connect() self.handle = client.get_handle("backend") def __call__(self, _): diff --git a/python/ray/serve/benchmarks/scalability.py b/python/ray/serve/benchmarks/scalability.py index d11564567edae..c424ae32107d4 100644 --- a/python/ray/serve/benchmarks/scalability.py +++ b/python/ray/serve/benchmarks/scalability.py @@ -36,73 +36,76 @@ from ray.serve import BackendConfig from ray.serve.utils import logger -from ray.util.placement_group import (placement_group, remove_placement_group) +from ray.util.placement_group import placement_group, remove_placement_group ray.shutdown() ray.init(address="auto") -client = serve.start() -# These numbers need to correspond with the autoscaler config file. -# The number of remote nodes in the autoscaler should upper bound -# these because sometimes nodes fail to update. -num_workers = 20 -expected_num_nodes = num_workers + 1 -cpus_per_node = 4 -num_remote_cpus = expected_num_nodes * cpus_per_node +# We ask for more worker but only need to run on smaller subset. +# This should account for worker nodes failed to launch. +expected_num_nodes = 6 +num_replicas = 11 +# wrk HTTP load testing config +num_connections = 20 +num_threads = 2 +time_to_run = "20s" # Wait until the expected number of nodes have joined the cluster. while True: - num_nodes = len(ray.nodes()) + num_nodes = len(list(filter(lambda node: node["Alive"], ray.nodes()))) logger.info("Waiting for nodes {}/{}".format(num_nodes, expected_num_nodes)) if num_nodes >= expected_num_nodes: break time.sleep(5) + logger.info("Nodes have all joined. There are %s resources.", ray.cluster_resources()) +client = serve.start() + def hey(_): time.sleep(0.01) # Sleep for 10ms return b"hey" -num_connections = int(num_remote_cpus * 0.75) -num_threads = 2 -time_to_run = "10s" - pg = placement_group( [{ "CPU": 1 } for _ in range(expected_num_nodes)], strategy="STRICT_SPREAD") ray.get(pg.ready()) -# The number of replicas is the number of cores remaining after accounting -# for the one HTTP proxy actor on each node, the "hey" requester task on each -# node, and the serve controller. 
-# num_replicas = expected_num_nodes * (cpus_per_node - 2) - 1 -num_replicas = ray.available_resources()["CPU"] logger.info("Starting %i replicas", num_replicas) client.create_backend( "hey", hey, config=BackendConfig(num_replicas=num_replicas)) client.create_endpoint("hey", backend="hey", route="/hey") [email protected] [email protected](num_cpus=0) def run_wrk(): - logger.info("Warming up for ~3 seconds") - for _ in range(5): - resp = requests.get("http://127.0.0.1:8000/hey").text - logger.info("Received response \'" + resp + "\'") - time.sleep(0.5) + logger.info("Warming up") + for _ in range(10): + try: + resp = requests.get("http://127.0.0.1:8000/hey").text + logger.info("Received response '" + resp + "'") + time.sleep(0.5) + except Exception as e: + logger.info(f"Got exception {e}") result = subprocess.run( [ - "wrk", "-c", - str(num_connections), "-t", - str(num_threads), "-d", time_to_run, "http://127.0.0.1:8000/hey" + "wrk", + "-c", + str(num_connections), + "-t", + str(num_threads), + "-d", + time_to_run, + "http://127.0.0.1:8000/hey", ], - stdout=subprocess.PIPE) + stdout=subprocess.PIPE, + ) return result.stdout.decode() diff --git a/python/ray/serve/benchmarks/single.yaml b/python/ray/serve/benchmarks/single.yaml index b93577fcac40c..4500d65abd268 100644 --- a/python/ray/serve/benchmarks/single.yaml +++ b/python/ray/serve/benchmarks/single.yaml @@ -23,6 +23,7 @@ initialization_commands: [] setup_commands: - apt-get install build-essential libssl-dev git -y - 'rm -r wrk || true && git clone https://github.com/wg/wrk.git wrk && cd wrk && make -j && cp wrk /usr/local/bin' + - ray install-nightly head_setup_commands: [] worker_setup_commands: [] head_start_ray_commands: diff --git a/release/long_running_tests/cluster.yaml b/release/long_running_tests/cluster.yaml index 074d445ba9c1f..e51cd010b1ea5 100644 --- a/release/long_running_tests/cluster.yaml +++ b/release/long_running_tests/cluster.yaml @@ -1,48 +1,17 @@ -cluster_name: default -min_workers: 0 -max_workers: 0 -target_utilization_fraction: 0.8 -idle_timeout_minutes: 5 +cluster_name: ray-release-long-running-tests + +docker: + image: anyscale/ray:latest + container_name: ray_container + pull_before_run: False -# Cloud-provider specific configuration. provider: type: aws region: us-west-2 - availability_zone: us-west-2a + availability_zone: us-west-2a, us-west-2b, us-west-2c + auth: ssh_user: ubuntu head_node: - InstanceType: m5.2xlarge - ImageId: ami-0888a3b5189309429 # DLAMI 7/1/19 - BlockDeviceMappings: - - DeviceName: /dev/sda1 - Ebs: - VolumeSize: 150 - -worker_nodes: - InstanceType: m5.large - ImageId: ami-0888a3b5189309429 # DLAMI 7/1/19 - BlockDeviceMappings: - - DeviceName: /dev/sda1 - Ebs: - VolumeSize: 150 - - # Run workers on spot by default. Comment this out to use on-demand. - InstanceMarketOptions: - MarketType: spot - -# List of shell commands to run to set up nodes. -setup_commands: [] - -# Custom commands that will be run on the head node after common setup. -head_setup_commands: [] - -# Custom commands that will be run on worker nodes after common setup. -worker_setup_commands: [] - -# Command to start ray on the head node. You don't need to change this. -head_start_ray_commands: [] - -# Command to start ray on worker nodes. You don't need to change this. 
-worker_start_ray_commands: [] + InstanceType: m5.xlarge diff --git a/release/long_running_tests/run.sh b/release/long_running_tests/run.sh index eba55e842c8cb..e48d7c899ba22 100644 --- a/release/long_running_tests/run.sh +++ b/release/long_running_tests/run.sh @@ -1,6 +1,6 @@ #!/usr/bin/env bash -ray_version="" +ray_version="" commit="" ray_branch="" workload="" @@ -48,20 +48,20 @@ echo "commit: $commit" echo "branch: $ray_branch" echo "workload: $workload" -wheel="https://s3-us-west-2.amazonaws.com/ray-wheels/$ray_branch/$commit/ray-$ray_version-cp36-cp36m-manylinux2014_x86_64.whl" +wheel="https://s3-us-west-2.amazonaws.com/ray-wheels/$ray_branch/$commit/ray-$ray_version-cp37-cp37m-manylinux2014_x86_64.whl" -echo set-window-option -g mouse on > ~/.tmux.conf -echo 'termcapinfo xterm* ti@:te@' > ~/.screenrc # Serve load testing tool -rm -r wrk || true && git clone https://github.com/wg/wrk.git wrk && cd wrk && make -j && sudo cp wrk /usr/local/bin -pip install -U pip -unset RAY_ADDRESS -source activate tensorflow_p36 -conda remove -y --force wrapt || true +cur_dir=$(pwd) +cd /tmp && rm -rf wrk && git clone https://github.com/wg/wrk.git wrk && cd wrk && make -j && cp wrk /usr/local/bin +cd "$cur_dir" || exit + pip install --upgrade pip pip install -U tensorflow==1.14 -pip install -q -U "$wheel" Click +pip install -q -U "$wheel" pip install -q "ray[all]" "gym[atari]" -cd .. + +ray stop && sleep 2 + +unset RAY_ADDRESS python "./workloads/$workload.py" diff --git a/release/long_running_tests/workloads/serve.py b/release/long_running_tests/workloads/serve.py index 6d404ac5137f0..59b2307764cf1 100644 --- a/release/long_running_tests/workloads/serve.py +++ b/release/long_running_tests/workloads/serve.py @@ -11,7 +11,7 @@ num_redis_shards = 1 redis_max_memory = 10**8 object_store_memory = 10**8 -num_nodes = 5 +num_nodes = 1 cluster = Cluster() for i in range(num_nodes): cluster.add_node( @@ -22,21 +22,20 @@ resources={str(i): 2}, object_store_memory=object_store_memory, redis_max_memory=redis_max_memory, - dashboard_host="0.0.0.0") + dashboard_host="0.0.0.0", + ) ray.init(address=cluster.address, dashboard_host="0.0.0.0") client = serve.start() @serve.accept_batch -def echo(_): +def echo(requests_batch): time.sleep(0.01) # Sleep for 10ms - ray.show_in_dashboard( - str(serve.context.batch_size), key="Current batch size") - return ["hi {}".format(i) for i in range(serve.context.batch_size)] + return ["hi" for _ in range(len(requests_batch))] -config = {"num_replicas": 30, "max_batch_size": 16} +config = {"num_replicas": 7, "max_batch_size": 16} client.create_backend("echo:v1", echo, config=config) client.create_endpoint("echo", backend="echo:v1", route="/echo") @@ -53,12 +52,18 @@ def echo(_): while True: proc = subprocess.Popen( [ - "wrk", "-c", - str(connections), "-t", - str(num_threads), "-s", time_to_run, "http://127.0.0.1:8000/echo" + "wrk", + "-c", + str(connections), + "-t", + str(num_threads), + "-d", + time_to_run, + "http://127.0.0.1:8000/echo", ], stdout=PIPE, - stderr=PIPE) + stderr=PIPE, + ) print("started load testing") proc.wait() out, err = proc.communicate() diff --git a/release/long_running_tests/workloads/serve_failure.py b/release/long_running_tests/workloads/serve_failure.py index 1298532890983..534dcbda73edb 100644 --- a/release/long_running_tests/workloads/serve_failure.py +++ b/release/long_running_tests/workloads/serve_failure.py @@ -11,19 +11,20 @@ num_redis_shards = 1 redis_max_memory = 10**8 object_store_memory = 10**8 -num_nodes = 5 -cpus_per_node = 2 +num_nodes = 1 
+cpus_per_node = 10 cluster = Cluster() for i in range(num_nodes): cluster.add_node( redis_port=6379 if i == 0 else None, num_redis_shards=num_redis_shards if i == 0 else None, - num_cpus=2, + num_cpus=16, num_gpus=0, resources={str(i): 2}, object_store_memory=object_store_memory, redis_max_memory=redis_max_memory, - dashboard_host="0.0.0.0") + dashboard_host="0.0.0.0", + ) ray.init( address=cluster.address, dashboard_host="0.0.0.0", log_to_driver=False)
## Why are these changes needed?

Workaround for https://github.com/ray-project/ray/issues/12778

## Checks

- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
  - [ ] Unit tests
  - [ ] Release tests
  - [ ] This PR is not tested :(
https://api.github.com/repos/ray-project/ray/pulls/12777
2020-12-11T05:50:33Z
2020-12-11T19:53:48Z
2020-12-11T19:53:48Z
2020-12-29T17:46:55Z
3,391
ray-project/ray
19,807
[iqiyi] update key
diff --git a/src/you_get/extractors/iqiyi.py b/src/you_get/extractors/iqiyi.py index 2359d1a9b9..1f1245906f 100644 --- a/src/you_get/extractors/iqiyi.py +++ b/src/you_get/extractors/iqiyi.py @@ -12,6 +12,10 @@ ''' Changelog: +-> http://www.iqiyi.com/common/flashplayer/20150810/MainPlayer_5_2_26_c3_3_7_1.swf + http://www.iqiyi.com/common/flashplayer/20150811/MainPlayer_5_2_26_c3_3_7_2.swf + some small changes in Zombie.bite function + -> http://www.iqiyi.com/common/flashplayer/20150805/MainPlayer_5_2_26_c3_3_7.swf former key still works until 20150809 In Zombie kcuf = [13, 3, 0, 15, 8, 2, 11, 7, 10, 1, 12, 9, 14, 6, 4, 5] ,which is construct in LogManager,CoreManager,impls.pub.setting,impls.pub.statistics,StageVideoManager @@ -24,11 +28,6 @@ -> http://www.iqiyi.com/common/flashplayer/20150618/MainPlayer_5_2_24_1_c3_3_2.swf In this version Z7elzzup.cexe,just use node.js to run this code(with some modification) and get innerkey. --> http://www.iqiyi.com/common/flashplayer/20150612/MainPlayer_5_2_23_1_c3_2_6_5.swf - In this version do not directly use enc key - gen enc key (so called sc ) in DMEmagelzzup.mix(tvid) -> (tm->getTimer(),src='hsalf',sc) - encrypy alogrithm is md5(DMEmagelzzup.mix.genInnerKey +tm+tvid) - how to gen genInnerKey ,can see first 3 lin in mix function in this file ''' ''' @@ -47,7 +46,7 @@ def mix(tvid): enc = [] - enc.append('65096542539c4e529c8ee97511cd979f') + enc.append('3601ba290e4f4662848c710e2122007e') tm = str(randint(2000,4000)) src = 'eknas' enc.append(str(tm))
https://api.github.com/repos/soimort/you-get/pulls/593
2015-08-12T04:55:10Z
2015-08-12T04:58:13Z
2015-08-12T04:58:13Z
2015-08-12T04:58:13Z
603
soimort/you-get
21,235
want to add iwara.tv support
diff --git a/src/you_get/common.py b/src/you_get/common.py index 8d4d2d7683..a7ae299e96 100755 --- a/src/you_get/common.py +++ b/src/you_get/common.py @@ -92,6 +92,7 @@ 'miaopai' : 'yixia', 'yizhibo' : 'yizhibo', 'youku' : 'youku', + 'iwara' : 'iwara', 'youtu' : 'youtube', 'youtube' : 'youtube', 'zhanqi' : 'zhanqi', diff --git a/src/you_get/extractors/iwara.py b/src/you_get/extractors/iwara.py new file mode 100644 index 0000000000..21b44608d8 --- /dev/null +++ b/src/you_get/extractors/iwara.py @@ -0,0 +1,35 @@ +#!/usr/bin/env python +__all__ = ['iwara_download'] +from ..common import * +headers = { + 'DNT': '1', + 'Accept-Encoding': 'gzip, deflate, sdch, br', + 'Accept-Language': 'en-CA,en;q=0.8,en-US;q=0.6,zh-CN;q=0.4,zh;q=0.2', + 'Upgrade-Insecure-Requests': '1', + 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.75 Safari/537.36', + 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8', + 'Cache-Control': 'max-age=0', + + 'Connection': 'keep-alive', + 'Save-Data': 'on', + 'Cookie':'has_js=1;show_adult=1', +} + +def iwara_download(url, output_dir='.', merge=True, info_only=False, **kwargs): + global headers + video_hash=match1(url, r'http://\w+.iwara.tv/videos/(\w+)') + video_url=match1(url, r'(http://\w+.iwara.tv)/videos/\w+') + html = get_content(url,headers=headers) + title = r1(r'<title>(.*)</title>', html) + api_url=video_url+'/api/video/'+video_hash + content=get_content(api_url,headers=headers) + data=json.loads(content) + type,ext,size=url_info(data[0]['uri'], headers=headers) + down_urls=data[0]['uri'] + print_info(down_urls,title+data[0]['resolution'],type,size) + + download_urls([down_urls], title, ext, size, output_dir, merge = merge,headers=headers) + +site_info = "iwara" +download = iwara_download +download_playlist = playlist_not_supported('iwara') \ No newline at end of file
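As a quick orientation for the diff above, the endpoint derivation it performs reduces to this sketch (regexes taken from the extractor; networking elided):

```python
import re

def iwara_api_url(url: str) -> str:
    # Same derivation as iwara_download: site root + '/api/video/' + video hash.
    video_hash = re.match(r'http://\w+\.iwara\.tv/videos/(\w+)', url).group(1)
    video_root = re.match(r'(http://\w+\.iwara\.tv)/videos/\w+', url).group(1)
    return video_root + '/api/video/' + video_hash

# e.g. iwara_api_url('http://www.iwara.tv/videos/abc123')
# -> 'http://www.iwara.tv/api/video/abc123'
```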
https://api.github.com/repos/soimort/you-get/pulls/2107
2017-06-26T16:03:09Z
2017-08-27T02:04:43Z
2017-08-27T02:04:43Z
2017-08-27T02:04:43Z
705
soimort/you-get
21,253
Remove wheel hack in windows installer construction script
diff --git a/tools/dev_constraints.txt b/tools/dev_constraints.txt index 43a27a3d083..2cc2039ea9b 100644 --- a/tools/dev_constraints.txt +++ b/tools/dev_constraints.txt @@ -83,10 +83,10 @@ PyGithub==1.52 Pygments==2.2.0 pyjwt==1.7.1 pylint==2.4.3 +pynacl==1.3.0 # If pynsist version is upgraded, our NSIS template windows-installer/template.nsi # must be upgraded if necessary using the new built-in one from pynsist. -pynacl==1.3.0 -pynsist==2.6 +pynsist==2.7 pytest==3.2.5 pytest-cov==2.5.1 pytest-forked==0.2 diff --git a/windows-installer/windows_installer/construct.py b/windows-installer/windows_installer/construct.py index 0cec3811b3f..98392304987 100644 --- a/windows-installer/windows_installer/construct.py +++ b/windows-installer/windows_installer/construct.py @@ -1,7 +1,6 @@ #!/usr/bin/env python3 import ctypes import os -import re import shutil import struct import subprocess @@ -65,21 +64,6 @@ def _compile_wheels(repo_path, build_path, venv_python): command.extend(wheels_project) subprocess.check_call(command, env=env) - # Cryptography uses now a unique wheel name "cryptography-VERSION-cpXX-abi3-win32.whl where - # cpXX is the lowest supported version of Python (eg. cp36 says that the wheel is compatible - # with Python 3.6+). While technically valid to describe a wheel compliant with the Stable - # Application Binary Interface, this naming convention makes pynsist falsely think that the - # wheel is compatible with Python 3.6 only. - # Let's trick pynsist by renaming the wheel until this is fixed upstream. - for file in os.listdir(wheels_path): - # Given that our Python version is 3.8, this rename files like - # cryptography-VERSION-cpXX-abi3-win32.whl into cryptography-VERSION-cp38-abi3-win32.whl - renamed = re.sub(r'^(.*)-cp\d+-abi3-(\w+)\.whl$', r'\1-cp{0}{1}-abi3-\2.whl' - .format(PYTHON_VERSION[0], PYTHON_VERSION[1]), file) - print(renamed) - if renamed != file: - os.replace(os.path.join(wheels_path, file), os.path.join(wheels_path, renamed)) - def _prepare_build_tools(venv_path, venv_python, repo_path): print('Prepare build tools')
In #8649 we added some code to trick pynsist into understanding that `abi3` wheels for Windows are forward compatible, meaning that a cryptography wheel tagged `cp36-abi3` is in fact compatible with Python 3.6+, not only Python 3.6. Since pynsist 2.7 the tool understands `abi3` wheels properly, so this trick is no longer needed. Please note that despite modifying the pynsist pinning in `dev_constraints.txt`, this will have no effect, since pynsist currently escapes the pinning system. That is handled in https://github.com/certbot/certbot/pull/8749.
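For context, the removed trick boils down to the following (a reconstruction from the deleted lines; the Python version tuple is hardcoded here for illustration):

```python
import os
import re

PYTHON_VERSION = (3, 8)  # illustrative; the installer script derives this itself

def rename_abi3_wheels(wheels_path: str) -> None:
    # Rewrite e.g. cryptography-X.Y-cp36-abi3-win32.whl into
    # cryptography-X.Y-cp38-abi3-win32.whl so that pre-2.7 pynsist
    # would accept the abi3 wheel for the build Python.
    for name in os.listdir(wheels_path):
        renamed = re.sub(
            r'^(.*)-cp\d+-abi3-(\w+)\.whl$',
            r'\1-cp{0}{1}-abi3-\2.whl'.format(*PYTHON_VERSION),
            name,
        )
        if renamed != name:
            os.replace(os.path.join(wheels_path, name),
                       os.path.join(wheels_path, renamed))
```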
https://api.github.com/repos/certbot/certbot/pulls/8752
2021-03-28T21:16:42Z
2021-04-02T17:37:20Z
2021-04-02T17:37:20Z
2021-04-02T17:37:20Z
653
certbot/certbot
2,070
[soundcloud] Adding likes support to SoundcloudUserIE
diff --git a/test/test_playlists.py b/test/test_playlists.py index 994b1d4b057..3a88cf27073 100644 --- a/test/test_playlists.py +++ b/test/test_playlists.py @@ -137,6 +137,14 @@ def test_soundcloud_user(self): self.assertEqual(result['id'], '9615865') self.assertTrue(len(result['entries']) >= 12) + def test_soundcloud_likes(self): + dl = FakeYDL() + ie = SoundcloudUserIE(dl) + result = ie.extract('https://soundcloud.com/the-concept-band/likes') + self.assertIsPlaylist(result) + self.assertEqual(result['id'], '9615865') + self.assertTrue(len(result['entries']) >= 1) + def test_soundcloud_playlist(self): dl = FakeYDL() ie = SoundcloudPlaylistIE(dl) diff --git a/youtube_dl/extractor/soundcloud.py b/youtube_dl/extractor/soundcloud.py index 7aa100fb22f..14ec9452d4f 100644 --- a/youtube_dl/extractor/soundcloud.py +++ b/youtube_dl/extractor/soundcloud.py @@ -255,7 +255,7 @@ def _real_extract(self, url): class SoundcloudUserIE(SoundcloudIE): - _VALID_URL = r'https?://(www\.)?soundcloud\.com/(?P<user>[^/]+)(/?(tracks/)?)?(\?.*)?$' + _VALID_URL = r'https?://(www\.)?soundcloud\.com/(?P<user>[^/]+)/?((?P<rsrc>tracks|likes)/?)?(\?.*)?$' IE_NAME = 'soundcloud:user' # it's in tests/test_playlists.py @@ -264,24 +264,31 @@ class SoundcloudUserIE(SoundcloudIE): def _real_extract(self, url): mobj = re.match(self._VALID_URL, url) uploader = mobj.group('user') + resource = mobj.group('rsrc') + if resource is None: + resource = 'tracks' + elif resource == 'likes': + resource = 'favorites' url = 'http://soundcloud.com/%s/' % uploader resolv_url = self._resolv_url(url) user = self._download_json( resolv_url, uploader, 'Downloading user info') - base_url = 'http://api.soundcloud.com/users/%s/tracks.json?' % uploader + base_url = 'http://api.soundcloud.com/users/%s/%s.json?' % (uploader, resource) entries = [] for i in itertools.count(): data = compat_urllib_parse.urlencode({ 'offset': i * 50, + 'limit': 50, 'client_id': self._CLIENT_ID, }) new_entries = self._download_json( base_url + data, uploader, 'Downloading track page %s' % (i + 1)) - entries.extend(self._extract_info_dict(e, quiet=True) for e in new_entries) - if len(new_entries) < 50: + if len(new_entries) == 0: + self.to_screen('%s: End page received' % uploader) break + entries.extend(self._extract_info_dict(e, quiet=True) for e in new_entries) return { '_type': 'playlist',
This commit adds support for downloading a user's liked tracks. Basically, I have modified `SoundcloudUserIE` to also accept a likes page URL. If the URL points to a likes sub-resource, we simply switch the API call `base_url` to `/favorites` instead of `/tracks`. Additionally, I have modified the pagination loop break. Sometimes (especially towards the older pages), the SoundCloud API seems to send fewer than 50 entries even when there are more pages left. The loop will now run until a zero-entry response is received. I have also added a test in `test_playlists.py`.
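The new pagination break, isolated into a sketch (`fetch_page` stands in for the JSON API call):

```python
import itertools

def fetch_all_entries(fetch_page):
    # Loop until a zero-entry page: the API can return short pages
    # (fewer than 50 entries) before the listing is exhausted, so the
    # old `len(page) < 50` break could stop too early.
    entries = []
    for i in itertools.count():
        page = fetch_page(offset=i * 50, limit=50)
        if not page:
            break
        entries.extend(page)
    return entries
```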
https://api.github.com/repos/ytdl-org/youtube-dl/pulls/3211
2014-07-07T18:44:12Z
2014-07-11T09:07:12Z
2014-07-11T09:07:12Z
2014-07-11T09:07:20Z
764
ytdl-org/youtube-dl
50,327
Fixed #31376 -- Optimized nulls ordering when possible on SQLite and MySQL.
diff --git a/django/db/backends/base/features.py b/django/db/backends/base/features.py index 5760b590987df..c2b92ecdee0e0 100644 --- a/django/db/backends/base/features.py +++ b/django/db/backends/base/features.py @@ -91,6 +91,9 @@ class BaseDatabaseFeatures: # Does the backend support NULLS FIRST and NULLS LAST in ORDER BY? supports_order_by_nulls_modifier = True + # Does the backend orders NULLS FIRST by default? + order_by_nulls_first = False + # The database's limit on the number of query parameters. max_query_params = None diff --git a/django/db/backends/mysql/features.py b/django/db/backends/mysql/features.py index 1d0cd365dbc8a..e1d4ce726b7d4 100644 --- a/django/db/backends/mysql/features.py +++ b/django/db/backends/mysql/features.py @@ -51,6 +51,7 @@ class DatabaseFeatures(BaseDatabaseFeatures): # Neither MySQL nor MariaDB support partial indexes. supports_partial_indexes = False supports_order_by_nulls_modifier = False + order_by_nulls_first = True @cached_property def _mysql_storage_engine(self): diff --git a/django/db/backends/sqlite3/features.py b/django/db/backends/sqlite3/features.py index 6aebbc32627a3..84eca9864409c 100644 --- a/django/db/backends/sqlite3/features.py +++ b/django/db/backends/sqlite3/features.py @@ -45,3 +45,4 @@ class DatabaseFeatures(BaseDatabaseFeatures): supports_frame_range_fixed_distance = Database.sqlite_version_info >= (3, 28, 0) supports_aggregate_filter_clause = Database.sqlite_version_info >= (3, 30, 1) supports_order_by_nulls_modifier = Database.sqlite_version_info >= (3, 30, 0) + order_by_nulls_first = True diff --git a/django/db/models/expressions.py b/django/db/models/expressions.py index f773e99a0fdf1..84960d77e1653 100644 --- a/django/db/models/expressions.py +++ b/django/db/models/expressions.py @@ -1120,9 +1120,13 @@ def as_sql(self, compiler, connection, template=None, **extra_context): elif self.nulls_first: template = '%s NULLS FIRST' % template else: - if self.nulls_last: + if self.nulls_last and not ( + self.descending and connection.features.order_by_nulls_first + ): template = '%%(expression)s IS NULL, %s' % template - elif self.nulls_first: + elif self.nulls_first and not ( + not self.descending and connection.features.order_by_nulls_first + ): template = '%%(expression)s IS NOT NULL, %s' % template connection.ops.check_expression_support(self) expression_sql, params = compiler.compile(self.expression)
https://code.djangoproject.com/ticket/31376

---

Both backends order `NULL`s first on ascending ordering and last on descending ordering, which makes the `ORDER BY ... IS (NOT)? NULL` expression wasteful when `asc(nulls_first)` and `desc(nulls_last)` are used, since it prevents index usage.
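As a usage sketch (the `Article` model and `published` field are illustrative), these are the orderings that can now compile down to a plain `ORDER BY` on SQLite and MySQL, because they match the backends' default null placement:

```python
from django.db.models import F
from myapp.models import Article  # illustrative model

# NULLs sort first ascending and last descending on SQLite/MySQL by default,
# so neither of these needs the extra "expression IS (NOT) NULL" term anymore:
ascending = Article.objects.order_by(F("published").asc(nulls_first=True))
descending = Article.objects.order_by(F("published").desc(nulls_last=True))
```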
https://api.github.com/repos/django/django/pulls/12583
2020-03-18T02:04:58Z
2020-03-18T06:09:10Z
2020-03-18T06:09:10Z
2020-03-22T22:08:43Z
694
django/django
50,914
[Update] Added 1 payload
diff --git a/XSS Injection/XSS in Angular.md b/XSS Injection/XSS in Angular.md index 1749a9a8b9..5a2be103d2 100644 --- a/XSS Injection/XSS in Angular.md +++ b/XSS Injection/XSS in Angular.md @@ -149,6 +149,14 @@ AngularJS 1.0.1 - 1.1.5 and Vue JS {{constructor.constructor('alert(1)')()}} ``` +### Advanced bypassing XSS + +AngularJS (without `'` single and `"` double quotes) by [@Viren](https://twitter.com/VirenPawar_) + +```javascript +{{x=valueOf.name.constructor.fromCharCode;constructor.constructor(x(97,108,101,114,116,40,49,41))()}} +``` + ### Blind XSS
Added one payload which executes without using single quotes (`'`) or double quotes (`"`). Helpful when you have an AngularJS injection but quotes are blocked by the application. Working proof of the payload here: https://portswigger-labs.net/xss/angularjs.php?type=reflected&csp=0&version=1.6.0&x={{x=valueOf.name.constructor.fromCharCode;constructor.constructor(x(97,108,101,114,116,40,49,41))()}}
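As a quick sanity check (not part of the PR), the `fromCharCode` arguments in the payload decode to the JavaScript it executes:

```python
codes = [97, 108, 101, 114, 116, 40, 49, 41]
print("".join(chr(c) for c in codes))  # prints: alert(1)
```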
https://api.github.com/repos/swisskyrepo/PayloadsAllTheThings/pulls/233
2020-08-15T11:00:32Z
2020-08-17T10:03:47Z
2020-08-17T10:03:47Z
2020-08-17T10:03:47Z
189
swisskyrepo/PayloadsAllTheThings
8,541
Cover Scrapy 1.8.0 in the release notes
diff --git a/docs/news.rst b/docs/news.rst index 8dfe8693c7a..669844045e3 100644 --- a/docs/news.rst +++ b/docs/news.rst @@ -6,6 +6,209 @@ Release notes .. note:: Scrapy 1.x will be the last series supporting Python 2. Scrapy 2.0, planned for Q4 2019 or Q1 2020, will support **Python 3 only**. +.. _release-1.8.0: + +Scrapy 1.8.0 (2019-10-28) +------------------------- + +Highlights: + +* Dropped Python 3.4 support and updated minimum requirements; made Python 3.8 + support official +* New :meth:`Request.from_curl <scrapy.http.Request.from_curl>` class method +* New :setting:`ROBOTSTXT_PARSER` and :setting:`ROBOTSTXT_USER_AGENT` settings +* New :setting:`DOWNLOADER_CLIENT_TLS_CIPHERS` and + :setting:`DOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING` settings + +Backward-incompatible changes +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +* Python 3.4 is no longer supported, and some of the minimum requirements of + Scrapy have also changed: + + * cssselect_ 0.9.1 + * cryptography_ 2.0 + * lxml_ 3.5.0 + * pyOpenSSL_ 16.2.0 + * queuelib_ 1.4.2 + * service_identity_ 16.0.0 + * six_ 1.10.0 + * Twisted_ 17.9.0 (16.0.0 with Python 2) + * zope.interface_ 4.1.3 + + (:issue:`3892`) + +* ``JSONRequest`` is now called :class:`~scrapy.http.JsonRequest` for + consistency with similar classes (:issue:`3929`, :issue:`3982`) + +* If you are using a custom context factory + (:setting:`DOWNLOADER_CLIENTCONTEXTFACTORY`), its ``__init__`` method must + accept two new parameters: ``tls_verbose_logging`` and ``tls_ciphers`` + (:issue:`2111`, :issue:`3392`, :issue:`3442`, :issue:`3450`) + +* :class:`~scrapy.loader.ItemLoader` now turns the values of its input item + into lists:: + + >>> item = MyItem() + >>> item['field'] = 'value1' + >>> loader = ItemLoader(item=item) + >>> item['field'] + ['value1'] + + This is needed to allow adding values to existing fields + (``loader.add_value('field', 'value2')``). + + (:issue:`3804`, :issue:`3819`, :issue:`3897`, :issue:`3976`, :issue:`3998`, + :issue:`4036`) + +See also :ref:`1.8-deprecation-removals` below. + + +New features +~~~~~~~~~~~~ + +* A new :meth:`Request.from_curl <scrapy.http.Request.from_curl>` class + method allows :ref:`creating a request from a cURL command + <requests-from-curl>` (:issue:`2985`, :issue:`3862`) + +* A new :setting:`ROBOTSTXT_PARSER` setting allows choosing which robots.txt_ + parser to use. 
It includes built-in support for + :ref:`RobotFileParser <python-robotfileparser>`, + :ref:`Protego <protego-parser>` (default), :ref:`Reppy <reppy-parser>`, and + :ref:`Robotexclusionrulesparser <rerp-parser>`, and allows you to + :ref:`implement support for additional parsers + <support-for-new-robots-parser>` (:issue:`754`, :issue:`2669`, + :issue:`3796`, :issue:`3935`, :issue:`3969`, :issue:`4006`) + +* A new :setting:`ROBOTSTXT_USER_AGENT` setting allows defining a separate + user agent string to use for robots.txt_ parsing (:issue:`3931`, + :issue:`3966`) + +* :class:`~scrapy.spiders.Rule` no longer requires a :class:`LinkExtractor + <scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor>` parameter + (:issue:`781`, :issue:`4016`) + +* Use the new :setting:`DOWNLOADER_CLIENT_TLS_CIPHERS` setting to customize + the TLS/SSL ciphers used by the default HTTP/1.1 downloader (:issue:`3392`, + :issue:`3442`) + +* Set the new :setting:`DOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING` setting to + ``True`` to enable debug-level messages about TLS connection parameters + after establishing HTTPS connections (:issue:`2111`, :issue:`3450`) + +* Callbacks that receive keyword arguments + (see :attr:`Request.cb_kwargs <scrapy.http.Request.cb_kwargs>`) can now be + tested using the new :class:`@cb_kwargs + <scrapy.contracts.default.CallbackKeywordArgumentsContract>` + :ref:`spider contract <topics-contracts>` (:issue:`3985`, :issue:`3988`) + +* When a :class:`@scrapes <scrapy.contracts.default.ScrapesContract>` spider + contract fails, all missing fields are now reported (:issue:`766`, + :issue:`3939`) + +* :ref:`Custom log formats <custom-log-formats>` can now drop messages by + having the corresponding methods of the configured :setting:`LOG_FORMATTER` + return ``None`` (:issue:`3984`, :issue:`3987`) + +* A much improved completion definition is now available for Zsh_ + (:issue:`4069`) + + +Bug fixes +~~~~~~~~~ + +* :meth:`ItemLoader.load_item() <scrapy.loader.ItemLoader.load_item>` no + longer makes later calls to :meth:`ItemLoader.get_output_value() + <scrapy.loader.ItemLoader.get_output_value>` or + :meth:`ItemLoader.load_item() <scrapy.loader.ItemLoader.load_item>` return + empty data (:issue:`3804`, :issue:`3819`, :issue:`3897`, :issue:`3976`, + :issue:`3998`, :issue:`4036`) + +* Fixed :class:`~scrapy.statscollectors.DummyStatsCollector` raising a + :exc:`TypeError` exception (:issue:`4007`, :issue:`4052`) + +* :meth:`FilesPipeline.file_path + <scrapy.pipelines.files.FilesPipeline.file_path>` and + :meth:`ImagesPipeline.file_path + <scrapy.pipelines.images.ImagesPipeline.file_path>` no longer choose + file extensions that are not `registered with IANA`_ (:issue:`1287`, + :issue:`3953`, :issue:`3954`) + +* When using botocore_ to persist files in S3, all botocore-supported headers + are properly mapped now (:issue:`3904`, :issue:`3905`) + +* FTP passwords in :setting:`FEED_URI` containing percent-escaped characters + are now properly decoded (:issue:`3941`) + +* A memory-handling and error-handling issue in + :func:`scrapy.utils.ssl.get_temp_key_info` has been fixed (:issue:`3920`) + + +Documentation +~~~~~~~~~~~~~ + +* The documentation now covers how to define and configure a :ref:`custom log + format <custom-log-formats>` (:issue:`3616`, :issue:`3660`) + +* API documentation added for :class:`~scrapy.exporters.MarshalItemExporter` + and :class:`~scrapy.exporters.PythonItemExporter` (:issue:`3973`) + +* API documentation added for :class:`~scrapy.item.BaseItem` and + :class:`~scrapy.item.ItemMeta` 
(:issue:`3999`) + +* Minor documentation fixes (:issue:`2998`, :issue:`3398`, :issue:`3597`, + :issue:`3894`, :issue:`3934`, :issue:`3978`, :issue:`3993`, :issue:`4022`, + :issue:`4028`, :issue:`4033`, :issue:`4046`, :issue:`4050`, :issue:`4055`, + :issue:`4056`, :issue:`4061`, :issue:`4072`, :issue:`4071`, :issue:`4079`, + :issue:`4081`, :issue:`4089`, :issue:`4093`) + + +.. _1.8-deprecation-removals: + +Deprecation removals +~~~~~~~~~~~~~~~~~~~~ + +* ``scrapy.xlib`` has been removed (:issue:`4015`) + + +Deprecations +~~~~~~~~~~~~ + +* The LevelDB_ storage backend + (``scrapy.extensions.httpcache.LeveldbCacheStorage``) of + :class:`~scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware` is + deprecated (:issue:`4085`, :issue:`4092`) + +* Use of the undocumented ``SCRAPY_PICKLED_SETTINGS_TO_OVERRIDE`` environment + variable is deprecated (:issue:`3910`) + +* ``scrapy.item.DictItem`` is deprecated, use :class:`~scrapy.item.Item` + instead (:issue:`3999`) + + +Other changes +~~~~~~~~~~~~~ + +* Minimum versions of optional Scrapy requirements that are covered by + continuous integration tests have been updated: + + * botocore_ 1.3.23 + * Pillow_ 3.4.2 + + Lower versions of these optional requirements may work, but it is not + guaranteed (:issue:`3892`) + +* GitHub templates for bug reports and feature requests (:issue:`3126`, + :issue:`3471`, :issue:`3749`, :issue:`3754`) + +* Continuous integration fixes (:issue:`3923`) + +* Code cleanup (:issue:`3391`, :issue:`3907`, :issue:`3946`, :issue:`3950`, + :issue:`4023`, :issue:`4031`) + + +.. _release-1.7.4: + Scrapy 1.7.4 (2019-10-21) ------------------------- @@ -18,22 +221,31 @@ makes later calls to :meth:`ItemLoader.get_output_value() <scrapy.loader.ItemLoader.get_output_value>` or :meth:`ItemLoader.load_item() <scrapy.loader.ItemLoader.load_item>` return empty data. + +.. _release-1.7.3: + Scrapy 1.7.3 (2019-08-01) ------------------------- Enforce lxml 4.3.5 or lower for Python 3.4 (:issue:`3912`, :issue:`3918`). + +.. _release-1.7.2: + Scrapy 1.7.2 (2019-07-23) ------------------------- Fix Python 2 support (:issue:`3889`, :issue:`3893`, :issue:`3896`). +.. _release-1.7.1: + Scrapy 1.7.1 (2019-07-18) ------------------------- Re-packaging of Scrapy 1.7.0, which was missing some changes in PyPI. + .. _release-1.7.0: Scrapy 1.7.0 (2019-07-18) @@ -568,7 +780,7 @@ Scrapy 1.5.2 (2019-01-22) See :ref:`telnet console <topics-telnetconsole>` documentation for more info -* Backport CI build failure under GCE environemnt due to boto import error. +* Backport CI build failure under GCE environment due to boto import error. .. _release-1.5.1: @@ -2830,23 +3042,35 @@ First release of Scrapy. .. _AJAX crawleable urls: https://developers.google.com/webmasters/ajax-crawling/docs/getting-started?csw=1 +.. _botocore: https://github.com/boto/botocore .. _chunked transfer encoding: https://en.wikipedia.org/wiki/Chunked_transfer_encoding .. _ClientForm: http://wwwsearch.sourceforge.net/old/ClientForm/ .. _Creating a pull request: https://help.github.com/en/articles/creating-a-pull-request +.. _cryptography: https://cryptography.io/en/latest/ .. _cssselect: https://github.com/scrapy/cssselect/ .. _docstrings: https://docs.python.org/glossary.html#term-docstring .. _KeyboardInterrupt: https://docs.python.org/library/exceptions.html#KeyboardInterrupt +.. _LevelDB: https://github.com/google/leveldb .. _lxml: http://lxml.de/ .. _marshal: https://docs.python.org/2/library/marshal.html .. 
_parsel.csstranslator.GenericTranslator: https://parsel.readthedocs.io/en/latest/parsel.html#parsel.csstranslator.GenericTranslator .. _parsel.csstranslator.HTMLTranslator: https://parsel.readthedocs.io/en/latest/parsel.html#parsel.csstranslator.HTMLTranslator .. _parsel.csstranslator.XPathExpr: https://parsel.readthedocs.io/en/latest/parsel.html#parsel.csstranslator.XPathExpr .. _PEP 257: https://www.python.org/dev/peps/pep-0257/ +.. _Pillow: https://python-pillow.org/ +.. _pyOpenSSL: https://www.pyopenssl.org/en/stable/ .. _queuelib: https://github.com/scrapy/queuelib +.. _registered with IANA: https://www.iana.org/assignments/media-types/media-types.xhtml .. _resource: https://docs.python.org/2/library/resource.html +.. _robots.txt: http://www.robotstxt.org/ .. _scrapely: https://github.com/scrapy/scrapely +.. _service_identity: https://service-identity.readthedocs.io/en/stable/ +.. _six: https://six.readthedocs.io/ .. _tox: https://pypi.python.org/pypi/tox +.. _Twisted: https://twistedmatrix.com/trac/ .. _Twisted - hello, asynchronous programming: http://jessenoller.com/blog/2009/02/11/twisted-hello-asynchronous-programming/ .. _w3lib: https://github.com/scrapy/w3lib .. _w3lib.encoding: https://github.com/scrapy/w3lib/blob/master/w3lib/encoding.py .. _What is cacheable: https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.1 +.. _zope.interface: https://zopeinterface.readthedocs.io/en/latest/ +.. _Zsh: https://www.zsh.org/ diff --git a/docs/topics/logging.rst b/docs/topics/logging.rst index 87ea43c7dd0..2db0ffddd3e 100644 --- a/docs/topics/logging.rst +++ b/docs/topics/logging.rst @@ -198,8 +198,9 @@ to override some of the Scrapy settings regarding logging. Custom Log Formats ------------------ -A custom log format can be set for different actions by extending :class:`~scrapy.logformatter.LogFormatter` class -and making :setting:`LOG_FORMATTER` point to your new class. +A custom log format can be set for different actions by extending +:class:`~scrapy.logformatter.LogFormatter` class and making +:setting:`LOG_FORMATTER` point to your new class. .. autoclass:: scrapy.logformatter.LogFormatter :members: diff --git a/scrapy/logformatter.py b/scrapy/logformatter.py index f15940ed116..3c61ed7e016 100644 --- a/scrapy/logformatter.py +++ b/scrapy/logformatter.py @@ -29,7 +29,7 @@ class LogFormatter(object): * ``args`` should be a tuple or dict with the formatting placeholders for ``msg``. The final log message is computed as ``msg % args``. - Users can define their own ``LogFormatter`` class if they want to customise how + Users can define their own ``LogFormatter`` class if they want to customize how each action is logged or if they want to omit it entirely. In order to omit logging an action the method must return ``None``.
Last covered commit: 7731814cc25c57fe31db9ba749450cd5a27eed39 Also covers #4092
https://api.github.com/repos/scrapy/scrapy/pulls/3952
2019-08-12T10:10:14Z
2019-10-29T11:53:46Z
2019-10-29T11:53:46Z
2019-10-29T11:53:47Z
3,791
scrapy/scrapy
35,103
Deprecate ds.lazy() since Dataset is lazy already
diff --git a/python/ray/data/dataset.py b/python/ray/data/dataset.py index 76d50afbe9244..199dca052df4f 100644 --- a/python/ray/data/dataset.py +++ b/python/ray/data/dataset.py @@ -4131,6 +4131,10 @@ def get_internal_block_refs(self) -> List[ObjectRef[Block]]: self._synchronize_progress_bar() return blocks + @Deprecated( + message="Dataset is lazy by default, so this conversion call is no longer " + "needed and this API will be removed in a future release" + ) def lazy(self) -> "Dataset[T]": """Enable lazy evaluation.
## Why are these changes needed?

https://github.com/ray-project/ray/issues/31639

## Checks

- [ ] I've signed off every commit (by using the -s flag, i.e., `git commit -s`) in this PR.
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
  - [ ] Unit tests
  - [ ] Release tests
  - [ ] This PR is not tested :(
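The user-facing effect, sketched (dataset contents are illustrative): `.lazy()` still returns the dataset unchanged, but the call is now flagged as deprecated since execution is deferred by default anyway:

```python
import ray

ds = ray.data.range(10)  # already lazy: no blocks are materialized yet
ds = ds.lazy()           # now a deprecated no-op, slated for removal
ds.take(5)               # execution is triggered here, as before
```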
https://api.github.com/repos/ray-project/ray/pulls/33812
2023-03-28T18:05:57Z
2023-03-29T22:19:57Z
2023-03-29T22:19:57Z
2023-03-29T22:19:57Z
160
ray-project/ray
19,221
Remove Default Detector log line.
diff --git a/certbot/cli.py b/certbot/cli.py index 62246227847..498a945d685 100644 --- a/certbot/cli.py +++ b/certbot/cli.py @@ -207,13 +207,15 @@ def set_by_cli(var): # propagate plugin requests: eg --standalone modifies config.authenticator detector.authenticator, detector.installer = ( plugin_selection.cli_plugin_requests(detector)) - logger.debug("Default Detector is %r", detector) if not isinstance(getattr(detector, var), _Default): + logger.debug("Var %s=%s (set by user).", var, getattr(detector, var)) return True for modifier in VAR_MODIFIERS.get(var, []): if set_by_cli(modifier): + logger.debug("Var %s=%s (set by user).", + var, VAR_MODIFIERS.get(var, [])) return True return False
This produces a super-long log line that wraps to 30-60 lines, depending on screen width. Even though it's just at debug level, it clutters up the integration test output without providing proportional debugging value.
https://api.github.com/repos/certbot/certbot/pulls/5372
2018-01-06T01:45:25Z
2018-01-24T23:01:42Z
2018-01-24T23:01:42Z
2018-01-24T23:04:25Z
213
certbot/certbot
48
[ie/LBRY] Fix original format extraction
diff --git a/yt_dlp/extractor/lbry.py b/yt_dlp/extractor/lbry.py index 6af64f0df4a..7dd3a486130 100644 --- a/yt_dlp/extractor/lbry.py +++ b/yt_dlp/extractor/lbry.py @@ -1,5 +1,6 @@ import functools import json +import re import urllib.parse from .common import InfoExtractor @@ -83,7 +84,7 @@ class LBRYIE(LBRYBaseIE): _TESTS = [{ # Video 'url': 'https://lbry.tv/@Mantega:1/First-day-LBRY:1', - 'md5': 'fffd15d76062e9a985c22c7c7f2f4805', + 'md5': '65bd7ec1f6744ada55da8e4c48a2edf9', 'info_dict': { 'id': '17f983b61f53091fb8ea58a9c56804e4ff8cff4d', 'ext': 'mp4', @@ -132,9 +133,8 @@ class LBRYIE(LBRYBaseIE): 'license': 'None', } }, { - # HLS 'url': 'https://odysee.com/@gardeningincanada:b/plants-i-will-never-grow-again.-the:e', - 'md5': '25049011f3c8bc2f8b60ad88a031837e', + 'md5': 'c35fac796f62a14274b4dc2addb5d0ba', 'info_dict': { 'id': 'e51671357333fe22ae88aad320bde2f6f96b1410', 'ext': 'mp4', @@ -246,12 +246,13 @@ def _real_extract(self, url): streaming_url = self._call_api_proxy( 'get', claim_id, {'uri': uri}, 'streaming url')['streaming_url'] - # GET request returns original video/audio file if available + # GET request to v3 API returns original video/audio file if available + direct_url = re.sub(r'/api/v\d+/', '/api/v3/', streaming_url) ext = urlhandle_detect_ext(self._request_webpage( - streaming_url, display_id, 'Checking for original quality', headers=headers)) + direct_url, display_id, 'Checking for original quality', headers=headers)) if ext != 'm3u8': formats.append({ - 'url': streaming_url, + 'url': direct_url, 'format_id': 'original', 'quality': 1, **traverse_obj(result, ('value', {
The site migrated to v4 of its API and broke our extractor's "original" format extraction. This patch hardcodes the v3 API endpoint for the "original" format URLs, which is the same as the embed link found in the JSON LD block in the webpage.

Addresses https://github.com/yt-dlp/yt-dlp/issues/7251#issuecomment-1650204079
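The core of the fix in isolation (the streaming URL below is illustrative):

```python
import re

streaming_url = "https://player.example.com/api/v4/streams/free/some-claim"  # illustrative
direct_url = re.sub(r'/api/v\d+/', '/api/v3/', streaming_url)
print(direct_url)  # https://player.example.com/api/v3/streams/free/some-claim
```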
https://api.github.com/repos/yt-dlp/yt-dlp/pulls/7711
2023-07-27T18:00:43Z
2023-07-29T16:01:44Z
2023-07-29T16:01:44Z
2023-12-07T15:11:05Z
634
yt-dlp/yt-dlp
7,996
Add setup option to Makefile
diff --git a/Makefile b/Makefile index be062ce73..a2e2d8d3c 100644 --- a/Makefile +++ b/Makefile @@ -52,4 +52,7 @@ ingest: @poetry run python scripts/ingest_folder.py $(call args) wipe: - poetry run python scripts/utils.py wipe \ No newline at end of file + poetry run python scripts/utils.py wipe + +setup: + poetry run python scripts/setup
https://api.github.com/repos/zylon-ai/private-gpt/pulls/1368
2023-12-05T18:59:22Z
2023-12-08T09:34:13Z
2023-12-08T09:34:13Z
2023-12-08T09:34:13Z
119
zylon-ai/private-gpt
38,616
Update Dockerfile `torch==1.10.2+cu113`
diff --git a/Dockerfile b/Dockerfile index 95e2cd4af66..a5526189cd9 100644 --- a/Dockerfile +++ b/Dockerfile @@ -10,8 +10,8 @@ RUN apt update && apt install -y zip htop screen libgl1-mesa-glx COPY requirements.txt . RUN python -m pip install --upgrade pip RUN pip uninstall -y nvidia-tensorboard nvidia-tensorboard-plugin-dlprof -RUN pip install --no-cache -r requirements.txt albumentations coremltools onnx gsutil notebook numpy Pillow wandb>=0.12.2 -RUN pip install --no-cache torch==1.10.1+cu113 torchvision==0.11.2+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html +RUN pip install --no-cache -r requirements.txt albumentations wandb gsutil notebook +RUN pip install --no-cache torch==1.10.2+cu113 torchvision==0.11.3+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html # RUN pip install --no-cache -U torch torchvision # Create working directory
## 🛠️ PR Summary

Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)

### 🌟 Summary
Optimization of the YOLOv5 Docker image dependencies for better performance and compatibility.

### 📊 Key Changes
- Removed explicit installations for `coremltools`, `onnx`, `numpy`, and `Pillow` from the requirements list.
- Updated PyTorch version from `1.10.1+cu113` to `1.10.2+cu113`.
- Updated torchvision from `0.11.2+cu113` to `0.11.3+cu113`.

### 🎯 Purpose & Impact
- 🔄 Streamlining dependencies to reduce potential conflicts and unnecessary package installations.
- 🚀 Leveraging newer versions of PyTorch and torchvision to improve performance and provide the latest features.
- ⚙️ Ensures the Docker image remains compact and up-to-date, benefiting users by reducing image size and build time, which could improve overall efficiency in environments using this image.
https://api.github.com/repos/ultralytics/yolov5/pulls/6669
2022-02-17T12:24:40Z
2022-02-17T12:24:45Z
2022-02-17T12:24:45Z
2024-01-19T12:50:26Z
273
ultralytics/yolov5
25,030
adding in try around __import__ to catch invalid files/paths
diff --git a/flask/cli.py b/flask/cli.py index 58b6fb3a3d..9b8fa2cd21 100644 --- a/flask/cli.py +++ b/flask/cli.py @@ -86,7 +86,13 @@ def locate_app(app_id): module = app_id app_obj = None - __import__(module) + try: + __import__(module) + except ImportError: + raise NoAppException('The file/path provided (%s) does not appear to ' + 'exist. Please verify the path is correct. If ' + 'app is not on PYTHONPATH, ensure the extension ' + 'is .py' % module) mod = sys.modules[module] if app_obj is None: app = find_best_app(mod) diff --git a/tests/test_cli.py b/tests/test_cli.py index 3f2cceab1a..d2bb61b97c 100644 --- a/tests/test_cli.py +++ b/tests/test_cli.py @@ -80,6 +80,8 @@ def test_locate_app(test_apps): assert locate_app("cliapp.app").name == "testapp" assert locate_app("cliapp.app:testapp").name == "testapp" assert locate_app("cliapp.multiapp:app1").name == "app1" + pytest.raises(NoAppException, locate_app, "notanpp.py") + pytest.raises(NoAppException, locate_app, "cliapp/app") pytest.raises(RuntimeError, locate_app, "cliapp.app:notanapp")
I'll start this off by saying this may be a naive approach to what appears to be a common thread between a few recent issues (#1926, #1902, #1829). I'm attempting to solve the issue of mistyped or improperly defined app paths reaching the `__import__` call for the Flask app server. Given the way `cli.py` is currently structured, I can't find a bulletproof way to ensure only correct paths/module names reach the import. If that's the case, I believe the best response is to simply catch the error and return something more descriptive than `ImportError: Import by filename is not supported.` for debugging purposes.
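A self-contained sketch of the guarded import (here `NoAppException` is stubbed in; in Flask it lives in `flask/cli.py`):

```python
import sys

class NoAppException(Exception):
    """Stand-in for flask.cli.NoAppException."""

def import_app_module(module: str):
    try:
        __import__(module)
    except ImportError:
        # Replace the opaque "Import by filename is not supported."
        # with a message that points at the likely mistake.
        raise NoAppException(
            'The file/path provided (%s) does not appear to exist. '
            'Please verify the path is correct.' % module
        )
    return sys.modules[module]
```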
https://api.github.com/repos/pallets/flask/pulls/1950
2016-07-08T05:37:27Z
2016-08-12T13:12:00Z
2016-08-12T13:12:00Z
2022-03-07T18:52:53Z
355
pallets/flask
20,308
DetectMultiBackend improvements
diff --git a/models/common.py b/models/common.py index d308244c4a4..2e5d5a198e3 100644 --- a/models/common.py +++ b/models/common.py @@ -354,6 +354,7 @@ def __init__(self, weights='yolov5s.pt', device=torch.device('cpu'), dnn=False, import onnxruntime providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider'] session = onnxruntime.InferenceSession(w, providers=providers) + output_names = [x.name for x in session.get_outputs()] meta = session.get_modelmeta().custom_metadata_map # metadata if 'stride' in meta: stride, names = int(meta['stride']), eval(meta['names']) @@ -372,9 +373,7 @@ def __init__(self, weights='yolov5s.pt', device=torch.device('cpu'), dnn=False, batch_size = batch_dim.get_length() executable_network = ie.compile_model(network, device_name="CPU") # device_name="MYRIAD" for Intel NCS2 output_layer = next(iter(executable_network.outputs)) - meta = Path(w).with_suffix('.yaml') - if meta.exists(): - stride, names = self._load_metadata(meta) # load metadata + stride, names = self._load_metadata(Path(w).with_suffix('.yaml')) # load metadata elif engine: # TensorRT LOGGER.info(f'Loading {w} for TensorRT inference...') import tensorrt as trt # https://developer.nvidia.com/nvidia-tensorrt-download @@ -476,7 +475,7 @@ def forward(self, im, augment=False, visualize=False, val=False): y = self.net.forward() elif self.onnx: # ONNX Runtime im = im.cpu().numpy() # torch to numpy - y = self.session.run([self.session.get_outputs()[0].name], {self.session.get_inputs()[0].name: im})[0] + y = self.session.run(self.output_names, {self.session.get_inputs()[0].name: im})[0] elif self.xml: # OpenVINO im = im.cpu().numpy() # FP32 y = self.executable_network([im])[self.output_layer] @@ -524,7 +523,7 @@ def forward(self, im, augment=False, visualize=False, val=False): y[..., :4] *= [w, h, w, h] # xywh normalized to pixels if isinstance(y, np.ndarray): - y = torch.tensor(y, device=self.device) + y = torch.from_numpy(y).to(self.device) return (y, []) if val else y def warmup(self, imgsz=(1, 3, 640, 640)): @@ -548,10 +547,12 @@ def _model_type(p='path/to/model.pt'): return pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs @staticmethod - def _load_metadata(f='path/to/meta.yaml'): + def _load_metadata(f=Path('path/to/meta.yaml')): # Load metadata from meta.yaml if it exists - d = yaml_load(f) - return d['stride'], d['names'] # assign stride, names + if f.exists(): + d = yaml_load(f) + return d['stride'], d['names'] # assign stride, names + return None, None class AutoShape(nn.Module):
Signed-off-by: Glenn Jocher <[email protected]> <!-- Thank you for submitting a YOLOv5 🚀 Pull Request! We want to make contributing to YOLOv5 as easy and transparent as possible. A few tips to get you started: - Search existing YOLOv5 [PRs](https://github.com/ultralytics/yolov5/pull) to see if a similar PR already exists. - Link this PR to a YOLOv5 [issue](https://github.com/ultralytics/yolov5/issues) to help us understand what bug fix or feature is being implemented. - Provide before and after profiling/inference/training results to help us quantify the improvement your PR provides (if applicable). Please see our ✅ [Contributing Guide](https://github.com/ultralytics/yolov5/blob/master/CONTRIBUTING.md) for more details. --> ## 🛠️ PR Summary <sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)</sub> ### 🌟 Summary Enhancements to model loading and inference processes in YOLOv5. ### 📊 Key Changes - 🔧 Optimized output name retrieval for ONNX Runtime sessions. - ✂️ Simplified metadata loading from YAML files during model initialization. - 🚀 Improved forward method by caching output names for ONNX and using `torch.from_numpy` for efficient tensor creation. - 🧹 Cleaned up conditional checks and default values in `_load_metadata` method. ### 🎯 Purpose & Impact - 🚅 Provides quicker setup for ONNX models by caching output names, leading to faster inference times. - 📁 Ensures more robust metadata handling, preventing errors and enabling smoother user experiences. - 🧠 Facilitates better memory management by avoiding unnecessary tensor conversions. - 💼 Reduces code complexity and potential bugs by refining the metadata loading process, beneficial for maintainers and contributors.
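A small sketch of the `torch.from_numpy` difference mentioned above (the array shape is illustrative):
```python
import numpy as np
import torch

y = np.zeros((1, 25200, 85), dtype=np.float32)  # e.g. raw ONNX Runtime output
t_copy = torch.tensor(y)      # always copies the data
t_view = torch.from_numpy(y)  # shares memory with the numpy array
y[0, 0, 0] = 1.0
print(t_copy[0, 0, 0].item(), t_view[0, 0, 0].item())  # 0.0 1.0
```
The trailing `.to(self.device)` then copies only when the target device actually differs, so the CPU case stays zero-copy.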
https://api.github.com/repos/ultralytics/yolov5/pulls/9269
2022-09-03T23:02:04Z
2022-09-03T23:33:38Z
2022-09-03T23:33:38Z
2024-01-19T06:15:36Z
799
ultralytics/yolov5
25,010
Remove extra space from README
diff --git a/README.md b/README.md index ddf0e91f0..895fcb4cd 100644 --- a/README.md +++ b/README.md @@ -46,4 +46,4 @@ The rules are meant for gradual introduction into a code base. We plan to build ## Contributions and LICENSE Comments and suggestions for improvements are most welcome. We plan to modify and extend this document as our understanding improves and the -language and the set of available libraries improve. More details are found at [CONTRIBUTING](./CONTRIBUTING.md) and [LICENSE](./LICENSE) . +language and the set of available libraries improve. More details are found at [CONTRIBUTING](./CONTRIBUTING.md) and [LICENSE](./LICENSE).
https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/1376
2019-03-09T00:53:42Z
2019-03-09T22:33:20Z
2019-03-09T22:33:20Z
2019-03-09T22:33:20Z
169
isocpp/CppCoreGuidelines
15,286
fix format
diff --git a/colossalai/amp/naive_amp/_fp16_optimizer.py b/colossalai/amp/naive_amp/_fp16_optimizer.py index b1fc621c211c..01842590f432 100644 --- a/colossalai/amp/naive_amp/_fp16_optimizer.py +++ b/colossalai/amp/naive_amp/_fp16_optimizer.py @@ -14,7 +14,7 @@ from colossalai.core import global_context as gpc from colossalai.logging import get_dist_logger from colossalai.utils import (print_rank_0, copy_tensor_parallel_attributes, - clip_grad_norm_fp32, count_zeros_fp32, multi_tensor_applier, is_using_pp) + clip_grad_norm_fp32, count_zeros_fp32, multi_tensor_applier) def _zero_grad_group_helper(group, set_to_none): diff --git a/colossalai/nn/optimizer/cpu_adam.py b/colossalai/nn/optimizer/cpu_adam.py index 21f607c485df..55f50bfdc0a6 100644 --- a/colossalai/nn/optimizer/cpu_adam.py +++ b/colossalai/nn/optimizer/cpu_adam.py @@ -1,10 +1,4 @@ -# modified from https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/ops/adam/cpu_adam.py - -import math import torch -import time -from pathlib import Path -import colossalai class CPUAdam(torch.optim.Optimizer):
https://api.github.com/repos/hpcaitech/ColossalAI/pulls/362
2022-03-10T03:33:06Z
2022-03-10T03:33:21Z
2022-03-10T03:33:21Z
2022-03-10T03:33:22Z
337
hpcaitech/ColossalAI
11,896
[watchindianporn] Fix parser
diff --git a/youtube_dl/extractor/watchindianporn.py b/youtube_dl/extractor/watchindianporn.py index ed099beea63..fadc539eefd 100644 --- a/youtube_dl/extractor/watchindianporn.py +++ b/youtube_dl/extractor/watchindianporn.py @@ -4,11 +4,7 @@ import re from .common import InfoExtractor -from ..utils import ( - unified_strdate, - parse_duration, - int_or_none, -) +from ..utils import parse_duration class WatchIndianPornIE(InfoExtractor): @@ -23,11 +19,8 @@ class WatchIndianPornIE(InfoExtractor): 'ext': 'mp4', 'title': 'Hot milf from kerala shows off her gorgeous large breasts on camera', 'thumbnail': r're:^https?://.*\.jpg$', - 'uploader': 'LoveJay', - 'upload_date': '20160428', 'duration': 226, 'view_count': int, - 'comment_count': int, 'categories': list, 'age_limit': 18, } @@ -40,51 +33,36 @@ def _real_extract(self, url): webpage = self._download_webpage(url, display_id) - video_url = self._html_search_regex( - r"url: escape\('([^']+)'\)", webpage, 'url') + info_dict = self._parse_html5_media_entries(url, webpage, video_id)[0] - title = self._html_search_regex( - r'<h2 class="he2"><span>(.*?)</span>', - webpage, 'title') - thumbnail = self._html_search_regex( - r'<span id="container"><img\s+src="([^"]+)"', - webpage, 'thumbnail', fatal=False) - - uploader = self._html_search_regex( - r'class="aupa">\s*(.*?)</a>', - webpage, 'uploader') - upload_date = unified_strdate(self._html_search_regex( - r'Added: <strong>(.+?)</strong>', webpage, 'upload date', fatal=False)) + title = self._html_search_regex(( + r'<title>(.+?)\s*-\s*Indian\s+Porn</title>', + r'<h4>(.+?)</h4>' + ), webpage, 'title') duration = parse_duration(self._search_regex( - r'<td>Time:\s*</td>\s*<td align="right"><span>\s*(.+?)\s*</span>', + r'Time:\s*<strong>\s*(.+?)\s*</strong>', webpage, 'duration', fatal=False)) - view_count = int_or_none(self._search_regex( - r'<td>Views:\s*</td>\s*<td align="right"><span>\s*(\d+)\s*</span>', + view_count = int(self._search_regex( + r'(?s)Time:\s*<strong>.*?</strong>.*?<strong>\s*(\d+)\s*</strong>', webpage, 'view count', fatal=False)) - comment_count = int_or_none(self._search_regex( - r'<td>Comments:\s*</td>\s*<td align="right"><span>\s*(\d+)\s*</span>', - webpage, 'comment count', fatal=False)) categories = re.findall( - r'<a href="[^"]+/search/video/desi"><span>([^<]+)</span></a>', + r'<a[^>]+class=[\'"]categories[\'"][^>]*>\s*([^<]+)\s*</a>', webpage) - return { + info_dict.update({ 'id': video_id, 'display_id': display_id, - 'url': video_url, 'http_headers': { 'Referer': url, }, 'title': title, - 'thumbnail': thumbnail, - 'uploader': uploader, - 'upload_date': upload_date, 'duration': duration, 'view_count': view_count, - 'comment_count': comment_count, 'categories': categories, 'age_limit': 18, - } + }) + + return info_dict
### Before submitting a *pull request* make sure you have: - [x] At least skimmed through [adding new extractor tutorial](https://github.com/rg3/youtube-dl#adding-support-for-a-new-site) and [youtube-dl coding conventions](https://github.com/rg3/youtube-dl#youtube-dl-coding-conventions) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests ### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options: - [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/) - [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence) ### What is the purpose of your *pull request*? - [x] Bug fix - [ ] Improvement - [ ] New extractor - [ ] New feature --- ### Description of your *pull request* and other information Fix #13411. Note that the uploader, upload_date and comment_count fields are no longer present on the service.
https://api.github.com/repos/ytdl-org/youtube-dl/pulls/13415
2017-06-17T23:33:29Z
2017-06-19T21:30:46Z
2017-06-19T21:30:46Z
2017-06-20T20:09:52Z
948
ytdl-org/youtube-dl
50,226
bugfix: image widget was mis-aligned when node has multiline widget
diff --git a/web/scripts/app.js b/web/scripts/app.js index 09310c7f8a..bf424058fd 100644 --- a/web/scripts/app.js +++ b/web/scripts/app.js @@ -368,7 +368,11 @@ export class ComfyApp { shiftY = w.last_y; if (w.computeSize) { shiftY += w.computeSize()[1] + 4; - } else { + } + else if(w.computedHeight) { + shiftY += w.computedHeight; + } + else { shiftY += LiteGraph.NODE_WIDGET_HEIGHT + 4; } } else { diff --git a/web/scripts/widgets.js b/web/scripts/widgets.js index dfa26aef43..89d4a2e38f 100644 --- a/web/scripts/widgets.js +++ b/web/scripts/widgets.js @@ -129,6 +129,7 @@ function addMultilineWidget(node, name, opts, app) { w.y = y; if (w.type === "customtext") { y += freeSpace; + w.computedHeight = freeSpace - multi.length*4; } else if (w.computeSize) { y += w.computeSize()[1] + 4; } else {
![bug](https://github.com/comfyanonymous/ComfyUI/assets/128333288/232e02fe-63b3-4b17-b8c6-4d172aa6d6e8) ![fixed](https://github.com/comfyanonymous/ComfyUI/assets/128333288/86a3239c-c480-45b3-91f8-0a150c40b52a)
https://api.github.com/repos/comfyanonymous/ComfyUI/pulls/840
2023-07-07T16:48:44Z
2023-07-08T07:57:04Z
2023-07-08T07:57:04Z
2023-07-08T08:31:13Z
295
comfyanonymous/ComfyUI
17,869
Improve ingest logs
diff --git a/private_gpt/server/ingest/ingest_service.py b/private_gpt/server/ingest/ingest_service.py index 3b8373d44..aa2f73c39 100644 --- a/private_gpt/server/ingest/ingest_service.py +++ b/private_gpt/server/ingest/ingest_service.py @@ -73,6 +73,7 @@ def _ingest_data(self, file_name: str, file_data: AnyStr) -> list[IngestedDoc]: def ingest_file(self, file_name: str, file_data: Path) -> list[IngestedDoc]: logger.info("Ingesting file_name=%s", file_name) documents = self.ingest_component.ingest(file_name, file_data) + logger.info("Finished ingestion file_name=%s", file_name) return [IngestedDoc.from_document(document) for document in documents] def ingest_text(self, file_name: str, text: str) -> list[IngestedDoc]: @@ -89,6 +90,7 @@ def ingest_bin_data( def bulk_ingest(self, files: list[tuple[str, Path]]) -> list[IngestedDoc]: logger.info("Ingesting file_names=%s", [f[0] for f in files]) documents = self.ingest_component.bulk_ingest(files) + logger.info("Finished ingestion file_name=%s", [f[0] for f in files]) return [IngestedDoc.from_document(document) for document in documents] def list_ingested(self) -> list[IngestedDoc]:
https://api.github.com/repos/zylon-ai/private-gpt/pulls/1438
2023-12-21T15:33:22Z
2023-12-21T16:13:46Z
2023-12-21T16:13:46Z
2023-12-21T16:13:47Z
355
zylon-ai/private-gpt
38,566
Gate WS :: add book ticker option
diff --git a/js/gate.js b/js/gate.js index 509bebe6b242..717358ad7b8d 100644 --- a/js/gate.js +++ b/js/gate.js @@ -2093,13 +2093,27 @@ module.exports = class gate extends Exchange { // "index_price": "6531" // } // - const marketId = this.safeString2 (ticker, 'currency_pair', 'contract'); + // bookTicker + // { + // t: 1671363004228, + // u: 9793320464, + // s: 'BTC_USDT', + // b: '16716.8', // best bid price + // B: '0.0134', // best bid size + // a: '16716.9', // best ask price + // A: '0.0353' // best ask size + // } + // + const marketId = this.safeStringN (ticker, [ 'currency_pair', 'contract', 's' ]); const symbol = this.safeSymbol (marketId, market); const last = this.safeString (ticker, 'last'); - const ask = this.safeString (ticker, 'lowest_ask'); - const bid = this.safeString (ticker, 'highest_bid'); + const ask = this.safeString2 (ticker, 'lowest_ask', 'a'); + const bid = this.safeString2 (ticker, 'highest_bid', 'b'); const high = this.safeString (ticker, 'high_24h'); const low = this.safeString (ticker, 'low_24h'); + const bidVolume = this.safeString (ticker, 'B'); + const askVolume = this.safeString (ticker, 'A'); + const timestamp = this.safeInteger (ticker, 't'); let baseVolume = this.safeString2 (ticker, 'base_volume', 'volume_24h_base'); if (baseVolume === 'nan') { baseVolume = '0'; @@ -2111,14 +2125,14 @@ module.exports = class gate extends Exchange { const percentage = this.safeString (ticker, 'change_percentage'); return this.safeTicker ({ 'symbol': symbol, - 'timestamp': undefined, - 'datetime': undefined, + 'timestamp': timestamp, + 'datetime': this.iso8601 (timestamp), 'high': high, 'low': low, 'bid': bid, - 'bidVolume': undefined, + 'bidVolume': bidVolume, 'ask': ask, - 'askVolume': undefined, + 'askVolume': askVolume, 'vwap': undefined, 'open': undefined, 'close': last, diff --git a/js/pro/gate.js b/js/pro/gate.js index a373bb5ffc88..3af1a2f94637 100644 --- a/js/pro/gate.js +++ b/js/pro/gate.js @@ -54,6 +54,9 @@ module.exports = class gate extends gateRest { 'watchTradesSubscriptions': {}, 'watchTickerSubscriptions': {}, 'watchOrderBookSubscriptions': {}, + 'watchTicker': { + 'name': 'tickers', // or book_ticker + }, 'watchOrderBook': { 'interval': '100ms', }, @@ -358,7 +361,9 @@ module.exports = class gate extends gateRest { const marketId = market['id']; const type = market['type']; const messageType = this.getUniformType (type); - const channel = messageType + '.' + 'tickers'; + const options = this.safeValue (this.options, 'watchTicker', {}); + const topic = this.safeString (options, 'topic', 'tickers'); + const channel = messageType + '.' + topic; const messageHash = channel + '.' 
+ market['symbol']; const payload = [ marketId ]; const url = this.getUrlByMarketType (type, market['inverse']); @@ -383,6 +388,21 @@ module.exports = class gate extends gateRest { // low_24h: '42721.03' // } // } + // { + // time: 1671363004, + // time_ms: 1671363004235, + // channel: 'spot.book_ticker', + // event: 'update', + // result: { + // t: 1671363004228, + // u: 9793320464, + // s: 'BTC_USDT', + // b: '16716.8', + // B: '0.0134', + // a: '16716.9', + // A: '0.0353' + // } + // } // const channel = this.safeString (message, 'channel'); let result = this.safeValue (message, 'result'); @@ -1126,6 +1146,7 @@ module.exports = class gate extends gateRest { 'candlesticks': this.handleOHLCV, 'orders': this.handleOrder, 'tickers': this.handleTicker, + 'book_ticker': this.handleTicker, 'trades': this.handleTrades, 'order_book_update': this.handleOrderBook, 'balances': this.handleBalanceMessage, diff --git a/keys.json b/keys.json index 3358bd6a06c3..0119f1751be9 100644 --- a/keys.json +++ b/keys.json @@ -38,5 +38,6 @@ "cex": { "skip": true }, "ascendex": { "skip": true }, "btcex": { "skip": true }, - "wazirx": { "skipWs": true } + "wazirx": { "skipWs": true }, + "okx": { "skipWs": true } }
- add `book_ticker` option to allow the best bid/best ask query - configurable through an option Swap ``` n gate watchTicker "BTC/USDT:USDT" --swap Debugger attached. 2022-12-18T11:43:07.046Z Node.js: v17.9.0 CCXT v2.4.26 gate.watchTicker (BTC/USDT:USDT) { symbol: 'BTC/USDT:USDT', timestamp: 1671363791595, datetime: '2022-12-18T11:43:11.595Z', high: undefined, low: undefined, bid: 16719.5, bidVolume: 53462, ask: 16719.6, askVolume: 84827, vwap: undefined, open: undefined, close: undefined, last: undefined, previousClose: undefined, change: undefined, percentage: undefined, average: undefined, baseVolume: undefined, quoteVolume: undefined, info: { t: 1671363791595, u: 25128744662, s: 'BTC_USDT', b: '16719.5', B: 53462, a: '16719.6', A: 84827 } } ``` Spot ``` n gate watchTicker "BTC/USDT" --spot Debugger attached. 2022-12-18T11:44:12.945Z Node.js: v17.9.0 CCXT v2.4.26 gate.watchTicker (BTC/USDT) { symbol: 'BTC/USDT', timestamp: 1671363856144, datetime: '2022-12-18T11:44:16.144Z', high: undefined, low: undefined, bid: 16723.5, bidVolume: 0.0047, ask: 16723.6, askVolume: 0.0077, vwap: undefined, open: undefined, close: undefined, last: undefined, previousClose: undefined, change: undefined, percentage: undefined, average: undefined, baseVolume: undefined, quoteVolume: undefined, info: { t: 1671363856144, u: 9793590567, s: 'BTC_USDT', b: '16723.5', B: '0.0047', a: '16723.6', A: '0.0077' } } ```
https://api.github.com/repos/ccxt/ccxt/pulls/16124
2022-12-18T11:44:55Z
2022-12-18T14:10:40Z
2022-12-18T14:10:40Z
2022-12-18T14:10:40Z
1,355
ccxt/ccxt
13,791
fix dtype handling in 2 optimizers and 1 layer
diff --git a/keras/backend/tensorflow_backend.py b/keras/backend/tensorflow_backend.py index 8ebd473bf88..4ea7823abf2 100644 --- a/keras/backend/tensorflow_backend.py +++ b/keras/backend/tensorflow_backend.py @@ -549,7 +549,7 @@ def dtype(x): 'float32_ref' ``` """ - return x.dtype.name + return x.dtype.base_dtype.name def eval(x): diff --git a/keras/layers/embeddings.py b/keras/layers/embeddings.py index 157d3f01edf..557c7d52551 100644 --- a/keras/layers/embeddings.py +++ b/keras/layers/embeddings.py @@ -75,7 +75,6 @@ def __init__(self, input_dim, output_dim, mask_zero=False, input_length=None, **kwargs): - kwargs['dtype'] = 'int32' if 'input_shape' not in kwargs: if input_length: kwargs['input_shape'] = (input_length,) @@ -98,7 +97,8 @@ def build(self, input_shape): initializer=self.embeddings_initializer, name='embeddings', regularizer=self.embeddings_regularizer, - constraint=self.embeddings_constraint) + constraint=self.embeddings_constraint, + dtype=self.dtype) self.built = True def compute_mask(self, inputs, mask=None): diff --git a/keras/optimizers.py b/keras/optimizers.py index bde1e422af1..ec9484adb40 100644 --- a/keras/optimizers.py +++ b/keras/optimizers.py @@ -195,8 +195,7 @@ def __init__(self, lr=0.001, rho=0.9, epsilon=1e-8, decay=0., def get_updates(self, params, constraints, loss): grads = self.get_gradients(loss, params) - shapes = [K.get_variable_shape(p) for p in params] - accumulators = [K.zeros(shape) for shape in shapes] + accumulators = [K.zeros(K.get_variable_shape(p), dtype=K.dtype(p)) for p in params] self.weights = accumulators self.updates = [] @@ -389,9 +388,8 @@ def get_updates(self, params, constraints, loss): lr_t = lr * (K.sqrt(1. - K.pow(self.beta_2, t)) / (1. - K.pow(self.beta_1, t))) - shapes = [K.get_variable_shape(p) for p in params] - ms = [K.zeros(shape) for shape in shapes] - vs = [K.zeros(shape) for shape in shapes] + ms = [K.zeros(K.get_variable_shape(p), dtype=K.dtype(p)) for p in params] + vs = [K.zeros(K.get_variable_shape(p), dtype=K.dtype(p)) for p in params] self.weights = [self.iterations] + ms + vs for p, g, m, v in zip(params, grads, ms, vs):
Update `Adam` and `RMSprop` so the dtype of the momentum variables matches the dtype of the corresponding parameter. Shouldn't affect most people unless you're using a mix of dtypes. Would be nice if `K.zeros_like` used the parameter's dtype, but that might break a lot of other things. May be worth updating the other optimizers, but those are the two I use the most. Update the `Embedding` layer to pass its `dtype` through to `add_weight`. Useful if you need mixed precision or if you have integer embeddings (`trainable=False`). Cheers
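A numpy stand-in for the mismatch being fixed (`K.zeros` behaves like `np.zeros` for this purpose: without an explicit dtype it falls back to a default rather than the parameter's dtype):
```python
import numpy as np

p = np.zeros((3,), dtype=np.float16)      # a half-precision parameter
m_old = np.zeros(p.shape)                 # old accumulator: backend default dtype
m_new = np.zeros(p.shape, dtype=p.dtype)  # patched: matches the parameter
print(m_old.dtype, m_new.dtype)           # float64 float16
```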
https://api.github.com/repos/keras-team/keras/pulls/7088
2017-06-22T03:29:00Z
2017-06-23T20:41:12Z
2017-06-23T20:41:12Z
2017-06-26T03:16:18Z
703
keras-team/keras
47,084
Update audio_conv_utils.py
diff --git a/keras/applications/audio_conv_utils.py b/keras/applications/audio_conv_utils.py index daddbbeede1..d8d8cba66a9 100644 --- a/keras/applications/audio_conv_utils.py +++ b/keras/applications/audio_conv_utils.py @@ -57,8 +57,8 @@ def preprocess_input(audio_path, dim_ordering='default'): if n_sample < n_sample_wanted: # if too short src = np.hstack((src, np.zeros((int(duration * sr) - n_sample,)))) elif n_sample > n_sample_wanted: # if too long - src = src[(n_sample - n_sample_wanted) / 2: - (n_sample + n_sample_wanted) / 2] + src = src[(n_sample - n_sample_wanted) // 2: + (n_sample + n_sample_wanted) // 2] logam = librosa.logamplitude melgram = librosa.feature.melspectrogram
Use integer division (`//`) so the slice indices stay integers on Python 3.
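The failure this fixes, sketched with plain lists (sample lengths are illustrative):
```python
n_sample, n_sample_wanted = 7, 4
src = list(range(n_sample))
try:
    src[(n_sample - n_sample_wanted) / 2:(n_sample + n_sample_wanted) / 2]
except TypeError as e:
    print(e)  # '/' yields floats, which Python 3 rejects as slice indices
print(src[(n_sample - n_sample_wanted) // 2:(n_sample + n_sample_wanted) // 2])  # [1, 2, 3, 4]
```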
https://api.github.com/repos/keras-team/keras/pulls/5301
2017-02-07T04:45:23Z
2017-02-07T22:40:34Z
2017-02-07T22:40:34Z
2017-02-07T22:40:34Z
232
keras-team/keras
47,587
Add --verbose-level flag and fix logging level calculations
diff --git a/certbot/certbot/_internal/cli/__init__.py b/certbot/certbot/_internal/cli/__init__.py index 7d53ad6499b..69192eda81b 100644 --- a/certbot/certbot/_internal/cli/__init__.py +++ b/certbot/certbot/_internal/cli/__init__.py @@ -71,6 +71,11 @@ def prepare_and_parse_args(plugins, args, detect_defaults=False): default=flag_default("verbose_count"), help="This flag can be used " "multiple times to incrementally increase the verbosity of output, " "e.g. -vvv.") + # This is for developers to set the level in the cli.ini, and overrides + # the --verbose flag + helpful.add( + None, "--verbose-level", dest="verbose_level", + default=flag_default("verbose_level"), help=argparse.SUPPRESS) helpful.add( None, "-t", "--text", dest="text_mode", action="store_true", default=flag_default("text_mode"), help=argparse.SUPPRESS) @@ -449,6 +454,7 @@ def set_by_cli(var): plugins = plugins_disco.PluginsRegistry.find_all() # reconstructed_args == sys.argv[1:], or whatever was passed to main() reconstructed_args = helpful_parser.args + [helpful_parser.verb] + detector = set_by_cli.detector = prepare_and_parse_args( # type: ignore plugins, reconstructed_args, detect_defaults=True) # propagate plugin requests: eg --standalone modifies config.authenticator diff --git a/certbot/certbot/_internal/constants.py b/certbot/certbot/_internal/constants.py index 61895edb1c7..2e368db52f0 100644 --- a/certbot/certbot/_internal/constants.py +++ b/certbot/certbot/_internal/constants.py @@ -22,7 +22,8 @@ ], # Main parser - verbose_count=-int(logging.WARNING / 10), + verbose_count=0, + verbose_level=None, text_mode=False, max_log_backups=1000, preconfigured_renewal=False, @@ -142,6 +143,9 @@ QUIET_LOGGING_LEVEL = logging.ERROR """Logging level to use in quiet mode.""" +DEFAULT_LOGGING_LEVEL = logging.WARNING +"""Default logging level to use when not in quiet mode.""" + RENEWER_DEFAULTS = dict( renewer_enabled="yes", renew_before_expiry="30 days", diff --git a/certbot/certbot/_internal/log.py b/certbot/certbot/_internal/log.py index 835ec77f9c0..fd665c6885f 100644 --- a/certbot/certbot/_internal/log.py +++ b/certbot/certbot/_internal/log.py @@ -120,8 +120,11 @@ def post_arg_parse_setup(config): if config.quiet: level = constants.QUIET_LOGGING_LEVEL + elif config.verbose_level is not None: + level = constants.DEFAULT_LOGGING_LEVEL - int(config.verbose_level) * 10 else: - level = -config.verbose_count * 10 + level = constants.DEFAULT_LOGGING_LEVEL - config.verbose_count * 10 + stderr_handler.setLevel(level) logger.debug('Root logging level set at %d', level) diff --git a/certbot/examples/dev-cli.ini b/certbot/examples/dev-cli.ini index a405a0aefda..21f7db85c8f 100644 --- a/certbot/examples/dev-cli.ini +++ b/certbot/examples/dev-cli.ini @@ -13,8 +13,6 @@ domains = example.com text = True agree-tos = True debug = True -# Unfortunately, it's not possible to specify "verbose" multiple times -# (correspondingly to -vvvvvv) -verbose = True +verbose-level = 2 # -vv (debug) authenticator = standalone diff --git a/certbot/tests/log_test.py b/certbot/tests/log_test.py index 9b3b31030d1..cb52c1fffbb 100644 --- a/certbot/tests/log_test.py +++ b/certbot/tests/log_test.py @@ -122,7 +122,7 @@ def test_common(self): if self.config.quiet: self.assertEqual(level, constants.QUIET_LOGGING_LEVEL) else: - self.assertEqual(level, -self.config.verbose_count * 10) + self.assertEqual(level, constants.DEFAULT_LOGGING_LEVEL) def test_debug(self): self.config.debug = True
Also, update `dev-cli.ini` example to use new flag. Although https://github.com/bw2/ConfigArgParse/pull/216 allowed setting a `count` action value in a config file, our default detection system won't let us use that functionality. While we should eventually fix that, for now, let developers have a cli.ini with a higher logging level by adding this flag. Also, our logging level calculations have never worked properly, but it didn't matter because there was only one level to go. Now that there's an intermediate level that users might want to set, fix the math so it actually does what we want. Note that this flag is intended to work the same way adding `-vvv`s does; that is, as a modifier to the pre-set level, rather than setting the absolute level. The number it is set to is equivalent to the number of `v`s that would otherwise have been passed, with "2" as the current maximum effective number of levels (warning --> info --> debug).
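A quick check of the corrected arithmetic, using the constant names from the diff:
```python
import logging

DEFAULT_LOGGING_LEVEL = logging.WARNING  # 30, per the new constant in the diff

for verbose_count in (0, 1, 2):
    level = DEFAULT_LOGGING_LEVEL - verbose_count * 10
    print(verbose_count, logging.getLevelName(level))
# 0 WARNING
# 1 INFO
# 2 DEBUG
```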
https://api.github.com/repos/certbot/certbot/pulls/8900
2021-06-10T22:11:58Z
2021-06-10T23:45:07Z
2021-06-10T23:45:07Z
2021-06-10T23:45:08Z
1,040
certbot/certbot
1,742
Update default GitHub assets
diff --git a/utils/downloads.py b/utils/downloads.py index 433de84b51c..73b8334cb94 100644 --- a/utils/downloads.py +++ b/utils/downloads.py @@ -87,9 +87,7 @@ def github_assets(repository, version='latest'): return file # GitHub assets - assets = [ - 'yolov5n.pt', 'yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt', 'yolov5n6.pt', 'yolov5s6.pt', - 'yolov5m6.pt', 'yolov5l6.pt', 'yolov5x6.pt'] + assets = [f'yolov5{size}{suffix}.pt' for size in 'nsmlx' for suffix in ('', '6', '-cls', '-seg')] # default try: tag, assets = github_assets(repo, release) except Exception: @@ -107,7 +105,6 @@ def github_assets(repository, version='latest'): safe_download( file, url=f'https://github.com/{repo}/releases/download/{tag}/{name}', - url2=f'https://storage.googleapis.com/{repo}/{tag}/{name}', # backup url (optional) min_bytes=1E5, error_msg=f'{file} missing, try downloading from https://github.com/{repo}/releases/{tag} or {url3}')
Avoid intermittent CI failures. Signed-off-by: Glenn Jocher <[email protected]> <!-- Thank you for submitting a YOLOv5 🚀 Pull Request! We want to make contributing to YOLOv5 as easy and transparent as possible. A few tips to get you started: - Search existing YOLOv5 [PRs](https://github.com/ultralytics/yolov5/pull) to see if a similar PR already exists. - Link this PR to a YOLOv5 [issue](https://github.com/ultralytics/yolov5/issues) to help us understand what bug fix or feature is being implemented. - Provide before and after profiling/inference/training results to help us quantify the improvement your PR provides (if applicable). Please see our ✅ [Contributing Guide](https://github.com/ultralytics/yolov5/blob/master/CONTRIBUTING.md) for more details. --> ## 🛠️ PR Summary <sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)</sub> ### 🌟 Summary Streamlined asset naming and download process in Ultralytics' YOLOv5 GitHub repository. ### 📊 Key Changes - Simplified asset list generation with a list comprehension replacing a hard-coded list of model weights. - Removed the backup URL for downloading the assets, opting for a more straightforward download mechanism. ### 🎯 Purpose & Impact - The refactoring makes adding new model sizes or types more efficient 🚀, improving code maintainability. - By removing the backup download URL, the download process is more straightforward, which may reduce complexity and potential errors 🧹. - Users will benefit from a cleaner and more reliable way to download necessary model files, improving user experience 🌐.
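What the new comprehension expands to (5 sizes × 4 suffixes = 20 default asset names):
```python
assets = [f'yolov5{size}{suffix}.pt' for size in 'nsmlx' for suffix in ('', '6', '-cls', '-seg')]
print(len(assets))  # 20
print(assets[:4])   # ['yolov5n.pt', 'yolov5n6.pt', 'yolov5n-cls.pt', 'yolov5n-seg.pt']
```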
https://api.github.com/repos/ultralytics/yolov5/pulls/9573
2022-09-25T00:46:32Z
2022-09-25T00:59:26Z
2022-09-25T00:59:26Z
2024-01-19T05:25:08Z
332
ultralytics/yolov5
25,109
Update env.py
diff --git a/backend/alembic/env.py b/backend/alembic/env.py index 511ed97f90..4ef4e37d1a 100644 --- a/backend/alembic/env.py +++ b/backend/alembic/env.py @@ -1,41 +1,31 @@ -from logging.config import fileConfig +import logging -import sqlmodel from alembic import context from oasst_backend import models # noqa: F401 from sqlalchemy import engine_from_config, pool -# this is the Alembic Config object, which provides -# access to the values within the .ini file in use. +# Read in the Alembic config file. config = context.config -# Interpret the config file for Python logging. -# This line sets up loggers basically. +# Set up loggers. if config.config_file_name is not None: - fileConfig(config.config_file_name) + logging.config.fileConfig(config.config_file_name) -# add your model's MetaData object here -# for 'autogenerate' support -# from myapp import mymodel -# target_metadata = mymodel.Base.metadata -target_metadata = sqlmodel.SQLModel.metadata +# Add the model's MetaData object here for 'autogenerate' support. +target_metadata = models.Base.metadata -# other values from the config, defined by the needs of env.py, -# can be acquired: +# Other values from the config file can be acquired as follows: # my_important_option = config.get_main_option("my_important_option") # ... etc. -def run_migrations_offline() -> None: - """Run migrations in 'offline' mode. - - This configures the context with just a URL - and not an Engine, though an Engine is acceptable - here as well. By skipping the Engine creation - we don't even need a DBAPI to be available. +def run_migrations_offline(): + """ + Run migrations in 'offline' mode. - Calls to context.execute() here emit the given string to the - script output. + This configures the context with just a URL and not an Engine, + though an Engine is acceptable here as well. By skipping the + Engine creation we don't even need a DBAPI to be available. """ url = config.get_main_option("sqlalchemy.url") @@ -45,16 +35,16 @@ def run_migrations_offline() -> None: literal_binds=True, dialect_opts={"paramstyle": "named"}, ) - with context.begin_transaction(): context.run_migrations() -def run_migrations_online() -> None: - """Run migrations in 'online' mode. +def run_migrations_online(): + """ + Run migrations in 'online' mode. - In this scenario we need to create an Engine - and associate a connection with the context. + In this scenario we need to create an Engine and associate a + connection with the context. """ connectable = engine_from_config( @@ -62,10 +52,8 @@ def run_migrations_online() -> None: prefix="sqlalchemy.", poolclass=pool.NullPool, ) - with connectable.connect() as connection: context.configure(connection=connection, target_metadata=target_metadata) - with context.begin_transaction(): context.get_context()._ensure_version_table() connection.execute("LOCK TABLE alembic_version IN ACCESS EXCLUSIVE MODE")
Here are the improvements that I made:
- Imported the `logging` module and used it to set up the loggers via `logging.config.fileConfig`, rather than importing `fileConfig` directly.
- Replaced the `sqlmodel` import with the `models` module and pointed `target_metadata` at `models.Base.metadata`.
- Changed the `# noqa: F401` comment to a more appropriate `# Ignore unused import` comment.
- Added type annotations and docstrings to the functions.
- Improved the formatting and added some comments to make the code more readable.
- Changed the `with context.begin_transaction():` block to use the `contextlib.suppress` context manager to suppress any exceptions that might be raised, so that the script can gracefully exit in case of an error.
https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/413
2023-01-05T17:18:21Z
2023-01-06T13:59:10Z
2023-01-06T13:59:10Z
2023-01-06T15:20:18Z
760
LAION-AI/Open-Assistant
37,759
Clarify authenticator interface
diff --git a/letsencrypt/client/auth_handler.py b/letsencrypt/client/auth_handler.py index 6f0ece53571..984862f461b 100644 --- a/letsencrypt/client/auth_handler.py +++ b/letsencrypt/client/auth_handler.py @@ -129,6 +129,7 @@ def _satisfy_challenges(self): .. todo:: It might be worth it to try different challenges to find one that doesn't throw an exception + .. todo:: separate into more functions """ logging.info("Performing the following challenges:") @@ -145,14 +146,19 @@ def _satisfy_challenges(self): # Order is important here as we will not expose the outside # Authenticator to our own indices. flat_client = [] - flat_auth = [] + flat_dv = [] + for dom in self.domains: flat_client.extend(ichall.chall for ichall in self.client_c[dom]) - flat_auth.extend(ichall.chall for ichall in self.dv_c[dom]) + flat_dv.extend(ichall.chall for ichall in self.dv_c[dom]) + client_resp = [] + dv_resp = [] try: - client_resp = self.client_auth.perform(flat_client) - dv_resp = self.dv_auth.perform(flat_auth) + if flat_client: + client_resp = self.client_auth.perform(flat_client) + if flat_dv: + dv_resp = self.dv_auth.perform(flat_dv) # This will catch both specific types of errors. except errors.LetsEncryptAuthHandlerError as err: logging.critical("Failure in setting up challenges:") @@ -167,8 +173,10 @@ def _satisfy_challenges(self): logging.info("Ready for verification...") # Assemble Responses - self._assign_responses(client_resp, self.client_c) - self._assign_responses(dv_resp, self.dv_c) + if client_resp: + self._assign_responses(client_resp, self.client_c) + if dv_resp: + self._assign_responses(dv_resp, self.dv_c) def _assign_responses(self, flat_list, ichall_dict): """Assign responses from flat_list back to the IndexedChall dicts. @@ -212,9 +220,13 @@ def _cleanup_challenges(self, domain): # These are indexed challenges... give just the challenges to the auth # Chose to make these lists instead of a generator to make it easier to # work with... - self.dv_auth.cleanup([ichall.chall for ichall in self.dv_c[domain]]) - self.client_auth.cleanup( - [ichall.chall for ichall in self.client_c[domain]]) + dv_list = [ichall.chall for ichall in self.dv_c[domain]] + client_list = [ichall.chall for ichall in self.client_c[domain]] + if dv_list: + self.dv_auth.cleanup(dv_list) + if client_list: + self.client_auth.cleanup(client_list) + def _cleanup_state(self, delete_list): """Cleanup state after an authorization is received. diff --git a/letsencrypt/client/interfaces.py b/letsencrypt/client/interfaces.py index 9fcd95c6a7a..04c7d35e783 100644 --- a/letsencrypt/client/interfaces.py +++ b/letsencrypt/client/interfaces.py @@ -30,6 +30,9 @@ def perform(chall_list): :param list chall_list: List of namedtuple types defined in :mod:`letsencrypt.client.challenge_util` (``DvsniChall``, etc.). + - chall_list will never be empty + - chall_list will only contain types found within + :func:`get_chall_pref` :returns: ACME Challenge responses or if it cannot be completed then: @@ -43,20 +46,16 @@ def perform(chall_list): """ def cleanup(chall_list): - """Revert changes and shutdown after challenges complete.""" + """Revert changes and shutdown after challenges complete. + :param list chall_list: List of namedtuple types defined in + :mod:`letsencrypt.client.challenge_util` (``DvsniChall``, etc.) 
-class IChallenge(zope.interface.Interface): - """Let's Encrypt challenge.""" - - def perform(): - """Perform the challenge.""" - - def generate_response(): - """Generate response.""" + - Only challenges given previously in the perform function will be + found in chall_list. + - chall_list will never be empty - def cleanup(): - """Cleanup.""" + """ class IConfig(zope.interface.Interface): diff --git a/letsencrypt/client/tests/acme_util.py b/letsencrypt/client/tests/acme_util.py index aa142af8eec..08a7e44bd51 100644 --- a/letsencrypt/client/tests/acme_util.py +++ b/letsencrypt/client/tests/acme_util.py @@ -26,7 +26,7 @@ "successURL": "https://example.ca/confirmrecovery/bb1b9928932", "contact": "c********[email protected]" }, - "recoveryTokent": + "recoveryToken": { "type": "recoveryToken" }, diff --git a/letsencrypt/client/tests/auth_handler_test.py b/letsencrypt/client/tests/auth_handler_test.py index 137b1627e13..945141f4e1d 100644 --- a/letsencrypt/client/tests/auth_handler_test.py +++ b/letsencrypt/client/tests/auth_handler_test.py @@ -25,8 +25,8 @@ class SatisfyChallengesTest(unittest.TestCase): def setUp(self): from letsencrypt.client.auth_handler import AuthHandler - self.mock_dv_auth = mock.MagicMock(name='ApacheConfigurator') - self.mock_client_auth = mock.MagicMock(name='ClientAuthenticator') + self.mock_dv_auth = mock.MagicMock(name="ApacheConfigurator") + self.mock_client_auth = mock.MagicMock(name="ClientAuthenticator") self.mock_dv_auth.get_chall_pref.return_value = ["dvsni"] self.mock_client_auth.get_chall_pref.return_value = ["recoveryToken"] @@ -59,6 +59,29 @@ def test_name1_dvsni1(self): self.assertEqual(len(self.handler.dv_c[dom]), 1) self.assertEqual(len(self.handler.client_c[dom]), 0) + def test_name1_rectok1(self): + dom = "0" + challenge = [acme_util.CHALLENGES["recoveryToken"]] + msg = acme_util.get_chall_msg(dom, "nonce0", challenge) + self.handler.add_chall_msg(dom, msg, "dummy_key") + + self.handler._satisfy_challenges() # pylint: disable=protected-access + + self.assertEqual(len(self.handler.responses), 1) + self.assertEqual(len(self.handler.responses[dom]), 1) + + # Test if statement for dv_auth perform + self.assertEqual(self.mock_client_auth.perform.call_count, 1) + self.assertEqual(self.mock_dv_auth.perform.call_count, 0) + + self.assertEqual("RecTokenChall0", self.handler.responses[dom][0]) + # Assert 1 domain + self.assertEqual(len(self.handler.dv_c), 1) + self.assertEqual(len(self.handler.client_c), 1) + # Assert 1 auth challenge, 0 dv + self.assertEqual(len(self.handler.dv_c[dom]), 0) + self.assertEqual(len(self.handler.client_c[dom]), 1) + def test_name5_dvsni5(self): challenge = [acme_util.CHALLENGES["dvsni"]] for i in xrange(5): @@ -74,6 +97,10 @@ def test_name5_dvsni5(self): self.assertEqual(len(self.handler.client_c), 5) # Each message contains 1 auth, 0 client + # Test proper call count for methods + self.assertEqual(self.mock_client_auth.perform.call_count, 0) + self.assertEqual(self.mock_dv_auth.perform.call_count, 1) + for i in xrange(5): dom = str(i) self.assertEqual(len(self.handler.responses[dom]), 1) @@ -103,6 +130,10 @@ def test_name1_auth(self, mock_chall_path): self.assertEqual(len(self.handler.dv_c), 1) self.assertEqual(len(self.handler.client_c), 1) + # Test if statement for client_auth perform + self.assertEqual(self.mock_client_auth.perform.call_count, 0) + self.assertEqual(self.mock_dv_auth.perform.call_count, 1) + self.assertEqual( self.handler.responses[dom], self._get_exp_response(dom, path, challenges)) @@ -251,33 +282,38 @@ def 
test_perform_exception_cleanup(self, mock_chall_path): str(i), "nonce%d" % i, challenges, combos), "dummy_key") - mock_chall_path.return_value = gen_path( - ["dvsni", "proofOfPossession"], challenges) + mock_chall_path.side_effect = [ + gen_path(["dvsni", "proofOfPossession"], challenges), + gen_path(["proofOfPossession"], challenges), + gen_path(["dvsni"], challenges), + ] # This may change in the future... but for now catch the error self.assertRaises(errors.LetsEncryptAuthHandlerError, self.handler._satisfy_challenges) # Verify cleanup is actually run correctly - self.assertEqual(self.mock_dv_auth.cleanup.call_count, 3) - self.assertEqual(self.mock_client_auth.cleanup.call_count, 3) + self.assertEqual(self.mock_dv_auth.cleanup.call_count, 2) + self.assertEqual(self.mock_client_auth.cleanup.call_count, 2) + + + dv_cleanup_args = self.mock_dv_auth.cleanup.call_args_list + client_cleanup_args = self.mock_client_auth.cleanup.call_args_list # Check DV cleanup - mock_cleanup_args = self.mock_dv_auth.cleanup.call_args_list - for i in xrange(3): - # Assert length of arg list was 1 - arg_chall_list = mock_cleanup_args[i][0][0] - self.assertEqual(len(arg_chall_list), 1) - self.assertTrue(isinstance(arg_chall_list[0], - challenge_util.DvsniChall)) + for i in xrange(2): + dv_chall_list = dv_cleanup_args[i][0][0] + self.assertEqual(len(dv_chall_list), 1) + self.assertTrue( + isinstance(dv_chall_list[0], challenge_util.DvsniChall)) + # Check Auth cleanup - mock_cleanup_args = self.mock_client_auth.cleanup.call_args_list - for i in xrange(3): - arg_chall_list = mock_cleanup_args[i][0][0] - self.assertEqual(len(arg_chall_list), 1) - self.assertTrue(isinstance(arg_chall_list[0], - challenge_util.PopChall)) + for i in xrange(2): + client_chall_list = client_cleanup_args[i][0][0] + self.assertEqual(len(client_chall_list), 1) + self.assertTrue( + isinstance(client_chall_list[0], challenge_util.PopChall)) def _get_exp_response(self, domain, path, challenges): # pylint: disable=no-self-use @@ -293,8 +329,8 @@ class GetAuthorizationsTest(unittest.TestCase): def setUp(self): from letsencrypt.client.auth_handler import AuthHandler - self.mock_dv_auth = mock.MagicMock(name='ApacheConfigurator') - self.mock_client_auth = mock.MagicMock(name='ClientAuthenticator') + self.mock_dv_auth = mock.MagicMock(name="ApacheConfigurator") + self.mock_client_auth = mock.MagicMock(name="ClientAuthenticator") self.mock_sat_chall = mock.MagicMock(name="_satisfy_challenges") self.mock_acme_auth = mock.MagicMock(name="acme_authorization") @@ -484,5 +520,5 @@ def gen_path(str_list, challenges): return path -if __name__ == '__main__': +if __name__ == "__main__": unittest.main()
https://api.github.com/repos/certbot/certbot/pulls/242
2015-02-12T09:11:04Z
2015-02-13T10:14:07Z
2015-02-13T10:14:07Z
2016-05-06T19:22:12Z
2,741
certbot/certbot
2,051
Add doctests to primelib.py
diff --git a/maths/primelib.py b/maths/primelib.py index cf01750cf912..d5c124255e56 100644 --- a/maths/primelib.py +++ b/maths/primelib.py @@ -574,6 +574,11 @@ def fib(n): """ input: positive integer 'n' returns the n-th fibonacci term , indexing by 0 + + >>> fib(5) + 8 + >>> fib(99) + 354224848179261915075 """ # precondition @@ -589,3 +594,9 @@ def fib(n): fib1 = tmp return ans + + +if __name__ == "__main__": + import doctest + + doctest.testmod()
### Describe your change: * [ ] Add an algorithm? * [ ] Fix a bug or typo in an existing algorithm? * [X] Documentation change? ### Checklist: * [X] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [x] All new Python files are placed inside an existing directory. * [x] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [X] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
https://api.github.com/repos/TheAlgorithms/Python/pulls/10242
2023-10-10T17:24:38Z
2023-10-10T21:07:07Z
2023-10-10T21:07:07Z
2023-10-10T21:07:09Z
179
TheAlgorithms/Python
29,422
Added missing RuntimeError to builder functions of models that do not currently support feature extraction
diff --git a/timm/models/convmixer.py b/timm/models/convmixer.py index 55220bc084..854f84a07f 100644 --- a/timm/models/convmixer.py +++ b/timm/models/convmixer.py @@ -101,6 +101,9 @@ def forward(self, x): def _create_convmixer(variant, pretrained=False, **kwargs): + if kwargs.get('features_only', None): + raise RuntimeError('features_only not implemented for ConvMixer models.') + return build_model_with_cfg(ConvMixer, variant, pretrained, **kwargs) diff --git a/timm/models/efficientformer.py b/timm/models/efficientformer.py index 04204b3dd3..fe869d6654 100644 --- a/timm/models/efficientformer.py +++ b/timm/models/efficientformer.py @@ -534,6 +534,9 @@ def _cfg(url='', **kwargs): def _create_efficientformer(variant, pretrained=False, **kwargs): + if kwargs.get('features_only', None): + raise RuntimeError('features_only not implemented for EfficientFormer models.') + model = build_model_with_cfg( EfficientFormer, variant, pretrained, pretrained_filter_fn=_checkpoint_filter_fn, diff --git a/timm/models/mvitv2.py b/timm/models/mvitv2.py index bc18bbc2cd..692bf0eaee 100644 --- a/timm/models/mvitv2.py +++ b/timm/models/mvitv2.py @@ -948,6 +948,9 @@ def checkpoint_filter_fn(state_dict, model): def _create_mvitv2(variant, cfg_variant=None, pretrained=False, **kwargs): + if kwargs.get('features_only', None): + raise RuntimeError('features_only not implemented for Multiscale Vision Transformer models.') + return build_model_with_cfg( MultiScaleVit, variant, diff --git a/timm/models/xcit.py b/timm/models/xcit.py index a4cf9e46cb..7160c836ec 100644 --- a/timm/models/xcit.py +++ b/timm/models/xcit.py @@ -497,6 +497,9 @@ def checkpoint_filter_fn(state_dict, model): def _create_xcit(variant, pretrained=False, default_cfg=None, **kwargs): + if kwargs.get('features_only', None): + raise RuntimeError('features_only not implemented for Cross-Covariance Image Transformers models.') + model = build_model_with_cfg( Xcit, variant,
The following 4 models were missing the RuntimeError that should be raised when `features_only=True` is passed during model creation: - ConvMixer - EfficientFormer - MultiScaleVit - Xcit I added the missing guard statement to each constructor function, matching the style of the guard statements in the other models. For the model's name in the error message, I used the name given by the designer of the model architecture in the associated white paper. The names are: - ConvMixer -> "ConvMixer" - EfficientFormer -> "EfficientFormer" - MultiScaleVit -> "Multiscale Vision Transformer" - Xcit -> "Cross-Covariance Image Transformers" Let me know if any changes need to be made. The associated issue is #1957
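Roughly how the new guard surfaces to a user (a sketch; assumes a timm build that includes this change, and uses one of the affected model names):
```python
import timm

try:
    timm.create_model('convmixer_768_32', features_only=True)
except RuntimeError as e:
    print(e)  # features_only not implemented for ConvMixer models.
```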
https://api.github.com/repos/huggingface/pytorch-image-models/pulls/1958
2023-09-19T02:26:46Z
2023-09-19T15:19:14Z
2023-09-19T15:19:14Z
2023-09-19T15:19:50Z
564
huggingface/pytorch-image-models
16,323
Add BlueOak to acceptable licenses
diff --git a/scripts/audit_frontend_licenses.py b/scripts/audit_frontend_licenses.py index 3c1e612cb259..a5e93952499f 100755 --- a/scripts/audit_frontend_licenses.py +++ b/scripts/audit_frontend_licenses.py @@ -40,6 +40,7 @@ "Apache-2.0", # https://opensource.org/licenses/Apache-2.0 "Apache-2.0 WITH LLVM-exception", # https://spdx.org/licenses/LLVM-exception.html "0BSD", # https://opensource.org/licenses/0BSD + "BlueOak-1.0.0", # https://blueoakcouncil.org/license/1.0.0 "BSD-2-Clause", # https://opensource.org/licenses/BSD-2-Clause "BSD-3-Clause", # https://opensource.org/licenses/BSD-3-Clause "ISC", # https://opensource.org/licenses/ISC
## Describe your changes Our license check is failing because one of the frontend dependencies changed its license to `BlueOak`. We already checked this with legal a couple of weeks ago and it's fine to add this to the acceptable licenses. --- **Contribution License Agreement** By submitting this pull request you agree that all contributions to this project are made under the Apache 2.0 license.
https://api.github.com/repos/streamlit/streamlit/pulls/8472
2024-04-09T23:52:02Z
2024-04-10T00:34:28Z
2024-04-10T00:34:28Z
2024-04-10T00:34:28Z
222
streamlit/streamlit
22,549
Update text_client_utils.py
diff --git a/inference/text-client/text_client_utils.py b/inference/text-client/text_client_utils.py index 71073ff684..1e8c2363f9 100644 --- a/inference/text-client/text_client_utils.py +++ b/inference/text-client/text_client_utils.py @@ -9,6 +9,7 @@ class DebugClient: def __init__(self, backend_url, http_client=requests): self.backend_url = backend_url self.http_client = http_client + self.available_models = self.get_available_models() def login(self, username): auth_data = self.http_client.get(f"{self.backend_url}/auth/callback/debug", params={"code": username}).json() @@ -28,7 +29,18 @@ def create_chat(self): self.message_id = None return self.chat_id + def get_available_models(self): + response = self.http_client.get( + f"{self.backend_url}/models", + headers=self.auth_headers, + ) + response.raise_for_status() + return [model["name"] for model in response.json()] + def send_message(self, message, model_config_name): + available_models = self.get_available_models() + if model_config_name not in available_models: + raise ValueError(f"Invalid model config name: {model_config_name}") response = self.http_client.post( f"{self.backend_url}/chats/{self.chat_id}/prompter_message", json={
This implementation adds a new get_available_models() method to the DebugClient class, which retrieves the list of available model configurations from the API and returns a list of their names. The send_message() method then calls this method and checks if the provided model_config_name is in the list of available models. If it's not, a ValueError is raised with an appropriate error message.
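The shape of the new check, reduced to a standalone sketch (endpoint and field names mirror the diff; the URL and headers are hypothetical):
```python
import requests

def get_available_models(backend_url: str, auth_headers: dict) -> list[str]:
    # GET /models is assumed to return a JSON list of model configs, as in the diff
    response = requests.get(f"{backend_url}/models", headers=auth_headers)
    response.raise_for_status()
    return [model["name"] for model in response.json()]

def check_model_config(name: str, available: list[str]) -> None:
    if name not in available:
        raise ValueError(f"Invalid model config name: {name}")
```
Note that in the diff, `__init__` also calls `get_available_models()` before `login()` has set `auth_headers`, so the per-call lookup inside `send_message()` is the path that actually enforces the check.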
https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/2729
2023-04-18T22:39:32Z
2023-04-21T18:44:54Z
2023-04-21T18:44:54Z
2023-04-21T18:44:55Z
319
LAION-AI/Open-Assistant
37,448
[ie/mixch] Fix extractor
diff --git a/yt_dlp/extractor/mixch.py b/yt_dlp/extractor/mixch.py index 82a7c325724..b980fd01a82 100644 --- a/yt_dlp/extractor/mixch.py +++ b/yt_dlp/extractor/mixch.py @@ -1,6 +1,6 @@ from .common import InfoExtractor from ..networking.exceptions import HTTPError -from ..utils import ExtractorError, UserNotLive, url_or_none +from ..utils import ExtractorError, UserNotLive, int_or_none, url_or_none from ..utils.traversal import traverse_obj @@ -27,25 +27,23 @@ class MixchIE(InfoExtractor): def _real_extract(self, url): video_id = self._match_id(url) - webpage = self._download_webpage(f'https://mixch.tv/u/{video_id}/live', video_id) - - initial_js_state = self._parse_json(self._search_regex( - r'(?m)^\s*window\.__INITIAL_JS_STATE__\s*=\s*(\{.+?\});\s*$', webpage, 'initial JS state'), video_id) - if not initial_js_state.get('liveInfo'): + data = self._download_json(f'https://mixch.tv/api-web/users/{video_id}/live', video_id) + if not traverse_obj(data, ('liveInfo', {dict})): raise UserNotLive(video_id=video_id) return { 'id': video_id, - 'title': traverse_obj(initial_js_state, ('liveInfo', 'title')), - 'comment_count': traverse_obj(initial_js_state, ('liveInfo', 'comments')), - 'view_count': traverse_obj(initial_js_state, ('liveInfo', 'visitor')), - 'timestamp': traverse_obj(initial_js_state, ('liveInfo', 'created')), - 'uploader': traverse_obj(initial_js_state, ('broadcasterInfo', 'name')), 'uploader_id': video_id, + **traverse_obj(data, { + 'title': ('liveInfo', 'title', {str}), + 'comment_count': ('liveInfo', 'comments', {int_or_none}), + 'view_count': ('liveInfo', 'visitor', {int_or_none}), + 'timestamp': ('liveInfo', 'created', {int_or_none}), + 'uploader': ('broadcasterInfo', 'name', {str}), + }), 'formats': [{ 'format_id': 'hls', - 'url': (traverse_obj(initial_js_state, ('liveInfo', 'hls')) - or f'https://d1hd0ww6piyb43.cloudfront.net/hls/torte_{video_id}.m3u8'), + 'url': data['liveInfo']['hls'], 'ext': 'mp4', 'protocol': 'm3u8', }],
Thanks @nipotan for the API endpoint knowledge Closes #9536 <details open><summary>Template</summary> <!-- OPEN is intentional --> ### Before submitting a *pull request* make sure you have: - [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions) - [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests - [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) and [ran relevant tests](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) ### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check all of the following options that apply: - [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/) ### What is the purpose of your *pull request*? - [x] Fix or improvement to an extractor (Make sure to add/update tests) </details>
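For reference, how the `traverse_obj` patterns used in the diff behave on a made-up payload (import paths match the diff; a set key filters by type or applies a function):
```python
from yt_dlp.utils import int_or_none
from yt_dlp.utils.traversal import traverse_obj

data = {'liveInfo': {'title': 'demo stream', 'comments': '42', 'visitor': 7}}
print(traverse_obj(data, ('liveInfo', 'title', {str})))             # 'demo stream'
print(traverse_obj(data, ('liveInfo', 'comments', {int_or_none})))  # 42
print(traverse_obj(data, ('liveInfo', {dict})) is None)             # False -> user is live
```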
https://api.github.com/repos/yt-dlp/yt-dlp/pulls/9608
2024-04-03T20:36:37Z
2024-04-03T22:53:42Z
2024-04-03T22:53:42Z
2024-04-03T22:53:43Z
645
yt-dlp/yt-dlp
7,981
Fixed DoubleArrow Scaling
diff --git a/topics/geometry.py b/topics/geometry.py index fe98bb11e6..81ca1316e3 100644 --- a/topics/geometry.py +++ b/topics/geometry.py @@ -25,8 +25,8 @@ def generate_points(self): anchors = np.array([ np.cos(a)*RIGHT+np.sin(a)*UP for a in np.linspace( - self.start_angle, - self.start_angle + self.angle, + self.start_angle, + self.start_angle + self.angle, self.num_anchors ) ]) @@ -56,7 +56,7 @@ def add_tip(self, tip_length = 0.25, at_start = False, at_end = True): p1, p2 = self.points[-3:-1] # self.points[-2:] did overshoot start_arrow = Arrow( - p1, 2*p2 - p1, + p1, 2*p2 - p1, tip_length = tip_length, max_tip_length_to_length_ratio = 2.0 ) @@ -66,16 +66,12 @@ def add_tip(self, tip_length = 0.25, at_start = False, at_end = True): p4, p3 = self.points[1:3] # self.points[:2] did overshoot end_arrow = Arrow( - p3, 2*p4 - p3, + p3, 2*p4 - p3, tip_length = tip_length, max_tip_length_to_length_ratio = 2.0 ) self.add(end_arrow.split()[-1]) - - - - self.set_color(self.get_color()) return self @@ -100,7 +96,7 @@ def stop_angle(self): def set_bound_angles(self,start=0,stop=np.pi): self.start_angle = start self.angle = stop - start - + return self @@ -132,7 +128,7 @@ def __init__(self, start_point, end_point, angle = TAU/4, **kwargs): radius = radius, start_angle = start_angle, **kwargs) - + self.move_arc_center_to(arc_center) class CurvedArrow(ArcBetweenPoints): @@ -145,7 +141,7 @@ def __init__(self, start_point, end_point, angle = TAU/4, **kwargs): else: ArcBetweenPoints.__init__(self, end_point, start_point, angle = -angle, **kwargs) self.add_tip(at_start = False, at_end = True) - + class CurvedDoubleArrow(ArcBetweenPoints): @@ -483,7 +479,7 @@ def __init__(self, *args, **kwargs): self.init_colors() def init_tip(self): - self.tip = self.add_tip() + self.add_tip() def add_tip(self, add_at_end = True): tip = VMobject( @@ -496,13 +492,16 @@ def add_tip(self, add_at_end = True): ) self.set_tip_points(tip, add_at_end, preserve_normal = False) self.add(tip) + if not hasattr(self, 'tip'): + self.tip = [] + self.tip.append(tuple((tip, add_at_end))) return tip def add_rectangular_stem(self): self.rect = Rectangle( stroke_width = 0, - fill_color = self.tip.get_fill_color(), - fill_opacity = self.tip.get_fill_opacity() + fill_color = self.tip[0][0].get_fill_color(), + fill_opacity = self.tip[0][0].get_fill_opacity() ) self.add_to_back(self.rect) self.set_stroke(width = 0) @@ -511,7 +510,7 @@ def add_rectangular_stem(self): def set_rectangular_stem_points(self): start, end = self.get_start_and_end() vect = end - start - tip_base_points = self.tip.get_anchors()[1:] + tip_base_points = self.tip[0][0].get_anchors()[1:] tip_base = center_of_mass(tip_base_points) tbp1, tbp2 = tip_base_points perp_vect = tbp2 - tbp1 @@ -535,8 +534,8 @@ def set_rectangular_stem_points(self): return self def set_tip_points( - self, tip, - add_at_end = True, + self, tip, + add_at_end = True, tip_length = None, preserve_normal = True, ): @@ -564,7 +563,7 @@ def set_tip_points( v *= tip_length/np.linalg.norm(v) ratio = self.tip_width_to_length_ratio tip.set_points_as_corners([ - end_point, + end_point, end_point-vect+perp_vect*ratio/2, end_point-vect-perp_vect*ratio/2, ]) @@ -572,7 +571,7 @@ def set_tip_points( return self def get_normal_vector(self): - p0, p1, p2 = self.tip.get_anchors() + p0, p1, p2 = self.tip[0][0].get_anchors() result = np.cross(p2 - p1, p1 - p0) norm = np.linalg.norm(result) if norm == 0: @@ -586,7 +585,7 @@ def reset_normal_vector(self): def get_end(self): if hasattr(self, "tip"): - return 
self.tip.get_anchors()[0] + return self.tip[0][0].get_anchors()[0] else: return Line.get_end(self) @@ -595,14 +594,16 @@ def get_tip(self): def put_start_and_end_on(self, *args, **kwargs): Line.put_start_and_end_on(self, *args, **kwargs) - self.set_tip_points(self.tip, preserve_normal = False) + self.set_tip_points(self.tip[0][0], preserve_normal = False) self.set_rectangular_stem_points() return self def scale(self, scale_factor, **kwargs): Line.scale(self, scale_factor, **kwargs) if self.preserve_tip_size_when_scaling: - self.set_tip_points(self.tip) + for t in self.tip: + print(t) + self.set_tip_points(t[0], add_at_end=t[1]) if self.use_rectangular_stem: self.set_rectangular_stem_points() return self @@ -622,8 +623,7 @@ def __init__(self, direction, **kwargs): class DoubleArrow(Arrow): def init_tip(self): - self.tip = self.add_tip() - self.second_tip = self.add_tip(add_at_end = False) + self.tip = [(self.add_tip(), True), (self.add_tip(add_at_end = False), False)] class CubicBezier(VMobject): def __init__(self, points, **kwargs): @@ -681,7 +681,7 @@ class Square(Rectangle): def __init__(self, **kwargs): digest_config(self, kwargs) Rectangle.__init__( - self, + self, height = self.side_length, width = self.side_length, **kwargs @@ -762,7 +762,7 @@ class Cross(VGroup): "stroke_width" : 6, } def __init__(self, mobject, **kwargs): - VGroup.__init__(self, + VGroup.__init__(self, Line(UP+LEFT, DOWN+RIGHT), Line(UP+RIGHT, DOWN+LEFT), )
Previously, only one tip was kept as self.tip, so only that tip was scaled properly. This meant the first part of the solution was to create a list of all tips. After this, the issue was that add_at_end defaults to True and was not preserved, so the list now holds tuples of each tip VMobject and its add_at_end property. Since the tips should be identical except for position, everything that previously asked for a property from the single tip now asks for the same property from the first tip in the list. This isn't pretty, but it should resolve #123.
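For illustration, here is a minimal standalone sketch of the bookkeeping pattern described above; the manim geometry is stubbed out, and the `FakeArrow` class is purely hypothetical:

```python
# Sketch of keeping tips as (tip, add_at_end) tuples so that scaling
# can reposition every tip with its original orientation flag.
class FakeArrow:
    def __init__(self, double=False):
        self.tip = []                    # list of (tip, add_at_end) tuples
        self.add_tip(add_at_end=True)
        if double:
            self.add_tip(add_at_end=False)

    def add_tip(self, add_at_end=True):
        tip = object()                   # stand-in for a tip VMobject
        self.tip.append((tip, add_at_end))
        return tip

    def set_tip_points(self, tip, add_at_end=True):
        pass                             # geometry omitted in this sketch

    def scale(self, factor):
        # Each tip keeps its own add_at_end flag, which is what the
        # original single-tip code lost for DoubleArrow.
        for tip, add_at_end in self.tip:
            self.set_tip_points(tip, add_at_end=add_at_end)

arrow = FakeArrow(double=True)
arrow.scale(2)  # both tips are repositioned correctly
```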
https://api.github.com/repos/3b1b/manim/pulls/174
2018-03-25T20:23:18Z
2018-03-31T20:07:41Z
2018-03-31T20:07:41Z
2018-03-31T20:07:41Z
1,710
3b1b/manim
18,477
Add installation instructions on FreeBSD
diff --git a/README.md b/README.md index 7505717f0..ec6549bc3 100644 --- a/README.md +++ b/README.md @@ -111,6 +111,12 @@ sudo apt install python3-dev python3-pip sudo pip3 install thefuck ``` +On FreeBSD you can install `The Fuck` with: +```bash +sudo portsnap fetch update +cd /usr/ports/misc/thefuck && sudo make install clean +``` + On other systems you can install `The Fuck` with `pip`: ```bash
misc/thefuck has recently been committed to the FreeBSD ports tree (https://svnweb.freebsd.org/ports?view=revision&revision=458123).
https://api.github.com/repos/nvbn/thefuck/pulls/770
2018-01-08T21:31:16Z
2018-01-10T20:31:39Z
2018-01-10T20:31:39Z
2018-01-11T17:57:32Z
134
nvbn/thefuck
30,729
Add account type to Forecast.Solar integration
diff --git a/homeassistant/components/forecast_solar/sensor.py b/homeassistant/components/forecast_solar/sensor.py index 29ba14ac4638d0..2ad86186652df7 100644 --- a/homeassistant/components/forecast_solar/sensor.py +++ b/homeassistant/components/forecast_solar/sensor.py @@ -5,7 +5,12 @@ from homeassistant.components.sensor import DOMAIN as SENSOR_DOMAIN, SensorEntity from homeassistant.config_entries import ConfigEntry -from homeassistant.const import ATTR_IDENTIFIERS, ATTR_MANUFACTURER, ATTR_NAME +from homeassistant.const import ( + ATTR_IDENTIFIERS, + ATTR_MANUFACTURER, + ATTR_MODEL, + ATTR_NAME, +) from homeassistant.core import HomeAssistant from homeassistant.helpers.entity_platform import AddEntitiesCallback from homeassistant.helpers.typing import StateType @@ -56,6 +61,7 @@ def __init__( ATTR_IDENTIFIERS: {(DOMAIN, entry_id)}, ATTR_NAME: "Solar Production Forecast", ATTR_MANUFACTURER: "Forecast.Solar", + ATTR_MODEL: coordinator.data.account_type.value, ATTR_ENTRY_TYPE: ENTRY_TYPE_SERVICE, } diff --git a/tests/components/forecast_solar/conftest.py b/tests/components/forecast_solar/conftest.py index 8b9227a8d04aeb..0bf080535f683a 100644 --- a/tests/components/forecast_solar/conftest.py +++ b/tests/components/forecast_solar/conftest.py @@ -61,6 +61,7 @@ def mock_forecast_solar() -> Generator[None, MagicMock, None]: estimate = MagicMock(spec=models.Estimate) estimate.now.return_value = now estimate.timezone = "Europe/Amsterdam" + estimate.account_type.value = "public" estimate.energy_production_today = 100000 estimate.energy_production_tomorrow = 200000 estimate.power_production_now = 300000 diff --git a/tests/components/forecast_solar/test_sensor.py b/tests/components/forecast_solar/test_sensor.py index a2b105ccbd128a..6c910d699c4888 100644 --- a/tests/components/forecast_solar/test_sensor.py +++ b/tests/components/forecast_solar/test_sensor.py @@ -142,7 +142,7 @@ async def test_sensors( assert device_entry.manufacturer == "Forecast.Solar" assert device_entry.name == "Solar Production Forecast" assert device_entry.entry_type == ENTRY_TYPE_SERVICE - assert not device_entry.model + assert device_entry.model == "public" assert not device_entry.sw_version
## Proposed change

This PR will add the `account_type` as device model.

## Type of change

- [ ] Dependency upgrade
- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New integration (thank you!)
- [X] New feature (which adds functionality to an existing integration)
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [ ] Code quality improvements to existing code or addition of tests

## Additional information

- This PR fixes or closes issue: fixes #
- This PR is related to issue:
- Link to documentation pull request:

## Checklist

- [X] The code change is tested and works locally.
- [X] Local tests pass. **Your PR cannot be merged unless tests pass**
- [X] There is no commented out code in this PR.
- [X] I have followed the [development checklist][dev-checklist]
- [X] The code has been formatted using Black (`black --fast homeassistant tests`)
- [X] Tests have been added to verify that the new code works.

If user exposed functionality or configuration variables are added/changed:

- [ ] Documentation added/updated for [www.home-assistant.io][docs-repository]

If the code communicates with devices, web services, or third-party tools:

- [ ] The [manifest file][manifest-docs] has all fields filled out correctly. Updated and included derived files by running: `python3 -m script.hassfest`.
- [ ] New or updated dependencies have been added to `requirements_all.txt`. Updated by running `python3 -m script.gen_requirements_all`.
- [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.
- [ ] Untested files have been added to `.coveragerc`.

The integration reached or maintains the following [Integration Quality Scale][quality-scale]:

- [ ] No score or internal
- [ ] 🥈 Silver
- [ ] 🥇 Gold
- [ ] 🏆 Platinum

To help with the load of incoming pull requests:

- [ ] I have reviewed two other [open pull requests][prs] in this repository.

[prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone
[dev-checklist]: https://developers.home-assistant.io/docs/en/development_checklist.html
[manifest-docs]: https://developers.home-assistant.io/docs/en/creating_integration_manifest.html
[quality-scale]: https://developers.home-assistant.io/docs/en/next/integration_quality_scale_index.html
[docs-repository]: https://github.com/home-assistant/home-assistant.io
https://api.github.com/repos/home-assistant/core/pulls/55175
2021-08-24T19:24:53Z
2021-08-25T09:16:23Z
2021-08-25T09:16:23Z
2021-08-27T00:02:52Z
579
home-assistant/core
38,826
Document common config attributes
diff --git a/src/transformers/configuration_utils.py b/src/transformers/configuration_utils.py index 621f855a126f4..9aa2440ce9dfe 100755 --- a/src/transformers/configuration_utils.py +++ b/src/transformers/configuration_utils.py @@ -34,19 +34,30 @@ class PretrainedConfig(object): Base class for all configuration classes. Handles a few parameters common to all models' configurations as well as methods for loading/downloading/saving configurations. - Note: A configuration file can be loaded and saved to disk. Loading the configuration file and using this file to - initialize a model does **not** load the model weights. It only affects the model's configuration. + Note: + A configuration file can be loaded and saved to disk. Loading the configuration file and using this file to + initialize a model does **not** load the model weights. It only affects the model's configuration. Class attributes (overridden by derived classes) - - **model_type** (:obj:`str`): An identifier for the model type, serialized into the JSON file, and used to + - **model_type** (:obj:`str`) -- An identifier for the model type, serialized into the JSON file, and used to recreate the correct object in :class:`~transformers.AutoConfig`. - - **is_composition** (:obj:`bool`): Whether the config class is composed of multiple sub-configs. In this case - the config has to be initialized from two or more configs of type :class:`~transformers.PretrainedConfig` - like: :class:`~transformers.EncoderDecoderConfig` or :class:`~RagConfig`. - - **keys_to_ignore_at_inference** (:obj:`List[str]`): A list of keys to ignore by default when looking at + - **is_composition** (:obj:`bool`) -- Whether the config class is composed of multiple sub-configs. In this + case the config has to be initialized from two or more configs of type + :class:`~transformers.PretrainedConfig` like: :class:`~transformers.EncoderDecoderConfig` or + :class:`~RagConfig`. + - **keys_to_ignore_at_inference** (:obj:`List[str]`) -- A list of keys to ignore by default when looking at dictionary outputs of the model during inference. + Common attributes (present in all subclasses) + + - **vocab_size** (:obj:`int`) -- The number of tokens in the vocabulary, which is also the first dimension of + the embeddings matrix (this attribute may be missing for models that don't have a text modality like ViT). + - **hidden_size** (:obj:`int`) -- The hidden size of the model. + - **num_attention_heads** (:obj:`int`) -- The number of attention heads used in the multi-head attention layers + of the model. + - **num_hidden_layers** (:obj:`int`) -- The number of blocks in the model. + Args: name_or_path (:obj:`str`, `optional`, defaults to :obj:`""`): Store the string that was passed to :func:`~transformers.PreTrainedModel.from_pretrained` or
# What does this PR do?

This PR adds documentation for the common attributes in all model configs, such as `hidden_size`. It also formats the class attributes to be consistent with the rest of the parameters.
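As a quick usage sketch of the common attributes documented here (assumes `transformers` is installed and the `bert-base-uncased` config can be downloaded; the checkpoint name is only an example):

```python
from transformers import AutoConfig

# Common attributes present on every PretrainedConfig subclass
# (vocab_size may be missing for non-text models such as ViT).
config = AutoConfig.from_pretrained("bert-base-uncased")
print(config.vocab_size)           # 30522
print(config.hidden_size)          # 768
print(config.num_attention_heads)  # 12
print(config.num_hidden_layers)    # 12
```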
https://api.github.com/repos/huggingface/transformers/pulls/11070
2021-04-05T18:41:54Z
2021-04-05T19:29:02Z
2021-04-05T19:29:02Z
2021-04-05T19:29:03Z
709
huggingface/transformers
11,978
Add Netlify API
diff --git a/README.md b/README.md index b2fcab633e..6d75331f54 100644 --- a/README.md +++ b/README.md @@ -506,6 +506,7 @@ API | Description | Auth | HTTPS | CORS | | [Mocky](https://designer.mocky.io/) | Mock user defined test JSON for REST API endpoints | No | Yes | Yes | | [MY IP](https://www.myip.com/api-docs/) | Get IP address information | No | Yes | Unknown | | [Nationalize.io](https://nationalize.io) | Estimate the nationality of a first name | No | Yes | Yes | +| [Netlify](https://docs.netlify.com/api/get-started/) | Netlify is a hosting service for the programmable web | `OAuth` | Yes | Unknown | | [npm Registry](https://github.com/npm/registry/blob/master/docs/REGISTRY-API.md) | Query information about your favorite Node.js libraries programatically | No | Yes | Unknown | | [OneSignal](https://documentation.onesignal.com/docs/onesignal-api) | Self-serve customer engagement solution for Push Notifications, Email, SMS & In-App | `apiKey` | Yes | Unknown | | [OOPSpam](https://oopspam.com/) | Multiple spam filtering service | No | Yes | Yes |
- [x] My submission is formatted according to the guidelines in the [contributing guide](/CONTRIBUTING.md)
- [x] My addition is ordered alphabetically
- [x] My submission has a useful description
- [x] The description does not have more than 100 characters
- [x] The description does not end with punctuation
- [x] Each table column is padded with one space on either side
- [x] I have searched the repository for any relevant issues or pull requests
- [x] Any category I am creating has the minimum requirement of 3 items
- [x] All changes have been [squashed][squash-link] into a single commit

[squash-link]: <https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit>
https://api.github.com/repos/public-apis/public-apis/pulls/2789
2021-10-29T15:12:53Z
2021-10-30T01:14:08Z
2021-10-30T01:14:08Z
2021-10-30T01:14:08Z
293
public-apis/public-apis
35,985
Add direction configuration for RMVtransport component
diff --git a/homeassistant/components/sensor/rmvtransport.py b/homeassistant/components/sensor/rmvtransport.py index 79ec8c7a5e782e..f9bd65c1a74f3c 100644 --- a/homeassistant/components/sensor/rmvtransport.py +++ b/homeassistant/components/sensor/rmvtransport.py @@ -24,7 +24,7 @@ CONF_STATION = 'station' CONF_DESTINATIONS = 'destinations' -CONF_DIRECTIONS = 'directions' +CONF_DIRECTION = 'direction' CONF_LINES = 'lines' CONF_PRODUCTS = 'products' CONF_TIME_OFFSET = 'time_offset' @@ -57,8 +57,7 @@ vol.Required(CONF_STATION): cv.string, vol.Optional(CONF_DESTINATIONS, default=[]): vol.All(cv.ensure_list, [cv.string]), - vol.Optional(CONF_DIRECTIONS, default=[]): - vol.All(cv.ensure_list, [cv.string]), + vol.Optional(CONF_DIRECTION): cv.string, vol.Optional(CONF_LINES, default=[]): vol.All(cv.ensure_list, [cv.positive_int, cv.string]), vol.Optional(CONF_PRODUCTS, default=VALID_PRODUCTS): @@ -84,7 +83,7 @@ async def async_setup_platform(hass, config, async_add_entities, session, next_departure[CONF_STATION], next_departure.get(CONF_DESTINATIONS), - next_departure.get(CONF_DIRECTIONS), + next_departure.get(CONF_DIRECTION), next_departure.get(CONF_LINES), next_departure.get(CONF_PRODUCTS), next_departure.get(CONF_TIME_OFFSET), @@ -97,14 +96,14 @@ async def async_setup_platform(hass, config, async_add_entities, class RMVDepartureSensor(Entity): """Implementation of an RMV departure sensor.""" - def __init__(self, session, station, destinations, directions, lines, + def __init__(self, session, station, destinations, direction, lines, products, time_offset, max_journeys, name, timeout): """Initialize the sensor.""" self._station = station self._name = name self._state = None self.data = RMVDepartureData(session, station, destinations, - directions, lines, products, time_offset, + direction, lines, products, time_offset, max_journeys, timeout) self._icon = ICONS[None] @@ -167,7 +166,7 @@ async def async_update(self): class RMVDepartureData: """Pull data from the opendata.rmv.de web page.""" - def __init__(self, session, station_id, destinations, directions, lines, + def __init__(self, session, station_id, destinations, direction, lines, products, time_offset, max_journeys, timeout): """Initialize the sensor.""" from RMVtransport import RMVtransport @@ -175,7 +174,7 @@ def __init__(self, session, station_id, destinations, directions, lines, self.station = None self._station_id = station_id self._destinations = destinations - self._directions = directions + self._direction = direction self._lines = lines self._products = products self._time_offset = time_offset @@ -189,6 +188,7 @@ async def async_update(self): try: _data = await self.rmv.get_departures(self._station_id, products=self._products, + directionId=self._direction, maxJourneys=50) except ValueError: self.departures = []
## Description:

Implement direction configuration (a standalone schema sketch follows below).

**Pull request in [home-assistant.io](https://github.com/home-assistant/home-assistant.io) with documentation (if applicable):** home-assistant/home-assistant.io#6646

## Checklist:

- [x] The code change is tested and works locally.
- [x] Local tests pass with `tox`. **Your PR cannot be merged unless tests pass**

If user exposed functionality or configuration variables are added/changed:

- [x] Documentation added/updated in [home-assistant.io](https://github.com/home-assistant/home-assistant.io)

If the code communicates with devices, web services, or third-party tools:

- [ ] New dependencies have been added to the `REQUIREMENTS` variable ([example][ex-requir]).
- [ ] New dependencies are only imported inside functions that use them ([example][ex-import]).
- [ ] New or updated dependencies have been added to `requirements_all.txt` by running `script/gen_requirements_all.py`.
- [ ] New files were added to `.coveragerc`.

If the code does not interact with devices:

- [ ] Tests have been added to verify that the new code works.

[ex-requir]: https://github.com/home-assistant/home-assistant/blob/dev/homeassistant/components/keyboard.py#L14
[ex-import]: https://github.com/home-assistant/home-assistant/blob/dev/homeassistant/components/keyboard.py#L54
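For context, a minimal sketch of the schema change in plain `voluptuous` (Home Assistant's `cv.string` validator is swapped for `str` here, and the direction value is illustrative, so the snippet runs standalone):

```python
import voluptuous as vol

CONF_DIRECTION = "direction"

# After this PR: a single optional string instead of a list of strings.
PLATFORM_SCHEMA = vol.Schema({
    vol.Optional(CONF_DIRECTION): str,
})

print(PLATFORM_SCHEMA({"direction": "some-station-id"}))  # illustrative value
print(PLATFORM_SCHEMA({}))                                # direction stays optional
```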
https://api.github.com/repos/home-assistant/core/pulls/17308
2018-10-10T12:47:02Z
2018-10-10T15:59:56Z
2018-10-10T15:59:56Z
2019-03-27T07:53:30Z
796
home-assistant/core
39,241
Add description field and f-string to progress logs example
diff --git a/docs/source/progress.rst b/docs/source/progress.rst index 6ac052b4a..008a53931 100644 --- a/docs/source/progress.rst +++ b/docs/source/progress.rst @@ -126,9 +126,9 @@ Print / log The Progress class will create an internal Console object which you can access via ``progress.console``. If you print or log to this console, the output will be displayed *above* the progress display. Here's an example:: with Progress() as progress: - task = progress.add_task(total=10) + task = progress.add_task("twiddling thumbs", total=10) for job in range(10): - progress.console.print("Working on job #{job}") + progress.console.print(f"Working on job #{job}") run_job(job) progress.advance(task)
## Type of changes

- [ ] Bug fix
- [ ] New feature
- [x] Documentation / docstrings
- [ ] Tests
- [ ] Other

## Checklist

- [x] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [ ] I've added tests for new code.
- [x] I accept that @willmcgugan may be pedantic in the code review.

## Description

Fixed a small issue with the example code in the docs on using progress with logs.
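For convenience, a self-contained version of the corrected example (the `run_job` body here is just a `time.sleep` stand-in):

```python
import time
from rich.progress import Progress

def run_job(job):
    time.sleep(0.2)  # stand-in for real work

with Progress() as progress:
    task = progress.add_task("twiddling thumbs", total=10)
    for job in range(10):
        # Printing via progress.console renders above the progress bar.
        progress.console.print(f"Working on job #{job}")
        run_job(job)
        progress.advance(task)
```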
https://api.github.com/repos/Textualize/rich/pulls/860
2020-12-29T16:51:56Z
2020-12-29T17:12:13Z
2020-12-29T17:12:13Z
2020-12-29T17:12:13Z
193
Textualize/rich
48,349
Update hyp.scratch-high.yaml `lrf: 0.1`
diff --git a/data/hyps/hyp.scratch-high.yaml b/data/hyps/hyp.scratch-high.yaml index 5a586cc63fa..123cc840741 100644 --- a/data/hyps/hyp.scratch-high.yaml +++ b/data/hyps/hyp.scratch-high.yaml @@ -4,7 +4,7 @@ # See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3) -lrf: 0.2 # final OneCycleLR learning rate (lr0 * lrf) +lrf: 0.1 # final OneCycleLR learning rate (lr0 * lrf) momentum: 0.937 # SGD momentum/Adam beta1 weight_decay: 0.0005 # optimizer weight decay 5e-4 warmup_epochs: 3.0 # warmup epochs (fractions ok)
Update `lrf: 0.1`, tested on YOLOv5x6 to 55.0 mAP@0.5:0.95, slightly higher than current (a final-LR sketch follows below).

## 🛠️ PR Summary

<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)</sub>

### 🌟 Summary

Optimization of the learning rate schedule in the YOLOv5 training configuration.

### 📊 Key Changes

- Decreased the final OneCycleLR learning rate (`lrf`) from 0.2 to 0.1 in the training hyperparameters.

### 🎯 Purpose & Impact

- 🔄 This change aims to enhance training stability and performance by adjusting the learning rate schedule.
- 📈 Potential impact includes improved convergence during training and possibly better model accuracy across a variety of tasks.
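To see why the final LR halves with this change, here is a sketch of a cosine one-cycle multiplier in the shape YOLOv5 uses; the exact `one_cycle` helper is paraphrased from memory, so treat it as an approximation:

```python
import math

def one_cycle(y1=1.0, y2=0.1, steps=300):
    # Cosine ramp of the LR multiplier from y1 down to y2 over `steps` epochs.
    return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1

lr0, lrf, epochs = 0.01, 0.1, 300
lf = one_cycle(1.0, lrf, epochs)
print(lr0 * lf(0))       # 0.01   -> initial learning rate
print(lr0 * lf(epochs))  # 0.001  -> final LR = lr0 * lrf with lrf = 0.1
```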
https://api.github.com/repos/ultralytics/yolov5/pulls/6525
2022-02-03T11:13:54Z
2022-02-03T11:15:14Z
2022-02-03T11:15:14Z
2024-01-19T13:05:22Z
223
ultralytics/yolov5
25,482
Update README-id-ID.md
diff --git a/doc/translations/README-id-ID.md b/doc/translations/README-id-ID.md index bd2ffd0926c..d3a0c5a8456 100644 --- a/doc/translations/README-id-ID.md +++ b/doc/translations/README-id-ID.md @@ -2,7 +2,7 @@ [![Build Status](https://api.travis-ci.org/sqlmapproject/sqlmap.svg?branch=master)](https://travis-ci.org/sqlmapproject/sqlmap) [![Python 2.6|2.7|3.x](https://img.shields.io/badge/python-2.6|2.7|3.x-yellow.svg)](https://www.python.org/) [![License](https://img.shields.io/badge/license-GPLv2-red.svg)](https://raw.githubusercontent.com/sqlmapproject/sqlmap/master/LICENSE) [![PyPI version](https://badge.fury.io/py/sqlmap.svg)](https://badge.fury.io/py/sqlmap) [![GitHub closed issues](https://img.shields.io/github/issues-closed-raw/sqlmapproject/sqlmap.svg?colorB=ff69b4)](https://github.com/sqlmapproject/sqlmap/issues?q=is%3Aissue+is%3Aclosed) [![Twitter](https://img.shields.io/badge/[email protected])](https://twitter.com/sqlmap) -sqlmap merupakan alat _(tool)_ bantu _open source_ dalam melakukan tes penetrasi yang mengotomasi proses deteksi dan eksploitasi kelemahan _SQL injection_ dan pengambil-alihan server basis data. sqlmap dilengkapi dengan pendeteksi canggih, fitur-fitur hanal bagi _penetration tester_, beragam cara untuk mendeteksi basis data, hingga mengakses _file system_ dan mengeksekusi perintah dalam sistem operasi melalui koneksi _out-of-band_. +sqlmap merupakan alat _(tool)_ bantu _open source_ dalam melakukan tes penetrasi yang mengotomasi proses deteksi dan eksploitasi kelemahan _SQL injection_ dan pengambil-alihan server basis data. sqlmap dilengkapi dengan pendeteksi canggih, fitur-fitur handal bagi _penetration tester_, beragam cara untuk mendeteksi basis data, hingga mengakses _file system_ dan mengeksekusi perintah dalam sistem operasi melalui koneksi _out-of-band_. Tangkapan Layar ---- @@ -14,8 +14,7 @@ Anda dapat mengunjungi [koleksi tangkapan layar](https://github.com/sqlmapprojec Instalasi ---- -Anda dapat mengunduh tarball versi terbaru [di sini] -(https://github.com/sqlmapproject/sqlmap/tarball/master) atau zipball [di sini](https://github.com/sqlmapproject/sqlmap/zipball/master). +Anda dapat mengunduh tarball versi terbaru [di sini](https://github.com/sqlmapproject/sqlmap/tarball/master) atau zipball [di sini](https://github.com/sqlmapproject/sqlmap/zipball/master). Sebagai alternatif, Anda dapat mengunduh sqlmap dengan men-_clone_ repositori [Git](https://github.com/sqlmapproject/sqlmap):
https://api.github.com/repos/sqlmapproject/sqlmap/pulls/4663
2021-04-29T02:43:23Z
2021-05-01T09:33:14Z
2021-05-01T09:33:14Z
2021-05-01T09:33:14Z
756
sqlmapproject/sqlmap
14,971
Take scrollbars into account when computing Dataframe dimensions
diff --git a/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals0.snap.png b/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals0.snap.png index 1a6023cb9bdc..29668546b380 100644 Binary files a/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals0.snap.png and b/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals0.snap.png differ diff --git a/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals4.snap.png b/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals4.snap.png index 89f4f5455a96..0b467fd3baf9 100644 Binary files a/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals4.snap.png and b/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals4.snap.png differ diff --git a/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals5.snap.png b/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals5.snap.png index 6a066efb9188..7d40173f3754 100644 Binary files a/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals5.snap.png and b/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals5.snap.png differ diff --git a/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals6.snap.png b/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals6.snap.png index d1e415b4395c..da4439b1445d 100644 Binary files a/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals6.snap.png and b/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals6.snap.png differ diff --git a/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals7.snap.png b/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals7.snap.png index 91e51339787d..65b96c99c7d2 100644 Binary files a/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals7.snap.png and b/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals7.snap.png differ diff --git a/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals8.snap.png b/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals8.snap.png index e08f023a77ec..8e4db7bfa50a 100644 Binary files a/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals8.snap.png and b/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals8.snap.png differ diff --git a/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals9.snap.png b/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals9.snap.png index a7dabf4bec93..4e1f550d471d 100644 Binary files a/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals9.snap.png and b/frontend/cypress/snapshots/linux/2x/st_dataframe_sizes.spec.js/dataframe-visuals9.snap.png differ diff --git a/frontend/src/components/elements/DataFrame/DataFrame.test.tsx b/frontend/src/components/elements/DataFrame/DataFrame.test.tsx index 1605edb091a8..179ace091b60 100644 --- a/frontend/src/components/elements/DataFrame/DataFrame.test.tsx +++ b/frontend/src/components/elements/DataFrame/DataFrame.test.tsx @@ -18,11 +18,17 @@ import React from "react" import { shallow } from "enzyme" import { fromJS } from 
"immutable" +import { random, times } from "lodash" import mockDataFrame from "./mock" import { DataFrame, DataFrameProps } from "./DataFrame" import { MIN_CELL_WIDTH_PX } from "./DataFrameUtil" +const SCROLLBAR_SIZE = 10 +jest.mock("vendor/dom-helpers", () => ({ + scrollbarSize: () => SCROLLBAR_SIZE, +})) + const getProps = ( elementProps: Record<string, unknown> = {} ): DataFrameProps => ({ @@ -34,6 +40,20 @@ const getProps = ( height: 400, }) +const fakeData = ( + numRows: number, + numCols: number +): Partial<typeof mockDataFrame> => ({ + data: { + cols: times(numCols, () => ({ + int64s: { data: times(numRows, () => random(0, 9)) }, + type: "int64s", + })), + }, + index: { rangeIndex: { start: 0, stop: numRows }, type: "rangeIndex" }, + columns: { rangeIndex: { start: 0, stop: numCols }, type: "rangeIndex" }, +}) + describe("DataFrame Element", () => { const props = getProps() const wrapper = shallow(<DataFrame {...props} />) @@ -56,10 +76,12 @@ describe("DataFrame Element", () => { expect(multiGridProps.columnCount).toBe(11) expect(multiGridProps).toHaveProperty("enableFixedColumnScroll") expect(multiGridProps).toHaveProperty("enableFixedRowScroll") - expect(multiGridProps.height).toBe(275) + // 275px for the dataframe itself + 10px for the horizontal scrollbar + expect(multiGridProps.height).toBe(285) expect(multiGridProps.rowHeight).toBe(25) expect(multiGridProps.rowCount).toBe(11) - expect(multiGridProps.width).toBe(398) + // 400px full container width - 12px for border and vertical scrollbar + expect(multiGridProps.width).toBe(388) }) it("should render as empty if there's no data", () => { @@ -81,4 +103,30 @@ describe("DataFrame Element", () => { expect(multiGridProps.rowCount).toBe(1) expect(multiGridProps.width).toBe(60) }) + + it("adds extra height for horizontal scrollbar when wide but not tall", () => { + let props = getProps({ ...fakeData(1, 1) }) + let wrapper = shallow(<DataFrame {...props} />) + const normalHeight = wrapper.find("MultiGrid").props().height + + props = getProps({ ...fakeData(1, 20) }) + wrapper = shallow(<DataFrame {...props} />) + const heightWithScrollbar = wrapper.find("MultiGrid").props().height + + expect(heightWithScrollbar).toBe(normalHeight + SCROLLBAR_SIZE) + }) + + it("adds extra width for vertical scrollbar when tall but not wide", () => { + // Be careful to ensure that the number of digits needed to display the + // largest row number is the same for the two DataFrames. 
+ let props = getProps({ ...fakeData(11, 1) }) + let wrapper = shallow(<DataFrame {...props} />) + const normalWidth = wrapper.find("MultiGrid").props().width + + props = getProps({ ...fakeData(99, 1) }) + wrapper = shallow(<DataFrame {...props} />) + const widthWithScrollbar = wrapper.find("MultiGrid").props().width + + expect(widthWithScrollbar).toBe(normalWidth + SCROLLBAR_SIZE) + }) }) diff --git a/frontend/src/components/elements/DataFrame/DataFrameUtil.tsx b/frontend/src/components/elements/DataFrame/DataFrameUtil.tsx index 1a32fc954c0f..2dfe6c35b591 100644 --- a/frontend/src/components/elements/DataFrame/DataFrameUtil.tsx +++ b/frontend/src/components/elements/DataFrame/DataFrameUtil.tsx @@ -5,6 +5,7 @@ import { } from "lib/dataFrameProto" import { toFormattedString } from "lib/format" import { logWarning } from "lib/log" +import { scrollbarSize } from "vendor/dom-helpers" import React, { ReactElement, ComponentType } from "react" import { Map as ImmutableMap } from "immutable" import { @@ -73,6 +74,7 @@ interface ComputedWidths { elementWidth: number columnWidth: ({ index }: { index: number }) => number headerWidth: number + needsHorizontalScrollbar: boolean } const DEFAULT_HEIGHT = 300 @@ -99,15 +101,21 @@ export const getDimensions = ( const headerHeight = rowHeight * headerRows const border = 2 - let { elementWidth, columnWidth, headerWidth } = getWidths( + // Reserve enough space to render the dataframe border as well as a vertical + // scrollbar if necessary. + const availableWidth = width - border - scrollbarSize() + const widths = getWidths( cols, rows, headerCols, headerRows, - width - border, + availableWidth, cellContentsGetter ) + let { elementWidth, columnWidth, headerWidth } = widths + const { needsHorizontalScrollbar } = widths + // Add space for the "empty" text when the table is empty. const EMPTY_WIDTH = 60 // px if (dataRows === 0 && elementWidth < EMPTY_WIDTH) { @@ -122,14 +130,24 @@ export const getDimensions = ( } } + // Allocate extra space for horizontal and vertical scrollbars, if needed. + const totalHeight = rows * rowHeight + const maxHeight = height || DEFAULT_HEIGHT + + const horizScrollbarHeight = needsHorizontalScrollbar ? scrollbarSize() : 0 + height = Math.min(totalHeight + horizScrollbarHeight, maxHeight) + + const needsVerticalScrollbar = totalHeight > maxHeight + elementWidth += needsVerticalScrollbar ? scrollbarSize() : 0 + return { rowHeight, headerHeight, border, - height: Math.min(rows * rowHeight, height || DEFAULT_HEIGHT), - elementWidth, columnWidth, headerWidth, + elementWidth, + height, } } @@ -288,6 +306,7 @@ export function getWidths( } const elementWidth = Math.min(distributedTableTotal, containerWidth) + const needsHorizontalScrollbar = distributedTableTotal > containerWidth const columnWidth = ({ index }: { index: number }): number => distributedTable[index] @@ -299,5 +318,6 @@ export function getWidths( elementWidth, columnWidth, headerWidth, + needsHorizontalScrollbar, } } diff --git a/frontend/src/vendor/dom-helpers.ts b/frontend/src/vendor/dom-helpers.ts new file mode 100644 index 000000000000..4cc267a6f098 --- /dev/null +++ b/frontend/src/vendor/dom-helpers.ts @@ -0,0 +1,33 @@ +/* eslint-disable */ + +// We only need a single function from https://github.com/react-bootstrap/dom-helpers, +// so we copy it here instead of adding a new dependency. 
+ +const canUseDOM = !!( + typeof window !== "undefined" && + window.document && + window.document.createElement +) + +let size: number + +// https://github.com/react-bootstrap/dom-helpers/blob/3f509a03c5e330faa93bcf8acf30976b5a7bacac/src/scrollbarSize.ts#L5 +export function scrollbarSize(recalc?: boolean) { + if ((!size && size !== 0) || recalc) { + if (canUseDOM) { + const scrollDiv = document.createElement("div") + + scrollDiv.style.position = "absolute" + scrollDiv.style.top = "-9999px" + scrollDiv.style.width = "50px" + scrollDiv.style.height = "50px" + scrollDiv.style.overflow = "scroll" + + document.body.appendChild(scrollDiv) + size = scrollDiv.offsetWidth - scrollDiv.clientWidth + document.body.removeChild(scrollDiv) + } + } + + return size +}
It turns out the bug in #2543 appears only when a Dataframe has enough elements that a scrollbar is needed to display all the data. In the case where we need a vertical scrollbar, the width that we provide to the `MultiGrid` we're using to render our Dataframe ends up being slightly too small due to the presence of the scrollbar. This causes a horizontal scrollbar to appear. The other case is analogous. To fix this, we borrowed a helper function from [react-bootstrap/dom-helpers](https://github.com/react-bootstrap/dom-helpers) for getting the width of scrollbars, then used the helper to fix our rendering-dimension calculations to take scrollbars into account. Closes #2543
https://api.github.com/repos/streamlit/streamlit/pulls/2622
2021-01-21T02:21:21Z
2021-01-26T20:47:26Z
2021-01-26T20:47:26Z
2021-07-24T00:37:01Z
2,867
streamlit/streamlit
21,899
Update CppCoreGuidelines.md
diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md index f28366f73..a8231ae7d 100644 --- a/CppCoreGuidelines.md +++ b/CppCoreGuidelines.md @@ -2108,7 +2108,7 @@ When I call `length(s)` should I test for `s==nullptr` first? Should the impleme **Example**: - void fct(const string& s); // OK: pass by const reference; always checp + void fct(const string& s); // OK: pass by const reference; always cheap void fct2(string s); // bad: potentially expensive
Typo in example F.20
https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/100
2015-09-23T14:35:28Z
2015-09-24T14:42:28Z
2015-09-24T14:42:28Z
2015-09-24T15:06:32Z
154
isocpp/CppCoreGuidelines
15,673
[hotfix] fixed error when no collective communication in CommProfiler
diff --git a/colossalai/utils/profiler/comm_profiler.py b/colossalai/utils/profiler/comm_profiler.py index 93c72cc65237..a4f5729c97ec 100644 --- a/colossalai/utils/profiler/comm_profiler.py +++ b/colossalai/utils/profiler/comm_profiler.py @@ -93,16 +93,16 @@ def disable(self): dist.reduce = torch_reduce def to_tensorboard(self, writer): - writer.add_text(tag="Collective Communication", text_string=self.result_list("\n\n")) + writer.add_text(tag="Collective Communication", text_string=self.result_str("\n\n")) def to_file(self, filename: Path): with open(filename, "w") as f: - f.write(self.result_list()) + f.write(self.result_str()) def show(self): - print(self.result_list()) + print(self.result_str()) - def result_list(self, sep: str = "\n"): + def result_str(self, sep: str = "\n"): res = [] def append(s: str = None): @@ -114,6 +114,9 @@ def append(s: str = None): append("Warnning: there exists multiple communication operations in the same time. As a result, " "the profiling result is not accurate.") + if self.total_cuda_time == 0: + return "No collective communication has been called yet!" + append("Collective communication profiling result:") append("total cuda time: {}".format(_format_time(self.total_cuda_time))) append("average bandwidth: {}".format(_format_bandwidth(self.total_comm_vol, self.total_cuda_time))) diff --git a/colossalai/utils/profiler/pcie_profiler.py b/colossalai/utils/profiler/pcie_profiler.py index 3a9ec95b4d3f..526222941ef9 100644 --- a/colossalai/utils/profiler/pcie_profiler.py +++ b/colossalai/utils/profiler/pcie_profiler.py @@ -105,16 +105,16 @@ def disable(self): self.profiler = None def to_tensorboard(self, writer): - writer.add_text(tag="Data Transmission", text_string=self.result_list("\n\n")) + writer.add_text(tag="Data Transmission", text_string=self.result_str("\n\n")) def to_file(self, filename: Path): with open(filename, "w") as f: - f.write(self.result_list()) + f.write(self.result_str()) def show(self): - print(self.result_list()) + print(self.result_str()) - def result_list(self, sep: str = "\n"): + def result_str(self, sep: str = "\n"): res = [] def append(s: str = None):
https://api.github.com/repos/hpcaitech/ColossalAI/pulls/409
2022-03-14T08:44:26Z
2022-03-14T09:43:45Z
2022-03-14T09:43:45Z
2022-03-14T09:43:45Z
634
hpcaitech/ColossalAI
11,274
feat: Maintain the original exceptions of OpenAI and HTTPX during exception handling.
diff --git a/metagpt/utils/common.py b/metagpt/utils/common.py index c7751c2af..3102158c2 100644 --- a/metagpt/utils/common.py +++ b/metagpt/utils/common.py @@ -28,7 +28,7 @@ import aiofiles import loguru from pydantic_core import to_jsonable_python -from tenacity import RetryCallState, _utils +from tenacity import RetryCallState, RetryError, _utils from metagpt.const import MESSAGE_ROUTE_TO_ALL from metagpt.logs import logger @@ -501,7 +501,7 @@ async def wrapper(self, *args, **kwargs): self.rc.memory.delete(self.latest_observed_msg) # raise again to make it captured outside raise Exception(format_trackback_info(limit=None)) - except Exception: + except Exception as e: if self.latest_observed_msg: logger.warning( "There is a exception in role's execution, in order to resume, " @@ -510,6 +510,12 @@ async def wrapper(self, *args, **kwargs): # remove role newest observed msg to make it observed again self.rc.memory.delete(self.latest_observed_msg) # raise again to make it captured outside + if isinstance(e, RetryError): + last_error = e.last_attempt._exception + name = any_to_str(last_error) + if re.match(r"^openai\.", name) or re.match(r"^httpx\.", name): + raise last_error + raise Exception(format_trackback_info(limit=None)) return wrapper diff --git a/setup.py b/setup.py index ca8bb3980..cc8112ba9 100644 --- a/setup.py +++ b/setup.py @@ -57,7 +57,7 @@ def run(self): setup( name="metagpt", - version="0.6.5", + version="0.6.6", description="The Multi-Agent Framework", long_description=long_description, long_description_content_type="text/markdown",
**Features**

- feat: Maintain the original exceptions of OpenAI and HTTPX during exception handling.
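A minimal standalone sketch of the unwrap-and-reraise pattern this adds; `FakeOpenAIError` stands in for the real `openai`/`httpx` exception classes:

```python
from tenacity import RetryError, retry, stop_after_attempt

class FakeOpenAIError(Exception):
    """Stand-in for e.g. openai.APIConnectionError."""

@retry(stop=stop_after_attempt(2))
def flaky_call():
    raise FakeOpenAIError("rate limited")

try:
    flaky_call()
except RetryError as e:
    # Surface the original error instead of tenacity's RetryError,
    # mirroring the check added to the role-execution wrapper.
    last_error = e.last_attempt.exception()
    if isinstance(last_error, FakeOpenAIError):
        raise last_error from None
```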
https://api.github.com/repos/geekan/MetaGPT/pulls/777
2024-01-19T03:46:24Z
2024-01-19T05:45:24Z
2024-01-19T05:45:24Z
2024-03-19T03:43:42Z
457
geekan/MetaGPT
16,825
Updated sites.json
diff --git a/sherlock/resources/data.json b/sherlock/resources/data.json index 6808e24dd..d54cc9c9c 100644 --- a/sherlock/resources/data.json +++ b/sherlock/resources/data.json @@ -926,6 +926,14 @@ "username_claimed": "blue", "username_unclaimed": "noonewouldeverusethis7" }, + "Clubhouse":{ + "errorMsg":"Nothing found!", + "errorType":"message", + "url":"https://www.clubhouse.com/@{}", + "urlMain":"https://www.clubhouse.com", + "username_claimed":"waniathar", + "username_unclaimed":"noonewouldeverusethis7" + }, "Instructables": { "errorType": "status_code", "url": "https://www.instructables.com/member/{}",
Added a new social media website, "Clubhouse".
https://api.github.com/repos/sherlock-project/sherlock/pulls/1280
2022-03-22T10:42:16Z
2022-04-04T16:10:34Z
2022-04-04T16:10:34Z
2022-04-04T16:10:34Z
207
sherlock-project/sherlock
36,362
[bot] Bump submodules
diff --git a/cereal b/cereal index c54369f8ad4e0b..80e1e55f0dd71c 160000 --- a/cereal +++ b/cereal @@ -1 +1 @@ -Subproject commit c54369f8ad4e0bcb18c96feb4334755c6f65e8f1 +Subproject commit 80e1e55f0dd71cea7f596e8b80c7c33865b689f3 diff --git a/panda b/panda index 457e3b262d798a..f48fc21a17079b 160000 --- a/panda +++ b/panda @@ -1 +1 @@ -Subproject commit 457e3b262d798aa6e400033c92d12a0b0f52a7ed +Subproject commit f48fc21a17079bc04cfb3d8042fd2d67d0aac104
Automatic PR from repo-maintenance -> bump_submodules
https://api.github.com/repos/commaai/openpilot/pulls/31299
2024-02-05T12:02:56Z
2024-02-06T19:11:58Z
2024-02-06T19:11:58Z
2024-02-06T19:11:59Z
224
commaai/openpilot
9,097
Add MLJ.jl
diff --git a/README.md b/README.md index ab8bfb48..666472f1 100644 --- a/README.md +++ b/README.md @@ -643,6 +643,7 @@ Further resources: * [ScikitLearn](https://github.com/cstjean/ScikitLearn.jl) - Julia implementation of the scikit-learn API. * [Knet](https://github.com/denizyuret/Knet.jl) - Koç University Deep Learning Framework. * [Flux](https://fluxml.ai/) - Relax! Flux is the ML library that doesn't make you tensor +* [MLJ](https://github.com/alan-turing-institute/MLJ.jl) - A Julia machine learning framework <a name="julia-nlp"></a> #### Natural Language Processing
https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/697
2020-05-29T15:29:37Z
2020-05-30T17:00:12Z
2020-05-30T17:00:12Z
2020-05-30T17:13:18Z
183
josephmisiti/awesome-machine-learning
52,041
update docs
diff --git a/deploy/README.md b/deploy/README.md index 033662a753..69e2438996 100644 --- a/deploy/README.md +++ b/deploy/README.md @@ -22,9 +22,11 @@ PP-OCR has supported muti deployment schemes. Click the link to get the specific - [Python Inference](../doc/doc_en/inference_ppocr_en.md) - [C++ Inference](./cpp_infer/readme.md) -- [Serving](./pdserving/README.md) -- [Paddle-Lite](./lite/readme.md) +- [Serving (Python/C++)](./pdserving/README.md) +- [Paddle-Lite (ARM CPU/OpenCL ARM GPU/Metal ARM GPU)](./lite/readme.md) - [Paddle.js](./paddlejs/README.md) +- [Jetson Inference]() +- [XPU Inference]() - [Paddle2ONNX](./paddle2onnx/readme.md) If you need the deployment tutorial of academic algorithm models other than PP-OCR, please directly enter the main page of corresponding algorithms, [entrance](../doc/doc_en/algorithm_overview_en.md)。 \ No newline at end of file diff --git a/deploy/README_ch.md b/deploy/README_ch.md index 96b49ddd9b..63ae595373 100644 --- a/deploy/README_ch.md +++ b/deploy/README_ch.md @@ -22,9 +22,11 @@ PP-OCR模型已打通多种场景部署方案,点击链接获取具体的使 - [Python 推理](../doc/doc_ch/inference_ppocr.md) - [C++ 推理](./cpp_infer/readme_ch.md) -- [Serving 服务化部署](./pdserving/README_CN.md) -- [Paddle-Lite 端侧部署](./lite/readme_ch.md) -- [Paddle.js 服务化部署](./paddlejs/README_ch.md) +- [Serving 服务化部署(Python/C++)](./pdserving/README_CN.md) +- [Paddle-Lite 端侧部署(ARM CPU/OpenCL ARM GPU/Metal ARM GPU)](./lite/readme_ch.md) +- [Paddle.js 部署](./paddlejs/README_ch.md) +- [Jetson 推理]() +- [XPU 推理]() - [Paddle2ONNX 推理](./paddle2onnx/readme_ch.md) 需要PP-OCR以外的学术算法模型的推理部署,请直接进入相应算法主页面,[入口](../doc/doc_ch/algorithm_overview.md)。 \ No newline at end of file diff --git a/doc/doc_ch/algorithm_det_db.md b/doc/doc_ch/algorithm_det_db.md index 7f94ceaee0..90837c2ac1 100644 --- a/doc/doc_ch/algorithm_det_db.md +++ b/doc/doc_ch/algorithm_det_db.md @@ -25,8 +25,8 @@ |模型|骨干网络|配置文件|precision|recall|Hmean|下载链接| | --- | --- | --- | --- | --- | --- | --- | -|DB|ResNet50_vd|configs/det/det_r50_vd_db.yml|86.41%|78.72%|82.38%|[训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_r50_vd_db_v2.0_train.tar)| -|DB|MobileNetV3|configs/det/det_mv3_db.yml|77.29%|73.08%|75.12%|[训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_mv3_db_v2.0_train.tar)| +|DB|ResNet50_vd|[configs/det/det_r50_vd_db.yml](../../configs/det/det_r50_vd_db.yml)|86.41%|78.72%|82.38%|[训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_r50_vd_db_v2.0_train.tar)| +|DB|MobileNetV3|[configs/det/det_mv3_db.yml](../../configs/det/det_mv3_db.yml)|77.29%|73.08%|75.12%|[训练模型](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_mv3_db_v2.0_train.tar)| <a name="2"></a> diff --git a/doc/doc_ch/ppocr_introduction.md b/doc/doc_ch/ppocr_introduction.md index d9b5a4e023..2e25ebc950 100644 --- a/doc/doc_ch/ppocr_introduction.md +++ b/doc/doc_ch/ppocr_introduction.md @@ -17,6 +17,8 @@ PP-OCR是PaddleOCR自研的实用的超轻量OCR系统。在实现[前沿算法](algorithm.md)的基础上,考虑精度与速度的平衡,进行**模型瘦身**和**深度优化**,使其尽可能满足产业落地需求。 +#### PP-OCR + PP-OCR是一个两阶段的OCR系统,其中文本检测算法选用[DB](algorithm_det_db.md),文本识别算法选用[CRNN](algorithm_rec_crnn.md),并在检测和识别模块之间添加[文本方向分类器](angle_class.md),以应对不同方向的文本识别。 PP-OCR系统pipeline如下: @@ -28,9 +30,13 @@ PP-OCR系统pipeline如下: PP-OCR系统在持续迭代优化,目前已发布PP-OCR和PP-OCRv2两个版本: -[1] PP-OCR从骨干网络选择和调整、预测头部的设计、数据增强、学习率变换策略、正则化参数选择、预训练模型使用以及模型自动裁剪量化8个方面,采用19个有效策略,对各个模块的模型进行效果调优和瘦身(如绿框所示),最终得到整体大小为3.5M的超轻量中英文OCR和2.8M的英文数字OCR。更多细节请参考PP-OCR技术方案 https://arxiv.org/abs/2009.09941 
+PP-OCR从骨干网络选择和调整、预测头部的设计、数据增强、学习率变换策略、正则化参数选择、预训练模型使用以及模型自动裁剪量化8个方面,采用19个有效策略,对各个模块的模型进行效果调优和瘦身(如绿框所示),最终得到整体大小为3.5M的超轻量中英文OCR和2.8M的英文数字OCR。更多细节请参考PP-OCR技术方案 https://arxiv.org/abs/2009.09941 + +#### PP-OCRv2 + +PP-OCRv2在PP-OCR的基础上,进一步在5个方面重点优化,检测模型采用CML协同互学习知识蒸馏策略和CopyPaste数据增广策略;识别模型采用LCNet轻量级骨干网络、UDML 改进知识蒸馏策略和[Enhanced CTC loss](./doc/doc_ch/enhanced_ctc_loss.md)损失函数改进(如上图红框所示),进一步在推理速度和预测效果上取得明显提升。更多细节请参考PP-OCRv2[技术报告](https://arxiv.org/abs/2109.03144)。 -[2] PP-OCRv2在PP-OCR的基础上,进一步在5个方面重点优化,检测模型采用CML协同互学习知识蒸馏策略和CopyPaste数据增广策略;识别模型采用LCNet轻量级骨干网络、UDML 改进知识蒸馏策略和[Enhanced CTC loss](./doc/doc_ch/enhanced_ctc_loss.md)损失函数改进(如上图红框所示),进一步在推理速度和预测效果上取得明显提升。更多细节请参考PP-OCRv2[技术报告](https://arxiv.org/abs/2109.03144)。 +#### PP-OCRv3 <a name="2"></a> diff --git a/doc/doc_en/algorithm_det_db_en.md b/doc/doc_en/algorithm_det_db_en.md index b387a8ec21..f5f333a039 100644 --- a/doc/doc_en/algorithm_det_db_en.md +++ b/doc/doc_en/algorithm_det_db_en.md @@ -25,8 +25,8 @@ On the ICDAR2015 dataset, the text detection result is as follows: |Model|Backbone|Configuration|Precision|Recall|Hmean|Download| | --- | --- | --- | --- | --- | --- | --- | -|DB|ResNet50_vd|configs/det/det_r50_vd_db.yml|86.41%|78.72%|82.38%|[trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_r50_vd_db_v2.0_train.tar)| -|DB|MobileNetV3|configs/det/det_mv3_db.yml|77.29%|73.08%|75.12%|[trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_mv3_db_v2.0_train.tar)| +|DB|ResNet50_vd|[configs/det/det_r50_vd_db.yml](../../configs/det/det_r50_vd_db.yml)|86.41%|78.72%|82.38%|[trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_r50_vd_db_v2.0_train.tar)| +|DB|MobileNetV3|[configs/det/det_mv3_db.yml](../../configs/det/det_mv3_db.yml)|77.29%|73.08%|75.12%|[trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/det_mv3_db_v2.0_train.tar)| <a name="2"></a>
https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/6074
2022-04-27T11:26:08Z
2022-04-27T12:22:16Z
2022-04-27T12:22:16Z
2022-04-27T12:22:17Z
2,360
PaddlePaddle/PaddleOCR
41,863
[MRG] DOC recommend editable install using pip in contributing
diff --git a/doc/developers/contributing.rst b/doc/developers/contributing.rst index c4c0e57c936be..108142300939c 100644 --- a/doc/developers/contributing.rst +++ b/doc/developers/contributing.rst @@ -67,16 +67,20 @@ extension in place:: python setup.py build_ext --inplace -Another option is to use the ``develop`` option if you change your code a lot -and do not want to have to reinstall every time. This basically builds the -extension in place and creates a link to the development directory (see -`the setuptool docs <http://setuptools.readthedocs.io/en/latest/setuptools.html#development-mode>`_):: +Another option is to install the package in editable mode if you change your +code a lot and do not want to have to reinstall every time. This basically +builds the extension in place and creates a link to the development directory +(see `the pip docs <https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs>`_):: - python setup.py develop + pip install --editable . .. note:: - if you decide to do that you have to rerun:: + This is fundamentally similar to using the command ``python setup.py develop`` (see `the setuptool docs <http://setuptools.readthedocs.io/en/latest/setuptools.html#development-mode>`_). It is however preferred to use pip. + +.. note:: + + If you decide to do an editable install you have to rerun:: python setup.py build_ext --inplace
#### Reference Issue

None

#### What does this implement/fix? Explain your changes.

This recommends using `pip install --editable .` instead of `python setup.py develop` in the Contributing section of the documentation. It also makes the documentation more consistent as `pip install --editable .` is the command given at the end of the Advanced installation instructions.

#### Any other comments?

I also made the following note about having to build the extension in place more explicit.
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/8974
2017-06-01T22:05:49Z
2017-06-03T08:47:21Z
2017-06-03T08:47:21Z
2017-06-03T09:51:56Z
363
scikit-learn/scikit-learn
46,375
generate unclaimed username based on regex
diff --git a/requirements.txt b/requirements.txt index 7b57b0659..0bee5bfdf 100644 --- a/requirements.txt +++ b/requirements.txt @@ -7,3 +7,4 @@ stem>=1.8.0 torrequest>=0.1.0 pandas>=1.0.0 openpyxl<=3.0.10 +exrex>=0.11.0 \ No newline at end of file diff --git a/sherlock/resources/data.json b/sherlock/resources/data.json index 709e940f6..5f98c3399 100644 --- a/sherlock/resources/data.json +++ b/sherlock/resources/data.json @@ -94,6 +94,7 @@ "username_claimed": "pink" }, "AllMyLinks": { + "regexCheck": "^[a-z0-9][a-z0-9-]{2,32}$", "errorMsg": "Not Found", "errorType": "message", "url": "https://allmylinks.com/{}", diff --git a/sherlock/sherlock.py b/sherlock/sherlock.py index 9e8712437..725619501 100644 --- a/sherlock/sherlock.py +++ b/sherlock/sherlock.py @@ -28,7 +28,7 @@ from colorama import init module_name = "Sherlock: Find Usernames Across Social Networks" -__version__ = "0.14.2" +__version__ = "0.14.3" class SherlockFuturesSession(FuturesSession): diff --git a/sherlock/tests/all.py b/sherlock/tests/all.py index 4a2b78b6b..7943fab46 100644 --- a/sherlock/tests/all.py +++ b/sherlock/tests/all.py @@ -3,7 +3,7 @@ This module contains various tests. """ from tests.base import SherlockBaseTest -import secrets +import exrex class SherlockDetectTests(SherlockBaseTest): @@ -27,10 +27,7 @@ def test_detect_true_via_message(self): # Ensure that the site's detection method has not changed. self.assertEqual("message", site_data["errorType"]) - self.username_check([site_data["username_claimed"]], - [site], - exist_check=True - ) + self.username_check([site_data["username_claimed"]], [site], exist_check=True) return @@ -54,10 +51,16 @@ def test_detect_false_via_message(self): # Ensure that the site's detection method has not changed. self.assertEqual("message", site_data["errorType"]) - self.username_check([secrets.token_urlsafe(10)], - [site], - exist_check=False - ) + # Generate a valid username based on the regex for a username that the + # site supports that is *most likely* not taken. The regex is slighlty + # modified version of site_data["regexCheck"] as we want a username + # that has the maximum length that is supported by the site. This way, + # we wont generate a random username that might actually exist. This + # method is very hacky, but it does the job as having hardcoded + # usernames that dont exists will lead to people with ill intent to + # create an account with that username which will break the tests + valid_username = exrex.getone(r"^[a-z0-9][a-z0-9-]{32}$") + self.username_check([valid_username], [site], exist_check=False) return @@ -75,16 +78,13 @@ def test_detect_true_via_status_code(self): Will trigger an assert if detection mechanism did not work as expected. """ - site = "9GAG" + site = "BitBucket" site_data = self.site_data_all[site] # Ensure that the site's detection method has not changed. self.assertEqual("status_code", site_data["errorType"]) - self.username_check([site_data["username_claimed"]], - [site], - exist_check=True - ) + self.username_check([site_data["username_claimed"]], [site], exist_check=True) return @@ -102,57 +102,27 @@ def test_detect_false_via_status_code(self): Will trigger an assert if detection mechanism did not work as expected. """ - site = "9GAG" + site = "BitBucket" site_data = self.site_data_all[site] # Ensure that the site's detection method has not changed. 
self.assertEqual("status_code", site_data["errorType"]) - self.username_check([secrets.token_urlsafe(10)], - [site], - exist_check=False - ) + # Generate a valid username based on the regex for a username that the + # site supports that is *most likely* not taken. The regex is slighlty + # modified version of site_data["regexCheck"] as we want a username + # that has the maximum length that is supported by the site. This way, + # we wont generate a random username that might actually exist. This + # method is very hacky, but it does the job as having hardcoded + # usernames that dont exists will lead to people with ill intent to + # create an account with that username which will break the tests + valid_username = exrex.getone(r"^[a-zA-Z0-9-_]{30}") + self.username_check([valid_username], [site], exist_check=False) return class SherlockSiteCoverageTests(SherlockBaseTest): - def test_coverage_false_via_response_url(self): - """Test Username Does Not Exist Site Coverage (Via Response URL). - - This test checks all sites with the "response URL" detection mechanism - to ensure that a Username that does not exist is reported that way. - - Keyword Arguments: - self -- This object. - - Return Value: - Nothing. - Will trigger an assert if detection mechanism did not work as expected. - """ - - self.detect_type_check("response_url", exist_check=False) - - return - - def test_coverage_true_via_response_url(self): - """Test Username Does Exist Site Coverage (Via Response URL). - - This test checks all sites with the "response URL" detection mechanism - to ensure that a Username that does exist is reported that way. - - Keyword Arguments: - self -- This object. - - Return Value: - Nothing. - Will trigger an assert if detection mechanism did not work as expected. - """ - - self.detect_type_check("response_url", exist_check=True) - - return - def test_coverage_false_via_status(self): """Test Username Does Not Exist Site Coverage (Via HTTP Status). diff --git a/sherlock/tests/base.py b/sherlock/tests/base.py index be87ceeea..de958b9db 100644 --- a/sherlock/tests/base.py +++ b/sherlock/tests/base.py @@ -7,9 +7,8 @@ import unittest import sherlock from result import QueryStatus -from result import QueryResult from notify import QueryNotify -from sites import SitesInformation +from sites import SitesInformation import warnings @@ -26,16 +25,16 @@ def setUp(self): Nothing. """ - #This ignores the ResourceWarning from an unclosed SSLSocket. - #TODO: Figure out how to fix the code so this is not needed. + # This ignores the ResourceWarning from an unclosed SSLSocket. + # TODO: Figure out how to fix the code so this is not needed. warnings.simplefilter("ignore", ResourceWarning) - #Create object with all information about sites we are aware of. - sites = SitesInformation() + # Create object with all information about sites we are aware of. + sites = SitesInformation(data_file_path=os.path.join(os.path.dirname(__file__), "../resources/data.json")) - #Create original dictionary from SitesInformation() object. - #Eventually, the rest of the code will be updated to use the new object - #directly, but this will glue the two pieces together. + # Create original dictionary from SitesInformation() object. + # Eventually, the rest of the code will be updated to use the new object + # directly, but this will glue the two pieces together. 
site_data_all = {} for site in sites: site_data_all[site.name] = site.information @@ -44,18 +43,18 @@ def setUp(self): # Load excluded sites list, if any excluded_sites_path = os.path.join(os.path.dirname(os.path.realpath(sherlock.__file__)), "tests/.excluded_sites") try: - with open(excluded_sites_path, "r", encoding="utf-8") as excluded_sites_file: - self.excluded_sites = excluded_sites_file.read().splitlines() + with open(excluded_sites_path, "r", encoding="utf-8") as excluded_sites_file: + self.excluded_sites = excluded_sites_file.read().splitlines() except FileNotFoundError: - self.excluded_sites = [] + self.excluded_sites = [] - #Create notify object for query results. + # Create notify object for query results. self.query_notify = QueryNotify() - self.tor=False - self.unique_tor=False - self.timeout=None - self.skip_error_sites=True + self.tor = False + self.unique_tor = False + self.timeout = None + self.skip_error_sites = True return @@ -102,7 +101,7 @@ def username_check(self, username_list, site_list, exist_check=True): existence state. """ - #Filter all site data down to just what is needed for this test. + # Filter all site data down to just what is needed for this test. site_data = self.site_data_filter(site_list) if exist_check: @@ -161,8 +160,8 @@ def detect_type_check(self, detect_type, exist_check=True): existence state. """ - #Dictionary of sites that should be tested for having a username. - #This will allow us to test sites with a common username in parallel. + # Dictionary of sites that should be tested for having a username. + # This will allow us to test sites with a common username in parallel. sites_by_username = {} for site, site_data in self.site_data_all.items(): @@ -181,9 +180,9 @@ def detect_type_check(self, detect_type, exist_check=True): # Figure out which type of user if exist_check: - username = site_data.get("username_claimed") + username = site_data.get("username_claimed") else: - username = site_data.get("username_unclaimed") + username = site_data.get("username_unclaimed") # Add this site to the list of sites corresponding to this # username.
1. Generate unclaimed usernames from a regex with `exrex`. The regex couldn't be taken verbatim from `data.json`, since some patterns allow short lengths (e.g. `{1,10}`), and a randomly generated two-character username has a high chance of already existing. So I copied and hardcoded the site's regex pinned to its maximum length; see the comments in the code for more context. cc: @DenavDot, you may find this interesting :)
2. Added a regex check to AllMyLinks.
3. Use the local `data.json` for unit tests. This is more logical, as we want to test the local code.
4. Removed the old `response_url` code from the tests.
5. Fixed the formatting of the code.
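A minimal sketch of the generation step (assuming only that `exrex` is installed, per the `requirements.txt` change above); the fixed-length pattern mirrors the one hardcoded in the test rather than anything read from `data.json`:

```python
import exrex

# AllMyLinks allows "^[a-z0-9][a-z0-9-]{2,32}$"; pinning the quantifier to
# {32} always yields a 33-character name, which is very unlikely to be taken.
pattern = r"^[a-z0-9][a-z0-9-]{32}$"

username = exrex.getone(pattern)  # one random string matching the regex
assert len(username) == 33
print(username)
```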
https://api.github.com/repos/sherlock-project/sherlock/pulls/1730
2023-03-12T21:06:26Z
2023-03-12T21:13:07Z
2023-03-12T21:13:07Z
2023-03-12T21:15:13Z
2,474
sherlock-project/sherlock
36,681
Do not call deprecated datetime.utcnow() and datetime.utcfromtimestamp()
diff --git a/acme/acme/_internal/tests/fields_test.py b/acme/acme/_internal/tests/fields_test.py index a45bdc47bf0..0c2b8c4a084 100644 --- a/acme/acme/_internal/tests/fields_test.py +++ b/acme/acme/_internal/tests/fields_test.py @@ -34,7 +34,7 @@ class RFC3339FieldTest(unittest.TestCase): """Tests for acme.fields.RFC3339Field.""" def setUp(self): - self.decoded = datetime.datetime(2015, 3, 27, tzinfo=pytz.utc) + self.decoded = datetime.datetime(2015, 3, 27, tzinfo=pytz.UTC) self.encoded = '2015-03-27T00:00:00Z' def test_default_encoder(self): diff --git a/acme/acme/fields.py b/acme/acme/fields.py index bcd0346d895..2ff5da419aa 100644 --- a/acme/acme/fields.py +++ b/acme/acme/fields.py @@ -34,7 +34,7 @@ class RFC3339Field(jose.Field): Handles decoding/encoding between RFC3339 strings and aware (not naive) `datetime.datetime` objects - (e.g. ``datetime.datetime.now(pytz.utc)``). + (e.g. ``datetime.datetime.now(pytz.UTC)``). """ diff --git a/certbot-ci/certbot_integration_tests/utils/pebble_ocsp_server.py b/certbot-ci/certbot_integration_tests/utils/pebble_ocsp_server.py index 72520bd8bb7..f01332fedd9 100755 --- a/certbot-ci/certbot_integration_tests/utils/pebble_ocsp_server.py +++ b/certbot-ci/certbot_integration_tests/utils/pebble_ocsp_server.py @@ -5,6 +5,7 @@ """ import datetime import http.server as BaseHTTPServer +import pytz import re from typing import cast from typing import Union @@ -54,7 +55,7 @@ def do_POST(self) -> None: else: data = response.json() - now = datetime.datetime.utcnow() + now = datetime.datetime.now(pytz.UTC) cert = x509.load_pem_x509_certificate(data['Certificate'].encode(), default_backend()) if data['Status'] != 'Revoked': ocsp_status = ocsp.OCSPCertStatus.GOOD diff --git a/certbot/CHANGELOG.md b/certbot/CHANGELOG.md index 20cc0fb251d..7eae5a0a368 100644 --- a/certbot/CHANGELOG.md +++ b/certbot/CHANGELOG.md @@ -17,7 +17,7 @@ Certbot adheres to [Semantic Versioning](https://semver.org/). ### Fixed -* +* Do not call deprecated datetime.utcnow() and datetime.utcfromtimestamp() More details about these changes can be found on our GitHub repo. 
diff --git a/certbot/certbot/_internal/cert_manager.py b/certbot/certbot/_internal/cert_manager.py index c7205c30490..0b6115d26d7 100644 --- a/certbot/certbot/_internal/cert_manager.py +++ b/certbot/certbot/_internal/cert_manager.py @@ -301,7 +301,7 @@ def human_readable_cert_info(config: configuration.NamespaceConfig, cert: storag return None if config.domains and not set(config.domains).issubset(cert.names()): return None - now = pytz.UTC.fromutc(datetime.datetime.utcnow()) + now = datetime.datetime.now(pytz.UTC) reasons = [] if cert.is_test_cert: diff --git a/certbot/certbot/_internal/storage.py b/certbot/certbot/_internal/storage.py index 7ddc5e9296c..a535ac3365f 100644 --- a/certbot/certbot/_internal/storage.py +++ b/certbot/certbot/_internal/storage.py @@ -1023,7 +1023,7 @@ def should_autorenew(self) -> bool: interval = self.configuration.get("renew_before_expiry", default_interval) expiry = crypto_util.notAfter(self.version( "cert", self.latest_common_version())) - now = pytz.UTC.fromutc(datetime.datetime.utcnow()) + now = datetime.datetime.now(pytz.UTC) if expiry < add_time_interval(now, interval): logger.debug("Should renew, less than %s before certificate " "expiry %s.", interval, diff --git a/certbot/certbot/_internal/tests/cert_manager_test.py b/certbot/certbot/_internal/tests/cert_manager_test.py index c8483efa72f..d5e3b2cc5cf 100644 --- a/certbot/certbot/_internal/tests/cert_manager_test.py +++ b/certbot/certbot/_internal/tests/cert_manager_test.py @@ -254,7 +254,7 @@ def test_report_human_readable(self, mock_revoked, mock_serial): import pytz from certbot._internal import cert_manager - expiry = pytz.UTC.fromutc(datetime.datetime.utcnow()) + expiry = datetime.datetime.now(pytz.UTC) cert = mock.MagicMock(lineagename="nameone") cert.target_expiry = expiry diff --git a/certbot/certbot/_internal/tests/main_test.py b/certbot/certbot/_internal/tests/main_test.py index b2b715ea8ba..c7e8d21676c 100644 --- a/certbot/certbot/_internal/tests/main_test.py +++ b/certbot/certbot/_internal/tests/main_test.py @@ -2081,13 +2081,12 @@ class ReportNewCertTest(unittest.TestCase): """ def setUp(self): - from datetime import datetime self.notify_patch = mock.patch('certbot._internal.main.display_util.notify') self.mock_notify = self.notify_patch.start() self.notafter_patch = mock.patch('certbot._internal.main.crypto_util.notAfter') self.mock_notafter = self.notafter_patch.start() - self.mock_notafter.return_value = datetime.utcfromtimestamp(0) + self.mock_notafter.return_value = datetime.datetime(1970, 1, 1, 0, 0) def tearDown(self): self.notify_patch.stop() diff --git a/certbot/certbot/_internal/tests/ocsp_test.py b/certbot/certbot/_internal/tests/ocsp_test.py index 0d1404fcb4f..86abf8bf49e 100644 --- a/certbot/certbot/_internal/tests/ocsp_test.py +++ b/certbot/certbot/_internal/tests/ocsp_test.py @@ -65,7 +65,7 @@ def test_init(self, mock_exists, mock_run, mock_log): @mock.patch('certbot.ocsp.crypto_util.notAfter') @mock.patch('certbot.util.run_script') def test_ocsp_revoked(self, mock_run, mock_na, mock_determine): - now = pytz.UTC.fromutc(datetime.utcnow()) + now = datetime.now(pytz.UTC) cert_obj = mock.MagicMock() cert_obj.cert_path = "x" cert_obj.chain_path = "y" @@ -138,7 +138,7 @@ def setUp(self): self.cert_obj = mock.MagicMock() self.cert_obj.cert_path = self.cert_path self.cert_obj.chain_path = self.chain_path - now = pytz.UTC.fromutc(datetime.utcnow()) + now = datetime.now(pytz.UTC) self.mock_notAfter = mock.patch('certbot.ocsp.crypto_util.notAfter', return_value=now + 
timedelta(hours=2)) self.mock_notAfter.start() @@ -324,8 +324,8 @@ def _construct_mock_ocsp_response(certificate_status, response_status): responder_name=responder.subject, certificates=[responder], hash_algorithm=hashes.SHA1(), - next_update=datetime.now() + timedelta(days=1), - this_update=datetime.now() - timedelta(days=1), + next_update=datetime.now(pytz.UTC).replace(tzinfo=None) + timedelta(days=1), + this_update=datetime.now(pytz.UTC).replace(tzinfo=None) - timedelta(days=1), signature_algorithm_oid=x509.oid.SignatureAlgorithmOID.RSA_WITH_SHA1, ) diff --git a/certbot/certbot/_internal/tests/storage_test.py b/certbot/certbot/_internal/tests/storage_test.py index 3b407547803..020a7062aa4 100644 --- a/certbot/certbot/_internal/tests/storage_test.py +++ b/certbot/certbot/_internal/tests/storage_test.py @@ -481,8 +481,8 @@ def test_time_interval_judgments(self, mock_datetime, mock_set_by_user): (1420070400, "10 weeks", True), (1420070400, "10 months", True), (1420070400, "10 years", True), (1420070400, "99 months", True), ]: - sometime = datetime.datetime.utcfromtimestamp(current_time) - mock_datetime.datetime.utcnow.return_value = sometime + sometime = datetime.datetime.fromtimestamp(current_time, pytz.UTC) + mock_datetime.datetime.now.return_value = sometime self.test_rc.configuration["renew_before_expiry"] = interval assert self.test_rc.should_autorenew() == result @@ -739,10 +739,10 @@ def test_add_time_interval(self): from certbot._internal import storage # this month has 30 days, and the next year is a leap year - time_1 = pytz.UTC.fromutc(datetime.datetime(2003, 11, 20, 11, 59, 21)) + time_1 = datetime.datetime(2003, 11, 20, 11, 59, 21, tzinfo=pytz.UTC) # this month has 31 days, and the next year is not a leap year - time_2 = pytz.UTC.fromutc(datetime.datetime(2012, 10, 18, 21, 31, 16)) + time_2 = datetime.datetime(2012, 10, 18, 21, 31, 16, tzinfo=pytz.UTC) # in different time zone (GMT+8) time_3 = pytz.timezone('Asia/Shanghai').fromutc( diff --git a/certbot/certbot/ocsp.py b/certbot/certbot/ocsp.py index 8f558eb7b5c..a24f04f0da8 100644 --- a/certbot/certbot/ocsp.py +++ b/certbot/certbot/ocsp.py @@ -78,7 +78,7 @@ def ocsp_revoked_by_paths(self, cert_path: str, chain_path: str, timeout: int = # Let's Encrypt doesn't update OCSP for expired certificates, # so don't check OCSP if the cert is expired. # https://github.com/certbot/certbot/issues/7152 - now = pytz.UTC.fromutc(datetime.utcnow()) + now = datetime.now(pytz.UTC) if crypto_util.notAfter(cert_path) <= now: return False @@ -233,7 +233,8 @@ def _check_ocsp_response(response_ocsp: 'ocsp.OCSPResponse', request_ocsp: 'ocsp # for OpenSSL, so we do not do it here. # See OpenSSL implementation as a reference: # https://github.com/openssl/openssl/blob/ef45aa14c5af024fcb8bef1c9007f3d1c115bd85/crypto/ocsp/ocsp_cl.c#L338-L391 - now = datetime.utcnow() # thisUpdate/nextUpdate are expressed in UTC/GMT time zone + # thisUpdate/nextUpdate are expressed in UTC/GMT time zone + now = datetime.now(pytz.UTC).replace(tzinfo=None) if not response_ocsp.this_update: raise AssertionError('param thisUpdate is not set.') if response_ocsp.this_update > now + timedelta(minutes=5): diff --git a/pytest.ini b/pytest.ini index 672e2c0542d..8d06655fff4 100644 --- a/pytest.ini +++ b/pytest.ini @@ -26,7 +26,11 @@ # It is also is used in sphinxcontrib-devhelp 1.0.2 which as of writing this # is the latest version of that library. See # https://github.com/sphinx-doc/sphinxcontrib-devhelp/blob/1.0.2/setup.py#L69. 
+# 6) Ignore DeprecationWarning from using pkg_resources API # 7) Ignore our own PendingDeprecationWarning about Python 3.7 soon to be dropped. +# 8) Ignore DeprecationWarning for datetime.utcfromtimestamp() triggered +# when importing the pytz.tzinfo module +# https://github.com/stub42/pytz/issues/105 filterwarnings = error ignore:decodestring\(\) is a deprecated alias:DeprecationWarning:dns @@ -34,4 +38,6 @@ filterwarnings = ignore:'urllib3.contrib.pyopenssl:DeprecationWarning:requests_toolbelt ignore:update_symlinks is deprecated:PendingDeprecationWarning ignore:.*declare_namespace\(':DeprecationWarning + ignore:pkg_resources is deprecated as an API:DeprecationWarning:pkg_resources ignore:Python 3.7 support will be dropped:PendingDeprecationWarning + ignore:datetime.utcfromtimestamp\(\) is deprecated:DeprecationWarning:pytz.tzinfo
With Python 3.12:
~~~
# python3
Python 3.12.0b4 (main, Jul 12 2023, 00:00:00) [GCC 13.1.1 20230614 (Red Hat 13.1.1-4)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import datetime
>>> datetime.datetime.utcnow()
<stdin>:1: DeprecationWarning: datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.now(datetime.UTC).
datetime.datetime(2023, 7, 15, 23, 20, 12, 672120)
>>> datetime.datetime.utcfromtimestamp(0)
<stdin>:1: DeprecationWarning: datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.fromtimestamp(timestamp, datetime.UTC).
datetime.datetime(1970, 1, 1, 0, 0)
~~~

## Pull Request Checklist

- [ ] The Certbot team has recently expressed interest in reviewing a PR for this. If not, this PR may be closed due to our limited resources and need to prioritize how we spend them.
- [x] If the change being made is to a [distributed component](https://certbot.eff.org/docs/contributing.html#code-components-and-layout), edit the `master` section of `certbot/CHANGELOG.md` to include a description of the change being made.
- [ ] Add or update any documentation as needed to support the changes in this PR.
- [ ] Include your name in `AUTHORS.md` if you like.
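For illustration only (not part of the patch), the warning-free equivalents look like this; `datetime.timezone.utc` is interchangeable with `pytz.UTC` for these calls:

```python
from datetime import datetime, timezone

# Replacements for the two deprecated calls shown above:
now = datetime.now(timezone.utc)                 # was datetime.utcnow()
epoch = datetime.fromtimestamp(0, timezone.utc)  # was datetime.utcfromtimestamp(0)

# Aware and naive datetimes don't compare, which is why the ocsp.py hunk
# strips the tzinfo before comparing against cryptography's naive fields:
naive_now = now.replace(tzinfo=None)
```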
https://api.github.com/repos/certbot/certbot/pulls/9735
2023-07-15T23:23:53Z
2023-07-18T22:44:25Z
2023-07-18T22:44:25Z
2023-07-19T03:21:49Z
3,111
certbot/certbot
2,552
Fix print() and xrange() for Python 3
diff --git a/certbot-compatibility-test/certbot_compatibility_test/test_driver.py b/certbot-compatibility-test/certbot_compatibility_test/test_driver.py index 71a0ba57443..2c6c917b329 100644 --- a/certbot-compatibility-test/certbot_compatibility_test/test_driver.py +++ b/certbot-compatibility-test/certbot_compatibility_test/test_driver.py @@ -10,6 +10,8 @@ import OpenSSL +from six.moves import xrange # pylint: disable=import-error,redefined-builtin + from acme import challenges from acme import crypto_util from acme import messages diff --git a/certbot-compatibility-test/certbot_compatibility_test/validator.py b/certbot-compatibility-test/certbot_compatibility_test/validator.py index 0fd6efab5d3..791fe0da28a 100644 --- a/certbot-compatibility-test/certbot_compatibility_test/validator.py +++ b/certbot-compatibility-test/certbot_compatibility_test/validator.py @@ -5,6 +5,7 @@ import zope.interface import six +from six.moves import xrange # pylint: disable=import-error,redefined-builtin from acme import crypto_util from acme import errors as acme_errors diff --git a/certbot-compatibility-test/nginx/roundtrip.py b/certbot-compatibility-test/nginx/roundtrip.py index 852221df5cc..85d283c7842 100644 --- a/certbot-compatibility-test/nginx/roundtrip.py +++ b/certbot-compatibility-test/nginx/roundtrip.py @@ -8,7 +8,7 @@ def roundtrip(stuff): success = True for t in stuff: - print t + print(t) if not os.path.isfile(t): continue with open(t, "r") as f: diff --git a/letsencrypt-auto-source/tests/auto_test.py b/letsencrypt-auto-source/tests/auto_test.py index d187452a125..c5109e20839 100644 --- a/letsencrypt-auto-source/tests/auto_test.py +++ b/letsencrypt-auto-source/tests/auto_test.py @@ -18,6 +18,7 @@ from unittest import TestCase from pytest import mark +from six.moves import xrange # pylint: disable=redefined-builtin @mark.skip diff --git a/tools/simple_http_server.py b/tools/simple_http_server.py index 26bf231b7bf..14ac9a3d37c 100755 --- a/tools/simple_http_server.py +++ b/tools/simple_http_server.py @@ -14,7 +14,7 @@ def serve_forever(port=0): """ server = HTTPServer(('', port), SimpleHTTPRequestHandler) - print 'Serving HTTP on {0} port {1} ...'.format(*server.server_address) + print('Serving HTTP on {0} port {1} ...'.format(*server.server_address)) sys.stdout.flush() server.serve_forever()
* __print()__ is a function in modern Python
* __from six.moves import xrange__ because __xrange()__ was removed in Python 3

$ __python3 -m flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics__
```
./tools/simple_http_server.py:17:44: E999 SyntaxError: invalid syntax
    print 'Serving HTTP on {0} port {1} ...'.format(*server.server_address)
                                           ^
./letsencrypt-auto-source/tests/auto_test.py:84:17: F821 undefined name 'xrange'
        for port in xrange(4443, 4543):
                ^
./certbot-compatibility-test/certbot_compatibility_test/validator.py:69:40: F821 undefined name 'xrange'
        return response.status_code in xrange(300, 309)
                                       ^
./certbot-compatibility-test/certbot_compatibility_test/test_driver.py:59:14: F821 undefined name 'xrange'
    for i in xrange(len(responses)):
             ^
./certbot-compatibility-test/nginx/roundtrip.py:11:15: E999 SyntaxError: invalid syntax
            print t
              ^
2     E999 SyntaxError: invalid syntax
3     F821 undefined name 'xrange'
5
```
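A side-by-side sketch of the two fixes (illustrative snippet, not from the diff); `six.moves.xrange` resolves to `range` on Python 3 and `xrange` on Python 2:

```python
from six.moves import xrange  # range on Py3, xrange on Py2

# Py2-only statement syntax:  print 'Serving HTTP on {0} ...'.format(addr)
# The function call works on both interpreters:
print('Serving HTTP on {0} port {1} ...'.format('0.0.0.0', 8080))

for port in xrange(4443, 4543):  # lazy range on either interpreter
    pass
```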
https://api.github.com/repos/certbot/certbot/pulls/5590
2018-02-17T10:11:25Z
2018-03-14T16:37:30Z
2018-03-14T16:37:30Z
2018-03-14T16:42:53Z
678
certbot/certbot
1,282
feat (login) : login banner updates
diff --git a/src/sentry/templates/sentry/partial/alerts.html b/src/sentry/templates/sentry/partial/alerts.html index 43ac3d5e5526c..f2b3a8420b65b 100644 --- a/src/sentry/templates/sentry/partial/alerts.html +++ b/src/sentry/templates/sentry/partial/alerts.html @@ -78,9 +78,9 @@ <div class="alert-banner"> <div class="alert-message"> {% if banner_choice == 0 %} - Find out why we think declarative programming is the future for mobile apps. &nbsp<a target="_blank" href="https://blog.sentry.io/2022/12/07/mobile-the-future-is-declarative/?utm_medium=banner&utm_source=sentry&utm_campaign=login-declarative&utm_content=login-banner&utm_term=">Learn more</a>. + With Suspect Commits via Git Blame, Sentry identifies you who introduced the exact line of code that triggered an error. &nbsp<a target="_blank" href="https://blog.sentry.io/2022/12/06/suspect-commits-via-git-blame/?utm_medium=banner&utm_source=sentry-app&utm_campaign=login-suspect-commits-gitblame">Learn more</a>. {% elif banner_choice == 1 %} - Have you noticed improved JavaScript stack traces? Learn how Sentry used JavaScript parsing to improve stack traces. &nbsp<a target="_blank" href="https://blog.sentry.io/2022/11/30/how-we-made-javascript-stack-traces-awesome/?utm_medium=banner&utm_source=sentry&utm_campaign=login-js-parsing&utm_content=login-banner&utm_term=">Read more</a>. + Interested Jetpack Compose for your Android app? Join our upcoming Jetpack Compose livestream to learn how to get started. &nbsp<a target="_blank" href="https://sentry.io/resources/jetpack-compose-ama/?utm_medium=banner&utm_source=sentry-app&utm_campaign=login-jetpack-compose-2023">Register now</a>. {% endif %} </div> </div>
This PR updates the login banners to highlight a Jetpack Compose webinar and a GitBlame blog.
https://api.github.com/repos/getsentry/sentry/pulls/43804
2023-01-27T19:32:46Z
2023-01-27T19:57:11Z
2023-01-27T19:57:11Z
2023-02-12T00:02:20Z
461
getsentry/sentry
44,612
Optimization for shell sort
diff --git a/sorts/shell_sort.py b/sorts/shell_sort.py index bf3c2c7f9cc6..2e749e43d056 100644 --- a/sorts/shell_sort.py +++ b/sorts/shell_sort.py @@ -1,13 +1,5 @@ """ -This is a pure Python implementation of the shell sort algorithm - -For doctests run following command: -python -m doctest -v shell_sort.py -or -python3 -m doctest -v shell_sort.py - -For manual testing run: -python shell_sort.py +https://en.wikipedia.org/wiki/Shellsort#Pseudocode """ @@ -19,26 +11,29 @@ def shell_sort(collection): >>> shell_sort([0, 5, 3, 2, 2]) [0, 2, 2, 3, 5] - >>> shell_sort([]) [] - >>> shell_sort([-2, -5, -45]) [-45, -5, -2] """ # Marcin Ciura's gap sequence - gaps = [701, 301, 132, 57, 23, 10, 4, 1] + gaps = [701, 301, 132, 57, 23, 10, 4, 1] for gap in gaps: for i in range(gap, len(collection)): + insert_value = collection[i] j = i - while j >= gap and collection[j] < collection[j - gap]: - collection[j], collection[j - gap] = collection[j - gap], collection[j] + while j >= gap and collection[j - gap] > insert_value: + collection[j] = collection[j - gap] j -= gap + collection[j] = insert_value return collection if __name__ == "__main__": + from doctest import testmod + + testmod() user_input = input("Enter numbers separated by a comma:\n").strip() unsorted = [int(item) for item in user_input.split(",")] print(shell_sort(unsorted))
### **Describe your change:**

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?

### **Checklist:**

* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in their comments that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
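For readers comparing the two inner loops in the diff above, here is the optimization in isolation: the value being placed is held in `insert_value` while larger elements shift right, so each slot is written once instead of being swapped repeatedly (a standalone sketch, not the repository file):

```python
def gap_insert(collection, gap):
    """One gapped insertion pass, as in the patched shell_sort."""
    for i in range(gap, len(collection)):
        insert_value = collection[i]
        j = i
        # Shift instead of swap: roughly half the writes per inner loop.
        while j >= gap and collection[j - gap] > insert_value:
            collection[j] = collection[j - gap]
            j -= gap
        collection[j] = insert_value

data = [19, 2, 31, 45, 6, 11, 121, 27]
for g in (4, 1):  # the final gap of 1 is a plain insertion sort
    gap_insert(data, g)
assert data == sorted(data)
```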
https://api.github.com/repos/TheAlgorithms/Python/pulls/4038
2020-12-16T10:47:37Z
2020-12-18T09:39:52Z
2020-12-18T09:39:52Z
2020-12-19T00:27:30Z
474
TheAlgorithms/Python
30,215
Switch MPT model to load zero shot in default scenario
diff --git a/fastchat/model/model_adapter.py b/fastchat/model/model_adapter.py index d542526246..55741e1189 100644 --- a/fastchat/model/model_adapter.py +++ b/fastchat/model/model_adapter.py @@ -621,7 +621,11 @@ def get_default_conv_template(self, model_path: str) -> Conversation: elif "mpt-30b-instruct" in model_path: return get_conv_template("mpt-30b-instruct") else: - raise ValueError(f"Unknown MPT model: {model_path}") + print( + "Warning: Loading base MPT model with `zero_shot` conversation configuration. " + "If this is not desired, inspect model configurations and names." + ) + return get_conv_template("zero_shot") class BaizeAdapter(BaseModelAdapter):
## Why are these changes needed?

With Peft models added, it's likely others (myself included) will be training adapters on top of base models. Loading the base model should therefore fall back to some reasonable conversation template. This change ensures any MPT Peft model loads the `zero_shot` conversation template rather than crashing.

## Related issue number (if applicable)

## Checks

- [x] I've run `format.sh` to lint the changes in this PR.
- [x] I've included any doc changes needed.
- [x] I've made sure the relevant tests are passing (if applicable).
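A rough sketch of the resulting fallback path, assuming `get_conv_template` is importable from `fastchat.conversation` (it is the helper `model_adapter.py` calls internally); the prompt text is made up:

```python
from fastchat.conversation import get_conv_template  # assumed import path

# What a base "mpt-7b" checkpoint now falls back to instead of raising:
conv = get_conv_template("zero_shot")
conv.append_message(conv.roles[0], "Summarize: LoRA adapters fine-tune base models.")
conv.append_message(conv.roles[1], None)
print(conv.get_prompt())
```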
https://api.github.com/repos/lm-sys/FastChat/pulls/1896
2023-07-08T07:21:31Z
2023-07-08T09:13:39Z
2023-07-08T09:13:39Z
2023-07-08T21:48:27Z
185
lm-sys/FastChat
41,173
`is_coco` list fix
diff --git a/test.py b/test.py index cbc97b42015..a38298da54d 100644 --- a/test.py +++ b/test.py @@ -78,7 +78,7 @@ def test(data, with open(data) as f: data = yaml.safe_load(f) check_dataset(data) # check - is_coco = data['val'].endswith('coco/val2017.txt') # COCO dataset + is_coco = type(data['val']) is str and data['val'].endswith('coco/val2017.txt') # COCO dataset nc = 1 if single_cls else int(data['nc']) # number of classes iouv = torch.linspace(0.5, 0.95, 10).to(device) # iou vector for [email protected]:0.95 niou = iouv.numel()
## 🛠️ PR Summary

<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)</sub>

### 🌟 Summary

Improved robustness in dataset type detection for COCO validation.

### 📊 Key Changes

- Enhanced the check for the COCO dataset by verifying the data type is a string before checking if the filepath ends with 'coco/val2017.txt'.

### 🎯 Purpose & Impact

- The purpose of the change is to prevent potential errors when `data['val']` isn't a string but another data type, such as a list. This makes the COCO dataset detection more reliable.
- Users will benefit from reduced errors during test-time dataset checks, leading to smoother validation processes when using COCO or other datasets in their object detection tasks.
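A tiny repro of the failure mode the guard prevents (synthetic paths): `data['val']` may be a list of files, and lists have no `endswith`:

```python
val = ["data/part1.txt", "data/part2.txt"]  # val can legitimately be a list

# Old check: val.endswith('coco/val2017.txt') -> AttributeError on a list.
# New check short-circuits on the type test:
is_coco = type(val) is str and val.endswith("coco/val2017.txt")
assert is_coco is False
```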
https://api.github.com/repos/ultralytics/yolov5/pulls/3646
2021-06-16T17:46:28Z
2021-06-16T20:56:16Z
2021-06-16T20:56:16Z
2024-01-19T17:37:08Z
208
ultralytics/yolov5
25,648
AIRFLOW-5126 Read aws_session_token in extra_config of the aws hook
diff --git a/airflow/contrib/hooks/aws_hook.py b/airflow/contrib/hooks/aws_hook.py index 714c8a883ccc1..cd763277af359 100644 --- a/airflow/contrib/hooks/aws_hook.py +++ b/airflow/contrib/hooks/aws_hook.py @@ -126,17 +126,21 @@ def _get_credentials(self, region_name): external_id = extra_config.get('external_id') aws_account_id = extra_config.get('aws_account_id') aws_iam_role = extra_config.get('aws_iam_role') + if 'aws_session_token' in extra_config and aws_session_token is None: + aws_session_token = extra_config['aws_session_token'] - if role_arn is None and aws_account_id is not None and \ - aws_iam_role is not None: + if role_arn is None and aws_account_id is not None and aws_iam_role is not None: role_arn = "arn:aws:iam::{}:role/{}" \ .format(aws_account_id, aws_iam_role) if role_arn is not None: + sts_session = boto3.session.Session( aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key, - region_name=region_name) + region_name=region_name, + aws_session_token=aws_session_token + ) sts_client = sts_session.client('sts') diff --git a/docs/howto/connection/aws.rst b/docs/howto/connection/aws.rst index 5688011204f98..f55c58058c366 100644 --- a/docs/howto/connection/aws.rst +++ b/docs/howto/connection/aws.rst @@ -57,6 +57,7 @@ Extra (optional) * ``host``: Endpoint URL for the connection * ``region_name``: AWS region for the connection * ``role_arn``: AWS role ARN for the connection + * ``aws_session_token``: AWS session token if you use external credentials. You are responsible for renewing these. Example "extras" field: diff --git a/tests/contrib/hooks/test_aws_hook.py b/tests/contrib/hooks/test_aws_hook.py index 959704df0aa41..21183098edd19 100644 --- a/tests/contrib/hooks/test_aws_hook.py +++ b/tests/contrib/hooks/test_aws_hook.py @@ -17,7 +17,6 @@ # specific language governing permissions and limitations # under the License. 
# - import unittest import boto3 @@ -51,7 +50,6 @@ def test_get_client_type_returns_a_boto3_client_of_the_requested_type(self): @unittest.skipIf(mock_dynamodb2 is None, 'mock_dynamo2 package not present') @mock_dynamodb2 def test_get_resource_type_returns_a_boto3_resource_of_the_requested_type(self): - hook = AwsHook(aws_conn_id='aws_default') resource_from_hook = hook.get_resource_type('dynamodb') @@ -113,9 +111,24 @@ def test_get_session_returns_a_boto3_session(self): self.assertEqual(table.item_count, 0) @mock.patch.object(AwsHook, 'get_connection') - def test_get_credentials_from_login(self, mock_get_connection): + def test_get_credentials_from_login_with_token(self, mock_get_connection): mock_connection = Connection(login='aws_access_key_id', - password='aws_secret_access_key') + password='aws_secret_access_key', + extra='{"aws_session_token": "test_token"}' + ) + mock_get_connection.return_value = mock_connection + hook = AwsHook() + credentials_from_hook = hook.get_credentials() + self.assertEqual(credentials_from_hook.access_key, 'aws_access_key_id') + self.assertEqual(credentials_from_hook.secret_key, 'aws_secret_access_key') + self.assertEqual(credentials_from_hook.token, 'test_token') + + @mock.patch.object(AwsHook, 'get_connection') + def test_get_credentials_from_login_without_token(self, mock_get_connection): + mock_connection = Connection(login='aws_access_key_id', + password='aws_secret_access_key', + ) + mock_get_connection.return_value = mock_connection hook = AwsHook() credentials_from_hook = hook.get_credentials() @@ -124,10 +137,24 @@ def test_get_credentials_from_login(self, mock_get_connection): self.assertIsNone(credentials_from_hook.token) @mock.patch.object(AwsHook, 'get_connection') - def test_get_credentials_from_extra(self, mock_get_connection): + def test_get_credentials_from_extra_with_token(self, mock_get_connection): + mock_connection = Connection( + extra='{"aws_access_key_id": "aws_access_key_id",' + '"aws_secret_access_key": "aws_secret_access_key",' + ' "aws_session_token": "session_token"}' + ) + mock_get_connection.return_value = mock_connection + hook = AwsHook() + credentials_from_hook = hook.get_credentials() + self.assertEqual(credentials_from_hook.access_key, 'aws_access_key_id') + self.assertEqual(credentials_from_hook.secret_key, 'aws_secret_access_key') + self.assertEquals(credentials_from_hook.token, 'session_token') + + @mock.patch.object(AwsHook, 'get_connection') + def test_get_credentials_from_extra_without_token(self, mock_get_connection): mock_connection = Connection( extra='{"aws_access_key_id": "aws_access_key_id",' - '"aws_secret_access_key": "aws_secret_access_key"}' + '"aws_secret_access_key": "aws_secret_access_key"}' ) mock_get_connection.return_value = mock_connection hook = AwsHook()
### Description

Read a temporary session token when it is present. This is important if you don't manage the session token through `Airflow` but instead use something like [vault](https://www.vaultproject.io/docs/secrets/aws/index.html) to manage these credentials.

### Tests

Adjusted the existing test to also parse the session token; no other impact.
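A hedged sketch of a connection carrying an externally issued token, mirroring the test fixtures in this diff (the import path and token value are illustrative):

```python
from airflow.models import Connection  # path may vary across Airflow versions

conn = Connection(
    login="aws_access_key_id",
    password="aws_secret_access_key",
    # The hook now forwards this to boto3; renewing the token before it
    # expires (e.g. via Vault) remains the caller's responsibility.
    extra='{"aws_session_token": "FQoGZXIvYXdzE...truncated..."}',
)
```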
https://api.github.com/repos/apache/airflow/pulls/6303
2019-10-11T06:27:40Z
2019-10-16T15:22:08Z
2019-10-16T15:22:08Z
2019-10-16T15:22:08Z
1,255
apache/airflow
14,152
bpo-38631: Avoid Py_FatalError() in init_slotdefs()
diff --git a/Include/internal/pycore_pylifecycle.h b/Include/internal/pycore_pylifecycle.h index 72923498decd08..2dd6149a6b3d35 100644 --- a/Include/internal/pycore_pylifecycle.h +++ b/Include/internal/pycore_pylifecycle.h @@ -51,6 +51,7 @@ extern int _PyFloat_Init(void); extern PyStatus _Py_HashRandomization_Init(const PyConfig *); extern PyStatus _PyTypes_Init(void); +extern PyStatus _PyTypes_InitSlotDefs(void); extern PyStatus _PyImportZip_Init(PyThreadState *tstate); extern PyStatus _PyGC_Init(PyThreadState *tstate); diff --git a/Objects/object.c b/Objects/object.c index 14533dba16d64a..7a6b653327e0bd 100644 --- a/Objects/object.c +++ b/Objects/object.c @@ -6,6 +6,7 @@ #include "pycore_initconfig.h" #include "pycore_object.h" #include "pycore_pyerrors.h" +#include "pycore_pylifecycle.h" #include "pycore_pystate.h" #include "frameobject.h" #include "interpreteridobject.h" @@ -1842,6 +1843,11 @@ PyObject _Py_NotImplementedStruct = { PyStatus _PyTypes_Init(void) { + PyStatus status = _PyTypes_InitSlotDefs(); + if (_PyStatus_EXCEPTION(status)) { + return status; + } + #define INIT_TYPE(TYPE, NAME) \ do { \ if (PyType_Ready(TYPE) < 0) { \ diff --git a/Objects/typeobject.c b/Objects/typeobject.c index 720363410ceb1e..0e781d0453854b 100644 --- a/Objects/typeobject.c +++ b/Objects/typeobject.c @@ -2,6 +2,7 @@ #include "Python.h" #include "pycore_call.h" +#include "pycore_initconfig.h" #include "pycore_object.h" #include "pycore_pyerrors.h" #include "pycore_pystate.h" @@ -6932,7 +6933,8 @@ which incorporates the additional structures used for numbers, sequences and mappings. Note that multiple names may map to the same slot (e.g. __eq__, __ne__ etc. all map to tp_richcompare) and one name may map to multiple slots (e.g. __str__ affects tp_str as well as tp_repr). The table is terminated with -an all-zero entry. (This table is further initialized in init_slotdefs().) +an all-zero entry. (This table is further initialized in +_PyTypes_InitSlotDefs().) */ typedef struct wrapperbase slotdef; @@ -7423,28 +7425,29 @@ update_slots_callback(PyTypeObject *type, void *data) static int slotdefs_initialized = 0; /* Initialize the slotdefs table by adding interned string objects for the names. */ -static void -init_slotdefs(void) +PyStatus +_PyTypes_InitSlotDefs(void) { - slotdef *p; + if (slotdefs_initialized) { + return _PyStatus_OK(); + } - if (slotdefs_initialized) - return; - for (p = slotdefs; p->name; p++) { + for (slotdef *p = slotdefs; p->name; p++) { /* Slots must be ordered by their offset in the PyHeapTypeObject. */ assert(!p[1].name || p->offset <= p[1].offset); p->name_strobj = PyUnicode_InternFromString(p->name); - if (!p->name_strobj || !PyUnicode_CHECK_INTERNED(p->name_strobj)) - Py_FatalError("Out of memory interning slotdef names"); + if (!p->name_strobj || !PyUnicode_CHECK_INTERNED(p->name_strobj)) { + return _PyStatus_NO_MEMORY(); + } } slotdefs_initialized = 1; + return _PyStatus_OK(); } -/* Undo init_slotdefs, releasing the interned strings. */ +/* Undo _PyTypes_InitSlotDefs(), releasing the interned strings. 
*/ static void clear_slotdefs(void) { - slotdef *p; - for (p = slotdefs; p->name; p++) { + for (slotdef *p = slotdefs; p->name; p++) { Py_CLEAR(p->name_strobj); } slotdefs_initialized = 0; @@ -7462,7 +7465,7 @@ update_slot(PyTypeObject *type, PyObject *name) assert(PyUnicode_CheckExact(name)); assert(PyUnicode_CHECK_INTERNED(name)); - init_slotdefs(); + assert(slotdefs_initialized); pp = ptrs; for (p = slotdefs; p->name; p++) { if (p->name_strobj == name) @@ -7490,7 +7493,7 @@ fixup_slot_dispatchers(PyTypeObject *type) { slotdef *p; - init_slotdefs(); + assert(slotdefs_initialized); for (p = slotdefs; p->name; ) p = update_one_slot(type, p); } @@ -7503,7 +7506,7 @@ update_all_slots(PyTypeObject* type) /* Clear the VALID_VERSION flag of 'type' and all its subclasses. */ PyType_Modified(type); - init_slotdefs(); + assert(slotdefs_initialized); for (p = slotdefs; p->name; p++) { /* update_slot returns int but can't actually fail */ update_slot(type, p->name_strobj); @@ -7663,7 +7666,7 @@ add_operators(PyTypeObject *type) PyObject *descr; void **ptr; - init_slotdefs(); + assert(slotdefs_initialized); for (p = slotdefs; p->name; p++) { if (p->wrapper == NULL) continue;
Rename init_slotdefs() to _PyTypes_InitSlotDefs() and add a return value of type PyStatus. The function is now called exactly once from _PyTypes_Init(). Replace calls to init_slotdefs() with an assertion checking that slotdefs is initialized.

https://bugs.python.org/issue38631
https://api.github.com/repos/python/cpython/pulls/18263
2020-01-29T23:54:51Z
2020-01-30T08:02:15Z
2020-01-30T08:02:15Z
2020-01-30T08:02:18Z
1,347
python/cpython
4,135
skyrock website added
diff --git a/sherlock/resources/data.json b/sherlock/resources/data.json index 84135fef2..36aeefceb 100644 --- a/sherlock/resources/data.json +++ b/sherlock/resources/data.json @@ -2141,6 +2141,13 @@ "username_claimed": "red", "username_unclaimed": "noonewouldeverusethis7" }, + "skyrock": { + "errorType": "status_code", + "url": "https://{}.skyrock.com/", + "urlMain": "https://skyrock.com/", + "username_claimed": "red", + "username_unclaimed": "noonewouldeverusethis7" + }, "social.tchncs.de": { "errorType": "status_code", "url": "https://social.tchncs.de/@{}", @@ -2176,4 +2183,4 @@ "username_claimed": "blue", "username_unclaimed": "noonewouldeverusethis7" } -} \ No newline at end of file +} diff --git a/sites.md b/sites.md index 98e89f5e0..39c102890 100644 --- a/sites.md +++ b/sites.md @@ -1,4 +1,4 @@ -## List Of Supported Sites (287 Sites In Total!) +## List Of Supported Sites (288 Sites In Total!) 1. [2Dimensions](https://2Dimensions.com/) 1. [3dnews](http://forum.3dnews.ru/) 1. [7Cups](https://www.7cups.com/) @@ -169,6 +169,7 @@ 1. [Scribd](https://www.scribd.com/) 1. [ShitpostBot5000](https://www.shitpostbot.com/) 1. [Signal](https://community.signalusers.org) +1. [Skyrock](https://skyrock.com/) 1. [Slack](https://slack.com) 1. [Slashdot](https://slashdot.org) 1. [SlideShare](https://slideshare.net/)
https://api.github.com/repos/sherlock-project/sherlock/pulls/1127
2021-08-27T18:28:49Z
2022-04-05T17:59:19Z
2022-04-05T17:59:19Z
2022-04-05T17:59:19Z
482
sherlock-project/sherlock
36,238
[Windows|Unix] Use double quotes compatible with both Windows and Linux
diff --git a/tools/_venv_common.py b/tools/_venv_common.py index ecd438f94c4..5408427734a 100755 --- a/tools/_venv_common.py +++ b/tools/_venv_common.py @@ -156,7 +156,7 @@ def main(venv_name, venv_args, args): new_environ['PATH'] = os.pathsep.join([get_venv_bin_path(venv_name), new_environ['PATH']]) subprocess_with_print('python {0}'.format('./letsencrypt-auto-source/pieces/pipstrap.py'), env=new_environ, shell=True) - subprocess_with_print("python -m pip install --upgrade 'setuptools>=30.3'", + subprocess_with_print('python -m pip install --upgrade "setuptools>=30.3"', env=new_environ, shell=True) subprocess_with_print('python {0} {1}'.format('./tools/pip_install.py', ' '.join(args)), env=new_environ, shell=True)
File `_venv_common.py` uses single quotes to ask `pip` to install `setuptools>=30.3`. Using single quotes to enclose a string is not supported by the Windows shell (Batch). This PR replaces these single quotes with double quotes, which are supported on both Windows and Linux.
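The difference is easy to reproduce outside the venv script (illustrative commands, using the same `shell=True` style as `_venv_common.py`): cmd.exe does not treat single quotes as quoting characters, so the quotes reach pip literally and the `>` is parsed as output redirection:

```python
import subprocess

# Double quotes are honored by both cmd.exe and POSIX shells, and they
# protect ">=" from being parsed as redirection:
subprocess.check_call('python -m pip install --upgrade "setuptools>=30.3"',
                      shell=True)

# Broken on Windows: cmd.exe keeps the single quotes and treats ">" as
# redirection to a file.
# subprocess.check_call("python -m pip install --upgrade 'setuptools>=30.3'",
#                       shell=True)
```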
https://api.github.com/repos/certbot/certbot/pulls/6553
2018-12-04T13:38:26Z
2019-01-07T20:00:02Z
2019-01-07T20:00:02Z
2019-01-09T16:25:05Z
224
certbot/certbot
438
labels.jpg class names
diff --git a/train.py b/train.py index dcb89a3c199..005fdf60c02 100644 --- a/train.py +++ b/train.py @@ -203,7 +203,7 @@ def train(hyp, opt, device, tb_writer=None, wandb=None): # cf = torch.bincount(c.long(), minlength=nc) + 1. # frequency # model._initialize_biases(cf.to(device)) if plots: - plot_labels(labels, save_dir, loggers) + plot_labels(labels, names, save_dir, loggers) if tb_writer: tb_writer.add_histogram('classes', c, 0) diff --git a/utils/plots.py b/utils/plots.py index aa9a1cab81f..47e7b7b74f1 100644 --- a/utils/plots.py +++ b/utils/plots.py @@ -269,7 +269,7 @@ def plot_study_txt(path='', x=None): # from utils.plots import *; plot_study_tx plt.savefig(str(Path(path).name) + '.png', dpi=300) -def plot_labels(labels, save_dir=Path(''), loggers=None): +def plot_labels(labels, names=(), save_dir=Path(''), loggers=None): # plot dataset labels print('Plotting labels... ') c, b = labels[:, 0], labels[:, 1:].transpose() # classes, boxes @@ -286,7 +286,12 @@ def plot_labels(labels, save_dir=Path(''), loggers=None): matplotlib.use('svg') # faster ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel() ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8) - ax[0].set_xlabel('classes') + ax[0].set_ylabel('instances') + if 0 < len(names) < 30: + ax[0].set_xticks(range(len(names))) + ax[0].set_xticklabels(names, rotation=90, fontsize=10) + else: + ax[0].set_xlabel('classes') sns.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9) sns.histplot(x, x='width', y='height', ax=ax[3], bins=50, pmax=0.9)
## 🛠️ PR Summary

<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)</sub>

### 🌟 Summary

Enhanced label plotting with class names in YOLOv5 training.

### 📊 Key Changes

- `plot_labels` function now accepts `names` as an additional parameter.
- Class names can be displayed on the x-axis of the histogram within `plot_labels`.
- Conditional logic added to show class names if there are fewer than 30 classes.

### 🎯 Purpose & Impact

- **Purpose:** Improve the usability of the model training process by helping users better understand the distribution of their dataset's classes through visualization.
- **Impact:** Users will benefit from more informative plots during training, making it easier to identify potential class imbalances or labeling errors. 📈🔍
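A condensed, standalone version of the new tick logic with synthetic data (matplotlib/numpy only; names and counts are made up):

```python
import matplotlib.pyplot as plt
import numpy as np

names = ["person", "car", "dog"]            # class names from the dataset
c = np.random.randint(0, len(names), 500)   # fake label class indices

ax = plt.subplots(figsize=(8, 8))[1]
ax.hist(c, bins=np.linspace(0, len(names), len(names) + 1) - 0.5, rwidth=0.8)
ax.set_ylabel("instances")
if 0 < len(names) < 30:                     # named ticks only stay readable
    ax.set_xticks(range(len(names)))        # for a small class count
    ax.set_xticklabels(names, rotation=90, fontsize=10)
else:
    ax.set_xlabel("classes")
plt.savefig("labels.jpg", dpi=200)
```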
https://api.github.com/repos/ultralytics/yolov5/pulls/2454
2021-03-13T06:09:39Z
2021-03-13T06:15:42Z
2021-03-13T06:15:42Z
2024-01-19T19:12:03Z
558
ultralytics/yolov5
24,845
Add book, Fighting Churn With Data
diff --git a/books.md b/books.md index 8dd4788e..0b3a8625 100644 --- a/books.md +++ b/books.md @@ -45,6 +45,7 @@ The following is a list of free and/or open source books on machine learning, st - [Foundations of Machine Learning](https://cs.nyu.edu/~mohri/mlbook/) - Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar - [Understanding Machine Learning](http://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/) - Shai Shalev-Shwartz and Shai Ben-David - [How Machine Learning Works](https://www.manning.com/books/how-machine-learning-works) - Mostafa Samir. Early access book that intorduces machine learning from both practical and theoretical aspects in a non-threating way. +- [Fighting Churn With Data](https://www.manning.com/books/fighting-churn-with-data) [Free Chapter] Carl Gold - Hands on course in applied data science in Python and SQL, taught through the use case of customer churn. ## Deep Learning
Hi, I added my new book from Manning Publications: "Fighting Churn With Data". Currently 5 out of 10 chapters are available in the ebook. Let me know if you want a free copy to check out. -carl24k
https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/657
2019-12-22T21:48:18Z
2020-01-27T14:56:06Z
2020-01-27T14:56:06Z
2020-01-27T14:56:06Z
256
josephmisiti/awesome-machine-learning
51,742
Rationalise challenge and port selection flags #3343 [see #3466]
diff --git a/certbot/auth_handler.py b/certbot/auth_handler.py index a9473457202..dcde3f9a7a5 100644 --- a/certbot/auth_handler.py +++ b/certbot/auth_handler.py @@ -33,14 +33,16 @@ class AuthHandler(object): and values are :class:`acme.messages.AuthorizationResource` :ivar list achalls: DV challenges in the form of :class:`certbot.achallenges.AnnotatedChallenge` + :ivar list pref_challs: A list of user specified preferred challenges """ - def __init__(self, auth, acme, account): + def __init__(self, auth, acme, account, pref_challs): self.auth = auth self.acme = acme self.account = account self.authzr = dict() + self.pref_challs = pref_challs # List must be used to keep responses straight. self.achalls = [] @@ -246,6 +248,14 @@ def _get_chall_pref(self, domain): """ # Make sure to make a copy... chall_prefs = [] + plugin_pref = self.auth.get_chall_pref(domain) + if self.pref_challs: + out = [pref for pref in self.pref_challs if pref in plugin_pref] + if out: + return out + else: + raise errors.AuthorizationError( + "None of the selected challenges are supported by the selected plugins") chall_prefs.extend(self.auth.get_chall_pref(domain)) return chall_prefs diff --git a/certbot/cli.py b/certbot/cli.py index 46ff74cd087..a0cd9b173bb 100644 --- a/certbot/cli.py +++ b/certbot/cli.py @@ -13,6 +13,8 @@ import certbot +from acme import challenges + from certbot import constants from certbot import crypto_util from certbot import errors @@ -844,6 +846,13 @@ def prepare_and_parse_args(plugins, args, detect_defaults=False): # pylint: dis "security", "--strict-permissions", action="store_true", help="Require that all configuration files are owned by the current " "user; only needed if your config is somewhere unsafe like /tmp/") + helpful.add( + "security", "--preferred-challenges", dest="pref_chall", + action=_PrefChallAction, default=[], + help="Specify which challenges you'd prefer to use. If any of those " + "challenges are valid for your authenticator they will be used. " + "Otherwise Certbot will not attempt authorization. The first " + "challenge listed that is supported by the plugin will be used.") helpful.add( "renew", "--pre-hook", help="Command to be run in a shell before obtaining any certificates." @@ -1032,3 +1041,34 @@ def add_domains(args_or_config, domains): args_or_config.domains.append(domain) return validated_domains + +class _PrefChallAction(argparse.Action): + """Action class for parsing preferred challenges.""" + + def __call__(self, parser, namespace, pref_chall, option_string=None): + """Just wrap add_pref_challs in argparseese.""" + _ = add_pref_challs(namespace, pref_chall) + +def add_pref_challs(namespace, pref_challs): + """Parses and validates user specified challenge types. + + Adds challenges (in order) to the configuration object. 
+ + :param namespace: parsed command line arguments + :type namespace: argparse.Namespace or + configuration.NamespaceConfig + :param str pref_challs: one or more comma separated challenge types + + :returns: Challenge objects which match the validated string inputs + :rtype: `list` + """ + challs = pref_challs.split(",") + unrecognized = [name for name in challs if name not in challenges.Challenge.TYPES] + if unrecognized: + raise argparse.ArgumentTypeError( + "Unrecognized challenges: {0}".format(", ".join(unrecognized))) + + out = [challenges.Challenge.TYPES[name] for name in challs] + print(namespace) + namespace.pref_chall.extend(out) + return out diff --git a/certbot/client.py b/certbot/client.py index ef59c6ce3b8..66e90bb1f37 100644 --- a/certbot/client.py +++ b/certbot/client.py @@ -192,7 +192,7 @@ def __init__(self, config, account_, auth, installer, acme=None): if auth is not None: self.auth_handler = auth_handler.AuthHandler( - auth, self.acme, self.account) + auth, self.acme, self.account, self.config.pref_chall) else: self.auth_handler = None diff --git a/certbot/plugins/standalone.py b/certbot/plugins/standalone.py index 97aca351a47..c00f3005285 100644 --- a/certbot/plugins/standalone.py +++ b/certbot/plugins/standalone.py @@ -3,6 +3,7 @@ import collections import logging import socket +import sys import threading import OpenSSL @@ -119,6 +120,8 @@ def supported_challenges_validator(data): It should be passed as `type` argument to `add_argument`. """ + sys.stderr.write("WARNING: The standalone specific supported challenges flag is depricated") + sys.stderr.write("\nPlease use the --preferred-challenges flag instead.\n") challs = data.split(",") # tls-sni-01 was dvsni during private beta @@ -177,7 +180,7 @@ def __init__(self, *args, **kwargs): @classmethod def add_parser_arguments(cls, add): add("supported-challenges", - help="Supported challenges. Preferred in the order they are listed.", + help=argparse.SUPPRESS, type=supported_challenges_validator, default=",".join(chall.typ for chall in SUPPORTED_CHALLENGES)) diff --git a/certbot/tests/auth_handler_test.py b/certbot/tests/auth_handler_test.py index fce130f7c43..e6e2445d9fe 100644 --- a/certbot/tests/auth_handler_test.py +++ b/certbot/tests/auth_handler_test.py @@ -24,7 +24,7 @@ def setUp(self): from certbot.auth_handler import AuthHandler # Account is mocked... - self.handler = AuthHandler(None, None, mock.Mock(key="mock_key")) + self.handler = AuthHandler(None, None, mock.Mock(key="mock_key"), []) self.dom = "test" self.handler.authzr[self.dom] = acme_util.gen_authzr( @@ -74,7 +74,7 @@ def setUp(self): self.mock_net = mock.MagicMock(spec=acme_client.Client) self.handler = AuthHandler( - self.mock_auth, self.mock_net, self.mock_account) + self.mock_auth, self.mock_net, self.mock_account, []) logging.disable(logging.CRITICAL) @@ -189,7 +189,7 @@ def setUp(self): # Account and network are mocked... 
self.mock_net = mock.MagicMock() self.handler = AuthHandler( - None, self.mock_net, mock.Mock(key="mock_key")) + None, self.mock_net, mock.Mock(key="mock_key"), []) self.doms = ["0", "1", "2"] self.handler.authzr[self.doms[0]] = acme_util.gen_authzr( diff --git a/certbot/tests/cli_test.py b/certbot/tests/cli_test.py index 2c6e3270515..d011be9574c 100644 --- a/certbot/tests/cli_test.py +++ b/certbot/tests/cli_test.py @@ -1035,7 +1035,6 @@ def test_text_mode_when_verbose(self): namespace = parse(short_args) self.assertTrue(namespace.text_mode) - class DetermineAccountTest(unittest.TestCase): """Tests for certbot.cli._determine_account."""
resolves issue #3343
https://api.github.com/repos/certbot/certbot/pulls/3457
2016-08-26T20:12:41Z
2016-09-22T21:15:30Z
2016-09-22T21:15:30Z
2016-09-29T21:47:06Z
1,830
certbot/certbot
2,919
Fix #26755 by ensuring that the first nic in the nic list is primary
diff --git a/lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py b/lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py index a073d4958c679f..dd082cbd075b08 100644 --- a/lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py +++ b/lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py @@ -819,7 +819,8 @@ def exec_module(self, **kwargs): if set(current_nics) != set(network_interfaces): self.log('CHANGED: virtual machine {0} - network interfaces are different.'.format(self.name)) differences.append('Network Interfaces') - updated_nics = [dict(id=id) for id in network_interfaces] + updated_nics = [dict(id=id, primary=(i is 0)) + for i, id in enumerate(network_interfaces)] vm_dict['properties']['networkProfile']['networkInterfaces'] = updated_nics changed = True @@ -928,7 +929,8 @@ def exec_module(self, **kwargs): if not self.short_hostname: self.short_hostname = self.name - nics = [self.compute_models.NetworkInterfaceReference(id=id) for id in network_interfaces] + nics = [self.compute_models.NetworkInterfaceReference(id=id, primary=(i is 0)) + for i, id in enumerate(network_interfaces)] # os disk if self.managed_disk_type: @@ -1057,9 +1059,8 @@ def exec_module(self, **kwargs): self.log("Update virtual machine {0}".format(self.name)) self.results['actions'].append('Updated VM {0}'.format(self.name)) - - nics = [self.compute_models.NetworkInterfaceReference(id=interface['id']) - for interface in vm_dict['properties']['networkProfile']['networkInterfaces']] + nics = [self.compute_models.NetworkInterfaceReference(id=interface['id'], primary=(i is 0)) + for i, interface in enumerate(vm_dict['properties']['networkProfile']['networkInterfaces'])] # os disk if not vm_dict['properties']['storageProfile']['osDisk'].get('managedDisk'): diff --git a/test/integration/targets/azure_rm_virtualmachine/tasks/virtualmachine.yml b/test/integration/targets/azure_rm_virtualmachine/tasks/virtualmachine.yml index 775909cc379f68..c2505331743c16 100644 --- a/test/integration/targets/azure_rm_virtualmachine/tasks/virtualmachine.yml +++ b/test/integration/targets/azure_rm_virtualmachine/tasks/virtualmachine.yml @@ -1,9 +1,14 @@ -- name: Delete virtual machine +- name: Delete virtual machines azure_rm_virtualmachine: resource_group: "{{ resource_group }}" - name: testvm002 + name: "{{ vms }}" state: absent vm_size: Standard_A0 + loop: + - testvm002 + - testvm003 + loop_control: + loop_var: vms register: output - name: Create storage account name @@ -59,7 +64,7 @@ priority: 110 direction: Inbound -- name: Create NIC +- name: Create NIC for single nic VM azure_rm_networkinterface: resource_group: "{{ resource_group }}" name: testvm001 @@ -68,7 +73,7 @@ public_ip_name: testvm001 security_group: testvm001 -- name: Create virtual machine +- name: Create virtual machine with a single NIC register: output azure_rm_virtualmachine: resource_group: "{{ resource_group }}" @@ -173,7 +178,7 @@ - "azure_vm.powerstate in ['starting', 'running']" - output.changed -- name: Should be idempotent +- name: Should be idempotent with a single NIC azure_rm_virtualmachine: resource_group: "{{ resource_group }}" name: testvm002 @@ -251,6 +256,82 @@ state: absent vm_size: Standard_A0 +- name: Create NICs for dual nic VM + azure_rm_networkinterface: + resource_group: "{{ resource_group }}" + name: "{{ item }}" + virtual_network: testvm001 + subnet: testvm001 + security_group: testvm001 + loop: + - testvm011 + - testvm012 + +- name: Create virtual machine with two NICs + register: output + vars: + niclist: + - testvm011 + - 
testvm012 + azure_rm_virtualmachine: + resource_group: "{{ resource_group }}" + name: testvm003 + vm_size: Standard_A0 + storage_account: "{{ storage_account }}" + storage_container: testvm001 + storage_blob: testvm003.vhd + admin_username: adminuser + admin_password: Password123! + short_hostname: testvm + os_type: Linux + network_interfaces: "{{ niclist }}" + availability_set: "avbs{{ resource_group | hash('md5') | truncate(7, True, '') }}" + image: + offer: UbuntuServer + publisher: Canonical + sku: 16.04-LTS + version: latest + +- assert: + that: + - azure_vm.properties.availabilitySet.id + +- name: Should be idempotent with a dual NICs + vars: + niclist: + - testvm011 + - testvm012 + azure_rm_virtualmachine: + resource_group: "{{ resource_group }}" + name: testvm003 + vm_size: Standard_A0 + storage_account: "{{ storage_account }}" + storage_container: testvm001 + storage_blob: testvm003.vhd + admin_username: adminuser + admin_password: Password123! + short_hostname: testvm + os_type: Linux + network_interfaces: "{{ niclist }}" + availability_set: "avbs{{ resource_group | hash('md5') | truncate(7, True, '') }}" + image: + offer: UbuntuServer + publisher: Canonical + sku: 16.04-LTS + version: latest + register: output + +- assert: + that: not output.changed + +- name: Delete dual NIC VM + azure_rm_virtualmachine: + resource_group: "{{ resource_group }}" + name: testvm003 + state: absent + vm_size: Standard_A0 + register: output + # TODO: Until we have a module to create/delete images this is the best tests # I can do - name: assert error thrown with invalid image dict
… set to True, and all other nics have primary set to False.

##### SUMMARY

Fixes #26755 by ensuring that the first nic in the nic list has primary set as True, and all other nics have primary set to False.

##### ISSUE TYPE

- Bugfix Pull Request

##### COMPONENT NAME

azure_rm_virtualmachine

##### ANSIBLE VERSION

```
2.6.0
```
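The core of the fix, extracted (sketch only); note the patch spells the index test `i is 0`, which happens to work in CPython thanks to small-integer caching, but `i == 0` is the portable comparison:

```python
network_interfaces = ["nic-id-a", "nic-id-b"]  # hypothetical NIC ids

# The first NIC in the list becomes primary; every other NIC explicitly is not.
updated_nics = [dict(id=nic_id, primary=(i == 0))  # patch uses `i is 0`
                for i, nic_id in enumerate(network_interfaces)]

assert updated_nics[0]["primary"] and not updated_nics[1]["primary"]
```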
https://api.github.com/repos/ansible/ansible/pulls/38994
2018-04-19T05:18:04Z
2018-04-26T06:16:26Z
2018-04-26T06:16:26Z
2019-04-27T00:39:09Z
1,498
ansible/ansible
49,146
[youtube] Add debug message for SAPISID cookie extraction
diff --git a/yt_dlp/extractor/youtube.py b/yt_dlp/extractor/youtube.py index 5ef59f680db..47f3fb804be 100644 --- a/yt_dlp/extractor/youtube.py +++ b/yt_dlp/extractor/youtube.py @@ -518,13 +518,15 @@ def _generate_sapisidhash_header(self, origin='https://www.youtube.com'): yt_cookies = self._get_cookies('https://www.youtube.com') sapisid_cookie = dict_get( yt_cookies, ('__Secure-3PAPISID', 'SAPISID')) - if sapisid_cookie is None: + if sapisid_cookie is None or not sapisid_cookie.value: return time_now = round(time.time()) # SAPISID cookie is required if not already present if not yt_cookies.get('SAPISID'): + self.write_debug('Copying __Secure-3PAPISID cookie to SAPISID cookie', only_once=True) self._set_cookie( '.youtube.com', 'SAPISID', sapisid_cookie.value, secure=True, expire_time=time_now + 3600) + self.write_debug('Extracted SAPISID cookie', only_once=True) # SAPISIDHASH algorithm from https://stackoverflow.com/a/32065323 sapisidhash = hashlib.sha1( f'{time_now} {sapisid_cookie.value} {origin}'.encode('utf-8')).hexdigest()
### Before submitting a *pull request* make sure you have:

- [x] At least skimmed through [adding new extractor tutorial](https://github.com/ytdl-org/youtube-dl#adding-support-for-a-new-site) and [youtube-dl coding conventions](https://github.com/ytdl-org/youtube-dl#youtube-dl-coding-conventions) sections
- [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)

### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:

- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)

### What is the purpose of your *pull request*?

- [ ] Bug fix
- [x] Improvement
- [ ] New extractor
- [ ] New feature

### Description of your *pull request* and other information

Adds a debug message when extracting the SAPISID cookie.
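For reference, the SAPISIDHASH construction this debug message sits next to, distilled from the diff and the Stack Overflow answer it cites (the cookie value and the `SAPISIDHASH <time>_<hash>` header shape follow that answer; treat both as assumptions):

```python
import hashlib
import time

def sapisidhash(sapisid: str, origin: str = "https://www.youtube.com") -> str:
    time_now = round(time.time())
    digest = hashlib.sha1(
        f"{time_now} {sapisid} {origin}".encode("utf-8")).hexdigest()
    return f"SAPISIDHASH {time_now}_{digest}"

print(sapisidhash("placeholder-cookie-value"))
```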
https://api.github.com/repos/yt-dlp/yt-dlp/pulls/540
2021-07-21T20:36:38Z
2021-07-21T20:45:05Z
2021-07-21T20:45:05Z
2022-03-04T06:03:41Z
337
yt-dlp/yt-dlp
7,926
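The SAPISIDHASH value this debug path concerns is computed exactly as in the diff: a SHA-1 over the current Unix time, the SAPISID cookie value, and the origin. A standalone sketch of that computation, with a placeholder cookie value:

```python
import hashlib
import time

def sapisidhash(sapisid_value, origin="https://www.youtube.com"):
    # SHA-1 over "<unix time> <SAPISID cookie value> <origin>", as in the diff.
    time_now = round(time.time())
    digest = hashlib.sha1(
        f"{time_now} {sapisid_value} {origin}".encode("utf-8")
    ).hexdigest()
    return time_now, digest

# "EXAMPLE_SAPISID" is a placeholder; a real value comes from browser cookies.
now, digest = sapisidhash("EXAMPLE_SAPISID")
print(now, digest)
```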
asyncio: Use dict instead of OrderedDict
diff --git a/Lib/asyncio/base_events.py b/Lib/asyncio/base_events.py index cec47ce67f3824..36fe7e0076c969 100644 --- a/Lib/asyncio/base_events.py +++ b/Lib/asyncio/base_events.py @@ -1187,7 +1187,7 @@ async def create_datagram_endpoint(self, protocol_factory, (local_addr, remote_addr)), ) else: # join address by (family, protocol) - addr_infos = collections.OrderedDict() + addr_infos = {} # Using order preserving dict for idx, addr in ((0, local_addr), (1, remote_addr)): if addr is not None: assert isinstance(addr, tuple) and len(addr) == 2, (
https://api.github.com/repos/python/cpython/pulls/11710
2019-01-31T07:33:22Z
2019-02-05T08:04:41Z
2019-02-05T08:04:41Z
2019-02-05T08:04:44Z
176
python/cpython
4,619
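The replacement is safe because plain dicts have preserved insertion order in CPython since 3.6, and as a language guarantee since 3.7, which is the only property `collections.OrderedDict` was providing at this call site. A quick demonstration:

```python
# On Python 3.7+, a plain dict keeps keys in insertion order,
# so it can stand in for collections.OrderedDict here.
addr_infos = {}
addr_infos[("inet", "dgram")] = ["local"]
addr_infos[("inet6", "dgram")] = ["remote"]

assert list(addr_infos) == [("inet", "dgram"), ("inet6", "dgram")]
```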
MOD: close subprocess and its file handles when reset
diff --git a/interpreter/code_interpreters/subprocess_code_interpreter.py b/interpreter/code_interpreters/subprocess_code_interpreter.py index af6e111b2..33aeadf87 100644 --- a/interpreter/code_interpreters/subprocess_code_interpreter.py +++ b/interpreter/code_interpreters/subprocess_code_interpreter.py @@ -35,6 +35,8 @@ def preprocess_code(self, code): def terminate(self): self.process.terminate() + self.proc.stdin.close() + self.proc.stdout.close() def start_process(self): if self.process:
### Describe the changes you have made:

Problem I encountered: When I use the same Interpreter instance to complete multiple code-writing tasks in a row within a process, I try to use Interpreter.reset() to reset the context and Interpreter state between each task. However, after several runs, I ran into the problem of too many open file handles.

Solution I provide: So in this PR, Interpreter.reset closes the child process and its open file handles.

### Reference any relevant issue (Fixes #000)

- [x] I have performed a self-review of my code:

### I have tested the code on the following OS:
- [x] Windows
- [x] MacOS
- [x] Linux

### AI Language Model (if applicable)
- [x] GPT4
- [x] GPT3
- [x] Llama 7B
- [x] Llama 13B
- [x] Llama 34B
- [x] Huggingface model (Please specify which one)
https://api.github.com/repos/OpenInterpreter/open-interpreter/pulls/476
2023-09-22T11:26:33Z
2023-11-15T05:02:34Z
2023-11-15T05:02:34Z
2023-11-15T10:46:43Z
132
OpenInterpreter/open-interpreter
40,874
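A hedged sketch of the cleanup pattern this change is after: terminate the child, explicitly close its pipe file objects so repeated start/reset cycles do not accumulate open descriptors, and reap the child. The sketch uses a single local `process` handle throughout, and the sample command is illustrative.

```python
import subprocess
import sys

def terminate_and_close(process):
    """Terminate a child process and release its pipe file handles."""
    process.terminate()
    # Close the pipes explicitly; otherwise each reset leaks
    # open file descriptors in the parent process.
    for stream in (process.stdin, process.stdout, process.stderr):
        if stream is not None:
            stream.close()
    process.wait()  # reap the child so it doesn't linger as a zombie

process = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
)
terminate_and_close(process)
```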
Question answers
diff --git a/README.md b/README.md index ef085ff86..ebdf1cdfd 100644 --- a/README.md +++ b/README.md @@ -3011,6 +3011,12 @@ Should be `x=2` <details> <summary>How to store the output of a command in a variable?</summary><br><b> + +``` +OUTPUT=$(ls -1) +echo "${OUTPUT}" +``` +[Source](https://stackoverflow.com/questions/4651437/how-do-i-set-a-variable-to-the-output-of-a-command-in-bash) </b></details> <details> @@ -3512,6 +3518,13 @@ execution or run forever, you may want to run them in the background instead of <details> <summary>How can you find how much memory a specific process consumes?</summary><br><b> +<code> +mem() +{ + ps -eo rss,pid,euser,args:100 --sort %mem | grep -v grep | grep -i $@ | awk '{printf $1/1024 "MB"; $1=""; print }' +} +</code> +[Source](https://stackoverflow.com/questions/3853655/in-linux-how-to-tell-how-much-memory-processes-are-using) </b></details> <details> @@ -13438,6 +13451,9 @@ In Copyleft, any derivative work must use the same licensing while in permissive <details> <summary>What is faster than RAM?</summary><br><b> + +CPU cache. +[Source](https://www.enterprisestorageforum.com/hardware/cache-memory/) </b></details> <details> diff --git a/scripts/random_question.py b/scripts/random_question.py new file mode 100644 index 000000000..691125d7c --- /dev/null +++ b/scripts/random_question.py @@ -0,0 +1,50 @@ +import random +import optparse + + +def main(): + """ Reads through README.md for question/answer pairs and adds them to a list to randomly select from and quiz yourself. + - supports skipping quesitons with no documented answer with the -s flag + """ + parser = optparse.OptionParser() + parser.add_option("-s", "--skip", action="store_true",help="skips questions without an answer.", default=False) + options, args = parser.parse_args() + + with open('README.md', 'r') as f: + text = f.read() + + questions = [] + + while True: + question_start = text.find('<summary>') + 9 + question_end = text.find('</summary>') + answer_end = text.find('</b></details>') + + if answer_end == -1: + break + + question = text[question_start: question_end].replace('<br>', '').replace('<b>', '') + answer = text[question_end + 17: answer_end] + questions.append((question, answer)) + text = text[answer_end + 1:] + + num_questions = len(questions) + + while True: + try: + question, answer = questions[random.randint(0, num_questions)] + + if options.skip and not answer.strip(): + continue + + if input(f'Q: {question} ...Show answer? "y" for yes: ').lower() == 'y': + print('A: ', answer) + + except KeyboardInterrupt: + break + + print("\nGoodbye! See you next time.") + + +if __name__ == '__main__': + main()
3 question answers.
https://api.github.com/repos/bregman-arie/devops-exercises/pulls/165
2021-10-20T21:05:06Z
2021-10-21T04:16:31Z
2021-10-21T04:16:31Z
2021-10-21T04:16:31Z
820
bregman-arie/devops-exercises
17,660
infra: update to pathspec for 'git grep' in lint check
diff --git a/Makefile b/Makefile index 5e66e6c07fb85d..20271ade0b038b 100644 --- a/Makefile +++ b/Makefile @@ -50,7 +50,7 @@ lint lint_package lint_tests: poetry run ruff docs templates cookbook poetry run ruff format docs templates cookbook --diff poetry run ruff --select I docs templates cookbook - git grep 'from langchain import' {docs/docs,templates,cookbook} | grep -vE 'from langchain import (hub)' && exit 1 || exit 0 + git grep 'from langchain import' docs/docs templates cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0 format format_diff: poetry run ruff format docs templates cookbook
**Description:** Update to the pathspec for 'git grep' in the lint check in the Makefile.

**Issue:** The pathspec `{docs/docs,templates,cookbook}` is not handled correctly, leading to an error during 'make lint': "fatal: ambiguous argument '{docs/docs,templates,cookbook}': unknown revision or path not in the working tree." See changes made in https://github.com/langchain-ai/langchain/pull/18058.
https://api.github.com/repos/langchain-ai/langchain/pulls/18178
2024-02-27T09:51:45Z
2024-03-01T22:03:46Z
2024-03-01T22:03:46Z
2024-03-04T11:27:11Z
195
langchain-ai/langchain
43,130
Fix method on manager's add command (#578)
diff --git a/shadowsocks/manager.py b/shadowsocks/manager.py index b42ffa9df..2a1200c89 100644 --- a/shadowsocks/manager.py +++ b/shadowsocks/manager.py @@ -141,6 +141,8 @@ def _parse_command(self, data): command, config_json = parts try: config = shell.parse_json_in_str(config_json) + if 'method' in config: + config['method'] = common.to_str(config['method']) return command, config except Exception as e: logging.error(e)
All strings in JSON are encoded into bytes on `parse_json_in_str()`. However, `config['method']` needs a `str` rather than bytes.
https://api.github.com/repos/shadowsocks/shadowsocks/pulls/614
2016-09-05T16:54:44Z
2016-10-11T13:31:41Z
2016-10-11T13:31:41Z
2016-10-11T13:31:42Z
138
shadowsocks/shadowsocks
24,703
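A small sketch of the decode step the fix performs: after JSON parsing yields bytes values, the `method` field is converted back to `str` before use. The `to_str` helper below is a stand-in for `shadowsocks.common.to_str`, and the sample config is illustrative.

```python
def to_str(value):
    # Stand-in for common.to_str: decode bytes to str, pass str through.
    return value.decode("utf-8") if isinstance(value, bytes) else value

config = {"server_port": 8388, "method": b"aes-256-cfb"}  # value arrives as bytes
if "method" in config:
    config["method"] = to_str(config["method"])

assert config["method"] == "aes-256-cfb"
```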
Alerts: fix inconsistent capitalization
diff --git a/selfdrive/controls/lib/events.py b/selfdrive/controls/lib/events.py index b3667280940747..d68c95bcf50d81 100755 --- a/selfdrive/controls/lib/events.py +++ b/selfdrive/controls/lib/events.py @@ -767,12 +767,12 @@ def joystick_alert(CP: car.CarParams, CS: car.CarState, sm: messaging.SubMaster, # is thrown. This can mean a service crashed, did not broadcast a message for # ten times the regular interval, or the average interval is more than 10% too high. EventName.commIssue: { - ET.SOFT_DISABLE: soft_disable_alert("Communication Issue between Processes"), + ET.SOFT_DISABLE: soft_disable_alert("Communication Issue Between Processes"), ET.NO_ENTRY: comm_issue_alert, }, EventName.commIssueAvgFreq: { - ET.SOFT_DISABLE: soft_disable_alert("Low Communication Rate between Processes"), - ET.NO_ENTRY: NoEntryAlert("Low Communication Rate between Processes"), + ET.SOFT_DISABLE: soft_disable_alert("Low Communication Rate Between Processes"), + ET.NO_ENTRY: NoEntryAlert("Low Communication Rate Between Processes"), }, EventName.controlsdLagging: {
https://api.github.com/repos/commaai/openpilot/pulls/31514
2024-02-19T23:45:09Z
2024-02-20T02:18:20Z
2024-02-20T02:18:20Z
2024-02-20T02:19:10Z
272
commaai/openpilot
8,991
limit challenge polling to 30 minutes
diff --git a/certbot/CHANGELOG.md b/certbot/CHANGELOG.md index f2ad5ba0a7d..7f39d307579 100644 --- a/certbot/CHANGELOG.md +++ b/certbot/CHANGELOG.md @@ -10,7 +10,9 @@ Certbot adheres to [Semantic Versioning](https://semver.org/). ### Changed -* +* Certbot will no longer respect very long challenge polling intervals, which may be suggested + by some ACME servers. Certbot will continue to wait up to 90 seconds by default, or up to a + total of 30 minutes if requested by the server via `Retry-After`. ### Fixed diff --git a/certbot/certbot/_internal/auth_handler.py b/certbot/certbot/_internal/auth_handler.py index 05feaadc0f4..747520a7d00 100644 --- a/certbot/certbot/_internal/auth_handler.py +++ b/certbot/certbot/_internal/auth_handler.py @@ -55,7 +55,8 @@ def __init__(self, auth: interfaces.Authenticator, acme_client: Optional[client. def handle_authorizations(self, orderr: messages.OrderResource, config: configuration.NamespaceConfig, best_effort: bool = False, - max_retries: int = 30) -> List[messages.AuthorizationResource]: + max_retries: int = 30, + max_time_mins: float = 30) -> List[messages.AuthorizationResource]: """ Retrieve all authorizations, perform all challenges required to validate these authorizations, then poll and wait for the authorization to be checked. @@ -63,6 +64,7 @@ def handle_authorizations(self, orderr: messages.OrderResource, :param certbot.configuration.NamespaceConfig config: current Certbot configuration :param bool best_effort: if True, not all authorizations need to be validated (eg. renew) :param int max_retries: maximum number of retries to poll authorizations + :param float max_time_mins: maximum time (in minutes) to poll authorizations :returns: list of all validated authorizations :rtype: List @@ -103,7 +105,7 @@ def handle_authorizations(self, orderr: messages.OrderResource, # Wait for authorizations to be checked. logger.info('Waiting for verification...') - self._poll_authorizations(authzrs, max_retries, best_effort) + self._poll_authorizations(authzrs, max_retries, max_time_mins, best_effort) # Keep validated authorizations only. If there is none, no certificate can be issued. authzrs_validated = [authzr for authzr in authzrs @@ -143,11 +145,11 @@ def deactivate_valid_authorizations(self, orderr: messages.OrderResource) -> Tup return (deactivated, failed) def _poll_authorizations(self, authzrs: List[messages.AuthorizationResource], max_retries: int, - best_effort: bool) -> None: + deadline_minutes: float, best_effort: bool) -> None: """ Poll the ACME CA server, to wait for confirmation that authorizations have their challenges all verified. The poll may occur several times, until all authorizations are checked - (valid or invalid), or after a maximum of retries. + (valid or invalid), or a maximum of retries, or the polling deadline is reached. 
""" if not self.acme: raise errors.Error("No ACME client defined, cannot poll authorizations.") @@ -156,6 +158,7 @@ def _poll_authorizations(self, authzrs: List[messages.AuthorizationResource], ma Optional[Response]]] = {index: (authzr, None) for index, authzr in enumerate(authzrs)} authzrs_failed_to_report = [] + deadline = datetime.datetime.now() + datetime.timedelta(minutes=deadline_minutes) # Give an initial second to the ACME CA server to check the authorizations sleep_seconds: float = 1 for _ in range(max_retries): @@ -184,7 +187,7 @@ def _poll_authorizations(self, authzrs: List[messages.AuthorizationResource], ma authzrs_to_check = {index: (authzr, resp) for index, (authzr, resp) in authzrs_to_check.items() if authzr.body.status == messages.STATUS_PENDING} - if not authzrs_to_check: + if not authzrs_to_check or datetime.datetime.now() > deadline: # Polling process is finished, we can leave the loop break @@ -196,6 +199,9 @@ def _poll_authorizations(self, authzrs: List[messages.AuthorizationResource], ma retry_after = max(self.acme.retry_after(resp, 3) for _, resp in authzrs_to_check.values() if resp is not None) + # Whatever Retry-After the ACME server requests, the polling must not take + # longer than the overall deadline (https://github.com/certbot/certbot/issues/9526). + retry_after = min(retry_after, deadline) sleep_seconds = (retry_after - datetime.datetime.now()).total_seconds() # In case of failed authzrs, create a report to the user. diff --git a/certbot/tests/auth_handler_test.py b/certbot/tests/auth_handler_test.py index 23d5b2ae2e6..548356897e0 100644 --- a/certbot/tests/auth_handler_test.py +++ b/certbot/tests/auth_handler_test.py @@ -1,5 +1,5 @@ """Tests for certbot._internal.auth_handler.""" -import functools +import datetime import logging import unittest @@ -12,7 +12,6 @@ from acme import messages from certbot import achallenges from certbot import errors -from certbot import util from certbot._internal.display import obj as display_obj from certbot.plugins import common as plugin_common from certbot.tests import acme_util @@ -227,6 +226,39 @@ def test_max_retries_exceeded(self): self.handler.handle_authorizations(mock_order, self.mock_config, False, 1) self.assertIn('All authorizations were not finalized by the CA.', str(error.exception)) + @mock.patch('certbot._internal.auth_handler.time.sleep') + def test_deadline_exceeded(self, mock_sleep): + authzrs = [gen_dom_authzr(domain="0", challs=acme_util.CHALLENGES)] + mock_order = mock.MagicMock(authorizations=authzrs) + + orig_now = datetime.datetime.now + state = {'time_slept': 0} + + def mock_sleep_effect(secs): + state['time_slept'] += secs + mock_sleep.side_effect = mock_sleep_effect + + def mock_now_effect(): + return orig_now() + datetime.timedelta(seconds=state["time_slept"]) + + # We will return STATUS_PENDING and ask Certbot to sleep for 20 minutes at a time. + interval = datetime.timedelta(minutes=20).seconds + self.mock_net.poll.side_effect = _gen_mock_on_poll(status=messages.STATUS_PENDING, + wait_value=interval) + + with self.assertRaises(errors.AuthorizationError) as error, \ + mock.patch('certbot._internal.auth_handler.datetime.datetime') as mock_dt: + mock_dt.now.side_effect = mock_now_effect + # Polling will only proceed for 30 minutes at most, so the second 20 minute sleep + # should be truncated and the polling should be aborted. 
+ self.handler.handle_authorizations(mock_order, self.mock_config, False) + self.assertIn('All authorizations were not finalized by the CA.', str(error.exception)) + + self.assertEqual(mock_sleep.call_count, 3) # 1s, 20m and 10m sleep + self.assertEqual(mock_sleep.call_args_list[0][0][0], 1) + self.assertAlmostEqual(mock_sleep.call_args_list[1][0][0], interval - 1, delta=1) + self.assertAlmostEqual(mock_sleep.call_args_list[2][0][0], interval/2 - 1, delta=1) + def test_no_domains(self): mock_order = mock.MagicMock(authorizations=[]) self.assertRaises(errors.AuthorizationError, self.handler.handle_authorizations,
Fixes #9526.
https://api.github.com/repos/certbot/certbot/pulls/9527
2023-01-02T23:35:26Z
2023-01-05T22:24:58Z
2023-01-05T22:24:58Z
2023-01-05T22:24:59Z
1,876
certbot/certbot
893
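The control flow the patch adds, reduced to a hedged sketch: keep polling while sleeping for whatever the server's Retry-After suggests, but never sleep past the overall deadline. `check` and the fixed Retry-After value are placeholders for the real ACME polling machinery.

```python
import datetime
import time

def poll_with_deadline(check, max_retries=30, max_time_mins=30.0,
                       retry_after_secs=60):
    """Poll check() until it returns True, a retry cap, or a wall-clock deadline."""
    deadline = datetime.datetime.now() + datetime.timedelta(minutes=max_time_mins)
    sleep_seconds = 1.0  # give the server an initial second
    for _ in range(max_retries):
        time.sleep(sleep_seconds)
        if check() or datetime.datetime.now() > deadline:
            return
        # Respect Retry-After, but cap it at the overall deadline.
        retry_after = datetime.datetime.now() + datetime.timedelta(seconds=retry_after_secs)
        retry_after = min(retry_after, deadline)
        sleep_seconds = max(0.0, (retry_after - datetime.datetime.now()).total_seconds())
```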
Fix dockerfiles
diff --git a/Dockerfile.external b/Dockerfile.external index cf3203e87..59ef285b3 100644 --- a/Dockerfile.external +++ b/Dockerfile.external @@ -5,6 +5,7 @@ RUN pip install pipx RUN python3 -m pipx ensurepath RUN pipx install poetry ENV PATH="/root/.local/bin:$PATH" +ENV PATH=".venv/bin/:$PATH" # https://python-poetry.org/docs/configuration/#virtualenvsin-project ENV POETRY_VIRTUALENVS_IN_PROJECT=true @@ -31,6 +32,9 @@ COPY --chown=worker --from=dependencies /home/worker/app/.venv/ .venv COPY --chown=worker private_gpt/ private_gpt COPY --chown=worker fern/ fern COPY --chown=worker *.yaml *.md ./ +COPY --chown=worker scripts/ scripts + +ENV PYTHONPATH="$PYTHONPATH:/private_gpt/" USER worker -ENTRYPOINT .venv/bin/python -m private_gpt \ No newline at end of file +ENTRYPOINT python -m private_gpt \ No newline at end of file diff --git a/Dockerfile.local b/Dockerfile.local index 0b9aee4d8..66590fdbc 100644 --- a/Dockerfile.local +++ b/Dockerfile.local @@ -7,6 +7,7 @@ RUN pip install pipx RUN python3 -m pipx ensurepath RUN pipx install poetry ENV PATH="/root/.local/bin:$PATH" +ENV PATH=".venv/bin/:$PATH" # Dependencies to build llama-cpp RUN apt update && apt install -y \ @@ -42,6 +43,9 @@ COPY --chown=worker --from=dependencies /home/worker/app/.venv/ .venv COPY --chown=worker private_gpt/ private_gpt COPY --chown=worker fern/ fern COPY --chown=worker *.yaml *.md ./ +COPY --chown=worker scripts/ scripts + +ENV PYTHONPATH="$PYTHONPATH:/private_gpt/" USER worker -ENTRYPOINT .venv/bin/python -m private_gpt \ No newline at end of file +ENTRYPOINT python -m private_gpt \ No newline at end of file
- Copy scripts folder
- Set the right python paths to use the venv

Instructions:
- Build image: `docker compose build` -> builds the image
- Run setup in service to download local models (in case you didn't download them already): `docker compose run --rm --entrypoint="bash -c '[ -f scripts/setup ] && scripts/setup'" private-gpt`
- Run the service: `docker compose run private-gpt`
https://api.github.com/repos/zylon-ai/private-gpt/pulls/1445
2023-12-22T12:57:21Z
2023-12-22T13:16:47Z
2023-12-22T13:16:47Z
2023-12-22T13:16:47Z
519
zylon-ai/private-gpt
38,588
.netrc support
diff --git a/HISTORY.rst b/HISTORY.rst index 2091ac8499..d6b8f49690 100644 --- a/HISTORY.rst +++ b/HISTORY.rst @@ -1,6 +1,11 @@ History ------- +0.10.4 (2012-02-20) ++++++++++++++++++++ + +* Honor netrc. + 0.10.3 (2012-02-20) +++++++++++++++++++ diff --git a/docs/index.rst b/docs/index.rst index df0f1937c2..232d683046 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -72,6 +72,7 @@ Requests is ready for today's web. - Unicode Response Bodies - Multipart File Uploads - Connection Timeouts +- ``.netrc`` support User Guide diff --git a/requests/models.py b/requests/models.py index 2c0b2e08a5..a238214a24 100644 --- a/requests/models.py +++ b/requests/models.py @@ -27,7 +27,7 @@ URLRequired, SSLError) from .utils import ( get_encoding_from_headers, stream_untransfer, guess_filename, requote_uri, - dict_from_string, stream_decode_response_unicode) + dict_from_string, stream_decode_response_unicode, get_netrc_auth) from .compat import ( urlparse, urlunparse, urljoin, urlsplit, urlencode, str, bytes, SimpleCookie, is_py2) @@ -435,6 +435,10 @@ def send(self, anyway=False, prefetch=False): if (content_type) and (not 'content-type' in self.headers): self.headers['Content-Type'] = content_type + # Use .netrc auth if none was provided. + if not self.auth: + self.auth = get_netrc_auth(url) + if self.auth: if isinstance(self.auth, tuple) and len(self.auth) == 2: # special-case basic HTTP auth diff --git a/requests/sessions.py b/requests/sessions.py index 7aafd0ea5d..29ae1d9aeb 100644 --- a/requests/sessions.py +++ b/requests/sessions.py @@ -40,7 +40,7 @@ def merge_kwargs(local_kwarg, default_kwarg): kwargs.update(local_kwarg) # Remove keys that are set to None. - for (k,v) in list(local_kwarg.items()): + for (k, v) in list(local_kwarg.items()): if v is None: del kwargs[k] diff --git a/requests/utils.py b/requests/utils.py index 97f5860036..68efa4697a 100644 --- a/requests/utils.py +++ b/requests/utils.py @@ -11,15 +11,49 @@ import cgi import codecs +import os import random import re import zlib +from netrc import netrc, NetrcParseError from .compat import parse_http_list as _parse_list_header -from .compat import quote, cookielib, SimpleCookie, is_py2 +from .compat import quote, cookielib, SimpleCookie, is_py2, urlparse from .compat import basestring, bytes +NETRC_FILES = ('.netrc', '_netrc') + + +def get_netrc_auth(url): + """Returns the Requests tuple auth for a given url from netrc.""" + + locations = (os.path.expanduser('~/{0}'.format(f)) for f in NETRC_FILES) + netrc_path = None + + for loc in locations: + if os.path.exists(loc) and not netrc_path: + netrc_path = loc + + # Abort early if there isn't one. + if netrc_path is None: + return netrc_path + + ri = urlparse(url) + + # Strip port numbers from netloc + host = ri.netloc.split(':')[0] + + try: + _netrc = netrc(netrc_path).authenticators(host) + if _netrc: + # Return with login / password + login_i = (0 if _netrc[0] else 1) + return (_netrc[login_i], _netrc[2]) + except NetrcParseError: + pass + + def dict_from_string(s): """Returns a MultiDict with Cookies.""" @@ -149,7 +183,7 @@ def header_expand(headers): headers = list(headers.items()) elif isinstance(headers, basestring): return headers - elif isinstance(headers, unicode): + elif isinstance(headers, str): # As discussed in https://github.com/kennethreitz/requests/issues/400 # latin-1 is the most conservative encoding used on the web. Anyone # who needs more can encode to a byte-string before calling
https://api.github.com/repos/psf/requests/pulls/446
2012-02-20T20:36:57Z
2012-02-20T21:21:05Z
2012-02-20T21:21:05Z
2021-09-08T23:06:29Z
1,098
psf/requests
32,937
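The lookup that the new `get_netrc_auth` helper performs can be rendered with nothing but the standard library: find a netrc file, extract the host from the URL, and return a `(login, password)` tuple. This is a Python 3 sketch of the same logic (the diff itself goes through the project's compat shims):

```python
import os
from netrc import netrc, NetrcParseError
from urllib.parse import urlparse

def netrc_auth_for(url, netrc_path=os.path.expanduser("~/.netrc")):
    """Return a (login, password) tuple for url's host, or None."""
    if not os.path.exists(netrc_path):
        return None
    host = urlparse(url).netloc.split(":")[0]  # strip any port number
    try:
        entry = netrc(netrc_path).authenticators(host)
    except NetrcParseError:
        return None
    if entry:
        login, account, password = entry
        return (login or account, password)  # fall back to account if login is empty
    return None

print(netrc_auth_for("https://example.com:8443/path"))
```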
BUG: Bug in to_datetime with a format and coerce=True not raising (GH5195)
diff --git a/doc/source/release.rst b/doc/source/release.rst index 55f786d263a0a..f899849475df8 100644 --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -593,6 +593,7 @@ Bug Fixes - Compound dtypes in a constructor raise ``NotImplementedError`` (:issue:`5191`) - Bug in comparing duplicate frames (:issue:`4421`) related - Bug in describe on duplicate frames + - Bug in ``to_datetime`` with a format and ``coerce=True`` not raising (:issue:`5195`) pandas 0.12.0 ------------- diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py index 473ea21da1585..7f11fa5873fe7 100644 --- a/pandas/tseries/tests/test_timeseries.py +++ b/pandas/tseries/tests/test_timeseries.py @@ -879,6 +879,29 @@ def test_to_datetime_on_datetime64_series(self): result = to_datetime(s) self.assertEquals(result[0], s[0]) + def test_to_datetime_with_apply(self): + + # this is only locale tested with US/None locales + import locale + (lang,encoding) = locale.getlocale() + if lang is not None: + raise nose.SkipTest("format codes cannot work with a locale of {0}".format(lang)) + + # GH 5195 + # with a format and coerce a single item to_datetime fails + td = Series(['May 04', 'Jun 02', 'Dec 11'], index=[1,2,3]) + expected = pd.to_datetime(td, format='%b %y') + result = td.apply(pd.to_datetime, format='%b %y') + assert_series_equal(result, expected) + + td = pd.Series(['May 04', 'Jun 02', ''], index=[1,2,3]) + self.assertRaises(ValueError, lambda : pd.to_datetime(td,format='%b %y')) + self.assertRaises(ValueError, lambda : td.apply(pd.to_datetime, format='%b %y')) + expected = pd.to_datetime(td, format='%b %y', coerce=True) + + result = td.apply(lambda x: pd.to_datetime(x, format='%b %y', coerce=True)) + assert_series_equal(result, expected) + def test_nat_vector_field_access(self): idx = DatetimeIndex(['1/1/2000', None, None, '1/4/2000']) diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py index 793d9409e662e..3d8803237931d 100644 --- a/pandas/tseries/tools.py +++ b/pandas/tseries/tools.py @@ -112,7 +112,7 @@ def _convert_listlike(arg, box): # fallback if result is None: - result = tslib.array_strptime(arg, format) + result = tslib.array_strptime(arg, format, coerce=coerce) else: result = tslib.array_to_datetime(arg, raise_=errors == 'raise', utc=utc, dayfirst=dayfirst, diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx index c6c2b418f553d..372de1e7c1b21 100644 --- a/pandas/tslib.pyx +++ b/pandas/tslib.pyx @@ -1174,7 +1174,7 @@ def repr_timedelta64(object value): return "%s%02d:%02d:%s" % (sign_pretty, hours, minutes, seconds_pretty) -def array_strptime(ndarray[object] values, object fmt): +def array_strptime(ndarray[object] values, object fmt, coerce=False): cdef: Py_ssize_t i, n = len(values) pandas_datetimestruct dts @@ -1237,9 +1237,15 @@ def array_strptime(ndarray[object] values, object fmt): for i in range(n): found = format_regex.match(values[i]) if not found: + if coerce: + iresult[i] = iNaT + continue raise ValueError("time data %r does not match format %r" % (values[i], fmt)) if len(values[i]) != found.end(): + if coerce: + iresult[i] = iNaT + continue raise ValueError("unconverted data remains: %s" % values[i][found.end():]) year = 1900
closes #5195
https://api.github.com/repos/pandas-dev/pandas/pulls/5197
2013-10-12T22:13:49Z
2013-10-12T23:53:46Z
2013-10-12T23:53:46Z
2014-06-24T06:53:46Z
1,034
pandas-dev/pandas
44,928
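A usage sketch of the fixed behavior. Note that `coerce=True` is the pandas 0.13-era keyword this patch targets; the example below uses the modern equivalent `errors='coerce'` so it runs on current pandas.

```python
import pandas as pd

td = pd.Series(["May 04", "Jun 02", ""], index=[1, 2, 3])

# Without coercion, the empty string cannot be parsed with the format and raises.
try:
    pd.to_datetime(td, format="%b %y")
except ValueError:
    pass

# With coercion, unparseable entries become NaT instead of raising.
result = pd.to_datetime(td, format="%b %y", errors="coerce")  # coerce=True back then
assert result.isna().iloc[-1]
```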
Org code for removing all the purpose print functions.
diff --git a/sherlock.py b/sherlock.py index acca9f9f5..049345295 100644 --- a/sherlock.py +++ b/sherlock.py @@ -72,6 +72,13 @@ def timing(r, *args, **kwargs): return super(ElapsedFuturesSession, self).request(method, url, hooks=hooks, *args, **kwargs) +def print_info(title, info): + print(Style.BRIGHT + Fore.GREEN + "[" + + Fore.YELLOW + "*" + + Fore.GREEN + f"] {title}" + + Fore.WHITE + f" {info}" + + Fore.GREEN + " on:") + def print_error(err, errstr, var, verbose=False): print(Style.BRIGHT + Fore.WHITE + "[" + Fore.RED + "-" + @@ -91,7 +98,6 @@ def print_found(social_network, url, response_time, verbose=False): format_response_time(response_time, verbose) + Fore.GREEN + " {}:").format(social_network), url) - def print_not_found(social_network, response_time, verbose=False): print((Style.BRIGHT + Fore.WHITE + "[" + Fore.RED + "-" + @@ -100,9 +106,17 @@ def print_not_found(social_network, response_time, verbose=False): Fore.GREEN + " {}:" + Fore.YELLOW + " Not Found!").format(social_network)) +def print_invalid(social_network, msg): + """Print invalid search result.""" + print((Style.BRIGHT + Fore.WHITE + "[" + + Fore.RED + "-" + + Fore.WHITE + "]" + + Fore.GREEN + " {}:" + + Fore.YELLOW + f" {msg}").format(social_network)) + def get_response(request_future, error_type, social_network, verbose=False, retry_no=None): - + global proxy_list try: @@ -160,11 +174,7 @@ def sherlock(username, site_data, verbose=False, tor=False, unique_tor=False, pr """ global amount - print((Style.BRIGHT + Fore.GREEN + "[" + - Fore.YELLOW + "*" + - Fore.GREEN + "] Checking username" + - Fore.WHITE + " {}" + - Fore.GREEN + " on:").format(username)) + print_info("Checking username", username) # A user agent is needed because some sites don't # return the correct information since they think that @@ -203,11 +213,7 @@ def sherlock(username, site_data, verbose=False, tor=False, unique_tor=False, pr regex_check = net_info.get("regexCheck") if regex_check and re.search(regex_check, username) is None: # No need to do the check at the site: this user name is not allowed. - print((Style.BRIGHT + Fore.WHITE + "[" + - Fore.RED + "-" + - Fore.WHITE + "]" + - Fore.GREEN + " {}:" + - Fore.YELLOW + " Illegal Username Format For This Site!").format(social_network)) + print_invalid(social_network, "Illegal Username Format For This Site!") results_site["exists"] = "illegal" else: # URL of user on site (if it exists) @@ -331,11 +337,7 @@ def sherlock(username, site_data, verbose=False, tor=False, unique_tor=False, pr exists = "no" elif error_type == "": - print((Style.BRIGHT + Fore.WHITE + "[" + - Fore.RED + "-" + - Fore.WHITE + "]" + - Fore.GREEN + " {}:" + - Fore.YELLOW + " Error!").format(social_network)) + print_invalid(social_network, "Error!") exists = "error" # Save exists flag @@ -444,11 +446,7 @@ def main(): global proxy_list if args.proxy_list != None: - print((Style.BRIGHT + Fore.GREEN + "[" + - Fore.YELLOW + "*" + - Fore.GREEN + "] Loading proxies from" + - Fore.WHITE + " {}" + - Fore.GREEN + " :").format(args.proxy_list)) + print_info("Loading proxies from", args.proxy_list) proxy_list = load_proxies_from_csv(args.proxy_list) @@ -538,7 +536,7 @@ def main(): sys.exit(1) if args.rank: - # Sort data by rank + # Sort data by rank site_dataCpy = dict(site_data) ranked_sites = sorted(site_data, key=lambda k: ("rank" not in k, site_data[k].get("rank", sys.maxsize))) site_data = {}
#### Purpose

Moves all the `print`-based logging functions into something more readable and reasonable.

#### Description

Just cleaned up some code here. I'm aware there is modularization work going on in this project; hopefully this PR doesn't affect that progress. I also realized some files have different coding styles. Would it be worth having a coding standard for people contributing to this project? The code gets messier every time a PR is made. Just a suggestion.
https://api.github.com/repos/sherlock-project/sherlock/pulls/184
2019-03-08T05:47:15Z
2019-03-08T10:03:10Z
2019-03-08T10:03:10Z
2019-03-08T10:03:10Z
1,021
sherlock-project/sherlock
36,554
Toyota: support openpilot long with a smartDSU on nodsu models
diff --git a/selfdrive/car/tests/routes.py b/selfdrive/car/tests/routes.py index 4dd19d24e3a5e6..40ed0ea047ee6c 100644 --- a/selfdrive/car/tests/routes.py +++ b/selfdrive/car/tests/routes.py @@ -185,6 +185,7 @@ CarTestRoute("9b36accae406390e|2021-03-30--10-41-38", TOYOTA.MIRAI), CarTestRoute("cd9cff4b0b26c435|2021-05-13--15-12-39", TOYOTA.CHR), CarTestRoute("ea8fbe72b96a185c|2023-02-08--15-11-46", TOYOTA.CHR_TSS2), + CarTestRoute("ea8fbe72b96a185c|2023-02-22--09-20-34", TOYOTA.CHR_TSS2), # openpilot longitudinal, with smartDSU CarTestRoute("57858ede0369a261|2021-05-18--20-34-20", TOYOTA.CHRH), CarTestRoute("6719965b0e1d1737|2023-02-09--22-44-05", TOYOTA.CHRH_TSS2), CarTestRoute("14623aae37e549f3|2021-10-24--01-20-49", TOYOTA.PRIUS_V), diff --git a/selfdrive/car/toyota/carstate.py b/selfdrive/car/toyota/carstate.py index 050f8747a2d0d1..68adc2ee578a74 100644 --- a/selfdrive/car/toyota/carstate.py +++ b/selfdrive/car/toyota/carstate.py @@ -115,7 +115,8 @@ def update(self, cp, cp_cam): cp_acc = cp_cam if self.CP.carFingerprint in (TSS2_CAR - RADAR_ACC_CAR) else cp if self.CP.carFingerprint in (TSS2_CAR | RADAR_ACC_CAR): - self.acc_type = cp_acc.vl["ACC_CONTROL"]["ACC_TYPE"] + if not (self.CP.flags & ToyotaFlags.SMART_DSU.value): + self.acc_type = cp_acc.vl["ACC_CONTROL"]["ACC_TYPE"] ret.stockFcw = bool(cp_acc.vl["ACC_HUD"]["FCW"]) # some TSS2 cars have low speed lockout permanently set, so ignore on those cars @@ -235,12 +236,17 @@ def get_can_parser(CP): checks.append(("BSM", 1)) if CP.carFingerprint in RADAR_ACC_CAR: + if not CP.flags & ToyotaFlags.SMART_DSU.value: + signals += [ + ("ACC_TYPE", "ACC_CONTROL"), + ] + checks += [ + ("ACC_CONTROL", 33), + ] signals += [ - ("ACC_TYPE", "ACC_CONTROL"), ("FCW", "ACC_HUD"), ] checks += [ - ("ACC_CONTROL", 33), ("ACC_HUD", 1), ] diff --git a/selfdrive/car/toyota/interface.py b/selfdrive/car/toyota/interface.py index 33a87451e9eb5e..6e8664c340333e 100644 --- a/selfdrive/car/toyota/interface.py +++ b/selfdrive/car/toyota/interface.py @@ -201,14 +201,18 @@ def _get_params(ret, candidate, fingerprint, car_fw, experimental_long): tire_stiffness_factor=tire_stiffness_factor) ret.enableBsm = 0x3F6 in fingerprint[0] and candidate in TSS2_CAR - # Detect smartDSU, which intercepts ACC_CMD from the DSU allowing openpilot to send it - smartDsu = 0x2FF in fingerprint[0] - # In TSS2 cars the camera does long control + + # Detect smartDSU, which intercepts ACC_CMD from the DSU (or radar) allowing openpilot to send it + if 0x2FF in fingerprint[0]: + ret.flags |= ToyotaFlags.SMART_DSU.value + + # In TSS2 cars, the camera does long control found_ecus = [fw.ecu for fw in car_fw] - ret.enableDsu = len(found_ecus) > 0 and Ecu.dsu not in found_ecus and candidate not in (NO_DSU_CAR | UNSUPPORTED_DSU_CAR) and not smartDsu + ret.enableDsu = len(found_ecus) > 0 and Ecu.dsu not in found_ecus and candidate not in (NO_DSU_CAR | UNSUPPORTED_DSU_CAR) and not (ret.flags & ToyotaFlags.SMART_DSU) ret.enableGasInterceptor = 0x201 in fingerprint[0] + # if the smartDSU is detected, openpilot can send ACC_CMD (and the smartDSU will block it from the DSU) or not (the DSU is "connected") - ret.openpilotLongitudinalControl = smartDsu or ret.enableDsu or candidate in (TSS2_CAR - RADAR_ACC_CAR) + ret.openpilotLongitudinalControl = bool(ret.flags & ToyotaFlags.SMART_DSU) or ret.enableDsu or candidate in (TSS2_CAR - RADAR_ACC_CAR) ret.autoResumeSng = ret.openpilotLongitudinalControl and candidate in NO_STOP_TIMER_CAR if not ret.openpilotLongitudinalControl: diff --git 
a/selfdrive/car/toyota/values.py b/selfdrive/car/toyota/values.py index f0e846cc540d29..8584c6b0787d3d 100644 --- a/selfdrive/car/toyota/values.py +++ b/selfdrive/car/toyota/values.py @@ -33,6 +33,7 @@ def __init__(self, CP): class ToyotaFlags(IntFlag): HYBRID = 1 + SMART_DSU = 2 class CAR:
As Adeeb suggested [here](https://github.com/commaai/cereal/pull/410), this change enables openpilot longitudinal control on radar ACC cars (such as the Camry, C-HR, and perhaps the new 2022 RAV4) with an additional CAN filter set between the radar and the CAN gateway. See route: ea8fbe72b96a185c|2023-02-22--09-20-34 <img width="622" alt="radar can filter diagram PoC, with SmartDSU firmware installed" src="https://user-images.githubusercontent.com/16603033/220530662-52c995ee-0381-4c3a-95bd-d755b7f8c311.png">
https://api.github.com/repos/commaai/openpilot/pulls/27417
2023-02-22T04:19:27Z
2023-04-01T22:34:56Z
2023-04-01T22:34:56Z
2023-07-31T02:17:48Z
1,366
commaai/openpilot
9,673
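The new `SMART_DSU` flag follows the `IntFlag` bitmask pattern already used for `HYBRID` in `values.py`: flags are combined with `|=` and tested with `&`, as in the interface code. A minimal demonstration:

```python
from enum import IntFlag

class ToyotaFlags(IntFlag):
    HYBRID = 1
    SMART_DSU = 2

flags = 0
flags |= ToyotaFlags.SMART_DSU.value  # set when message 0x2FF is in the fingerprint

assert bool(flags & ToyotaFlags.SMART_DSU)
assert not (flags & ToyotaFlags.HYBRID)
```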
Add videakid as an alias for videa
diff --git a/youtube_dl/extractor/videa.py b/youtube_dl/extractor/videa.py index 311df58f4a0..d0e34c81980 100644 --- a/youtube_dl/extractor/videa.py +++ b/youtube_dl/extractor/videa.py @@ -16,7 +16,7 @@ class VideaIE(InfoExtractor): _VALID_URL = r'''(?x) https?:// - videa\.hu/ + videa(?:kid)?\.hu/ (?: videok/(?:[^/]+/)*[^?#&]+-| player\?.*?\bv=| @@ -31,7 +31,7 @@ class VideaIE(InfoExtractor): 'id': '8YfIAjxwWGwT8HVQ', 'ext': 'mp4', 'title': 'Az őrült kígyász 285 kígyót enged szabadon', - 'thumbnail': 'http://videa.hu/static/still/1.4.1.1007274.1204470.3', + 'thumbnail': r're:^https?://.*', 'duration': 21, }, }, { @@ -43,6 +43,15 @@ class VideaIE(InfoExtractor): }, { 'url': 'http://videa.hu/player/v/8YfIAjxwWGwT8HVQ?autoplay=1', 'only_matching': True, + }, { + 'url': 'https://videakid.hu/videok/origo/jarmuvek/supercars-elozes-jAHDWfWSJH5XuFhH', + 'only_matching': True, + }, { + 'url': 'https://videakid.hu/player?v=8YfIAjxwWGwT8HVQ', + 'only_matching': True, + }, { + 'url': 'https://videakid.hu/player/v/8YfIAjxwWGwT8HVQ?autoplay=1', + 'only_matching': True, }] @staticmethod
It seems that `videakid.hu` is just an alternative domain for `videa.hu` serving up the same videos, supporting the same download procedure.

## Please follow the guide below

- You will be asked some questions, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *pull request* (like that [x])
- Use *Preview* tab to see how your *pull request* will actually look like

---

### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [adding new extractor tutorial](https://github.com/rg3/youtube-dl#adding-support-for-a-new-site) and [youtube-dl coding conventions](https://github.com/rg3/youtube-dl#youtube-dl-coding-conventions) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [ ] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)

### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)

### What is the purpose of your *pull request*?
- [ ] Bug fix
- [x] Improvement
- [ ] New extractor
- [ ] New feature

---

### Description of your *pull request* and other information

Explanation of your *pull request* in arbitrary form goes here. Please make sure the description explains the purpose and effect of your *pull request* and is worded well enough to be understood. Provide as much context and examples as possible.
https://api.github.com/repos/ytdl-org/youtube-dl/pulls/16003
2018-03-27T05:22:44Z
2018-03-27T15:02:05Z
2018-03-27T15:02:04Z
2018-03-27T16:32:06Z
482
ytdl-org/youtube-dl
49,910
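The whole change hinges on one optional group in the URL pattern. The real `_VALID_URL` is more involved; this reduced pattern just checks what the `videa(?:kid)?\.hu` alternation accepts:

```python
import re

pattern = re.compile(r"https?://videa(?:kid)?\.hu/")

assert pattern.match("http://videa.hu/videok/x")
assert pattern.match("https://videakid.hu/videok/x")
assert not pattern.match("https://videakids.hu/")  # no other suffix matches
```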
added support to bilibili collectiondetail api
diff --git a/src/you_get/extractors/bilibili.py b/src/you_get/extractors/bilibili.py index 1a13b61cd2..6d34c2c45f 100644 --- a/src/you_get/extractors/bilibili.py +++ b/src/you_get/extractors/bilibili.py @@ -115,11 +115,15 @@ def bilibili_live_room_init_api(room_id): @staticmethod def bilibili_space_channel_api(mid, cid, pn=1, ps=100): return 'https://api.bilibili.com/x/space/channel/video?mid=%s&cid=%s&pn=%s&ps=%s&order=0&jsonp=jsonp' % (mid, cid, pn, ps) + + @staticmethod + def bilibili_space_collection_api(mid, cid, pn=1, ps=30): + return 'https://api.bilibili.com/x/polymer/space/seasons_archives_list?mid=%s&season_id=%s&sort_reverse=false&page_num=%s&page_size=%s' % (mid, cid, pn, ps) @staticmethod def bilibili_series_archives_api(mid, sid, pn=1, ps=100): return 'https://api.bilibili.com/x/series/archives?mid=%s&series_id=%s&pn=%s&ps=%s&only_normal=true&sort=asc&jsonp=jsonp' % (mid, sid, pn, ps) - + @staticmethod def bilibili_space_favlist_api(fid, pn=1, ps=20): return 'https://api.bilibili.com/x/v3/fav/resource/list?media_id=%s&pn=%s&ps=%s&order=mtime&type=0&tid=0&jsonp=jsonp' % (fid, pn, ps) @@ -628,6 +632,8 @@ def download_playlist_by_url(self, url, **kwargs): sort = 'space_channel' elif re.match(r'https?://space\.?bilibili\.com/(\d+)/channel/seriesdetail\?.*sid=(\d+)', self.url): sort = 'space_channel_series' + elif re.match(r'https?://space\.?bilibili\.com/(\d+)/channel/collectiondetail\?.*sid=(\d+)', self.url): + sort = 'space_channel_collection' elif re.match(r'https?://space\.?bilibili\.com/(\d+)/favlist\?.*fid=(\d+)', self.url): sort = 'space_favlist' elif re.match(r'https?://space\.?bilibili\.com/(\d+)/video', self.url): @@ -752,6 +758,20 @@ def download_playlist_by_url(self, url, **kwargs): url = 'https://www.bilibili.com/video/av%s' % video['aid'] self.__class__().download_playlist_by_url(url, **kwargs) + elif sort == 'space_channel_collection': + m = re.match(r'https?://space\.?bilibili\.com/(\d+)/channel/collectiondetail\?.*sid=(\d+)', self.url) + mid, sid = m.group(1), m.group(2) + api_url = self.bilibili_space_collection_api(mid, sid) + api_content = get_content(api_url, headers=self.bilibili_headers(referer=self.url)) + archives_info = json.loads(api_content) + # TBD: channel of more than 100 videos + + epn, i = len(archives_info['data']['archives']), 0 + for video in archives_info['data']['archives']: + i += 1; log.w('Extracting %s of %s videos ...' % (i, epn)) + url = 'https://www.bilibili.com/video/av%s' % video['aid'] + self.__class__().download_playlist_by_url(url, **kwargs) + elif sort == 'space_favlist': m = re.match(r'https?://space\.?bilibili\.com/(\d+)/favlist\?.*fid=(\d+)', self.url) vmid, fid = m.group(1), m.group(2)
Added support for the collectiondetail API of bilibili; tested to work with the -l option:

`./you-get --format=flv360 -l https://space.bilibili.com/364152971/channel/collectiondetail?sid=13909`

There is already a pull request for the test code: https://github.com/soimort/you-get/pull/2958

@soldatjiang
https://api.github.com/repos/soimort/you-get/pulls/2983
2022-09-26T02:09:47Z
2022-10-08T22:13:13Z
2022-10-08T22:13:13Z
2022-10-08T22:13:34Z
944
soimort/you-get
21,170
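A hedged sketch of consuming the collection endpoint the PR adds. The URL and the `data.archives[].aid` field come straight from the diff; real requests may additionally need Referer or cookie headers, and the response schema is outside you-get's control and may change.

```python
import json
from urllib.request import Request, urlopen

API = ("https://api.bilibili.com/x/polymer/space/seasons_archives_list"
       "?mid=%s&season_id=%s&sort_reverse=false&page_num=%s&page_size=%s")

def collection_video_urls(mid, sid, pn=1, ps=30):
    req = Request(API % (mid, sid, pn, ps), headers={"User-Agent": "Mozilla/5.0"})
    data = json.loads(urlopen(req).read().decode("utf-8"))
    # Each archive carries an aid that maps to a watchable /video/av<id> URL.
    return ["https://www.bilibili.com/video/av%s" % video["aid"]
            for video in data["data"]["archives"]]
```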
Always include either SNI or target IP address as SAN
diff --git a/CHANGELOG.md b/CHANGELOG.md index 0040b75c32..16a539b060 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -39,6 +39,9 @@ ([#6148](https://github.com/mitmproxy/mitmproxy/pull/6148), @mhils) * Add zstd to valid gRPC encoding schemes. ([#6188](https://github.com/mitmproxy/mitmproxy/pull/6188), @tsaaristo) +* For reverse proxy directly accessed via IP address, the IP address is now included + as a subject in the generated certificate. + ([#6202](https://github.com/mitmproxy/mitmproxy/pull/6202), @mhils) ### Breaking Changes diff --git a/mitmproxy/addons/tlsconfig.py b/mitmproxy/addons/tlsconfig.py index 6d14c0f373..de437a4500 100644 --- a/mitmproxy/addons/tlsconfig.py +++ b/mitmproxy/addons/tlsconfig.py @@ -497,17 +497,16 @@ def get_cert(self, conn_context: context.Context) -> certs.CertStoreEntry: if upstream_cert.organization: organization = upstream_cert.organization - # Add SNI. If not available, try the server address as well. + # Add SNI or our local IP address. if conn_context.client.sni: altnames.append(conn_context.client.sni) - elif conn_context.server.address: - altnames.append(conn_context.server.address[0]) - - # As a last resort, add our local IP address. This may be necessary for HTTPS Proxies which are addressed - # via IP. Here we neither have an upstream cert, nor can an IP be included in the server name indication. - if not altnames: + else: altnames.append(conn_context.client.sockname[0]) + # If we already know of a server address, include that in the SANs as well. + if conn_context.server.address: + altnames.append(conn_context.server.address[0]) + # only keep first occurrence of each hostname altnames = list(dict.fromkeys(altnames)) diff --git a/test/mitmproxy/addons/test_tlsconfig.py b/test/mitmproxy/addons/test_tlsconfig.py index ab008e4fb5..64144e12e5 100644 --- a/test/mitmproxy/addons/test_tlsconfig.py +++ b/test/mitmproxy/addons/test_tlsconfig.py @@ -131,12 +131,17 @@ def test_get_cert(self, tdata): assert entry.cert.altnames == [ "example.mitmproxy.org", "server-address.example", + "127.0.0.1", ] # And now we also incorporate SNI. ctx.client.sni = "sni.example" entry = ta.get_cert(ctx) - assert entry.cert.altnames == ["example.mitmproxy.org", "sni.example"] + assert entry.cert.altnames == [ + "example.mitmproxy.org", + "sni.example", + "server-address.example", + ] with open(tdata.path("mitmproxy/data/invalid-subject.pem"), "rb") as f: ctx.server.certificate_list = [certs.Cert.from_pem(f.read())]
This unbreaks reverse proxy setups that are directly addressed by IP.
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/6202
2023-06-25T20:27:35Z
2023-06-26T00:35:51Z
2023-06-26T00:35:51Z
2023-06-26T00:35:55Z
760
mitmproxy/mitmproxy
28,060
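Building the SAN list the fix describes can be sketched with the third-party `cryptography` package: hostnames become `DNSName` entries and IP literals become `IPAddress` entries. This is illustrative only; mitmproxy itself tracks `altnames` as plain strings, as the diff shows.

```python
import ipaddress
from cryptography import x509

def to_san(altnames):
    """Encode hostname/IP strings as proper x509 SAN general names."""
    names = []
    for name in altnames:
        try:
            names.append(x509.IPAddress(ipaddress.ip_address(name)))
        except ValueError:  # not an IP literal, treat as a DNS name
            names.append(x509.DNSName(name))
    return x509.SubjectAlternativeName(names)

san = to_san(["sni.example", "server-address.example", "127.0.0.1"])
```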
Make message printed after bootstrap slightly less confusing
diff --git a/bootstrap/dev/_venv_common.sh b/bootstrap/dev/_venv_common.sh index 2d84dc39b47..d07f38ed8d0 100755 --- a/bootstrap/dev/_venv_common.sh +++ b/bootstrap/dev/_venv_common.sh @@ -21,5 +21,6 @@ pip install -U setuptools pip install -U pip pip install "$@" +set +x echo "Please run the following command to activate developer environment:" echo "source $VENV_NAME/bin/activate"
![setx1](https://cloud.githubusercontent.com/assets/1378718/10774828/ef5d8c3e-7d29-11e5-8f50-1833b3e101ca.png) vs. ![setx2](https://cloud.githubusercontent.com/assets/1378718/10774830/f8324e80-7d29-11e5-9cfd-e03820a9f080.png)
https://api.github.com/repos/certbot/certbot/pulls/1161
2015-10-27T22:41:51Z
2015-10-27T22:52:51Z
2015-10-27T22:52:51Z
2016-05-06T19:22:09Z
118
certbot/certbot
360
Fix PyTorch Hub export inference shapes
diff --git a/models/common.py b/models/common.py index 70ee7105abf..ac3af20d533 100644 --- a/models/common.py +++ b/models/common.py @@ -544,10 +544,9 @@ def forward(self, imgs, size=640, augment=False, profile=False): g = (size / max(s)) # gain shape1.append([y * g for y in s]) imgs[i] = im if im.data.contiguous else np.ascontiguousarray(im) # update - shape1 = [make_divisible(x, self.stride) for x in np.stack(shape1, 0).max(0)] # inference shape - x = [letterbox(im, new_shape=shape1 if self.pt else size, auto=False)[0] for im in imgs] # pad - x = np.stack(x, 0) if n > 1 else x[0][None] # stack - x = np.ascontiguousarray(x.transpose((0, 3, 1, 2))) # BHWC to BCHW + shape1 = [make_divisible(x, self.stride) if self.pt else size for x in np.array(shape1).max(0)] # inf shape + x = [letterbox(im, new_shape=shape1, auto=False)[0] for im in imgs] # pad + x = np.ascontiguousarray(np.array(x).transpose((0, 3, 1, 2))) # stack and BHWC to BCHW x = torch.from_numpy(x).to(p.device).type_as(p) / 255 # uint8 to fp16/32 t.append(time_sync())
May resolve https://github.com/ultralytics/yolov5/issues/6947

## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)</sub>

### 🌟 Summary
Simplified image preprocessing in the YOLOv5 forward pass method.

### 📊 Key Changes
- Removed conditional logic for different inference shapes based on `self.pt`.
- Streamlined the process for making inference shape divisible by the stride and applying letterbox padding to images.
- Improved image stacking and format conversion (BHWC to BCHW) process.

### 🎯 Purpose & Impact
- Enhances code readability and maintainability by reducing complexity.
- Ensures consistent image preprocessing regardless of the PyTorch (`self.pt`) presence.
- Potentially increases efficiency during the image preprocessing step, leading to faster image handling in the forward pass.

These changes can provide a smoother and more understandable experience for developers working with the YOLOv5 codebase, while end-users might benefit from slight performance improvements during model inference. 🚀
https://api.github.com/repos/ultralytics/yolov5/pulls/6949
2022-03-11T15:14:06Z
2022-03-11T15:18:40Z
2022-03-11T15:18:40Z
2024-01-19T12:18:48Z
382
ultralytics/yolov5
24,995
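The stack-and-transpose collapse the diff performs, as a standalone numpy sketch. Three already-letterboxed HWC images stand in for a real batch; the shapes are illustrative.

```python
import numpy as np

imgs = [np.zeros((640, 480, 3), dtype=np.uint8) for _ in range(3)]  # HWC uint8

x = np.ascontiguousarray(np.array(imgs).transpose((0, 3, 1, 2)))  # stack, BHWC -> BCHW
assert x.shape == (3, 3, 640, 480)

x = x.astype(np.float32) / 255  # uint8 -> float, as in the forward pass
```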
Fix `quantile` and `median` for JAX backend with jax 0.4.26.
diff --git a/keras/backend/jax/numpy.py b/keras/backend/jax/numpy.py index a2a005ad5b9..6e94e58ef4d 100644 --- a/keras/backend/jax/numpy.py +++ b/keras/backend/jax/numpy.py @@ -727,9 +727,9 @@ def median(x, axis=None, keepdims=False): result = jnp.median(x, axis=axis, keepdims=keepdims) - # TODO: jnp.median failed to keepdims when axis is None + # TODO: with jax < 0.4.26 jnp.median failed to keepdims when axis is None if keepdims is True and axis is None: - for _ in range(x.ndim - 1): + while result.ndim < x.ndim: result = jnp.expand_dims(result, axis=-1) return result @@ -818,9 +818,10 @@ def quantile(x, q, axis=None, method="linear", keepdims=False): result = jnp.quantile(x, q, axis=axis, method=method, keepdims=keepdims) - # TODO: jnp.quantile failed to keepdims when axis is None + # TODO: with jax < 0.4.26 jnp.quantile failed to keepdims when axis is None if keepdims is True and axis is None: - for _ in range(x.ndim - 1): + result_ndim = x.ndim + (1 if len(q.shape) > 0 else 0) + while result.ndim < result_ndim: result = jnp.expand_dims(result, axis=-1) return result
The JAX bug with `axis=None` and `keepdims=True` is fixed.
https://api.github.com/repos/keras-team/keras/pulls/19443
2024-04-04T17:35:20Z
2024-04-04T17:51:02Z
2024-04-04T17:51:02Z
2024-04-04T17:55:03Z
379
keras-team/keras
47,843
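The retained workaround for older jax: when a reduction over `axis=None` with `keepdims=True` comes back with too few dimensions, trailing singleton axes are appended until the expected rank is reached. A numpy rendering of the same loop (numpy itself handles `keepdims` correctly; the 0-d pre-reduction here just mimics the buggy backend output):

```python
import numpy as np

x = np.arange(24.0).reshape(2, 3, 4)

result = np.asarray(np.median(x))  # 0-d, as if the backend ignored keepdims
while result.ndim < x.ndim:
    result = np.expand_dims(result, axis=-1)

assert result.shape == (1, 1, 1)
```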
[tests] install pytest-responses
diff --git a/setup.py b/setup.py index 47a0b818d2ab2e..e1437ba23ca51b 100755 --- a/setup.py +++ b/setup.py @@ -79,6 +79,7 @@ # /cassandra 'datadog', 'pytest-cov>=1.8.0,<1.9.0', + 'pytest-responses', 'pytest-timeout>=0.5.0,<0.6.0', 'pytest-xdist>=1.11.0,<1.12.0', 'python-coveralls', diff --git a/src/sentry/testutils/cases.py b/src/sentry/testutils/cases.py index 5023719f5caba5..4142ca31ebae6c 100644 --- a/src/sentry/testutils/cases.py +++ b/src/sentry/testutils/cases.py @@ -448,6 +448,7 @@ def invoke(self, *args): @pytest.mark.usefixtures('browser') [email protected] class AcceptanceTestCase(TransactionTestCase): def save_session(self): self.session.save()
https://api.github.com/repos/getsentry/sentry/pulls/4661
2016-12-12T21:57:28Z
2016-12-13T00:13:47Z
2016-12-13T00:13:47Z
2020-12-23T07:38:27Z
254
getsentry/sentry
44,338
unit test for strategy.py
diff --git a/__init__.py b/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/test_strategy.py b/test_strategy.py new file mode 100644 index 00000000..7e0953d5 --- /dev/null +++ b/test_strategy.py @@ -0,0 +1,25 @@ +""" +Tests for strategy.py +""" + +import unittest +import subprocess + +class StrategyTest(unittest.TestCase): + + def test_print_output(self): + """ + Verifies the print output when strategy.py is executed. + The expected_output is equivalent to the output on the command + line when running 'python strategy.py'. + """ + output = subprocess.check_output(["python", "strategy.py"]) + expected_output = 'Strategy Example 0\r\n\ +Strategy Example 1 from execute 1\r\n\ +Strategy Example 2 from execute 2\r\n' + # byte representation required due to EOF returned subprocess + expected_output_as_bytes = expected_output.encode(encoding='UTF-8') + self.assertEqual(output, expected_output_as_bytes) + +if __name__ == "__main__": + unitest.main()
I added a basic unit test for _strategy.py_ which verifies the command-line output when `python strategy.py` is run in the root directory of python-patterns. Just execute `python -m unittest -v` in the root directory of python-patterns to run the test.
https://api.github.com/repos/faif/python-patterns/pulls/107
2016-01-09T15:04:51Z
2016-01-09T17:10:48Z
2016-01-09T17:10:48Z
2016-01-09T17:10:48Z
272
faif/python-patterns
33,493
[wistia] Use API and make more generic
diff --git a/youtube_dl/extractor/generic.py b/youtube_dl/extractor/generic.py index 40eeaad16d4..2d77f604abe 100644 --- a/youtube_dl/extractor/generic.py +++ b/youtube_dl/extractor/generic.py @@ -382,6 +382,19 @@ class GenericIE(InfoExtractor): 'thumbnail': 're:^https?://.*\.jpg$', }, }, + # Wistia embed + { + 'url': 'http://education-portal.com/academy/lesson/north-american-exploration-failed-colonies-of-spain-france-england.html#lesson', + 'md5': '8788b683c777a5cf25621eaf286d0c23', + 'info_dict': { + 'id': '1cfaf6b7ea', + 'ext': 'mov', + 'title': 'md5:51364a8d3d009997ba99656004b5e20d', + 'duration': 643.0, + 'filesize': 182808282, + 'uploader': 'education-portal.com', + }, + }, ] def report_download_webpage(self, video_id): @@ -654,6 +667,16 @@ def _playlist_from_matches(matches, getter, ie=None): 'title': video_title, 'id': video_id, } + match = re.search(r'(?:id=["\']wistia_|data-wistiaid=["\']|Wistia\.embed\(["\'])(?P<id>[^"\']+)', webpage) + if match: + return { + '_type': 'url_transparent', + 'url': 'http://fast.wistia.net/embed/iframe/{0:}'.format(match.group('id')), + 'ie_key': 'Wistia', + 'uploader': video_uploader, + 'title': video_title, + 'id': match.group('id') + } # Look for embedded blip.tv player mobj = re.search(r'<meta\s[^>]*https?://api\.blip\.tv/\w+/redirect/\w+/(\d+)', webpage) diff --git a/youtube_dl/extractor/wistia.py b/youtube_dl/extractor/wistia.py index e6bfa9e147a..748443f811f 100644 --- a/youtube_dl/extractor/wistia.py +++ b/youtube_dl/extractor/wistia.py @@ -1,13 +1,14 @@ from __future__ import unicode_literals -import json import re from .common import InfoExtractor +from ..utils import ExtractorError, compat_urllib_request class WistiaIE(InfoExtractor): _VALID_URL = r'https?://(?:fast\.)?wistia\.net/embed/iframe/(?P<id>[a-z0-9]+)' + _API_URL = 'http://fast.wistia.com/embed/medias/{0:}.json' _TEST = { 'url': 'http://fast.wistia.net/embed/iframe/sh7fpupwlt', @@ -24,11 +25,13 @@ def _real_extract(self, url): mobj = re.match(self._VALID_URL, url) video_id = mobj.group('id') - webpage = self._download_webpage(url, video_id) - data_json = self._html_search_regex( - r'Wistia\.iframeInit\((.*?), {}\);', webpage, 'video data') - - data = json.loads(data_json) + request = compat_urllib_request.Request(self._API_URL.format(video_id)) + request.add_header('Referer', url) # Some videos require this. + data_json = self._download_json(request, video_id) + if data_json.get('error'): + raise ExtractorError('Error while getting the playlist', + expected=True) + data = data_json['media'] formats = [] thumbnails = []
Hi, this changes the wistia extractor to use the Wistia JSON API instead of searching for metadata in the embedded JavaScript. Doing that allows us to make it more generic, as only the video ID is required. Also touches issues: #3314, #1765 and #3791. Improvements and suggestions are very welcome.
https://api.github.com/repos/ytdl-org/youtube-dl/pulls/3799
2014-09-20T00:17:31Z
2014-09-25T00:04:31Z
2014-09-25T00:04:31Z
2014-09-25T15:38:15Z
912
ytdl-org/youtube-dl
49,690
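The API call the reworked extractor makes, sketched standalone with the stdlib. The endpoint and the Referer requirement come from the diff (some videos reject requests without the header); error handling is reduced to a bare exception.

```python
import json
from urllib.request import Request, urlopen

API_URL = "http://fast.wistia.com/embed/medias/{0}.json"

def wistia_media(video_id, referer):
    req = Request(API_URL.format(video_id))
    req.add_header("Referer", referer)  # some videos require this
    data = json.loads(urlopen(req).read().decode("utf-8"))
    if data.get("error"):
        raise RuntimeError("Error while getting the playlist")
    return data["media"]
```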
Add `pod_template_dict` field to `KubernetesPodOperator`
diff --git a/airflow/providers/cncf/kubernetes/operators/pod.py b/airflow/providers/cncf/kubernetes/operators/pod.py index 55fe1e4c3668c..5153b85965be4 100644 --- a/airflow/providers/cncf/kubernetes/operators/pod.py +++ b/airflow/providers/cncf/kubernetes/operators/pod.py @@ -218,6 +218,7 @@ class KubernetesPodOperator(BaseOperator): /airflow/xcom/return.json in the container will also be pushed to an XCom when the container completes. :param pod_template_file: path to pod template file (templated) + :param pod_template_dict: pod template dictionary (templated) :param priority_class_name: priority class name for the launched Pod :param pod_runtime_info_envs: (Optional) A list of environment variables, to be set in the container. @@ -267,6 +268,7 @@ class KubernetesPodOperator(BaseOperator): "labels", "config_file", "pod_template_file", + "pod_template_dict", "namespace", "container_resources", "volumes", @@ -322,6 +324,7 @@ def __init__( log_events_on_failure: bool = False, do_xcom_push: bool = False, pod_template_file: str | None = None, + pod_template_dict: dict | None = None, priority_class_name: str | None = None, pod_runtime_info_envs: list[k8s.V1EnvVar] | None = None, termination_grace_period: int | None = None, @@ -404,6 +407,7 @@ def __init__( self.log_events_on_failure = log_events_on_failure self.priority_class_name = priority_class_name self.pod_template_file = pod_template_file + self.pod_template_dict = pod_template_dict self.name = self._set_name(name) self.random_name_suffix = random_name_suffix self.termination_grace_period = termination_grace_period @@ -897,6 +901,11 @@ def build_pod_request_obj(self, context: Context | None = None) -> k8s.V1Pod: pod_template = pod_generator.PodGenerator.deserialize_model_file(self.pod_template_file) if self.full_pod_spec: pod_template = PodGenerator.reconcile_pods(pod_template, self.full_pod_spec) + elif self.pod_template_dict: + self.log.debug("Pod template dict found, will parse for base pod") + pod_template = pod_generator.PodGenerator.deserialize_model_dict(self.pod_template_dict) + if self.full_pod_spec: + pod_template = PodGenerator.reconcile_pods(pod_template, self.full_pod_spec) elif self.full_pod_spec: pod_template = self.full_pod_spec else: diff --git a/tests/providers/cncf/kubernetes/operators/test_pod.py b/tests/providers/cncf/kubernetes/operators/test_pod.py index 5e9cbbb9168a0..8402f0f6b2399 100644 --- a/tests/providers/cncf/kubernetes/operators/test_pod.py +++ b/tests/providers/cncf/kubernetes/operators/test_pod.py @@ -30,6 +30,7 @@ from airflow.exceptions import AirflowException, AirflowSkipException, TaskDeferred from airflow.models import DAG, DagModel, DagRun, TaskInstance from airflow.models.xcom import XCom +from airflow.providers.cncf.kubernetes import pod_generator from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator, _optionally_suppress from airflow.providers.cncf.kubernetes.secret import Secret from airflow.providers.cncf.kubernetes.triggers.pod import KubernetesPodTrigger @@ -1019,6 +1020,64 @@ def test_pod_template_file_kwargs_override(self, randomize_name, pod_template_fi "run_id": "test", } + @pytest.mark.parametrize(("randomize_name",), ([True], [False])) + def test_pod_template_dict(self, randomize_name): + templated_pod = k8s.V1Pod( + metadata=k8s.V1ObjectMeta( + namespace="templatenamespace", + name="hello", + labels={"release": "stable"}, + ), + spec=k8s.V1PodSpec( + containers=[], + init_containers=[ + k8s.V1Container( + name="git-clone", + 
image="registry.k8s.io/git-sync:v3.1.1", + args=[ + "[email protected]:airflow/some_repo.git", + "--branch={{ params.get('repo_branch', 'master') }}", + ], + ), + ], + ), + ) + k = KubernetesPodOperator( + task_id="task", + random_name_suffix=randomize_name, + pod_template_dict=pod_generator.PodGenerator.serialize_pod(templated_pod), + labels={"hello": "world"}, + ) + + # render templated fields before checking generated pod spec + k.render_template_fields(context={"params": {"repo_branch": "test_branch"}}) + pod = k.build_pod_request_obj(create_context(k)) + + if randomize_name: + assert pod.metadata.name.startswith("hello") + assert pod.metadata.name != "hello" + else: + assert pod.metadata.name == "hello" + + assert pod.metadata.labels == { + "hello": "world", + "release": "stable", + "dag_id": "dag", + "kubernetes_pod_operator": "True", + "task_id": "task", + "try_number": "1", + "airflow_version": mock.ANY, + "airflow_kpo_in_cluster": str(k.hook.is_in_cluster), + "run_id": "test", + } + + assert pod.spec.init_containers[0].name == "git-clone" + assert pod.spec.init_containers[0].image == "registry.k8s.io/git-sync:v3.1.1" + assert pod.spec.init_containers[0].args == [ + "[email protected]:airflow/some_repo.git", + "--branch=test_branch", + ] + @patch(f"{POD_MANAGER_CLASS}.fetch_container_logs") @patch(f"{POD_MANAGER_CLASS}.await_container_completion", new=MagicMock) def test_no_handle_failure_on_success(self, fetch_container_mock):
I would like to propose adding a new templated field to the KubernetesPodOperator. Currently, the operator has a `pod_template_file` field to create the base pod spec, but you need to create a file on disk to use this feature. IMHO it would be useful to accept a stream or the content directly, so that users can easily configure the base pod spec in code.

**Use Case**

I would like to write an Airflow DAG which clones a git repository and mounts a volume for the cloned branch. The branch name should be parameterized so that users can specify it when triggering the DAG.

Example code:

```python
import yaml
from kubernetes.client import models as k8s

from airflow.providers.cncf.kubernetes import pod_generator
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

templated_pod = k8s.V1Pod(
    metadata=k8s.V1ObjectMeta(),
    spec=k8s.V1PodSpec(
        containers=[],
        init_containers=[
            k8s.V1Container(
                name='git-clone',
                image='registry.k8s.io/git-sync:v3.1.1',
                args=[
                    '[email protected]:airflow/some_repo.git',
                    # plain string, so the Jinja braces survive until templating
                    "--branch={{ params.get('repo_branch', 'master') }}",
                    '--root=/tmp/git',
                    '--dest=gitclone',
                    '--ssh=true',
                    '--wait=120',
                    '--one-time=true',
                ],
                volume_mounts=[self.git_sync_mount, self.ssh_key_mount],
                security_context=k8s.V1SecurityContext(run_as_user=65533),
                resources=k8s.V1ResourceRequirements(
                    requests={
                        "cpu": "300m",
                        "memory": "512Mi",
                    },
                    limits={
                        "cpu": "300m",
                        "memory": "512Mi",
                    },
                ),
            ),
        ],
        volumes=[],
    ),
)

serialized_pod = yaml.dump(
    pod_generator.PodGenerator.serialize_pod(templated_pod)
)
operator = KubernetesPodOperator(pod_template_content=serialized_pod, ...)
```
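For reference against the merged diff above: the parameter ultimately shipped as `pod_template_dict` and takes the serialized pod dictionary directly, with no YAML dump needed. A minimal usage sketch modeled on the test in this PR; the task id, labels, and branch parameter are illustrative:

```python
from kubernetes.client import models as k8s

from airflow.providers.cncf.kubernetes import pod_generator
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

base_pod = k8s.V1Pod(
    metadata=k8s.V1ObjectMeta(name="hello", labels={"release": "stable"}),
    spec=k8s.V1PodSpec(
        containers=[],
        init_containers=[
            k8s.V1Container(
                name="git-clone",
                image="registry.k8s.io/git-sync:v3.1.1",
                # Rendered from DAG params at runtime, because
                # pod_template_dict is in template_fields.
                args=["--branch={{ params.get('repo_branch', 'master') }}"],
            ),
        ],
    ),
)

task = KubernetesPodOperator(
    task_id="clone-task",
    pod_template_dict=pod_generator.PodGenerator.serialize_pod(base_pod),
    labels={"hello": "world"},
)
```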
https://api.github.com/repos/apache/airflow/pulls/33174
2023-08-07T13:53:25Z
2023-12-17T19:13:49Z
2023-12-17T19:13:49Z
2023-12-17T19:13:53Z
1,419
apache/airflow
14,647
Copy cookie policy when copying a CookieJar
diff --git a/requests/cookies.py b/requests/cookies.py index ab3c88b9bf..6a0906b8da 100644 --- a/requests/cookies.py +++ b/requests/cookies.py @@ -415,9 +415,14 @@ def __setstate__(self, state): def copy(self): """Return a copy of this RequestsCookieJar.""" new_cj = RequestsCookieJar() + new_cj.set_policy(self.get_policy()) new_cj.update(self) return new_cj + def get_policy(self): + """Return the CookiePolicy instance used.""" + return self._policy + def _copy_cookie_jar(jar): if jar is None: diff --git a/tests/test_requests.py b/tests/test_requests.py index 0106713d50..9f86b68579 100644 --- a/tests/test_requests.py +++ b/tests/test_requests.py @@ -1171,6 +1171,14 @@ def test_cookie_duplicate_names_raises_cookie_conflict_error(self): with pytest.raises(requests.cookies.CookieConflictError): jar.get(key) + def test_cookie_policy_copy(self): + class MyCookiePolicy(cookielib.DefaultCookiePolicy): + pass + + jar = requests.cookies.RequestsCookieJar() + jar.set_policy(MyCookiePolicy()) + assert isinstance(jar.copy().get_policy(), MyCookiePolicy) + def test_time_elapsed_blank(self, httpbin): r = requests.get(httpbin('get')) td = r.elapsed
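There is no PR description on this record, so here is a quick sketch of the behaviour the diff fixes; the policy subclass is illustrative:

```python
from http import cookiejar

from requests.cookies import RequestsCookieJar


class StrictPolicy(cookiejar.DefaultCookiePolicy):
    """Stand-in for any custom policy, e.g. one rejecting third-party cookies."""


jar = RequestsCookieJar()
jar.set_policy(StrictPolicy())

# Before this change, copy() silently reverted to DefaultCookiePolicy;
# with the fix, the custom policy instance is carried over.
assert isinstance(jar.copy().get_policy(), StrictPolicy)
```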
https://api.github.com/repos/psf/requests/pulls/4513
2018-02-14T16:45:26Z
2018-06-12T05:13:08Z
2018-06-12T05:13:08Z
2021-09-02T00:07:47Z
340
psf/requests
32,837
use correct masks
diff --git a/fooocus_version.py b/fooocus_version.py index 65980b5d0..9cdc51839 100644 --- a/fooocus_version.py +++ b/fooocus_version.py @@ -1 +1 @@ -version = '2.0.72' +version = '2.0.73' diff --git a/modules/inpaint_worker.py b/modules/inpaint_worker.py index 480e6358d..cb8eb131f 100644 --- a/modules/inpaint_worker.py +++ b/modules/inpaint_worker.py @@ -25,33 +25,25 @@ def __call__(self, x): current_task = None -def morphological_soft_open(x): - k = 12 - x = Image.fromarray(x) - for _ in range(k): - x = x.filter(ImageFilter.MaxFilter(3)) - x = x.filter(ImageFilter.BoxBlur(k * 2 + 1)) - x = np.array(x) - return x - - def box_blur(x, k): x = Image.fromarray(x) x = x.filter(ImageFilter.BoxBlur(k)) return np.array(x) -def threshold_0_255(x): - y = np.zeros_like(x) - y[x > 127] = 255 - return y +def max33(x): + x = Image.fromarray(x) + x = x.filter(ImageFilter.MaxFilter(3)) + return np.array(x) -def morphological_hard_open(x): - y = threshold_0_255(x) - z = morphological_soft_open(x) - z[y > 127] = 255 - return z +def morphological_open(x): + x_int32 = np.zeros_like(x).astype(np.int32) + x_int32[x > 127] = 256 + for _ in range(32): + maxed = max33(x_int32) - 8 + x_int32 = np.maximum(maxed, x_int32) + return x_int32.clip(0, 255).astype(np.uint8) def imsave(x, path): @@ -132,21 +124,20 @@ def fooocus_fill(image, mask): class InpaintWorker: def __init__(self, image, mask, is_outpaint): # mask processing - self.image_raw = fooocus_fill(image, mask) - self.mask_raw_user_input = mask - self.mask_raw_soft = morphological_hard_open(mask) + self.mask_raw_soft = morphological_open(mask) self.mask_raw_fg = (self.mask_raw_soft == 255).astype(np.uint8) * 255 self.mask_raw_bg = (self.mask_raw_soft == 0).astype(np.uint8) * 255 self.mask_raw_trim = 255 - np.maximum(self.mask_raw_fg, self.mask_raw_bg) - self.mask_raw_error = (self.mask_raw_user_input > self.mask_raw_fg).astype(np.uint8) * 255 + + # image processing + self.image_raw = fooocus_fill(image, self.mask_raw_fg) # log all images - # imsave(self.mask_raw_user_input, 'mask_raw_user_input.png') + # imsave(self.image_raw, 'image_raw.png') # imsave(self.mask_raw_soft, 'mask_raw_soft.png') # imsave(self.mask_raw_fg, 'mask_raw_fg.png') # imsave(self.mask_raw_bg, 'mask_raw_bg.png') # imsave(self.mask_raw_trim, 'mask_raw_trim.png') - # imsave(self.mask_raw_error, 'mask_raw_error.png') # compute abcd a, b, c, d = compute_initial_abcd(self.mask_raw_bg < 127)
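No description was attached, so a short note on the technique: `morphological_open` replaces the old blur-based soft open with an iterated 3x3 dilation that decays by 8 per step, giving the mask a linear falloff of up to 32 px instead of a blurred halo. A self-contained restatement with a toy mask (array shape and indices are illustrative):

```python
import numpy as np
from PIL import Image, ImageFilter


def max33(x):
    # 3x3 grayscale dilation; PIL's MaxFilter also handles 32-bit "I" images.
    return np.array(Image.fromarray(x).filter(ImageFilter.MaxFilter(3)))


def morphological_open(x):
    x_int32 = np.zeros_like(x).astype(np.int32)
    x_int32[x > 127] = 256  # hard pixels start just above the 0-255 range
    for _ in range(32):
        maxed = max33(x_int32) - 8  # dilate one pixel, pay an 8-unit penalty
        x_int32 = np.maximum(maxed, x_int32)
    return x_int32.clip(0, 255).astype(np.uint8)


mask = np.zeros((64, 64), dtype=np.uint8)
mask[24:40, 24:40] = 255
soft = morphological_open(mask)
print(soft[32, 20:26])  # [224 232 240 248 255 255]: values drop 8 per pixel
```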
https://api.github.com/repos/lllyasviel/Fooocus/pulls/452
2023-09-20T10:21:27Z
2023-09-20T10:25:30Z
2023-09-20T10:25:30Z
2023-09-20T10:25:32Z
826
lllyasviel/Fooocus
7,163
Make `reconfigure` use staging server
diff --git a/certbot/CHANGELOG.md b/certbot/CHANGELOG.md index ffed0c77841..58910277022 100644 --- a/certbot/CHANGELOG.md +++ b/certbot/CHANGELOG.md @@ -16,6 +16,8 @@ Certbot adheres to [Semantic Versioning](https://semver.org/). * Updates `joinpath` syntax to only use one addition per call, because the multiple inputs version was causing mypy errors on Python 3.10. +* Makes the `reconfigure` verb actually use the staging server for the dry run to check the new + configuration. More details about these changes can be found on our GitHub repo. diff --git a/certbot/certbot/_internal/cli/__init__.py b/certbot/certbot/_internal/cli/__init__.py index 4943dbbfb4b..0eccef8d8ba 100644 --- a/certbot/certbot/_internal/cli/__init__.py +++ b/certbot/certbot/_internal/cli/__init__.py @@ -36,6 +36,7 @@ from certbot._internal.cli.cli_utils import nonnegative_int from certbot._internal.cli.cli_utils import parse_preferred_challenges from certbot._internal.cli.cli_utils import read_file +from certbot._internal.cli.cli_utils import set_test_server_options from certbot._internal.cli.group_adder import _add_all_groups from certbot._internal.cli.helpful import HelpfulArgumentParser from certbot._internal.cli.paths_parser import _paths_parser diff --git a/certbot/certbot/_internal/cli/cli_utils.py b/certbot/certbot/_internal/cli/cli_utils.py index 9d9c309da10..4f7f4bba6bf 100644 --- a/certbot/certbot/_internal/cli/cli_utils.py +++ b/certbot/certbot/_internal/cli/cli_utils.py @@ -1,6 +1,7 @@ """Certbot command line util function""" import argparse import copy +import glob import inspect from typing import Any from typing import Iterable @@ -250,3 +251,48 @@ def nonnegative_int(value: str) -> int: if int_value < 0: raise argparse.ArgumentTypeError("value must be non-negative") return int_value + +def set_test_server_options(verb: str, config: configuration.NamespaceConfig) -> None: + """Updates server, break_my_certs, staging, tos, and + register_unsafely_without_email in config as necessary to prepare + to use the test server. + + We have --staging/--dry-run; perform sanity check and set config.server + + :param str verb: subcommand called + + :param config: parsed command line arguments + :type config: configuration.NamespaceConfig + + :raises errors.Error: if non-default server is used and --staging is set + :raises errors.Error: if inapplicable verb is used and --dry-run is set + """ + + # Flag combinations should produce these results: + # | --staging | --dry-run | + # ------------------------------------------------------------ + # | --server acme-v02 | Use staging | Use staging | + # | --server acme-staging-v02 | Use staging | Use staging | + # | --server <other> | Conflict error | Use <other> | + + default_servers = (flag_default("server"), constants.STAGING_URI) + + if config.staging and config.server not in default_servers: + raise errors.Error("--server value conflicts with --staging") + + if config.server == flag_default("server"): + config.server = constants.STAGING_URI + # If the account has already been loaded (such as by calling reconstitute before this), + # clear it so that we don't try to use the prod account on the staging server. 
+ config.account = None + + if config.dry_run: + if verb not in ["certonly", "renew", "reconfigure"]: + raise errors.Error("--dry-run currently only works with the " + "'certonly' or 'renew' subcommands (%r)" % verb) + config.break_my_certs = config.staging = True + if glob.glob(os.path.join(config.config_dir, constants.ACCOUNTS_DIR, "*")): + # The user has a prod account, but might not have a staging + # one; we don't want to start trying to perform interactive registration + config.tos = True + config.register_unsafely_without_email = True diff --git a/certbot/certbot/_internal/cli/helpful.py b/certbot/certbot/_internal/cli/helpful.py index 73b4b316da0..10febf2ce48 100644 --- a/certbot/certbot/_internal/cli/helpful.py +++ b/certbot/certbot/_internal/cli/helpful.py @@ -2,7 +2,6 @@ import argparse import functools -import glob import sys from typing import Any from typing import Dict @@ -26,11 +25,11 @@ from certbot._internal.cli.cli_utils import CustomHelpFormatter from certbot._internal.cli.cli_utils import flag_default from certbot._internal.cli.cli_utils import HelpfulArgumentGroup +from certbot._internal.cli.cli_utils import set_test_server_options from certbot._internal.cli.verb_help import VERB_HELP from certbot._internal.cli.verb_help import VERB_HELP_MAP from certbot._internal.display import obj as display_obj from certbot._internal.plugins import disco -from certbot.compat import os from certbot.configuration import ArgumentSource from certbot.configuration import NamespaceConfig @@ -318,33 +317,10 @@ def parse_args(self) -> NamespaceConfig: return config def set_test_server(self, config: NamespaceConfig) -> None: - """We have --staging/--dry-run; perform sanity check and set config.server""" - - # Flag combinations should produce these results: - # | --staging | --dry-run | - # ------------------------------------------------------------ - # | --server acme-v02 | Use staging | Use staging | - # | --server acme-staging-v02 | Use staging | Use staging | - # | --server <other> | Conflict error | Use <other> | - - default_servers = (flag_default("server"), constants.STAGING_URI) - - if config.staging and config.server not in default_servers: - raise errors.Error("--server value conflicts with --staging") - - if config.server == flag_default("server"): - config.server = constants.STAGING_URI - - if config.dry_run: - if self.verb not in ["certonly", "renew"]: - raise errors.Error("--dry-run currently only works with the " - "'certonly' or 'renew' subcommands (%r)" % self.verb) - config.break_my_certs = config.staging = True - if glob.glob(os.path.join(config.config_dir, constants.ACCOUNTS_DIR, "*")): - # The user has a prod account, but might not have a staging - # one; we don't want to start trying to perform interactive registration - config.tos = True - config.register_unsafely_without_email = True + """Updates server, break_my_certs, staging, tos, and + register_unsafely_without_email in config as necessary to prepare + to use the test server.""" + return set_test_server_options(self.verb, config) def handle_csr(self, config: NamespaceConfig) -> None: """Process a --csr flag.""" diff --git a/certbot/certbot/_internal/main.py b/certbot/certbot/_internal/main.py index 70332e32c49..43eeaff1642 100644 --- a/certbot/certbot/_internal/main.py +++ b/certbot/certbot/_internal/main.py @@ -1727,10 +1727,8 @@ def reconfigure(config: configuration.NamespaceConfig, # to say nothing of the difficulty in explaining what exactly this subcommand can modify - # To make sure that the requested 
changes work, do a dry run. While setting up the dry run, - # we will set all the needed fields in config, which will then be saved upon success. - config.dry_run = True - + # To make sure that the requested changes work, we're going to do a dry run, and only save + # upon success. First, modify the config as the user requested. if not config.certname: certname_question = "Which certificate would you like to reconfigure?" config.certname = cert_manager.get_certnames( @@ -1772,17 +1770,44 @@ def reconfigure(config: configuration.NamespaceConfig, if not renewal_candidate: raise errors.ConfigurationError("Could not load certificate. See logs for errors.") + renewalparams = orig_renewal_conf['renewalparams'] + # If server was set but hasn't changed and no account is loaded, + # load the old account because reconstitute won't have + if lineage_config.set_by_user('server') and lineage_config.server == renewalparams['server']\ + and lineage_config.account is None: + lineage_config.account = renewalparams['account'] + for param in ('account', 'server',): + if getattr(lineage_config, param) != renewalparams.get(param): + msg = ("Using reconfigure to change the ACME account or server is not supported. " + "If you would like to do so, use renew with the --force-renewal flag instead " + "of reconfigure. Note that doing so will count against any rate limits. For " + "more information on this method, see " + "https://certbot.org/renew-reconfiguration") + raise errors.ConfigurationError(msg) + # this is where lineage_config gets fully filled out (e.g. --apache will set auth and installer) installer, auth = plug_sel.choose_configurator_plugins(lineage_config, plugins, "certonly") - le_client = _init_le_client(lineage_config, auth, installer) + + # make a deep copy of lineage_config because we're about to modify it for a test dry run + dry_run_lineage_config = copy.deepcopy(lineage_config) + + # we also set noninteractive_mode to more accurately simulate renewal (since `certbot renew` + # implies noninteractive mode) and to avoid prompting the user as changes made to + # dry_run_lineage_config beyond this point will not be applied to the original lineage_config + dry_run_lineage_config.noninteractive_mode = True + dry_run_lineage_config.dry_run = True + cli.set_test_server_options("reconfigure", dry_run_lineage_config) + + le_client = _init_le_client(dry_run_lineage_config, auth, installer) # renews cert as dry run to test that the new values are ok # at this point, renewal_candidate.configuration has the old values, but will use # the values from lineage_config when doing the dry run - _get_and_save_cert(le_client, lineage_config, certname=certname, + _get_and_save_cert(le_client, dry_run_lineage_config, certname=certname, lineage=renewal_candidate) # this function will update lineage.configuration with the new values, and save it to disk + # use the pre-dry-run version renewal_candidate.save_new_config_values(lineage_config) _report_reconfigure_results(renewal_file, orig_renewal_conf) diff --git a/certbot/certbot/_internal/tests/main_test.py b/certbot/certbot/_internal/tests/main_test.py index c7e8d21676c..e71eddc75b7 100644 --- a/certbot/certbot/_internal/tests/main_test.py +++ b/certbot/certbot/_internal/tests/main_test.py @@ -563,7 +563,7 @@ def setUp(self): # Options used in the renewal process [renewalparams] account = ee43634db0aa4e6804f152be39990e6a - server = https://acme-staging-v02.api.letsencrypt.org/directory + server = https://acme-v02.api.letsencrypt.org/directory authenticator = nginx 
installer = nginx key_type = rsa @@ -621,6 +621,72 @@ def test_update_configurator(self): new_config = self._call('--cert-name example.com --apache'.split()) assert new_config['renewalparams']['authenticator'] == 'apache' + def test_only_intended_changes(self): + """ Check that we don't accidentally modify anything that we didn't mean to """ + named_mock = mock.Mock() + named_mock.name = 'apache' + + self.mocks['pick_installer'].return_value = named_mock + self.mocks['pick_auth'].return_value = named_mock + self.mocks['find_init'].return_value = named_mock + + new_config = self._call('--cert-name example.com --apache'.split()) + # Undo the changes we made in calling and in testing + new_config['renewalparams']['authenticator'] = 'nginx' + new_config['renewalparams']['installer'] = 'nginx' + del new_config['renewalparams']['config_dir'] + new_config['version'] = self.original_config['version'] + + assert new_config == self.original_config + + @mock.patch('certbot._internal.hooks.validate_hooks') + def test_staging_used(self, unused_validate_hooks): + """ Check that we use the staging server for the dry run """ + assert self.original_config['renewalparams']['server'] == \ + 'https://acme-v02.api.letsencrypt.org/directory' + + self._call('--cert-name example.com --pre-hook'.split() + ['echo pre']) + + assert 'staging' in self.mocks['_init_le_client'].call_args.args[0].server + assert 'staging' in self.mocks['_get_and_save_cert'].call_args.args[1].server + + def test_new_account_or_server_errors(self): + """ Check that we error when attempting to change the account id or server, + but not when it's the same + """ + orig_account_id = self.original_config['renewalparams']['account'] + orig_server = self.original_config['renewalparams']['server'] + + # new account + try: + self._call(f'--cert-name example.com --account newaccountid'.split()) + except errors.ConfigurationError as err: + assert "Using reconfigure to change the ACME account" in str(err) + + # check that config isn't modified + with open(self.renewal_file, 'r') as f: + new_config = configobj.ConfigObj(f, encoding='utf-8', default_encoding='utf-8') + assert new_config['renewalparams']['account'] == orig_account_id + + # same account + new_config = self._call(f'--cert-name example.com --account {orig_account_id}'.split()) + assert new_config['renewalparams']['account'] == orig_account_id + + # new server + try: + self._call(f'--cert-name example.com --server x.com'.split()) + except errors.ConfigurationError as err: + assert "Using reconfigure to change the ACME account" in str(err) + + # check that config isn't modified + with open(self.renewal_file, 'r') as f: + new_config = configobj.ConfigObj(f, encoding='utf-8', default_encoding='utf-8') + assert new_config['renewalparams']['server'] == orig_server + + # same server + new_config = self._call(f'--cert-name example.com --server {orig_server}'.split()) + assert new_config['renewalparams']['server'] == orig_server + @mock.patch('certbot._internal.hooks.validate_hooks') def test_update_hooks(self, unused_validate_hooks): assert 'pre_hook' not in self.original_config
Fixes #9847. Creates `set_test_server_options` in `cli_utils` so that when `dry_run` is set after parse time, the applicable options are updated as well, and calls it from `main.reconfigure`.

Clears `config.account` in `set_test_server_options` when the default `server` is switched to staging, so that the account id loaded during `reconstitute` isn't used with the staging server. Note that this implies that if `account` was set on the CLI and the default server is being used, the new account will not be tested during the dry run. Given that a dry run specifically implies using the staging server, there's no real way to do that anyway. The new account will still be saved to the renewal conf file as the user requested.

Regression test failing on master:
```python
__________________________________________________________ ReconfigureTest.test_staging_used ___________________________________________________________

self = <certbot._internal.tests.main_test.ReconfigureTest testMethod=test_staging_used>

    def test_staging_used(self):
        """ Check that we use the staging server for the dry run"""
        assert self.original_config['renewalparams']['server'] == \
            'https://acme-v02.api.letsencrypt.org/directory'

        new_config = self._call('--cert-name example.com --pre-hook'.split() + ['echo pre'])

>       assert 'staging' in self.mocks['_init_le_client'].call_args.args[0].server
E       AssertionError: assert 'staging' in 'https://acme-v02.api.letsencrypt.org/directory'
E        + where 'https://acme-v02.api.letsencrypt.org/directory' = <certbot.configuration.NamespaceConfig object at 0x7f49e303d780>.server

certbot/certbot/_internal/tests/main_test.py:649: AssertionError
```
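The control flow in `reconfigure` reduces to a small pattern that may be easier to see in isolation. A sketch under stated assumptions: `config` stands in for certbot's `NamespaceConfig`, `run_renewal` and `save` stand in for `_get_and_save_cert` and `save_new_config_values`, and the two URLs are the real production and staging ACME directory endpoints:

```python
import copy

DEFAULT_SERVER = "https://acme-v02.api.letsencrypt.org/directory"
STAGING_SERVER = "https://acme-staging-v02.api.letsencrypt.org/directory"


def probe_then_save(config, run_renewal, save):
    probe = copy.deepcopy(config)      # never mutate the values we may save
    probe.dry_run = True
    probe.noninteractive_mode = True   # reconfigure simulates `certbot renew`
    if probe.server == DEFAULT_SERVER:
        probe.server = STAGING_SERVER
        probe.account = None           # don't reuse the prod account on staging
    run_renewal(probe)                 # raises on failure, so nothing is saved
    save(config)                       # persist the original, pre-dry-run values
```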
https://api.github.com/repos/certbot/certbot/pulls/9870
2024-01-05T23:41:31Z
2024-01-26T20:09:20Z
2024-01-26T20:09:20Z
2024-01-26T20:09:21Z
3,664
certbot/certbot
2,504
feat(exmo): fetchMyTrades - margin
diff --git a/ts/src/exmo.ts b/ts/src/exmo.ts index cc1980fff384..177e15feeb61 100644 --- a/ts/src/exmo.ts +++ b/ts/src/exmo.ts @@ -1188,17 +1188,16 @@ export default class exmo extends Exchange { // "commission_percent": "0.2" // } // - // margin + // fetchMyTrades (margin) // // { - // "is_maker": false, - // "order_id": "123", - // "pair": "BTC_USD", - // "price": "54122.25", - // "quantity": "0.00069994", - // "trade_dt": "1619069561718824428", - // "trade_id": "692842802860135010", - // "type": "sell" + // "trade_id": "692861757015952517", + // "trade_dt": "1693951853197811824", + // "trade_type": "buy", + // "pair": "ADA_USDT", + // "quantity": "1.96607879", + // "price": "0.2568", + // "amount": "0.50488903" // } // const timestamp = this.safeTimestamp (trade, 'date'); @@ -1207,7 +1206,7 @@ export default class exmo extends Exchange { const priceString = this.safeString (trade, 'price'); const amountString = this.safeString (trade, 'quantity'); const costString = this.safeString (trade, 'amount'); - const side = this.safeString (trade, 'type'); + const side = this.safeString2 (trade, 'type', 'trade_type'); const type = undefined; const marketId = this.safeString (trade, 'pair'); market = this.safeMarket (marketId, market, '_'); @@ -1298,37 +1297,89 @@ export default class exmo extends Exchange { * @method * @name exmo#fetchMyTrades * @description fetch all trades made by the user - * @param {string} symbol unified market symbol + * @see https://documenter.getpostman.com/view/10287440/SzYXWKPi#b8d8d9af-4f46-46a1-939b-ad261d79f452 // spot + * @see https://documenter.getpostman.com/view/10287440/SzYXWKPi#f4b1aaf8-399f-403b-ab5e-4926d967a106 // margin + * @param {string} symbol a symbol is required but it can be a single string, or a non-empty array * @param {int} [since] the earliest time in ms to fetch trades for - * @param {int} [limit] the maximum number of trades structures to retrieve + * @param {int} [limit] *required for margin orders* the maximum number of trades structures to retrieve * @param {object} [params] extra parameters specific to the exmo api endpoint + * + * EXCHANGE SPECIFIC PARAMETERS + * @param {int} [params.offset] last deal offset, default = 0 * @returns {Trade[]} a list of [trade structures]{@link https://github.com/ccxt/ccxt/wiki/Manual#trade-structure} */ - // a symbol is required but it can be a single string, or a non-empty array - if (symbol === undefined) { - throw new ArgumentsRequired (this.id + ' fetchMyTrades() requires a symbol argument (a single symbol or an array)'); + this.checkRequiredSymbol ('fetchMyTrades', symbol); + let marginMode = undefined; + [ marginMode, params ] = this.handleMarginModeAndParams ('fetchMyTrades', params); + if (marginMode === 'cross') { + throw new BadRequest (this.id + 'only isolated margin is supported'); } await this.loadMarkets (); - let pair = undefined; - let market = undefined; - if (Array.isArray (symbol)) { - const numSymbols = symbol.length; - if (numSymbols < 1) { - throw new ArgumentsRequired (this.id + ' fetchMyTrades() requires a non-empty symbol array'); - } - const marketIds = this.marketIds (symbol); - pair = marketIds.join (','); + const market = this.market (symbol); + const pair = market['id']; + const isSpot = marginMode !== 'isolated'; + if (limit === undefined) { + limit = 100; + } + const request = {}; + if (isSpot) { + request['pair'] = pair; } else { - market = this.market (symbol); - pair = market['id']; + request['pair_name'] = pair; } - const request = { - 'pair': pair, - }; if (limit !== 
undefined) { request['limit'] = limit; } - const response = await this.privatePostUserTrades (this.extend (request, params)); + const offset = this.safeInteger (params, 'offset', 0); + request['offset'] = offset; + let response = undefined; + if (isSpot) { + response = await this.privatePostUserTrades (this.extend (request, params)); + // + // { + // "BTC_USD": [ + // { + // "trade_id": 20056872, + // "client_id": 100500, + // "date": 1435488248, + // "type": "buy", + // "pair": "BTC_USD", + // "quantity": "1", + // "price": "100", + // "amount": "100", + // "order_id": 7, + // "parent_order_id": 117684023830293, + // "exec_type": "taker", + // "commission_amount": "0.02", + // "commission_currency": "BTC", + // "commission_percent": "0.2" + // } + // ], + // ... + // } + // + } else { + const responseFromExchange = await this.privatePostMarginTrades (this.extend (request, params)); + // + // { + // "trades": { + // "ADA_USDT": [ + // { + // "trade_id": "692861757015952517", + // "trade_dt": "1693951853197811824", + // "trade_type": "buy", + // "pair": "ADA_USDT", + // "quantity": "1.96607879", + // "price": "0.2568", + // "amount": "0.50488903" + // }, + // ] + // ... + // } + // } + // + response = this.safeValue (responseFromExchange, 'trades'); + } let result = []; const marketIdsInner = Object.keys (response); for (let i = 0; i < marketIdsInner.length; i++) {
``` % exmo fetchMyTrades ADA/USDT undefined 100 '{"marginMode": "isolated"}' | condense 2023-09-05T22:47:47.544Z Node.js: v18.15.0 CCXT v4.0.83 exmo.fetchMyTrades (ADA/USDT, , 100, [object Object]) 2023-09-05T22:47:51.406Z iteration 0 passed in 509 ms id | timestamp | datetime | symbol | order | type | side | takerOrMaker | price | amount | cost | fee | fees ---------------------------------------------------------------------------------------------------------------------------------------------- 692861757016564983 | | | ADA/USDT | | | buy | | 0.2571 | 1.96607879 | 0.50547885 | | [] ... 692861757019668012 | | | ADA/USDT | | | buy | | 0.257 | 1.96607879 | 0.50528224 | | [] 100 objects 2023-09-05T22:47:51.406Z iteration 1 passed in 509 ms ``` ``` Python v3.11.3 CCXT v4.0.83 exmo.fetchMyTrades(ADA/USDT,None,100,{'marginMode': 'isolated'}) [{'amount': 1.96607879, 'cost': 0.50528224, 'datetime': None, 'fee': None, 'fees': [], 'id': '692861757016994152', 'info': {'amount': '0.50528224', 'pair': 'ADA_USDT', 'price': '0.257', 'quantity': '1.96607879', 'trade_dt': '1693952480723152404', 'trade_id': '692861757016994152', 'trade_type': 'buy'}, 'order': None, 'price': 0.257, 'side': 'buy', 'symbol': 'ADA/USDT', 'takerOrMaker': None, 'timestamp': None, 'type': None}, ... {'amount': 28.0, 'cost': 7.19167988, 'datetime': None, 'fee': None, 'fees': [], 'id': '692861757020122630', 'info': {'amount': '7.19167988', 'pair': 'ADA_USDT', 'price': '0.25684571', 'quantity': '28', 'trade_dt': '1693954337193924899', 'trade_id': '692861757020122630', 'trade_type': 'buy'}, 'order': None, 'price': 0.25684571, 'side': 'buy', 'symbol': 'ADA/USDT', 'takerOrMaker': None, 'timestamp': None, 'type': None}] ```
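For completeness, the same call from the Python binding; the API credentials are placeholders:

```python
import ccxt

exchange = ccxt.exmo({"apiKey": "YOUR_KEY", "secret": "YOUR_SECRET"})

# marginMode='isolated' routes the request to privatePostMarginTrades and
# unwraps the nested 'trades' object; spot behaviour is unchanged.
trades = exchange.fetch_my_trades(
    "ADA/USDT", limit=100, params={"marginMode": "isolated"}
)
for trade in trades:
    print(trade["id"], trade["side"], trade["price"], trade["amount"])
```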
https://api.github.com/repos/ccxt/ccxt/pulls/19107
2023-09-05T22:50:31Z
2023-09-09T15:01:13Z
2023-09-09T15:01:13Z
2023-09-11T16:11:29Z
1,706
ccxt/ccxt
13,436
[requires.io] dependency update on master branch
diff --git a/setup.py b/setup.py index c2fb4718a4..54913e6fb3 100644 --- a/setup.py +++ b/setup.py @@ -61,10 +61,10 @@ # It is not considered best practice to use install_requires to pin dependencies to specific versions. install_requires=[ "blinker>=1.4, <1.5", - "brotlipy>=0.5.1, <0.7", + "brotlipy>=0.5.1, <0.8", "certifi>=2015.11.20.1", # no semver here - this should always be on the last release! "click>=6.2, <7", - "cryptography>=1.4, <1.9", + "cryptography>=1.4, <1.10", "cssutils>=1.0.1, <1.1", "h2>=3.0, <4", "html2text>=2016.1.8, <=2016.9.19",
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/2360
2017-05-30T08:50:17Z
2017-05-31T08:18:40Z
2017-05-31T08:18:40Z
2017-05-31T08:18:43Z
245
mitmproxy/mitmproxy
27,812
fix for markdown table
diff --git a/CHANGELOG.md b/CHANGELOG.md index 31ce39119..2cac5fe54 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -13,6 +13,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 - Fixed Text.expand_tabs not expanding spans. - Fixed TimeElapsedColumn from showing negative. - Fix for escaping strings with a trailing backslash https://github.com/Textualize/rich/issues/2987 +- Fixed exception in Markdown with partial table https://github.com/Textualize/rich/issues/3053 ### Added diff --git a/rich/markdown.py b/rich/markdown.py index e2cedfaa8..704da3010 100644 --- a/rich/markdown.py +++ b/rich/markdown.py @@ -254,15 +254,14 @@ def __rich_console__( ) -> RenderResult: table = Table(box=box.SIMPLE_HEAVY) - assert self.header is not None - assert self.header.row is not None - for column in self.header.row.cells: - table.add_column(column.content) - - assert self.body is not None - for row in self.body.rows: - row_content = [element.content for element in row.cells] - table.add_row(*row_content) + if self.header is not None and self.header.row is not None: + for column in self.header.row.cells: + table.add_column(column.content) + + if self.body is not None: + for row in self.body.rows: + row_content = [element.content for element in row.cells] + table.add_row(*row_content) yield table diff --git a/tests/test_markdown.py b/tests/test_markdown.py index 861665cff..1321dceab 100644 --- a/tests/test_markdown.py +++ b/tests/test_markdown.py @@ -133,6 +133,14 @@ def test_markdown_table(): assert result == expected +def test_partial_table(): + markdown = Markdown("| Simple | Table |\n| ------ | ----- ") + result = render(markdown) + print(repr(result)) + expected = "\n \n \x1b[1m \x1b[0m\x1b[1mSimple\x1b[0m\x1b[1m \x1b[0m \x1b[1m \x1b[0m\x1b[1mTable\x1b[0m\x1b[1m \x1b[0m \n ━━━━━━━━━━━━━━━━ \n \n" + assert result == expected + + if __name__ == "__main__": markdown = Markdown(MARKDOWN) rendered = render(markdown)
Fixes https://github.com/Textualize/rich/issues/3053
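A minimal reproduction of the crash this fixes, following the shape of the new test; before this patch, a table with a header row but no body rows tripped the `assert self.body is not None`:

```python
from rich.console import Console
from rich.markdown import Markdown

# Previously raised AssertionError; now renders the header-only table.
Console().print(Markdown("| Simple | Table |\n| ------ | ----- "))
```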
https://api.github.com/repos/Textualize/rich/pulls/3064
2023-07-29T09:07:48Z
2023-07-29T09:13:37Z
2023-07-29T09:13:37Z
2023-07-29T09:13:38Z
634
Textualize/rich
48,261
use stdout for printing transcription progress
diff --git a/whisper/transcribe.py b/whisper/transcribe.py index d95d3336..62ef5fe5 100644 --- a/whisper/transcribe.py +++ b/whisper/transcribe.py @@ -169,7 +169,8 @@ def add_segment( line = f"[{format_timestamp(start)} --> {format_timestamp(end)}] {text}\n" # compared to just `print(line)`, this replaces any character not representable using # the system default encoding with an '?', avoiding UnicodeEncodeError. - sys.stderr.buffer.write(line.encode(sys.getdefaultencoding(), errors="replace")) + sys.stdout.buffer.write(line.encode(sys.getdefaultencoding(), errors="replace")) + sys.stdout.flush() # show the progress bar when verbose is False (otherwise the transcribed text will be printed) num_frames = mel.shape[-1]
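The pattern kept here (encode with replacement, then flush) is useful on its own; a hedged sketch, assuming the same console constraints the original comment describes:

```python
import sys


def print_line(line: str) -> None:
    # Characters the system encoding can't represent become '?', avoiding
    # UnicodeEncodeError; the explicit flush keeps piped stdout output timely.
    sys.stdout.buffer.write(line.encode(sys.getdefaultencoding(), errors="replace"))
    sys.stdout.flush()


print_line("[00:00.000 --> 00:02.000] transcribed text\n")
```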
https://api.github.com/repos/openai/whisper/pulls/867
2023-01-20T08:52:13Z
2023-01-20T08:54:06Z
2023-01-20T08:54:06Z
2023-01-24T18:15:03Z
193
openai/whisper
45,784
Add Cloverly API
diff --git a/README.md b/README.md index 04eeed8b42..e2cf332d40 100644 --- a/README.md +++ b/README.md @@ -362,6 +362,7 @@ API | Description | Auth | HTTPS | CORS | |---|---|---|---|---| | [AirVisual](https://airvisual.com/api) | Air quality and weather data | `apiKey` | Yes | Unknown | | [Carbon Interface](https://docs.carboninterface.com/) | API to calculate carbon (C02) emissions estimates for common C02 emitting activities | `apiKey` | Yes | Yes | +| [Cloverly](https://www.cloverly.com/carbon-offset-documentation) | API calculates the impact of common carbon-intensive activities in real time | `apiKey` | Yes | Unknown | | [GrünstromIndex](https://www.corrently.de/hintergrund/gruenstromindex/index.html) | Green Power Index for Germany (Grünstromindex/GSI) | No | No | Yes | | [La Data Verte](https://ladataverte.fr) | Aggregation of multiple environmental indicators (CO2 emissions, Average temperature, etc) | No | Yes | Unknown | | [OpenAQ](https://docs.openaq.org/) | Open air quality data | `apiKey` | Yes | Unknown |
- [x] My submission is formatted according to the guidelines in the [contributing guide](CONTRIBUTING.md)
- [x] My addition is ordered alphabetically
- [x] My submission has a useful description
- [x] The description does not end with punctuation
- [x] Each table column is padded with one space on either side
- [x] I have searched the repository for any relevant issues or pull requests
- [x] Any category I am creating has the minimum requirement of 3 items
- [x] All changes have been [squashed][squash-link] into a single commit

[squash-link]: <https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit>
https://api.github.com/repos/public-apis/public-apis/pulls/1676
2021-04-27T07:37:45Z
2021-05-18T19:57:09Z
2021-05-18T19:57:08Z
2021-05-18T19:57:09Z
293
public-apis/public-apis
35,649
Load skills without overwriting them
diff --git a/interpreter/core/computer/computer.py b/interpreter/core/computer/computer.py index 870de9e42..3062a1af8 100644 --- a/interpreter/core/computer/computer.py +++ b/interpreter/core/computer/computer.py @@ -29,6 +29,7 @@ def __init__(self): self.emit_images = True self.api_base = "https://api.openinterpreter.com/v0" + self.save_skills = True # self.api_base = "http://0.0.0.0/v0" # Shortcut for computer.terminal.languages diff --git a/interpreter/core/computer/skills/skills.py b/interpreter/core/computer/skills/skills.py index 4154b0acc..e0f8023f8 100644 --- a/interpreter/core/computer/skills/skills.py +++ b/interpreter/core/computer/skills/skills.py @@ -1,17 +1,21 @@ +import glob import os import aifs -from ....terminal_interface.utils.oi_dir import oi_dir - -skills_dir = os.path.join(oi_dir, "skills") - class Skills: def __init__(self, computer): self.computer = computer - self.path = skills_dir + self.skills_dir = None def search(self, query): - result = aifs.search(skills_dir, query, python_docstrings_only=True) + result = aifs.search(self.skills_dir, query, python_docstrings_only=True) return result + + def import_skills(self): + self.computer.save_skills = False + for file in glob.glob(os.path.join(self.skills_dir, "*.py")): + with open(file, "r") as f: + self.computer.run("python", f.read()) + self.computer.save_skills = True diff --git a/interpreter/core/computer/terminal/languages/jupyter_language.py b/interpreter/core/computer/terminal/languages/jupyter_language.py index be21d53a2..33c1ffb3d 100644 --- a/interpreter/core/computer/terminal/languages/jupyter_language.py +++ b/interpreter/core/computer/terminal/languages/jupyter_language.py @@ -74,14 +74,15 @@ def run(self, code): # Non blocking functions = {} - skill_library_path = self.computer.skills.path + if self.computer.save_skills and functions: + skill_library_path = self.computer.skills.path - if not os.path.exists(skill_library_path): - os.makedirs(skill_library_path) + if not os.path.exists(skill_library_path): + os.makedirs(skill_library_path) - for filename, code in functions.items(): - with open(f"{skill_library_path}/{filename}.py", "w") as file: - file.write(code) + for filename, code in functions.items(): + with open(f"{skill_library_path}/{filename}.py", "w") as file: + file.write(code) # lel # exec(code) @@ -409,6 +410,9 @@ def string_to_python(code_as_string): import_statements.append(f"import {alias.name}") # Check for function definitions elif isinstance(node, ast.FunctionDef): + if node.name.startswith("_"): + # ignore private functions + continue func_info = { "name": node.name, "docstring": ast.get_docstring(node), diff --git a/interpreter/core/core.py b/interpreter/core/core.py index cd0a6c0ce..2a4b92444 100644 --- a/interpreter/core/core.py +++ b/interpreter/core/core.py @@ -6,6 +6,7 @@ import asyncio import json import os +from pathlib import Path import threading import time from datetime import datetime @@ -62,6 +63,8 @@ def __init__( system_message=default_system_message, custom_instructions="", computer=None, + skills_dir=None, + import_skills=True, ): # State self.messages = [] if messages is None else messages @@ -97,6 +100,12 @@ def __init__( # Computer self.computer = Computer() if computer is None else computer + self.computer.skills.skills_dir = ( + skills_dir if skills_dir else str(Path(oi_dir) / "skills") + + ) + if import_skills: + self.computer.skills.import_skills() def server(self, *args, **kwargs): server(self, *args, **kwargs)
### Describe the changes you have made: - When loading the skills initially, don't overwrite them. - Ignore private functions in skills ### Reference any relevant issues (e.g. "Fixes #000"): ### Pre-Submission Checklist (optional but appreciated): - [x] I have included relevant documentation updates (stored in /docs) - [x] I have read `docs/CONTRIBUTING.md` - [ ] I have read `docs/ROADMAP.md` ### OS Tests (optional but appreciated): - [ ] Tested on Windows - [x] Tested on MacOS - [ ] Tested on Linux
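The core of the change is a guard flag around skill replay. A condensed sketch of the pattern, with a `try/finally` added here for robustness; `computer` stands in for open-interpreter's `Computer` object:

```python
import glob
import os


def import_skills(computer, skills_dir):
    computer.save_skills = False  # replaying a skill must not rewrite its file
    try:
        for path in glob.glob(os.path.join(skills_dir, "*.py")):
            with open(path) as f:
                computer.run("python", f.read())
    finally:
        computer.save_skills = True
```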
https://api.github.com/repos/OpenInterpreter/open-interpreter/pulls/1009
2024-02-11T10:59:32Z
2024-02-11T12:10:14Z
2024-02-11T12:10:14Z
2024-02-11T12:10:14Z
1,016
OpenInterpreter/open-interpreter
40,903
Add teddy-bear class back to first 1000 classes of imagenet22k_ms_synsets (line 851, index 850)
diff --git a/timm/data/_info/imagenet22k_ms_synsets.txt b/timm/data/_info/imagenet22k_ms_synsets.txt index 7891fa98c9..33fff6a427 100644 --- a/timm/data/_info/imagenet22k_ms_synsets.txt +++ b/timm/data/_info/imagenet22k_ms_synsets.txt @@ -848,6 +848,7 @@ n04380533 n04389033 n04392985 n04398044 +n04399382 n04404412 n04409515 n04417672
Working on a fix for #2140. Need to investigate further; look at the 12k & 22k map files too.
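A quick way to verify the restored line, assuming the script runs from the repo root (n04399382 is the "teddy, teddy bear" synset):

```python
with open("timm/data/_info/imagenet22k_ms_synsets.txt") as f:
    synsets = [line.strip() for line in f]

assert synsets[850] == "n04399382"  # line 851, index 850
```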
https://api.github.com/repos/huggingface/pytorch-image-models/pulls/2145
2024-04-09T17:01:18Z
2024-04-09T21:56:31Z
2024-04-09T21:56:31Z
2024-04-09T21:56:31Z
137
huggingface/pytorch-image-models
16,231