| title | diff | body | url | created_at | closed_at | merged_at | updated_at | diff_len | repo_name | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|
Highly experimental Amazon Linux bootstrapping
|
diff --git a/bootstrap/_rpm_common.sh b/bootstrap/_rpm_common.sh
index 26b91b8c422..9f670da6e87 100755
--- a/bootstrap/_rpm_common.sh
+++ b/bootstrap/_rpm_common.sh
@@ -16,10 +16,12 @@ else
fi
# "git-core" seems to be an alias for "git" in CentOS 7 (yum search fails)
+# Amazon Linux 2015.03 needs python27-virtualenv rather than python-virtualenv
$tool install -y \
git-core \
python \
python-devel \
+ python27-virtualenv \
python-virtualenv \
gcc \
dialog \
diff --git a/letsencrypt-auto b/letsencrypt-auto
index bb2dd9926bb..5a1c9b0339f 100755
--- a/letsencrypt-auto
+++ b/letsencrypt-auto
@@ -124,6 +124,8 @@ then
ExperimentalBootstrap "FreeBSD" freebsd.sh "$SUDO"
elif uname | grep -iq Darwin ; then
ExperimentalBootstrap "Mac OS X" mac.sh
+ elif grep -iq "Amazon Linux" /etc/issue ; then
+ ExperimentalBootstrap "Amazon Linux" amazon_linux.sh
else
echo "Sorry, I don't know how to bootstrap Let's Encrypt on your operating system!"
echo
|
(Not to be merged until after **1.0 for launch** is done, and after testing for impact on other RedHat-based platforms).
Closes: #1458
|
https://api.github.com/repos/certbot/certbot/pulls/1465
|
2015-11-11T19:27:00Z
|
2015-11-15T03:57:44Z
|
2015-11-15T03:57:44Z
|
2016-05-06T19:21:54Z
| 301
|
certbot/certbot
| 1,325
|
Move more stuff into connection util
|
diff --git a/lib/streamlit/connections/snowpark_connection.py b/lib/streamlit/connections/snowpark_connection.py
index 1fb8010e90c3..3e90a74a6ed1 100644
--- a/lib/streamlit/connections/snowpark_connection.py
+++ b/lib/streamlit/connections/snowpark_connection.py
@@ -18,17 +18,20 @@
# way to configure this at a per-line level :(
# mypy: no-warn-unused-ignores
-import configparser
-import os
import threading
from collections import ChainMap
from contextlib import contextmanager
from datetime import timedelta
-from typing import TYPE_CHECKING, Any, Dict, Iterator, Optional, Union, cast
+from typing import TYPE_CHECKING, Iterator, Optional, Union, cast
import pandas as pd
from streamlit.connections import ExperimentalBaseConnection
+from streamlit.connections.util import (
+ SNOWSQL_CONNECTION_FILE,
+ load_from_snowsql_config_file,
+ running_in_sis,
+)
from streamlit.errors import StreamlitAPIException
from streamlit.runtime.caching import cache_data
@@ -37,43 +40,6 @@
_REQUIRED_CONNECTION_PARAMS = {"account"}
-_DEFAULT_CONNECTION_FILE = "~/.snowsql/config"
-
-
-def _load_from_snowsql_config_file(connection_name: str) -> Dict[str, Any]:
- """Loads the dictionary from snowsql config file."""
- snowsql_config_file = os.path.expanduser(_DEFAULT_CONNECTION_FILE)
- if not os.path.exists(snowsql_config_file):
- return {}
-
- config = configparser.ConfigParser(inline_comment_prefixes="#")
- config.read(snowsql_config_file)
-
- if f"connections.{connection_name}" in config:
- raw_conn_params = config[f"connections.{connection_name}"]
- elif "connections" in config:
- raw_conn_params = config["connections"]
- else:
- return {}
-
- conn_params = {
- k.replace("name", ""): v.strip('"') for k, v in raw_conn_params.items()
- }
-
- if "db" in conn_params:
- conn_params["database"] = conn_params["db"]
- del conn_params["db"]
-
- return conn_params
-
-
-def _running_in_sis() -> bool:
- import snowflake.connector.connection # type: ignore
-
- # snowflake.connector.connection.SnowflakeConnection does not exist inside a Stored
- # Proc or Streamlit. It is only part of the external package. So this returns true
- # only in SiS.
- return not hasattr(snowflake.connector.connection, "SnowflakeConnection")
class SnowparkConnection(ExperimentalBaseConnection["Session"]):
@@ -104,19 +70,19 @@ def _connect(self, **kwargs) -> "Session":
# If we're running in SiS, just call get_active_session(). Otherwise, attempt to
# create a new session from whatever credentials we have available.
- if _running_in_sis():
+ if running_in_sis():
return get_active_session()
conn_params = ChainMap(
kwargs,
self._secrets.to_dict(),
- _load_from_snowsql_config_file(self._connection_name),
+ load_from_snowsql_config_file(self._connection_name),
)
if not len(conn_params):
raise StreamlitAPIException(
"Missing Snowpark connection configuration. "
- f"Did you forget to set this in `secrets.toml`, `{_DEFAULT_CONNECTION_FILE}`, "
+ f"Did you forget to set this in `secrets.toml`, `{SNOWSQL_CONNECTION_FILE}`, "
"or as kwargs to `st.experimental_connection`?"
)
diff --git a/lib/streamlit/connections/util.py b/lib/streamlit/connections/util.py
index 7c9ae0118be9..6f1b94264ef7 100644
--- a/lib/streamlit/connections/util.py
+++ b/lib/streamlit/connections/util.py
@@ -12,9 +12,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+# NOTE: We won't always be able to import from snowflake.connector.connection so need the
+# `type: ignore` comment below, but that comment will explode if `warn-unused-ignores` is
+# turned on when the package is available. Unfortunately, mypy doesn't provide a good
+# way to configure this at a per-line level :(
+# mypy: no-warn-unused-ignores
+
+import configparser
+import os
from typing import Any, Collection, Dict
+SNOWSQL_CONNECTION_FILE = "~/.snowsql/config"
+
def extract_from_dict(
keys: Collection[str], source_dict: Dict[str, Any]
@@ -40,3 +50,40 @@ def extract_from_dict(
d[k] = source_dict.pop(k)
return d
+
+
+def load_from_snowsql_config_file(connection_name: str) -> Dict[str, Any]:
+ """Loads the dictionary from snowsql config file."""
+ snowsql_config_file = os.path.expanduser(SNOWSQL_CONNECTION_FILE)
+ if not os.path.exists(snowsql_config_file):
+ return {}
+
+ config = configparser.ConfigParser(inline_comment_prefixes="#")
+ config.read(snowsql_config_file)
+
+ if f"connections.{connection_name}" in config:
+ raw_conn_params = config[f"connections.{connection_name}"]
+ elif "connections" in config:
+ raw_conn_params = config["connections"]
+ else:
+ return {}
+
+ conn_params = {
+ k.replace("name", ""): v.strip('"') for k, v in raw_conn_params.items()
+ }
+
+ if "db" in conn_params:
+ conn_params["database"] = conn_params["db"]
+ del conn_params["db"]
+
+ return conn_params
+
+
+def running_in_sis() -> bool:
+ """Return whether this app seems to be running in SiS."""
+ import snowflake.connector.connection # type: ignore
+
+ # snowflake.connector.connection.SnowflakeConnection does not exist inside a Stored
+ # Proc or Streamlit. It is only part of the external package. So this returns true
+ # only in SiS.
+ return not hasattr(snowflake.connector.connection, "SnowflakeConnection")
diff --git a/lib/tests/streamlit/connections/snowpark_connection_test.py b/lib/tests/streamlit/connections/snowpark_connection_test.py
index f1a03fd36777..878cc82b10c1 100644
--- a/lib/tests/streamlit/connections/snowpark_connection_test.py
+++ b/lib/tests/streamlit/connections/snowpark_connection_test.py
@@ -14,13 +14,12 @@
import threading
import unittest
-from unittest.mock import MagicMock, PropertyMock, mock_open, patch
+from unittest.mock import MagicMock, PropertyMock, patch
import pytest
import streamlit as st
from streamlit.connections import SnowparkConnection
-from streamlit.connections.snowpark_connection import _load_from_snowsql_config_file
from streamlit.errors import StreamlitAPIException
from streamlit.runtime.scriptrunner import add_script_run_ctx
from streamlit.runtime.secrets import AttrDict
@@ -32,59 +31,12 @@ class SnowparkConnectionTest(unittest.TestCase):
def tearDown(self) -> None:
st.cache_data.clear()
- def test_load_from_snowsql_config_file_no_file(self):
- assert _load_from_snowsql_config_file("my_snowpark_connection") == {}
-
- @patch(
- "streamlit.connections.snowpark_connection.os.path.exists",
- MagicMock(return_value=True),
- )
- def test_load_from_snowsql_config_file_no_section(self):
- with patch("builtins.open", new_callable=mock_open, read_data=""):
- assert _load_from_snowsql_config_file("my_snowpark_connection") == {}
-
- @patch(
- "streamlit.connections.snowpark_connection.os.path.exists",
- MagicMock(return_value=True),
- )
- def test_load_from_snowsql_config_file_named_section(self):
- config_data = """
-[connections.my_snowpark_connection]
-accountname = "hello"
-dbname = notPostgres
-
-[connections]
-accountname = "i get overwritten"
-schemaname = public
-"""
- with patch("builtins.open", new_callable=mock_open, read_data=config_data):
- assert _load_from_snowsql_config_file("my_snowpark_connection") == {
- "account": "hello",
- "database": "notPostgres",
- }
-
- @patch(
- "streamlit.connections.snowpark_connection.os.path.exists",
- MagicMock(return_value=True),
- )
- def test_load_from_snowsql_config_file_default_section(self):
- config_data = """
-[connections]
-accountname = "not overwritten"
-schemaname = public
-"""
- with patch("builtins.open", new_callable=mock_open, read_data=config_data):
- assert _load_from_snowsql_config_file("my_snowpark_connection") == {
- "account": "not overwritten",
- "schema": "public",
- }
-
@patch(
"snowflake.snowpark.context.get_active_session",
MagicMock(return_value="some active session"),
)
@patch(
- "streamlit.connections.snowpark_connection._running_in_sis",
+ "streamlit.connections.snowpark_connection.running_in_sis",
MagicMock(return_value=True),
)
def test_uses_active_session_if_in_sis(self):
@@ -92,7 +44,7 @@ def test_uses_active_session_if_in_sis(self):
assert conn._instance == "some active session"
@patch(
- "streamlit.connections.snowpark_connection._load_from_snowsql_config_file",
+ "streamlit.connections.snowpark_connection.load_from_snowsql_config_file",
MagicMock(
return_value={"account": "some_val_1", "password": "i get overwritten"}
),
diff --git a/lib/tests/streamlit/connections/util_test.py b/lib/tests/streamlit/connections/util_test.py
index 0de59b1c04a1..6ea00597b4b2 100644
--- a/lib/tests/streamlit/connections/util_test.py
+++ b/lib/tests/streamlit/connections/util_test.py
@@ -13,8 +13,15 @@
# limitations under the License.
import unittest
+from unittest.mock import MagicMock, mock_open, patch
-from streamlit.connections.util import extract_from_dict
+import pytest
+
+from streamlit.connections.util import (
+ extract_from_dict,
+ load_from_snowsql_config_file,
+ running_in_sis,
+)
class ConnectionUtilTest(unittest.TestCase):
@@ -28,3 +35,63 @@ def test_extract_from_dict(self):
assert extracted == {"k1": "v1", "k2": "v2"}
assert d == {"k3": "v3", "k4": "v4"}
+
+ @pytest.mark.require_snowflake
+ @patch("snowflake.connector.connection", MagicMock())
+ def test_not_running_in_sis(self):
+ assert not running_in_sis()
+
+ @pytest.mark.require_snowflake
+ @patch(
+ "snowflake.connector.connection",
+ )
+ def test_running_in_sis(self, patched_connection):
+ delattr(patched_connection, "SnowflakeConnection")
+ assert running_in_sis()
+
+ def test_load_from_snowsql_config_file_no_file(self):
+ assert load_from_snowsql_config_file("my_snowpark_connection") == {}
+
+ @patch(
+ "streamlit.connections.util.os.path.exists",
+ MagicMock(return_value=True),
+ )
+ def test_load_from_snowsql_config_file_no_section(self):
+ with patch("builtins.open", new_callable=mock_open, read_data=""):
+ assert load_from_snowsql_config_file("my_snowpark_connection") == {}
+
+ @patch(
+ "streamlit.connections.util.os.path.exists",
+ MagicMock(return_value=True),
+ )
+ def test_load_from_snowsql_config_file_named_section(self):
+ config_data = """
+[connections.my_snowpark_connection]
+accountname = "hello"
+dbname = notPostgres
+
+[connections]
+accountname = "i get overwritten"
+schemaname = public
+"""
+ with patch("builtins.open", new_callable=mock_open, read_data=config_data):
+ assert load_from_snowsql_config_file("my_snowpark_connection") == {
+ "account": "hello",
+ "database": "notPostgres",
+ }
+
+ @patch(
+ "streamlit.connections.util.os.path.exists",
+ MagicMock(return_value=True),
+ )
+ def test_load_from_snowsql_config_file_default_section(self):
+ config_data = """
+[connections]
+accountname = "not overwritten"
+schemaname = public
+"""
+ with patch("builtins.open", new_callable=mock_open, read_data=config_data):
+ assert load_from_snowsql_config_file("my_snowpark_connection") == {
+ "account": "not overwritten",
+ "schema": "public",
+ }
|
Note: This work is being done for `feature/st.connection_GA` but is being merged straight
into `develop` to keep the final diff size down.
As we work to replace `SnowparkConnection` with a more general `SnowflakeConnection`,
some of the helper functions that live in `SnowparkConnection` are being reused, so we should
move them into the connections util file.
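For reference, a self-contained sketch of the key normalization that the moved `load_from_snowsql_config_file` helper performs (shown on an in-memory config rather than `~/.snowsql/config`):
```python
import configparser

config_data = """
[connections.my_snowpark_connection]
accountname = "hello"
dbname = notPostgres
"""

config = configparser.ConfigParser(inline_comment_prefixes="#")
config.read_string(config_data)

raw = config["connections.my_snowpark_connection"]
# snowsql-style keys drop the "name" suffix ("accountname" -> "account"),
# quoted values are unquoted, and "db" is renamed to "database".
params = {k.replace("name", ""): v.strip('"') for k, v in raw.items()}
if "db" in params:
    params["database"] = params.pop("db")

print(params)  # {'account': 'hello', 'database': 'notPostgres'}
```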
|
https://api.github.com/repos/streamlit/streamlit/pulls/7512
|
2023-10-06T00:39:36Z
|
2023-10-06T22:23:12Z
|
2023-10-06T22:23:12Z
|
2023-10-06T22:23:14Z
| 2,925
|
streamlit/streamlit
| 22,389
|
[MRG] TransformedTargetRegressor passes fit_params to regressor
|
diff --git a/doc/whats_new/v0.22.rst b/doc/whats_new/v0.22.rst
index ce3174218679f..2de229e72b656 100644
--- a/doc/whats_new/v0.22.rst
+++ b/doc/whats_new/v0.22.rst
@@ -75,6 +75,10 @@ Changelog
1.12.
:pr:`14510` by :user:`Guillaume Lemaitre <glemaitre>`.
+- |Fix| Fixed a bug in :class:`compose.TransformedTargetRegressor` which did not
+ pass `**fit_params` to the underlying regressor.
+ :pr:`14890` by :user:`Miguel Cabrera <mfcabrera>`.
+
:mod:`sklearn.datasets`
.......................
@@ -205,7 +209,7 @@ Changelog
-|FIX| Fixed a bug where :class:`kernel_approximation.Nystroem` raised a
`KeyError` when using `kernel="precomputed"`.
- :pr:`14706` by :user:`Venkatachalam N <venkyyuvy>`.
+ :pr:`14706` by :user:`Venkatachalam N <venkyyuvy>`.
:mod:`sklearn.linear_model`
...........................
@@ -324,19 +328,19 @@ Changelog
- |Enhancement| SVM now throws more specific error when fit on non-square data
and kernel = precomputed. :class:`svm.BaseLibSVM`
:pr:`14336` by :user:`Gregory Dexter <gdex1>`.
-
+
:mod:`sklearn.tree`
...................
- |Feature| Adds minimal cost complexity pruning, controlled by ``ccp_alpha``,
to :class:`tree.DecisionTreeClassifier`, :class:`tree.DecisionTreeRegressor`,
:class:`tree.ExtraTreeClassifier`, :class:`tree.ExtraTreeRegressor`,
- :class:`ensemble.RandomForestClassifier`,
+ :class:`ensemble.RandomForestClassifier`,
:class:`ensemble.RandomForestRegressor`,
- :class:`ensemble.ExtraTreesClassifier`,
+ :class:`ensemble.ExtraTreesClassifier`,
:class:`ensemble.ExtraTreesRegressor`,
- :class:`ensemble.RandomTreesEmbedding`,
- :class:`ensemble.GradientBoostingClassifier`,
+ :class:`ensemble.RandomTreesEmbedding`,
+ :class:`ensemble.GradientBoostingClassifier`,
and :class:`ensemble.GradientBoostingRegressor`.
:pr:`12887` by `Thomas Fan`_.
diff --git a/sklearn/compose/_target.py b/sklearn/compose/_target.py
index 35b7ed6af962a..3c15ba9ea7521 100644
--- a/sklearn/compose/_target.py
+++ b/sklearn/compose/_target.py
@@ -148,7 +148,7 @@ def _fit_transformer(self, y):
" you are sure you want to proceed regardless"
", set 'check_inverse=False'", UserWarning)
- def fit(self, X, y, sample_weight=None):
+ def fit(self, X, y, **fit_params):
"""Fit the model according to the given training data.
Parameters
@@ -160,9 +160,10 @@ def fit(self, X, y, sample_weight=None):
y : array-like, shape (n_samples,)
Target values.
- sample_weight : array-like, shape (n_samples,) optional
- Array of weights that are assigned to individual samples.
- If not provided, then each sample is given unit weight.
+ **fit_params : dict of string -> object
+ Parameters passed to the ``fit`` method of the underlying
+ regressor.
+
Returns
-------
@@ -197,10 +198,7 @@ def fit(self, X, y, sample_weight=None):
else:
self.regressor_ = clone(self.regressor)
- if sample_weight is None:
- self.regressor_.fit(X, y_trans)
- else:
- self.regressor_.fit(X, y_trans, sample_weight=sample_weight)
+ self.regressor_.fit(X, y_trans, **fit_params)
return self
diff --git a/sklearn/compose/tests/test_target.py b/sklearn/compose/tests/test_target.py
index fcbf92e2a44ea..37c9f43c51da9 100644
--- a/sklearn/compose/tests/test_target.py
+++ b/sklearn/compose/tests/test_target.py
@@ -14,6 +14,8 @@
from sklearn.preprocessing import FunctionTransformer
from sklearn.preprocessing import StandardScaler
+from sklearn.pipeline import Pipeline
+
from sklearn.linear_model import LinearRegression, Lasso
from sklearn import datasets
@@ -294,3 +296,39 @@ def test_transform_target_regressor_count_fit(check_inverse):
)
ttr.fit(X, y)
assert ttr.transformer_.fit_counter == 1
+
+
+class DummyRegressorWithExtraFitParams(DummyRegressor):
+ def fit(self, X, y, sample_weight=None, check_input=True):
+ # on the test below we force this to false, we make sure this is
+ # actually passed to the regressor
+ assert not check_input
+ return super().fit(X, y, sample_weight)
+
+
+def test_transform_target_regressor_pass_fit_parameters():
+ X, y = friedman
+ regr = TransformedTargetRegressor(
+ regressor=DummyRegressorWithExtraFitParams(),
+ transformer=DummyTransformer()
+ )
+
+ regr.fit(X, y, check_input=False)
+ assert regr.transformer_.fit_counter == 1
+
+
+def test_transform_target_regressor_route_pipeline():
+ X, y = friedman
+
+ regr = TransformedTargetRegressor(
+ regressor=DummyRegressorWithExtraFitParams(),
+ transformer=DummyTransformer()
+ )
+ estimators = [
+ ('normalize', StandardScaler()), ('est', regr)
+ ]
+
+ pip = Pipeline(estimators)
+ pip.fit(X, y, **{'est__check_input': False})
+
+ assert regr.transformer_.fit_counter == 1
|
#### Reference Issues/PRs
Fixes #13349
#### What does this implement/fix? Explain your changes.
Originally, TransformedTargetRegressor only passed `sample_weight` to the `fit` method of the underlying regressor. However, this regressor might have other fit parameters and could theoretically even be a pipeline. With this change you can pass arbitrary parameters to it.
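A short usage sketch of the new behavior (random data for illustration; `sample_weight` is now just one of many possible fit kwargs):
```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X, y = rng.rand(20, 3), rng.rand(20)

regr = TransformedTargetRegressor(
    regressor=LinearRegression(), func=np.log1p, inverse_func=np.expm1
)
# sample_weight still works, and any other fit kwarg the underlying
# regressor accepts is now forwarded to it as well:
regr.fit(X, y, sample_weight=np.ones(20))
```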
|
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/14890
|
2019-09-05T13:19:06Z
|
2019-09-06T10:26:05Z
|
2019-09-06T10:26:05Z
|
2019-09-16T11:15:35Z
| 1,393
|
scikit-learn/scikit-learn
| 46,634
|
Fix ValueError for LSTM(implementation=1, use_bias=False)
|
diff --git a/keras/layers/recurrent.py b/keras/layers/recurrent.py
index ec9fa871c6d..46eb08cd8c5 100644
--- a/keras/layers/recurrent.py
+++ b/keras/layers/recurrent.py
@@ -1803,10 +1803,15 @@ def call(self, inputs, states, training=None):
inputs_f = inputs
inputs_c = inputs
inputs_o = inputs
- x_i = K.dot(inputs_i, self.kernel_i) + self.bias_i
- x_f = K.dot(inputs_f, self.kernel_f) + self.bias_f
- x_c = K.dot(inputs_c, self.kernel_c) + self.bias_c
- x_o = K.dot(inputs_o, self.kernel_o) + self.bias_o
+ x_i = K.dot(inputs_i, self.kernel_i)
+ x_f = K.dot(inputs_f, self.kernel_f)
+ x_c = K.dot(inputs_c, self.kernel_c)
+ x_o = K.dot(inputs_o, self.kernel_o)
+ if self.use_bias:
+ x_i = K.bias_add(x_i, self.bias_i)
+ x_f = K.bias_add(x_f, self.bias_f)
+ x_c = K.bias_add(x_c, self.bias_c)
+ x_o = K.bias_add(x_o, self.bias_o)
if 0 < self.recurrent_dropout < 1.:
h_tm1_i = h_tm1 * rec_dp_mask[0]
diff --git a/tests/keras/layers/recurrent_test.py b/tests/keras/layers/recurrent_test.py
index 19d318a060a..20f0065f376 100644
--- a/tests/keras/layers/recurrent_test.py
+++ b/tests/keras/layers/recurrent_test.py
@@ -183,6 +183,12 @@ def test_implementation_mode(layer_class):
'dropout': 0.1,
'recurrent_dropout': 0.1},
input_shape=(num_samples, timesteps, embedding_dim))
+ # Without bias
+ layer_test(layer_class,
+ kwargs={'units': units,
+ 'implementation': mode,
+ 'use_bias': False},
+ input_shape=(num_samples, timesteps, embedding_dim))
@rnn_test
|
Fixes `ValueError` for `LSTM(implementation=1, use_bias=False)` where the addition operation fails because `self.bias` is set to `None`.
Also added a test in `test_implementation_mode()` that fails before this fix and passes after this fix.
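A minimal repro sketch (standalone Keras 2.x API; dummy data):
```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM

model = Sequential()
# Before this fix, implementation=1 computed x_i = K.dot(inputs_i, self.kernel_i)
# + self.bias_i, which fails when use_bias=False leaves self.bias_i as None.
model.add(LSTM(4, implementation=1, use_bias=False, input_shape=(3, 2)))
model.compile(optimizer="sgd", loss="mse")
model.fit(np.zeros((1, 3, 2)), np.zeros((1, 4)), verbose=0)
```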
|
https://api.github.com/repos/keras-team/keras/pulls/8299
|
2017-10-30T05:27:41Z
|
2017-10-31T17:44:03Z
|
2017-10-31T17:44:03Z
|
2017-11-01T01:42:02Z
| 514
|
keras-team/keras
| 47,871
|
perf(metrics-extraction): Cache should_use_on_demand decision
|
diff --git a/src/sentry/options/defaults.py b/src/sentry/options/defaults.py
index 9f76414e315eb7..de3877d3c492ad 100644
--- a/src/sentry/options/defaults.py
+++ b/src/sentry/options/defaults.py
@@ -2021,6 +2021,12 @@
default=False,
flags=FLAG_PRIORITIZE_DISK | FLAG_AUTOMATOR_MODIFIABLE,
)
+# Use to rollout using a cache for should_use_on_demand function, which resolves queries
+register(
+ "on_demand_metrics.cache_should_use_on_demand",
+ default=0.0,
+ flags=FLAG_AUTOMATOR_MODIFIABLE | FLAG_MODIFIABLE_RATE,
+)
# Relocation: whether or not the self-serve API for the feature is enabled. When set on a region
# silo, this flag controls whether or not that region's API will serve relocation requests to
diff --git a/src/sentry/snuba/metrics/extraction.py b/src/sentry/snuba/metrics/extraction.py
index ba129a9accf12a..2963b5c5705b65 100644
--- a/src/sentry/snuba/metrics/extraction.py
+++ b/src/sentry/snuba/metrics/extraction.py
@@ -36,6 +36,7 @@
from sentry.constants import DataCategory
from sentry.discover.arithmetic import is_equation
from sentry.exceptions import InvalidSearchQuery
+from sentry.features.rollout import in_random_rollout
from sentry.models.organization import Organization
from sentry.models.project import Project
from sentry.models.transaction_threshold import ProjectTransactionThreshold, TransactionMetric
@@ -45,6 +46,8 @@
from sentry.snuba.dataset import Dataset
from sentry.snuba.metrics.naming_layer.mri import ParsedMRI, parse_mri
from sentry.snuba.metrics.utils import MetricOperationType
+from sentry.utils.cache import cache
+from sentry.utils.hashlib import md5_text
from sentry.utils.snuba import is_measurement, is_span_op_breakdown, resolve_column
logger = logging.getLogger(__name__)
@@ -619,7 +622,7 @@ def should_use_on_demand_metrics_for_querying(organization: Organization, **kwar
return should_use_on_demand_metrics(**kwargs)
-def should_use_on_demand_metrics(
+def _should_use_on_demand_metrics(
dataset: str | Dataset | None,
aggregate: str,
query: str | None,
@@ -659,6 +662,39 @@ def should_use_on_demand_metrics(
return not supported_by.standard_metrics and supported_by.on_demand_metrics
+def should_use_on_demand_metrics(
+ dataset: str | Dataset | None,
+ aggregate: str,
+ query: str | None,
+ groupbys: Sequence[str] | None = None,
+ prefilling: bool = False,
+) -> bool:
+ if in_random_rollout("on_demand_metrics.cache_should_use_on_demand"):
+
+ dataset_str = dataset.value if isinstance(dataset, Enum) else str(dataset or "")
+ groupbys_str = ",".join(sorted(groupbys)) if groupbys else ""
+ cache_key = md5_text(
+ f"{dataset_str}-{aggregate}-{query or ''}-{groupbys_str}-prefilling={prefilling}"
+ ).hexdigest()
+ cached_result = cache.get(cache_key)
+ if cached_result:
+ return cached_result
+ else:
+ result = _should_use_on_demand_metrics(
+ dataset=dataset,
+ aggregate=aggregate,
+ query=query,
+ groupbys=groupbys,
+ prefilling=prefilling,
+ )
+ cache.set(cache_key, result, timeout=3600)
+ return result
+
+ return _should_use_on_demand_metrics(
+ dataset=dataset, aggregate=aggregate, query=query, groupbys=groupbys, prefilling=prefilling
+ )
+
+
def _extract_aggregate_components(aggregate: str) -> tuple[str, list[str]] | None:
try:
if is_equation(aggregate):
|
### Summary
Checking whether a query should use on-demand metrics requires us to parse the query with the event_search_grammar every time. This isn't that costly for a single API call (in the 10ms to 100ms range), but when doing thousands of checks it adds up to a significant amount of time. This change caches the decision of using on-demand, keyed solely on the query properties, for up to an hour.
Behind an option so we can check the impact of this and roll it off quickly if required.
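A self-contained sketch of the caching pattern (the real code uses Sentry's Django cache with a one-hour timeout and gates the path behind the rollout option; here a plain dict and a stubbed check stand in):
```python
import hashlib
from typing import Optional, Sequence

_DECISION_CACHE: dict = {}

def _expensive_should_use_on_demand(dataset, aggregate, query, groupbys, prefilling):
    return bool(query)  # stands in for the real query-parsing logic

def should_use_on_demand(
    dataset: str,
    aggregate: str,
    query: Optional[str],
    groupbys: Optional[Sequence[str]] = None,
    prefilling: bool = False,
) -> bool:
    # Key the decision solely on the query properties, as described above.
    groupbys_str = ",".join(sorted(groupbys)) if groupbys else ""
    key = hashlib.md5(
        f"{dataset}-{aggregate}-{query or ''}-{groupbys_str}-prefilling={prefilling}".encode()
    ).hexdigest()
    if key not in _DECISION_CACHE:
        _DECISION_CACHE[key] = _expensive_should_use_on_demand(
            dataset, aggregate, query, groupbys, prefilling
        )
    return _DECISION_CACHE[key]
```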
|
https://api.github.com/repos/getsentry/sentry/pulls/67709
|
2024-03-26T17:22:08Z
|
2024-03-26T18:11:14Z
|
2024-03-26T18:11:14Z
|
2024-04-11T00:24:46Z
| 893
|
getsentry/sentry
| 44,737
|
Update request-response.rst
|
diff --git a/docs/topics/request-response.rst b/docs/topics/request-response.rst
index 675574e287b..9dc54f07bfd 100644
--- a/docs/topics/request-response.rst
+++ b/docs/topics/request-response.rst
@@ -43,7 +43,7 @@ Request objects
:param body: the request body. If a ``unicode`` is passed, then it's encoded to
``str`` using the `encoding` passed (which defaults to ``utf-8``). If
``body`` is not given,, an empty string is stored. Regardless of the
- type of this argument, the final value stored will be a ``str``` (never
+ type of this argument, the final value stored will be a ``str`` (never
``unicode`` or ``None``).
:type body: str or unicode
|
Fix small doc typo (too many backticks)
|
https://api.github.com/repos/scrapy/scrapy/pulls/391
|
2013-09-18T14:46:42Z
|
2013-09-18T14:59:46Z
|
2013-09-18T14:59:46Z
|
2014-06-12T16:08:02Z
| 189
|
scrapy/scrapy
| 34,392
|
ref(indexer): Deprecate DbKey
|
diff --git a/src/sentry/sentry_metrics/configuration.py b/src/sentry/sentry_metrics/configuration.py
index e34f0d0a4e71e..32cfc4585b447 100644
--- a/src/sentry/sentry_metrics/configuration.py
+++ b/src/sentry/sentry_metrics/configuration.py
@@ -10,14 +10,8 @@ class UseCaseKey(Enum):
PERFORMANCE = "performance"
-class DbKey(Enum):
- STRING_INDEXER = "StringIndexer"
- PERF_STRING_INDEXER = "PerfStringIndexer"
-
-
@dataclass(frozen=True)
class MetricsIngestConfiguration:
- db_model: DbKey
input_topic: str
output_topic: str
use_case_id: UseCaseKey
@@ -36,7 +30,6 @@ def get_ingest_config(use_case_key: UseCaseKey) -> MetricsIngestConfiguration:
if len(_METRICS_INGEST_CONFIG_BY_USE_CASE) == 0:
_register_ingest_config(
MetricsIngestConfiguration(
- db_model=DbKey.STRING_INDEXER,
input_topic=settings.KAFKA_INGEST_METRICS,
output_topic=settings.KAFKA_SNUBA_METRICS,
use_case_id=UseCaseKey.RELEASE_HEALTH,
@@ -46,7 +39,6 @@ def get_ingest_config(use_case_key: UseCaseKey) -> MetricsIngestConfiguration:
)
_register_ingest_config(
MetricsIngestConfiguration(
- db_model=DbKey.PERF_STRING_INDEXER,
input_topic=settings.KAFKA_INGEST_PERFORMANCE_METRICS,
output_topic=settings.KAFKA_SNUBA_GENERIC_METRICS,
use_case_id=UseCaseKey.PERFORMANCE,
diff --git a/src/sentry/sentry_metrics/consumers/last_seen_updater.py b/src/sentry/sentry_metrics/consumers/last_seen_updater.py
index ea9504b0feaea..8e85825276207 100644
--- a/src/sentry/sentry_metrics/consumers/last_seen_updater.py
+++ b/src/sentry/sentry_metrics/consumers/last_seen_updater.py
@@ -125,7 +125,7 @@ def _last_seen_updater_processing_factory(
process_message=retrieve_db_read_keys,
prefilter=LastSeenUpdaterMessageFilter(metrics=get_metrics()),
collector=lambda: LastSeenUpdaterCollector(
- metrics=get_metrics(), table=TABLE_MAPPING[ingest_config.db_model]
+ metrics=get_metrics(), table=TABLE_MAPPING[ingest_config.use_case_id]
),
)
diff --git a/src/sentry/sentry_metrics/indexer/db.py b/src/sentry/sentry_metrics/indexer/db.py
index cf85ab83faef6..ff4ea6e3999bc 100644
--- a/src/sentry/sentry_metrics/indexer/db.py
+++ b/src/sentry/sentry_metrics/indexer/db.py
@@ -1,11 +1,11 @@
from typing import Mapping, Type
-from sentry.sentry_metrics.configuration import DbKey
+from sentry.sentry_metrics.configuration import UseCaseKey
from sentry.sentry_metrics.indexer.models import BaseIndexer, PerfStringIndexer, StringIndexer
IndexerTable = Type[BaseIndexer]
-TABLE_MAPPING: Mapping[DbKey, IndexerTable] = {
- DbKey.STRING_INDEXER: StringIndexer,
- DbKey.PERF_STRING_INDEXER: PerfStringIndexer,
+TABLE_MAPPING: Mapping[UseCaseKey, IndexerTable] = {
+ UseCaseKey.RELEASE_HEALTH: StringIndexer,
+ UseCaseKey.PERFORMANCE: PerfStringIndexer,
}
diff --git a/src/sentry/sentry_metrics/indexer/postgres_v2.py b/src/sentry/sentry_metrics/indexer/postgres_v2.py
index d141dfeb7092e..f3f1ec1742bc6 100644
--- a/src/sentry/sentry_metrics/indexer/postgres_v2.py
+++ b/src/sentry/sentry_metrics/indexer/postgres_v2.py
@@ -5,7 +5,7 @@
from django.conf import settings
from django.db.models import Q
-from sentry.sentry_metrics.configuration import UseCaseKey, get_ingest_config
+from sentry.sentry_metrics.configuration import UseCaseKey
from sentry.sentry_metrics.indexer.base import KeyCollection, KeyResult, KeyResults, StringIndexer
from sentry.sentry_metrics.indexer.cache import CachingIndexer, StringIndexerCache
from sentry.sentry_metrics.indexer.db import TABLE_MAPPING, IndexerTable
@@ -148,7 +148,7 @@ def reverse_resolve(self, use_case_id: UseCaseKey, id: int) -> Optional[str]:
return string
def _table(self, use_case_id: UseCaseKey) -> IndexerTable:
- return TABLE_MAPPING[get_ingest_config(use_case_id).db_model]
+ return TABLE_MAPPING[use_case_id]
class PostgresIndexer(StaticStringIndexer):
|
There's no reason for DbKey to exist on the main indexer configuration; it's a Postgres-specific implementation detail. The mapping UseCaseKey -> DbKey -> table name is now just UseCaseKey -> table name, and it lives only in the Postgres backend.
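A minimal sketch of the simplified lookup (class bodies and the RELEASE_HEALTH enum value are illustrative; the real indexer models live in `sentry.sentry_metrics.indexer.models`):
```python
from enum import Enum

class UseCaseKey(Enum):
    RELEASE_HEALTH = "release-health"  # illustrative value
    PERFORMANCE = "performance"

class StringIndexer: ...       # stand-in for the Django model
class PerfStringIndexer: ...   # stand-in for the Django model

TABLE_MAPPING = {
    UseCaseKey.RELEASE_HEALTH: StringIndexer,
    UseCaseKey.PERFORMANCE: PerfStringIndexer,
}

# The Postgres backend now resolves its table directly, with no DbKey hop:
assert TABLE_MAPPING[UseCaseKey.PERFORMANCE] is PerfStringIndexer
```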
|
https://api.github.com/repos/getsentry/sentry/pulls/37790
|
2022-08-12T21:27:37Z
|
2022-08-12T22:03:08Z
|
2022-08-12T22:03:08Z
|
2022-08-28T00:02:20Z
| 1,075
|
getsentry/sentry
| 44,372
|
Update both main VA and remote VA to use the provided DNS server
|
diff --git a/certbot-ci/certbot_integration_tests/utils/acme_server.py b/certbot-ci/certbot_integration_tests/utils/acme_server.py
index 5559b44a652..aa501a279e4 100755
--- a/certbot-ci/certbot_integration_tests/utils/acme_server.py
+++ b/certbot-ci/certbot_integration_tests/utils/acme_server.py
@@ -149,10 +149,10 @@ def _prepare_pebble_server(self):
[pebble_path, '-config', pebble_config_path, '-dnsserver', dns_server, '-strict'],
env=environ)
- # pebble_ocsp_server is imported here and not at the top of module in order to avoid a useless
- # ImportError, in the case where cryptography dependency is too old to support ocsp, but
- # Boulder is used instead of Pebble, so pebble_ocsp_server is not used. This is the typical
- # situation of integration-certbot-oldest tox testenv.
+ # pebble_ocsp_server is imported here and not at the top of module in order to avoid a
+ # useless ImportError, in the case where cryptography dependency is too old to support ocsp,
+ # but Boulder is used instead of Pebble, so pebble_ocsp_server is not used. This is the
+ # typical situation of integration-certbot-oldest tox testenv.
from certbot_integration_tests.utils import pebble_ocsp_server
self._launch_process([sys.executable, pebble_ocsp_server.__file__])
@@ -178,11 +178,12 @@ def _prepare_boulder_server(self):
if self._dns_server:
# Change Boulder config to use the provided DNS server
- with open(join(instance_path, 'test/config/va.json'), 'r') as file_h:
- config = json.loads(file_h.read())
- config['va']['dnsResolvers'] = [self._dns_server]
- with open(join(instance_path, 'test/config/va.json'), 'w') as file_h:
- file_h.write(json.dumps(config, indent=2, separators=(',', ': ')))
+ for suffix in ["", "-remote-a", "-remote-b"]:
+ with open(join(instance_path, 'test/config/va{}.json'.format(suffix)), 'r') as f:
+ config = json.loads(f.read())
+ config['va']['dnsResolvers'] = [self._dns_server]
+ with open(join(instance_path, 'test/config/va{}.json'.format(suffix)), 'w') as f:
+ f.write(json.dumps(config, indent=2, separators=(',', ': ')))
try:
# Launch the Boulder server
|
Fixes #8453
|
https://api.github.com/repos/certbot/certbot/pulls/8467
|
2020-11-19T20:17:23Z
|
2020-12-04T01:00:33Z
|
2020-12-04T01:00:33Z
|
2020-12-04T01:00:33Z
| 592
|
certbot/certbot
| 1,596
|
Adjust ReAct system prompt examples to match default `FnSchema`
|
diff --git a/llama_index/agent/react/prompts.py b/llama_index/agent/react/prompts.py
index fc34fdeab8020..717317fddf132 100644
--- a/llama_index/agent/react/prompts.py
+++ b/llama_index/agent/react/prompts.py
@@ -24,12 +24,12 @@
```
Thought: I need to use a tool to help me answer the question.
Action: tool name (one of {tool_names}) if using a tool.
-Action Input: the input to the tool, in a JSON format representing the kwargs (e.g. {{"text": "hello world", "num_beams": 5}})
+Action Input: the input to the tool, in a JSON format representing the kwargs (e.g. {{"input": "hello world", "num_beams": 5}})
```
Please ALWAYS start with a Thought.
-Please use a valid JSON format for the Action Input. Do NOT do this {{'text': 'hello world', 'num_beams': 5}}.
+Please use a valid JSON format for the Action Input. Do NOT do this {{'input': 'hello world', 'num_beams': 5}}.
If this format is used, the user will respond in the following format:
|
# Description
- REACT_CHAT_SYSTEM_HEADER is the default system prompt for `ReActAgent` and it contains some in-context examples for how LLM should respond to specify a desire to take an action
- The keyword used in those examples is "text", which doesn't match the default `FnSchema` keyword, namely: "input"
Interestingly:
- GPT Models follow the `FnSchema` of the tool that is also included in the prompt, whereas
- Open source models (Llama2 and Zephyr) follow the in-context examples (i.e., use "text")
This is a quick fix to align the default prompt with the default function schema. In a future PR, though, we should add the ability to customize the system prompt and to handle prompt selection according to the chosen LLM.
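For reference, a sketch of what the default schema boils down to (mirrored here with pydantic rather than imported, since the exact import path varies across llama_index versions):
```python
from pydantic import BaseModel

class DefaultToolFnSchema(BaseModel):
    """Mirrors llama_index's default FnSchema: a single "input" kwarg."""
    input: str

# The in-context example in the system prompt should therefore use "input":
print(DefaultToolFnSchema(input="hello world").dict())  # {'input': 'hello world'}
```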
Fixes # (issue)
## Type of Change
Please delete options that are not relevant.
- [x] Bug fix (non-breaking change which fixes an issue)
# How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration
- [x] Tested various notebooks
- [x] I stared at the code and made sure it makes sense
|
https://api.github.com/repos/run-llama/llama_index/pulls/9103
|
2023-11-22T23:00:33Z
|
2023-11-23T16:46:44Z
|
2023-11-23T16:46:44Z
|
2023-11-23T16:46:45Z
| 280
|
run-llama/llama_index
| 6,615
|
add note to readme about project being completed
|
diff --git a/README.md b/README.md
index b0bb4562d6..054fb65ef2 100644
--- a/README.md
+++ b/README.md
@@ -3,6 +3,10 @@
<img width="auto" height="50px" src="https://github.com/LAION-AI/Open-Assistant/blob/main/assets/logo_crop.png"/>
</h1>
+<blockquote>
+<p>:memo: <strong>NOTE</strong>: OpenAssistant is completed, and the project is now finished. Thank you to everyone who contributed! Check out our <a href="https://projects.laion.ai/Open-Assistant/blog/2023/10/25/open-assistant-is-completed">blog post</a> for more information.</p>
+</blockquote>
+
<div align="center">
<a href="https://github.com/LAION-AI/Open-Assistant/stargazers"></a>
|
@yk @andreaskoepf small one to just add note about project being completed to the top of readme
|
https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/3724
|
2023-11-06T11:23:23Z
|
2023-11-07T22:40:44Z
|
2023-11-07T22:40:44Z
|
2023-11-08T02:25:23Z
| 230
|
LAION-AI/Open-Assistant
| 37,348
|
[MRG+1] Enable genspider command outside project folder
|
diff --git a/docs/topics/commands.rst b/docs/topics/commands.rst
index 9a40a2c2934..d7999900b82 100644
--- a/docs/topics/commands.rst
+++ b/docs/topics/commands.rst
@@ -159,6 +159,7 @@ settings).
Global commands:
* :command:`startproject`
+* :command:`genspider`
* :command:`settings`
* :command:`runspider`
* :command:`shell`
@@ -173,7 +174,6 @@ Project-only commands:
* :command:`list`
* :command:`edit`
* :command:`parse`
-* :command:`genspider`
* :command:`bench`
.. command:: startproject
@@ -197,14 +197,9 @@ genspider
---------
* Syntax: ``scrapy genspider [-t template] <name> <domain>``
-* Requires project: *yes*
-
-Create a new spider in the current project.
+* Requires project: *no*
-This is just a convenience shortcut command for creating spiders based on
-pre-defined templates, but certainly not the only way to create spiders. You
-can just create the spider source code files yourself, instead of using this
-command.
+Create a new spider in the current folder or in the current project's ``spiders`` folder, if called from inside a project. The ``<name>`` parameter is set as the spider's ``name``, while ``<domain>`` is used to generate the ``allowed_domains`` and ``start_urls`` spider's attributes.
Usage example::
@@ -215,22 +210,16 @@ Usage example::
csvfeed
xmlfeed
- $ scrapy genspider -d basic
- import scrapy
+ $ scrapy genspider example example.com
+ Created spider 'example' using template 'basic'
- class $classname(scrapy.Spider):
- name = "$name"
- allowed_domains = ["$domain"]
- start_urls = (
- 'http://www.$domain/',
- )
+ $ scrapy genspider -t crawl scrapyorg scrapy.org
+ Created spider 'scrapyorg' using template 'crawl'
- def parse(self, response):
- pass
-
- $ scrapy genspider -t basic example example.com
- Created spider 'example' using template 'basic' in module:
- mybot.spiders.example
+This is just a convenience shortcut command for creating spiders based on
+pre-defined templates, but certainly not the only way to create spiders. You
+can just create the spider source code files yourself, instead of using this
+command.
.. command:: crawl
diff --git a/scrapy/commands/genspider.py b/scrapy/commands/genspider.py
index 58bdb915660..d5498bb5cad 100644
--- a/scrapy/commands/genspider.py
+++ b/scrapy/commands/genspider.py
@@ -25,7 +25,7 @@ def sanitize_module_name(module_name):
class Command(ScrapyCommand):
- requires_project = True
+ requires_project = False
default_settings = {'LOG_ENABLED': False}
def syntax(self):
@@ -94,14 +94,19 @@ def _genspider(self, module, name, domain, template_name, template_file):
'classname': '%sSpider' % ''.join(s.capitalize() \
for s in module.split('_'))
}
- spiders_module = import_module(self.settings['NEWSPIDER_MODULE'])
- spiders_dir = abspath(dirname(spiders_module.__file__))
+ if self.settings.get('NEWSPIDER_MODULE'):
+ spiders_module = import_module(self.settings['NEWSPIDER_MODULE'])
+ spiders_dir = abspath(dirname(spiders_module.__file__))
+ else:
+ spiders_module = None
+ spiders_dir = "."
spider_file = "%s.py" % join(spiders_dir, module)
shutil.copyfile(template_file, spider_file)
render_templatefile(spider_file, **tvars)
- print("Created spider %r using template %r in module:" % (name, \
- template_name))
- print(" %s.%s" % (spiders_module.__name__, module))
+ print("Created spider %r using template %r " % (name, \
+ template_name), end=('' if spiders_module else '\n'))
+ if spiders_module:
+ print("in module:\n %s.%s" % (spiders_module.__name__, module))
def _find_template(self, template):
template_file = join(self.templates_dir, '%s.tmpl' % template)
diff --git a/scrapy/templates/spiders/crawl.tmpl b/scrapy/templates/spiders/crawl.tmpl
index a179d16ff4c..154237d9c2b 100644
--- a/scrapy/templates/spiders/crawl.tmpl
+++ b/scrapy/templates/spiders/crawl.tmpl
@@ -3,8 +3,6 @@ import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
-from $project_name.items import ${ProjectName}Item
-
class $classname(CrawlSpider):
name = '$name'
@@ -16,7 +14,7 @@ class $classname(CrawlSpider):
)
def parse_item(self, response):
- i = ${ProjectName}Item()
+ i = {}
#i['domain_id'] = response.xpath('//input[@id="sid"]/@value').extract()
#i['name'] = response.xpath('//div[@id="name"]').extract()
#i['description'] = response.xpath('//div[@id="description"]').extract()
diff --git a/scrapy/templates/spiders/csvfeed.tmpl b/scrapy/templates/spiders/csvfeed.tmpl
index 69c6065385c..0544e0ae7d8 100644
--- a/scrapy/templates/spiders/csvfeed.tmpl
+++ b/scrapy/templates/spiders/csvfeed.tmpl
@@ -1,8 +1,6 @@
# -*- coding: utf-8 -*-
from scrapy.spiders import CSVFeedSpider
-from $project_name.items import ${ProjectName}Item
-
class $classname(CSVFeedSpider):
name = '$name'
@@ -16,7 +14,7 @@ class $classname(CSVFeedSpider):
# return response
def parse_row(self, response, row):
- i = ${ProjectName}Item()
+ i = {}
#i['url'] = row['url']
#i['name'] = row['name']
#i['description'] = row['description']
diff --git a/scrapy/templates/spiders/xmlfeed.tmpl b/scrapy/templates/spiders/xmlfeed.tmpl
index 9c0910d237b..d8ff61f6e00 100644
--- a/scrapy/templates/spiders/xmlfeed.tmpl
+++ b/scrapy/templates/spiders/xmlfeed.tmpl
@@ -1,8 +1,6 @@
# -*- coding: utf-8 -*-
from scrapy.spiders import XMLFeedSpider
-from $project_name.items import ${ProjectName}Item
-
class $classname(XMLFeedSpider):
name = '$name'
@@ -12,7 +10,7 @@ class $classname(XMLFeedSpider):
itertag = 'item' # change it accordingly
def parse_node(self, response, selector):
- i = ${ProjectName}Item()
+ i = {}
#i['url'] = selector.select('url').extract()
#i['name'] = selector.select('name').extract()
#i['description'] = selector.select('description').extract()
diff --git a/tests/test_commands.py b/tests/test_commands.py
index 2e47160d773..cf415a3888f 100644
--- a/tests/test_commands.py
+++ b/tests/test_commands.py
@@ -146,6 +146,13 @@ def test_same_name_as_project(self):
assert not exists(join(self.proj_mod_path, 'spiders', '%s.py' % self.project_name))
+class GenspiderStandaloneCommandTest(ProjectTest):
+
+ def test_generate_standalone_spider(self):
+ self.call('genspider', 'example', 'example.com')
+ assert exists(join(self.temp_path, 'example.py'))
+
+
class MiscCommandsTest(CommandTest):
def test_list(self):
|
This PR enables the `genspider` CLI command even when the working dir is not a scrapy project.
The rationale is that some users (including me) are used to creating standalone spiders and running them with the `runspider` command, because it's a quick and convenient way to fire up simple spiders. Having `genspider` available would make it even quicker.
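For illustration, a standalone spider generated in the current folder by `scrapy genspider example example.com` looks roughly like this (based on the `basic` template) and can be run directly with `scrapy runspider example.py`:
```python
# -*- coding: utf-8 -*-
import scrapy


class ExampleSpider(scrapy.Spider):
    name = 'example'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com/']

    def parse(self, response):
        pass
```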
|
https://api.github.com/repos/scrapy/scrapy/pulls/2052
|
2016-06-12T22:07:37Z
|
2016-07-06T21:01:30Z
|
2016-07-06T21:01:30Z
|
2016-07-06T21:01:30Z
| 1,842
|
scrapy/scrapy
| 35,042
|
Docs: Fix Rockset links
|
diff --git a/docs/extras/integrations/providers/rockset.mdx b/docs/extras/integrations/providers/rockset.mdx
index 477306f46df508..4dd5431dc1c431 100644
--- a/docs/extras/integrations/providers/rockset.mdx
+++ b/docs/extras/integrations/providers/rockset.mdx
@@ -12,7 +12,7 @@ pip install rockset
## Vector Store
-See a [usage example](/docs/modules/data_connection/vectorstores/integrations/rockset.html).
+See a [usage example](/docs/integrations/vectorstores/rockset).
```python
from langchain.vectorstores import RocksetDB
@@ -20,7 +20,7 @@ from langchain.vectorstores import RocksetDB
## Document Loader
-See a [usage example](docs/modules/data_connection/document_loaders/integrations/rockset).
+See a [usage example](/docs/integrations/document_loaders/rockset).
```python
from langchain.document_loaders import RocksetLoader
```
\ No newline at end of file
|
Fix broken Rockset links.
Right now links at https://python.langchain.com/docs/integrations/providers/rockset are broken.
@rlancemartin, @eyurtsev
|
https://api.github.com/repos/langchain-ai/langchain/pulls/8214
|
2023-07-25T03:56:10Z
|
2023-07-26T17:38:38Z
|
2023-07-26T17:38:38Z
|
2023-07-26T17:38:38Z
| 242
|
langchain-ai/langchain
| 42,991
|
GM: Chevy Equinox 2019-22
|
diff --git a/selfdrive/car/gm/interface.py b/selfdrive/car/gm/interface.py
index be288904840676..49d56cf096ec9d 100755
--- a/selfdrive/car/gm/interface.py
+++ b/selfdrive/car/gm/interface.py
@@ -71,7 +71,7 @@ def get_params(candidate, fingerprint=gen_empty_fingerprint(), car_fw=None, expe
# These cars have been put into dashcam only due to both a lack of users and test coverage.
# These cars likely still work fine. Once a user confirms each car works and a test route is
# added to selfdrive/car/tests/routes.py, we can remove it from this list.
- ret.dashcamOnly = candidate in {CAR.CADILLAC_ATS, CAR.HOLDEN_ASTRA, CAR.MALIBU, CAR.BUICK_REGAL}
+ ret.dashcamOnly = candidate in {CAR.CADILLAC_ATS, CAR.HOLDEN_ASTRA, CAR.MALIBU, CAR.BUICK_REGAL, CAR.EQUINOX}
# Start with a baseline tuning for all GM vehicles. Override tuning as needed in each model section below.
ret.minSteerSpeed = 10 * CV.KPH_TO_MS
@@ -170,6 +170,14 @@ def get_params(candidate, fingerprint=gen_empty_fingerprint(), car_fw=None, expe
tire_stiffness_factor = 1.0
CarInterfaceBase.configure_torque_tune(candidate, ret.lateralTuning)
+ elif candidate == CAR.EQUINOX:
+ ret.minEnableSpeed = -1
+ ret.mass = 3500. * CV.LB_TO_KG + STD_CARGO_KG
+ ret.wheelbase = 2.72
+ ret.steerRatio = 14.4
+ ret.centerToFront = ret.wheelbase * 0.4
+ CarInterfaceBase.configure_torque_tune(candidate, ret.lateralTuning)
+
# TODO: get actual value, for now starting with reasonable value for
# civic and scaling by mass and wheelbase
ret.rotationalInertia = scale_rot_inertia(ret.mass, ret.wheelbase)
diff --git a/selfdrive/car/gm/values.py b/selfdrive/car/gm/values.py
index a84cbdc91a694c..25e624da7bc5b2 100644
--- a/selfdrive/car/gm/values.py
+++ b/selfdrive/car/gm/values.py
@@ -60,6 +60,7 @@ class CAR:
ESCALADE_ESV = "CADILLAC ESCALADE ESV 2016"
BOLT_EUV = "CHEVROLET BOLT EUV 2022"
SILVERADO = "CHEVROLET SILVERADO 1500 2020"
+ EQUINOX = "CHEVROLET EQUINOX 2019"
class Footnote(Enum):
@@ -89,6 +90,7 @@ class GMCarInfo(CarInfo):
GMCarInfo("Chevrolet Silverado 1500 2020-21", "Safety Package II", footnotes=[], harness=Harness.gm),
GMCarInfo("GMC Sierra 1500 2020-21", "Driver Alert Package II", footnotes=[], harness=Harness.gm),
],
+ CAR.EQUINOX: GMCarInfo("Chevrolet Equinox 2019-22", "Adaptive Cruise Control (ACC)", footnotes=[], harness=Harness.gm),
}
@@ -166,6 +168,10 @@ class CanBus:
{
190: 6, 193: 8, 197: 8, 201: 8, 208: 8, 209: 7, 211: 2, 241: 6, 249: 8, 257: 8, 288: 5, 289: 8, 298: 8, 304: 3, 309: 8, 311: 8, 313: 8, 320: 4, 322: 7, 328: 1, 352: 5, 381: 8, 384: 4, 386: 8, 388: 8, 413: 8, 451: 8, 452: 8, 453: 6, 455: 7, 460: 5, 463: 3, 479: 3, 481: 7, 485: 8, 489: 8, 497: 8, 500: 6, 501: 8, 528: 5, 532: 6, 534: 2, 560: 8, 562: 8, 563: 5, 565: 5, 608: 8, 609: 6, 610: 6, 611: 6, 612: 8, 613: 8, 707: 8, 715: 8, 717: 5, 761: 7, 789: 5, 800: 6, 801: 8, 810: 8, 840: 5, 842: 5, 844: 8, 848: 4, 869: 4, 880: 6, 977: 8, 1001: 8, 1011: 6, 1017: 8, 1020: 8, 1033: 7, 1034: 7, 1217: 8, 1221: 5, 1233: 8, 1249: 8, 1259: 8, 1261: 7, 1263: 4, 1265: 8, 1267: 1, 1271: 8, 1280: 4, 1296: 4, 1300: 8, 1930: 7
}],
+ CAR.EQUINOX: [
+ {
+ 190: 6, 193: 8, 197: 8, 201: 8, 209: 7, 211: 2, 241: 6, 249: 8, 257: 8, 288: 5, 289: 8, 298: 8, 304: 1, 309: 8, 311: 8, 313: 8, 320: 3, 328: 1, 352: 5, 381: 8, 384: 4, 386: 8, 388: 8, 413: 8, 451: 8, 452: 8, 453: 6, 455: 7, 463: 3, 479: 3, 481: 7, 485: 8, 489: 8, 497: 8, 500: 6, 501: 8, 510: 8, 528: 5, 532: 6, 560: 8, 562: 8, 563: 5, 565: 5, 608: 8, 609: 6, 610: 6, 611: 6, 612: 8, 613: 8, 707: 8, 715: 8, 717: 5, 753: 5, 761: 7, 789: 5, 800: 6, 810: 8, 840: 5, 842: 5, 844: 8, 869: 4, 880: 6, 977: 8, 1001: 8, 1011: 6, 1017: 8, 1020: 8, 1033: 7, 1034: 7, 1217: 8, 1221: 5, 1233: 8, 1249: 8, 1259: 8, 1261: 7, 1263: 4, 1265: 8, 1267: 1, 1271: 8, 1280: 4, 1296: 4, 1300: 8, 1930: 7
+ }],
}
DBC: Dict[str, Dict[str, str]] = defaultdict(lambda: dbc_dict('gm_global_a_powertrain_generated', 'gm_global_a_object', chassis_dbc='gm_global_a_chassis'))
@@ -173,6 +179,6 @@ class CanBus:
EV_CAR = {CAR.VOLT, CAR.BOLT_EUV}
# We're integrated at the camera with VOACC on these cars (instead of ASCM w/ OBD-II harness)
-CAMERA_ACC_CAR = {CAR.BOLT_EUV, CAR.SILVERADO}
+CAMERA_ACC_CAR = {CAR.BOLT_EUV, CAR.SILVERADO, CAR.EQUINOX}
STEER_THRESHOLD = 1.0
diff --git a/selfdrive/car/tests/routes.py b/selfdrive/car/tests/routes.py
index 1f5d3edaf06d33..9f28bf85433c7c 100644
--- a/selfdrive/car/tests/routes.py
+++ b/selfdrive/car/tests/routes.py
@@ -21,6 +21,7 @@
GM.CADILLAC_ATS,
GM.HOLDEN_ASTRA,
GM.MALIBU,
+ GM.EQUINOX,
HYUNDAI.ELANTRA_GT_I30,
HYUNDAI.GENESIS_G90,
HYUNDAI.KIA_OPTIMA_H,
diff --git a/selfdrive/car/torque_data/override.yaml b/selfdrive/car/torque_data/override.yaml
index 20fb5f7a6489fa..2e0f601e842e91 100644
--- a/selfdrive/car/torque_data/override.yaml
+++ b/selfdrive/car/torque_data/override.yaml
@@ -27,6 +27,7 @@ RAM HD 5TH GEN: [1.4, 1.4, 0.0]
SUBARU OUTBACK 6TH GEN: [2.3, 2.3, 0.11]
CHEVROLET BOLT EUV 2022: [2.0, 2.0, 0.05]
CHEVROLET SILVERADO 1500 2020: [1.9, 1.9, 0.112]
+CHEVROLET EQUINOX 2019: [2.0, 2.0, 0.05]
VOLKSWAGEN PASSAT NMS: [2.5, 2.5, 0.1]
HYUNDAI TUCSON HYBRID 4TH GEN: [2.5, 2.5, 0.0]
|
Note: Only applies to vehicles with camera-based ACC. The RPO configuration is called "Adaptive Cruise Control - Camera", "KSG without UGN"
**Checklist**
- [x] added entry to CarInfo in selfdrive/car/*/values.py and ran `selfdrive/car/docs.py` to generate new docs
- [ ] test route added to [routes.py](https://github.com/commaai/openpilot/blob/master/selfdrive/car/tests/routes.py)
- [ ] route with openpilot:
- [ ] route with stock system:
|
https://api.github.com/repos/commaai/openpilot/pulls/25431
|
2022-08-13T11:46:48Z
|
2022-09-16T00:55:02Z
|
2022-09-16T00:55:02Z
|
2022-09-16T00:55:45Z
| 2,424
|
commaai/openpilot
| 9,622
|
Test added for _AppCtxGlobals __repr__ method
|
diff --git a/tests/test_appctx.py b/tests/test_appctx.py
index 678bf510e9..251764cf7c 100644
--- a/tests/test_appctx.py
+++ b/tests/test_appctx.py
@@ -159,6 +159,8 @@ def test_app_ctx_globals_methods(app, app_ctx):
assert flask.g.pop('bar', 'more cake') == 'more cake'
# __iter__
assert list(flask.g) == ['foo']
+ #__repr__
+ assert repr(flask.g) == "<flask.g of 'flask_test'>"
def test_custom_app_ctx_globals_class(app):
|
I followed the contributing instructions and added a test for the `__repr__` method, which didn't have test coverage.
|
https://api.github.com/repos/pallets/flask/pulls/2807
|
2018-05-29T06:02:06Z
|
2018-05-29T16:42:44Z
|
2018-05-29T16:42:44Z
|
2020-11-14T02:42:38Z
| 139
|
pallets/flask
| 19,929
|
Update readme
|
diff --git a/PPOCRLabel/README_ch.md b/PPOCRLabel/README_ch.md
index 397df67660..b41ae200e0 100644
--- a/PPOCRLabel/README_ch.md
+++ b/PPOCRLabel/README_ch.md
@@ -145,9 +145,6 @@ PPOCRLabel supports three saving methods:
```
pip install opencv-contrib-python-headless
```
-### Become a member of the Special Interest Group
-
-PPOCRSIG (Paddle Paddle OCR Special Interest Group) is dedicated to applying OCR to every industry in the spirit of open source; we hope to bring together contributors of all backgrounds. The group
### References
|
Update readme
|
https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/1701
|
2021-01-09T14:35:35Z
|
2021-01-09T14:36:50Z
|
2021-01-09T14:36:50Z
|
2021-01-09T14:36:50Z
| 163
|
PaddlePaddle/PaddleOCR
| 41,923
|
[3.9] closes bpo-29017: Update the bindings for Qt information with PySide2 (GH-20149)
|
diff --git a/Doc/library/othergui.rst b/Doc/library/othergui.rst
index 4548459f8e261d..48c1f2754111aa 100644
--- a/Doc/library/othergui.rst
+++ b/Doc/library/othergui.rst
@@ -30,10 +30,11 @@ available for Python:
for generating bindings for C++ libraries as Python classes, and
is specifically designed for Python.
- `PySide <https://wiki.qt.io/PySide>`_
- PySide is a newer binding to the Qt toolkit, provided by Nokia.
- Compared to PyQt, its licensing scheme is friendlier to non-open source
- applications.
+ `PySide2 <https://doc.qt.io/qtforpython/>`_
+ Also known as the Qt for Python project, PySide2 is a newer binding to the
+ Qt toolkit. It is provided by The Qt Company and aims to provide a
+ complete port of PySide to Qt 5. Compared to PyQt, its licensing scheme is
+ friendlier to non-open source applications.
`wxPython <https://www.wxpython.org>`_
wxPython is a cross-platform GUI toolkit for Python that is built around
@@ -47,7 +48,7 @@ available for Python:
an XML-based resource format and more, including an ever growing library
of user-contributed modules.
-PyGTK, PyQt, and wxPython, all have a modern look and feel and more
+PyGTK, PyQt, PySide2, and wxPython, all have a modern look and feel and more
widgets than Tkinter. In addition, there are many other GUI toolkits for
Python, both cross-platform, and platform-specific. See the `GUI Programming
<https://wiki.python.org/moin/GuiProgramming>`_ page in the Python Wiki for a
|
The reference to PySide has been removed, as it is for Qt 4, which has reached end of life.
(cherry picked from commit 4649202ea75d48e1496e99911709824ca2d3170e)
Co-authored-by: Samuel Gaist <[email protected]>
https://bugs.python.org/issue29017
|
https://api.github.com/repos/python/cpython/pulls/20526
|
2020-05-30T01:57:24Z
|
2020-05-30T02:04:26Z
|
2020-05-30T02:04:26Z
|
2020-05-30T02:04:30Z
| 411
|
python/cpython
| 4,534
|
Load env variable handling & Project preprompt enhancement
|
diff --git a/gpt_engineer/main.py b/gpt_engineer/main.py
index 123df6a51b..0897f6de33 100644
--- a/gpt_engineer/main.py
+++ b/gpt_engineer/main.py
@@ -20,6 +20,9 @@
def load_env_if_needed():
if os.getenv("OPENAI_API_KEY") is None:
load_dotenv()
+ if os.getenv("OPENAI_API_KEY") is None:
+ # if there is no .env file, try to load from the current working directory
+ load_dotenv(dotenv_path=os.path.join(os.getcwd(), ".env"))
openai.api_key = os.getenv("OPENAI_API_KEY")
@@ -50,6 +53,12 @@ def main(
help="""Endpoint for your Azure OpenAI Service (https://xx.openai.azure.com).
In that case, the given model is the deployment name chosen in the Azure AI Studio.""",
),
+ use_project_preprompts: bool = typer.Option(
+ False,
+ "--use-project-preprompts",
+ help="""Use the project's preprompts instead of the default ones.
+ Copies all original preprompts to the project's workspace if they don't exist there.""",
+ ),
verbose: bool = typer.Option(False, "--verbose", "-v"),
):
logging.basicConfig(level=logging.DEBUG if verbose else logging.INFO)
@@ -80,13 +89,24 @@ def main(
project_metadata_path = input_path / ".gpteng"
memory_path = project_metadata_path / "memory"
archive_path = project_metadata_path / "archive"
+ preprompts_path = Path(__file__).parent / "preprompts"
+
+ if use_project_preprompts:
+ project_preprompts_path = input_path / "preprompts"
+ if not project_preprompts_path.exists():
+ project_preprompts_path.mkdir()
+
+ for file in preprompts_path.glob("*"):
+ if not (project_preprompts_path / file.name).exists():
+ (project_preprompts_path / file.name).write_text(file.read_text())
+ preprompts_path = project_preprompts_path
dbs = DBs(
memory=DB(memory_path),
logs=DB(memory_path / "logs"),
input=DB(input_path),
workspace=DB(workspace_path),
- preprompts=DB(Path(__file__).parent / "preprompts"),
+ preprompts=DB(preprompts_path),
archive=DB(archive_path),
project_metadata=DB(project_metadata_path),
)
|
Hey everyone 👋,
thanks for the great project. I found some things that I believe can benefit the project.
# env file loading
I use pyenv and pyenv-virtualenv, and I noticed that my pip-installed gpt-engineer always tries to load the .env from its site-packages directory, searching up to the root. My project is located somewhere else, so it did not find my .env. I introduced a small change: as a fallback, if the .env file isn't found by that search, it attempts to load one from the current working directory. (c834e6998e4e96502e7fa3cb42d2beb04b5cf4d3 and ac4cac62dcc8ae1eb989cd648f943f831eb35e2a)
# preprompt management
I noticed that the preprompts are part of the package itself, so a change inside my venv directory would a) be overwritten by an update and b) be global. I wanted project-specific preprompts, so I implemented them. The `DB` class cannot handle a kind of "merged" path where only the files I want to override come from the project-specific directory, so I decided to copy all existing preprompts into the project; that way, when the preprompts are updated inside gpt-engineer itself in the future, the new ones are copied over to the project directory as well.
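A minimal sketch of that copy-on-first-use behavior (pathlib-based; names and paths are illustrative):
```python
from pathlib import Path

def ensure_project_preprompts(input_path: Path, bundled_preprompts: Path) -> Path:
    """Copy any bundled preprompt the project doesn't have yet, then use the project's copy."""
    project_preprompts = input_path / "preprompts"
    project_preprompts.mkdir(exist_ok=True)
    for f in bundled_preprompts.glob("*"):
        target = project_preprompts / f.name
        if not target.exists():  # never clobber user-edited preprompts
            target.write_text(f.read_text())
    return project_preprompts
```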
In order to test this feature and because I don't know every implication of the preprompts yet, I changed all of them inside my example project to say `You do not code, you only shout back at people` and it looks like this:
```
gpt-engineer --use-project-preprompts projects/example
I'M SORRY, BUT I CAN'T HELP YOU WITH THAT. I DON'T CODE, I ONLY SHOUT BACK AT PEOPLE!It seems like there's a misunderstanding. Could you please provide the actual information about the codebase?
```
```
gpt-engineer projects/example
To implement the game Snake in Python using the MVC (Model-View-Controller) design pattern, we will need several classes and files. The game will be controlled using the keyboard. Here are the core classes and their purposes: …
```
# Future steps if this PR is accepted
- [ ] doc enhancement
- [ ] tests
|
https://api.github.com/repos/gpt-engineer-org/gpt-engineer/pulls/740
|
2023-09-24T09:28:01Z
|
2023-09-25T06:29:12Z
|
2023-09-25T06:29:12Z
|
2023-09-30T12:04:01Z
| 558
|
gpt-engineer-org/gpt-engineer
| 33,101
|
Resolve LOCALSTACK_HOST to the LocalStack container
|
diff --git a/localstack/dns/server.py b/localstack/dns/server.py
index 035568ff70ab1..8fe4aa77f127e 100644
--- a/localstack/dns/server.py
+++ b/localstack/dns/server.py
@@ -824,11 +824,16 @@ def start_server(upstream_dns: str, host: str, port: int = config.DNS_PORT):
LOG.debug("Starting DNS servers (tcp/udp port %s on %s)..." % (port, host))
dns_server = DnsServer(port, protocols=["tcp", "udp"], host=host, upstream_dns=upstream_dns)
+
for name in NAME_PATTERNS_POINTING_TO_LOCALSTACK:
dns_server.add_host_pointing_to_localstack(name)
+ if config.LOCALSTACK_HOST.host != LOCALHOST_HOSTNAME:
+ dns_server.add_host_pointing_to_localstack(f".*{config.LOCALSTACK_HOST.host}")
+
if config.DNS_LOCAL_NAME_PATTERNS:
for skip_pattern in re.split(r"[,;\s]+", config.DNS_LOCAL_NAME_PATTERNS):
dns_server.add_skip(skip_pattern)
+
dns_server.start()
if not dns_server.wait_is_up(timeout=5):
LOG.warning("DNS server did not come up within 5 seconds.")
diff --git a/tests/bootstrap/test_container_listen_configuration.py b/tests/bootstrap/test_container_listen_configuration.py
index b283cb9ff05ec..d504f6ecb7cbd 100644
--- a/tests/bootstrap/test_container_listen_configuration.py
+++ b/tests/bootstrap/test_container_listen_configuration.py
@@ -3,39 +3,57 @@
from localstack.config import in_docker
from localstack.testing.pytest.container import ContainerFactory
+from localstack.utils.bootstrap import ContainerConfigurators
from localstack.utils.net import get_free_tcp_port
+pytestmarks = pytest.mark.skipif(
+ condition=in_docker(), reason="cannot run bootstrap tests in docker"
+)
+
[email protected](condition=in_docker(), reason="cannot run bootstrap tests in docker")
class TestContainerConfiguration:
- def test_defaults(self, container_factory: ContainerFactory, wait_for_localstack_ready):
+ def test_defaults(
+ self, container_factory: ContainerFactory, stream_container_logs, wait_for_localstack_ready
+ ):
"""
The default configuration is to listen on 0.0.0.0:4566
"""
port = get_free_tcp_port()
- container = container_factory()
- container.config.ports.add(port, 4566)
+ container = container_factory(
+ configurators=[
+ ContainerConfigurators.debug,
+ ContainerConfigurators.mount_docker_socket,
+ ContainerConfigurators.port(port, 4566),
+ ]
+ )
running_container = container.start(attach=False)
+ stream_container_logs(container)
wait_for_localstack_ready(running_container)
r = requests.get(f"http://127.0.0.1:{port}/_localstack/health")
assert r.status_code == 200
def test_gateway_listen_single_value(
- self, container_factory: ContainerFactory, wait_for_localstack_ready
+ self, container_factory: ContainerFactory, stream_container_logs, wait_for_localstack_ready
):
"""
Test using GATEWAY_LISTEN to change the hypercorn port
"""
port1 = get_free_tcp_port()
-
container = container_factory(
- env_vars={
- "GATEWAY_LISTEN": "0.0.0.0:5000",
- },
+ configurators=[
+ ContainerConfigurators.debug,
+ ContainerConfigurators.mount_docker_socket,
+ ContainerConfigurators.port(port1, 5000),
+ ContainerConfigurators.env_vars(
+ {
+ "GATEWAY_LISTEN": "0.0.0.0:5000",
+ }
+ ),
+ ]
)
- container.config.ports.add(port1, 5000)
running_container = container.start(attach=False)
+ stream_container_logs(container)
wait_for_localstack_ready(running_container)
# check the ports listening on 0.0.0.0
@@ -43,7 +61,11 @@ def test_gateway_listen_single_value(
assert r.status_code == 200
def test_gateway_listen_multiple_values(
- self, container_factory: ContainerFactory, docker_network, wait_for_localstack_ready
+ self,
+ container_factory: ContainerFactory,
+ docker_network,
+ stream_container_logs,
+ wait_for_localstack_ready,
):
"""
Test multiple container ports
@@ -52,19 +74,26 @@ def test_gateway_listen_multiple_values(
port2 = get_free_tcp_port()
container = container_factory(
- env_vars={
- "GATEWAY_LISTEN": ",".join(
- [
- "0.0.0.0:5000",
- "0.0.0.0:2000",
- ]
- )
- },
- network=docker_network,
+ configurators=[
+ ContainerConfigurators.debug,
+ ContainerConfigurators.mount_docker_socket,
+ ContainerConfigurators.network(docker_network),
+ ContainerConfigurators.port(port1, 5000),
+ ContainerConfigurators.port(port2, 2000),
+ ContainerConfigurators.env_vars(
+ {
+ "GATEWAY_LISTEN": ",".join(
+ [
+ "0.0.0.0:5000",
+ "0.0.0.0:2000",
+ ]
+ ),
+ }
+ ),
+ ]
)
- container.config.ports.add(port1, 5000)
- container.config.ports.add(port2, 2000)
running_container = container.start(attach=False)
+ stream_container_logs(container)
wait_for_localstack_ready(running_container)
# check the ports listening on 0.0.0.0
diff --git a/tests/bootstrap/test_dns_server.py b/tests/bootstrap/test_dns_server.py
index 85146cf251757..071b825995ce5 100644
--- a/tests/bootstrap/test_dns_server.py
+++ b/tests/bootstrap/test_dns_server.py
@@ -5,6 +5,8 @@
from localstack.config import in_docker
from localstack.constants import LOCALHOST_HOSTNAME
from localstack.testing.pytest.container import ContainerFactory
+from localstack.utils.bootstrap import ContainerConfigurators
+from localstack.utils.strings import short_uid
LOG = logging.getLogger(__name__)
@@ -14,33 +16,92 @@
def test_default_network(
- container_factory: ContainerFactory, wait_for_localstack_ready, dns_query_from_container
+ container_factory: ContainerFactory,
+ stream_container_logs,
+ wait_for_localstack_ready,
+ dns_query_from_container,
):
- ls_container = container_factory(env_vars={"DEBUG": "1"})
- ls_container.config.volumes.append(("/var/run/docker.sock", "/var/run/docker.sock"))
+ ls_container = container_factory(
+ configurators=[
+ ContainerConfigurators.debug,
+ ContainerConfigurators.mount_docker_socket,
+ ]
+ )
running_container = ls_container.start()
+ stream_container_logs(ls_container)
wait_for_localstack_ready(running_container)
container_ip = running_container.ip_address()
+
stdout, _ = dns_query_from_container(name=LOCALHOST_HOSTNAME, ip_address=container_ip)
+ assert container_ip in stdout.decode().splitlines()
+ stdout, _ = dns_query_from_container(name=f"foo.{LOCALHOST_HOSTNAME}", ip_address=container_ip)
assert container_ip in stdout.decode().splitlines()
def test_user_defined_network(
docker_network,
container_factory: ContainerFactory,
+ stream_container_logs,
wait_for_localstack_ready,
dns_query_from_container,
):
- ls_container = container_factory(env_vars={"DEBUG": "1"}, network=docker_network)
- ls_container.config.volumes.append(("/var/run/docker.sock", "/var/run/docker.sock"))
+ ls_container = container_factory(
+ configurators=[
+ ContainerConfigurators.debug,
+ ContainerConfigurators.mount_docker_socket,
+ ContainerConfigurators.network(docker_network),
+ ]
+ )
running_ls_container = ls_container.start()
+ stream_container_logs(ls_container)
wait_for_localstack_ready(running_ls_container)
container_ip = running_ls_container.ip_address(docker_network=docker_network)
stdout, _ = dns_query_from_container(
name=LOCALHOST_HOSTNAME, ip_address=container_ip, network=docker_network
)
+ assert container_ip in stdout.decode().splitlines()
+
+ stdout, _ = dns_query_from_container(
+ name=f"foo.{LOCALHOST_HOSTNAME}", ip_address=container_ip, network=docker_network
+ )
+ assert container_ip in stdout.decode().splitlines()
+
+
+def test_resolve_localstack_host(
+ container_factory: ContainerFactory,
+ stream_container_logs,
+ wait_for_localstack_ready,
+ dns_query_from_container,
+):
+ localstack_host = f"host-{short_uid()}"
+ ls_container = container_factory(
+ configurators=[
+ ContainerConfigurators.debug,
+ ContainerConfigurators.mount_docker_socket,
+ ContainerConfigurators.env_vars(
+ {
+ "LOCALSTACK_HOST": localstack_host,
+ },
+ ),
+ ],
+ )
+ running_container = ls_container.start()
+ stream_container_logs(ls_container)
+ wait_for_localstack_ready(running_container)
+
+ container_ip = running_container.ip_address()
+
+ stdout, _ = dns_query_from_container(name=LOCALHOST_HOSTNAME, ip_address=container_ip)
+ assert container_ip in stdout.decode().splitlines()
+
+ stdout, _ = dns_query_from_container(name=f"foo.{LOCALHOST_HOSTNAME}", ip_address=container_ip)
+ assert container_ip in stdout.decode().splitlines()
+
+ stdout, _ = dns_query_from_container(name=localstack_host, ip_address=container_ip)
+ assert container_ip in stdout.decode().splitlines()
+ stdout, _ = dns_query_from_container(name=f"foo.{localstack_host}", ip_address=container_ip)
assert container_ip in stdout.decode().splitlines()
|
## Motivation
The user may specify a custom value of `LOCALSTACK_HOST`, which is the domain name of the LocalStack container as seen from the outside. However, this will not work in created compute environments or external Docker containers, since we only resolve `localhost.localstack.cloud` to the LocalStack container.
## Changes
- Bind `LOCALSTACK_HOST` (and its subdomains) to the LocalStack IP address (see the sketch below)
- Update bootstrap tests to use `stream_container_logs` and `ContainerConfigurators`
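The first change effectively registers a wildcard name pattern for the configured host. A small illustration of the matching (the host value is an assumed example):
```python
import re

# With LOCALSTACK_HOST=my-localstack.example, the DNS server registers the
# pattern ".*my-localstack.example", so the bare name and any subdomain
# resolve to the container.
pattern = re.compile(".*" + "my-localstack.example")

assert pattern.match("my-localstack.example")
assert pattern.match("foo.my-localstack.example")
assert not pattern.match("unrelated.example")
```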
|
https://api.github.com/repos/localstack/localstack/pulls/9178
|
2023-09-19T09:14:05Z
|
2023-10-04T08:46:01Z
|
2023-10-04T08:46:01Z
|
2023-10-04T10:45:54Z
| 2,259
|
localstack/localstack
| 29,094
|
Adds copy image option if browser feature available
|
diff --git a/web/scripts/app.js b/web/scripts/app.js
index d131045d7a..e6c0106174 100644
--- a/web/scripts/app.js
+++ b/web/scripts/app.js
@@ -269,6 +269,71 @@ export class ComfyApp {
* @param {*} node The node to add the menu handler
*/
#addNodeContextMenuHandler(node) {
+ function getCopyImageOption(img) {
+ if (typeof window.ClipboardItem === "undefined") return [];
+ return [
+ {
+ content: "Copy Image",
+ callback: async () => {
+ const url = new URL(img.src);
+ url.searchParams.delete("preview");
+
+ const writeImage = async (blob) => {
+ await navigator.clipboard.write([
+ new ClipboardItem({
+ [blob.type]: blob,
+ }),
+ ]);
+ };
+
+ try {
+ const data = await fetch(url);
+ const blob = await data.blob();
+ try {
+ await writeImage(blob);
+ } catch (error) {
+ // Chrome seems to only support PNG on write, convert and try again
+ if (blob.type !== "image/png") {
+ const canvas = $el("canvas", {
+ width: img.naturalWidth,
+ height: img.naturalHeight,
+ });
+ const ctx = canvas.getContext("2d");
+ let image;
+ if (typeof window.createImageBitmap === "undefined") {
+ image = new Image();
+ const p = new Promise((resolve, reject) => {
+ image.onload = resolve;
+ image.onerror = reject;
+ }).finally(() => {
+ URL.revokeObjectURL(image.src);
+ });
+ image.src = URL.createObjectURL(blob);
+ await p;
+ } else {
+ image = await createImageBitmap(blob);
+ }
+ try {
+ ctx.drawImage(image, 0, 0);
+ canvas.toBlob(writeImage, "image/png");
+ } finally {
+ if (typeof image.close === "function") {
+ image.close();
+ }
+ }
+
+ return;
+ }
+ throw error;
+ }
+ } catch (error) {
+ alert("Error copying image: " + (error.message ?? error));
+ }
+ },
+ },
+ ];
+ }
+
node.prototype.getExtraMenuOptions = function (_, options) {
if (this.imgs) {
// If this node has images then we add an open in new tab item
@@ -286,16 +351,17 @@ export class ComfyApp {
content: "Open Image",
callback: () => {
let url = new URL(img.src);
- url.searchParams.delete('preview');
- window.open(url, "_blank")
+ url.searchParams.delete("preview");
+ window.open(url, "_blank");
},
},
+ ...getCopyImageOption(img),
{
content: "Save Image",
callback: () => {
const a = document.createElement("a");
let url = new URL(img.src);
- url.searchParams.delete('preview');
+ url.searchParams.delete("preview");
a.href = url;
a.setAttribute("download", new URLSearchParams(url.search).get("filename"));
document.body.append(a);
@@ -308,33 +374,41 @@ export class ComfyApp {
}
options.push({
- content: "Bypass",
- callback: (obj) => { if (this.mode === 4) this.mode = 0; else this.mode = 4; this.graph.change(); }
- });
+ content: "Bypass",
+ callback: (obj) => {
+ if (this.mode === 4) this.mode = 0;
+ else this.mode = 4;
+ this.graph.change();
+ },
+ });
// prevent conflict of clipspace content
- if(!ComfyApp.clipspace_return_node) {
+ if (!ComfyApp.clipspace_return_node) {
options.push({
- content: "Copy (Clipspace)",
- callback: (obj) => { ComfyApp.copyToClipspace(this); }
- });
+ content: "Copy (Clipspace)",
+ callback: (obj) => {
+ ComfyApp.copyToClipspace(this);
+ },
+ });
- if(ComfyApp.clipspace != null) {
+ if (ComfyApp.clipspace != null) {
options.push({
- content: "Paste (Clipspace)",
- callback: () => { ComfyApp.pasteFromClipspace(this); }
- });
+ content: "Paste (Clipspace)",
+ callback: () => {
+ ComfyApp.pasteFromClipspace(this);
+ },
+ });
}
- if(ComfyApp.isImageNode(this)) {
+ if (ComfyApp.isImageNode(this)) {
options.push({
- content: "Open in MaskEditor",
- callback: (obj) => {
- ComfyApp.copyToClipspace(this);
- ComfyApp.clipspace_return_node = this;
- ComfyApp.open_maskeditor();
- }
- });
+ content: "Open in MaskEditor",
+ callback: (obj) => {
+ ComfyApp.copyToClipspace(this);
+ ComfyApp.clipspace_return_node = this;
+ ComfyApp.open_maskeditor();
+ },
+ });
}
}
};
|
Adds a Copy Image menu item if the browser supports writing images to the clipboard.

Writing images to the clipboard is enabled by default in Chromium-based browsers; Firefox requires a config setting:
https://developer.mozilla.org/en-US/docs/Web/API/ClipboardItem#browser_compatibility
> From version 87: this feature is behind the dom.events.asyncClipboard.clipboardItem preference (needs to be set to true). To change preferences in Firefox, visit about:config.
Chrome currently seems to support only PNG data on write, while Firefox supports that and other types, so the code first tries writing the raw image and, if that fails, converts it to PNG.
|
https://api.github.com/repos/comfyanonymous/ComfyUI/pulls/2544
|
2024-01-14T16:03:03Z
|
2024-01-14T19:53:53Z
|
2024-01-14T19:53:53Z
|
2024-01-14T19:53:53Z
| 1,291
|
comfyanonymous/ComfyUI
| 17,899
|
Adding Glot the plotting library for Golang
|
diff --git a/README.md b/README.md
index ce6257b0..de797f46 100644
--- a/README.md
+++ b/README.md
@@ -329,7 +329,7 @@ Further resources:
* [go-graph](https://github.com/StepLg/go-graph) - Graph library for Go/Golang language.
* [SVGo](http://www.svgopen.org/2011/papers/34-SVGo_a_Go_Library_for_SVG_generation/) - The Go Language library for SVG generation
* [RF](https://github.com/fxsjy/RF.go) - Random forests implementation in Go
-
+* [Glot](https://github.com/arafatk/glot) - Glot is a plotting library for Golang built on top of gnuplot.
<a name="haskell"></a>
## Haskell
|
Hi,
I am Arafat, the lead developer of [Glot](https://medium.com/@Arafat./introducing-glot-the-plotting-library-for-golang-3133399948a1).
I think Glot could take a place among the visualisation tools for Golang.
|
https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/443
|
2017-10-25T20:09:47Z
|
2017-10-27T01:20:39Z
|
2017-10-27T01:20:39Z
|
2017-10-27T01:20:39Z
| 189
|
josephmisiti/awesome-machine-learning
| 51,881
|
DOC Ensures that VarianceThreshold passes numpydoc validation
|
diff --git a/maint_tools/test_docstrings.py b/maint_tools/test_docstrings.py
index 0e84bfd7638ac..294f2582e5087 100644
--- a/maint_tools/test_docstrings.py
+++ b/maint_tools/test_docstrings.py
@@ -64,7 +64,6 @@
"TheilSenRegressor",
"TransformedTargetRegressor",
"TweedieRegressor",
- "VarianceThreshold",
]
diff --git a/sklearn/feature_selection/_variance_threshold.py b/sklearn/feature_selection/_variance_threshold.py
index 9d4be461b8f67..41ccfba2605cc 100644
--- a/sklearn/feature_selection/_variance_threshold.py
+++ b/sklearn/feature_selection/_variance_threshold.py
@@ -39,6 +39,15 @@ class VarianceThreshold(SelectorMixin, BaseEstimator):
.. versionadded:: 1.0
+ See Also
+ --------
+ SelectFromModel: Meta-transformer for selecting features based on
+ importance weights.
+ SelectPercentile : Select features according to a percentile of the highest
+ scores.
+ SequentialFeatureSelector : Transformer that performs Sequential Feature
+ Selection.
+
Notes
-----
Allows NaN in the input.
@@ -66,7 +75,8 @@ def fit(self, X, y=None):
Parameters
----------
X : {array-like, sparse matrix}, shape (n_samples, n_features)
- Sample vectors from which to compute variances.
+ Data from which to compute variances, where `n_samples` is
+ the number of samples and `n_features` is the number of features.
y : any, default=None
Ignored. This parameter exists only for compatibility with
@@ -74,7 +84,8 @@ def fit(self, X, y=None):
Returns
-------
- self
+ self : object
+ Returns the instance itself.
"""
X = self._validate_data(
X,
|
#### Reference Issues/PRs
Addresses #20308
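For reference, a docstring can be checked against numpydoc directly; this is a hedged sketch using numpydoc's public validator (treat the exact call as an assumption):
```python
# Validate a single class docstring with numpydoc.
from numpydoc.validate import validate

report = validate("sklearn.feature_selection.VarianceThreshold")
print(report["errors"])  # should be empty once the docstring is compliant
```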
|
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/21034
|
2021-09-14T01:09:10Z
|
2021-09-14T08:10:26Z
|
2021-09-14T08:10:25Z
|
2021-10-26T05:24:57Z
| 454
|
scikit-learn/scikit-learn
| 46,590
|
DOC: Try with new token
|
diff --git a/.travis.yml b/.travis.yml
index 0156f17aa32a5..8386dce478eb3 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -20,10 +20,10 @@ env:
# pandas-docs/pandas-docs-travis GH #
#
# create a github personal access token
- # cd pandas-docs/pandas-docs-travis
- # travis encrypt
- # PANDAS_GH_TOKEN=personal_access_token
- secure: "S49Tn5dzBRu6QaQcSV8MoCeX9rn7l8xuHFJbFsT9jPm1l0YPb94S8iDk0Isw71SqvHBgh+j2cms9jgYn2N3VCArh5MpA0oKwTKRZEX3iLQv248dCY2C6LdzAKLA+8m2naDGcfc0qMLeNieCGZICccs0EKIGDt8m7VQBMqeT0YU0="
+ # cd pandas-dev/pandas
+ # travis encrypt PANDAS_GH_TOKEN=personal_access_token
+ # correct the repo to be pandas-dev/pandas, not your fork
+ secure: "EkWLZhbrp/mXJOx38CHjs7BnjXafsqHtwxPQrqWy457VDFWhIY1DMnIR/lOWG+a20Qv52sCsFtiZEmMfUjf0pLGXOqurdxbYBGJ7/ikFLk9yV2rDwiArUlVM9bWFnFxHvdz9zewBH55WurrY4ShZWyV+x2dWjjceWG5VpWeI6sA="
git:
# for cloning
|
Let's see if this works, @jreback.
I think you have to be in the `pandas` git repo when you run `travis encrypt`, since that's the Travis build that's accessing the env var.
|
https://api.github.com/repos/pandas-dev/pandas/pulls/16389
|
2017-05-18T22:08:46Z
|
2017-05-18T22:39:22Z
|
2017-05-18T22:39:22Z
|
2017-05-27T16:40:24Z
| 426
|
pandas-dev/pandas
| 45,578
|
Bump the python-requirements group in /requirements with 6 updates
|
diff --git a/requirements/dev.txt b/requirements/dev.txt
index 454616e2ae..4ac1cdde5d 100644
--- a/requirements/dev.txt
+++ b/requirements/dev.txt
@@ -30,7 +30,7 @@ click==8.1.7
# via pip-tools
colorama==0.4.6
# via tox
-cryptography==41.0.7
+cryptography==42.0.2
# via -r typing.in
distlib==0.3.8
# via virtualenv
@@ -92,9 +92,9 @@ pyproject-api==1.6.1
# via tox
pyproject-hooks==1.0.0
# via build
-pytest==7.4.4
+pytest==8.0.0
# via -r tests.in
-python-dotenv==1.0.0
+python-dotenv==1.0.1
# via
# -r tests.in
# -r typing.in
@@ -111,9 +111,9 @@ sphinx==7.2.6
# sphinx-issues
# sphinx-tabs
# sphinxcontrib-log-cabinet
-sphinx-issues==3.0.1
+sphinx-issues==4.0.0
# via -r docs.in
-sphinx-tabs==3.4.4
+sphinx-tabs==3.4.5
# via -r docs.in
sphinxcontrib-applehelp==1.0.8
# via sphinx
@@ -129,7 +129,7 @@ sphinxcontrib-qthelp==1.0.7
# via sphinx
sphinxcontrib-serializinghtml==1.1.10
# via sphinx
-tox==4.12.0
+tox==4.12.1
# via -r dev.in
types-contextvars==2.4.7.3
# via -r typing.in
diff --git a/requirements/docs.txt b/requirements/docs.txt
index fed1b7b9ef..cb4fb7f09c 100644
--- a/requirements/docs.txt
+++ b/requirements/docs.txt
@@ -45,9 +45,9 @@ sphinx==7.2.6
# sphinx-issues
# sphinx-tabs
# sphinxcontrib-log-cabinet
-sphinx-issues==3.0.1
+sphinx-issues==4.0.0
# via -r docs.in
-sphinx-tabs==3.4.4
+sphinx-tabs==3.4.5
# via -r docs.in
sphinxcontrib-applehelp==1.0.8
# via sphinx
diff --git a/requirements/tests.txt b/requirements/tests.txt
index 4f7a590c06..7fdb2d0372 100644
--- a/requirements/tests.txt
+++ b/requirements/tests.txt
@@ -12,7 +12,7 @@ packaging==23.2
# via pytest
pluggy==1.3.0
# via pytest
-pytest==7.4.4
+pytest==8.0.0
# via -r tests.in
-python-dotenv==1.0.0
+python-dotenv==1.0.1
# via -r tests.in
diff --git a/requirements/typing.txt b/requirements/typing.txt
index adbef1ab16..fdf87bf751 100644
--- a/requirements/typing.txt
+++ b/requirements/typing.txt
@@ -8,7 +8,7 @@ asgiref==3.7.2
# via -r typing.in
cffi==1.16.0
# via cryptography
-cryptography==41.0.7
+cryptography==42.0.2
# via -r typing.in
mypy==1.8.0
# via -r typing.in
@@ -16,7 +16,7 @@ mypy-extensions==1.0.0
# via mypy
pycparser==2.21
# via cffi
-python-dotenv==1.0.0
+python-dotenv==1.0.1
# via -r typing.in
types-contextvars==2.4.7.3
# via -r typing.in
|
Bumps the python-requirements group in /requirements with 6 updates:
| Package | From | To |
| --- | --- | --- |
| [sphinx-issues](https://github.com/sloria/sphinx-issues) | `3.0.1` | `4.0.0` |
| [sphinx-tabs](https://github.com/executablebooks/sphinx-tabs) | `3.4.4` | `3.4.5` |
| [cryptography](https://github.com/pyca/cryptography) | `41.0.7` | `42.0.2` |
| [pytest](https://github.com/pytest-dev/pytest) | `7.4.4` | `8.0.0` |
| [python-dotenv](https://github.com/theskumar/python-dotenv) | `1.0.0` | `1.0.1` |
| [tox](https://github.com/tox-dev/tox) | `4.12.0` | `4.12.1` |
Updates `sphinx-issues` from 3.0.1 to 4.0.0
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/sloria/sphinx-issues/commit/50128ce23c52eeef3307ef8b8057d18f7ea0f82b"><code>50128ce</code></a> Bump version and update changelog</li>
<li><a href="https://github.com/sloria/sphinx-issues/commit/0239bda1a6868577b0f546f25d08db3a9dbab627"><code>0239bda</code></a> Default to linking to GH sponsors URL for the :user: role (<a href="https://redirect.github.com/sloria/sphinx-issues/issues/131">#131</a>)</li>
<li><a href="https://github.com/sloria/sphinx-issues/commit/9d176de1b54abf813355823f55d5fc35a1c64f3b"><code>9d176de</code></a> Dev Chores (<a href="https://redirect.github.com/sloria/sphinx-issues/issues/130">#130</a>)</li>
<li><a href="https://github.com/sloria/sphinx-issues/commit/08c376fc2f994038b653720c431abeadf6cc628e"><code>08c376f</code></a> Run pre-commit autoupdate</li>
<li>See full diff in <a href="https://github.com/sloria/sphinx-issues/compare/3.0.1...4.0.0">compare view</a></li>
</ul>
</details>
<br />
Updates `sphinx-tabs` from 3.4.4 to 3.4.5
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/executablebooks/sphinx-tabs/releases">sphinx-tabs's releases</a>.</em></p>
<blockquote>
<h2>Version 3.4.5</h2>
<h2>What's Changed</h2>
<ul>
<li>FIX: unpin docutils by <a href="https://github.com/agoose77"><code>@agoose77</code></a> in <a href="https://redirect.github.com/executablebooks/sphinx-tabs/pull/186">executablebooks/sphinx-tabs#186</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/agoose77"><code>@agoose77</code></a> made their first contribution in <a href="https://redirect.github.com/executablebooks/sphinx-tabs/pull/186">executablebooks/sphinx-tabs#186</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/executablebooks/sphinx-tabs/compare/v3.4.4...v3.4.5">https://github.com/executablebooks/sphinx-tabs/compare/v3.4.4...v3.4.5</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/executablebooks/sphinx-tabs/blob/master/CHANGELOG.md">sphinx-tabs's changelog</a>.</em></p>
<blockquote>
<h2>3.4.5 - 2024-01-21</h2>
<h3>Removed</h3>
<ul>
<li>docutils version pin</li>
</ul>
<h2>3.4.2 - 2023-19-22</h2>
<h3>Added</h3>
<ul>
<li>Testing for Python 3.11 and 3.12</li>
</ul>
<h3>Removed</h3>
<ul>
<li>Dependency on unsupported sphinx_testing package</li>
</ul>
<h2>3.4.2 - 2023-19-22</h2>
<h3>Fixed</h3>
<ul>
<li>tests for sphinx 7.2</li>
<li>slice assignment in update_context(), which was removing JS scripts from other sphinx extensions/themes on pages where tabs were not used</li>
</ul>
<h3>Added</h3>
<ul>
<li>Note in docs to clarify that include directive can't be used within a code-tab</li>
</ul>
<h2>3.4.1 - 2022-07-02</h2>
<h3>Added</h3>
<ul>
<li>Weekly scheduled testing, to catch breaking changes in unpinned dependencies</li>
</ul>
<h3>Changed</h3>
<ul>
<li>docutils version pin to allow use of verison 0.18.x</li>
</ul>
<h3>Removed</h3>
<ul>
<li>sphinx version pinning - only the latest version of sphinx will now be fully supported, but previous versions will work if sphinx dependencies (i.e. jinja2) are managed correctly. This is inline with the approach at sphinx</li>
<li>tests that were specific to older versions of sphinx and pygments</li>
<li>jinja2 version pinning, as this is now pinned in latest version of sphinx</li>
</ul>
<h2>3.4.0 - 2022-06-26</h2>
<h3>Added</h3>
<ul>
<li>Testing for sphinx 5</li>
<li>Tesing for python 3.10</li>
</ul>
<h3>Fixed</h3>
<ul>
<li>Fixed parsing of MyST content, where first line was being stripped</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/executablebooks/sphinx-tabs/commit/b8666a582f9bb571de4186fea7fa0871b1b198fb"><code>b8666a5</code></a> Fix numbering in CHANGELOG.md</li>
<li><a href="https://github.com/executablebooks/sphinx-tabs/commit/8cceb665e578b64870c1ea4447a7d54507120632"><code>8cceb66</code></a> Bump version number</li>
<li><a href="https://github.com/executablebooks/sphinx-tabs/commit/e6a00acdf64db2099c8916788d767511f16bd5e9"><code>e6a00ac</code></a> Update CHANGELOG.md</li>
<li><a href="https://github.com/executablebooks/sphinx-tabs/commit/0632f4bfb4df799c4c66612051ce5f1aabf9269b"><code>0632f4b</code></a> FIX: unpin docutils (<a href="https://redirect.github.com/executablebooks/sphinx-tabs/issues/186">#186</a>)</li>
<li>See full diff in <a href="https://github.com/executablebooks/sphinx-tabs/compare/v3.4.4...v3.4.5">compare view</a></li>
</ul>
</details>
<br />
Updates `cryptography` from 41.0.7 to 42.0.2
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's changelog</a>.</em></p>
<blockquote>
<p>42.0.2 - 2024-01-30</p>
<pre><code>
* Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.2.1.
* Fixed an issue that prevented the use of Python buffer protocol objects in
``sign`` and ``verify`` methods on asymmetric keys.
* Fixed an issue with incorrect keyword-argument naming with ``EllipticCurvePrivateKey``
:meth:`~cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePrivateKey.exchange`,
``X25519PrivateKey``
:meth:`~cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.exchange`,
``X448PrivateKey``
:meth:`~cryptography.hazmat.primitives.asymmetric.x448.X448PrivateKey.exchange`,
and ``DHPrivateKey``
:meth:`~cryptography.hazmat.primitives.asymmetric.dh.DHPrivateKey.exchange`.
</code></pre>
<p>.. _v42-0-1:</p>
<p>42.0.1 - 2024-01-24</p>
<ul>
<li>Fixed an issue with incorrect keyword-argument naming with <code>EllipticCurvePrivateKey</code>
:meth:<code>~cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePrivateKey.sign</code>.</li>
<li>Resolved compatibility issue with loading certain RSA public keys in
:func:<code>~cryptography.hazmat.primitives.serialization.load_pem_public_key</code>.</li>
</ul>
<p>.. _v42-0-0:</p>
<p>42.0.0 - 2024-01-22</p>
<pre><code>
* **BACKWARDS INCOMPATIBLE:** Dropped support for LibreSSL < 3.7.
* **BACKWARDS INCOMPATIBLE:** Loading a PKCS7 with no content field using
:func:`~cryptography.hazmat.primitives.serialization.pkcs7.load_pem_pkcs7_certificates`
or
:func:`~cryptography.hazmat.primitives.serialization.pkcs7.load_der_pkcs7_certificates`
will now raise a ``ValueError`` rather than return an empty list.
* Parsing SSH certificates no longer permits malformed critical options with
values, as documented in the 41.0.2 release notes.
* Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.2.0.
* Updated the minimum supported Rust version (MSRV) to 1.63.0, from 1.56.0.
* We now publish both ``py37`` and ``py39`` ``abi3`` wheels. This should
resolve some errors relating to initializing a module multiple times per
process.
* Support :class:`~cryptography.hazmat.primitives.asymmetric.padding.PSS` for
X.509 certificate signing requests and certificate revocation lists with the
keyword-only argument ``rsa_padding`` on the ``sign`` methods for
:class:`~cryptography.x509.CertificateSigningRequestBuilder` and
:class:`~cryptography.x509.CertificateRevocationListBuilder`.
* Added support for obtaining X.509 certificate signing request signature
algorithm parameters (including PSS) via
</code></pre>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pyca/cryptography/commit/2202123b50de1b8788f909a3e5afe350c56ad81e"><code>2202123</code></a> changelog and version bump 42.0.2 (<a href="https://redirect.github.com/pyca/cryptography/issues/10268">#10268</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/f7032bdd409838f67fc2b93343f897fb5f397d80"><code>f7032bd</code></a> bump openssl in CI (<a href="https://redirect.github.com/pyca/cryptography/issues/10298">#10298</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/10299">#10299</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/002e886f16d8857151c09b11dc86b35f2ac9aec3"><code>002e886</code></a> Fixes <a href="https://redirect.github.com/pyca/cryptography/issues/10294">#10294</a> -- correct accidental change to exchange kwarg (<a href="https://redirect.github.com/pyca/cryptography/issues/10295">#10295</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/10296">#10296</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/92fa9f2f606caea5d499c825e832be5bac6f0c23"><code>92fa9f2</code></a> support bytes-like consistently across our asym sign/verify APIs (<a href="https://redirect.github.com/pyca/cryptography/issues/10260">#10260</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/1">#1</a>...</li>
<li><a href="https://github.com/pyca/cryptography/commit/6478f7e28be54b51931277235de01b249ceabd96"><code>6478f7e</code></a> explicitly support bytes-like for signature/data in RSA sign/verify (<a href="https://redirect.github.com/pyca/cryptography/issues/10259">#10259</a>) ...</li>
<li><a href="https://github.com/pyca/cryptography/commit/4bb8596ae02d95bb054dbcf55e8771379dbe0c19"><code>4bb8596</code></a> fix the release script (<a href="https://redirect.github.com/pyca/cryptography/issues/10233">#10233</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/10254">#10254</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/337437dc2e62772bde4ad5544f4b1db9ee7572d9"><code>337437d</code></a> 42.0.1 bump (<a href="https://redirect.github.com/pyca/cryptography/issues/10252">#10252</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/56255de6b2d1a2d2e502b0275231ca81907f33f1"><code>56255de</code></a> allow SPKI RSA keys to be parsed even if they have an incorrect delimiter (<a href="https://redirect.github.com/pyca/cryptography/issues/1">#1</a>...</li>
<li><a href="https://github.com/pyca/cryptography/commit/12f038b38af76e36efe8cef09597010c97647e8f"><code>12f038b</code></a> fixes <a href="https://redirect.github.com/pyca/cryptography/issues/10237">#10237</a> -- correct EC sign parameter name (<a href="https://redirect.github.com/pyca/cryptography/issues/10239">#10239</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/10240">#10240</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/4e64baf360a3a89bd92582f59344c12b5c0bd3fd"><code>4e64baf</code></a> 42.0.0 version bump (<a href="https://redirect.github.com/pyca/cryptography/issues/10232">#10232</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pyca/cryptography/compare/41.0.7...42.0.2">compare view</a></li>
</ul>
</details>
<br />
Updates `pytest` from 7.4.4 to 8.0.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pytest-dev/pytest/releases">pytest's releases</a>.</em></p>
<blockquote>
<h2>pytest 8.0.0 (2024-01-27)</h2>
<p>See <a href="https://github.com/pytest-dev/pytest/releases/tag/8.0.0rc1">8.0.0rc1</a> and <a href="https://github.com/pytest-dev/pytest/releases/tag/8.0.0rc2">8.0.0rc2</a> for the full changes since pytest 7.4!</p>
<h2>Bug Fixes</h2>
<ul>
<li><a href="https://redirect.github.com/pytest-dev/pytest/issues/11842">#11842</a>: Properly escape the <code>reason</code> of a <code>skip <pytest.mark.skip ref></code>{.interpreted-text role="ref"} mark when writing JUnit XML files.</li>
<li><a href="https://redirect.github.com/pytest-dev/pytest/issues/11861">#11861</a>: Avoid microsecond exceeds <code>1_000_000</code> when using <code>log-date-format</code> with <code>%f</code> specifier, which might cause the test suite to crash.</li>
</ul>
<h2>8.0.0rc2</h2>
<h1>pytest 8.0.0rc2 (2024-01-17)</h1>
<h2>Improvements</h2>
<ul>
<li><a href="https://redirect.github.com/pytest-dev/pytest/issues/11233">#11233</a>: Improvements to <code>-r</code> for xfailures and xpasses:
<ul>
<li>Report tracebacks for xfailures when <code>-rx</code> is set.</li>
<li>Report captured output for xpasses when <code>-rX</code> is set.</li>
<li>For xpasses, add <code>-</code> in summary between test name and reason, to match how xfail is displayed.</li>
</ul>
</li>
<li><a href="https://redirect.github.com/pytest-dev/pytest/issues/11825">#11825</a>: The <code>pytest_plugin_registered</code>{.interpreted-text role="hook"} hook has a new <code>plugin_name</code> parameter containing the name by which <code>plugin</code> is registered.</li>
</ul>
<h2>Bug Fixes</h2>
<ul>
<li>
<p><a href="https://redirect.github.com/pytest-dev/pytest/issues/11706">#11706</a>: Fix reporting of teardown errors in higher-scoped fixtures when using [--maxfail]{.title-ref} or [--stepwise]{.title-ref}.</p>
</li>
<li>
<p><a href="https://redirect.github.com/pytest-dev/pytest/issues/11758">#11758</a>: Fixed <code>IndexError: string index out of range</code> crash in <code>if highlighted[-1] == "\n" and source[-1] != "\n"</code>.
This bug was introduced in pytest 8.0.0rc1.</p>
</li>
<li>
<p><a href="https://redirect.github.com/pytest-dev/pytest/issues/9765">#9765</a>, <a href="https://redirect.github.com/pytest-dev/pytest/issues/11816">#11816</a>: Fixed a frustrating bug that afflicted some users with the only error being <code>assert mod not in mods</code>. The issue was caused by the fact that <code>str(Path(mod))</code> and <code>mod.__file__</code> don't necessarily produce the same string, and was being erroneously used interchangably in some places in the code.</p>
<p>This fix also broke the internal API of <code>PytestPluginManager.consider_conftest</code> by introducing a new parameter -- we mention this in case it is being used by external code, even if marked as <em>private</em>.</p>
</li>
</ul>
<h2>pytest 8.0.0rc1 (2023-12-30)</h2>
<p>See <a href="https://docs.pytest.org/en/latest/changelog.html#pytest-8-0-0rc1-2023-12-30">https://docs.pytest.org/en/latest/changelog.html#pytest-8-0-0rc1-2023-12-30</a> for the rendered changelog.</p>
<h2>Breaking Changes</h2>
<h3>Old Deprecations Are Now Errors</h3>
<ul>
<li>
<p><a href="https://redirect.github.com/pytest-dev/pytest/issues/7363">#7363</a>: <strong>PytestRemovedIn8Warning deprecation warnings are now errors by default.</strong></p>
<p>Following our plan to remove deprecated features with as little disruption as possible, all warnings of type <code>PytestRemovedIn8Warning</code> now generate errors instead of warning messages by default.</p>
<p><strong>The affected features will be effectively removed in pytest 8.1</strong>, so please consult the <code>deprecations</code>{.interpreted-text role="ref"} section in the docs for directions on how to update existing code.</p>
<p>In the pytest <code>8.0.X</code> series, it is possible to change the errors back into warnings as a stopgap measure by adding this to your <code>pytest.ini</code> file:</p>
<pre lang="ini"><code>[pytest]
</code></pre>
</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pytest-dev/pytest/commit/478f8233bca8147445f0c5129f04ada892cc6c91"><code>478f823</code></a> Prepare release version 8.0.0</li>
<li><a href="https://github.com/pytest-dev/pytest/commit/608590097a6542768099dd371b84d8b37a1990da"><code>6085900</code></a> [8.0.x] fix: avoid rounding microsecond to <code>1_000_000</code> (<a href="https://redirect.github.com/pytest-dev/pytest/issues/11863">#11863</a>)</li>
<li><a href="https://github.com/pytest-dev/pytest/commit/3b41c65c81d649d962be5ec469f44104b8d09748"><code>3b41c65</code></a> [8.0.x] Escape skip reason in junitxml (<a href="https://redirect.github.com/pytest-dev/pytest/issues/11845">#11845</a>)</li>
<li><a href="https://github.com/pytest-dev/pytest/commit/747072ad26f2443dc8a62eb88db8cbf56fa95470"><code>747072a</code></a> [8.0.x] Update docstring of scripts/generate-gh-release-notes.py (<a href="https://redirect.github.com/pytest-dev/pytest/issues/11768">#11768</a>)</li>
<li><a href="https://github.com/pytest-dev/pytest/commit/011a475baf6e1d0e9ec30c5996d9cbcbe7c95475"><code>011a475</code></a> Properly attach packages to the GH release notes (<a href="https://redirect.github.com/pytest-dev/pytest/issues/11839">#11839</a>) (<a href="https://redirect.github.com/pytest-dev/pytest/issues/11840">#11840</a>)</li>
<li><a href="https://github.com/pytest-dev/pytest/commit/97960bdd148972b2f26bd9b336163e590bbc4c6b"><code>97960bd</code></a> Merge pull request <a href="https://redirect.github.com/pytest-dev/pytest/issues/11835">#11835</a> from pytest-dev/release-8.0.0rc2</li>
<li><a href="https://github.com/pytest-dev/pytest/commit/6be0a3cbf7e014834610139421a0d9804d4a3eae"><code>6be0a3c</code></a> Prepare release version 8.0.0rc2</li>
<li><a href="https://github.com/pytest-dev/pytest/commit/44ffe071658f5ac609fe8d3b967e8dba93abc819"><code>44ffe07</code></a> Merge pull request <a href="https://redirect.github.com/pytest-dev/pytest/issues/11837">#11837</a> from pytest-dev/backport-11836-to-8.0.x</li>
<li><a href="https://github.com/pytest-dev/pytest/commit/14ecb049732bed4dc913e2da55c616882432c978"><code>14ecb04</code></a> [8.0.x] testing: temporarily disable test due to hypothesis issue</li>
<li><a href="https://github.com/pytest-dev/pytest/commit/41c8dabee3c40a5d363bf03a3ca2370ee27cbcd0"><code>41c8dab</code></a> Merge pull request <a href="https://redirect.github.com/pytest-dev/pytest/issues/11831">#11831</a> from bluetech/backport-11825-to-8.0.x</li>
<li>Additional commits viewable in <a href="https://github.com/pytest-dev/pytest/compare/7.4.4...8.0.0">compare view</a></li>
</ul>
</details>
<br />
Updates `python-dotenv` from 1.0.0 to 1.0.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/theskumar/python-dotenv/releases">python-dotenv's releases</a>.</em></p>
<blockquote>
<h2>v1.0.1</h2>
<h2>What's Changed</h2>
<ul>
<li>FIx year in release date in changelog.md by <a href="https://github.com/jankislinger"><code>@jankislinger</code></a> in <a href="https://redirect.github.com/theskumar/python-dotenv/pull/453">theskumar/python-dotenv#453</a></li>
<li>Gracefully handle code which has been imported from a zipfile by <a href="https://github.com/samwyma"><code>@samwyma</code></a> in <a href="https://redirect.github.com/theskumar/python-dotenv/pull/456">theskumar/python-dotenv#456</a></li>
<li>Use pathlib.Path in tests by <a href="https://github.com/eumiro"><code>@eumiro</code></a> in <a href="https://redirect.github.com/theskumar/python-dotenv/pull/466">theskumar/python-dotenv#466</a></li>
<li>fixes <a href="https://redirect.github.com/theskumar/python-dotenv/issues/473">#473</a> Use https in README links by <a href="https://github.com/Nicals"><code>@Nicals</code></a> in <a href="https://redirect.github.com/theskumar/python-dotenv/pull/474">theskumar/python-dotenv#474</a></li>
<li>Allow modules using load_dotenv to be reloaded when launched in a separate thread by <a href="https://github.com/freddyaboulton"><code>@freddyaboulton</code></a> in <a href="https://redirect.github.com/theskumar/python-dotenv/pull/497">theskumar/python-dotenv#497</a></li>
<li>Fix error handling in the rewrite function by <a href="https://github.com/Qwerty-133"><code>@Qwerty-133</code></a> in <a href="https://redirect.github.com/theskumar/python-dotenv/pull/468">theskumar/python-dotenv#468</a></li>
<li>Add python 3.12 and pypy3.10 to test suite by <a href="https://github.com/theskumar"><code>@theskumar</code></a> in <a href="https://redirect.github.com/theskumar/python-dotenv/pull/498">theskumar/python-dotenv#498</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/jankislinger"><code>@jankislinger</code></a> made their first contribution in <a href="https://redirect.github.com/theskumar/python-dotenv/pull/453">theskumar/python-dotenv#453</a></li>
<li><a href="https://github.com/samwyma"><code>@samwyma</code></a> made their first contribution in <a href="https://redirect.github.com/theskumar/python-dotenv/pull/456">theskumar/python-dotenv#456</a></li>
<li><a href="https://github.com/eumiro"><code>@eumiro</code></a> made their first contribution in <a href="https://redirect.github.com/theskumar/python-dotenv/pull/466">theskumar/python-dotenv#466</a></li>
<li><a href="https://github.com/Nicals"><code>@Nicals</code></a> made their first contribution in <a href="https://redirect.github.com/theskumar/python-dotenv/pull/474">theskumar/python-dotenv#474</a></li>
<li><a href="https://github.com/freddyaboulton"><code>@freddyaboulton</code></a> made their first contribution in <a href="https://redirect.github.com/theskumar/python-dotenv/pull/497">theskumar/python-dotenv#497</a></li>
<li><a href="https://github.com/Qwerty-133"><code>@Qwerty-133</code></a> made their first contribution in <a href="https://redirect.github.com/theskumar/python-dotenv/pull/468">theskumar/python-dotenv#468</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/theskumar/python-dotenv/compare/v1.0.0...v1.0.1">https://github.com/theskumar/python-dotenv/compare/v1.0.0...v1.0.1</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/theskumar/python-dotenv/blob/main/CHANGELOG.md">python-dotenv's changelog</a>.</em></p>
<blockquote>
<h2>[1.0.1] - 2024-01-23</h2>
<p><strong>Fixed</strong></p>
<ul>
<li>Gracefully handle code which has been imported from a zipfile (<a href="https://redirect.github.com/theskumar/python-dotenv/issues/456">#456</a> by [<a href="https://github.com/samwyma"><code>@samwyma</code></a>])</li>
<li>Allow modules using load_dotenv to be reloaded when launched in a separate thread (<a href="https://redirect.github.com/theskumar/python-dotenv/issues/497">#497</a> by [<a href="https://github.com/freddyaboulton"><code>@freddyaboulton</code></a>])</li>
<li>Fix file not closed after deletion, handle error in the rewrite function (<a href="https://redirect.github.com/theskumar/python-dotenv/issues/469">#469</a> by [<a href="https://github.com/Qwerty-133"><code>@Qwerty-133</code></a>])</li>
</ul>
<p><strong>Misc</strong></p>
<ul>
<li>Use pathlib.Path in tests (<a href="https://redirect.github.com/theskumar/python-dotenv/issues/466">#466</a> by [<a href="https://github.com/eumiro"><code>@eumiro</code></a>])</li>
<li>Fix year in release date in changelog.md (<a href="https://redirect.github.com/theskumar/python-dotenv/issues/454">#454</a> by [<a href="https://github.com/jankislinger"><code>@jankislinger</code></a>])</li>
<li>Use https in README links (<a href="https://redirect.github.com/theskumar/python-dotenv/issues/474">#474</a> by [<a href="https://github.com/Nicals"><code>@Nicals</code></a>])</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/theskumar/python-dotenv/commit/d6c0b9638349a7dd605d60ee555ff60421c1a594"><code>d6c0b96</code></a> Bumpversion 1.0.0 -> 1.0.1</li>
<li><a href="https://github.com/theskumar/python-dotenv/commit/42dc08664bc7cef185a139137a39126a030f272c"><code>42dc086</code></a> Update changelog for 1.0.1</li>
<li><a href="https://github.com/theskumar/python-dotenv/commit/b1eebbaaab2cf3e1c48fa5c7ad88cfb00e4b5e54"><code>b1eebba</code></a> Add python 3.12 and pypy3.10 to test runner</li>
<li><a href="https://github.com/theskumar/python-dotenv/commit/6ff139147559eff4d124c038ec5a4b60ffcf3033"><code>6ff1391</code></a> Fix temporary file is deleted before closing, in the rewrite function (<a href="https://redirect.github.com/theskumar/python-dotenv/issues/468">#468</a>)</li>
<li><a href="https://github.com/theskumar/python-dotenv/commit/0b94ac0822241eb526828cf506048fb0525d5c38"><code>0b94ac0</code></a> Allow modules using load_dotenv to be reloaded when launched in a separate th...</li>
<li><a href="https://github.com/theskumar/python-dotenv/commit/3ffcef60d10813b72ecf85d5941d51b0207cd40e"><code>3ffcef6</code></a> Use https in README links (<a href="https://redirect.github.com/theskumar/python-dotenv/issues/474">#474</a>)</li>
<li><a href="https://github.com/theskumar/python-dotenv/commit/be96be259c7eaf687360367de53e5e099aea48df"><code>be96be2</code></a> Use pathlib.Path in tests (<a href="https://redirect.github.com/theskumar/python-dotenv/issues/466">#466</a>)</li>
<li><a href="https://github.com/theskumar/python-dotenv/commit/137bc3dc0b8cf3d417a1e800c4065c526e3fb96a"><code>137bc3d</code></a> Gracefully handle code which has been imported from a zipfile (<a href="https://redirect.github.com/theskumar/python-dotenv/issues/456">#456</a>)</li>
<li><a href="https://github.com/theskumar/python-dotenv/commit/dd1af684f2586d2c2fdd722f9c45d3212e1e4e59"><code>dd1af68</code></a> FIx year in release in changelog (<a href="https://redirect.github.com/theskumar/python-dotenv/issues/453">#453</a>)</li>
<li>See full diff in <a href="https://github.com/theskumar/python-dotenv/compare/v1.0.0...v1.0.1">compare view</a></li>
</ul>
</details>
<br />
Updates `tox` from 4.12.0 to 4.12.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/tox-dev/tox/releases">tox's releases</a>.</em></p>
<blockquote>
<h2>4.12.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Fix tox version requirement in docs by <a href="https://github.com/Viicos"><code>@Viicos</code></a> in <a href="https://redirect.github.com/tox-dev/tox/pull/3184">tox-dev/tox#3184</a></li>
<li><a href="https://redirect.github.com/tox-dev/tox/issues/3165">#3165</a> fixed Tox failing with --installpkg and multi testenvs by <a href="https://github.com/Stefanhg"><code>@Stefanhg</code></a> in <a href="https://redirect.github.com/tox-dev/tox/pull/3186">tox-dev/tox#3186</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/Viicos"><code>@Viicos</code></a> made their first contribution in <a href="https://redirect.github.com/tox-dev/tox/pull/3184">tox-dev/tox#3184</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/tox-dev/tox/compare/4.12.0...4.12.1">https://github.com/tox-dev/tox/compare/4.12.0...4.12.1</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/tox-dev/tox/blob/main/docs/changelog.rst">tox's changelog</a>.</em></p>
<blockquote>
<h2>v4.12.1 (2024-01-16)</h2>
<p>Bugfixes - 4.12.1</p>
<pre><code>- Fixed bug where running with --installpkg and multiple envs could not clean up between tests (:issue:`3165`)
</code></pre>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/tox-dev/tox/commit/8eaf09f2e7d38c056819009a36f204d2c36d6981"><code>8eaf09f</code></a> release 4.12.1</li>
<li><a href="https://github.com/tox-dev/tox/commit/37b11d4636d184affa1f71c7454844b5c9cbc02b"><code>37b11d4</code></a> <a href="https://redirect.github.com/tox-dev/tox/issues/3165">#3165</a> fixed Tox failing with --installpkg and multi testenvs (<a href="https://redirect.github.com/tox-dev/tox/issues/3186">#3186</a>)</li>
<li><a href="https://github.com/tox-dev/tox/commit/fa390ce3ed99e3d7c27f0a82396aaa17ec25d611"><code>fa390ce</code></a> [pre-commit.ci] pre-commit autoupdate (<a href="https://redirect.github.com/tox-dev/tox/issues/3185">#3185</a>)</li>
<li><a href="https://github.com/tox-dev/tox/commit/3b403bca2bf397e3eee14140739a10fcf242df76"><code>3b403bc</code></a> Fix tox version requirement in docs (<a href="https://redirect.github.com/tox-dev/tox/issues/3184">#3184</a>)</li>
<li>See full diff in <a href="https://github.com/tox-dev/tox/compare/4.12.0...4.12.1">compare view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions
</details>
|
https://api.github.com/repos/pallets/flask/pulls/5401
|
2024-02-01T15:41:58Z
|
2024-02-03T15:40:28Z
|
2024-02-03T15:40:28Z
|
2024-02-18T00:06:24Z
| 983
|
pallets/flask
| 20,039
|
Prevent flowview from creating duplicated windows
|
diff --git a/libmproxy/console/flowview.py b/libmproxy/console/flowview.py
index 5271db4f6e..4304afb5d1 100644
--- a/libmproxy/console/flowview.py
+++ b/libmproxy/console/flowview.py
@@ -519,10 +519,10 @@ def keypress(self, size, key):
self._w.keypress(size, key)
elif key == "a":
self.flow.accept_intercept(self.master)
- self.master.view_flow(self.flow)
+ signals.flow_change.send(self, flow = self.flow)
elif key == "A":
self.master.accept_all()
- self.master.view_flow(self.flow)
+ signals.flow_change.send(self, flow = self.flow)
elif key == "d":
if self.state.flow_count() == 1:
self.master.view_flowlist()
|
When accepting intercepted flows with the 'a' or 'A' command in flowview, mitmproxy creates a new window every time, so I need to press the 'q' key several times to get back to the flow list.
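The fix replaces the direct `view_flow` call with the `flow_change` signal, so the already-open view refreshes in place. A minimal sketch of that blinker-style signal pattern (receiver name is illustrative):
```python
import blinker

flow_change = blinker.Signal()

def refresh_existing_view(sender, flow):
    # Update the flow view that is already on screen instead of
    # pushing a new window onto the stack.
    print("refreshing view for", flow)

flow_change.connect(refresh_existing_view)
flow_change.send(None, flow="example-flow")
```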
|
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/791
|
2015-10-04T05:25:11Z
|
2015-10-28T12:10:38Z
|
2015-10-28T12:10:38Z
|
2015-10-28T12:10:38Z
| 192
|
mitmproxy/mitmproxy
| 28,094
|
Updating the link to the newest version of PyMC
|
diff --git a/README.md b/README.md
index 3b63582e8..ce2cfe039 100644
--- a/README.md
+++ b/README.md
@@ -901,7 +901,7 @@ A curated list of awesome Python frameworks, libraries and software. Inspired by
* [NetworkX](https://networkx.github.io/) - A high-productivity software for complex networks.
* [Pandas](http://pandas.pydata.org/) - A library providing high-performance, easy-to-use data structures and data analysis tools.
* [Open Mining](https://github.com/avelino/mining) - Business Intelligence (BI) in Python (Pandas web interface)
-* [PyMC](https://github.com/pymc-devs/pymc) - Markov Chain Monte Carlo sampling toolkit.
+* [PyMC](https://github.com/pymc-devs/pymc3) - Markov Chain Monte Carlo sampling toolkit.
* [zipline](https://github.com/quantopian/zipline) - A Pythonic algorithmic trading library.
* [PyDy](https://pydy.org/) - Short for Python Dynamics, used to assist with workflow in the modeling of dynamic motion based around NumPy, SciPy, IPython, and matplotlib.
* [SymPy](https://github.com/sympy/sympy) - A Python library for symbolic mathematics.
|
The development version of PyMC (version 3) has been moved to its own repository called pymc3.
|
https://api.github.com/repos/vinta/awesome-python/pulls/410
|
2015-07-01T23:00:47Z
|
2015-07-02T03:43:57Z
|
2015-07-02T03:43:57Z
|
2015-11-12T20:33:24Z
| 299
|
vinta/awesome-python
| 27,283
|
Allow output folder to be a symbolic link
|
diff --git a/nodes.py b/nodes.py
index 0b8be76592..d952d1c572 100644
--- a/nodes.py
+++ b/nodes.py
@@ -752,7 +752,7 @@ def map_filename(filename):
full_output_folder = os.path.join(self.output_dir, subfolder)
- if os.path.commonpath((self.output_dir, os.path.realpath(full_output_folder))) != self.output_dir:
+ if os.path.commonpath((self.output_dir, os.path.abspath(full_output_folder))) != self.output_dir:
print("Saving image outside the output folder is not allowed.")
return {}
diff --git a/server.py b/server.py
index 73429accaa..e4f688cf73 100644
--- a/server.py
+++ b/server.py
@@ -125,7 +125,7 @@ async def view_image(request):
output_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), type)
if "subfolder" in request.rel_url.query:
full_output_dir = os.path.join(output_dir, request.rel_url.query["subfolder"])
- if os.path.commonpath((os.path.realpath(full_output_dir), output_dir)) != output_dir:
+ if os.path.commonpath((os.path.abspath(full_output_dir), output_dir)) != output_dir:
return web.Response(status=403)
output_dir = full_output_dir
|
`realpath()` expands symbolic links; this PR changes it to `abspath()`, which preserves them, allowing the output folder to be a symbolic link pointing to a folder outside the ComfyUI folder.
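A minimal illustration of the difference (hypothetical paths; the divergence only appears if `output` actually is a symlink on disk):
```
import os

# Suppose /home/user/ComfyUI/output is a symlink to /mnt/storage/comfy_out.
output_dir = "/home/user/ComfyUI/output"
full = os.path.join(output_dir, "subfolder")

# realpath() resolves the symlink, so the commonpath check against
# output_dir fails:  /mnt/storage/comfy_out/subfolder
print(os.path.realpath(full))

# abspath() only normalizes the path, leaving the symlink untouched:
#                     /home/user/ComfyUI/output/subfolder
print(os.path.abspath(full))
```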
|
https://api.github.com/repos/comfyanonymous/ComfyUI/pulls/238
|
2023-03-24T00:31:50Z
|
2023-03-25T21:43:40Z
|
2023-03-25T21:43:40Z
|
2023-03-26T05:16:49Z
| 304
|
comfyanonymous/ComfyUI
| 17,702
|
Merge dev branch
|
diff --git a/modules/presets.py b/modules/presets.py
index 2a4a4dde3e..2686355d8e 100644
--- a/modules/presets.py
+++ b/modules/presets.py
@@ -120,9 +120,12 @@ def generate_preset_yaml(state):
defaults = default_preset()
data = {k: state[k] for k in presets_params()}
- # Remove entries that are identical to the defaults
+ # Remove entries that are identical to the defaults.
+ # sampler_priority is always saved because it is experimental
+ # and the default order may change.
+
for k in list(data.keys()):
- if data[k] == defaults[k]:
+ if data[k] == defaults[k] and k != 'sampler_priority':
del data[k]
return yaml.dump(data, sort_keys=False)
diff --git a/modules/sampler_hijack.py b/modules/sampler_hijack.py
index 9701b03434..6f8de41688 100644
--- a/modules/sampler_hijack.py
+++ b/modules/sampler_hijack.py
@@ -428,16 +428,15 @@ def custom_sort_key(obj):
# Sort the list using the custom key function
warpers = sorted(warpers, key=custom_sort_key)
+ if shared.args.verbose:
+ logger.info("WARPERS=")
+ pprint.PrettyPrinter(indent=4, sort_dicts=False).pprint([x.__class__.__name__ for x in warpers])
if normalize is not None:
warpers.append(normalize)
warpers.append(SpyLogitsWarper())
warpers = LogitsProcessorList(warpers)
- if shared.args.verbose:
- logger.info("WARPERS=")
- pprint.PrettyPrinter(indent=4, sort_dicts=False).pprint([x.__class__.__name__ for x in warpers])
-
return warpers
|
https://api.github.com/repos/oobabooga/text-generation-webui/pulls/5453
|
2024-02-06T14:49:54Z
|
2024-02-06T14:50:21Z
|
2024-02-06T14:50:21Z
|
2024-02-06T14:50:21Z
| 433
|
oobabooga/text-generation-webui
| 26,741
|
|
Catch occasional protocol errors in regular connect
|
diff --git a/mitmproxy/proxy/protocol/http.py b/mitmproxy/proxy/protocol/http.py
index f3e0f514cd..50d64e1743 100644
--- a/mitmproxy/proxy/protocol/http.py
+++ b/mitmproxy/proxy/protocol/http.py
@@ -182,6 +182,17 @@ def handle_regular_connect(self, f):
try:
self.set_server((f.request.host, f.request.port))
+
+ if f.response:
+ resp = f.response
+ else:
+ resp = http.make_connect_response(f.request.data.http_version)
+
+ self.send_response(resp)
+
+ if is_ok(resp.status_code):
+ layer = self.ctx.next_layer(self)
+ layer()
except (
exceptions.ProtocolException, exceptions.NetlibException
) as e:
@@ -192,17 +203,6 @@ def handle_regular_connect(self, f):
self.channel.ask("error", f)
return False
- if f.response:
- resp = f.response
- else:
- resp = http.make_connect_response(f.request.data.http_version)
-
- self.send_response(resp)
-
- if is_ok(resp.status_code):
- layer = self.ctx.next_layer(self)
- layer()
-
return False
def handle_upstream_connect(self, f):
|
Fixes #1843 and #1847
|
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/1860
|
2016-12-15T21:58:14Z
|
2016-12-15T23:16:34Z
|
2016-12-15T23:16:34Z
|
2017-03-19T00:28:29Z
| 294
|
mitmproxy/mitmproxy
| 27,710
|
remove six and __future__ imports
|
diff --git a/README.rst b/README.rst
index 8b778662f73..4fec59b8648 100644
--- a/README.rst
+++ b/README.rst
@@ -48,7 +48,7 @@ should know:
Supported systems
-----------------
-We currently support Linux and OS X running Python 2.7 or 3.5 -- 3.7.
+We currently support Linux and OS X running Python 3.5 -- 3.8
Windows support is experimental - algorithmic, toy_text, classic_control and atari *should* work on Windows (see next section for installation instructions); nevertheless, proceed at your own risk.
Installation
diff --git a/examples/agents/cem.py b/examples/agents/cem.py
index e76c4b9335e..6a7332ee7c2 100644
--- a/examples/agents/cem.py
+++ b/examples/agents/cem.py
@@ -1,9 +1,7 @@
-from __future__ import print_function
-
import gym
from gym import wrappers, logger
import numpy as np
-from six.moves import cPickle as pickle
+import pickle
import json, sys, os
from os import path
from _policies import BinaryActionLinearPolicy # Different file so it can be unpickled
diff --git a/examples/agents/keyboard_agent.py b/examples/agents/keyboard_agent.py
index 71142871804..7a24b05f0b3 100644
--- a/examples/agents/keyboard_agent.py
+++ b/examples/agents/keyboard_agent.py
@@ -1,6 +1,4 @@
#!/usr/bin/env python
-from __future__ import print_function
-
import sys, gym, time
#
diff --git a/gym/envs/algorithmic/algorithmic_env.py b/gym/envs/algorithmic/algorithmic_env.py
index 4a7ed75368c..35da87a8dfc 100644
--- a/gym/envs/algorithmic/algorithmic_env.py
+++ b/gym/envs/algorithmic/algorithmic_env.py
@@ -36,7 +36,7 @@
import sys
from contextlib import closing
import numpy as np
-from six import StringIO
+from io import StringIO
class AlgorithmicEnv(Env):
diff --git a/gym/envs/algorithmic/duplicated_input.py b/gym/envs/algorithmic/duplicated_input.py
index a6cc86e927e..18d06f255a0 100644
--- a/gym/envs/algorithmic/duplicated_input.py
+++ b/gym/envs/algorithmic/duplicated_input.py
@@ -2,7 +2,6 @@
Task is to return every nth character from the input tape.
http://arxiv.org/abs/1511.07275
"""
-from __future__ import division
from gym.envs.algorithmic import algorithmic_env
diff --git a/gym/envs/algorithmic/reversed_addition.py b/gym/envs/algorithmic/reversed_addition.py
index 976412d8b90..a14b6b52b73 100644
--- a/gym/envs/algorithmic/reversed_addition.py
+++ b/gym/envs/algorithmic/reversed_addition.py
@@ -1,4 +1,3 @@
-from __future__ import division
from gym.envs.algorithmic import algorithmic_env
diff --git a/gym/envs/classic_control/rendering.py b/gym/envs/classic_control/rendering.py
index 84d0ca3167b..c80e6e453a4 100644
--- a/gym/envs/classic_control/rendering.py
+++ b/gym/envs/classic_control/rendering.py
@@ -1,9 +1,7 @@
"""
2D rendering framework
"""
-from __future__ import division
import os
-import six
import sys
if "Apple" in sys.version:
@@ -46,7 +44,7 @@ def get_display(spec):
"""
if spec is None:
return None
- elif isinstance(spec, six.string_types):
+ elif isinstance(spec, str):
return pyglet.canvas.Display(spec)
else:
raise error.Error('Invalid display specification: {}. (Must be a string like :0 or None.)'.format(spec))
diff --git a/gym/envs/tests/test_envs.py b/gym/envs/tests/test_envs.py
index d1c0ad0a71f..8c6abd7fb63 100644
--- a/gym/envs/tests/test_envs.py
+++ b/gym/envs/tests/test_envs.py
@@ -51,7 +51,6 @@ def test_random_rollout():
def test_env_render_result_is_immutable():
- from six import string_types
environs = [
envs.make('Taxi-v3'),
envs.make('FrozenLake-v0'),
@@ -61,5 +60,5 @@ def test_env_render_result_is_immutable():
for env in environs:
env.reset()
output = env.render(mode='ansi')
- assert isinstance(output, string_types)
+ assert isinstance(output, str)
env.close()
diff --git a/gym/envs/tests/test_envs_semantics.py b/gym/envs/tests/test_envs_semantics.py
index 4843375f677..4ebb6f14bb9 100644
--- a/gym/envs/tests/test_envs_semantics.py
+++ b/gym/envs/tests/test_envs_semantics.py
@@ -4,7 +4,6 @@
"""
-from __future__ import unicode_literals
import json
import hashlib
import os
diff --git a/gym/envs/toy_text/frozen_lake.py b/gym/envs/toy_text/frozen_lake.py
index a4ab92ea42a..36ec7d170c7 100644
--- a/gym/envs/toy_text/frozen_lake.py
+++ b/gym/envs/toy_text/frozen_lake.py
@@ -2,7 +2,7 @@
from contextlib import closing
import numpy as np
-from six import StringIO, b
+from io import StringIO
from gym import utils
from gym.envs.toy_text import discrete
diff --git a/gym/envs/toy_text/taxi.py b/gym/envs/toy_text/taxi.py
index abe361c6f20..9a7deadd3f6 100644
--- a/gym/envs/toy_text/taxi.py
+++ b/gym/envs/toy_text/taxi.py
@@ -1,6 +1,6 @@
import sys
from contextlib import closing
-from six import StringIO
+from io import StringIO
from gym import utils
from gym.envs.toy_text import discrete
import numpy as np
diff --git a/gym/utils/colorize.py b/gym/utils/colorize.py
index da70184991a..0e7fe054c84 100644
--- a/gym/utils/colorize.py
+++ b/gym/utils/colorize.py
@@ -21,15 +21,10 @@ def colorize(string, color, bold=False, highlight = False):
blue, magenta, cyan, white, crimson
"""
- # Import six here so that `utils` has no import-time dependencies.
- # We want this since we use `utils` during our import-time sanity checks
- # that verify that our dependencies (including six) are actually present.
- import six
-
attr = []
num = color2num[color]
if highlight: num += 10
- attr.append(six.u(str(num)))
- if bold: attr.append(six.u('1'))
- attrs = six.u(';').join(attr)
- return six.u('\x1b[%sm%s\x1b[0m') % (attrs, string)
+ attr.append(str(num))
+ if bold: attr.append('1')
+ attrs = ';'.join(attr)
+ return '\x1b[%sm%s\x1b[0m' % (attrs, string)
diff --git a/gym/utils/seeding.py b/gym/utils/seeding.py
index 39fe3422717..dfd7076c213 100644
--- a/gym/utils/seeding.py
+++ b/gym/utils/seeding.py
@@ -2,14 +2,13 @@
import numpy as np
import os
import random as _random
-from six import integer_types
import struct
import sys
from gym import error
def np_random(seed=None):
- if seed is not None and not (isinstance(seed, integer_types) and 0 <= seed):
+ if seed is not None and not (isinstance(seed, int) and 0 <= seed):
raise error.Error('Seed must be a non-negative integer or omitted, not {}'.format(seed))
seed = create_seed(seed)
@@ -58,7 +57,7 @@ def create_seed(a=None, max_bytes=8):
a = a.encode('utf8')
a += hashlib.sha512(a).digest()
a = _bigint_from_bytes(a[:max_bytes])
- elif isinstance(a, integer_types):
+ elif isinstance(a, int):
a = a % 2**(8 * max_bytes)
else:
raise error.Error('Invalid type for seed: {} ({})'.format(type(a), a))
diff --git a/gym/wrappers/monitor.py b/gym/wrappers/monitor.py
index e26f82bea18..b2b6733607a 100644
--- a/gym/wrappers/monitor.py
+++ b/gym/wrappers/monitor.py
@@ -1,7 +1,7 @@
import gym
from gym import Wrapper
from gym import error, version, logger
-import os, json, numpy as np, six
+import os, json, numpy as np
from gym.wrappers.monitoring import stats_recorder, video_recorder
from gym.utils import atomic_write, closer
from gym.utils.json_utils import json_encode_np
@@ -66,10 +66,7 @@ def _start(self, directory, video_callable=None, force=False, resume=False,
if not os.path.exists(directory):
logger.info('Creating monitor directory %s', directory)
- if six.PY3:
- os.makedirs(directory, exist_ok=True)
- else:
- os.makedirs(directory)
+ os.makedirs(directory, exist_ok=True)
if video_callable is None:
video_callable = capped_cubic_video_schedule
diff --git a/gym/wrappers/monitoring/video_recorder.py b/gym/wrappers/monitoring/video_recorder.py
index 7af82bf026f..fb14c709094 100644
--- a/gym/wrappers/monitoring/video_recorder.py
+++ b/gym/wrappers/monitoring/video_recorder.py
@@ -5,8 +5,7 @@
import os.path
import distutils.spawn, distutils.version
import numpy as np
-from six import StringIO
-import six
+from io import StringIO
from gym import error, logger
def touch(path):
@@ -182,9 +181,8 @@ def __init__(self, output_path, frames_per_sec):
self.frames = []
def capture_frame(self, frame):
- from six import string_types
string = None
- if isinstance(frame, string_types):
+ if isinstance(frame, str):
string = frame
elif isinstance(frame, StringIO):
string = frame.getvalue()
@@ -193,10 +191,10 @@ def capture_frame(self, frame):
frame_bytes = string.encode('utf-8')
- if frame_bytes[-1:] != six.b('\n'):
+ if frame_bytes[-1:] != b'\n':
raise error.InvalidFrame('Frame must end with a newline: """{}"""'.format(string))
- if six.b('\r') in frame_bytes:
+ if b'\r' in frame_bytes:
raise error.InvalidFrame('Frame contains carriage returns (only newlines are allowed: """{}"""'.format(string))
self.frames.append(frame_bytes)
@@ -208,14 +206,14 @@ def close(self):
# Turn frames into events: clear screen beforehand
# https://rosettacode.org/wiki/Terminal_control/Clear_the_screen#Python
# https://rosettacode.org/wiki/Terminal_control/Cursor_positioning#Python
- clear_code = six.b("%c[2J\033[1;1H" % (27))
+ clear_code = b"%c[2J\033[1;1H" % (27)
# Decode the bytes as UTF-8 since JSON may only contain UTF-8
- events = [ (frame_duration, (clear_code+frame.replace(six.b('\n'),six.b('\r\n'))).decode('utf-8')) for frame in self.frames ]
+ events = [ (frame_duration, (clear_code+frame.replace(b'\n', b'\r\n')).decode('utf-8')) for frame in self.frames ]
# Calculate frame size from the largest frames.
# Add some padding since we'll get cut off otherwise.
- height = max([frame.count(six.b('\n')) for frame in self.frames]) + 1
- width = max([max([len(line) for line in frame.split(six.b('\n'))]) for frame in self.frames]) + 2
+ height = max([frame.count(b'\n') for frame in self.frames]) + 1
+ width = max([max([len(line) for line in frame.split(b'\n')]) for frame in self.frames]) + 2
data = {
"version": 1,
diff --git a/scripts/generate_json.py b/scripts/generate_json.py
index 3cc938ccb89..49bc9e19432 100644
--- a/scripts/generate_json.py
+++ b/scripts/generate_json.py
@@ -1,4 +1,3 @@
-from __future__ import unicode_literals
from gym import envs, spaces, logger
import json
import os
diff --git a/setup.py b/setup.py
index 9f50f5e4033..0feeb85e7fa 100644
--- a/setup.py
+++ b/setup.py
@@ -28,7 +28,7 @@
if package.startswith('gym')],
zip_safe=False,
install_requires=[
- 'scipy', 'numpy>=1.10.4', 'six', 'pyglet>=1.4.0,<=1.5.0', 'cloudpickle>=1.2.0,<1.4.0',
+ 'scipy', 'numpy>=1.10.4', 'pyglet>=1.4.0,<=1.5.0', 'cloudpickle>=1.2.0,<1.4.0',
'enum34~=1.1.6;python_version<"3.4"',
],
extras_require=extras,
|
fixes https://github.com/openai/gym/issues/1823
|
https://api.github.com/repos/openai/gym/pulls/1840
|
2020-03-06T22:25:53Z
|
2020-04-10T22:10:35Z
|
2020-04-10T22:10:35Z
|
2020-04-10T22:10:39Z
| 3,293
|
openai/gym
| 5,497
|
Make refiner switchover based on model timesteps instead of sampling steps
|
diff --git a/modules/sd_samplers_cfg_denoiser.py b/modules/sd_samplers_cfg_denoiser.py
index a73d3b03692..93581c9acc6 100644
--- a/modules/sd_samplers_cfg_denoiser.py
+++ b/modules/sd_samplers_cfg_denoiser.py
@@ -152,7 +152,7 @@ def forward(self, x, sigma, uncond, cond, cond_scale, s_min_uncond, image_cond):
if state.interrupted or state.skipped:
raise sd_samplers_common.InterruptedException
- if sd_samplers_common.apply_refiner(self):
+ if sd_samplers_common.apply_refiner(self, sigma):
cond = self.sampler.sampler_extra_args['cond']
uncond = self.sampler.sampler_extra_args['uncond']
diff --git a/modules/sd_samplers_common.py b/modules/sd_samplers_common.py
index 6bd38e12a41..045b9e2fe06 100644
--- a/modules/sd_samplers_common.py
+++ b/modules/sd_samplers_common.py
@@ -155,8 +155,17 @@ def torchsde_randn(size, dtype, device, seed):
replace_torchsde_browinan()
-def apply_refiner(cfg_denoiser):
- completed_ratio = cfg_denoiser.step / cfg_denoiser.total_steps
+def apply_refiner(cfg_denoiser, sigma):
+ if opts.refiner_switch_by_sample_steps:
+ completed_ratio = cfg_denoiser.step / cfg_denoiser.total_steps
+ else:
+ # torch.max(sigma) only to handle rare case where we might have different sigmas in the same batch
+ try:
+ timestep = torch.argmin(torch.abs(cfg_denoiser.inner_model.sigmas - torch.max(sigma)))
+ except AttributeError: # for samplers that dont use sigmas (DDIM) sigma is actually the timestep
+ timestep = torch.max(sigma).to(dtype=int)
+ completed_ratio = (999 - timestep) / 1000
+
refiner_switch_at = cfg_denoiser.p.refiner_switch_at
refiner_checkpoint_info = cfg_denoiser.p.refiner_checkpoint_info
diff --git a/modules/shared_options.py b/modules/shared_options.py
index bb3752ba65f..e17eed512a8 100644
--- a/modules/shared_options.py
+++ b/modules/shared_options.py
@@ -227,7 +227,8 @@
"dont_fix_second_order_samplers_schedule": OptionInfo(False, "Do not fix prompt schedule for second order samplers."),
"hires_fix_use_firstpass_conds": OptionInfo(False, "For hires fix, calculate conds of second pass using extra networks of first pass."),
"use_old_scheduling": OptionInfo(False, "Use old prompt editing timelines.", infotext="Old prompt editing timelines").info("For [red:green:N]; old: If N < 1, it's a fraction of steps (and hires fix uses range from 0 to 1), if N >= 1, it's an absolute number of steps; new: If N has a decimal point in it, it's a fraction of steps (and hires fix uses range from 1 to 2), othewrwise it's an absolute number of steps"),
- "use_downcasted_alpha_bar": OptionInfo(False, "Downcast model alphas_cumprod to fp16 before sampling. For reproducing old seeds.", infotext="Downcast alphas_cumprod")
+ "use_downcasted_alpha_bar": OptionInfo(False, "Downcast model alphas_cumprod to fp16 before sampling. For reproducing old seeds.", infotext="Downcast alphas_cumprod"),
+ "refiner_switch_by_sample_steps": OptionInfo(False, "Switch to refiner by sampling steps instead of model timesteps. Old behavior for refiner.", infotext="Refiner switch by sampling steps")
}))
options_templates.update(options_section(('interrogate', "Interrogate"), {
|
## Description
* This aligns the default refiner switchover behavior with model timesteps instead of sampling steps. This is easier to use: setting the refiner switchover to 0.8 (for a refiner trained on the last 200 timesteps, like SDXL's) will now always switch to the refiner as soon as we're sampling from timesteps the refiner was trained on, whereas previously img2img or different noise schedules could lead to unexpected model behavior (examples below).
* I changed this to work off of the existing refiner switchover slider. The new default behavior is to derive timesteps from sigma (or, in DDIM's case, take them directly) and use that to determine whether it is time to switch over; a minimal sketch of this derivation follows this list. Old behavior is supported by a compatibility option.
* Implements #14970
* ~~edit: converted to draft while I troubleshoot an off-by-one error~~ There is a bug where model alphas_cumprod changes (the compatibility casting option or zero terminal SNR) are reverted during the timestep at which the refiner is applied. This will have to be fixed separately.
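A minimal sketch of that derivation, mirroring the diff above (`sigmas` is assumed to be the model's per-timestep noise schedule, with timestep 999 at the start of sampling):
```
import torch

def completed_ratio_from_sigma(sigmas: torch.Tensor, sigma: torch.Tensor) -> float:
    # Map the current sigma back to the nearest model timestep (0..999).
    # torch.max(sigma) handles the rare case of different sigmas in one batch.
    timestep = torch.argmin(torch.abs(sigmas - torch.max(sigma)))
    # Timestep 999 is the start of sampling and 0 the end, so invert it
    # to get a 0..1 "how far along are we" ratio.
    return float((999 - timestep) / 1000)
```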
## Screenshots/videos:
Examples of old behavior in txt2img:

The refiner model used here was trained for the last 200 timesteps. The Karras schedule type, especially with zero terminal SNR, drastically changes the model timesteps called during this 50-step sampling process, which results in the refiner being switched in too early at what is actually the correct setting for the default noise schedule. The effectively correct setting for Karras samplers is 0.88 for this refiner under the old configuration.
Now for the fixed version:

With this fix, the behavior of the refiner is consistent with the same settings across different schedules, and it no longer triggers too early. 0.8 is reliably a correct setting.
Examples of old behavior in img2img/inpainting (inpainting mask is over the head, adding a hat, 0.75 denoising strength):

This one is more complicated, and the differences are subtle. The effectively correct setting for the normal schedule is to switch over at 0.75, and for Karras it is 0.85; using the expected setting of 0.8 is therefore too late for normal schedules and too early for Karras ones. As denoising strength gets lower, the problem becomes more severe.

This grid shows the behavior after the fix; switching at 0.8 is now correct for both.
## Checklist:
- [x] I have read [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
- [x] I have performed a self-review of my own code
- [x] My code follows the [style guidelines](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing#code-style)
- [x] My code passes [tests](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Tests)
|
https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/14978
|
2024-02-20T21:57:51Z
|
2024-03-02T04:24:44Z
|
2024-03-02T04:24:44Z
|
2024-03-02T04:24:45Z
| 884
|
AUTOMATIC1111/stable-diffusion-webui
| 40,351
|
Fix alphabetical order
|
diff --git a/README.md b/README.md
index 6881181f93..373bcc7b66 100644
--- a/README.md
+++ b/README.md
@@ -178,7 +178,6 @@ API | Description | Auth | HTTPS | CORS |
### Cryptocurrency
API | Description | Auth | HTTPS | CORS |
|---|---|---|---|---|
-| [coinlayer](https://coinlayer.com) | Real-time Crypto Currency Exchange Rates | `apiKey` | Yes | Unknown |
| [Binance](https://github.com/binance/binance-spot-api-docs) | Exchange for Trading Cryptocurrencies based in China | `apiKey` | Yes | Unknown |
| [BitcoinAverage](https://apiv2.bitcoinaverage.com/) | Digital Asset Price Data for the blockchain industry | `apiKey` | Yes | Unknown |
| [BitcoinCharts](https://bitcoincharts.com/about/exchanges/) | Financial and Technical Data related to the Bitcoin Network | No | Yes | Unknown |
@@ -193,6 +192,7 @@ API | Description | Auth | HTTPS | CORS |
| [CoinDesk](http://www.coindesk.com/api/) | Bitcoin Price Index | No | No | Unknown |
| [CoinGecko](http://www.coingecko.com/api) | Cryptocurrency Price, Market, and Developer/Social Data | No | Yes | Yes |
| [Coinigy](https://coinigy.docs.apiary.io) | Interacting with Coinigy Accounts and Exchange Directly | `apiKey` | Yes | Unknown |
+| [coinlayer](https://coinlayer.com) | Real-time Crypto Currency Exchange Rates | `apiKey` | Yes | Unknown |
| [Coinlib](https://coinlib.io/apidocs) | Crypto Currency Prices | `apiKey` | Yes | Unknown |
| [Coinlore](https://www.coinlore.com/cryptocurrency-data-api) | Cryptocurrencies prices, volume and more | No | Yes | Unknown |
| [CoinMarketCap](https://coinmarketcap.com/api/) | Cryptocurrencies Prices | `apiKey` | Yes | Unknown |
@@ -322,13 +322,13 @@ API | Description | Auth | HTTPS | CORS |
### Environment
API | Description | Auth | HTTPS | CORS |
|---|---|---|---|---|
-| [weatherstack](https://weatherstack.com/) | Real-Time & Historical World Weather Data API | `apiKey` | Yes | Unknown |
| [AirVisual](https://airvisual.com/api) | Air quality and weather data | `apiKey` | Yes | Unknown |
| [GrünstromIndex](https://www.corrently.de/hintergrund/gruenstromindex/index.html) | Green Power Index for Germany (Grünstromindex/GSI) | No | No | Yes |
| [OpenAQ](https://docs.openaq.org/) | Open air quality data | `apiKey` | Yes | Unknown |
| [PM25.in](http://www.pm25.in/api_doc) | Air quality of China | `apiKey` | No | Unknown |
| [PVWatts](https://developer.nrel.gov/docs/solar/pvwatts/v6/) | Energy production photovoltaic (PV) energy systems | `apiKey` | Yes | Unknown |
| [UK Carbon Intensity](https://carbon-intensity.github.io/api-definitions/#carbon-intensity-api-v1-0-0) | The Official Carbon Intensity API for Great Britain developed by National Grid | No | Yes | Unknown |
+| [weatherstack](https://weatherstack.com/) | Real-Time & Historical World Weather Data API | `apiKey` | Yes | Unknown |
**[⬆ Back to Index](#index)**
### Events
@@ -342,10 +342,10 @@ API | Description | Auth | HTTPS | CORS |
### Finance
API | Description | Auth | HTTPS | CORS |
|---|---|---|---|---|
-| [marketstack](https://marketstack.com/) | Real-Time, Intraday & Historical Market Data API | `apiKey` | Yes | Unknown |
| [Alpha Vantage](https://www.alphavantage.co/) | Realtime and historical stock data | `apiKey` | Yes | Unknown |
| [IEX Cloud](https://iexcloud.io/docs/api/) | Realtime & Historical Stock and Market Data | `apiKey` | Yes | Yes |
| [IG](https://labs.ig.com/gettingstarted) | Spreadbetting and CFD Market Data | `apiKey` | Yes | Unknown |
+| [marketstack](https://marketstack.com/) | Real-Time, Intraday & Historical Market Data API | `apiKey` | Yes | Unknown |
| [Plaid](https://plaid.com/) | Connect with users’ bank accounts and access transaction data | `apiKey` | Yes | Unknown |
| [Razorpay IFSC](https://ifsc.razorpay.com/) | Indian Financial Systems Code (Bank Branch Codes) | No | Yes | Unknown |
| [Tradier](https://developer.tradier.com) | US equity/option market data (delayed, intraday, historical) | `OAuth` | Yes | Yes |
@@ -420,7 +420,6 @@ API | Description | Auth | HTTPS | CORS |
### Geocoding
API | Description | Auth | HTTPS | CORS |
|---|---|---|---|---|
-| [positionstack](https://positionstack.com/) | Forward & Reverse Batch Geocoding REST API | `apiKey` | Yes | Unknown |
| [adresse.data.gouv.fr](https://adresse.data.gouv.fr) | Address database of France, geocoding and reverse | No | Yes | Unknown |
| [Battuta](http://battuta.medunes.net) | A (country/region/city) in-cascade location API | `apiKey` | No | Unknown |
| [Bing Maps](https://www.microsoft.com/maps/) | Create/customize digital maps based on Bing Maps data | `apiKey` | Yes | Unknown |
@@ -447,9 +446,9 @@ API | Description | Auth | HTTPS | CORS |
| [IP Vigilante](https://www.ipvigilante.com/) | Free IP Geolocation API | No | Yes | Unknown |
| [IP2Location](https://www.ip2location.com/web-service/ip2location) | IP geolocation web service to get more than 55 parameters | `apiKey` | Yes | Unknown |
| [IP2Proxy](https://www.ip2location.com/web-service/ip2proxy) | Detect proxy and VPN using IP address | `apiKey` | Yes | Unknown |
+| [ipapi](https://ipapi.com/) | Real-time Geolocation & Reverse IP Lookup REST API | `apiKey` | Yes | Unknown |
| [IPGeolocationAPI.com](https://ipgeolocationapi.com/) | Locate your visitors by IP with country details | No | Yes | Yes |
| [IPInfoDB](https://ipinfodb.com/api) | Free Geolocation tools and APIs for country, region, city and time zone lookup by IP address | `apiKey` | Yes | Unknown |
-| [ipapi](https://ipapi.com/) | Real-time Geolocation & Reverse IP Lookup REST API | `apiKey` | Yes | Unknown |
| [ipstack](https://ipstack.com/) | Locate and identify website visitors by IP address | `apiKey` | Yes | Unknown |
| [LocationIQ](https://locationiq.org/docs/) | Provides forward/reverse geocoding and batch geocoding | `apiKey` | Yes | Yes |
| [Mapbox](https://www.mapbox.com/developers/) | Create/customize beautiful digital maps | `apiKey` | Yes | Unknown |
@@ -458,6 +457,7 @@ API | Description | Auth | HTTPS | CORS |
| [OnWater](https://onwater.io/) | Determine if a lat/lon is on water or land | No | Yes | Unknown |
| [OpenCage](https://opencagedata.com) | Forward and reverse geocoding using open data | `apiKey` | Yes | Yes |
| [OpenStreetMap](http://wiki.openstreetmap.org/wiki/API) | Navigation, geolocation and geographical data | `OAuth` | No | Unknown |
+| [positionstack](https://positionstack.com/) | Forward & Reverse Batch Geocoding REST API | `apiKey` | Yes | Unknown |
| [PostcodeData.nl](http://api.postcodedata.nl/v1/postcode/?postcode=1211EP&streetnumber=60&ref=domeinnaam.nl&type=json) | Provide geolocation data based on postcode for Dutch addresses | No | No | Unknown |
| [Postcodes.io](https://postcodes.io) | Postcode lookup & Geolocation for the UK | No | Yes | Yes |
| [REST Countries](https://restcountries.eu) | Get information about countries via a RESTful API | No | Yes | Unknown |
@@ -588,11 +588,11 @@ API | Description | Auth | HTTPS | CORS |
### News
API | Description | Auth | HTTPS | CORS |
|---|---|---|---|---|
-| [mediastack](https://mediastack.com/) | Free, Simple REST API for Live News & Blog Articles | `apiKey` | Yes | Unknown |
| [Associated Press](https://developer.ap.org/) | Search for news and metadata from Associated Press | `apiKey` | Yes | Unknown |
| [Chronicling America](http://chroniclingamerica.loc.gov/about/api/) | Provides access to millions of pages of historic US newspapers from the Library of Congress | No | No | Unknown |
| [Currents](https://currentsapi.services/) | Latest news published in various news sources, blogs and forums | `apiKey` | Yes | Yes |
| [Feedbin](https://github.com/feedbin/feedbin-api) | RSS reader | `OAuth` | Yes | Unknown |
+| [mediastack](https://mediastack.com/) | Free, Simple REST API for Live News & Blog Articles | `apiKey` | Yes | Unknown |
| [New York Times](https://developer.nytimes.com/) | Provides news | `apiKey` | Yes | Unknown |
| [News](https://newsapi.org/) | Headlines currently published on a range of news sources and blogs | `apiKey` | Yes | Unknown |
| [NPR One](http://dev.npr.org/api/) | Personalized news listening experience from NPR | `OAuth` | Yes | Unknown |
@@ -805,11 +805,11 @@ API | Description | Auth | HTTPS | CORS |
### Text Analysis
API | Description | Auth | HTTPS | CORS |
|---|---|---|---|---|
-| [languagelayer](https://languagelayer.com/) | Language Detection JSON API supporting 173 languages | `OAuth` | Yes | Unknown |
| [Aylien Text Analysis](https://docs.aylien.com/textapi/#getting-started) | A collection of information retrieval and natural language APIs | `apiKey` | Yes | Unknown |
| [Cloudmersive Natural Language Processing](https://www.cloudmersive.com/nlp-api) | Natural language processing and text analysis | `apiKey` | Yes | Yes |
| [Detect Language](https://detectlanguage.com/) | Detects text language | `apiKey` | Yes | Unknown |
| [Google Cloud Natural](https://cloud.google.com/natural-language/docs/) | Natural language understanding technology, including sentiment, entity and syntax analysis | `apiKey` | Yes | Unknown |
+| [languagelayer](https://languagelayer.com/) | Language Detection JSON API supporting 173 languages | `OAuth` | Yes | Unknown |
| [Semantira](https://semantria.readme.io/docs) | Text Analytics with sentiment analysis, categorization & named entity extraction | `OAuth` | Yes | Unknown |
| [Watson Natural Language Understanding](https://cloud.ibm.com/apidocs/natural-language-understanding/natural-language-understanding) | Natural language processing for advanced text analysis | `OAuth` | Yes | Unknown |
|
https://api.github.com/repos/public-apis/public-apis/pulls/1591
|
2021-03-19T21:02:18Z
|
2021-03-19T21:04:14Z
|
2021-03-19T21:04:14Z
|
2021-03-19T21:04:17Z
| 2,562
|
public-apis/public-apis
| 35,354
|
|
make the rag module optional
|
diff --git a/metagpt/environment/__init__.py b/metagpt/environment/__init__.py
index 692672fa7..28981f2f8 100644
--- a/metagpt/environment/__init__.py
+++ b/metagpt/environment/__init__.py
@@ -4,10 +4,9 @@
from metagpt.environment.base_env import Environment
from metagpt.environment.android_env.android_env import AndroidEnv
-from metagpt.environment.mincraft_env.mincraft_env import MincraftExtEnv
from metagpt.environment.werewolf_env.werewolf_env import WerewolfEnv
from metagpt.environment.stanford_town_env.stanford_town_env import StanfordTownEnv
from metagpt.environment.software_env.software_env import SoftwareEnv
-__all__ = ["AndroidEnv", "MincraftExtEnv", "WerewolfEnv", "StanfordTownEnv", "SoftwareEnv", "Environment"]
+__all__ = ["AndroidEnv", "WerewolfEnv", "StanfordTownEnv", "SoftwareEnv", "Environment"]
diff --git a/metagpt/roles/__init__.py b/metagpt/roles/__init__.py
index f033a5dfa..08a0406b3 100644
--- a/metagpt/roles/__init__.py
+++ b/metagpt/roles/__init__.py
@@ -14,7 +14,6 @@
from metagpt.roles.qa_engineer import QaEngineer
from metagpt.roles.searcher import Searcher
from metagpt.roles.sales import Sales
-from metagpt.roles.customer_service import CustomerService
__all__ = [
@@ -26,5 +25,4 @@
"QaEngineer",
"Searcher",
"Sales",
- "CustomerService",
]
diff --git a/requirements.txt b/requirements.txt
index 83565278b..a447eef13 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10,14 +10,6 @@ typer==0.9.0
# godot==0.1.1
# google_api_python_client==2.93.0 # Used by search_engine.py
lancedb==0.4.0
-llama-index-core==0.10.15
-llama-index-embeddings-azure-openai==0.1.6
-llama-index-embeddings-openai==0.1.5
-llama-index-llms-azure-openai==0.1.4
-llama-index-readers-file==0.1.4
-llama-index-retrievers-bm25==0.1.3
-llama-index-vector-stores-faiss==0.1.1
-chromadb==0.4.23
loguru==0.6.0
meilisearch==0.21.0
numpy==1.24.3
diff --git a/setup.py b/setup.py
index df9bedc9b..f834b4c44 100644
--- a/setup.py
+++ b/setup.py
@@ -28,6 +28,16 @@ def run(self):
"search-google": ["google-api-python-client==2.94.0"],
"search-ddg": ["duckduckgo-search~=4.1.1"],
"ocr": ["paddlepaddle==2.4.2", "paddleocr>=2.0.1", "tabulate==0.9.0"],
+ "rag": [
+ "llama-index-core==0.10.15",
+ "llama-index-embeddings-azure-openai==0.1.6",
+ "llama-index-embeddings-openai==0.1.5",
+ "llama-index-llms-azure-openai==0.1.4",
+ "llama-index-readers-file==0.1.4",
+ "llama-index-retrievers-bm25==0.1.3",
+ "llama-index-vector-stores-faiss==0.1.1",
+ "chromadb==0.4.23",
+ ],
}
extras_require["test"] = [
@@ -42,7 +52,6 @@ def run(self):
"connexion[uvicorn]~=3.0.5",
"azure-cognitiveservices-speech~=1.31.0",
"aioboto3~=11.3.0",
- "chromadb==0.4.23",
"gradio==3.0.0",
"grpcio-status==1.48.2",
"pylint==3.0.3",
|
**Features**
<!-- Clear and direct description of the submit features. -->
<!-- If it's a bug fix, please also paste the issue link. -->
- Make the rag module optional.
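With the llama-index and chromadb packages moved into an extra, RAG support would presumably be installed via the standard pip extras syntax, e.g. `pip install metagpt[rag]` (the extra name `rag` comes from the setup.py diff above).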
**Feature Docs**
<!-- The RFC, tutorial, or use cases about the feature if it's a pretty big update. If not, there is no need to fill. -->
**Influence**
<!-- Tell me the impact of the new feature and I'll focus on it. -->
**Result**
<!-- The screenshot/log of unittest/running result -->
```
2024-03-22 11:11:24.273 | INFO | metagpt.const:get_metagpt_package_root:29 - Package root set to /data/hjt/MetaGPT
2024-03-22 11:11:26.816 | INFO | metagpt.team:invest:90 - Investment: $3.0.
2024-03-22 11:11:26.818 | INFO | metagpt.roles.role:_act:391 - Alice(Product Manager): to do PrepareDocuments(PrepareDocuments)
2024-03-22 11:11:26.848 | INFO | metagpt.utils.file_repository:save:57 - save to: /data/hjt/MetaGPT/workspace/20240322111126/docs/requirement.txt
2024-03-22 11:11:26.850 | INFO | metagpt.roles.role:_act:391 - Alice(Product Manager): to do WritePRD(WritePRD)
2024-03-22 11:11:26.851 | INFO | metagpt.actions.write_prd:run:86 - New requirement detected: write a 2048 game
[CONTENT]
{
"Language": "English",
"Programming Language": "Python",
"Original Requirements": "write a 2048 game",
"Project Name": "game_2048",
"Product Goals": [
"Create an engaging user experience",
"Improve accessibility, be responsive",
"More beautiful UI"
],
"User Stories": [
"As a player, I want to be able to choose difficulty levels",
"As a player, I want to see my score after each game",
"As a player, I want to get restart button when I lose",
"As a player, I want to see beautiful UI that make me feel good",
"As a player, I want to play game via mobile phone"
...
```
```
...
## Code Review Result
LGTM
Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0125.
2024-03-22 11:12:29.107 | INFO | metagpt.utils.cost_manager:update_cost:57 - Total running cost: $0.021 | Max budget: $3.000 | Current cost: $0.002, prompt_tokens: 1383, completion_tokens: 84
2024-03-22 11:12:29.111 | INFO | metagpt.utils.file_repository:save:57 - save to: /data/hjt/MetaGPT/workspace/game_2048/game_2048/main.py
2024-03-22 11:12:29.113 | INFO | metagpt.utils.file_repository:save:62 - update dependency: /data/hjt/MetaGPT/workspace/game_2048/game_2048/main.py:['docs/task/20240322111143.json', 'docs/system_design/20240322111143.json']
2024-03-22 11:12:29.124 | INFO | metagpt.utils.git_repository:archive:168 - Archive: ['.dependencies.json', 'docs/prd/20240322111143.json', 'docs/requirement.txt', 'docs/system_design/20240322111143.json', 'docs/task/20240322111143.json', 'game_2048/game.py', 'game_2048/main.py', 'requirements.txt', 'resources/competitive_analysis/20240322111143.mmd', 'resources/data_api_design/20240322111143.mmd', 'resources/prd/20240322111143.md', 'resources/seq_flow/20240322111143.mmd', 'resources/system_design/20240322111143.md']
```
**Other**
<!-- Something else about this PR. -->
|
https://api.github.com/repos/geekan/MetaGPT/pulls/1070
|
2024-03-22T02:29:15Z
|
2024-03-22T03:21:29Z
|
2024-03-22T03:21:29Z
|
2024-03-30T09:22:31Z
| 1,021
|
geekan/MetaGPT
| 16,741
|
Fix some typos (found by codespell)
|
diff --git a/certbot-apache/certbot_apache/_internal/http_01.py b/certbot-apache/certbot_apache/_internal/http_01.py
index 872704db87c..12088724372 100644
--- a/certbot-apache/certbot_apache/_internal/http_01.py
+++ b/certbot-apache/certbot_apache/_internal/http_01.py
@@ -107,7 +107,7 @@ def _mod_config(self):
if any(a.is_wildcard() or a.get_port() == http_port for a in vhost.addrs):
found = True
- # If there's at least one elgible VirtualHost, also add all unnamed VirtualHosts
+ # If there's at least one eligible VirtualHost, also add all unnamed VirtualHosts
# because they might match at runtime (#8890)
if found:
selected_vhosts += self._unnamed_vhosts()
diff --git a/certbot-apache/certbot_apache/_internal/interfaces.py b/certbot-apache/certbot_apache/_internal/interfaces.py
index 106e2778a57..8381fd2a63d 100644
--- a/certbot-apache/certbot_apache/_internal/interfaces.py
+++ b/certbot-apache/certbot_apache/_internal/interfaces.py
@@ -312,7 +312,7 @@ def set_parameters(self, parameters):
"""
Sets the sequence of parameters for this ParserNode object without
whitespaces. While the whitespaces for parameters are discarded when using
- this method, the whitespacing preceeding the ParserNode itself should be
+ this method, the whitespacing preceding the ParserNode itself should be
kept intact.
:param list parameters: sequence of parameters
@@ -364,7 +364,7 @@ class BlockNode(DirectiveNode, metaclass=abc.ABCMeta):
def add_child_block(self, name, parameters=None, position=None):
"""
Adds a new BlockNode child node with provided values and marks the callee
- BlockNode dirty. This is used to add new children to the AST. The preceeding
+ BlockNode dirty. This is used to add new children to the AST. The preceding
whitespaces should not be added based on the ancestor or siblings for the
newly created object. This is to match the current behavior of the legacy
parser implementation.
@@ -385,7 +385,7 @@ def add_child_directive(self, name, parameters=None, position=None):
"""
Adds a new DirectiveNode child node with provided values and marks the
callee BlockNode dirty. This is used to add new children to the AST. The
- preceeding whitespaces should not be added based on the ancestor or siblings
+ preceding whitespaces should not be added based on the ancestor or siblings
for the newly created object. This is to match the current behavior of the
legacy parser implementation.
@@ -406,7 +406,7 @@ def add_child_comment(self, comment="", position=None):
"""
Adds a new CommentNode child node with provided value and marks the
callee BlockNode dirty. This is used to add new children to the AST. The
- preceeding whitespaces should not be added based on the ancestor or siblings
+ preceding whitespaces should not be added based on the ancestor or siblings
for the newly created object. This is to match the current behavior of the
legacy parser implementation.
diff --git a/certbot-apache/certbot_apache/_internal/override_centos.py b/certbot-apache/certbot_apache/_internal/override_centos.py
index 431e8ec46ed..c898615bb22 100644
--- a/certbot-apache/certbot_apache/_internal/override_centos.py
+++ b/certbot-apache/certbot_apache/_internal/override_centos.py
@@ -51,7 +51,7 @@ def config_test(self):
def _try_restart_fedora(self):
"""
- Tries to restart httpd using systemctl to generate the self signed keypair.
+ Tries to restart httpd using systemctl to generate the self signed key pair.
"""
try:
diff --git a/certbot-apache/certbot_apache/_internal/override_fedora.py b/certbot-apache/certbot_apache/_internal/override_fedora.py
index cf0764d682c..9f2fed9b637 100644
--- a/certbot-apache/certbot_apache/_internal/override_fedora.py
+++ b/certbot-apache/certbot_apache/_internal/override_fedora.py
@@ -43,7 +43,7 @@ def get_parser(self):
def _try_restart_fedora(self):
"""
- Tries to restart httpd using systemctl to generate the self signed keypair.
+ Tries to restart httpd using systemctl to generate the self signed key pair.
"""
try:
util.run_script(['systemctl', 'restart', 'httpd'])
diff --git a/certbot-apache/tests/entrypoint_test.py b/certbot-apache/tests/entrypoint_test.py
index 6f6f5bbb093..2a269441535 100644
--- a/certbot-apache/tests/entrypoint_test.py
+++ b/certbot-apache/tests/entrypoint_test.py
@@ -41,7 +41,7 @@ def test_nonexistent_generic(self):
with mock.patch("certbot.util.get_os_info") as mock_info:
mock_info.return_value = ("nonexistent", "irrelevant")
with mock.patch("certbot.util.get_systemd_os_like") as mock_like:
- mock_like.return_value = ["unknonwn"]
+ mock_like.return_value = ["unknown"]
self.assertEqual(entrypoint.get_configurator(),
configurator.ApacheConfigurator)
diff --git a/certbot-compatibility-test/Dockerfile b/certbot-compatibility-test/Dockerfile
index e0a439d01a6..646f2c99ebe 100644
--- a/certbot-compatibility-test/Dockerfile
+++ b/certbot-compatibility-test/Dockerfile
@@ -15,7 +15,7 @@ RUN tools/venv.py
ENV PATH /opt/certbot/src/venv/bin:$PATH
# install in editable mode (-e) to save space: it's not possible to
-# "rm -rf /opt/certbot/src" (it's stays in the underlaying image);
+# "rm -rf /opt/certbot/src" (it's stays in the underlying image);
# this might also help in debugging: you can "docker run --entrypoint
# bash" and investigate, apply patches, etc.
diff --git a/certbot-compatibility-test/certbot_compatibility_test/test_driver.py b/certbot-compatibility-test/certbot_compatibility_test/test_driver.py
index 240f528d488..62098488f9d 100644
--- a/certbot-compatibility-test/certbot_compatibility_test/test_driver.py
+++ b/certbot-compatibility-test/certbot_compatibility_test/test_driver.py
@@ -330,7 +330,7 @@ def setup_logging(args):
def setup_display():
- """"Prepares a display utility instace for the Certbot plugins """
+ """"Prepares a display utility instance for the Certbot plugins """
displayer = display_util.NoninteractiveDisplay(sys.stdout)
display_obj.set_display(displayer)
diff --git a/certbot-nginx/certbot_nginx/_internal/parser_obj.py b/certbot-nginx/certbot_nginx/_internal/parser_obj.py
index d4d332c47dc..33ed822c301 100644
--- a/certbot-nginx/certbot_nginx/_internal/parser_obj.py
+++ b/certbot-nginx/certbot_nginx/_internal/parser_obj.py
@@ -260,7 +260,7 @@ def __contains__(self, word):
class Block(Parsable):
- """ Any sort of bloc, denoted by a block name and curly braces, like so:
+ """ Any sort of block, denoted by a block name and curly braces, like so:
The parsed block:
block name {
content 1;
@@ -313,8 +313,8 @@ def parse(self, raw_list, add_spaces=False):
"""
if not Block.should_parse(raw_list):
raise errors.MisconfigurationError("Block parsing expects a list of length 2. "
- "First element should be a list of string types (the bloc names), "
- "and second should be another list of statements (the bloc content).")
+ "First element should be a list of string types (the block names), "
+ "and second should be another list of statements (the block content).")
self.names = Sentence(self)
if add_spaces:
raw_list[0].append(" ")
diff --git a/certbot/certbot/_internal/display/obj.py b/certbot/certbot/_internal/display/obj.py
index c36d6a6aed0..67043463b50 100644
--- a/certbot/certbot/_internal/display/obj.py
+++ b/certbot/certbot/_internal/display/obj.py
@@ -396,7 +396,7 @@ def _get_valid_int_ans(self, max_):
# through the public API in certbot.display.util.
@zope.interface.implementer(interfaces.IDisplay)
class NoninteractiveDisplay:
- """An diplay utility implementation that never asks for interactive user input"""
+ """A display utility implementation that never asks for interactive user input"""
def __init__(self, outfile, *unused_args, **unused_kwargs):
super().__init__()
diff --git a/certbot/certbot/_internal/renewal.py b/certbot/certbot/_internal/renewal.py
index d5a808e634a..183e83ec066 100644
--- a/certbot/certbot/_internal/renewal.py
+++ b/certbot/certbot/_internal/renewal.py
@@ -360,7 +360,7 @@ def _renew_describe_results(config: configuration.NamespaceConfig, renew_success
:param list renew_successes: list of fullchain paths which were renewed
:param list renew_failures: list of fullchain paths which failed to be renewed
:param list renew_skipped: list of messages to print about skipped certificates
- :param list parse_failures: list of renewal parameter paths which had erorrs
+ :param list parse_failures: list of renewal parameter paths which had errors
"""
notify = display_util.notify
notify_error = logger.error
diff --git a/certbot/certbot/interfaces.py b/certbot/certbot/interfaces.py
index 908b5a16acc..fa4d1d6ae25 100644
--- a/certbot/certbot/interfaces.py
+++ b/certbot/certbot/interfaces.py
@@ -474,7 +474,7 @@ def renew_deploy(self, lineage, *args, **kwargs):
"""Perform updates defined by installer when a certificate has been renewed
If an installer is a subclass of the class containing this method, this
- function will always be called when a certficate has been renewed by
+ function will always be called when a certificate has been renewed by
running "certbot renew". For example if a plugin needs to copy a
certificate over, or change configuration based on the new certificate.
diff --git a/certbot/docs/install.rst b/certbot/docs/install.rst
index 4533cfcc1ab..e36553a155b 100644
--- a/certbot/docs/install.rst
+++ b/certbot/docs/install.rst
@@ -90,7 +90,7 @@ recommended for your system at certbot.eff.org_, which enables you to use
installer plugins that cover both of those hard topics.
If you're still not convinced and have decided to use this method, from
-the server that the domain you're requesting a certficate for resolves
+the server that the domain you're requesting a certificate for resolves
to, `install Docker`_, then issue a command like the one found below. If
you are using Certbot with the :ref:`Standalone` plugin, you will need
to make the port it uses accessible from outside of the container by
diff --git a/certbot/docs/using.rst b/certbot/docs/using.rst
index 04c3be2693a..561782e764a 100644
--- a/certbot/docs/using.rst
+++ b/certbot/docs/using.rst
@@ -818,7 +818,7 @@ scheduled task to automatically renew your certificates in the background. If yo
whether your system has a pre-installed scheduled task for Certbot, it is safe to follow these
instructions to create one.
-If you're using Windows, these instructions are not neccessary as Certbot on Windows comes with
+If you're using Windows, these instructions are not necessary as Certbot on Windows comes with
a scheduled task for automated renewal pre-installed.
Run the following line, which will add a cron job to `/etc/crontab`:
diff --git a/certbot/tests/error_handler_test.py b/certbot/tests/error_handler_test.py
index 0146f0edd65..010a756c12b 100644
--- a/certbot/tests/error_handler_test.py
+++ b/certbot/tests/error_handler_test.py
@@ -80,7 +80,7 @@ def test_context_manager_with_signal(self):
send_signal(self.signals[0])
should_be_42 *= 10
- # check execution stoped when the signal was sent
+ # check execution stopped when the signal was sent
self.assertEqual(42, should_be_42)
# assert signals were caught
self.assertEqual([self.signals[0]], signals_received)
|
Signed-off-by: Stefan Weil <[email protected]>
|
https://api.github.com/repos/certbot/certbot/pulls/9017
|
2021-08-31T10:10:33Z
|
2021-09-02T20:43:14Z
|
2021-09-02T20:43:13Z
|
2021-09-02T21:12:56Z
| 3,079
|
certbot/certbot
| 3,024
|
FIX bagging with metadata routing and estimator implement __len__
|
diff --git a/sklearn/ensemble/_bagging.py b/sklearn/ensemble/_bagging.py
index e0ff0b9509c3b..7f278cb06f2ba 100644
--- a/sklearn/ensemble/_bagging.py
+++ b/sklearn/ensemble/_bagging.py
@@ -113,8 +113,6 @@ def _parallel_build_estimators(
estimators = []
estimators_features = []
- request_or_router = get_routing_for_object(ensemble.estimator_)
-
# TODO: (slep6) remove if condition for unrouted sample_weight when metadata
# routing can't be disabled.
support_sample_weight = has_fit_parameter(ensemble.estimator_, "sample_weight")
@@ -164,9 +162,14 @@ def _parallel_build_estimators(
# Note: Row sampling can be achieved either through setting sample_weight or
# by indexing. The former is more efficient. Therefore, use this method
# if possible, otherwise use indexing.
- if (
- _routing_enabled() and request_or_router.consumes("fit", ("sample_weight",))
- ) or (not _routing_enabled() and support_sample_weight):
+ if _routing_enabled():
+ request_or_router = get_routing_for_object(ensemble.estimator_)
+ consumes_sample_weight = request_or_router.consumes(
+ "fit", ("sample_weight",)
+ )
+ else:
+ consumes_sample_weight = support_sample_weight
+ if consumes_sample_weight:
# Draw sub samples, using sample weights, and then fit
curr_sample_weight = _check_sample_weight(
fit_params_.pop("sample_weight", None), X
@@ -635,6 +638,9 @@ def get_metadata_routing(self):
def _get_estimator(self):
"""Resolve which estimator to return."""
+ def _more_tags(self):
+ return {"allow_nan": _safe_tags(self._get_estimator(), "allow_nan")}
+
class BaggingClassifier(ClassifierMixin, BaseBagging):
"""A Bagging classifier.
@@ -835,7 +841,9 @@ def __init__(
def _get_estimator(self):
"""Resolve which estimator to return (default is DecisionTreeClassifier)"""
- return self.estimator or DecisionTreeClassifier()
+ if self.estimator is None:
+ return DecisionTreeClassifier()
+ return self.estimator
def _set_oob_score(self, X, y):
n_samples = y.shape[0]
@@ -1059,14 +1067,6 @@ def decision_function(self, X):
return decisions
- def _more_tags(self):
- if self.estimator is None:
- estimator = DecisionTreeClassifier()
- else:
- estimator = self.estimator
-
- return {"allow_nan": _safe_tags(estimator, "allow_nan")}
-
class BaggingRegressor(RegressorMixin, BaseBagging):
"""A Bagging regressor.
@@ -1328,13 +1328,8 @@ def _set_oob_score(self, X, y):
self.oob_prediction_ = predictions
self.oob_score_ = r2_score(y, predictions)
- def _more_tags(self):
- if self.estimator is None:
- estimator = DecisionTreeRegressor()
- else:
- estimator = self.estimator
- return {"allow_nan": _safe_tags(estimator, "allow_nan")}
-
def _get_estimator(self):
"""Resolve which estimator to return (default is DecisionTreeClassifier)"""
- return self.estimator or DecisionTreeRegressor()
+ if self.estimator is None:
+ return DecisionTreeRegressor()
+ return self.estimator
diff --git a/sklearn/ensemble/tests/test_bagging.py b/sklearn/ensemble/tests/test_bagging.py
index 2c1e308cee33b..da855a568b402 100644
--- a/sklearn/ensemble/tests/test_bagging.py
+++ b/sklearn/ensemble/tests/test_bagging.py
@@ -10,14 +10,19 @@
import numpy as np
import pytest
+import sklearn
from sklearn.base import BaseEstimator
from sklearn.datasets import load_diabetes, load_iris, make_hastie_10_2
from sklearn.dummy import DummyClassifier, DummyRegressor
from sklearn.ensemble import (
+ AdaBoostClassifier,
+ AdaBoostRegressor,
BaggingClassifier,
BaggingRegressor,
HistGradientBoostingClassifier,
HistGradientBoostingRegressor,
+ RandomForestClassifier,
+ RandomForestRegressor,
)
from sklearn.feature_selection import SelectKBest
from sklearn.linear_model import LogisticRegression, Perceptron
@@ -936,3 +941,36 @@ def fit(self, X, y):
def test_bagging_allow_nan_tag(bagging, expected_allow_nan):
"""Check that bagging inherits allow_nan tag."""
assert bagging._get_tags()["allow_nan"] == expected_allow_nan
+
+
[email protected](
+ "model",
+ [
+ BaggingClassifier(
+ estimator=RandomForestClassifier(n_estimators=1), n_estimators=1
+ ),
+ BaggingRegressor(
+ estimator=RandomForestRegressor(n_estimators=1), n_estimators=1
+ ),
+ ],
+)
+def test_bagging_with_metadata_routing(model):
+ """Make sure that metadata routing works with non-default estimator."""
+ with sklearn.config_context(enable_metadata_routing=True):
+ model.fit(iris.data, iris.target)
+
+
[email protected](
+ "model",
+ [
+ BaggingClassifier(
+ estimator=AdaBoostClassifier(n_estimators=1, algorithm="SAMME"),
+ n_estimators=1,
+ ),
+ BaggingRegressor(estimator=AdaBoostRegressor(n_estimators=1), n_estimators=1),
+ ],
+)
+def test_bagging_without_support_metadata_routing(model):
+ """Make sure that we still can use an estimator that does not implement the
+ metadata routing."""
+ model.fit(iris.data, iris.target)
|
I caught a regression in `imbalanced-learn` when the `estimator` in `Bagging*` implements `__len__` (e.g. `RandomForest*`): with the current pattern, `_get_estimator` triggers a call to `__len__`.
The problem is that `__len__` relies on a fitted attribute while `_get_estimator` is called before `fit`.
The fix is to check for `None` instead to know when to create a default estimator.
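A minimal sketch of the pitfall with a simplified stand-in class (not the real estimators): `estimator or default` evaluates the estimator's truthiness, which calls `__len__`, which reads a fitted attribute that does not exist yet:
```
class ForestLike:
    """Stand-in for an estimator whose __len__ reads a fitted attribute."""

    def __len__(self):
        return len(self.estimators_)  # AttributeError before fit

est = ForestLike()

# Old pattern: the truthiness check calls __len__ and blows up pre-fit.
try:
    resolved = est or "default"
except AttributeError as exc:
    print(f"truthiness check failed: {exc}")

# Fixed pattern: an explicit None check never touches __len__.
resolved = "default" if est is None else est
print(resolved)
```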
ping @adrinjalali @OmarManzoor @adam2392 since it was introduced in https://github.com/scikit-learn/scikit-learn/pull/28432
No changelog needed since we did not yet release this bug ;)
|
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/28734
|
2024-03-31T18:54:50Z
|
2024-04-03T13:53:49Z
|
2024-04-03T13:53:49Z
|
2024-04-03T13:54:40Z
| 1,307
|
scikit-learn/scikit-learn
| 46,174
|
ansible-galaxy - fix traceback error for invalid req file (#81917) - 2.15
|
diff --git a/changelogs/fragments/81901-galaxy-requirements-format.yml b/changelogs/fragments/81901-galaxy-requirements-format.yml
new file mode 100644
index 00000000000000..2e57a95550367a
--- /dev/null
+++ b/changelogs/fragments/81901-galaxy-requirements-format.yml
@@ -0,0 +1,2 @@
+bugfixes:
+- ansible-galaxy - Provide a better error message when using a requirements file with an invalid format - https://github.com/ansible/ansible/issues/81901
diff --git a/lib/ansible/cli/galaxy.py b/lib/ansible/cli/galaxy.py
index 917999c97d7635..a4f289b2aa411c 100755
--- a/lib/ansible/cli/galaxy.py
+++ b/lib/ansible/cli/galaxy.py
@@ -805,7 +805,7 @@ def parse_role_req(requirement):
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
- else:
+ elif isinstance(file_requirements, dict):
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
@@ -824,6 +824,9 @@ def parse_role_req(requirement):
for collection_req in file_requirements.get('collections') or []
]
+ else:
+ raise AnsibleError(f"Expecting requirements yaml to be a list or dictionary but got {type(file_requirements).__name__}")
+
return requirements
def _init_coll_req_dict(self, coll_req):
diff --git a/test/units/cli/test_galaxy.py b/test/units/cli/test_galaxy.py
index f0be9ebad87830..9f73de6c04d0e2 100644
--- a/test/units/cli/test_galaxy.py
+++ b/test/units/cli/test_galaxy.py
@@ -753,6 +753,20 @@ def test_collection_install_with_names(collection_install):
assert mock_install.call_args[0][6] is False # force_deps
+def test_collection_install_with_invalid_requirements_format(collection_install):
+ output_dir = collection_install[2]
+
+ requirements_file = os.path.join(output_dir, 'requirements.yml')
+ with open(requirements_file, 'wb') as req_obj:
+ req_obj.write(b'"invalid"')
+
+ galaxy_args = ['ansible-galaxy', 'collection', 'install', '--requirements-file', requirements_file,
+ '--collections-path', output_dir]
+
+ with pytest.raises(AnsibleError, match="Expecting requirements yaml to be a list or dictionary but got str"):
+ GalaxyCLI(args=galaxy_args).run()
+
+
def test_collection_install_with_requirements_file(collection_install):
mock_install, mock_warning, output_dir = collection_install
|
Provide a better error message when encountering a YAML requirements file that is not a dictionary or list.
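For example (as exercised by the new unit test), a requirements.yml whose top-level YAML value is a bare string such as `"invalid"` now fails with `Expecting requirements yaml to be a list or dictionary but got str` instead of a traceback.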
##### SUMMARY
Backport of https://github.com/ansible/ansible/pull/81917
##### ISSUE TYPE
- Bugfix Pull Request
|
https://api.github.com/repos/ansible/ansible/pulls/81925
|
2023-10-05T19:15:19Z
|
2023-10-26T20:10:23Z
|
2023-10-26T20:10:23Z
|
2023-11-23T14:00:11Z
| 630
|
ansible/ansible
| 48,935
|
add milvus to Awesome ML
|
diff --git a/README.md b/README.md
index 7e4e4e5a..204b3626 100644
--- a/README.md
+++ b/README.md
@@ -1684,6 +1684,7 @@ be
<a name="tools-misc"></a>
#### Misc
+* [milvus](https://milvus.io) – Milvus is [open source](https://github.com/milvus-io/milvus) vector database for production AI, written in Go and C++, scalable and blazing fast for billions of embedding vectors.
* [Weaviate](https://www.semi.technology/developers/weaviate/current/) – Weaviate is an [open source](https://github.com/semi-technologies/weaviate) vector search engine and vector database. Weaviate uses machine learning to vectorize and store data, and to find answers to natural language queries. With Weaviate you can also bring your custom ML models to production scale.
* [MLReef](https://about.mlreef.com/) - MLReef is an end-to-end development platform using the power of git to give structure and deep collaboration possibilities to the ML development process.
* [Pinecone](https://www.pinecone.io/) - Vector database for applications that require real-time, scalable vector embedding and similarity search.
|
- add milvus
|
https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/848
|
2022-03-31T15:32:37Z
|
2022-03-31T15:43:46Z
|
2022-03-31T15:43:45Z
|
2022-03-31T15:48:29Z
| 293
|
josephmisiti/awesome-machine-learning
| 52,119
|
Select for Query and Deletion
|
diff --git a/poetry.lock b/poetry.lock
index 8466ddbf0..dfdafd2fd 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 1.7.0 and should not be changed by hand.
+# This file is automatically @generated by Poetry 1.7.1 and should not be changed by hand.
[[package]]
name = "accelerate"
@@ -1273,13 +1273,13 @@ grpc = ["grpcio (>=1.44.0,<2.0.0.dev0)"]
[[package]]
name = "gradio"
-version = "4.10.0"
+version = "4.19.0"
description = "Python library for easily interacting with trained machine learning models"
optional = false
python-versions = ">=3.8"
files = [
- {file = "gradio-4.10.0-py3-none-any.whl", hash = "sha256:7595185716aff430381d010087d6ebc4eadef06fefc3dc1cfa76edcdd2c109db"},
- {file = "gradio-4.10.0.tar.gz", hash = "sha256:d4ca039aa7f5c2783b2bbf7b465153c80bb4257edcca4d8b9c59ce6f61a75b97"},
+ {file = "gradio-4.19.0-py3-none-any.whl", hash = "sha256:d09732190acc0f33b5e7ea3235d267472bf74beeea62dabb7a82f93193155e09"},
+ {file = "gradio-4.19.0.tar.gz", hash = "sha256:e77e3ce8a4113865abd1dcf92cc9426d9da4896e0a6fd2824a0c90ec751dd442"},
]
[package.dependencies]
@@ -1287,7 +1287,7 @@ aiofiles = ">=22.0,<24.0"
altair = ">=4.2.0,<6.0"
fastapi = "*"
ffmpy = "*"
-gradio-client = "0.7.3"
+gradio-client = "0.10.0"
httpx = "*"
huggingface-hub = ">=0.19.3"
importlib-resources = ">=1.3,<7.0"
@@ -1303,6 +1303,7 @@ pydantic = ">=2.0"
pydub = "*"
python-multipart = "*"
pyyaml = ">=5.0,<7.0"
+ruff = ">=0.1.7"
semantic-version = ">=2.0,<3.0"
tomlkit = "0.12.0"
typer = {version = ">=0.9,<1.0", extras = ["all"]}
@@ -1314,13 +1315,13 @@ oauth = ["authlib", "itsdangerous"]
[[package]]
name = "gradio-client"
-version = "0.7.3"
+version = "0.10.0"
description = "Python library for easily interacting with trained machine learning models"
optional = false
python-versions = ">=3.8"
files = [
- {file = "gradio_client-0.7.3-py3-none-any.whl", hash = "sha256:b91073770470ceb9f284977064c35bc0cffaf868eb887bf352db77aa01fe342a"},
- {file = "gradio_client-0.7.3.tar.gz", hash = "sha256:8146a1d19a125b38088dd201ddacd0008ea47ef9b0504d1c5b87ca09a43f4dcd"},
+ {file = "gradio_client-0.10.0-py3-none-any.whl", hash = "sha256:2bcfe61710f9f1c8f336fa9ff0f5c5f0ea52079233196cd753ad30cccdfd585c"},
+ {file = "gradio_client-0.10.0.tar.gz", hash = "sha256:feaee70f18363d76f81a7d25fc3456f40ed5f92417e642c8f1bf86dc65e3a981"},
]
[package.dependencies]
@@ -6111,4 +6112,4 @@ chroma = ["chromadb"]
[metadata]
lock-version = "2.0"
python-versions = ">=3.11,<3.12"
-content-hash = "c2bcf29b5c894a0fae9682145cd001dfb57bb4919c9097b5e27323ddee58fc8c"
+content-hash = "121bf7797b74c02efaf11712e178c9c01880b79701eeff6485ede9ca8b25d307"
diff --git a/private_gpt/settings/settings.py b/private_gpt/settings/settings.py
index 499ce66d7..ed0dc2601 100644
--- a/private_gpt/settings/settings.py
+++ b/private_gpt/settings/settings.py
@@ -178,6 +178,12 @@ class UISettings(BaseModel):
default_query_system_prompt: str = Field(
None, description="The default system prompt to use for the query mode."
)
+ delete_file_button_enabled: bool = Field(
+ True, description="If the button to delete a file is enabled or not."
+ )
+ delete_all_files_button_enabled: bool = Field(
+ False, description="If the button to delete all files is enabled or not."
+ )
class QdrantSettings(BaseModel):
diff --git a/private_gpt/ui/ui.py b/private_gpt/ui/ui.py
index c23bd378f..879214bda 100644
--- a/private_gpt/ui/ui.py
+++ b/private_gpt/ui/ui.py
@@ -14,6 +14,7 @@
from private_gpt.constants import PROJECT_ROOT_PATH
from private_gpt.di import global_injector
+from private_gpt.open_ai.extensions.context_filter import ContextFilter
from private_gpt.server.chat.chat_service import ChatService, CompletionGen
from private_gpt.server.chunks.chunks_service import Chunk, ChunksService
from private_gpt.server.ingest.ingest_service import IngestService
@@ -30,7 +31,7 @@
SOURCES_SEPARATOR = "\n\n Sources: \n"
-MODES = ["Query Docs", "Search in Docs", "LLM Chat"]
+MODES = ["Query Files", "Search Files", "LLM Chat (no context from files)"]
class Source(BaseModel):
@@ -73,6 +74,8 @@ def __init__(
# Cache the UI blocks
self._ui_block = None
+ self._selected_filename = None
+
# Initialize system prompt based on default mode
self.mode = MODES[0]
self._system_prompt = self._get_default_system_prompt(self.mode)
@@ -130,20 +133,34 @@ def build_history() -> list[ChatMessage]:
),
)
match mode:
- case "Query Docs":
+ case "Query Files":
+
+ # Use only the selected file for the query
+ context_filter = None
+ if self._selected_filename is not None:
+ docs_ids = []
+ for ingested_document in self._ingest_service.list_ingested():
+ if (
+ ingested_document.doc_metadata["file_name"]
+ == self._selected_filename
+ ):
+ docs_ids.append(ingested_document.doc_id)
+ context_filter = ContextFilter(docs_ids=docs_ids)
+
query_stream = self._chat_service.stream_chat(
messages=all_messages,
use_context=True,
+ context_filter=context_filter,
)
yield from yield_deltas(query_stream)
- case "LLM Chat":
+ case "LLM Chat (no context from files)":
llm_stream = self._chat_service.stream_chat(
messages=all_messages,
use_context=False,
)
yield from yield_deltas(llm_stream)
- case "Search in Docs":
+ case "Search Files":
response = self._chunks_service.retrieve_relevant(
text=message, limit=4, prev_next_chunks=0
)
@@ -164,10 +181,10 @@ def _get_default_system_prompt(mode: str) -> str:
p = ""
match mode:
# For query chat mode, obtain default system prompt from settings
- case "Query Docs":
+ case "Query Files":
p = settings().ui.default_query_system_prompt
# For chat mode, obtain default system prompt from settings
- case "LLM Chat":
+ case "LLM Chat (no context from files)":
p = settings().ui.default_chat_system_prompt
# For any other mode, clear the system prompt
case _:
@@ -203,8 +220,71 @@ def _list_ingested_files(self) -> list[list[str]]:
def _upload_file(self, files: list[str]) -> None:
logger.debug("Loading count=%s files", len(files))
paths = [Path(file) for file in files]
+
+ # remove all existing Documents with name identical to a new file upload:
+ file_names = [path.name for path in paths]
+ doc_ids_to_delete = []
+ for ingested_document in self._ingest_service.list_ingested():
+ if (
+ ingested_document.doc_metadata
+ and ingested_document.doc_metadata["file_name"] in file_names
+ ):
+ doc_ids_to_delete.append(ingested_document.doc_id)
+ if len(doc_ids_to_delete) > 0:
+ logger.info(
+ "Uploading file(s) which were already ingested: %s document(s) will be replaced.",
+ len(doc_ids_to_delete),
+ )
+ for doc_id in doc_ids_to_delete:
+ self._ingest_service.delete(doc_id)
+
self._ingest_service.bulk_ingest([(str(path.name), path) for path in paths])
+ def _delete_all_files(self) -> Any:
+ ingested_files = self._ingest_service.list_ingested()
+ logger.debug("Deleting count=%s files", len(ingested_files))
+ for ingested_document in ingested_files:
+ self._ingest_service.delete(ingested_document.doc_id)
+ return [
+ gr.List(self._list_ingested_files()),
+ gr.components.Button(interactive=False),
+ gr.components.Button(interactive=False),
+ gr.components.Textbox("All files"),
+ ]
+
+ def _delete_selected_file(self) -> Any:
+ logger.debug("Deleting selected %s", self._selected_filename)
+ # Note: keep looping for pdf's (each page became a Document)
+ for ingested_document in self._ingest_service.list_ingested():
+ if (
+ ingested_document.doc_metadata
+ and ingested_document.doc_metadata["file_name"]
+ == self._selected_filename
+ ):
+ self._ingest_service.delete(ingested_document.doc_id)
+ return [
+ gr.List(self._list_ingested_files()),
+ gr.components.Button(interactive=False),
+ gr.components.Button(interactive=False),
+ gr.components.Textbox("All files"),
+ ]
+
+ def _deselect_selected_file(self) -> Any:
+ self._selected_filename = None
+ return [
+ gr.components.Button(interactive=False),
+ gr.components.Button(interactive=False),
+ gr.components.Textbox("All files"),
+ ]
+
+ def _selected_a_file(self, select_data: gr.SelectData) -> Any:
+ self._selected_filename = select_data.value
+ return [
+ gr.components.Button(interactive=True),
+ gr.components.Button(interactive=True),
+ gr.components.Textbox(self._selected_filename),
+ ]
+
def _build_ui_blocks(self) -> gr.Blocks:
logger.debug("Creating the UI blocks")
with gr.Blocks(
@@ -233,7 +313,7 @@ def _build_ui_blocks(self) -> gr.Blocks:
mode = gr.Radio(
MODES,
label="Mode",
- value="Query Docs",
+ value="Query Files",
)
upload_button = gr.components.UploadButton(
"Upload File(s)",
@@ -245,6 +325,7 @@ def _build_ui_blocks(self) -> gr.Blocks:
self._list_ingested_files,
headers=["File name"],
label="Ingested Files",
+ height=235,
interactive=False,
render=False, # Rendered under the button
)
@@ -258,6 +339,57 @@ def _build_ui_blocks(self) -> gr.Blocks:
outputs=ingested_dataset,
)
ingested_dataset.render()
+ deselect_file_button = gr.components.Button(
+ "De-select selected file", size="sm", interactive=False
+ )
+ selected_text = gr.components.Textbox(
+ "All files", label="Selected for Query or Deletion", max_lines=1
+ )
+ delete_file_button = gr.components.Button(
+ "🗑️ Delete selected file",
+ size="sm",
+ visible=settings().ui.delete_file_button_enabled,
+ interactive=False,
+ )
+ delete_files_button = gr.components.Button(
+ "⚠️ Delete ALL files",
+ size="sm",
+ visible=settings().ui.delete_all_files_button_enabled,
+ )
+ deselect_file_button.click(
+ self._deselect_selected_file,
+ outputs=[
+ delete_file_button,
+ deselect_file_button,
+ selected_text,
+ ],
+ )
+ ingested_dataset.select(
+ fn=self._selected_a_file,
+ outputs=[
+ delete_file_button,
+ deselect_file_button,
+ selected_text,
+ ],
+ )
+ delete_file_button.click(
+ self._delete_selected_file,
+ outputs=[
+ ingested_dataset,
+ delete_file_button,
+ deselect_file_button,
+ selected_text,
+ ],
+ )
+ delete_files_button.click(
+ self._delete_all_files,
+ outputs=[
+ ingested_dataset,
+ delete_file_button,
+ deselect_file_button,
+ selected_text,
+ ],
+ )
system_prompt_input = gr.Textbox(
placeholder=self._system_prompt,
label="System Prompt",
diff --git a/pyproject.toml b/pyproject.toml
index e75a7cb9a..97db9986a 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -31,7 +31,7 @@ types-pyyaml = "^6.0.12.12"
[tool.poetry.group.ui]
optional = true
[tool.poetry.group.ui.dependencies]
-gradio = "^4.4.1"
+gradio = "^4.19.0"
[tool.poetry.group.local]
optional = true
diff --git a/scripts/ingest_folder.py b/scripts/ingest_folder.py
index fc1740a27..b8aa799bb 100755
--- a/scripts/ingest_folder.py
+++ b/scripts/ingest_folder.py
@@ -18,10 +18,11 @@ def __init__(self, ingest_service: IngestService) -> None:
self.total_documents = 0
self.current_document_count = 0
- self._files_under_root_folder: list[Path] = list()
+ self._files_under_root_folder: list[Path] = []
def _find_all_files_in_folder(self, root_path: Path) -> None:
"""Search all files under the root folder recursively.
+
Count them at the same time
"""
for file_path in root_path.iterdir():
diff --git a/settings.yaml b/settings.yaml
index d7e7ce028..252dbde8c 100644
--- a/settings.yaml
+++ b/settings.yaml
@@ -31,6 +31,9 @@ ui:
You can only answer questions about the provided context.
If you know the answer but it is not based in the provided context, don't provide
the answer, just state the answer is not in the context provided.
+ delete_file_button_enabled: true
+ delete_all_files_button_enabled: true
+
llm:
mode: local
|
Make it possible in the UI to:
- Select the file to be used during Query chat (see the sketch at the end of this description)
- Select a file to delete
- Delete all docs
Update Gradio to the latest version (the current version had a security issue)
Covers:
- https://github.com/imartinez/privateGPT/pull/1587
- https://github.com/imartinez/privateGPT/pull/1568
<img width="1269" alt="Screenshot 2024-02-16 at 17 17 43" src="https://github.com/imartinez/privateGPT/assets/721666/76f4007c-a44b-4f47-afdc-df77fe0ba3a3">
|
https://api.github.com/repos/zylon-ai/private-gpt/pulls/1612
|
2024-02-16T16:15:49Z
|
2024-02-16T16:36:09Z
|
2024-02-16T16:36:09Z
|
2024-02-16T16:36:10Z
| 3,648
|
zylon-ai/private-gpt
| 38,646
|
Minor improvements to issue template
|
diff --git a/.github/ISSUE_TEMPLATE_tmpl.md b/.github/ISSUE_TEMPLATE_tmpl.md
index df79503d3ec..26f61d3b43e 100644
--- a/.github/ISSUE_TEMPLATE_tmpl.md
+++ b/.github/ISSUE_TEMPLATE_tmpl.md
@@ -1,16 +1,16 @@
## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
-- Put an `x` into all the boxes [ ] relevant to your *issue* (like that [x])
-- Use *Preview* tab to see how your issue will actually look like
+- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
+- Use the *Preview* tab to see what your issue will actually look like
---
-### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *%(version)s*. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
+### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *%(version)s*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [ ] I've **verified** and **I assure** that I'm running youtube-dl **%(version)s**
### Before submitting an *issue* make sure you have:
-- [ ] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
+- [ ] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [ ] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
@@ -28,9 +28,9 @@
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
-Add `-v` flag to **your command line** you run youtube-dl with, copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
+Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
+
```
-$ youtube-dl -v <your command line>
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
|
## Please follow the guide below
- You will be asked some questions, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *pull request* (like that [x])
- Use *Preview* tab to see how your *pull request* will actually look like
---
### Before submitting a *pull request* make sure you have:
- [x] At least skimmed through [adding new extractor tutorial](https://github.com/rg3/youtube-dl#adding-support-for-a-new-site) and [youtube-dl coding conventions](https://github.com/rg3/youtube-dl#youtube-dl-coding-conventions) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [ ] Bug fix
- [x] Improvement
- [ ] New extractor
- [ ] New feature
---
### Description of your *pull request* and other information
* Updated the issue template to collapse logs with HTML5 `<details>` and `<summary>`. This may make it easier to scroll through issues with long logs.
* Moved the command line input part from the logs to above, since it doesn't need to be included as it will be output with `-v` anyway, and it may contain sensitive information that is automatically redacted in the logs (hopefully people are smart enough to redact any sensitive information, but who knows).
* Made other minor grammar/formatting corrections.
|
https://api.github.com/repos/ytdl-org/youtube-dl/pulls/13552
|
2017-07-03T09:57:58Z
|
2017-07-23T13:33:18Z
|
2017-07-23T13:33:18Z
|
2017-07-23T13:33:18Z
| 793
|
ytdl-org/youtube-dl
| 50,094
|
Fix:delete corresponding event source mappings on deletion of dynamod…
|
diff --git a/localstack/services/dynamodb/dynamodb_listener.py b/localstack/services/dynamodb/dynamodb_listener.py
index c56ded01a399f..8064e9e7d6042 100644
--- a/localstack/services/dynamodb/dynamodb_listener.py
+++ b/localstack/services/dynamodb/dynamodb_listener.py
@@ -277,12 +277,13 @@ def return_response(self, method, path, data, headers, response):
return
elif action == '%s.DeleteTable' % ACTION_PREFIX:
+ table_arn = json.loads(response._content).get('TableDescription', {}).get('TableArn')
event_publisher.fire_event(
event_publisher.EVENT_DYNAMODB_DELETE_TABLE,
payload={'n': event_publisher.get_hash(data['TableName'])}
)
-
- TABLE_TAGS.pop(json.loads(response._content).get('TableDescription', {}).get('TableArn', None), None)
+ self.delete_all_event_source_mappings(table_arn)
+ TABLE_TAGS.pop(table_arn, None)
return
@@ -401,6 +402,13 @@ def prepare_transact_write_item_records(self, record, data):
records.append(new_record)
return records
+ def delete_all_event_source_mappings(self, table_arn):
+ lambda_client = aws_stack.connect_to_service('lambda')
+ result = lambda_client.list_event_source_mappings(EventSourceArn=table_arn)
+ for event in result['EventSourceMappings']:
+ event_source_mapping_id = event['UUID']
+ lambda_client.delete_event_source_mapping(UUID=event_source_mapping_id)
+
@staticmethod
def _thread_local(name, default=None):
try:
diff --git a/tests/integration/test_lambda.py b/tests/integration/test_lambda.py
index e7dc1fad11ef1..d9bc999cba7f6 100644
--- a/tests/integration/test_lambda.py
+++ b/tests/integration/test_lambda.py
@@ -370,6 +370,32 @@ def test_disabled_event_source_mapping_with_dynamodb(self):
lambda_client.delete_function(FunctionName=function_name)
+ def test_deletion_event_source_mapping_with_dynamodb(self):
+ function_name = 'lambda_func-{}'.format(short_uid())
+ ddb_table = 'ddb_table-{}'.format(short_uid())
+
+ testutil.create_lambda_function(
+ handler_file=TEST_LAMBDA_ECHO_FILE,
+ func_name=function_name,
+ runtime=LAMBDA_RUNTIME_PYTHON36
+ )
+
+ table_arn = aws_stack.create_dynamodb_table(ddb_table, partition_key='id')['TableDescription']['TableArn']
+ lambda_client = aws_stack.connect_to_service('lambda')
+
+ lambda_client.create_event_source_mapping(
+ FunctionName=function_name,
+ EventSourceArn=table_arn
+ )
+
+ dynamodb_client = aws_stack.connect_to_service('dynamodb')
+ dynamodb_client.delete_table(TableName=ddb_table)
+
+ result = lambda_client.list_event_source_mappings(EventSourceArn=table_arn)
+ self.assertEqual(len(result['EventSourceMappings']), 0)
+ # clean up
+ lambda_client.delete_function(FunctionName=function_name)
+
class TestPythonRuntimes(LambdaTestBase):
@classmethod
|
Fix: delete corresponding event source mappings on deletion of a DynamoDB table.
#2561
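For reference, a hedged boto3 sketch of the same cleanup against a local endpoint (the endpoint URL and region are illustrative assumptions, not values from this PR):

```python
import boto3

lambda_client = boto3.client(
    "lambda",
    endpoint_url="http://localhost:4566",  # assumed LocalStack edge port
    region_name="us-east-1",
)

def delete_all_event_source_mappings(table_arn):
    # Drop every Lambda event source mapping pointing at the deleted table.
    result = lambda_client.list_event_source_mappings(EventSourceArn=table_arn)
    for mapping in result["EventSourceMappings"]:
        lambda_client.delete_event_source_mapping(UUID=mapping["UUID"])
```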
|
https://api.github.com/repos/localstack/localstack/pulls/2580
|
2020-06-18T22:17:05Z
|
2020-06-19T21:29:19Z
|
2020-06-19T21:29:19Z
|
2020-06-19T21:29:19Z
| 697
|
localstack/localstack
| 28,475
|
Update index.json (Catalan translation)
|
diff --git a/website/public/locales/ca/index.json b/website/public/locales/ca/index.json
index deea65d726..c85d485d61 100644
--- a/website/public/locales/ca/index.json
+++ b/website/public/locales/ca/index.json
@@ -1,15 +1,15 @@
{
"blurb": "Creiem que podem crear una revolució.",
"blurb1": "De la mateixa manera que Stable Diffusion va ajudar el món a crear art i imatges de noves maneres, volem millorar el món proporcionant una IA conversacional sorprenent",
- "description": "IA conversacional per a tothom. Un projecte de codi obert per crear un GPT LLM preparat per xatejar administrat per LAION i col·laboradors de tot el món.",
+ "description": "IA conversacional per a tothom. Un projecte de codi obert per a crear un GPT LLM preparat per a xatejar, administrat per LAION i col·laboradors de tot el món.",
"faq_items": {
"q0": "Com està avançat el projecte?",
- "a0": "Estem en les primeres etapes de desenvolupament, treballant a partir de la investigació establerta per aplicar RLHF (aprenentatge per reforç amb realimentació humana) a models de llenguatge de grans dimensions.",
+ "a0": "Estem en les primeres etapes de desenvolupament, treballant a partir de la investigació establerta per a aplicar RLHF (aprenentatge per reforç amb realimentació humana) a models de llenguatge de grans dimensions.",
"q1": "Qui hi ha al darrere d'Open Assistant?",
- "a1": "Open Assistant és un projecte organitzat per LAION i persones de tot el planeta interessades a apropar aquesta tecnologia a tothom."
+ "a1": "Open Assistant és un projecte organitzat per LAION i per persones de tot el planeta interessades a apropar aquesta tecnologia a tothom."
},
"faq_title": "Preguntes freqüents",
"join_us_description": "Tots els projectes de codi obert comencen amb persones com tu. El codi obert és la creença que si col·laborem plegats, podem regalar el nostre coneixement i tecnologia al món en benefici de la humanitat. T'hi apuntes? Troba'ns aquí:",
"join_us_title": "Uneix-te a nosaltres",
- "subtitle": "AI conversacional per a tothom."
+ "subtitle": "IA conversacional per a tothom."
}
|
Minor fixes in the Catalan translation.
|
https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/1434
|
2023-02-10T14:40:53Z
|
2023-02-10T14:55:24Z
|
2023-02-10T14:55:24Z
|
2023-02-10T14:55:24Z
| 654
|
LAION-AI/Open-Assistant
| 37,327
|
UI: fix CameraView crash on deleting.
|
diff --git a/selfdrive/ui/qt/offroad/driverview.cc b/selfdrive/ui/qt/offroad/driverview.cc
index 0ff786fb91d3dd..1377bb3b23ba84 100644
--- a/selfdrive/ui/qt/offroad/driverview.cc
+++ b/selfdrive/ui/qt/offroad/driverview.cc
@@ -35,6 +35,7 @@ void DriverViewScene::showEvent(QShowEvent* event) {
}
void DriverViewScene::hideEvent(QHideEvent* event) {
+ // TODO: stop vipc thread ?
params.putBool("IsDriverViewEnabled", false);
}
diff --git a/selfdrive/ui/qt/widgets/cameraview.cc b/selfdrive/ui/qt/widgets/cameraview.cc
index a606d6893e1cd2..347cdb1dca88f2 100644
--- a/selfdrive/ui/qt/widgets/cameraview.cc
+++ b/selfdrive/ui/qt/widgets/cameraview.cc
@@ -102,6 +102,7 @@ CameraWidget::CameraWidget(std::string stream_name, VisionStreamType type, bool
CameraWidget::~CameraWidget() {
makeCurrent();
+ stopVipcThread();
if (isValid()) {
glDeleteVertexArrays(1, &frame_vao);
glDeleteBuffers(1, &frame_vbo);
@@ -171,6 +172,15 @@ void CameraWidget::showEvent(QShowEvent *event) {
}
}
+void CameraWidget::stopVipcThread() {
+ if (vipc_thread) {
+ vipc_thread->requestInterruption();
+ vipc_thread->quit();
+ vipc_thread->wait();
+ vipc_thread = nullptr;
+ }
+}
+
void CameraWidget::updateFrameMat() {
int w = width(), h = height();
diff --git a/selfdrive/ui/qt/widgets/cameraview.h b/selfdrive/ui/qt/widgets/cameraview.h
index 0698d1fb9aaac9..7cc3847f99c0ba 100644
--- a/selfdrive/ui/qt/widgets/cameraview.h
+++ b/selfdrive/ui/qt/widgets/cameraview.h
@@ -51,6 +51,7 @@ class CameraWidget : public QOpenGLWidget, protected QOpenGLFunctions {
void updateCalibration(const mat3 &calib);
void vipcThread();
void clearFrames();
+ void stopVipcThread();
bool zoomed_view;
GLuint frame_vao, frame_vbo, frame_ibo;
|
The vipc thread should be stopped and deleted before the widget itself is deleted.
|
https://api.github.com/repos/commaai/openpilot/pulls/26390
|
2022-11-06T10:07:05Z
|
2022-11-16T03:07:51Z
|
2022-11-16T03:07:50Z
|
2022-11-16T04:07:44Z
| 554
|
commaai/openpilot
| 9,248
|
Add webview docs and examples, Set webview as default
|
diff --git a/README.md b/README.md
index 451ec57de3..c07a1d4b22 100644
--- a/README.md
+++ b/README.md
@@ -41,7 +41,7 @@ As per the survey, here is a list of improvements to come
- [ ] 🚧 Improve Documentation (in /docs & Guides, Howtos, & Do video tutorials)
- [x] Improve the provider status list & updates
- [ ] Tutorials on how to reverse sites to write your own wrapper (PoC only ofc)
-- [ ] Improve the Bing wrapper. (might write a new wrapper in golang as it is very fast)
+- [x] Improve the Bing wrapper. (Wait and Retry or reuse conversation)
- [ ] Write a standard provider performance test to improve the stability
- [ ] Potential support and development of local models
- [ ] 🚧 Improve compatibility and error handling
@@ -170,7 +170,33 @@ image_url = response.data[0].url
- New Client API like the OpenAI Python library: [/docs/client](/docs/client.md)
- Legacy API with python modules: [/docs/legacy](/docs/legacy.md)
-#### Web UI
+### Webview GUI
+
+Open the GUI in a window of your OS. Runs on a local/static/ssl server with a js api. Supports login into the OpenAI Chat, Image Upload and streamed Text Generation.
+
+Supports all platforms, but only Linux tested.
+
+1. Install all requirements with:
+
+```bash
+pip install g4f[webview]
+```
+
+2. Follow the OS specific steps here:
+ [pywebview installation](https://pywebview.flowrl.com/guide/installation.html#dependencies)
+
+3. Run the app with:
+
+```python
+from g4f.gui.webview import run_webview
+run_webview(debug=True)
+```
+or execute the following command:
+```bash
+python -m g4f.gui.webview -debug
+```
+
+#### Webserver
To start the web interface, type the following codes in python:
@@ -237,7 +263,7 @@ set G4F_PROXY=http://host:port
| [bing.com](https://bing.com/chat) | `g4f.Provider.Bing` | ❌ | ✔️ | ✔️ |  | ❌ |
| [chatgpt.ai](https://chatgpt.ai) | `g4f.Provider.ChatgptAi` | ❌ | ✔️ | ✔️ |  | ❌ |
| [liaobots.site](https://liaobots.site) | `g4f.Provider.Liaobots` | ✔️ | ✔️ | ✔️ |  | ❌ |
-| [chat.openai.com](https://chat.openai.com) | `g4f.Provider.OpenaiChat` | ✔️ | ✔️ | ✔️ |  | ✔️ |
+| [chat.openai.com](https://chat.openai.com) | `g4f.Provider.OpenaiChat` | ✔️ | ❌ | ✔️ |  | ✔️ |
| [raycast.com](https://raycast.com) | `g4f.Provider.Raycast` | ✔️ | ✔️ | ✔️ |  | ✔️ |
| [beta.theb.ai](https://beta.theb.ai) | `g4f.Provider.Theb` | ✔️ | ✔️ | ✔️ |  | ❌ |
| [you.com](https://you.com) | `g4f.Provider.You` | ✔️ | ✔️ | ✔️ |  | ❌ |
diff --git a/g4f/Provider/Bing.py b/g4f/Provider/Bing.py
index a1d14d8785..f8b06dd1f8 100644
--- a/g4f/Provider/Bing.py
+++ b/g4f/Provider/Bing.py
@@ -414,7 +414,7 @@ async def stream_generate(
image_request = await upload_image(
session,
image,
- "Balanced" if Tones.copilot == "Copilot" else tone,
+ "Balanced" if tone == Tones.copilot else tone,
headers
) if image else None
async with session.ws_connect(
diff --git a/g4f/gui/client/static/js/chat.v1.js b/g4f/gui/client/static/js/chat.v1.js
index bcef4a78a5..f9bc456852 100644
--- a/g4f/gui/client/static/js/chat.v1.js
+++ b/g4f/gui/client/static/js/chat.v1.js
@@ -240,26 +240,26 @@ async function add_message_chunk(message) {
}
}
-cameraInput?.addEventListener("click", (e) => {
- if (window?.pywebview) {
- e.preventDefault();
- pywebview.api.choose_file();
- }
-})
+// fileInput?.addEventListener("click", (e) => {
+// if (window?.pywebview) {
+// e.preventDefault();
+// pywebview.api.choose_file();
+// }
+// });
cameraInput?.addEventListener("click", (e) => {
if (window?.pywebview) {
e.preventDefault();
pywebview.api.take_picture();
}
-})
+});
imageInput?.addEventListener("click", (e) => {
if (window?.pywebview) {
e.preventDefault();
pywebview.api.choose_image();
}
-})
+});
const ask_gpt = async () => {
regenerate.classList.add(`regenerate-hidden`);
diff --git a/g4f/gui/server/api.py b/g4f/gui/server/api.py
index 3adb88f433..e7683812fb 100644
--- a/g4f/gui/server/api.py
+++ b/g4f/gui/server/api.py
@@ -19,12 +19,12 @@
filters=[["Image", "*.jpg", "*.jpeg", "*.png", "*.webp", "*.svg"]],
)
has_plyer = True
-except (ImportError, NameError):
+except ImportError:
has_plyer = False
try:
from android.runnable import run_on_ui_thread
- from android.storage import app_storage_path
- from android.permissions import request_permissions, Permission
+ import android.permissions
+ from android.permissions import Permission
from android.permissions import _RequestPermissionsManager
_RequestPermissionsManager.register_callback()
from .android_gallery import user_select_image
@@ -161,7 +161,7 @@ def set_selected(self, input_id: str = None):
def request_permissions(self):
if has_android:
- request_permissions([
+ android.permissions.request_permissions([
Permission.CAMERA,
Permission.READ_EXTERNAL_STORAGE,
Permission.WRITE_EXTERNAL_STORAGE
diff --git a/g4f/gui/webview.py b/g4f/gui/webview.py
index 36ad0e6002..b015dbed94 100644
--- a/g4f/gui/webview.py
+++ b/g4f/gui/webview.py
@@ -16,6 +16,7 @@
def run_webview(
debug: bool = False,
+ ssl: bool = True,
storage_path: str = None
):
if getattr(sys, 'frozen', False):
@@ -36,7 +37,7 @@ def run_webview(
private_mode=False,
storage_path=storage_path,
debug=debug,
- ssl=True
+ ssl=ssl
)
if __name__ == "__main__":
diff --git a/requirements.txt b/requirements.txt
index def8c7e35d..671b23945d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -14,11 +14,9 @@ platformdirs
fastapi
uvicorn
flask
-py-arkose-generator
-undetected-chromedriver>=3.5.5
brotli
beautifulsoup4
-setuptools
aiohttp_socks
-selenium-wire
-gpt4all
\ No newline at end of file
+gpt4all
+pywebview
+plyer
\ No newline at end of file
diff --git a/setup.py b/setup.py
index b6c106c4ed..fa997b5060 100644
--- a/setup.py
+++ b/setup.py
@@ -18,23 +18,24 @@
'all': [
"curl_cffi>=0.6.2",
"certifi",
- "async-property", # openai
- "py-arkose-generator", # openai
+ #"py-arkose-generator", # not working
"browser_cookie3", # get_cookies
"PyExecJS", # GptForLove
"duckduckgo-search>=5.0" ,# internet.search
"beautifulsoup4", # internet.search and bing.create_images
"brotli", # openai
- "platformdirs", # webdriver
- "undetected-chromedriver>=3.5.5", # webdriver
- "setuptools", # webdriver
+ #"undetected-chromedriver>=3.5.5", # webdriver
+ #"setuptools", # webdriver
+ "pywebview",
+ "platformdirs",
+ "plyer",
"aiohttp_socks", # proxy
"pillow", # image
"cairosvg", # svg image
"werkzeug", "flask", # gui
"loguru", "fastapi",
"uvicorn", "nest_asyncio", # api
- "selenium-wire"
+ #"selenium-wire"
],
"image": [
"pillow",
@@ -47,6 +48,11 @@
"setuptools",
"selenium-wire"
],
+ "webview": [
+ "webview",
+ "platformdirs",
+ "plyer"
+ ],
"openai": [
"async-property",
"py-arkose-generator",
|
https://api.github.com/repos/xtekky/gpt4free/pulls/1742
|
2024-03-22T11:49:17Z
|
2024-03-22T12:01:48Z
|
2024-03-22T12:01:48Z
|
2024-03-22T12:01:55Z
| 2,335
|
xtekky/gpt4free
| 38,112
|
|
Deepspeed script changes to align with transformers>=4.30.0 for LoRA training
|
diff --git a/docs/training.md b/docs/training.md
index df16922a47..628557ad5a 100644
--- a/docs/training.md
+++ b/docs/training.md
@@ -30,7 +30,7 @@ torchrun --nproc_per_node=4 --master_port=9778 fastchat/train/train_flant5.py \
After training, please use our post-processing [function](https://github.com/lm-sys/FastChat/blob/55051ad0f23fef5eeecbda14a2e3e128ffcb2a98/fastchat/utils.py#L166-L185) to update the saved model weight. Additional discussions can be found [here](https://github.com/lm-sys/FastChat/issues/643).
### Fine-tuning using (Q)LoRA
-You can use the following command to train Vicuna-7B using QLoRA using ZeRO2. Note that ZeRO3 is not currently supported with QLoRA but ZeRO3 does support LoRA, which has a reference configuraiton under `playground/deepspeed_config_s3.json`.
+You can use the following command to train Vicuna-7B using QLoRA using ZeRO2. Note that ZeRO3 is not currently supported with QLoRA but ZeRO3 does support LoRA, which has a reference configuraiton under playground/deepspeed_config_s3.json. To use QLoRA, you must have bitsandbytes>=0.39.0 and transformers>=4.30.0 installed.
```bash
deepspeed train_lora.py \
--model_name_or_path ~/model_weights/llama-7b \
diff --git a/fastchat/train/train_lora.py b/fastchat/train/train_lora.py
index 67a71f2e62..9481fa788b 100644
--- a/fastchat/train/train_lora.py
+++ b/fastchat/train/train_lora.py
@@ -103,13 +103,14 @@ def train():
) = parser.parse_args_into_dataclasses()
device_map = None
+ world_size = int(os.environ.get("WORLD_SIZE", 1))
+ ddp = world_size != 1
if lora_args.q_lora:
- world_size = int(os.environ.get("WORLD_SIZE", 1))
- device_map = (
- {"": int(os.environ.get("LOCAL_RANK") or 0)} if world_size != 1 else None
- )
+ device_map = {"": int(os.environ.get("LOCAL_RANK") or 0)} if ddp else None
if len(training_args.fsdp) > 0 or deepspeed.is_deepspeed_zero3_enabled():
- logging.warn("FSDP and ZeRO3 are both currently incompatible with QLoRA.")
+ logging.warning(
+ "FSDP and ZeRO3 are both currently incompatible with QLoRA."
+ )
compute_dtype = (
torch.float16
@@ -143,7 +144,7 @@ def train():
model = prepare_model_for_kbit_training(
model, use_gradient_checkpointing=training_args.gradient_checkpointing
)
- if torch.cuda.device_count() > 1:
+ if not ddp and torch.cuda.device_count() > 1:
# keeps Trainer from trying its own DataParallelism when more than 1 gpu is available
model.is_parallelizable = True
model.model_parallel = True
@@ -178,7 +179,7 @@ def train():
trainer.save_state()
# check if zero3 mode enabled
- if trainer.hf_deepspeed_config_orig.is_zero3():
+ if deepspeed.is_deepspeed_zero3_enabled():
# use deepspeed engine internal function to gather state dict
# state_dict_zero3 contains whole parameters of base and lora adapters
# we will not extract lora parameters since peft save_pretrained will do that
diff --git a/playground/deepspeed_config_s2.json b/playground/deepspeed_config_s2.json
index aa350bb02c..3113cd3655 100644
--- a/playground/deepspeed_config_s2.json
+++ b/playground/deepspeed_config_s2.json
@@ -1,21 +1,12 @@
{
- "zero_optimization": {
- "stage": 2,
- "offload_optimizer": {
- "device": "cpu"
- },
- "contiguous_gradients": true,
- "overlap_comm": true
+ "zero_optimization": {
+ "stage": 2,
+ "offload_optimizer": {
+ "device": "cpu"
},
- "optimizer": {
- "type": "AdamW",
- "params": {
- "lr": "auto",
- "betas": "auto",
- "eps": "auto",
- "weight_decay": "auto"
- }
+ "contiguous_gradients": true,
+ "overlap_comm": true
},
- "train_micro_batch_size_per_gpu": "auto",
- "gradient_accumulation_steps": "auto"
- }
\ No newline at end of file
+ "train_micro_batch_size_per_gpu": "auto",
+ "gradient_accumulation_steps": "auto"
+}
\ No newline at end of file
diff --git a/playground/deepspeed_config_s3.json b/playground/deepspeed_config_s3.json
index 629bfd8ade..07f4b16a66 100644
--- a/playground/deepspeed_config_s3.json
+++ b/playground/deepspeed_config_s3.json
@@ -1,4 +1,12 @@
{
+ "fp16": {
+ "enabled": "auto",
+ "loss_scale": 0,
+ "loss_scale_window": 1000,
+ "initial_scale_power": 16,
+ "hysteresis": 2,
+ "min_loss_scale": 1
+ },
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
@@ -11,24 +19,14 @@
},
"overlap_comm": true,
"contiguous_gradients": true,
- "sub_group_size": "auto",
- "reduce_bucket_size": "auto",
- "stage3_prefetch_bucket_size": "auto",
- "stage3_param_persistence_threshold": "auto",
- "stage3_max_live_parameters": "auto",
- "stage3_max_reuse_distance": "auto",
+ "stage3_max_live_parameters" : 1e9,
+ "stage3_max_reuse_distance" : 1e9,
+ "stage3_prefetch_bucket_size" : 5e8,
+ "stage3_param_persistence_threshold" : 1e6,
+ "sub_group_size" : 1e12,
"stage3_gather_16bit_weights_on_model_save": true
},
- "optimizer": {
- "type": "AdamW",
- "params": {
- "lr": "auto",
- "betas": "auto",
- "eps": "auto",
- "weight_decay": "auto"
- }
- },
"train_batch_size": "auto",
- "micro_batch_size_per_gpu": "auto",
+ "train_micro_batch_size_per_gpu": "auto",
"gradient_accumulation_steps": "auto"
}
\ No newline at end of file
|
## Why are these changes needed?
Since QLoRA requires at least transformers==4.30.0 and bitsandbytes==0.39.0, we propose some changes to the deepspeed configs and the `train_lora` script to fix it.
We align deepspeed and the trainer with transformers>=4.30.0 by doing the following (a minimal config shape is sketched after this list):
1. We must remove the optimizer from the deepspeed config. Check [here](https://github.com/huggingface/transformers/issues/24359) for more information about supported combinations of optimizer and scheduler.
2. We also add the `fp16` block to the config to fix an input tensor mismatch [issue](https://github.com/microsoft/DeepSpeed/issues/3654).
This was tested on a distributed setup of 8x 12GB GPUs with LLaMA-7B taking up ~5GB of VRAM, and training with an effective batch size of 1 taking around 10GB of VRAM.
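For illustration, here is the rough shape of a ZeRO-3 config that satisfies both constraints, written as a Python dict (a hedged sketch, not the repo's exact `playground/deepspeed_config_s3.json`): the `fp16` block is present, and there is no `optimizer` block, so the HF Trainer supplies its own AdamW.

```python
# Hypothetical minimal config shape; keys mirror the diff, other values are placeholders.
zero3_config = {
    "fp16": {"enabled": "auto"},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu"},
        "stage3_gather_16bit_weights_on_model_save": True,
    },
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}
```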
## Related issue number (if applicable)
#1741
## Checks
- [x] I've run `format.sh` to lint the changes in this PR.
- [x] I've included any doc changes needed.
- [ ] I've made sure the relevant tests are passing (if applicable).
|
https://api.github.com/repos/lm-sys/FastChat/pulls/1900
|
2023-07-08T19:26:35Z
|
2023-07-09T04:41:44Z
|
2023-07-09T04:41:44Z
|
2023-07-12T09:56:07Z
| 1,629
|
lm-sys/FastChat
| 41,404
|
ref(stacktrace-link): Drop usage of mobile_frame and fix tag
|
diff --git a/src/sentry/api/endpoints/project_stacktrace_link.py b/src/sentry/api/endpoints/project_stacktrace_link.py
index 38498079aa754..e9bb0e7d0fd25 100644
--- a/src/sentry/api/endpoints/project_stacktrace_link.py
+++ b/src/sentry/api/endpoints/project_stacktrace_link.py
@@ -51,20 +51,18 @@ def get_link(
return result
-# This is to support mobile languages with non-fully-qualified file pathing.
-# We attempt to 'munge' the proper source-relative filepath based on the stackframe data.
-def generate_mobile_frame(parameters: Dict[str, Optional[str]]) -> Dict[str, str]:
- abs_path = parameters.get("absPath")
- module = parameters.get("module")
- package = parameters.get("package")
- frame = {}
- if abs_path:
- frame["abs_path"] = abs_path
- if module:
- frame["module"] = module
- if package:
- frame["package"] = package
- return frame
+def generate_context(parameters: Dict[str, Optional[str]]) -> Dict[str, Optional[str]]:
+ return {
+ "file": parameters.get("file"),
+ # XXX: Temp change to support try_path_munging until refactored
+ "filename": parameters.get("file"),
+ "commit_id": parameters.get("commitId"),
+ "platform": parameters.get("platform"),
+ "sdk_name": parameters.get("sdkName"),
+ "abs_path": parameters.get("absPath"),
+ "module": parameters.get("module"),
+ "package": parameters.get("package"),
+ }
def set_top_tags(
@@ -80,10 +78,10 @@ def set_top_tags(
"organization.early_adopter", bool(project.organization.flags.early_adopter.is_set)
)
scope.set_tag("stacktrace_link.platform", ctx["platform"])
- scope.set_tag("stacktrace_link.has_code_mappings", has_code_mappings)
+ scope.set_tag("stacktrace_link.code_mappings", has_code_mappings)
if ctx["platform"] == "python":
# This allows detecting a file that belongs to Python's 3rd party modules
- scope.set_tag("stacktrace_link.in_app", "site-packages" in str(ctx["file"]))
+ scope.set_tag("stacktrace_link.in_app", "site-packages" not in str(ctx["file"]))
except Exception:
# If errors arises we can still proceed
logger.exception("We failed to set a tag.")
@@ -92,13 +90,11 @@ def set_top_tags(
def try_path_munging(
config: RepositoryProjectPathConfig,
filepath: str,
- mobile_frame: Mapping[str, Optional[str]],
ctx: Mapping[str, Optional[str]],
) -> Dict[str, str]:
result: Dict[str, str] = {}
- mobile_frame["filename"] = filepath # type: ignore
munged_frames = munged_filename_and_frames(
- str(ctx["platform"]), [mobile_frame], "munged_filename", sdk_name=str(ctx["sdk_name"])
+ str(ctx["platform"]), [ctx], "munged_filename", sdk_name=str(ctx["sdk_name"])
)
if munged_frames:
munged_frame: Mapping[str, Mapping[str, str]] = munged_frames[1][0]
@@ -132,18 +128,11 @@ class ProjectStacktraceLinkEndpoint(ProjectEndpoint): # type: ignore
"""
def get(self, request: Request, project: Project) -> Response:
- # should probably feature gate
- filepath = request.GET.get("file")
+ ctx = generate_context(request.GET)
+ filepath = ctx.get("file")
if not filepath:
return Response({"detail": "Filepath is required"}, status=400)
- ctx = {
- "file": request.GET.get("file"),
- "commit_id": request.GET.get("commitId"),
- "platform": request.GET.get("platform"),
- "sdk_name": request.GET.get("sdkName"),
- }
- mobile_frame = generate_mobile_frame(request.GET)
result: JSONData = {"config": None, "sourceUrl": None}
integrations = Integration.objects.filter(organizations=project.organization_id)
@@ -162,6 +151,7 @@ def get(self, request: Request, project: Project) -> Response:
configs = RepositoryProjectPathConfig.objects.filter(
project=project, organization_integration__isnull=False
)
+ derived = False
matched_code_mappings = []
with configure_scope() as scope:
set_top_tags(scope, project, ctx, len(configs) > 0)
@@ -176,13 +166,13 @@ def get(self, request: Request, project: Project) -> Response:
filepath.startswith(config.stack_root)
and config.automatically_generated is True
):
- scope.set_tag("stacktrace_link.automatically_generated", True)
+ derived = True
outcome = get_link(config, filepath, ctx["commit_id"])
# In some cases the stack root matches and it can either be that we have
# an invalid code mapping or that munging is expect it to work
if not outcome.get("sourceUrl"):
- munging_outcome = try_path_munging(config, filepath, mobile_frame, ctx)
+ munging_outcome = try_path_munging(config, filepath, ctx)
# If we failed to munge we should keep the original outcome
if munging_outcome:
outcome = munging_outcome
@@ -202,7 +192,7 @@ def get(self, request: Request, project: Project) -> Response:
# Post-processing before exiting scope context
found: bool = result["sourceUrl"] is not None
scope.set_tag("stacktrace_link.found", found)
-
+ scope.set_tag("stacktrace_link.auto_derived", derived)
if matched_code_mappings:
# Any code mapping that matches and its results will be returned
result["matched_code_mappings"] = matched_code_mappings
|
https://api.github.com/repos/getsentry/sentry/pulls/41733
|
2022-11-24T13:55:28Z
|
2022-11-24T17:11:46Z
|
2022-11-24T17:11:46Z
|
2023-05-17T22:12:52Z
| 1,335
|
getsentry/sentry
| 44,146
|
|
Fix printing, take two.
|
diff --git a/css/chat.css b/css/chat.css
index fcf19ee03b..45a518bc56 100644
--- a/css/chat.css
+++ b/css/chat.css
@@ -111,4 +111,16 @@ div.svelte-362y77>*, div.svelte-362y77>.form>* {
display: flex;
flex-direction: column-reverse;
}
+
+ .message {
+ break-inside: avoid;
+ }
+
+ .gradio-container {
+ overflow: visible;
+ }
+
+ .tab-nav {
+ display: none !important;
+ }
}
|
It wasn't printing longer chats correctly because a parent still had `overflow: hidden;` set.
I also fixed mid-message page breaks as [suggested](https://github.com/oobabooga/text-generation-webui/pull/2793#issuecomment-1600951892), and removed the gap at the top (the tabs were in the way).
|
https://api.github.com/repos/oobabooga/text-generation-webui/pulls/2810
|
2023-06-22T05:05:20Z
|
2023-06-22T19:06:49Z
|
2023-06-22T19:06:49Z
|
2023-06-27T23:41:29Z
| 143
|
oobabooga/text-generation-webui
| 26,245
|
Update README.md
|
diff --git a/README.md b/README.md
index 5014007227..9f9c62ea0b 100644
--- a/README.md
+++ b/README.md
@@ -2,8 +2,8 @@
| [**Demo**](https://chat.lmsys.org/) | [**Discord**](https://discord.gg/HSWAKCrnFx) | [**X**](https://x.com/lmsysorg) |
FastChat is an open platform for training, serving, and evaluating large language model based chatbots.
-- FastChat powers Chatbot Arena (https://chat.lmsys.org/), serving over 4 million chat requests for 30+ LLMs.
-- Arena has collected over 80K human votes from side-by-side LLM battles to compile an online [LLM Elo leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
+- FastChat powers Chatbot Arena (https://chat.lmsys.org/), serving over 5 million chat requests for 30+ LLMs.
+- Arena has collected over 100K human votes from side-by-side LLM battles to compile an online [LLM Elo leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
FastChat's core features include:
- The training and evaluation code for state-of-the-art models (e.g., Vicuna, MT-Bench).
|
https://api.github.com/repos/lm-sys/FastChat/pulls/2679
|
2023-11-13T07:31:15Z
|
2023-11-13T07:31:24Z
|
2023-11-13T07:31:24Z
|
2023-11-13T07:31:27Z
| 311
|
lm-sys/FastChat
| 41,566
|
|
Add google/gemma-7b-it model to model list
|
diff --git a/g4f/Provider/HuggingChat.py b/g4f/Provider/HuggingChat.py
index 9644880c81..52c5ae31b4 100644
--- a/g4f/Provider/HuggingChat.py
+++ b/g4f/Provider/HuggingChat.py
@@ -14,6 +14,7 @@ class HuggingChat(AsyncGeneratorProvider, ProviderModelMixin):
working = True
default_model = "meta-llama/Llama-2-70b-chat-hf"
models = [
+ "google/gemma-7b-it",
"mistralai/Mixtral-8x7B-Instruct-v0.1",
"meta-llama/Llama-2-70b-chat-hf",
"NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
|
https://api.github.com/repos/xtekky/gpt4free/pulls/1740
|
2024-03-22T05:40:02Z
|
2024-03-22T12:02:24Z
|
2024-03-22T12:02:24Z
|
2024-03-22T12:03:50Z
| 192
|
xtekky/gpt4free
| 38,068
|
|
Fixed #1618 awkward language during email problems
|
diff --git a/AUTHORS.md b/AUTHORS.md
index 21a6e7773d4..04f5b446f3e 100644
--- a/AUTHORS.md
+++ b/AUTHORS.md
@@ -237,6 +237,7 @@ Authors
* [Stefan Weil](https://github.com/stweil)
* [Steve Desmond](https://github.com/stevedesmond-ca)
* [sydneyli](https://github.com/sydneyli)
+* [taixx046](https://github.com/taixx046)
* [Tan Jay Jun](https://github.com/jayjun)
* [Tapple Gao](https://github.com/tapple)
* [Telepenin Nikolay](https://github.com/telepenin)
diff --git a/certbot/CHANGELOG.md b/certbot/CHANGELOG.md
index 9eb7cd9e81c..cd261d16609 100644
--- a/certbot/CHANGELOG.md
+++ b/certbot/CHANGELOG.md
@@ -18,6 +18,7 @@ Certbot adheres to [Semantic Versioning](https://semver.org/).
### Changed
+* Reorganized error message when a user entered an invalid email address.
* Stop asking interactively if the user would like to add a redirect.
* `mock` dependency is now conditional on Python 2 in all of our packages.
* Deprecate certbot-auto on Gentoo, macOS, and FreeBSD.
diff --git a/certbot/certbot/display/ops.py b/certbot/certbot/display/ops.py
index 21d169a5523..2c3503eabd9 100644
--- a/certbot/certbot/display/ops.py
+++ b/certbot/certbot/display/ops.py
@@ -30,7 +30,7 @@ def get_email(invalid=False, optional=True):
"""
invalid_prefix = "There seem to be problems with that address. "
- msg = "Enter email address (used for urgent renewal and security notices)"
+ msg = "Enter email address (used for urgent renewal and security notices)\n"
unsafe_suggestion = ("\n\nIf you really want to skip this, you can run "
"the client with --register-unsafely-without-email "
"but make sure you then backup your account key from "
@@ -64,7 +64,7 @@ def get_email(invalid=False, optional=True):
if util.safe_email(email):
return email
if suggest_unsafe:
- msg += unsafe_suggestion
+ msg = unsafe_suggestion + msg
suggest_unsafe = False # add this message at most once
invalid = bool(email)
|
## Pull Request Checklist
- [x] If the change being made is to a [distributed component](https://certbot.eff.org/docs/contributing.html#code-components-and-layout), edit the `master` section of `certbot/CHANGELOG.md` to include a description of the change being made.
- [x] Include your name in `AUTHORS.md` if you like.
Fixes #1618
Below is a preview of the reorganized message.

|
https://api.github.com/repos/certbot/certbot/pulls/7938
|
2020-04-23T20:35:14Z
|
2020-04-28T01:33:09Z
|
2020-04-28T01:33:08Z
|
2020-04-28T01:33:09Z
| 599
|
certbot/certbot
| 1,465
|
Show exception and stack trace on startup errors
|
diff --git a/mitmproxy/addons/errorcheck.py b/mitmproxy/addons/errorcheck.py
index 4dccffac53..9b6eff66a1 100644
--- a/mitmproxy/addons/errorcheck.py
+++ b/mitmproxy/addons/errorcheck.py
@@ -29,8 +29,8 @@ async def shutdown_if_errored(self):
if self.logger.has_errored:
plural = "s" if len(self.logger.has_errored) > 1 else ""
if self.repeat_errors_on_stderr:
- msg = "\n".join(r.msg for r in self.logger.has_errored)
- print(f"Error{plural} logged during startup: {msg}", file=sys.stderr)
+ msg = "\n".join(self.logger.format(r) for r in self.logger.has_errored)
+ print(f"Error{plural} logged during startup:\n{msg}", file=sys.stderr)
else:
print(
f"Error{plural} logged during startup, exiting...", file=sys.stderr
|
#### Description
It's hard to debug errors raised in addon scripts during startup, as only a generic message is output on the console. Formatting errors that occurred during startup with logger.format(), instead of only displaying LogRecord.msg, improves the output when an exception is present by including the stack trace. An additional newline was added for better readability.
Comparison with the load_error.py test script, before:
$ mitmproxy -s test/mitmproxy/data/addonscripts/load_error.py
Error logged during startup: Addon error:
After:
$ mitmproxy -s test/mitmproxy/data/addonscripts/load_error.py
Error logged during startup:
Addon error:
Traceback (most recent call last):
File "test/mitmproxy/data/addonscripts/load_error.py", line 2, in load
raise ValueError()
ValueError
Relates to issue #5935 and PR #6020
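A minimal stdlib sketch of the underlying difference (illustrative only; mitmproxy's `logger.format()` behaves analogously for records carrying `exc_info`):

```python
import logging
import sys

formatter = logging.Formatter("%(message)s")
try:
    raise ValueError()
except ValueError:
    record = logging.LogRecord("addon", logging.ERROR, __file__, 0,
                               "Addon error:", None, sys.exc_info())

print(record.msg)                # just "Addon error:" -- the traceback is lost
print(formatter.format(record))  # message plus the full formatted traceback
```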
#### Checklist
- [ ] I have updated tests where applicable.
- I think the value of extending `test_errorcheck.py` for this behavior is low and tightly couples the test to `logger.format()`
- [ ] I have added an entry to the CHANGELOG.
- #6020 didn't introduce a changelog entry, so I figured this won't need one either
|
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/6491
|
2023-11-18T04:51:52Z
|
2023-11-18T09:03:50Z
|
2023-11-18T09:03:50Z
|
2023-11-18T09:51:07Z
| 227
|
mitmproxy/mitmproxy
| 27,667
|
fixed minor typo: "on the on the" -> "on the"
|
diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md
index 05f8944a5..dd5879616 100644
--- a/CppCoreGuidelines.md
+++ b/CppCoreGuidelines.md
@@ -13840,7 +13840,7 @@ This implies that we cannot safely refer to local objects in `use()` from the th
##### Note
-Make "immortal threads" globals, put them in an enclosing scope, or put them on the on the free store rather than `detach()`.
+Make "immortal threads" globals, put them in an enclosing scope, or put them on the free store rather than `detach()`.
[don't `detach`](#Rconc-detached_thread).
##### Note
|
https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/1103
|
2017-12-12T20:09:17Z
|
2017-12-13T17:52:13Z
|
2017-12-13T17:52:13Z
|
2017-12-13T17:52:13Z
| 164
|
isocpp/CppCoreGuidelines
| 15,357
|
|
chore(logging): Fix `_handle_regression` log messages
|
diff --git a/src/sentry/event_manager.py b/src/sentry/event_manager.py
index fc7c7cb6336d4a..d5b51fc6d4af52 100644
--- a/src/sentry/event_manager.py
+++ b/src/sentry/event_manager.py
@@ -1846,7 +1846,7 @@ def _handle_regression(group: Group, event: Event, release: Optional[Release]) -
if not group.is_resolved():
if should_log_extra_info:
logger.info(
- "_handle_regression: group.is_resolved returned true", extra={**logging_details}
+ "_handle_regression: group.is_resolved() returned False", extra={**logging_details}
)
return None
@@ -1855,21 +1855,24 @@ def _handle_regression(group: Group, event: Event, release: Optional[Release]) -
elif GroupResolution.has_resolution(group, release):
if should_log_extra_info:
logger.info(
- "_handle_regression: group.is_resolved returned true", extra={**logging_details}
+ "_handle_regression: GroupResolution.has_resolution() returned True",
+ extra={**logging_details},
)
return None
elif has_pending_commit_resolution(group):
if should_log_extra_info:
logger.info(
- "_handle_regression: group.is_resolved returned true", extra={**logging_details}
+ "_handle_regression: has_pending_commit_resolution() returned True",
+ extra={**logging_details},
)
return None
if not plugin_is_regression(group, event):
if should_log_extra_info:
logger.info(
- "_handle_regression: group.is_resolved returned true", extra={**logging_details}
+ "_handle_regression: plugin_is_regression() returned False",
+ extra={**logging_details},
)
return None
|
This updates some log messages in `_handle_regression` to better reflect the conditions under which they're emitted. (All four were identical to the first one, which was making my linter mad, so I assume this was just a copy-paste error.)
|
https://api.github.com/repos/getsentry/sentry/pulls/54914
|
2023-08-17T00:47:20Z
|
2023-08-17T17:08:27Z
|
2023-08-17T17:08:27Z
|
2023-09-02T00:02:41Z
| 390
|
getsentry/sentry
| 44,541
|
test_models: fuzz test panda and CarState
|
diff --git a/Jenkinsfile b/Jenkinsfile
index a7f272cd807de4..9868777dfbccc5 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -268,7 +268,7 @@ node {
'car tests': {
pcStage("car tests") {
sh label: "build", script: "selfdrive/manager/build.py"
- sh label: "run car tests", script: "cd selfdrive/car/tests && MAX_EXAMPLES=100 INTERNAL_SEG_CNT=250 FILEREADER_CACHE=1 \
+ sh label: "run car tests", script: "cd selfdrive/car/tests && MAX_EXAMPLES=300 INTERNAL_SEG_CNT=300 FILEREADER_CACHE=1 \
INTERNAL_SEG_LIST=selfdrive/car/tests/test_models_segs.txt pytest test_models.py test_car_interfaces.py"
}
},
diff --git a/conftest.py b/conftest.py
index 3c566e36728117..6792bd0c3d38f3 100644
--- a/conftest.py
+++ b/conftest.py
@@ -12,6 +12,17 @@ def pytest_sessionstart(session):
session.config.option.randomly_reorganize = False
[email protected](hookwrapper=True, trylast=True)
+def pytest_runtest_call(item):
+ # ensure we run as a hook after capturemanager's
+ if item.get_closest_marker("nocapture") is not None:
+ capmanager = item.config.pluginmanager.getplugin("capturemanager")
+ with capmanager.global_and_fixture_disabled():
+ yield
+ else:
+ yield
+
+
@pytest.fixture(scope="function", autouse=True)
def openpilot_function_fixture():
starting_env = dict(os.environ)
@@ -58,7 +69,8 @@ def pytest_collection_modifyitems(config, items):
@pytest.hookimpl(trylast=True)
def pytest_configure(config):
- config_line = (
- "xdist_group_class_property: group tests by a property of the class that contains them"
- )
+ config_line = "xdist_group_class_property: group tests by a property of the class that contains them"
+ config.addinivalue_line("markers", config_line)
+
+ config_line = "nocapture: don't capture test output"
config.addinivalue_line("markers", config_line)
diff --git a/selfdrive/car/tests/test_models.py b/selfdrive/car/tests/test_models.py
index 2103b6ccced8b2..e9c2a4ecd51ae4 100755
--- a/selfdrive/car/tests/test_models.py
+++ b/selfdrive/car/tests/test_models.py
@@ -6,10 +6,12 @@
import random
import unittest
from collections import defaultdict, Counter
+import hypothesis.strategies as st
+from hypothesis import Phase, given, settings
from typing import List, Optional, Tuple
from parameterized import parameterized_class
-from cereal import log, car
+from cereal import messaging, log, car
from openpilot.common.basedir import BASEDIR
from openpilot.common.params import Params
from openpilot.common.realtime import DT_CTRL
@@ -33,6 +35,7 @@
JOB_ID = int(os.environ.get("JOB_ID", "0"))
INTERNAL_SEG_LIST = os.environ.get("INTERNAL_SEG_LIST", "")
INTERNAL_SEG_CNT = int(os.environ.get("INTERNAL_SEG_CNT", "0"))
+MAX_EXAMPLES = int(os.environ.get("MAX_EXAMPLES", "50"))
def get_test_cases() -> List[Tuple[str, Optional[CarTestRoute]]]:
@@ -67,6 +70,7 @@ class TestCarModelBase(unittest.TestCase):
ci: bool = True
can_msgs: List[capnp.lib.capnp._DynamicStructReader]
+ fingerprint: dict[int, dict[int, int]]
elm_frame: Optional[int]
car_safety_mode_frame: Optional[int]
@@ -105,7 +109,7 @@ def setUpClass(cls):
can_msgs = []
cls.elm_frame = None
cls.car_safety_mode_frame = None
- fingerprint = gen_empty_fingerprint()
+ cls.fingerprint = gen_empty_fingerprint()
experimental_long = False
for msg in lr:
if msg.which() == "can":
@@ -113,7 +117,7 @@ def setUpClass(cls):
if len(can_msgs) <= FRAME_FINGERPRINT:
for m in msg.can:
if m.src < 64:
- fingerprint[m.src][m.address] = len(m.dat)
+ cls.fingerprint[m.src][m.address] = len(m.dat)
elif msg.which() == "carParams":
car_fw = msg.carParams.carFw
@@ -149,7 +153,7 @@ def setUpClass(cls):
cls.can_msgs = sorted(can_msgs, key=lambda msg: msg.logMonoTime)
cls.CarInterface, cls.CarController, cls.CarState = interfaces[cls.car_model]
- cls.CP = cls.CarInterface.get_params(cls.car_model, fingerprint, car_fw, experimental_long, docs=False)
+ cls.CP = cls.CarInterface.get_params(cls.car_model, cls.fingerprint, car_fw, experimental_long, docs=False)
assert cls.CP
assert cls.CP.carFingerprint == cls.car_model
@@ -297,6 +301,73 @@ def test_car_controller(car_control):
CC = car.CarControl.new_message(cruiseControl={'resume': True})
test_car_controller(CC)
+ # Skip stdout/stderr capture with pytest, causes elevated memory usage
+ @pytest.mark.nocapture
+ @settings(max_examples=MAX_EXAMPLES, deadline=None,
+ phases=(Phase.reuse, Phase.generate, Phase.shrink))
+ @given(data=st.data())
+ def test_panda_safety_carstate_fuzzy(self, data):
+ """
+ For each example, pick a random CAN message on the bus and fuzz its data,
+ checking for panda state mismatches.
+ """
+
+ if self.CP.dashcamOnly:
+ self.skipTest("no need to check panda safety for dashcamOnly")
+
+ valid_addrs = [(addr, bus, size) for bus, addrs in self.fingerprint.items() for addr, size in addrs.items()]
+ address, bus, size = data.draw(st.sampled_from(valid_addrs))
+
+ msg_strategy = st.binary(min_size=size, max_size=size)
+ msgs = data.draw(st.lists(msg_strategy, min_size=20))
+
+ CC = car.CarControl.new_message()
+
+ for dat in msgs:
+ # due to panda updating state selectively, only edges are expected to match
+ # TODO: warm up CarState with real CAN messages to check edge of both sources
+ # (eg. toyota's gasPressed is the inverse of a signal being set)
+ prev_panda_gas = self.safety.get_gas_pressed_prev()
+ prev_panda_brake = self.safety.get_brake_pressed_prev()
+ prev_panda_regen_braking = self.safety.get_regen_braking_prev()
+ prev_panda_vehicle_moving = self.safety.get_vehicle_moving()
+ prev_panda_cruise_engaged = self.safety.get_cruise_engaged_prev()
+ prev_panda_acc_main_on = self.safety.get_acc_main_on()
+
+ to_send = libpanda_py.make_CANPacket(address, bus, dat)
+ self.safety.safety_rx_hook(to_send)
+
+ can = messaging.new_message('can', 1)
+ can.can = [log.CanData(address=address, dat=dat, src=bus)]
+
+ CS = self.CI.update(CC, (can.to_bytes(),))
+
+ if self.safety.get_gas_pressed_prev() != prev_panda_gas:
+ self.assertEqual(CS.gasPressed, self.safety.get_gas_pressed_prev())
+
+ if self.safety.get_brake_pressed_prev() != prev_panda_brake:
+ # TODO: remove this exception once this mismatch is resolved
+ brake_pressed = CS.brakePressed
+ if CS.brakePressed and not self.safety.get_brake_pressed_prev():
+ if self.CP.carFingerprint in (HONDA.PILOT, HONDA.RIDGELINE) and CS.brake > 0.05:
+ brake_pressed = False
+
+ self.assertEqual(brake_pressed, self.safety.get_brake_pressed_prev())
+
+ if self.safety.get_regen_braking_prev() != prev_panda_regen_braking:
+ self.assertEqual(CS.regenBraking, self.safety.get_regen_braking_prev())
+
+ if self.safety.get_vehicle_moving() != prev_panda_vehicle_moving:
+ self.assertEqual(not CS.standstill, self.safety.get_vehicle_moving())
+
+ if not (self.CP.carName == "honda" and self.CP.carFingerprint not in HONDA_BOSCH):
+ if self.safety.get_cruise_engaged_prev() != prev_panda_cruise_engaged:
+ self.assertEqual(CS.cruiseState.enabled, self.safety.get_cruise_engaged_prev())
+
+ if self.CP.carName == "honda":
+ if self.safety.get_acc_main_on() != prev_panda_acc_main_on:
+ self.assertEqual(CS.cruiseState.available, self.safety.get_acc_main_on())
+
def test_panda_safety_carstate(self):
"""
Assert that panda safety matches openpilot's carState
|
Actually finding lots of legitimate mismatches between openpilot/CANParser and panda safety (see the fuzzing sketch after this list):
- CANParser issues:
- [x] Honda: `CANParser` maintains values from invalid msgs in `vl_all` when it shouldn't add them in the first place, causing bad brake press values in CS when it gets a valid msg - https://github.com/commaai/opendbc/pull/977
- [x] VW MQB DBC has `CHECKSUM` before `COUNTER`, leading to counter not always getting updated properly if checksum is invalid - https://github.com/commaai/opendbc/pull/977
- [x] CANParser's invalid counter counter isn't clipped, so it wraps around, and it also takes a while to go valid again after becoming invalid - https://github.com/commaai/opendbc/pull/976
- panda issues:
- [x] ~VW PQ: standstill checks don't match, panda doesn't check counter for `MSG_BREMSE_1` but `CANParser` does~ **dashcam!**
- [x] Ford CS checks standstill signal == 1, safety checks != 0, but signal is 2 bits - https://github.com/commaai/panda/pull/1725
- [x] panda safety doesn't check interceptor counter (Honda & Toyota) - ~https://github.com/commaai/panda/pull/1738~ https://github.com/commaai/panda/pull/1735
- [x] Hyundai brake pressed is 2 bits, safety bitmasks 0x2 instead of 0x3. Value 3 is never seen in the fleet data, so it would never be picked up with real routes. - https://github.com/commaai/panda/pull/1724
- [x] ~Tesla CS and safety check two different messages for `standstill`/`vehicle_moving`, probably because `ESP_B` isn't on both PT and the other bus. @robbederks can we just make openpilot use `DI_torque2`? This also fixes possible mismatches for the angle safety since it uses vehicle speed - https://github.com/commaai/panda/issues/1256~ **dashcam!**
- [x] Honda Bosch alt brake: added `BRAKE_MODULE` check to the same set as `POWERTRAIN_DATA` which always exists, so it randomly selects an address and only checks one message (https://github.com/commaai/panda/pull/649) - https://github.com/commaai/panda/pull/1746
- [x] dynamic rx check fields aren't reset on safety mode init - https://github.com/commaai/panda/pull/1767
- [x] Honda: prev brake switch value isn't reset on safety mode init - https://github.com/commaai/panda/pull/1781
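For reference, a minimal self-contained sketch of the fuzzing draw pattern the new test uses; the `(address, bus, size)` triples below are made-up stand-ins for the fingerprint-derived list:

```python
import hypothesis.strategies as st
from hypothesis import given, settings

# Hypothetical (address, bus, size) triples standing in for the real fingerprint.
VALID_ADDRS = [(0x1A0, 0, 8), (0x2B0, 0, 4)]

@settings(max_examples=10, deadline=None)
@given(data=st.data())
def test_fuzz_one_address(data):
    # Pick one CAN message on the bus, then generate a batch of payloads of
    # exactly that message's size, mirroring test_panda_safety_carstate_fuzzy.
    address, bus, size = data.draw(st.sampled_from(VALID_ADDRS))
    msgs = data.draw(st.lists(st.binary(min_size=size, max_size=size), min_size=20))
    for dat in msgs:
        assert len(dat) == size
```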
|
https://api.github.com/repos/commaai/openpilot/pulls/30443
|
2023-11-11T11:30:07Z
|
2023-12-19T09:18:54Z
|
2023-12-19T09:18:54Z
|
2023-12-20T00:33:07Z
| 2,082
|
commaai/openpilot
| 9,468
|
Finish docs for v3.0.0
|
diff --git a/CHANGELOG.md b/CHANGELOG.md
index d69956cb03..435b5723ab 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,7 +5,7 @@ This project adheres to [Semantic Versioning](https://semver.org/).
## [3.0.0.dev0](https://github.com/httpie/httpie/compare/2.6.0...master) (unreleased)
-- Drop support for Python 3.6. ([#1177](https://github.com/httpie/httpie/issues/1177))
+- Dropped support for Python 3.6. ([#1177](https://github.com/httpie/httpie/issues/1177))
- Improved startup time by 40%. ([#1211](https://github.com/httpie/httpie/pull/1211))
- Added support for nested JSON syntax. ([#1169](https://github.com/httpie/httpie/issues/1169))
- Added `httpie plugins` interface for plugin management. ([#566](https://github.com/httpie/httpie/issues/566))
@@ -15,12 +15,12 @@ This project adheres to [Semantic Versioning](https://semver.org/).
- Added support for _receiving_ multiple HTTP headers lines with the same name. ([#1207](https://github.com/httpie/httpie/issues/1207))
- Added support for basic JSON types on `--form`/`--multipart` when using JSON only operators (`:=`/`:=@`). ([#1212](https://github.com/httpie/httpie/issues/1212))
- Added support for automatically enabling `--stream` when `Content-Type` is `text/event-stream`. ([#376](https://github.com/httpie/httpie/issues/376))
-- Added support for displaying the total elapsed time throguh `--meta`/`-vv` or `--print=m`. ([#243](https://github.com/httpie/httpie/issues/243))
+- Added support for displaying the total elapsed time through `--meta`/`-vv` or `--print=m`. ([#243](https://github.com/httpie/httpie/issues/243))
- Added new `pie-dark`/`pie-light` (and `pie`) styles that match with [HTTPie for Web and Desktop](https://httpie.io/product). ([#1237](https://github.com/httpie/httpie/issues/1237))
- Added support for better error handling on DNS failures. ([#1248](https://github.com/httpie/httpie/issues/1248))
- Added support for storing prompted passwords in the local sessions. ([#1098](https://github.com/httpie/httpie/issues/1098))
- Added warnings about the `--ignore-stdin`, when there is no incoming data from stdin. ([#1255](https://github.com/httpie/httpie/issues/1255))
-- Broken plugins will no longer crash the whole application. ([#1204](https://github.com/httpie/httpie/issues/1204))
+- Fixed crashing due to broken plugins. ([#1204](https://github.com/httpie/httpie/issues/1204))
- Fixed auto addition of XML declaration to every formatted XML response. ([#1156](https://github.com/httpie/httpie/issues/1156))
- Fixed highlighting when `Content-Type` specifies `charset`. ([#1242](https://github.com/httpie/httpie/issues/1242))
- Fixed an unexpected crash when `--raw` is used with `--chunked`. ([#1253](https://github.com/httpie/httpie/issues/1253))
diff --git a/docs/README.md b/docs/README.md
index 0c1cd9463f..063ebd4b11 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -598,7 +598,7 @@ Content-Type: application/json
## JSON
-JSON is the *lingua franca* of modern web services and it is also the **implicit content type** HTTPie uses by default.
+JSON is the *lingua franca* of modern web services, and it is also the **implicit content type** HTTPie uses by default.
Simple example:
@@ -624,7 +624,7 @@ Host: pie.dev
If your command includes some data [request items](#request-items), they are serialized as a JSON object by default. HTTPie also automatically sets the following headers, both of which can be overwritten:
| Header | Value |
-| -------------: | ----------------------------- |
+|---------------:|-------------------------------|
| `Content-Type` | `application/json` |
| `Accept` | `application/json, */*;q=0.5` |
@@ -677,132 +677,106 @@ The `:=`/`:=@` syntax is JSON-specific. You can switch your request to `--form`
and string, float, and number values will continue to be serialized (as string form values).
Other JSON types, however, are not allowed with `--form` or `--multipart`.
-### Nested JSON fields
+### Nested JSON
-In the past (pre-3.0), HTTPie's data operators (`=`/`:=`) allowed you to
-directly create basic JSON objects right from your terminal. Though this
-functionality was limited to only top-level keys.
-
-```bash
-$ http --offline --print=B pie.dev/post \
- type=success
-```
-
-```json
-{
- "type": "success"
-}
-```
-
-For embedding more complex JSON objects, you needed to use the `:=` operator.
-
-```bash
-$ http --offline --print=B pie.dev/post \
- type=success \
- 'product:={"name":"something", "price":10}'
-```
-
-```json
-{
- "product": {
- "name": "something",
- "price": 10
- },
- "type": "success"
-}
-```
-
-Starting with 3.0, we have created a mini language in HTTPie's own syntax to
-build complex JSON objects with ease. This syntax was inspired by the [JSON form](https://www.w3.org/TR/html-json-forms/)
-proposal for HTML, though we have changed a lot of parts to offer the best experience.
+If your use case involves sending complex JSON objects as part of the request body,
+HTTPie can help you build them right from your terminal. You still use the existing
+data field operators (`=`/`:=`) but instead of specifying a top-level field name (like `key=value`), you specify a path declaration. This tells HTTPie where and how to put the given value inside of an object.
#### Introduction
-Let's start with a simple introduction, and build the JSON object we have seen in the example
-above:
+Let's start with a simple example, and build a simple search query:
```bash
$ http --offline --print=B pie.dev/post \
- type=success \
- product[name]=something \
- product[price]:=10
+ category=tools \
+ search[type]=id \
+ search[id]:=1
```
-With the new syntax, you can designate the path for the value. For example `product[name]` means
-create a new object under the `product` key, and set the `name` field of that object to the given
-value.
+In the example above, the `search[type]` is an instruction for creating an object called `search`, and setting the `type` field of it to the given value (`"id"`).
+
+Also note that, just as the regular syntax, you can use the `:=` operator to directly pass raw JSON values (e.g numbers in the case above).
```json
{
- "product": {
- "name": "something",
- "price": 10
- },
- "type": "success"
+ "category": "tools",
+ "search": {
+ "id": 1,
+ "type": "id"
+ }
}
```
-You can also build arrays, through `[]` suffix. Which means create a list, and append the value
-to that list:
+Building arrays is also possible, through `[]` suffix (an append operation). This tells HTTPie to create an array in the given path (if there is not one already), and append the given value to that array.
```bash
$ http --offline --print=B pie.dev/post \
- search[keywords][]=soda \
- search[keywords][]=fries
+ category=tools \
+ search[type]=keyword \
+ search[keywords][]=APIs \
+ search[keywords][]=CLI
```
```json
{
+ "category": "tools",
"search": {
"keywords": [
- "soda",
- "fries"
- ]
+ "APIs",
+ "CLI"
+ ],
+ "type": "keyword"
}
}
```
-If you want to specify the direct index, that is also supported:
+If you want to explicitly specify the position of elements inside an array,
+you can simply pass the desired index as the path:
```bash
$ http --offline --print=B pie.dev/post \
- search[keywords][0]=soda \
- search[keywords][1]=fries
+ category=tools \
+ search[type]=keyword \
+ search[keywords][1]=APIs \
+ search[keywords][2]=CLI
```
```json
{
+ "category": "tools",
"search": {
"keywords": [
- "soda",
- "fries"
- ]
+ "CLIs",
+ "API"
+ ],
+ "type": "keyword"
}
}
```
-You can also create 'sparse arrays' (arrays where you set 2 non-consecutive indexes), which
-the missing values gets nullified:
+If there are any missing indexes, HTTPie will nullify them in order to create a concrete object that can be sent:
```bash
$ http --offline --print=B pie.dev/post \
- search[keywords][2]=soda \
- search[keywords][5]=fries \
- search[keywords][]=fish
+ category=tools \
+ search[type]=platforms \
+ search[platforms][]=Terminal \
+ search[platforms][1]=Desktop \
+ search[platforms][3]=Mobile
```
```json
{
+ "category": "tools",
"search": {
- "keywords": [
- null,
- null,
- "soda",
+ "platforms": [
+ "Terminal",
+ "Desktop",
null,
- null,
- "fries",
- "fish"
- ]
+ "Mobile"
+ ],
+ "type": "platforms"
}
}
```
@@ -811,27 +785,29 @@ It is also possible to embed raw JSON to a nested structure, for example:
```bash
$ http --offline --print=B pie.dev/post \
- invitation[type]=meetup \
- 'invitation[dates]:=[2021, 2022, 2023, 2024]' \
- invitation[dates][]:=2025
+ category=tools \
+ search[type]=platforms \
+ 'search[platforms]:=["Terminal", "Desktop"]' \
+ search[platforms][]=Web \
+ search[platforms][]=Mobile
```
```json
{
- "invitation": {
- "dates": [
- 2021,
- 2022,
- 2023,
- 2024,
- 2025
+ "category": "tools",
+ "search": {
+ "platforms": [
+ "Terminal",
+ "Desktop",
+ "Web",
+ "Mobile"
],
- "type": "meetup"
+ "type": "platforms"
}
}
```
-And for the last, let's create a very deeply nested JSON object:
+And just to demonstrate all of these features together, let's create a very deeply nested JSON object:
```bash
$ http PUT pie.dev/put \
@@ -843,11 +819,11 @@ $ http PUT pie.dev/put \
very[nested][json][3][httpie][power][]=Amaze # Nested object
```
-#### Advanced Usage
+#### Advanced usage
-##### Escaping Behavior
+##### Escaping behavior
-Nested JSON syntax uses the same escaping rules [escaping rules](escaping-rules) as
+Nested JSON syntax uses the same [escaping rules](#escaping-rules) as
the terminal. There are 3 special characters, and 1 special token that you can escape.
If you want to send a bracket as is, escape it with a backslash (`\`):
@@ -907,7 +883,7 @@ $ http --offline --print=B pie.dev/post \
}
```
-##### Guiding Syntax Errors
+##### Guiding syntax errors
If you make a typo or forget to close a bracket, the errors will guide you to fix it. For example:
@@ -925,7 +901,7 @@ foo[baz][quux
You can follow to given instruction (adding a `]`) and repair your expression.
-##### Type Safety
+##### Type safety
Each container path (e.g `x[y][z]` in `x[y][z][1]`) has a certain type, which gets defined with
the first usage and can't be changed after that. If you try to do a key-based access to an array or
@@ -959,18 +935,12 @@ $ http --offline --print=B pie.dev/post \
### Raw JSON
-Please note that on some very complex JSON structures, manually building the JSON object right from the terminal
-might be more complicated compared to typing it on a file and directly sending it through HTTPie. Depending on your
-use case, some of the following examples can help:
+For very complex JSON structures, it may be more convenient to [pass it as raw request body](#raw-request-body), for example:
```bash
$ echo -n '{"hello": "world"}' | http POST pie.dev/post
```
-```bash
-$ http --raw '{"hello": "world"}' POST pie.dev/post
-```
-
```bash
$ http POST pie.dev/post < files/data.json
```
@@ -1253,7 +1223,7 @@ the [sessions](#sessions) feature.
The currently supported authentication schemes are Basic and Digest (see [auth plugins](#auth-plugins) for more). There are two flags that control authentication:
| Flag | Arguments |
-| ----------------: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+|------------------:|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `--auth, -a` | Pass either a `username:password` pair or a `token` as the argument. If the selected authenticated method requires username/password combination and if you only specify a username (`-a username`), you’ll be prompted for the password before the request is sent. To send an empty password, pass `username:`. The `username:password@hostname` URL syntax is supported as well (but credentials passed via `-a` have higher priority) |
| `--auth-type, -A` | Specify the auth mechanism. Possible values are `basic`, `digest`, `bearer` or the name of any [auth plugins](#auth-plugins) you have installed. The default value is `basic` so it can often be omitted |
@@ -1592,7 +1562,7 @@ The response headers are downloaded always, even if they are not part of the out
In addition to crafting structured [JSON](#json) and [forms](#forms) requests with the [request items](#request-items) syntax, you can provide a raw request body that will be sent without further processing.
These two approaches for specifying request data (i.e., structured and raw) cannot be combined.
-There’re three methods for passing raw request data: piping via `stdin`,
+There are three methods for passing raw request data: piping via `stdin`,
`--raw='data'`, and `@/file/path`.
### Redirected Input
|
https://api.github.com/repos/httpie/cli/pulls/1269
|
2022-01-14T19:07:37Z
|
2022-01-21T17:24:07Z
|
2022-01-21T17:24:07Z
|
2022-01-21T17:24:08Z
| 3,567
|
httpie/cli
| 33,863
|
|
Add save command to lovelace
|
diff --git a/homeassistant/components/lovelace/__init__.py b/homeassistant/components/lovelace/__init__.py
index 39644bd047b3..e40cb18a2b25 100644
--- a/homeassistant/components/lovelace/__init__.py
+++ b/homeassistant/components/lovelace/__init__.py
@@ -31,6 +31,7 @@
OLD_WS_TYPE_GET_LOVELACE_UI = 'frontend/lovelace_config'
WS_TYPE_GET_LOVELACE_UI = 'lovelace/config'
WS_TYPE_MIGRATE_CONFIG = 'lovelace/config/migrate'
+WS_TYPE_SAVE_CONFIG = 'lovelace/config/save'
WS_TYPE_GET_CARD = 'lovelace/config/card/get'
WS_TYPE_UPDATE_CARD = 'lovelace/config/card/update'
@@ -53,6 +54,13 @@
vol.Required('type'): WS_TYPE_MIGRATE_CONFIG,
})
+SCHEMA_SAVE_CONFIG = websocket_api.BASE_COMMAND_MESSAGE_SCHEMA.extend({
+ vol.Required('type'): WS_TYPE_SAVE_CONFIG,
+ vol.Required('config'): vol.Any(str, Dict),
+ vol.Optional('format', default=FORMAT_JSON):
+ vol.Any(FORMAT_JSON, FORMAT_YAML),
+})
+
SCHEMA_GET_CARD = websocket_api.BASE_COMMAND_MESSAGE_SCHEMA.extend({
vol.Required('type'): WS_TYPE_GET_CARD,
vol.Required('card_id'): str,
@@ -204,6 +212,13 @@ def migrate_config(fname: str) -> None:
yaml.save_yaml(fname, config)
+def save_config(fname: str, config, data_format: str = FORMAT_JSON) -> None:
+ """Save config to file."""
+ if data_format == FORMAT_YAML:
+ config = yaml.yaml_to_object(config)
+ yaml.save_yaml(fname, config)
+
+
def get_card(fname: str, card_id: str, data_format: str = FORMAT_YAML)\
-> JSON_TYPE:
"""Load a specific card config for id."""
@@ -422,13 +437,17 @@ async def async_setup(hass, config):
OLD_WS_TYPE_GET_LOVELACE_UI, websocket_lovelace_config,
SCHEMA_GET_LOVELACE_UI)
+ hass.components.websocket_api.async_register_command(
+ WS_TYPE_GET_LOVELACE_UI, websocket_lovelace_config,
+ SCHEMA_GET_LOVELACE_UI)
+
hass.components.websocket_api.async_register_command(
WS_TYPE_MIGRATE_CONFIG, websocket_lovelace_migrate_config,
SCHEMA_MIGRATE_CONFIG)
hass.components.websocket_api.async_register_command(
- WS_TYPE_GET_LOVELACE_UI, websocket_lovelace_config,
- SCHEMA_GET_LOVELACE_UI)
+ WS_TYPE_SAVE_CONFIG, websocket_lovelace_save_config,
+ SCHEMA_SAVE_CONFIG)
hass.components.websocket_api.async_register_command(
WS_TYPE_GET_CARD, websocket_lovelace_get_card, SCHEMA_GET_CARD)
@@ -516,6 +535,15 @@ async def websocket_lovelace_migrate_config(hass, connection, msg):
migrate_config, hass.config.path(LOVELACE_CONFIG_FILE))
+@websocket_api.async_response
+@handle_yaml_errors
+async def websocket_lovelace_save_config(hass, connection, msg):
+ """Save Lovelace UI configuration."""
+ return await hass.async_add_executor_job(
+ save_config, hass.config.path(LOVELACE_CONFIG_FILE), msg['config'],
+ msg.get('format', FORMAT_JSON))
+
+
@websocket_api.async_response
@handle_yaml_errors
async def websocket_lovelace_get_card(hass, connection, msg):
diff --git a/homeassistant/util/ruamel_yaml.py b/homeassistant/util/ruamel_yaml.py
index 8211252a516d..0659e3d80544 100644
--- a/homeassistant/util/ruamel_yaml.py
+++ b/homeassistant/util/ruamel_yaml.py
@@ -1,7 +1,7 @@
"""ruamel.yaml utility functions."""
import logging
import os
-from os import O_CREAT, O_TRUNC, O_WRONLY
+from os import O_CREAT, O_TRUNC, O_WRONLY, stat_result
from collections import OrderedDict
from typing import Union, List, Dict
@@ -104,13 +104,17 @@ def save_yaml(fname: str, data: JSON_TYPE) -> None:
yaml.indent(sequence=4, offset=2)
tmp_fname = fname + "__TEMP__"
try:
- file_stat = os.stat(fname)
+ try:
+ file_stat = os.stat(fname)
+ except OSError:
+ file_stat = stat_result(
+ (0o644, -1, -1, -1, -1, -1, -1, -1, -1, -1))
with open(os.open(tmp_fname, O_WRONLY | O_CREAT | O_TRUNC,
file_stat.st_mode), 'w', encoding='utf-8') \
as temp_file:
yaml.dump(data, temp_file)
os.replace(tmp_fname, fname)
- if hasattr(os, 'chown'):
+ if hasattr(os, 'chown') and file_stat.st_ctime > -1:
try:
os.chown(fname, file_stat.st_uid, file_stat.st_gid)
except OSError:
|
## Description:
Adds a save command to save the automatically created config to a file.
https://github.com/home-assistant/home-assistant-polymer/pull/2091
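For context, a hedged sketch of the websocket message a frontend could send to the new command, based on the `SCHEMA_SAVE_CONFIG` added above (the `id` and config contents are placeholders; `format` defaults to JSON):

```python
# Hypothetical payload matching SCHEMA_SAVE_CONFIG.
save_msg = {
    "id": 5,                          # websocket command id (placeholder)
    "type": "lovelace/config/save",   # WS_TYPE_SAVE_CONFIG
    "config": {"title": "Home", "views": []},
}
```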
**Related issue (if applicable):** fixes #<home-assistant issue number goes here>
**Pull request in [home-assistant.io](https://github.com/home-assistant/home-assistant.io) with documentation (if applicable):** home-assistant/home-assistant.io#<home-assistant.io PR number goes here>
## Example entry for `configuration.yaml` (if applicable):
```yaml
```
## Checklist:
- [ ] The code change is tested and works locally.
- [ ] Local tests pass with `tox`. **Your PR cannot be merged unless tests pass**
- [ ] There is no commented out code in this PR.
If user exposed functionality or configuration variables are added/changed:
- [ ] Documentation added/updated in [home-assistant.io](https://github.com/home-assistant/home-assistant.io)
If the code communicates with devices, web services, or third-party tools:
- [ ] New dependencies have been added to the `REQUIREMENTS` variable ([example][ex-requir]).
- [ ] New dependencies are only imported inside functions that use them ([example][ex-import]).
- [ ] New or updated dependencies have been added to `requirements_all.txt` by running `script/gen_requirements_all.py`.
- [ ] New files were added to `.coveragerc`.
If the code does not interact with devices:
- [ ] Tests have been added to verify that the new code works.
[ex-requir]: https://github.com/home-assistant/home-assistant/blob/dev/homeassistant/components/keyboard.py#L14
[ex-import]: https://github.com/home-assistant/home-assistant/blob/dev/homeassistant/components/keyboard.py#L54
|
https://api.github.com/repos/home-assistant/core/pulls/18655
|
2018-11-23T15:43:05Z
|
2018-11-23T21:56:58Z
|
2018-11-23T21:56:58Z
|
2018-11-24T09:21:48Z
| 1,119
|
home-assistant/core
| 38,924
|
fix acfun bangumi page
|
diff --git a/src/you_get/extractors/acfun.py b/src/you_get/extractors/acfun.py
index 4b45c5e962..772132fe6c 100644
--- a/src/you_get/extractors/acfun.py
+++ b/src/you_get/extractors/acfun.py
@@ -105,27 +105,42 @@ def acfun_download_by_vid(vid, title, output_dir='.', merge=True, info_only=Fals
pass
def acfun_download(url, output_dir='.', merge=True, info_only=False, **kwargs):
- assert re.match(r'http://[^\.]*\.*acfun\.[^\.]+/\D/\D\D(\d+)', url)
- html = get_content(url)
+ assert re.match(r'http://[^\.]*\.*acfun\.[^\.]+/(\D|bangumi)/\D\D(\d+)', url)
+
+ if re.match(r'http://[^\.]*\.*acfun\.[^\.]+/\D/\D\D(\d+)', url):
+ html = get_content(url)
+ title = r1(r'data-title="([^"]+)"', html)
+ if match1(url, r'_(\d+)$'): # current P
+ title = title + " " + r1(r'active">([^<]*)', html)
+ vid = r1('data-vid="(\d+)"', html)
+ up = r1('data-name="([^"]+)"', html)
+ # bangumi
+ elif re.match("http://[^\.]*\.*acfun\.[^\.]+/bangumi/ab(\d+)", url):
+ html = get_content(url)
+ title = match1(html, r'"newTitle"\s*:\s*"([^"]+)"')
+ if match1(url, r'_(\d+)$'): # current P
+ title = title + " " + r1(r'active">([^<]*)', html)
+ vid = match1(html, r'videoId="(\d+)"')
+ up = "acfun"
+ else:
+ raise NotImplemented
- title = r1(r'data-title="([^"]+)"', html)
+ assert title and vid
title = unescape_html(title)
title = escape_file_path(title)
- assert title
- if match1(url, r'_(\d+)$'): # current P
- title = title + " " + r1(r'active">([^<]*)', html)
-
- vid = r1('data-vid="(\d+)"', html)
- up = r1('data-name="([^"]+)"', html)
p_title = r1('active">([^<]+)', html)
title = '%s (%s)' % (title, up)
- if p_title: title = '%s - %s' % (title, p_title)
+ if p_title:
+ title = '%s - %s' % (title, p_title)
+
+
acfun_download_by_vid(vid, title,
output_dir=output_dir,
merge=merge,
info_only=info_only,
**kwargs)
+
site_info = "AcFun.tv"
download = acfun_download
download_playlist = playlist_not_supported('acfun')
|
The URL pattern and HTML layout of AcFun's bangumi pages differ from other AcFun pages, which causes problems.
This PR fixes it.
~~~
you-get -i http://www.acfun.cn/bangumi/ab5022161
Site: AcFun.tv
Title: 但是不屈不挠 SAGA (acfun)
Type: MPEG-4 video (video/mp4)
Size: 539.6 MiB (565815129 Bytes)
~~~
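As a quick illustration (not part of the PR), here is how the two URL shapes separate under the patterns from the diff; the regular-page ID is made up:

```python
import re

urls = [
    "http://www.acfun.cn/v/ac4471051",        # regular page (hypothetical id)
    "http://www.acfun.cn/bangumi/ab5022161",  # bangumi page from the example above
]
for url in urls:
    if re.match(r"http://[^\.]*\.*acfun\.[^\.]+/bangumi/ab(\d+)", url):
        print(url, "-> bangumi layout")
    else:
        print(url, "-> regular layout")
```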
|
https://api.github.com/repos/soimort/you-get/pulls/2660
|
2018-11-22T06:00:11Z
|
2018-11-22T11:04:13Z
|
2018-11-22T11:04:13Z
|
2018-11-22T11:04:13Z
| 737
|
soimort/you-get
| 21,078
|
Added settings vllm
|
diff --git a/fastchat/protocol/api_protocol.py b/fastchat/protocol/api_protocol.py
index 7dc8fe1c30..2dc99449dc 100644
--- a/fastchat/protocol/api_protocol.py
+++ b/fastchat/protocol/api_protocol.py
@@ -53,12 +53,15 @@ class APIChatCompletionRequest(BaseModel):
messages: Union[str, List[Dict[str, str]]]
temperature: Optional[float] = 0.7
top_p: Optional[float] = 1.0
+ top_k: Optional[int] = -1
n: Optional[int] = 1
max_tokens: Optional[int] = None
stop: Optional[Union[str, List[str]]] = None
stream: Optional[bool] = False
user: Optional[str] = None
repetition_penalty: Optional[float] = 1.0
+ frequency_penalty: Optional[float] = 0.0
+ presence_penalty: Optional[float] = 0.0
class ChatMessage(BaseModel):
@@ -130,6 +133,7 @@ class CompletionRequest(BaseModel):
stop: Optional[Union[str, List[str]]] = None
stream: Optional[bool] = False
top_p: Optional[float] = 1.0
+ top_k: Optional[int] = -1
logprobs: Optional[int] = None
echo: Optional[bool] = False
presence_penalty: Optional[float] = 0.0
diff --git a/fastchat/protocol/openai_api_protocol.py b/fastchat/protocol/openai_api_protocol.py
index 19c86abe93..3d53700956 100644
--- a/fastchat/protocol/openai_api_protocol.py
+++ b/fastchat/protocol/openai_api_protocol.py
@@ -53,6 +53,7 @@ class ChatCompletionRequest(BaseModel):
messages: Union[str, List[Dict[str, str]]]
temperature: Optional[float] = 0.7
top_p: Optional[float] = 1.0
+ top_k: Optional[int] = -1
n: Optional[int] = 1
max_tokens: Optional[int] = None
stop: Optional[Union[str, List[str]]] = None
@@ -146,6 +147,7 @@ class CompletionRequest(BaseModel):
stop: Optional[Union[str, List[str]]] = None
stream: Optional[bool] = False
top_p: Optional[float] = 1.0
+ top_k: Optional[int] = -1
logprobs: Optional[int] = None
echo: Optional[bool] = False
presence_penalty: Optional[float] = 0.0
diff --git a/fastchat/serve/openai_api_server.py b/fastchat/serve/openai_api_server.py
index 8c82c9995c..c5ca121b51 100644
--- a/fastchat/serve/openai_api_server.py
+++ b/fastchat/serve/openai_api_server.py
@@ -199,6 +199,11 @@ def check_requests(request) -> Optional[JSONResponse]:
ErrorCode.PARAM_OUT_OF_RANGE,
f"{request.top_p} is greater than the maximum of 1 - 'temperature'",
)
+ if request.top_k is not None and (request.top_k > -1 and request.top_k < 1):
+ return create_error_response(
+ ErrorCode.PARAM_OUT_OF_RANGE,
+ f"{request.top_k} is out of Range. Either set top_k to -1 or >=1.",
+ )
if request.stop is not None and (
not isinstance(request.stop, str) and not isinstance(request.stop, list)
):
@@ -240,6 +245,9 @@ async def get_gen_params(
*,
temperature: float,
top_p: float,
+ top_k: Optional[int],
+ presence_penalty: Optional[float],
+ frequency_penalty: Optional[float],
max_tokens: Optional[int],
echo: Optional[bool],
stop: Optional[Union[str, List[str]]],
@@ -284,6 +292,9 @@ async def get_gen_params(
"prompt": prompt,
"temperature": temperature,
"top_p": top_p,
+ "top_k": top_k,
+ "presence_penalty": presence_penalty,
+ "frequency_penalty": frequency_penalty,
"max_new_tokens": max_tokens,
"echo": echo,
"stop_token_ids": conv.stop_token_ids,
@@ -366,6 +377,9 @@ async def create_chat_completion(request: ChatCompletionRequest):
request.messages,
temperature=request.temperature,
top_p=request.top_p,
+ top_k=request.top_k,
+ presence_penalty=request.presence_penalty,
+ frequency_penalty=request.frequency_penalty,
max_tokens=request.max_tokens,
echo=False,
stop=request.stop,
@@ -498,6 +512,9 @@ async def create_completion(request: CompletionRequest):
text,
temperature=request.temperature,
top_p=request.top_p,
+ top_k=request.top_k,
+ frequency_penalty=request.frequency_penalty,
+ presence_penalty=request.presence_penalty,
max_tokens=request.max_tokens,
echo=request.echo,
stop=request.stop,
@@ -552,6 +569,9 @@ async def generate_completion_stream_generator(
text,
temperature=request.temperature,
top_p=request.top_p,
+ top_k=request.top_k,
+ presence_penalty=request.presence_penalty,
+ frequency_penalty=request.frequency_penalty,
max_tokens=request.max_tokens,
echo=request.echo,
stop=request.stop,
@@ -731,6 +751,9 @@ async def create_chat_completion(request: APIChatCompletionRequest):
request.messages,
temperature=request.temperature,
top_p=request.top_p,
+ top_k=request.top_k,
+ presence_penalty=request.presence_penalty,
+ frequency_penalty=request.frequency_penalty,
max_tokens=request.max_tokens,
echo=False,
stop=request.stop,
diff --git a/fastchat/serve/vllm_worker.py b/fastchat/serve/vllm_worker.py
index 1f639948ba..a13c72798d 100644
--- a/fastchat/serve/vllm_worker.py
+++ b/fastchat/serve/vllm_worker.py
@@ -68,6 +68,9 @@ async def generate_stream(self, params):
request_id = params.pop("request_id")
temperature = float(params.get("temperature", 1.0))
top_p = float(params.get("top_p", 1.0))
+ top_k = params.get("top_k", -1.0)
+ presence_penalty = float(params.get("presence_penalty", 0.0))
+ frequency_penalty = float(params.get("frequency_penalty", 0.0))
max_new_tokens = params.get("max_new_tokens", 256)
stop_str = params.get("stop", None)
stop_token_ids = params.get("stop_token_ids", None) or []
@@ -92,6 +95,7 @@ async def generate_stream(self, params):
top_p = max(top_p, 1e-5)
if temperature <= 1e-5:
top_p = 1.0
+
sampling_params = SamplingParams(
n=1,
temperature=temperature,
@@ -99,6 +103,9 @@ async def generate_stream(self, params):
use_beam_search=use_beam_search,
stop=list(stop),
max_tokens=max_new_tokens,
+ top_k=top_k,
+ presence_penalty=presence_penalty,
+ frequency_penalty=frequency_penalty,
best_of=best_of,
)
results_generator = engine.generate(context, sampling_params, request_id)
|
## Why are these changes needed?
Configuration currently does not take `top_k`, `presence_penalty`, etc. into account. When set, they are not passed through to vLLM.
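For illustration, a hedged example of a chat completion request that now exercises the plumbed-through options (endpoint and model name are placeholders):

```python
import requests

payload = {
    "model": "vicuna-7b-v1.5",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello"}],
    "top_k": 40,                # per check_requests: -1 disables, otherwise >= 1
    "presence_penalty": 0.5,
    "frequency_penalty": 0.5,
}
resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload)
print(resp.json())
```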
## Related issue number (if applicable)
<!-- For example: "Closes #1234" -->
## Checks
- [x] I've run `format.sh` to lint the changes in this PR.
- [x] I've included any doc changes needed. **-> Is actually now properly passing the params to vllm**
- [ ] I've made sure the relevant tests are passing (if applicable).
|
https://api.github.com/repos/lm-sys/FastChat/pulls/2599
|
2023-10-24T15:10:41Z
|
2023-11-01T08:26:15Z
|
2023-11-01T08:26:15Z
|
2023-11-01T08:26:15Z
| 1,656
|
lm-sys/FastChat
| 41,038
|
MAINT Clean up deprecations for 1.5: delayed import path
|
diff --git a/sklearn/utils/fixes.py b/sklearn/utils/fixes.py
index 1b34a3fe1ffbc..e33519a3154d8 100644
--- a/sklearn/utils/fixes.py
+++ b/sklearn/utils/fixes.py
@@ -23,7 +23,6 @@
import sklearn
from ..externals._packaging.version import parse as parse_version
-from .deprecation import deprecated
_IS_PYPY = platform.python_implementation() == "PyPy"
_IS_32BIT = 8 * struct.calcsize("P") == 32
@@ -134,16 +133,6 @@ def threadpool_info():
threadpool_info.__doc__ = threadpoolctl.threadpool_info.__doc__
-@deprecated(
- "The function `delayed` has been moved from `sklearn.utils.fixes` to "
- "`sklearn.utils.parallel`. This import path will be removed in 1.5."
-)
-def delayed(function):
- from sklearn.utils.parallel import delayed
-
- return delayed(function)
-
-
# TODO: Remove when SciPy 1.11 is the minimum supported version
def _mode(a, axis=0):
if sp_version >= parse_version("1.9.0"):
diff --git a/sklearn/utils/tests/test_fixes.py b/sklearn/utils/tests/test_fixes.py
index 60c57bbbaaa52..c312b8568c4c6 100644
--- a/sklearn/utils/tests/test_fixes.py
+++ b/sklearn/utils/tests/test_fixes.py
@@ -7,11 +7,7 @@
import pytest
from sklearn.utils._testing import assert_array_equal
-from sklearn.utils.fixes import (
- _object_dtype_isnan,
- _smallest_admissible_index_dtype,
- delayed,
-)
+from sklearn.utils.fixes import _object_dtype_isnan, _smallest_admissible_index_dtype
@pytest.mark.parametrize("dtype, val", ([object, 1], [object, "a"], [float, 1]))
@@ -25,17 +21,6 @@ def test_object_dtype_isnan(dtype, val):
assert_array_equal(mask, expected_mask)
-def test_delayed_deprecation():
- """Check that we issue the FutureWarning regarding the deprecation of delayed."""
-
- def func(x):
- return x
-
- warn_msg = "The function `delayed` has been moved from `sklearn.utils.fixes`"
- with pytest.warns(FutureWarning, match=warn_msg):
- delayed(func)
-
-
@pytest.mark.parametrize(
"params, expected_dtype",
[
|
Removed the deprecated import path for the ``delayed`` function.
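For anyone updating call sites, the supported import path (per the removed deprecation message) is:

```python
# The old alias `sklearn.utils.fixes.delayed` is gone; import from here instead.
from sklearn.utils.parallel import delayed
```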
|
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/28848
|
2024-04-16T14:33:09Z
|
2024-04-16T15:32:39Z
|
2024-04-16T15:32:39Z
|
2024-04-16T15:32:39Z
| 581
|
scikit-learn/scikit-learn
| 46,311
|
renewal: fix key_type not being preserved on <v1.25.0 renewal configs
|
diff --git a/certbot-ci/certbot_integration_tests/certbot_tests/test_main.py b/certbot-ci/certbot_integration_tests/certbot_tests/test_main.py
index 6df79c7f959..8ed1fbf1e66 100644
--- a/certbot-ci/certbot_integration_tests/certbot_tests/test_main.py
+++ b/certbot-ci/certbot_integration_tests/certbot_tests/test_main.py
@@ -936,3 +936,24 @@ def test_preferred_chain(context: IntegrationTestsContext) -> None:
with open(conf_path, 'r') as f:
assert f'preferred_chain = {requested}' in f.read(), \
'Expected preferred_chain to be set in renewal config'
+
+
+def test_ancient_rsa_key_type_preserved(context: IntegrationTestsContext) -> None:
+ certname = context.get_domain('newname')
+ context.certbot(['certonly', '-d', certname, '--key-type', 'rsa'])
+ assert_saved_lineage_option(context.config_dir, certname, 'key_type', 'rsa')
+
+ # Remove `key_type = rsa` from the renewal config to emulate a <v1.25.0 Certbot certificate.
+ conf_path = join(context.config_dir, 'renewal', f'{certname}.conf')
+ conf_contents: str = ''
+ with open(conf_path) as f:
+ conf_contents = f.read()
+ conf_contents = conf_contents.replace('key_type = rsa', '')
+ with open(conf_path, 'w') as f:
+ f.write(conf_contents)
+
+ context.certbot(['renew', '--cert-name', certname, '--force-renewal'])
+
+ assert_saved_lineage_option(context.config_dir, certname, 'key_type', 'rsa')
+ key2 = join(context.config_dir, 'archive/{0}/privkey2.pem'.format(certname))
+ assert_rsa_key(key2, 2048)
diff --git a/certbot/CHANGELOG.md b/certbot/CHANGELOG.md
index 3348369f3c3..7fd4800e841 100644
--- a/certbot/CHANGELOG.md
+++ b/certbot/CHANGELOG.md
@@ -21,6 +21,10 @@ Certbot adheres to [Semantic Versioning](https://semver.org/).
### Fixed
+* Fixed `renew` sometimes not preserving the key type of RSA certificates.
+ * Users who upgraded from Certbot <v1.25.0 to Certbot >=v2.0.0 may
+ have had their RSA certificates inadvertently changed to ECDSA certificates. If desired,
+ the key type may be changed back to RSA. See the [User Guide](https://eff-certbot.readthedocs.io/en/stable/using.html#changing-a-certificate-s-key-type).
* Deprecated flags were inadvertently not printing warnings since v1.16.0. This is now fixed.
More details about these changes can be found on our GitHub repo.
diff --git a/certbot/certbot/_internal/renewal.py b/certbot/certbot/_internal/renewal.py
index 39f704f2f69..2b329dfd89a 100644
--- a/certbot/certbot/_internal/renewal.py
+++ b/certbot/certbot/_internal/renewal.py
@@ -87,6 +87,14 @@ def reconstitute(config: configuration.NamespaceConfig,
logger.error("Renewal configuration file %s does not specify "
"an authenticator. Skipping.", full_path)
return None
+
+ # Prior to Certbot v1.25.0, the default value of key_type (rsa) was not persisted to the
+ # renewal params. If the option is absent, it means the certificate was an RSA key.
+ # Restoring the option here is necessary to preserve the certificate key_type if
+ # the user has upgraded directly from Certbot <v1.25.0 to >=v2.0.0, where the default
+ # key_type was changed to ECDSA. See https://github.com/certbot/certbot/issues/9635.
+ renewalparams["key_type"] = renewalparams.get("key_type", "rsa")
+
# Now restore specific values along with their data types, if
# those elements are present.
renewalparams = _remove_deprecated_config_elements(renewalparams)
diff --git a/certbot/tests/main_test.py b/certbot/tests/main_test.py
index 6220ce53719..8dd4deba13e 100644
--- a/certbot/tests/main_test.py
+++ b/certbot/tests/main_test.py
@@ -1472,13 +1472,13 @@ def test_renew_verb(self):
self._test_renewal_common(True, [], args=args, should_renew=True)
def test_reuse_key(self):
- test_util.make_lineage(self.config.config_dir, 'sample-renewal.conf')
+ test_util.make_lineage(self.config.config_dir, 'sample-renewal.conf', ec=False)
args = ["renew", "--dry-run", "--reuse-key"]
self._test_renewal_common(True, [], args=args, should_renew=True, reuse_key=True)
@mock.patch('certbot._internal.storage.RenewableCert.save_successor')
def test_reuse_key_no_dry_run(self, unused_save_successor):
- test_util.make_lineage(self.config.config_dir, 'sample-renewal.conf')
+ test_util.make_lineage(self.config.config_dir, 'sample-renewal.conf', ec=False)
args = ["renew", "--reuse-key"]
self._test_renewal_common(True, [], args=args, should_renew=True, reuse_key=True)
diff --git a/certbot/tests/renewal_test.py b/certbot/tests/renewal_test.py
index 0bb915345ee..f11b01603b0 100644
--- a/certbot/tests/renewal_test.py
+++ b/certbot/tests/renewal_test.py
@@ -177,6 +177,17 @@ def test_remove_deprecated_config_elements(self, mock_set_by_cli, unused_mock_ge
# value in the renewal conf file
assert isinstance(lineage_config.manual_public_ip_logging_ok, mock.MagicMock)
+ @mock.patch('certbot._internal.renewal.cli.set_by_cli')
+ def test_absent_key_type_restored(self, mock_set_by_cli):
+ mock_set_by_cli.return_value = False
+
+ rc_path = test_util.make_lineage(self.config.config_dir, 'sample-renewal.conf', ec=False)
+
+ from certbot._internal import renewal
+ lineage_config = copy.deepcopy(self.config)
+ renewal.reconstitute(lineage_config, rc_path)
+ assert lineage_config.key_type == 'rsa'
+
class RestoreRequiredConfigElementsTest(test_util.ConfigTestCase):
"""Tests for certbot._internal.renewal.restore_required_config_elements."""
|
Fixes #9635.
We may wish to backport this to Certbot 2.1.0 in order to have it land in https://packages.debian.org/bookworm/python3-certbot.
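A minimal sketch of the restoration logic from the diff, using a hypothetical pre-1.25.0 parameter dict:

```python
# Hypothetical renewal params from a <v1.25.0 config: no key_type persisted.
renewalparams = {"authenticator": "standalone"}
# The fix: an absent key_type means the certificate was RSA.
renewalparams["key_type"] = renewalparams.get("key_type", "rsa")
assert renewalparams["key_type"] == "rsa"
```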
|
https://api.github.com/repos/certbot/certbot/pulls/9636
|
2023-03-27T21:40:05Z
|
2023-03-28T15:44:20Z
|
2023-03-28T15:44:20Z
|
2023-04-25T23:17:21Z
| 1,543
|
certbot/certbot
| 834
|
Apply theme to VegaLite Charts
|
diff --git a/frontend/cypress/snapshots/linux/2x/st_chart_utc_time.spec.js/chartUTCTime-0.snap.png b/frontend/cypress/snapshots/linux/2x/st_chart_utc_time.spec.js/chartUTCTime-0.snap.png
index 8fe2b51c7377..9eb7bc187f89 100644
Binary files a/frontend/cypress/snapshots/linux/2x/st_chart_utc_time.spec.js/chartUTCTime-0.snap.png and b/frontend/cypress/snapshots/linux/2x/st_chart_utc_time.spec.js/chartUTCTime-0.snap.png differ
diff --git a/frontend/cypress/snapshots/linux/2x/st_chart_utc_time.spec.js/chartUTCTime-1.snap.png b/frontend/cypress/snapshots/linux/2x/st_chart_utc_time.spec.js/chartUTCTime-1.snap.png
index 03f15758dcb4..585e9895b3ca 100644
Binary files a/frontend/cypress/snapshots/linux/2x/st_chart_utc_time.spec.js/chartUTCTime-1.snap.png and b/frontend/cypress/snapshots/linux/2x/st_chart_utc_time.spec.js/chartUTCTime-1.snap.png differ
diff --git a/frontend/cypress/snapshots/linux/2x/st_chart_utc_time.spec.js/chartUTCTime-2.snap.png b/frontend/cypress/snapshots/linux/2x/st_chart_utc_time.spec.js/chartUTCTime-2.snap.png
index cfa1677cca9f..b620c67226e8 100644
Binary files a/frontend/cypress/snapshots/linux/2x/st_chart_utc_time.spec.js/chartUTCTime-2.snap.png and b/frontend/cypress/snapshots/linux/2x/st_chart_utc_time.spec.js/chartUTCTime-2.snap.png differ
diff --git a/frontend/src/components/elements/VegaLiteChart/VegaLiteChart.test.tsx b/frontend/src/components/elements/VegaLiteChart/VegaLiteChart.test.tsx
index ba50b23bc674..cf94c2ded1de 100644
--- a/frontend/src/components/elements/VegaLiteChart/VegaLiteChart.test.tsx
+++ b/frontend/src/components/elements/VegaLiteChart/VegaLiteChart.test.tsx
@@ -16,15 +16,17 @@
*/
import React from "react"
-import { shallow } from "lib/test_util"
+import { mount } from "lib/test_util"
import { fromJS } from "immutable"
import { VegaLiteChart as VegaLiteChartProto } from "autogen/proto"
+import { darkTheme, lightTheme } from "theme"
import mock from "./mock"
import { PropsWithHeight, VegaLiteChart } from "./VegaLiteChart"
const getProps = (
- elementProps: Partial<VegaLiteChartProto> = {}
+ elementProps: Partial<VegaLiteChartProto> = {},
+ props: Partial<PropsWithHeight> = {}
): PropsWithHeight => ({
element: fromJS({
...mock,
@@ -32,13 +34,52 @@ const getProps = (
}),
width: 0,
height: 0,
+ theme: lightTheme.emotion,
+ ...props,
})
describe("VegaLiteChart Element", () => {
it("renders without crashing", () => {
const props = getProps()
- const wrapper = shallow(<VegaLiteChart {...props} />)
+ const wrapper = mount(<VegaLiteChart {...props} />)
expect(wrapper.find("StyledVegaLiteChartContainer").length).toBe(1)
})
+
+ it("pulls default config values from theme", () => {
+ const props = getProps(undefined, { theme: darkTheme.emotion })
+
+ const wrapper = mount(<VegaLiteChart {...props} />)
+ const generatedSpec = wrapper.instance().generateSpec()
+
+ expect(generatedSpec.config.background).toBe(
+ darkTheme.emotion.colors.bgColor
+ )
+ expect(generatedSpec.config.axis.labelColor).toBe(
+ darkTheme.emotion.colors.bodyText
+ )
+ })
+
+ it("has user specified config take priority", () => {
+ const props = getProps(undefined, { theme: darkTheme.emotion })
+
+ const spec = JSON.parse(props.element.get("spec"))
+ spec.config = { background: "purple", axis: { labelColor: "blue" } }
+
+ props.element = fromJS({
+ ...props.element.toObject(),
+ spec: JSON.stringify(spec),
+ })
+
+ const wrapper = mount(<VegaLiteChart {...props} />)
+ const generatedSpec = wrapper.instance().generateSpec()
+
+ expect(generatedSpec.config.background).toBe("purple")
+ expect(generatedSpec.config.axis.labelColor).toBe("blue")
+ // Verify that things not overwritten by the user still fall back to the
+ // theme default.
+ expect(generatedSpec.config.axis.titleColor).toBe(
+ darkTheme.emotion.colors.bodyText
+ )
+ })
})
diff --git a/frontend/src/components/elements/VegaLiteChart/VegaLiteChart.tsx b/frontend/src/components/elements/VegaLiteChart/VegaLiteChart.tsx
index be60c3bc9859..de517e92a396 100644
--- a/frontend/src/components/elements/VegaLiteChart/VegaLiteChart.tsx
+++ b/frontend/src/components/elements/VegaLiteChart/VegaLiteChart.tsx
@@ -16,10 +16,12 @@
*/
import React, { PureComponent } from "react"
+import { withTheme } from "emotion-theming"
import { logMessage } from "lib/log"
import { Map as ImmutableMap } from "immutable"
import withFullScreenWrapper from "hocs/withFullScreenWrapper"
import { tableGetRowsAndCols, indexGet, tableGet } from "lib/dataFrameProto"
+import { Theme } from "theme"
import embed from "vega-embed"
import * as vega from "vega"
import { StyledVegaLiteChartContainer } from "./styled-components"
@@ -54,6 +56,7 @@ const SUPPORTED_INDEX_TYPES = new Set([
interface Props {
width: number
element: ImmutableMap<string, any>
+ theme: Theme
}
export interface PropsWithHeight extends Props {
@@ -115,8 +118,8 @@ export class VegaLiteChart extends PureComponent<PropsWithHeight, State> {
}
public async componentDidUpdate(prevProps: PropsWithHeight): Promise<void> {
- const prevElement = prevProps.element
- const { element } = this.props
+ const { element: prevElement, theme: prevTheme } = prevProps
+ const { element, theme } = this.props
const prevSpec = prevElement.get("spec")
const spec = element.get("spec")
@@ -124,6 +127,7 @@ export class VegaLiteChart extends PureComponent<PropsWithHeight, State> {
if (
!this.vegaView ||
prevSpec !== spec ||
+ prevTheme !== theme ||
prevProps.width !== this.props.width ||
prevProps.height !== this.props.height
) {
@@ -163,10 +167,12 @@ export class VegaLiteChart extends PureComponent<PropsWithHeight, State> {
}
public generateSpec = (): any => {
- const el = this.props.element
+ const { element: el, theme } = this.props
const spec = JSON.parse(el.get("spec"))
const useContainerWidth = JSON.parse(el.get("useContainerWidth"))
+ spec.config = configWithThemeDefaults(spec.config, theme)
+
if (this.props.height) {
// fullscreen
spec.width = this.props.width - EMBED_PADDING
@@ -457,4 +463,56 @@ function dataIsAnAppendOfPrev(
return true
}
-export default withFullScreenWrapper(VegaLiteChart)
+function configWithThemeDefaults(config: any, theme: Theme): any {
+ const textColor = theme.colors.bodyText
+ const themeFonts = {
+ labelFont: theme.genericFonts.bodyFont,
+ titleFont: theme.genericFonts.bodyFont,
+ }
+ const themeBg = theme.inSidebar
+ ? theme.colors.sidebarBg
+ : theme.colors.bgColor
+
+ const themeDefaults = {
+ background: themeBg,
+ axis: {
+ labelColor: textColor,
+ titleColor: textColor,
+ ...themeFonts,
+ },
+ legend: {
+ labelColor: textColor,
+ titleColor: textColor,
+ ...themeFonts,
+ },
+ title: {
+ color: textColor,
+ subtitleColor: textColor,
+ ...themeFonts,
+ },
+ }
+
+ if (!config) {
+ return themeDefaults
+ }
+
+ // Fill in theme defaults where the user didn't specify config options.
+ return {
+ ...config,
+ background: config.background || themeDefaults.background,
+ axis: {
+ ...themeDefaults.axis,
+ ...config.axis,
+ },
+ legend: {
+ ...themeDefaults.legend,
+ ...config.legend,
+ },
+ title: {
+ ...themeDefaults.title,
+ ...config.title,
+ },
+ }
+}
+
+export default withTheme(withFullScreenWrapper(VegaLiteChart))
|
https://api.github.com/repos/streamlit/streamlit/pulls/2708
|
2021-02-03T03:30:06Z
|
2021-02-03T23:44:22Z
|
2021-02-03T23:44:22Z
|
2021-07-24T00:37:03Z
| 2,030
|
streamlit/streamlit
| 21,919
|
|
fixbug: rename folder does not work in windows os
|
diff --git a/metagpt/roles/engineer.py b/metagpt/roles/engineer.py
index e05e69cbb..b2a909400 100644
--- a/metagpt/roles/engineer.py
+++ b/metagpt/roles/engineer.py
@@ -204,7 +204,8 @@ async def _is_pass(self, summary) -> (str, str):
async def _think(self) -> Action | None:
if not CONFIG.src_workspace:
- CONFIG.src_workspace = CONFIG.git_repo.workdir / CONFIG.git_repo.workdir.name
+ project_name = CONFIG.project_name or CONFIG.git_repo.workdir.name
+ CONFIG.src_workspace = CONFIG.git_repo.workdir / project_name
write_code_filters = any_to_str_set([WriteTasks, SummarizeCode, FixBug])
summarize_code_filters = any_to_str_set([WriteCode, WriteCodeReview])
if not self.rc.news:
diff --git a/metagpt/utils/git_repository.py b/metagpt/utils/git_repository.py
index e9855df05..4feed89d5 100644
--- a/metagpt/utils/git_repository.py
+++ b/metagpt/utils/git_repository.py
@@ -199,10 +199,17 @@ def rename_root(self, new_dir_name):
if new_path.exists():
logger.info(f"Delete directory {str(new_path)}")
shutil.rmtree(new_path)
+ if new_path.exists(): # Recheck for windows os
+ logger.warning(f"Failed to delete directory {str(new_path)}")
+ return
try:
shutil.move(src=str(self.workdir), dst=str(new_path))
except Exception as e:
logger.warning(f"Move {str(self.workdir)} to {str(new_path)} error: {e}")
+ finally:
+ if not new_path.exists(): # Recheck for windows os
+ logger.warning(f"Failed to move {str(self.workdir)} to {str(new_path)}")
+ return
logger.info(f"Rename directory {str(self.workdir)} to {str(new_path)}")
self._repository = Repo(new_path)
self._gitignore_rules = parse_gitignore(full_path=str(new_path / ".gitignore"))
|
**Features**
- fixbug: renaming a folder does not work on Windows OS (see the sketch below)
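A minimal sketch (plain `pathlib`/`shutil`, no git involved) of the recheck pattern the fix applies, since these calls can fail without raising on Windows when a handle is still open:

```python
import shutil
from pathlib import Path

def rename_dir(src: Path, dst: Path) -> bool:
    """Move src to dst, rechecking each step for Windows quirks."""
    if dst.exists():
        shutil.rmtree(dst, ignore_errors=True)
        if dst.exists():  # recheck: deletion can silently fail on Windows
            return False
    shutil.move(str(src), str(dst))
    return dst.exists() and not src.exists()  # recheck the move as well
```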
|
https://api.github.com/repos/geekan/MetaGPT/pulls/716
|
2024-01-08T09:39:48Z
|
2024-01-08T09:59:59Z
|
2024-01-08T09:59:59Z
|
2024-01-08T10:05:24Z
| 478
|
geekan/MetaGPT
| 16,769
|
Fix tests on python 2.6
|
diff --git a/test_requests.py b/test_requests.py
index 7e5e4d8fa5..33fafdf4e7 100755
--- a/test_requests.py
+++ b/test_requests.py
@@ -17,7 +17,9 @@
from requests.adapters import HTTPAdapter
from requests.auth import HTTPDigestAuth, _basic_auth_str
from requests.compat import (
- Morsel, cookielib, getproxies, str, urljoin, urlparse, is_py3, builtin_str)
+ Morsel, cookielib, getproxies, str, urljoin, urlparse, is_py3,
+ builtin_str, OrderedDict
+ )
from requests.cookies import cookiejar_from_dict, morsel_to_cookie
from requests.exceptions import (ConnectionError, ConnectTimeout,
InvalidSchema, InvalidURL, MissingSchema,
@@ -126,7 +128,7 @@ def test_params_are_added_before_fragment(self):
assert request.url == "http://example.com/path?key=value&a=b#fragment"
def test_params_original_order_is_preserved_by_default(self):
- param_ordered_dict = collections.OrderedDict((('z', 1), ('a', 1), ('k', 1), ('d', 1)))
+ param_ordered_dict = OrderedDict((('z', 1), ('a', 1), ('k', 1), ('d', 1)))
session = requests.Session()
request = requests.Request('GET', 'http://example.com/', params=param_ordered_dict)
prep = session.prepare_request(request)
|
On Python 2.6, there's no `OrderedDict` in the `collections` module.
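The usual compat pattern for this (a sketch, not the exact `requests.compat` code; the fallback module name is an assumption):

```python
try:
    from collections import OrderedDict  # Python >= 2.7
except ImportError:
    # Python 2.6: fall back to a backport package (hypothetical source).
    from ordereddict import OrderedDict
```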
|
https://api.github.com/repos/psf/requests/pulls/2743
|
2015-08-25T01:47:46Z
|
2015-08-25T07:27:20Z
|
2015-08-25T07:27:20Z
|
2021-09-08T06:01:09Z
| 333
|
psf/requests
| 32,252
|
🌐 Update Japanese translation of `docs/ja/docs/tutorial/query-params.md`
|
diff --git a/docs/ja/docs/tutorial/query-params.md b/docs/ja/docs/tutorial/query-params.md
index 5202009ef8676..957726b9f02f4 100644
--- a/docs/ja/docs/tutorial/query-params.md
+++ b/docs/ja/docs/tutorial/query-params.md
@@ -73,11 +73,6 @@ http://127.0.0.1:8000/items/?skip=20
!!! check "確認"
パスパラメータ `item_id` はパスパラメータであり、`q` はそれとは違ってクエリパラメータであると判別できるほど**FastAPI** が賢いということにも注意してください。
-!!! note "備考"
- FastAPIは、`= None`があるおかげで、`q`がオプショナルだとわかります。
-
- `Optional[str]` の`Optional` はFastAPIでは使用されていません(FastAPIは`str`の部分のみ使用します)。しかし、`Optional[str]` はエディタがコードのエラーを見つけるのを助けてくれます。
-
## クエリパラメータの型変換
`bool` 型も宣言できます。これは以下の様に変換されます:
|
`Optional` is no longer used in the sample code, so the note describing it has been removed (the same section has already been removed from the English documentation).
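For reference, the tutorial sample now declares the optional query parameter with a plain `None` default instead of `Optional` (a sketch of the current style, not the exact file contents):

```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/items/{item_id}")
async def read_item(item_id: str, q: str | None = None):
    # The `= None` default is what makes q an optional query parameter.
    return {"item_id": item_id, "q": q}
```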
|
https://api.github.com/repos/tiangolo/fastapi/pulls/10808
|
2023-12-20T11:16:35Z
|
2024-03-30T23:22:21Z
|
2024-03-30T23:22:21Z
|
2024-03-30T23:22:28Z
| 287
|
tiangolo/fastapi
| 23,216
|
Set default AWS settings via config file rather than environment variables
|
diff --git a/Dockerfile b/Dockerfile
index af7b6430d254d..b8a18860d1563 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -46,12 +46,17 @@ RUN mkdir -p /.npm && \
ln -s `pwd` /tmp/localstack_install_dir
# expose default environment (required for aws-cli to work)
-ENV AWS_ACCESS_KEY_ID=foobar \
- AWS_SECRET_ACCESS_KEY=foobar \
- AWS_DEFAULT_REGION=us-east-1 \
- MAVEN_CONFIG=/opt/code/localstack \
+ENV MAVEN_CONFIG=/opt/code/localstack \
USER=localstack
+# set test AWS credentials and default region in config file
+RUN mkdir -p /root/.aws && \
+ echo '[default]' > /root/.aws/config && \
+ echo 'region = us-east-1' >> /root/.aws/config && \
+ echo '[default]' > /root/.aws/credentials && \
+ echo 'aws_access_key_id = foobar' >> /root/.aws/credentials && \
+ echo 'aws_secret_access_key = foobar' >> /root/.aws/credentials
+
# expose service & web dashboard ports
EXPOSE 4567-4583 8080
diff --git a/localstack/utils/aws/aws_stack.py b/localstack/utils/aws/aws_stack.py
index 2dddd625ba2f7..4fa6c25813ad5 100644
--- a/localstack/utils/aws/aws_stack.py
+++ b/localstack/utils/aws/aws_stack.py
@@ -146,7 +146,7 @@ def connect_to_service(service_name, client=True, env=None, region_name=None, en
if env.region == REGION_LOCAL:
endpoint_url = get_local_service_url(service_name)
verify = False
- region = env.region if env.region != REGION_LOCAL else DEFAULT_REGION
+ region = env.region if env.region != REGION_LOCAL else None
return method(service_name, region_name=region, endpoint_url=endpoint_url, verify=verify)
|
This PR supersedes #277. It sets default AWS settings via a config file rather than environment variables.
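As a quick sanity check (a sketch; run inside the container), boto3 should pick the region and test credentials up from the files written by the Dockerfile, with no `AWS_*` environment variables set:

```python
import boto3

session = boto3.session.Session()
print(session.region_name)                 # expected: us-east-1
creds = session.get_credentials()
print(creds.access_key, creds.secret_key)  # expected: foobar foobar
```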
|
https://api.github.com/repos/localstack/localstack/pulls/389
|
2017-10-08T21:42:15Z
|
2017-10-08T21:50:30Z
|
2017-10-08T21:50:30Z
|
2017-10-08T21:50:33Z
| 449
|
localstack/localstack
| 28,584
|
Allow dictionary return values as JSON
|
diff --git a/CHANGES.rst b/CHANGES.rst
index 2a6641c65d..b3c092b7fa 100644
--- a/CHANGES.rst
+++ b/CHANGES.rst
@@ -52,6 +52,10 @@ Unreleased
- Add an ``--extra-files`` option to the ``flask run`` CLI command to
specify extra files that will trigger the reloader on change.
:issue:`2897`
+- Allow returning a dictionary from a view function. Similar to how
+ returning a string will produce a ``text/html`` response, returning
+ a dict will call ``jsonify`` to produce a ``application/json``
+ response. :pr:`3111`
.. _#2935: https://github.com/pallets/flask/issues/2935
.. _#2957: https://github.com/pallets/flask/issues/2957
diff --git a/docs/quickstart.rst b/docs/quickstart.rst
index 47200a13bc..8dad6f3c35 100644
--- a/docs/quickstart.rst
+++ b/docs/quickstart.rst
@@ -679,23 +679,26 @@ See :ref:`error-handlers` for more details.
About Responses
---------------
-The return value from a view function is automatically converted into a
-response object for you. If the return value is a string it's converted
-into a response object with the string as response body, a ``200 OK``
-status code and a :mimetype:`text/html` mimetype.
-The logic that Flask applies to converting return values into
-response objects is as follows:
+The return value from a view function is automatically converted into
+a response object for you. If the return value is a string it's
+converted into a response object with the string as response body, a
+``200 OK`` status code and a :mimetype:`text/html` mimetype. If the
+return value is a dict, :func:`jsonify` is called to produce a response.
+The logic that Flask applies to converting return values into response
+objects is as follows:
1. If a response object of the correct type is returned it's directly
returned from the view.
-2. If it's a string, a response object is created with that data and the
- default parameters.
-3. If a tuple is returned the items in the tuple can provide extra information.
- Such tuples have to be in the form ``(response, status, headers)``,
- ``(response, headers)`` or ``(response, status)`` where at least one item
- has to be in the tuple. The ``status`` value will override the status code
- and ``headers`` can be a list or dictionary of additional header values.
-4. If none of that works, Flask will assume the return value is a
+2. If it's a string, a response object is created with that data and
+ the default parameters.
+3. If it's a dict, a response object is created using ``jsonify``.
+4. If a tuple is returned the items in the tuple can provide extra
+ information. Such tuples have to be in the form
+ ``(response, status)``, ``(response, headers)``, or
+ ``(response, status, headers)``. The ``status`` value will override
+ the status code and ``headers`` can be a list or dictionary of
+ additional header values.
+5. If none of that works, Flask will assume the return value is a
valid WSGI application and convert that into a response object.
If you want to get hold of the resulting response object inside the view
@@ -717,6 +720,39 @@ return it::
resp.headers['X-Something'] = 'A value'
return resp
+
+APIs with JSON
+``````````````
+
+A common response format when writing an API is JSON. It's easy to get
+started writing such an API with Flask. If you return a ``dict`` from a
+view, it will be converted to a JSON response.
+
+.. code-block:: python
+
+ @app.route("/me")
+ def me_api():
+ user = get_current_user()
+ return {
+ "username": user.username,
+ "theme": user.theme,
+ "image": url_for("user_image", filename=user.image),
+ }
+
+Depending on your API design, you may want to create JSON responses for
+types other than ``dict``. In that case, use the
+:func:`~flask.json.jsonify` function, which will serialize any supported
+JSON data type. Or look into Flask community extensions that support
+more complex applications.
+
+.. code-block:: python
+
+ @app.route("/users")
+ def users_api():
+ users = get_all_users()
+ return jsonify([user.to_json() for user in users])
+
+
.. _sessions:
Sessions
diff --git a/flask/app.py b/flask/app.py
index 50775ce4ed..b0b2bc26c8 100644
--- a/flask/app.py
+++ b/flask/app.py
@@ -44,6 +44,7 @@
url_for,
get_load_dotenv,
)
+from .json import jsonify
from .logging import create_logger
from .sessions import SecureCookieSessionInterface
from .signals import (
@@ -2001,6 +2002,9 @@ def make_response(self, rv):
``bytes`` (``str`` in Python 2)
A response object is created with the bytes as the body.
+ ``dict``
+ A dictionary that will be jsonify'd before being returned.
+
``tuple``
Either ``(body, status, headers)``, ``(body, status)``, or
``(body, headers)``, where ``body`` is any of the other types
@@ -2064,6 +2068,8 @@ def make_response(self, rv):
# special logic
rv = self.response_class(rv, status=status, headers=headers)
status = headers = None
+ elif isinstance(rv, dict):
+ rv = jsonify(rv)
else:
# evaluate a WSGI callable, or coerce a different response
# class to the correct type
diff --git a/tests/test_basic.py b/tests/test_basic.py
index b759098f98..7a16ebd491 100644
--- a/tests/test_basic.py
+++ b/tests/test_basic.py
@@ -1147,8 +1147,12 @@ def from_response_status():
def from_wsgi():
return NotFound()
- assert client.get("/text").data == u"Hällo Wörld".encode("utf-8")
- assert client.get("/bytes").data == u"Hällo Wörld".encode("utf-8")
+ @app.route('/dict')
+ def from_dict():
+ return {"foo": "bar"}, 201
+
+ assert client.get('/text').data == u'Hällo Wörld'.encode('utf-8')
+ assert client.get('/bytes').data == u'Hällo Wörld'.encode('utf-8')
rv = client.get("/full_tuple")
assert rv.data == b"Meh"
@@ -1181,6 +1185,10 @@ def from_wsgi():
assert b"Not Found" in rv.data
assert rv.status_code == 404
+ rv = client.get('/dict')
+ assert rv.json == {"foo": "bar"}
+ assert rv.status_code == 201
+
def test_response_type_errors():
app = flask.Flask(__name__)
|
This is something I've been experimenting with in Quart and I don't see a downside. I can't find any old issues relating to this (although I've found it hard to search for). This is essentially an issue, but filed as a pull request (branch) so you can take the code and try it out. Obviously, if you consider this a good idea, I'll add much more testing and documentation.
This supports an increasingly common use-case whereby JSON is the primary response (rather than a templated string). Given Flask simplifies returning HTML responses, it seems fitting that it should also do so for JSON responses. In practice it allows,
```python
@app.route("/")
def index():
return {
"api_stuff": "values",
}
```
which is equivalent to
```python
@app.route("/")
def index():
return jsonify({
"api_stuff": "values",
})
```
### Note
This doesn't support returning anything other than an associative array at the top level of the JSON response. I'm OK with this, as in practice APIs are only extensible if the top level is an associative array.
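For anything else at the top level (a list, for example) you still call `jsonify` explicitly; a minimal sketch:
```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/ids")
def ids_api():
    # A bare list is not auto-converted; only dicts get the implicit jsonify.
    return jsonify([1, 2, 3])
```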
|
https://api.github.com/repos/pallets/flask/pulls/3111
|
2019-03-03T17:42:17Z
|
2019-05-24T17:39:12Z
|
2019-05-24T17:39:12Z
|
2020-11-14T02:09:35Z
| 1,693
|
pallets/flask
| 20,830
|
amd support
|
diff --git a/args_manager.py b/args_manager.py
new file mode 100644
index 000000000..a3a48eda8
--- /dev/null
+++ b/args_manager.py
@@ -0,0 +1,12 @@
+from comfy.options import enable_args_parsing
+enable_args_parsing(False)
+import comfy.cli_args as comfy_cli
+
+
+comfy_cli.parser.add_argument("--share", action='store_true', help="Set whether to share on Gradio.")
+
+comfy_cli.args = comfy_cli.parser.parse_args()
+comfy_cli.args.disable_cuda_malloc = True
+comfy_cli.args.auto_launch = True
+
+args = comfy_cli.args
diff --git a/launch.py b/launch.py
index 1ed12ec93..ee412fc30 100644
--- a/launch.py
+++ b/launch.py
@@ -91,20 +91,12 @@ def download_models():
def ini_comfy_args():
- argv = sys.argv
- sys.argv = [sys.argv[0]]
-
- from comfy.cli_args import args as comfy_args
- comfy_args.disable_cuda_malloc = True
- comfy_args.auto_launch = False
-
- sys.argv = argv
+ from args_manager import args
+ return args
prepare_environment()
-
ini_comfy_args()
-
download_models()
from webui import *
diff --git a/readme.md b/readme.md
index 76e2af03f..102854c54 100644
--- a/readme.md
+++ b/readme.md
@@ -169,7 +169,18 @@ Same with the above instructions. You need to change torch to AMD version
AMD is not intensively tested, however.
-### Mac/Windows(AMD GPUs)
+### Windows(AMD GPUs)
+
+Same with Windows. Download the software, edit the content of `run.bat` as:
+
+ .\python_embeded\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y
+ .\python_embeded\python.exe -m pip install torch-directml
+ .\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml
+ pause
+
+Then run the `run.bat`.
+
+### Mac
Coming soon ...
diff --git a/update_log.md b/update_log.md
index ba4d4122e..b7de0ee60 100644
--- a/update_log.md
+++ b/update_log.md
@@ -1,3 +1,7 @@
+# 2.1.25
+
+AMD support on Linux and Windows.
+
# 2.1.0
* Image Prompt
diff --git a/webui.py b/webui.py
index eab5bdc50..4f29b3ebc 100644
--- a/webui.py
+++ b/webui.py
@@ -10,6 +10,7 @@
import modules.flags as flags
import modules.gradio_hijack as grh
import modules.advanced_parameters as advanced_parameters
+import args_manager
from modules.sdxl_styles import style_keys, aspect_ratios, fooocus_expansion, default_styles, default_aspect_ratio
@@ -310,9 +311,9 @@ def model_refresh_clicked():
.then(lambda: (gr.update(visible=True), gr.update(visible=False)), outputs=[run_button, stop_button])
-parser = argparse.ArgumentParser()
-parser.add_argument("--port", type=int, default=None, help="Set the listen port.")
-parser.add_argument("--share", action='store_true', help="Set whether to share on Gradio.")
-parser.add_argument("--listen", type=str, default=None, metavar="IP", nargs="?", const="0.0.0.0", help="Set the listen interface.")
-args = parser.parse_args()
-shared.gradio_root.launch(inbrowser=True, server_name=args.listen, server_port=args.port, share=args.share)
+shared.gradio_root.launch(
+ inbrowser=args_manager.args.auto_launch,
+ server_name=args_manager.args.listen,
+ server_port=args_manager.args.port,
+ share=args_manager.args.share
+)
|
https://api.github.com/repos/lllyasviel/Fooocus/pulls/607
|
2023-10-09T22:45:32Z
|
2023-10-09T22:47:46Z
|
2023-10-09T22:47:46Z
|
2023-10-09T22:47:48Z
| 897
|
lllyasviel/Fooocus
| 7,063
|
|
update urllib3 to 60ba176f5d
|
diff --git a/requests/packages/urllib3/connectionpool.py b/requests/packages/urllib3/connectionpool.py
index f3e926089f..3d7d166a7a 100644
--- a/requests/packages/urllib3/connectionpool.py
+++ b/requests/packages/urllib3/connectionpool.py
@@ -110,7 +110,7 @@ def connect(self):
if self.assert_fingerprint:
assert_fingerprint(self.sock.getpeercert(binary_form=True),
self.assert_fingerprint)
- else:
+ elif self.assert_hostname is not False:
match_hostname(self.sock.getpeercert(),
self.assert_hostname or self.host)
@@ -513,6 +513,7 @@ class HTTPSConnectionPool(HTTPConnectionPool):
:class:`.VerifiedHTTPSConnection` uses one of ``assert_fingerprint``,
``assert_hostname`` and ``host`` in this order to verify connections.
+ If ``assert_hostname`` is False, no verification is done.
The ``key_file``, ``cert_file``, ``cert_reqs``, ``ca_certs`` and
``ssl_version`` are only used if :mod:`ssl` is available and are fed into
diff --git a/requests/packages/urllib3/contrib/ntlmpool.py b/requests/packages/urllib3/contrib/ntlmpool.py
index 277ee0b2ab..b8cd933034 100644
--- a/requests/packages/urllib3/contrib/ntlmpool.py
+++ b/requests/packages/urllib3/contrib/ntlmpool.py
@@ -33,7 +33,7 @@ class NTLMConnectionPool(HTTPSConnectionPool):
def __init__(self, user, pw, authurl, *args, **kwargs):
"""
authurl is a random URL on the server that is protected by NTLM.
- user is the Windows user, probably in the DOMAIN\username format.
+ user is the Windows user, probably in the DOMAIN\\username format.
pw is the password for the user.
"""
super(NTLMConnectionPool, self).__init__(*args, **kwargs)
diff --git a/requests/packages/urllib3/contrib/pyopenssl.py b/requests/packages/urllib3/contrib/pyopenssl.py
index 5c4c6d8d31..9829e80b60 100644
--- a/requests/packages/urllib3/contrib/pyopenssl.py
+++ b/requests/packages/urllib3/contrib/pyopenssl.py
@@ -115,6 +115,9 @@ def settimeout(self, timeout):
def sendall(self, data):
return self.connection.sendall(data)
+ def close(self):
+ return self.connection.shutdown()
+
def getpeercert(self, binary_form=False):
x509 = self.connection.get_peer_certificate()
if not x509:
diff --git a/requests/packages/urllib3/filepost.py b/requests/packages/urllib3/filepost.py
index 470309a006..526a7409c5 100644
--- a/requests/packages/urllib3/filepost.py
+++ b/requests/packages/urllib3/filepost.py
@@ -1,5 +1,5 @@
# urllib3/filepost.py
-# Copyright 2008-2012 Andrey Petrov and contributors (see CONTRIBUTORS.txt)
+# Copyright 2008-2013 Andrey Petrov and contributors (see CONTRIBUTORS.txt)
#
# This module is part of urllib3 and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
diff --git a/requests/packages/urllib3/poolmanager.py b/requests/packages/urllib3/poolmanager.py
index ce0c248ea8..2a1aa48bf0 100644
--- a/requests/packages/urllib3/poolmanager.py
+++ b/requests/packages/urllib3/poolmanager.py
@@ -6,6 +6,11 @@
import logging
+try: # Python 3
+ from urllib.parse import urljoin
+except ImportError:
+ from urlparse import urljoin
+
from ._collections import RecentlyUsedContainer
from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool
from .connectionpool import connection_from_url, port_by_scheme
@@ -145,6 +150,10 @@ def urlopen(self, method, url, redirect=True, **kw):
if not redirect_location:
return response
+ # Support relative URLs for redirecting.
+ redirect_location = urljoin(url, redirect_location)
+
+ # RFC 2616, Section 10.3.4
if response.status == 303:
method = 'GET'
diff --git a/requests/packages/urllib3/request.py b/requests/packages/urllib3/request.py
index bf0256e964..66a9a0e690 100644
--- a/requests/packages/urllib3/request.py
+++ b/requests/packages/urllib3/request.py
@@ -30,7 +30,7 @@ class RequestMethods(object):
in the URL (such as GET, HEAD, DELETE).
:meth:`.request_encode_body` is for sending requests whose fields are
- encoded in the *body* of the request using multipart or www-orm-urlencoded
+ encoded in the *body* of the request using multipart or www-form-urlencoded
(such as for POST, PUT, PATCH).
:meth:`.request` is for making any kind of request, it will look up the
diff --git a/requests/packages/urllib3/response.py b/requests/packages/urllib3/response.py
index 2fa407887d..05bc38a31b 100644
--- a/requests/packages/urllib3/response.py
+++ b/requests/packages/urllib3/response.py
@@ -1,5 +1,5 @@
# urllib3/response.py
-# Copyright 2008-2012 Andrey Petrov and contributors (see CONTRIBUTORS.txt)
+# Copyright 2008-2013 Andrey Petrov and contributors (see CONTRIBUTORS.txt)
#
# This module is part of urllib3 and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
@@ -7,6 +7,7 @@
import logging
import zlib
+import io
from .exceptions import DecodeError
from .packages.six import string_types as basestring, binary_type
@@ -48,7 +49,7 @@ def _get_decoder(mode):
return DeflateDecoder()
-class HTTPResponse(object):
+class HTTPResponse(io.IOBase):
"""
HTTP Response container.
@@ -239,3 +240,35 @@ def getheaders(self):
def getheader(self, name, default=None):
return self.headers.get(name, default)
+
+ # Overrides from io.IOBase
+ def close(self):
+ if not self.closed:
+ self._fp.close()
+
+ @property
+ def closed(self):
+ if self._fp is None:
+ return True
+ elif hasattr(self._fp, 'closed'):
+ return self._fp.closed
+ elif hasattr(self._fp, 'isclosed'): # Python 2
+ return self._fp.isclosed()
+ else:
+ return True
+
+ def fileno(self):
+ if self._fp is None:
+ raise IOError("HTTPResponse has no file to get a fileno from")
+ elif hasattr(self._fp, "fileno"):
+ return self._fp.fileno()
+ else:
+ raise IOError("The file-like object this HTTPResponse is wrapped "
+ "around has no file descriptor")
+
+ def flush(self):
+ if self._fp is not None and hasattr(self._fp, 'flush'):
+ return self._fp.flush()
+
+ def readable(self):
+ return True
|
https://api.github.com/repos/psf/requests/pulls/1412
|
2013-06-08T08:24:17Z
|
2013-06-08T10:13:34Z
|
2013-06-08T10:13:34Z
|
2021-09-08T23:11:10Z
| 1,713
|
psf/requests
| 32,509
|
|
Complete strict typing to Humidifier entity platform
|
diff --git a/homeassistant/components/device_automation/toggle_entity.py b/homeassistant/components/device_automation/toggle_entity.py
index 2d0254b9a0ad90..5905236e050cd8 100644
--- a/homeassistant/components/device_automation/toggle_entity.py
+++ b/homeassistant/components/device_automation/toggle_entity.py
@@ -107,7 +107,7 @@ async def async_call_action_from_config(
hass: HomeAssistant,
config: ConfigType,
variables: TemplateVarsType,
- context: Context,
+ context: Context | None,
domain: str,
) -> None:
"""Change state based on configuration."""
diff --git a/homeassistant/components/humidifier/__init__.py b/homeassistant/components/humidifier/__init__.py
index bf3a45d6e91154..8eda2589417dd3 100644
--- a/homeassistant/components/humidifier/__init__.py
+++ b/homeassistant/components/humidifier/__init__.py
@@ -137,7 +137,7 @@ class HumidifierEntity(ToggleEntity):
def capability_attributes(self) -> dict[str, Any]:
"""Return capability attributes."""
supported_features = self.supported_features or 0
- data = {
+ data: dict[str, int | list[str] | None] = {
ATTR_MIN_HUMIDITY: self.min_humidity,
ATTR_MAX_HUMIDITY: self.max_humidity,
}
@@ -161,7 +161,7 @@ def device_class(self) -> HumidifierDeviceClass | str | None:
def state_attributes(self) -> dict[str, Any]:
"""Return the optional state attributes."""
supported_features = self.supported_features or 0
- data = {}
+ data: dict[str, int | str | None] = {}
if self.target_humidity is not None:
data[ATTR_HUMIDITY] = self.target_humidity
diff --git a/homeassistant/components/humidifier/device_action.py b/homeassistant/components/humidifier/device_action.py
index 3ad4b22dcec2b1..d8f13d31b557e3 100644
--- a/homeassistant/components/humidifier/device_action.py
+++ b/homeassistant/components/humidifier/device_action.py
@@ -1,6 +1,8 @@
"""Provides device actions for Humidifier."""
from __future__ import annotations
+from typing import Any
+
import voluptuous as vol
from homeassistant.components.device_automation import toggle_entity
@@ -70,7 +72,10 @@ async def async_get_actions(
async def async_call_action_from_config(
- hass: HomeAssistant, config: dict, variables: dict, context: Context | None
+ hass: HomeAssistant,
+ config: dict[str, Any],
+ variables: dict[str, Any],
+ context: Context | None,
) -> None:
"""Execute a device action."""
service_data = {ATTR_ENTITY_ID: config[CONF_ENTITY_ID]}
diff --git a/homeassistant/components/humidifier/device_condition.py b/homeassistant/components/humidifier/device_condition.py
index c8204c91a29335..a8baf4f491032c 100644
--- a/homeassistant/components/humidifier/device_condition.py
+++ b/homeassistant/components/humidifier/device_condition.py
@@ -77,7 +77,9 @@ def async_condition_from_config(
def test_is_state(hass: HomeAssistant, variables: TemplateVarsType) -> bool:
"""Test if an entity is a certain state."""
state = hass.states.get(config[ATTR_ENTITY_ID])
- return state and state.attributes.get(attribute) == config[attribute]
+ return (
+ state is not None and state.attributes.get(attribute) == config[attribute]
+ )
return test_is_state
diff --git a/homeassistant/components/humidifier/reproduce_state.py b/homeassistant/components/humidifier/reproduce_state.py
index e6d4fddafbcd8a..b0e9a29caccfcd 100644
--- a/homeassistant/components/humidifier/reproduce_state.py
+++ b/homeassistant/components/humidifier/reproduce_state.py
@@ -62,7 +62,9 @@ async def call_service(service: str, keys: Iterable, data=None):
if cur_state.state != STATE_ON:
await call_service(SERVICE_TURN_ON, [])
# refetch the state as turning on might allow us to see some more values
- cur_state = hass.states.get(state.entity_id)
+ if (cur_state := hass.states.get(state.entity_id)) is None:
+ _LOGGER.warning("Unable to find entity %s", state.entity_id)
+ return
# Then set the mode before target humidity, because switching modes
# may invalidate target humidity
diff --git a/mypy.ini b/mypy.ini
index 7346cc83ba9019..3e5d4e4af56da4 100644
--- a/mypy.ini
+++ b/mypy.ini
@@ -1804,9 +1804,6 @@ ignore_errors = true
[mypy-homeassistant.components.honeywell.*]
ignore_errors = true
-[mypy-homeassistant.components.humidifier.*]
-ignore_errors = true
-
[mypy-homeassistant.components.iaqualink.*]
ignore_errors = true
diff --git a/script/hassfest/mypy_config.py b/script/hassfest/mypy_config.py
index 6fc6c0e399301c..690a91d59de559 100644
--- a/script/hassfest/mypy_config.py
+++ b/script/hassfest/mypy_config.py
@@ -47,7 +47,6 @@
"homeassistant.components.homekit.*",
"homeassistant.components.homekit_controller.*",
"homeassistant.components.honeywell.*",
- "homeassistant.components.humidifier.*",
"homeassistant.components.iaqualink.*",
"homeassistant.components.icloud.*",
"homeassistant.components.image.*",
|
<!--
You are amazing! Thanks for contributing to our project!
Please, DO NOT DELETE ANY TEXT from this template! (unless instructed).
-->
## Proposed change
<!--
Describe the big picture of your changes here to communicate to the
maintainers why we should accept this pull request. If it fixes a bug
or resolves a feature request, be sure to link to that issue in the
additional information section.
-->
Completes strict typing for the `humidifier` entity platform.
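As a sketch of the kind of annotation involved (the keys here are illustrative), the explicit union type is what lets mypy accept the heterogeneous values assigned later:
```python
# Without the annotation, mypy infers dict[str, int] from the literal and,
# under strict typing, rejects the later list/None assignments.
data: dict[str, int | list[str] | None] = {"min_humidity": 0, "max_humidity": 100}
data["available_modes"] = ["eco", "boost"]  # OK: list[str] is in the union
data["mode"] = None                         # OK: None is in the union
```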
## Type of change
<!--
What type of change does your PR introduce to Home Assistant?
NOTE: Please, check only 1! box!
If your PR requires multiple boxes to be checked, you'll most likely need to
split it into multiple PRs. This makes things easier and faster to code review.
-->
- [ ] Dependency upgrade
- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New integration (thank you!)
- [ ] New feature (which adds functionality to an existing integration)
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [x] Code quality improvements to existing code or addition of tests
## Additional information
<!--
Details are important, and help maintainers processing your PR.
Please be sure to fill out additional details, if applicable.
-->
- This PR fixes or closes issue: fixes #
- This PR is related to issue:
- Link to documentation pull request:
## Checklist
<!--
Put an `x` in the boxes that apply. You can also fill these out after
creating the PR. If you're unsure about any of them, don't hesitate to ask.
We're here to help! This is simply a reminder of what we are going to look
for before merging your code.
-->
- [x] The code change is tested and works locally.
- [x] Local tests pass. **Your PR cannot be merged unless tests pass**
- [x] There is no commented out code in this PR.
- [x] I have followed the [development checklist][dev-checklist]
- [x] The code has been formatted using Black (`black --fast homeassistant tests`)
- [x] Tests have been added to verify that the new code works.
If user exposed functionality or configuration variables are added/changed:
- [ ] Documentation added/updated for [www.home-assistant.io][docs-repository]
If the code communicates with devices, web services, or third-party tools:
- [ ] The [manifest file][manifest-docs] has all fields filled out correctly.
Updated and included derived files by running: `python3 -m script.hassfest`.
- [ ] New or updated dependencies have been added to `requirements_all.txt`.
Updated by running `python3 -m script.gen_requirements_all`.
- [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.
- [ ] Untested files have been added to `.coveragerc`.
The integration reached or maintains the following [Integration Quality Scale][quality-scale]:
<!--
The Integration Quality Scale scores an integration on the code quality
and user experience. Each level of the quality scale consists of a list
of requirements. We highly recommend getting your integration scored!
-->
- [x] No score or internal
- [ ] 🥈 Silver
- [ ] 🥇 Gold
- [ ] 🏆 Platinum
<!--
This project is very active and we have a high turnover of pull requests.
Unfortunately, the number of incoming pull requests is higher than what our
reviewers can review and merge so there is a long backlog of pull requests
waiting for review. You can help here!
By reviewing another pull request, you will help raise the code quality of
that pull request and the final review will be faster. This way the general
pace of pull request reviews will go up and your wait time will go down.
When picking a pull request to review, try to choose one that hasn't yet
been reviewed.
Thanks for helping out!
-->
To help with the load of incoming pull requests:
- [x] I have reviewed two other [open pull requests][prs] in this repository.
[prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone
<!--
Thank you for contributing <3
Below, some useful links you could explore:
-->
[dev-checklist]: https://developers.home-assistant.io/docs/en/development_checklist.html
[manifest-docs]: https://developers.home-assistant.io/docs/en/creating_integration_manifest.html
[quality-scale]: https://developers.home-assistant.io/docs/en/next/integration_quality_scale_index.html
[docs-repository]: https://github.com/home-assistant/home-assistant.io
|
https://api.github.com/repos/home-assistant/core/pulls/61021
|
2021-12-04T22:41:11Z
|
2022-01-04T17:09:32Z
|
2022-01-04T17:09:32Z
|
2022-01-05T18:01:56Z
| 1,318
|
home-assistant/core
| 39,070
|
Add ConfigEntry template function
|
diff --git a/homeassistant/helpers/template.py b/homeassistant/helpers/template.py
index ea6b764a75a0f0..4c47e300dc7bd2 100644
--- a/homeassistant/helpers/template.py
+++ b/homeassistant/helpers/template.py
@@ -1062,6 +1062,14 @@ def integration_entities(hass: HomeAssistant, entry_name: str) -> Iterable[str]:
]
+def entry_id(hass: HomeAssistant, entity_id: str) -> str | None:
+ """Get an entry ID from an entity ID."""
+ entity_reg = entity_registry.async_get(hass)
+ if entity := entity_reg.async_get(entity_id):
+ return entity.config_entry_id
+ return None
+
+
def device_id(hass: HomeAssistant, entity_id_or_device_name: str) -> str | None:
"""Get a device ID from an entity ID or device name."""
entity_reg = entity_registry.async_get(hass)
@@ -2059,6 +2067,9 @@ def wrapper(*args, **kwargs):
self.globals["device_attr"] = hassfunction(device_attr)
self.globals["is_device_attr"] = hassfunction(is_device_attr)
+ self.globals["entry_id"] = hassfunction(entry_id)
+ self.filters["entry_id"] = pass_context(self.globals["entry_id"])
+
self.globals["device_id"] = hassfunction(device_id)
self.filters["device_id"] = pass_context(self.globals["device_id"])
diff --git a/tests/helpers/test_template.py b/tests/helpers/test_template.py
index 3186c10b20edbc..bed9e63ad28b97 100644
--- a/tests/helpers/test_template.py
+++ b/tests/helpers/test_template.py
@@ -2419,6 +2419,30 @@ async def test_integration_entities(hass):
assert info.rate_limit is None
+async def test_entry_id(hass):
+ """Test entry_id function."""
+ config_entry = MockConfigEntry(domain="light", title="Some integration")
+ config_entry.add_to_hass(hass)
+ entity_registry = mock_registry(hass)
+ entity_entry = entity_registry.async_get_or_create(
+ "sensor", "test", "test", suggested_object_id="test", config_entry=config_entry
+ )
+
+ info = render_to_info(hass, "{{ 'sensor.fail' | entry_id }}")
+ assert_result_info(info, None)
+ assert info.rate_limit is None
+
+ info = render_to_info(hass, "{{ 56 | entry_id }}")
+ assert_result_info(info, None)
+
+ info = render_to_info(hass, "{{ 'not_a_real_entity_id' | entry_id }}")
+ assert_result_info(info, None)
+
+ info = render_to_info(hass, f"{{{{ entry_id('{entity_entry.entity_id}') }}}}")
+ assert_result_info(info, config_entry.entry_id)
+ assert info.rate_limit is None
+
+
async def test_device_id(hass):
"""Test device_id function."""
config_entry = MockConfigEntry(domain="light")
|
<!--
You are amazing! Thanks for contributing to our project!
Please, DO NOT DELETE ANY TEXT from this template! (unless instructed).
-->
## Breaking change
<!--
If your PR contains a breaking change for existing users, it is important
to tell them what breaks, how to make it work again and why we did this.
This piece of text is published with the release notes, so it helps if you
write it towards our users, not us.
Note: Remove this section if this PR is NOT a breaking change.
-->
## Proposed change
<!--
Describe the big picture of your changes here to communicate to the
maintainers why we should accept this pull request. If it fixes a bug
or resolves a feature request, be sure to link to that issue in the
additional information section.
-->
Add template function `entry_id(entity_id)`: returns the config entry ID of the given entity, or `None` if the entity cannot be found.
This is useful for services that require the entry_id in their service data.
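For example (the service name below is made up for illustration; the function is also registered as a filter, so `{{ 'sensor.test' | entry_id }}` works too):
```yaml
service: some_integration.set_options  # hypothetical service taking an entry_id
data:
  entry_id: "{{ entry_id('sensor.test') }}"
```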
## Type of change
<!--
What type of change does your PR introduce to Home Assistant?
NOTE: Please, check only 1! box!
If your PR requires multiple boxes to be checked, you'll most likely need to
split it into multiple PRs. This makes things easier and faster to code review.
-->
- [ ] Dependency upgrade
- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New integration (thank you!)
- [x] New feature (which adds functionality to an existing integration)
- [ ] Deprecation (breaking change to happen in the future)
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [ ] Code quality improvements to existing code or addition of tests
## Additional information
<!--
Details are important, and help maintainers processing your PR.
Please be sure to fill out additional details, if applicable.
-->
- This PR fixes or closes issue: fixes #
- This PR is related to issue:
- Link to documentation pull request: https://github.com/home-assistant/home-assistant.io/pull/24275
## Checklist
<!--
Put an `x` in the boxes that apply. You can also fill these out after
creating the PR. If you're unsure about any of them, don't hesitate to ask.
We're here to help! This is simply a reminder of what we are going to look
for before merging your code.
-->
- [x] The code change is tested and works locally.
- [x] Local tests pass. **Your PR cannot be merged unless tests pass**
- [x] There is no commented out code in this PR.
- [x] I have followed the [development checklist][dev-checklist]
- [x] The code has been formatted using Black (`black --fast homeassistant tests`)
- [x] Tests have been added to verify that the new code works.
If user exposed functionality or configuration variables are added/changed:
- [x] Documentation added/updated for [www.home-assistant.io][docs-repository]
If the code communicates with devices, web services, or third-party tools:
- [ ] The [manifest file][manifest-docs] has all fields filled out correctly.
Updated and included derived files by running: `python3 -m script.hassfest`.
- [ ] New or updated dependencies have been added to `requirements_all.txt`.
Updated by running `python3 -m script.gen_requirements_all`.
- [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.
- [ ] Untested files have been added to `.coveragerc`.
The integration reached or maintains the following [Integration Quality Scale][quality-scale]:
<!--
The Integration Quality Scale scores an integration on the code quality
and user experience. Each level of the quality scale consists of a list
of requirements. We highly recommend getting your integration scored!
-->
- [ ] No score or internal
- [ ] 🥈 Silver
- [ ] 🥇 Gold
- [ ] 🏆 Platinum
<!--
This project is very active and we have a high turnover of pull requests.
Unfortunately, the number of incoming pull requests is higher than what our
reviewers can review and merge so there is a long backlog of pull requests
waiting for review. You can help here!
By reviewing another pull request, you will help raise the code quality of
that pull request and the final review will be faster. This way the general
pace of pull request reviews will go up and your wait time will go down.
When picking a pull request to review, try to choose one that hasn't yet
been reviewed.
Thanks for helping out!
-->
To help with the load of incoming pull requests:
- [ ] I have reviewed two other [open pull requests][prs] in this repository.
[prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure
<!--
Thank you for contributing <3
Below, some useful links you could explore:
-->
[dev-checklist]: https://developers.home-assistant.io/docs/en/development_checklist.html
[manifest-docs]: https://developers.home-assistant.io/docs/en/creating_integration_manifest.html
[quality-scale]: https://developers.home-assistant.io/docs/en/next/integration_quality_scale_index.html
[docs-repository]: https://github.com/home-assistant/home-assistant.io
|
https://api.github.com/repos/home-assistant/core/pulls/78030
|
2022-09-08T09:00:48Z
|
2022-09-29T10:41:59Z
|
2022-09-29T10:41:59Z
|
2022-10-26T10:27:47Z
| 676
|
home-assistant/core
| 39,225
|
Improve Risco exception logging
|
diff --git a/homeassistant/components/risco/__init__.py b/homeassistant/components/risco/__init__.py
index 7ca18ea77c5e15..d25579343c8a65 100644
--- a/homeassistant/components/risco/__init__.py
+++ b/homeassistant/components/risco/__init__.py
@@ -101,7 +101,7 @@ async def _async_setup_local_entry(hass: HomeAssistant, entry: ConfigEntry) -> b
return False
async def _error(error: Exception) -> None:
- _LOGGER.error("Error in Risco library: %s", error)
+ _LOGGER.error("Error in Risco library", exc_info=error)
entry.async_on_unload(risco.add_error_handler(_error))
|
<!--
You are amazing! Thanks for contributing to our project!
Please, DO NOT DELETE ANY TEXT from this template! (unless instructed).
-->
## Proposed change
<!--
Describe the big picture of your changes here to communicate to the
maintainers why we should accept this pull request. If it fixes a bug
or resolves a feature request, be sure to link to that issue in the
additional information section.
-->
Exceptions from pyrisco are passed as Exception objects rather than raised. This PR makes sure both the error and its stack trace are logged, rather than just the error message. This should assist in troubleshooting Risco issues.
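For reference, a minimal sketch of the difference (the logger name is made up):
```python
import logging

_LOGGER = logging.getLogger("risco_demo")

try:
    raise ValueError("connection dropped")
except ValueError as error:
    # Before: only the message is logged; the traceback is lost.
    _LOGGER.error("Error in Risco library: %s", error)
    # After: exc_info accepts the exception instance and logs its traceback.
    _LOGGER.error("Error in Risco library", exc_info=error)
```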
## Type of change
<!--
What type of change does your PR introduce to Home Assistant?
NOTE: Please, check only 1! box!
If your PR requires multiple boxes to be checked, you'll most likely need to
split it into multiple PRs. This makes things easier and faster to code review.
-->
- [ ] Dependency upgrade
- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New integration (thank you!)
- [ ] New feature (which adds functionality to an existing integration)
- [ ] Deprecation (breaking change to happen in the future)
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [X] Code quality improvements to existing code or addition of tests
## Additional information
<!--
Details are important, and help maintainers processing your PR.
Please be sure to fill out additional details, if applicable.
-->
- This PR fixes or closes issue: fixes #
- This PR is related to issue:
- Link to documentation pull request:
## Checklist
<!--
Put an `x` in the boxes that apply. You can also fill these out after
creating the PR. If you're unsure about any of them, don't hesitate to ask.
We're here to help! This is simply a reminder of what we are going to look
for before merging your code.
-->
- [X] The code change is tested and works locally.
- [X] Local tests pass. **Your PR cannot be merged unless tests pass**
- [X] There is no commented out code in this PR.
- [X] I have followed the [development checklist][dev-checklist]
- [X] I have followed the [perfect PR recommendations][perfect-pr]
- [X] The code has been formatted using Ruff (`ruff format homeassistant tests`)
- [ ] Tests have been added to verify that the new code works.
If user exposed functionality or configuration variables are added/changed:
- [ ] Documentation added/updated for [www.home-assistant.io][docs-repository]
If the code communicates with devices, web services, or third-party tools:
- [ ] The [manifest file][manifest-docs] has all fields filled out correctly.
Updated and included derived files by running: `python3 -m script.hassfest`.
- [ ] New or updated dependencies have been added to `requirements_all.txt`.
Updated by running `python3 -m script.gen_requirements_all`.
- [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.
- [ ] Untested files have been added to `.coveragerc`.
<!--
This project is very active and we have a high turnover of pull requests.
Unfortunately, the number of incoming pull requests is higher than what our
reviewers can review and merge so there is a long backlog of pull requests
waiting for review. You can help here!
By reviewing another pull request, you will help raise the code quality of
that pull request and the final review will be faster. This way the general
pace of pull request reviews will go up and your wait time will go down.
When picking a pull request to review, try to choose one that hasn't yet
been reviewed.
Thanks for helping out!
-->
To help with the load of incoming pull requests:
- [ ] I have reviewed two other [open pull requests][prs] in this repository.
[prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure
<!--
Thank you for contributing <3
Below, some useful links you could explore:
-->
[dev-checklist]: https://developers.home-assistant.io/docs/development_checklist/
[manifest-docs]: https://developers.home-assistant.io/docs/creating_integration_manifest/
[quality-scale]: https://developers.home-assistant.io/docs/integration_quality_scale_index/
[docs-repository]: https://github.com/home-assistant/home-assistant.io
[perfect-pr]: https://developers.home-assistant.io/docs/review-process/#creating-the-perfect-pr
|
https://api.github.com/repos/home-assistant/core/pulls/115232
|
2024-04-08T19:00:41Z
|
2024-04-10T21:26:15Z
|
2024-04-10T21:26:15Z
|
2024-04-12T09:58:57Z
| 174
|
home-assistant/core
| 38,663
|
Make our test farm tests instances self-destruct
|
diff --git a/tests/letstest/auto_targets.yaml b/tests/letstest/auto_targets.yaml
index 9d97c6a8331..01d410227d1 100644
--- a/tests/letstest/auto_targets.yaml
+++ b/tests/letstest/auto_targets.yaml
@@ -31,10 +31,6 @@ targets:
virt: hvm
user: admin
machine_type: a1.medium
- # userdata: |
- # #cloud-init
- # runcmd:
- # - [ apt-get, install, -y, curl ]
#-----------------------------------------------------------------------------
# Other Redhat Distros
- ami: ami-0916c408cb02e310b
diff --git a/tests/letstest/multitester.py b/tests/letstest/multitester.py
index cf9f2899ad7..1a1958bd2e3 100644
--- a/tests/letstest/multitester.py
+++ b/tests/letstest/multitester.py
@@ -147,22 +147,32 @@ def make_instance(ec2_client,
keyname,
security_group_id,
subnet_id,
- machine_type='t2.micro',
- userdata=""): #userdata contains bash or cloud-init script
+ self_destruct,
+ machine_type='t2.micro'):
+ """Creates an instance using the given parameters.
+
+ If self_destruct is True, the instance will be configured to shutdown after
+ 1 hour and to terminate itself on shutdown.
+
+ """
block_device_mappings = _get_block_device_mappings(ec2_client, ami_id)
tags = [{'Key': 'Name', 'Value': instance_name}]
tag_spec = [{'ResourceType': 'instance', 'Tags': tags}]
- return ec2_client.create_instances(
- BlockDeviceMappings=block_device_mappings,
- ImageId=ami_id,
- SecurityGroupIds=[security_group_id],
- SubnetId=subnet_id,
- KeyName=keyname,
- MinCount=1,
- MaxCount=1,
- UserData=userdata,
- InstanceType=machine_type,
- TagSpecifications=tag_spec)[0]
+ kwargs = {
+ 'BlockDeviceMappings': block_device_mappings,
+ 'ImageId': ami_id,
+ 'SecurityGroupIds': [security_group_id],
+ 'SubnetId': subnet_id,
+ 'KeyName': keyname,
+ 'MinCount': 1,
+ 'MaxCount': 1,
+ 'InstanceType': machine_type,
+ 'TagSpecifications': tag_spec
+ }
+ if self_destruct:
+ kwargs['InstanceInitiatedShutdownBehavior'] = 'terminate'
+ kwargs['UserData'] = '#!/bin/bash\nshutdown -P +60\n'
+ return ec2_client.create_instances(**kwargs)[0]
def _get_block_device_mappings(ec2_client, ami_id):
"""Returns the list of block device mappings to ensure cleanup.
@@ -313,7 +323,7 @@ def grab_certbot_log(cxn):
'cat ./certbot.log; else echo "[nolocallog]"; fi\'')
-def create_client_instance(ec2_client, target, security_group_id, subnet_id):
+def create_client_instance(ec2_client, target, security_group_id, subnet_id, self_destruct):
"""Create a single client instance for running tests."""
if 'machine_type' in target:
machine_type = target['machine_type']
@@ -322,10 +332,6 @@ def create_client_instance(ec2_client, target, security_group_id, subnet_id):
else:
# 32 bit systems
machine_type = 'c1.medium'
- if 'userdata' in target:
- userdata = target['userdata']
- else:
- userdata = ''
name = 'le-%s'%target['name']
print(name, end=" ")
return make_instance(ec2_client,
@@ -335,7 +341,7 @@ def create_client_instance(ec2_client, target, security_group_id, subnet_id):
machine_type=machine_type,
security_group_id=security_group_id,
subnet_id=subnet_id,
- userdata=userdata)
+ self_destruct=self_destruct)
def test_client_process(fab_config, inqueue, outqueue, boulder_url, log_dir):
@@ -490,6 +496,9 @@ def main():
boulder_preexists = True
else:
print("Can't find a boulder server, starting one...")
+ # If we want to kill boulder on shutdown, have it self-destruct in case
+ # cleanup fails.
+ self_destruct = cl_args.killboulder
boulder_server = make_instance(ec2_client,
'le-boulderserver',
BOULDER_AMI,
@@ -497,16 +506,20 @@ def main():
machine_type='t2.micro',
#machine_type='t2.medium',
security_group_id=security_group_id,
- subnet_id=subnet_id)
+ subnet_id=subnet_id,
+ self_destruct=self_destruct)
instances = []
try:
if not cl_args.boulderonly:
print("Creating instances: ", end="")
+ # If we want to preserve instances, do not have them self-destruct.
+ self_destruct = not cl_args.saveinstances
for target in targetlist:
instances.append(
create_client_instance(ec2_client, target,
- security_group_id, subnet_id)
+ security_group_id, subnet_id,
+ self_destruct)
)
print()
|
Fixes https://github.com/certbot/certbot/issues/7567.
I used the first approach I described at https://github.com/certbot/certbot/issues/7567#issuecomment-743336813 which was to:
1. Change the instance shutdown behavior so shutting down the instance terminates it. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html#Using_ChangingInstanceInitiatedShutdownBehavior.
2. Add a shell script in user data that runs on machine startup as described at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts. This shell script schedules the machine to power off in 60 minutes.
To help accomplish this, I made `multitester.py` no longer accept `UserData` in our "target" YAML files, but we weren't using this functionality anyway.
I tested this by locally commenting out the usual cleanup code. After the script exited, I checked the AWS console and the instances were still running, but after the allotted time, they correctly terminated themselves.
With this change, we could delete the normal termination code, but I personally think it's worth keeping around. This scheduled shutdown is a little odd and in the normal case I think we can terminate the instances earlier and maybe save some money.
You can see the test farm tests passing with this change at https://dev.azure.com/certbot/certbot/_build/results?buildId=3175&view=results.
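For reference, a minimal boto3 sketch of the self-destruct configuration (the AMI and sizing are placeholders; the real call lives in `make_instance` above):
```python
import boto3

ec2 = boto3.resource("ec2")
instance = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    MinCount=1,
    MaxCount=1,
    InstanceType="t2.micro",
    # Shutting the OS down terminates the instance instead of stopping it...
    InstanceInitiatedShutdownBehavior="terminate",
    # ...and user data schedules that shutdown for 60 minutes after boot.
    UserData="#!/bin/bash\nshutdown -P +60\n",
)[0]
```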
|
https://api.github.com/repos/certbot/certbot/pulls/8536
|
2020-12-15T00:38:39Z
|
2020-12-15T11:00:01Z
|
2020-12-15T11:00:00Z
|
2020-12-15T11:00:02Z
| 1,228
|
certbot/certbot
| 1,566
|
Patch GitPython to not use leaky persistent processes
|
diff --git a/modules/extensions.py b/modules/extensions.py
index 624832a00e9..fb7250e6a01 100644
--- a/modules/extensions.py
+++ b/modules/extensions.py
@@ -3,9 +3,8 @@
import threading
import traceback
-import git
-
from modules import shared
+from modules.gitpython_hack import Repo
from modules.paths_internal import extensions_dir, extensions_builtin_dir, script_path # noqa: F401
extensions = []
@@ -54,7 +53,7 @@ def do_read_info_from_repo(self):
repo = None
try:
if os.path.exists(os.path.join(self.path, ".git")):
- repo = git.Repo(self.path)
+ repo = Repo(self.path)
except Exception:
print(f"Error reading github repository info from {self.path}:", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
@@ -94,7 +93,7 @@ def list_files(self, subdir, extension):
return res
def check_updates(self):
- repo = git.Repo(self.path)
+ repo = Repo(self.path)
for fetch in repo.remote().fetch(dry_run=True):
if fetch.flags != fetch.HEAD_UPTODATE:
self.can_update = True
@@ -116,7 +115,7 @@ def check_updates(self):
self.status = "latest"
def fetch_and_reset_hard(self, commit='origin'):
- repo = git.Repo(self.path)
+ repo = Repo(self.path)
# Fix: `error: Your local changes to the following files would be overwritten by merge`,
# because WSL2 Docker set 755 file permissions instead of 644, this results to the error.
repo.git.fetch(all=True)
diff --git a/modules/gitpython_hack.py b/modules/gitpython_hack.py
new file mode 100644
index 00000000000..e537c1df93e
--- /dev/null
+++ b/modules/gitpython_hack.py
@@ -0,0 +1,42 @@
+from __future__ import annotations
+
+import io
+import subprocess
+
+import git
+
+
+class Git(git.Git):
+ """
+ Git subclassed to never use persistent processes.
+ """
+
+ def _get_persistent_cmd(self, attr_name, cmd_name, *args, **kwargs):
+ raise NotImplementedError(f"Refusing to use persistent process: {attr_name} ({cmd_name} {args} {kwargs})")
+
+ def get_object_header(self, ref: str | bytes) -> tuple[str, str, int]:
+ ret = subprocess.check_output(
+ [self.GIT_PYTHON_GIT_EXECUTABLE, "cat-file", "--batch-check"],
+ input=self._prepare_ref(ref),
+ cwd=self._working_dir,
+ timeout=2,
+ )
+ return self._parse_object_header(ret)
+
+ def stream_object_data(self, ref: str) -> tuple[str, str, int, "Git.CatFileContentStream"]:
+ # Not really streaming, per se; this buffers the entire object in memory.
+ # Shouldn't be a problem for our use case, since we're only using this for
+ # object headers (commit objects).
+ ret = subprocess.check_output(
+ [self.GIT_PYTHON_GIT_EXECUTABLE, "cat-file", "--batch"],
+ input=self._prepare_ref(ref),
+ cwd=self._working_dir,
+ timeout=30,
+ )
+ bio = io.BytesIO(ret)
+ hexsha, typename, size = self._parse_object_header(bio.readline())
+ return (hexsha, typename, size, self.CatFileContentStream(size, bio))
+
+
+class Repo(git.Repo):
+ GitCommandWrapperType = Git
diff --git a/modules/ui_extensions.py b/modules/ui_extensions.py
index 515ec262244..1c3f5ed93d0 100644
--- a/modules/ui_extensions.py
+++ b/modules/ui_extensions.py
@@ -490,8 +490,14 @@ def refresh_available_extensions_from_data(hide_tags, sort_column, filter_text="
def preload_extensions_git_metadata():
+ t0 = time.time()
for extension in extensions.extensions:
extension.read_info_from_repo()
+ print(
+ f"preload_extensions_git_metadata for "
+ f"{len(extensions.extensions)} extensions took "
+ f"{time.time() - t0:.2f}s"
+ )
def create_ui():
|
## Description
### a simple description of what you're trying to accomplish
On occasion on b957dcfece29c84ac0cfcd5a69475ff8684c531f, the server would fail to start (getting seemingly stuck), and when it did, there were tons of stray `git cat-file` processes left over [by GitPython's "persistent processes" system](https://github.com/gitpython-developers/GitPython/blob/6fc11e6e36e524a6749e15046eca3a8601745822/git/cmd.py#L1353-L1364) that would be useful if we actually needed to stream objects, etc. from Git.
Replacing the persistent cat-file commands with a simple `subprocess` call **seems to** make the initialization process solid **on my machine**. (This could be #10755.) On the other hand, this will make them somewhat slower, but the metadata load is done in a background thread anyway.
Tested on:
* Windows
* macOS
### a summary of changes in code
Subclassed `git.Git` and `git.Repo` so that object lookups use one-shot `subprocess` calls instead of GitPython's persistent `cat-file` processes.
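A hedged usage sketch (the path is hypothetical); since the subclasses are drop-in replacements, callers only swap the import:
```python
from modules.gitpython_hack import Repo

repo = Repo("/path/to/extensions/some-extension")  # hypothetical checkout
# Each lookup shells out to `git cat-file` once and exits; no persistent child.
hexsha, typename, size = repo.git.get_object_header("HEAD")
print(typename, size)
```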
### which issues it fixes, if any
Maybe #10755 and other "stuck loading" issues.
## Checklist:
- [x] I have read [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
- [x] I have performed a self-review of my own code
- [x] My code follows the [style guidelines](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing#code-style)
- [x] My code passes [tests](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Tests)
|
https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/10805
|
2023-05-28T21:50:51Z
|
2023-05-31T16:05:04Z
|
2023-05-31T16:05:03Z
|
2023-06-01T09:48:37Z
| 989
|
AUTOMATIC1111/stable-diffusion-webui
| 39,787
|
Update docs on working on docs
|
diff --git a/docs/docsite/Makefile b/docs/docsite/Makefile
index 76849d820f1b41..3add5bafaf71c3 100644
--- a/docs/docsite/Makefile
+++ b/docs/docsite/Makefile
@@ -78,32 +78,32 @@ gettext_generate_rst: collections_meta config cli keywords testing
# The following two symlinks are necessary to produce two different docsets
# from the same set of rst files (Ansible the package docs, and core docs).
# Symlink the relevant index into place for building Ansible docs
-ansible_structure: generate_rst
+ansible_structure:
# We must have python and python-packaging for the version_helper
# script so use it for version comparison
if python -c "import sys, packaging.version as p; sys.exit(not p.Version('$(MAJOR_VERSION)') > p.Version('2.10'))" ; then \
- echo "Creating symlinks in generate_rst"; \
+ echo "Creating symlinks in ansible_structure"; \
ln -sf ../rst/ansible_index.rst rst/index.rst; \
ln -sf ../sphinx_conf/ansible_conf.py rst/conf.py; \
else \
- echo 'Creating symlinks for older ansible in generate_rst'; \
+ echo 'Creating symlinks for older ansible in ansible_structure'; \
ln -sf ../rst/2.10_index.rst rst/index.rst; \
ln -sf ../sphinx_conf/2.10_conf.py rst/conf.py; \
fi
# Symlink the relevant index into place for building core docs
-core_structure: core_generate_rst
- @echo "Creating symlinks in core_generate_rst"
+core_structure:
+ @echo "Creating symlinks in core_structure"
-ln -sf ../rst/core_index.rst rst/index.rst
-ln -sf ../sphinx_conf/core_conf.py rst/conf.py
# Symlink the relevant index into place for building core docs
-gettext_structure: gettext_generate_rst
- @echo "Creating symlinks in gettext_generate_rst"
+gettext_structure:
+ @echo "Creating symlinks in gettext_structure"
-ln -sf ../rst/core_index.rst rst/index.rst
-ln -sf ../sphinx_conf/all_conf.py rst/conf.py
-gettext: gettext_structure
+gettext: gettext_structure gettext_generate_rst
CPUS=$(CPUS) $(MAKE) -f Makefile.sphinx gettext
# if msgcat is installed handle all indexes, otherwise use the index from gettext_structure.
-msgcat "$(POTDIR)/core_index.pot" "$(POTDIR)/ansible_index.pot" "$(POTDIR)/2.10_index.pot" > "$(POTDIR)/tmp_index.pot" && mv "$(POTDIR)/tmp_index.pot" "$(POTDIR)/index.pot"
@@ -123,21 +123,27 @@ else
(cd docs/docsite/; sphinx-intl stat -d rst/locales -l $(LANGUAGES) | grep -E ' [1-9][0-9]* (fuzzy|untranslated)' | sort)
endif
-htmldocs: ansible_structure
+htmldocs: ansible_structure generate_rst
CPUS=$(CPUS) $(MAKE) -f Makefile.sphinx html
-core_htmldocs: core_structure
+core_htmldocs: core_structure core_generate_rst
CPUS=$(CPUS) $(MAKE) -f Makefile.sphinx html
-singlehtmldocs: ansible_structure
+singlehtmldocs: ansible_structure generate_rst
CPUS=$(CPUS) $(MAKE) -f Makefile.sphinx singlehtml
-core_singlehtmldocs: core_structure
+core_singlehtmldocs: core_structure core_generate_rst
CPUS=$(CPUS) $(MAKE) -f Makefile.sphinx singlehtml
-linkcheckdocs: generate_rst
+# Note: The linkcheckdocs and htmlsingle targets depend on gettext_structure
+# because that one does not exclude any rst files in its conf.py.
+linkcheckdocs: gettext_structure generate_rst
CPUS=$(CPUS) $(MAKE) -f Makefile.sphinx linkcheck
+htmlsingle: assertrst gettext_structure
+ sphinx-build -j $(CPUS) -b html -d $(BUILDDIR)/doctrees ./rst $(BUILDDIR)/html rst/$(rst)
+ @echo "Output is in $(BUILDDIR)/html/$(rst:.rst=.html)"
+
webdocs: docs
#TODO: leaving htmlout removal for those having older versions, should eventually be removed also
@@ -170,7 +176,7 @@ clean:
fi \
done
@echo "Cleanning up generated ansible_structure"
- find -type l -delete
+ find . -type l -delete
@echo "Cleaning up legacy generated rst locations"
rm -rf rst/modules
rm -f rst/plugins/*/*.rst
@@ -205,7 +211,3 @@ testing:
epub:
(CPUS=$(CPUS) $(MAKE) -f Makefile.sphinx epub)
-
-htmlsingle: assertrst
- sphinx-build -j $(CPUS) -b html -d $(BUILDDIR)/doctrees ./rst $(BUILDDIR)/html rst/$(rst)
- @echo "Output is in $(BUILDDIR)/html/$(rst:.rst=.html)"
diff --git a/docs/docsite/rst/community/documentation_contributions.rst b/docs/docsite/rst/community/documentation_contributions.rst
index ecd8ebc22c5a15..c3c34c94c9dc34 100644
--- a/docs/docsite/rst/community/documentation_contributions.rst
+++ b/docs/docsite/rst/community/documentation_contributions.rst
@@ -77,17 +77,18 @@ To build documentation locally, ensure you have a working :ref:`development envi
To work with documentation on your local machine, you need to have python-3.5 or greater and the
following packages installed:
-- gcc
-- jinja2
-- libyaml
-- Pygments >= 2.4.0
-- pyparsing
-- PyYAML
-- rstcheck
-- six
-- sphinx
-- sphinx-notfound-page
-- straight.plugin
+ - ``gcc``
+ - ``jinja2``
+ - ``libyaml``
+ - ``make``
+ - ``Pygments``
+ - ``pyparsing``
+ - ``PyYAML``
+ - ``rstcheck``
+ - ``six``
+ - ``sphinx``
+ - ``sphinx-notfound-page``
+ - ``straight.plugin``
These required packages are listed in two :file:`requirements.txt` files to make installation easier:
@@ -122,6 +123,12 @@ Building the documentation locally
Building the documentation is the best way to check for errors and review your changes. Once `rstcheck` runs with no errors, navigate to ``ansible/docs/docsite`` and then build the page(s) you want to review.
+ .. note::
+
+ If building on macOS with Python 3.8 or later, you must use Sphinx >= 2.2.2. See `#6803 <https://github.com/sphinx-doc/sphinx/pull/6879>`_ for details.
+
+
+
Building a single rST page
^^^^^^^^^^^^^^^^^^^^^^^^^^
|
##### SUMMARY
<!--- Describe the change below, including rationale and design decisions -->
Fix a bug in the `make clean` target that would not clean up symlinks.
Add documentation describing how to set up the configuration before trying to generate docs. This is a new requirement since the documentation was split up.
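For context, the symlink-cleanup fix boils down to one missing path operand (presumably because BSD `find` on macOS errors without one, while GNU `find` defaults to the current directory):
```shell
find -type l -delete    # GNU-only; BSD find errors out, so symlinks survived
find . -type l -delete  # portable: deletes the generated symlinks
```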
<!--- HINT: Include "Fixes #nnn" if you are fixing an existing issue -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest -->
- Docs Pull Request
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below -->
`docs/docsite/rst/community/documentation_contributions.rst`
|
https://api.github.com/repos/ansible/ansible/pulls/74201
|
2021-04-08T20:56:02Z
|
2021-04-21T19:15:28Z
|
2021-04-21T19:15:28Z
|
2021-05-19T13:00:04Z
| 1,659
|
ansible/ansible
| 49,557
|
Dev
|
diff --git a/fooocus_version.py b/fooocus_version.py
index 709af32d5..e1578ebba 100644
--- a/fooocus_version.py
+++ b/fooocus_version.py
@@ -1 +1 @@
-version = '2.1.848'
+version = '2.1.849'
diff --git a/modules/patch.py b/modules/patch.py
index 6a7111a6f..66b243cb5 100644
--- a/modules/patch.py
+++ b/modules/patch.py
@@ -271,12 +271,11 @@ def sdxl_encode_adm_patched(self, **kwargs):
height = float(height) * positive_adm_scale
def embedder(number_list):
- h = [self.embedder(torch.tensor([x], dtype=torch.float32)) for x in number_list]
- h = torch.cat(h)
+ h = self.embedder(torch.tensor(number_list, dtype=torch.float32))
h = torch.flatten(h).unsqueeze(dim=0).repeat(clip_pooled.shape[0], 1)
return h
- width, height = round_to_64(width), round_to_64(height)
+ width, height = int(width), int(height)
target_width, target_height = round_to_64(target_width), round_to_64(target_height)
adm_emphasized = embedder([height, width, 0, 0, target_height, target_width])
diff --git a/modules/patch_clip.py b/modules/patch_clip.py
index 4a1e0307a..0ef22e8b9 100644
--- a/modules/patch_clip.py
+++ b/modules/patch_clip.py
@@ -63,172 +63,94 @@ def encode_token_weights_fooocus(self, token_weight_pairs):
return torch.cat(output, dim=-2).to(ldm_patched.modules.model_management.intermediate_device()), first_pooled
-class SDClipModelFooocus(torch.nn.Module, ldm_patched.modules.sd1_clip.ClipTokenWeightEncoder):
- """Uses the CLIP transformer encoder for text (from huggingface)"""
- LAYERS = [
- "last",
- "pooled",
- "hidden"
- ]
-
- def __init__(self,
- max_length=77,
- freeze=True,
- layer="last",
- layer_idx=None,
- textmodel_json_config=None,
- dtype=None,
- special_tokens=None,
- layer_norm_hidden_state=True,
- **kwargs):
- super().__init__()
- assert layer in self.LAYERS
-
- if special_tokens is None:
- special_tokens = {"start": 49406, "end": 49407, "pad": 49407}
-
- if textmodel_json_config is None:
- textmodel_json_config = os.path.join(os.path.dirname(os.path.realpath(ldm_patched.modules.sd1_clip.__file__)), "sd1_clip_config.json")
-
- config = CLIPTextConfig.from_json_file(textmodel_json_config)
- self.num_layers = config.num_hidden_layers
+def patched_SDClipModel__init__(self, max_length=77, freeze=True, layer="last", layer_idx=None,
+ textmodel_json_config=None, dtype=None, special_tokens=None,
+ layer_norm_hidden_state=True, **kwargs):
+ torch.nn.Module.__init__(self)
+ assert layer in self.LAYERS
+
+ if special_tokens is None:
+ special_tokens = {"start": 49406, "end": 49407, "pad": 49407}
+
+ if textmodel_json_config is None:
+ textmodel_json_config = os.path.join(os.path.dirname(os.path.realpath(ldm_patched.modules.sd1_clip.__file__)),
+ "sd1_clip_config.json")
+
+ config = CLIPTextConfig.from_json_file(textmodel_json_config)
+ self.num_layers = config.num_hidden_layers
+
+ with modeling_utils.no_init_weights():
+ self.transformer = CLIPTextModel(config)
+
+ if 'cuda' not in model_management.text_encoder_device().type:
+ dtype = torch.float32
+
+ if dtype is not None:
+ self.transformer.to(dtype)
+ self.transformer.text_model.embeddings.to(torch.float32)
+
+ if freeze:
+ self.freeze()
+
+ self.max_length = max_length
+ self.layer = layer
+ self.layer_idx = None
+ self.special_tokens = special_tokens
+ self.text_projection = torch.nn.Parameter(torch.eye(self.transformer.get_input_embeddings().weight.shape[1]))
+ self.logit_scale = torch.nn.Parameter(torch.tensor(4.6055))
+ self.enable_attention_masks = False
+
+ self.layer_norm_hidden_state = layer_norm_hidden_state
+ if layer == "hidden":
+ assert layer_idx is not None
+ assert abs(layer_idx) < self.num_layers
+ self.clip_layer(layer_idx)
+ self.layer_default = (self.layer, self.layer_idx)
+
+
+def patched_SDClipModel_forward(self, tokens):
+ backup_embeds = self.transformer.get_input_embeddings()
+ device = backup_embeds.weight.device
+ tokens = self.set_up_textual_embeddings(tokens, backup_embeds)
+ tokens = torch.LongTensor(tokens).to(device)
+
+ if self.transformer.text_model.final_layer_norm.weight.dtype != torch.float32:
+ precision_scope = torch.autocast
+ else:
+ precision_scope = lambda a, dtype: contextlib.nullcontext(a)
+
+ with precision_scope(model_management.get_autocast_device(device), dtype=torch.float32):
+ attention_mask = None
+ if self.enable_attention_masks:
+ attention_mask = torch.zeros_like(tokens)
+ max_token = self.transformer.get_input_embeddings().weight.shape[0] - 1
+ for x in range(attention_mask.shape[0]):
+ for y in range(attention_mask.shape[1]):
+ attention_mask[x, y] = 1
+ if tokens[x, y] == max_token:
+ break
+
+ outputs = self.transformer(input_ids=tokens, attention_mask=attention_mask,
+ output_hidden_states=self.layer == "hidden")
+ self.transformer.set_input_embeddings(backup_embeds)
+
+ if self.layer == "last":
+ z = outputs.last_hidden_state
+ elif self.layer == "pooled":
+ z = outputs.pooler_output[:, None, :]
+ else:
+ z = outputs.hidden_states[self.layer_idx]
+ if self.layer_norm_hidden_state:
+ z = self.transformer.text_model.final_layer_norm(z)
- with modeling_utils.no_init_weights():
- self.transformer = CLIPTextModel(config)
-
- if 'cuda' not in model_management.text_encoder_device().type:
- dtype = torch.float32
-
- if dtype is not None:
- self.transformer.to(dtype)
- self.transformer.text_model.embeddings.to(torch.float32)
-
- if freeze:
- self.freeze()
-
- self.max_length = max_length
- self.layer = layer
- self.layer_idx = None
- self.special_tokens = special_tokens
- self.text_projection = torch.nn.Parameter(torch.eye(self.transformer.get_input_embeddings().weight.shape[1]))
- self.logit_scale = torch.nn.Parameter(torch.tensor(4.6055))
- self.enable_attention_masks = False
-
- self.layer_norm_hidden_state = layer_norm_hidden_state
- if layer == "hidden":
- assert layer_idx is not None
- self.clip_layer(layer_idx)
- self.layer_default = (self.layer, self.layer_idx)
-
- def freeze(self):
- self.transformer = self.transformer.eval()
- # self.train = disabled_train
- for param in self.parameters():
- param.requires_grad = False
-
- def clip_layer(self, layer_idx):
- self.layer = "hidden"
- self.layer_idx = layer_idx
-
- def reset_clip_layer(self):
- self.layer = self.layer_default[0]
- self.layer_idx = self.layer_default[1]
-
- def set_up_textual_embeddings(self, tokens, current_embeds):
- out_tokens = []
- next_new_token = token_dict_size = current_embeds.weight.shape[0] - 1
- embedding_weights = []
-
- for x in tokens:
- tokens_temp = []
- for y in x:
- if isinstance(y, int):
- if y == token_dict_size: # EOS token
- y = -1
- tokens_temp += [y]
- else:
- if y.shape[0] == current_embeds.weight.shape[1]:
- embedding_weights += [y]
- tokens_temp += [next_new_token]
- next_new_token += 1
- else:
- print("WARNING: shape mismatch when trying to apply embedding, embedding will be ignored",
- y.shape[0], current_embeds.weight.shape[1])
- while len(tokens_temp) < len(x):
- tokens_temp += [self.special_tokens["pad"]]
- out_tokens += [tokens_temp]
-
- n = token_dict_size
- if len(embedding_weights) > 0:
- new_embedding = torch.nn.Embedding(next_new_token + 1, current_embeds.weight.shape[1],
- device=current_embeds.weight.device, dtype=current_embeds.weight.dtype)
- new_embedding.weight[:token_dict_size] = current_embeds.weight[:-1]
- for x in embedding_weights:
- new_embedding.weight[n] = x
- n += 1
- new_embedding.weight[n] = current_embeds.weight[-1] # EOS embedding
- self.transformer.set_input_embeddings(new_embedding)
-
- processed_tokens = []
- for x in out_tokens:
- processed_tokens += [
- list(map(lambda a: n if a == -1 else a, x))] # The EOS token should always be the largest one
-
- return processed_tokens
-
- def forward(self, tokens):
- backup_embeds = self.transformer.get_input_embeddings()
- device = backup_embeds.weight.device
- tokens = self.set_up_textual_embeddings(tokens, backup_embeds)
- tokens = torch.LongTensor(tokens).to(device)
-
- if self.transformer.text_model.final_layer_norm.weight.dtype != torch.float32:
- precision_scope = torch.autocast
+ if hasattr(outputs, "pooler_output"):
+ pooled_output = outputs.pooler_output.float()
else:
- precision_scope = lambda a, dtype: contextlib.nullcontext(a)
-
- with precision_scope(model_management.get_autocast_device(device), dtype=torch.float32):
- attention_mask = None
- if self.enable_attention_masks:
- attention_mask = torch.zeros_like(tokens)
- max_token = self.transformer.get_input_embeddings().weight.shape[0] - 1
- for x in range(attention_mask.shape[0]):
- for y in range(attention_mask.shape[1]):
- attention_mask[x, y] = 1
- if tokens[x, y] == max_token:
- break
-
- outputs = self.transformer(input_ids=tokens, attention_mask=attention_mask,
- output_hidden_states=self.layer == "hidden")
- self.transformer.set_input_embeddings(backup_embeds)
-
- if self.layer == "last":
- z = outputs.last_hidden_state
- elif self.layer == "pooled":
- z = outputs.pooler_output[:, None, :]
- else:
- z = outputs.hidden_states[self.layer_idx]
- if self.layer_norm_hidden_state:
- z = self.transformer.text_model.final_layer_norm(z)
-
- if hasattr(outputs, "pooler_output"):
- pooled_output = outputs.pooler_output.float()
- else:
- pooled_output = None
-
- if self.text_projection is not None and pooled_output is not None:
- pooled_output = pooled_output.float().to(self.text_projection.device) @ self.text_projection.float()
- return z.float(), pooled_output
-
- def encode(self, tokens):
- return self(tokens)
+ pooled_output = None
- def load_sd(self, sd):
- if "text_projection" in sd:
- self.text_projection[:] = sd.pop("text_projection")
- if "text_projection.weight" in sd:
- self.text_projection[:] = sd.pop("text_projection.weight").transpose(0, 1)
- return self.transformer.load_state_dict(sd, strict=False)
+ if self.text_projection is not None and pooled_output is not None:
+ pooled_output = pooled_output.float().to(self.text_projection.device) @ self.text_projection.float()
+ return z.float(), pooled_output
class ClipVisionModelFooocus:
@@ -262,6 +184,7 @@ def load_sd(self, sd):
def patch_all_clip():
ldm_patched.modules.sd1_clip.ClipTokenWeightEncoder.encode_token_weights = encode_token_weights_fooocus
- ldm_patched.modules.sd1_clip.SDClipModel = SDClipModelFooocus
+ ldm_patched.modules.sd1_clip.SDClipModel.__init__ = patched_SDClipModel__init__
+ ldm_patched.modules.sd1_clip.SDClipModel.forward = patched_SDClipModel_forward
ldm_patched.modules.clip_vision.ClipVisionModel = ClipVisionModelFooocus
return
|
https://api.github.com/repos/lllyasviel/Fooocus/pulls/1460
|
2023-12-17T03:52:32Z
|
2023-12-17T03:54:05Z
|
2023-12-17T03:54:05Z
|
2023-12-17T03:54:08Z
| 2,948
|
lllyasviel/Fooocus
| 7,123
|
|
[Serve] ServeHandle detects ActorError and drops replicas from target group
|
diff --git a/python/ray/serve/_private/router.py b/python/ray/serve/_private/router.py
index eb358030c1bd2..67d21707ecbdf 100644
--- a/python/ray/serve/_private/router.py
+++ b/python/ray/serve/_private/router.py
@@ -9,6 +9,7 @@
import ray
from ray.actor import ActorHandle
+from ray.exceptions import RayActorError, RayTaskError
from ray.util import metrics
from ray.serve._private.common import RunningReplicaInfo
@@ -87,6 +88,17 @@ def __init__(
{"deployment": self.deployment_name}
)
+ def _reset_replica_iterator(self):
+ """Reset the iterator used to load balance replicas.
+
+ This call is expected to be called after the replica membership has
+ been updated. It will shuffle the replicas randomly to avoid multiple
+ handle sending requests in the same order.
+ """
+ replicas = list(self.in_flight_queries.keys())
+ random.shuffle(replicas)
+ self.replica_iterator = itertools.cycle(replicas)
+
def update_running_replicas(self, running_replicas: List[RunningReplicaInfo]):
added, removed, _ = compute_iterable_delta(
self.in_flight_queries.keys(), running_replicas
@@ -97,14 +109,13 @@ def update_running_replicas(self, running_replicas: List[RunningReplicaInfo]):
for removed_replica in removed:
# Delete it directly because shutdown is processed by controller.
- del self.in_flight_queries[removed_replica]
+ # Replicas might already been deleted due to early detection of
+ # actor error.
+ self.in_flight_queries.pop(removed_replica, None)
if len(added) > 0 or len(removed) > 0:
- # Shuffle the keys to avoid synchronization across clients.
- replicas = list(self.in_flight_queries.keys())
- random.shuffle(replicas)
- self.replica_iterator = itertools.cycle(replicas)
logger.debug(f"ReplicaSet: +{len(added)}, -{len(removed)} replicas.")
+ self._reset_replica_iterator()
self.config_updated_event.set()
def _try_assign_replica(self, query: Query) -> Optional[ray.ObjectRef]:
@@ -160,9 +171,38 @@ def _all_query_refs(self):
def _drain_completed_object_refs(self) -> int:
refs = self._all_query_refs
+ # NOTE(simon): even though the timeout is 0, a large number of refs can still
+ # cause some blocking delay in the event loop. Consider moving this to async?
done, _ = ray.wait(refs, num_returns=len(refs), timeout=0)
- for replica_in_flight_queries in self.in_flight_queries.values():
- replica_in_flight_queries.difference_update(done)
+ replicas_to_remove = []
+ for replica_info, replica_in_flight_queries in self.in_flight_queries.items():
+ completed_queries = replica_in_flight_queries.intersection(done)
+ if len(completed_queries):
+ try:
+ # NOTE(simon): this ray.get call should be cheap because all these
+ # refs are ready as indicated by previous `ray.wait` call.
+ ray.get(list(completed_queries))
+ except RayActorError:
+ logger.debug(
+ f"Removing {replica_info.replica_tag} from replica set "
+ "because the actor exited."
+ )
+ replicas_to_remove.append(replica_info)
+ except RayTaskError:
+ # Ignore application error.
+ pass
+ except Exception:
+ logger.exception(
+ "Handle received unexpected error when processing request."
+ )
+
+ replica_in_flight_queries.difference_update(completed_queries)
+
+ if len(replicas_to_remove) > 0:
+ for replica_info in replicas_to_remove:
+ self.in_flight_queries.pop(replica_info, None)
+ self._reset_replica_iterator()
+
return len(done)
async def assign_replica(self, query: Query) -> ray.ObjectRef:
diff --git a/python/ray/serve/tests/test_standalone2.py b/python/ray/serve/tests/test_standalone2.py
index 0afff871135fc..32990be3a9da1 100644
--- a/python/ray/serve/tests/test_standalone2.py
+++ b/python/ray/serve/tests/test_standalone2.py
@@ -10,6 +10,7 @@
import requests
import ray
+import ray.actor
import ray._private.state
from ray import serve
from ray._private.test_utils import wait_for_condition
@@ -650,6 +651,39 @@ def test_shutdown_remote(start_and_shutdown_ray_cli_function):
os.unlink(shutdown_file.name)
+def test_handle_early_detect_failure(shutdown_ray):
+ """Check that handle can be notified about replicas failure and take them out of the replicas set."""
+ ray.init()
+ serve.start(detached=True)
+
+ @serve.deployment(num_replicas=2, max_concurrent_queries=1)
+ def f(do_crash: bool = False):
+ if do_crash:
+ os._exit(1)
+ return os.getpid()
+
+ handle = serve.run(f.bind())
+ pids = ray.get([handle.remote() for _ in range(2)])
+ assert len(set(pids)) == 2
+ assert len(handle.router._replica_set.in_flight_queries.keys()) == 2
+
+ client = get_global_client()
+ # Kill the controller so that the replicas membership won't be updated
+ # through controller health check + long polling.
+ ray.kill(client._controller, no_restart=True)
+
+ with pytest.raises(RayActorError):
+ ray.get(handle.remote(do_crash=True))
+
+ pids = ray.get([handle.remote() for _ in range(10)])
+ assert len(set(pids)) == 1
+ assert len(handle.router._replica_set.in_flight_queries.keys()) == 1
+
+ # Restart the controller, and then clean up all the replicas
+ serve.start(detached=True)
+ serve.shutdown()
+
+
def test_autoscaler_shutdown_node_http_everynode(
shutdown_ray, call_ray_stop_only # noqa: F811
):
|
<!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
When the ServeController crashes, replica membership updates are paused. This means the ServeHandle will continue sending requests to replicas that also crashed during this time. This PR shows how we can detect actor failures locally from within the handle and drop those replicas from the group it load-balances across.
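A minimal sketch of the detection pattern, with simplified names (the real implementation is in the router diff above):

```python
import ray
from ray.exceptions import RayActorError, RayTaskError

def drain_completed(in_flight):
    """in_flight: {replica_info: set of in-flight ObjectRefs}."""
    refs = [ref for queries in in_flight.values() for ref in queries]
    done, _ = ray.wait(refs, num_returns=len(refs), timeout=0)
    dead_replicas = []
    for replica, queries in in_flight.items():
        completed = queries.intersection(done)
        if completed:
            try:
                ray.get(list(completed))  # cheap: these refs are already ready
            except RayActorError:
                dead_replicas.append(replica)  # replica actor exited
            except RayTaskError:
                pass  # application error; the replica itself is healthy
        queries.difference_update(completed)
    for replica in dead_replicas:
        in_flight.pop(replica, None)  # stop load-balancing to dead replicas
    return len(done)
```

The key design choice is reusing results the handle already has in hand: since `ray.wait` reported the refs as done, the `ray.get` only inspects locally available results, so failure detection adds no extra round trips.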
<!-- Please give a short summary of the change and the problem this solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [x] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [x] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
|
https://api.github.com/repos/ray-project/ray/pulls/26685
|
2022-07-18T22:52:00Z
|
2022-07-29T16:50:17Z
|
2022-07-29T16:50:17Z
|
2022-08-03T19:03:16Z
| 1,384
|
ray-project/ray
| 19,409
|
Okx: fetchLedger unify marginMode
|
diff --git a/js/okx.js b/js/okx.js
index 8d338c0f96cb..e9c39a4680d8 100644
--- a/js/okx.js
+++ b/js/okx.js
@@ -3118,10 +3118,14 @@ module.exports = class okx extends Exchange {
* @method
* @name okx#fetchLedger
* @description fetch the history of changes, actions done by the user or operations that altered balance of the user
+ * @see https://www.okx.com/docs-v5/en/#rest-api-account-get-bills-details-last-7-days
+ * @see https://www.okx.com/docs-v5/en/#rest-api-account-get-bills-details-last-3-months
+ * @see https://www.okx.com/docs-v5/en/#rest-api-funding-asset-bills-details
* @param {string|undefined} code unified currency code, default is undefined
* @param {int|undefined} since timestamp in ms of the earliest ledger entry, default is undefined
* @param {int|undefined} limit max number of ledger entrys to return, default is undefined
* @param {object} params extra parameters specific to the okx api endpoint
+ * @param {string|undefined} params.marginMode 'cross' or 'isolated'
* @returns {object} a [ledger structure]{@link https://docs.ccxt.com/en/latest/manual.html#ledger-structure}
*/
await this.loadMarkets ();
@@ -3190,6 +3194,16 @@ module.exports = class okx extends Exchange {
// 'before': 'id', // return records newer than the requested bill id
// 'limit': 100, // default 100, max 100
};
+ let marginMode = undefined;
+ [ marginMode, params ] = this.handleMarginModeAndParams ('fetchLedger', params);
+ if (marginMode === undefined) {
+ marginMode = this.safeString (params, 'mgnMode');
+ }
+ if (method !== 'privateGetAssetBills') {
+ if (marginMode !== undefined) {
+ request['mgnMode'] = marginMode;
+ }
+ }
const [ type, query ] = this.handleMarketTypeAndParams ('fetchLedger', undefined, params);
if (type !== undefined) {
request['instType'] = this.convertToInstrumentType (type);
|
Added marginMode to fetchLedger:
### Cross:
```
node examples/js/cli okx fetchLedger undefined undefined undefined '{"marginMode":"cross"}'
okx.fetchLedger (, , , [object Object])
2022-07-08T06:22:36.364Z iteration 0 passed in 377 ms
id | timestamp | datetime | account | referenceId | referenceAccount | type | currency | symbol | amount | before | after | status | fee
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
464807812252741636 | 1657090818152 | 2022-07-06T07:00:18.152Z | | 464807812227575826 | | trade | USDT | ETH/USDT | 0 | | 40.92288696303541 | ok | {"cost":0,"currency":"USDT"}
464807812252741635 | 1657090818152 | 2022-07-06T07:00:18.152Z | | 464807812227575826 | | trade | ETH | ETH/USDT | 0 | | 0.000908031 | ok | {"cost":0.000004583,"currency":"ETH"}
464807894226219048 | 1657090837696 | 2022-07-06T07:00:37.696Z | | | | trade | ETH | ETH/USDT | 0 | | 0.000908031 | ok | {"cost":0,"currency":"ETH"}
...
20 objects
2022-07-08T06:22:36.364Z iteration 1 passed in 377 ms
```
### Isolated:
```
node examples/js/cli okx fetchLedger undefined undefined undefined '{"marginMode":"isolated"}'
okx.fetchLedger (, , , [object Object])
2022-07-08T06:23:25.253Z iteration 0 passed in 367 ms
id | timestamp | datetime | account | referenceId | referenceAccount | type | currency | symbol | amount | before | after | status | fee
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
464803126858657828 | 1657089701067 | 2022-07-06T06:41:41.067Z | | 464803126829297665 | | trade | USDT | ETH/USDT | 0 | | 40.89741100923541 | ok | {"cost":0,"currency":"USDT"}
464803126858657827 | 1657089701067 | 2022-07-06T06:41:41.067Z | | 464803126829297665 | | trade | ETH | ETH/USDT | -0.0001586 | | 0.00080773 | ok | {"cost":7.93e-7,"currency":"ETH"}
464803126862852101 | 1657089701068 | 2022-07-06T06:41:41.068Z | | 464803126829297665 | | trade | USDT | ETH/USDT | 0 | | 40.89741100923541 | ok | {"cost":0,"currency":"USDT"}
...
40 objects
2022-07-08T06:23:25.253Z iteration 1 passed in 367 ms
```
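The same parameter should also be usable from the transpiled Python build once this change lands (that assumption, and the placeholder credentials below, are illustrative only):

```python
import ccxt

# placeholder credentials; okx requires apiKey, secret and password
okx = ccxt.okx({'apiKey': '...', 'secret': '...', 'password': '...'})

# params['marginMode'] is forwarded as mgnMode for the account bills endpoints
entries = okx.fetch_ledger(code=None, since=None, limit=None,
                           params={'marginMode': 'isolated'})
print(len(entries))
```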
|
https://api.github.com/repos/ccxt/ccxt/pulls/14267
|
2022-07-08T06:31:38Z
|
2022-08-24T10:38:15Z
|
2022-08-24T10:38:14Z
|
2022-08-24T10:38:15Z
| 547
|
ccxt/ccxt
| 13,860
|
Last fix for Ray HP search
|
diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
index 97072e689eaba..350375fe00b7d 100755
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -843,6 +843,8 @@ def _objective(trial):
if getattr(self, "objective", None) is None:
metrics = self.evaluate()
self.objective = self.compute_objective(metrics)
+ if self.hp_search_backend == HPSearchBackend.RAY:
+ tune.report(objective=self.objective)
return self.objective
if self.hp_search_backend == HPSearchBackend.OPTUNA:
|
https://api.github.com/repos/huggingface/transformers/pulls/6691
|
2020-08-24T16:07:51Z
|
2020-08-24T16:15:01Z
|
2020-08-24T16:15:01Z
|
2020-08-24T16:15:02Z
| 154
|
huggingface/transformers
| 12,257
|
|
Fixed a bug with auto scroll in mitmweb
|
diff --git a/web/src/js/components/helpers/AutoScroll.tsx b/web/src/js/components/helpers/AutoScroll.tsx
index 517a93a2ca..d06a19804d 100644
--- a/web/src/js/components/helpers/AutoScroll.tsx
+++ b/web/src/js/components/helpers/AutoScroll.tsx
@@ -2,7 +2,8 @@ import React from "react";
import ReactDOM from "react-dom";
const symShouldStick = Symbol("shouldStick") as any;
-const isAtBottom = (v) => v.scrollTop + v.clientHeight === v.scrollHeight;
+const isAtBottom = (v) =>
+ Math.round(v.scrollTop) + v.clientHeight === v.scrollHeight;
export default (Component) =>
Object.assign(
|
#### Description
Fixed a bug that could prevent mitmweb autoscroll from working properly.
This bug was caused by the fact that clientHeight and scrollHeight returned integers, but scrollTop returned a decimal number.
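A plain-number illustration of the failing comparison (values made up; the actual fix is the `Math.round` in the diff above):

```python
# scrollTop can be fractional while clientHeight and scrollHeight are integers,
# so strict equality can fail even when the view is pinned to the bottom.
scroll_top, client_height, scroll_height = 99.6, 400, 500

print(scroll_top + client_height == scroll_height)         # False (499.6 != 500)
print(round(scroll_top) + client_height == scroll_height)  # True  (500 == 500)
```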
[https://drafts.csswg.org/cssom-view/#extension-to-the-element-interface](https://drafts.csswg.org/cssom-view/#extension-to-the-element-interface)

#### Checklist
- [ ] I have updated tests where applicable.
- [ ] I have added an entry to the CHANGELOG.
|
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/6038
|
2023-04-01T07:06:11Z
|
2023-04-01T11:29:44Z
|
2023-04-01T11:29:44Z
|
2023-04-01T11:29:45Z
| 163
|
mitmproxy/mitmproxy
| 27,947
|
Update LICENSE
|
diff --git a/LICENSE b/LICENSE
index 6ca207ef..4c52c468 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,122 +1,3 @@
Creative Commons Legal Code
-CC0 1.0 Universal
-
- CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
- LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN
- ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
- INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
- REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS
- PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM
- THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED
- HEREUNDER.
-
-Statement of Purpose
-
-The laws of most jurisdictions throughout the world automatically confer
-exclusive Copyright and Related Rights (defined below) upon the creator
-and subsequent owner(s) (each and all, an "owner") of an original work of
-authorship and/or a database (each, a "Work").
-
-Certain owners wish to permanently relinquish those rights to a Work for
-the purpose of contributing to a commons of creative, cultural and
-scientific works ("Commons") that the public can reliably and without fear
-of later claims of infringement build upon, modify, incorporate in other
-works, reuse and redistribute as freely as possible in any form whatsoever
-and for any purposes, including without limitation commercial purposes.
-These owners may contribute to the Commons to promote the ideal of a free
-culture and the further production of creative, cultural and scientific
-works, or to gain reputation or greater distribution for their Work in
-part through the use and efforts of others.
-
-For these and/or other purposes and motivations, and without any
-expectation of additional consideration or compensation, the person
-associating CC0 with a Work (the "Affirmer"), to the extent that he or she
-is an owner of Copyright and Related Rights in the Work, voluntarily
-elects to apply CC0 to the Work and publicly distribute the Work under its
-terms, with knowledge of his or her Copyright and Related Rights in the
-Work and the meaning and intended legal effect of CC0 on those rights.
-
-1. Copyright and Related Rights. A Work made available under CC0 may be
-protected by copyright and related or neighboring rights ("Copyright and
-Related Rights"). Copyright and Related Rights include, but are not
-limited to, the following:
-
- i. the right to reproduce, adapt, distribute, perform, display,
- communicate, and translate a Work;
- ii. moral rights retained by the original author(s) and/or performer(s);
-iii. publicity and privacy rights pertaining to a person's image or
- likeness depicted in a Work;
- iv. rights protecting against unfair competition in regards to a Work,
- subject to the limitations in paragraph 4(a), below;
- v. rights protecting the extraction, dissemination, use and reuse of data
- in a Work;
- vi. database rights (such as those arising under Directive 96/9/EC of the
- European Parliament and of the Council of 11 March 1996 on the legal
- protection of databases, and under any national implementation
- thereof, including any amended or successor version of such
- directive); and
-vii. other similar, equivalent or corresponding rights throughout the
- world based on applicable law or treaty, and any national
- implementations thereof.
-
-2. Waiver. To the greatest extent permitted by, but not in contravention
-of, applicable law, Affirmer hereby overtly, fully, permanently,
-irrevocably and unconditionally waives, abandons, and surrenders all of
-Affirmer's Copyright and Related Rights and associated claims and causes
-of action, whether now known or unknown (including existing as well as
-future claims and causes of action), in the Work (i) in all territories
-worldwide, (ii) for the maximum duration provided by applicable law or
-treaty (including future time extensions), (iii) in any current or future
-medium and for any number of copies, and (iv) for any purpose whatsoever,
-including without limitation commercial, advertising or promotional
-purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each
-member of the public at large and to the detriment of Affirmer's heirs and
-successors, fully intending that such Waiver shall not be subject to
-revocation, rescission, cancellation, termination, or any other legal or
-equitable action to disrupt the quiet enjoyment of the Work by the public
-as contemplated by Affirmer's express Statement of Purpose.
-
-3. Public License Fallback. Should any part of the Waiver for any reason
-be judged legally invalid or ineffective under applicable law, then the
-Waiver shall be preserved to the maximum extent permitted taking into
-account Affirmer's express Statement of Purpose. In addition, to the
-extent the Waiver is so judged Affirmer hereby grants to each affected
-person a royalty-free, non transferable, non sublicensable, non exclusive,
-irrevocable and unconditional license to exercise Affirmer's Copyright and
-Related Rights in the Work (i) in all territories worldwide, (ii) for the
-maximum duration provided by applicable law or treaty (including future
-time extensions), (iii) in any current or future medium and for any number
-of copies, and (iv) for any purpose whatsoever, including without
-limitation commercial, advertising or promotional purposes (the
-"License"). The License shall be deemed effective as of the date CC0 was
-applied by Affirmer to the Work. Should any part of the License for any
-reason be judged legally invalid or ineffective under applicable law, such
-partial invalidity or ineffectiveness shall not invalidate the remainder
-of the License, and in such case Affirmer hereby affirms that he or she
-will not (i) exercise any of his or her remaining Copyright and Related
-Rights in the Work or (ii) assert any associated claims and causes of
-action with respect to the Work, in either case contrary to Affirmer's
-express Statement of Purpose.
-
-4. Limitations and Disclaimers.
-
- a. No trademark or patent rights held by Affirmer are waived, abandoned,
- surrendered, licensed or otherwise affected by this document.
- b. Affirmer offers the Work as-is and makes no representations or
- warranties of any kind concerning the Work, express, implied,
- statutory or otherwise, including without limitation warranties of
- title, merchantability, fitness for a particular purpose, non
- infringement, or the absence of latent or other defects, accuracy, or
- the present or absence of errors, whether or not discoverable, all to
- the greatest extent permissible under applicable law.
- c. Affirmer disclaims responsibility for clearing rights of other persons
- that may apply to the Work or any use thereof, including without
- limitation any person's Copyright and Related Rights in the Work.
- Further, Affirmer disclaims responsibility for obtaining any necessary
- consents, permissions or other rights required for any use of the
- Work.
- d. Affirmer understands and acknowledges that Creative Commons is not a
- party to this document and has no duty or obligation with respect to
- this CC0 or use of the Work.
-
+No need
|
No need for the licence
|
https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/902
|
2022-12-06T11:04:58Z
|
2022-12-06T17:39:47Z
|
2022-12-06T17:39:47Z
|
2022-12-06T18:26:41Z
| 1,644
|
josephmisiti/awesome-machine-learning
| 52,437
|
VW MQB: Add FW for 2020 Škoda Scala
|
diff --git a/selfdrive/car/volkswagen/values.py b/selfdrive/car/volkswagen/values.py
index f627e517be366e..6ae4969cc0ab12 100755
--- a/selfdrive/car/volkswagen/values.py
+++ b/selfdrive/car/volkswagen/values.py
@@ -1079,6 +1079,7 @@ def init_make(self, CP: car.CarParams):
b'\xf1\x870CW300050 \xf1\x891709',
],
(Ecu.srs, 0x715, None): [
+ b'\xf1\x872Q0959655AJ\xf1\x890250\xf1\x82\x1211110411110411--04040404131111112H14',
b'\xf1\x872Q0959655AM\xf1\x890351\xf1\x82\022111104111104112104040404111111112H14',
],
(Ecu.eps, 0x712, None): [
|
Add missing SRS firmware for the 2020 Škoda Scala.
**Route:** `b1293b9f37f88166|2023-01-14--19-30-14`
Thanks to community Scala owner WestN!
|
https://api.github.com/repos/commaai/openpilot/pulls/26976
|
2023-01-15T22:44:38Z
|
2023-01-16T01:16:45Z
|
2023-01-16T01:16:45Z
|
2023-01-16T01:55:22Z
| 216
|
commaai/openpilot
| 9,061
|
Add Max Fenwick Tree
|
diff --git a/data_structures/binary_tree/maximum_fenwick_tree.py b/data_structures/binary_tree/maximum_fenwick_tree.py
new file mode 100644
index 000000000000..e90bd634d51c
--- /dev/null
+++ b/data_structures/binary_tree/maximum_fenwick_tree.py
@@ -0,0 +1,102 @@
+class MaxFenwickTree:
+ """
+ Maximum Fenwick Tree
+
+ More info: https://cp-algorithms.com/data_structures/fenwick.html
+ ---------
+ >>> ft = MaxFenwickTree(5)
+ >>> ft.query(0, 5)
+ 0
+ >>> ft.update(4, 100)
+ >>> ft.query(0, 5)
+ 100
+ >>> ft.update(4, 0)
+ >>> ft.update(2, 20)
+ >>> ft.query(0, 5)
+ 20
+ >>> ft.update(4, 10)
+ >>> ft.query(2, 5)
+ 10
+ >>> ft.query(1, 5)
+ 20
+ >>> ft.update(2, 0)
+ >>> ft.query(0, 5)
+ 10
+ >>> ft = MaxFenwickTree(10000)
+ >>> ft.update(255, 30)
+ >>> ft.query(0, 10000)
+ 30
+ """
+
+ def __init__(self, size: int) -> None:
+ """
+ Create empty Maximum Fenwick Tree with specified size
+
+ Parameters:
+ size: size of Array
+
+ Returns:
+ None
+ """
+ self.size = size
+ self.arr = [0] * size
+ self.tree = [0] * size
+
+ @staticmethod
+ def get_next(index: int) -> int:
+ """
+ Get next index in O(1)
+ """
+ return index + (index & -index)
+
+ @staticmethod
+ def get_prev(index: int) -> int:
+ """
+ Get previous index in O(1)
+ """
+ return index - (index & -index)
+
+ def update(self, index: int, value: int) -> None:
+ """
+ Set index to value in O(lg^2 N)
+
+ Parameters:
+ index: index to update
+ value: value to set
+
+ Returns:
+ None
+ """
+ self.arr[index] = value
+ while index < self.size:
+ self.tree[index] = max(value, self.query(self.get_prev(index), index))
+ index = self.get_next(index)
+
+ def query(self, left: int, right: int) -> int:
+ """
+ Answer the query of maximum range [l, r) in O(lg^2 N)
+
+ Parameters:
+ left: left index of query range (inclusive)
+ right: right index of query range (exclusive)
+
+ Returns:
+ Maximum value of range [left, right)
+ """
+ right -= 1 # Because of right is exclusive
+ result = 0
+ while left < right:
+ current_left = self.get_prev(right)
+ if left < current_left:
+ result = max(result, self.tree[right])
+ right = current_left
+ else:
+ result = max(result, self.arr[right])
+ right -= 1
+ return result
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
|
### Describe your change:
I created MaxFenwickTreeOneBasedIndexing which solves this problem:
- Update A[i] = value
- Query Maximum of Range (l, r]
This algorithm uses a classic Fenwick tree with some changes to the key ideas.
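A quick usage sketch (assuming the file path added in this PR; indices are 0-based and query ranges are half-open, as in the doctests):

```python
from data_structures.binary_tree.maximum_fenwick_tree import MaxFenwickTree

ft = MaxFenwickTree(8)
ft.update(3, 42)       # set A[3] = 42 in O(log^2 N)
ft.update(5, 7)
print(ft.query(0, 8))  # max over [0, 8) -> 42
print(ft.query(4, 8))  # max over [4, 8) -> 7
ft.update(3, 0)
print(ft.query(0, 8))  # -> 7
```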
* [x] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
|
https://api.github.com/repos/TheAlgorithms/Python/pulls/6298
|
2022-08-06T13:11:04Z
|
2022-08-12T09:12:58Z
|
2022-08-12T09:12:58Z
|
2022-08-12T09:12:58Z
| 831
|
TheAlgorithms/Python
| 29,562
|
Set recreate = true in tox.ini
|
diff --git a/tox.ini b/tox.ini
index 3a31558d814..6d9814192c0 100644
--- a/tox.ini
+++ b/tox.ini
@@ -67,6 +67,9 @@ passenv =
commands =
{[base]install_and_test} {[base]all_packages}
python tests/lock_test.py
+# We always recreate the virtual environment to avoid problems like
+# https://github.com/certbot/certbot/issues/7745.
+recreate = true
setenv =
PYTEST_ADDOPTS = {env:PYTEST_ADDOPTS:--numprocesses auto}
PYTHONHASHSEED = 0
|
Fixes https://github.com/certbot/certbot/issues/7745.
|
https://api.github.com/repos/certbot/certbot/pulls/7746
|
2020-02-05T20:00:48Z
|
2020-02-05T22:37:40Z
|
2020-02-05T22:37:39Z
|
2020-02-05T22:37:43Z
| 150
|
certbot/certbot
| 3,777
|
Live language feedback
|
diff --git a/website/package-lock.json b/website/package-lock.json
index fe03e14575..3b8220e509 100644
--- a/website/package-lock.json
+++ b/website/package-lock.json
@@ -32,6 +32,7 @@
"eslint-plugin-simple-import-sort": "^8.0.0",
"focus-visible": "^5.2.0",
"framer-motion": "^6.5.1",
+ "lande": "^1.0.10",
"lucide-react": "^0.105.0",
"next": "13.0.6",
"next-auth": "^4.18.6",
@@ -26611,6 +26612,14 @@
"node": ">= 8"
}
},
+ "node_modules/lande": {
+ "version": "1.0.10",
+ "resolved": "https://registry.npmjs.org/lande/-/lande-1.0.10.tgz",
+ "integrity": "sha512-yT52DQh+UV2pEp08jOYrA4drDv0DbjpiRyZYgl25ak9G2cVR2AimzrqkYQWrD9a7Ud+qkAcaiDDoNH9DXfHPmw==",
+ "dependencies": {
+ "toygrad": "^2.6.0"
+ }
+ },
"node_modules/language-subtag-registry": {
"version": "0.3.22",
"resolved": "https://registry.npmjs.org/language-subtag-registry/-/language-subtag-registry-0.3.22.tgz",
@@ -36420,6 +36429,11 @@
"node": ">=0.8"
}
},
+ "node_modules/toygrad": {
+ "version": "2.6.0",
+ "resolved": "https://registry.npmjs.org/toygrad/-/toygrad-2.6.0.tgz",
+ "integrity": "sha512-g4zBmlSbvzOE5FOILxYkAybTSxijKLkj1WoNqVGnbMcWDyj4wWQ+eYSr3ik7XOpIgMq/7eBcPRTJX3DM2E0YMg=="
+ },
"node_modules/tr46": {
"version": "0.0.3",
"resolved": "https://registry.npmjs.org/tr46/-/tr46-0.0.3.tgz",
@@ -58708,6 +58722,14 @@
"integrity": "sha512-pJiBpiXMbt7dkzXe8Ghj/u4FfXOOa98fPW+bihOJ4SjnoijweJrNThJfd3ifXpXhREjpoF2mZVH1GfS9LV3kHQ==",
"dev": true
},
+ "lande": {
+ "version": "1.0.10",
+ "resolved": "https://registry.npmjs.org/lande/-/lande-1.0.10.tgz",
+ "integrity": "sha512-yT52DQh+UV2pEp08jOYrA4drDv0DbjpiRyZYgl25ak9G2cVR2AimzrqkYQWrD9a7Ud+qkAcaiDDoNH9DXfHPmw==",
+ "requires": {
+ "toygrad": "^2.6.0"
+ }
+ },
"language-subtag-registry": {
"version": "0.3.22",
"resolved": "https://registry.npmjs.org/language-subtag-registry/-/language-subtag-registry-0.3.22.tgz",
@@ -65936,6 +65958,11 @@
"punycode": "^2.1.1"
}
},
+ "toygrad": {
+ "version": "2.6.0",
+ "resolved": "https://registry.npmjs.org/toygrad/-/toygrad-2.6.0.tgz",
+ "integrity": "sha512-g4zBmlSbvzOE5FOILxYkAybTSxijKLkj1WoNqVGnbMcWDyj4wWQ+eYSr3ik7XOpIgMq/7eBcPRTJX3DM2E0YMg=="
+ },
"tr46": {
"version": "0.0.3",
"resolved": "https://registry.npmjs.org/tr46/-/tr46-0.0.3.tgz",
diff --git a/website/package.json b/website/package.json
index c68b15371b..7ab3f3d850 100644
--- a/website/package.json
+++ b/website/package.json
@@ -50,6 +50,7 @@
"eslint-plugin-simple-import-sort": "^8.0.0",
"focus-visible": "^5.2.0",
"framer-motion": "^6.5.1",
+ "lande": "^1.0.10",
"lucide-react": "^0.105.0",
"next": "13.0.6",
"next-auth": "^4.18.6",
diff --git a/website/src/components/Survey/TrackedTextarea.tsx b/website/src/components/Survey/TrackedTextarea.tsx
index f408e168a0..196d2f1a60 100644
--- a/website/src/components/Survey/TrackedTextarea.tsx
+++ b/website/src/components/Survey/TrackedTextarea.tsx
@@ -1,4 +1,23 @@
-import { Progress, Stack, Textarea, TextareaProps, useColorModeValue } from "@chakra-ui/react";
+import {} from "@chakra-ui/react";
+import lande from "lande";
+import { LanguageAbbreviations } from "src/lib/iso6393";
+import { useCookies } from "react-cookie";
+import React from "react";
+import {
+ Progress,
+ Stack,
+ Textarea,
+ TextareaProps,
+ useColorModeValue,
+ Button,
+ Modal,
+ ModalBody,
+ ModalCloseButton,
+ ModalContent,
+ ModalHeader,
+ ModalOverlay,
+ useDisclosure,
+} from "@chakra-ui/react";
interface TrackedTextboxProps {
text: string;
@@ -11,10 +30,55 @@ interface TrackedTextboxProps {
onTextChange: (event: React.ChangeEvent<HTMLTextAreaElement>) => void;
}
+const killEvent = (e) => e.stopPropagation();
+
export const TrackedTextarea = (props: TrackedTextboxProps) => {
+ const [wordLimitForLangDetection, setWordLimitForLangDetection] = React.useState(10);
const backgroundColor = useColorModeValue("gray.100", "gray.900");
-
+ const [cookies] = useCookies(["NEXT_LOCALE"]);
const wordCount = (props.text.match(/\w+/g) || []).length;
+ const { isOpen, onOpen, onClose } = useDisclosure();
+ const currentLanguage = cookies["NEXT_LOCALE"];
+
+ const closeTemporaryIgnoreLanguageDetection = () => {
+ setWordLimitForLangDetection(2 * wordCount);
+ onClose();
+ };
+
+ console.log("", wordCount, wordLimitForLangDetection);
+ if (wordCount > wordLimitForLangDetection) {
+ let mostProbableLanguage;
+ try {
+ mostProbableLanguage = LanguageAbbreviations[lande(props.text)[0][0]];
+ } catch (error) {
+ mostProbableLanguage = "";
+ }
+
+ /*const mostProbableLanguage = lande(props.text);*/
+ if (mostProbableLanguage !== currentLanguage) {
+ setTimeout(() => {
+ onOpen();
+ }, 200);
+
+ return (
+ <>
+ <Modal isOpen={isOpen} onClose={closeTemporaryIgnoreLanguageDetection} size="xl" scrollBehavior={"inside"}>
+ {/* we kill the event here to disable drag and drop, since it is in the same container */}
+ <ModalOverlay onMouseDown={killEvent}>
+ <ModalContent alignItems="center">
+ <ModalHeader>Switch Language?</ModalHeader>
+ <ModalCloseButton />
+ <ModalBody>
+ Do you want to switch language? The detected language is <b>{mostProbableLanguage}</b>, whereas your
+ chosen language is <b>{currentLanguage}</b>. The language can be changed on the top right.
+ </ModalBody>
+ </ModalContent>
+ </ModalOverlay>
+ </Modal>
+ </>
+ );
+ }
+ }
let progressColor: string;
switch (true) {
diff --git a/website/src/lib/iso6393.ts b/website/src/lib/iso6393.ts
new file mode 100644
index 0000000000..3720150a0a
--- /dev/null
+++ b/website/src/lib/iso6393.ts
@@ -0,0 +1,187 @@
+export const LanguageAbbreviations = {
+ aar: "aa",
+ abk: "ab",
+ afr: "af",
+ aka: "ak",
+ alb: "sq",
+ amh: "am",
+ ara: "ar",
+ arg: "an",
+ hye: "hy",
+ asm: "as",
+ ava: "av",
+ ave: "ae",
+ aym: "ay",
+ aze: "az",
+ bak: "ba",
+ bam: "bm",
+ eus: "eu",
+ bel: "be",
+ ben: "bn",
+ bih: "bh",
+ bis: "bi",
+ tib: "bo",
+ bos: "bs",
+ bre: "br",
+ bul: "bg",
+ mya: "my",
+ cat: "ca",
+ cze: "cs",
+ cha: "ch",
+ che: "ce",
+ zho: "zh",
+ chu: "cu",
+ chv: "cv",
+ cor: "kw",
+ cos: "co",
+ cre: "cr",
+ wel: "cy",
+ dan: "da",
+ ger: "de",
+ deu: "de",
+ div: "dv",
+ dut: "nl",
+ dzo: "dz",
+ gre: "el",
+ eng: "en",
+ epo: "eo",
+ est: "et",
+ ewe: "ee",
+ fao: "fo",
+ per: "fa",
+ fij: "fj",
+ fin: "fi",
+ fra: "fr",
+ fry: "fy",
+ ful: "ff",
+ geo: "ka",
+ gla: "gd",
+ gle: "ga",
+ glg: "gl",
+ glv: "gv",
+ grn: "gn",
+ guj: "gu",
+ hat: "ht",
+ hau: "ha",
+ heb: "he",
+ her: "hz",
+ hin: "hi",
+ hmo: "ho",
+ hrv: "hr",
+ hun: "hu",
+ ibo: "ig",
+ ice: "is",
+ ido: "io",
+ iii: "ii",
+ iku: "iu",
+ ile: "ie",
+ ina: "ia",
+ ind: "id",
+ ipk: "ik",
+ ita: "it",
+ jav: "jv",
+ jpn: "ja",
+ kal: "kl",
+ kan: "kn",
+ kas: "ks",
+ kau: "kr",
+ kaz: "kk",
+ khm: "km",
+ kik: "ki",
+ kin: "rw",
+ kir: "ky",
+ kom: "kv",
+ kon: "kg",
+ kor: "ko",
+ kua: "kj",
+ kur: "ku",
+ lao: "lo",
+ lat: "la",
+ lav: "lv",
+ lim: "li",
+ lin: "ln",
+ lit: "lt",
+ ltz: "lb",
+ lub: "lu",
+ lug: "lg",
+ mkd: "mk",
+ mah: "mh",
+ mal: "ml",
+ mri: "mi",
+ mar: "mr",
+ may: "ms",
+ mlg: "mg",
+ mlt: "mt",
+ mon: "mn",
+ nau: "na",
+ nav: "nv",
+ nbl: "nr",
+ nde: "nd",
+ ndo: "ng",
+ nep: "ne",
+ nno: "nn",
+ nob: "nb",
+ nor: "no",
+ nya: "ny",
+ oci: "oc",
+ oji: "oj",
+ ori: "or",
+ orm: "om",
+ oss: "os",
+ pan: "pa",
+ pli: "pi",
+ pol: "pl",
+ por: "pt",
+ pus: "ps",
+ que: "qu",
+ roh: "rm",
+ ron: "ro",
+ run: "rn",
+ rus: "ru",
+ sag: "sg",
+ san: "sa",
+ sin: "si",
+ slk: "sk",
+ slv: "sl",
+ sme: "se",
+ smo: "sm",
+ sna: "sn",
+ snd: "sd",
+ som: "so",
+ sot: "st",
+ spa: "es",
+ srd: "sc",
+ srp: "sr",
+ ssw: "ss",
+ sun: "su",
+ swa: "sw",
+ swe: "sv",
+ tah: "ty",
+ tam: "ta",
+ tat: "tt",
+ tel: "te",
+ tgk: "tg",
+ tgl: "tl",
+ tha: "th",
+ tir: "ti",
+ ton: "to",
+ tsn: "tn",
+ tso: "ts",
+ tuk: "tk",
+ tur: "tr",
+ twi: "tw",
+ uig: "ug",
+ ukr: "uk",
+ urd: "ur",
+ uzb: "uz",
+ ven: "ve",
+ vie: "vi",
+ vol: "vo",
+ wln: "wa",
+ wol: "wo",
+ xho: "xh",
+ yid: "yi",
+ yor: "yo",
+ zha: "za",
+ zul: "zu",
+};
|
This PR aims to resolve issue #997. It uses the `lande` JS library to detect the (most probable) language once 10 words have been entered in a TrackedTextarea, and shows a Modal (like the one in FullText view) reminding the Prompter to switch to the chosen language. After the dialog is closed, the word limit for detection is doubled (stored in React state); the prompter can also disable language detection entirely, stored as a cookie.
I also had to include another file for ISO-639-3 language code conversion.
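For readers skimming the diff, a conceptual sketch of the prompting logic in Python (the real code is TypeScript/React; the names here are made up):

```python
def maybe_prompt_language_switch(text, ui_language, word_limit, detect):
    """Return (new_word_limit, warning_or_None)."""
    word_count = len(text.split())
    if word_count <= word_limit:
        return word_limit, None
    detected = detect(text)  # lande in the PR, mapped via the ISO-639-3 table
    if detected and detected != ui_language:
        # After the user dismisses the dialog, the limit is doubled so they
        # are not nagged again until the text has grown substantially.
        return 2 * word_count, f"Detected '{detected}', UI language is '{ui_language}'"
    return word_limit, None
```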
|
https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/1071
|
2023-02-02T12:48:32Z
|
2023-02-05T06:10:54Z
|
2023-02-05T06:10:54Z
|
2023-02-05T06:10:54Z
| 3,461
|
LAION-AI/Open-Assistant
| 36,933
|
[seq2seq] correctly handle mt5
|
diff --git a/examples/seq2seq/utils.py b/examples/seq2seq/utils.py
index 8b24bfdadcf6f..303b89f78192d 100644
--- a/examples/seq2seq/utils.py
+++ b/examples/seq2seq/utils.py
@@ -563,7 +563,7 @@ def freeze_embeds(model):
"""Freeze token embeddings and positional embeddings for bart, just token embeddings for t5."""
model_type = model.config.model_type
- if model_type == "t5":
+ if model_type in ["t5", "mt5"]:
freeze_params(model.shared)
for d in [model.encoder, model.decoder]:
freeze_params(d.embed_tokens)
|
This PR fixes `seq2seq/utils.py` to handle `mt5` like it does `t5`.
Ideally there should be a test, which would require creating a tiny model for mt5, but I'm being told this code is going away anyway, so there is no point investing energy into it.
Fixes: https://github.com/huggingface/transformers/issues/9865
@patil-suraj, @sgugger
|
https://api.github.com/repos/huggingface/transformers/pulls/9879
|
2021-01-29T00:05:29Z
|
2021-01-29T16:11:22Z
|
2021-01-29T16:11:22Z
|
2021-01-29T16:11:27Z
| 155
|
huggingface/transformers
| 12,910
|
Concise `TransformerBlock()`
|
diff --git a/models/common.py b/models/common.py
index 4211db406c3..96d63a07a1b 100644
--- a/models/common.py
+++ b/models/common.py
@@ -77,18 +77,8 @@ def forward(self, x):
if self.conv is not None:
x = self.conv(x)
b, _, w, h = x.shape
- p = x.flatten(2)
- p = p.unsqueeze(0)
- p = p.transpose(0, 3)
- p = p.squeeze(3)
- e = self.linear(p)
- x = p + e
-
- x = self.tr(x)
- x = x.unsqueeze(3)
- x = x.transpose(0, 3)
- x = x.reshape(b, self.c2, w, h)
- return x
+ p = x.flatten(2).unsqueeze(0).transpose(0, 3).squeeze(3)
+ return self.tr(p + self.linear(p)).unsqueeze(3).transpose(0, 3).reshape(b, self.c2, w, h)
class Bottleneck(nn.Module):
|
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)</sub>
### 🌟 Summary
Simplified tensor operations in forward method of a model component.
### 📊 Key Changes
- Streamlined the forward pass code by chaining tensor operations (equivalence sketch below).
- Removed intermediate variables and redundant reshaping steps.
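A hedged equivalence check (illustrative shapes, not from the repository) showing the chained form matches the original step-by-step form:

```python
import torch

x = torch.randn(2, 8, 4, 4)  # stand-in for (batch, channels, w, h)

# original step-by-step form
p = x.flatten(2)
p = p.unsqueeze(0)
p = p.transpose(0, 3)
p = p.squeeze(3)

# chained form from this PR
q = x.flatten(2).unsqueeze(0).transpose(0, 3).squeeze(3)

assert torch.equal(p, q)  # identical ops, identical result
```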
### 🎯 Purpose & Impact
- ✨ **Clarity**: The PR makes the code more readable and concise, improving maintainability.
- ⚡ **Efficiency**: Chaining operations may offer slight runtime performance improvements.
- 🧠 **Understandability**: Reduces complexity for developers trying to understand or modify the forward pass.
|
https://api.github.com/repos/ultralytics/yolov5/pulls/3821
|
2021-06-29T11:58:57Z
|
2021-06-29T14:03:10Z
|
2021-06-29T14:03:10Z
|
2024-01-19T17:14:40Z
| 258
|
ultralytics/yolov5
| 25,165
|
Improve Project Euler problem 058 solution 1
|
diff --git a/project_euler/problem_058/sol1.py b/project_euler/problem_058/sol1.py
index d3b15157fbbd..ed407edf7158 100644
--- a/project_euler/problem_058/sol1.py
+++ b/project_euler/problem_058/sol1.py
@@ -33,11 +33,12 @@
count of current primes.
"""
+from math import isqrt
-def isprime(d: int) -> int:
+def isprime(number: int) -> int:
"""
- returns whether the given digit is prime or not
+ returns whether the given number is prime or not
>>> isprime(1)
0
>>> isprime(17)
@@ -45,14 +46,15 @@ def isprime(d: int) -> int:
>>> isprime(10000)
0
"""
- if d == 1:
+ if number == 1:
return 0
- i = 2
- while i * i <= d:
- if d % i == 0:
+ if number % 2 == 0 and number > 2:
+ return 0
+
+ for i in range(3, isqrt(number) + 1, 2):
+ if number % i == 0:
return 0
- i = i + 1
return 1
|
### **Describe your change:**
Improve Project Euler problem 058 solution 1 - one of the top 7 slowest solutions in the Travis CI logs (under `slowest 10 durations`: `11.38s call scripts/validate_solutions.py::test_project_euler[problem_058/sol1.py]`):
* Fix a typo
* Speed up the solution (locally 4+ times faster - from 8+ seconds to ~2 seconds; see the benchmark sketch after this list)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
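A hedged micro-benchmark sketch for the improved check (the timings quoted above are the author's; this harness is illustrative, not part of the repository):

```python
import timeit
from math import isqrt

def isprime(number: int) -> int:  # the improved check from this PR's diff
    if number == 1:
        return 0
    if number % 2 == 0 and number > 2:
        return 0
    for i in range(3, isqrt(number) + 1, 2):
        if number % i == 0:
            return 0
    return 1

n = 1_000_003  # a prime, so trial division runs all the way to isqrt(n)
print(timeit.timeit(lambda: isprime(n), number=10_000))
```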
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
|
https://api.github.com/repos/TheAlgorithms/Python/pulls/4782
|
2021-09-25T17:55:45Z
|
2021-10-25T08:07:10Z
|
2021-10-25T08:07:10Z
|
2021-10-25T12:44:35Z
| 317
|
TheAlgorithms/Python
| 29,717
|
Update microlink information
|
diff --git a/README.md b/README.md
index cf3d8f210d..a866d00bee 100644
--- a/README.md
+++ b/README.md
@@ -516,7 +516,7 @@ API | Description | Auth | HTTPS | CORS | Link |
| INQStats | Open demographic data such as population, life expectancy, migration rate, etc | `apiKey` | No | Unknown | [Go!](http://blog.inqubu.com/inqstats-open-api-published-to-get-demographic-data) |
| LinkPreview | Get JSON formatted summary with title, description and preview image for any requested URL | `apiKey` | Yes | Yes | [Go!](https://www.linkpreview.net) |
| Marijuana Strains | Marijuana strains, races, flavors, and effects | `apiKey` | No | Unknown | [Go!](http://strains.evanbusse.com/) |
-| Microlink.io | Turns any link into information | No | Yes | Unknown | [Go!](https://docs.microlink.io) |
+| Microlink.io | Extract structured data from any website | No | Yes | Yes | [Go!](https://microlink.io) |
| Quandl | Stock Market Data | No | Yes | Unknown | [Go!](https://www.quandl.com/) |
| Scoop.it | Content Curation Service | `apiKey` | No | Unknown | [Go!](http://www.scoop.it/dev) |
| Teleport | Quality of Life Data | No | Yes | Unknown | [Go!](https://developers.teleport.org/) |
|
https://api.github.com/repos/public-apis/public-apis/pulls/698
|
2018-06-13T09:33:32Z
|
2018-06-26T22:25:43Z
|
2018-06-26T22:25:43Z
|
2018-06-26T23:14:27Z
| 346
|
public-apis/public-apis
| 35,904
|
|
Temporarily disable extremely flaky tests
|
diff --git a/e2e/specs/st_checkbox.spec.js b/e2e/specs/st_checkbox.spec.js
index b3776f96364b..5c9dea74c19a 100644
--- a/e2e/specs/st_checkbox.spec.js
+++ b/e2e/specs/st_checkbox.spec.js
@@ -34,7 +34,7 @@ describe("st.checkbox", () => {
// We have to manually use the changeTheme command in the next two tests
// since changing the theme between snapshots using the matchThemedSnapshots
// command will unfocus the widget we're trying to take a snapshot of.
- it("shows focused widget correctly in dark mode", () => {
+ xit("shows focused widget correctly in dark mode", () => {
cy.changeTheme("Dark");
cy.get(".stCheckbox")
@@ -55,7 +55,7 @@ describe("st.checkbox", () => {
});
});
- it("shows focused widget correctly in light mode", () => {
+ xit("shows focused widget correctly in light mode", () => {
cy.changeTheme("Light");
cy.get(".stCheckbox")
diff --git a/e2e/specs/st_radio.spec.js b/e2e/specs/st_radio.spec.js
index b96d65119397..531423ae2fd6 100644
--- a/e2e/specs/st_radio.spec.js
+++ b/e2e/specs/st_radio.spec.js
@@ -34,7 +34,7 @@ describe("st.radio", () => {
// We have to manually use the changeTheme command in the next two tests
// since changing the theme between snapshots using the matchThemedSnapshots
// command will unfocus the widget we're trying to take a snapshot of.
- it("shows focused widget correctly in dark mode", () => {
+ xit("shows focused widget correctly in dark mode", () => {
cy.changeTheme("Dark");
cy.get(".stRadio")
@@ -56,7 +56,7 @@ describe("st.radio", () => {
});
});
- it("shows focused widget correctly in light mode", () => {
+ xit("shows focused widget correctly in light mode", () => {
cy.changeTheme("Light");
cy.get(".stRadio")
|
I added some snapshot tests for things that previously didn't have any
test coverage in the theming fast follow feature branch. These tests
seemed okay in the branch but suddenly turned flaky to the point of
being unusable once it got merged into develop, so this commit disables
them for now to un-break the build until the tests can be de-flaked and
revived.
|
https://api.github.com/repos/streamlit/streamlit/pulls/3180
|
2021-04-26T19:51:12Z
|
2021-04-26T20:12:54Z
|
2021-04-26T20:12:53Z
|
2021-07-24T00:37:17Z
| 502
|
streamlit/streamlit
| 21,834
|
feat: ignore providers
|
diff --git a/g4f/__init__.py b/g4f/__init__.py
index 1a696c6c33..6f777e4cf7 100644
--- a/g4f/__init__.py
+++ b/g4f/__init__.py
@@ -1,13 +1,14 @@
from __future__ import annotations
from requests import get
from g4f.models import Model, ModelUtils
-from .Provider import BaseProvider
-from .typing import Messages, CreateResult, Union
+from .Provider import BaseProvider, RetryProvider
+from .typing import Messages, CreateResult, Union, List
from .debug import logging
version = '0.1.6.2'
version_check = True
+
def check_pypi_version() -> None:
try:
response = get("https://pypi.org/pypi/g4f/json").json()
@@ -19,9 +20,11 @@ def check_pypi_version() -> None:
except Exception as e:
print(f'Failed to check g4f pypi version: {e}')
+
def get_model_and_provider(model : Union[Model, str],
provider : Union[type[BaseProvider], None],
- stream : bool) -> tuple[Model, type[BaseProvider]]:
+ stream : bool,
+ ignored : List[str] = None) -> tuple[Model, type[BaseProvider]]:
if isinstance(model, str):
if model in ModelUtils.convert:
@@ -32,6 +35,9 @@ def get_model_and_provider(model : Union[Model, str],
if not provider:
provider = model.best_provider
+ if isinstance(provider, RetryProvider) and ignored:
+ provider.providers = [p for p in provider.providers if p.__name__ not in ignored]
+
if not provider:
raise RuntimeError(f'No provider found for model: {model}')
@@ -46,15 +52,17 @@ def get_model_and_provider(model : Union[Model, str],
return model, provider
+
class ChatCompletion:
@staticmethod
def create(model: Union[Model, str],
messages : Messages,
provider : Union[type[BaseProvider], None] = None,
stream : bool = False,
- auth : Union[str, None] = None, **kwargs) -> Union[CreateResult, str]:
+ auth : Union[str, None] = None,
+ ignored : List[str] = None, **kwargs) -> Union[CreateResult, str]:
- model, provider = get_model_and_provider(model, provider, stream)
+ model, provider = get_model_and_provider(model, provider, stream, ignored)
if provider.needs_auth and not auth:
raise ValueError(
@@ -71,15 +79,17 @@ async def create_async(
model : Union[Model, str],
messages: Messages,
provider: Union[type[BaseProvider], None] = None,
- stream : bool = False, **kwargs) -> str:
+ stream : bool = False,
+ ignored : List[str] = None, **kwargs) -> str:
if stream:
raise ValueError(f'"create_async" does not support "stream" argument')
- model, provider = get_model_and_provider(model, provider, False)
+ model, provider = get_model_and_provider(model, provider, False, ignored)
return await provider.create_async(model.name, messages, **kwargs)
+
class Completion:
@staticmethod
def create(
@@ -87,6 +97,7 @@ def create(
prompt: str,
provider: Union[type[BaseProvider], None] = None,
stream: bool = False,
+ ignored : List[str] = None,
**kwargs
) -> Union[CreateResult, str]:
@@ -102,7 +113,7 @@ def create(
if model not in allowed_models:
raise Exception(f'ValueError: Can\'t use {model} with Completion.create()')
- model, provider = get_model_and_provider(model, provider, stream)
+ model, provider = get_model_and_provider(model, provider, stream, ignored)
result = provider.create_completion(model.name, [{"role": "user", "content": prompt}], stream, **kwargs)
|
As discussed in #1014, this adds an `ignored` parameter that lets users choose to **ignore specific providers**.
## Example
```python
# normal
response = g4f.ChatCompletion.create(
model='gpt-3.5-turbo',
messages=[{"role": "user", "content": "hello"}],
ignored=["Ylokh", "GptGo", "AItianhu", "Aibn", "Myshell", "FreeGpt"] # Ignore these providers
)
print(response)
# async
async def test():
response = await g4f.ChatCompletion.create_async(
model='gpt-3.5-turbo',
messages=[{"role": "user", "content": "hello"}],
ignored=["Ylokh", "GptGo", "AItianhu", "Aibn", "Myshell", "FreeGpt"] # Ignore these providers
)
return response
async def run_all():
calls = [
test()
]
res = await asyncio.gather(*calls)
return res
print(asyncio.run(run_all()))
```
If the feature is needed, perhaps someone could extend it and implement it in the **web UI**?
(As I'm not proficient in web development.)
My idea is to have a `multiple-select option` that allows users to ignore certain providers they do not want to use (similar to this [example](https://www.cssscript.com/filterable-checkable-multi-select/)).
|
https://api.github.com/repos/xtekky/gpt4free/pulls/1064
|
2023-10-13T06:25:26Z
|
2023-10-13T10:33:44Z
|
2023-10-13T10:33:44Z
|
2023-10-13T10:33:44Z
| 938
|
xtekky/gpt4free
| 38,388
|
Fix typo in docs/patterns/javascript.rst
|
diff --git a/docs/patterns/javascript.rst b/docs/patterns/javascript.rst
index dd3bcb9b6c..4b1d7e0fb4 100644
--- a/docs/patterns/javascript.rst
+++ b/docs/patterns/javascript.rst
@@ -28,7 +28,7 @@ It is important to understand the difference between templates and
JavaScript. Templates are rendered on the server, before the response is
sent to the user's browser. JavaScript runs in the user's browser, after
the template is rendered and sent. Therefore, it is impossible to use
-JavaScript to affect how the Jinja template is rendered, but is is
+JavaScript to affect how the Jinja template is rendered, but it is
possible to render data into the JavaScript that will run.
To provide data to JavaScript when rendering the template, use the
|
Fixed a typo
|
https://api.github.com/repos/pallets/flask/pulls/4758
|
2022-08-09T08:48:09Z
|
2022-08-09T14:08:08Z
|
2022-08-09T14:08:08Z
|
2022-08-24T00:06:25Z
| 189
|
pallets/flask
| 20,696
|
Update rec_nrtr_head.py
|
diff --git a/ppocr/modeling/heads/rec_nrtr_head.py b/ppocr/modeling/heads/rec_nrtr_head.py
index bf9ef56145..2fffa52176 100644
--- a/ppocr/modeling/heads/rec_nrtr_head.py
+++ b/ppocr/modeling/heads/rec_nrtr_head.py
@@ -17,7 +17,6 @@
from paddle import nn
import paddle.nn.functional as F
from paddle.nn import LayerList
-# from paddle.nn.initializer import XavierNormal as xavier_uniform_
from paddle.nn import Dropout, Linear, LayerNorm
import numpy as np
from ppocr.modeling.backbones.rec_svtrnet import Mlp, zeros_, ones_
@@ -30,7 +29,6 @@ class Transformer(nn.Layer):
Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and
Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information
Processing Systems, pages 6000-6010.
-
Args:
d_model: the number of expected features in the encoder/decoder inputs (default=512).
nhead: the number of heads in the multiheadattention models (default=8).
@@ -162,7 +160,7 @@ def forward_test(self, src):
memory = src
dec_seq = paddle.full((bs, 1), 2, dtype=paddle.int64)
dec_prob = paddle.full((bs, 1), 1., dtype=paddle.float32)
- for len_dec_seq in range(1, self.max_len):
+ for len_dec_seq in range(1, paddle.to_tensor(self.max_len)):
dec_seq_embed = self.embedding(dec_seq)
dec_seq_embed = self.positional_encoding(dec_seq_embed)
tgt_mask = self.generate_square_subsequent_mask(
@@ -304,7 +302,7 @@ def collect_hypothesis_and_scores(inst_dec_beams, n_best):
inst_idx_to_position_map = get_inst_idx_to_tensor_position_map(
active_inst_idx_list)
# Decode
- for len_dec_seq in range(1, self.max_len):
+ for len_dec_seq in range(1, paddle.to_tensor(self.max_len)):
src_enc_copy = src_enc.clone()
active_inst_idx_list = beam_decode_step(
inst_dec_beams, len_dec_seq, src_enc_copy,
@@ -348,15 +346,12 @@ class MultiheadAttention(nn.Layer):
"""Allows the model to jointly attend to information
from different representation subspaces.
See reference: Attention Is All You Need
-
.. math::
\text{MultiHead}(Q, K, V) = \text{Concat}(head_1,\dots,head_h)W^O
\text{where} head_i = \text{Attention}(QW_i^Q, KW_i^K, VW_i^V)
-
Args:
embed_dim: total dimension of the model
num_heads: parallel attention layers, or heads
-
"""
def __init__(self, embed_dim, num_heads, dropout=0., self_attn=False):
|
https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/8564
|
2022-12-07T07:05:22Z
|
2022-12-07T07:07:50Z
|
2022-12-07T07:07:50Z
|
2022-12-07T07:07:50Z
| 683
|
PaddlePaddle/PaddleOCR
| 42,508
|
|
docs: fix minor capitalization typo
|
diff --git a/README.md b/README.md
index 52bf032c2..d721b3d8e 100644
--- a/README.md
+++ b/README.md
@@ -117,7 +117,7 @@ Don't know what to contribute? Here is the public
[Project Board](https://github.com/users/imartinez/projects/3) with several ideas.
Head over to Discord
-#contributors channel and ask for write permissions on that Github project.
+#contributors channel and ask for write permissions on that GitHub project.
## 💬 Community
Join the conversation around PrivateGPT on our:
@@ -158,4 +158,4 @@ This project has been strongly influenced and supported by other amazing project
[GPT4All](https://github.com/nomic-ai/gpt4all),
[LlamaCpp](https://github.com/ggerganov/llama.cpp),
[Chroma](https://www.trychroma.com/)
-and [SentenceTransformers](https://www.sbert.net/).
\ No newline at end of file
+and [SentenceTransformers](https://www.sbert.net/).
diff --git a/fern/docs.yml b/fern/docs.yml
index d3b0025a0..67021673b 100644
--- a/fern/docs.yml
+++ b/fern/docs.yml
@@ -89,7 +89,7 @@ navigation:
# `type:primary` is always displayed at the most right side of the navbar
navbar-links:
- type: secondary
- text: Github
+ text: GitHub
url: "https://github.com/imartinez/privateGPT"
- type: secondary
text: Contact us
|
:)
|
https://api.github.com/repos/zylon-ai/private-gpt/pulls/1392
|
2023-12-11T11:01:45Z
|
2023-12-12T19:31:38Z
|
2023-12-12T19:31:38Z
|
2023-12-13T14:11:36Z
| 367
|
zylon-ai/private-gpt
| 38,472
|
fix for default background in svg export
|
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 692249f2e..1484ecf08 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,16 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+## [12.4.1] - 2022-05-08
+
+### Fixed
+
+- Fix for https://github.com/Textualize/rich/issues/2260
+
+### Changed
+
+- Added a keyline around SVG terminals which is visible on dark backgrounds
+
## [12.4.0] - 2022-05-07
### Changed
@@ -1730,7 +1740,8 @@ Major version bump for a breaking change to `Text.stylize signature`, which corr
- First official release, API still to be stabilized
-[unreleased]: https://github.com/willmcgugan/rich/compare/v12.4.0...HEAD
+[unreleased]: https://github.com/willmcgugan/rich/compare/v12.4.1...HEAD
+[12.4.1]: https://github.com/willmcgugan/rich/compare/v12.4.0...v12.4.1
[12.4.0]: https://github.com/willmcgugan/rich/compare/v12.3.0...v12.4.0
[12.3.0]: https://github.com/willmcgugan/rich/compare/v12.2.0...v12.3.0
[12.2.0]: https://github.com/willmcgugan/rich/compare/v12.1.0...v12.2.0
diff --git a/pyproject.toml b/pyproject.toml
index 8c84fff9b..71f4e9ac6 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -2,7 +2,7 @@
name = "rich"
homepage = "https://github.com/willmcgugan/rich"
documentation = "https://rich.readthedocs.io/en/latest/"
-version = "12.4.0"
+version = "12.4.1"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
authors = ["Will McGugan <[email protected]>"]
license = "MIT"
diff --git a/rich/_export_format.py b/rich/_export_format.py
index cc59e965f..32aa71c91 100644
--- a/rich/_export_format.py
+++ b/rich/_export_format.py
@@ -21,6 +21,7 @@
CONSOLE_SVG_FORMAT = """\
<svg class="rich-terminal" viewBox="0 0 {width} {height}" xmlns="http://www.w3.org/2000/svg">
+ <!-- Generated with Rich https://www.textualize.io -->
<style>
@font-face {{
@@ -43,13 +44,13 @@
.{unique_id}-matrix {{
font-family: Fira Code, monospace;
font-size: {char_height}px;
- font-variant: east-asian-width-values;
line-height: {line_height}px;
+ font-variant-east-asian: full-width;
}}
.{unique_id}-title {{
font-size: 18px;
- opacity: 0.8;
+
font-weight: bold;
font-family: arial;
}}
diff --git a/rich/console.py b/rich/console.py
index 2f685c875..2159d3c17 100644
--- a/rich/console.py
+++ b/rich/console.py
@@ -2252,12 +2252,12 @@ def get_svg_style(style: Style) -> str:
css_rules = []
color = (
_theme.foreground_color
- if style.color is None
+ if (style.color is None or style.color.is_default)
else style.color.get_truecolor(_theme)
)
bgcolor = (
_theme.background_color
- if style.bgcolor is None
+ if (style.bgcolor is None or style.bgcolor.is_default)
else style.bgcolor.get_truecolor(_theme)
)
if style.reverse:
@@ -2365,7 +2365,8 @@ def stringify(value: object) -> str:
else style.color.get_truecolor(_theme).hex
)
else:
- has_background = style.bgcolor is not None
+ bgcolor = style.bgcolor
+ has_background = bgcolor is not None and not bgcolor.is_default
background = (
_theme.background_color.hex
if style.bgcolor is None
@@ -2407,12 +2408,15 @@ def stringify(value: object) -> str:
chrome = make_tag(
"rect",
fill=_theme.background_color.hex,
+ stroke="rgba(255,255,255,0.35)",
+ stroke_width="1",
x=margin_left,
y=margin_top,
width=terminal_width,
height=terminal_height,
rx=12,
)
+
title_color = _theme.foreground_color.hex
if title:
chrome += make_tag(
diff --git a/rich/terminal_theme.py b/rich/terminal_theme.py
index 5ceff8ee8..565e9d960 100644
--- a/rich/terminal_theme.py
+++ b/rich/terminal_theme.py
@@ -127,5 +127,27 @@ def __init__(
],
)
-
-SVG_EXPORT_THEME = DIMMED_MONOKAI
+SVG_EXPORT_THEME = TerminalTheme(
+ (41, 41, 41),
+ (197, 200, 198),
+ [
+ (75, 78, 85),
+ (204, 85, 90),
+ (152, 168, 75),
+ (208, 179, 68),
+ (96, 138, 177),
+ (152, 114, 159),
+ (104, 160, 179),
+ (197, 200, 198),
+ (154, 155, 153),
+ ],
+ [
+ (255, 38, 39),
+ (0, 130, 61),
+ (208, 132, 66),
+ (25, 132, 233),
+ (255, 44, 122),
+ (57, 130, 128),
+ (253, 253, 197),
+ ],
+)
diff --git a/tests/test_console.py b/tests/test_console.py
index aa92cc420..007cd2810 100644
--- a/tests/test_console.py
+++ b/tests/test_console.py
@@ -494,7 +494,7 @@ def test_export_html_inline():
assert html == expected
-EXPECTED_SVG = '<svg class="rich-terminal" viewBox="0 0 1296 118.4" xmlns="http://www.w3.org/2000/svg">\n <style>\n\n @font-face {\n font-family: "Fira Code";\n src: local("FiraCode-Regular"),\n url("https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff2/FiraCode-Regular.woff2") format("woff2"),\n url("https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff/FiraCode-Regular.woff") format("woff");\n font-style: normal;\n font-weight: 400;\n }\n @font-face {\n font-family: "Fira Code";\n src: local("FiraCode-Bold"),\n url("https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff2/FiraCode-Bold.woff2") format("woff2"),\n url("https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff/FiraCode-Bold.woff") format("woff");\n font-style: bold;\n font-weight: 700;\n }\n\n .terminal-614794459-matrix {\n font-family: Fira Code, monospace;\n font-size: 20px;\n font-variant: east-asian-width-values;\n line-height: 26.400000000000002px;\n }\n\n .terminal-614794459-title {\n font-size: 18px;\n opacity: 0.8;\n font-weight: bold;\n font-family: arial;\n }\n\n .terminal-614794459-r1 { fill: #4f76a1;font-weight: bold }\n.terminal-614794459-r2 { fill: #b9bcba }\n </style>\n <rect fill="#191919" x="16" y="20" width="1264" height="78.4" rx="12"/><text class="terminal-614794459-title" fill="#b9bcba" text-anchor="middle" x="632" y="46">Rich</text>\n <circle cx="40" cy="40" r="7" fill="#ff5f57"/>\n <circle cx="62" cy="40" r="7" fill="#febc2e"/>\n <circle cx="84" cy="40" r="7" fill="#28c840"/>\n \n <g transform="translate(28, 60)">\n <rect fill="#be3f48" x="0" y="0" width="38.2" height="27.4"/>\n <text alignment-baseline="baseline" class="terminal-614794459-matrix" font-variant="east-asian-width-values"><tspan class="terminal-614794459-r1" x="0" y="20" textLength="37.2">foo</tspan><tspan class="terminal-614794459-r2" x="37.2" y="20" textLength="12.4"> </tspan><tspan class="terminal-614794459-r2" x="49.6" y="20" textLength="62">Click</tspan><tspan class="terminal-614794459-r2" x="111.6" y="20" textLength="1128.4">                                                                                           </tspan><tspan class="terminal-614794459-r2" x="1240" y="20" textLength="12.4">\n</tspan></text>\n </g>\n</svg>\n'
+EXPECTED_SVG = '<svg class="rich-terminal" viewBox="0 0 1296 118.4" xmlns="http://www.w3.org/2000/svg">\n <!-- Generated with Rich https://www.textualize.io -->\n <style>\n\n @font-face {\n font-family: "Fira Code";\n src: local("FiraCode-Regular"),\n url("https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff2/FiraCode-Regular.woff2") format("woff2"),\n url("https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff/FiraCode-Regular.woff") format("woff");\n font-style: normal;\n font-weight: 400;\n }\n @font-face {\n font-family: "Fira Code";\n src: local("FiraCode-Bold"),\n url("https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff2/FiraCode-Bold.woff2") format("woff2"),\n url("https://cdnjs.cloudflare.com/ajax/libs/firacode/6.2.0/woff/FiraCode-Bold.woff") format("woff");\n font-style: bold;\n font-weight: 700;\n }\n\n .terminal-614794459-matrix {\n font-family: Fira Code, monospace;\n font-size: 20px;\n line-height: 26.400000000000002px;\n font-variant-east-asian: full-width;\n }\n\n .terminal-614794459-title {\n font-size: 18px;\n\n font-weight: bold;\n font-family: arial;\n }\n\n .terminal-614794459-r1 { fill: #608ab1;font-weight: bold }\n.terminal-614794459-r2 { fill: #c5c8c6 }\n </style>\n <rect fill="#292929" stroke="rgba(255,255,255,0.35)" stroke-width="1" x="16" y="20" width="1264" height="78.4" rx="12"/><text class="terminal-614794459-title" fill="#c5c8c6" text-anchor="middle" x="632" y="46">Rich</text>\n <circle cx="40" cy="40" r="7" fill="#ff5f57"/>\n <circle cx="62" cy="40" r="7" fill="#febc2e"/>\n <circle cx="84" cy="40" r="7" fill="#28c840"/>\n \n <g transform="translate(28, 60)">\n <rect fill="#cc555a" x="0" y="0" width="38.2" height="27.4"/>\n <text alignment-baseline="baseline" class="terminal-614794459-matrix" font-variant="east-asian-width-values"><tspan class="terminal-614794459-r1" x="0" y="20" textLength="37.2">foo</tspan><tspan class="terminal-614794459-r2" x="37.2" y="20" textLength="12.4"> </tspan><tspan class="terminal-614794459-r2" x="49.6" y="20" textLength="62">Click</tspan><tspan class="terminal-614794459-r2" x="111.6" y="20" textLength="1128.4">                                                                                           </tspan><tspan class="terminal-614794459-r2" x="1240" y="20" textLength="12.4">\n</tspan></text>\n </g>\n</svg>\n'
def test_export_svg():
|
Fix for https://github.com/Textualize/rich/issues/2260
|
https://api.github.com/repos/Textualize/rich/pulls/2262
|
2022-05-08T07:32:57Z
|
2022-05-08T16:58:39Z
|
2022-05-08T16:58:38Z
|
2022-05-08T16:58:39Z
| 3,651
|
Textualize/rich
| 48,316
|
Fix windows LOGGER with emojis output
|
diff --git a/utils/general.py b/utils/general.py
index 10fa07f379f..502459d7b78 100755
--- a/utils/general.py
+++ b/utils/general.py
@@ -97,8 +97,9 @@ def set_logging(name=None, verbose=VERBOSE):
set_logging() # run before defining LOGGER
LOGGER = logging.getLogger("yolov5") # define globally (used in train.py, val.py, detect.py, etc.)
-for fn in LOGGER.info, LOGGER.warning:
- _fn, fn = fn, lambda x: _fn(emojis(x)) # emoji safe logging
+if platform.system() == 'Windows':
+ for fn in LOGGER.info, LOGGER.warning:
+ setattr(LOGGER, fn.__name__, lambda x: fn(emojis(x))) # emoji safe logging
def user_config_dir(dir='Ultralytics', env_var='YOLOV5_CONFIG_DIR'):
|
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)</sub>
### 🌟 Summary
Enhanced logging for Windows users in YOLOv5.
### 📊 Key Changes
- Adjusted the logger setup to include emoji-safe logging specifically for Windows platforms.
### 🎯 Purpose & Impact
- 🎉 **Benefits**: Ensures that Windows users see log messages with emojis properly, enhancing readability and the user experience.
- 💻 **Impact to Users**: Windows-based developers and users will now have a better logging experience, with emojis appearing correctly in their console outputs. Other platform users remain unaffected.
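To make the emoji-safe wrapping concrete, here is a standalone sketch of the pattern (not the exact YOLOv5 code; note the `fn=fn` default argument, without which both wrappers would close over the last loop value):
```python
import logging
import platform

def emojis(s: str) -> str:
    # Drop non-ASCII characters (emojis) that some Windows consoles can't print.
    return s.encode().decode("ascii", "ignore") if platform.system() == "Windows" else s

LOGGER = logging.getLogger("demo")
if platform.system() == "Windows":
    for fn in (LOGGER.info, LOGGER.warning):
        # `fn=fn` binds eagerly; a plain closure would make both wrappers call warning().
        setattr(LOGGER, fn.__name__, lambda x, fn=fn: fn(emojis(x)))
```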
|
https://api.github.com/repos/ultralytics/yolov5/pulls/8958
|
2022-08-13T21:40:49Z
|
2022-08-13T22:12:09Z
|
2022-08-13T22:12:09Z
|
2024-01-19T07:27:55Z
| 203
|
ultralytics/yolov5
| 24,790
|
Add link to IT translation
|
diff --git a/docs/_templates/sidebarintro.html b/docs/_templates/sidebarintro.html
index 03bf7e0e23..524c73f339 100644
--- a/docs/_templates/sidebarintro.html
+++ b/docs/_templates/sidebarintro.html
@@ -34,8 +34,7 @@ <h3>Translations</h3>
<li><a href="http://jp.python-requests.org/">Japanese</a></li>
<li><a href="http://cn.python-requests.org/">Chinese</a></li>
<li><a href="http://pt.python-requests.org/">Portuguese</a></li>
-
-
+<li><a href="http://it.python-requests.org/">Italian</a></li>
</ul>
<h3>Useful Links</h3>
|
Fixes #2744
|
https://api.github.com/repos/psf/requests/pulls/2746
|
2015-08-25T20:10:24Z
|
2015-08-26T00:38:04Z
|
2015-08-26T00:38:04Z
|
2021-09-08T06:01:08Z
| 175
|
psf/requests
| 32,093
|
Fix the setuptools_scm issue
|
diff --git a/python/build-wheel-macos.sh b/python/build-wheel-macos.sh
index f7229b348478a..7559fd02cd016 100755
--- a/python/build-wheel-macos.sh
+++ b/python/build-wheel-macos.sh
@@ -47,6 +47,11 @@ for ((i=0; i<${#PY_VERSIONS[@]}; ++i)); do
PYTHON_EXE=$MACPYTHON_PY_PREFIX/$PY_MM/bin/python$PY_MM
PIP_CMD="$(dirname $PYTHON_EXE)/pip$PY_MM"
+ pushd /tmp
+ # Install latest version of pip to avoid brownouts
+ curl https://bootstrap.pypa.io/get-pip.py | $PYTHON_EXE
+ popd
+
pushd python
# Install setuptools_scm because otherwise when building the wheel for
# Python 3.6, we see an error.
|
#1782
|
https://api.github.com/repos/ray-project/ray/pulls/1784
|
2018-03-26T23:11:06Z
|
2018-03-31T17:33:41Z
|
2018-03-31T17:33:41Z
|
2018-03-31T17:33:45Z
| 200
|
ray-project/ray
| 19,709
|
allow loading embeddings from subdirectories
|
diff --git a/modules/textual_inversion/textual_inversion.py b/modules/textual_inversion/textual_inversion.py
index 24b43045919..0a0590440a6 100644
--- a/modules/textual_inversion/textual_inversion.py
+++ b/modules/textual_inversion/textual_inversion.py
@@ -149,19 +149,20 @@ def process_file(path, filename):
else:
self.skipped_embeddings[name] = embedding
- for fn in os.listdir(self.embeddings_dir):
- try:
- fullfn = os.path.join(self.embeddings_dir, fn)
-
- if os.stat(fullfn).st_size == 0:
+ for root, dirs, fns in os.walk(self.embeddings_dir):
+ for fn in fns:
+ try:
+ fullfn = os.path.join(root, fn)
+
+ if os.stat(fullfn).st_size == 0:
+ continue
+
+ process_file(fullfn, fn)
+ except Exception:
+ print(f"Error loading embedding {fn}:", file=sys.stderr)
+ print(traceback.format_exc(), file=sys.stderr)
continue
- process_file(fullfn, fn)
- except Exception:
- print(f"Error loading embedding {fn}:", file=sys.stderr)
- print(traceback.format_exc(), file=sys.stderr)
- continue
-
print(f"Textual inversion embeddings loaded({len(self.word_embeddings)}): {', '.join(self.word_embeddings.keys())}")
if len(self.skipped_embeddings) > 0:
print(f"Textual inversion embeddings skipped({len(self.skipped_embeddings)}): {', '.join(self.skipped_embeddings.keys())}")
|
https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/6384
|
2023-01-05T20:39:52Z
|
2023-01-06T04:56:49Z
|
2023-01-06T04:56:49Z
|
2023-01-06T04:56:49Z
| 371
|
AUTOMATIC1111/stable-diffusion-webui
| 40,387
|
|
Cache docs URL; updated docstrings
|
diff --git a/lib/streamlit/runtime/caching/__init__.py b/lib/streamlit/runtime/caching/__init__.py
index 0550b47b8947..8f02e82363d2 100644
--- a/lib/streamlit/runtime/caching/__init__.py
+++ b/lib/streamlit/runtime/caching/__init__.py
@@ -32,8 +32,7 @@
)
from streamlit.runtime.state.session_state import WidgetMetadata
-# TODO: replace this with the proper URL once it's ready from the docs team
-CACHE_DOCS_URL = "https://NEED.CACHE.DOCS.URL"
+CACHE_DOCS_URL = "https://docs.streamlit.io/library/advanced-features/caching"
def save_element_message(
diff --git a/lib/streamlit/runtime/caching/cache_data_api.py b/lib/streamlit/runtime/caching/cache_data_api.py
index ec21aaaf4663..5c31fd220a86 100644
--- a/lib/streamlit/runtime/caching/cache_data_api.py
+++ b/lib/streamlit/runtime/caching/cache_data_api.py
@@ -250,8 +250,8 @@ def __call__(
*,
ttl: float | timedelta | None = None,
max_entries: int | None = None,
- persist: CachePersistType | bool = None,
show_spinner: bool | str = True,
+ persist: CachePersistType | bool = None,
experimental_allow_widgets: bool = False,
) -> Callable[[F], F]:
...
@@ -262,8 +262,8 @@ def __call__(
*,
ttl: float | timedelta | None = None,
max_entries: int | None = None,
- persist: CachePersistType | bool = None,
show_spinner: bool | str = True,
+ persist: CachePersistType | bool = None,
experimental_allow_widgets: bool = False,
):
return self._decorator(
@@ -281,18 +281,22 @@ def _decorator(
*,
ttl: float | timedelta | None,
max_entries: int | None,
- persist: CachePersistType | bool,
show_spinner: bool | str,
+ persist: CachePersistType | bool,
experimental_allow_widgets: bool,
):
- """Function decorator to cache function executions.
+ """Decorator to cache functions that return data (e.g. dataframe transforms,
+ database queries, ML inference).
- Cached data is stored in "pickled" form, which means that the return
- value of a cached function must be pickleable.
+ Cached objects are stored in "pickled" form, which means that the return
+ value of a cached function must be pickleable. Each caller of the cached
+ function gets its own copy of the cached data.
- Each caller of the cached function gets its own copy of the cached data.
+ You can clear a function's cache with `func.clear()` or clear the entire
+ cache with `st.cache_data.clear()`.
- You can clear a cached function's cache with f.clear().
+ To cache global resources, use `st.cache_resource` instead.
+ Learn more about caching at [https://docs.streamlit.io/library/advanced-features/caching](https://docs.streamlit.io/library/advanced-features/caching)
Parameters
----------
@@ -310,15 +314,15 @@ def _decorator(
for an unbounded cache. (When a new entry is added to a full cache,
the oldest cached entry will be removed.) The default is None.
+ show_spinner : boolean
+ Enable the spinner. Default is True to show a spinner when there is
+ a cache miss.
+
persist : str or boolean or None
Optional location to persist cached data to. Passing "disk" (or True)
will persist the cached data to the local disk. None (or False) will disable
persistence. The default is None.
- show_spinner : boolean
- Enable the spinner. Default is True to show a spinner when there is
- a cache miss.
-
experimental_allow_widgets : boolean
Allow widgets to be used in the cached function. Defaults to False.
Support for widgets in cached functions is currently experimental.
diff --git a/lib/streamlit/runtime/caching/cache_resource_api.py b/lib/streamlit/runtime/caching/cache_resource_api.py
index 6b8549d5d5e5..94fe40bc65f8 100644
--- a/lib/streamlit/runtime/caching/cache_resource_api.py
+++ b/lib/streamlit/runtime/caching/cache_resource_api.py
@@ -269,16 +269,19 @@ def _decorator(
validate: ValidateFunc | None,
experimental_allow_widgets: bool,
):
- """Function decorator to store cached resources.
+ """Decorator to cache functions that return global resources (e.g.
+ database connections, ML models).
- Each cache_resource object is shared across all users connected to the app.
- Cached resources *must* be thread-safe, because they can be accessed from
- multiple threads concurrently.
+ Cached objects are shared across all users, sessions, and reruns. They
+ must be thread-safe because they can be accessed from multiple threads
+ concurrently. If thread safety is an issue, consider using `st.session_state`
+ to store resources per session instead.
- (If thread-safety is an issue, consider using ``st.session_state`` to
- store per-session cached resources instead.)
+ You can clear a function's cache with `func.clear()` or clear the entire
+ cache with `st.cache_resource.clear()`.
- You can clear a cache_resource function's cache with f.clear().
+ To cache data, use `st.cache_data` instead.
+ Learn more about caching at [https://docs.streamlit.io/library/advanced-features/caching](https://docs.streamlit.io/library/advanced-features/caching)
Parameters
----------
@@ -301,12 +304,12 @@ def _decorator(
value of show_spinner param will be used for spinner text.
validate : callable or None
- validate (callable): An optional validation function for cached data.
- `validate` is called each time the cached value is accessed. It receives
- the cached value as its only parameter and it must return a boolean.
- If `validate` returns False, the current cached value is discarded, and
- the decorated function is called to compute a new value.
- This is useful e.g. to check the health of database connections.
+ An optional validation function for cached data. `validate` is called
+ each time the cached value is accessed. It receives the cached value as
+ its only parameter and it must return a boolean. If `validate` returns
+ False, the current cached value is discarded, and the decorated function
+ is called to compute a new value. This is useful e.g. to check the
+ health of database connections.
experimental_allow_widgets : boolean
Allow widgets to be used in the cached function. Defaults to False.
diff --git a/lib/tests/streamlit/runtime/caching/cache_data_api_test.py b/lib/tests/streamlit/runtime/caching/cache_data_api_test.py
index b1ac2ecde3f7..375141ce4184 100644
--- a/lib/tests/streamlit/runtime/caching/cache_data_api_test.py
+++ b/lib/tests/streamlit/runtime/caching/cache_data_api_test.py
@@ -137,7 +137,7 @@ def test_deprecation_warnings(
"""We show deprecation warnings when using `@st.experimental_memo`, but not `@st.cache_data`."""
warning_str = (
"`st.experimental_memo` is deprecated. Please use the new command `st.cache_data` instead, "
- "which has the same behavior. More information [in our docs](https://NEED.CACHE.DOCS.URL)."
+ "which has the same behavior. More information [in our docs](https://docs.streamlit.io/library/advanced-features/caching)."
)
# We show the deprecation warning at declaration time:
diff --git a/lib/tests/streamlit/runtime/caching/cache_resource_api_test.py b/lib/tests/streamlit/runtime/caching/cache_resource_api_test.py
index 0b3458197aad..b2d6a15ffc64 100644
--- a/lib/tests/streamlit/runtime/caching/cache_resource_api_test.py
+++ b/lib/tests/streamlit/runtime/caching/cache_resource_api_test.py
@@ -108,7 +108,7 @@ def test_deprecation_warnings(
"""We show deprecation warnings when using `@st.experimental_singleton`, but not `@st.cache_resource`."""
warning_str = (
"`st.experimental_singleton` is deprecated. Please use the new command `st.cache_resource` instead, "
- "which has the same behavior. More information [in our docs](https://NEED.CACHE.DOCS.URL)."
+ "which has the same behavior. More information [in our docs](https://docs.streamlit.io/library/advanced-features/caching)."
)
# We show the deprecation warning at declaration time:
diff --git a/lib/tests/streamlit/runtime/legacy_caching/caching_test.py b/lib/tests/streamlit/runtime/legacy_caching/caching_test.py
index 711569553967..d7e2c761604a 100644
--- a/lib/tests/streamlit/runtime/legacy_caching/caching_test.py
+++ b/lib/tests/streamlit/runtime/legacy_caching/caching_test.py
@@ -640,7 +640,7 @@ def func():
"`st.cache` is deprecated. Please use one of Streamlit's new caching commands,\n"
"`st.cache_data` or `st.cache_resource`. Based on this function's return value\n"
"of type `int`, we recommend using `st.cache_data`.\n\n"
- "More information [in our docs](https://NEED.CACHE.DOCS.URL)."
+ "More information [in our docs](https://docs.streamlit.io/library/advanced-features/caching)."
)
show_deprecation_warning.assert_called_once_with(expected_message)
@@ -662,7 +662,7 @@ def func():
expected_message = (
"`st.cache` is deprecated. Please use one of Streamlit's new caching commands,\n"
"`st.cache_data` or `st.cache_resource`.\n\n"
- "More information [in our docs](https://NEED.CACHE.DOCS.URL)."
+ "More information [in our docs](https://docs.streamlit.io/library/advanced-features/caching)."
)
show_deprecation_warning.assert_called_once_with(expected_message)
|
- Replace our placeholder `CACHE_DOCS_URL` with the proper URL
- Update `cache_data` and `cache_resource` docstrings, per product
- `cache_data`: swap the order of the `show_spinner` + `persist` kwargs, per product
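For context, a minimal usage sketch of `st.cache_data` with the kwargs touched here (the data source is illustrative; parameter names and accepted values follow the diff above):
```python
import pandas as pd
import streamlit as st

@st.cache_data(ttl=3600, max_entries=100, show_spinner="Fetching data...", persist="disk")
def load_data(csv_url: str) -> pd.DataFrame:
    # Cached per input URL; each caller receives its own unpickled copy.
    return pd.read_csv(csv_url)
```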
|
https://api.github.com/repos/streamlit/streamlit/pulls/5980
|
2023-01-19T17:38:18Z
|
2023-01-19T23:19:43Z
|
2023-01-19T23:19:42Z
|
2023-01-19T23:19:45Z
| 2,315
|
streamlit/streamlit
| 21,664
|
#4242 Support multi emails register
|
diff --git a/acme/acme/messages.py b/acme/acme/messages.py
index 03dbc325579..827a4dd11a6 100644
--- a/acme/acme/messages.py
+++ b/acme/acme/messages.py
@@ -285,7 +285,7 @@ def from_data(cls, phone=None, email=None, **kwargs):
if phone is not None:
details.append(cls.phone_prefix + phone)
if email is not None:
- details.append(cls.email_prefix + email)
+ details.extend([cls.email_prefix + mail for mail in email.split(',')])
kwargs['contact'] = tuple(details)
return cls(**kwargs)
diff --git a/certbot/cli.py b/certbot/cli.py
index b71d60055a0..bd39362bc9f 100644
--- a/certbot/cli.py
+++ b/certbot/cli.py
@@ -418,7 +418,7 @@ def _get_help_string(self, action):
}),
("enhance", {
"short": "Add security enhancements to your existing configuration",
- "opts": ("Helps to harden the TLS configration by adding security enhancements "
+ "opts": ("Helps to harden the TLS configuration by adding security enhancements "
"to already existing configuration."),
"usage": "\n\n certbot enhance [options]\n\n"
}),
diff --git a/certbot/client.py b/certbot/client.py
index 45dc9c63b9e..59514f8d139 100644
--- a/certbot/client.py
+++ b/certbot/client.py
@@ -179,8 +179,9 @@ def perform_registration(acme, config, tos_cb):
Actually register new account, trying repeatedly if there are email
problems
- :param .IConfig config: Client configuration.
:param acme.client.Client client: ACME client object.
+ :param .IConfig config: Client configuration.
+ :param Callable tos_cb: a callback to handle Term of Service agreement.
:returns: Registration Resource.
:rtype: `acme.messages.RegistrationResource`
diff --git a/certbot/interfaces.py b/certbot/interfaces.py
index c96f6bd51f2..6233e35929a 100644
--- a/certbot/interfaces.py
+++ b/certbot/interfaces.py
@@ -201,7 +201,9 @@ class IConfig(zope.interface.Interface):
"""
server = zope.interface.Attribute("ACME Directory Resource URI.")
email = zope.interface.Attribute(
- "Email used for registration and recovery contact. (default: Ask)")
+ "Email used for registration and recovery contact. Use comma to "
+ "register multiple emails, ex: [email protected],[email protected]. "
+ "(default: Ask).")
rsa_key_size = zope.interface.Attribute("Size of the RSA key.")
must_staple = zope.interface.Attribute(
"Adds the OCSP Must Staple extension to the certificate. "
diff --git a/certbot/main.py b/certbot/main.py
index a041b998f98..c6247e762e2 100644
--- a/certbot/main.py
+++ b/certbot/main.py
@@ -483,6 +483,21 @@ def _determine_account(config):
:raises errors.Error: If unable to register an account with ACME server
"""
+ def _tos_cb(terms_of_service):
+ if config.tos:
+ return True
+ msg = ("Please read the Terms of Service at {0}. You "
+ "must agree in order to register with the ACME "
+ "server at {1}".format(
+ terms_of_service, config.server))
+ obj = zope.component.getUtility(interfaces.IDisplay)
+ result = obj.yesno(msg, "Agree", "Cancel",
+ cli_flag="--agree-tos", force_interactive=True)
+ if not result:
+ raise errors.Error(
+ "Registration cannot proceed without accepting "
+ "Terms of Service.")
+
account_storage = account.AccountFileStorage(config)
acme = None
@@ -497,21 +512,6 @@ def _determine_account(config):
else: # no account registered yet
if config.email is None and not config.register_unsafely_without_email:
config.email = display_ops.get_email()
-
- def _tos_cb(terms_of_service):
- if config.tos:
- return True
- msg = ("Please read the Terms of Service at {0}. You "
- "must agree in order to register with the ACME "
- "server at {1}".format(
- terms_of_service, config.server))
- obj = zope.component.getUtility(interfaces.IDisplay)
- result = obj.yesno(msg, "Agree", "Cancel",
- cli_flag="--agree-tos", force_interactive=True)
- if not result:
- raise errors.Error(
- "Registration cannot proceed without accepting "
- "Terms of Service.")
try:
acc, acme = client.register(
config, account_storage, tos_cb=_tos_cb)
@@ -731,8 +731,9 @@ def register(config, unused_plugins):
acc, acme = _determine_account(config)
cb_client = client.Client(config, acc, None, None, acme=acme)
# We rely on an exception to interrupt this process if it didn't work.
+ acc_contacts = ['mailto:' + email for email in config.email.split(',')]
acc.regr = cb_client.acme.update_registration(acc.regr.update(
- body=acc.regr.body.update(contact=('mailto:' + config.email,))))
+ body=acc.regr.body.update(contact=acc_contacts)))
account_storage.save_regr(acc, cb_client.acme)
eff.handle_subscription(config)
add_msg("Your e-mail address was updated to {0}.".format(config.email))
diff --git a/certbot/tests/main_test.py b/certbot/tests/main_test.py
index 22653ca3aba..68a068973a1 100644
--- a/certbot/tests/main_test.py
+++ b/certbot/tests/main_test.py
@@ -1433,7 +1433,9 @@ def test_update_registration_with_email(self, mock_utility, mock_email):
mocked_storage = mock.MagicMock()
mocked_account.AccountFileStorage.return_value = mocked_storage
mocked_storage.find_all.return_value = ["an account"]
- mocked_det.return_value = (mock.MagicMock(), "foo")
+ mock_acc = mock.MagicMock()
+ mock_regr = mock_acc.regr
+ mocked_det.return_value = (mock_acc, "foo")
cb_client = mock.MagicMock()
mocked_client.Client.return_value = cb_client
x = self._call_no_clientmock(
@@ -1443,8 +1445,10 @@ def test_update_registration_with_email(self, mock_utility, mock_email):
self.assertTrue(x[0] is None)
# and we got supposedly did update the registration from
# the server
- self.assertTrue(
- cb_client.acme.update_registration.called)
+ reg_arg = cb_client.acme.update_registration.call_args[0][0]
+ # Test the return value of .update() was used because
+ # the regr is immutable.
+ self.assertEqual(reg_arg, mock_regr.update())
# and we saved the updated registration on disk
self.assertTrue(mocked_storage.save_regr.called)
self.assertTrue(
diff --git a/tests/boulder-integration.sh b/tests/boulder-integration.sh
index 9748befa383..e931e30f3c5 100755
--- a/tests/boulder-integration.sh
+++ b/tests/boulder-integration.sh
@@ -191,7 +191,14 @@ for dir in $renewal_hooks_dirs; do
exit 1
fi
done
-common register --update-registration --email [email protected]
+
+common unregister
+
+common register --email [email protected],[email protected]
+
+common register --update-registration --email [email protected]
+
+common register --update-registration --email [email protected],[email protected]
common plugins --init --prepare | grep webroot
|
This change allows registering and updating an account with multiple emails.
Details are in #4242.
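The core mechanics as a standalone sketch, mirroring the `split(',')` approach in the diff (the sample addresses are placeholders; the real code does not strip whitespace):
```python
def emails_to_contacts(email_arg: str) -> tuple:
    """Turn a comma-separated --email value into ACME mailto contacts."""
    return tuple(
        "mailto:" + addr.strip()           # one contact entry per address
        for addr in email_arg.split(",")
        if addr.strip()                    # skip empty segments
    )

print(emails_to_contacts("[email protected],[email protected]"))
# -> ('mailto:[email protected]', 'mailto:[email protected]')
```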
|
https://api.github.com/repos/certbot/certbot/pulls/5994
|
2018-05-14T22:27:52Z
|
2018-05-22T22:32:45Z
|
2018-05-22T22:32:45Z
|
2018-05-22T23:45:47Z
| 1,819
|
certbot/certbot
| 334
|
ui: update calibration limits text
|
diff --git a/selfdrive/ui/qt/offroad/settings.cc b/selfdrive/ui/qt/offroad/settings.cc
index f74e671d298e14..669de5eae93e94 100644
--- a/selfdrive/ui/qt/offroad/settings.cc
+++ b/selfdrive/ui/qt/offroad/settings.cc
@@ -283,7 +283,7 @@ DevicePanel::DevicePanel(SettingsWindow *parent) : ListWidget(parent) {
void DevicePanel::updateCalibDescription() {
QString desc =
tr("openpilot requires the device to be mounted within 4° left or right and "
- "within 5° up or 8° down. openpilot is continuously calibrating, resetting is rarely required.");
+ "within 5° up or 9° down. openpilot is continuously calibrating, resetting is rarely required.");
std::string calib_bytes = params.get("CalibrationParams");
if (!calib_bytes.empty()) {
try {
diff --git a/selfdrive/ui/translations/main_ar.ts b/selfdrive/ui/translations/main_ar.ts
index dd9acffb0314a8..a5da450221902e 100644
--- a/selfdrive/ui/translations/main_ar.ts
+++ b/selfdrive/ui/translations/main_ar.ts
@@ -226,8 +226,8 @@
<translation>أطفاء</translation>
</message>
<message>
- <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 8° down. openpilot is continuously calibrating, resetting is rarely required.</source>
- <translation>يتطلب openpilot أن يتم تركيب الجهاز في حدود 4 درجات يسارًا أو يمينًا و 5 درجات لأعلى أو 8 درجات لأسفل. يقوم برنامج openpilot بالمعايرة بشكل مستمر ، ونادراً ما تكون إعادة الضبط مطلوبة.</translation>
+ <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 9° down. openpilot is continuously calibrating, resetting is rarely required.</source>
+ <translation>يتطلب openpilot أن يتم تركيب الجهاز في حدود 4 درجات يسارًا أو يمينًا و 5 درجات لأعلى أو 9 درجات لأسفل. يقوم برنامج openpilot بالمعايرة بشكل مستمر ، ونادراً ما تكون إعادة الضبط مطلوبة.</translation>
</message>
<message>
<source> Your device is pointed %1° %2 and %3° %4.</source>
diff --git a/selfdrive/ui/translations/main_de.ts b/selfdrive/ui/translations/main_de.ts
index d1820fd423f6aa..c53134c5b3d160 100644
--- a/selfdrive/ui/translations/main_de.ts
+++ b/selfdrive/ui/translations/main_de.ts
@@ -226,8 +226,8 @@
<translation>Ausschalten</translation>
</message>
<message>
- <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 8° down. openpilot is continuously calibrating, resetting is rarely required.</source>
- <translation>Damit Openpilot funktioniert, darf die Installationsposition nicht mehr als 4° nach rechts/links, 5° nach oben und 8° nach unten abweichen. Openpilot kalibriert sich durchgehend, ein Zurücksetzen ist selten notwendig.</translation>
+ <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 9° down. openpilot is continuously calibrating, resetting is rarely required.</source>
+ <translation>Damit Openpilot funktioniert, darf die Installationsposition nicht mehr als 4° nach rechts/links, 5° nach oben und 9° nach unten abweichen. Openpilot kalibriert sich durchgehend, ein Zurücksetzen ist selten notwendig.</translation>
</message>
<message>
<source> Your device is pointed %1° %2 and %3° %4.</source>
diff --git a/selfdrive/ui/translations/main_fr.ts b/selfdrive/ui/translations/main_fr.ts
index 79c4ae55964d04..95afab46a72328 100644
--- a/selfdrive/ui/translations/main_fr.ts
+++ b/selfdrive/ui/translations/main_fr.ts
@@ -234,8 +234,8 @@
<translation>Éteindre</translation>
</message>
<message>
- <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 8° down. openpilot is continuously calibrating, resetting is rarely required.</source>
- <translation>openpilot nécessite que l'appareil soit monté à 4° à gauche ou à droite et à 5° vers le haut ou 8° vers le bas. openpilot se calibre en continu, la réinitialisation est rarement nécessaire.</translation>
+ <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 9° down. openpilot is continuously calibrating, resetting is rarely required.</source>
+ <translation>openpilot nécessite que l'appareil soit monté à 4° à gauche ou à droite et à 5° vers le haut ou 9° vers le bas. openpilot se calibre en continu, la réinitialisation est rarement nécessaire.</translation>
</message>
<message>
<source> Your device is pointed %1° %2 and %3° %4.</source>
diff --git a/selfdrive/ui/translations/main_ja.ts b/selfdrive/ui/translations/main_ja.ts
index 16595f8ebf82dc..8abd794c608926 100644
--- a/selfdrive/ui/translations/main_ja.ts
+++ b/selfdrive/ui/translations/main_ja.ts
@@ -226,8 +226,8 @@
<translation>電源を切る</translation>
</message>
<message>
- <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 8° down. openpilot is continuously calibrating, resetting is rarely required.</source>
- <translation>openpilotの本体は、左右4°以内、上5°、下8°以内の角度で取付ける必要があります。継続してキャリブレーションを続けているので、手動でリセットを行う必要はほぼありません。</translation>
+ <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 9° down. openpilot is continuously calibrating, resetting is rarely required.</source>
+ <translation>openpilotの本体は、左右4°以内、上5°、下9°以内の角度で取付ける必要があります。継続してキャリブレーションを続けているので、手動でリセットを行う必要はほぼありません。</translation>
</message>
<message>
<source> Your device is pointed %1° %2 and %3° %4.</source>
diff --git a/selfdrive/ui/translations/main_ko.ts b/selfdrive/ui/translations/main_ko.ts
index cbd8e668ac1568..11a9ec9d09d483 100644
--- a/selfdrive/ui/translations/main_ko.ts
+++ b/selfdrive/ui/translations/main_ko.ts
@@ -226,8 +226,8 @@
<translation>전원 종료</translation>
</message>
<message>
- <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 8° down. openpilot is continuously calibrating, resetting is rarely required.</source>
- <translation>openpilot 장치는 좌우측 4° 이내, 위쪽 5° 아래쪽 8° 이내로 장착되어야 합니다. openpilot은 지속적으로 보정되며 재설정은 거의 필요하지 않습니다.</translation>
+ <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 9° down. openpilot is continuously calibrating, resetting is rarely required.</source>
+ <translation>openpilot 장치는 좌우측 4° 이내, 위쪽 5° 아래쪽 9° 이내로 장착되어야 합니다. openpilot은 지속적으로 보정되며 재설정은 거의 필요하지 않습니다.</translation>
</message>
<message>
<source> Your device is pointed %1° %2 and %3° %4.</source>
diff --git a/selfdrive/ui/translations/main_pt-BR.ts b/selfdrive/ui/translations/main_pt-BR.ts
index a55d31034ec77e..3f429c2acfaa4b 100644
--- a/selfdrive/ui/translations/main_pt-BR.ts
+++ b/selfdrive/ui/translations/main_pt-BR.ts
@@ -226,8 +226,8 @@
<translation>Desligar</translation>
</message>
<message>
- <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 8° down. openpilot is continuously calibrating, resetting is rarely required.</source>
- <translation>O openpilot requer que o dispositivo seja montado dentro de 4° esquerda ou direita e dentro de 5° para cima ou 8° para baixo. O openpilot está continuamente calibrando, resetar raramente é necessário.</translation>
+ <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 9° down. openpilot is continuously calibrating, resetting is rarely required.</source>
+ <translation>O openpilot requer que o dispositivo seja montado dentro de 4° esquerda ou direita e dentro de 5° para cima ou 9° para baixo. O openpilot está continuamente calibrando, resetar raramente é necessário.</translation>
</message>
<message>
<source> Your device is pointed %1° %2 and %3° %4.</source>
diff --git a/selfdrive/ui/translations/main_th.ts b/selfdrive/ui/translations/main_th.ts
index abc6210956e27a..5b6ecea49d2fb0 100644
--- a/selfdrive/ui/translations/main_th.ts
+++ b/selfdrive/ui/translations/main_th.ts
@@ -226,8 +226,8 @@
<translation>ปิดเครื่อง</translation>
</message>
<message>
- <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 8° down. openpilot is continuously calibrating, resetting is rarely required.</source>
- <translation>openpilot กำหนดให้ติดตั้งอุปกรณ์ โดยสามารถเอียงด้านซ้ายหรือขวาไม่เกิน 4° และเอียงขึ้นด้านบนไม่เกิน 5° หรือเอียงลงด้านล่างไม่เกิน 8° openpilot ทำการคาลิเบรทอย่างต่อเนื่อง แทบจะไม่จำเป็นต้องทำการรีเซ็ตการคาลิเบรท</translation>
+ <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 9° down. openpilot is continuously calibrating, resetting is rarely required.</source>
+ <translation>openpilot กำหนดให้ติดตั้งอุปกรณ์ โดยสามารถเอียงด้านซ้ายหรือขวาไม่เกิน 4° และเอียงขึ้นด้านบนไม่เกิน 5° หรือเอียงลงด้านล่างไม่เกิน 9° openpilot ทำการคาลิเบรทอย่างต่อเนื่อง แทบจะไม่จำเป็นต้องทำการรีเซ็ตการคาลิเบรท</translation>
</message>
<message>
<source> Your device is pointed %1° %2 and %3° %4.</source>
diff --git a/selfdrive/ui/translations/main_tr.ts b/selfdrive/ui/translations/main_tr.ts
index febded8f597761..97e1282c68df5b 100644
--- a/selfdrive/ui/translations/main_tr.ts
+++ b/selfdrive/ui/translations/main_tr.ts
@@ -226,8 +226,8 @@
<translation>Sistemi kapat</translation>
</message>
<message>
- <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 8° down. openpilot is continuously calibrating, resetting is rarely required.</source>
- <translation>openpilot, cihazın 4° sola veya 5° yukarı yada 8° aşağı bakıcak şekilde monte edilmesi gerekmektedir. openpilot sürekli kendisini kalibre edilmektedir ve nadiren sıfırlama gerebilir.</translation>
+ <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 9° down. openpilot is continuously calibrating, resetting is rarely required.</source>
+ <translation>openpilot, cihazın 4° sola veya 5° yukarı yada 9° aşağı bakıcak şekilde monte edilmesi gerekmektedir. openpilot sürekli kendisini kalibre edilmektedir ve nadiren sıfırlama gerebilir.</translation>
</message>
<message>
<source> Your device is pointed %1° %2 and %3° %4.</source>
diff --git a/selfdrive/ui/translations/main_zh-CHS.ts b/selfdrive/ui/translations/main_zh-CHS.ts
index 040dae0b30f38d..9253d922f5db26 100644
--- a/selfdrive/ui/translations/main_zh-CHS.ts
+++ b/selfdrive/ui/translations/main_zh-CHS.ts
@@ -226,8 +226,8 @@
<translation>关机</translation>
</message>
<message>
- <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 8° down. openpilot is continuously calibrating, resetting is rarely required.</source>
- <translation>openpilot要求设备安装的偏航角在左4°和右4°之间,俯仰角在上5°和下8°之间。一般来说,openpilot会持续更新校准,很少需要重置。</translation>
+ <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 9° down. openpilot is continuously calibrating, resetting is rarely required.</source>
+ <translation>openpilot要求设备安装的偏航角在左4°和右4°之间,俯仰角在上5°和下9°之间。一般来说,openpilot会持续更新校准,很少需要重置。</translation>
</message>
<message>
<source> Your device is pointed %1° %2 and %3° %4.</source>
diff --git a/selfdrive/ui/translations/main_zh-CHT.ts b/selfdrive/ui/translations/main_zh-CHT.ts
index 0ffef3bb7b3948..3a2040bc3b72de 100644
--- a/selfdrive/ui/translations/main_zh-CHT.ts
+++ b/selfdrive/ui/translations/main_zh-CHT.ts
@@ -226,8 +226,8 @@
<translation>關機</translation>
</message>
<message>
- <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 8° down. openpilot is continuously calibrating, resetting is rarely required.</source>
- <translation>openpilot 需要將設備固定在左右偏差 4° 以內,朝上偏差 5° 以內或朝下偏差 8° 以內。鏡頭在後台會持續自動校準,很少有需要重設的情況。</translation>
+ <source>openpilot requires the device to be mounted within 4° left or right and within 5° up or 9° down. openpilot is continuously calibrating, resetting is rarely required.</source>
+ <translation>openpilot 需要將設備固定在左右偏差 4° 以內,朝上偏差 5° 以內或朝下偏差 9° 以內。鏡頭在後台會持續自動校準,很少有需要重設的情況。</translation>
</message>
<message>
<source> Your device is pointed %1° %2 and %3° %4.</source>
|
It's 9.7, but we round down. https://github.com/commaai/openpilot/pull/28255
|
https://api.github.com/repos/commaai/openpilot/pulls/30014
|
2023-09-22T22:47:34Z
|
2023-09-23T04:38:07Z
|
2023-09-23T04:38:07Z
|
2023-09-23T04:38:08Z
| 3,906
|
commaai/openpilot
| 9,353
|
Fix for #4264: --line-ranges formats entire file when ranges are at EOF
|
diff --git a/CHANGES.md b/CHANGES.md
index 1d20a4c9210..4f458b52b8e 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -15,6 +15,8 @@
of Black would incorrectly format the contents of certain unusual f-strings containing
nested strings with the same quote type. Now, Black will crash on such strings until
support for the new f-string syntax is implemented. (#4270)
+- Fixed a bug where line-ranges exceeding the last code line would not work as expected
+ (#4273)
### Preview style
diff --git a/src/black/__init__.py b/src/black/__init__.py
index da884e6027e..6f0e128f56c 100644
--- a/src/black/__init__.py
+++ b/src/black/__init__.py
@@ -84,7 +84,12 @@
parse_ast,
stringify_ast,
)
-from black.ranges import adjusted_lines, convert_unchanged_lines, parse_line_ranges
+from black.ranges import (
+ adjusted_lines,
+ convert_unchanged_lines,
+ parse_line_ranges,
+ sanitized_lines,
+)
from black.report import Changed, NothingChanged, Report
from black.trans import iter_fexpr_spans
from blib2to3.pgen2 import token
@@ -1220,6 +1225,10 @@ def f(
hey
"""
+ if lines:
+ lines = sanitized_lines(lines, src_contents)
+ if not lines:
+ return src_contents # Nothing to format
dst_contents = _format_str_once(src_contents, mode=mode, lines=lines)
# Forced second pass to work around optional trailing commas (becoming
# forced trailing commas on pass 2) interacting differently with optional
diff --git a/src/black/ranges.py b/src/black/ranges.py
index 06fa8790554..1ecaf7b0aed 100644
--- a/src/black/ranges.py
+++ b/src/black/ranges.py
@@ -45,6 +45,34 @@ def is_valid_line_range(lines: Tuple[int, int]) -> bool:
return not lines or lines[0] <= lines[1]
+def sanitized_lines(
+ lines: Collection[Tuple[int, int]], src_contents: str
+) -> Collection[Tuple[int, int]]:
+ """Returns the valid line ranges for the given source.
+
+ This removes ranges that are entirely outside the valid lines.
+
+ Other ranges are normalized so that the start values are at least 1 and the
+ end values are at most the (1-based) index of the last source line.
+ """
+ if not src_contents:
+ return []
+ good_lines = []
+ src_line_count = src_contents.count("\n")
+ if not src_contents.endswith("\n"):
+ src_line_count += 1
+ for start, end in lines:
+ if start > src_line_count:
+ continue
+ # line-ranges are 1-based
+ start = max(start, 1)
+ if end < start:
+ continue
+ end = min(end, src_line_count)
+ good_lines.append((start, end))
+ return good_lines
+
+
def adjusted_lines(
lines: Collection[Tuple[int, int]],
original_source: str,
diff --git a/tests/data/cases/line_ranges_exceeding_end.py b/tests/data/cases/line_ranges_exceeding_end.py
new file mode 100644
index 00000000000..8f17491f684
--- /dev/null
+++ b/tests/data/cases/line_ranges_exceeding_end.py
@@ -0,0 +1,36 @@
+# flags: --line-ranges=6-1000
+# NOTE: If you need to modify this file, pay special attention to the --line-ranges=
+# flag above as it's formatting specifically these lines.
+def foo1(parameter_1, parameter_2, parameter_3, parameter_4, parameter_5, parameter_6, parameter_7): pass
+def foo2(parameter_1, parameter_2, parameter_3, parameter_4, parameter_5, parameter_6, parameter_7): pass
+def foo3(parameter_1, parameter_2, parameter_3, parameter_4, parameter_5, parameter_6, parameter_7): pass
+def foo4(parameter_1, parameter_2, parameter_3, parameter_4, parameter_5, parameter_6, parameter_7): pass
+
+# output
+# flags: --line-ranges=6-1000
+# NOTE: If you need to modify this file, pay special attention to the --line-ranges=
+# flag above as it's formatting specifically these lines.
+def foo1(parameter_1, parameter_2, parameter_3, parameter_4, parameter_5, parameter_6, parameter_7): pass
+def foo2(parameter_1, parameter_2, parameter_3, parameter_4, parameter_5, parameter_6, parameter_7): pass
+def foo3(
+ parameter_1,
+ parameter_2,
+ parameter_3,
+ parameter_4,
+ parameter_5,
+ parameter_6,
+ parameter_7,
+):
+ pass
+
+
+def foo4(
+ parameter_1,
+ parameter_2,
+ parameter_3,
+ parameter_4,
+ parameter_5,
+ parameter_6,
+ parameter_7,
+):
+ pass
diff --git a/tests/data/cases/line_ranges_outside_source.py b/tests/data/cases/line_ranges_outside_source.py
new file mode 100644
index 00000000000..edec9015ff8
--- /dev/null
+++ b/tests/data/cases/line_ranges_outside_source.py
@@ -0,0 +1,7 @@
+# flags: --line-ranges=5000-6000
+# NOTE: If you need to modify this file, pay special attention to the --line-ranges=
+# flag above as it's formatting specifically these lines, in this case none.
+def foo1(parameter_1, parameter_2, parameter_3, parameter_4, parameter_5, parameter_6, parameter_7): pass
+def foo2(parameter_1, parameter_2, parameter_3, parameter_4, parameter_5, parameter_6, parameter_7): pass
+def foo3(parameter_1, parameter_2, parameter_3, parameter_4, parameter_5, parameter_6, parameter_7): pass
+def foo4(parameter_1, parameter_2, parameter_3, parameter_4, parameter_5, parameter_6, parameter_7): pass
diff --git a/tests/test_ranges.py b/tests/test_ranges.py
index d9fa9171a7f..a3028babf50 100644
--- a/tests/test_ranges.py
+++ b/tests/test_ranges.py
@@ -4,7 +4,7 @@
import pytest
-from black.ranges import adjusted_lines
+from black.ranges import adjusted_lines, sanitized_lines
@pytest.mark.parametrize(
@@ -183,3 +183,67 @@ def test_diffs(lines: List[Tuple[int, int]], adjusted: List[Tuple[int, int]]) ->
12. # last line changed
"""
assert adjusted == adjusted_lines(lines, original_source, modified_source)
+
+
[email protected](
+ "lines,sanitized",
+ [
+ (
+ [(1, 4)],
+ [(1, 4)],
+ ),
+ (
+ [(2, 3)],
+ [(2, 3)],
+ ),
+ (
+ [(2, 10)],
+ [(2, 4)],
+ ),
+ (
+ [(0, 3)],
+ [(1, 3)],
+ ),
+ (
+ [(0, 10)],
+ [(1, 4)],
+ ),
+ (
+ [(-2, 3)],
+ [(1, 3)],
+ ),
+ (
+ [(0, 0)],
+ [],
+ ),
+ (
+ [(-2, -1)],
+ [],
+ ),
+ (
+ [(-1, 0)],
+ [],
+ ),
+ (
+ [(3, 1), (1, 3), (5, 6)],
+ [(1, 3)],
+ ),
+ ],
+)
+def test_sanitize(
+ lines: List[Tuple[int, int]], sanitized: List[Tuple[int, int]]
+) -> None:
+ source = """\
+1. import re
+2. def func(arg1,
+3. arg2, arg3):
+4. pass
+"""
+ assert sanitized == sanitized_lines(lines, source)
+
+ source_no_trailing_nl = """\
+ 1. import re
+ 2. def func(arg1,
+ 3. arg2, arg3):
+ 4. pass"""
+ assert sanitized == sanitized_lines(lines, source_no_trailing_nl)
|
### Description
<!-- Good things to put here include: reasoning for the change (please link
any relevant issues!), any noteworthy (or hacky) choices to be aware of,
or what the problem resolved here looked like ... we won't mind a ranty
story :) -->
This fixes #4264. The issue was that the empty last line does not count as a line to `adjusted_lines` because it is not in the list returned by `str.splitlines`. Since `adjusted_lines` is only called on the second pass of `_format_str_once` in `format_str`, the first pass would format the code correctly, then the "invalid" line would get removed from `lines`, and the second pass would format the whole code.
I added a small change to `adjusted_lines` to cap the end value of any line tuple. For example, `--line-ranges 1-100` gets reduced to `(1, 4)` if the code is only four lines long.
One could change `if end > original_line_count` to `if end == original_line_count + 1` to only allow this one additional line, but I think allowing an oversized range to just cover the rest of the code is not surprising behavior; slices act similarly.
I also added a call to `adjusted_lines` with the unmodified source code before the first pass of `_format_str_once`. This is an additional computational expense but ensures consistency.
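As a quick illustration of the clamping described above, using the `sanitized_lines` helper added in this PR (the sample source is made up):
```python
from black.ranges import sanitized_lines

src = "import re\ndef func(arg1,\n         arg2, arg3):\n    pass\n"  # 4 lines
# Ranges are 1-based and inclusive; end values are clamped to the last line.
print(sanitized_lines([(1, 100)], src))      # [(1, 4)]
print(sanitized_lines([(5000, 6000)], src))  # []  -> nothing to format
```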
### Checklist - did you ...
<!-- If any of the following items aren't relevant for your contribution
please still tick them so we know you've gone through the checklist.
All user-facing changes should get an entry. Otherwise, signal to us
this should get the magical label to silence the CHANGELOG entry check.
Tests are required for bugfixes and new features. Documentation changes
are necessary for formatting and most enhancement changes. -->
- [x] Add an entry in `CHANGES.md` if necessary?
- [x] Add / update tests if necessary?
- [-] Add new / update outdated documentation?
<!-- Just as a reminder, everyone in all psf/black spaces including PRs
must follow the PSF Code of Conduct (link below).
Finally, once again thanks for your time and effort. If you have any
feedback in regards to your experience contributing here, please
let us know!
Helpful links:
PSF COC: https://www.python.org/psf/conduct/
Contributing docs: https://black.readthedocs.io/en/latest/contributing/index.html
Chat on Python Discord: https://discord.gg/RtVdv86PrH -->
|
https://api.github.com/repos/psf/black/pulls/4273
|
2024-03-12T16:43:54Z
|
2024-03-15T18:18:48Z
|
2024-03-15T18:18:48Z
|
2024-03-16T10:08:40Z
| 2,027
|
psf/black
| 23,799
|