Dataset schema: code (string, 66 to 870k chars), docstring (string, 19 to 26.7k chars), func_name (string, 1 to 138 chars), language (1 class), repo (string, 7 to 68 chars), path (string, 5 to 324 chars), url (string, 46 to 389 chars), license (7 classes). Each record below shows the function source (docstring included) followed by a single metadata line: func_name | language | repo | path | url | license.
def get_feature_names_out(self, input_features=None):
    """Get output feature names for transformation.

    Parameters
    ----------
    input_features : array-like of str or None, default=None
        Only used to validate feature names with the names seen in :meth:`fit`.

    Returns
    -------
    feature_names_out : ndarray of str objects
        Transformed feature names.
    """
    # Note that passing attributes="n_features_in_" forces check_is_fitted
    # to check if the attribute is present. Otherwise it will pass on this
    # stateless estimator (requires_fit=False)
    check_is_fitted(self, attributes="n_features_in_")
    input_features = _check_feature_names_in(
        self, input_features, generate_names=True
    )
    est_name = self.__class__.__name__.lower()

    names_list = [f"{est_name}_{name}_sqrt" for name in input_features]

    for j in range(1, self.sample_steps):
        cos_names = [f"{est_name}_{name}_cos{j}" for name in input_features]
        sin_names = [f"{est_name}_{name}_sin{j}" for name in input_features]
        names_list.extend(cos_names + sin_names)

    return np.asarray(names_list, dtype=object)
get_feature_names_out | python | scikit-learn/scikit-learn | sklearn/kernel_approximation.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/kernel_approximation.py | BSD-3-Clause
def fit(self, X, y=None):
    """Fit estimator to data.

    Samples a subset of training points, computes kernel
    on these and computes normalization matrix.

    Parameters
    ----------
    X : array-like, shape (n_samples, n_features)
        Training data, where `n_samples` is the number of samples
        and `n_features` is the number of features.

    y : array-like, shape (n_samples,) or (n_samples, n_outputs), \
            default=None
        Target values (None for unsupervised transformations).

    Returns
    -------
    self : object
        Returns the instance itself.
    """
    X = validate_data(self, X, accept_sparse="csr")
    rnd = check_random_state(self.random_state)
    n_samples = X.shape[0]

    # get basis vectors
    if self.n_components > n_samples:
        # XXX should we just bail?
        n_components = n_samples
        warnings.warn(
            "n_components > n_samples. This is not possible.\n"
            "n_components was set to n_samples, which results"
            " in inefficient evaluation of the full kernel."
        )
    else:
        n_components = self.n_components

    n_components = min(n_samples, n_components)
    inds = rnd.permutation(n_samples)
    basis_inds = inds[:n_components]
    basis = X[basis_inds]

    basis_kernel = pairwise_kernels(
        basis,
        metric=self.kernel,
        filter_params=True,
        n_jobs=self.n_jobs,
        **self._get_kernel_params(),
    )

    # sqrt of kernel matrix on basis vectors
    U, S, V = svd(basis_kernel)
    S = np.maximum(S, 1e-12)
    self.normalization_ = np.dot(U / np.sqrt(S), V)
    self.components_ = basis
    self.component_indices_ = basis_inds
    self._n_features_out = n_components
    return self
fit | python | scikit-learn/scikit-learn | sklearn/kernel_approximation.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/kernel_approximation.py | BSD-3-Clause
def transform(self, X):
    """Apply feature map to X.

    Computes an approximate feature map using the kernel
    between some training points and X.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        Data to transform.

    Returns
    -------
    X_transformed : ndarray of shape (n_samples, n_components)
        Transformed data.
    """
    check_is_fitted(self)
    X = validate_data(self, X, accept_sparse="csr", reset=False)

    kernel_params = self._get_kernel_params()
    embedded = pairwise_kernels(
        X,
        self.components_,
        metric=self.kernel,
        filter_params=True,
        n_jobs=self.n_jobs,
        **kernel_params,
    )
    return np.dot(embedded, self.normalization_.T)
transform | python | scikit-learn/scikit-learn | sklearn/kernel_approximation.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/kernel_approximation.py | BSD-3-Clause
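The two Nystroem methods above are normally used as a fit/transform pair in front of a linear model. A minimal usage sketch (the data and hyperparameters are illustrative, not from the source):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=200, random_state=0)

# Approximate an RBF kernel with a 50-dimensional feature map,
# then train a linear model on the transformed features.
feature_map = Nystroem(kernel="rbf", gamma=0.2, n_components=50, random_state=0)
X_features = feature_map.fit_transform(X)
clf = SGDClassifier(max_iter=1000, random_state=0).fit(X_features, y)
print(clf.score(X_features, y))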
def fit(self, X, y, sample_weight=None):
    """Fit Kernel Ridge regression model.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Training data. If kernel == "precomputed" this is instead
        a precomputed kernel matrix, of shape (n_samples, n_samples).

    y : array-like of shape (n_samples,) or (n_samples, n_targets)
        Target values.

    sample_weight : float or array-like of shape (n_samples,), default=None
        Individual weights for each sample, ignored if None is passed.

    Returns
    -------
    self : object
        Returns the instance itself.
    """
    # Convert data
    X, y = validate_data(
        self, X, y, accept_sparse=("csr", "csc"), multi_output=True, y_numeric=True
    )
    if sample_weight is not None and not isinstance(sample_weight, float):
        sample_weight = _check_sample_weight(sample_weight, X)

    K = self._get_kernel(X)
    alpha = np.atleast_1d(self.alpha)

    ravel = False
    if len(y.shape) == 1:
        y = y.reshape(-1, 1)
        ravel = True

    copy = self.kernel == "precomputed"
    self.dual_coef_ = _solve_cholesky_kernel(K, y, alpha, sample_weight, copy)
    if ravel:
        self.dual_coef_ = self.dual_coef_.ravel()

    self.X_fit_ = X

    return self
fit | python | scikit-learn/scikit-learn | sklearn/kernel_ridge.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/kernel_ridge.py | BSD-3-Clause
def predict(self, X):
    """Predict using the kernel ridge model.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Samples. If kernel == "precomputed" this is instead a
        precomputed kernel matrix, shape = [n_samples,
        n_samples_fitted], where n_samples_fitted is the number of
        samples used in the fitting for this estimator.

    Returns
    -------
    C : ndarray of shape (n_samples,) or (n_samples, n_targets)
        Returns predicted values.
    """
    check_is_fitted(self)
    X = validate_data(self, X, accept_sparse=("csr", "csc"), reset=False)
    K = self._get_kernel(X, self.X_fit_)
    return np.dot(K, self.dual_coef_)
predict | python | scikit-learn/scikit-learn | sklearn/kernel_ridge.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/kernel_ridge.py | BSD-3-Clause
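The KernelRidge fit and predict methods combine into the usual estimator workflow. A short sketch on synthetic data (the alpha and gamma values are illustrative and would normally be tuned, e.g. with GridSearchCV):

import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X).ravel() + 0.1 * rng.randn(100)

# Fit a kernel ridge model with an RBF kernel and predict on the training inputs.
model = KernelRidge(alpha=1.0, kernel="rbf", gamma=0.5).fit(X, y)
y_pred = model.predict(X)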
def _predict_binary(estimator, X):
    """Make predictions using a single binary estimator."""
    if is_regressor(estimator):
        return estimator.predict(X)
    try:
        score = np.ravel(estimator.decision_function(X))
    except (AttributeError, NotImplementedError):
        # probabilities of the positive class
        score = estimator.predict_proba(X)[:, 1]
    return score
_predict_binary | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
def _threshold_for_binary_predict(estimator):
    """Threshold for predictions from binary estimator."""
    if hasattr(estimator, "decision_function") and is_classifier(estimator):
        return 0.0
    else:
        # predict_proba threshold
        return 0.5
_threshold_for_binary_predict | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
def _estimators_has(attr):
    """Check if self.estimator or self.estimators_[0] has attr.

    If `self.estimators_[0]` has the attr, then it's safe to assume that other
    estimators have it too. We raise the original `AttributeError` if `attr`
    does not exist. This function is used together with `available_if`.
    """

    def check(self):
        if hasattr(self, "estimators_"):
            getattr(self.estimators_[0], attr)
        else:
            getattr(self.estimator, attr)
        return True

    return check
_estimators_has | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
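For context, the check returned by `_estimators_has` is meant to be passed to `available_if`, which exposes a method on a meta-estimator only when the check succeeds. A sketch of the pattern (the wrapper class is hypothetical; `available_if` is the public decorator from `sklearn.utils.metaestimators`):

from sklearn.utils.metaestimators import available_if

class HypotheticalWrapper:
    """Hypothetical meta-estimator showing how the helper is consumed."""

    def __init__(self, estimator):
        self.estimator = estimator

    # `predict_proba` only appears on the wrapper when the wrapped
    # (or first fitted) estimator has it; otherwise hasattr() is False.
    @available_if(_estimators_has("predict_proba"))
    def predict_proba(self, X):
        return self.estimator.predict_proba(X)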
def fit(self, X, y, **fit_params):
    """Fit underlying estimators.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Data.

    y : {array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_classes)
        Multi-class targets. An indicator matrix turns on multilabel
        classification.

    **fit_params : dict
        Parameters passed to the ``estimator.fit`` method of each
        sub-estimator.

        .. versionadded:: 1.4
            Only available if `enable_metadata_routing=True`. See
            :ref:`Metadata Routing User Guide <metadata_routing>` for
            more details.

    Returns
    -------
    self : object
        Instance of fitted estimator.
    """
    _raise_for_params(fit_params, self, "fit")

    routed_params = process_routing(
        self,
        "fit",
        **fit_params,
    )

    # A sparse LabelBinarizer, with sparse_output=True, has been shown to
    # outperform or match a dense label binarizer in all cases and has also
    # resulted in less or equal memory consumption in the fit_ovr function
    # overall.
    self.label_binarizer_ = LabelBinarizer(sparse_output=True)
    Y = self.label_binarizer_.fit_transform(y)
    Y = Y.tocsc()
    self.classes_ = self.label_binarizer_.classes_
    columns = (col.toarray().ravel() for col in Y.T)
    # In cases where individual estimators are very fast to train, setting
    # n_jobs > 1 can result in slower performance due to the overhead of
    # spawning threads. See joblib issue #112.
    self.estimators_ = Parallel(n_jobs=self.n_jobs, verbose=self.verbose)(
        delayed(_fit_binary)(
            self.estimator,
            X,
            column,
            fit_params=routed_params.estimator.fit,
            classes=[
                "not %s" % self.label_binarizer_.classes_[i],
                self.label_binarizer_.classes_[i],
            ],
        )
        for i, column in enumerate(columns)
    )

    if hasattr(self.estimators_[0], "n_features_in_"):
        self.n_features_in_ = self.estimators_[0].n_features_in_
    if hasattr(self.estimators_[0], "feature_names_in_"):
        self.feature_names_in_ = self.estimators_[0].feature_names_in_

    return self
fit | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
def predict(self, X):
    """Predict multi-class targets using underlying estimators.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Data.

    Returns
    -------
    y : {array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_classes)
        Predicted multi-class targets.
    """
    check_is_fitted(self)

    n_samples = _num_samples(X)
    if self.label_binarizer_.y_type_ == "multiclass":
        maxima = np.empty(n_samples, dtype=float)
        maxima.fill(-np.inf)
        argmaxima = np.zeros(n_samples, dtype=int)
        for i, e in enumerate(self.estimators_):
            pred = _predict_binary(e, X)
            np.maximum(maxima, pred, out=maxima)
            argmaxima[maxima == pred] = i
        return self.classes_[argmaxima]
    else:
        thresh = _threshold_for_binary_predict(self.estimators_[0])
        indices = array.array("i")
        indptr = array.array("i", [0])
        for e in self.estimators_:
            indices.extend(np.where(_predict_binary(e, X) > thresh)[0])
            indptr.append(len(indices))
        data = np.ones(len(indices), dtype=int)
        indicator = sp.csc_matrix(
            (data, indices, indptr), shape=(n_samples, len(self.estimators_))
        )
        return self.label_binarizer_.inverse_transform(indicator)
predict | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
def predict_proba(self, X):
    """Probability estimates.

    The returned estimates for all classes are ordered by label of classes.

    Note that in the multilabel case, each sample can have any number of
    labels. This returns the marginal probability that the given sample has
    the label in question. For example, it is entirely consistent that two
    labels both have a 90% probability of applying to a given sample.

    In the single label multiclass case, the rows of the returned matrix
    sum to 1.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Input data.

    Returns
    -------
    T : array-like of shape (n_samples, n_classes)
        Returns the probability of the sample for each class in the model,
        where classes are ordered as they are in `self.classes_`.
    """
    check_is_fitted(self)
    # Y[i, j] gives the probability that sample i has the label j.
    # In the multi-label case, these are not disjoint.
    Y = np.array([e.predict_proba(X)[:, 1] for e in self.estimators_]).T

    if len(self.estimators_) == 1:
        # Only one estimator, but we still want to return probabilities
        # for two classes.
        Y = np.concatenate(((1 - Y), Y), axis=1)

    if not self.multilabel_:
        # Then, (nonzero) sample probability distributions should be normalized.
        row_sums = np.sum(Y, axis=1)[:, np.newaxis]
        np.divide(Y, row_sums, out=Y, where=row_sums != 0)
    return Y
predict_proba | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
def decision_function(self, X):
    """Decision function for the OneVsRestClassifier.

    Return the distance of each sample from the decision boundary for each
    class. This can only be used with estimators which implement the
    `decision_function` method.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        Input data.

    Returns
    -------
    T : array-like of shape (n_samples, n_classes) or (n_samples,) for \
            binary classification.
        Result of calling `decision_function` on the final estimator.

        .. versionchanged:: 0.19
            output shape changed to ``(n_samples,)`` to conform
            to scikit-learn conventions for binary classification.
    """
    check_is_fitted(self)
    if len(self.estimators_) == 1:
        return self.estimators_[0].decision_function(X)
    return np.array(
        [est.decision_function(X).ravel() for est in self.estimators_]
    ).T
decision_function | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
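Taken together, the OneVsRestClassifier methods above support the standard workflow: one binary estimator per class, with predictions resolved across them. A small sketch on iris (the base estimator choice is illustrative):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, y = load_iris(return_X_y=True)
ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
print(len(ovr.estimators_))                  # one estimator per class: 3
print(ovr.predict(X[:3]))
print(ovr.predict_proba(X[:3]).sum(axis=1))  # rows sum to 1 in the multiclass case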
def __sklearn_tags__(self):
    """Indicate if wrapped estimator is using a precomputed Gram matrix"""
    tags = super().__sklearn_tags__()
    tags.input_tags.pairwise = get_tags(self.estimator).input_tags.pairwise
    tags.input_tags.sparse = get_tags(self.estimator).input_tags.sparse
    return tags
__sklearn_tags__ | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
def get_metadata_routing(self):
    """Get metadata routing of this object.

    Please check :ref:`User Guide <metadata_routing>` on how the routing
    mechanism works.

    .. versionadded:: 1.4

    Returns
    -------
    routing : MetadataRouter
        A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating
        routing information.
    """
    router = (
        MetadataRouter(owner=self.__class__.__name__)
        .add_self_request(self)
        .add(
            estimator=self.estimator,
            method_mapping=MethodMapping()
            .add(caller="fit", callee="fit")
            .add(caller="partial_fit", callee="partial_fit"),
        )
    )
    return router
get_metadata_routing | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
def _fit_ovo_binary(estimator, X, y, i, j, fit_params):
    """Fit a single binary estimator (one-vs-one)."""
    cond = np.logical_or(y == i, y == j)
    y = y[cond]
    y_binary = np.empty(y.shape, int)
    y_binary[y == i] = 0
    y_binary[y == j] = 1
    indcond = np.arange(_num_samples(X))[cond]

    fit_params_subset = _check_method_params(X, params=fit_params, indices=indcond)
    return (
        _fit_binary(
            estimator,
            _safe_split(estimator, X, None, indices=indcond)[0],
            y_binary,
            fit_params=fit_params_subset,
            classes=[i, j],
        ),
        indcond,
    )
_fit_ovo_binary | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
def _partial_fit_ovo_binary(estimator, X, y, i, j, partial_fit_params):
    """Partially fit a single binary estimator (one-vs-one)."""
    cond = np.logical_or(y == i, y == j)
    y = y[cond]
    if len(y) != 0:
        y_binary = np.zeros_like(y)
        y_binary[y == j] = 1
        partial_fit_params_subset = _check_method_params(
            X, params=partial_fit_params, indices=cond
        )
        return _partial_fit_binary(
            estimator, X[cond], y_binary, partial_fit_params=partial_fit_params_subset
        )
    return estimator
_partial_fit_ovo_binary | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
def fit(self, X, y, **fit_params):
    """Fit underlying estimators.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Data.

    y : array-like of shape (n_samples,)
        Multi-class targets.

    **fit_params : dict
        Parameters passed to the ``estimator.fit`` method of each
        sub-estimator.

        .. versionadded:: 1.4
            Only available if `enable_metadata_routing=True`. See
            :ref:`Metadata Routing User Guide <metadata_routing>` for
            more details.

    Returns
    -------
    self : object
        The fitted underlying estimator.
    """
    _raise_for_params(fit_params, self, "fit")

    routed_params = process_routing(
        self,
        "fit",
        **fit_params,
    )

    # We need to validate the data because we do a safe_indexing later.
    X, y = validate_data(
        self, X, y, accept_sparse=["csr", "csc"], ensure_all_finite=False
    )
    check_classification_targets(y)

    self.classes_ = np.unique(y)
    if len(self.classes_) == 1:
        raise ValueError(
            "OneVsOneClassifier can not be fit when only one class is present."
        )
    n_classes = self.classes_.shape[0]
    estimators_indices = list(
        zip(
            *(
                Parallel(n_jobs=self.n_jobs)(
                    delayed(_fit_ovo_binary)(
                        self.estimator,
                        X,
                        y,
                        self.classes_[i],
                        self.classes_[j],
                        fit_params=routed_params.estimator.fit,
                    )
                    for i in range(n_classes)
                    for j in range(i + 1, n_classes)
                )
            )
        )
    )

    self.estimators_ = estimators_indices[0]

    pairwise = self.__sklearn_tags__().input_tags.pairwise
    self.pairwise_indices_ = estimators_indices[1] if pairwise else None

    return self
fit | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
def predict(self, X):
    """Estimate the best class label for each sample in X.

    This is implemented as ``argmax(decision_function(X), axis=1)`` which
    will return the label of the class with most votes by estimators
    predicting the outcome of a decision for each possible class pair.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Data.

    Returns
    -------
    y : numpy array of shape [n_samples]
        Predicted multi-class targets.
    """
    Y = self.decision_function(X)
    if self.n_classes_ == 2:
        thresh = _threshold_for_binary_predict(self.estimators_[0])
        return self.classes_[(Y > thresh).astype(int)]
    return self.classes_[Y.argmax(axis=1)]
predict | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
def decision_function(self, X):
    """Decision function for the OneVsOneClassifier.

    The decision values for the samples are computed by adding the
    normalized sum of pair-wise classification confidence levels to the
    votes in order to disambiguate between the decision values when the
    votes for all the classes are equal leading to a tie.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        Input data.

    Returns
    -------
    Y : array-like of shape (n_samples, n_classes) or (n_samples,)
        Result of calling `decision_function` on the final estimator.

        .. versionchanged:: 0.19
            output shape changed to ``(n_samples,)`` to conform
            to scikit-learn conventions for binary classification.
    """
    check_is_fitted(self)
    X = validate_data(
        self,
        X,
        accept_sparse=True,
        ensure_all_finite=False,
        reset=False,
    )

    indices = self.pairwise_indices_
    if indices is None:
        Xs = [X] * len(self.estimators_)
    else:
        Xs = [X[:, idx] for idx in indices]

    predictions = np.vstack(
        [est.predict(Xi) for est, Xi in zip(self.estimators_, Xs)]
    ).T
    confidences = np.vstack(
        [_predict_binary(est, Xi) for est, Xi in zip(self.estimators_, Xs)]
    ).T
    Y = _ovr_decision_function(predictions, confidences, len(self.classes_))
    if self.n_classes_ == 2:
        return Y[:, 1]
    return Y
decision_function | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
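A matching OneVsOneClassifier sketch: with k classes, fit trains k * (k - 1) / 2 binary estimators, and predict resolves their votes through decision_function (the base estimator here is illustrative):

from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)
# 3 classes -> 3 * 2 / 2 = 3 pairwise estimators.
ovo = OneVsOneClassifier(LinearSVC(random_state=0)).fit(X, y)
print(len(ovo.estimators_))  # 3
print(ovo.predict(X[:3]))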
def __sklearn_tags__(self):
    """Indicate if wrapped estimator is using a precomputed Gram matrix"""
    tags = super().__sklearn_tags__()
    tags.input_tags.pairwise = get_tags(self.estimator).input_tags.pairwise
    tags.input_tags.sparse = get_tags(self.estimator).input_tags.sparse
    return tags
__sklearn_tags__ | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
def get_metadata_routing(self):
    """Get metadata routing of this object.

    Please check :ref:`User Guide <metadata_routing>` on how the routing
    mechanism works.

    .. versionadded:: 1.4

    Returns
    -------
    routing : MetadataRouter
        A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating
        routing information.
    """
    router = (
        MetadataRouter(owner=self.__class__.__name__)
        .add_self_request(self)
        .add(
            estimator=self.estimator,
            method_mapping=MethodMapping()
            .add(caller="fit", callee="fit")
            .add(caller="partial_fit", callee="partial_fit"),
        )
    )
    return router
get_metadata_routing | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
def fit(self, X, y, **fit_params):
    """Fit underlying estimators.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Data.

    y : array-like of shape (n_samples,)
        Multi-class targets.

    **fit_params : dict
        Parameters passed to the ``estimator.fit`` method of each
        sub-estimator.

        .. versionadded:: 1.4
            Only available if `enable_metadata_routing=True`. See
            :ref:`Metadata Routing User Guide <metadata_routing>` for
            more details.

    Returns
    -------
    self : object
        Returns a fitted instance of self.
    """
    _raise_for_params(fit_params, self, "fit")

    routed_params = process_routing(
        self,
        "fit",
        **fit_params,
    )

    y = validate_data(self, X="no_validation", y=y)

    random_state = check_random_state(self.random_state)
    check_classification_targets(y)

    self.classes_ = np.unique(y)
    n_classes = self.classes_.shape[0]
    if n_classes == 0:
        raise ValueError(
            "OutputCodeClassifier can not be fit when no class is present."
        )
    n_estimators = int(n_classes * self.code_size)

    # FIXME: there are more elaborate methods than generating the codebook
    # randomly.
    self.code_book_ = random_state.uniform(size=(n_classes, n_estimators))
    self.code_book_[self.code_book_ > 0.5] = 1.0

    if hasattr(self.estimator, "decision_function"):
        self.code_book_[self.code_book_ != 1] = -1.0
    else:
        self.code_book_[self.code_book_ != 1] = 0.0

    classes_index = {c: i for i, c in enumerate(self.classes_)}

    Y = np.array(
        [self.code_book_[classes_index[y[i]]] for i in range(_num_samples(y))],
        dtype=int,
    )

    self.estimators_ = Parallel(n_jobs=self.n_jobs)(
        delayed(_fit_binary)(
            self.estimator, X, Y[:, i], fit_params=routed_params.estimator.fit
        )
        for i in range(Y.shape[1])
    )

    if hasattr(self.estimators_[0], "n_features_in_"):
        self.n_features_in_ = self.estimators_[0].n_features_in_
    if hasattr(self.estimators_[0], "feature_names_in_"):
        self.feature_names_in_ = self.estimators_[0].feature_names_in_

    return self
fit | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
def predict(self, X):
    """Predict multi-class targets using underlying estimators.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Data.

    Returns
    -------
    y : ndarray of shape (n_samples,)
        Predicted multi-class targets.
    """
    check_is_fitted(self)
    # ArgKmin only accepts C-contiguous arrays. The aggregated predictions
    # need to be transposed. We therefore create an F-contiguous array to
    # avoid a copy and have a C-contiguous array after the transpose operation.
    Y = np.array(
        [_predict_binary(e, X) for e in self.estimators_],
        order="F",
        dtype=np.float64,
    ).T
    pred = pairwise_distances_argmin(Y, self.code_book_, metric="euclidean")
    return self.classes_[pred]
predict | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
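A short OutputCodeClassifier sketch; code_size scales the number of binary problems relative to the number of classes, and predict decodes by nearest codeword (the estimator and code_size here are illustrative):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.multiclass import OutputCodeClassifier

X, y = load_iris(return_X_y=True)
# 3 classes with code_size=2 -> int(3 * 2) = 6 binary estimators.
occ = OutputCodeClassifier(
    RandomForestClassifier(random_state=0), code_size=2, random_state=0
).fit(X, y)
print(len(occ.estimators_))  # 6
print(occ.predict(X[:3]))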
def get_metadata_routing(self):
    """Get metadata routing of this object.

    Please check :ref:`User Guide <metadata_routing>` on how the routing
    mechanism works.

    .. versionadded:: 1.4

    Returns
    -------
    routing : MetadataRouter
        A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating
        routing information.
    """
    router = MetadataRouter(owner=self.__class__.__name__).add(
        estimator=self.estimator,
        method_mapping=MethodMapping().add(caller="fit", callee="fit"),
    )
    return router
get_metadata_routing | python | scikit-learn/scikit-learn | sklearn/multiclass.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multiclass.py | BSD-3-Clause
def _available_if_estimator_has(attr):
    """Return a function to check if the sub-estimator(s) has(have) `attr`.

    Helper for Chain implementations.
    """

    def _check(self):
        if hasattr(self, "estimators_"):
            return all(hasattr(est, attr) for est in self.estimators_)
        if hasattr(self.estimator, attr):
            return True
        return False

    return available_if(_check)
_available_if_estimator_has | python | scikit-learn/scikit-learn | sklearn/multioutput.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multioutput.py | BSD-3-Clause
def fit(self, X, y, sample_weight=None, **fit_params):
    """Fit the model to data, separately for each output variable.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        The input data.

    y : {array-like, sparse matrix} of shape (n_samples, n_outputs)
        Multi-output targets. An indicator matrix turns on multilabel
        estimation.

    sample_weight : array-like of shape (n_samples,), default=None
        Sample weights. If `None`, then samples are equally weighted.
        Only supported if the underlying regressor supports sample
        weights.

    **fit_params : dict of string -> object
        Parameters passed to the ``estimator.fit`` method of each step.

        .. versionadded:: 0.23

    Returns
    -------
    self : object
        Returns a fitted instance.
    """
    if not hasattr(self.estimator, "fit"):
        raise ValueError("The base estimator should implement a fit method")

    y = validate_data(self, X="no_validation", y=y, multi_output=True)

    if is_classifier(self):
        check_classification_targets(y)

    if y.ndim == 1:
        raise ValueError(
            "y must have at least two dimensions for "
            "multi-output regression but has only one."
        )

    if _routing_enabled():
        if sample_weight is not None:
            fit_params["sample_weight"] = sample_weight
        routed_params = process_routing(
            self,
            "fit",
            **fit_params,
        )
    else:
        if sample_weight is not None and not has_fit_parameter(
            self.estimator, "sample_weight"
        ):
            raise ValueError(
                "Underlying estimator does not support sample weights."
            )

        fit_params_validated = _check_method_params(X, params=fit_params)
        routed_params = Bunch(estimator=Bunch(fit=fit_params_validated))
        if sample_weight is not None:
            routed_params.estimator.fit["sample_weight"] = sample_weight

    self.estimators_ = Parallel(n_jobs=self.n_jobs)(
        delayed(_fit_estimator)(
            self.estimator, X, y[:, i], **routed_params.estimator.fit
        )
        for i in range(y.shape[1])
    )

    if hasattr(self.estimators_[0], "n_features_in_"):
        self.n_features_in_ = self.estimators_[0].n_features_in_
    if hasattr(self.estimators_[0], "feature_names_in_"):
        self.feature_names_in_ = self.estimators_[0].feature_names_in_

    return self
fit | python | scikit-learn/scikit-learn | sklearn/multioutput.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multioutput.py | BSD-3-Clause
def predict(self, X):
    """Predict multi-output variable using model for each target variable.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        The input data.

    Returns
    -------
    y : {array-like, sparse matrix} of shape (n_samples, n_outputs)
        Multi-output targets predicted across multiple predictors.
        Note: Separate models are generated for each predictor.
    """
    check_is_fitted(self)
    if not hasattr(self.estimators_[0], "predict"):
        raise ValueError("The base estimator should implement a predict method")

    y = Parallel(n_jobs=self.n_jobs)(
        delayed(e.predict)(X) for e in self.estimators_
    )

    return np.asarray(y).T
predict | python | scikit-learn/scikit-learn | sklearn/multioutput.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multioutput.py | BSD-3-Clause
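The fit/predict pair above is shared by the multioutput wrappers. A sketch with MultiOutputRegressor, which fits one clone of the base regressor per output column (the data and base estimator are illustrative):

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
Y = np.column_stack([X @ rng.randn(5), X @ rng.randn(5)])  # two targets

# One Ridge model is fit independently per output column.
model = MultiOutputRegressor(Ridge(alpha=1.0)).fit(X, Y)
print(model.predict(X[:2]).shape)  # (2, 2)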
def get_metadata_routing(self):
    """Get metadata routing of this object.

    Please check :ref:`User Guide <metadata_routing>` on how the routing
    mechanism works.

    .. versionadded:: 1.3

    Returns
    -------
    routing : MetadataRouter
        A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating
        routing information.
    """
    router = MetadataRouter(owner=self.__class__.__name__).add(
        estimator=self.estimator,
        method_mapping=MethodMapping()
        .add(caller="partial_fit", callee="partial_fit")
        .add(caller="fit", callee="fit"),
    )
    return router
get_metadata_routing | python | scikit-learn/scikit-learn | sklearn/multioutput.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multioutput.py | BSD-3-Clause
def fit(self, X, Y, sample_weight=None, **fit_params):
    """Fit the model to data matrix X and targets Y.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        The input data.

    Y : array-like of shape (n_samples, n_classes)
        The target values.

    sample_weight : array-like of shape (n_samples,), default=None
        Sample weights. If `None`, then samples are equally weighted.
        Only supported if the underlying classifier supports sample
        weights.

    **fit_params : dict of string -> object
        Parameters passed to the ``estimator.fit`` method of each step.

        .. versionadded:: 0.23

    Returns
    -------
    self : object
        Returns a fitted instance.
    """
    super().fit(X, Y, sample_weight=sample_weight, **fit_params)
    self.classes_ = [estimator.classes_ for estimator in self.estimators_]
    return self
fit | python | scikit-learn/scikit-learn | sklearn/multioutput.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multioutput.py | BSD-3-Clause
def predict_proba(self, X):
    """Return prediction probabilities for each class of each output.

    This method will raise a ``ValueError`` if any of the
    estimators do not have ``predict_proba``.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        The input data.

    Returns
    -------
    p : array of shape (n_samples, n_classes), or a list of n_outputs \
            such arrays if n_outputs > 1.
        The class probabilities of the input samples. The order of the
        classes corresponds to that in the attribute :term:`classes_`.

        .. versionchanged:: 0.19
            This function now returns a list of arrays where the length of
            the list is ``n_outputs``, and each array is (``n_samples``,
            ``n_classes``) for that particular output.
    """
    check_is_fitted(self)
    results = [estimator.predict_proba(X) for estimator in self.estimators_]
    return results
predict_proba | python | scikit-learn/scikit-learn | sklearn/multioutput.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multioutput.py | BSD-3-Clause
def score(self, X, y):
    """Return the mean accuracy on the given test data and labels.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        Test samples.

    y : array-like of shape (n_samples, n_outputs)
        True values for X.

    Returns
    -------
    scores : float
        Mean accuracy of predicted target versus true target.
    """
    check_is_fitted(self)
    n_outputs_ = len(self.estimators_)
    if y.ndim == 1:
        raise ValueError(
            "y must have at least two dimensions for "
            "multi target classification but has only one"
        )
    if y.shape[1] != n_outputs_:
        raise ValueError(
            "The number of outputs of Y for fit {0} and"
            " score {1} should be same".format(n_outputs_, y.shape[1])
        )
    y_pred = self.predict(X)
    return np.mean(np.all(y == y_pred, axis=1))
score | python | scikit-learn/scikit-learn | sklearn/multioutput.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multioutput.py | BSD-3-Clause
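And the classifier side: the score method above is subset accuracy, so a sample only counts as correct when every output matches. A sketch (the data and base estimator are illustrative):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.RandomState(0)
X = rng.randn(100, 4)
Y = np.column_stack([(X[:, 0] > 0).astype(int), (X[:, 1] > 0).astype(int)])

clf = MultiOutputClassifier(LogisticRegression()).fit(X, Y)
# Subset accuracy: every output must be right for a sample to count.
print(clf.score(X, Y))
print(len(clf.predict_proba(X[:2])))  # list with one (2, 2) array per output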
def _available_if_base_estimator_has(attr):
    """Return a function to check if `base_estimator` or `estimators_` has `attr`.

    Helper for Chain implementations.
    """

    def _check(self):
        return hasattr(self._get_estimator(), attr) or all(
            hasattr(est, attr) for est in self.estimators_
        )

    return available_if(_check)
_available_if_base_estimator_has | python | scikit-learn/scikit-learn | sklearn/multioutput.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multioutput.py | BSD-3-Clause
def _get_predictions(self, X, *, output_method):
    """Get predictions for each model in the chain."""
    check_is_fitted(self)
    X = validate_data(self, X, accept_sparse=True, reset=False)
    Y_output_chain = np.zeros((X.shape[0], len(self.estimators_)))
    Y_feature_chain = np.zeros((X.shape[0], len(self.estimators_)))

    # `RegressorChain` does not have a `chain_method_` parameter so we
    # default to "predict"
    chain_method = getattr(self, "chain_method_", "predict")
    hstack = sp.hstack if sp.issparse(X) else np.hstack
    for chain_idx, estimator in enumerate(self.estimators_):
        previous_predictions = Y_feature_chain[:, :chain_idx]
        # if `X` is a scipy sparse dok_array, we convert it to a sparse
        # coo_array format before hstacking, it's faster; see
        # https://github.com/scipy/scipy/issues/20060#issuecomment-1937007039:
        if sp.issparse(X) and not sp.isspmatrix(X) and X.format == "dok":
            X = sp.coo_array(X)
        X_aug = hstack((X, previous_predictions))

        feature_predictions, _ = _get_response_values(
            estimator,
            X_aug,
            response_method=chain_method,
        )
        Y_feature_chain[:, chain_idx] = feature_predictions

        output_predictions, _ = _get_response_values(
            estimator,
            X_aug,
            response_method=output_method,
        )
        Y_output_chain[:, chain_idx] = output_predictions

    inv_order = np.empty_like(self.order_)
    inv_order[self.order_] = np.arange(len(self.order_))
    Y_output = Y_output_chain[:, inv_order]

    return Y_output
_get_predictions | python | scikit-learn/scikit-learn | sklearn/multioutput.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multioutput.py | BSD-3-Clause
def fit(self, X, Y, **fit_params):
    """Fit the model to data matrix X and targets Y.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        The input data.

    Y : array-like of shape (n_samples, n_classes)
        The target values.

    **fit_params : dict of string -> object
        Parameters passed to the `fit` method of each step.

        .. versionadded:: 0.23

    Returns
    -------
    self : object
        Returns a fitted instance.
    """
    X, Y = validate_data(self, X, Y, multi_output=True, accept_sparse=True)

    random_state = check_random_state(self.random_state)
    self.order_ = self.order
    if isinstance(self.order_, tuple):
        self.order_ = np.array(self.order_)

    if self.order_ is None:
        self.order_ = np.array(range(Y.shape[1]))
    elif isinstance(self.order_, str):
        if self.order_ == "random":
            self.order_ = random_state.permutation(Y.shape[1])
    elif sorted(self.order_) != list(range(Y.shape[1])):
        raise ValueError("invalid order")

    self.estimators_ = [clone(self._get_estimator()) for _ in range(Y.shape[1])]

    if self.cv is None:
        Y_pred_chain = Y[:, self.order_]
        if sp.issparse(X):
            X_aug = sp.hstack((X, Y_pred_chain), format="lil")
            X_aug = X_aug.tocsr()
        else:
            X_aug = np.hstack((X, Y_pred_chain))

    elif sp.issparse(X):
        # TODO: remove this condition check when the minimum supported scipy version
        # doesn't support sparse matrices anymore
        if not sp.isspmatrix(X):
            # if `X` is a scipy sparse dok_array, we convert it to a sparse
            # coo_array format before hstacking, it's faster; see
            # https://github.com/scipy/scipy/issues/20060#issuecomment-1937007039:
            if X.format == "dok":
                X = sp.coo_array(X)
            # in case that `X` is a sparse array we create `Y_pred_chain` as a
            # sparse array format:
            Y_pred_chain = sp.coo_array((X.shape[0], Y.shape[1]))
        else:
            Y_pred_chain = sp.coo_matrix((X.shape[0], Y.shape[1]))
        X_aug = sp.hstack((X, Y_pred_chain), format="lil")

    else:
        Y_pred_chain = np.zeros((X.shape[0], Y.shape[1]))
        X_aug = np.hstack((X, Y_pred_chain))

    del Y_pred_chain

    if _routing_enabled():
        routed_params = process_routing(self, "fit", **fit_params)
    else:
        routed_params = Bunch(estimator=Bunch(fit=fit_params))

    if hasattr(self, "chain_method"):
        chain_method = _check_response_method(
            self._get_estimator(),
            self.chain_method,
        ).__name__
        self.chain_method_ = chain_method
    else:
        # `RegressorChain` does not have a `chain_method` parameter
        chain_method = "predict"

    for chain_idx, estimator in enumerate(self.estimators_):
        message = self._log_message(
            estimator_idx=chain_idx + 1,
            n_estimators=len(self.estimators_),
            processing_msg=f"Processing order {self.order_[chain_idx]}",
        )
        y = Y[:, self.order_[chain_idx]]
        with _print_elapsed_time("Chain", message):
            estimator.fit(
                X_aug[:, : (X.shape[1] + chain_idx)],
                y,
                **routed_params.estimator.fit,
            )

        if self.cv is not None and chain_idx < len(self.estimators_) - 1:
            col_idx = X.shape[1] + chain_idx
            cv_result = cross_val_predict(
                self._get_estimator(),
                X_aug[:, :col_idx],
                y=y,
                cv=self.cv,
                method=chain_method,
            )
            # `predict_proba` output is 2D, we use only output for classes[-1]
            if cv_result.ndim > 1:
                cv_result = cv_result[:, 1]
            if sp.issparse(X_aug):
                X_aug[:, col_idx] = np.expand_dims(cv_result, 1)
            else:
                X_aug[:, col_idx] = cv_result

    return self
fit | python | scikit-learn/scikit-learn | sklearn/multioutput.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multioutput.py | BSD-3-Clause
def fit(self, X, Y, **fit_params):
    """Fit the model to data matrix X and targets Y.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        The input data.

    Y : array-like of shape (n_samples, n_classes)
        The target values.

    **fit_params : dict of string -> object
        Parameters passed to the `fit` method of each step.

        Only available if `enable_metadata_routing=True`. See the
        :ref:`User Guide <metadata_routing>`.

        .. versionadded:: 1.3

    Returns
    -------
    self : object
        Class instance.
    """
    _raise_for_params(fit_params, self, "fit")

    super().fit(X, Y, **fit_params)
    self.classes_ = [estimator.classes_ for estimator in self.estimators_]
    return self
fit | python | scikit-learn/scikit-learn | sklearn/multioutput.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multioutput.py | BSD-3-Clause
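A ClassifierChain sketch: each estimator in the chain is fit on X augmented with the labels that come earlier in the chain order (the data and base estimator are illustrative):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

rng = np.random.RandomState(0)
X = rng.randn(100, 4)
Y = np.column_stack([(X[:, 0] > 0).astype(int),
                     (X[:, 0] + X[:, 1] > 0).astype(int)])

# Later links in the chain see earlier labels as extra input features.
chain = ClassifierChain(LogisticRegression(), order="random", random_state=0).fit(X, Y)
print(chain.order_)
print(chain.predict(X[:3]))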
def get_metadata_routing(self):
    """Get metadata routing of this object.

    Please check :ref:`User Guide <metadata_routing>` on how the routing
    mechanism works.

    .. versionadded:: 1.3

    Returns
    -------
    routing : MetadataRouter
        A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating
        routing information.
    """
    router = MetadataRouter(owner=self.__class__.__name__).add(
        estimator=self._get_estimator(),
        method_mapping=MethodMapping().add(caller="fit", callee="fit"),
    )
    return router
get_metadata_routing | python | scikit-learn/scikit-learn | sklearn/multioutput.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multioutput.py | BSD-3-Clause
def get_metadata_routing(self):
    """Get metadata routing of this object.

    Please check :ref:`User Guide <metadata_routing>` on how the routing
    mechanism works.

    .. versionadded:: 1.3

    Returns
    -------
    routing : MetadataRouter
        A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating
        routing information.
    """
    router = MetadataRouter(owner=self.__class__.__name__).add(
        estimator=self._get_estimator(),
        method_mapping=MethodMapping().add(caller="fit", callee="fit"),
    )
    return router
get_metadata_routing | python | scikit-learn/scikit-learn | sklearn/multioutput.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/multioutput.py | BSD-3-Clause
def _joint_log_likelihood(self, X):
    """Compute the unnormalized posterior log probability of X.

    I.e. ``log P(c) + log P(x|c)`` for all rows x of X, as an array-like of
    shape (n_samples, n_classes).

    Public methods predict, predict_proba, predict_log_proba, and
    predict_joint_log_proba pass the input through _check_X before handing it
    over to _joint_log_likelihood. The term "joint log likelihood" is used
    interchangeably with "joint log probability".
    """
_joint_log_likelihood | python | scikit-learn/scikit-learn | sklearn/naive_bayes.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py | BSD-3-Clause
def _check_X(self, X):
    """To be overridden in subclasses with the actual checks.

    Only used in predict* methods.
    """
_check_X | python | scikit-learn/scikit-learn | sklearn/naive_bayes.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py | BSD-3-Clause
def predict_joint_log_proba(self, X):
    """Return joint log probability estimates for the test vector X.

    For each row x of X and class y, the joint log probability is given by
    ``log P(x, y) = log P(y) + log P(x|y),``
    where ``log P(y)`` is the class prior probability and ``log P(x|y)`` is
    the class-conditional probability.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        The input samples.

    Returns
    -------
    C : ndarray of shape (n_samples, n_classes)
        Returns the joint log-probability of the samples for each class in
        the model. The columns correspond to the classes in sorted
        order, as they appear in the attribute :term:`classes_`.
    """
    check_is_fitted(self)
    X = self._check_X(X)
    return self._joint_log_likelihood(X)
predict_joint_log_proba | python | scikit-learn/scikit-learn | sklearn/naive_bayes.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py | BSD-3-Clause
def predict(self, X):
    """
    Perform classification on an array of test vectors X.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        The input samples.

    Returns
    -------
    C : ndarray of shape (n_samples,)
        Predicted target values for X.
    """
    check_is_fitted(self)
    X = self._check_X(X)
    jll = self._joint_log_likelihood(X)
    return self.classes_[np.argmax(jll, axis=1)]
predict | python | scikit-learn/scikit-learn | sklearn/naive_bayes.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py | BSD-3-Clause
def predict_log_proba(self, X):
    """
    Return log-probability estimates for the test vector X.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        The input samples.

    Returns
    -------
    C : array-like of shape (n_samples, n_classes)
        Returns the log-probability of the samples for each class in
        the model. The columns correspond to the classes in sorted
        order, as they appear in the attribute :term:`classes_`.
    """
    check_is_fitted(self)
    X = self._check_X(X)
    jll = self._joint_log_likelihood(X)
    # normalize by P(x) = P(f_1, ..., f_n)
    log_prob_x = logsumexp(jll, axis=1)
    return jll - np.atleast_2d(log_prob_x).T
predict_log_proba | python | scikit-learn/scikit-learn | sklearn/naive_bayes.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py | BSD-3-Clause
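The normalization step in predict_log_proba is the log-space softmax identity: subtracting the logsumexp of the joint log-likelihoods yields log posteriors whose exponentials sum to one. A self-contained check (the jll values are made up):

import numpy as np
from scipy.special import logsumexp

jll = np.array([[-3.2, -1.1, -4.0]])               # hypothetical joint log-likelihoods
log_proba = jll - logsumexp(jll, axis=1, keepdims=True)
print(np.exp(log_proba).sum(axis=1))               # [1.]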
def fit(self, X, y, sample_weight=None):
    """Fit Gaussian Naive Bayes according to X, y.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        Training vectors, where `n_samples` is the number of samples
        and `n_features` is the number of features.

    y : array-like of shape (n_samples,)
        Target values.

    sample_weight : array-like of shape (n_samples,), default=None
        Weights applied to individual samples (1. for unweighted).

        .. versionadded:: 0.17
            Gaussian Naive Bayes supports fitting with *sample_weight*.

    Returns
    -------
    self : object
        Returns the instance itself.
    """
    y = validate_data(self, y=y)
    return self._partial_fit(
        X, y, np.unique(y), _refit=True, sample_weight=sample_weight
    )
fit
python
scikit-learn/scikit-learn
sklearn/naive_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py
BSD-3-Clause
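A minimal usage sketch for the sample_weight path above; the data and weights are invented for illustration.

import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.RandomState(0)
X = rng.normal(size=(50, 2))
y = rng.randint(0, 2, size=50)

# Up-weight the second half of the samples; the weights flow into the
# weighted per-class mean and variance updates.
w = np.ones(50)
w[25:] = 3.0

clf = GaussianNB().fit(X, y, sample_weight=w)
print(clf.theta_)  # per-class feature means, shape (n_classes, n_features)
print(clf.var_)    # per-class feature variances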
def _update_mean_variance(n_past, mu, var, X, sample_weight=None): """Compute online update of Gaussian mean and variance. Given starting sample count, mean, and variance, a new set of points X, and optionally sample weights, return the updated mean and variance. (NB - each dimension (column) in X is treated as independent -- you get variance, not covariance). Can take scalar mean and variance, or vector mean and variance to simultaneously update a number of independent Gaussians. See Stanford CS tech report STAN-CS-79-773 by Chan, Golub, and LeVeque: http://i.stanford.edu/pub/cstr/reports/cs/tr/79/773/CS-TR-79-773.pdf Parameters ---------- n_past : int Number of samples represented in old mean and variance. If sample weights were given, this should contain the sum of sample weights represented in old mean and variance. mu : array-like of shape (number of Gaussians,) Means for Gaussians in original set. var : array-like of shape (number of Gaussians,) Variances for Gaussians in original set. sample_weight : array-like of shape (n_samples,), default=None Weights applied to individual samples (1. for unweighted). Returns ------- total_mu : array-like of shape (number of Gaussians,) Updated mean for each Gaussian over the combined set. total_var : array-like of shape (number of Gaussians,) Updated variance for each Gaussian over the combined set. """ if X.shape[0] == 0: return mu, var # Compute (potentially weighted) mean and variance of new datapoints if sample_weight is not None: n_new = float(sample_weight.sum()) if np.isclose(n_new, 0.0): return mu, var new_mu = np.average(X, axis=0, weights=sample_weight) new_var = np.average((X - new_mu) ** 2, axis=0, weights=sample_weight) else: n_new = X.shape[0] new_var = np.var(X, axis=0) new_mu = np.mean(X, axis=0) if n_past == 0: return new_mu, new_var n_total = float(n_past + n_new) # Combine mean of old and new data, taking into consideration # (weighted) number of observations total_mu = (n_new * new_mu + n_past * mu) / n_total # Combine variance of old and new data, taking into consideration # (weighted) number of observations. This is achieved by combining # the sum-of-squared-differences (ssd) old_ssd = n_past * var new_ssd = n_new * new_var total_ssd = old_ssd + new_ssd + (n_new * n_past / n_total) * (mu - new_mu) ** 2 total_var = total_ssd / n_total return total_mu, total_var
Compute online update of Gaussian mean and variance. Given starting sample count, mean, and variance, a new set of points X, and optionally sample weights, return the updated mean and variance. (NB - each dimension (column) in X is treated as independent -- you get variance, not covariance). Can take scalar mean and variance, or vector mean and variance to simultaneously update a number of independent Gaussians. See Stanford CS tech report STAN-CS-79-773 by Chan, Golub, and LeVeque: http://i.stanford.edu/pub/cstr/reports/cs/tr/79/773/CS-TR-79-773.pdf Parameters ---------- n_past : int Number of samples represented in old mean and variance. If sample weights were given, this should contain the sum of sample weights represented in old mean and variance. mu : array-like of shape (number of Gaussians,) Means for Gaussians in original set. var : array-like of shape (number of Gaussians,) Variances for Gaussians in original set. sample_weight : array-like of shape (n_samples,), default=None Weights applied to individual samples (1. for unweighted). Returns ------- total_mu : array-like of shape (number of Gaussians,) Updated mean for each Gaussian over the combined set. total_var : array-like of shape (number of Gaussians,) Updated variance for each Gaussian over the combined set.
_update_mean_variance
python
scikit-learn/scikit-learn
sklearn/naive_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py
BSD-3-Clause
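The merge rule above can be exercised outside the class. Below is a standalone numpy sketch of the Chan, Golub, and LeVeque combination (the function name `combine` is local to this sketch, not part of scikit-learn), verified against a batch computation:

import numpy as np

def combine(n_a, mu_a, var_a, n_b, mu_b, var_b):
    # Merge two per-column (count, mean, variance) summaries using the
    # sum-of-squared-differences update described in the docstring.
    n = n_a + n_b
    mu = (n_a * mu_a + n_b * mu_b) / n
    ssd = n_a * var_a + n_b * var_b + (n_a * n_b / n) * (mu_a - mu_b) ** 2
    return n, mu, ssd / n

rng = np.random.RandomState(0)
A, B = rng.normal(size=(30, 4)), rng.normal(size=(70, 4))

n, mu, var = combine(len(A), A.mean(0), A.var(0), len(B), B.mean(0), B.var(0))
full = np.vstack([A, B])
assert np.allclose(mu, full.mean(0)) and np.allclose(var, full.var(0))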
def _count(self, X, Y): """Update counts that are used to calculate probabilities. The counts make up a sufficient statistic extracted from the data. Accordingly, this method is called each time `fit` or `partial_fit` updates the model. `class_count_` and `feature_count_` must be updated here along with any model-specific counts. Parameters ---------- X : {ndarray, sparse matrix} of shape (n_samples, n_features) The input samples. Y : ndarray of shape (n_samples, n_classes) Binarized class labels. """
Update counts that are used to calculate probabilities. The counts make up a sufficient statistic extracted from the data. Accordingly, this method is called each time `fit` or `partial_fit` updates the model. `class_count_` and `feature_count_` must be updated here along with any model-specific counts. Parameters ---------- X : {ndarray, sparse matrix} of shape (n_samples, n_features) The input samples. Y : ndarray of shape (n_samples, n_classes) Binarized class labels.
_count
python
scikit-learn/scikit-learn
sklearn/naive_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py
BSD-3-Clause
def _update_feature_log_prob(self, alpha): """Update feature log probabilities based on counts. This method is called each time `fit` or `partial_fit` updates the model. Parameters ---------- alpha : float Smoothing parameter. See :meth:`_check_alpha`. """
Update feature log probabilities based on counts. This method is called each time `fit` or `partial_fit` updates the model. Parameters ---------- alpha : float Smoothing parameter. See :meth:`_check_alpha`.
_update_feature_log_prob
python
scikit-learn/scikit-learn
sklearn/naive_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py
BSD-3-Clause
def _update_class_log_prior(self, class_prior=None): """Update class log priors. The class log priors are based on `class_prior`, the class counts, or the number of classes. This method is called each time `fit` or `partial_fit` updates the model. """ n_classes = len(self.classes_) if class_prior is not None: if len(class_prior) != n_classes: raise ValueError("Number of priors must match number of classes.") self.class_log_prior_ = np.log(class_prior) elif self.fit_prior: with warnings.catch_warnings(): # silence the warning when count is 0 because class was not yet # observed warnings.simplefilter("ignore", RuntimeWarning) log_class_count = np.log(self.class_count_) # empirical prior, with sample_weight taken into account self.class_log_prior_ = log_class_count - np.log(self.class_count_.sum()) else: self.class_log_prior_ = np.full(n_classes, -np.log(n_classes))
Update class log priors. The class log priors are based on `class_prior`, the class counts, or the number of classes. This method is called each time `fit` or `partial_fit` updates the model.
_update_class_log_prior
python
scikit-learn/scikit-learn
sklearn/naive_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py
BSD-3-Clause
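To see two of the three branches above through the public API — a hedged sketch with tiny invented counts — compare the empirical prior (fit_prior=True, the default) with the uniform fallback:

import numpy as np
from sklearn.naive_bayes import MultinomialNB

X = np.array([[2, 1], [1, 3], [0, 4], [3, 0]])
y = np.array([0, 1, 1, 0])  # balanced classes, invented for the sketch

# Default: empirical prior log(class_count / class_count.sum()).
emp = MultinomialNB().fit(X, y)
assert np.allclose(emp.class_log_prior_, np.log([0.5, 0.5]))

# fit_prior=False: uniform prior -log(n_classes) for every class.
uni = MultinomialNB(fit_prior=False).fit(X, y)
assert np.allclose(uni.class_log_prior_, -np.log(2))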
def fit(self, X, y, sample_weight=None): """Fit Naive Bayes classifier according to X, y. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training vectors, where `n_samples` is the number of samples and `n_features` is the number of features. y : array-like of shape (n_samples,) Target values. sample_weight : array-like of shape (n_samples,), default=None Weights applied to individual samples (1. for unweighted). Returns ------- self : object Returns the instance itself. """ X, y = self._check_X_y(X, y) _, n_features = X.shape labelbin = LabelBinarizer() Y = labelbin.fit_transform(y) self.classes_ = labelbin.classes_ if Y.shape[1] == 1: if len(self.classes_) == 2: Y = np.concatenate((1 - Y, Y), axis=1) else: # degenerate case: just one class Y = np.ones_like(Y) # LabelBinarizer().fit_transform() returns arrays with dtype=np.int64. # We convert it to np.float64 to support sample_weight consistently; # this means we also don't have to cast X to floating point if sample_weight is not None: Y = Y.astype(np.float64, copy=False) sample_weight = _check_sample_weight(sample_weight, X) sample_weight = np.atleast_2d(sample_weight) Y *= sample_weight.T class_prior = self.class_prior # Count raw events from data before updating the class log prior # and feature log probas n_classes = Y.shape[1] self._init_counters(n_classes, n_features) self._count(X, Y) alpha = self._check_alpha() self._update_feature_log_prob(alpha) self._update_class_log_prior(class_prior=class_prior) return self
Fit Naive Bayes classifier according to X, y. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training vectors, where `n_samples` is the number of samples and `n_features` is the number of features. y : array-like of shape (n_samples,) Target values. sample_weight : array-like of shape (n_samples,), default=None Weights applied to individual samples (1. for unweighted). Returns ------- self : object Returns the instance itself.
fit
python
scikit-learn/scikit-learn
sklearn/naive_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py
BSD-3-Clause
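A small sketch of the counting step this fit performs: Y is binarized per class and scaled by sample_weight, so the fitted counts are weighted sums (tiny data invented for illustration).

import numpy as np
from sklearn.naive_bayes import MultinomialNB

X = np.array([[1, 0], [0, 2], [1, 1]])
y = np.array([0, 1, 1])
w = np.array([2.0, 1.0, 1.0])

clf = MultinomialNB().fit(X, y, sample_weight=w)
# feature_count_ holds weighted per-class sums of feature values ...
assert np.allclose(clf.feature_count_, [[2, 0], [1, 3]])
# ... and class_count_ holds the weighted number of samples per class.
assert np.allclose(clf.class_count_, [2.0, 2.0])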
def _update_feature_log_prob(self, alpha): """Apply smoothing to raw counts and recompute log probabilities""" smoothed_fc = self.feature_count_ + alpha smoothed_cc = smoothed_fc.sum(axis=1) self.feature_log_prob_ = np.log(smoothed_fc) - np.log( smoothed_cc.reshape(-1, 1) )
Apply smoothing to raw counts and recompute log probabilities
_update_feature_log_prob
python
scikit-learn/scikit-learn
sklearn/naive_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py
BSD-3-Clause
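Assuming the count attributes shown above, the Lidstone smoothing can be checked numerically from public attributes — a hedged sketch with invented counts:

import numpy as np
from sklearn.naive_bayes import MultinomialNB

X = np.array([[3, 0, 1], [0, 2, 2]])
y = np.array([0, 1])

clf = MultinomialNB(alpha=1.0).fit(X, y)
# log((N_ci + alpha) / (N_c + alpha * n_features)) per class and feature.
smoothed = clf.feature_count_ + 1.0
expected = np.log(smoothed) - np.log(smoothed.sum(axis=1, keepdims=True))
assert np.allclose(clf.feature_log_prob_, expected)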
def _update_feature_log_prob(self, alpha): """Apply smoothing to raw counts and compute the weights.""" comp_count = self.feature_all_ + alpha - self.feature_count_ logged = np.log(comp_count / comp_count.sum(axis=1, keepdims=True)) # _BaseNB.predict uses argmax, but ComplementNB operates with argmin. if self.norm: summed = logged.sum(axis=1, keepdims=True) feature_log_prob = logged / summed else: feature_log_prob = -logged self.feature_log_prob_ = feature_log_prob
Apply smoothing to raw counts and compute the weights.
_update_feature_log_prob
python
scikit-learn/scikit-learn
sklearn/naive_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py
BSD-3-Clause
def _joint_log_likelihood(self, X): """Calculate the class scores for the samples in X.""" jll = safe_sparse_dot(X, self.feature_log_prob_.T) if len(self.classes_) == 1: jll += self.class_log_prior_ return jll
Calculate the class scores for the samples in X.
_joint_log_likelihood
python
scikit-learn/scikit-learn
sklearn/naive_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py
BSD-3-Clause
def _check_X(self, X): """Validate X, used only in predict* methods.""" X = super()._check_X(X) if self.binarize is not None: X = binarize(X, threshold=self.binarize) return X
Validate X, used only in predict* methods.
_check_X
python
scikit-learn/scikit-learn
sklearn/naive_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py
BSD-3-Clause
def _update_feature_log_prob(self, alpha): """Apply smoothing to raw counts and recompute log probabilities""" smoothed_fc = self.feature_count_ + alpha smoothed_cc = self.class_count_ + alpha * 2 self.feature_log_prob_ = np.log(smoothed_fc) - np.log( smoothed_cc.reshape(-1, 1) )
Apply smoothing to raw counts and recompute log probabilities
_update_feature_log_prob
python
scikit-learn/scikit-learn
sklearn/naive_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py
BSD-3-Clause
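Putting the two BernoulliNB pieces above together — binarization of the input and the two-outcome `alpha * 2` denominator — a hedged sketch on invented continuous data:

import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.RandomState(0)
X = rng.uniform(size=(40, 5))  # continuous features, invented
y = rng.randint(0, 2, size=40)

# Values above 0.5 count as 1, the rest as 0, before statistics are taken.
clf = BernoulliNB(binarize=0.5, alpha=1.0).fit(X, y)

# Each feature is binary (two outcomes), hence alpha * 2 in the denominator.
expected = np.log(clf.feature_count_ + 1.0) - np.log(
    (clf.class_count_ + 2.0).reshape(-1, 1)
)
assert np.allclose(clf.feature_log_prob_, expected)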
def _check_X(self, X): """Validate X, used only in predict* methods.""" X = validate_data( self, X, dtype="int", accept_sparse=False, ensure_all_finite=True, reset=False, ) check_non_negative(X, "CategoricalNB (input X)") return X
Validate X, used only in predict* methods.
_check_X
python
scikit-learn/scikit-learn
sklearn/naive_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/naive_bayes.py
BSD-3-Clause
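The dtype="int" plus non-negativity check above implies that inputs must be ordinal-encoded category indices. A minimal usage sketch (categories invented):

import numpy as np
from sklearn.naive_bayes import CategoricalNB

# Each column holds non-negative integer category codes, e.g. from an
# OrdinalEncoder; negative or non-integer inputs would be rejected.
X = np.array([[0, 1], [1, 2], [2, 0], [1, 1]])
y = np.array([0, 0, 1, 1])

clf = CategoricalNB().fit(X, y)
print(clf.predict(np.array([[0, 2]])))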
def _raise_or_warn_if_not_fitted(estimator): """A context manager to make sure a NotFittedError is raised if a sub-estimator raises that error. Otherwise, if the pipeline is not fitted, we raise a FutureWarning announcing the upcoming deprecation. TODO(1.8): remove this context manager and replace with check_is_fitted. """ try: yield except NotFittedError as exc: raise NotFittedError("Pipeline is not fitted yet.") from exc # we only get here if the above didn't raise try: check_is_fitted(estimator) except NotFittedError: warnings.warn( "This Pipeline instance is not fitted yet. Call 'fit' with " "appropriate arguments before using other methods such as transform, " "predict, etc. This will raise an error in 1.8 instead of the current " "warning.", FutureWarning, )
A context manager to make sure a NotFittedError is raised if a sub-estimator raises that error. Otherwise, if the pipeline is not fitted, we raise a FutureWarning announcing the upcoming deprecation. TODO(1.8): remove this context manager and replace with check_is_fitted.
_raise_or_warn_if_not_fitted
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def _final_estimator_has(attr): """Check that final_estimator has `attr`. Used together with `available_if` in `Pipeline`.""" def check(self): # raise original `AttributeError` if `attr` does not exist getattr(self._final_estimator, attr) return True return check
Check that final_estimator has `attr`. Used together with `available_if` in `Pipeline`.
_final_estimator_has
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
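This helper is what makes methods such as `predict` appear on a pipeline only when the final step supports them. A hedged illustration via `hasattr`:

from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

clf_pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
trans_pipe = Pipeline([("scale", StandardScaler())])

# `predict` is exposed only when the final step has it; otherwise the
# `available_if` gate raises AttributeError, so hasattr returns False.
assert hasattr(clf_pipe, "predict")
assert not hasattr(trans_pipe, "predict")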
def _cached_transform( sub_pipeline, *, cache, param_name, param_value, transform_params ): """Transform a parameter value using a sub-pipeline and cache the result. Parameters ---------- sub_pipeline : Pipeline The sub-pipeline to be used for transformation. cache : dict The cache dictionary to store the transformed values. param_name : str The name of the parameter to be transformed. param_value : object The value of the parameter to be transformed. transform_params : dict The metadata to be used for transformation. This is passed to the `transform` method of the sub-pipeline. Returns ------- transformed_value : object The transformed value of the parameter. """ if param_name not in cache: # If the parameter is a tuple, transform each element of the # tuple. This is needed to support the pattern present in # `lightgbm` and `xgboost` where users can pass multiple # validation sets. if isinstance(param_value, tuple): cache[param_name] = tuple( sub_pipeline.transform(element, **transform_params) for element in param_value ) else: cache[param_name] = sub_pipeline.transform(param_value, **transform_params) return cache[param_name]
Transform a parameter value using a sub-pipeline and cache the result. Parameters ---------- sub_pipeline : Pipeline The sub-pipeline to be used for transformation. cache : dict The cache dictionary to store the transformed values. param_name : str The name of the parameter to be transformed. param_value : object The value of the parameter to be transformed. transform_params : dict The metadata to be used for transformation. This is passed to the `transform` method of the sub-pipeline. Returns ------- transformed_value : object The transformed value of the parameter.
_cached_transform
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def set_output(self, *, transform=None): """Set the output container when `"transform"` and `"fit_transform"` are called. Calling `set_output` will set the output of all estimators in `steps`. Parameters ---------- transform : {"default", "pandas", "polars"}, default=None Configure output of `transform` and `fit_transform`. - `"default"`: Default output format of a transformer - `"pandas"`: DataFrame output - `"polars"`: Polars output - `None`: Transform configuration is unchanged .. versionadded:: 1.4 `"polars"` option was added. Returns ------- self : estimator instance Estimator instance. """ for _, _, step in self._iter(): _safe_set_output(step, transform=transform) return self
Set the output container when `"transform"` and `"fit_transform"` are called. Calling `set_output` will set the output of all estimators in `steps`. Parameters ---------- transform : {"default", "pandas", "polars"}, default=None Configure output of `transform` and `fit_transform`. - `"default"`: Default output format of a transformer - `"pandas"`: DataFrame output - `"polars"`: Polars output - `None`: Transform configuration is unchanged .. versionadded:: 1.4 `"polars"` option was added. Returns ------- self : estimator instance Estimator instance.
set_output
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
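A short usage sketch: because set_output fans out to every step, one call makes the whole pipeline emit DataFrames.

from sklearn.datasets import load_iris
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True, as_frame=True)
pipe = Pipeline([("scale", StandardScaler())]).set_output(transform="pandas")

out = pipe.fit_transform(X)
print(type(out))              # <class 'pandas.core.frame.DataFrame'>
print(list(out.columns)[:2])  # feature names are carried through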
def _iter(self, with_final=True, filter_passthrough=True): """ Generate (idx, name, trans) tuples from self.steps. When filter_passthrough is True, 'passthrough' and None transformers are filtered out. """ stop = len(self.steps) if not with_final: stop -= 1 for idx, (name, trans) in enumerate(islice(self.steps, 0, stop)): if not filter_passthrough: yield idx, name, trans elif trans is not None and trans != "passthrough": yield idx, name, trans
Generate (idx, name, trans) tuples from self.steps. When filter_passthrough is True, 'passthrough' and None transformers are filtered out.
_iter
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def __getitem__(self, ind): """Returns a sub-pipeline or a single estimator in the pipeline. Indexing with an integer will return an estimator; using a slice returns another Pipeline instance which copies a slice of this Pipeline. This copy is shallow: modifying (or fitting) estimators in the sub-pipeline will affect the larger pipeline and vice-versa. However, replacing a value in `steps` will not affect a copy. See :ref:`sphx_glr_auto_examples_feature_selection_plot_feature_selection_pipeline.py` for an example of how to use slicing to inspect part of a pipeline. """ if isinstance(ind, slice): if ind.step not in (1, None): raise ValueError("Pipeline slicing only supports a step of 1") return self.__class__( self.steps[ind], memory=self.memory, verbose=self.verbose ) try: name, est = self.steps[ind] except TypeError: # Not an int, try to get the step by name return self.named_steps[ind] return est
Returns a sub-pipeline or a single estimator in the pipeline. Indexing with an integer will return an estimator; using a slice returns another Pipeline instance which copies a slice of this Pipeline. This copy is shallow: modifying (or fitting) estimators in the sub-pipeline will affect the larger pipeline and vice-versa. However, replacing a value in `steps` will not affect a copy. See :ref:`sphx_glr_auto_examples_feature_selection_plot_feature_selection_pipeline.py` for an example of how to use slicing to inspect part of a pipeline.
__getitem__
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
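The three indexing modes above in one hedged sketch:

from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])

scaler = pipe[0]       # integer index -> the estimator itself
same = pipe["scale"]   # string index -> lookup through named_steps
head = pipe[:-1]       # slice -> a new Pipeline sharing the same step objects
assert scaler is same
assert isinstance(head, Pipeline) and len(head.steps) == 1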
def _estimator_type(self): """Return the estimator type of the last step in the pipeline.""" if not self.steps: return None return self.steps[-1][1]._estimator_type
Return the estimator type of the last step in the pipeline.
_estimator_type
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def _get_metadata_for_step(self, *, step_idx, step_params, all_params): """Get params (metadata) for step `name`. This transforms the metadata up to this step if required, which is indicated by the `transform_input` parameter. If a param in `step_params` is included in the `transform_input` list, it will be transformed. Parameters ---------- step_idx : int Index of the step in the pipeline. step_params : dict Parameters specific to the step. These are routed parameters, e.g. `routed_params[name]`. If a parameter name here is included in the `pipeline.transform_input`, then it will be transformed. Note that these parameters are *after* routing, so the aliases are already resolved. all_params : dict All parameters passed by the user. Here this is used to call `transform` on the slice of the pipeline itself. Returns ------- dict Parameters to be passed to the step. The ones which should be transformed are transformed. """ if ( self.transform_input is None or not all_params or not step_params or step_idx == 0 ): # we only need to process step_params if transform_input is set # and metadata is given by the user. return step_params sub_pipeline = self[:step_idx] sub_metadata_routing = get_routing_for_object(sub_pipeline) # here we get the metadata required by sub_pipeline.transform transform_params = { key: value for key, value in all_params.items() if key in sub_metadata_routing.consumes( method="transform", params=all_params.keys() ) } transformed_params = dict() # this is to be returned transformed_cache = dict() # used to transform each param once # `step_params` is the output of `process_routing`, so it has a dict for each # method (e.g. fit, transform, predict), which are the args to be passed to # those methods. We need to transform the parameters which are in the # `transform_input`, before returning these dicts. for method, method_params in step_params.items(): transformed_params[method] = Bunch() for param_name, param_value in method_params.items(): # An example of `(param_name, param_value)` is # `('sample_weight', array([0.5, 0.5, ...]))` if param_name in self.transform_input: # This parameter now needs to be transformed by the sub_pipeline, to # this step. We cache these computations to avoid repeating them. transformed_params[method][param_name] = _cached_transform( sub_pipeline, cache=transformed_cache, param_name=param_name, param_value=param_value, transform_params=transform_params, ) else: transformed_params[method][param_name] = param_value return transformed_params
Get params (metadata) for step `name`. This transforms the metadata up to this step if required, which is indicated by the `transform_input` parameter. If a param in `step_params` is included in the `transform_input` list, it will be transformed. Parameters ---------- step_idx : int Index of the step in the pipeline. step_params : dict Parameters specific to the step. These are routed parameters, e.g. `routed_params[name]`. If a parameter name here is included in the `pipeline.transform_input`, then it will be transformed. Note that these parameters are *after* routing, so the aliases are already resolved. all_params : dict All parameters passed by the user. Here this is used to call `transform` on the slice of the pipeline itself. Returns ------- dict Parameters to be passed to the step. The ones which should be transformed are transformed.
_get_metadata_for_step
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def _fit(self, X, y=None, routed_params=None, raw_params=None): """Fit the pipeline except the last step. routed_params is the output of `process_routing` raw_params is the parameters passed by the user, used when `transform_input` is set by the user, to transform metadata using a sub-pipeline. """ # shallow copy of steps - this should really be steps_ self.steps = list(self.steps) self._validate_steps() # Setup the memory memory = check_memory(self.memory) fit_transform_one_cached = memory.cache(_fit_transform_one) for step_idx, name, transformer in self._iter( with_final=False, filter_passthrough=False ): if transformer is None or transformer == "passthrough": with _print_elapsed_time("Pipeline", self._log_message(step_idx)): continue if hasattr(memory, "location") and memory.location is None: # we do not clone when caching is disabled to # preserve backward compatibility cloned_transformer = transformer else: cloned_transformer = clone(transformer) # Fit or load from cache the current transformer step_params = self._get_metadata_for_step( step_idx=step_idx, step_params=routed_params[name], all_params=raw_params, ) X, fitted_transformer = fit_transform_one_cached( cloned_transformer, X, y, weight=None, message_clsname="Pipeline", message=self._log_message(step_idx), params=step_params, ) # Replace the transformer of the step with the fitted # transformer. This is necessary when loading the transformer # from the cache. self.steps[step_idx] = (name, fitted_transformer) return X
Fit the pipeline except the last step. routed_params is the output of `process_routing` raw_params is the parameters passed by the user, used when `transform_input` is set by the user, to transform metadata using a sub-pipeline.
_fit
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def fit(self, X, y=None, **params): """Fit the model. Fit all the transformers one after the other and sequentially transform the data. Finally, fit the transformed data using the final estimator. Parameters ---------- X : iterable Training data. Must fulfill input requirements of first step of the pipeline. y : iterable, default=None Training targets. Must fulfill label requirements for all steps of the pipeline. **params : dict of str -> object - If `enable_metadata_routing=False` (default): Parameters passed to the ``fit`` method of each step, where each parameter name is prefixed such that parameter ``p`` for step ``s`` has key ``s__p``. - If `enable_metadata_routing=True`: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionchanged:: 1.4 Parameters are now passed to the ``transform`` method of the intermediate steps as well, if requested, and if `enable_metadata_routing=True` is set via :func:`~sklearn.set_config`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- self : object Pipeline with fitted steps. """ if not _routing_enabled() and self.transform_input is not None: raise ValueError( "The `transform_input` parameter can only be set if metadata " "routing is enabled. You can enable metadata routing using " "`sklearn.set_config(enable_metadata_routing=True)`." ) routed_params = self._check_method_params(method="fit", props=params) Xt = self._fit(X, y, routed_params, raw_params=params) with _print_elapsed_time("Pipeline", self._log_message(len(self.steps) - 1)): if self._final_estimator != "passthrough": last_step_params = self._get_metadata_for_step( step_idx=len(self) - 1, step_params=routed_params[self.steps[-1][0]], all_params=params, ) self._final_estimator.fit(Xt, y, **last_step_params["fit"]) return self
Fit the model. Fit all the transformers one after the other and sequentially transform the data. Finally, fit the transformed data using the final estimator. Parameters ---------- X : iterable Training data. Must fulfill input requirements of first step of the pipeline. y : iterable, default=None Training targets. Must fulfill label requirements for all steps of the pipeline. **params : dict of str -> object - If `enable_metadata_routing=False` (default): Parameters passed to the ``fit`` method of each step, where each parameter name is prefixed such that parameter ``p`` for step ``s`` has key ``s__p``. - If `enable_metadata_routing=True`: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionchanged:: 1.4 Parameters are now passed to the ``transform`` method of the intermediate steps as well, if requested, and if `enable_metadata_routing=True` is set via :func:`~sklearn.set_config`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- self : object Pipeline with fitted steps.
fit
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def fit_transform(self, X, y=None, **params): """Fit the model and transform with the final estimator. Fit all the transformers one after the other and sequentially transform the data. Only valid if the final estimator either implements `fit_transform` or `fit` and `transform`. Parameters ---------- X : iterable Training data. Must fulfill input requirements of first step of the pipeline. y : iterable, default=None Training targets. Must fulfill label requirements for all steps of the pipeline. **params : dict of str -> object - If `enable_metadata_routing=False` (default): Parameters passed to the ``fit`` method of each step, where each parameter name is prefixed such that parameter ``p`` for step ``s`` has key ``s__p``. - If `enable_metadata_routing=True`: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionchanged:: 1.4 Parameters are now passed to the ``transform`` method of the intermediate steps as well, if requested, and if `enable_metadata_routing=True`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- Xt : ndarray of shape (n_samples, n_transformed_features) Transformed samples. """ routed_params = self._check_method_params(method="fit_transform", props=params) Xt = self._fit(X, y, routed_params) last_step = self._final_estimator with _print_elapsed_time("Pipeline", self._log_message(len(self.steps) - 1)): if last_step == "passthrough": return Xt last_step_params = self._get_metadata_for_step( step_idx=len(self) - 1, step_params=routed_params[self.steps[-1][0]], all_params=params, ) if hasattr(last_step, "fit_transform"): return last_step.fit_transform( Xt, y, **last_step_params["fit_transform"] ) else: return last_step.fit(Xt, y, **last_step_params["fit"]).transform( Xt, **last_step_params["transform"] )
Fit the model and transform with the final estimator. Fit all the transformers one after the other and sequentially transform the data. Only valid if the final estimator either implements `fit_transform` or `fit` and `transform`. Parameters ---------- X : iterable Training data. Must fulfill input requirements of first step of the pipeline. y : iterable, default=None Training targets. Must fulfill label requirements for all steps of the pipeline. **params : dict of str -> object - If `enable_metadata_routing=False` (default): Parameters passed to the ``fit`` method of each step, where each parameter name is prefixed such that parameter ``p`` for step ``s`` has key ``s__p``. - If `enable_metadata_routing=True`: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionchanged:: 1.4 Parameters are now passed to the ``transform`` method of the intermediate steps as well, if requested, and if `enable_metadata_routing=True`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- Xt : ndarray of shape (n_samples, n_transformed_features) Transformed samples.
fit_transform
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def predict(self, X, **params): """Transform the data, and apply `predict` with the final estimator. Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `predict` method. Only valid if the final estimator implements `predict`. Parameters ---------- X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. **params : dict of str -> object - If `enable_metadata_routing=False` (default): Parameters to the ``predict`` called at the end of all transformations in the pipeline. - If `enable_metadata_routing=True`: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionadded:: 0.20 .. versionchanged:: 1.4 Parameters are now passed to the ``transform`` method of the intermediate steps as well, if requested, and if `enable_metadata_routing=True` is set via :func:`~sklearn.set_config`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Note that while this may be used to return uncertainties from some models with ``return_std`` or ``return_cov``, uncertainties that are generated by the transformations in the pipeline are not propagated to the final estimator. Returns ------- y_pred : ndarray Result of calling `predict` on the final estimator. """ # TODO(1.8): Remove the context manager and use check_is_fitted(self) with _raise_or_warn_if_not_fitted(self): Xt = X if not _routing_enabled(): for _, name, transform in self._iter(with_final=False): Xt = transform.transform(Xt) return self.steps[-1][1].predict(Xt, **params) # metadata routing enabled routed_params = process_routing(self, "predict", **params) for _, name, transform in self._iter(with_final=False): Xt = transform.transform(Xt, **routed_params[name].transform) return self.steps[-1][1].predict( Xt, **routed_params[self.steps[-1][0]].predict )
Transform the data, and apply `predict` with the final estimator. Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `predict` method. Only valid if the final estimator implements `predict`. Parameters ---------- X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. **params : dict of str -> object - If `enable_metadata_routing=False` (default): Parameters to the ``predict`` called at the end of all transformations in the pipeline. - If `enable_metadata_routing=True`: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionadded:: 0.20 .. versionchanged:: 1.4 Parameters are now passed to the ``transform`` method of the intermediate steps as well, if requested, and if `enable_metadata_routing=True` is set via :func:`~sklearn.set_config`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Note that while this may be used to return uncertainties from some models with ``return_std`` or ``return_cov``, uncertainties that are generated by the transformations in the pipeline are not propagated to the final estimator. Returns ------- y_pred : ndarray Result of calling `predict` on the final estimator.
predict
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def fit_predict(self, X, y=None, **params): """Transform the data, and apply `fit_predict` with the final estimator. Call `fit_transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `fit_predict` method. Only valid if the final estimator implements `fit_predict`. Parameters ---------- X : iterable Training data. Must fulfill input requirements of first step of the pipeline. y : iterable, default=None Training targets. Must fulfill label requirements for all steps of the pipeline. **params : dict of str -> object - If `enable_metadata_routing=False` (default): Parameters to the ``fit_predict`` called at the end of all transformations in the pipeline. - If `enable_metadata_routing=True`: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionadded:: 0.20 .. versionchanged:: 1.4 Parameters are now passed to the ``transform`` method of the intermediate steps as well, if requested, and if `enable_metadata_routing=True`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Note that while this may be used to return uncertainties from some models with ``return_std`` or ``return_cov``, uncertainties that are generated by the transformations in the pipeline are not propagated to the final estimator. Returns ------- y_pred : ndarray Result of calling `fit_predict` on the final estimator. """ routed_params = self._check_method_params(method="fit_predict", props=params) Xt = self._fit(X, y, routed_params) params_last_step = routed_params[self.steps[-1][0]] with _print_elapsed_time("Pipeline", self._log_message(len(self.steps) - 1)): y_pred = self.steps[-1][1].fit_predict( Xt, y, **params_last_step.get("fit_predict", {}) ) return y_pred
Transform the data, and apply `fit_predict` with the final estimator. Call `fit_transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `fit_predict` method. Only valid if the final estimator implements `fit_predict`. Parameters ---------- X : iterable Training data. Must fulfill input requirements of first step of the pipeline. y : iterable, default=None Training targets. Must fulfill label requirements for all steps of the pipeline. **params : dict of str -> object - If `enable_metadata_routing=False` (default): Parameters to the ``fit_predict`` called at the end of all transformations in the pipeline. - If `enable_metadata_routing=True`: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionadded:: 0.20 .. versionchanged:: 1.4 Parameters are now passed to the ``transform`` method of the intermediate steps as well, if requested, and if `enable_metadata_routing=True`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Note that while this may be used to return uncertainties from some models with ``return_std`` or ``return_cov``, uncertainties that are generated by the transformations in the pipeline are not propagated to the final estimator. Returns ------- y_pred : ndarray Result of calling `fit_predict` on the final estimator.
fit_predict
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def predict_proba(self, X, **params): """Transform the data, and apply `predict_proba` with the final estimator. Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `predict_proba` method. Only valid if the final estimator implements `predict_proba`. Parameters ---------- X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. **params : dict of str -> object - If `enable_metadata_routing=False` (default): Parameters to the `predict_proba` called at the end of all transformations in the pipeline. - If `enable_metadata_routing=True`: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionadded:: 0.20 .. versionchanged:: 1.4 Parameters are now passed to the ``transform`` method of the intermediate steps as well, if requested, and if `enable_metadata_routing=True`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- y_proba : ndarray of shape (n_samples, n_classes) Result of calling `predict_proba` on the final estimator. """ # TODO(1.8): Remove the context manager and use check_is_fitted(self) with _raise_or_warn_if_not_fitted(self): Xt = X if not _routing_enabled(): for _, name, transform in self._iter(with_final=False): Xt = transform.transform(Xt) return self.steps[-1][1].predict_proba(Xt, **params) # metadata routing enabled routed_params = process_routing(self, "predict_proba", **params) for _, name, transform in self._iter(with_final=False): Xt = transform.transform(Xt, **routed_params[name].transform) return self.steps[-1][1].predict_proba( Xt, **routed_params[self.steps[-1][0]].predict_proba )
Transform the data, and apply `predict_proba` with the final estimator. Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `predict_proba` method. Only valid if the final estimator implements `predict_proba`. Parameters ---------- X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. **params : dict of str -> object - If `enable_metadata_routing=False` (default): Parameters to the `predict_proba` called at the end of all transformations in the pipeline. - If `enable_metadata_routing=True`: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionadded:: 0.20 .. versionchanged:: 1.4 Parameters are now passed to the ``transform`` method of the intermediate steps as well, if requested, and if `enable_metadata_routing=True`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- y_proba : ndarray of shape (n_samples, n_classes) Result of calling `predict_proba` on the final estimator.
predict_proba
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def decision_function(self, X, **params): """Transform the data, and apply `decision_function` with the final estimator. Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `decision_function` method. Only valid if the final estimator implements `decision_function`. Parameters ---------- X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. **params : dict of string -> object Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionadded:: 1.4 Only available if `enable_metadata_routing=True`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- y_score : ndarray of shape (n_samples, n_classes) Result of calling `decision_function` on the final estimator. """ # TODO(1.8): Remove the context manager and use check_is_fitted(self) with _raise_or_warn_if_not_fitted(self): _raise_for_params(params, self, "decision_function") # not branching here since params is only available if # enable_metadata_routing=True routed_params = process_routing(self, "decision_function", **params) Xt = X for _, name, transform in self._iter(with_final=False): Xt = transform.transform( Xt, **routed_params.get(name, {}).get("transform", {}) ) return self.steps[-1][1].decision_function( Xt, **routed_params.get(self.steps[-1][0], {}).get("decision_function", {}), )
Transform the data, and apply `decision_function` with the final estimator. Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `decision_function` method. Only valid if the final estimator implements `decision_function`. Parameters ---------- X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. **params : dict of string -> object Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionadded:: 1.4 Only available if `enable_metadata_routing=True`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- y_score : ndarray of shape (n_samples, n_classes) Result of calling `decision_function` on the final estimator.
decision_function
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def score_samples(self, X): """Transform the data, and apply `score_samples` with the final estimator. Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `score_samples` method. Only valid if the final estimator implements `score_samples`. Parameters ---------- X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. Returns ------- y_score : ndarray of shape (n_samples,) Result of calling `score_samples` on the final estimator. """ # TODO(1.8): Remove the context manager and use check_is_fitted(self) with _raise_or_warn_if_not_fitted(self): Xt = X for _, _, transformer in self._iter(with_final=False): Xt = transformer.transform(Xt) return self.steps[-1][1].score_samples(Xt)
Transform the data, and apply `score_samples` with the final estimator. Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `score_samples` method. Only valid if the final estimator implements `score_samples`. Parameters ---------- X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. Returns ------- y_score : ndarray of shape (n_samples,) Result of calling `score_samples` on the final estimator.
score_samples
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def predict_log_proba(self, X, **params): """Transform the data, and apply `predict_log_proba` with the final estimator. Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `predict_log_proba` method. Only valid if the final estimator implements `predict_log_proba`. Parameters ---------- X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. **params : dict of str -> object - If `enable_metadata_routing=False` (default): Parameters to the `predict_log_proba` called at the end of all transformations in the pipeline. - If `enable_metadata_routing=True`: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionadded:: 0.20 .. versionchanged:: 1.4 Parameters are now passed to the ``transform`` method of the intermediate steps as well, if requested, and if `enable_metadata_routing=True`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- y_log_proba : ndarray of shape (n_samples, n_classes) Result of calling `predict_log_proba` on the final estimator. """ # TODO(1.8): Remove the context manager and use check_is_fitted(self) with _raise_or_warn_if_not_fitted(self): Xt = X if not _routing_enabled(): for _, name, transform in self._iter(with_final=False): Xt = transform.transform(Xt) return self.steps[-1][1].predict_log_proba(Xt, **params) # metadata routing enabled routed_params = process_routing(self, "predict_log_proba", **params) for _, name, transform in self._iter(with_final=False): Xt = transform.transform(Xt, **routed_params[name].transform) return self.steps[-1][1].predict_log_proba( Xt, **routed_params[self.steps[-1][0]].predict_log_proba )
Transform the data, and apply `predict_log_proba` with the final estimator. Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `predict_log_proba` method. Only valid if the final estimator implements `predict_log_proba`. Parameters ---------- X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. **params : dict of str -> object - If `enable_metadata_routing=False` (default): Parameters to the `predict_log_proba` called at the end of all transformations in the pipeline. - If `enable_metadata_routing=True`: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionadded:: 0.20 .. versionchanged:: 1.4 Parameters are now passed to the ``transform`` method of the intermediate steps as well, if requested, and if `enable_metadata_routing=True`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- y_log_proba : ndarray of shape (n_samples, n_classes) Result of calling `predict_log_proba` on the final estimator.
predict_log_proba
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def transform(self, X, **params): """Transform the data, and apply `transform` with the final estimator. Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `transform` method. Only valid if the final estimator implements `transform`. This also works where final estimator is `None` in which case all prior transformations are applied. Parameters ---------- X : iterable Data to transform. Must fulfill input requirements of first step of the pipeline. **params : dict of str -> object Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionadded:: 1.4 Only available if `enable_metadata_routing=True`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- Xt : ndarray of shape (n_samples, n_transformed_features) Transformed data. """ # TODO(1.8): Remove the context manager and use check_is_fitted(self) with _raise_or_warn_if_not_fitted(self): _raise_for_params(params, self, "transform") # not branching here since params is only available if # enable_metadata_routing=True routed_params = process_routing(self, "transform", **params) Xt = X for _, name, transform in self._iter(): Xt = transform.transform(Xt, **routed_params[name].transform) return Xt
Transform the data, and apply `transform` with the final estimator. Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `transform` method. Only valid if the final estimator implements `transform`. This also works where final estimator is `None` in which case all prior transformations are applied. Parameters ---------- X : iterable Data to transform. Must fulfill input requirements of first step of the pipeline. **params : dict of str -> object Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionadded:: 1.4 Only available if `enable_metadata_routing=True`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- Xt : ndarray of shape (n_samples, n_transformed_features) Transformed data.
transform
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def inverse_transform(self, X, **params): """Apply `inverse_transform` for each step in reverse order. All estimators in the pipeline must support `inverse_transform`. Parameters ---------- X : array-like of shape (n_samples, n_transformed_features) Data samples, where ``n_samples`` is the number of samples and ``n_features`` is the number of features. Must fulfill input requirements of last step of pipeline's ``inverse_transform`` method. **params : dict of str -> object Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionadded:: 1.4 Only available if `enable_metadata_routing=True`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- X_original : ndarray of shape (n_samples, n_features) Inverse transformed data, that is, data in the original feature space. """ # TODO(1.8): Remove the context manager and use check_is_fitted(self) with _raise_or_warn_if_not_fitted(self): _raise_for_params(params, self, "inverse_transform") # we don't have to branch here, since params is only non-empty if # enable_metadata_routing=True. routed_params = process_routing(self, "inverse_transform", **params) reverse_iter = reversed(list(self._iter())) for _, name, transform in reverse_iter: X = transform.inverse_transform( X, **routed_params[name].inverse_transform ) return X
Apply `inverse_transform` for each step in reverse order. All estimators in the pipeline must support `inverse_transform`. Parameters ---------- X : array-like of shape (n_samples, n_transformed_features) Data samples, where ``n_samples`` is the number of samples and ``n_features`` is the number of features. Must fulfill input requirements of last step of pipeline's ``inverse_transform`` method. **params : dict of str -> object Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionadded:: 1.4 Only available if `enable_metadata_routing=True`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- X_original : ndarray of shape (n_samples, n_features) Inverse transformed data, that is, data in the original feature space.
inverse_transform
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def score(self, X, y=None, sample_weight=None, **params): """Transform the data, and apply `score` with the final estimator. Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `score` method. Only valid if the final estimator implements `score`. Parameters ---------- X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. y : iterable, default=None Targets used for scoring. Must fulfill label requirements for all steps of the pipeline. sample_weight : array-like, default=None If not None, this argument is passed as ``sample_weight`` keyword argument to the ``score`` method of the final estimator. **params : dict of str -> object Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionadded:: 1.4 Only available if `enable_metadata_routing=True`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- score : float Result of calling `score` on the final estimator. """ # TODO(1.8): Remove the context manager and use check_is_fitted(self) with _raise_or_warn_if_not_fitted(self): Xt = X if not _routing_enabled(): for _, name, transform in self._iter(with_final=False): Xt = transform.transform(Xt) score_params = {} if sample_weight is not None: score_params["sample_weight"] = sample_weight return self.steps[-1][1].score(Xt, y, **score_params) # metadata routing is enabled. routed_params = process_routing( self, "score", sample_weight=sample_weight, **params ) Xt = X for _, name, transform in self._iter(with_final=False): Xt = transform.transform(Xt, **routed_params[name].transform) return self.steps[-1][1].score( Xt, y, **routed_params[self.steps[-1][0]].score )
Transform the data, and apply `score` with the final estimator. Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `score` method. Only valid if the final estimator implements `score`. Parameters ---------- X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. y : iterable, default=None Targets used for scoring. Must fulfill label requirements for all steps of the pipeline. sample_weight : array-like, default=None If not None, this argument is passed as ``sample_weight`` keyword argument to the ``score`` method of the final estimator. **params : dict of str -> object Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them. .. versionadded:: 1.4 Only available if `enable_metadata_routing=True`. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- score : float Result of calling `score` on the final estimator.
score
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def get_feature_names_out(self, input_features=None): """Get output feature names for transformation. Transform input features using the pipeline. Parameters ---------- input_features : array-like of str or None, default=None Input features. Returns ------- feature_names_out : ndarray of str objects Transformed feature names. """ feature_names_out = input_features for _, name, transform in self._iter(): if not hasattr(transform, "get_feature_names_out"): raise AttributeError( "Estimator {} does not provide get_feature_names_out. " "Did you mean to call pipeline[:-1].get_feature_names_out" "()?".format(name) ) feature_names_out = transform.get_feature_names_out(feature_names_out) return feature_names_out
Get output feature names for transformation. Transform input features using the pipeline. Parameters ---------- input_features : array-like of str or None, default=None Input features. Returns ------- feature_names_out : ndarray of str objects Transformed feature names.
get_feature_names_out
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
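For illustration, a hypothetical two-step pipeline whose steps both expose get_feature_names_out; a pipeline ending in a predictor would need slicing (e.g. pipe[:-1]) first, as the error message above suggests:

# Illustrative sketch: names flow through each step's get_feature_names_out.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

X = np.arange(6.0).reshape(3, 2)
pipe = make_pipeline(StandardScaler(), PolynomialFeatures(degree=2)).fit(X)
print(pipe.get_feature_names_out(["a", "b"]))
# expected: ['1' 'a' 'b' 'a^2' 'a b' 'b^2']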
def __sklearn_is_fitted__(self): """Indicate whether pipeline has been fit. This is done by checking whether the last non-`passthrough` step of the pipeline is fitted. An empty pipeline is considered fitted. """ # First find the last step that is not 'passthrough' last_step = None for _, estimator in reversed(self.steps): if estimator != "passthrough": last_step = estimator break if last_step is None: # All steps are 'passthrough', so the pipeline is considered fitted return True try: # check if the last step of the pipeline is fitted # we only check the last step since if the last step is fit, it # means the previous steps should also be fit. This is faster than # checking if every step of the pipeline is fit. check_is_fitted(last_step) return True except NotFittedError: return False
Indicate whether pipeline has been fit. This is done by checking whether the last non-`passthrough` step of the pipeline is fitted. An empty pipeline is considered fitted.
__sklearn_is_fitted__
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
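A small sketch of how this hook is consumed; check_is_fitted is the public entry point, and only the last non-passthrough step is inspected:

# Sketch (not from the source): check_is_fitted consults __sklearn_is_fitted__.
import numpy as np
from sklearn.exceptions import NotFittedError
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.utils.validation import check_is_fitted

pipe = make_pipeline(StandardScaler())
try:
    check_is_fitted(pipe)
except NotFittedError:
    print("pipeline not fitted yet")
pipe.fit(np.arange(4.0).reshape(2, 2))
check_is_fitted(pipe)  # no exception once the last step is fitted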
def get_metadata_routing(self):
        """Get metadata routing of this object.

        Please check :ref:`User Guide <metadata_routing>` on how the routing
        mechanism works.

        Returns
        -------
        routing : MetadataRouter
            A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating
            routing information.
        """
        router = MetadataRouter(owner=self.__class__.__name__)

        # first we add all steps except the last one
        for _, name, trans in self._iter(with_final=False, filter_passthrough=True):
            method_mapping = MethodMapping()
            # fit, fit_predict, and fit_transform call fit_transform if it
            # exists, or else fit and transform
            if hasattr(trans, "fit_transform"):
                (
                    method_mapping.add(caller="fit", callee="fit_transform")
                    .add(caller="fit_transform", callee="fit_transform")
                    .add(caller="fit_predict", callee="fit_transform")
                )
            else:
                (
                    method_mapping.add(caller="fit", callee="fit")
                    .add(caller="fit", callee="transform")
                    .add(caller="fit_transform", callee="fit")
                    .add(caller="fit_transform", callee="transform")
                    .add(caller="fit_predict", callee="fit")
                    .add(caller="fit_predict", callee="transform")
                )

            (
                method_mapping.add(caller="predict", callee="transform")
                .add(caller="predict_proba", callee="transform")
                .add(caller="decision_function", callee="transform")
                .add(caller="predict_log_proba", callee="transform")
                .add(caller="transform", callee="transform")
                .add(caller="inverse_transform", callee="inverse_transform")
                .add(caller="score", callee="transform")
            )

            router.add(method_mapping=method_mapping, **{name: trans})

        final_name, final_est = self.steps[-1]
        if final_est is None or final_est == "passthrough":
            return router

        # then we add the last step
        method_mapping = MethodMapping()
        if hasattr(final_est, "fit_transform"):
            method_mapping.add(caller="fit_transform", callee="fit_transform")
        else:
            method_mapping.add(caller="fit", callee="fit").add(
                caller="fit", callee="transform"
            )
        (
            method_mapping.add(caller="fit", callee="fit")
            .add(caller="predict", callee="predict")
            .add(caller="fit_predict", callee="fit_predict")
            .add(caller="predict_proba", callee="predict_proba")
            .add(caller="decision_function", callee="decision_function")
            .add(caller="predict_log_proba", callee="predict_log_proba")
            .add(caller="transform", callee="transform")
            .add(caller="inverse_transform", callee="inverse_transform")
            .add(caller="score", callee="score")
        )

        router.add(method_mapping=method_mapping, **{final_name: final_est})
        return router
Get metadata routing of this object. Please check :ref:`User Guide <metadata_routing>` on how the routing mechanism works. Returns ------- routing : MetadataRouter A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating routing information.
get_metadata_routing
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
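A hedged sketch of the routing behavior this mapping enables; it assumes scikit-learn's metadata routing API, and the step and data are made up:

# Sketch: with routing enabled, metadata such as sample_weight reaches a
# step only after that step has explicitly requested it.
import numpy as np
import sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sklearn.set_config(enable_metadata_routing=True)
X, y = make_classification(random_state=0)
pipe = make_pipeline(LogisticRegression().set_fit_request(sample_weight=True))
pipe.fit(X, y, sample_weight=np.ones_like(y, dtype=float))
sklearn.set_config(enable_metadata_routing=False)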
def _transform_one(transformer, X, y, weight, params): """Call transform and apply weight to output. Parameters ---------- transformer : estimator Estimator to be used for transformation. X : {array-like, sparse matrix} of shape (n_samples, n_features) Input data to be transformed. y : ndarray of shape (n_samples,) Ignored. weight : float Weight to be applied to the output of the transformation. params : dict Parameters to be passed to the transformer's ``transform`` method. This should be of the form ``process_routing()["step_name"]``. """ res = transformer.transform(X, **params.transform) # if we have a weight for this transformer, multiply output if weight is None: return res return res * weight
Call transform and apply weight to output. Parameters ---------- transformer : estimator Estimator to be used for transformation. X : {array-like, sparse matrix} of shape (n_samples, n_features) Input data to be transformed. y : ndarray of shape (n_samples,) Ignored. weight : float Weight to be applied to the output of the transformation. params : dict Parameters to be passed to the transformer's ``transform`` method. This should be of the form ``process_routing()["step_name"]``.
_transform_one
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def _fit_transform_one( transformer, X, y, weight, message_clsname="", message=None, params=None ): """ Fits ``transformer`` to ``X`` and ``y``. The transformed result is returned with the fitted transformer. If ``weight`` is not ``None``, the result will be multiplied by ``weight``. ``params`` needs to be of the form ``process_routing()["step_name"]``. """ params = params or {} with _print_elapsed_time(message_clsname, message): if hasattr(transformer, "fit_transform"): res = transformer.fit_transform(X, y, **params.get("fit_transform", {})) else: res = transformer.fit(X, y, **params.get("fit", {})).transform( X, **params.get("transform", {}) ) if weight is None: return res, transformer return res * weight, transformer
Fits ``transformer`` to ``X`` and ``y``. The transformed result is returned with the fitted transformer. If ``weight`` is not ``None``, the result will be multiplied by ``weight``. ``params`` needs to be of the form ``process_routing()["step_name"]``.
_fit_transform_one
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def set_output(self, *, transform=None): """Set the output container when `"transform"` and `"fit_transform"` are called. `set_output` will set the output of all estimators in `transformer_list`. Parameters ---------- transform : {"default", "pandas", "polars"}, default=None Configure output of `transform` and `fit_transform`. - `"default"`: Default output format of a transformer - `"pandas"`: DataFrame output - `"polars"`: Polars output - `None`: Transform configuration is unchanged Returns ------- self : estimator instance Estimator instance. """ super().set_output(transform=transform) for _, step, _ in self._iter(): _safe_set_output(step, transform=transform) return self
Set the output container when `"transform"` and `"fit_transform"` are called. `set_output` will set the output of all estimators in `transformer_list`. Parameters ---------- transform : {"default", "pandas", "polars"}, default=None Configure output of `transform` and `fit_transform`. - `"default"`: Default output format of a transformer - `"pandas"`: DataFrame output - `"polars"`: Polars output - `None`: Transform configuration is unchanged Returns ------- self : estimator instance Estimator instance.
set_output
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
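An illustrative sketch, assuming pandas is installed:

# set_output propagates to every transformer in transformer_list, so the
# stacked result comes back as a DataFrame with prefixed column names.
import numpy as np
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import MinMaxScaler, StandardScaler

union = FeatureUnion([("std", StandardScaler()), ("minmax", MinMaxScaler())])
union.set_output(transform="pandas")
X = np.arange(6.0).reshape(3, 2)
print(union.fit_transform(X).columns.tolist())
# expected: ['std__x0', 'std__x1', 'minmax__x0', 'minmax__x1']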
def _iter(self): """ Generate (name, trans, weight) tuples excluding None and 'drop' transformers. """ get_weight = (self.transformer_weights or {}).get for name, trans in self.transformer_list: if trans == "drop": continue if trans == "passthrough": trans = FunctionTransformer(feature_names_out="one-to-one") yield (name, trans, get_weight(name))
Generate (name, trans, weight) tuples excluding None and 'drop' transformers.
_iter
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def get_feature_names_out(self, input_features=None): """Get output feature names for transformation. Parameters ---------- input_features : array-like of str or None, default=None Input features. Returns ------- feature_names_out : ndarray of str objects Transformed feature names. """ # List of tuples (name, feature_names_out) transformer_with_feature_names_out = [] for name, trans, _ in self._iter(): if not hasattr(trans, "get_feature_names_out"): raise AttributeError( "Transformer %s (type %s) does not provide get_feature_names_out." % (str(name), type(trans).__name__) ) feature_names_out = trans.get_feature_names_out(input_features) transformer_with_feature_names_out.append((name, feature_names_out)) return self._add_prefix_for_feature_names_out( transformer_with_feature_names_out )
Get output feature names for transformation. Parameters ---------- input_features : array-like of str or None, default=None Input features. Returns ------- feature_names_out : ndarray of str objects Transformed feature names.
get_feature_names_out
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
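An illustrative example with made-up transformer names:

# Each sub-transformer reports its own names, which the union prefixes with
# the transformer's name (the default behavior).
import numpy as np
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.pipeline import FeatureUnion

X = np.random.RandomState(0).rand(10, 4)
union = FeatureUnion(
    [("pca", PCA(n_components=2)), ("svd", TruncatedSVD(n_components=2))]
).fit(X)
print(union.get_feature_names_out())
# expected: ['pca__pca0' 'pca__pca1' 'svd__truncatedsvd0' 'svd__truncatedsvd1']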
def _add_prefix_for_feature_names_out(self, transformer_with_feature_names_out):
        """Add prefix for feature names out that includes the transformer names.

        Parameters
        ----------
        transformer_with_feature_names_out : list of tuples of (str, array-like of str)
            Each tuple consists of a transformer's name and its output feature
            names.

        Returns
        -------
        feature_names_out : ndarray of shape (n_features,), dtype=str
            Transformed feature names.
        """
        if self.verbose_feature_names_out:
            # Prefix the feature names out with the transformer's name
            names = list(
                chain.from_iterable(
                    (f"{name}__{i}" for i in feature_names_out)
                    for name, feature_names_out in transformer_with_feature_names_out
                )
            )
            return np.asarray(names, dtype=object)

        # verbose_feature_names_out is False
        # Check that names are all unique without a prefix
        feature_names_count = Counter(
            chain.from_iterable(s for _, s in transformer_with_feature_names_out)
        )
        top_6_overlap = [
            name for name, count in feature_names_count.most_common(6) if count > 1
        ]
        top_6_overlap.sort()
        if top_6_overlap:
            if len(top_6_overlap) == 6:
                # More than 5 names overlap: show only the first 5
                names_repr = str(top_6_overlap[:5])[:-1] + ", ...]"
            else:
                names_repr = str(top_6_overlap)
            raise ValueError(
                f"Output feature names: {names_repr} are not unique. Please set "
                "verbose_feature_names_out=True to add prefixes to feature names"
            )

        return np.concatenate(
            [name for _, name in transformer_with_feature_names_out],
        )
Add prefix for feature names out that includes the transformer names.

Parameters
----------
transformer_with_feature_names_out : list of tuples of (str, array-like of str)
    Each tuple consists of a transformer's name and its output feature
    names.

Returns
-------
feature_names_out : ndarray of shape (n_features,), dtype=str
    Transformed feature names.
_add_prefix_for_feature_names_out
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
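A sketch of the non-verbose path; it is illustrative only, and the duplicate names here are deliberate:

# With verbose_feature_names_out=False no prefixes are added, so duplicate
# names across transformers raise rather than silently collide.
import numpy as np
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.arange(6.0).reshape(3, 2)
union = FeatureUnion(
    [("a", StandardScaler()), ("b", MinMaxScaler())],
    verbose_feature_names_out=False,
).fit(X)
try:
    union.get_feature_names_out()  # both transformers emit 'x0', 'x1'
except ValueError as exc:
    print(exc)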
def fit(self, X, y=None, **fit_params): """Fit all transformers using X. Parameters ---------- X : iterable or array-like, depending on transformers Input data, used to fit transformers. y : array-like of shape (n_samples, n_outputs), default=None Targets for supervised learning. **fit_params : dict, default=None - If `enable_metadata_routing=False` (default): Parameters directly passed to the `fit` methods of the sub-transformers. - If `enable_metadata_routing=True`: Parameters safely routed to the `fit` methods of the sub-transformers. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. .. versionchanged:: 1.5 `**fit_params` can be routed via metadata routing API. Returns ------- self : object FeatureUnion class instance. """ if _routing_enabled(): routed_params = process_routing(self, "fit", **fit_params) else: # TODO(SLEP6): remove when metadata routing cannot be disabled. routed_params = Bunch() for name, _ in self.transformer_list: routed_params[name] = Bunch(fit={}) routed_params[name].fit = fit_params transformers = self._parallel_func(X, y, _fit_one, routed_params) if not transformers: # All transformers are None return self self._update_transformer_list(transformers) return self
Fit all transformers using X. Parameters ---------- X : iterable or array-like, depending on transformers Input data, used to fit transformers. y : array-like of shape (n_samples, n_outputs), default=None Targets for supervised learning. **fit_params : dict, default=None - If `enable_metadata_routing=False` (default): Parameters directly passed to the `fit` methods of the sub-transformers. - If `enable_metadata_routing=True`: Parameters safely routed to the `fit` methods of the sub-transformers. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. .. versionchanged:: 1.5 `**fit_params` can be routed via metadata routing API. Returns ------- self : object FeatureUnion class instance.
fit
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def fit_transform(self, X, y=None, **params):
        """Fit all transformers, transform the data and concatenate results.

        Parameters
        ----------
        X : iterable or array-like, depending on transformers
            Input data to be transformed.

        y : array-like of shape (n_samples, n_outputs), default=None
            Targets for supervised learning.

        **params : dict, default=None
            - If `enable_metadata_routing=False` (default): Parameters directly
              passed to the `fit` methods of the sub-transformers.

            - If `enable_metadata_routing=True`: Parameters safely routed to the
              `fit` methods of the sub-transformers. See :ref:`Metadata Routing
              User Guide <metadata_routing>` for more details.

            .. versionchanged:: 1.5
                `**params` can now be routed via metadata routing API.

        Returns
        -------
        X_t : array-like or sparse matrix of \
                shape (n_samples, sum_n_components)
            The `hstack` of results of transformers. `sum_n_components` is the
            sum of `n_components` (output dimension) over transformers.
        """
        if _routing_enabled():
            routed_params = process_routing(self, "fit_transform", **params)
        else:
            # TODO(SLEP6): remove when metadata routing cannot be disabled.
            routed_params = Bunch()
            for name, obj in self.transformer_list:
                if hasattr(obj, "fit_transform"):
                    routed_params[name] = Bunch(fit_transform=params)
                else:
                    routed_params[name] = Bunch(fit=params, transform={})

        results = self._parallel_func(X, y, _fit_transform_one, routed_params)
        if not results:
            # All transformers are None
            return np.zeros((X.shape[0], 0))

        Xs, transformers = zip(*results)
        self._update_transformer_list(transformers)

        return self._hstack(Xs)
Fit all transformers, transform the data and concatenate results. Parameters ---------- X : iterable or array-like, depending on transformers Input data to be transformed. y : array-like of shape (n_samples, n_outputs), default=None Targets for supervised learning. **params : dict, default=None - If `enable_metadata_routing=False` (default): Parameters directly passed to the `fit` methods of the sub-transformers. - If `enable_metadata_routing=True`: Parameters safely routed to the `fit` methods of the sub-transformers. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. .. versionchanged:: 1.5 `**params` can now be routed via metadata routing API. Returns ------- X_t : array-like or sparse matrix of shape (n_samples, sum_n_components) The `hstack` of results of transformers. `sum_n_components` is the sum of `n_components` (output dimension) over transformers.
fit_transform
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
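A minimal sketch of the stacking behavior:

# The outputs are hstacked, so the width of the result is the sum of each
# transformer's output dimension (here 5 + 2).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import StandardScaler

X = np.random.RandomState(0).rand(8, 5)
union = FeatureUnion([("scale", StandardScaler()), ("pca", PCA(n_components=2))])
print(union.fit_transform(X).shape)  # (8, 7)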
def _parallel_func(self, X, y, func, routed_params):
        """Run ``func`` in parallel on X and y."""
        self.transformer_list = list(self.transformer_list)
        self._validate_transformers()
        self._validate_transformer_weights()
        transformers = list(self._iter())

        return Parallel(n_jobs=self.n_jobs)(
            delayed(func)(
                transformer,
                X,
                y,
                weight,
                message_clsname="FeatureUnion",
                message=self._log_message(name, idx, len(transformers)),
                params=routed_params[name],
            )
            for idx, (name, transformer, weight) in enumerate(transformers, 1)
        )
Run ``func`` in parallel on X and y.
_parallel_func
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def transform(self, X, **params): """Transform X separately by each transformer, concatenate results. Parameters ---------- X : iterable or array-like, depending on transformers Input data to be transformed. **params : dict, default=None Parameters routed to the `transform` method of the sub-transformers via the metadata routing API. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. .. versionadded:: 1.5 Returns ------- X_t : array-like or sparse matrix of shape (n_samples, sum_n_components) The `hstack` of results of transformers. `sum_n_components` is the sum of `n_components` (output dimension) over transformers. """ _raise_for_params(params, self, "transform") if _routing_enabled(): routed_params = process_routing(self, "transform", **params) else: # TODO(SLEP6): remove when metadata routing cannot be disabled. routed_params = Bunch() for name, _ in self.transformer_list: routed_params[name] = Bunch(transform={}) Xs = Parallel(n_jobs=self.n_jobs)( delayed(_transform_one)(trans, X, None, weight, params=routed_params[name]) for name, trans, weight in self._iter() ) if not Xs: # All transformers are None return np.zeros((X.shape[0], 0)) return self._hstack(Xs)
Transform X separately by each transformer, concatenate results. Parameters ---------- X : iterable or array-like, depending on transformers Input data to be transformed. **params : dict, default=None Parameters routed to the `transform` method of the sub-transformers via the metadata routing API. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. .. versionadded:: 1.5 Returns ------- X_t : array-like or sparse matrix of shape (n_samples, sum_n_components) The `hstack` of results of transformers. `sum_n_components` is the sum of `n_components` (output dimension) over transformers.
transform
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
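An illustrative sketch of transformer_weights, using identity transformers for clarity:

# transformer_weights multiplies each transformer's block of the stacked output.
import numpy as np
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import FunctionTransformer

X = np.ones((2, 2))
union = FeatureUnion(
    [("id1", FunctionTransformer()), ("id2", FunctionTransformer())],
    transformer_weights={"id2": 10.0},
).fit(X)
print(union.transform(X))  # columns from "id2" are scaled by 10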
def get_metadata_routing(self): """Get metadata routing of this object. Please check :ref:`User Guide <metadata_routing>` on how the routing mechanism works. .. versionadded:: 1.5 Returns ------- routing : MetadataRouter A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating routing information. """ router = MetadataRouter(owner=self.__class__.__name__) for name, transformer in self.transformer_list: router.add( **{name: transformer}, method_mapping=MethodMapping() .add(caller="fit", callee="fit") .add(caller="fit_transform", callee="fit_transform") .add(caller="fit_transform", callee="fit") .add(caller="fit_transform", callee="transform") .add(caller="transform", callee="transform"), ) return router
Get metadata routing of this object. Please check :ref:`User Guide <metadata_routing>` on how the routing mechanism works. .. versionadded:: 1.5 Returns ------- routing : MetadataRouter A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating routing information.
get_metadata_routing
python
scikit-learn/scikit-learn
sklearn/pipeline.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
BSD-3-Clause
def johnson_lindenstrauss_min_dim(n_samples, *, eps=0.1):
    """Find a 'safe' number of components to randomly project to.

    A random projection `p` changes the squared distance between any two
    points by a factor within (1 +- eps), with high probability, in a
    Euclidean space. The projection `p` is an eps-embedding as defined by:

    .. code-block:: text

      (1 - eps) ||u - v||^2 < ||p(u) - p(v)||^2 < (1 + eps) ||u - v||^2

    Where u and v are any rows taken from a dataset of shape (n_samples,
    n_features), eps is in ]0, 1[ and p is a projection by a random Gaussian
    N(0, 1) matrix of shape (n_components, n_features) (or a sparse
    Achlioptas matrix).

    The minimum number of components to guarantee the eps-embedding is
    given by:

    .. code-block:: text

      n_components >= 4 log(n_samples) / (eps^2 / 2 - eps^3 / 3)

    Note that the number of dimensions is independent of the original
    number of features but instead depends on the size of the dataset:
    the larger the dataset, the higher the minimal dimensionality of an
    eps-embedding.

    Read more in the :ref:`User Guide <johnson_lindenstrauss>`.

    Parameters
    ----------
    n_samples : int or array-like of int
        Number of samples that should be an integer greater than 0. If an array
        is given, it will compute a safe number of components array-wise.

    eps : float or array-like of shape (n_components,), dtype=float, \
            default=0.1
        Maximum distortion rate in the range (0, 1) as defined by the
        Johnson-Lindenstrauss lemma. If an array is given, it will compute a
        safe number of components array-wise.

    Returns
    -------
    n_components : int or ndarray of int
        The minimal number of components to guarantee, with high probability,
        an eps-embedding with n_samples.

    References
    ----------

    .. [1] https://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_lemma

    .. [2] `Sanjoy Dasgupta and Anupam Gupta, 1999,
           "An elementary proof of the Johnson-Lindenstrauss Lemma."
           <https://citeseerx.ist.psu.edu/doc_view/pid/95cd464d27c25c9c8690b378b894d337cdf021f9>`_

    Examples
    --------
    >>> from sklearn.random_projection import johnson_lindenstrauss_min_dim
    >>> johnson_lindenstrauss_min_dim(1e6, eps=0.5)
    np.int64(663)

    >>> johnson_lindenstrauss_min_dim(1e6, eps=[0.5, 0.1, 0.01])
    array([ 663, 11841, 1112658])

    >>> johnson_lindenstrauss_min_dim([1e4, 1e5, 1e6], eps=0.1)
    array([ 7894, 9868, 11841])
    """
    eps = np.asarray(eps)
    n_samples = np.asarray(n_samples)

    if np.any(eps <= 0.0) or np.any(eps >= 1):
        raise ValueError("The JL bound is defined for eps in ]0, 1[, got %r" % eps)

    if np.any(n_samples <= 0):
        raise ValueError(
            "The JL bound is defined for n_samples greater than zero, got %r"
            % n_samples
        )

    denominator = (eps**2 / 2) - (eps**3 / 3)
    return (4 * np.log(n_samples) / denominator).astype(np.int64)
Find a 'safe' number of components to randomly project to.

A random projection `p` changes the squared distance between any two
points by a factor within (1 +- eps), with high probability, in a
Euclidean space. The projection `p` is an eps-embedding as defined by:

.. code-block:: text

  (1 - eps) ||u - v||^2 < ||p(u) - p(v)||^2 < (1 + eps) ||u - v||^2

Where u and v are any rows taken from a dataset of shape (n_samples,
n_features), eps is in ]0, 1[ and p is a projection by a random Gaussian
N(0, 1) matrix of shape (n_components, n_features) (or a sparse
Achlioptas matrix).

The minimum number of components to guarantee the eps-embedding is
given by:

.. code-block:: text

  n_components >= 4 log(n_samples) / (eps^2 / 2 - eps^3 / 3)

Note that the number of dimensions is independent of the original
number of features but instead depends on the size of the dataset:
the larger the dataset, the higher the minimal dimensionality of an
eps-embedding.

Read more in the :ref:`User Guide <johnson_lindenstrauss>`.

Parameters
----------
n_samples : int or array-like of int
    Number of samples that should be an integer greater than 0. If an array
    is given, it will compute a safe number of components array-wise.

eps : float or array-like of shape (n_components,), dtype=float, default=0.1
    Maximum distortion rate in the range (0, 1) as defined by the
    Johnson-Lindenstrauss lemma. If an array is given, it will compute a
    safe number of components array-wise.

Returns
-------
n_components : int or ndarray of int
    The minimal number of components to guarantee, with high probability,
    an eps-embedding with n_samples.

References
----------

.. [1] https://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_lemma

.. [2] `Sanjoy Dasgupta and Anupam Gupta, 1999,
       "An elementary proof of the Johnson-Lindenstrauss Lemma."
       <https://citeseerx.ist.psu.edu/doc_view/pid/95cd464d27c25c9c8690b378b894d337cdf021f9>`_

Examples
--------
>>> from sklearn.random_projection import johnson_lindenstrauss_min_dim
>>> johnson_lindenstrauss_min_dim(1e6, eps=0.5)
np.int64(663)

>>> johnson_lindenstrauss_min_dim(1e6, eps=[0.5, 0.1, 0.01])
array([ 663, 11841, 1112658])

>>> johnson_lindenstrauss_min_dim([1e4, 1e5, 1e6], eps=0.1)
array([ 7894, 9868, 11841])
johnson_lindenstrauss_min_dim
python
scikit-learn/scikit-learn
sklearn/random_projection.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/random_projection.py
BSD-3-Clause
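An empirical sanity check of the bound (illustrative; the data and tolerances are arbitrary):

# Project to the JL-recommended dimension and measure how often squared
# pairwise distances stay inside the eps band. The fraction should be
# close to 1.0, though the guarantee is only probabilistic.
import numpy as np
from sklearn.metrics import euclidean_distances
from sklearn.random_projection import (
    GaussianRandomProjection,
    johnson_lindenstrauss_min_dim,
)

rng = np.random.RandomState(0)
X = rng.rand(50, 5000)
eps = 0.5
k = johnson_lindenstrauss_min_dim(n_samples=X.shape[0], eps=eps)
Xp = GaussianRandomProjection(n_components=k, random_state=0).fit_transform(X)
d_orig = euclidean_distances(X) ** 2
d_proj = euclidean_distances(Xp) ** 2
mask = ~np.eye(X.shape[0], dtype=bool)
ratio = d_proj[mask] / d_orig[mask]
print(((1 - eps < ratio) & (ratio < 1 + eps)).mean())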
def _check_density(density, n_features): """Factorize density check according to Li et al.""" if density == "auto": density = 1 / np.sqrt(n_features) elif density <= 0 or density > 1: raise ValueError("Expected density in range ]0, 1], got: %r" % density) return density
Factorize density check according to Li et al.
_check_density
python
scikit-learn/scikit-learn
sklearn/random_projection.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/random_projection.py
BSD-3-Clause
def _check_input_size(n_components, n_features): """Factorize argument checking for random matrix generation.""" if n_components <= 0: raise ValueError( "n_components must be strictly positive, got %d" % n_components ) if n_features <= 0: raise ValueError("n_features must be strictly positive, got %d" % n_features)
Factorize argument checking for random matrix generation.
_check_input_size
python
scikit-learn/scikit-learn
sklearn/random_projection.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/random_projection.py
BSD-3-Clause
def _gaussian_random_matrix(n_components, n_features, random_state=None): """Generate a dense Gaussian random matrix. The components of the random matrix are drawn from N(0, 1.0 / n_components). Read more in the :ref:`User Guide <gaussian_random_matrix>`. Parameters ---------- n_components : int, Dimensionality of the target projection space. n_features : int, Dimensionality of the original source space. random_state : int, RandomState instance or None, default=None Controls the pseudo random number generator used to generate the matrix at fit time. Pass an int for reproducible output across multiple function calls. See :term:`Glossary <random_state>`. Returns ------- components : ndarray of shape (n_components, n_features) The generated Gaussian random matrix. See Also -------- GaussianRandomProjection """ _check_input_size(n_components, n_features) rng = check_random_state(random_state) components = rng.normal( loc=0.0, scale=1.0 / np.sqrt(n_components), size=(n_components, n_features) ) return components
Generate a dense Gaussian random matrix. The components of the random matrix are drawn from N(0, 1.0 / n_components). Read more in the :ref:`User Guide <gaussian_random_matrix>`. Parameters ---------- n_components : int, Dimensionality of the target projection space. n_features : int, Dimensionality of the original source space. random_state : int, RandomState instance or None, default=None Controls the pseudo random number generator used to generate the matrix at fit time. Pass an int for reproducible output across multiple function calls. See :term:`Glossary <random_state>`. Returns ------- components : ndarray of shape (n_components, n_features) The generated Gaussian random matrix. See Also -------- GaussianRandomProjection
_gaussian_random_matrix
python
scikit-learn/scikit-learn
sklearn/random_projection.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/random_projection.py
BSD-3-Clause
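A quick check of the claimed scale; note that this imports a private helper, whose path may change between versions:

# Entries are drawn from N(0, 1/n_components), so projected squared norms
# are preserved in expectation.
import numpy as np
from sklearn.random_projection import _gaussian_random_matrix

A = _gaussian_random_matrix(n_components=100, n_features=2000, random_state=0)
print(A.shape)                                   # (100, 2000)
print(np.isclose(A.var(), 1 / 100, atol=1e-3))   # True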
def _make_random_matrix(self, n_components, n_features): """Generate the random projection matrix. Parameters ---------- n_components : int, Dimensionality of the target projection space. n_features : int, Dimensionality of the original source space. Returns ------- components : {ndarray, sparse matrix} of shape (n_components, n_features) The generated random matrix. Sparse matrix will be of CSR format. """
Generate the random projection matrix. Parameters ---------- n_components : int, Dimensionality of the target projection space. n_features : int, Dimensionality of the original source space. Returns ------- components : {ndarray, sparse matrix} of shape (n_components, n_features) The generated random matrix. Sparse matrix will be of CSR format.
_make_random_matrix
python
scikit-learn/scikit-learn
sklearn/random_projection.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/random_projection.py
BSD-3-Clause
def _compute_inverse_components(self): """Compute the pseudo-inverse of the (densified) components.""" components = self.components_ if sp.issparse(components): components = components.toarray() return linalg.pinv(components, check_finite=False)
Compute the pseudo-inverse of the (densified) components.
_compute_inverse_components
python
scikit-learn/scikit-learn
sklearn/random_projection.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/random_projection.py
BSD-3-Clause
def fit(self, X, y=None):
        """Generate the random projection matrix.

        Parameters
        ----------
        X : {ndarray, sparse matrix} of shape (n_samples, n_features)
            Training set: only the shape is used to find optimal random
            matrix dimensions based on the theory referenced in the
            aforementioned papers.

        y : Ignored
            Not used, present here for API consistency by convention.

        Returns
        -------
        self : object
            BaseRandomProjection class instance.
        """
        X = validate_data(
            self, X, accept_sparse=["csr", "csc"], dtype=[np.float64, np.float32]
        )

        n_samples, n_features = X.shape

        if self.n_components == "auto":
            self.n_components_ = johnson_lindenstrauss_min_dim(
                n_samples=n_samples, eps=self.eps
            )

            if self.n_components_ <= 0:
                raise ValueError(
                    "eps=%f and n_samples=%d lead to a target dimension of "
                    "%d which is invalid" % (self.eps, n_samples, self.n_components_)
                )

            elif self.n_components_ > n_features:
                raise ValueError(
                    "eps=%f and n_samples=%d lead to a target dimension of "
                    "%d which is larger than the original space with "
                    "n_features=%d"
                    % (self.eps, n_samples, self.n_components_, n_features)
                )
        else:
            if self.n_components > n_features:
                warnings.warn(
                    "The number of components is higher than the number of"
                    " features: n_features < n_components (%s < %s). "
                    "The dimensionality of the problem will not be reduced."
                    % (n_features, self.n_components),
                    DataDimensionalityWarning,
                )

            self.n_components_ = self.n_components

        # Generate a projection matrix of size [n_components, n_features]
        self.components_ = self._make_random_matrix(
            self.n_components_, n_features
        ).astype(X.dtype, copy=False)

        if self.compute_inverse_components:
            self.inverse_components_ = self._compute_inverse_components()

        # Required by ClassNamePrefixFeaturesOutMixin.get_feature_names_out;
        # use the resolved n_components_ so that n_components="auto" works too.
        self._n_features_out = self.n_components_

        return self
Generate the random projection matrix.

Parameters
----------
X : {ndarray, sparse matrix} of shape (n_samples, n_features)
    Training set: only the shape is used to find optimal random
    matrix dimensions based on the theory referenced in the
    aforementioned papers.

y : Ignored
    Not used, present here for API consistency by convention.

Returns
-------
self : object
    BaseRandomProjection class instance.
fit
python
scikit-learn/scikit-learn
sklearn/random_projection.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/random_projection.py
BSD-3-Clause
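An illustrative sketch of the n_components="auto" branch:

# With the default n_components="auto", the fitted dimension comes from the
# JL bound on n_samples and eps, not from n_features.
import numpy as np
from sklearn.random_projection import (
    GaussianRandomProjection,
    johnson_lindenstrauss_min_dim,
)

X = np.random.RandomState(0).rand(100, 10000)
rp = GaussianRandomProjection(eps=0.5).fit(X)
print(rp.n_components_ == johnson_lindenstrauss_min_dim(100, eps=0.5))  # True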
def inverse_transform(self, X): """Project data back to its original space. Returns an array X_original whose transform would be X. Note that even if X is sparse, X_original is dense: this may use a lot of RAM. If `compute_inverse_components` is False, the inverse of the components is computed during each call to `inverse_transform` which can be costly. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_components) Data to be transformed back. Returns ------- X_original : ndarray of shape (n_samples, n_features) Reconstructed data. """ check_is_fitted(self) X = check_array(X, dtype=[np.float64, np.float32], accept_sparse=("csr", "csc")) if self.compute_inverse_components: return X @ self.inverse_components_.T inverse_components = self._compute_inverse_components() return X @ inverse_components.T
Project data back to its original space. Returns an array X_original whose transform would be X. Note that even if X is sparse, X_original is dense: this may use a lot of RAM. If `compute_inverse_components` is False, the inverse of the components is computed during each call to `inverse_transform` which can be costly. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_components) Data to be transformed back. Returns ------- X_original : ndarray of shape (n_samples, n_features) Reconstructed data.
inverse_transform
python
scikit-learn/scikit-learn
sklearn/random_projection.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/random_projection.py
BSD-3-Clause
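A round-trip sketch; it is illustrative only, and reconstruction is approximate whenever n_components < n_features:

# With compute_inverse_components=True the pseudo-inverse is cached at fit
# time, so inverse_transform avoids recomputing it on every call.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

X = np.random.RandomState(0).rand(20, 100)
rp = GaussianRandomProjection(
    n_components=80, compute_inverse_components=True, random_state=0
).fit(X)
X_back = rp.inverse_transform(rp.transform(X))
print(X_back.shape, float(np.abs(X_back - X).mean()))  # (20, 100), small error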
def _make_random_matrix(self, n_components, n_features): """Generate the random projection matrix. Parameters ---------- n_components : int, Dimensionality of the target projection space. n_features : int, Dimensionality of the original source space. Returns ------- components : ndarray of shape (n_components, n_features) The generated random matrix. """ random_state = check_random_state(self.random_state) return _gaussian_random_matrix( n_components, n_features, random_state=random_state )
Generate the random projection matrix. Parameters ---------- n_components : int, Dimensionality of the target projection space. n_features : int, Dimensionality of the original source space. Returns ------- components : ndarray of shape (n_components, n_features) The generated random matrix.
_make_random_matrix
python
scikit-learn/scikit-learn
sklearn/random_projection.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/random_projection.py
BSD-3-Clause
def transform(self, X): """Project the data by using matrix product with the random matrix. Parameters ---------- X : {ndarray, sparse matrix} of shape (n_samples, n_features) The input data to project into a smaller dimensional space. Returns ------- X_new : ndarray of shape (n_samples, n_components) Projected array. """ check_is_fitted(self) X = validate_data( self, X, accept_sparse=["csr", "csc"], reset=False, dtype=[np.float64, np.float32], ) return X @ self.components_.T
Project the data by using matrix product with the random matrix. Parameters ---------- X : {ndarray, sparse matrix} of shape (n_samples, n_features) The input data to project into a smaller dimensional space. Returns ------- X_new : ndarray of shape (n_samples, n_components) Projected array.
transform
python
scikit-learn/scikit-learn
sklearn/random_projection.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/random_projection.py
BSD-3-Clause
def _make_random_matrix(self, n_components, n_features):
        """Generate the random projection matrix.

        Parameters
        ----------
        n_components : int
            Dimensionality of the target projection space.

        n_features : int
            Dimensionality of the original source space.

        Returns
        -------
        components : sparse matrix of shape (n_components, n_features)
            The generated random matrix in CSR format.
        """
        random_state = check_random_state(self.random_state)
        self.density_ = _check_density(self.density, n_features)
        return _sparse_random_matrix(
            n_components, n_features, density=self.density_, random_state=random_state
        )
Generate the random projection matrix.

Parameters
----------
n_components : int
    Dimensionality of the target projection space.

n_features : int
    Dimensionality of the original source space.

Returns
-------
components : sparse matrix of shape (n_components, n_features)
    The generated random matrix in CSR format.
_make_random_matrix
python
scikit-learn/scikit-learn
sklearn/random_projection.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/random_projection.py
BSD-3-Clause
def transform(self, X): """Project the data by using matrix product with the random matrix. Parameters ---------- X : {ndarray, sparse matrix} of shape (n_samples, n_features) The input data to project into a smaller dimensional space. Returns ------- X_new : {ndarray, sparse matrix} of shape (n_samples, n_components) Projected array. It is a sparse matrix only when the input is sparse and `dense_output = False`. """ check_is_fitted(self) X = validate_data( self, X, accept_sparse=["csr", "csc"], reset=False, dtype=[np.float64, np.float32], ) return safe_sparse_dot(X, self.components_.T, dense_output=self.dense_output)
Project the data by using matrix product with the random matrix. Parameters ---------- X : {ndarray, sparse matrix} of shape (n_samples, n_features) The input data to project into a smaller dimensional space. Returns ------- X_new : {ndarray, sparse matrix} of shape (n_samples, n_components) Projected array. It is a sparse matrix only when the input is sparse and `dense_output = False`.
transform
python
scikit-learn/scikit-learn
sklearn/random_projection.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/random_projection.py
BSD-3-Clause
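A sketch of sparse-in, sparse-out behavior under the default dense_output=False:

# With sparse input and dense_output=False, the projection stays sparse
# end to end thanks to safe_sparse_dot.
import scipy.sparse as sp
from sklearn.random_projection import SparseRandomProjection

X = sp.random(50, 5000, density=0.01, random_state=0, format="csr")
rp = SparseRandomProjection(n_components=100, random_state=0).fit(X)
Xp = rp.transform(X)
print(Xp.shape, sp.issparse(Xp))  # (50, 100) True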